#codefresh (2019-04)
Archive: https://archive.sweetops.com/codefresh/
2019-04-01
So it turned out I was using ‘global variables’ and build variables wrong
I thought this would mean that when I run the build pipeline, my Dockerfile’s ARG PORT would get a value of 3060. Turns out it doesn’t.
So I found this: https://codefresh.io/docs/docs/yaml-examples/examples/build-an-image-with-build-arguments/
Codefresh is a Docker-native CI/CD platform. Instantly build, test and deploy Docker images.
and that’s fine.. happy to define it in a codefresh.yml file
but then I realised: how do I do multi-branch / multi-stage CI? Surely I’ll need multiple codefresh.yml files, because one example argument is APP_ENV
Did you try to use interpolations with:
build_arguments:
- key=value
e.g.
build_arguments:
- APP_ENV=${{APP_ENV}}
Also, what we do sometimes is create a reusable pipeline like build.yaml that we call from something like some-app.yaml; the some-app.yaml can trigger the build pipeline and pass settings to it.
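That reusable-pipeline pattern can be sketched with the codefresh CLI (project and pipeline names here are hypothetical, and the variables are just the ones mentioned in this thread):

```yaml
# some-app.yaml: a step that triggers the shared build pipeline
# and passes settings through as variables
trigger_build:
  title: Trigger shared build pipeline
  image: codefresh/cli
  commands:
    - codefresh run my-project/build -b ${{CF_BRANCH}} -v APP_ENV=staging -v PORT=3060
```

Each caller pipeline can then hold only its own variable values, keeping the build logic in one place.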
Ahhh I did not try interpolating. I went from not declaring args in the .yaml to declaring them hardcoded and removing the vars from the console
I was going to have a pipeline for each stage inside the codefresh UI which runs off the same codefresh.yml and simply has the build args in the UI
Does any of this sound like I’ve done something incorrectly or misunderstood it? Having multiple codefresh.yml files doesn’t feel DRY, especially to only change a few build args. Thanks
Ah.. unless it should be ENV instead of ARG? idk
how many variables are you setting at build time? I usually just use environment variables that are set at run time instead of build time. I add the environment variables through the pod definition. This allows my Dockerfile to be pretty generic
So far 2. Can’t see it breaching 5
I see in your screenshot that you are setting aws secrets and the port for your container. I don’t think the port matters on your Docker container, so I am assuming it’s more about your aws secrets. I never was a fan of keeping keys lying around, and I liked having everything controlled through IAM. May I suggest taking a look at https://github.com/uswitch/kiam ? That’s what we use to make pods have specific IAM permissions, and it works great.
Integrate AWS IAM with Kubernetes. Contribute to uswitch/kiam development by creating an account on GitHub.
Thanks Mark. I’ve actually removed it now. It was from when I was trying out ECS
The two args I have are: application port and application env file.
Because I then do:
COPY ${APP_ENV} .env
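For illustration, a hedged Dockerfile sketch of how those two build args could be consumed (the ARG names come from this thread; the base image and defaults are hypothetical):

```dockerfile
FROM alpine:3.9

# Build-time only; overridden with `docker build --build-arg APP_ENV=...`
# (or build_arguments in codefresh.yml). ARG values do not persist at runtime.
ARG APP_ENV=.env-dev
ARG PORT=3060

# Promote the build arg to a runtime env var so the running app can read it
ENV PORT=${PORT}

# Copy the chosen env file into the image as .env
COPY ${APP_ENV} .env
EXPOSE ${PORT}
```

This is the key ARG-vs-ENV distinction: ARG exists only while the image is being built, ENV survives into the container.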
2019-04-02
How would you go about checking if .env is a directory or a file inside a Docker image inside a pipeline?
You could have a freestyle step running a bash if with -f or -d; or maybe some sort of condition in the Dockerfile?
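The check itself is plain bash (a sketch; in Codefresh these lines would go in a freestyle step’s commands: list):

```shell
# Report whether a path is a directory, a regular file, or missing entirely
check_env() {
  if [ -d "$1" ]; then
    echo "directory"
  elif [ -f "$1" ]; then
    echo "file"
  else
    echo "missing"
  fi
}

check_env /tmp    # /tmp is a directory on virtually every system
check_env .env
```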
And also to add geodesic to CI, good to just use https://github.com/cloudposse/prod.cloudposse.co/blob/master/codefresh.yml ?
Example Terraform/Kubernetes Reference Infrastructure for Cloud Posse Production Organization in AWS - cloudposse/prod.cloudposse.co
Seems to only push to the codefresh registry, which works for me. Unsure why it pins an old build-harness version?
Erik, where’s an example of you executing a command in a geodesic module from a freestyle step in codefresh?
what Demonstrate how to do CI/CD of Terraform with Codefresh why Larger goal is to apply this to all reference architectures and terraform modules
We should move that “use_codefresh” direnv function into geodesic
thanks
I get this funny bash script actually
DeployingDockerImage:
title: Deploying Docker Image with Ansible
image: r.cfcr.io/user/acme/${{MODULE}}.acme.co.uk:master
command:
- ansible --version
@oscarsullivan_old that’s the default behavior of geodesic
that’s why you can run
docker run myco/myinfra | bash
to install geodesic
to avoid that, do this:
what did I miss there Erik?
because command, not cmd?
cmd:
- "-l"
- "-c"
- "./tests/run.sh"
i am not sure about command vs cmd
Run commands inside a Docker container
looks like cmd is canonical
ah I went from this
CollectAllMyDeps:
title: Install dependencies
image: python:3.6.4-alpine3.6
commands:
- pip install .
Ok will try to make it closer looking to what you’ve got
oh, so yea, cmd is the args passed to the entrypoint; could be the difference. vs commands, which is run after the entrypoint
ahhh
you can try this:
that makes sense
step_name:
title: Step Title
description: Step description
image: image/id
working_directory: ${{step_id}}
commands:
- bash-command1
- bash-command2
cmd:
- arg1
- arg2
although that confuses me again
cmd: ["-l", "-c", "true"]
commands:
- "my command"
cmd: ["--version"]
commands:
- "ansible"
no
hrm… maybe
seems dull I can’t just have commands: ansible --version and have to split it up
nono
I’ll give these new combos a go
in that example, you are passing --version to the entrypoint, which is bash, so you should get the version of bash back
ooh
you could alternatively change the entrypoint. study up on ENTRYPOINT vs CMD in docker, then see what we’re doing in the Dockerfile for geodesic
Thanks
Must admit I don’t really get the differences
Seen both in action but no further
yea, the subtle nuances are often misunderstood by Dockerfile authors
and you see them misused and abused
I liken it to the system call
int execve(const char *filename, char *const argv[],
char *const envp[]);
ENTRYPOINT ~ filename; CMD ~ argv
(this is my mental model, not the docker explanation)
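A toy Dockerfile (hypothetical, not the geodesic one) that makes the analogy concrete:

```dockerfile
# In the execve analogy: ENTRYPOINT ~ filename, CMD ~ argv.
# `docker run img`            runs: /bin/bash -l -c "echo hello"
# `docker run img --version`  replaces only CMD, so bash receives --version
# `docker run --entrypoint=ansible img`  swaps the executable itself
FROM alpine:3.9
RUN apk add --no-cache bash
ENTRYPOINT ["/bin/bash"]
CMD ["-l", "-c", "echo hello"]
```

This is why passing --version in a Codefresh cmd: earlier in this thread returned the bash version rather than the ansible version.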
lol Erik it was because I had command not commands
DeployingDockerImage:
title: Deploying Docker Image with Ansible
image: r.cfcr.io/xxx/xxx/${{MODULE}}.xxxx.co.uk:master
commands:
- ansible --version
# ansible-playbook pod.yml -i inventory/${{inventory}}.yml
Output:
Status: Downloaded newer image for r.cfcr.io/xxx/xxx/sandbox.xxx.co.uk:master
ansible 2.7.9
config file = None
configured module search path = [u'/conf/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15 (default, Jan 24 2019, 16:32:39) [GCC 8.2.0]
Reading environment variable exporting file contents.
Successfully ran freestyle step: Deploying Docker Image with Ansible
Now I gotta get my private Role + Playbook accessible
Thinking either have them in conf, have some kind of wget situation and publicise them (not ideal), or try to do an import like with terraform modules?
Our strategy for this is to write the key to chamber
However, I think for your purposes, it might be sufficient to write the key to a codefresh secret
You can easily add an SSH key to the agent using an environment variable
source <(ssh-agent -s)
ssh-add - <<<${ANSIBLE_SSH_PRIVATE_KEY}
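Putting that together as a hedged freestyle-step sketch (the repo name is hypothetical; assumes ANSIBLE_SSH_PRIVATE_KEY is stored as an encrypted pipeline variable, and an image with git and ssh available):

```yaml
clone_private_roles:
  title: Clone private Ansible roles
  image: alpine/git
  commands:
    # Start an agent and load the deploy key from the encrypted variable
    - eval "$(ssh-agent -s)"
    - echo "${ANSIBLE_SSH_PRIVATE_KEY}" | ssh-add -
    # Clone into the shared volume so later steps can see the checkout
    - git clone git@github.com:example-org/private-ansible-roles.git /codefresh/volume/private-ansible-roles
```

The key never lands in git or in the image; it lives only in the Codefresh secret and the step’s ssh-agent.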
Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…
Or my geodesic module has a git clone.. but then where do I store the deploy key hmmmm
Have come up with this solution
You could also clone as a step in the codefresh pipeline
that would be accessible from inside of geodesic
the benefit of doing it that way is that you can leverage the git integrations already available in codefresh without putting a secret (e.g. /conf/ssh/deployment-repo-key) into git
2019-04-03
Oh wow
yeh you’re right
that’s perfect actually. thanks
Webhooks issue Apr 3, 09:27 UTC Resolved - The incident has been resolved Apr 3, 09:24 UTC Investigating - We are currently investigating an issue where git webhooks don’t trigger pipelines. Running builds manually works as expected
Codefresh’s Status Page - Webhooks issue.
Codefresh availability issue Apr 3, 12:29 UTC Investigating - We are currently investigating a problem affecting Codefresh availability
Codefresh’s Status Page - Codefresh availability issue.
Codefresh availability issue Apr 3, 12:46 UTC Monitoring - Codefresh site is up, we’re still monitoring the system Apr 3, 12:29 UTC Investigating - We are currently investigating a problem affecting Codefresh availability
Codefresh availability issue Apr 3, 13:34 UTC Update - Google just updated us on an ongoing incident, we’re monitoring the issue with google team Apr 3, 12:46 UTC Monitoring - Codefresh site is up, we’re still monitoring the system Apr 3, 12:29 UTC Investigating - We are currently investigating a problem affecting Codefresh availability
Codefresh availability issue Apr 3, 14:23 UTC Resolved - This incident has been resolved. Apr 3, 14:22 UTC Update - We are continuing to monitor for any further issues. Apr 3, 13:34 UTC Update - Google just updated us on an ongoing incident, we’re monitoring the issue with google team Apr 3, 12:46 UTC Monitoring - Codefresh site is up, we’re still monitoring the system Apr 3, 12:29 UTC Investigating - We are currently investigating a problem affecting Codefresh availability
Codefresh’s Status Page - Codefresh availability issue.
2019-04-04
what format do you use for your Secrets Manager in #aws to connect to the Codefresh private repo? Can anyone please share an example
2019-04-05
Status: Image is up to date for cloudposse/build-harness:0.18.0
make: *** No rule to make target 'codefresh/notify/slack/deploy'. Stop.
[SYSTEM] Error: Failed to run freestyle step: Send notification to Slack channel; caused by NonZeroExitCodeError: Container
for step title: Send notification to Slack channel, step type: freestyle, operation: Freestyle step failed with exit code:
2
SendSlackDeployNotification:
title: Send notification to Slack channel
stage: "deploy"
image: cloudposse/build-harness:${{BUILD_HARNESS_VERSION}}
commands:
- make codefresh/notify/slack/deploy
Anyone aware of why this would be happening? LGTM according to build-harness readme and example here: https://github.com/cloudposse/example-app/blob/29f91a718522e4a702d77d172a41ed1f779d42fe/codefresh/pull-request.yaml#L107
Example application for CI/CD demonstrations of Codefresh - cloudposse/example-app
This also seems to follow the same pattern https://docs.cloudposse.com/release-engineering/cicd-process/build-charts/#examples
likewise make init fails
Need more details.
Not too sure what else to share other than BUILD_HARNESS_VERSION == 0.18.0
@Igor Rodionov
So have tried the Codefresh commands of:
...
commands:
- make init
and as above
have you declared the env vars? https://github.com/cloudposse/example-app/blob/29f91a718522e4a702d77d172a41ed1f779d42fe/codefresh/pull-request.yaml#L24
@oscarsullivan_old ^
check these env vars
Collection of Makefiles to facilitate building Golang projects, Dockerfiles, Helm charts, and more - cloudposse/build-harness
To debug it would be useful to see env vars and codefresh.yaml
@oscarsullivan_old I just setup a pipeline using the codefresh/notify/slack/deploy/webapp target using BUILD_HARNESS_VERSION=0.18.0
I would check your environment variables and make sure they are set correctly; I was getting similar errors, and it was because of env variables not being set
Thanks, I’ll double check both these points (version and undeclared variables)
it is most likely the declaration of env vars
I did no special setup to use the build-harness templates
does anyone have suggestions for the strategy I should use for deploying to production from codefresh? I feel unsafe giving the ClusterRole codefresh-role permission to deploy to the whole cluster. Or is that the only way possible?
I think I use this
module "codefresh_user" {
source = "git::https://github.com/cloudposse/terraform-aws-iam-system-user.git?ref=tags/0.4.1"
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "codefresh"
}
resource "aws_iam_user_policy_attachment" "default" {
user = "${module.codefresh_user.user_name}"
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser"
}
but not using K8s rn
2019-04-09
We’ve added support for DocumentDB (MongoDB) to our Codefresh Enterprise terraform module: https://github.com/cloudposse/terraform-aws-codefresh-backing-services
Terraform module to provision AWS backing services necessary to run Codefresh Enterprise - cloudposse/terraform-aws-codefresh-backing-services
This makes everything that is absolutely essential to running Codefresh Enterprise (onprem) a fully managed service by AWS.
2019-04-10
Awesome! I made this little example app a little more deployable… Still working out the kinks. https://github.com/codefresh-contrib/example-voting-app
Docker’s Example Voting App. Contribute to codefresh-contrib/example-voting-app development by creating an account on GitHub.
Requires a few IPs right now and only works with a cloud LB. Not sure how to make this more portable using Istio or NGINX, as both would require a DNS integration using the LB as-is. Requiring 2 IPs per running Helm release is a bit much, but this is also meant to demo a few things and then toss after playing.
2019-04-11
Bitbucket access issue Apr 11, 11:15 UTC Investigating - We are currently investigating a problem affecting bitbucket integrations
Codefresh’s Status Page - Bitbucket access issue.
Build services disruption Apr 11, 13:11 UTC Investigating - We are currently investigating this issue
Codefresh’s Status Page - Build services disruption.
Bitbucket access issue Apr 11, 14:43 UTC Resolved - The incident has been resolved Apr 11, 11:15 UTC Investigating - We are currently investigating a problem affecting bitbucket integrations
Codefresh’s Status Page - Bitbucket access issue.
Build services disruption Apr 11, 14:43 UTC Resolved - The incident has been resolved Apr 11, 13:11 UTC Investigating - We are currently investigating this issue
Codefresh’s Status Page - Build services disruption.
Infrastructure as code, pipelines as code, and now we even have code as code! =P In this talk, we show you how we build and deploy applications with Terraform using GitOps with Codefresh. Cloud Posse is a power user of Terraform and have written over 140 Terraform modules. We’ll share how we handl
Here are the slides from the webinar today.
Video will be posted as soon as it’s available.
Thank you
Here’s what a PR looks like: https://github.com/cloudposse/testing.cloudposse.co/pull/75
what Demo of adding a new user bucket why GitOps rocks! =)
Here’s the configuration: https://github.com/cloudposse/testing.cloudposse.co/tree/master/codefresh/terraform
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
Helm repositories issue Apr 11, 19:04 UTC Resolved - A fix has been deployed to production and confirmed to have fixed the issue. Helm repositories are fully operational. Apr 11, 18:49 UTC Monitoring - The issue has been identified, and the corresponding fix was applied. Helm repositories are accessible again, We’re closely monitoring the platform to ensure everything is working as expected. Apr 11, 18:47 UTC Identified - Currently Helm repositories provided by Codefresh are having issues. This issue is under…
Codefresh’s Status Page - Helm repositories issue.
2019-04-15
@Erik Osterman (Cloud Posse) Do you happen to have an example of running https://github.com/cloudposse/github-commenter as a step in Codefresh pipeline using a Docker image? I’ve seen in other project snippets of codefresh.yml formatted examples but nothing in this project.
Command line utility for creating GitHub comments on Commits, Pull Request Reviews or Issues - cloudposse/github-commenter
Came up with this really quickly.
GitHubCommenter:
title: Add GitHub Comment
image: cloudposse/github-commenter:latest
environment:
- GITHUB_TOKEN=${{GITHUB_TOKEN}} #Must be created see link for info below.
- GITHUB_OWNER=${{CF_REPO_OWNER}}
- GITHUB_REPO=${{CF_REPO_NAME}}
- GITHUB_COMMENT_TYPE=pr
- GITHUB_PR_ISSUE_NUMBER=${{CF_PULL_REQUEST_NUMBER}}
- GITHUB_COMMENT="" #Your custom comment goes here
this is the latest example that Erik did for the presentation https://github.com/cloudposse/testing.cloudposse.co/blob/osterman-patch-1/codefresh/terraform/pipeline.yml
the PR looks like this https://github.com/cloudposse/testing.cloudposse.co/pull/75
Thanks!
uses this template file for github-commenter
https://github.com/cloudposse/testing.cloudposse.co/blob/osterman-patch-1/codefresh/terraform/pipeline.yml#L59 (update it for your needs)
2019-04-16
2019-04-18
Is there a how-to on helmfile in Codefresh like an example codefresh.yml with an explanation on the steps?
Not that well documented
The “ctl” command can be ignored. It just shows how to call Helmfile
In this example, we support both blue/green and rolling
Also, this example shows how we use the monochart
I’d be happy to jump on a call and walk you through it
2019-04-19
How are you guys running stuff like integration tests and system tests? CF can’t directly access internal services that are deployed, so are you guys using some kind of proxy container in the cluster to run these kinds of tests?
Codefresh runtime-environment agent. Contribute to codefresh-io/venona development by creating an account on GitHub.
This allows you to run it all in your k8s cluster
we’ve used this for things like connecting to artifactory or using consul
Yup this runner allows you to talk to the internal pods of the Kubernetes cluster using the container ips if needed.
this will just show up as a different run time environment on your pipeline?
Add the runner to the cluster and then select the runner for the pipeline and everything happens on your cluster.
related to this, I have a question
we are about to make our kops cluster private. I know we can use venona, but how do the k8s integrations work then?
Yes
How to run Codefresh pipelines in your own secure infrastructure
Is there a way to split up what runs on CF runtime env and what runs on our infra? I assume that venona still reports everything up to CF UI, right?
Yes all comes back to UI
Each pipeline has a runtime setting. You can use our CLI to call another pipeline from your original as a step in the original and have that child pipeline run on your infra.
I think this should cover our needs, thanks!
2019-04-22
Has anyone been using cf_export with the name of an existing environment variable? I have tried to use it, but the variable output is getting altered. Freestyle bash code:
- FEATURE=$(echo ${{CF_BRANCH}} | cut -c 9- )
- echo $FEATURE
- cf_export $FEATURE
Output
TI-595
Exporting TI-595=-595
I am seeing the cf_export command parse the static environment variable value vs. storing the defined value with the assignment cf_export $FEATURE.
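A hedged sketch of the likely fix: pass cf_export the variable name (or an explicit assignment), not its expanded value. `cf_export $FEATURE` expands to `cf_export TI-595`, which cf_export then parses as a variable named TI-595. (Step name and image are hypothetical.)

```yaml
export_feature:
  title: Export feature name from branch
  image: alpine
  commands:
    - FEATURE=$(echo ${{CF_BRANCH}} | cut -c 9-)
    # Export by name (relies on FEATURE being visible in this step's shell)...
    - cf_export FEATURE
    # ...or, more explicitly and safely, export with an assignment
    - cf_export FEATURE=${FEATURE}
```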
@Michael Kolb here’s a pipeline step example to setup ENV vars
env:
title: Setup Environment
stage: Init
fail_fast: true
image: ${{build_image}}
working_directory: &cwd ${{main_clone}}/${{PROJECT}}
commands:
- cf_export BUILD_HARNESS_VERSION=0.18.0
# Github Commenter
- cf_export GITHUB_OWNER=${{CF_REPO_OWNER}}
- cf_export GITHUB_REPO=${{CF_REPO_NAME}}
- cf_export GITHUB_COMMENT_TYPE=pr
- cf_export GITHUB_PR_ISSUE_NUMBER=${{CF_PULL_REQUEST_NUMBER}}
- cf_export GITHUB_COMMENT_FORMAT_FILE=${{CF_VOLUME_PATH}}/${{CF_REPO_NAME}}/codefresh/terraform/comment.txt.gotmpl
it exports Codefresh env vars and also those defined in Codefresh UI (Global config section and ENV vars section in each pipeline)
you mean the example does not use the ${{...}} syntax?
So, in the documentation it says that you can state an existing variable
state the name of an existing environment variable (like EXISTING_VAR)
https://codefresh.io/docs/docs/codefresh-yaml/variables/#using-cf_export-command
However, this hasn’t stored that variable assignment
and the displayed use case does not show assignment in the code
version: '1.0'
steps:
freestyle-step-1:
description: Freestyle step..
title: Free styling
image: alpine:latest
commands:
- cf_export VAR1=VALUE1 VAR2=VALUE2 EXISTING_VAR
freestyle-step-2:
description: Freestyle step..
title: Free styling 2
image: ${{VAR1}}
commands:
- echo $VAR2
- curl http://$EXISTING_VAR/index.php
is the documentation referencing a pipeline variable defined in the pipeline vs. in the pipeline yaml?
Yes pipeline variables are in the pipeline SPEC file. If you need variables for the pipeline to be set during pipeline execution then you’d use cf_export to have them pop out to pipeline.
The existing variable section is mentioning step-2 in the YAML above.
You do not need to specify a variable name before the cf_export
thanks, the documentation page could use more examples of cf_export variable assignment like the example that you provided - cf_export GITHUB_OWNER=${{CF_REPO_OWNER}}
@Kostis (Codefresh) Can you add this to your backlog to add in some more use cases to this?
For example command export?
Maybe something like https://github.com/codefresh-contrib/example-voting-app/blob/master/.codefresh/codefresh-dvts.yml#L14
@dustinvb will do
they display assignment as static references, and the reference EXISTING_VAR is misleading if this is pulling from the pipeline variable definition vs. an existing variable
documentation clarification would help, since I have had to run several builds trying to figure out how the cf_export process handles environment variables
is there any documentation on the codefresh/kube-helm image that provides connection to Kubernetes & Tiller?
Are you talking about the image used in our UI steps https://github.com/codefresh-contrib/images/tree/master/kube-helm or Helm step https://github.com/codefresh-contrib/cfstep-helm
useful docker images. Contribute to codefresh-contrib/images development by creating an account on GitHub.
Docker image for Codefresh Helm step. Contribute to codefresh-contrib/cfstep-helm development by creating an account on GitHub.
2019-04-23
the docker hub release
i found the command that i need: kubectl config use-context <<cluster name>>
the container had all the tools installed. Unfortunately, there was no documentation on the docker hub site.
2019-04-24
Hi.. anyone got experience using submodules and codefresh?
version: '1.0'
stages:
- build
- push
- prepare
- deploy
steps:
get_git_token:
title: Reading Github token
image: codefresh/cli
commands:
- cf_export GITHUB_TOKEN_EXPORT=$(codefresh get context github --decrypt -o yaml | yq -y .spec.data.auth.password)
updateSubmodules:
image: codefresh/cfstep-gitsubmodules
environment:
- GITHUB_TOKEN=${{GITHUB_TOKEN_EXPORT}}
- CF_SUBMODULE_SYNC=true
- CF_SUBMODULE_UPDATE_RECURSIVE=false
debug:
title: Debug Submodules
image: codefresh/cli
commands:
- codefresh get contexts
- ls -lah /codefresh/volume/my_app/models
- ls -lah /codefresh/volume/my_app/lib/library
BuildingDockerImage:
title: Building Docker Image
stage: "build"
type: build
image_name: ${{IMAGE_NAME}}
working_directory: ./
tag: '${{CF_BRANCH_TAG_NORMALIZED}}'
dockerfile: pm2.Dockerfile
build_arguments:
- APP_ENV=.env-${{CF_PULL_REQUEST_TARGET}}
- PORT=${{PORT}}
So it gets my git token, updates the submodules in /codefresh/volume/my_app, then does an ls of those dirs to show me it has cloned (it has)
and then in the docker build step I have:
COPY models/* models/
but the models/ dir is empty in the container… So I can only assume that /codefresh/volume/my_app is not the path that my_app is being cloned into during init, and therefore not the context of Docker when building
I would recommend trying to set working_directory: ${{main_clone}} for the step, to ensure you’re in the cloned repository directory.
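For example (a sketch reusing the step names from the pipeline above), pinning both the submodule update and the build to the clone directory so the Docker build context contains the updated submodules:

```yaml
updateSubmodules:
  image: codefresh/cfstep-gitsubmodules
  # Run the submodule update inside the cloned repo
  working_directory: ${{main_clone}}
  environment:
    - GITHUB_TOKEN=${{GITHUB_TOKEN_EXPORT}}
    - CF_SUBMODULE_SYNC=true
BuildingDockerImage:
  title: Building Docker Image
  type: build
  image_name: ${{IMAGE_NAME}}
  # Build from the clone directory so COPY models/* sees the submodules
  working_directory: ${{main_clone}}
  dockerfile: pm2.Dockerfile
```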
2019-04-25
Docker incident: Docker Hub elevated errors Apr 25, 17:33 UTC Investigating - Docker has reported an incident affecting Docker Hub Web and Docker Hub Automated Builds components. It shouldn’t affect normal pulling and pushing operations. We will keep monitoring this.
More information here: https://status.docker.com/
Codefresh’s Status Page - Docker incident: Docker Hub elevated errors.
Our system status page is a real-time view of the performance and uptime of Docker products and services.
Docker incident: Docker Hub elevated errors Apr 25, 18:10 UTC Resolved - Docker has reported this incident as Resolved. “[Resolved] The issue has been resolved. The DB is backup again.”
More information here: https://status.docker.com/pages/incident/533c6539221ae15e3f000031/5cc1ed78790d0e1ca1c8fcd4 Apr 25, 17:33 UTC Investigating - Docker has reported an incident affecting Docker Hub Web and Docker Hub Automated Builds components. It shouldn’t affect normal pulling and pushing operations. We will keep monitoring this.
More…
2019-04-30
Does Codefresh have an API that accepts commands to start an existing pipeline? I have an integration test that will take hours, and I would like the option to start another pipeline once the integration tests complete.
https://codefresh-io.github.io/cli/pipelines/run-pipeline/
Usage from a step in pipeline.
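A minimal sketch of that usage (the pipeline name is hypothetical; --detach returns immediately instead of streaming the child build’s logs):

```yaml
trigger_followup:
  title: Start downstream pipeline
  image: codefresh/cli
  commands:
    - codefresh run my-project/deploy -b ${{CF_BRANCH}} --detach
```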