#docker (2021-08)

docker

All things docker

Archive: https://archive.sweetops.com/docker/

2021-08-06

loren avatar
Introduction to heredocs in Dockerfiles - Docker Blog

Learn from Docker experts to simplify and advance your app development and management with Docker. Stay up to date on Docker events and new version announcements!

bananadance1
1
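
For anyone skimming the archive, the feature looks roughly like this. It needs BuildKit; at the time of the post it required the # syntax=docker/dockerfile:1.3-labs frontend, and later docker/dockerfile:1 releases include it. A minimal sketch:

```dockerfile
# syntax=docker/dockerfile:1
FROM debian:bullseye-slim

# One readable heredoc instead of a long chain of '&&'-joined commands;
# the whole block still produces a single layer.
RUN <<EOF
set -eux
apt-get update
apt-get install -y --no-install-recommends curl ca-certificates
rm -rf /var/lib/apt/lists/*
EOF

# Heredocs also work with COPY, for writing small files inline.
COPY <<EOF /etc/motd
Built with a Dockerfile heredoc.
EOF
```
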
sheldonh avatar
sheldonh

Oh this is nice! Thanks for sharing

1
sheldonh avatar
sheldonh

Ansible for Container Builds

Ran across this project and haven’t seen much mention of it (only one in our archives). ansible-bender: https://github.com/ansible-community/ansible-bender

I’ve wondered about this in prior threads too. I don’t quite get why something like this hasn’t taken off more. We’ve dramatically simplified installing packages and apps with Ansible, but then with Dockerfiles it feels like we step backward into bash scripts and curl calls, all of which depend on the distro too. On top of the benefits of docker compose, you get more features from Ansible.

I’d have thought Ansible for defining builds, installing packages, and more would have been embraced eagerly.

What’s the reason y’all think this type of approach didn’t gain traction?

GitHub - ansible-community/ansible-bender: ansible-playbook + buildah = a sweet container image

ansible-playbook + buildah = a sweet container image - GitHub - ansible-community/ansible-bender: ansible-playbook + buildah = a sweet container image

bradym avatar

IMO it’s because you have to be comfortable with both ansible and docker. Both of those things have a bit of a learning curve as you’re getting started. And it’s not something that has an immediately visible transition path - if you start off without ansible in your Dockerfile, you’d have to rewrite your Dockerfile mostly from scratch to add Ansible.

bradym avatar

Also, Ansible is not exactly a small install. I have a docker image I use to run ansible playbooks, and using dive I can see that installing ansible and the dependencies I need comes to 380MB.

bradym avatar

I’m guessing most people aren’t going to the length of creating a container in which other containers get built, just to take advantage of something like ansible.

bradym avatar

And while you could use build stages, that means diving into the image to figure out where things are getting installed to be sure you’re copying everything you need.
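
Roughly what that looks like with a hypothetical builder stage (the playbook name and install paths are illustrative, not taken from any of the projects above):

```dockerfile
# Heavy tooling (Ansible here) stays in the builder stage only.
FROM python:3.11-slim AS builder
RUN pip install --no-cache-dir ansible
COPY provision.yml .
# Assume the playbook installs the application under /opt/app.
RUN ansible-playbook -i localhost, -c local provision.yml

# Final stage: you have to know every path the playbook touched
# in order to copy the results forward.
FROM python:3.11-slim
COPY --from=builder /opt/app /opt/app
```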

bradym avatar

I could see a tool like ansible-bender gaining traction among people who’ve used ansible before but aren’t as comfortable with Docker.

bradym avatar

But I can’t see ansible becoming a tool that most people are going to reach for when building docker images.

sheldonh avatar
sheldonh

Got you. I guess once you start installing any special tools it just feels like stepping backwards to script those out instead of doing a packages: ['build-essential', 'foo'] and stuff like that.

Not a big deal, just feels strange to have shifted the easy setup and installs back to raw scripting.

Maybe I’ve been conditioned too much into liking YAML.
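
For context, the declarative style being referred to looks something like this in an Ansible task file (package and template names are just illustrative):

```yaml
# Hypothetical tasks standing in for a RUN apt-get/curl line in a Dockerfile.
- name: Install build dependencies
  ansible.builtin.package:
    name:
      - build-essential
      - curl
    state: present

- name: Drop in application config
  ansible.builtin.template:
    src: app.conf.j2
    dest: /etc/app/app.conf
```

The package module also picks the right package manager for the distro, which is part of the appeal being described above.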

bradym avatar

I can see what you mean.

On the other hand, I really like that anyone with a basic understanding of *nix can look at a Dockerfile and understand what’s going on, at least at a high level.

1

2021-08-18

Almondovar avatar
Almondovar

Hi colleagues, we are using the php image 7.4.9-apache and we received a customer requirement to upgrade to Debian v10.10. By running exec into the container we can see that it’s on version 10. My question is: how do I know which image to pick that has Debian v10.10? In the image details I can’t see anything relating to the Debian version. Thanks!

# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
sheldonh avatar
sheldonh

Did you try docker inspect on the container to see if there’s anything useful in there? That’s what I’d normally start with to explore.
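
Worth noting that /etc/os-release only carries the major Debian version; the point release lives in /etc/debian_version. Something along these lines (tag is illustrative) shows what a given php image is actually built on:

```sh
# The Debian point release lives in /etc/debian_version, not /etc/os-release.
docker run --rm php:7.4-apache cat /etc/debian_version
# -> e.g. 10.10, whichever buster point release the image was built against

# docker inspect exposes image metadata (labels, env, layers) but usually
# not the distro point release, so checking inside the image is simplest.
docker image inspect php:7.4-apache --format '{{.Config.Labels}}'
```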

2021-08-22

Steffan avatar
Steffan

Trying to understand how latest tags work for images. Does Docker pick the most recent version of the image when latest is specified, or does an actual version of the image tagged latest have to exist before it will work? Quite confused by all that I’ve been reading. Can anyone help me understand how this works?

mfridh avatar

It means an actual :latest tag in the repository.

By default, on docker pull, image references without a :tag default to :latest in the Docker Engine:

https://docs.docker.com/engine/reference/commandline/pull/#examples

docker pull

docker pull: Most of your images will be created on top of a base image from the Docker Hub registry. Docker Hub contains many pre-built images that you can pull
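
Put differently: :latest is just an ordinary tag that someone has to push; leaving the tag off only means the engine fills in :latest for you. Roughly:

```sh
# These are equivalent -- the engine defaults the missing tag to :latest.
docker pull alpine
docker pull alpine:latest

# :latest only exists because a publisher pushed it; it is not automatically
# the newest version. Hypothetical example of publishing one:
docker tag myapp:2.0.0 registry.example.com/myapp:latest
docker push registry.example.com/myapp:latest
```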

Steffan avatar
Steffan

Ahh, I see, so it doesn’t go in there and grab the most recent version, as the word might imply.

Steffan avatar
Steffan

This article helped me understand it more, for anyone who cares about this topic: https://stevelasker.blog/2018/03/01/docker-tagging-best-practices-for-tagging-and-versioning-docker-images/

Docker Tagging: Best practices for tagging and versioning docker images

In any new tech, there are lots of thoughts around “best practices”. When a tech is new, what makes a best practice? … I’ll start by outlining two basic versioning schemes we’ve found map to the most common scenarios. And, how they are used together to solve the container life cycle management problem.
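
The short version of the article’s advice: deploy by unique (immutable) tags, and treat stable tags such as :1 or :2.0 as convenience pointers for humans and base-image consumers. A rough sketch (names are illustrative):

```sh
# Tag each build with both a unique tag (for deployments) and a stable tag.
docker build -t registry.example.com/web:1.4.0-2021-08-22.1 \
             -t registry.example.com/web:1.4 .
docker push registry.example.com/web:1.4.0-2021-08-22.1
docker push registry.example.com/web:1.4

# Orchestrators reference the unique tag, so a replaced node re-pulls exactly
# the bits that were originally deployed instead of whatever :1.4 points at now.
```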

1