#docker (2020-10)

docker

All things docker

Archive: https://archive.sweetops.com/docker/

2020-10-02

maarten avatar
maarten

Does anyone know of a good, already-dockerized private Docker registry? And are there any well-known strategies for syncing from a public + commercial repo to a private closed-network registry? Thanks!

Issif avatar

we use gitlab, but it seems overkill for you

zeid.derhally avatar
zeid.derhally

I’ve been wanting to try this out, https://goharbor.io/

Harbor

Our mission is to be the trusted cloud native repository for Kubernetes

github140 avatar
github140

We use https://www.sonatype.com/nexus/repository-oss/download?smtNoRedir=1 It also has proxy support for external registries.

Download Repository OSS

Download Nexus Repository OSS: The world’s first and only universal repository solution that’s FREE to use.
github140 avatar
github140

If the registry doesn’t, then skopeo could sync repos https://github.com/containers/skopeo

containers/skopeo

Work with remote images registries - retrieving information, images, signing content - containers/skopeo
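As a rough sketch of what that sync could look like (the destination registry hostname here is hypothetical):

```shell
# Mirror a single image one-off from Docker Hub into a private registry.
skopeo copy docker://docker.io/library/nginx:1.19 docker://registry.internal.example/nginx:1.19

# Or sync a whole repository (all tags) between registries.
skopeo sync --src docker --dest docker docker.io/library/nginx registry.internal.example/mirror
```

Both commands can take `--src-creds`/`--dest-creds` (or use an existing `docker login`) when either side requires authentication.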

maarten avatar
maarten

Thanks! @github140 How does the proxy configuration work? Is that something to configure in the Docker config on the client side, to use Sonatype for, say, nginx?

github140 avatar
github140

The docker client connects to the on-prem defined (Nexus) proxy endpoint, which refers to e.g. Docker Hub. https://help.sonatype.com/repomanager3/formats/docker-registry/proxy-repository-for-docker
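In practice the client side can look something like this, assuming Nexus exposes a Docker proxy repository on its own connector port (hostname and port here are hypothetical):

```shell
# Authenticate against the Nexus proxy endpoint (if it requires auth).
docker login nexus.example.com:8083

# Pull through Nexus: on a cache miss it fetches from Docker Hub and caches the layers.
docker pull nexus.example.com:8083/nginx:latest

# Optionally re-tag so the rest of your tooling can keep using the short name.
docker tag nexus.example.com:8083/nginx:latest nginx:latest
```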

maarten avatar
maarten

is the Nexus one 1200 USD/year?

github140 avatar
github140

So far we are good with OSS version.

Biswajit Das avatar
Biswajit Das

Hello all, I am looking to create a Docker image from a running Windows container. I have made some manual changes

Biswajit Das avatar
Biswajit Das

Is there any way to do it?

Matt Gowie avatar
Matt Gowie

@Biswajit Das Check out docker commit
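For reference, a minimal sketch (container name and image tag are examples; Windows containers generally need to be stopped before committing):

```shell
# Stop the container first — committing a running Windows container is not supported.
docker stop my-windows-container

# Snapshot its filesystem changes into a new image, with a note about what changed.
docker commit -m "manual changes" my-windows-container myapp:patched

# Verify the new image exists locally.
docker images myapp
```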


2020-10-05

Sean Turner avatar
Sean Turner

Hey all, I’ve deployed postfix on Fargate, and I’m trying to make a FIFO that I tail just before postfix starts, so I can reduce noise in the logs, since I think the health check is getting logged as a connection in Fargate. Is there a better “docker” way of doing what I’m trying to do? This is the end of my entrypoint script. It doesn’t quite work: emails still send when I connect to my container with telnet, but my logs aren’t appearing in stdout, as I haven’t piped them there properly.

mkfifo /var/postfix_logs
postconf -e "maillog_file=/var/postfix_logs"
tail -f /var/postfix_logs | grep -v 'connect from\|lost connection\|disconnect' &

postfix start-fg
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrm… interesting solution. I see what you’re trying to do…

But before trying to do this, did you first just try to do something like this:

postconf -e "maillog_file=/dev/stdout"
postfix start-fg | grep -v 'connect from\|lost connection\|disconnect'

(just a guess.. haven’t tried)

We have also deployed postfix in the past as a container for SMTP relay (E.g. in kubernetes), but these days just use this terraform module…

https://github.com/cloudposse/terraform-aws-ses

cloudposse/terraform-aws-ses

Terraform module to provision Simple Email Service on AWS - cloudposse/terraform-aws-ses

Sean Turner avatar
Sean Turner

ooooooooh

Sean Turner avatar
Sean Turner

Interesting!

Sean Turner avatar
Sean Turner

Yeah, we need postfix (which is in Fargate) because the client has some older applications that don’t support SMTP auth, which means no SES

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, makes sense.

Sean Turner avatar
Sean Turner

Yep, I think this works. Should’ve tried the obvious lol

new:

Oct 07 06:04:37 email-smtp postfix/postfix-script[76]: starting the Postfix mail system
Oct 07 06:04:37 email-smtp postfix/master[77]: daemon started -- version 3.4.12, configuration /etc/postfix
Oct 07 06:04:49 email-smtp postfix/smtpd[81]: improper command pipelining after HELO from unknown[172.17.0.1]: MAIL FROM: sean.turner@$domain.govt.nz\r\nRCPT TO: [email protected]\r\nDATA\r\nSubject: Sending an e
Oct 07 06:04:50 email-smtp postfix/verify[84]: cache btree:/var/lib/postfix/verify_cache full cleanup: retained=0 dropped=0 entries
Oct 07 06:04:50 email-smtp postfix/cleanup[85]: 0A28E2C0068: message-id=<20201007060450.0A28E2C0068@email-smtp.ap-southeast-2.amazonaws.com>
Oct 07 06:04:50 email-smtp postfix/qmgr[79]: 0A28E2C0068: from=<[email protected]>, size=308, nrcpt=1 (queue active)
Oct 07 06:04:51 email-smtp postfix/smtp[86]: 0A28E2C0068: to=<[email protected]>, relay=email-smtp.ap-southeast-2.amazonaws.com[3.105.85.198]:587, delay=0.96, delays=0/0.03/0.84/0.08, dsn=2.0.0, status=deliverable (250 Ok)
Oct 07 06:04:51 email-smtp postfix/qmgr[79]: 0A28E2C0068: removed

old:

Oct 07 06:07:12 email-smtp postfix/postfix-script[76]: starting the Postfix mail system
Oct 07 06:07:12 email-smtp postfix/master[77]: daemon started -- version 3.4.12, configuration /etc/postfix
Oct 07 06:07:17 email-smtp postfix/smtpd[81]: connect from unknown[172.17.0.1]
Oct 07 06:07:20 email-smtp postfix/smtpd[81]: improper command pipelining after HELO from unknown[172.17.0.1]: MAIL FROM: sean.turner@$domain.govt.nz\r\nRCPT TO: [email protected]\r\n
Oct 07 06:07:20 email-smtp postfix/verify[84]: cache btree:/var/lib/postfix/verify_cache full cleanup: retained=0 dropped=0 entries
Oct 07 06:07:20 email-smtp postfix/cleanup[85]: A1A272C00C6: message-id=<20201007060720.A1A272C00C6@email-smtp.ap-southeast-2.amazonaws.com>
Oct 07 06:07:20 email-smtp postfix/qmgr[79]: A1A272C00C6: from=<[email protected]>, size=308, nrcpt=1 (queue active)
Oct 07 06:07:21 email-smtp postfix/smtp[86]: A1A272C00C6: to=<[email protected]>, relay=email-smtp.ap-southeast-2.amazonaws.com[54.79.14.233]:587, delay=0.64, delays=0/0.03/0.53/0.07, dsn=2.0.0, status=deliverable (250 Ok)
Oct 07 06:07:21 email-smtp postfix/qmgr[79]: A1A272C00C6: removed
Oct 07 06:07:56 email-smtp postfix/smtpd[81]: disconnect from unknown[172.17.0.1] helo=1 mail=1 rcpt=1 data=1 quit=1 commands=5
Sean Turner avatar
Sean Turner

No more connect from unknown[172.17.0.1], disconnect from unknown[172.17.0.1] helo=1 mail=1 rcpt=1 data=1 quit=1 commands=5

Sean Turner avatar
Sean Turner

Cheers mate

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

One thing I should have also recommended is adding:

set -o pipefail

At the very top. That way the pipeline fails if any command in it fails, instead of only reflecting the last command’s exit code.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. in

postfix start-fg | grep -v 'connect from\|lost connection\|disconnect'

You want the exit code of postfix start-fg, so this is what it solves.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
set -o pipefail
postconf -e "maillog_file=/dev/stdout"
postfix start-fg | grep -v 'connect from\|lost connection\|disconnect'
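A quick way to see what pipefail changes (a minimal sketch, not from the thread; bash assumed):

```shell
# Without pipefail, a pipeline's exit status is the last command's,
# so a failure on the left-hand side is silently masked.
false | cat
echo "without pipefail: $?"    # prints 0 — cat succeeded

# With pipefail (bash), the pipeline reports the failure.
set -o pipefail
false | cat
echo "with pipefail: $?"       # prints 1 — false's failure propagates
```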

2020-10-06

2020-10-07

2020-10-17

sheldonh avatar
sheldonh

Would the new ECS Docker Compose integration allow me to spin up Grafana from a docker-compose file, with persisted storage volumes and more, in Fargate without any Terraform?

I have some compose files for SQL Server, Grafana, InfluxDB and more, and just haven’t gotten the time to deal with the full Terraform + ECS side. If I can use the new integration to quickly spin up some services or tools for folks, that sounds super promising. Any feedback on trying it? I’m also not clear yet on how I whitelist access, or whether services are open to the public by default. https://docs.docker.com/engine/context/ecs-integration/

Deploying Docker containers on ECS
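Per the linked docs at the time, the flow was roughly the following (a sketch; the context name is arbitrary):

```shell
# Create a Docker context backed by ECS (prompts for AWS credentials/profile).
docker context create ecs myecs

# Point the docker CLI at that context and bring the compose file up.
docker context use myecs
docker compose up

# Tear it down again (deletes the generated CloudFormation stack).
docker compose down
```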

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Looks like there is some work to support storage volumes: https://github.com/docker/compose-cli/issues/737 No idea if it’s released yet or not. Also, it’s EFS, so I’d expect high price and low performance.

As an aside, Fargate containers also get 10GB of local ephemeral storage so that may be worth looking into.

Create EFS filesystem(s) for compose volume(s) · Issue #737 · docker/compose-cli

Description: As a compose file defines (non-external) volumes, the ECS integration should create an EFS filesystem with sane defaults. The filesystem MUST not be deleted by compose down.

sheldonh avatar
sheldonh

Cool! I decided to play around with it today. Pretty cool! It builds it all as a CloudFormation stack. I used the Taskfile.yml approach, and with a few commands I see a container built and a Fargate task up and running. Promising!

sheldonh avatar
sheldonh

The EFS part is a big deal. Without some ability to have persistent storage for a task, stuff like Grafana/InfluxDB would be problematic. Do you know if it’s possible to create those outside of the current compose format and link them in as a resource, or are we out of luck for now?

sheldonh avatar
sheldonh

The only thing taking a while is AWS Cloud Map. New to me. I guess that helps with the networking in the VPC for the services. That part failed to tear down the first time, I think, and it’s 2 minutes into trying to redeploy and still waiting. Other than that piece, so far the remaining pieces are pretty cool!

sheldonh avatar
sheldonh

A failed task got it stuck. The task tried to pull the image from Docker Hub rather than ECR; probably a wrong reference in the docker-compose file.

sheldonh avatar
sheldonh

Makes me appreciate Terraform lol. The teardown on CloudFormation always seemed soooo slow. I think one more docker compose up and it will be running.

sheldonh avatar
sheldonh

It worked. Just not sure how to handle the ports and whitelisting now. Any tips/ideas welcome. I couldn’t put ports in the docker-compose file, and I’m not sure how to assign security groups or IPs for a service.

I’m loving the promise of quick task definitions through this, just things to work through.

sheldonh avatar
sheldonh

Gonna try AWS Copilot and their CLI. I’m guessing from reading that it’s probably much faster for my tests

Zach avatar


Without some ability to have some persistent storage for a task then stuff like Grafana/InfluxDB would be problematic.
Grafana doesn’t write much to disk; it’s easier to just point it at a database for the storage. I’m running it in Fargate like that now (minus the Docker Compose stuff)

sheldonh avatar
sheldonh

Did you write a Terraform module for this? While Copilot is really cool, it abstracts a lot of the underlying CloudFormation, so using a Terraform module would be better.

I’m just new to ECS, so the pieces with load balancers, domain certs, and all the rest are a lot to connect. I’ve only found one Terraform project for this, and it wasn’t up to date

Zach avatar

Yup, it’s pretty simple: just define a Fargate ECS cluster, a service, a task def, and a container def. Add an ALB for TLS, security groups, etc. Easy.

sheldonh avatar
sheldonh

Thank you. I’ll revisit those modules, as I did use them in the past for a test. I think I successfully used them for a chatbot deployment. Didn’t think of using them for this, and I have most of that ready. Just have to configure the Grafana config then.

Tip: I wouldn’t call it easy though; it’s always dependent on your background. I’ve never deployed an auto-scaling load balancer with Fargate, used ACM, etc. All new to me. I’ve learned that you shouldn’t tell anyone something is easy, but rather equip them with concepts and understanding. Everyone comes from someplace different

Zach avatar

oh, it’s complex to wrap your head around, but when you’re done it’s like a handful of resources

sheldonh avatar
sheldonh

Exactly. That was just honest feedback, not mad. I stopped saying “it’s easy” this year, as I realized that easy is relative to each person. Subnets, for example, are super easy to my friend, who built all of them out for our main account. Since I wasn’t involved then and haven’t done much with subnets and VPC design, I’ve found them “hard”, despite being something he can do in his sleep.

I consider aws.tools scripting relatively easy… but realized that’s because I’ve spent years with PowerShell and .NET. So I cut that phrase out of my vocabulary when helping, once I understood this better.

2020-10-18

2020-10-28

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Docker Hub Image Retention Policy Delayed, Subscription Updates - Docker Blog

Learn from Docker experts to simplify and advance your app development and management with Docker. Stay up to date on Docker events and new version announcements!


2020-10-29

RB avatar

Yikes. So what’s everyone doing about this? Buying the Pro version for $5 per month for unlimited pulls, and doing a docker login prior to each build and pull?
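For what it’s worth, Docker documented a way to check where you stand against the limit, via the special ratelimitpreview/test repo (a sketch; requires curl and network access):

```shell
# Fetch an anonymous pull token scoped to the rate-limit preview repo.
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" \
  | sed -n 's/.*"token" *: *"\([^"]*\)".*/\1/p')

# A HEAD request returns ratelimit-limit / ratelimit-remaining headers
# without itself counting as a pull.
curl -sI -H "Authorization: Bearer ${TOKEN}" \
  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
  | grep -i '^ratelimit'
```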

loren avatar

or mirror what you need

Zach avatar

pull public images to your ECR or whatever other repository you have in your env

RB avatar

ya we’re most likely going to cache with artifactory

Zach avatar

I wonder if github caches images locally on your behalf for workflows

Lee Skillen avatar
Lee Skillen

You can store/cache with Cloudsmith as well, or one of the other package management services out there. Even if you’re not using us, it makes sense to have a private cache anyway, to isolate you from the foibles of many third-parties and upstreams (or, erm, policy changes like this one!)

Jonathan Marcus avatar
Jonathan Marcus

Is there any way to check if there is an updated tag without doing a docker pull? Like a docker pull --dry-run that would alert if there’s a new image, but that won’t count against this limit?

RB avatar
How do I check if my local docker image is outdated, without pushing from somewhere else?

I’m running a react app in a docker container, on a Coreos server. Let’s say it’s been pulled from dockerhub from https://hub.docker.com/r/myimages/myapp. Now I want to check periodically if the

bradym avatar

I have not yet tried it, but https://crazymax.dev/diun/ looks useful for this. Doesn’t solve the limit issue, but you could create a separate free dockerhub account to use with it.

Diun

Receive notifications when a Docker image is updated on a Docker registry

Vugar avatar

Not sure if this is of any help with the original question… Unless you have admin access to the webhook notification settings… maybe Keel can be used for periodic polling, since you can configure the required interval. But Keel will execute an update upon changes in the tag… not sure if this is what you need.

Keel

Kubernetes Operator to automate Helm, DaemonSet, StatefulSet & Deployment updates

Jonathan Marcus avatar
Jonathan Marcus

Thanks everyone. I think hitting the Docker API seems reasonable, so we’ll try that. If it fails we’ll degrade gracefully and fall back to a docker pull.
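A sketch of that registry-API approach (the image name is an example; Docker stated that HEAD manifest requests are not counted against the limit):

```shell
IMAGE=library/nginx
TAG=latest

# Anonymous pull token for the repository.
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${IMAGE}:pull" \
  | sed -n 's/.*"token" *: *"\([^"]*\)".*/\1/p')

# HEAD the manifest; the digest comes back in the docker-content-digest header.
REMOTE=$(curl -sI \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://registry-1.docker.io/v2/${IMAGE}/manifests/${TAG}" \
  | tr -d '\r' | awk 'tolower($1) == "docker-content-digest:" {print $2}')

# Digest of the copy pulled earlier, if any.
LOCAL=$(docker image inspect --format '{{index .RepoDigests 0}}' "nginx:${TAG}" 2>/dev/null | cut -d@ -f2)

[ "$REMOTE" = "$LOCAL" ] && echo "up to date" || echo "new image available (or none pulled yet)"
```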

2020-10-30

bradym avatar

https://about.gitlab.com/blog/2020/10/30/mitigating-the-impact-of-docker-hub-pull-requests-limits/

GitLab posted to their blog about this change. According to the article, if you’re using gitlab.com you don’t need to worry about it, as they use Google’s Docker Hub mirror. If you’re self-hosting GitLab runners, you can set a Docker Hub mirror, which they provide instructions for.

Caching Docker images to reduce the number of calls to DockerHub from your CI/CD infrastructure

Docker announced it will be rate-limiting the number of pull requests to the service in its free plan. We share strategies to mitigate the impact of the new pull request limits for users and customers that are managing their own GitLab instance.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I didn’t know about mirror.gcr.io


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

does that work outside of GCP? (e.g. pulling from AWS)

bradym avatar

Near the end of that page:
Pulling cached images does not count against Docker Hub rate limits. However, there is no guarantee that a particular image will remain cached for an extended period of time. Only obtain cached images on mirror.gcr.io by configuring the Docker daemon. A request to pull directly from mirror.gcr.io will fail if a cached copy of the image does not exist.
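Concretely, enabling the mirror on a Docker host looks something like this (a sketch; systemd-based Linux assumed, and this overwrites any existing daemon.json):

```shell
# Configure the daemon-level pull-through mirror.
echo '{ "registry-mirrors": ["https://mirror.gcr.io"] }' | sudo tee /etc/docker/daemon.json

# Restart the daemon and confirm the mirror was picked up.
sudo systemctl restart docker
docker info --format '{{.RegistryConfig.Mirrors}}'
```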

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aha, gotcha
