#docker (2020-10)
All things docker
Archive: https://archive.sweetops.com/docker/
2020-10-02
Anyone know a good, already-dockerized private Docker registry? And are there any well-known strategies for syncing from a public + commercial repo to a private closed-network registry? Thanks!
we use gitlab, but seems overkill for you
I’ve been wanting to try this out, https://goharbor.io/
Our mission is to be the trusted cloud native repository for Kubernetes
We use https://www.sonatype.com/nexus/repository-oss/download?smtNoRedir=1 It also has proxy support for external registries.
Download Nexus Repository OSS | The world’s first and only universal repository solution that’s FREE to use. |
If the registry doesn’t, then skopeo could sync repos https://github.com/containers/skopeo
Work with remote images registries - retrieving information, images, signing content - containers/skopeo
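If you go the skopeo route, a sync invocation looks roughly like this (registry hostnames are hypothetical; add `--src-creds`/`--dest-creds` or `skopeo login` for authenticated registries):

```shell
# One-way sync of a public repository into a private registry.
# Without a tag this copies every tag of the source repo; append a
# specific tag (e.g. docker.io/library/nginx:1.19) to sync just one.
skopeo sync --src docker --dest docker \
  docker.io/library/nginx \
  registry.internal.example:5000/mirror
```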
Thanks @github140! How does the proxy configuration work? Is that something you configure in the Docker config on the client side, e.g. to use Sonatype for, say, nginx?
The docker client connects to the onprem defined (Nexus) proxy endpoint which refers to e.g. dockerhub. https://help.sonatype.com/repomanager3/formats/docker-registry/proxy-repository-for-docker
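From the client it usually looks like this (Nexus hostname and port are hypothetical; the proxy repo is exposed on an HTTP connector port you configure in Nexus):

```shell
# Pull nginx through the Nexus Docker proxy repo instead of Docker Hub.
# Nexus fetches and caches the image from the remote (e.g. Docker Hub)
# on the first pull, and serves the cached copy afterwards.
docker login nexus.example.internal:8082
docker pull nexus.example.internal:8082/nginx:latest
```

Alternatively, you can register the endpoint under `registry-mirrors` in `/etc/docker/daemon.json` so a plain `docker pull nginx` is served through it.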
is the nexus one 1200 usd/year ?
So far we are good with OSS version.
Hello All, I'm looking to create a Docker image from a running Windows container where I've made some manual changes. Is there any way to do it?
@Biswajit Das Check out docker commit
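A commit-and-push sketch for reference (container and image names are hypothetical; note that Windows containers generally have to be stopped before committing, unlike on Linux):

```shell
# Stop the container first; committing a running container is not
# supported for Windows containers the way it is on Linux.
docker stop win-app
docker commit --message "manual changes" win-app myregistry/win-app:patched
docker push myregistry/win-app:patched
```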
2020-10-05
Hey all, I’ve deployed postfix on fargate, and I’m trying to make a fifo that I am tailing just before postfix starts so that I can reduce noise in the logs, as I think the health check is getting logged as a connection in fargate. Is there a better “docker” way of doing what I’m trying to do? This is the end of my entrypoint script. This doesn’t end up working – emails still send when I connect to my container with telnet – but my logs aren’t appearing in stdout as I haven’t piped them to stdout properly.
mkfifo /var/postfix_logs
postconf -e "maillog_file=/var/postfix_logs"
tail -f /var/postfix_logs | grep -v 'connect from\|lost connection\|disconnect' &
postfix start-fg
Hrm… interesting solution. I see what you’re trying to do…
But before trying to do this, did you first just try to do something like this:
postconf -e "maillog_file=/dev/stdout"
postfix start-fg | grep -v 'connect from\|lost connection\|disconnect'
(just a guess.. haven’t tried)
We have also deployed postfix in the past as a container for SMTP relay (E.g. in kubernetes), but these days just use this terraform module…
Terraform module to provision Simple Email Service on AWS - cloudposse/terraform-aws-ses
ooooooooh
Interesting!
Yeah, we need postfix (which is in fargate) as the client has some older applications that don’t have smtp auth which means no SES
Aha, makes sense.
Yep, I think this works. Should’ve tried the obvious lol
new:
Oct 07 06:04:37 email-smtp postfix/postfix-script[76]: starting the Postfix mail system
Oct 07 06:04:37 email-smtp postfix/master[77]: daemon started -- version 3.4.12, configuration /etc/postfix
Oct 07 06:04:49 email-smtp postfix/smtpd[81]: improper command pipelining after HELO from unknown[172.17.0.1]: MAIL FROM: sean.turner@$domain.govt.nz\r\nRCPT TO: [email protected]\r\nDATA\r\nSubject: Sending an e
Oct 07 06:04:50 email-smtp postfix/verify[84]: cache btree:/var/lib/postfix/verify_cache full cleanup: retained=0 dropped=0 entries
Oct 07 06:04:50 email-smtp postfix/cleanup[85]: 0A28E2C0068: message-id=<20201007060450.0A28E2C0068@email-smtp.ap-southeast-2.amazonaws.com>
Oct 07 06:04:50 email-smtp postfix/qmgr[79]: 0A28E2C0068: from=<[email protected]>, size=308, nrcpt=1 (queue active)
Oct 07 06:04:51 email-smtp postfix/smtp[86]: 0A28E2C0068: to=<[email protected]>, relay=email-smtp.ap-southeast-2.amazonaws.com[3.105.85.198]:587, delay=0.96, delays=0/0.03/0.84/0.08, dsn=2.0.0, status=deliverable (250 Ok)
Oct 07 06:04:51 email-smtp postfix/qmgr[79]: 0A28E2C0068: removed
old:
Oct 07 06:07:12 email-smtp postfix/postfix-script[76]: starting the Postfix mail system
Oct 07 06:07:12 email-smtp postfix/master[77]: daemon started -- version 3.4.12, configuration /etc/postfix
Oct 07 06:07:17 email-smtp postfix/smtpd[81]: connect from unknown[172.17.0.1]
Oct 07 06:07:20 email-smtp postfix/smtpd[81]: improper command pipelining after HELO from unknown[172.17.0.1]: MAIL FROM: sean.turner@$domain.govt.nz\r\nRCPT TO: [email protected]\r\n
Oct 07 06:07:20 email-smtp postfix/verify[84]: cache btree:/var/lib/postfix/verify_cache full cleanup: retained=0 dropped=0 entries
Oct 07 06:07:20 email-smtp postfix/cleanup[85]: A1A272C00C6: message-id=<20201007060720.A1A272C00C6@email-smtp.ap-southeast-2.amazonaws.com>
Oct 07 06:07:20 email-smtp postfix/qmgr[79]: A1A272C00C6: from=<[email protected]>, size=308, nrcpt=1 (queue active)
Oct 07 06:07:21 email-smtp postfix/smtp[86]: A1A272C00C6: to=<[email protected]>, relay=email-smtp.ap-southeast-2.amazonaws.com[54.79.14.233]:587, delay=0.64, delays=0/0.03/0.53/0.07, dsn=2.0.0, status=deliverable (250 Ok)
Oct 07 06:07:21 email-smtp postfix/qmgr[79]: A1A272C00C6: removed
Oct 07 06:07:56 email-smtp postfix/smtpd[81]: disconnect from unknown[172.17.0.1] helo=1 mail=1 rcpt=1 data=1 quit=1 commands=5
No more connect from unknown[172.17.0.1] or disconnect from unknown[172.17.0.1] helo=1 mail=1 rcpt=1 data=1 quit=1 commands=5
One thing I should have also recommended is adding:
set -o pipefail
At the very top. That way the pipeline's exit status reflects a failure anywhere in it (the rightmost non-zero exit status), not just grep's.
e.g. in
postfix start-fg | grep -v 'connect from\|lost connection\|disconnect'
You want the exit code of postfix start-fg, not grep's, and that's what it solves.
set -o pipefail
postconf -e "maillog_file=/dev/stdout"
postfix start-fg | grep -v 'connect from\|lost connection\|disconnect'
2020-10-06
2020-10-07
2020-10-17
Would the new ECS docker compose integration allow me to spin up Grafana from a docker compose file with persisted storage volumes and more in fargate without any terraform?
I have some compose files for SQL Server, Grafana, InfluxDB and more, and just haven't gotten the time to deal with the full Terraform + ECS side. If I can use the new integration to spin up some services or tools quickly for folks, that sounds super promising. Any feedback from trying it? I'm also not clear yet on how I'd whitelist access, or whether services are open to the public by default. https://docs.docker.com/engine/context/ecs-integration/
Deploying Docker containers on ECS
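The flow in those docs, roughly (the context name is arbitrary; requires the compose-cli with the ecs context type):

```shell
# Create a Docker context backed by ECS (prompts for an AWS profile),
# then drive it with the usual compose verbs. The compose file is
# converted to a CloudFormation stack under the hood.
docker context create ecs myecs
docker --context myecs compose up
docker --context myecs compose ps      # shows tasks and their endpoints
docker --context myecs compose down    # tears the stack down
```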
Looks like there is some work to support storage volumes: https://github.com/docker/compose-cli/issues/737 No idea if it’s released yet or not. Also it’s EFS so I’d expect high price and low performance.
As an aside, Fargate containers also get 10GB of local ephemeral storage so that may be worth looking into.
Description: As the compose file defines (non-external) volumes, the ECS integration should create an EFS filesystem with sane defaults. The filesystem MUST not be deleted by compose down.
Cool! I decided to play around with it today for some stuff. Pretty cool! It builds it all as a CloudFormation stack. I used the Taskfile.yml approach, and with a few commands I can see a container built and a Fargate task up and running. Promising!
The EFS part is a big deal. Without some ability to have persistent storage for a task, stuff like Grafana/InfluxDB would be problematic. Do you know if it's possible to create those outside of the current compose format and link them in as a resource, or am I out of luck for now?
Only thing taking a while is AWS Cloudmap. New to me. I guess that helps with the networking in VPC for the services. That part failed to tear down the first time I think and it’s 2 mins in trying to redeploy and still waiting. Other than that piece, so far the remaining pieces are pretty cool!
A failed task got it stuck. The task tried to pull the image from Docker Hub rather than ECR; probably a wrong reference in the Docker Compose file.
Makes me appreciate terraform lol. The tear down on cloudformation always seemed soooo slow. I think next docker compose up and it will be running
Worked. Just not sure how to handle the ports and whitelisting now. Any tips/ideas welcome. I couldn't put ports in the Docker Compose file, and I'm not sure how to assign security groups or IPs for a service.
I’m loving the promise of quick task definitions up through this, just things to work through.
Gonna try aws copilot and their CLI. I’m guessing from reading it’s probably much faster for my tests
Without some ability to have some persistent storage for a task then stuff like Grafana/InfluxDB would be problematic.
grafana doesn’t write much to disk, easier to just point it at a database for the storage. I’m running it in fargate like that now (minus the docker compose stuff)
Did you write a terraform module for this? While copilot is really cool it abstracts a lot of underlying cloudformation so using a terraform module would be better.
I'm just new to ECS, so the pieces with load balancers, domain certs, and all the rest are a lot to connect. I've found only one Terraform project for this, and it wasn't up to date.
Here are some Terraform examples for running apps in Fargate on ECS:
• https://github.com/terraform-aws-modules/terraform-aws-atlantis
• https://github.com/Vlaaaaaaad/terraform-aws-fargate-samproxy
Yup, it's pretty simple: just define a Fargate ECS cluster, a service, a task def, and a container def. Add an ALB for TLS, security groups, etc. Easy.
Thank you. I'll revisit those modules, as I did use them in the past for a test. I think I successfully used them for a chat deployment. Didn't think of using them for this, and I have most of that ready. Just have to sort out the Grafana config then.
Tip: I wouldn't call it easy though; it's always dependent on your background. I've never deployed an auto-scaling load balancer with Fargate, used ACM, etc. It's all new to me. I've learned that you shouldn't say something is easy to anyone, but rather equip them with concepts and understanding. Everyone comes from someplace different.
Oh, it's complex to wrap your head around, but when you're done it's just a handful of resources.
Exactly. Was just honest feedback, not mad. I stopped saying “it’s easy” this year as I realized that easy is relative to each person. Subnets for example are super easy to my friend who built all of them out for our main account. Since I wasn’t involved then and haven’t done much with subnets and vpc design I’ve found it is “hard” for me despite being something he can do in his sleep.
I consider AWS.Tools scripting relatively easy, but realized that's because I've spent years with PowerShell and .NET. So I cut that phrase out of my vocabulary when helping, once I understood this better.
2020-10-18
2020-10-28
2020-10-29
Yikes. So what's everyone doing about this? Buy the Pro version for $5 per month for unlimited pulls, and docker login prior to each build and pull?
pull public images to your ECR or whatever other repository you have in your env
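Mirroring into ECR is mostly retag-and-push. A sketch with a hypothetical account and region; the aws/docker steps are commented out so only the naming convention runs:

```shell
# Where the mirrored copy will live (account ID and region are made up).
registry="123456789012.dkr.ecr.us-east-1.amazonaws.com"
image="nginx:1.19"
mirror="${registry}/mirror/${image}"
echo "would mirror ${image} to ${mirror}"

# The actual steps, once per repo / once per image:
# aws ecr create-repository --repository-name "mirror/${image%%:*}"
# aws ecr get-login-password | docker login --username AWS --password-stdin "$registry"
# docker pull "$image" && docker tag "$image" "$mirror" && docker push "$mirror"
```

Point your task definitions and Dockerfiles at the mirror reference instead of the Docker Hub name.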
ya we’re most likely going to cache with artifactory
I wonder if github caches images locally on your behalf for workflows
You can store/cache with Cloudsmith as well, or one of the other package management services out there. Even if you’re not using us, it makes sense to have a private cache anyway, to isolate you from the foibles of many third-parties and upstreams (or, erm, policy changes like this one!)
Is there any way to check if there is an updated tag without doing a docker pull? Like a docker pull --dry-run that would alert if there's a new image, but that won't count against this limit?
maybe something like this
https://gist.github.com/byrnedo/9b2078c191360c681f85cebb2187d66f
this looks more up to date
I’m running a react app in a docker container, on a Coreos server. Let’s say it’s been pulled from dockerhub from https://hub.docker.com/r/myimages/myapp. Now I want to check periodically if the
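Those gists hit the registry HTTP API directly. The gist of the approach is below (repo and tag are just examples, and per Docker's rate-limit docs a HEAD request for a manifest is not supposed to count as a pull):

```shell
#!/bin/sh
# Sketch: compare the remote manifest digest against the local image
# digest without pulling. Needs curl; docker is optional (step 3 just
# yields an empty string if the image or the docker CLI is absent).
repo="library/nginx"; tag="latest"

# 1. Anonymous pull token for the repository.
token=$(curl -fsSL \
  "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${repo}:pull" \
  2>/dev/null | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')

# 2. HEAD the manifest; Docker-Content-Digest identifies the content.
remote=$(curl -fsSI \
  -H "Authorization: Bearer ${token}" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://registry-1.docker.io/v2/${repo}/manifests/${tag}" \
  2>/dev/null | awk 'tolower($1)=="docker-content-digest:"{print $2}' | tr -d '\r')

# 3. Digest of the local copy, if any (empty when the image is absent).
local_digest=$(docker image inspect "${repo#library/}:${tag}" \
  --format '{{index .RepoDigests 0}}' 2>/dev/null | cut -d@ -f2)

if [ -n "$remote" ] && [ "$remote" != "$local_digest" ]; then
  echo "new digest available: $remote"
fi
```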
I have not yet tried it, but https://crazymax.dev/diun/ looks useful for this. Doesn’t solve the limit issue, but you could create a separate free dockerhub account to use with it.
Receive notifications when a Docker image is updated on a Docker registry
Not sure if this is of any help with original question… Unless you have admin access to webhooks notification settings… maybe Keel can be used for periodic polling, since you can configure required interval. But Keel will execute update upon changes in the tag… not sure if this is what you need.
Kubernetes Operator to automate Helm, DaemonSet, StatefulSet & Deployment updates
Thanks everyone. I think hitting the Docker API seems reasonable, so we'll try that. If it fails we'll degrade gracefully and fall back to a docker pull.
2020-10-30
https://about.gitlab.com/blog/2020/10/30/mitigating-the-impact-of-docker-hub-pull-requests-limits/
Gitlab posted to their blog about this change. According to the article: if you’re using gitlab.com, you don’t need to worry about it as they use google’s docker hub mirror. If you’re self-hosting gitlab runners you can set a dockerhub mirror, which they provide instructions for.
Docker announced it will be rate-limiting the number of pull requests to the service in its free plan. We share strategies to mitigate the impact of the new pull request limits for users and customers that are managing their own GitLab instance.
I didn’t know about [mirror.gcr.io](http://mirror.gcr.io)
does that work outside of GCP? (e.g. pulling from AWS)
Near the end of that page:
Pulling cached images does not count against Docker Hub rate limits. However, there is no guarantee that a particular image will remain cached for an extended period of time. Only obtain cached images on mirror.gcr.io by configuring the Docker daemon. A request to pull directly from mirror.gcr.io will fail if a cached copy of the image does not exist.
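Enabling the mirror is a one-line daemon config. A sketch, written to ./daemon.json here for illustration; the real file is /etc/docker/daemon.json, and dockerd needs a restart afterwards:

```shell
# Register mirror.gcr.io as a pull-through mirror for the Docker daemon.
cat > ./daemon.json <<'EOF'
{
  "registry-mirrors": ["https://mirror.gcr.io"]
}
EOF
cat ./daemon.json
```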
aha, gotcha