All things docker Archive: https://archive.sweetops.com/docker/
Anyone using SSO with their AWS accounts and successfully pulling images from ECR with docker pull via an SSO account? I can successfully docker login (supposedly), but I get this error despite having AdministratorAccess:
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin XXXXXXXX.dkr.ecr.us-east-1.amazonaws.com
Login Succeeded

docker pull XXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/reponame
Using default tag: latest
Error response from daemon: pull access denied for XXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/reponame, repository does not exist or may require 'docker login': denied: User: arn:aws:sts::YYYYYYYY:assumed-role/AWSReservedSSO_AdministratorAccess_29495c17e6538e9b/[email protected] is not authorized to perform: ecr:BatchGetImage on resource: arn:aws:ecr:us-east-1:XXXXXXXX:repository/reponame
Not asking anyone to fix it for me, I just want to know if the *real* issue is I haven’t yet found the AWS documentation where they casually mention that SSO accounts can’t do this.
Did you confirm full ECR access on your assigned role?
not authorized to perform: ecr:BatchGetImage
That makes me think it’s nothing to do with SSO, but is instead an issue with the permissions themselves
Yeah, I think it might be something to do with how the role is assumed where XXXXXXXX becomes YYYYYYYY (account numbers masked)
You need to check the role you are assuming and validate that it has the full set of permissions required. My first guess is you have permissions when using your account in the “home account”, but when assuming this role it’s missing something with ECR, resulting in that error.
Make sure the ECR registry is actually sharing images with that account too
it’s a policy per repository
two policies to check: the IAM policy the user is using, and the ECR resource policy (make sure the ECR resource policy allows access from the role the IAM policy is attached to)
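For reference, an ECR repository (resource) policy granting cross-account pull access looks roughly like this. The Sid and account ID are placeholders, and note that SSO-assumed role ARNs carry a random suffix, so the principal is often the account root (or a condition on aws:PrincipalArn) rather than the exact role ARN:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPullFromAssumedRoleAccount",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::YYYYYYYY:root"
      },
      "Action": [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
```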
The Dapper project is something I find interesting, and broadly similar to how Cloud Posse does container-driven code tools.
I’m wondering if there is any other tooling that makes it easy to wrap up sets of dev tools (or individual ones) in containers so I can stop worrying about Linux/Mac/Windows and just run whatever I want. Docker commands are too verbose for this imo.
Basically, like Dapper, I want to run task semver and have it grab whatever I’ve defined that to be as a Docker container, allowing normal CLI usage.
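A rough sketch of what such a wrapper could be: a tiny shell function (everything here, including the task names and image names, is hypothetical) that maps a task name to an image and mounts the working directory so it feels like a native CLI:

```shell
#!/bin/sh
# Hypothetical "task" wrapper: maps a task name to a Docker image so that
# `task semver <args>` runs that image against the current directory.
task() {
  name="$1"; shift
  case "$name" in
    semver) image="ghcr.io/example/semver:latest" ;;  # placeholder image
    lint)   image="ghcr.io/example/lint:latest" ;;    # placeholder image
    *)      echo "unknown task: $name" >&2; return 1 ;;
  esac
  cmd="docker run --rm -it -v $(pwd):/work -w /work $image $*"
  # TASK_DRY_RUN=1 prints the docker command instead of executing it.
  if [ -n "$TASK_DRY_RUN" ]; then
    echo "$cmd"
  else
    $cmd
  fi
}
```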
Is it better to instead build all the tools into a single container like geodesic does and require the developer to exec into it and go from there? Use cli tool for prompt but it’s all internal to the tool?
Maybe the wrapper stuff is the problem and instead the docker container just has the simplified commands inside itself.
Been thinking about this because of the variation in machines. Maybe docker interactive is truly the best approach instead of special wrappers anyway. The only big issue is that now a CI job has to download a 10 GB Docker image instead of running a few install commands.
Something like this seems really cool too.
https://github.com/cirruslabs/cirrus-cli - CLI for executing Cirrus tasks locally.
https://github.com/EnvCLI/EnvCLI might be worth checking out
Don't install Node, Go, … locally - use containers you define within your project. If you have a new machine / other contributors you just have to install docker and envcli to get started.
@bradym nice! I’ll give this a look too.
This example is exactly the thing I’m trying to simplify without a bunch of work re-wrapping stuff up in yet another docker wrapper tool. :slightly_smiling_face:
docker run --rm -it --workdir /go/src/project/ --volume "C:\SourceCodes\golang\envcli:/go/src/project" golang:latest /usr/bin/env sh -c "go build -o envcli src/*"
This gets turned into:
envcli run go build -o envcli src/* or with alias:
go build -o envcli src/*
The key to me is that’s super minimal, just basic help kinda like terragrunt started out with, solving basic workflow problems, but not trying to duplicate the underlying docker purpose.
The other approach is a single larger Docker container, like Codespaces does, which you exec into and do all your task work in. This has the merits of working with Codespaces and being a single file, but it’s not really what Docker was designed for imo. Lots to think about with approaches here
I think I need to revisit atmos for inspiration. The core issue is dependencies and staying platform/CI-tool agnostic.
Docker is really the answer for that part, since in reality, even working with Go, there’s no real promise of true control without OS control. I think I’ve been considering it from an ease-of-use angle, and using something more like Atmos for the tooling might make more sense, especially since I’m seeing I can extend or customize it, or start with it, from the docs.
My other alternative is to run Mage (Go Make alternative), but then again there are things that might be required I might not cover. Having it baked into the docker image solves so much of the problem.
The other thing to look at is whether I use individual tools as docker commands or an interactive Docker container with all tools in one (which seems more maintainable and sharable). Then I can consider rolling things out to the wider organization.
@Erik Osterman (Cloud Posse) i plan on revisiting atmos again soon. I did try it last time but found the variant files a bit confusing
I am curious, since I think you and your team write Go: have you ever tried Mage? Go is cross-platform, and Mage is a pretty nice experience compared to messing with bash files. Example for Docker: https://gist.github.com/samlown/8214a82f4d301ef1ba7652306e4c4594
Do you have a link to a barebones template that starts with barely any commands and variant files so I can look at the simplest option to start with? The one I tried last year was pretty intensive so I got lost a bit in my first run through.
We are rewriting it in native go. Should be ready in the next few weeks.
Ok this sounds awesome. Is it public yet? I’m working full time with Go now and been using Mage and Goyek.
I have been considering what makes sense to do as a Docker shell with all tools vs just a Go-based execution, since that’s cross-platform. Thinking Docker interactive will be a more stable result since all tooling can be included.
FYI mage offers a similar experience to build-harness in that you can import remote tasks and have them available. It’s been interesting! I was able to build a terragrunt stack runner with a multi-writer to a logging directory, so life got a lot better doing this.
I’m still not happy with non-native Terraform right now, however, and wanted to see if Atmos offered a better start. I tried it last time and found the YAML stacks very difficult to debug or work through, so any updated tutorials or links would be helpful.
Common tasks make more sense than per repo for standardizing. I shouldn’t be reinventing project tools each repo! Cookie cutter only helps initialize. A common tool container makes a lot more sense.
@Erik Osterman (Cloud Posse) sorry, forgot to ping you to notify. I’m interested as I’m in the middle of evaluating a centralized docker task shell like geodesic or maybe using variant2 files makes more sense as much as I don’t want to write more HCL :slightly_smiling_face:
I am curious if the command calls are wrapped up in the sh package to work cross-platform?
Whatever solution I use, I would like to know that core calls like go build and others all can work regardless of bash, zsh, powershell etc.
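For what it’s worth, mage’s sh helpers sit on top of Go’s os/exec, which invokes the binary directly without going through bash/zsh/PowerShell, so the same call behaves consistently across platforms. A stdlib-only sketch of that idea (the run helper here is illustrative, not mage’s actual API):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run invokes a binary directly via os/exec. No shell (bash, zsh,
// PowerShell) is involved, so the call works anywhere the binary
// exists on PATH.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// e.g. the equivalent of `go build` / `go version`, shell-free:
	out, err := run("go", "version")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(out)
}
```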
I’ll give you a demo/walkthrough of where we are at now
I’ll pull in @Andriy Knysh (Cloud Posse) too. Maybe you can show him some of the stuff you’re doing with mage.
Super stoked, that sounds awesome! I feel like I’ve dealt with this more than most of my coworkers simply because I’ve bootstrapped so many projects and prefer to ensure there’s a common task runner to provide a great experience for linting, security checks, and more.
Of all the solutions I’ve seen out there I feel like the build harness is the closest I’ve seen to this being realized. Atmos is super promising though I’ll admit I’m a little reluctant to dive into HCL though I know the general benefit it might offer.
I’ll definitely schedule something.
One thing I just noticed was that variant2’s documentation technically says it doesn’t support Windows! While I’m not on Windows, any task runner I prefer should be able to run without a full WSL setup on Windows. That means Docker or native Go. Curious if anyone has tried Atmos on Windows without issue. I think there were some special system calls in variant2 that broke Windows compatibility.
Oh and as a quick check… Is the build-harness project still relevant? I see it used, but are you looking to move away from it to an Atmos-based CLI?
I really want to try out the new stacks and atmos features too, but couldn’t find a demo like a “terraform vpc root module starter”. I’ll have to convert my terragrunt stack to try this, so I’d welcome anyone linking a working example I could try for a VPC or other resource.
Thank you again
atmos solves different problems. Also, we’re too entrenched to practically move away from the build-harness in all of our ~500+ repos.
atmos is a tool more specifically for cloud automation, while the build-harness is more about how to “build” stuff, or what we use to manage our repos.
i could see build-harness one day getting replaced by another cli more like mage.
Cool! That helps.
Mage is Go, and also supports remote task discovery like atmos, so I’ve been thinking through what to use. Sounds like Mage would replace the build-harness style make commands. However, atmos allows building out of the cli calls into a centralized way.
It’s interesting how these blur into a similar area. @Erik Osterman (Cloud Posse) do you have a working example of a basic stack calling one of your modules with atmos so I could see a barebones example to start? The current examples repo is pretty intensive. Maybe I can strip that one down and just have the terraform commands in there. Going to go give it a shot.
Here’s an example of a task for which I originally used Go-Task (a cross-platform Make alternative): https://github.com/sheldonhull/sheldonhull.hugo/blob/12d30263cb261aa02c1189bbc987aad9788f56c4/magefile.go#L93-L99
A less “magic”-oriented task runner in Go is goyek, currently in beta. It’s sorta like writing Go tests, so more “verbose”, but with way less hidden work behind the scenes.
That looks like this:
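The snippet itself didn’t survive the archive, but the flavor is roughly this: a dependency-free sketch of the pattern (plain Go functions registered explicitly, no reflection or magic). All of the type and function names below are illustrative, not goyek’s real API:

```go
package main

import (
	"fmt"
	"os"
)

// A rough sketch of the goyek style: tasks are plain Go functions
// registered explicitly, rather than discovered dynamically like mage does.
type task struct {
	name   string
	action func() error
}

var tasks = map[string]task{}

func register(t task) { tasks[t.name] = t }

func runTask(name string) error {
	t, ok := tasks[name]
	if !ok {
		return fmt.Errorf("unknown task: %s", name)
	}
	return t.action()
}

func main() {
	// Explicit registration, analogous to registering goyek tasks in main.
	register(task{name: "build", action: func() error {
		fmt.Println("building...")
		return nil
	}})
	if len(os.Args) > 1 {
		if err := runTask(os.Args[1]); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}
```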
I’m chatting with the author, who’s really responsive in their discussions, and have been asking for a way to import remote tasks. Right now it supports this but doesn’t “auto register” the new tasks; you still have to register them in func main(), because it doesn’t do the kind of dynamic discovery that mage does.
https://github.com/sheldonhull/goyek-tasks - Goyek pre-built tasks for CI/CD work.
I have docker-compose to manage many solutions like gitlab, vault, jenkins, nexus, awx, selenium, nifi, spark, sonarqube, custom apps, pgadmin, portainer, minio, and I need a solid reverse proxy to replace apache.
What do you think about Caddy for this?
I really like Caddy as a reverse proxy
Caddy is a powerful, enterprise-ready, open source web server with automatic HTTPS written in Go
varnish is unbeatable in my opinion if performance is a must
Fastly still uses varnish AFAIK
If I found nginx a pain to do reverse proxying with, and want to simplify Let’s Encrypt, would you consider Caddy a reasonable solution for a production reverse proxy with less effort?
We are a Go shop, so it’s my natural preference to use something more self-contained and easy to configure. I’d be willing to give up minor, benchmark-only performance differences if it were easier to use with docker-compose than the nightmare I had setting up nginx dynamically.
Certs are out of the equation, since they’re already provided by the customer and managed by the sec team; I want a nice tool that understands Docker very well.
consul-template should dynamically update nginx if you want to look deeper
not using consul. Just have some ecs tasks/compose and want to set up the reverse proxy info dynamically. It was a pain with nginx, with no automatic cert handling either, so I was hoping caddy does a great job at a production level for this.
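For context, the Caddyfile for this kind of setup tends to be short. A sketch, where every hostname, upstream service name, and port is a placeholder; with docker-compose, the upstream names resolve via the compose network’s DNS, and the tls directive shows using externally provided certs instead of Caddy’s automatic HTTPS:

```
# Hypothetical Caddyfile: hostnames, service names, and ports are placeholders.
jenkins.example.com {
    reverse_proxy jenkins:8080
}

sonarqube.example.com {
    reverse_proxy sonarqube:9000
}

# If certs come from the sec team rather than Caddy's automatic HTTPS:
vault.example.com {
    tls /certs/vault.crt /certs/vault.key
    reverse_proxy vault:8200
}
```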
As an in-between step before actual container orchestration, I am starting to use pure docker-compose commands with DOCKER_HOST=ssh://. While doing this, I wanted to use docker-compose build <service> in my pipeline and noticed that the build actually needs runtime variables in order to build.
This sounds strange to me. Does anybody know why this is needed? I tried to search the GitHub issues but could not find anything related to this.
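One plausible explanation (an assumption about this setup, not a confirmed diagnosis): docker-compose interpolates ${VAR} references across the whole compose file at parse time, including values only needed at runtime, so even docker-compose build will warn about, or blank out, unset “runtime” variables. A sketch with placeholder names:

```yaml
# docker-compose.yml sketch: service and variable names are placeholders.
services:
  app:
    build:
      context: .
      args:
        # build-time only: passed to the Dockerfile via ARG
        GIT_SHA: ${GIT_SHA}
    environment:
      # runtime only, but still interpolated whenever the file is parsed,
      # which is why `docker-compose build app` appears to need it set
      API_KEY: ${API_KEY}
```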