#docker (2019-09)
All things docker
Archive: https://archive.sweetops.com/docker/
2019-09-06
We’re just starting with Docker. Currently, we are planning to use CircleCI to build core images for our solution and distribute a customized private image+data container combo (which is based on the core) to customers, or host it in AWS ECS (leaning Fargate atm). I am currently planning to build the core and store it in ECR, then have the customer-specific build pull this image down, use it as a base, and produce either an image in ECR for ECS or an artifact in S3 to distribute to the customer. Looking for validation that this is a good approach. Is ECR a good option, or is there a better alternative (I am concerned about having copies of images across all AWS accounts/regions)? Is saving the image and packaging it in S3 for customer delivery the right approach? Appreciate the feedback!
There’s nothing wrong with that. We use 100% ECR for private images. I would recommend not having a lot of duplicate ECR repos. I put them all in an AWS account for shared resources, then have all other accounts access them from there. As for across regions (I don’t have this need yet), it is mostly a performance question. If you need to pull 5 images an hour, you’re not going to worry much about latency. On the other hand, if you need to pull 100 in 5 minutes, you will want to replicate ECR across regions so it is always close to the running containers.
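For reference, the cross-account access described above is typically granted with a repository policy on the shared account’s ECR repo. A rough sketch — the account IDs and the `core` repo name here are placeholders, not from the thread:
```
# Hypothetical: in the shared-resources account (111111111111), allow a
# consuming account (222222222222) to pull from the "core" repository.
aws ecr set-repository-policy --repository-name core --policy-text '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowCrossAccountPull",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
    "Action": [
      "ecr:GetDownloadUrlForLayer",
      "ecr:BatchGetImage",
      "ecr:BatchCheckLayerAvailability"
    ]
  }]
}'
```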
@Igor If you’re looking for a service that is specialised for distribution, look at Cloudsmith (https://cloudsmith.io) (note: I work there, and happy to help out).
No matter what way you set it up, distributing it via S3 alone is probably not the right approach since that makes it awkward for your customers to use (i.e. external distribution); they’d have to `docker load` the image after pulling it down, rather than using `docker pull` directly. Very do-able, but not great since it pulls the entire image down rather than only the layers that it needs.
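For context, the S3 flow being discussed looks roughly like this — bucket and image names are placeholders:
```
# Publisher side: export the image and upload the tarball to S3
docker save example/core:1.0 | gzip > core-1.0.tar.gz
aws s3 cp core-1.0.tar.gz s3://example-distribution-bucket/

# Customer side: download and load it. The whole archive transfers
# every time; there is no layer-level reuse like `docker pull` has.
aws s3 cp s3://example-distribution-bucket/core-1.0.tar.gz .
gunzip -c core-1.0.tar.gz | docker load
```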
Thank you @Steven @Lee Skillen
2019-09-15
Has anyone already faced something like this:
```
An assembly specified in the application dependencies manifest (x.Api.deps.json) was not found:
  package: 'System.Private.ServiceModel', version: '4.5.3'
  path: 'runtimes/unix/lib/netstandard2.0/System.Private.ServiceModel.dll'
```
My situation:
.NET Core 2.2 application, with this Dockerfile:
```
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS base

# Restoring
WORKDIR /app

## Copy solution
COPY ./*.sln ./

## Copy src projects
COPY src/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done

## Copy tests projects
COPY tests/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p tests/${file%.*}/ && mv $file tests/${file%.*}/; done

## Restore
RUN dotnet restore

# Publishing
WORKDIR /app
COPY src/. ./src/
RUN dotnet publish -c Release --no-restore -o /app/out

# Testing
FROM base AS tester
WORKDIR /app
COPY tests/. ./tests/
RUN dotnet test --logger:trx --no-restore

# Running
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS runtime

EXPOSE 5001
EXPOSE 5002
ENV ASPNETCORE_ENVIRONMENT=Unset
ENV ConnectionStrings__DefaultConnection=Unset
ENV Sentry__Dsn=Unset
ENV ELK__Elasticsearch__Dsn=Unset
ENV TokenAuthSettings__Issuer=Unset
ENV TokenAuthSettings__Key=Unset
ENV TokenAuthSettings__Audience=Unset

WORKDIR /app
COPY --from=base /app/out/* ./
ENTRYPOINT ["dotnet", "x.Api.dll"]
```
this might help https://github.com/dotnet/wcf/issues/2824
apparently they fixed it in .NET Core 3.0
Hm, I upgraded but am still getting the same error
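One thing that may be worth checking here — an observation about the Dockerfile above, not something confirmed in the thread: `COPY --from=base /app/out/* ./` copies the *contents* of each matched subdirectory, flattening the tree, so the `runtimes/unix/...` path the error complains about would not survive the copy. Copying the output directory itself preserves the layout:
```
# Copies the publish output as a tree, keeping runtimes/unix/... intact,
# instead of flattening each matched entry into /app
WORKDIR /app
COPY --from=base /app/out ./
```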
2019-09-17
I am testing the docker swarm configuration for the first time with an nginx+nodejs+redis combo
And a single t2.medium server without docker is showing significantly better results in performance testing than 2 t2.medium nodes running in swarm
Any idea on why docker isn’t performing up to par?
2019-09-27
Has anybody run into a problem with `exec user process caused "permission denied"` when running a docker container? The image doesn’t work on a hardened RHEL host specifically. Also, an nginx host based on the same alpine image works fine, but not node/redis.
2019-09-30
re: above - turned out that switching user in the container is causing the issue (i.e. a `node:10-alpine` image is working fine, but adding `USER node` to it causes the error). Hoping someone may have an idea on what may be causing this
Hi Igor, well it’s kinda obvious that the issue is with the user not having enough permissions to start the process.
I would suggest you start the process yourself, e.g.
`docker run -it node:10-alpine sh`
That would start a shell with the user specified in the `USER` directive.
if you need root you can do the following:
`docker run -u 0 -it node:10-alpine sh`
I guess from there you can deduce what kind of rights the user node needs.
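If it does turn out to be file ownership, a common fix is to copy the application files with the right owner before dropping privileges. A sketch only — the `/app` path and `server.js` entrypoint are made-up placeholders:
```
FROM node:10-alpine
WORKDIR /app
# --chown makes the files readable/executable by the unprivileged user
# (requires Docker 17.09+); "server.js" is a placeholder entrypoint
COPY --chown=node:node . .
USER node
CMD ["node", "server.js"]
```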