#docker (2019-04)
All things docker
Archive: https://archive.sweetops.com/docker/
2019-04-01
Has anyone noticed dockerised applications behaving differently when using localhost vs an IP?
I have an app that works on localhost:80
but when using an IP it behaves differently. Really frustrating, because obviously when on a server the users will be using the ‘ip’ instead of ‘localhost’
Tried with both docker-compose and docker run
it was because HTTPS requirements are less strict on localhost (browsers treat localhost as a secure context), so the app behaved differently when accessed by IP
aha! makes sense
If you want your Docker images to be small and you still want fast builds, multi-stage images are the way to go. And yet, you might find that multi-stage builds are actually quite slow in practice, in particular when running in your build pipeline. If that’s happening, a bit of digging is likely to show that even though you copied your standard build script, somehow the first stage of the Dockerfile gets rebuilt every single time. Unless you’re very careful, Docker’s build cache often won’t work for multi-stage builds—and that means your build is slow. What’s going on? In this article you will learn: Why multi-stage builds don’t work with the standard build process you’re used to for single-stage images. How to solve the problem and get fast builds. A note: outside the specific topic under discussion, the Dockerfiles and build scripts in this article are not examples of best practices, since that would involve many extraneous details that would make the main point harder to understand.
Thanks for sharing, @Erik Osterman (Cloud Posse)! I started using multi-stage images quite recently (we’re just getting started on using containers at work) and this is something I encountered yesterday with our backend.
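For anyone who hits the same thing, here is a rough sketch of the usual workaround: build and push each stage under its own tag, then pass both tags via --cache-from so the first stage's layers can be reused in CI. This is not lifted verbatim from the article; the image names and the stage name builder are just examples.
# pull previous images so their layers are available as cache (ignore failures on first run)
docker pull example/app:builder || true
docker pull example/app:latest  || true
# build and tag the first stage explicitly so its cache survives between pipeline runs
docker build --target builder --cache-from example/app:builder -t example/app:builder .
# build the final stage, reusing cache from both the builder stage and the last release
docker build --cache-from example/app:builder --cache-from example/app:latest -t example/app:latest .
# push both tags so the next pipeline run can pull them as cache sources
docker push example/app:builder
docker push example/app:latest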
2019-04-03
With Cloud Native Buildpacks, we’ve taken the same philosophies that made buildpacks successful and applied them towards creating Docker images.
2019-04-09
I’m volume mounting package-lock.json (a file) but it is mounting as a directory… Any ideas why it’s a dir and not a file?
volumes:
  - ./:/app
  - /app/node_modules
  - /app/package-lock.json
I can only think it’s because a volume is meant to be a dir…
I’m noticing that my yarn.lock file in /app (mounted above) doesn’t appear on my local though
It’s only files created on my local that appear in /app
Aha yes the file must first exist locally
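A minimal sketch of that workaround, assuming the compose file lives next to the lock file: create the file on the host before starting, since Docker creates a directory at the mount path if the host file doesn’t exist.
# file must exist on the host first, otherwise Docker bind-mounts a directory in its place
touch package-lock.json
docker-compose up --build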
In a dockerised node app (or any language, really), should the package manager lock file be source controlled when it is already inside the container? I’m really struggling to mount it and have that file update.
When you need to install gems, packages, or pip modules (or whatever they are called), ideally you would use a multi-stage Docker build: in the initial stages you install the needed modules, then copy them over to the last stage, which is an image without the lock file, the build tools, etc. (among many other things).
Multi-stage builds are a new feature requiring Docker 17.05 or higher on the daemon and client. Multistage builds are useful to anyone who has struggled to optimize Dockerfiles while keeping…
this is how i did it:
# Stage 1: install production dependencies and copy in the app source
FROM node:11.2.0-alpine as builder
WORKDIR /usr/src/app
COPY package.json ./
COPY package-lock.json ./
RUN npm install --only=production
COPY server/ ./server/
COPY static/ ./static/
COPY views/ ./views/
COPY app.js ./

# Stage 2: start from a clean base image and copy only the installed app,
# leaving the lock file and build tooling behind
FROM node:11.2.0-alpine
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app/ ./
EXPOSE 3000
CMD ["node", "app.js"]
thanks @Andriy Knysh (Cloud Posse) I’ll give that dockerfile a go
here’s my local dev one:
# Stage 1 - build & run local environment
FROM node:8.15-alpine AS react-build
WORKDIR /app
ARG PORT
EXPOSE ${PORT}
ARG APP_ENV
# install dependencies from package.json (no lock file copied in here)
COPY package.json package.json
RUN yarn
# debug: list what ended up in the image
RUN ls -la
# the app source itself is bind-mounted in via docker-compose at run time
CMD ["yarn", "start"]
version: '3'
services:
  portal:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - PORT=3070
        - APP_ENV=.env-local
    ports:
      - "3070:3070"
    volumes:
      - ./:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
i don’t use docker a whole lot yet, so not sure if this helps, but saw this article recently and it sounded to me like it might be dealing with something similar…. https://medium.com/build-acl/docker-deployments-using-terraform-d2bf36ec7bdf
How to use Terraform to not only set up your orchestration platform, but also to deploy code updates as part of Continuous Delivery.
ignore me if off base
oops, that was the wrong link, here’s the link i meant, https://medium.com/rate-engineering/using-docker-containers-as-development-machines-4de8199fc662
How we did it and the lessons we learnt
Another tradeoff is that now every command you run on the traditional non-docker environment will need to be run inside the container by SSH-ing into it.
yeah… ssh-ing into containers
that seemed like a terrible idea
I’ve worked in a similar setup for a project. Ssh-ing into the containers isn’t bad. Whether it’s docker or vagrant, I’d rather use a standard environment for a project than have each developer on a bespoke development environment.
And you don’t really ssh into it. You just run docker-compose exec <container> <command>
Using bash for the <command> is for when you want a dedicated terminal to run commands within the container.
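A quick sketch of that workflow, assuming a compose service named app (the service name and command are just examples):
# run a one-off command inside the running container
docker-compose exec app yarn test
# or open an interactive shell for a dedicated terminal (sh instead of bash on alpine images)
docker-compose exec app bash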
I could never wrap my mind around using it.
2019-04-15
Damn, I’m still super stuck on volumes and changes in a container not being reflected on the host.
I have this volume mount taking place ./config/schema.json:/app/config/schema.json
When /app/config/schema.json
is updated in the container, it is not on my local… I want to ‘get’ the file from the container.
I’ve tried with ./config/schema.json
both existing and not existing on my local, as well as mounting the directory above it instead of the file, since schema.json is generated during the build of the container
oh, have I just answered it… volume mounting applies when the container is run, not when the image is built.
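Since bind mounts only apply at run time, one way to get a file generated during the image build back onto the host is to copy it out of a container created from that image. A rough sketch, where myimage is just a placeholder for the built image and the path is the one from the thread:
# create (but don't start) a container from the built image, copy the file out, then clean up
cid=$(docker create myimage)
docker cp "$cid":/app/config/schema.json ./config/schema.json
docker rm "$cid"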
2019-04-26
Docker Hub hacked: 190K accounts affected (~5%); GitHub tokens may be exposed.