#docker (2019-04)
All things docker
Archive: https://archive.sweetops.com/docker/
2019-04-01
![oscarsullivan_old avatar](https://avatars.slack-edge.com/2019-02-27/563892542694_c14d0b37236a4a398ef8_72.png)
Has anyone noticed dockerised applications behaving differently when using localhost vs an IP?
I have an app that works on localhost:80
but when using an IP it behaves differently. Really frustrating, because obviously on a server the users will be hitting the IP instead of ‘localhost’.
![oscarsullivan_old avatar](https://avatars.slack-edge.com/2019-02-27/563892542694_c14d0b37236a4a398ef8_72.png)
Tried with both docker-compose and docker run
![oscarsullivan_old avatar](https://avatars.slack-edge.com/2019-02-27/563892542694_c14d0b37236a4a398ef8_72.png)
it was because of HTTPS’s less strict behaviour on localhost (browsers treat localhost as a secure context)
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
aha! makes sense
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
![attachment image](https://pythonspeed.com/assets/titles/faster-multi-stage-builds.png)
If you want your Docker images to be small and you still want fast builds, multi-stage images are the way to go. And yet, you might find that multi-stage builds are actually quite slow in practice, in particular when running in your build pipeline. If that’s happening, a bit of digging is likely to show that even though you copied your standard build script, somehow the first stage of the Dockerfile gets rebuilt every single time. Unless you’re very careful, Docker’s build cache often won’t work for multi-stage builds—and that means your build is slow. What’s going on? In this article you will learn: Why multi-stage builds don’t work with the standard build process you’re used to for single-stage images. How to solve the problem and get fast builds. A note: outside the specific topic under discussion, the Dockerfiles and build scripts in this article are not examples of best practices, since that would involve many extraneous details that would make the main point harder to understand.
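The workaround the article describes can be sketched as a build-pipeline fragment (the image names, registry, and CI layout here are hypothetical): tag and pull the intermediate stage separately, then point `--cache-from` at it so the first stage isn’t rebuilt on every run.

```yaml
# Hypothetical CI job: cache each stage of a multi-stage build explicitly.
build:
  script:
    # Pull previous images so their layers are available as cache (ignore misses).
    - docker pull myapp/builder:latest || true
    - docker pull myapp/app:latest || true
    # Build and tag the first stage on its own via --target.
    - docker build --target builder --cache-from myapp/builder:latest -t myapp/builder:latest .
    # Build the final image, reusing both caches.
    - docker build --cache-from myapp/builder:latest --cache-from myapp/app:latest -t myapp/app:latest .
    # Push both so the next pipeline run can pull them as cache.
    - docker push myapp/builder:latest
    - docker push myapp/app:latest
```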
![mmuehlberger avatar](https://secure.gravatar.com/avatar/752c7a387bef6cb7254e3ff34b276d10.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0022-72.png)
Thanks for sharing, @Erik Osterman (Cloud Posse)! I started using multi-stage images quite recently (we’re just getting started on using containers at work) and this is something I encountered yesterday with our backend.
2019-04-03
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
With Cloud Native Buildpacks, we’ve taken the same philosophies that made buildpacks successful and applied them towards creating Docker images.
2019-04-09
![oscarsullivan_old avatar](https://avatars.slack-edge.com/2019-02-27/563892542694_c14d0b37236a4a398ef8_72.png)
I’m volume mounting package-lock.json (a file), but it is mounting as a directory… Any ideas why it’s a dir and not a file?
volumes:
- ./:/app
- /app/node_modules
- /app/package-lock.json
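For what it’s worth, a likely explanation: a bare `- /app/package-lock.json` entry is an anonymous volume, and volumes are always directories. To mount a single file you’d use an explicit host:container bind mount instead, making sure the host file exists first — a sketch, assuming the service layout above:

```yaml
volumes:
  - ./:/app                                       # bind-mount the whole project
  - /app/node_modules                             # anonymous volume: always a directory
  - ./package-lock.json:/app/package-lock.json    # file bind mount; host file must already exist
```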
![oscarsullivan_old avatar](https://avatars.slack-edge.com/2019-02-27/563892542694_c14d0b37236a4a398ef8_72.png)
I can only think it’s because a volume is meant to be a dir…
![oscarsullivan_old avatar](https://avatars.slack-edge.com/2019-02-27/563892542694_c14d0b37236a4a398ef8_72.png)
I’m noticing that my yarn.lock file in /app (mounted above) doesn’t appear on my local though
![oscarsullivan_old avatar](https://avatars.slack-edge.com/2019-02-27/563892542694_c14d0b37236a4a398ef8_72.png)
It’s only files created on my local that appear in /app
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Aha yes the file must first exist locally
![oscarsullivan_old avatar](https://avatars.slack-edge.com/2019-02-27/563892542694_c14d0b37236a4a398ef8_72.png)
In a dockerised node app (or any language, really), should the package-manager lock file be source-controlled when it already exists inside the container? Really struggling to mount it and have that file update.
![Nikola Velkovski avatar](https://avatars.slack-edge.com/2018-11-08/474538495603_cc9e62a39b3dbc9d8d65_72.png)
When you need to install gems, pip modules, or other packages (however they’re called), ideally you’d use a multi-stage Docker build: in the initial stages you install the needed modules, and then you copy them over to the last stage, which is a Docker image without the lock file, the build tools, etc. (among many other things).
![Nikola Velkovski avatar](https://avatars.slack-edge.com/2018-11-08/474538495603_cc9e62a39b3dbc9d8d65_72.png)
Multi-stage builds are a new feature requiring Docker 17.05 or higher on the daemon and client. Multistage builds are useful to anyone who has struggled to optimize Dockerfiles while keeping…
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
this is how i did it:
![Andriy Knysh (Cloud Posse) avatar](https://avatars.slack-edge.com/2018-06-13/382332470551_54ed1a5d986e2068fd9c_72.jpg)
FROM node:11.2.0-alpine as builder
WORKDIR /usr/src/app
COPY package.json ./
COPY package-lock.json ./
RUN npm install --only=production
COPY server/ ./server/
COPY static/ ./static/
COPY views/ ./views/
COPY app.js ./
FROM node:11.2.0-alpine
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app/ ./
EXPOSE 3000
CMD ["node", "app.js"]
![oscarsullivan_old avatar](https://avatars.slack-edge.com/2019-02-27/563892542694_c14d0b37236a4a398ef8_72.png)
thanks @Andriy Knysh (Cloud Posse) I’ll give that dockerfile a go
![oscarsullivan_old avatar](https://avatars.slack-edge.com/2019-02-27/563892542694_c14d0b37236a4a398ef8_72.png)
here’s my local dev one:
![oscarsullivan_old avatar](https://avatars.slack-edge.com/2019-02-27/563892542694_c14d0b37236a4a398ef8_72.png)
# Stage 1 - build & run local environment
FROM node:8.15-alpine AS react-build
WORKDIR /app
ARG PORT
EXPOSE ${PORT}
ARG APP_ENV
COPY package.json package.json
RUN yarn
RUN ls -la
CMD ["yarn", "start"]
![oscarsullivan_old avatar](https://avatars.slack-edge.com/2019-02-27/563892542694_c14d0b37236a4a398ef8_72.png)
version: '3'
services:
portal:
build:
context: .
dockerfile: Dockerfile
args:
- PORT=3070
- APP_ENV=.env-local
ports:
- "3070:3070"
volumes:
- ./:/app
- /app/node_modules
environment:
- NODE_ENV=development
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
i don’t use docker a whole lot yet, so not sure if this helps, but saw this article recently and it sounded to me like it might be dealing with something similar…. https://medium.com/build-acl/docker-deployments-using-terraform-d2bf36ec7bdf
![attachment image](https://cdn-images-1.medium.com/max/1200/1*mBcgSEjP4kZwpZZUH21Gdg.png)
How to use Terraform to not only set up your orchestration platform, but also to deploy code updates as part of Continuous Delivery.
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
ignore me if off base
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
oops, that was the wrong link, here’s the link i meant, https://medium.com/rate-engineering/using-docker-containers-as-development-machines-4de8199fc662
![Maciek Strömich avatar](https://secure.gravatar.com/avatar/98de12365b633b063e208220100d4594.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0002-72.png)
Another tradeoff is that now every command you run on the traditional non-docker environment will need to be run inside the container by SSH-ing into it.
yeah… ssh-ing into containers
![loren avatar](https://secure.gravatar.com/avatar/d1e25dcfbc68a0857a04dd78c9afe952.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0003-72.png)
that seemed like a terrible idea
![Joe Presley avatar](https://avatars.slack-edge.com/2021-04-22/1999001350244_6ed74ac664e8eee4204c_72.jpg)
I’ve worked in a similar setup for a project. Ssh-ing into the containers isn’t bad. Whether it’s docker or vagrant, I’d rather use a standard environment for a project than each developer has a bespoke development environment.
![Joe Presley avatar](https://avatars.slack-edge.com/2021-04-22/1999001350244_6ed74ac664e8eee4204c_72.jpg)
And you don’t really ssh into it. You just docker-compose exec <container> <command>
![Joe Presley avatar](https://avatars.slack-edge.com/2021-04-22/1999001350244_6ed74ac664e8eee4204c_72.jpg)
Using bash for the <command> is if you want a dedicated terminal to run commands within the container.
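The pattern Joe describes, sketched against the `portal` service from the compose file above (the service name and the `yarn test` command are assumptions for illustration):

```shell
# Run a one-off command inside the running service container
docker-compose exec portal yarn test

# Or start a dedicated shell in it for interactive work
docker-compose exec portal sh
```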
![Joe Presley avatar](https://avatars.slack-edge.com/2021-04-22/1999001350244_6ed74ac664e8eee4204c_72.jpg)
I could never wrap my mind around using it.
2019-04-15
![oscarsullivan_old avatar](https://avatars.slack-edge.com/2019-02-27/563892542694_c14d0b37236a4a398ef8_72.png)
Damn, I’m still super stuck on volumes and changes in a container not being reflected on the host.
![oscarsullivan_old avatar](https://avatars.slack-edge.com/2019-02-27/563892542694_c14d0b37236a4a398ef8_72.png)
I have this volume mount taking place: ./config/schema.json:/app/config/schema.json
When /app/config/schema.json is updated in the container, it is not updated on my local… I want to ‘get’ the file from the container.
I’ve tried with ./config/schema.json both existing and not existing on my local, as well as mounting the directory above it instead of the file, since schema.json is generated during the build of the container.
![oscarsullivan_old avatar](https://avatars.slack-edge.com/2019-02-27/563892542694_c14d0b37236a4a398ef8_72.png)
oh, have I just answered it… volume mounting applies when the container is run, not when it is being built.
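Exactly — bind mounts only exist at run time, so a file created during `docker build` lives in the image and has to be copied out, e.g. with `docker cp`. A sketch (the image and container names here are hypothetical):

```shell
# Create a stopped container from the built image, copy the file out, clean up.
docker create --name schema-tmp portal-image
docker cp schema-tmp:/app/config/schema.json ./config/schema.json
docker rm schema-tmp
```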
2019-04-26
![Erik Osterman (Cloud Posse) avatar](https://secure.gravatar.com/avatar/88c480d4f73b813904e00a5695a454cb.jpg?s=72&d=https%3A%2F%2Fa.slack-edge.com%2Fdf10d%2Fimg%2Favatars%2Fava_0023-72.png)
Docker Hub Hacked. 190K accounts affected (~5%), GitHub tokens may be exposed.