#docker (2019-04)

docker

All things docker. Archive: https://archive.sweetops.com/docker/

2019-04-26

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Docker Hub Hacked. 190K accounts affected (~5%), GitHub tokens may be exposed.

2019-04-15

oscarsullivan_old avatar
oscarsullivan_old

Damn, I’m still super stuck on volumes and changes in a container not being reflected on the host.

oscarsullivan_old avatar
oscarsullivan_old

I have this volume mount taking place: ./config/schema.json:/app/config/schema.json. When /app/config/schema.json is updated in the container, the change is not reflected on my local… I want to ‘get’ the file from the container. I’ve tried with ./config/schema.json both existing and not existing on my local, as well as mounting the directory above it instead of the file, since schema.json is generated during the build of the container.

oscarsullivan_old avatar
oscarsullivan_old

oh, have I just answered it… volume mounting happens when the container is run, not when the image is built.
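One way to pull a build-generated file out of the image without a bind mount is docker cp against a created (but never started) container. A sketch, assuming an image tag of portal and the path from the thread; the temp container name is made up:

```shell
# Build the image, then copy the generated file out of a stopped container.
docker build -t portal .
docker create --name tmp-portal portal          # create without running
docker cp tmp-portal:/app/config/schema.json ./config/schema.json
docker rm tmp-portal                            # clean up the temp container
```

This sidesteps the run-time/build-time mismatch entirely: the file is read straight from the image's filesystem.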

2019-04-09

oscarsullivan_old avatar
oscarsullivan_old

I’m volume mounting package-lock.json (a file) but it is mounting as a directory… Any ideas why it’s a dir and not a file?

    volumes:
      - ./:/app
      - /app/node_modules
      - /app/package-lock.json
oscarsullivan_old avatar
oscarsullivan_old

I can only think it is because a volume is meant to be a dir…

oscarsullivan_old avatar
oscarsullivan_old

I’m noticing that my yarn.lock file in /app (mounted above) doesn’t appear on my local though

oscarsullivan_old avatar
oscarsullivan_old

It’s only files created on my local that appear in /app

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha yes the file must first exist locally
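For context: when the host side of a bind mount doesn’t exist, Docker creates it as a directory, which is why the lock file showed up as a dir. A sketch of the workaround, using the file names from the thread:

```shell
# If ./package-lock.json doesn't exist on the host, Docker will create
# a *directory* at that path for the bind mount. Create the files first:
touch package-lock.json yarn.lock
docker-compose up --build
```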

oscarsullivan_old avatar
oscarsullivan_old

In a dockerised node app (or any language, really) should the package manager lock file be source controlled when it is already inside the container? Really struggling to mount it and have that file update

Nikola Velkovski avatar
Nikola Velkovski

When you need to install gems, packages, or pip modules (or however they are called), ideally one would use a multi-stage Docker build, in which you install the needed modules in the initial stages and then copy them over to the last stage, which is a Docker image without the lock file, the build tools for them, etc. (among many other things).

Nikola Velkovski avatar
Nikola Velkovski
Use multi-stage builds

Multi-stage builds are a new feature requiring Docker 17.05 or higher on the daemon and client. Multi-stage builds are useful to anyone who has struggled to optimize Dockerfiles while keeping…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is how i did it:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
FROM node:11.2.0-alpine as builder
WORKDIR /usr/src/app
COPY package.json ./
COPY package-lock.json ./
RUN npm install --only=production
COPY server/ ./server/
COPY static/ ./static/
COPY views/ ./views/
COPY app.js ./

FROM node:11.2.0-alpine
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app/ ./
EXPOSE 3000
CMD ["node", "app.js"]
oscarsullivan_old avatar
oscarsullivan_old

thanks @Andriy Knysh (Cloud Posse) I’ll give that dockerfile a go

oscarsullivan_old avatar
oscarsullivan_old

here’s my local dev one:

oscarsullivan_old avatar
oscarsullivan_old

# Stage 1 - build & run local environment
FROM node:8.15-alpine AS react-build

WORKDIR /app

ARG PORT
EXPOSE ${PORT}
ARG APP_ENV

COPY package.json package.json
RUN yarn

RUN ls -la

CMD ["yarn", "start"]
oscarsullivan_old avatar
oscarsullivan_old
version: '3'
services:
  portal:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - PORT=3070
        - APP_ENV=.env-local
    ports:
      - "3070:3070"
    volumes:
      - ./:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
loren avatar
loren

i don’t use docker a whole lot yet, so not sure if this helps, but saw this article recently and it sounded to me like it might be dealing with something similar…. https://medium.com/build-acl/docker-deployments-using-terraform-d2bf36ec7bdf

Docker Deployments using Terraform attachment image

How to use Terraform to not only set up your orchestration platform, but also to deploy code updates as part of Continuous Delivery.

loren avatar
loren

ignore me if off base

loren avatar
loren
Using Docker Containers As Development Machines attachment image

How we did it and the lessons we learnt

Maciek Strömich avatar
Maciek Strömich
Another tradeoff is that now every command you run on the traditional non-docker environment will need to be run inside the container by SSH-ing into it. 

yeah… ssh-ing into containers

loren avatar
loren

that seemed like a terrible idea

Joe Presley avatar
Joe Presley

I’ve worked in a similar setup for a project. Ssh-ing into the containers isn’t bad. Whether it’s docker or vagrant, I’d rather use a standard environment for a project than have each developer on a bespoke development environment.

Joe Presley avatar
Joe Presley

And you don’t really ssh into it. You just docker-compose exec <service> <command>

Joe Presley avatar
Joe Presley

Using bash as the <command> gives you a dedicated terminal to run commands within the container.
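The pattern Joe describes, sketched against the compose file above (the service name portal is taken from it; the yarn command is just an example):

```shell
# Run a one-off command inside the running service container:
docker-compose exec portal yarn test

# Or get a dedicated shell inside it:
docker-compose exec portal sh   # alpine images ship sh, not bash
```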

oscarsullivan_old avatar
oscarsullivan_old

#geodesic would like a word with you

Joe Presley avatar
Joe Presley

I could never wrap my mind around using it.

2019-04-03

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Turn Your Code into Docker Images with Cloud Native Buildpacks

With Cloud Native Buildpacks, we’ve taken the same philosophies that made buildpacks successful and applied them towards creating Docker images.

2019-04-01

oscarsullivan_old avatar
oscarsullivan_old

Has anyone noticed dockerised applications behaving differently when using localhost vs an IP?

I have an app that works on localhost:80, but when using an IP it behaves differently. Really frustrating, because obviously when on a server the users will be using the IP instead of localhost.

oscarsullivan_old avatar
oscarsullivan_old

Tried with both docker-compose and docker run

oscarsullivan_old avatar
oscarsullivan_old

it was because of HTTPS’s less strict behaviour on localhost (browsers treat localhost as a secure context, but not a plain IP)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aha! makes sense

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Why your Docker multi-stage build is surprisingly slow attachment image

If you want your Docker images to be small and you still want fast builds, multi-stage images are the way to go. And yet, you might find that multi-stage builds are actually quite slow in practice, in particular when running in your build pipeline. If that’s happening, a bit of digging is likely to show that even though you copied your standard build script, somehow the first stage of the Dockerfile gets rebuilt every single time. Unless you’re very careful, Docker’s build cache often won’t work for multi-stage builds—and that means your build is slow. What’s going on? In this article you will learn: Why multi-stage builds don’t work with the standard build process you’re used to for single-stage images. How to solve the problem and get fast builds. A note: outside the specific topic under discussion, the Dockerfiles and build scripts in this article are not examples of best practices, since that would involve many extraneous details that would make the main point harder to understand.
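The fix the article describes boils down to building and pushing each stage separately, so the first stage’s layers survive as cache between CI runs. A hedged sketch; the image names are placeholders, and the stage name builder matches the Dockerfile earlier in the thread:

```shell
# Pull previous images to seed the build cache (tolerate a first-run miss).
docker pull myrepo/app:builder || true
docker pull myrepo/app:latest  || true

# Build the first stage explicitly and tag it, so its layers get cached.
docker build --target builder \
  --cache-from myrepo/app:builder \
  -t myrepo/app:builder .

# Build the final image, seeding cache from both tags.
docker build \
  --cache-from myrepo/app:builder \
  --cache-from myrepo/app:latest \
  -t myrepo/app:latest .

# Push both so the next CI run can reuse them.
docker push myrepo/app:builder
docker push myrepo/app:latest
```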

mmuehlberger avatar
mmuehlberger

Thanks for sharing, @Erik Osterman (Cloud Posse)! I started using multi-stage images quite recently (we’re just getting started on using containers at work) and this is something I encountered yesterday with our backend.

