#docker (2020-12)

docker

All things docker Archive: https://archive.sweetops.com/docker/

2020-12-31

2020-12-30

2020-12-29

Babar Baig avatar
Babar Baig

Hi :wave:. I am working on deploying a Rails application on ECS. I have two separate Dockerfiles, one for the Rails app and one for the Nginx configuration. Below is my docker-compose file:

version: '3'
services:
  app:
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    volumes:
      - assets-volume:/var/www/crovv/public
  web:
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    depends_on:
      - app
    volumes:
      - assets-volume:/var/www/crovv/public
    ports:
      - 80:80
volumes:
  redis: 
  postgres_data:
  assets-volume:

I’ve used the docker-compose file to create the ECS task definition, but when I run it in ECS the app container runs fine and the Nginx container throws the following error: nginx: [emerg] host not found in upstream "app:3000" in /etc/nginx/conf.d/default.conf:5 Here is my default.conf:

# This is a template. Referenced variables (e.g. $RAILS_ROOT) need
# to be rewritten with real values in order for this file to work.

upstream rails_app {
  server app:3000;
}
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
server {
  # define your domain
  server_name localhost;

  # define the public application root
  root   $RAILS_ROOT/public;
  index  index.html;

  # define where Nginx should write its logs
  access_log $RAILS_ROOT/log/nginx.access.log;
  error_log $RAILS_ROOT/log/nginx.error.log;

  client_max_body_size 0;

  # deny requests for files that should never be accessed
  location ~ /\. {
    deny all;
  }

  location ~* ^.+\.(rb|log)$ {
    deny all;
  }

  # serve static (compiled) assets directly if they exist (for rails production)
  location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
    try_files $uri @rails;

    access_log off;
    gzip_static on; # to serve pre-gzipped version

    expires max;
    add_header Cache-Control public;

    # Some browsers still send conditional-GET requests if there's a
    # Last-Modified header or an ETag header even if they haven't
    # reached the expiry date sent in the Expires header.
    add_header Last-Modified "";
    add_header ETag "";
    break;
  }

  # send non-static file requests to the app server
  location / {
    try_files $uri @rails;
  }

  location /cable {
      proxy_pass http://rails_app/cable;
      proxy_http_version 1.1;
      proxy_set_header Upgrade websocket;
      proxy_set_header Connection Upgrade;

      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-Proto https;
      proxy_redirect off;
  }

  location @rails {
      proxy_set_header  X-Real-IP  $remote_addr;
      proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header  Host $http_host;
      proxy_redirect off;
      proxy_pass http://rails_app;
  }
}

I can understand that the nginx container can not find the rails app server but I don’t know what to do to troubleshoot it. Here is the section of Infrastructure code that creates the container. Any help is appreciated. Thanks.

Babar Baig avatar
Babar Baig
module "app_container" {
  source                       = "git@github.com:cloudposse/terraform-aws-ecs-container-definition.git?ref=0.44.0"
  container_name               = "${var.container_name}-app"
  container_image              = "${local.aws_account_id}.dkr.ecr.${var.aws_region}.amazonaws.com/${local.ecr_app_repo}"
  container_memory_reservation = 512
  essential                    = true
  readonly_root_filesystem     = false
  port_mappings = [
    {
      containerPort = 3000
      hostPort      = 3000
      protocol      = "tcp"
    }
  ]
  log_configuration = {
    logDriver = "awslogs"
    options = {
      "awslogs-group" : "/${var.app_code}/${var.app_type}/${var.app_env}/${var.cluster_name}/app",
      "awslogs-region" : var.aws_region,
      "awslogs-stream-prefix" : "ecs"
    }
    secretOptions = null
  }

  # Environment variables declared here

  # Secrets declared here
  privileged = false
}

module "web_container" {
  source                       = "git@github.com:cloudposse/terraform-aws-ecs-container-definition.git?ref=0.44.0"
  container_name               = "${var.container_name}-web"
  container_image              = "${local.aws_account_id}.dkr.ecr.${var.aws_region}.amazonaws.com/${local.ecr_web_repo}"
  container_memory_reservation = 512
  essential                    = false
  readonly_root_filesystem     = false
  port_mappings = [
    {
      containerPort = 80
      hostPort      = 8084
      protocol      = "tcp"
    }
  ]
  log_configuration = {
    logDriver = "awslogs"
    options = {
      "awslogs-group" : "/${var.app_code}/${var.app_type}/${var.app_env}/${var.cluster_name}/web",
      "awslogs-region" : var.aws_region,
      "awslogs-stream-prefix" : "ecs"
    }
    secretOptions = null
  }
  links = ["${var.container_name}-app"]
  privileged = false
}

resource "aws_ecs_task_definition" "this" {
  family                   = "${var.app_type}-${var.app_code}-${var.app_env}-${var.task_def_name}"
  container_definitions    = "[${module.web_container.json_map_encoded},${module.app_container.json_map_encoded}]"
  task_role_arn            = aws_iam_role.ecs_role.arn
  execution_role_arn       = aws_iam_role.ecs_role.arn
  requires_compatibilities = ["EC2"]
  tags = merge({
    Name = "${var.app_type}-${var.app_code}-${var.app_env}-${var.task_def_name}"
  }, var.app_tags)
}

I am looking for someone to point me in the right direction with this. I am unable to run this on ECS.

Miguel Zablah avatar
Miguel Zablah

Did you test the image locally? I’m not familiar with this module since I created my own modules for all ECS-related stuff

But what I would do is check that it works as intended locally, then test permissions on ECS and check that each task has all its parameters set correctly. Btw, I would also use ECS Fargate

Babar Baig avatar
Babar Baig

Yes, I tested it locally via the docker-compose file. It works.

imiltchman avatar
imiltchman

Are you running the two containers in the same task, or in separate tasks?

Babar Baig avatar
Babar Baig

I am running 2 containers within the same Task Definition.

imiltchman avatar
imiltchman

In that case, change server app:3000; to server localhost:3000;

Babar Baig avatar
Babar Baig

Let me check that.

imiltchman avatar
imiltchman

This way, the nginx container will always try to hit the local app container

imiltchman avatar
imiltchman

So it’s not really load balanced, but it should work
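As a sketch, the suggested edit to the upstream block in default.conf, assuming the two containers end up sharing a network namespace within the task (as with the awsvpc network mode) so the Rails server is reachable over loopback:

```nginx
upstream rails_app {
  # Same-task containers: reach the Rails server over loopback
  # instead of resolving the compose service name "app"
  server localhost:3000;
}
```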

Babar Baig avatar
Babar Baig

@imiltchman I tried the suggestion, but I feel there is some issue with the application itself that I need to debug and fix. Thanks for your suggestion; it got the containers running on ECS.

I am thinking of removing Nginx from the application altogether and using an ALB for load balancing. I don’t know what configuration I need to make in the application so that when I run this Rails app and hit localhost:3000, it loads the Rails app directly instead of going through Nginx.

imiltchman avatar
imiltchman

If it’s the same task, you should just reference the app container on localhost, not app:3000

imiltchman avatar
imiltchman

If different tasks, then you’ll need to use service discovery or put an LB in front of the tasks

imiltchman avatar
imiltchman

ECS does not have built-in service discovery like Docker does

loren avatar
loren

i’ve got a probably dumb question about using docker containers… is there a simple/automatic way to refer to local files from the host, within the container environment? i was just playing with the terraform container, which says to do this:

docker run -i -t hashicorp/terraform:light plan main.tf

but of course that fails because 1) it’s invalid syntax for terraform and 2) the container workdir does not have my main.tf. i do know about -v of course, and can mount $PWD to /, but what i’m more interested in is the idea of using a docker image to replace a binary installed on my system. if i have to mount $PWD to the workdir every time, that seems a little annoying?

loren avatar
loren

i’m kinda thinking of something like:

alias terraform='docker run -it hashicorp/terraform'
loren avatar
loren

ahh, looks like using -v is it? along with overriding the workdir? https://github.com/koalalorenzo/docker-aliases

koalalorenzo/docker-aliases

Run commands inside docker containers to keep your OS untouched using bash alias - koalalorenzo/docker-aliases
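Putting the two flags together, a hypothetical alias along those lines (the mount path /workspace is just illustrative):

```shell
# Mount the current directory into the container and make it the
# working directory, so the containerized terraform sees local files.
# --rm removes the container after each run.
alias terraform='docker run -it --rm -v "$PWD:/workspace" -w /workspace hashicorp/terraform:light'
```

After that, terraform plan behaves roughly like a local binary, at the cost of the mount happening on every invocation.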

loren avatar
loren

i guess there’s no good way of using a binary from one container in a different container? why do people like these things again?

mfridh avatar
mfridh

Haha yeah I prefer to just have stuff on my host.

mfridh avatar
mfridh

Here’s an old thing I have bookmarked but I never actually tried it yet.

https://github.com/whalebrew/whalebrew

I manage my laptop tools with puppet mostly so I’m not too bothered with these things yet.

whalebrew/whalebrew

Homebrew, but with Docker images. Contribute to whalebrew/whalebrew development by creating an account on GitHub.

loren avatar
loren

yeah, and i’m generally using a common makefile or a package manager, but i continue to be completely surprised by the difficulty people have managing their own workspace, so i’m hunting for options… thought containers were farther along or more popular for this kind of thing than it actually seems.

loren avatar
loren

whalebrew looks like it could be kinda cool

loren avatar
loren

i imagine credential-access might be tricky. i generally need to mount $HOME/.aws to /root/.aws (or whatever the container user is)
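One possible shape for that, sketched as a hypothetical wrapper function rather than a bare alias; this assumes the amazon/aws-cli image, whose default user is root, so the host credentials map onto /root/.aws:

```shell
# Hypothetical wrapper: reuse host AWS credentials inside a container.
# The read-only mount limits the blast radius; AWS_PROFILE is passed
# through so profile selection works the same as on the host.
awsd() {
  docker run --rm -it \
    -v "$HOME/.aws:/root/.aws:ro" \
    -e AWS_PROFILE \
    amazon/aws-cli "$@"
}
```

With a different image or a non-root container user, the mount target would need to match that user's home directory instead.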

mfridh avatar
mfridh

If the wrapper does the right thing, it should run as your user ID and mount home transparently.

It’s a mess. Jesse Frazelle started a dangerous trend?

mfridh avatar
mfridh

I run everything on my host except for a few very specific Bazel “full stack” things - that has its own complete container, sort of like geodesic. One specific version of all tools, etc.

It’d be neat to have tfenv-like support for all tools.

mfridh avatar
mfridh

Then, if you did things right, it wouldn’t matter much how each host is set up, as long as all team members had the wrappers in place.

loren avatar
loren

yeah, i’m sure that’s where i first read about the idea to use docker images to replace local binaries, which it turns out was posted 5 years ago… https://blog.jessfraz.com/post/docker-containers-on-the-desktop/

Ramblings from Jessie: Docker Containers on the Desktop

How to run docker containers on your desktop.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

A few people on my team enjoy this tool: https://github.com/EnvCLI/EnvCLI

EnvCLI/EnvCLI

Don't install Node, Go, … locally - use containers you define within your project. If you have a new machine / other contributors you just have to install docker and envcli to get started. - …

loren avatar
loren

very interesting, thanks @Erik Osterman (Cloud Posse)!

Maciek Strömich avatar
Maciek Strömich

We’re just using docker-compose and mounting the workdir from the host. The only annoying thing with docker-compose is that you can’t dynamically link containers from the command line; you need to define links in the YAML file, which bloats it when you want to use a single container for multiple use cases

2020-12-17

Santiago Campuzano avatar
Santiago Campuzano
Download and Try the Tech Preview of Docker Desktop for M1 - Docker Blog

Apple has recently shipped the first Macs based on the new Apple M1 chips. Today we have a public preview that you can download and try out!

2020-12-16

2020-12-14

bradym avatar
bradym

It’s considered best practice to pin the versions of packages installed in Dockerfiles to ensure repeatability. But this sometimes leads to packages not being found in a later build, or to dependency issues like this:

Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
libssl-dev : Depends: libssl1.1 (= 1.1.1d-0+deb10u3) but 1.1.1d-0+deb10u4 is to be installed

One solution could be specifying a major or minor version (depending on the package) and letting the package manager fill in the rest, something like apt-get install libssl-dev=1.1.*

I’m curious how others deal with this?
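The wildcard idea above could look like this in a Dockerfile (a sketch, assuming a Debian buster base; the package name and versions come from the error message earlier, and the glob is quoted so the shell doesn't expand it):

```dockerfile
FROM debian:buster-slim

# Pin the upstream OpenSSL version but let apt pick the packaging
# revision, so deb10u3 -> deb10u4 security bumps don't break the build.
RUN apt-get update \
 && apt-get install -y --no-install-recommends 'libssl-dev=1.1.1*' \
 && rm -rf /var/lib/apt/lists/*
```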

Santiago Campuzano avatar
Santiago Campuzano

@bradym We don’t deal with that very often. When it happens, we try to upgrade the whole application stack, including the base Linux container image and the required libs/packages

Santiago Campuzano avatar
Santiago Campuzano

Last time it happened with a legacy PHP 5 application, so we decided to upgrade the Debian base Docker image

2020-12-08

imiltchman avatar
imiltchman
Last Night's Patch Tuesday Update Breaks WSL 2 for Windows 10 v2004

Last Night’s Windows 10 Patch Tuesday Cumulative Update Breaks WSL 2 for Some Windows 10 Version 2004 (May 2020 Update) Users

2020-12-04

Babar Baig avatar
Babar Baig

Hello everyone!

I am using the Docker ECS integration to deploy my docker-compose project on ECS. My docker-compose.yml looks like the following:

volumes:
  postgres_data: {}

services:
  app:
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
    depends_on:
      - db
      - redis
    volumes:
      - assets-volume:/var/www/public
  db:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: redis
    command: redis-server
    volumes:
      - '.:/app'
  web:
    build:
      context: .
      dockerfile: ./docker/web/Dockerfile
    depends_on:
      - app
    volumes:
      - assets-volume:/var/www/public
    ports:
      - 80:80
volumes:
  redis: 
  postgres_data:
  assets-volume:

When I run docker compose up I get the following error: service "app" doesn't define a Docker image to run: incompatible attribute. If you look at the services section, I am specifying a path to a Dockerfile (instead of specifying an image) in the app service. Can I assume that the docker compose CLI does not support Dockerfile paths?
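For what it’s worth, the ECS integration deploys from a registry rather than building images itself; a common pattern (sketched below with a placeholder ECR URL) is to declare image: alongside build:, build and push the images against the default local context (e.g. docker-compose build and docker-compose push), then run docker compose up against the ECS context:

```yaml
services:
  app:
    # Placeholder registry URL - the ECS integration pulls this image
    # instead of building from the Dockerfile
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/crovv-app:latest
    build:
      context: .
      dockerfile: ./docker/app/Dockerfile
```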

Deploying Docker containers on ECS

