All things docker Archive: https://archive.sweetops.com/docker/
I am using an Amazon AMI to build spot instances whenever there is a build job. One of the packages installed is Docker, using the commands below:
# Install Docker CE
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
apt-get update -yq && apt-get -yq install docker-ce docker-ce-cli containerd.io
Sometimes, my internal routing table for the instance looks like this:
$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.206.96.1     0.0.0.0         UG    100    0        0 ens5
10.206.96.0     0.0.0.0         255.255.224.0   U     0      0        0 ens5
10.206.96.1     0.0.0.0         255.255.255.255 UH    100    0        0 ens5
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.30.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-77455fcf0cbe
172.31.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-41e09bcc0b9c
192.168.32.0    0.0.0.0         255.255.240.0   U     0      0        0 br-c80695bea2be
192.168.96.0    0.0.0.0         255.255.240.0   U     0      0        0 br-5d6cf359313a
192.168.112.0   0.0.0.0         255.255.240.0   U     0      0        0 br-f38334a832ee
192.168.160.0   0.0.0.0         255.255.240.0   U     0      0        0 br-41a02bc22363
192.168.208.0   0.0.0.0         255.255.240.0   U     0      0        0 br-1f45ae76c3fe
192.168.224.0   0.0.0.0         255.255.240.0   U     0      0        0 br-8a8bb9ab22a0
192.168.240.0   0.0.0.0         255.255.240.0   U     0      0        0 br-c7f72974b999
And sometimes I have a smaller routing table (some of the subnets listed above are missing). I am not sure what causes the inconsistent routing tables and bridge interfaces, but it's causing me trouble, because every once in a while a job needs to reach an IP that falls inside one of the subnets Docker has claimed. Since those bridge routes are more specific than the default route, matching traffic gets sent to one of the docker interfaces instead of the gateway and times out, causing builds to fail for that particular service.
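To illustrate the capture effect described above: the kernel picks the most specific matching route, so any destination IP inside one of Docker's bridge subnets never reaches the default gateway. A small sketch (the subnet list is copied from the routing table above; the target IPs are hypothetical examples):

```python
import ipaddress

# Bridge subnets Docker created, taken from the route -n output above.
docker_subnets = [
    "172.17.0.0/16", "172.30.0.0/16", "172.31.0.0/16",
    "192.168.32.0/20", "192.168.96.0/20", "192.168.112.0/20",
    "192.168.160.0/20", "192.168.208.0/20", "192.168.224.0/20",
    "192.168.240.0/20",
]

def captured_by_bridge(ip):
    """Return the first Docker bridge subnet that would claim this IP, or None."""
    addr = ipaddress.ip_address(ip)
    for net in docker_subnets:
        if addr in ipaddress.ip_network(net):
            return net
    return None

# An IP the build script might need, but which a bridge swallows:
print(captured_by_bridge("192.168.100.5"))  # inside 192.168.96.0/20
# An IP that still goes out via the default route:
print(captured_by_bridge("8.8.8.8"))
```

Because the table changes between boots (it depends on which compose networks get created, in what order), the same target IP can work on one instance and time out on another.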
Now my question is: what can be done about this? I just had a look at /etc/docker/daemon.json to see what's in there, and the file is actually missing. How do I deal with this erratic behaviour by dockerd?
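A missing /etc/docker/daemon.json just means dockerd is running on built-in defaults, which is why the bridge subnets vary with whatever networks compose happens to create. One common mitigation (a sketch, not a drop-in fix; 10.210.0.0/16 is an assumed pool you would replace with a range that cannot collide with your VPC or anything your jobs need to reach) is to pin Docker's address allocation with default-address-pools:

```json
{
  "default-address-pools": [
    { "base": "10.210.0.0/16", "size": 24 }
  ]
}
```

With this in place, every bridge network Docker creates is carved as a /24 out of the given base range. Restart dockerd after writing the file; already-created networks keep their old subnets until they are removed and recreated.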
Has someone here tried to access a service running in one container from another container, both running on the same host? In my case, I have a DB running under compose with the port published to the host. I am also running a gitlab-runner container, which needs to access this database to keep the tests local. It works if I point it at the DB in AWS RDS, but I have tried pointing at the local one using its IP, the container name, and 127.0.0.1, with no luck. I also tried running the second container with --net <compose network> and repeating the same attempts. Any ideas?
What does your compose file look like?
Saichovsky, thank you for responding. This looks like a problem with the gitlab-runner image; I was able to query from a centos image after installing postgresql. The compose file looks like this:
version: '3.7'
services:
  postgres:
    container_name: postgres
    image: postgres:12.7
    restart: always
    env_file:
      - env/postgres.env
    logging:
      options:
        max-size: 10m
        max-file: "3"
    ports:
      - '5432:5432'
    networks:
      - database
    volumes:
      - "postgres_data:/var/lib/postgresql/data"
volumes:
  postgres_data:
networks:
  database:
I believe I know what is happening: the gitlab runner creates a new container for the job, and that one is the container that needs to be connected to the db network.
So the solution was to pass the name of the network via the --docker-network-mode argument to the gitlab-runner command.
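The same setting can live in the runner's config.toml under [runners.docker]. A sketch, assuming the network name: compose prefixes networks with the project (directory) name, so "database" above would typically appear as something like myproject_database; check docker network ls for the real name.

```toml
[[runners]]
  name = "local-runner"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    # Assumed name; compose creates it as <project>_database
    network_mode = "myproject_database"
```

With the job container attached to that network, the database is reachable by its service/container name (postgres) on port 5432, without going through the published host port.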
Why would docker-compose work but not docker compose? I'm running one on Ubuntu 18.04 and one on a Mac with Docker Desktop, and the docker version on both is 20.9+.
Tried the experimental flag on Ubuntu for kicks and still no go.
docker compose is not yet GA. https://docs.docker.com/compose/cli-command/#installing-compose-v2
Thank you, that helped a ton. I couldn't find anything on this terminology. I've just set up an update for the Ubuntu desktop to pull in v2, and we'll be good to go.