
docker containers on non-default network can't do DNS lookups #772

Open
lbedford opened this issue May 21, 2019 · 10 comments

@lbedford

docker-compose creates its own network, which causes the DNS server inside the containers to be set to 127.0.0.11 (Docker's embedded resolver).
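
For reference, the embedded resolver can be confirmed from inside a container on the compose network (a quick check; the output line is what I'd expect, not a captured log):

$ docker exec test_db_1 cat /etc/resolv.conf
nameserver 127.0.0.11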

This doesn't appear to be allowed through the firewall:

lbedford@dexter:~/$ docker exec -it test_db_1 /usr/bin/apt update
Err:1 http://security.debian.org/debian-security stretch/updates InRelease
Temporary failure resolving 'security.debian.org'
Err:2 http://deb.debian.org/debian stretch InRelease
Temporary failure resolving 'deb.debian.org'
Err:3 http://deb.debian.org/debian stretch-updates InRelease
Temporary failure resolving 'deb.debian.org'
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
W: Failed to fetch http://deb.debian.org/debian/dists/stretch/InRelease Temporary failure resolving 'deb.debian.org'
W: Failed to fetch http://security.debian.org/debian-security/dists/stretch/updates/InRelease Temporary failure resolving 'security.debian.org'
W: Failed to fetch http://deb.debian.org/debian/dists/stretch-updates/InRelease Temporary failure resolving 'deb.debian.org'
W: Some index files failed to download. They have been ignored, or old ones used instead.

Sample docker-compose file (same error with or without the networks: default: null stanza):

version: '2'
networks:
  default: null
services:
  web:
    image: "nginx"
  db:
    image: "redis"       

The iptables rules inside the container's network namespace seem to make sense:

lbedford@dexter:~/$ sudo nsenter -n -t $(docker inspect --format {{.State.Pid}} 6ce3e9335765) iptables -t nat -L
Password:
Chain PREROUTING (policy ACCEPT)
target prot opt source destination

Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
DOCKER_OUTPUT all -- anywhere localhost

Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
DOCKER_POSTROUTING all -- anywhere localhost

Chain DOCKER_OUTPUT (1 references)
target prot opt source destination
DNAT tcp -- anywhere localhost tcp dpt:domain to:127.0.0.11:46675
DNAT udp -- anywhere localhost udp dpt:domain to:127.0.0.11:58020

Chain DOCKER_POSTROUTING (1 references)
target prot opt source destination
SNAT tcp -- localhost anywhere tcp spt:46675 to::53
SNAT udp -- localhost anywhere udp spt:58020 to::53
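
One way to sanity-check those rules is to look at what is actually listening in the same namespace. A diagnostic sketch reusing the nsenter approach above (the expected listener is dockerd's embedded resolver on the redirected ports, 46675/tcp and 58020/udp here):

$ sudo nsenter -n -t $(docker inspect --format {{.State.Pid}} 6ce3e9335765) ss -lntu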

It works correctly on the docker default network (for example, docker run -it --rm --entrypoint /bin/bash redis can run apt update correctly).
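
The difference is visible in /etc/resolv.conf. A quick comparison, assuming the compose network from above is still up (the host nameserver address is illustrative):

$ docker run --rm redis cat /etc/resolv.conf             # default bridge
nameserver 192.168.1.1                                   # copied from the host (example)
$ docker run --rm --network test_default redis cat /etc/resolv.conf
nameserver 127.0.0.11                                    # Docker's embedded DNS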

Clear Linux version:
$ swupd info
Installed version: 29520

@ahkok
Contributor

ahkok commented May 21, 2019

This doesn't appear to be allowed through the firewall

Clear Linux doesn't ship a firewall by default. I assume the problem is that DNS resolution isn't properly mapped into docker.

@znmeb

znmeb commented May 21, 2019

I ran into something like this last night. I had two containers on a user-created network that published ports and expected to see each other by port number on localhost, but they couldn't. It was:

docker network create osrm # default driver is bridge, which is what I want
docker run -p 5000:5000 --network osrm backend
docker run -p 9966:9966 --network osrm frontend

Adding an explicit driver (docker network create -d bridge osrm) fixed it. Clear ships without any Docker config files, and the defaults may be specified there.

@lbedford
Author

It does appear to create the relevant network correctly:
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
[snip]
c62a4ceb4322 test_default bridge local

$ docker network inspect test_default
[
    {
        "Name": "test_default",
        "Id": "c62a4ceb4322b4bbe257027acb143276bd10c8be11db0bb61807f95701c0675e",
        "Created": "2019-05-22T09:39:46.958450114+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "29a2076dd285f11b1f84d79140310df67c346a4e07e09d7b299d318e8b0298c9": {
                "Name": "test_web_1_4a5c2f9e110d",
                "EndpointID": "1a997edff46e6a6b0ea6dbf7c0bc87c156e4366335882d331d487337b7052964",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            },
            "87b4849b6202ee7769c241b48cb7491344e3b06ba7dbe3c4facdf000819aad51": {
                "Name": "test_db_1_fc1f4e2a07d7",
                "EndpointID": "90295edde0cdae473635dea484f715cda9951f4644223a87e30f1c549caee5e3",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

@jwang11

jwang11 commented May 24, 2019

Just tried it on my dev machine. The issue can only be reproduced with kata-runtime; if runc is chosen, non-default networks work fine. The initial analysis suggests it may be related to the embedded DNS server in user-defined networks: the DNS server 127.0.0.11 inside /etc/resolv.conf can't work in a kata container. If a correct DNS server is passed instead, the network is workable (sketch below).

So it is most likely a kata-runtime specific issue.
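
A sketch of that workaround, overriding the resolver inside the running container from the compose example above (8.8.8.8 is just an example upstream):

$ docker exec test_db_1 sh -c 'echo nameserver 8.8.8.8 > /etc/resolv.conf'
$ docker exec test_db_1 /usr/bin/apt update    # now resolves, bypassing 127.0.0.11

docker run's --dns flag is not enough here, since on user-defined networks it only changes the upstream that the embedded 127.0.0.11 resolver forwards to.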

@bryteise
Member

@amshinde Have you seen this before on kata?

@lbedford
Author

Earlier today I set the docker_engine to runc in the system's configuration files, and it works correctly. So it does appear to be an issue between docker's internal DNS and kata.
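
For anyone else hitting this, a generic sketch using Docker's standard daemon.json mechanism (Clear Linux's stateless configuration may use a different path for this):

$ sudo mkdir -p /etc/docker
$ echo '{ "default-runtime": "runc" }' | sudo tee /etc/docker/daemon.json
$ sudo systemctl restart docker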

@znmeb

znmeb commented May 24, 2019

Now that I think of it, I may have had the kata containers installed when I ran into this. If so, I have a reproducible test case for it. It's buried in a huge repo, but I can make it simpler.

@chavafg

chavafg commented May 24, 2019

afaik, this is still an issue in kata, related: kata-containers/runtime#175

@amshinde

@znmeb @lbedford As @chavafg mentioned, this is a known limitation with kata. This is due to the internal DNS resolver on 127.0.0.11 that docker uses for custom networks. We have been more focused on Kubernetes use cases and haven't had the bandwidth to work on this.

@bryteise
Member

For Clear, the workaround is to use runc rather than kata. @CraigSterrett This is likely something to take into account (and maybe close) if we switch to runc as the default runtime.
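
The runtime can also be selected per container without changing the daemon default, assuming runc is registered as a runtime on the host (reusing the compose network from the report):

$ docker run --runtime runc -it --rm --network test_default redis apt update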
