Whenever I start a Docker container locally on a machine where we've installed the Twingate Client, the DNS addresses the container is given are in the CGNAT range and don't work. The container (and Docker generally) is essentially non-functional at that point because name resolution doesn't work at all…
croosso@ubuntu:~$ docker run -it ubuntu
root@a7f19ab39470:/# cat /etc/resolv.conf
As soon as I stop Twingate and restart Docker, I get correct, functioning DNS inside containers.
What should we do for our DevOps and developers to make sure that either containers don't inherit the Twingate DNS, or that the CGNAT addresses work?
@croosso can you clarify how the setup is done? Do you have our Client running inside of a container itself and then other containers pointed at it? Or is it running installed on the Docker host?
The CGNAT IP addresses you see as the nameservers, as well as in the answers to DNS queries, are correct; that's part of how the service works and how we handle DNS-address-based Resources (see How DNS Works with Twingate | Docs for more info). I personally run the Client both on a Docker host and within a container that I then point other containers at, and it works fine, both for accessing Resources and for bypassing out to the Internet itself.
@Ben The Twingate Client is installed solely on the Docker host, not within the containers. After starting and authenticating Twingate, I ran
docker run -it ubuntu followed by
apt-get update. However, nothing resolved. It seems like either dnsmasq or one of the interfaces is not routing the traffic correctly. As soon as I stop the Twingate Client and restart a container, it functions as expected. There is probably a setting on the daemon or something else to allow this to work…
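(If it comes to it, I know the Docker daemon can be told to hand containers specific DNS servers via /etc/docker/daemon.json; a sketch with placeholder resolver addresses is below. But I'd rather understand why the Twingate DNS isn't reachable from containers than hard-code around it.)

```json
{
  "dns": ["1.1.1.1", "8.8.8.8"]
}
```

followed by a sudo systemctl restart docker to pick up the change.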
And is the container that's having issues set up to run in bridge or host mode, or something else?
Also, just in case this is helpful: the way I use our Client with my Docker containers is to set up the Client within one container, then have the other containers use it as their network mode. I do everything through compose so it's pretty straightforward.
For example, I use Uptime Kuma which is a simple uptime monitoring platform and I run it in a remote system with our Client in headless mode in order to monitor services inside of my home lab. This is the stack that I’ve set up for it:
bash -c "apt-get update &&
  apt-get install curl -y &&
  curl https://binaries.twingate.com/client/linux/install.sh | bash &&
  sudo twingate setup --headless /etc/twingate-service-key/service-key.json &&
  sudo twingate start &&
  sleep infinity"
Essentially the Twingate Client is set up as a service, pulls the service key from a file on the host that's mapped as a volume, and runs in bridge mode; the second service, uptime-kuma, has its network mode set to point to the Client. It still has full Internet access through the Client's container, but any Twingate-protected traffic will go through the proper flows. I made sure to enable shell access so I could log in and actually check connectivity and run local commands.
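To make that concrete, here's a minimal sketch of what such a compose file could look like. The service names, image tag, and volume paths are illustrative, not my exact stack, and note the Client container typically needs NET_ADMIN and /dev/net/tun to bring up its tunnel interface:

```yaml
services:
  twingate:
    image: ubuntu:22.04
    # The Client needs these to create its tunnel interface inside the container
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    volumes:
      # service-key.json lives on the host, mapped in read-only
      - ./service-key:/etc/twingate-service-key:ro
    # Folded scalar: YAML joins these lines into one shell command
    # (running as root in the container, so no sudo needed)
    command: >
      bash -c "apt-get update
      && apt-get install curl -y
      && curl https://binaries.twingate.com/client/linux/install.sh | bash
      && twingate setup --headless /etc/twingate-service-key/service-key.json
      && twingate start
      && sleep infinity"

  uptime-kuma:
    image: louislam/uptime-kuma:1
    # Share the twingate service's network stack instead of getting its own
    network_mode: "service:twingate"
    volumes:
      - ./kuma-data:/app/data
```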
Thanks @Ben, but I'm not looking to run TG inside the container. Our developers install TG on their host machines, then they run all kinds of containers on those machines for development. The problem is that as soon as they start a container on a Linux host where TG is running, the container's name resolution doesn't work at all, including Internet access. I can reproduce the problem on any Linux host simply by installing TG and running:
docker run -it ubuntu
I tried on a Mac and it works. The problem seems specific to a plain Ubuntu setup.
@Ben I suspect it's because Docker doesn't route traffic to sdwan0. I'm able to ping an external IP like 18.104.22.168, but when I test whether I can reach the TG DNS, it doesn't respond:
nc -zv 100.95.0.251 53
(No answer from the container; from the host, I get a success.)
I haven't been able to reproduce this on my own Linux host. I run the Client in headless mode, it has a few Resources assigned that live in a remote VPS (so off-site), and when I run the exact command you do and go into that container, I can resolve things just fine, both Resources and non-Resources.
I'm also able to ping remote sites like 22.214.171.124 and 126.96.36.199 and so on, or by domain name. The container was set up to run off the Docker bridge, which should then depend on the host, and I can see the SDWAN interface has priority on the host. I haven't done anything super special on this host; systemd-resolved is running at default settings, so I'm not sure what other differences may exist between it and your systems?
I’m running Ubuntu 22.04 if that makes a difference.
Hmm, ok. That's strange. Our standard Ubuntu installation is based on a server image (because it's much easier to automate with cloud-init). Maybe there's something different in the network configuration. I'll give it a try with a plain Ubuntu in a VM.
Yeah I just pull the 22.04 ISO off their site and keep it in the Proxmox store any time I need to spin up another VM. I’d be curious what customizations you’ve done to that image, I can try to reproduce it if I know a bit more.
I tried a VM with a fresh download and install of Ubuntu Desktop: same behavior. Docker Engine was installed following the regular instructions. The only thing I always do (and it's mandatory under our TG policy) is enable ufw:
sudo ufw enable
AH! I found the issue by looking at the routes… I discovered I have a Resource in Twingate with the same IP range as the Docker bridge network!
After I disabled that Resource, it worked. Thanks for the time you spent on this @Ben!
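In case it helps anyone else who lands here: once you have the Resource CIDRs and the Docker network subnets in hand, the overlap is easy to check mechanically. A small Python sketch (the CIDRs below are made-up examples, not our real ones):

```python
import ipaddress

def overlapping(resource_cidrs, docker_cidrs):
    """Return every (resource, docker) pair whose networks overlap."""
    clashes = []
    for r in resource_cidrs:
        for d in docker_cidrs:
            if ipaddress.ip_network(r).overlaps(ipaddress.ip_network(d)):
                clashes.append((r, d))
    return clashes

# Example: a Twingate Resource CIDR vs Docker's default bridge subnet
print(overlapping(["172.17.0.0/16", "10.0.0.0/24"], ["172.17.0.0/16"]))
# → [('172.17.0.0/16', '172.17.0.0/16')]
```

You can get the Docker side with something like docker network inspect bridge and reading the IPAM subnet out of the output.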
Glad you figured it out, I hadn’t even thought of that!