Resources with IPs other than the server's (where the connectors are) don't work

I’ve set up connectors in Docker on my Ubuntu server and created some resources. The resource with the server’s IP and a port works fine, but the other resources with other IPs don’t work. The other IPs belong to Docker containers set up in ipvlan 2 and 3.

Also, I’ve checked the resource activity and there are a lot of successful connections, but they all say they lasted 3 seconds when they never actually connected.

Please someone help me, thank you in advance.

Hi @Nameless, welcome to the Community!!!

To clarify, you can access the IP of the Ubuntu server via Twingate but cannot access the Docker containers that exist in vlan2 and vlan3?

How are you managing the routing between the three vlans?

How did you provision the resources in your Twingate Admin Console?

Hello @chris-twingate and thank you for the warm welcome.

Yes, via Twingate I can access the Docker container set up on the bridge network (server IP with a specified port), but not the other containers in network x (ipvlan2) and network y (ipvlan3).

I’m not sure I fully understand the question. Network x, which is ipvlan2, didn’t require any additional config because all of its containers appear like “machines” on the network. Network y, ipvlan3, I created normally (as instructed on the internet), and on my main router I added a static route pointing at the Ubuntu server to connect that subnet to my main network.
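For context, a Docker ipvlan network of the kind described here is typically created along these lines. The subnet, gateway, parent interface, and network name below are illustrative guesses based on values that appear later in this thread, not the exact commands the poster ran:

```shell
# Sketch of a typical L2 ipvlan Docker network. Assumes a VLAN
# subinterface enp3s0.2 as the parent and the 192.168.179.0/24
# subnet mentioned later in this thread; adjust to your setup.
docker network create -d ipvlan \
  --subnet=192.168.179.0/24 \
  --gateway=192.168.179.1 \
  -o parent=enp3s0.2 \
  ipvlan3
```

Containers attached with `--network ipvlan3` then get addresses directly in 192.168.179.0/24, which is why they look like separate machines on the LAN.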

I’m not sure what you mean by “provision the resources”. In Twingate I added the resources, gave them appropriate names, gave them the IPs of the containers and, where necessary, the container’s port. I’ve also tried restricting (and not restricting) UDP and ICMP, and tried with and without aliases, and nothing has worked.

Thank you. Usually, for a network to use multiple VLANs (network IDs), there is a gateway/router that forwards traffic between them. If you could run the ip add command and netstat -rn on your Ubuntu server, it would help me answer some of the questions I have below.
What is the name of your network (<NETWORK_NAME>.twingate.com)?
What are the network IDs (e.g. 192.168.1.0/24) for each of your networks referenced above?
BRIDGE:
VLAN2:
VLAN3:
Can any other computer on your network access the containers on vlan2/vlan3?

Sorry for the long response.

ip add command output:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 24:4b:fe:48:36:dc brd ff:ff:ff:ff:ff:ff
    inet 192.168.178.69/24 brd 192.168.178.255 scope global enp3s0
       valid_lft forever preferred_lft forever
    inet6 2a00:ee2:2a03:2700:264b:feff:fe48:36dc/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 494sec preferred_lft 494sec
    inet6 fe80::264b:feff:fe48:36dc/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:be:6f:8b:9e brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:beff:fe6f:8b9e/64 scope link
       valid_lft forever preferred_lft forever
75: enp3s0.1@enp3s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 24:4b:fe:48:36:dc brd ff:ff:ff:ff:ff:ff
76: enp3s0.2@enp3s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 24:4b:fe:48:36:dc brd ff:ff:ff:ff:ff:ff
126: vethee47381@if125: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 8e:70:50:a9:da:cc brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::8c70:50ff:fea9:dacc/64 scope link
       valid_lft forever preferred_lft forever
128: veth41a4fee@if127: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 8e:ff:46:ba:f7:4b brd ff:ff:ff:ff:ff:ff link-netnsid 6
    inet6 fe80::8cff:46ff:feba:f74b/64 scope link
       valid_lft forever preferred_lft forever

The last two (“vethee47381@if125” and “veth41a4fee@if127”): I don’t remember them being there before setting up Twingate, and I don’t remember creating them.

The netstat -rn command did not work (there is no netstat command on the server), but I tried nstat -rn, which returned nothing.
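As an aside, on recent Ubuntu releases netstat ships in the optional net-tools package, and nstat is a different tool (kernel SNMP counters), which is why it returned nothing useful here. The iproute2 replacements for the old commands are:

```shell
# iproute2 equivalent of the old `netstat -rn` routing-table dump
ip route show

# neighbour (ARP) table, formerly shown by `arp -n`
ip neigh show
```

Both commands only read kernel state, so they need no special privileges.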

My twingate network is “bogdanhomenetwork.twingate.com”.

BRIDGE = 172.17.0.1/16 (Docker default; the containers are not accessible via their 172.17.x.x IPs, but rather via the server IP 192.168.178.69 plus the port configured for each container)
VLAN2 = 192.168.178.0/24 (enp3s0.1)
VLAN3 = 192.168.179.0/24 (enp3s0.2)
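Given those subnets, the static route described earlier (added on the main router so the VLAN3 subnet is reachable via the Ubuntu host) is conceptually equivalent to the Linux command below. Router web UIs vary, so this is only an illustration of what the route entry expresses, not a command to run as-is:

```shell
# On the router: send traffic for the ipvlan3 subnet via the
# Ubuntu/Docker host at 192.168.178.69. On a consumer router this
# is normally configured in the web UI, not a shell.
ip route add 192.168.179.0/24 via 192.168.178.69
```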

Locally, everything works correctly. Any machine can access the Nextcloud container on the ipvlan2 network and the other containers on the ipvlan3 network.

I see how things are set up now. You currently have five connectors running on your Docker host. Can you stop all of them except sparkling-grebe? Then please test your access again.

I stopped all connectors except sparkling-grebe and tried again, and it didn’t change anything. I still couldn’t access any resource that wasn’t server IP + port.

Also, for each attempt there were 2 logs in the activity.

OK, thank you for testing that. I see four (4) services listed in your Twingate Network. They are:
Portainer (VLAN3)
Webtop (VLAN2)
Homarr (VLAN2)
Nextcloud (VLAN2)

Based on your previous posts, Portainer is the only app not accessible while connected to Twingate?

No, Webtop, which is on the bridge network, is the only app accessible via Twingate. None of the others work.