Podman Containers - failing to remain up

I can deploy connectors via Podman just fine. Connections are active and responsive. However, 24 hours later I am unable to connect. The twingate portal reports the connectors as online, but no logs or connection attempts are shown. Ideas?

Can you activate debug logs on your connector and share its output?

To keep you informed: I have redeployed the containers with --env TWINGATE_LOG_LEVEL=7. I will wait for the condition to occur, then export the logs.
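For reference, the redeploy-with-debug-logging step could look something like this. This is a sketch, not the exact commands used: the container name and the twingate/connector:1 image tag are assumptions, and the tokens are redacted placeholders.

```shell
# Remove the old connector container, then redeploy it with debug logging.
# TWINGATE_LOG_LEVEL=7 turns on the most verbose (debug) output.
podman rm -f twingate-connector

podman run -d \
  --name twingate-connector \
  --restart=unless-stopped \
  --env TWINGATE_NETWORK="homestead" \
  --env TWINGATE_ACCESS_TOKEN="<redacted>" \
  --env TWINGATE_REFRESH_TOKEN="<redacted>" \
  --env TWINGATE_LOG_LEVEL=7 \
  twingate/connector:1

# Later, export the logs to a file for sharing:
podman logs twingate-connector > connector-debug.log 2>&1
```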


OK, you will love this. I don't think it's you; for some reason it doesn't think the containers are up:

[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.74.42.53:30005, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.74.42.53:30007, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_all_listen_addrs: listeners_ size 1
[DEBUG] [libsdwan] [stun] update_public_address: sent STUN request to 34.148.205.131:3478
[DEBUG] [libsdwan] [stun] parse_response: got STUN response: 129.222.252.61:26085
[DEBUG] [libsdwan] [relay] get_all_listen_addrs: listeners_ size 1
[DEBUG] [libsdwan] heartbeat payload: {"cert_digest":"OmiCOmByjsHOykm7WGfUSxr2DbdyrA0i7WZnyki3WnU=","connected_relays":[{"addr":"34.139.194.106:30009","zone":"us-east1-d"},{"addr":"34.73.70.177:30001","zone":"us-east1-d"},{"addr":"34.74.42.53:30005","zone":"us-east1-d"},{"addr":"34.74.42.53:30007","zone":"us-east1-d"}],"hairpinning_supported":"not-supported","hostname":"7372f36f2fc5","local_ipv4":["10.0.2.100"],"local_time_offset":0,"nat_type":"endpoint-dependent","stun_discovered_external_address":"129.222.252.61:26085","uptime":27540}
[DEBUG] [libsdwan] submit_request: sending HTTP request 7397794795804333983
[DEBUG] [libsdwan] http::request::send_request: POST "https://homestead.twingate.com/api/v4/connector/heartbeat" application/json
[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.139.194.106:30009, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.73.70.177:30001, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.74.42.53:30005, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.74.42.53:30007, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_all_listen_addrs: listeners_ size 1
[DEBUG] [libsdwan] http::response::from: certificate 8fe918ea2a51116d3fd2cfd0e5327c1ef49b6f1f4604ccc2b02f447a269ce707, issuer: C=US, O=Let's Encrypt, CN=R3, subject: CN=*.twingate.com
[DEBUG] [libsdwan] http::request::handle_response: POST "https://homestead.twingate.com/api/v4/connector/heartbeat" 200 OK (duration 0 sec)
[DEBUG] [libsdwan] operator(): got HTTP request 7397794795804333983 successful response
[DEBUG] [libsdwan] access_node/heartbeat
[DEBUG] [libsdwan] [stun] update_public_address: sent STUN request to 34.148.205.131:3478
[DEBUG] [libsdwan] [stun] parse_response: got STUN response: 129.222.252.61:26085
[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.139.194.106:30009, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.73.70.177:30001, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.74.42.53:30005, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.74.42.53:30007, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_all_listen_addrs: listeners_ size 1
[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.139.194.106:30009, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.73.70.177:30001, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.74.42.53:30005, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.74.42.53:30007, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_all_listen_addrs: listeners_ size 1
[DEBUG] [libsdwan] [stun] update_public_address: sent STUN request to 34.148.205.131:3478
[DEBUG] [libsdwan] [stun] parse_response: got STUN response: 129.222.252.61:26085
[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.139.194.106:30009, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.73.70.177:30001, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.74.42.53:30005, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_needed_conns_count: relay 34.74.42.53:30007, needed_conns_count 0
[DEBUG] [libsdwan] [relay] get_all_listen_addrs: listeners_ size 1

I have an idea as to why… how did you spin up your Connector? Via Docker Compose?

and in case you did use our Docker Compose template…

We just updated it in our documentation (see here).

We used to have a couple of extra environment variables specified in there that we just removed (yesterday): it turns out those env variables weren’t necessary in the Docker Compose (because they are set at image level) and in fact, one of the values recently changed…

The consequence of that change? It still allowed the Connector to function normally but it broke the health check method used by container orchestrators…

If you used Docker Compose, take a look at the initial YAML and remove those variables that are no longer in the documentation… and it might get back to normal!
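For comparison, a minimal Compose file matching that guidance might look roughly like this. This is a sketch, not the official template: the image tag is an assumption, and the point is what is absent — no extra TWINGATE_* variables beyond the network and tokens, since the rest are set at the image level.

```yaml
# docker-compose.yml (sketch)
services:
  connector:
    image: twingate/connector:1
    restart: unless-stopped
    environment:
      # Only the per-deployment values; everything else is baked into the image.
      - TWINGATE_NETWORK=homestead
      - TWINGATE_ACCESS_TOKEN=<redacted>
      - TWINGATE_REFRESH_TOKEN=<redacted>
```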

Take a look at our Reddit thread for a bit more on this.

Hmm - I am not sure that is it. I am spinning up the containers using the terminal command in your documentation.

podman run -d --env TWINGATE_LOG_LEVEL=7 --env TWINGATE_NETWORK="homestead" --env TWINGATE_ACCESS_TOKEN="" --env TWINGATE_REFRESH_TOKEN="" --env TWINGATE_LABEL_HOSTNAME="hostname" --name "twingate-podman2024-02-09-2" --restart=unless-stopped --pull=always

I do have a workaround: I added cron jobs to restart all running containers every 24 hours. It's a band-aid, but it works on both Pop!_OS and Ubuntu. I am not convinced this is your issue; I am leaning toward it being a Podman issue. I have not tried it with Docker.
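For anyone who wants the same band-aid, the cron entry could look something like this. This is a sketch: the container name and the 04:00 schedule are assumptions, and the crontab must belong to the user that owns the (rootless) containers.

```shell
# Edit the crontab of the user running the containers:
#   crontab -e
# and add a line like:

# Restart the connector container every day at 04:00
0 4 * * * /usr/bin/podman restart twingate-podman2024-02-09-2
```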

Got it @kramer9. The only other thing I can think of is that perhaps the container isn't running with the right privileges? I'm unfortunately not familiar with Podman. Do you know if containers in Podman have any particular restrictions in terms of networking?
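A few Podman commands that might help narrow this down the next time the condition recurs (a sketch; the container name is an assumption):

```shell
# What does Podman itself think the container state is?
podman inspect --format '{{.State.Status}}' twingate-podman2024-02-09-2

# If the image defines a health check, run it manually and check the result
# (a non-zero exit code means the health check is failing):
podman healthcheck run twingate-podman2024-02-09-2
echo "healthcheck exit code: $?"

# Rootless networking details (network backend, etc.):
podman info | grep -iA2 network
```

If the state reads "running" but the health check fails, that would line up with the broken-health-check theory above; if the container is actually stopped, the restart policy or the rootless user session would be the next thing to look at.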