Latency tweaks?

We’re using Twingate to access Azure Files (Azure Storage file shares) from Windows clients as ‘traditional’ Windows file shares.

I’m aware that a VPN or ZTNA solution is always going to add latency, and that, regardless of bandwidth, SMB over such a connection is always going to be slower because the protocol was never designed for ‘high’-latency networks.
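For context on why I think latency rather than bandwidth is the bottleneck: most SMB metadata operations are a round trip each, so “lots of small files” workloads scale roughly with RTT. Here’s a purely illustrative back-of-the-envelope (the RTTs and per-file round-trip counts below are assumptions, not measurements):

```python
# Rough, illustrative estimate of how RTT dominates "many small files" SMB copies.
# All numbers below are assumptions for the sake of the example, not measurements.

rtt_direct_ms = 5      # assumed RTT straight to the Azure region
rtt_tunnel_ms = 25     # assumed RTT through the ZTNA/VPN path
files = 2_000          # number of small files to copy
ops_per_file = 4       # assumed SMB round trips per file (open, write, set-info, close)

def copy_wait_seconds(rtt_ms: float) -> float:
    """Lower bound on wall-clock time spent just waiting on round trips."""
    return files * ops_per_file * rtt_ms / 1000

print(f"direct : ~{copy_wait_seconds(rtt_direct_ms):.0f}s of round-trip wait")
print(f"tunnel : ~{copy_wait_seconds(rtt_tunnel_ms):.0f}s of round-trip wait")
```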

That said, is there anything we can do to reduce that added latency?

The Azure container instances for the Connectors default to 1 core and 2 GB of memory - would increasing either have any effect? Or maybe running the Twingate client service at a higher priority, or giving the client computers more resources?
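For the client-side idea, something like this psutil sketch is what I had in mind for bumping the client process priority on Windows. The process-name match is a placeholder (check Task Manager for the real name), and a priority set this way only lasts until the service restarts:

```python
# Sketch: raise the priority of the Twingate client process on Windows.
# The name fragment below is a placeholder - verify the actual process name first.
# Requires: pip install psutil, and an elevated prompt.
import psutil

TARGET_NAME_FRAGMENT = "twingate"  # assumption: adjust to the real process name

for proc in psutil.process_iter(["name"]):
    name = (proc.info["name"] or "").lower()
    if TARGET_NAME_FRAGMENT in name:
        try:
            proc.nice(psutil.HIGH_PRIORITY_CLASS)  # Windows-only priority class
            print(f"Set HIGH priority on {proc.info['name']} (pid {proc.pid})")
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            print(f"Could not change {proc.info['name']} - try running elevated")
```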

Hey Mike,

Generally we don’t see a big impact on performance from Connector “hardware” unless we’re talking lots of users simultaneously transferring lots of data at high speeds - but I still think it’s worth a shot.

What sort of delay/latency/speeds are you seeing?
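If it helps with gathering numbers, a quick-and-dirty script along these lines separates per-operation latency from raw throughput - the share path (and the big test file) are just placeholders:

```python
# Rough timing of small-file operations and a bulk read against a mapped share.
# SHARE is a placeholder - point it at your actual UNC path or drive letter.
import os
import time

SHARE = r"\\storageaccount.file.core.windows.net\myshare"  # placeholder path

def time_small_ops(n: int = 50) -> float:
    """Average seconds per create/write/delete of a tiny file (latency-bound)."""
    start = time.perf_counter()
    for i in range(n):
        path = os.path.join(SHARE, f"_latency_test_{i}.tmp")
        with open(path, "wb") as f:
            f.write(b"x")
        os.remove(path)
    return (time.perf_counter() - start) / n

def time_bulk_read(path: str, chunk: int = 1 << 20) -> float:
    """Approximate MB/s reading an existing large file (bandwidth-bound)."""
    size = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while data := f.read(chunk):
            size += len(data)
    return size / (time.perf_counter() - start) / 1e6

print(f"small ops: {time_small_ops() * 1000:.1f} ms each")
# print(f"bulk read: {time_bulk_read(os.path.join(SHARE, 'bigfile.bin')):.1f} MB/s")
```

Comparing those numbers on the Twingate path versus a direct/VPN path would tell us how much of the slowdown is really the tunnel versus SMB chattiness.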

-arthur