Twingate with large blob downloads

Hello, we are using Twingate to connect to our internal network, where we need to download large file blobs. We have been very happy with the performance of small file downloads, but after running some speed tests we see that for downloads lasting longer than 1-2 minutes, the network speed begins to deteriorate after about a minute, from ~100 Mbps down to ~5 Mbps at its worst.

We are currently on the free tier. Would this performance improve if we upgraded to a paid plan?

The connector is deployed on AWS, and it only has a single downstream connection at the moment.
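To make the drop-off easier to quantify than a browser speed test, one rough option is to sample the connector VM's NIC counters during a long download. This is just a sketch: it assumes a Linux host, and the interface name (`eth0` here) is a placeholder you'd replace with the connector's actual interface.

```shell
# measure_rx IFACE: print received throughput in Mbps over a 1-second window,
# read from the kernel's per-interface byte counters in /proc/net/dev.
measure_rx() {
  rx1=$(awk -v i="$1:" '$1==i {print $2}' /proc/net/dev)
  sleep 1
  rx2=$(awk -v i="$1:" '$1==i {print $2}' /proc/net/dev)
  echo "rx: $(( (rx2 - rx1) * 8 / 1000000 )) Mbps"
}

# Example: run this in a loop on the connector host while a large
# download is in progress, to see exactly when the speed degrades:
#   while true; do measure_rx eth0; done
```

Logging this alongside the download would show whether throughput falls off a cliff at a consistent point in time.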

Hi there,

We don’t impose any throughput or performance limits based on product tier, so you would see no improvement in that regard by upgrading to a paid plan.

Can you tell me a bit more about your environment?

  1. What sort of file sizes are we looking at with your blob downloads?
  2. What size/spec is the VM running the connector?
  3. Have you ever had “better” performance out of a large download or has it always been sub-par?

Thanks!

-arthur

Hi Arthur,

I appreciate the reply and apologies for the delay. We were running a few more experiments internally. Answers below.

What sort of file sizes are we looking at with your blob downloads? In practice, our blobs vary from <500 MB to a few GB (the largest is 10 GB). The screenshot attached to the original ticket uses a self-hosted OpenSpeedTest deployed on one of our instances, accessible only through the connector.

What size/spec is the VM running the connector? We have two connectors deployed, both running on t3a.micro instances. I have attached a screenshot of their metric vitals after running a speed test that goes through the Twingate connector.

Have you ever had “better” performance out of a large download or has it always been sub-par? We had “better” performance when we were using AWS VPN Client: we didn’t find that our connection would saturate during upload. We wanted to switch to Twingate in the hopes of getting past the 10 Mbps maximum throughput that AWS VPN Client supports. If we run this same speed test (deployed on the same instance that the connector has access to) but access it through AWS VPN Client, then we do not see any saturation during upload.

Please let me know if you have any other questions, and if there is anything I can do to expedite a resolution on this issue.

Ozzie

Hey Ozzie,

I don’t see screenshots in your original post or in your latest reply. Can you send the connector vitals screenshot through to arthur (at) twingate.com so I can take a look?

One thing you could also try, to rule out instance sizing, would be to deploy the connector on a slightly larger instance (a t3a.small or even a t3a.medium) and see if you notice any variation in performance.
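If resizing in place is easier than redeploying, one way to do that is with the AWS CLI. This is a sketch, not a prescription: the instance ID below is a placeholder, and the instance must be stopped before its type can be changed.

```shell
# Hypothetical instance ID; substitute the connector VM's real ID.
INSTANCE_ID="i-0123456789abcdef0"

# The instance type can only be modified while the instance is stopped.
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"

# Bump from t3a.micro to t3a.small (or t3a.medium), then start it back up.
aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" \
  --instance-type '{"Value": "t3a.small"}'
aws ec2 start-instances --instance-ids "$INSTANCE_ID"
```

Note the connector will briefly drop offline during the stop/start, so it's worth doing this on one of the two connectors at a time.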