Deno Deploy Questions
Hey there. We are using Supabase to run edge functions, and Supabase builds that feature on top of Deno Deploy. I'd like to know more about how Deno Deploy works.
Our main questions are about how fast these functions can scale, how request load balancing and load are managed, where these edge locations are, what caching mechanisms (if any) are used, whether they go "cold" after some time, how "ephemeral" these functions are in case some troubleshooting or a reboot is required, etc.
We're asking because we've noticed occasional latency issues when using our app and want to understand the root cause, but searching the guides gives us no clear picture of how exactly these edge functions are deployed and run, or of their surrounding environment. Any more in-depth info would help us greatly.
Hello! Deno Deploy is a next-generation cloud platform built on JavaScript isolates. It runs your JavaScript, TypeScript, and WebAssembly at the edge, in our 35 global regions, close to your users. It scales automatically, from zero to thousands of requests per second.
As for the latency issues you're experiencing, it's hard to pinpoint the exact cause without more information. However, Deno Deploy is designed to minimize latency by running your code in global regions close to your users.
Unfortunately, the documentation does not provide specific details on request load balancing, caching mechanisms, or the ephemerality of the functions. For more in-depth information, I would recommend reaching out to Deno's support or community forums. They might be able to provide more insights or help you troubleshoot the latency issues you're experiencing. 🦕
A lot of this comes from personal experience, so take it with a grain of salt.
1. The functions scale pretty fast, though it's all opaque. You could sit down and test how it scales in your local region with a bit of clever code using BroadcastChannels (see the sketch after this list), but I wouldn't worry too much about that.
2. There are no caching mechanisms built into Deno Deploy at the moment. If you want request caching, set the appropriate HTTP caching headers and let the client handle caching (there's a sketch of that below too).
3. Isolates spin up and down all the time; on average an isolate stays up around one minute. There are no "cold starts" (they take 0ms) in Deno Deploy, since the isolate starts during the TCP/TLS handshake.
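To make point 1 concrete, here's a rough sketch of the BroadcastChannel trick I mean. The channel name, the response format, and the assumption that messages reach the other isolates currently serving your project are all mine, so treat it as an experiment, not anything official:

```ts
// Each isolate gets its own random ID when it starts up.
const isolateId = crypto.randomUUID();

// BroadcastChannel on Deno Deploy lets isolates of the same project talk to
// each other (assumption: messages reach the isolates currently running).
const channel = new BroadcastChannel("isolate-census");
const seen = new Set<string>([isolateId]);

channel.onmessage = (event: MessageEvent<string>) => {
  seen.add(event.data);
};

Deno.serve((_req) => {
  // Announce ourselves so other isolates can count us too.
  channel.postMessage(isolateId);
  return new Response(
    `served by isolate ${isolateId}; isolates seen so far: ${seen.size}\n`,
  );
});
```

Hammer it with a load-testing tool and watch how many distinct IDs show up as traffic ramps.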
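And for point 2, a minimal sketch of what "set the appropriate HTTP caching headers" looks like in a handler; the max-age values are just placeholders:

```ts
Deno.serve((_req) =>
  new Response(JSON.stringify({ hello: "world" }), {
    headers: {
      "content-type": "application/json",
      // Let the client (and any CDN in front) cache for 60s, and keep serving
      // a stale copy for up to 5 min while it revalidates in the background.
      "cache-control": "public, max-age=60, stale-while-revalidate=300",
    },
  })
);
```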
What latency issues are you encountering? I'm curious to hear your experience.
Hey, thanks for your response! I think it happens the first time the functions are deployed, as if the dependencies were being downloaded before the request can run.
dependencies should not be downloaded at runtime
at deploy-time, your dependencies are bundled automatically
hmmm ok, it just feels awfully similar to hitting an AWS Lambda function that has been long unused: the first response takes a while to reach the client, yet all subsequent responses work as expected...
I'm sure someone on the Deploy team could advise, but from my experience, even with projects with a lot of dependencies, I'm getting latencies way below 100 ms
I am doing queries to a DB, perhaps it is due to that tbh....
Interesting, depending on the queries that could be relevant
both could matter: the query itself, the distance to the DB, and whether there is already an open connection to the DB
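On that last point, a rough sketch of what "a connection already with the DB" means in practice: open the pool at module scope so only the first request in a fresh isolate pays the connection/TLS cost. This assumes the deno-postgres driver; the env var name, pool size, and query are just placeholders:

```ts
import { Pool } from "https://deno.land/x/postgres@v0.17.0/mod.ts";

// Created once per isolate, not once per request; `true` makes it lazy, so the
// first query establishes the connection and later requests reuse it.
const pool = new Pool(Deno.env.get("DATABASE_URL"), 3, true);

Deno.serve(async (_req) => {
  const client = await pool.connect(); // reuses an existing connection when available
  try {
    const result = await client.queryObject`select now()`;
    return Response.json(result.rows[0]);
  } finally {
    client.release();
  }
});
```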
Which DB? I've seen DynamoDB exhibit cold-start-like connection times the first time it was used in an isolate, for example
We're using PostgreSQL via Supabase
I use CockroachDB Serverless with Deno Deploy. Anecdotally, responses are snappy with my Deploy projects that don't use any backend DB. On my app backed by CockroachDB Serverless, there is latency on the first request, then they are quick thereafter.
I don't want to state that's the problem you're having, just my experience.
Is that an HTTP connection to your CockroachDB?
Hey, it's an HTTP connection to Postgres, yeah