Vercel Serverless Functions vs. Cloudflare Workers (moiva.io)
74 points by alexey2020 on March 25, 2021 | hide | past | favorite | 35 comments


Great post, enjoyed your writing style and drawings a ton!

One of the bigger things I think Workers have going for them versus others is their ability to bind WASM modules. Hyper-efficient, basically native-speed computation at the edge is a really cool concept, especially since you can talk to it with a dead-simple JavaScript API.

For example, Cloudflare charges a good amount of money for image resizing through their CDN, however they implement it. At the same time, they also have a proof-of-concept Worker that uses WebAssembly to do it basically for free! [1]
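To give a feel for how simple the JavaScript side is, here's a rough sketch (just the plain `WebAssembly` API with hand-assembled module bytes for an `add` function - not the actual demo's code; the Workers runtime exposes the same API, though there you'd normally bind a compiled module via wrangler instead of inlining bytes):

```javascript
// A minimal hand-assembled WASM module exporting add(a, b) -> a + b.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

async function run() {
  const { instance } = await WebAssembly.instantiate(wasmBytes);
  return instance.exports.add(2, 3); // -> 5
}
```

The JS caller doesn't care that the export was compiled from Rust, C, or anything else - it's just a function.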

Another cool demo that was on HN the other day embeds SQLite with WASM for quick transactions at the edge. [2]

Completely mind-blowing... Still, there are very few demos in the wild. The learning curve is a beast, and hopefully it gets better as Rust and WASM become more mainstream for web developers.

[1] https://github.com/cloudflare/cloudflare-workers-wasm-demo

[2] https://github.com/lspgn/edge-sql


Wow, thanks for the insight and ideas! Agreed, having a natively running runtime at the edge can give a start to some interesting projects.


> basically native running computation at edge is a really cool concept

What are some real-world use cases for this? I'm surprised the Cloudflare product team is prioritizing work on WASM module support instead of, say, querying a database through something other than `fetch()`.


Wow, this was extremely informative for me! Cleared up a caching concern I had. Thanks for sharing.

I've had good success using the free tier of Vercel functions to handle the low-traffic storefront and user accounts for offsetra.com - just wrapper functions around Stripe and Firestore. It's a godsend for independent, unskilled, time-constrained front-end devs like me!


One important piece missing from this article is that on Vercel you do not get global Serverless functions on any plan except the Enterprise plans. By default you can pick one preferred region for your Serverless functions and that's the region that's always used. In practice, assuming you have a somewhat decent caching strategy, this doesn't really matter as far as latency is concerned. Where it could potentially matter is that AWS region having an outage and now you can't fallback to another. We deploy all our functions to at least two regions and Vercel does handle region failover in this case.

Disclaimer: I'm a Vercel enterprise customer


Unless it was edited later, it's in there, about 1/4 of the way down.

"Vercel doesn’t replicate Functions across their Network in Free and Pro accounts - Functions can be deployed to one particular region only. Enterprise plan users can specify multiple regions for Serverless Functions."


Right. It's not missing. I pointed that out in the "Serverless Functions requests handling" section, also visually.


If you store data in Firestore, DynamoDB, Postgres, or similar... does it really matter if the function is distributed?


If there is no cached data, then it doesn't matter.

With Vercel it doesn't matter even in case there is valid cached data, because Vercel doesn't execute the function in that case.

Cloudflare always executes the Function regardless of the existence of a cache, and it's the Function's responsibility to respond with cached data. Hence, distributed Cloudflare Functions are a necessity.
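To illustrate the pattern (this is a simulation with a plain Map standing in for the cache, not the actual Workers `caches.default` API): the function itself checks the cache first and only goes to the origin on a miss:

```javascript
// Simulated cache-first handler: the function, not the platform, consults the
// cache. On Cloudflare Workers you'd use caches.default.match/put instead of
// a Map, but the responsibility split is the same.
async function handleRequest(url, cache, fetchOrigin) {
  const cached = cache.get(url);
  if (cached !== undefined) {
    return cached;                         // serve from cache; origin untouched
  }
  const response = await fetchOrigin(url); // cache miss: hit the origin
  cache.set(url, response);                // populate cache for next time
  return response;
}
```

Since the function runs on every request, it has to be close to the user - otherwise you'd pay the round trip even on a cache hit.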


Thanks for the feedback! Caching... it took me a while to get my head around it. With Vercel it works more or less the way I imagined. It surprised me that Cloudflare has a different approach. But once I got it, it started making sense and I like it :)

Good luck with your project!


Can someone knowledgeable please explain where these workers are useful?

Serverless components within a main infra make sense - it's an easier way to deploy.

But these 'edge' functions ... what is the advantage of saving a few ms on a transaction?

I understand that we may want standard content pushed out to the edge, but in what situation is it really worth all the added complexity and risk of pushing functions out to the edge, to save a few ms?


We’re using it to customize Cloudflare’s default caching policies so that we can cache more content at the edge. For example, we can segment the cache based on the geolocation or device type of the client. We can also normalize the URLs before doing the cache lookup, by stripping query params which we know aren’t going to affect the content in the response.

This can save hundreds of ms from the response time of the initial HTTP request, which means all of the other page resources will load more quickly too.

It does add some additional complexity, but for large sites and hosting platforms, this can have very significant cost savings. It’s usually way cheaper to serve bytes from Cloudflare’s cache than to serve them from your origin.
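A rough sketch of the URL-normalization part (the function name and the list of ignored params here are made up for illustration - which params are safe to strip depends entirely on your site):

```javascript
// Hypothetical cache-key normalization: strip query params that don't affect
// the response, so more requests collapse onto the same cached entry.
const IGNORED_PARAMS = ['utm_source', 'utm_medium', 'utm_campaign', 'fbclid'];

function normalizeCacheKey(rawUrl) {
  const url = new URL(rawUrl);
  for (const p of IGNORED_PARAMS) url.searchParams.delete(p);
  url.searchParams.sort(); // param order shouldn't create distinct entries
  return url.toString();
}
```

In a Worker you'd build a new `Request` from the normalized URL and use that for the cache lookup, while still forwarding the original URL to the origin if needed.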


One use case is validating auth tokens at the edge so you can edge cache API responses that require auth.


I wonder this too, given that ultimately 99% of non-static sites will need to reach out to a central database. So to render a dynamic page, your worker then has to go to the DB, no?

Curious what is the use case. You can cache stuff, as detailed in the post, but assuming you have huge variance in page contents per user, I can't see too much use. I must be missing something.


A lot of applications are read-heavy on their database. When that's the case, you can serve reads from a read-only replica closer to the edge. There are other options, but depending on your application's architecture this might be a relatively easy way to reduce latency on a large portion of your traffic.


Why not use a database with a worker?

https://www.cloudflare.com/products/workers-kv/


Key/value store doesn't really have joins, aggregation, tables, references, etc.


> in what situation is it really worth all the added complexity of risk of pushing out functions to the edge

If you are talking about the developer's point of view, then there is no additional complexity. All the complexity is handled by the underlying platform.

> what is the advantage of saving a few ms on a transaction?

One example: if a transaction consists of a few separate sequential requests, then the milliseconds add up and might affect user experience. Also, an app might need to issue lots of requests on page load, and given that there is a limit on parallel requests (6 per domain), the advantage might be noticeable.
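The "ms add up" point can be sketched with fake timed requests: issued sequentially they take roughly the sum of their latencies, issued in parallel roughly the max:

```javascript
// Simulate three 50ms "requests", first sequentially, then in parallel.
const delay = (ms) => new Promise((res) => setTimeout(res, ms));
const fakeRequest = async (ms) => { await delay(ms); return ms; };

async function compare() {
  let t = Date.now();
  await fakeRequest(50); await fakeRequest(50); await fakeRequest(50);
  const sequentialMs = Date.now() - t; // roughly 150ms: latencies sum up

  t = Date.now();
  await Promise.all([fakeRequest(50), fakeRequest(50), fakeRequest(50)]);
  const parallelMs = Date.now() - t;   // roughly 50ms: latencies overlap

  return { sequentialMs, parallelMs };
}
```

If each hop is a cross-continent round trip instead of 50ms, shaving latency per hop (or reducing the number of sequential hops) matters much more.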

Having said that, I tend to agree that many use cases are not sensitive to a few ms of advantage.


> there is a limit on parallel requests (6 requests per domain)

This is only the case if you are using HTTP/1.1: https://stackoverflow.com/a/45583977/11383840


It's often not actually more complex, it's simpler. With Cloudflare Workers, for example, you don't think about regions, availability zones, provisioning resources, or cold starts. You just write code, and it can scale from one request per second to thousands without any thought or work on your part, partially because of how it's designed and partially because it's scaled across so many locations and machines.


> You just write code

Except you need to do it in a totally new paradigm (serverless) where you can't require any `npm` packages and you can't query your database unless it's over the `fetch()` API.


> where you can't require any `npm` packages

You just have to introduce a build tool like ncc (https://github.com/vercel/ncc), which will bundle your app into a single JS file.


Serverless platforms are not all the same. Cloudflare uses V8 and you can't require npm packages, right. Vercel and many other implementations use Node.js, and you can require npm packages.


And you're limited to 50 sub-requests.


If it's a real issue and you have to issue lots of subrequests, then you don't really get an advantage from all of Cloudflare's micro-optimisations. In that situation I would suggest looking at other Serverless providers, or maybe a traditional approach works better in such a case.


I had been using Vercel for a Next.js SSR deployment up until this week, when I moved it to a basic AWS Lightsail box with no real NGINX optimisations.

I have the Lightsail server in Frankfurt and I am in Sydney, and the Lightsail box gets a higher PageSpeed score than the Vercel deployment; from my own anecdotal usage, the page load is noticeably faster. I had the Vercel region set to Paris (no Frankfurt region yet).

I loved the simplicity of Vercel, especially the per-branch deployments (which I'm still using on the free tier), but it was surprising that for all the serverless boasts, it's not actually any faster than a basic server.


It's a little more complex than that. Naturally an 'always running' server is faster when you're not getting a cache hit or you're running into a Lambda cold start. But for stuff served from CDN cache it won't make any difference. Vercel/nextjs are geared towards encouraging you to make everything static so that it does get served that way.

If you need to generate every part of your page to be user-specific then I would say that's a different use case and nextjs isn't necessarily the right tool.

That said, you can actually do some pretty dynamic pages with it. You should try out what they call 'Incremental Static Regeneration'. It's basically the SWR pattern, but for server-side rendering.


> Vercel/nextjs are geared towards encouraging you to make everything static so that it does get served that way.

That would mean they have no reason to exist. If they're slower than a regular server for dynamic content and only as fast as a regular CDN for static pages, they're beaten by the old server + CDN combination.

The niche for rendering on the edge is really incredibly narrow. Take one step further and you have client-side rendering, take one step back and you're already on a server. I'm not surprised people have trouble finding use-cases, and when they do it often turns out they would've been better served by one of the two other solutions.


I'll have to disagree.

I use Next because it offers SSR plus the hydration step for a fully fledged React app on the client, and the ease with which you can pass stuff between the two. SSR for paint speed and SEO/bots/meta/whatever, and the rich client-side app functionality that people expect these days. A traditional server-side-render-only approach doesn't make that as easy, IMO.

I use Vercel for the DX, mostly. I can copy an old project repo and have a fully functioning new project site up in literally 20 mins. Same reason people use Netlify.

Also, they're not necessarily slower for dynamic content - only noticeably so if you hit a cold start, really. But that's just the normal serverless/Lambda caveat: in exchange for intervention-free scaling, one request in n is a little slower during scale-up events. You can always put your Next app on EC2/EB/etc. if it worries you. I'm more pro-Next.js than pro-Vercel.


Yes I'm sure it's down to the cold start issue of Lambda. Are there any tips on how to keep it warm?

I didn't go down the static route as I have tens of thousands of product pages. Will check out the Incremental Static Regeneration - thanks.


So yeah in that case I would do Incremental Static Regeneration but don’t pass all your product paths at build time. Instead you can use the fallback mechanism to render individual product pages on request, but then serve the result statically thereafter. That way you don’t build tons of pages at build time that people may not view anyway, but the pages that do get views will stay fresh and fast.

If you were using your own Lambda you could set provisioned concurrency to avoid cold starts, but I don't think Vercel gives you that level of control. The main thing to do is try to serve as much statically as possible so that it reads straight from the CDN cache or static file store and that way it never hits your Lambda on requests.

With ISR the Lambda updates the rendered file in the background, rather than in response to a request (generally), and the user is just served the last generated render. So in that scenario the Lambda latency is taken out of the equation.

If you know which products are most important you could also render a subset of them at build time and let the fallback handle the long tail ones.
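A rough sketch of what that looks like in a product page's data-fetching functions (`fetchProduct` is a made-up stand-in for a real data source; in an actual app these would be `export`ed from something like pages/products/[id].js):

```javascript
// Hypothetical data source; in reality this would hit your DB or API.
async function fetchProduct(id) {
  return { id, name: `Product ${id}` };
}

async function getStaticPaths() {
  return {
    // Pre-render only the most important products at build time...
    paths: [{ params: { id: '1' } }],
    // ...and render the long tail on first request, caching the result.
    fallback: 'blocking',
  };
}

async function getStaticProps({ params }) {
  const product = await fetchProduct(params.id);
  return {
    props: { product },
    revalidate: 60, // regenerate in the background at most once a minute
  };
}
```

With `fallback: 'blocking'`, a product page that wasn't pre-built is rendered once on demand, then served statically until `revalidate` triggers a background refresh.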


Great, thank you!


Former ZEIT (Vercel) employee. It's amazing how much Vercel is different from when I was there.


How big is the latency between these edge locations and AWS and GCP regions?

Since if the DB is hosted on AWS or GCP, then there would be latency going to that too.

That's what I'm really curious about.

Vercel being mainly on AWS would make the latency to the DB optimal if it is hosted on AWS in the same region. But what if the DB is on GCP?


Noob question, but how do you handle databases and auth with these functions? I know how to do it in my comfy monolith, but not sure what the best approach is here.



