Edge Infrastructure & The Rebel Cloud

Chase Roberts
8 min read · Mar 2, 2023

The term “edge” in software infrastructure has become bloated. Does edge computing describe IoT? Or is it Lambda Functions? What about serverless? Perhaps web3 enthusiasts have also smuggled this term into their vocabulary. To clarify a definition of edge infrastructure, my partner Megan and I assembled some of the smartest minds from edge computing for a discussion about this rising infrastructure paradigm:

Guillermo Rauch, Co-founder & CEO of Vercel

Kurt Mackey, Co-founder & CEO of Fly.io

Nicholas Van Wiggeren, VP of Engineering at PlanetScale

We posted this discussion for your enjoyment on YouTube, and I thought I’d draw out some key learnings here. What follows is a definition of edge computing and why it matters. Then we’ll dig into working with data at the edge, where “serverless” fits into this paradigm, and the broader implications of The Rebel Cloud. Buckle up: I promise at least one Star Wars reference.

WTF is the edge

Edge is a cloud-native way of running code close to the user. -Guillermo

Nick surfaced a useful analogy for thinking about edge computing: imagine a string connecting a user to an application server and a data store that service the user’s requests.

The speed at which a user receives a response depends on the distance between the user and the server. Companies built new data centers and increasingly moved workloads to cloud providers like Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Alibaba Cloud — which already have multiple data centers around the globe — to distribute applications closer to their users. The result? Reduced latency (and more strings).

Edge computing extends this trend by installing more application servers and, in some cases, data stores closer to users. And no, not every user gets their own data center.

Edge infrastructure relies on more physical hardware locations — thanks to the incumbent cloud service providers and newer entrants like Fly.io and Cloudflare Workers. But this trend isn’t just about building more data centers: modern browser technologies, runtimes, and frameworks play an important role too. With the rise of JavaScript, edge-native runtimes like Node.js, Deno, and Bun (enabled by JavaScript engines V8 & JavaScriptCore), and modern React frameworks like Next.js, Remix, Svelte, and Astro, it became easier to run performant, secure applications closer to the users. WebAssembly (WASM) is another technology we’re watching that enables microservices to run anywhere.

Consider the pendulum effect at play. In the early days of React, the pendulum swung to default client, meaning developers wrote most of their application logic in the client. JavaScript enabled single-page applications (SPAs) that didn’t require as many roundtrips to servers. The default-client wave resulted in bloated SPAs full of feature flags, components, and styles that eventually collapsed under their own weight. The pendulum then swung back to default server, meaning developers wrote most of the application logic on the server. The downside of this design pattern is that network latency determines performance, which suffers if a user has a weak internet connection or is far from the application server.

The pendulum is swinging again and, as Guillermo describes it, “making a pit stop at the edge before it goes to origin.” Developers push parts of their applications closer to the users with new frameworks, infrastructure providers, and database technologies that offer the benefits of the client- and server-side paradigms but with fewer tradeoffs. React Server Components (RSCs) could advance this paradigm even further by running components on the server when there is new data, limiting the code sent to the client. RSCs don’t replace server-side rendering (SSR) but complement it by fetching data and re-rendering content so client components can do what they do best: stateful interactivity (Chung Wu’s blog on RSCs is 🏆).

But what about data?

I’m glad you asked. There will always be applications that require a consistent data store. For example, if I purchase something using my credit card, my credit card provider needs to update a centralized ledger with a record of that transaction. Otherwise, I could blow through my credit limit millennial style. Updating this data store requires a roundtrip between the client and the database, and moving data ain’t free. But do I need to call a data store for every application? Nick offers a framework to answer this question:

Start by asking, “what data do I have, and what data do I need?”

Enter the concept of eventual consistency. Imagine an e-commerce website fielding orders on Black Friday. Not all of the data in this application needs to be fetched from the origin for every new user session. For example, product descriptions could be cached at the edge and updated after Black Friday when network costs are lower (like Saturday night when everyone is at the club, or in my case, watching Drive to Survive on my couch). Nick offered another example of edge concepts applied to data that's relevant to any product with logged-in users collaborating.
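The cache-at-the-edge idea can be sketched as a small TTL cache sitting between users and the origin. This is a toy model, not any vendor’s API: `EdgeCache`, `origin_fetch`, and the catalog are all illustrative names, and Python is used purely for brevity.

```python
import time

class EdgeCache:
    """Tiny TTL cache standing in for an edge node's local store."""

    def __init__(self, origin_fetch, ttl_seconds):
        self._fetch = origin_fetch      # roundtrip to the origin (expensive)
        self._ttl = ttl_seconds
        self._store = {}                # key -> (value, expires_at)
        self.origin_hits = 0            # how many requests paid the roundtrip

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]             # served from the edge, no roundtrip
        self.origin_hits += 1
        value = self._fetch(key)
        self._store[key] = (value, time.monotonic() + self._ttl)
        return value

# Hypothetical origin: the centralized product catalog.
CATALOG = {"sku-42": "Waffle maker, now 30% off"}
cache = EdgeCache(lambda k: CATALOG[k], ttl_seconds=3600)

print(cache.get("sku-42"))   # first request pays the origin roundtrip
print(cache.get("sku-42"))   # subsequent requests are served at the edge
print(cache.origin_hits)     # -> 1
```

On Black Friday, thousands of sessions would hit the cached description; the origin is consulted once per TTL window, which is exactly the eventual-consistency tradeoff described above.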

You may require a consistent, centralized database for usernames, passwords, profiles, etc. Once a user logs in, you can design a system that grants users temporary tokens and lets users act in the system without validating those users with a centralized database for every page load. If you can build a system like this, you can connect two users on the shortest path, remove that data store from the system, and let them have a great time.
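A minimal sketch of that token scheme, assuming an HMAC-signed token (the same idea behind JWTs): the central auth service does one database lookup at login and issues a signed token, and every edge node can then validate requests locally. The secret, field names, and TTL are all hypothetical, and Python’s stdlib is used only for illustration.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-edge-secret"   # hypothetical key distributed to edge nodes

def issue_token(user_id, ttl_seconds=900):
    """Central auth service: one database lookup, then a signed token."""
    payload = json.dumps({"sub": user_id, "exp": time.time() + ttl_seconds}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload) + b"." + base64.urlsafe_b64encode(sig)

def verify_at_edge(token):
    """Edge node: validate locally -- no roundtrip to the central database."""
    payload_b64, sig_b64 = token.split(b".")
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        return None                     # tampered token
    claims = json.loads(payload)
    return claims if claims["exp"] > time.time() else None  # None if expired

token = issue_token("user-123")
print(verify_at_edge(token))                # valid: returns the claims dict
print(verify_at_edge(token[:-2] + b"xx"))   # tampered signature: None
```

Once the token is issued, two collaborating users can be connected on the shortest path while the centralized user database stays out of the hot path entirely.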

Enter edge-native databases. Embedded in-process databases like DuckDB and SQLite enable analytical and transactional processing, respectively, within a single process. These databases are exciting because they enable developers to run both the applications and the databases closer to the users, delivering better performance for server-side dynamic web pages. Revisiting the e-commerce application example, I could store product descriptions closer to the user using an in-process database. The developer experience feels like shipping spreadsheets around the world (check out new offerings from Cloudflare & ChiselStrike). For centralized relational databases, MySQL and Postgres reign supreme, but setting up these databases for global scalability is a hero’s task. Fortunately, newer database vendors, including PlanetScale and Neon, simplify this task for developers by abstracting the underlying infrastructure for scale.
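To make the in-process idea concrete, here is a sketch using Python’s built-in `sqlite3` module: the “database server” is just a library call inside the edge application’s process. The catalog schema and SKUs are invented for the example; in practice the database file would be replicated to each region rather than created in memory.

```python
import sqlite3

# An edge node keeps a local, in-process copy of the product catalog.
# ":memory:" stands in for a replicated file shipped to each region.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (sku TEXT PRIMARY KEY, description TEXT)")
db.executemany(
    "INSERT INTO products VALUES (?, ?)",
    [("sku-42", "Waffle maker"), ("sku-43", "Espresso grinder")],
)
db.commit()

# Reads are local function calls -- no network hop to a central database.
row = db.execute(
    "SELECT description FROM products WHERE sku = ?", ("sku-42",)
).fetchone()
print(row[0])  # -> Waffle maker
```

There is no connection pool, no network latency, and no server to operate, which is what makes “shipping spreadsheets around the world” feel like an apt description.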

Speaking of infrastructure abstraction… Serverless

This brings us to the topic of serverless. Nick described serverless as an operating framework and analogized this concept to agile software development. Agile doesn’t describe a technology but rather an operating principle: we don’t plan too far ahead, we ship frequently, we expect iteration, and we respond to evolving requirements. Similarly, serverless is an operating framework that applies concepts like scaling to zero and abstracting infrastructure for scale. Kurt sets up this idea:

When you push applications all over the world, all of a sudden, you care about what’s turned on and when. When people talk about serverless, they say, “obviously, I’m not going to keep 140 processes and 140 replicas of my database running all around the world at all times.” It’s not that these concepts are interchangeable, but the truly interesting aspects of serverless are important for global workloads. This almost always reduces to when is my stuff running and when is it not? And when do I have data somewhere, and when do I not?

Serverless is rightly intertwined with edge computing because globally distributed applications raise concerns about what is running where and when, and how to deploy these applications without endless complexity. Guillermo distills the idea that edge has inherited the properties of serverless:

In order to make viable the dream of executing dynamic compute with a similar cost and operational model as a static CDN, the abilities to scale to zero and scale up quickly typically go hand-in-hand.

Essentially, serverless is an operating principle that should be considered when designing software applications for global deployment.
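Kurt’s “when is my stuff running and when is it not?” question can be modeled with a toy scale-to-zero instance: it cold-starts on the first request and shuts itself down after sitting idle. The class, timeout, and request names are invented for the sketch; real platforms make this decision in their schedulers, not in application code.

```python
import time

class ScaleToZeroApp:
    """Toy model of a serverless instance: cold-starts on demand and
    scales to zero after an idle period."""

    def __init__(self, idle_timeout):
        self.idle_timeout = idle_timeout
        self.running = False
        self.last_request = None
        self.cold_starts = 0

    def handle(self, request):
        self._reap_if_idle()
        if not self.running:
            self.running = True          # cold start: boot the instance
            self.cold_starts += 1
        self.last_request = time.monotonic()
        return f"handled {request}"

    def _reap_if_idle(self):
        if self.running and time.monotonic() - self.last_request > self.idle_timeout:
            self.running = False         # scale to zero: nothing running, nothing billed

app = ScaleToZeroApp(idle_timeout=0.05)
app.handle("req-1")          # cold start
app.handle("req-2")          # warm: instance is already running
time.sleep(0.1)              # idle long enough to be reaped
app.handle("req-3")          # cold start again
print(app.cold_starts)       # -> 2
```

Multiply this by 140 regions and the appeal is obvious: regions with no traffic cost nothing, and the tradeoff becomes cold-start latency versus idle spend.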

The Rebel Cloud

While adoption patterns suggest major cloud service providers will dominate cloud workloads, developer experience remains an important consideration. Ask a modern developer their preference between Neon and Amazon RDS, and you’ll see eyes light up (🤩) for the former and roll (🙄) for the latter. Ask an engineering or security leader if they’d prefer an amalgamation of best-of-breed cloud technologies that are poorly integrated and require independent security evaluations or a single integrated provider that requires one compliance review, and you’ll see eyes 🤩 and 🙄 in the opposite direction. With a single cloud provider, you trade off developer experience at the service level for a unified experience at the aggregate infrastructure level.

There is a rising class of cloud infrastructure providers that deliver a slice of the stack, but none of them represent one-stop shops for customers. Enter The Rebel Cloud, which Kurt describes as follows:

The next public cloud is a rebel alliance between us three and 47 other companies that have built a special user experience for a deep problem vs. cobbling together 400+ products with widely variable developer experiences that might occasionally be good.

And it’s not just solving the day one developer experience, but also the developer experience at scale and over time. Guillermo illustrates this tension with an example:

How do you guide organizations of sizable scale into the right design patterns on day 1? PlanetScale is fast by default. But if you forget to use an index or do a crazy left join, there is a price there. How do folks inside of an organization negotiate those tradeoffs and the awareness that they’re making these tradeoffs?… Everyone launches something new and it’s fast and cool on day 1, but what is the DX on Day 100?

It’s the “day 100 DX” that companies must determine how to solve together. This alliance also requires simplifying risk and compliance for buyers — something the cloud providers do well. Once you approve a single cloud provider, you gain access to an ecosystem of services. We need an equivalent experience for the rising cohort of infrastructure providers.

To realize this dream of an alternative cloud composed of multiple independent companies, business development (BD) between these companies will be required to deliver a cohesive developer experience (DX). BD is the new DX.

Edging forward

The edge matters because embracing this design pattern means future-proofing applications to be global by default. CDNs were born to store static files next to the user, resulting in better performance. Modern architectures enable the same effect but with dynamic applications. The edge can reduce costs with emerging compute runtimes optimized for fast cold boots and scaling to zero. Moving data closer to the user minimizes network costs due to fewer roundtrips. The next generation of infrastructure providers employs serverless and edge concepts as design principles, delivering better developer experiences. If these new providers embrace BD in service of unified experiences on Day 1 and 100, we might just see a Rebel Cloud emerge.

The Rebel Cloud Alliance (TM)

Thanks to Nick, Kurt, and Guillermo for the insights and Megan for helping explore them. These are their ideas — I’m simply the messenger.