Building a global deployment platform is tough, here is why

If you have ever tried to go global, you have probably faced a reality check. A whole new set of issues starts to appear once you begin to operate a workload over multiple regions across the globe:
- Orchestrating deployments across regions is difficult: what happens when you want to deploy a new version of your app and it is deployed correctly in some regions but not in others?
- You need to secure the traffic between your components, distributed all over the world: what technology do you choose? You need to maintain that now, too!
- Your distributed systems might stop working: when agents need to coordinate over a network, they often require low latency. HashiCorp's Consul, for instance, requires the average RTT for all traffic between its agents to never exceed 50ms.
- Managing the networking layer requires expertise. How can you make sure that your components can correctly communicate with each other, especially if there are thousands of kilometers between them or if some of them become unresponsive?
- And there are even more challenges!
So it sounds like a great idea in theory, but in practice, all of this complexity multiplies the number of failure scenarios to consider!
Building a multi-region engine, so you don’t have to
We previously explored how we built our own Serverless Engine and a multi-region networking layer based on Nomad, Firecracker, and Kuma. Put on your scuba gear: this is a deep dive into our architecture and the story of how we built our own global deployment engine!
Step 0: Our original engine supported deployments in only one region
Multi-region or not, whenever you want to deploy an application on our platform, it all begins with a POST call against our API with the desired deployment definition. A deployment definition describes how your app should be deployed and roughly looks like this:
{ "name": "my-cool-website", "type": "WEB", "routes": [ { "port": 3000, "path": "/" } ], "ports": [ { "port": 3000, "protocol": "http" } ], "docker": { "image": "docker.io/koyeb/demo", "command": "", "args": [] } }
Our API server stores this in a database and a Golang worker starts an elaborate boot process. The schema below describes the components at play.
*Original architecture we ran to manage apps*
Our tech stack is centered around Nomad, Kuma and good ol’ Golang
That is a lot of components, isn’t it? If you’re curious about our core engine, we previously wrote about why and how we built it on top of bare metal servers with Nomad, Firecracker, and Kuma.
In the meantime, let's quickly go over what each of these does:

- Workers: a set of long-running Golang programs. They orchestrate the boot process by talking to a bunch of services to bring an Instance to life
- APIs: gRPC webservers, written in Golang. They are the layer around the database used to manipulate our resources
- A database: your typical PostgreSQL database
- Harbor: a container image registry. It holds the container images that we build out of GitHub repositories
- Nomad: a flexible scheduler and orchestrator. It can deploy and manage jobs on servers. We use a custom driver to make it deploy Firecracker microVMs, where each microVM is an Instance (a short sketch follows this list). It is split into two parts:
  - Nomad Servers: a set of Golang programs. Nomad Servers expose an API to deploy, upscale, edit and delete jobs
  - Nomad Agent: a lightweight Golang program that runs on a machine. It is the one that actually spawns the needed microVM(s). It constantly chit-chats with a Nomad Server: it takes orders and reports the current state of the Jobs on its machine

  Nomad Agents and Nomad Servers work hand in hand to ensure that, at all times, the required applications are running across the fleet of servers. If a machine fails, the Nomad Server will ask Agents on other machines to take over the work
- Kuma: a service mesh. It powers the network layer of Instances: a mesh in which all Instances of a user can communicate with every other Instance through robust, secure, private networking.
  - For each Instance, Kuma provisions a sidecar. It intercepts inbound traffic and treats it as dictated by the mesh configuration. For example, if the microVM were to receive a request from an unauthorized peer, the sidecar would deny it
  - Every sidecar must be connected to a zonal control plane to retrieve its mesh's configuration. In turn, all zonal control planes communicate with a global control plane to synchronize their view of the world

Both Nomad Agents / Nomad Servers and Kuma sidecars / Kuma zonal CPs are constantly talking to each other. They are the core parts needed to boot and manage the daily life of Instances.
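To make the Nomad part more concrete, here is a minimal sketch of what registering such a job could look like with Nomad's official Go API. This is not our production code: the driver name, the job fields and the address are illustrative placeholders, and our custom Firecracker driver accepts a richer configuration.

```go
// Minimal sketch: registering a Nomad job that a custom Firecracker task
// driver would turn into a microVM. Driver name, fields and addresses are
// placeholders, not our actual job specification.
package main

import (
	"fmt"
	"log"

	nomad "github.com/hashicorp/nomad/api"
)

func main() {
	cfg := nomad.DefaultConfig()
	cfg.Address = "http://127.0.0.1:4646" // a Nomad Server endpoint

	client, err := nomad.NewClient(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// One task handled by a hypothetical custom driver instead of "docker".
	task := nomad.NewTask("my-cool-website", "firecracker-driver")
	task.SetConfig("image", "docker.io/koyeb/demo")

	group := nomad.NewTaskGroup("web", 1) // count = number of Instances
	group.AddTask(task)

	job := nomad.NewServiceJob("my-cool-website", "my-cool-website", "global", 50)
	job.Datacenters = []string{"dc1"}
	job.AddTaskGroup(group)

	// The Nomad Servers schedule the job; Agents spawn the microVMs.
	resp, _, err := client.Jobs().Register(job, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("evaluation ID:", resp.EvalID)
}
```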
We wanted to deploy a region in North America and had plans to expand all around the world after that. We couldn't let these core components and our machines communicate over the Atlantic or other long distances 🙂. Indeed, the network bandwidth would have been costly and the latency of the Nomad and Kuma streams would have gotten too high. And since we planned to build regions all around the world, this problem was bound to happen again and again.
So, we had to build a multi-region system!
Step 1: Agreeing on a target vision for a multi-region engine
Major architectural changes like this have a long-lasting impact: these decisions can stick around for ten years.
We needed a future-proof architecture that would hold its ground for at least two or three years to come and support at least 25 locations, and potentially up to 100.
Our goals: efficiency, agility, resiliency (aka better, faster, stronger)
We first laid down our requirements and wishes. Our three main wishes were to:
- Provision new regions fast: our goal is to have dozens of locations available for our users, so it should be business as usual to spawn new locations
- Sustain partial outages:
- A failing region should not bring down our infrastructure as a whole
- A failure in Koyeb components should not affect the workloads of our users
- Reach our target architecture gradually: we wanted to start deploying new regions in the coming months, not years. Hence, we wanted to ship a first version fast that we could then iterate on and improve over time
Given these requirements, we started exploring different ideas.
- Decentralized architecture: We have plans to deploy components all around the world. The physical distance between these components will lead to high latencies, which would make this architecture challenging to maintain.
- Dedicated virtual machines per region: We thought about having dedicated virtual machines, hosting just some core services like Kuma, near the bare metal machines. It was tempting because of the low cost. However, we ruled it out too: we figured we would probably need to enlarge those VMs to host more and more services over time, which wouldn't be flexible and would come with a non-negligible maintenance cost.
*Some early concepts that we had*
Topology: global, continental, regional and data center level components
In the end, we settled on a federated-like hierarchy to distribute our systems around the world: one global component, to which are attached a few smaller components, to each are attached a few smaller components, and so on.
The great thing about federation is that it is simple. The problem is that your global component does not scale well. We tried to shoot for something “in-between” that would allow us to move the stuff that does not scale well from the top-level components to the lower-level ones.
We defined four kinds of component scopes: global, continental, regional, and data center-level, and settled on the following topology:
*Target topology*
- A data center (e.g. `was1`) would be an aggregate of one to thousands of bare metal servers
- A region (e.g. `was` for Washington) would be:
  - a lightweight control plane made of Nomad Servers and Kuma zonal control planes…
  - …controlling a number of data centers, all geographically close to one another
- A continent (e.g. `na` for North America) would be a deployment cluster. It would have no inherent value but would be a platform where we could host the control planes of regions in such a way that these are geographically close to the data centers they manage
- Finally, our global Kubernetes cluster would be the global control plane for the World™. It would host resources that are unique across the platform: an account on our platform, the definition of an App, our main database, the billing system…
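To make the hierarchy easier to picture, here is a tiny sketch of how it could be modeled in code. These types and the resolution helper are illustrative assumptions, not our actual data model.

```go
// Illustrative sketch of the location hierarchy: a data center belongs to a
// region, and a region's control plane is hosted on a continental cluster.
// Types and names are assumptions made for this example.
package topology

// Continent identifies a continental deployment cluster, e.g. "na" or "eu".
type Continent string

// Region is a lightweight control plane (Nomad Servers + Kuma zonal control
// plane) hosted on a continental cluster.
type Region struct {
	Name      string    // e.g. "was"
	Continent Continent // e.g. "na": the cluster hosting its control plane
}

// DataCenter is an aggregate of bare metal servers attached to a region.
type DataCenter struct {
	Name   string // e.g. "was1"
	Region string // e.g. "was"
}

// ControlPlaneCluster resolves the continental cluster hosting the control
// plane that manages a given data center.
func ControlPlaneCluster(dc DataCenter, regions map[string]Region) (Continent, bool) {
	r, ok := regions[dc.Region]
	if !ok {
		return "", false
	}
	return r.Continent, true
}
```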
Designed for low latency where it matters: for now, our most important need is low latency between the data centers and Nomad/Kuma.

- Continental clusters would host regional control planes, guaranteeing all data centers have a latency ≤ 40ms (at most ≤ 60-70ms) to their respective control planes.
- Our APIs and workers would stay on the global cluster in 99% of the cases. This is fine because they would perform synchronous, but not latency-critical, calls to the regional control planes.
- As a rule of thumb, bare metal machines would … and users of the platform would …
Gradually reach the target infrastructure: with this design, we have low latency for our most critical components. Tomorrow, we can go further. The endgame over time is to move more and more stuff from the global cluster to lower-level components.
Failure scenarios of this architecture
The way we answered the reliability problem was to consider each region as an independent satellite.
If a machine fails, the reliability of applications is ensured by rescheduling Instances on other machines in the same data center. This is handled natively by Nomad.
If a data center fails, the reliability of applications is ensured by rescheduling Instances in other data centers of the same region. This is handled natively by Nomad too. Plus, we can define affinities in Nomad; they give us the flexibility to define in which data centers of a region an Instance can or cannot be rescheduled (see the sketch at the end of this section).
If a region suffers an outage, the reliability of applications is ensured natively as long as they were deployed in other regions. The experience will be a bit degraded but the app will overall continue working.
If the global cluster suffers an outage, then the deployment experience is affected: our public APIs would be unreachable. However, the applications hosted on us would be unimpacted because they don't need to interact with the global cluster to operate. Increasing the reliability of the global cluster is also easily doable to mitigate global deployment outages.
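To illustrate the affinities mentioned above, here is a minimal sketch using Nomad's official Go API (`github.com/hashicorp/nomad/api`). The data center names and weights are made up; a positive weight attracts an Instance to a data center, a negative one steers it away.

```go
// Minimal sketch: expressing data center preferences for a task group with
// Nomad affinities. Data center names and weights are illustrative.
package scheduling

import nomad "github.com/hashicorp/nomad/api"

func withDataCenterAffinities(group *nomad.TaskGroup) *nomad.TaskGroup {
	// Prefer scheduling Instances in was1.
	group.Affinities = append(group.Affinities,
		nomad.NewAffinity("${node.datacenter}", "=", "was1", 50))
	// Steer Instances away from was2, e.g. during a maintenance window.
	group.Affinities = append(group.Affinities,
		nomad.NewAffinity("${node.datacenter}", "=", "was2", -50))
	return group
}
```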
Pick your poison: a tour of the trade-offs we took
Engineering is all about trade-offs and when we settled on this design, we had to make some:
- **There is a global cluster.** Our global cluster is there to host… global resources. Sounds like a weak link, right? It might be, but we believe that it is way easier to manage a global cluster and that we can greatly mitigate the impact of outages on this component. First, the risks of an outage of that Kubernetes cluster are low because it can be distributed across multiple availability zones. Then, the target architecture just described allows regions (and continents) to run independently in case of a global cluster outage.
- **We decided to run one Nomad cluster per region.** Nomad allows us to natively reschedule Instances to some other servers if one of them crashes. Having one Nomad cluster per region effectively prevents us from leveraging its native failover policies to reschedule jobs across regions in case of regional outages. So, we have to handle that failure scenario by ourselves; that is more work. On the other hand, we believe that it is key to achieving our vision of independent, satellite regions. Theoretically, a single Nomad cluster is supposed to be able to orchestrate thousands of tasks, all over the globe. However, by splitting the Koyeb World into multiple regions orchestrated by multiple Nomad clusters, we reduce the impact of a Nomad cluster failing.
A future-proof architecture designed around continuous improvement
This design allows us to iterate quickly and to progressively improve availability.
Continental clusters have a latency advantage to bare metal machines (they are physically closer). We aim to move a lot of components there to improve performance and reduce costs.
For now, we settled on moving only the strictly necessary software to continental clusters (Kuma and Nomad). Over time, we will port more components there as needed.
Step 2: Putting the “multi” in multi-region deployments
After all of this thinking, it was finally time to get our hands dirty! We laid down our specifications for the first version of our multi-region deployment engine: keep it simple and migrate only the strictly necessary stuff over to continental clusters.
Before deploying a new region in the US, we decided to first make our European region comply with this new architecture.
At the time we had one single region. We decided to start from scratch with a new region, transparently migrate all of our users over there, and then bid farewell to the old setup. This was simpler than trying to build the target architecture while maintaining the original setup; a probably painful experience that we avoided.
We made very few changes to our original global cluster:
- We kept our current cluster in Europe and gave it the fancy title of global cluster
- We set up Istio with the Multiple Clusters deployment model to ease communication between the global and continental clusters. With this, we let Istio secure these communications. Components can then address one another with only a domain name: we can run HTTP requests to `http://nomad-api-access.fra:4646` from the global cluster, for instance (a small sketch follows this list)
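As an illustration, pinging a region's Nomad API from the global cluster becomes a plain HTTP call to that region-scoped name. This is a minimal sketch; `/v1/status/leader` is a standard Nomad HTTP endpoint returning the current cluster leader.

```go
// Minimal sketch: from the global cluster, reaching a regional Nomad API
// through its Istio-provided, region-scoped DNS name.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}

	// /v1/status/leader is a standard Nomad HTTP endpoint.
	resp, err := client.Get("http://nomad-api-access.fra:4646/v1/status/leader")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("fra Nomad leader: %s\n", body)
}
```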
Frankfurt: the first region with the new architecture
We provisioned a new cluster, our European continental cluster. Then, we dedicated a Kubernetes namespace to the new Frankfurt region (`fra`).
In there, we put:
- A zonal Kuma control plane, which we plugged into the global control plane
- A set of Nomad Servers for the region
We put a load balancer in front of `nomad-server` and `kuma-cp` because our bare metal machines would need to talk to them over the Internet. We protected those services with mutual TLS.
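The `nomad-agent` itself is configured for mutual TLS through the `tls` block of its configuration file, but the idea is the same for any client of those endpoints: present a client certificate and verify the server's. Here is a minimal sketch with the official Nomad Go client; the address and certificate paths are placeholders.

```go
// Minimal sketch: configuring a Nomad API client for a mutual-TLS-protected
// endpoint. The address and file paths are placeholders.
package main

import (
	"log"

	nomad "github.com/hashicorp/nomad/api"
)

func newMutualTLSClient() (*nomad.Client, error) {
	cfg := nomad.DefaultConfig()
	cfg.Address = "https://nomad.fra.example.com:4646" // load balancer in front of nomad-server
	cfg.TLSConfig = &nomad.TLSConfig{
		CACert:     "/etc/pki/ca.pem",         // CA used to verify the server certificate
		ClientCert: "/etc/pki/client.pem",     // certificate presented by this client
		ClientKey:  "/etc/pki/client-key.pem", // private key of the client certificate
	}
	return nomad.NewClient(cfg)
}

func main() {
	client, err := newMutualTLSClient()
	if err != nil {
		log.Fatal(err)
	}
	_ = client // use it as usual: client.Jobs(), client.Nodes(), ...
}
```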
*Design of the control planes*
This design makes it super easy to bootstrap a control plane. A control plane is simply a set of YAML manifests that we can apply to an existing continental Kubernetes cluster. For simplicity, it can be packaged in a Helm chart or a Kustomize configuration tree.
Finally, we deployed our data plane: a handful of bare metal machines located in Frankfurt. We configured the services (e.g. `nomad-agent`) on these hosts to target the brand-new regional control plane and voilà, the region was ready! We just had to make our worker aware of it and launch it… wait. That's harder than it sounds.
Adapting our workers’ code to support multi-region deployments
So, we had to make the code changes in our APIs and workers to handle multi-region deployments. When users deploy an app on the platform, they push to us a deployment definition to describe the desired deployment: how much RAM should we allocate, how many instances should run, what is the image or GitHub repo to use…
First of all, at that time, a Service could only be deployed in a single region. We now wanted users to deploy the same service in different regions and potentially override some values for some regions. For example, if they want bigger instances of a service in a given region because this is where most of their users are, they should be able to.
We split the concept of deployment definition into two: regional deployment definition and deployment definition.
A deployment definition would hold the Service definition for all of the regions and all of the overrides. We added a mechanism to derive, for each region defined in a deployment definition, a regional deployment definition, which is the view of the deployment for a given region. In that way, the regional deployment definition is very close to what the original deployment definition was.
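A simplified sketch of that derivation is shown below. The field names are illustrative (the real definitions carry many more settings), and it produces the `RegionalDeploymentDefinition` values consumed by the worker code that follows.

```go
// Simplified sketch of deriving regional deployment definitions from a
// multi-region deployment definition with per-region overrides. Field names
// are illustrative, not our real schema.
package worker

// RegionOverride holds the values a user overrides for one region.
// A nil field means "keep the base value".
type RegionOverride struct {
	Scaling *uint
	MemMB   *uint
}

// DeploymentDefinition is the user-facing, multi-region definition.
type DeploymentDefinition struct {
	Regions   []string
	Scaling   uint
	MemMB     uint
	Image     string
	Overrides map[string]RegionOverride // keyed by region name, e.g. "fra"
}

// DeriveRegionalDefinitions builds one regional view per target region.
func DeriveRegionalDefinitions(def DeploymentDefinition) []RegionalDeploymentDefinition {
	out := make([]RegionalDeploymentDefinition, 0, len(def.Regions))
	for _, region := range def.Regions {
		regional := RegionalDeploymentDefinition{
			Region:  region,
			Scaling: def.Scaling,
			MemMB:   def.MemMB,
			Image:   def.Image,
		}
		// Apply the per-region overrides, if any.
		if o, ok := def.Overrides[region]; ok {
			if o.Scaling != nil {
				regional.Scaling = *o.Scaling
			}
			if o.MemMB != nil {
				regional.MemMB = *o.MemMB
			}
		}
		out = append(out, regional)
	}
	return out
}
```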
This allowed us to perform minimal changes to the existing worker. It just needs a regional deployment definition, and the right Nomad and Kuma clients:
```go
type RegionalDeploymentDefinition struct {
	Region  string
	Scaling uint
	MemMB   uint
	Image   string
}

// One Nomad client per region, reachable through its region-scoped DNS name.
// The client type is simplified for the sake of the example.
var nomadClients map[string]*nomad.Client

func init() {
	nomadClients = map[string]*nomad.Client{
		"fra": nomad.NewClient("http://nomad-api-access.fra:4646"),
		"was": nomad.NewClient("http://nomad-api-access.was:4646"),
	}
}

func DeployService(ctx context.Context, req *RegionalDeploymentDefinition) error {
	// Pick the Nomad cluster of the target region.
	nomadClient, ok := nomadClients[req.Region]
	if !ok {
		return errors.New("this region is not available")
	}

	// Turn the regional deployment definition into a Nomad job specification
	// and register it against that region's Nomad Servers.
	spec := req.ToNomadSpec()
	if _, err := nomadClient.RegisterJob(ctx, spec); err != nil {
		return errors.Join(err, errors.New("cannot create Nomad Job"))
	}
	return nil
}
```
Sunsetting the old engine and our legacy location
With all that work done, our new region in Frankfurt was ready, compliant with our specs.
*How we manage apps via both the global and continental clusters*
We ran automated tests on Frankfurt, migrated our internal accounts, and then slowly migrated all of our users' Services to the new `fra`.
At the end of December last year, every single Service was officially moved to the new Frankfurt. We gave back the servers and officially bid farewell to our legacy region…
Rinse and repeat in Washington, DC: we provisioned a second region in one month
We worked for months to deploy Frankfurt. Truth is, from a user point of view, nothing much had changed: we still only offered a single location!
However, we set up pretty much all of the machinery needed to onboard new locations easily. And boy, did the architecture design pay off: we then deployed a new region in Washington in less than a month! All we needed to do was the same thing over again:
- Provision a new continental cluster in the US
- Provision a regional control plane in there
- Provision new bare metal machines in Washington, DC
Then, we once again ran automated tests until we slowly opened the region to our users.
Private networking & optimized global load balancing
With these two regions live, we were able to validate some features (and ship bugfixes 🤫) we were willing to offer for multi-region apps:
- All of your services can privately reach each other via DNS. In practice, it means that you can `curl http://my-other-service.koyeb:8080` from your code and reach your other service. Traffic is transparently encrypted with mutual TLS and we take care of routing requests to the closest healthy instance where your code is running
- Inbound HTTP requests take the fastest path to reach your service. Once again, in practice, it means that when someone reaches the public URL of your app, our load balancing stack will pick up the request at the closest edge location to the user and route it to the closest healthy instance where your code is running
Those are features that you get out of the box when deploying an app replicated across more than one region on our platform – we believe that they are great for global workloads.
What’s for the future?
We have a ton of ideas to improve our multi-region engine.
Port more components to continental clusters for reliability and cost management
Now, if you are attentive to details, you might have noticed that we did not mention where some of our components like our container image registry, Harbor, were in this new architecture. For example, we retrieve metrics and logs from the services running on bare metal machines to display this beautiful view:
This telemetry needs to be queryable and hence, stored somewhere. It is first crafted on bare metal machines but it needs to make its way to some database. Where is that database? For now, these kinds of components live in our global cluster.
😠 But we said that the bare metal machines shouldn't communicate directly with the global cluster!
Correct. That is suboptimal and we know it.
As stated before, we plan on porting more and more components from our global cluster to continental clusters. It should boost performance, improve reliability and reduce our costs.
Nail the continuous deployment experience
We would like to make it a no-op for us to introduce a new region: it would be great if we could prepare end-to-end continuous deployment strategies to provision new regions, run automated tests and slowly roll them out. The same goes for rolling out configuration changes; we would love to make this frictionless and safe.
Closing thoughts: we made multi-region deployments easy!
We are so proud of our multi-region engine! The only difference when deploying an app in any of our six regions is pressing a button. (Yes, it really is that easy.)
Plus, so far, the design is keeping its promises:
- Resiliency: we deleted our entire global cluster in staging (by mistake, but still 🤫) and our staging regions kept working, acting as the independent satellites we designed them to be!
- Provision new regions fast: we delivered Washington in less than 30 days a few months back. But continuous improvements to the design kept paying off: we cooked up four new regions this summer in half the time it took to ship just Washington: San Francisco, Paris, Tokyo and Singapore!
As you just read, you can now deploy your applications on our high-performance servers in six locations around the world. We offer a free tier, so check us out!
We hope you enjoyed learning more about some of our internals. We'd love to know what you thought of this post: feel free to drop us a line on Twitter @gokoyeb or by direct message: @_nicoche @AlisdairBroshar. The same goes if you want to know more about other internals of our system, we'd be happy to share more 🙂.