A node is an instance of a Stormancer grid process running on a bare metal server, virtual machine, container or development computer.

One or several nodes together form a cluster.

Nodes provide all the facilities to run the cluster:

  • A web API to manage nodes, applications and the cluster

  • A distributed Git server to push the code and artifacts used by applications, accessed through the cluster web API.

  • A pluggable network transport system that supports RakNet-based UDP by default.

  • An application hosting mechanism able to fetch code from git repositories, deploy, build and run custom code for several different applications in isolation.

  • A routing mechanism that abstracts addressing in the cluster, making the game server topology fully invisible for game clients.

  • A dynamic service allocation mechanism which allocates logic (scene shards) to different nodes of the cluster in real time.

  • A distributed in-memory database supporting different partitioning policies.

  • A plugin system enabling the integration of nodes & clusters in complex automated environments (Kubernetes, AWS, Azure, etc.)



Cluster

A cluster is a group of Stormancer grid nodes that act as a single entity for players, developers and administrators. Players connect to a cluster, developers push server applications to a cluster, etc.

It’s possible to add new nodes to a cluster without interruption of service. When this happens, the cluster rebalances itself and dynamically allocates new services to the new nodes. If the application logic implements relocation, the system may also move existing processing from one node to another.

Removing nodes without interruption of service is more complex, as stopping any hosted service may lead to loss of data and service. Services that implement relocation can be automatically moved to another node; when that’s not the case, the system prevents any new service from being allocated to the node being removed and waits for the remaining services to shut down before stopping it.



Federation

A federation is a group of clusters, possibly located in different datacenters worldwide, that share a trust relationship enabling easy & transparent communication between them.

Scene-to-scene communications are possible between applications in different federated clusters, using application URI addressing. This way, one cluster in a federation can be in charge of matchmaking and social features, while the others host game sessions all around the world.



Account

An account declared in a cluster is a grouping of server applications that simplifies management. An account can be accessed and managed by one or several users.


Application

Like accounts, applications are cluster wide. They group configuration, code, metrics and logs in a single entity. When code is pushed to an application (using Git or the deploy web API), a deployment is created and automatically activated (by default), then started the first time a client connects to the application.

Applications declare scenes: network locations players connect to in order to interact with services, sending and receiving messages.

Each node of a cluster can run a host for every application declared in it. Hosts are separate processes that run, by default, on the same machine as the node itself.

However, the Docker plugin makes it possible to run hosts in local Docker containers to enforce isolation.


To run application hosts in Docker containers, the node must have access to a Docker daemon.


Deployment

A deployment is a dotnet core project associated with an application. An application can be associated with several deployments, but only one may be active at a given time. However, when activating a new deployment, the one currently running is not stopped: it stays online as long as players are connected to it, to enable update scenarios without interruption of service.

However, deployments are notified that they are no longer the active one.


Scene

A scene is a network location in an application that clients connect to. Scenes can be configured to be distributed between different nodes using a set of customizable algorithms, including:

  • hash-based partitioning

  • spatial partitioning

When connecting to a scene, clients are associated with different shards depending on a partition key that can be set by the application logic. During execution, partition keys can be updated, which can transparently associate the client with a different shard.
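The mechanics can be sketched as follows; `shard_for` is a hypothetical helper, not part of the Stormancer API, and the hashing scheme is an assumption for illustration only:

```python
import hashlib

def shard_for(partition_key: str, shard_count: int) -> int:
    """Map a partition key to a shard index with a stable hash (illustrative)."""
    digest = hashlib.sha1(partition_key.encode("utf-8")).digest()
    value = int.from_bytes(digest[:8], "big")
    return value % shard_count

# A client connects with a partition key chosen by application logic...
shard = shard_for("player-42", shard_count=4)

# ...and updating the key later may transparently move it to another shard.
new_shard = shard_for("region-eu", shard_count=4)
```

The key property is determinism: the same partition key always resolves to the same shard, so any node can compute the routing without coordination.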


While a player can be connected to many scenes at once, the client maintains a single UDP channel with the cluster. Scene connections are purely logical, and do not map one-to-one to actual sockets.


Scene sharding

By default, a scene created with the PUT scene web API is not sharded (its shard count is equal to one). This means that all players connecting to the scene reach the same node in the cluster.

A cluster can still scale nicely with this configuration, as long as the server's services are not all exposed by a single scene, because different scenes are still allocated to different nodes.

However, it may be pertinent to shard scenes. Suppose that your game contains a service heavily used by all connected players at the same time. In this case, it may make sense to break the scene into shards by changing the scene partitioning policy.

For instance, scenes that could benefit from this are:

  • Stateless service scenes that are accessed often, like scenes dedicated to player profile management or leaderboard access.

  • Massively multiplayer seamless worlds partitioned using a spatial partitioning policy.

  • Stateful scenes running under high load, like user session storage.

When connected to a scene, a player is always associated with a single shard at a time, which acts as the master shard for that player. This shard is selected according to the partitioning policy, using the partition key associated with the player's connection to the scene.

Good candidates for sharding are scenes that run logic that can be easily distributed using hash-based partitioning:

  • Stateless scenes

  • Stateful scenes that store data in a key-value in-memory database that can be broken into addressable buckets.

A spatial partitioning policy is available which enables sharding worlds while maintaining locality (entities that are spatially near each other have a high probability of being allocated to the same master shard).

The bundled replication features make use of the sharding facilities to provide network routing around entities and views.


Scene configured with a simple hash-based partitioning policy. The hash-based partitioning policy maintains locality by sharding the total hash space using intervals instead of modulo.

    {
        "isPublic": true,
        ...
    }
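To illustrate the difference between the two approaches, here is a minimal sketch; the function names and the 16-bit hash space are assumptions for illustration, not Stormancer's actual implementation:

```python
# Illustrative sketch: the total hash space is split into contiguous
# intervals, one per shard, so neighboring hash values stay on one shard.

def shard_by_interval(hash_value: int, shard_count: int, hash_space: int = 2 ** 16) -> int:
    """Locality-preserving: each shard owns one contiguous interval."""
    interval_size = hash_space // shard_count
    return min(hash_value // interval_size, shard_count - 1)

def shard_by_modulo(hash_value: int, shard_count: int) -> int:
    """Modulo scatters consecutive hash values across all shards."""
    return hash_value % shard_count

# Two neighboring hash values: the interval policy keeps them together...
assert shard_by_interval(1000, 4) == shard_by_interval(1001, 4)
# ...while modulo sends them to different shards.
assert shard_by_modulo(1000, 4) != shard_by_modulo(1001, 4)
```

Because each shard owns a contiguous interval of the hash space, keys with nearby hash values land on the same shard, which modulo-based assignment does not guarantee.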




Scene configured with a quadtree-based spatial partitioning policy.

    {
        "isPublic": false,
        ...
            "boundingBox": [0, 0, 1000, 1000], // x, y, deltaX, deltaY
            // Children are declared in the following order:
            // TopLeft, TopRight, BottomRight, BottomLeft.
            // The quadtree is sparse: it's only necessary to declare the nodes we want to create.
            ...
                null,    // No child top left
                null,    // No child top right
                // It's only necessary to describe the nodes that have children.
                ...
                    null, // No child bottom right
                    null  // No child bottom left
        ...
    }
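To clarify how such a sparse quadtree could resolve a position to a master shard, here is a hedged sketch; the `locate` function, the dictionary layout and the path naming are illustrative assumptions, and only the child ordering (TopLeft, TopRight, BottomRight, BottomLeft) comes from the configuration above:

```python
def locate(node, x, y, bx, by, dx, dy, path="root"):
    """Descend into the quadrant containing (x, y) until reaching a leaf."""
    children = node.get("children") if node else None
    if not children:
        return path  # this quadtree node acts as the master shard for (x, y)
    hx, hy = dx / 2, dy / 2
    right, bottom = x >= bx + hx, y >= by + hy
    # Map the quadrant to the TopLeft, TopRight, BottomRight, BottomLeft order.
    index = (0, 1, 3, 2)[int(right) + 2 * int(bottom)]
    child = children[index]
    if child is None:
        return path  # sparse tree: this quadrant is not subdivided further
    return locate(child, x, y,
                  bx + (hx if right else 0), by + (hy if bottom else 0),
                  hx, hy, f"{path}/{index}")

# Root split once; only the bottom-right quadrant (index 2) is declared.
tree = {"children": [None, None, {"children": None}, None]}
print(locate(tree, 100, 100, 0, 0, 1000, 1000))  # -> root
print(locate(tree, 900, 900, 0, 0, 1000, 1000))  # -> root/2
```

Because nearby positions descend through the same quadrants, they resolve to the same leaf, which is how the policy preserves spatial locality.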




Routes

Routes are network channels on a scene through which network communications between client & server, or between clients in the case of P2P, happen.

A route is identified by a string (its name). When a client connects to a scene, routes are negotiated between the peers and a set of handles (16-bit integers) is generated. Once the connection is established, traffic is dispatched to routes using the handle instead of the name. This limits the number of routes on a scene to 65536, but keeps the overhead on client communications down to 3 bytes.
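The negotiation idea can be sketched like this; this is an illustrative model, not the actual wire protocol, and `negotiate_handles` as well as the packet layout are assumptions:

```python
import struct

def negotiate_handles(route_names):
    """Assign each route name a 16-bit handle; capacity is therefore 65536 routes."""
    if len(route_names) > 65536:
        raise ValueError("a scene cannot expose more than 65536 routes")
    # A deterministic ordering lets both peers derive the same handles.
    return {name: handle for handle, name in enumerate(sorted(route_names))}

handles = negotiate_handles(["chat.message", "game.move", "game.fire"])

# After negotiation, messages carry the compact 2-byte handle, not the name.
packet = struct.pack("<H", handles["game.move"]) + b"payload"
print(len(packet))  # -> 9
```

Sending a 2-byte handle instead of an arbitrarily long route name is what keeps the per-message overhead small regardless of how descriptive the route names are.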

It’s also possible to configure routes to allow calling them from scenes in the same application, from different applications, or even from applications in different clusters of a federation.