Sharding and entity replication

To support advanced distributed replication scenarios such as seamless spatial sharding, the Stormancer grid integrates an entity replication system directly in the core API.

When a scene is partitioned into several shards, each shard is responsible for a part of the replicated world. For instance, when using a quadtree for spatial sharding, a scene shard is created for each quadtree leaf node (i.e. a node that has fewer than 4 children).

Entity Database

The Grid keeps track of each entity in a low-latency, in-memory distributed database that can be updated by the host of the scene. The database indexing policy is configurable, which enables widely different scenarios, from hash-based indexing to quadtree- or octree-based indexing for spatially distributed entities.

A scene can currently maintain only a single entity database. An application that needs several entity spaces must use a different scene for each of them.

An entity stored in the database contains 4 blocks of data:

  • The entity id, which uniquely identifies the entity in the database. The id must be unique within the scene the database is associated with.

  • A set of tags that describe the type of the entity and which can be used for filtering.

  • The entity partitionKey, used to decide where the entity will be stored in the database and for queries. For instance, spatial partitioning expects the entity partitionKey to be the Morton code of the entity location in the world. Partition keys can be updated to ‘move’ entities in the world.

  • Additional custom data, serialized using the MessagePack format.

Note

Once set, the id is read-only. The partitionKey and customData can be updated after entity creation.

Each shard of the database partition is associated with a corresponding scene shard host that acts as the authority for all entities stored in the shard.

Using host delegation, it is possible to transfer authority over the entities from the C# server application code to a remote program, for instance an Unreal Engine 4 or Unity3D server.

Note

Once the shard host functionality is delegated to a remote game server, that server behaves in all respects as a C# scene host would. This makes it possible to use the game engine to simulate shards (sections) of the world.

Entity example (player session)

{
    "entityId": "xxxx", //Session guid
    "partitionKey": 23, //Hash of the session id
    "type": "player",
    "allowedTags:": ["server"],
    "deniedTags": ["client"],
    "parts":{
        "sessionData":{
            "version":10,
            "allowedTags":[],
            "deniedTags":[]
            "data":"xxxxxx" //raw Binary data
        }
    }
}

Entity example (NPC in a seamless world)

{
    "entityId": "xxxx",     //Npc guid
    "partitionKey": xxxx,   //Morton code of the NPC position
    "type": "npc",
    "allowedTags:": [],     //everyone get events about the NPC entity.
    "deniedTags": [],
    "parts":{
        "transform":{
            "version":12243,
            "allowedTags":[],
            "deniedTags":[]
            "data":"xxxxxx" //raw transform binary data
        },
        "ai":{
            "version": 542,
            "allowedTags":["server"],  //Only the server can access data about the
            "deniedTags":["client"]
            "data": "yyyyyy" //raw AI state binary data
        }
    }
}

Entity partition key

The partition key is used by the database to compute on which partition the entity is to be stored.

For instance, if an octree is used for partitioning, as is the case when using spatial partitioning, the partition key is expected to be the Morton code of the entity's three-dimensional location. Updating the partition key can trigger automatic relocation of the entity to another shard of the partition.
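
As an illustration, here is a minimal, self-contained sketch of a standard 3D Morton encoding that could be used to build such a partition key. The quantization of world coordinates into integer cells (cell size, origin offset) is an application-level assumption, not part of the Grid API.

#include <cstdint>

// Spreads the lower 21 bits of x so that two zero bits separate each
// original bit (standard helper for 3D Morton encoding).
uint64_t spreadBits3(uint64_t x)
{
    x &= 0x1FFFFF;
    x = (x | (x << 32)) & 0x1F00000000FFFF;
    x = (x | (x << 16)) & 0x1F0000FF0000FF;
    x = (x | (x << 8))  & 0x100F00F00F00F00F;
    x = (x | (x << 4))  & 0x10C30C30C30C30C3;
    x = (x | (x << 2))  & 0x1249249249249249;
    return x;
}

// Interleaves the quantized x, y and z cell coordinates into a Morton code
// that can be used as the entity partitionKey.
uint64_t mortonCode3D(uint32_t x, uint32_t y, uint32_t z)
{
    return spreadBits3(x) | (spreadBits3(y) << 1) | (spreadBits3(z) << 2);
}

Moving an entity then amounts to recomputing the code from its new position and updating its partitionKey.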

Entity custom data

Custom data is composed of “parts” that can be updated by the authoritative code only. Updating a part is an atomic operation that is automatically replicated to all peers that have a view on the entity.

Each part has a set of tags that can be used to decide whether the part and its events are sent to a given peer.

Client entities

Just as every entity is associated with a scene shard that has authority over it, every client peer is connected to a scene shard.

The application can decide to which shard a peer is initially connected by providing a partitionKey in the connectionToken.

Once connected, a client entity is automatically created in the database with the provided partitionKey. Optionally, initialization data for the client entity can be encapsulated in the connectionToken along with the partition key.
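
Purely as an illustration of that flow, the sketch below shows the data an application might gather before requesting a connection token for a player spawning at a given position. The ConnectionTokenRequest structure and makeSpawnTokenRequest helper are hypothetical names, not part of the documented API; mortonCode3D is the helper sketched in the partition key section above.

#include <cstdint>
#include <vector>

// Hypothetical structure: the information the application embeds in the
// connection token, i.e. the initial partitionKey plus optional
// initialization data for the client entity.
struct ConnectionTokenRequest
{
    uint64_t partitionKey;          // e.g. Morton code of the spawn position
    std::vector<uint8_t> initData;  // MessagePack-serialized client entity data
};

uint64_t mortonCode3D(uint32_t x, uint32_t y, uint32_t z); // sketched above

// Hypothetical helper: derives the partition key from the quantized spawn
// position so that the peer connects to the shard owning that area.
ConnectionTokenRequest makeSpawnTokenRequest(uint32_t cellX, uint32_t cellY, uint32_t cellZ)
{
    ConnectionTokenRequest request;
    request.partitionKey = mortonCode3D(cellX, cellY, cellZ);
    // request.initData would carry application-specific spawn state.
    return request;
}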

Views

Views enable any peer (client or host) to receive update events for the entities inside the view. Views are declared by hosts and either associated with themselves or with client entities.

  • Host views have an absolute position.

  • Client views can have either absolute positions or positions relative to the client entity.

A view is composed of:

  • Shapes (circles and boxes are supported, organized with boolean operations)

  • A set of tags that are used to filter which events are sent to the view.

A view that encompasses the area of interest of the host is automatically created. The host can create additional views to be notified of entities outside of its area of interest.
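
The view declaration API itself is not shown in this section; the sketch below only illustrates the structure described above (shapes combined with boolean operations, plus filtering tags), and every name in it is hypothetical.

#include <string>
#include <vector>

// Hypothetical description of a view, for illustration only.
struct ViewShape
{
    enum class Kind { Circle, Box } kind;
    enum class Op { Add, Subtract } op;    // boolean combination with the previous shapes
    float centerX = 0, centerY = 0;
    float radius = 0;                      // used by circles
    float halfWidth = 0, halfHeight = 0;   // used by boxes
};

struct ViewDescription
{
    std::vector<ViewShape> shapes;         // the area covered by the view
    std::vector<std::string> tags;         // filters which entity/part events reach the view
    bool relativeToClientEntity = false;   // client views can follow the associated client entity
};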

API

Host entity events

Whenever an entity leaves a shard and enters another one, either because the partition key changed or because the database was repartitioned, the shard host is able to perform custom operations through a set of events:

  • RemovingEntityFromShard: One or several entities are going to be removed from the shard once the event handlers complete. This event can be used to save any data that needs to be persisted in the entity so that it can be rehydrated on the host it is moving to. For instance, AI state doesn't need to be persisted with the entity very often, so it's recommended to do it only every few seconds. During this event, the host should pause the AI of the entity and save the data in the database (see the sketch after this list).

  • RemovedEntityFromShard: One or several entities were removed from the shard.

  • AddedEntityToShard: One or several entities were just added to the shard. The host can resume processing on them.
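
A minimal sketch of such a handler follows. The onRemovingEntityFromShard subscription, the EntityId type and the pauseEntityAi / serializeAiState helpers are assumptions standing in for application-specific code; only getEntity and updateEntity are documented in this section.

// Hypothetical event subscription; the updateEntity call is the documented one.
replication->onRemovingEntityFromShard([&](const std::vector<EntityId>& entities)
{
    for (const auto& entityId : entities)
    {
        // Stop simulating the entity so its state no longer changes while
        // it is handed over to the destination shard.
        pauseEntityAi(entityId);                      // application-specific

        // Persist the latest AI state in the entity so the destination shard
        // host can rehydrate it when it receives AddedEntityToShard.
        auto parts = serializeAiState(entityId);      // application-specific
        replication->updateEntity(entityId, parts, {"server"}, {"client"});
    }
});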

Host entity functions

A shard host can only read or write on entities that are currently located in the shard.

//Reading
replication->getEntity(entityId, partIds);
//Writing
replication->updateEntity(entityId, parts, allowedTags,deniedTags);

When updateEntity is called, the update event is automatically propagated to all views that match the tags of the entity itself and the tags provided when calling the function.

This enables network level-of-detail scenarios: if a client is associated with several views of different sizes, the entity can be updated at different frequencies depending on whether it is in the inner or the outer view.
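
For example, one possible way to drive such a scheme with updateEntity; the "inner" tag, the tick counter and the part variables are assumptions made for the sketch.

// Detailed transform update every tick, restricted (via allowedTags) to views
// that carry the hypothetical "inner" tag.
replication->updateEntity(entityId, detailedTransformParts, {"inner"}, {});

// Coarse update every 10th tick, with no tag restriction, so the larger
// outer views receive it as well.
if (tick % 10 == 0)
{
    replication->updateEntity(entityId, coarseTransformParts, {}, {});
}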

View events

View events are batched and aggregate similar events for different entities.

  • EntitiesEnterView

  • EntitiesLeaveView

  • EntitiesUpdated

Extensions to the Send API (Fire&Forget & RPCs)

The send function supports sending Fire & Forget messages to:

  • Shard hosts, by providing a partition key (i.e. a location in the world for spatial sharding).

  • Views, by providing a partition key and tags for filtering.

  • Individual peers associated with an entity (clients or hosts) by providing both an entityId and a partition key.

All these filtering capabilities can be leveraged for RPCs too, as long as the message is sent to a single peer.
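
The exact overload signatures are not documented in this section; the calls below are only an illustration of the three addressing modes, and the route names, payload and key variables are assumptions.

// Fire & forget to the shard host whose partition contains the key
// (i.e. a location in the world when using spatial sharding).
send("npc.command", payload, partitionKey);               // illustrative signature

// Fire & forget to all views matching the given tags around a partition key.
send("fx.explosion", payload, partitionKey, {"client"});  // illustrative signature

// Fire & forget to the individual peer associated with an entity.
send("player.notify", payload, entityId, partitionKey);   // illustrative signature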

Configuring a scene for spatial sharding

By default, scenes are configured to use a hash-based partitioning policy that splits the hash space into a number of intervals equal to the number of shards.

Configuring a scene for spatial sharding can only be done while the scene is not running. It’s recommended to do it at creation time, using the PUT scene HTTP REST API.

{
    "id":"world",
    "template":"world-template",
    "isPersistant":true,
    "isPublic": false,

    "partitioningPolicy":{
        "type":"quadtree",
        "config":{
            "boundingBox":[0,0,1000,1000] //x,y,deltaX,deltaY
            "root":{
                //Children are declared in the following order:
                //TopLeft,TopRight, BottomRight, BottomLeft
                //The quadtree is sparse: it's only necessary to declare the nodes we want to create.
                "children":[
                    null,    //No child Top left
                    null,    //No child Top right
                    {},
                    {
                        //It's only necessary to describe the nodes that have children
                        "children":[
                            {},
                            {},
                            null, //No child bottom right
                            null  //No child bottom left
                        ]
                    }

                ]
            }
        }
    }

}

With the above quadtree configuration, the scene runs 5 scene shards: one for each leaf.
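
As an illustration of how such a configuration maps world positions to shards, here is a minimal, self-contained sketch that walks a sparse quadtree shaped like the one above. The node representation is an assumption, not the Grid's internal structure, and "Top" is assumed to mean larger Y.

#include <array>
#include <memory>

// Simplified sparse quadtree node: children ordered TopLeft, TopRight,
// BottomRight, BottomLeft as in the configuration above (null = no child).
struct QuadNode
{
    std::array<std::unique_ptr<QuadNode>, 4> children;
};

// Returns the node acting as shard for (posX, posY) inside the box
// (minX, minY, width, height). The descent stops as soon as the quadrant
// covering the position has no declared child: that node is a leaf in the
// sense used above (fewer than 4 children) and therefore hosts the entity.
const QuadNode* findShardNode(const QuadNode* node, float minX, float minY,
                              float width, float height, float posX, float posY)
{
    const float halfW = width / 2;
    const float halfH = height / 2;
    const bool right = posX >= minX + halfW;
    const bool top = posY >= minY + halfH;

    // Quadrant index in TopLeft, TopRight, BottomRight, BottomLeft order.
    const int quadrant = top ? (right ? 1 : 0) : (right ? 2 : 3);

    const QuadNode* child = node->children[quadrant].get();
    if (child == nullptr)
    {
        return node; // no child covers the position: this node is the shard
    }

    const float childMinX = right ? minX + halfW : minX;
    const float childMinY = top ? minY + halfH : minY;
    return findShardNode(child, childMinX, childMinY, halfW, halfH, posX, posY);
}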

Note

It’s not possible to change the partitioning policy after startup. However, it’s possible to change the configuration of the policy. Doing so triggers repartitioning and rebalancing of the entities between the shards of the scene.

Note

The quadtree and octree partitioning policies should support automatic rebalancing in future iterations.