User:Mvanga
GSoC 2010
Introducing multi-server linking support into Cyphesis to allow for distributed server usage.
There is currently no multi-region or multi-server support.
One initial approach you might be interested in considering is to keep the worlds in each server separate, but allow characters to be teleported between them by linking the servers together.
- This is a good starting point for testing communication between servers. I can start with separate worlds, and once that is working, try doing the same thing with a single shared world (possibly using the method below, assuming it makes sense...see below)
If you do use multiple servers to simulate one world, breaking it down by regions is just one way of approaching the problem. The load could also be split up between servers based on other criteria, or just balanced evenly between the servers.
WorldForge does not have a map as such. Each aspect of the world is specified by an entity, and each entity is stored and simulated on the server. In a multi-server architecture each entity might be stored and simulated on one or more servers, and responsibility for each entity could be transferred between servers as required.
- The mountains, the seas, the landscape are all also entities, I presume?
The terrain surface is defined by entity attributes. Currently this data lives on one entity, but we plan to extend this so that data from many entities is used to define the basic terrain shape.
- If yes, it is impressively clever! Really flexible and simplifies a lot of things :D
A possible way to accomplish the task in such a situation could be:
- When new entities are added, notify all servers with initial information (optional)
- Only simulate the entity in one of the servers (responsibility chosen based on some criteria)
- Add a "responsibility" field for each entity which holds the server ID of the server the entity is to be managed by.
- If indeed terrain itself is an entity, then some entities should require multiple responsibility fields (right? Or am I confused because it's 6am)
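A minimal sketch of that per-entity bookkeeping; the structure and field names are illustrative, not an existing Cyphesis type:

#include <string>

// Illustrative only: pairing an entity with the server responsible for
// simulating it. In Cyphesis this would more likely be an attribute on the
// entity's Atlas representation than a separate C++ structure.
struct EntityRecord {
    std::string id;             // global entity identifier
    std::string responsibility; // ID of the server currently simulating this entity
    // ... the entity's actual attributes (position, terrain data, and so on) ...
};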
This approach has scalability issues, as it involves lots of broadcast traffic - messages sent to every server. I had considered using an algorithm similar to that used by network switch fabrics to determine where an entity is being simulated.
Server A is in a mesh of N servers but is only connected to M peers, where M is considerably less than N. Operations arrive at A over the connections it has to its peers, and it notes in a lookup table which entity each operation came from and which connection it arrived over. If the server needs to send an operation to an entity it is not simulating, it checks the lookup table it has built; if it finds an entry for the entity, it sends the operation over the link that operations from that entity arrive over. It does not need to know which server is simulating that entity, just which way to send the operation. The server on the other end of the link will know where to send it next, because it has also seen operations from that entity before. If it does not find an entry in the lookup table, it sends the operation along all connections to peer servers, and the message is propagated by the other servers until it reaches one which knows where to send it. This algorithm requires a way to ensure that these operations expire and do not propagate excessively.
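A rough sketch of the learning table this implies, analogous to MAC-address learning in an Ethernet switch; the class and method names below are hypothetical, not existing Cyphesis code:

#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of switch-style route learning between peer links.
// PeerLink stands in for whatever connection object the server would use.
class PeerLink;

class EntityRouter {
public:
    // Called whenever an operation arrives: remember which link operations
    // from this entity come in over.
    void learn(const std::string & entity_id, PeerLink * link) {
        m_routes[entity_id] = link;
    }

    // Pick the links to forward an outgoing operation over when the target
    // entity is not simulated locally.
    std::vector<PeerLink *> route(const std::string & entity_id,
                                  const std::vector<PeerLink *> & all_links) const {
        std::map<std::string, PeerLink *>::const_iterator it = m_routes.find(entity_id);
        if (it != m_routes.end()) {
            return std::vector<PeerLink *>(1, it->second); // known: one copy, one link
        }
        return all_links; // unknown: flood to every peer
    }

private:
    std::map<std::string, PeerLink *> m_routes; // entity id -> link it was last seen on
};

As noted above, flooded operations would also need something like a hop count or serial number so they expire instead of circulating between servers indefinitely.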
It is going to be necessary to cache data on servers other than the one that has authority over an entity, but I do not think it is necessary to store which server is responsible on the entity itself.
- When the criteria no longer hold, hand the entity off (sketched after this list):
- Send the changed field values to the new server (or the entire entity if the new server does not already have it)
- Change the responsibility to the new server
- Remove responsibility from old server
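A minimal sketch of that handoff; the helper functions are hypothetical, and it deliberately ignores the consistency problems raised just below:

#include <string>

// Hypothetical helpers: a real implementation would send Atlas operations
// over the server-to-server link rather than call free functions like these.
void send_entity_state(const std::string & entity_id, const std::string & new_server);
void set_responsibility(const std::string & entity_id, const std::string & server_id);
void stop_simulating(const std::string & entity_id);

// Illustrative handoff of one entity from this server to new_server.
void migrate_entity(const std::string & entity_id, const std::string & new_server) {
    send_entity_state(entity_id, new_server);  // changed fields, or the whole entity
    set_responsibility(entity_id, new_server); // record who is in charge now
    stop_simulating(entity_id);                // the old server stops running the entity
}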
The process you describe above sounds a bit error-prone; it seems likely that inconsistencies could arise between one server and another. I suggest reading up on distributed processing to find out about the algorithms such systems use to achieve consensus and to reliably distribute load.
Passing a representation of an object to another server can be done quite easily in Atlas, as any object can be represented using Atlas encoding.
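For example, an entity's current state can be packed into an Atlas message map and handed to any codec for transmission. A hedged sketch (the attribute names are illustrative, and exact Atlas-C++ headers and types may differ slightly between versions):

#include <string>
#include <Atlas/Message/Element.h>

// Packing an entity's state into an Atlas message map so it can be encoded
// and sent to a peer server. Attribute names here are only illustrative.
Atlas::Message::MapType make_entity_message() {
    Atlas::Message::MapType entity_msg;
    entity_msg["id"] = std::string("42");
    entity_msg["objtype"] = std::string("obj");

    Atlas::Message::ListType pos;
    pos.push_back(12.5);
    pos.push_back(7.0);
    pos.push_back(0.0);
    entity_msg["pos"] = pos;

    // Any Codec (XML, Packed, ...) can now turn this map into a byte stream
    // for transmission over the inter-server link.
    return entity_msg;
}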
WorldForge Internals
Below is some of my work on understanding the WorldForge architecture. I am hoping it helps someone, somewhere, sometime! WorldForge has been cleverly broken down into several libraries to keep things consistent. The first thing to do is to understand what each of the libraries is responsible for. A good place to start is the README file of each library.
Atlas
Atlas Namespace Organization
The Atlas namespace contains the whole of the Atlas-C++ library and is divided into a hierarchy of other namespaces. The main namespaces of interest to application developers are Atlas::Net, which contains classes for establishing network connections, and Atlas::Objects, which contains classes used to handle high-level Atlas data.
Filters
Atlas data streams travel along a path of Filters (see Filter.h). Each outgoing message is converted to a byte stream and piped through an optional chain of filters for compression or other transformations, then passed to a socket for transmission. Incoming messages are read from the socket, piped through the filters in the opposite direction and passed to a user-specified Bridge (see Bridge.h) callback class.
Filters are used by Codecs to transform the byte stream before transmission. The transform must be invertible; that is to say, encoding a string and then decoding it must result in the original string. Filters can be used for compression, encryption, checksums, and other forms of transmission error detection. A compound filter can be created that acts like a single filter, allowing various filters to be chained together in useful ways, such as compressing and then encrypting. To create a new filter, subclass the Filter class and implement the encode and decode methods (both take and return a string). An example of a Filter can be seen in the Gzip class (Filters/Gzip.[h,cpp]). The begin() and end() functions play a role similar to a constructor and destructor for the Filter object.
This is not implemented at the moment. Currently the flow of data is as follows:
Source ---> Codecs ---> Network
If Filters are implemented, the flow will be as follows:
Source ---> Codecs ---> Filters ---> Network
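Based on the description above, a Filter subclass might look roughly like this. This is only a sketch: the exact base-class signatures, and any additional members a concrete Filter must provide, depend on the Atlas-C++ version.

#include <string>
#include <Atlas/Filter.h>

// Sketch of a trivial invertible filter (ROT13), following the interface
// described above: begin()/end() for setup and teardown, and encode()/decode()
// that both take and return a string. A real filter such as Gzip compresses.
class Rot13Filter : public Atlas::Filter {
public:
    void begin() { /* allocate any per-stream state here */ }
    void end()   { /* release any per-stream state here */ }

    // ROT13 is its own inverse, so the decode transform is identical and the
    // round trip gives back the original string, as required of a Filter.
    std::string encode(const std::string & data) { return rot13(data); }
    std::string decode(const std::string & data) { return rot13(data); }

private:
    static std::string rot13(const std::string & in) {
        std::string out(in);
        for (std::string::size_type i = 0; i < out.size(); ++i) {
            char & c = out[i];
            if (c >= 'a' && c <= 'z') c = char('a' + (c - 'a' + 13) % 26);
            else if (c >= 'A' && c <= 'Z') c = char('A' + (c - 'A' + 13) % 26);
        }
        return out;
    }
};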
Bridges
The Bridge class presents an interface that accepts an Atlas stream. The beginning of a stream is indicated with a call to streamBegin() and the end with a call to streamEnd(). Between these two calls, the Bridge is said to be in stream context. While the Bridge is in stream context, a message can be sent using streamMessage(). Once streamMessage() has been called, the Bridge is said to be in map context, and the various map*Item() calls are allowed. These map*Item() calls are analogous to the callbacks a compiler invokes when it encounters different kinds of tokens.
A good way of visualizing the concept is an actual bridge: a data stream passes into it and may pass out the other side (depending on the bridge). The Bridge class doesn't do anything on its own; all the work is done by subclassing it, most notably in the form of Codecs.
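The calling convention then looks roughly like this; the argument lists are assumptions based on the description above rather than verified Atlas-C++ signatures:

#include <string>
#include <Atlas/Bridge.h>

// Sketch of the sequence of calls a producer makes on a Bridge to transmit
// one message such as { "objtype": "op", "parent": "move" }.
void send_one_message(Atlas::Bridge & b) {
    b.streamBegin();                   // enter stream context
    b.streamMessage();                 // start a message; now in map context
    b.mapStringItem("objtype", "op");  // map*Item() calls are only legal here
    b.mapStringItem("parent", "move");
    b.mapEnd();                        // close the message map
    b.streamEnd();                     // leave stream context
}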
Codecs
The idea of a Codec is to encode and decode between byte streams and structured data. An example of a codec is one that converts between XML and a raw byte stream.
The Codec class inherits from the Bridge class and adds one further function, poll(). Codecs have an encoder part and a decoder part; they convert between structured data and a byte stream. When a Codec is used to turn incoming bytes back into structured data, poll() is called; in it you define how the data arriving at the Codec is parsed. You also need to define what the various map*Item() calls do. A great example of how this works is the XML class (in Codecs/XML.[h,cpp]).
There is a map*Item() function for each data type in Atlas.
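A very rough, hypothetical sketch of the decoding side that poll() drives; the names and parsing details are illustrative, and the real logic lives in Codecs/XML.cpp:

#include <istream>
#include <string>

// Illustrative only: the shape of a Codec's decode path. A real Codec reads
// bytes from its stream, tokenises them (e.g. XML tags), and reports each
// completed piece of structure to the Bridge it was constructed with.
void poll_sketch(std::istream & socket /*, Atlas::Bridge & bridge */) {
    char c;
    while (socket.get(c)) {
        // Feed c into a tokeniser; when a token completes, call the matching
        // Bridge method, for example:
        //   bridge.streamMessage();              // a new message map opened
        //   bridge.mapStringItem(name, value);   // a string attribute was parsed
        //   bridge.mapEnd();                     // the message map closed
    }
}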
This great article by Al Riddoch and James Turner explains the structure of Atlas.