Hello there! I’ve been building a multiplayer RTS game for a few years now using C# and Unity. Until now I’ve stored all of the server’s data in large in-RAM lists, but that’s unlikely to scale well, so now that I’m getting closer to an alpha I’m thinking of moving that data into a database. My work experience is with MS SQL, so I’m not familiar with other database technologies and not sure which direction to take. I thought I’d ask for the wisdom of the Level1 forum users before I make a decision I might regret out of lack of knowledge.
My only requirements are:
Able to store objects
Basic crud operations
Nothing fancy or exotic but I see so many potential options I’m not sure what to pick.
SQLite would be my first choice. It’s got a few limitations, like only allowing one writer (one process) at a time, but that’s not a big deal if the game server is the one doing all the writing. It’s stored as just a file, so it’s highly portable and easily backed up. It’s open source and used by lots of large companies. If you need to scale past that for hosting environments, then you could try MSSQL.
It also doesn’t do objects very well, but if you can serialize them out to JSON and back (which should be fairly easy), then you can store the JSON data.
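To illustrate the serialize-to-JSON idea, here’s a minimal sketch in Python (the player record and table are made up for illustration; the same pattern would apply in C# with a JSON serializer and a SQLite client library):

```python
import json
import sqlite3

# Hypothetical player object, serialized to JSON and stored as TEXT.
player = {"id": 42, "name": "rook", "minerals": 150, "units": ["scv", "marine"]}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE players (id INTEGER PRIMARY KEY, data TEXT)")
conn.execute("INSERT INTO players VALUES (?, ?)", (player["id"], json.dumps(player)))

# Read it back and deserialize into the original structure.
row = conn.execute("SELECT data FROM players WHERE id = ?", (42,)).fetchone()
restored = json.loads(row[0])
assert restored == player
```

The trade-off, as noted further down the thread, is that the JSON column is opaque to the database: you can’t index or query inside it the way you could with proper columns.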
For latency issues you can have it run in memory and then write some methods to dump to disk, which you would have to call periodically. Or just work off the disk entirely, but your latency will then depend on the disk you’re writing against.
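A minimal sketch of that run-in-memory-and-dump-periodically approach, using Python’s `sqlite3` module (which exposes SQLite’s online backup API; table and file names here are made up):

```python
import os
import sqlite3
import tempfile

# The live database lives entirely in RAM for low latency.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE state (key TEXT PRIMARY KEY, value TEXT)")
mem.execute("INSERT INTO state VALUES ('tick', '1024')")
mem.commit()

def dump_to_disk(mem_conn, path):
    """Snapshot the in-memory DB to a file; call this periodically."""
    disk = sqlite3.connect(path)
    with disk:
        mem_conn.backup(disk)  # SQLite's online backup, exposed since Python 3.7
    disk.close()

snapshot_path = os.path.join(tempfile.mkdtemp(), "checkpoint.db")
dump_to_disk(mem, snapshot_path)

# The snapshot is a complete, independent copy on disk.
check = sqlite3.connect(snapshot_path)
value = check.execute("SELECT value FROM state WHERE key = 'tick'").fetchone()[0]
assert value == "1024"
```

The obvious caveat: anything written between snapshots is lost on a crash, so the dump interval is effectively your maximum acceptable data loss.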
I would second SQLite; it’s very handy to have everything stored in one file for moving game server files around and backing things up.
Along with the other things that @rockking mentioned, it is a very fast and lightweight database option!
You could use large maps (tree/hash), a bit of locking.
Databases and ORMs and all that stuff are useful when you want to keep your data nicely organized and avoid or easily debug things, but they’re horrible for performance relative to plain old C++ data structures with well-placed mutexes and wisely used queues.
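A sketch of that map-plus-lock idea (written in Python rather than C++ to keep it short; the structure is the same either way: one container, one mutex, copy-on-read for consistent snapshots):

```python
import threading

# A plain in-RAM store: one dict per data set, one lock guarding it.
units = {}
units_lock = threading.Lock()

def move_unit(unit_id, pos):
    with units_lock:  # well-placed mutex instead of a DB transaction
        units[unit_id] = pos

def snapshot():
    with units_lock:  # copy under the lock so readers see a consistent view
        return dict(units)

move_unit(7, (10, 4))
move_unit(8, (2, 9))
assert snapshot() == {7: (10, 4), 8: (2, 9)}
```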
Some people use in-memory SQLite / multiple SQLite instances for different uses.
I would keep everything related to the current game in RAM, and only save the state between games to a database (user profiles, unlockables, whatever).
To approach that scale I would try to separate the lobby from the “server representation of a match”, which you could offload to a few instances if needed.
Thank you so much for the suggestion, I’ve not heard of that kind of DB.
Yeah I’m just looking for something that’s very simple and fast, so that sounds like it could be a great solution. Only having one process doing the writing might be a bit of an issue, but I can experiment with that!
Running it in memory is really interesting to me as well, as that will obviously be a lot more performant.
Thank you for the suggestion, I’ll definitely have to split the world up into different instances so that it is manageable over multiple hosts to keep it running at anything close to a reasonable frame rate.
So the way I handled getting around the single-process lock issue was by connecting and disconnecting for every operation. But that brings its own set of latency issues, and it would also mean you couldn’t really run in memory. Perhaps you could implement some kind of cache setup where you have a master DB on disk and each process runs its own in-memory copy and writes back to disk periodically.
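The connect-per-operation pattern might look something like this (Python sketch with a made-up schema; the `timeout` makes a writer wait for the lock rather than fail immediately if another process holds it):

```python
import os
import sqlite3
import tempfile

DB_PATH = os.path.join(tempfile.mkdtemp(), "game.db")  # hypothetical shared master DB

def write_op(sql, params=()):
    """Open, write, close: the write lock is held only for this one operation."""
    conn = sqlite3.connect(DB_PATH, timeout=5.0)  # wait up to 5 s for the lock
    try:
        with conn:  # commit on success, rollback on exception
            conn.execute(sql, params)
    finally:
        conn.close()  # the connection (and its lock) is released immediately

write_op("CREATE TABLE IF NOT EXISTS scores (player TEXT PRIMARY KEY, score INT)")
write_op("INSERT OR REPLACE INTO scores VALUES (?, ?)", ("rook", 100))

conn = sqlite3.connect(DB_PATH)
score = conn.execute("SELECT score FROM scores WHERE player = 'rook'").fetchone()[0]
assert score == 100
```

The connection setup/teardown on every call is exactly the latency cost mentioned above, so this only makes sense for infrequent writes.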
I’d suggest MongoDB. The C# API they have is extremely good, imo better than Entity Framework or Dapper, which are some of the ORM mappers you could use with SQLite.
It’s a schemaless, JSON-like DB. You don’t really need to do migrations or define your tables in some way, and you can store nested objects easily. That does not mean you no longer need to think about how you store your data, but imo it’s just less of a hassle than Entity Framework or Dapper.
It just works. It’s plenty fast. It’s extremely easy to hide entirely in a class library in a way that your app (game, in your case) does not need to know anything about MongoDB (which, unless they recently changed it, has never been entirely possible with Entity Framework). Dapper is simple and fast, but a ton more work than the other two.
You didn’t say what you are planning to use your database for (other than that you want it to store binary objects) … that will have a lot of influence on which type of database/technology fits.
Does it need to run on the client or on the server side, or both?
You said low latency … how low? If low enough, it may be that a traditional database is not a good choice.
You say you need CRUD; do you need ACID as well (Atomicity, Consistency, Isolation, Durability)? This could rule out a bunch of choices as well …
Do you need it to be relational?
Yeah … right, wouldn’t a serialized object from C# be a byte stream of some kind?
I was aiming at understanding whether OP will want support for BLOBs/CLOBs (MySQL/Postgres) as opposed to raw binary types/hashmaps, and possibly what sizes per row we are talking about …
In order to store blobs in MongoDB, OP will need to base64 encode/decode them, which may not play well with the low-latency requirement … but now that we’re talking about low latency … are we talking milli- or microseconds for database operations?
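For reference, the base64 round trip itself is trivial, but it inflates the payload by roughly a third, which is part of the size/latency cost being discussed (Python sketch with dummy bytes standing in for real binary data):

```python
import base64

avatar = bytes(range(16))  # stand-in for binary profile-picture data

encoded = base64.b64encode(avatar).decode("ascii")  # text-safe, e.g. for a JSON document
decoded = base64.b64decode(encoded)

assert decoded == avatar   # lossless round trip
assert len(encoded) == 24  # 16 raw bytes -> 24 base64 chars (~33% bigger)
```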
Theoretically. Practically that’s insanity. You store your object however the DB wants you to and don’t actively work against the system. You are gonna have a horrible time otherwise.
Like if you have a player object and the player has a profile picture, that picture is binary data; the rest isn’t. If you have a lot of those, it might be a good idea to store them somewhere else and just store the storage location in your DB.
Like on disk is pretty good. If there is no specific reason to need something more sophisticated for binary blobs.
Unless the things players upload need to be private, you could even throw them straight into a public nginx directory. When reading the object from the DB, append the URL or IP to the filename, and voila, you’ve got super fast binary storage. Depends on the game ofc, but a lot of the time I would expect binary data a player uploads to have the primary purpose of being swag for leaderboards and such; at least that is the only kind I have ever uploaded to any game I have played. If that is not the case, you need some authentication in front: drag the data through your game server or something else.
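A sketch of that store-the-location-not-the-blob idea (Python; `media_dir` stands in for the public nginx directory, and the schema is made up for illustration):

```python
import os
import sqlite3
import tempfile

media_dir = tempfile.mkdtemp()  # stands in for the public nginx directory

def save_avatar(conn, player, data):
    """Write the binary blob to disk; store only its location in the DB."""
    path = os.path.join(media_dir, f"{player}.png")
    with open(path, "wb") as f:
        f.write(data)
    conn.execute("INSERT OR REPLACE INTO avatars VALUES (?, ?)", (player, path))
    return path

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE avatars (player TEXT PRIMARY KEY, path TEXT)")
save_avatar(conn, "rook", b"\x89PNG-dummy-bytes")

# The DB row stays tiny; the blob itself is served straight off disk.
stored_path = conn.execute(
    "SELECT path FROM avatars WHERE player = 'rook'").fetchone()[0]
with open(stored_path, "rb") as f:
    blob = f.read()
assert blob == b"\x89PNG-dummy-bytes"
```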
There are multiple ways you can use a ‘database’ when writing an application.
You can use it for all the good things it can provide as a database (tables, indexes, relational constraints), but then your app needs to be ‘database aware’, and either you write a data management layer or you embed lots and lots of SQL code in your app.
You can still use some of the relational features for things like configs, but you may want to ‘abstract’ the database layer from your app’s data structures. To do that you can either use an intermediate framework or plainly serialize your objects into simple tables. The serialized objects will need a blob/clob field, depending on whether the serializer can map to a character structure (YAML, JSON, XML) or not …
The DB doesn’t drive your serialization pattern; it’s more your language/choice that drives you to different approaches, typically because of specific needs of your app/code …
I’d say it 100% does, unless you are prepared to work ridiculous, unnecessary overtime to re-invent everything.
As I put it before. Work with your database not against it.
Either way, I don’t think the OP is trying to serialize binary data. I had a hunch that the “C# object” part made you come to that conclusion, which is why I mentioned it. But that conclusion is wrong. The other option would have been that I had overlooked something.
If you use a database in C# (any database), ofc you’d want to store objects. Pretty much everything is an object in C#.
Even if you could store a C# object as an exact binary representation, you would not want to (I’m not aware that anybody ever tried …). You would store a lot of unnecessary weight way beyond the data you are trying to store, and it would absolutely break beyond repair with the tiniest code change. It might even break with system changes, runtime updates or OS changes (how the object looks in memory is nothing you can rely on; it’s an implementation detail of the language). Not to mention that the data would become unreadable from any other language.
Yeah, those are possible ways of serializing. They may or may not be of any use depending on what database you are using. You would probably use none of these in most SQL databases, certainly not SQLite. Yeah, you can do it there too, but it’s a bit of an ugly hack, and you can absolutely forget about indexing or querying the data you just serialized into a big fat string.
That’s a good point, I didn’t think of that in my requirements. The more I’ve looked into it, I think I’ll use the DB just to store the checkpoint saves on the server.
The client side is working fine with what I have, so it would be just the server side.
I’m not sure what the Isolation part of ACID is, but consistency would be important. Although if things were a little out of order it probably wouldn’t be a disaster (he says with probably unwarranted confidence).
In database parlance you really don’t want things out of order, because that means your data is not consistent/reliable anymore.
What ACID guarantees (in layman’s terms) is that when one of your processes/threads interacts with the database it always gets back a consistent picture … when reading, you get back consistent data as of the time the read was processed, and when writing you are guaranteed that all your data in a transaction (which may include reading stuff from some tables and writing stuff to multiple other tables) will either be processed in its totality or not at all, thus giving you a consistent picture of your data.
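The all-or-nothing part can be seen in a few lines of SQLite, which is ACID on a single file (the crash here is simulated, and the table/player names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gold (player TEXT PRIMARY KEY, amount INT)")
conn.executemany("INSERT INTO gold VALUES (?, ?)", [("a", 100), ("b", 0)])
conn.commit()

# Transfer 50 gold atomically: the debit and credit commit together or not at all.
try:
    with conn:  # opens a transaction; rolls back if the block raises
        conn.execute("UPDATE gold SET amount = amount - 50 WHERE player = 'a'")
        raise RuntimeError("simulated crash before the matching credit")
except RuntimeError:
    pass

# The lone debit was rolled back, so the totals are still consistent.
amounts = dict(conn.execute("SELECT player, amount FROM gold"))
assert amounts == {"a": 100, "b": 0}
```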
It is then up to you as a developer to decide how much ‘consistency’ you can afford to lose, probably in exchange for speed, by working with the logic of transactions.
That is, if the database you use is ACID compliant (MySQL is one example where it may or may not be, depending on how you decide to deploy it; Postgres is; MongoDB definitely is not), you don’t have to worry about maintaining compliance in your code.
If, on the other hand, all you need is to store a serialized object representing a state of your game, on a user-by-user basis, and using data that is not residing in other parts of the database, then MongoDB (or any other document-based NoSQL database) may very well be what works for you, with a lot less pain in maintaining it (until it breaks) and a much less steep learning curve …
As usual with these things, the best answer usually is ‘It depends’
It does support transactions too, if you need updates of multiple collections and/or documents to be applied all at the same time and not one after the other.
A lot of the time you don’t even need to bother with it where you otherwise would have had to in an SQL database. This is an issue with relations between tables: to update an object you may need to update many tables in SQL, and you do not want somebody to read a halfway-updated object while it is being updated.
Often this specifically is not an issue in MongoDB, because some of the relations you would have had in an SQL database can be stored efficiently in a single document. Except when they can’t, but then you can still achieve this with transactions.
It is an ACID-compliant database. It just does not enforce a consistent schema for the documents in a collection like SQL databases do. The only other schemaless DB I’ve used is Azure CosmosDB, so I can’t speak for all of them. In comparison to that, it is far easier to achieve a consistent structure, I’d say, because you can easily change/update/delete a field of every document in a collection without it being terribly slow, i.e. do a migration of sorts when logically necessary. In CosmosDB you have to rewrite every document entirely, as it does not support partial updates. So there, when we made ‘breaking changes’ to the database design, we always ended up fixing it up in code and writing a bunch of converters to get from old to new when reading from the DB, without actually rewriting any of the old documents, because that would have taken too long (though I’m sure Azure would be very happy to suggest, and bill for, rewriting entire collections).