Multiplayer and database

  • I recently delivered a multiplayer game with AJAX and database support through a PHP API. From the client's perspective everything went well, but I am not quite satisfied with the game's performance.

    The AJAX calls to the PHP API sometimes take a long time, and sometimes return wrong results, so it is not reliable every time, although it mostly works.

    The database side is also not as fast as I expected, so it seems I am missing something.

  • Hello, I would like to ask about the API ... could you give me your Skype? My Skype: mrislamkozha1

  • First you should tell us what you want to save. Of course AJAX is slow, and a database connection is slow, but I am not sure you have to use them often, so I don't see your problem. By the way, it is impossible to get wrong results from AJAX if your program works correctly. What can happen, though, is that you get the answer to a request faster than the answer to one you sent earlier, so your program has to handle that case (see the sketch just below).
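
    For illustration only (not something from this thread), the usual fix is to tag each request with a sequence number and discard any response that is older than the newest one already applied; a minimal Java sketch of that gate:

        import java.util.concurrent.atomic.AtomicLong;

        // Sketch: drop out-of-order responses by comparing sequence numbers.
        public class LatestResponseGate {
            private final AtomicLong nextRequestId = new AtomicLong();
            private long newestApplied = -1;

            /** Call when issuing a request; send the returned id along with it. */
            public long issueRequestId() {
                return nextRequestId.getAndIncrement();
            }

            /** Call when a response arrives; returns true only for the newest data. */
            public synchronized boolean tryApply(long requestId) {
                if (requestId <= newestApplied) {
                    return false;   // stale: a newer response was already applied
                }
                newestApplied = requestId;
                return true;
            }
        }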

  • What is the best strategy for using a database, and which database is most suitable to use with the Multiplayer plugin?

    In my case I used the Multiplayer plugin with AJAX + PHP + MySQL (it was a server-intensive game). Is there a way I can use the Multiplayer plugin itself to communicate with the server, the way WebSocket or AJAX do, so it reduces the overhead of using either of those alongside the plugin?

    For my next project I might use MongoDB or RethinkDB as the backend with WebSocket, instead of the Multiplayer plugin, for better support, so it is a bit of a bet for me.

    Tom, Ashley, any suggestions?

  • "What is the best strategy for using a database, and which database is most suitable to use with the Multiplayer plugin? ... Tom, Ashley, any suggestions?"

    We use PostgreSQL, a custom in-house in-memory data cache on the server, and AJAX, all driven by Java servlets. Java was selected for its concurrency capability and durability.

    In order to reduce overhead between server and client, we use a stripped-down version of JSON to communicate data, in which all of the tags are replaced with short data keys.

    So our data coming from the server looks like this:

    {"0":["494.146","505.616","2"],"1":["504.685","496.818","3"],"2":["499.400","503.440","1"],"3":["498.211","504.608","2"],"4":["495.308","499.463","0"],"5":["496.505","490.680","3"],"6":["498.858","492.868","2"],"7":["505.334","490.110","0"],"8":["496.857","493.231","0"],"9":["505.773","488.897","0"],"10":["504.614","492.411","1"],"11":["513.358","492.846","2"],"12":["508.307","497.187","1"],"13":["512.842","504.548","1"],"14":["504.119","504.817","1"],"15":["510.334","502.565","2"],"16":["505.632","504.527","0"],"17":["507.403","505.337","0"],"18":["508.531","507.744","0"],"19":["509.322","508.887","3"],"20":["502.126","510.670","0"],"21":["503.625","509.856","0"],"22":["499.366","511.583","0"],"23":["501.102","507.602","2"],"24":["509.805","513.852","1"],"25":["503.473","519.109","3"],"26":["502.520","509.122","4"],"27":["500.128","518.697","0"],"28":["502.275","512.740","1"],"29":["496.815","513.783","2"],"30":["490.549","514.874","1"],"31":["495.223","513.865","3"],"32":["492.194","513.550","0"],"33":["489.541","513.698","0"],"34":["492.471","509.486","1"],"35":["496.760","511.117","0"],"36":["493.857","518.666","1"],...[/code:2labvb3b]
    Then we have metadata standards on both ends (in the API document) that interpret the data. This reduces the number of characters transferred between client and server, and lets us ship huge amounts of data in very little space (a minimal example of reading this format is sketched at the end of this post). We also make full use of gzip compression between the Apache gateway webserver and the client browsers.
    
    The risk of doing it this way is increased ambiguity and the errors associated with it. On the whole, though, this strategy has worked for us.
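
    Purely as a sketch of how a client-side tool might read that keyed-array format (Jackson is assumed here, and the array positions are assumed to mean x, y and object type; the real meanings live in the API document):

        import com.fasterxml.jackson.databind.JsonNode;
        import com.fasterxml.jackson.databind.ObjectMapper;
        import java.util.Iterator;
        import java.util.Map;

        // Sketch: interpret {"id":["x","y","type"], ...} using an assumed field order.
        public class CompactPayloadReader {
            private static final ObjectMapper MAPPER = new ObjectMapper();

            public static void read(String json) throws Exception {
                JsonNode root = MAPPER.readTree(json);
                Iterator<Map.Entry<String, JsonNode>> fields = root.fields();
                while (fields.hasNext()) {
                    Map.Entry<String, JsonNode> entry = fields.next();
                    String entityId = entry.getKey();
                    JsonNode values = entry.getValue();
                    double x = values.get(0).asDouble();   // position 0: x coordinate (assumed)
                    double y = values.get(1).asDouble();   // position 1: y coordinate (assumed)
                    int type = values.get(2).asInt();      // position 2: object type (assumed)
                    System.out.printf("%s -> x=%.3f y=%.3f type=%d%n", entityId, x, y, type);
                }
            }
        }
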
  • gumshoe2029, this is what I was looking for, thanks for the briefing.

    The in-memory data cache on the server - is it something like memcached, or are you using flat files or sessions in cookies? I think you used AJAX with the Multiplayer plugin?

  • "The in-memory data cache on the server - is it something like memcached, or are you using flat files or sessions in cookies? I think you used AJAX with the Multiplayer plugin?"

    The data cache is an in-memory object in Java, based on ConcurrentHashMap, that stores TableKey and Table objects; it holds the entire database in memory for very fast (sub-microsecond) access.
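
    A stripped-down sketch of that kind of structure (TableKey and Table here are hypothetical stand-ins, not the actual in-house classes):

        import java.util.List;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        // Sketch: the whole database held in memory, one entry per table.
        public class DataCache {
            private final ConcurrentMap<TableKey, Table> tables = new ConcurrentHashMap<>();

            public Table get(TableKey key) {
                return tables.get(key);
            }

            public void put(TableKey key, Table table) {
                tables.put(key, table);
            }

            /** Minimal stand-in identifying a table. */
            public record TableKey(String name) {}

            /** Minimal stand-in for a cached table: rows of column values. */
            public record Table(List<List<String>> rows) {}
        }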

    I use Redis for session cookies. I store the sessions as JSON strings in the Redis cache, using cryptographically secure random SHA-512 hashes as keys.
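
    A rough sketch of that approach, assuming the Jedis client (the TTL and helper names are illustrative, not the poster's actual code):

        import java.security.MessageDigest;
        import java.security.SecureRandom;
        import java.util.HexFormat;
        import redis.clients.jedis.Jedis;

        // Sketch: random SHA-512 session keys mapped to JSON session blobs in Redis.
        public class SessionStore {
            private static final SecureRandom RANDOM = new SecureRandom();

            public static String createSession(Jedis jedis, String sessionJson, long ttlSeconds) throws Exception {
                byte[] raw = new byte[64];
                RANDOM.nextBytes(raw);
                // SHA-512 over secure random bytes, hex-encoded, used as the session key.
                String key = HexFormat.of().formatHex(MessageDigest.getInstance("SHA-512").digest(raw));
                jedis.setex(key, ttlSeconds, sessionJson);   // session expires automatically
                return key;                                  // handed back to the client as the cookie value
            }

            public static String loadSession(Jedis jedis, String key) {
                return jedis.get(key);                       // null if expired or unknown
            }
        }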

    I do not use the Multiplayer plugin at all, because our game state is 100% determined by the server. Basically, the entire game state is stored in the cache; the game engines modify the data in the cache, and the API engine dips in and grabs specifics for players on AJAX HTTP GET requests. The server ships the data to them as JSON, and we use rex_hash to parse and use it in the C2 client.
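
    A minimal sketch of that GET path in servlet form (the parameter name and the cache lookup are hypothetical placeholders, not the real API):

        import java.io.IOException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Sketch: serve a player-specific JSON slice of the in-memory game state.
        public class GalaxyApiServlet extends HttpServlet {

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                String playerId = req.getParameter("player");   // illustrative parameter name
                String json = snapshotFor(playerId);            // would read from the data cache
                resp.setContentType("application/json");
                resp.setCharacterEncoding("UTF-8");
                resp.getWriter().write(json);
            }

            /** Stand-in for the cache lookup; a real version would query the in-memory cache. */
            private String snapshotFor(String playerId) {
                return "{\"player\":\"" + playerId + "\",\"entities\":{}}";
            }
        }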

    If you are having response problems, it is probably a PHP issue, not a Construct 2 issue.

    Example: a galaxy API call. Not gzipped yet, it completes, displaying ~18,500 data entries, in 4.33 s; the server-side Java processing takes about 100 ms.

    The same call with gzip enabled on the Apache webserver completes in 850 ms from start to finish.

  • I've been working on and off on something like this, so well done on getting it working.

    One thing that is always worth considering is the configuration of your database. I've been using MySQL, and with the default parameters I got a level of performance that was surprisingly slow. However, after I modified some of the parameters in the config file, around concurrent threads, memory usage and so on, I got a factor-of-ten improvement in database response time. There is a lot of material on the web, though it can take a while to find; the defaults are never optimal for anyone. And no, I can't say what the right parameter values are; it depends on what you're using the database for (an illustrative snippet follows below).
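
    Purely as an illustration of the kind of settings involved (the parameter names are real MySQL/InnoDB variables, but the values are placeholders, not recommendations):

        [mysqld]
        innodb_buffer_pool_size        = 1G    # keep the working set in memory
        innodb_log_file_size           = 256M  # larger redo log, fewer flush stalls
        innodb_flush_log_at_trx_commit = 2     # trade some durability for write throughput
        thread_cache_size              = 16    # reuse connection threads
        max_connections                = 200   # cap concurrent client connections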

    Of course this doesn't address the transport issues across the internet, but it definitely is a way of improving database performance.

    R
