GDR Forum Index -> Game Developer's Refuge -> Development Log - Client/Server Prototype for Game
0xDB
Developer

Joined: 26 Dec 2005
Posts: 1670
Location: Your consciousness.
PostPosted: Thu May 21, 2009 4:32 am    Post subject: Development Log - Client/Server Prototype for Game

I'm currently planning and preparing for the development of TankGame V2.0.

I want to implement Network play for that game, for up to sixteen players per server.

So, currently, I'm learning internet socket programming from "Beej's Guide to Network Programming Using Internet Sockets", which I found in Sirocco's "Tons of programming books online as PDFs" thread.

I haven't written any code yet, so no data has gone back and forth between machines, but I've been thinking about some of the high-level concepts needed for a proper client/server networked game.

The idea is to write a Client/Server prototype application based on these thoughts and then make that application use internet sockets for communication.

Collection of rough thoughts on Client/Server programming for game development
(based on what I've read over the years, what I already know about networks, and on observations of the in-game behaviour of other networked games; most of it is not likely to be news or groundbreaking in any way, it's pretty straightforward and I went with whatever came to mind without thinking about it twice)

Terminology first

object
Anything that holds a subset of data, which is relevant to the game logic. e.g. tank, shot, level-map

fluff
Anything that is eye/ear candy and not necessarily relevant to the game logic. e.g. particle effects, sounds, water animations

gamestate
A set of objects that together hold all the data which is relevant to the game logic. Game logic updates are applied to the gamestate. Game logic updates modify objects. When an object changes, the gamestate changes. The gamestate also can hold a set of fluff and modify that.

server
A programme running a copy of a gamestate. It communicates that gamestate to a number of other programmes, called clients, each of which runs a local copy of the gamestate. Clients can request changes to the gamestate; the server decides whether each change is acceptable and communicates accepted changes back to all clients as necessary.

client
A programme running a local copy of a gamestate. The programme cannot run independently; it must be communicating with a server. It may send requests to the server to change the gamestate, and it must accept messages from the server and update the local copy of the gamestate according to their content.

LUPS
Logic Updates Per Second: the number of times per second the server applies the logic to the gamestate objects and fluff. Any changes to objects must be copied to all clients.

ClientCount
The number of clients connected to the server.

rules
The set of logic and arithmetic operations that define how the gamestate objects need to be modified in each logic update.

message
A set of whatever data that is to be transferred between client and server.

a first description of the problem:
The problem is that there will be a gamestate on one machine (the server) which has to be copied to several other machines (the clients) as efficiently as possible, up to LUPS times per second.

The data that can be transferred per second is limited by the speed of the used network connection measured in possible bytes per second.

The clients have to share the speed of the server's network connection, as each of them has to get the gamestate. Each client needs a connection at least as fast as the server's speed divided by the number of connected clients in order to receive the gamestate.

So, I guess some calculation is necessary (ignoring the client speed for now): let ServSpeed be the number of bytes per second the server can send in total.
Then ServSpeed / ClientCount is the number of bytes per second that can be used to send the gamestate to each client, or in other words, the maximum size of the gamestate in bytes.
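That division, as a trivial sketch (the function name is made up; integer bytes for simplicity):

```python
def per_client_budget(serv_speed_bytes, client_count):
    # Each connected client gets an equal share of the server's upstream.
    return serv_speed_bytes // client_count
```

For example, a 56K modem pushing about 7,000 bytes per second shared by sixteen clients leaves 437 bytes per second per client.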

But it is not necessary to send the whole gamestate to each client at every logic update, as it is likely that many of the objects of the gamestate (e.g. the level-map objects) do not change in every update. Also, if the rules for updating game objects are clearly defined, it is not even necessary to send updates for each game object, as the clientside gamestates can just perform these updates themselves (they just have to receive the rules for that once when the gamestate is initialized).

So, the only updates the server needs to send to the clients are when new objects enter the gamestate, when objects get removed, or when an object must be updated in a way that cannot be decided by the client itself (e.g. when a tank gets hit, reduce the armor on that tank and tell all clients about that armor change). Clients are not allowed to make these kinds of changes themselves, because that may lead to inconsistencies across the various gamestate instances, caused by the clients not running at exactly the same speed or by data not getting transferred fast enough. For example:
A shot hits a rock on the server and gets removed, and the server sends the information about that removal to the clients. But one client does not receive that update in time (or at all), and someone else's tank drives into that shot on that client's copy of the gamestate. The client is not allowed to decide for itself that the tank has been hit, because in the gamestate of the server, which is the only gamestate that really has any relevance to the game logic, the tank was not hit.
So much for the data that gets sent from the server to the clients.

But the clients need to be able to send data to the server as well, to request changes to the gamestate (e.g. "may I fire a shot?", "may I move forward?"), to which the server has to respond by sending updates to the clients.

In an ideal environment, the ServSpeed would be unlimited and all changes would be copied to the client gamestates instantly. But an environment like that only exists in simulation, where server and clients all run on the same machine and where there are not really any copies of the gamestate but just a single copy of it, to which every client has direct access.

The fact that network speeds are limited and that data can not be copied instantly and without delay between machines raises the problem of synchronization.

Example to illustrate the problem:
In the server gamestate a shot is spawned somewhere and the fact that it is spawned gets sent to the clients. The server gamestate continues to update objects, including moving the newly spawned shot.

Eventually, the clients receive the message that the shot was spawned, spawn it in their copy of the gamestate and continue to update it by the rules. But the actual shot (in the server's gamestate) is already somewhere else, because it has been updated several times while the message about its spawning was in transmission, so the position the client spawns it at is already outdated and not correct anymore.

How to deal with that? The idea is this:
The gamestate on the server will keep a state variable (currentLogicUpdate), which is increased on every logic update. The gamestate on the client will keep such a variable as well.
Every message sent from server to client or from client to server will include the sender's currentLogicUpdate. That way, the receiver of the message can compare it to its own state and perform as many updates as necessary, based on the difference between its own currentLogicUpdate and the number found in the message.
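That catch-up rule, sketched in Python (illustrative only; apply_rules stands in for the game's deterministic logic update):

```python
def catch_up(gamestate, message_update, apply_rules):
    # Advance the local copy until it matches the update count carried
    # in the received message; apply_rules performs one logic step.
    while gamestate["current_logic_update"] < message_update:
        apply_rules(gamestate)
        gamestate["current_logic_update"] += 1
```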

Some mechanism will be necessary to deal with the scenario in which a client falls too far behind (its own currentLogicUpdate being several seconds behind that of the last message it received). In that case, the client needs to send some sort of emergency request for a full gamestate update and the server will have to pause updates to the gamestate until all clients are synchronized to the same gamestate again.

So, to prevent these emergency synchronizations, it seems most important to minimize the network traffic: the number of bytes transferred on each logic update.

So, it seems reasonable to tightly pack the messages into some binary-coded form, with just enough bits that the receiver knows what to do.

As my thoughts are getting more detailed and low-level now, I will take a break and put down a data model in pseudo code to play around with:

object
    id
    stateA, stateB
    <whatever data necessary>

gamestate
    currentLogicUpdate
    objects: set of object

client
    state: gamestate

server
    state: gamestate

The above model neglects any data necessary to handle the connection and the exchange of messages between client and server.
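The same model, sketched as Python dataclasses just to have something concrete to poke at (names mirror the pseudo code; the dict-by-id for objects is my own convenience, not part of the model):

```python
from dataclasses import dataclass, field

@dataclass
class GameObject:
    id: int
    state: dict = field(default_factory=dict)  # <whatever data necessary>

@dataclass
class GameState:
    current_logic_update: int = 0
    # keyed by object id rather than a plain set, so lookups by id are easy
    objects: dict = field(default_factory=dict)

@dataclass
class Client:
    state: GameState = field(default_factory=GameState)

@dataclass
class Server:
    state: GameState = field(default_factory=GameState)
```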

Ok, so the problem is still to encode messages in a way that makes them just as big as necessary and not a bit bigger.

For the purpose of knowing which object is affected by a message, I have given each object an id. This id will be a unique identifier, and an object that has the same id on the client and on the server is concept-wise the same object (even though its actual state might differ due to the synchronization problem).

So, up to this point, I know that I will need at least the id and the currentLogicUpdate in each message.

It's time to think about data types now. If I take a 32-bit integer for the id, the number of objects I can have in the gamestate is limited to 4,294,967,296. That's plenty, but it's already four bytes per message just for the object id. I think 65,536 objects will still be sufficient, so I'll use a 16-bit integer for the object id for now (I can always refactor if that turns out to be insufficient).

The currentLogicUpdate needs a type that allows a single game to last at least a few hours, let's say 24 hours, so
LUPS * 60 * 60 * 24 is the number the type must be able to represent. In TankGame V1.0c, LUPS is 30, which is sufficient for smooth gameplay imo, so 30 * 60 * 60 * 24 = 2,592,000 and a 16-bit integer would not be sufficient.

A 22-bit integer would be ok, but I don't want to shift bits around across byte boundaries, and I also don't want to use an odd 24-bit integer (for which there is no builtin type), so I'll just use an int32 for the currentLogicUpdate.

With a 32-bit integer, I can make a single game last 4,294,967,296 / 2,592,000 ≈ 1657 days (that should be ok).

So now, a message is:
[32bits SendersCurrentLogicUpdate]
[16bits objectId]

But that's not enough of course. The receiver needs to know what to do with the message and also, what else is included in the message in addition to the logicUpdate and the objectId.

To minimize the bits necessary for the type of the message and for determining the length of the message, a single message type id seems to do the trick.

The number of possible messages isn't likely to be very big, as I only want to send spawns, some updates and removals of objects, and some request/response stuff to handle client requests, so an 8-bit integer will make the message type id.

updated message head:
[8bit messageType] // receiver can also tell length of message by this
[32bits SendersCurrentLogicUpdate]
[16bits objectId]
<additional message data, depending on messageType>
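That head layout can be packed exactly with Python's struct module (a sketch; big-endian is an arbitrary choice of wire order, nothing in the post fixes endianness):

```python
import struct

# messageType (1 byte), SendersCurrentLogicUpdate (4 bytes),
# objectId (2 bytes) -> a fixed 7-byte message head.
HEAD = struct.Struct(">BIH")

def pack_head(message_type, logic_update, object_id):
    return HEAD.pack(message_type, logic_update, object_id)
```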

It seems pointless to define each messageType in detail at this point. Instead, I'm going to assume that the average size of the additional message data is about 32 bytes (yeah, how is that for a random assumption?).

So, my average message will be: 1+4+2+32 = 39 bytes wide.

That's only the raw user data though; a TCP message requires additional data so that the hardware along the way knows how to transfer the message to its target. Therefore each message needs a so-called TCP header in front of its data.

A TCP header is about 160 bits (according to Wikipedia), so that's another 20 bytes that need to be transferred with each message.

Some more calculations:
a 56K modem can (in idealistic theory) transfer 7,000 bytes per second, or about 437 bytes per second per client (if sixteen clients are connected to the server).

So, the server would be able to send just about 11 messages per second to each client (and that's without the TCP header).

With the TCP header, that would just be 437 / (39 + 20) = 7 messages per second. With 30 logic updates per second but only 7 messages per second, that sounds completely unacceptable for a fast paced realtime action multiplayer game, so this needs more optimization.
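That messages-per-second arithmetic as a one-line helper (illustrative; names are made up):

```python
def messages_per_second(budget_bytes, payload_bytes, header_bytes):
    # How many standalone messages fit into a per-client byte budget
    # when each one drags a fixed header along with it.
    return budget_bytes // (payload_bytes + header_bytes)
```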

Idea: Don't send a TCP-message for each message but rather batch all messages together for each logicUpdate and send them all in a single TCP-message, so there's just one TCP-header (20 bytes) per logicUpdate.

And also: Don't send a message for each message of the same type but batch all messages of the same type into a single message, so there's only one type id per batch of messages for one type.

And also: Don't send the SendersCurrentLogicUpdate in each message but send it only once for all the batched messages of the currentLogicUpdate.

So, this leads to the following new batched-message head:
[20 bytes] // TCP header
[4 bytes] // SendersCurrentLogicUpdate
[1 byte] // number of different batched message types following =: m
<batched messageTypes>

And each batched message type:
[1 byte messageType]
[1 byte numberOfFollowingMessagesForThisType] =: n
n * [2 bytes objectId]
n * [32 bytes] // average additional message data

So, each batched-message would have an average size of 20+4+1 + m * (2 + n*34) bytes. To simplify the following calculation (and to make more wild assumptions), let m be the same as n. So there will be n^2 messages per batched-message.

On a 56K modem, I can then send...
20+4+1 + 2*n + 34*n^2 <= 437
34*n^2 + 2*n - 412 <= 0
n^2 + (1/17)*n - 412/34 <= 0
n <= -(1/34) + sqrt( (1/1156) + 412/34 ) // neglecting the small fractions =>
n ~ sqrt(12.1) ~ 3.5
... between 3 and 4 different message types and, for each of these, between 3 and 4 actual messages of that type per second to each client.

So that's 9 to 16 messages per client per second. That's still unacceptable. The only way to improve this further would be to assume less additional message data per message.
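The batch-size formula is easy to check mechanically (a sketch; the 20-byte TCP header and 32-byte average payload are the assumptions from above):

```python
def batched_size(m, n, tcp_header=20, extra=32):
    # Bytes per batched message: TCP header + 4-byte logic update +
    # 1-byte count of types, then per type: type id (1), message count
    # (1), and n entries of object id (2) plus average payload (extra).
    return tcp_header + 4 + 1 + m * (1 + 1 + n * (2 + extra))
```

batched_size(3, 3) is 337 bytes, inside the 437-byte budget, while batched_size(4, 4) is 577 and does not fit, matching the 3-to-4 result above.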

Maybe I should just start writing code and optimize it on a real life example.
(and maybe assuming 56 Kbit/s to be the standard server speed is a little nuts too...)

Wow, you're still reading, you must be really really bored. :)
_________________
0xDB
Hard Rock
Contributor

Joined: 31 Aug 2005
Posts: 238

PostPosted: Thu May 21, 2009 6:51 am

Keep in mind that when games support 56K for clients, they rarely support it for servers. Not only that, but as support for additional players is added, the server requires additional bandwidth.

For example, battlefield requirements:

Code:

BANDWIDTH
ISDN users - Join games with a maximum of 16 players.

ADSL/Cable users - Join games with a maximum of 32 players.

T1/LAN users - Join games with a maximum of 64 players.


Those are client requirements, but server bandwidth is also affected. Though most servers are probably in colo these days anyway, considering the crappy uploads most ISPs give.
_________________
Hard Rock
[The Stars Dev Company][Twitter]
0xDB
Developer

Joined: 26 Dec 2005
Posts: 1670
Location: Your consciousness.
PostPosted: Thu May 21, 2009 10:33 am

Quote:
Also not only that, but as support for additional players is added to the server they require additional bandwidth.
I don't want to distinguish between required server bandwidth and required client bandwidth, because all clients need to receive the updates from all other clients as well; naturally, if more clients are connected, there's also more data to send to each individual client.

Quote:
Though most servers are probably in colo these days anyway
What does "to be in colo" mean?

Quote:
considering the crappy uploads most ISPs give
Yes, uploads are quite limited. Maybe that's why I initially assumed 56 Kbit/s for the server speed.

Thinking more about requirements:
Client and server components need a way of connecting to each other, and the client needs a way of reporting its version number to the server and vice versa, so that on connection a check can be performed to decide whether the client will be allowed to play on the server. For that purpose an extra message must be established that is never allowed to change in later versions. All other messages may change in later versions if necessary. To be able to reuse the basic network components (everything minus the game specific messages), that version check should not only include a version number but also a game identification string, e.g. "TankGame", so that it's not possible to connect to a "SpaceGame" server with a TankGame client.
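One possible shape for such a never-changing hello message (a sketch; only the game-id-plus-version idea comes from the post, the exact layout here is my own assumption):

```python
import struct

def pack_hello(game_id, version):
    # Fixed-format handshake that must never change in later versions:
    # 1-byte id length, the game id bytes, then a 2-byte version number.
    raw = game_id.encode("ascii")
    return struct.pack(">B", len(raw)) + raw + struct.pack(">H", version)
```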
_________________
0xDB
Sirocco
Moderator

Joined: 19 Aug 2005
Posts: 9459
Location: Not Finland
PostPosted: Thu May 21, 2009 10:34 am

This is relevant to my interests.
Hard Rock
Contributor

Joined: 31 Aug 2005
Posts: 238

PostPosted: Thu May 21, 2009 12:18 pm

Quote:

I don't want to distinguish between required server bandwidth and required client bandwidth, because all clients need to receive the updates from all other clients as well; naturally, if more clients are connected, there's also more data to send to each individual client.

Why would the clients need to talk to each other if you are running a client/server setup? If the clients start talking to each other you are attempting a peer-to-peer network, which is a little different.

If you mean that the server is going to have to send more data because there are more clients connected, then yes. But this should be accounted for in your MAX packet size calculations, so you can still support 56K for the client and allow the server to require more (which makes sense; the person hosting the server should have a faster internet connection).

Quote:

What does "to be in colo" mean?

Co-location: rather than have the player host a server locally on their machine, they host it on a machine at a server facility. This can be either their own box or one rented from a dedicated/VPS host.

These will have 100mbps connections or greater (upload).
_________________
Hard Rock
[The Stars Dev Company][Twitter]
xearthianx
Developer

Joined: 28 Sep 2006
Posts: 771
Location: USA! USA!
PostPosted: Fri May 22, 2009 6:09 pm

I have a few thoughts for you:

In regards to state vs fluff: sounds and eye candy can be event triggered. You should never need to send these if the client is receiving the events.

Quote:

Eventually, the clients receive the message that the shot was spawned, spawn it in their copy of the gamestate and continue to update it by the rules but the actual shot in reality(in the servers gamestate) is already somewhere else, because it has already been updated several times,

This is actually less of a problem than you think, if you design your client well. If you are familiar with how old mainframe terminals worked, or modern ssh connections, your local computer is basically a "dumb terminal". If you separate the game and simulation logic from the display logic, your client basically becomes a proxy for user input that displays feedback. It doesn't need to know or care who is "winning" and who is "dead", or even what that means; it just needs to know how and where to draw them. This decouples the client temporally from the server's simulation state, because if the client misses an update, it just draws things in the new location when new data comes in. The client doesn't need to know a bullet's velocity, just what it looks like and where it is.

As far as bandwidth concerns, you will want to minimize your updates as much as possible, and keep them as terse as you can. Your batching idea is a very good start, because it cuts out a lot of the per-message overhead. In some situations, you can cut this down further by only sending updates relevant to that particular player, e.g. you don't need to care what's going on behind a closed door. That may not be applicable to your tank game where everyone can see everything, but it's something to think about. You should also reconsider your plan to use only "whole" data types, because that's a lot of wasted bits. If bandwidth is really a strong concern, you will have to get into clever bit-packing formats and other simple compression schemes. If you can cut your outbound data in half (a modest compression ratio), then you can have twice as much stuff going on.
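The "clever bit-packing" idea can be sketched generically like this (my own illustration, not any particular format from the thread; the field widths are made up):

```python
def pack_fields(fields):
    # fields: sequence of (value, bit_width) pairs; packs them back to
    # back into one integer, then into the minimum number of whole bytes.
    acc, total_bits = 0, 0
    for value, width in fields:
        acc = (acc << width) | (value & ((1 << width) - 1))
        total_bits += width
    return acc.to_bytes((total_bits + 7) // 8, "big")
```

Three fields of 1, 3 and 4 bits land in a single byte instead of the three bytes that "whole" types would cost.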

You're talking about TCP headers; you might want to stay away from TCP. A UDP header is only 64 bits, which shaves more than half of your header overhead right off the top. The lack of confirmations and resent packets decreases the reliability of any individual packet, but overall increases the protocol's data/metadata ratio. With this kind of connection you do what's sometimes called "spray 'n' pray": just send out a barrage of datagrams, knowing that most of them will probably get through. For more critical updates, like spawning a new entity, you can manually send acknowledgement packets, or open a TCP socket on another port (but I probably wouldn't bother doing this).
_________________
Ionoclast Laboratories - Scientia et Dominatia!
innerlogic
Newbie

Joined: 27 Dec 2007
Posts: 8

PostPosted: Fri Jun 05, 2009 9:28 pm    Post subject: Deterministic gameplay

One other approach you might want to look into is a deterministic gameplay model, which can simplify communications quite a bit. In a deterministic game, the current game state at any point in time is a function of the initial state plus all the user inputs (both local and remote) along the way.

So for a basic tank game, the initial state would probably consist of which map to use, position/orientation and stats for each tank, and a random seed (so that any random numbers generated in the game logic are predictably identical on all clients).

Every simulation frame (the frequency of which is determined to balance responsiveness and bandwidth needs), every client samples its inputs and bundles them into a packet including the frame number and transmits to the game server, which retransmits the input data for each client to all other connected clients. For this game, input data might be a packed binary structure of input flags such as fireButtonPressed, moveForward, rotateLeft, rotateRight.

Once each client has all the other clients' inputs for a frame, it can run the simulation logic for all the tanks and determine who's firing, where each projectile is, who's hit, who's dead, etc. This part is a little tricky since you're going to be sampling and transmitting inputs for frame n + d, while you're currently simulating frame n (where d represents the number of frames your simulation lags your input due to network latency in receiving the other players' inputs, and may vary over time). Since the simulation logic is deterministic, each client has exactly the same view of the world in every frame even though they've never explicitly shared the world state beyond the initial start conditions.

Because each client runs the simulation logic independently, you really don't need the server-side validation any more: each client could provide a simple checksum of the current game state with its inputs, and if any of the checksums don't agree, somebody's cheating (assuming there's not some bit of non-determinism left in your game logic). So your server becomes a much simpler data relay with this approach, and your data packets are potentially tiny if you're sending a reasonable set of inputs.
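A minimal sketch of those pieces (the struct layout, flag names and checksum choice are all illustrative assumptions, not anything prescribed in the thread):

```python
import struct
import zlib

# One input sample: 4-byte frame number plus 1 byte of input flags.
INPUT = struct.Struct(">IB")

# Illustrative flag names; the post only says "packed binary structure".
FIRE, FORWARD, LEFT, RIGHT = 1, 2, 4, 8

def pack_input(frame, flags):
    return INPUT.pack(frame, flags)

def state_checksum(state_bytes):
    # Cheap agreement check: clients with identical simulations produce
    # identical checksums; a mismatch signals divergence (or cheating).
    return zlib.crc32(state_bytes)
```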

Anyways, I know that's a pretty high-level rundown, but this is one of my favorite topics and I'd be happy to dive into more details if you have any interest.
0xDB
Developer

Joined: 26 Dec 2005
Posts: 1670
Location: Your consciousness.
PostPosted: Sun Jun 07, 2009 4:26 am

Well, I still haven't written any code or worked on this since my last post (insert the good old lack-of-time excuse), but today I've taken some time to read this thread and to share some thoughts on the input given.

Quote:
In regards to state vs fluff: sounds and eye candy can be event triggered. You should never need to send these if the client is receiving the events.
Yes, that was the idea.

Quote:
If you separate out the game and simulation logic from the display logic, your client basically becomes a proxy for user input, and displays feedback.[..]The client doesn't need to know a bullet's velocity, just what it looks like and where it is.
Whereas it is true that the client does not really need to know that, I think it would help make any lag less noticeable, and it will reduce the number of messages needed (positional updates for objects travelling at the same speed in the same direction all the time don't need to be sent).

The client only needs to know when and where a shot spawned, and when and where it vanishes or hits something. The server will tell the client about these events.
Along the way though, the client does not even need to receive positional updates for the shot, because it can calculate them by itself, knowing the movement vector and the speed of the shot, which were sent to it in the spawning event.
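That "calculate it yourself from the spawn event" idea in a nutshell (a sketch; the names are made up and time is measured in logic updates):

```python
def shot_position(spawn_pos, velocity, spawn_update, current_update):
    # Dead reckoning: the position follows deterministically from the
    # spawn event, so no per-frame updates are needed for the shot.
    dt = current_update - spawn_update
    x0, y0 = spawn_pos
    vx, vy = velocity
    return (x0 + vx * dt, y0 + vy * dt)
```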

Quote:
In some situations, you can cut this down further by only sending updates relevant to that particular player, e.g. don't need to care what's going on behind a closed door. May not be applicable to your tank game where everyone can see everything
No, that's an interesting thought. If I have maps larger than a single screen (and I most certainly will, because for sixteen players a small map like that would be too crowded) and no full-map minimap or unlimited-range radar, I only need to send messages regarding objects within the visible/radar range of the respective player.

Quote:
..clever bit-packing formats..
This could be tricky. The only way I can imagine to reduce the bits needed to store a certain pattern of bits would be if some patterns occur more often than others, e.g. if the bit patterns 0101 and 0110 were used often, I could define a 0 in the packed message to expand to 0101 and a 1 to expand to 0110.

For this it would be necessary to have an unoptimized, unpacked version of the message sending/receiving system at first, and then I'd have to add a monitor that records all messages and counts the occurrences of all possible bit patterns across a large number of differently played games. Based on that data it would then be possible to make optimizations (or not, depending on whether certain patterns really occur more often; there'd still have to be a way to send uncompressed patterns for those that are 'irregular', as in less likely to occur than the others).

I honestly don't think, though, that certain patterns are more likely to occur than others, and having 256 different patterns map to a single byte... well, that would just be the same as storing the byte's value directly.

The other approach would be run-length encoding of bits, but I cannot imagine that being efficient for this type of data, as it is only efficient when there are long continuous runs of uninterrupted ones and zeros or repeating patterns of bits (like those found in image files with large areas of the same color).

Quote:
One other approach you might want look into is a deterministic gameplay model, which can simplify communications quite a bit. In a deterministic game, the current game state at any point in time is a function of the initial state plus all the user inputs (both local and remote) along the way.

Basically, that's what I am already aiming to do. The only difference is that in my current concept I won't send ALL input states at each logic update; I'd just send the allowed changes to each client after another client requested some action (e.g. firing a shot), and these messages would already include the logic frame at which the change occurred, so each client is capable of calculating the new state from that.

Quote:
Every simulation frame (the frequency of which is determined to balance responsiveness and bandwidth needs), every client samples its inputs and bundles them into a packet including the frame number and transmits to the game server, which retransmits the input data for each client to all other connected clients. For this game, input data might be a packed binary structure of input flags such as fireButtonPressed, moveForward, rotateLeft, rotateRight.
For TankGame 1.0c, these flags would be: toggleEngine, fire, rotateTankLeft, rotateTankRight, rotateGunRight, rotateGunLeft, and for a networked version leaveGame and pauseGame might get added, so that would be 8 bit flags for the input state at any given logic frame.

So, for 30 logic updates per second (that's what TankGame 1.0c does), that would be 30 bytes per second from each client to the server, which can be sent in a single message at the end of each second.
So the server needs to receive 30 * 16 = 480 bytes per second, or 3840 bits, plus 16 * 64 bits for the UDP packet heads, which amounts to 4864 bits per second.

And for a maximum of 16 clients, each client needs to receive the input states of the fifteen other clients, so the server has to send (30*15)*16 = 7200 bytes per second, or 57,600 bits, plus 16*64 bits for the UDP packet heads, so the server needs to send 58,624 bits per second.

Disregarding actual technical limitations of a standard 56K modem and network quality, let's assume for a moment that it can transfer about 23,000 bits per second simultaneously in each direction (send/receive). Receiving the inputs (4864 bits per second) would be no problem at all, but sending 58,624 bits per second is more than twice what the upload can take, so a machine equipped with it could not host a full 16 player TankGame server. Running the same numbers for smaller games, about ten players would still fit into the upload. Did I overlook something? Is my calculation wrong somewhere?

The packets are so tiny that I still think it's worth investigating this approach more.
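Redoing that upload arithmetic as a quick script, keeping bytes and bits carefully apart (the helper name is mine; inputs are assumed batched once per second per client):

```python
LUPS = 30             # logic updates per second (TankGame 1.0c)
UDP_HEADER_BITS = 64  # 8-byte UDP header per batched packet

def lockstep_send_bits(players):
    # Upload the server needs per second: one input byte per frame from
    # each of the other (players - 1) clients, relayed to every one of
    # the players, plus one UDP header per client per second.
    payload_bytes = LUPS * (players - 1) * players
    return payload_bytes * 8 + players * UDP_HEADER_BITS
```

With the 23,000 bit/s assumption, a full sixteen-player game (58,624 bits/s) does not fit, while ten players (22,240 bits/s) just about would.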

...
(took a break here before I continued to write)
...

Aha! I neglected the fact that each message needs to include the logic frame at which the input state was recorded, as the messages are not guaranteed to reach the receiver in the same sequence they were sent.
In this case, since I send input states bundled for each whole second, I can get away with sending only the logic frame of the first input state in that message. That would not add much overhead, so the calculation above still seems accurate.

But there is still, just as in the other concept, the problem of dealing with lag. The biggest problem I see here though is that clients have to actively wait for input to continue the simulation. If one or more clients are lagging too much, all others will be negatively affected as they have to wait for the input of each other client before they can calculate the next gamestates.

I think this is easier to deal with in the other concept, in which the server evaluates everything and sends only the necessary changes/updates/events to the clients, which can then just make the required changes to all the well identified objects affected and continue to simulate everything else without having to wait for input.

-----
Well well, I should probably just start writing some code, but unfortunately I don't know where to find the time for that, so all I can do for now is keep thinking and day-dreaming about it.
_________________
0xDB
xearthianx
Developer

Joined: 28 Sep 2006
Posts: 771
Location: USA! USA!
PostPosted: Sun Jun 07, 2009 3:54 pm    Post subject: Reply with quote

Dennis wrote:
But there is still, just as in the other concept, the problem of dealing with lag. The biggest problem I see here though is that clients have to actively wait for input to continue the simulation. If one or more clients are lagging too much, all others will be negatively affected as they have to wait for the input of each other client before they can calculate the next gamestates.

Nah, that won't be a problem. The server (and therefore the other clients) just keeps going. If one of a client's update packets arrives late or not at all, the server can just drop it, or alternatively apply it in tandem with that client's new input for the next frame to play "catch up". This will cause laggy clients to either pause from time to time, or "jitter" as their missing inputs are applied to catch them up. Not pretty, but it keeps everyone else moving along. Of course, from their perspective, it'll be everyone else who's jumping around, but there's nothing you can really do about that.
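In code, that drop-or-catch-up policy might look something like this rough sketch (the names are mine, and apply_input stands in for the real per-frame game logic):

```c
/* Sketch of the catch-up policy: inputs that arrived late are queued,
 * then applied together with the current frame's input in one logic
 * frame, so the laggy client "jitters" forward instead of stalling
 * everyone else. apply_input() is a stand-in for game logic. */
#define MAX_BACKLOG 8

typedef struct {
    int pending[MAX_BACKLOG]; /* late-arriving inputs, oldest first */
    int count;
} InputBacklog;

static int applied_total; /* demo stand-in for effects on the gamestate */

static void apply_input(int input) { applied_total += input; }

static void catch_up(InputBacklog *b, int newest_input)
{
    for (int i = 0; i < b->count; i++)
        apply_input(b->pending[i]); /* replay the backlog */
    b->count = 0;                   /* backlog drained */
    apply_input(newest_input);      /* then this frame's input */
}
```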
_________________
Ionoclast Laboratories - Scientia et Dominatia!
xearthianx
Developer

Joined: 28 Sep 2006
Posts: 771
Location: USA! USA!
PostPosted: Mon Jun 08, 2009 11:05 am    Post subject: Reply with quote

Here's an article from Game Developer's "Inner Product" column back in 2002. It talks about bit-packing integer values, as well as compressing them even further with sub-bit precision using a clever multiplication method. It's suitable for trimming down network bandwidth, space-limited storage, and any other situation that requires a cheap but effective compression method. (He shrinks 8 bytes down to 3!) Includes source code.

http://number-none.com/product/Packing%20Integers/index.html
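The core trick, as I understand it, is mixed-radix packing: multiplying by each value's actual range instead of shifting by whole bits. A minimal sketch (my own code, not the article's):

```c
#include <stdint.h>

/* Pack three values with arbitrary ranges into one integer using
 * mixed-radix multiplication. Value i must satisfy 0 <= v[i] < range[i].
 * A value with range n consumes only log2(n) "bits" of the result,
 * even when n is not a power of two. */
static uint32_t pack3(const uint32_t v[3], const uint32_t range[3])
{
    /* Horner-style accumulation: the last value varies fastest. */
    return (v[0] * range[1] + v[1]) * range[2] + v[2];
}

static void unpack3(uint32_t packed, const uint32_t range[3], uint32_t out[3])
{
    out[2] = packed % range[2]; packed /= range[2];
    out[1] = packed % range[1]; packed /= range[1];
    out[0] = packed;
}
```

For example, three values with ranges 6, 3 and 10 need only 6*3*10 = 180 distinct codes, so they fit in a single byte, whereas naive bit-packing would spend 3+2+4 = 9 bits on them.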

And here's an article from Gamasutra from 2004 discussing general networking architecture and technical issues. I've only skimmed it, but it looks like the guy's saying a lot of the same things we are.

http://www.gamasutra.com/features/20041206/jenkins_01.shtml
_________________
Ionoclast Laboratories - Scientia et Dominatia!
Sirocco
Moderator

Joined: 19 Aug 2005
Posts: 9459
Location: Not Finland
PostPosted: Mon Jun 08, 2009 12:29 pm    Post subject: Reply with quote

Hmm... that bit packing tut was pretty clever.

I ended up using a bunch of bit fu setting up the data structures for FB: Cry Havoc. I wanted the maps to be really small on disk, so I had to get rather creative about how I stored things. Stuffing data into unused parts of bytes is a good start. I took advantage of the map's dimensions and packed three values into a single byte, allowing me to stuff a bunch of metadata into each block at a fraction of the cost. And at the time I was just really bored.
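For anyone curious, the byte-stuffing idea looks roughly like this (a generic 3+3+2-bit example of the technique, not the actual Cry Havoc layout):

```c
#include <stdint.h>

/* Stuff three small values into one byte with plain bit shifts:
 * a in [0,8) takes 3 bits, b in [0,8) takes 3 bits, c in [0,4)
 * takes 2 bits, for 8 bits total. */
static uint8_t pack_byte(unsigned a, unsigned b, unsigned c)
{
    return (uint8_t)((a << 5) | (b << 2) | c);
}

static void unpack_byte(uint8_t packed, unsigned *a, unsigned *b, unsigned *c)
{
    *a = (packed >> 5) & 0x7; /* top 3 bits */
    *b = (packed >> 2) & 0x7; /* middle 3 bits */
    *c = packed & 0x3;        /* bottom 2 bits */
}
```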
_________________
NoOP / Reyn Time -- The $ is screwing everyone these days. (0xDB)
innerlogic
Newbie

Joined: 27 Dec 2007
Posts: 8

PostPosted: Mon Jun 08, 2009 4:13 pm    Post subject: Reply with quote

Dennis wrote:

But there is still, just as in the other concept, the problem of dealing with lag. The biggest problem I see here though is that clients have to actively wait for input to continue the simulation. If one or more clients are lagging too much, all others will be negatively affected as they have to wait for the input of each other client before they can calculate the next gamestates.

I think this is easier to deal with in the other concept, in which the server evaluates everything and sends only the necessary changes/updates/events to the clients, which can then just make the required changes to all the well identified objects affected and continue to simulate everything else without having to wait for input.


Yeah, I think that's a pretty good summary of the tradeoffs involved. I'd compare your approach to a typical MMORPG-style multiplayer protocol, where the server actively tracks state and is the final arbiter of all object state. It can definitely degrade more gracefully when some players have worse connections, since you can enact a policy where inputs not received by some frame cutoff time are simply discarded by the server, and only that client is negatively affected, as xearthianx pointed out. UDP might be a little tricky if the server is just transmitting deltas to the clients, since packets may be lost or rearranged, but I suspect you can come up with some clever ACK system to send along with the input state each frame to handle that. Looking forward to hearing more about what you come up with.
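One common shape for that kind of ACK system is to piggyback acknowledgements on every packet: the newest sequence number received from the peer plus a bitmask acknowledging the 32 packets before it. A sketch of the bookkeeping (my own sketch of the general scheme, not a finished protocol):

```c
#include <stdint.h>

/* Piggybacked ACK header: latest_seq is the newest sequence number
 * received; bit i of ack_bits set means (latest_seq - 1 - i) was
 * also received. Initializing to {0, 0} treats sequence 0 as the
 * starting point. */
typedef struct {
    uint32_t latest_seq;
    uint32_t ack_bits;
} AckHeader;

/* Record receipt of `seq` in the header we will send back. */
static void ack_note(AckHeader *h, uint32_t seq)
{
    if (seq > h->latest_seq) {
        uint32_t shift = seq - h->latest_seq;
        h->ack_bits = (shift >= 32) ? 0 : (h->ack_bits << shift);
        if (shift <= 32)
            h->ack_bits |= 1u << (shift - 1); /* old latest now in mask */
        h->latest_seq = seq;
    } else if (seq < h->latest_seq) {
        uint32_t back = h->latest_seq - seq - 1;
        if (back < 32)
            h->ack_bits |= 1u << back; /* late but within the window */
    }
}

/* Does this header acknowledge packet `seq`? */
static int ack_contains(const AckHeader *h, uint32_t seq)
{
    if (seq == h->latest_seq) return 1;
    if (seq < h->latest_seq && h->latest_seq - seq - 1 < 32)
        return (int)((h->ack_bits >> (h->latest_seq - seq - 1)) & 1u);
    return 0;
}
```

The server resends (or folds into the next delta) anything it sent that never shows up in the client's returned ACK window.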
xearthianx
Developer

Joined: 28 Sep 2006
Posts: 771
Location: USA! USA!
PostPosted: Sun Jun 14, 2009 2:12 am    Post subject: Reply with quote

I just had a thought for how to deal with laggy or unreliable connections. Every so often the server can send out a checksum packet that says, "As of logic frame XXX, your gamestate should hash to 0xABADFACE". Everyone will know which predefined gamestate values to check against, run a fast checksum on them, and compare. Clients that don't match can request a state refresh from the server.

The drawback is that you wouldn't know what information was missing, so you would have to rebuild the entire relevant state from scratch (which would be relatively small once a game was already established). But you would have a reliable way of detecting that a client was out of sync.
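A cheap hash for this could be something like 32-bit FNV-1a over the serialized gamestate (my example choice of hash; the key point is that everyone hashes the same explicitly serialized bytes):

```c
#include <stdint.h>
#include <stddef.h>

/* 32-bit FNV-1a over an explicitly serialized gamestate buffer.
 * Hash serialized bytes, never raw structs: struct padding differs
 * between compilers and would trigger false desync alarms. */
static uint32_t fnv1a(const uint8_t *data, size_t len)
{
    uint32_t h = 2166136261u;  /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 16777619u;        /* FNV prime */
    }
    return h;
}
```

The server would then broadcast (frame, fnv1a(state_bytes, len)), and a client that computes a different hash for that frame requests a state refresh.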
_________________
Ionoclast Laboratories - Scientia et Dominatia!
Madgarden
Contributor

Joined: 31 Aug 2005
Posts: 324
Location: Kitchener, ON, CA
PostPosted: Sun Jun 14, 2009 7:59 am    Post subject: Reply with quote

The Quake 3 networking model seems like a decent and simple solution. Here's an article on it:
http://trac.bookofhook.com/bookofhook/trac.cgi/wiki/Quake3Networking

And here's someone's dissection of the protocol:
http://www.tilion.org.uk/Games/Quake_3/Network_Protocol
_________________
I know it sounds crazy, but it JUST MIGHT WORK
Bean
Admin

Joined: 20 Aug 2005
Posts: 3776

PostPosted: Sun Jun 14, 2009 8:36 am    Post subject: Reply with quote

Very interesting read, thanks!


-Bean
_________________
Kevin Reems | Nuclear Playground | Solid Driver
xearthianx
Developer

Joined: 28 Sep 2006
Posts: 771
Location: USA! USA!
PostPosted: Sun Jun 14, 2009 8:58 pm    Post subject: Reply with quote

Isn't the code for Q3A GPL now? You could take a look at the source files themselves, for the real nitty-gritty.
_________________
Ionoclast Laboratories - Scientia et Dominatia!
All trademarks and copyrights on this page are owned by their respective owners. All comments owned by their respective posters.
phpBB code © 2001, 2005 phpBB Group. Other message board code © Kevin Reems.