Best way to code a real-time multiplayer game

I'm not sure if the term "real-time" is being misused here, but the idea is that many players on a server each have a city producing n resources per second. There might be a thousand such cities. What's the best way to credit all of the player cities?
Is the best way a loop like the following, run inside an infinite loop whenever the game is "live"? (Please ignore the obvious faults with such simplistic logic.)
foreach (City c in AllCities)
{
    if (c.lastTouched < DateTime.Now.AddSeconds(-10))
    {
        // TotalSeconds, not Seconds: Seconds is just the 0-59 component
        c.resources += (DateTime.Now - c.lastTouched).TotalSeconds * c.resourcesPerSecond;
        c.lastTouched = DateTime.Now;
        c.saveChanges();
    }
}

I don't think you want a busy infinite loop, as that would waste a lot of CPU cycles. This is basically a common simulation situation (Wikipedia: Simulation Software), and there are a few approaches I can think of:
1. A discrete-time approach, where you increment the clock by a fixed step and recalculate the state of your system. This is similar to your approach above, except that the calculation runs periodically and the 10-second if clause is removed.
2. A discrete-event approach, where you keep a central event queue, each event carrying a timestamp, sorted by time. You sleep until the next event is due and then dispatch it; e.g. an event could mean adding a single resource. (Wikipedia: Discrete Event Simulation)
3. A lazy approach: whenever someone asks for the number of resources, calculate it from the rate, the initial time, and the current time. This can be very efficient when queries are expected to be rare relative to the number of cities and the elapsed time.
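The last, on-demand approach can be sketched like this (a minimal Python sketch; the City class and its field names are illustrative, not from the question's codebase):

```python
import time

class City:
    """Lazy resource accrual: nothing ticks in the background;
    the balance is derived from elapsed time on every read."""
    def __init__(self, resources_per_second, now=None):
        self.resources_per_second = resources_per_second
        self.base_resources = 0.0
        self.last_touched = time.time() if now is None else now

    def resources(self, now=None):
        """Current balance, computed on demand from the accrual rate."""
        now = time.time() if now is None else now
        return self.base_resources + (now - self.last_touched) * self.resources_per_second

    def spend(self, amount, now=None):
        """Materialize the accrued balance, then deduct from it."""
        now = time.time() if now is None else now
        balance = self.resources(now)
        if amount > balance:
            raise ValueError("insufficient resources")
        self.base_resources = balance - amount
        self.last_touched = now
```

With this, a thousand idle cities cost nothing; work is only done when a city is actually read or mutated.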

While you can store the last-ticked time per object, as in your example, it's often easier to just have a global timestep:
while (1) {
    currentTime = now();
    dt = currentTime - lastUpdateTime;
    foreach (whatever)
        whatever.update(dt);
    lastUpdateTime = currentTime;
}
If you have subsystems that don't need such frequent updates:
while (1) {
    currentTime = now();
    dt = currentTime - lastUpdateTime;
    subsystem.timer += dt;
    while (subsystem.timer > subsystem.updatePeriod) { // careful: subsystem.update()
        subsystem.timer -= subsystem.updatePeriod;     // must run faster than
        subsystem.update(subsystem.updatePeriod);      // subsystem.updatePeriod,
    }                                                  // or this loop never catches up
    // ...
}
(which you'll notice is pretty much what you were doing on a per city basis)
Another gotcha is that with different subsystem clock rates you can get overlaps (i.e. several subsystems ticking in the same frame), leading to inconsistent frame times, which can sometimes be an issue.
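The fixed-timestep accumulator described above can be sketched in Python (function and parameter names are illustrative):

```python
def advance(accumulator, dt, update_period, update):
    """Accumulate variable frame time and fire fixed-period updates.
    Returns the leftover time to carry into the next frame."""
    accumulator += dt
    while accumulator >= update_period:
        accumulator -= update_period
        update(update_period)  # the subsystem always sees a constant dt
    return accumulator
```

Note that update() must, on average, complete faster than update_period, or the inner loop falls further behind every frame.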


What is the Use Case of `extrapolate` in `Akka-Streams`?

I just tried conflate and extrapolate in akka-streams.
While conflate makes total sense to me, I don't get the use case for extrapolate.
Why should we add more work for the downstream when the upstream does not demand it?
From the Scala Doc:
Allows a faster downstream to progress independent of a slower upstream.
For one example:
Game development
In video games, it's common to have at least two "loops": a logic/game loop and a rendering loop. Usually, the rate of the game loop (the "tick rate") is slower than the rate of the rendering loop (the "frame rate"). For example, a logic tick might occur 10 times a second, but the frame rate should usually be at least 60 frames per second. In order for there to be something to render in between ticks, game developers use either extrapolation or interpolation. As you might have guessed, the extrapolate function is well suited for extrapolation. Here's an example with a tick rate of 10 ticks per second and no frame rate limit:
Source.tick(0.millis, 100.millis, 0)
  .scan(initialGameState) { (g, _) => tick(g) }
  .extrapolate(extrapolateFn)
  .runForeach(render)
Now extrapolateFn just needs to return an iterator that provides extrapolated game states on demand:
def extrapolateFn(g: GameState) = Iterator.continually {
  // Compute how long it has been since `g` was created
  // Advance the state by that amount of time
  // Return the new state
}
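The same idea can be sketched outside Akka (Python; the dict-based game state with pos/vel fields and the injectable clock are assumptions for illustration, not Akka's API). On every downstream pull, the iterator advances the last real tick by the wall time elapsed since it:

```python
def make_extrapolate_fn(now):
    """Build an extrapolateFn-style function: given the last ticked
    state, return an endless iterator of extrapolated states.
    `now` is an injectable clock, e.g. time.monotonic."""
    def extrapolate_fn(state):
        def advanced():
            dt = now() - state["tick_time"]
            # Simple dead reckoning: advance position by velocity * dt
            return {**state, "pos": state["pos"] + state["vel"] * dt}
        return iter(advanced, object())  # sentinel never matches: infinite iterator
    return extrapolate_fn
```

Each call to next() recomputes the elapsed time, so a 60 fps renderer pulling between 10 Hz ticks gets a freshly extrapolated state every frame.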

Why isn't Time.deltaTime equal to the sum of element.deltaTime in all the elements of Input.accelerationEvents?

Time.deltaTime gives you the time passed in the last frame.
Input.accelerationEvents contains an array of the last reads of the accelerometer and its time.
I'd guess that after
totalTime = 0;
foreach (AccelerationEvent element in Input.accelerationEvents){
totalTime +=element.deltaTime;
}
the result would be equal to Time.deltaTime, but it isn't. What am I missing?
The AccelerationEvent.deltaTime variable returns the amount of time since the last sampling of the device's accelerometer. However, this sampling is not guaranteed to be synchronized with the game's frame rate (even though both aim for 60Hz), so the sum of deltaTime over all Input.accelerationEvents in a frame may not equal the Time.deltaTime of that frame.
The Unity documentation mentions something to this effect:
[...] In reality, things are a little bit more complicated – accelerometer
sampling doesn’t occur at consistent time intervals, if under
significant CPU loads. As a result, the system might report 2 samples
during one frame, then 1 sample during the next frame.
One way to visualize this is with the following (assume each dash is one arbitrary unit of time):
Frames completed:
1-----2-----3-----4-----5-----6-----7-----8-----9-----
Accelerometer samples made:
1-----2-----3-----4------5-----6---7-----8-----9-----
Note that while frame6 is being completed, both sample6 and sample7 were made. However, although frame6.deltaTime = 5, the sum of sample6.deltaTime + sample7.deltaTime = 5 + 3 = 8. As a result, their times don't match up.
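The arithmetic from the diagram can be checked with a small sketch (Python; the timestamps are transcribed from the diagram above, and the helper function is illustrative, not Unity API):

```python
def samples_in_frame(frame_start, frame_end, sample_times):
    """Return the deltaTime of every sample whose timestamp falls
    inside the (frame_start, frame_end] interval."""
    deltas = []
    prev = None
    for t in sample_times:
        if prev is not None and frame_start < t <= frame_end:
            deltas.append(t - prev)
        prev = t
    return deltas

# Sample times from the diagram above (frames complete every 5 units);
# frame 6 spans (25, 30] and contains sample6 (delta 5) and sample7 (delta 3)
sample_times = [0, 5, 10, 15, 21, 26, 29, 34, 39]
```

Summing the sample deltas that land in frame 6 gives 8, while the frame's own deltaTime is 5, which is exactly the mismatch described above.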
Hope this helps! Let me know if you have any questions.
Here's what the Unity documentation says regarding the accelerometer: http://docs.unity3d.com/Manual/MobileInput.html
Unity samples the hardware at a frequency of 60Hz and stores the result into the variable. In reality, things are a little bit more complicated – accelerometer sampling doesn’t occur at consistent time intervals, if under significant CPU loads. As a result, the system might report 2 samples during one frame, then 1 sample during the next frame.
Also don't forget that
AccelerationEvent.deltaTime is the amount of time passed since the last accelerometer measurement,
and Time.deltaTime is the time in seconds it took to complete the last frame.
Those values are independent, and there is no reason for them to be equal to each other.

OpenGL vs. wxMathPlot for real time plotting in wxWidgets?

I am new to using wxWidgets and am looking to draw a continuous real-time plot. I have done some research into the real-time (continuous) plotting tools available for wxWidgets.
I am most interested in using wxMathPlot for the plotting, with mpFXYVector for feeding in points.
(I know there's the Numerix library as well, but there doesn't seem to be much documentation on it.)
However, I would like to do about 100 updates per second, i.e. 100 new points coming in each second.
Is this feasible using wxMathPlot with mpFXYVector, or would this approach be too slow?
Or should OpenGL be considered?
Displaying a real time graph of data updated at 100 Hz update using wxMathPlot is feasible.
The following is a screenshot of an app using three wxMathplot instances, all showing data that is being updated at 128Hz
If you want the sort of real time graph where the old data vanish on the left as the new data appear on the right, but the amount of data shown is constant at, say, the last 10 seconds, then mpFXYVector is awkward to use. For a discussion of how to deal with this, see this answer.
However, for a first pass, this framework should get you started
... in MyFrame constructor
vector<double> xs, ys;
mpFXYVector * data_layer;
mpWindow * graph;
// initialize
...
// Set up 100 Hz timer
...

void MyFrame::OnTimer( wxTimerEvent& )
{
    // add new data point(s)
    ...
    // copy data into layer
    data_layer->SetData(xs, ys);
    // redraw
    graph->Fit();
    /* Note that at this point we have requested a redraw,
       but it hasn't taken place yet.
       We need to return so that the windowing system can update the screen etc.
       We will be back in 1/100 sec to do it all again. */
}
So, now the question is, how fast can we make this scheme go? I have experimented with a timer event handler that does nothing more than keep track of how many times it is called.
void MyFrame::OnTimer2(wxTimerEvent& )
{
    static int ticks = 0;
    ticks++;
    static clock_t start;
    if (ticks == 1) {
        start = clock();
    }
    if (ticks == 1000) {
        double duration = (double)(clock() - start) / CLOCKS_PER_SEC;
        wxMessageBox(wxString::Format("1000 ticks in %f secs\n", duration));
        myTimer->Stop();
        return;
    }
}
The fastest I can make this go, on a powerful desktop, is 66 Hz.
Does this matter?
IMHO, it does not. There is no way a human eye can appreciate a graph being updated at 100Hz. The important thing is that the graph shows data acquired at 100Hz, without any losses or lags, but the graph display does not need to be updated so frequently.
So, in the app that produced the screenshot at the top of this answer and other similar apps I have developed, the graph is being updated at 10Hz, which the wxWidgets framework has no trouble maintaining, and the data acquisition is occurring at a much higher frequency in another thread or process. Ten times per second, the graph display code copies the data that has been acquired in the meantime and updates the display.
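That acquire-fast/display-slow split can be sketched like this (Python; the class name and sizes are illustrative):

```python
from collections import deque

class ScrollingBuffer:
    """Keep only the most recent `window` points; acquisition can push
    at 100 Hz while the display reads a snapshot at only 10 Hz."""
    def __init__(self, window):
        self.samples = deque(maxlen=window)  # old points fall off the left

    def push(self, x, y):
        self.samples.append((x, y))

    def snapshot(self):
        # Copy out separate x and y vectors, the shape SetData() wants
        xs = [p[0] for p in self.samples]
        ys = [p[1] for p in self.samples]
        return xs, ys
```

In the 10 Hz display handler you would pass the snapshot to mpFXYVector::SetData and request a redraw; the acquisition side never blocks on drawing.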

How to balance start time for a multiplayer game?

I'm making a multiplayer game with GameKit. My issue is that when two devices are connected, the game starts running with a slight time difference: one of the devices starts the game a bit later. But this is not what I want; I want it to start simultaneously on both devices. So the first thing I do is check the start time on both devices like this:
startTime = [NSDate timeIntervalSinceReferenceDate];
and this is how it looks:
361194394.193559
Then I send the startTime value to the other device and compare the received value with that device's own startTime.
- (void)balanceTime:(double)partnerTime
{
    double time_diff = startTime - partnerTime;
    if (time_diff < 0)
        startTimeOut = -time_diff;
}
So if the difference between the two start times is negative, it means this device is starting earlier and therefore has to wait for exactly that difference, which is assigned to the startTimeOut variable (a double, usually something like 2.602417). So then I pause my game in my update method:
- (void)update:(ccTime)dt
{
    if (startTimeOut > 0)
    {
        NSLog(@"START TIME OUT %f", startTimeOut);
        startTimeOut -= dt;
        return;
    }
}
But unfortunately it doesn't help; it even widens the difference between the start times of the devices, and I just can't see why. Everything I'm doing seems reasonable. What am I doing wrong? How do I correct it? What would you do? Thanks a lot.
As Almo commented, it is not possible to synchronize two devices to exactly the same time; even getting two devices to within a tenth of a second of each other is not a trivial task. In addition, time synchronization has to happen more or less regularly, since the clocks in the devices drift ever so slightly apart (i.e. one runs a teeny bit faster or slower than the other).
You also have to consider the lag introduced by sending data over WiFi, Bluetooth, or the cellular network. This lag is not constant: it can be 10ms in one frame and 1000ms in another. You can't cancel out lag, nor can you predict it. But you can predict player movements.
The solution for games, or at least one of them, is client-side prediction and dead reckoning. This SO question has a few links of interest.
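If you do still want a shared "start at time T" agreement, one common building block (not GameKit-specific; this is a sketch of the classic NTP-style offset estimate) is to exchange timestamps and estimate the clock offset, assuming roughly symmetric network delay:

```python
def estimate_offset(t0, t1, t2, t3):
    """NTP-style clock-offset estimate between two devices.
    t0: request sent   (client clock)   t1: request received (server clock)
    t2: reply sent     (server clock)   t3: reply received   (client clock)
    Units are whatever the timestamps use (e.g. milliseconds).
    Assumes the network delay is roughly symmetric in each direction."""
    return ((t1 - t0) + (t2 - t3)) / 2.0
```

One ping/pong exchange yields one offset sample; averaging several samples (and discarding those with a large round-trip time) gives a usable estimate, after which one device can schedule "start at partner time T" instead of "start now".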

iphone BPM tempo button

I want to create a button that allows the user to tap on it and thereby set a beats-per-minute value. I will also have touches moved up and down on it to adjust faster and slower (I have already worked out this bit).
What are some appropriate ways to record the times the user has tapped the button, so I can average the time between presses and thereby work out a tempo?
Overall
Don't use time() from time.h for this: time_t is not guaranteed to be double precision (on most platforms it's an integer count of seconds), so time() only has one-second resolution, which is far too coarse for beat timing. On iOS, a high-resolution clock such as CACurrentMediaTime() or mach_absolute_time() is a better fit; creating an NSDate per tap also works in practice at tap rates.
Use the whole screen for this; don't just give the user one small button.
Two ideas
Post-process
Store all tap times in an array.
Trim the result: remove elements from the start and end that are more than a threshold away from the average.
Get the average of the remaining values; that's your tempo.
If it's close to a common value, snap to that.
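The post-process idea can be sketched in Python (names are illustrative; as a variation, this uses the median rather than the mean as the trim reference, since one wildly long mistap skews a mean badly):

```python
from statistics import median

def tempo_from_taps(tap_times, tolerance=0.5):
    """Estimate BPM from a list of tap timestamps (in seconds).
    Intervals more than `tolerance` * median away from the median
    interval are treated as mistaps and trimmed before averaging."""
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    m = median(intervals)
    kept = [iv for iv in intervals if abs(iv - m) <= tolerance * m]
    return 60.0 / (sum(kept) / len(kept))
```

For taps every 0.5 s with one missed beat in the middle, the 1.5 s outlier is trimmed and the estimate comes out at 120 BPM.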
Adaptive
Use 2 variables. One is called speed and the other error.
After the first 2 beats calculate the estimated speed, set error to speed.
After each beat
queue = Fifo(5)                     # first-in, first-out queue; try
                                    # different values for the length
currentBeat = now - timeOfLastBeat
currentError = |speed - currentBeat|
# adapt
error = (error + currentError) / 2  # you have to experiment with how much
                                    # weight currentError should have
queue.push(currentBeat)             # push the newest interval on the queue;
                                    # the oldest is removed automatically
speed = average(queue)
As soon as error gets smaller than a certain threshold you can stop and tell the user you've determined the speed.
Go crazy with the interface. Make the screen flash whenever the user taps. Extra sparks for a tap that is nearly identical to the expected time.
Make the background color correspond to the error. Make it brighter the smaller the error gets.
Each time the button is pressed, store the current date/time (with [NSDate date]). Then, the next time it's pressed, you can calculate the difference with -[previousDate timeIntervalSinceNow] (negative, because it measures from the current date back to the previous one), which gives you the number of seconds between taps.