Real-time in the context of a game

I have a problem grokking the concept of real-time (IMO badly named, with different meanings in different contexts). I understand real-time software as software where time is a key variable. Events must occur at a given time. Say, a railway switch changes at 15:02 and the next one must change at 15:05 no matter what.
But how about this example: in a game, when the player's FPS drops below 16, the game exits and tells the user to upgrade their hardware or kill other applications. So when one iteration of the game loop takes more than 1/16 of a second, the output of the program is completely different.
Is it real-time(ish)? Can it be considered real-time computing?

Your question is hard to understand: are you referring to real-time computing, simulating real time, or something completely different?
Simulating real time: it is possible to simulate real time in a game by polling for events. Store the time of each event, and then when it comes time to render a frame, the game should repeatedly 'fast forward' by moving the current time to the time of the next event and handling that event. This should repeat until there are no more events, or the time is 'current'.
This requires anything that is a function of time (such as velocity, position, or acceleration) to be calculated from the current time. This means you would not have these attributes periodically updated, and it allows your game to be deterministic, as the 'game time' is no longer dependent upon real time. It also makes things like game speed and pausing very simple to implement.
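As a rough sketch of that polling/fast-forward idea (the names SimEvent, Simulation, Schedule and FastForwardTo are made up for illustration, not from the answer):
using System;
using System.Collections.Generic;

class SimEvent
{
    public double Time;       // simulation time at which the event occurs
    public Action Apply;      // what the event does to the game state
}

class Simulation
{
    private readonly List<SimEvent> pending = new List<SimEvent>();
    private double simTime;   // current simulation ("game") time

    public void Schedule(SimEvent e)
    {
        pending.Add(e);
        pending.Sort((a, b) => a.Time.CompareTo(b.Time));   // keep events ordered by time
    }

    // Called once per rendered frame with the time we want to catch up to.
    public void FastForwardTo(double targetTime)
    {
        // Handle every event that falls before the target time, in order.
        while (pending.Count > 0 && pending[0].Time <= targetTime)
        {
            SimEvent next = pending[0];
            pending.RemoveAt(0);
            simTime = next.Time;   // jump straight to the event's time
            next.Apply();
        }
        simTime = targetTime;      // no events left before targetTime; time is now 'current'

        // Positions, velocities, etc. are then evaluated as functions of simTime
        // rather than being nudged forward every frame.
    }
}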
If you're referring to the concept of real-time systems, then I would say there's not enough information to determine whether that 'game loop' is 'real-time'. It depends on the operating environment of the game and the logic in the 'game loop'. According to Wikipedia, a real-time deadline must be met regardless of system load.

In the article Fix your Timestep!, which is rapidly approaching canonical status, Glenn Fiedler addresses numerous ways to handle this issue. While the article focuses primarily on physics, the key points are applicable to any system that represents a function of time, to wit, anything dealing with moving objects.
The executive summary of that article (which is well worth reading) is this:
You can make your physics deterministic (well, as much as can be achieved with imperfect input) by using discrete physics timesteps. It looks like this:
Render as fast as possible
Pass in a time delta that represents how long the previous frame took
Run as many whole physics steps as fit into that delta (delta divided by the fixed timestep, rounded down)
Store the remainder of the delta that you weren't able to process in an accumulator
That accumulator gets added to the next frame's time buffer. This requires some fine tuning such that temporary lag spikes due to e.g. a rapidly spinning player (which necessitates a lot of visibility determination over time) don't end up putting you in an inescapable time debt. If you wanted to intelligently guard against such an occurrence, you could have a sentry look for dangerous levels of accumulated time, which you could respond to by perhaps dropping a video frame.
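A minimal sketch of that loop, with placeholder names and a placeholder 60 Hz step (this is the shape of Fiedler's approach, not his exact code):
using System.Diagnostics;

class FixedStepLoop
{
    const double Timestep = 1.0 / 60.0;   // fixed physics step (placeholder value)
    double accumulator = 0.0;

    public void RunOnce(Stopwatch frameClock)
    {
        // How long the previous frame took, in seconds.
        double frameTime = frameClock.Elapsed.TotalSeconds;
        frameClock.Restart();

        accumulator += frameTime;

        // Consume the accumulated time in whole physics steps.
        while (accumulator >= Timestep)
        {
            StepPhysics(Timestep);        // always advances by exactly Timestep
            accumulator -= Timestep;
        }

        // The remainder (accumulator < Timestep) is carried into the next frame.
        Render();
    }

    void StepPhysics(double dt) { /* integrate positions, velocities, ... */ }
    void Render() { /* draw as fast as possible */ }
}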
Another advantage of using discrete timesteps is that they behave well in multiplayer games. If you have an authoritative server, or an authoritative node in a peer-to-peer configuration, the server can ensure that all clients' physics simulations are running on the same physics timeline. Discrete time blocks also simplify things in rollback-based multiplayer.
Edit:
Disclaimer: I've never written real-time software myself, only worked at a company that did!
Regarding really-real, real-life real-time software: it's unlikely that anyone has made a game that would qualify, at least in software. (I'm not sure how one would classify games on ROMs or games that don't run under a host OS.) While your example would be an attempt at real-time software, most real-time software goes through a period of certification in which the maximum amount of time spent per instruction or per logical block of operations is determined. Games might come close to this in a sense when, for example, platform licensors have requirements (as I believe XBLA does) for a minimum of 30 fps or similar. However, these certifications are usually established through a period of testing rather than through mathematical proof.

Related

Unity - relate wall clock time to physics time (in fixed update)

I am involved in a project that is building software for a robot that uses ROS2 to support the robot's autonomy code. To streamline development, we are using a model of our robot built in Unity to simulate the physics. In Unity, we have analogues for the robot's sensors and actuators - the Unity sensors use Unity's physics state to generate readings that are published to ROS2 topics and the actuators subscribe to ROS2 topics and process messages that invoke the actuators and implement the physics outcomes of those actuators within Unity. Ultimately, we will deploy the (unmodified) autonomy software on a physical robot that has real sensors and actuators and uses the real world for the physics.
In ROS2, we are scripting with python and in Unity the scripting uses C#.
It is our understanding that, by design, the wall clock time that a Unity fixed update call executes has no direct correlation with the "physics" time associated with the fixed update. This makes sense to us - simulated physics can run out of synchronization with the real world and still give the right answer.
Some of our planning software (ROS2/python) wants to initiate an actuator at a particular time, expressed as floating point seconds since the (1970) epoch. For example, we might want to start decelerating at a particular time so that we end up stopped one meter from the target. Given the knowledge of the robot's speed and distance from the target (received from sensors), along with an understanding of the acceleration produced by the actuator, it is easy to plan the end of the maneuver and have the actuation instructions delivered to the actuator well in advance of when it needs to initiate. Note: we specifically don't want to hold back sending the actuation instructions until it is time to initiate, because of uncertainties in message latency, etc. - if we do that, we will never end up exactly where we intended.
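To make the planning step concrete (assuming constant speed until braking and constant deceleration, neither of which is stated above): with current speed v, current distance to the target d, braking deceleration a, and a desired stop point one meter from the target, the braking distance is v^2 / (2a), so braking should begin when the remaining distance to the stop point equals v^2 / (2a). That happens roughly (d - 1 - v^2/(2a)) / v seconds from now, so the planned start time is t_now + (d - 1 - v^2/(2a)) / v, and that instruction can be sent to the actuator well ahead of time.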
And in a similar fashion, we expect sensor readings that are published (in a fixed update in Unity/C#) to likewise be timestamped in floating point seconds since the epoch (e.g., the range to the target object was 10m at a particular recent time). We don't want to timestamp the sensor reading with the time it was received, because of unknown latency between the time the sensor value was current and the time it was received in our ROS2 node.
When our (Unity) simulated sensors publish a reading (based on the physics state during a fixed update call), we don't know what real-world/wall clock timestamp to associate with it - we don't know which 20ms of real time that particular fixed update corresponds to.
Likewise, when our Unity script that is associated with an actuator is holding a message that says to initiate actuation at a particular real-world time, we don't know whether that should happen in the current fixed update, because we don't know the real-world time that the fixed update corresponds to.
The Unity Time methods all seem to deal with time relative to the start of the game (basically, a dynamically determined epoch).
We have tried capturing the wall clock time and the time since game start in a MonoBehaviour's Start, but this seems to put us off by a handful of seconds once the fixed updates are running (with the exact time shift varying between runs).
How do we crosswalk between the Unity game-start-based epoch and a fixed epoch (e.g., 1970)?
An example: This code will publish the range to the target, along with the time of the measurement. This gets executed every 20ms by Unity.
void FixedUpdate()
{
    RangeMsg targetRange = new RangeMsg();
    // Timestamp: wall-clock time at the moment this fixed update happens to run,
    // not the simulation time the physics state represents.
    targetRange.time_s = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds() / 1000.0;
    targetRange.range_m = Vector3.Distance(target.transform.position, chaser.transform.position);
    ros.Publish(topicName, targetRange);
}
On the receiving end, let's say that we are calculating the speed toward the target:
def handle_range(self, msg):
    if self.last_range is not None:
        diff_s = msg.time_s - self.last_range.time_s
        if diff_s != 0.0:
            diff_range_m = self.last_range.range_m - msg.range_m
            speed = Speed()
            speed.time_s = msg.time_s
            speed.speed_mps = diff_range_m / diff_s
            self.publisher.publish(speed)
    self.last_range = msg
If the messages are really published exactly every 20ms, then this all works. But if Unity gets behind and runs several fixed updates one after another to get caught up, then the speed gets computed as much higher than it should (because each cycle, 20ms of movement is applied, but the cycles may be executed within a millisecond of each other).
If instead we use Unity's time for timestamping the messages with
targetRange.time_s = Time.fixedTimeAsDouble;
then the range and time stay in sync and the speed calculation works great, even in the face of some major hiccup in Unity processing. But then the rest of our code, which lives in the 1970 epoch, has no idea what time targetRange.time_s really is.
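Here is a minimal sketch of one possible crosswalk, added for illustration rather than taken from the question. The names (EpochCrosswalk, ToUnixSeconds) are made up, and the scheme assumes that anchoring the offset in the first FixedUpdate (instead of in Start, before the fixed timeline is advancing) is good enough; if Unity stalls and then catches up, the converted timestamps will still only approximate real-world time.
using System;
using UnityEngine;

public class EpochCrosswalk : MonoBehaviour
{
    // Seconds to add to Time.fixedTimeAsDouble to get seconds since the 1970 epoch.
    public static double UnixOffset = double.NaN;

    void FixedUpdate()
    {
        if (double.IsNaN(UnixOffset))
        {
            double unixNow = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds() / 1000.0;
            UnixOffset = unixNow - Time.fixedTimeAsDouble;   // anchor both clocks at the same step
        }
    }

    public static double ToUnixSeconds(double fixedTimeSeconds) => UnixOffset + fixedTimeSeconds;
}

// Hypothetical usage when publishing (in place of the wall-clock timestamp above):
//   targetRange.time_s = EpochCrosswalk.ToUnixSeconds(Time.fixedTimeAsDouble);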

Why are computer/game physics engines often non-deterministic?

Developing with various game physics engines over the years, I've noticed that on the same machine I observe widely different results in physics simulations between runs. Most recently, the Unity engine does this, even though physics are calculated at set intervals of time (FixedUpdate) -- as far as I can determine it should be completely independent of frame-rate.
I've asked this question on game forums before, and was told it was due to chaotic motion: see double pendulum. But, even the double pendulum is deterministic if the starting conditions are exactly controlled, right? On the same machine, shouldn't floating point math behave the same way?
I understand that there are problems with floating point math accuracy, but I understand those problems (as outlined here) to not be problems on the same hardware -- isn't floating point inaccuracy still deterministic? What am I missing?
tl;dr: If running a simulation on the same machine, using the same floating point math(?), shouldn't the simulation be deterministic?
Thank you very much for your time.
Yes, you are correct. A program executed on the same machine will give identical results each time (at least ideally; there might be cosmic rays or other external things that affect memory, but I would say these are not our concern). All calculations on a computer are deterministic, so any algorithm that depends only on its inputs will necessarily be deterministic (which is the reason it's so hard to make true random number generators)!
Most likely the randomness you see is implemented in the program with some random number generator, and the seed for the random numbers varies from run to run. Should you start the simulation with the same seed, you will see the same result.
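As a tiny illustration of the seed point (plain C#, nothing engine-specific, the seed values are arbitrary):
using System;

class SeedDemo
{
    static void Main()
    {
        var a = new Random(1234);                    // same seed ...
        var b = new Random(1234);
        Console.WriteLine(a.Next() == b.Next());     // True: identical sequences
        Console.WriteLine(a.Next() == b.Next());     // True

        var c = new Random(Environment.TickCount);   // time-based seed: differs from run to run
        Console.WriteLine(c.Next());
    }
}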
Edit: I'm not familiar with Unity, but doing some more research seems to indicate that the FixedUpdate routine might be the problem.
Except for the audio thread, everything in unity is done in the main thread if you don't explicitly start a thread of your own. FixedUpdate runs on the very same thread, at the same interval as Update, except it makes up for lost time and simulates a fixed time step.
source
If this is the case, and the function itself looks somewhat like:
void physicsUpdate(double currentTime, double lastTime)
{
    double deltaT = currentTime - lastTime;
    // do physics using `deltaT`
}
Here we will naturally get different behaviour, because deltaT is not the same between two different runs. It depends on what other processes are running in the background, as they can delay the main thread. The function would be called at irregular intervals, and you would observe different results between runs. Note that these differences are mostly not due to floating-point imprecision, but due to inaccuracies in the numerical integration. (E.g. velocity is often updated as v = v + a*deltaT, which assumes a constant acceleration since the last update. This is in general not true.) A small demonstration of this effect follows the second version below.
However, if the function would look like this:
void physicsUpdate(double deltaT)
{
    // do physics using `deltaT`
}
Every time you run the simulation with this version, feeding it the same sequence of deltaT values, you will get exactly the same result.
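To make that concrete, here is a small self-contained demo (the step values and the simple Euler integration are illustrative assumptions, not anything taken from Unity): two runs that cover the same total time with different deltaT sequences end up in different states, even though each run is individually deterministic.
using System;

class IntegrationDemo
{
    static double Simulate(double[] steps)
    {
        double position = 0.0, velocity = 0.0;
        const double acceleration = 9.81;            // constant acceleration
        foreach (double dt in steps)
        {
            velocity += acceleration * dt;           // v = v + a*dt (assumes a constant over dt)
            position += velocity * dt;
        }
        return position;
    }

    static void Main()
    {
        // Same total time (1 s), different step sequences, different end positions:
        double[] regular = { 0.25, 0.25, 0.25, 0.25 };
        double[] irregular = { 0.40, 0.10, 0.35, 0.15 };
        Console.WriteLine(Simulate(regular));    // one value...
        Console.WriteLine(Simulate(irregular));  // ...a different value, despite identical "physics"
    }
}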
I've not got much experience with Unity or its physics simulations, but I've found the following forum post which also links to an article which seems to indicate it's down to precision with the floating point calculations.
As you've mentioned, a lot of people seem to keep rehashing this question!
The forum also links to this blog post which may shed some light on the issue.

Measure the electricity consumed by a browser to render a webpage

Is there a way to calculate the electricity consumed to load and render a webpage (frontend)? I was thinking of a 'test' made with phantomjs for example:
load a web page
scroll to the bottom
And measure how much electricity was needed. I can perhaps extrapolate from CPU cycles. But phantomjs is headless, and rendering in a real browser is certainly different. Perhaps it's impossible to do real measurements, but with an index it may be possible to compare websites.
Do you have other suggestions?
It's pretty much impossible to measure this internally in modern processors (anything more recent than 286). By internally, I mean by counting cycles. This is because different parts of the processor consume different levels of energy per cycle depending upon the instruction.
That said, you can make your measurements externally. Stick a power meter between the wall outlet and the computer. Here's a procedure:
1. Measure the baseline energy usage, i.e. nothing running except the OS and the browser, with the browser completely static (i.e. not doing anything). You need to make sure that everything is at steady state (SS), meaning start your measurements only after several minutes of idle.
2. Measure the usage while doing the operation you want. Again, you want to avoid any start-up and shutdown work, so make sure you start measuring at least 15 seconds after you start the operation. Stopping isn't an issue, since the browser will execute any termination code after you finish your measurement.
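For a rough sense of the arithmetic (the numbers here are made up, not measured): if the baseline draw is 30 W and the meter reads 33 W while the page loads and scrolls for 60 s, the page accounts for roughly (33 - 30) W x 60 s = 180 J, or about 0.05 Wh.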
Sounds simple, right? Unfortunately, because of the nature of your measurements, there are some gotchas.
Do you recall your physics classes (or EE classes) that talked about signal-to-noise ratios? Well, a scroll down uses very little energy, so the signal (scrolling) is buried in the noise (normal background processes). This means you have to take a LOT of samples to get anything useful.
Your browser startup energy usage, or anything else that uses a decent amount of processing, is much easier to measure (better signal to noise ratio).
Also, make sure you understand the underlying electronics. For example, real power is voltage times current only when the two are in phase; otherwise you need to account for the power factor. I don't think this will be an issue, since I'm pretty sure they are in phase for computers. Also, any decent power meter accounts for the difference.
I'm guessing you intend to do this for mobile devices. Your measurements will only be roughly the same from processor to processor. This is due to architectural differences from generation to generation, and from manufacturer to manufacturer.
Good luck.

MATLAB timer object pitfalls and poor usage

I created a timer that executes every 0.1 seconds. It calls a function that reads data and then updates the property of an object. When I start the timer, MATLAB displays the "Busy" signal at the bottom of the command window. MATLAB becomes unresponsive and I cannot halt the timer using the stop() function. My only recourse is to use Ctrl-C.
I determined the problem to be that the processing time of the timer callback function was longer than the calling period, and I presume no other MATLAB code could squeak into the stack/queue. This makes me somewhat worried about relying upon timers. My goal is to continuously make measurements from several devices, store them in an object, and still have MATLAB do other things in between these measurements. Also, I cannot afford to miss a measurement.
I am creating an app that responds to user input and provides the user with real-time information, so I chose a fast period thinking it would produce a snappy UX. Since I am committed to using MATLAB, I could not think of a better way of implementing this functionality than using a timer object. So the first question is: do timer objects seem like the right tool for the job I describe above?
Second, if I am to use timer objects, would someone please share their experience about common mistakes or pitfalls when using timers? Or does anyone have advice on how to best implement timer objects? Is there a practical limit to the number of timer objects that can be used simultaneously? What is the best way to determine the optimal frequency of a timer object?
Thanks!
I would think that 0.1 seconds is pushing it for reliability, especially if you have multiple timers going at once, and especially if you want a user interface to be responsive at the same time as well.
MATLAB is basically single-threaded. There are exceptions, for example the lower level math routines call BLAS in a multithreaded way that speeds them up a lot. You can also write MEX code in C that is multithreaded, and call that from MATLAB. But there's basically one true thread that all your code runs on.
Timer objects are also an exception in a way. When you create a timer object, underneath that there's a java timer object, which is running on a separate java thread. But when any of its callbacks fire, it calls back into MATLAB to execute them, which happens on the one true thread.
If you cannot afford to miss a measurement, you'll need to set the BusyMode property of your timers to queue rather than the default drop, which will mean that if any of the callbacks take longer than 0.1 seconds to execute, they will back up - and you have to fit in any user interface actions as well.
In addition, MATLAB doesn't (and couldn't) make any real guarantees about how precisely or regularly the timer callbacks will execute. If Windows suddenly decides to run a virus scan, or update itself, etc., MATLAB will lose priority and that will mess things up. If you were firing timers every 10 seconds, or even every 1 second, it's in all likelihood going to be roughly accurate. But if you're firing them every millisecond, you can't expect it to be reliable; you'd need a proper real-time environment for that. 0.1 seconds seems borderline to me, and I'd expect its reliability to depend on what exactly you're doing in those 0.1 seconds, and what else is going on as well (and what computer you're running it on).
To answer your last questions (max number of timers, optimal timer frequency etc) - there's no general answer, just try it and work out what happens with a range of values in your particular situation.
If it turns out that it's not reliable enough, you could either try:
Doing the data acquisition in another faster way (e.g. in C), maybe in a separate process, then calling that from MATLAB via MEX, perhaps with some sort of buffering to smooth things out.
Turning to some of the other MathWorks products (e.g. Simulink, Simulink Coder, some of the System Toolboxes) that are designed for developing properly real-time systems.
Hope that helps!

Controlling events in a hybrid Modelica model

I am confused by the hybrid modelling paradigm in Modelica. On one hand, events are useful, on the other hand, they are to be avoided. Let me explain my case:
I have a large model consisting of multiple buildings in a neighborhood that is simulated over 1 year. Initially, the model ran very slowly. Adding noEvent() around as many if-conditions as possible drastically improved the speed.
As the development continued, the control of the model got more complicated, and I again have many events, sometimes at very short intervals. To give an idea:
Number of (model) time events : 28170
Number of (U) time events : 0
Number of state events : 22572
Number of step events : 0
These events blow up the output (for correct post-processing I need the variables at events) and slow down the simulation. Moreover, I have the feeling that some of the noEvent(if ...) constructs lead to unexpected behavior.
I wonder if it would be a solution to force my events at certain time steps and prohibit them in between these time steps. Ideally, I would like to trigger these 'forced events' based on certain conditions. For example: during the day they should occur every 15 minutes, at high solar radiation every minute, and during nights I don't want events at all.
Is this a good idea? I guess this will be faster, as many of the state events will become time events. How can this be done with Modelica 3.2 (in Dymola)?
Thanks in advance for all answers.
Roel
A few comments.
First, if you have a simulation with lots of events (relative to the total duration of the simulation), the first thing I would encourage you to do is use a lower order integrator. The point here is that higher-order integrators normally allow you to take longer time steps. But if those steps are constantly truncated by events, they just end up being really expensive.
Second, you could try fixed-step integrators. Depending on the tool, it may implement this kind of "pool events and fire them all at once" approach in the context of fixed-time-step integrators. But the specification doesn't really say anything about how tools should deal with events that occur between fixed time steps.
Third, another way to approach this would be to "pool" your events yourself. The simplest way I could imagine doing this would be to take all the statements that currently generate events and wrap them in a "when sample(...,...) then" statement. This way, you could make sure that the events were only triggered at specific intervals. This would be more portable than the fixed-time-step approach. I think this is what you were actually proposing in your question, but it is important to point out that it should not be based on time steps (the model has no concept of a time step) but rather on a model-specified sampling interval (which will, in practice, be completely independent of time steps).
As you point out, using "sample(...,...)" will turn these into time events and, yes, this should be faster.