Can I relate MFTIME to the real system time?

I am writing some code that forwards samples from Windows Media Foundation to live555. While MF uses its 100 ns timestamps, live555 uses "real time" in the form of a struct timeval. I know how to fake the latter from GetSystemTime(), but I wonder whether it is possible to derive the "real time" from the MF sample time and the data passed to IMFClockStateSink::OnClockStart?

Media Foundation also provides a presentation time source based on the system clock.
While that presentation time source produces time stamps derived from the system clock (presumably using timeGetTime or a source shared with it, but I did not check), it is not the only option.
So you basically should not make assumptions about any correlation between presentation clock time and the current "absolute" system time. Time stamps are only meant to provide relative time increments at a 10 MHz rate.
If the presentation clock uses some other time source, there is no guaranteed relationship to system time at all.
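That said, if you only need an approximate mapping and you know the system-clock presentation time source is in use, the conversion asked about reduces to offset arithmetic in 100 ns units: capture the wall clock inside OnClockStart and shift it by (sample time - clock start offset). A sketch follows, written in C# only because .NET DateTime ticks also happen to be 100 ns units; the class is hypothetical and nothing in it actually talks to Media Foundation or live555 - the parameter names merely mirror IMFClockStateSink::OnClockStart.

using System;

// Sketch only: offset arithmetic for mapping IMFSample times to a struct timeval,
// assuming the presentation clock uses the system-clock time source (see caveats above).
class SampleTimeToWallClock
{
    DateTime wallAtClockStart;   // wall clock captured when the presentation clock started
    long startOffsetHns;         // presentation time at clock start, in 100 ns units

    public void OnClockStart(long hnsSystemTime, long llClockStartOffset)
    {
        // hnsSystemTime is MF's own high-resolution clock reading, not the wall clock; unused here.
        wallAtClockStart = DateTime.UtcNow;   // stand-in for GetSystemTime()
        startOffsetHns = llClockStartOffset;
    }

    // Convert an IMFSample time (100 ns units) to timeval-style seconds/microseconds.
    public (long tv_sec, long tv_usec) ToTimeval(long hnsSampleTime)
    {
        // DateTime ticks are also 100 ns, so the whole calculation stays in one unit.
        DateTime wall = wallAtClockStart.AddTicks(hnsSampleTime - startOffsetHns);
        long unixEpochTicks = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc).Ticks;
        long hnsSinceUnixEpoch = wall.Ticks - unixEpochTicks;
        return (hnsSinceUnixEpoch / 10_000_000, (hnsSinceUnixEpoch % 10_000_000) / 10);
    }
}

In a real handler you would capture the wall clock with GetSystemTime() (as mentioned in the question) instead of DateTime.UtcNow, but the arithmetic is identical.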

Related

Unity - relate wall clock time to physics time (in fixed update)

I am involved in a project that is building software for a robot that uses ROS2 to support the robot's autonomy code. To streamline development, we are using a model of our robot built in Unity to simulate the physics. In Unity, we have analogues for the robot's sensors and actuators - the Unity sensors use Unity's physics state to generate readings that are published to ROS2 topics and the actuators subscribe to ROS2 topics and process messages that invoke the actuators and implement the physics outcomes of those actuators within Unity. Ultimately, we will deploy the (unmodified) autonomy software on a physical robot that has real sensors and actuators and uses the real world for the physics.
In ROS2 we are scripting in Python, and in Unity the scripting uses C#.
It is our understanding that, by design, the wall clock time that a Unity fixed update call executes has no direct correlation with the "physics" time associated with the fixed update. This makes sense to us - simulated physics can run out of synchronization with the real world and still give the right answer.
Some of our planning software (ROS2/python) wants to initiate an actuator at a particular time, expressed as floating point seconds since the (1970) epoch. For example, we might want to start decelerating at a particular time so that we end up stopped one meter from the target. Given the knowledge of the robot's speed and distance from the target (received from sensors), along with an understanding of the acceleration produced by the actuator, it is easy to plan the end of the maneuver and have the actuation instructions delivered to the actuator well in advance of when it needs to initiate. Note: we specifically don't want to hold back sending the actuation instructions until it is time to initiate, because of uncertainties in message latency, etc. - if we do that, we will never end up exactly where we intended.
And in a similar fashion, we expect sensor readings that are published (in a fixed update in Unity/C#) to likewise be timestamped in floating point seconds since the epoch (e.g., the range to the target object was 10m at a particular recent time). We don't want to timestamp the sensor reading with the time it was received, because of the unknown latency between the time the sensor value was current and the time it was received in our ROS2 node.
When our (Unity) simulated sensors publish a reading (based on the physics state during a fixed update call), we don't know what real-world/wall clock timestamp to associate with it - we don't know which 20ms of real time that particular fixed update correlates to.
Likewise, when our Unity script that is associated with an actuator is holding a message that says to initiate actuation at a particular real-world time, we don't know if that should happen in the current fixed update, because we don't know the real-world time that the fixed update correlates to.
The Unity Time methods all seem to deal with time relative to the start of the game (basically, a dynamically determined epoch).
We have tried capturing the wall clock time and the time since game start in a MonoBehaviour's Start, but this seems to put us off by a handful of seconds once the fixed updates are running (with the exact time shift varying between runs).
How to crosswalk between the Unity game-start-based epoch and a fixed-start epoch (e.g., 1970)?
An example: This code will publish the range to the target, along with the time of the measurement. This gets executed every 20ms by Unity.
void FixedUpdate()
{
    RangeMsg targetRange = new RangeMsg();
    // Wall-clock time at the moment this physics step happens to execute,
    // not the time the physics state actually corresponds to.
    targetRange.time_s = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds() / 1000.0;
    targetRange.range_m = Vector3.Distance(target.transform.position, chaser.transform.position);
    ros.Publish(topicName, targetRange);
}
On the receiving end, let's say that we are calculating the speed toward the target:
def handle_range(self, msg):
    if self.last_range is not None:
        diff_s = msg.time_s - self.last_range.time_s
        if diff_s != 0.0:
            diff_range_m = self.last_range.range_m - msg.range_m
            speed = Speed()
            speed.time_s = msg.time_s
            speed.speed_mps = diff_range_m / diff_s
            self.publisher.publish(speed)
    self.last_range = msg
If the messages are really published exactly every 20ms, then this all works. But if Unity gets behind and runs several fixed updates one after another to get caught up, then the speed gets computed as much higher than it should (because each cycle, 20ms of movement is applied, but the cycles may be executed within a millisecond of each other).
If instead we use Unity's time for timestamping the messages with
targetRange.time_s = Time.fixedTimeAsDouble;
then the range and time stay in sync and the speed calculation works great, even in the face of some major hiccup in Unity processing. But then the rest of our code, which lives in the 1970 epoch, has no idea what time targetRange.time_s really is.
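One possible crosswalk, offered as a sketch rather than a known-good recipe: keep timestamping with Time.fixedTimeAsDouble, and capture the offset between the wall clock and the fixed-time clock once, inside the first FixedUpdate rather than in Start. The EpochCrosswalk class below is a hypothetical helper; capturing inside FixedUpdate avoids counting whatever scene-load time elapses before the physics clock starts advancing, which may account for the handful-of-seconds shift described above.

using System;
using UnityEngine;

// Hypothetical helper: anchors Time.fixedTimeAsDouble to the 1970 epoch by
// capturing the wall-clock/physics-clock offset once, during the first physics step.
public class EpochCrosswalk : MonoBehaviour
{
    // Seconds to add to Time.fixedTimeAsDouble to get Unix-epoch seconds.
    public static double FixedToUnixOffset { get; private set; }
    static bool captured;

    void FixedUpdate()
    {
        if (captured) return;
        double unixNow = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds() / 1000.0;
        // Treat the current physics step as "now"; the error is bounded by one
        // fixedDeltaTime plus whatever catch-up backlog exists at this moment.
        FixedToUnixOffset = unixNow - Time.fixedTimeAsDouble;
        captured = true;
    }

    public static double ToUnixSeconds(double fixedTimeAsDouble) =>
        fixedTimeAsDouble + FixedToUnixOffset;
}

The publisher would then use

targetRange.time_s = EpochCrosswalk.ToUnixSeconds(Time.fixedTimeAsDouble);

so timestamps stay consistent with the physics state even when several fixed updates run back to back, while remaining (approximately) anchored to the 1970 epoch. The residual error is whatever wall-clock/physics-clock mismatch existed at the moment the offset was captured.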

Retrieve real world time from time saved using "Clock" → "To Workspace" in Simulink

I have made some measurements with an NI6024 acquisition card using Simulink, using a model that logs time from a "Clock" block into a "To Workspace" block (the model screenshot is not reproduced here).
I have run the simulation with simulation time = "inf" and a fixed time step of 0.2, in order to collect real time data from the card. But I didn't realize that the values that "Clock" gives do not correspond to real-world time. More specifically, I have run the experiment for about a minute but the data in the variable "t" range from 0 to about 50000, which is clearly wrong. I have saved the workspace data, and I have access to the recorded data (the variables "t" and "h"), but have no means to reproduce the experiment.
Is there any way to retrieve the real world time of the simulation?
You've basically got two choices.
Run your model in real time, using for instance something like Simulink Real-Time or another real-time OS. In this case the (wall clock) time will represent time since the model was started.
Write an S-Function to slow down the simulation so that it fakes real time. There are multiple examples of doing this on the File Exchange. See Real-Time Pacer for Simulink for one such example.

Android app dev: Finding the best way to synchronize the timestamps of two sensors

There's already a good answer on the technical details and constraints of timing the gyro measurement:
Movesense, timestamp source of imu data, and timing issues in general
However, I would like to ask a more practical question from the perspective of an Android app developer working with two sensors and a requirement for high accuracy in gyro measurement timing.
What would be the most accurate way to synchronize/consolidate the timestamps from two sensors and put the measurements on the same time axis?
The sensor SW version 1.7 introduced the /Time/Detailed API to check the internal timestamp and the UTC time set on the sensor device. This is how I imagined it would play out with two sensors:
1. Before subscribing to anything, set the UTC time (microseconds) on sensor1 and sensor2 based on the Android device time (PUT /Time).
2. Read the "time since sensor turned on" (in milliseconds) and the "UTC time set on sensor" (in microseconds) from sensor1 and sensor2 (GET /Time/Detailed).
3. Calculate the difference of these two timestamps (in milliseconds) for both sensors.
4. Get the gyro values from the sensor with their internal timestamps. Add the offset calculated in step 3 to the internal timestamp to get the correct/global UTC time value (sketched in code below).
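A sketch of just the arithmetic in steps 2-4 (the class, method and field names below are made up; everything is converted to milliseconds):

// Sketch of the offset arithmetic only; not a Movesense API wrapper.
class SensorClockOffset
{
    double offsetMs;   // add this to a sensor-internal timestamp to get UTC (ms)

    // relativeMs: "time since sensor turned on" from GET /Time/Detailed (ms)
    // utcUs:      UTC time reported by the sensor at the same moment (microseconds)
    public void Calibrate(double relativeMs, double utcUs)
    {
        offsetMs = utcUs / 1000.0 - relativeMs;            // step 3
    }

    // internalTimestampMs: timestamp the sensor attached to a gyro sample
    public double ToUtcMs(double internalTimestampMs)
    {
        return internalTimestampMs + offsetMs;             // step 4
    }
}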
Is this procedure correct?
Is there a more efficient or accurate way to do this? E.g. the GATT service to set the time was mentioned in the linked post as the fastest way. Anything else?
How about the possible drift in the sensor time for gyro? Are there any tricks to limit the impact of the drift afterwards? Would it make sense to get the /Time/Detailed info during longer measurements and check if the internal clock has drifted/changed compared to the UTC time?
Thanks!
Very good question!
Looking at the accuracy of the crystals (±20 ppm), the typical drift between sensors should be no more than 40 ppm. That translates to about 0.14 seconds over an hour. For longer measurements and/or better accuracy, better synchronization is needed.
Luckily the clock drift should stay relatively constant unless the temperature of the sensor is changing rapidly. Therefore it should be enough to compare the mobile phone clock and each sensor's UTC at the beginning and end of the measurement. Any drift in each sensor should then be visible and the timestamps easily compensated.
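As a sketch of that compensation, assuming the drift is linear between the two comparison points (the names below are hypothetical):

// Hypothetical helper: linearly interpolates the sensor-vs-phone clock offset measured
// at the start and end of a recording, and applies it to a sample timestamp.
static double CompensateDrift(
    double sampleMs,                            // sensor-internal timestamp of a sample
    double startMs, double endMs,               // sensor-internal timestamps at the two comparisons
    double offsetStartMs, double offsetEndMs)   // (phone UTC - sensor time) at start and end
{
    double f = (sampleMs - startMs) / (endMs - startMs);   // 0 at start, 1 at end
    return sampleMs + offsetStartMs + f * (offsetEndMs - offsetStartMs);
}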
If there is a need for even more accurate timestamps, taking regular samples of /Time/Detailed from each sensor and comparing them to the phone clock should provide a way to estimate possible sensor clock drift.
Full Disclosure: I work for the Movesense team

What is responsible for changing core's load and frequency in multicore processor

Having looked for a description of multicore designs, I keep finding several diagrams, and all of them look somewhat alike (the diagram itself is not reproduced here).
I know from looking at i7z command output that different cores can run at different frequencies.
This would suggest that the decisions about which core will be given a new process, and about changing the frequency of the core itself, are made either by the operating system or by a control block within the core itself.
My question is: what controls the frequency of each individual core? And is the job of associating a READY process with a specific core placed upon the operating system, or is it done by something within the processor?
Scheduling processes/threads to cores is purely up to the OS. The hardware has no understanding of tasks waiting to run. Maintaining the OS's list of processes that are runnable vs. waiting for I/O is completely a software thing.
Migrating a thread from one core to another is done by kernel code on the original core storing the architectural state to memory, then OS code on the new core restoring that saved state and resuming user-space execution.
Traditionally, frequency and voltage scaling decisions are made by the OS. Take Linux as an example: the decision-making code is called a governor (an Arch Wiki page on the topic also comes up high on Google). It looks at things like how often processes have used their entire time slice on the current core. If the governor decides the CPU should run at a different speed, it programs some control registers to implement the change. As I understand it, the hardware takes care of choosing the right voltage to support the requested frequency.
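For a concrete user-space view of this on Linux, the standard cpufreq sysfs files report, per core, which governor is active and what frequency is currently requested; a minimal sketch (in C#, assuming a .NET runtime on Linux):

using System;
using System.IO;

// Illustration only: reads the per-core cpufreq sysfs entries maintained by the kernel.
class CpuFreqPeek
{
    static string Read(int cpu, string file) =>
        File.ReadAllText($"/sys/devices/system/cpu/cpu{cpu}/cpufreq/{file}").Trim();

    static void Main()
    {
        for (int cpu = 0; cpu < Environment.ProcessorCount; cpu++)
        {
            Console.WriteLine($"cpu{cpu}: governor={Read(cpu, "scaling_governor")}, " +
                              $"current={Read(cpu, "scaling_cur_freq")} kHz");
        }
    }
}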
As I understand it, the OS running on each core makes these decisions independently. On hardware that allows each core to run at a different frequency, the decision-making code for different cores doesn't need to coordinate. If running a high frequency on one core requires a high voltage chip-wide, the hardware takes care of that. I think the modern implementation of DVFS (dynamic voltage and frequency scaling) is fairly high-level, with the OS just telling the hardware which of N choices it wants, and the onboard power microcontroller taking care of the details of programming oscillators / clock dividers and voltage regulators.
Intel's "Turbo" feature, which opportunistically boosts the frequency above the max sustainable frequency, does the decision making in hardware. Any time the OS requests the highest advertised frequency, the CPU uses turbo when power and cooling allow.
Intel's Skylake takes this a step further: The OS can hand full control over DVFS to the hardware, optionally with constraints. That lets it react from microsecond to microsecond, rather than on a timescale of milliseconds. This does actually allow better performance in bursty workloads, because more power budget is available for turbo when it's useful. A few benchmarks are bursty enough to observe this, like some browser / javascript ones IIRC.
There was a whole talk about Skylake's new power management at IDF 2015; check out the slides and/or the archived webcast. The old method is described there in a lot of detail as well, to illustrate the difference, so it is worth a look if you want more detail than my summary. (A list of the other IDF talks is available online; thanks to Agner Fog's blog for the link.)
The core frequency is controlled by a given voltage applied to a core's "oscillator".
This voltage can be changed by the Operating System but it can also be changed by the BIOS itself if a high temperature is detected in the CPU.

Dymola/Modelica real-time simulation advances too fast

I want to simulate a model in Dymola in real-time for HiL use. In the results I see that the Simulation is advancing about 5% too fast.
Integration terminated successfully at T = 691200
CPU-time for integration : 6.57e+005 seconds
CPU-time for one GRID interval: 951 milli-seconds
I already tried to increase the grid interval to reduce the relative error, but still the simulation is advancing too fast. I have only read about approaches that reduce model complexity to allow simulation within the defined time steps.
Note that the simulation does keep up with real time and is even faster. How can I, in this case, match simulated time and real time?
Edit 1:
I used the Lsodar solver with the "Synchronize with realtime" option checked on the Realtime tab. I have the real-time simulation licence option. I use Dymola 2013 on Windows 7. Here is the result for a step size of 15 s:
Integration terminated successfully at T = 691200
CPU-time for integration : 6.6e+005 seconds
CPU-time for one GRID interval : 1.43e+004 milli-seconds
The deviation is still roughly 4.5%.
I did, however, not use inline integration.
Do I need hard real-time or inline integration to improve those results? It should be possible to get a deviation lower than 4.5% using soft real-time, shouldn't it?
Edit 2:
I took the Python27 block from the Berkeley Buildings library to read the system time and compare it with the simulation's progress. The result shows that 36 hours after simulation start, the simulation slows down slightly (compared to real time). About 72 hours after the start of the simulation it starts getting about 10% faster than real time. In addition, the jitter in the result increases after those 72 hours.
Any explanations?
Next steps will be:
- changing to a fixed-step solver (it might well be that this is a big part of the solution)
- changing from a DDE server to an OPC server, which at the moment does not seem to be possible in Dymola 2013, however.
Edit 3:
Nope... using a fixed-step solver does not seem to solve the problem. In the first 48 hours of simulation time the deviation seems to be equal to the deviation using a solver with variable step size. In this example I used the Rkfix3 solver with an integrator step of 0.1.
Nobody knows how to get rid of those huge deviations?
If I recall correctly, Dymola has a special compilation option for real-time performance. However, I think it is a licensed option (not sure).
I suspect that Dymola is picking up the wrong clock speed.
You could use the "Slowdown factor" that is in the Simulation Setup, on the Realtime tab just below "Synchronize with realtime". Set this to 1/0.95 (about 1.053), so that the simulation is slowed down by roughly the 5% it is currently gaining on real time.
There is a parameter in Dymola that you can use to set the CPU speed, but I could not find it just now; I will have another look for it later.
I solved the problem by switching to an embedded OPC server. (The plot of the error between real time and simulation time for this case is not reproduced here.)
Compiling Dymola problems with an embedded OPC server requires administrator rights (which I did not have before). The active folder of Dymola must not be write-protected.