I'm trying to calculate timestamps for picamera relative to the system clock in as precise a way as possible, but it's not clear to me from the documentation how this can be done.
I can set clock_mode to "raw" and get times relative to the camera initialization time, but I don't see a method to query the initialization time. Alternatively, I can use "reset" to get timestamps relative to the start of recording, but likewise I don't see a good way of getting a precise (ms resolution) timestamp for when recording actually started. I am guessing there is too much latency between my call to camera.start_recording(...) and the actual start of recording to use the system time taken just before that call as the recording start time.
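The best idea I have so far is something like the sketch below: sample the camera's clock and the system clock back to back, and use the offset to shift raw frame timestamps onto the system epoch. (This assumes camera.timestamp and the raw frame timestamps are read from the same clock, which I haven't verified.)

import time
import picamera

camera = picamera.PiCamera(clock_mode='raw')

# Sample both clocks as close together as possible; the offset maps raw
# frame timestamps (microseconds, camera clock) onto the system epoch.
t_sys = time.time()
t_cam_us = camera.timestamp          # camera/GPU clock, microseconds
offset_s = t_sys - t_cam_us / 1e6

camera.start_recording('video.h264')
# ... later, for the most recent frame:
frame = camera.frame
if frame.timestamp is not None:      # timestamp can be None before the first frame
    frame_time_epoch_s = offset_s + frame.timestamp / 1e6

Is something along these lines reasonable, or is there a supported way to do this?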
I am involved in a project that is building software for a robot that uses ROS2 to support the robot's autonomy code. To streamline development, we are using a model of our robot built in Unity to simulate the physics. In Unity, we have analogues for the robot's sensors and actuators - the Unity sensors use Unity's physics state to generate readings that are published to ROS2 topics and the actuators subscribe to ROS2 topics and process messages that invoke the actuators and implement the physics outcomes of those actuators within Unity. Ultimately, we will deploy the (unmodified) autonomy software on a physical robot that has real sensors and actuators and uses the real world for the physics.
In ROS2 we script in Python, and in Unity the scripting uses C#.
It is our understanding that, by design, the wall clock time that a Unity fixed update call executes has no direct correlation with the "physics" time associated with the fixed update. This makes sense to us - simulated physics can run out of synchronization with the real world and still give the right answer.
Some of our planning software (ROS2/python) wants to initiate an actuator at a particular time, expressed as floating point seconds since the (1970) epoch. For example, we might want to start decelerating at a particular time so that we end up stopped one meter from the target. Given the knowledge of the robot's speed and distance from the target (received from sensors), along with an understanding of the acceleration produced by the actuator, it is easy to plan the end of the maneuver and have the actuation instructions delivered to the actuator well in advance of when it needs to initiate. Note: we specifically don't want to hold back sending the actuation instructions until it is time to initiate, because of uncertainties in message latency, etc. - if we do that, we will never end up exactly where we intended.
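For concreteness, the planning arithmetic is roughly the following (an illustrative Python sketch; the function and parameter names are made up, and it assumes a constant closing speed until deceleration begins):

import time

def plan_decel_start(range_m, speed_mps, decel_mps2, standoff_m=1.0, now_s=None):
    # Epoch time (s) at which to begin decelerating so we stop standoff_m
    # meters short of the target, assuming constant speed until then.
    if now_s is None:
        now_s = time.time()
    braking_dist_m = speed_mps ** 2 / (2.0 * decel_mps2)
    cruise_dist_m = (range_m - standoff_m) - braking_dist_m
    return now_s + cruise_dist_m / speed_mps

# e.g. 10 m from the target, closing at 0.5 m/s, actuator gives 0.25 m/s^2:
# braking covers 0.5 m, so cruise 8.5 m and start decelerating 17 s from now.
start_time_s = plan_decel_start(10.0, 0.5, 0.25)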
And in a similar fashion, we expect sensor readings that are published (in a fixed update in Unity/C#) to likewise be timestamped in floating point seconds since the epoch (e.g., the range to the target object was 10m at a particular recent time). We don't want to timestamp the sensor reading with the time it was received, because of the unknown latency between the time the sensor value was current and the time it was received in our ROS2 node.
When our (Unity) simulated sensors publish a reading (based on the physics state during a fixed update call), we don't know what real-world/wall clock timestamp to associate with it - we don't know which 20ms of real time that particular fixed update corresponds to.
Likewise, when our Unity script that is associated with an actuator is holding a message that says to initiate actuation at a particular real-world time, we don't know whether that should happen in the current fixed update, because we don't know the real-world time that the fixed update corresponds to.
The Unity Time methods all seem to deal with time relative to the start of the game (basically, a dynamically determined epoch).
We have tried capturing the wall clock time and the time since game start in a MonoBehaviour's Start(), but this seems to leave us off by a handful of seconds once the fixed updates are running (with the exact time shift varying between runs).
How can we crosswalk between the Unity game-start epoch and a fixed epoch (e.g., 1970)?
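Conceptually, what we are after is a single anchor pair and an offset, which is essentially what we tried to capture in Start() (sketched here in Python for brevity; the actual capture would have to happen on the Unity/C# side, and the anchor values below are made up):

# Illustrative only: capture one (wall clock, sim time) pair at a known
# instant, then shift every simulation timestamp by the same offset.
anchor_wall_s = 1700000000.000   # e.g. time.time() at the anchor instant
anchor_sim_s = 12.340            # Time.fixedTimeAsDouble at the same instant

def sim_to_epoch(sim_time_s):
    return anchor_wall_s + (sim_time_s - anchor_sim_s)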
An example: This code will publish the range to the target, along with the time of the measurement. This gets executed every 20ms by Unity.
void FixedUpdate()
{
    RangeMsg targetRange = new RangeMsg();
    targetRange.time_s = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds() / 1000.0;
    targetRange.range_m = Vector3.Distance(target.transform.position, chaser.transform.position);
    ros.Publish(topicName, targetRange);
}
On the receiving end, let's say that we are calculating the speed toward the target:
def handle_range(self, msg):
    if self.last_range is not None:
        diff_s = msg.time_s - self.last_range.time_s
        if diff_s != 0.0:
            diff_range_m = self.last_range.range_m - msg.range_m
            speed = Speed()
            speed.time_s = msg.time_s
            speed.speed_mps = diff_range_m / diff_s
            self.publisher.publish(speed)
    self.last_range = msg
If the messages really are published exactly every 20ms, then this all works. But if Unity gets behind and runs several fixed updates back to back to catch up, then the speed is computed as much higher than it should be (each cycle applies 20ms worth of movement, but the cycles may be executed within a millisecond of each other).
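A concrete example of the failure mode, with illustrative numbers:

# The robot closes at 0.5 m/s, so each 20 ms physics step reduces the
# range by 0.01 m. If Unity falls behind and two FixedUpdates run only
# 1 ms apart in wall time, wall-clock timestamps inflate the speed:
d_range_m = 0.01      # movement applied per 20 ms physics step
dt_wall_s = 0.001     # actual wall-clock gap between the two publishes
print(d_range_m / dt_wall_s)   # 10.0 m/s instead of the true 0.5 m/s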
If instead we use Unity's time for timestamping the messages with
targetRange.time_s = Time.fixedTimeAsDouble;
then the range and time stay in sync and the speed calculation works great, even in the face of a major hiccup in Unity processing. But then the rest of our code, which lives in the 1970 epoch, has no idea what time targetRange.time_s really is.
There's already a good answer on the technical details and constraints of timing the gyro measurement:
Movesense, timestamp source of imu data, and timing issues in general
However, I would like to ask a more practical question from the perspective of an Android app developer working with two sensors and a requirement for highly accurate gyro measurement timing.
What would be the most accurate way to synchronize/consolidate the timestamps from two sensors and put the measurements on the same time axis?
Sensor SW version 1.7 introduced the Time/Detailed API to check the internal timestamp and the UTC time set on the sensor device. This is how I imagined it would play out with two sensors:
1. Before subscribing to anything, set the UTC time (in microseconds) on sensor1 and sensor2 based on the Android device time (PUT /Time).
2. On each sensor, get the "time since sensor turned on" (in milliseconds) and the "UTC time set on sensor" (in microseconds) (GET /Time/Detailed).
3. Calculate the difference between these two timestamps (in milliseconds) for each sensor.
4. Get the gyro values from the sensor with its internal timestamp, and add the offset calculated in step 3 to the internal timestamp to get the correct/global UTC time value.
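In code I picture the bookkeeping roughly like this (just a sketch in Python; the values are assumed to have already been read from GET /Time/Detailed, and the variable names are mine, not the Movesense API's):

# Per sensor, after setting UTC and before subscribing to gyro data.
# Both values come from the same GET /Time/Detailed response, so they
# describe the same instant (placeholder numbers):
relative_ms = 123456                    # "time since sensor turned on", ms
utc_on_sensor_us = 1700000000000000     # "UTC time set on sensor", us

# Step 3: per-sensor offset in milliseconds.
offset_ms = utc_on_sensor_us / 1000.0 - relative_ms

# Step 4: map each gyro sample's internal timestamp (ms) to UTC (ms).
def to_utc_ms(internal_timestamp_ms):
    return internal_timestamp_ms + offset_ms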
Is this procedure correct?
Is there a more efficient or accurate way to do this? E.g. the GATT service to set the time was mentioned in the linked post as the fastest way. Anything else?
How about the possible drift in the sensor time for gyro? Are there any tricks to limit the impact of the drift afterwards? Would it make sense to get the /Time/Detailed info during longer measurements and check if the internal clock has drifted/changed compared to the UTC time?
Thanks!
Very good question!
Looking at the accuracy of the crystals (±20 ppm), the typical drift between two sensors should be no more than 40 ppm. That translates to about 0.14 seconds over an hour. For longer measurements and/or better accuracy, better synchronization is needed.
Luckily, the clock drift should stay relatively constant unless the temperature of the sensor is changing rapidly. Therefore it should be enough to compare the mobile phone clock and each sensor's UTC at the beginning and end of the measurement. Any drift of each sensor should be visible and the timestamps easily compensated.
If there is a need for even more accurate timestamps, taking regular samples of /Time/Detailed from each sensor and comparing them to the phone clock should provide a way to estimate possible sensor clock drift.
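A minimal sketch of that compensation (illustrative Python; it assumes you have recorded the sensor-vs-phone offset at the start and end of the measurement):

def correct_timestamp_ms(ts_ms, start_ts_ms, end_ts_ms, offset_start_ms, offset_end_ms):
    # Linearly interpolate the sensor-vs-phone clock offset over the
    # measurement and apply it to a sensor timestamp (all in ms).
    frac = (ts_ms - start_ts_ms) / (end_ts_ms - start_ts_ms)
    offset_ms = offset_start_ms + frac * (offset_end_ms - offset_start_ms)
    return ts_ms + offset_ms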
Full Disclosure: I work for the Movesense team
I am writing some code that forwards samples from Windows Media Foundation to live555. While MF uses its 100 ns timestamps, live555 uses "real time" in the form of a struct timeval. I know how to fake the latter from GetSystemTime(), but I wonder whether it is possible to derive the "real time" from the MF sample time and the data passed to IMFClockStateSink::OnClockStart?
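Conceptually, I mean something like the following (illustrative Python just to show the arithmetic; the real code is C++ against the MF and live555 APIs, and the parameter names are placeholders):

def sample_time_to_timeval(sample_time_100ns, clock_start_100ns, wall_at_start_s):
    # Anchor: wall-clock time captured once when the clock starts, then
    # every 100 ns sample time is shifted by that anchor.
    seconds = wall_at_start_s + (sample_time_100ns - clock_start_100ns) / 1e7
    tv_sec = int(seconds)
    tv_usec = int(round((seconds - tv_sec) * 1e6))
    return tv_sec, tv_usec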
Media Foundation also provides a presentation time source based on the system clock.
While this presentation time source provides time stamps based on the system clock (presumably using timeGetTime, or a source shared with it, but I did not check), it is not the only option.
So you basically should not make assumptions about the correlation between presentation clock time and the current system "absolute" time. Time stamps are only supposed to provide relative time increments at a 10 MHz rate.
If the presentation clock uses some other time source, ...
I am using Matlab for a position tracking application in which the position is extracted frame by frame from a ~20 minute .avi file. Right now, processing a 20 minute video takes ~1 hour. The annoying thing is that the actual algorithmic computations are quite fast. The bottleneck is simply LOADING the .avi frames into Matlab, which we do 20 frames at a time. Here is our pseudocode:
vidobj = VideoReader(vidFile);
numFrames = vidobj.NumberOfFrames;     % total frame count
frmStep = 20;                          % # of frames to load at a time
for k = 1:frmStep:(numFrames-frmStep+1)
    f = read(vidobj, [k (k+frmStep-1)]);   % load a block of frmStep frames
    %% Do video processing
end
I was wondering whether there was any way to load this faster or do anything about the horribly long computation times....
Over the years I have tried a couple of alternatives to Matlab's native video processing procedures, but I never profiled them, so I can't tell you anything about the speed-up.
The first alternative I used extensively was mmread. This function uses ffmpeg to do the actual frame grabbing.
Currently I use the VideoCapture class in mexopencv. You will need OpenCV installed for that to compile. I have also managed to get most of the Matlab bindings in opencv3 to compile (on Mac OS X), which also gives you a VideoCapture class.
I'm having trouble synchronizing the color and depth images with the Image Acquisition Toolbox.
Currently, I'm just trying to log both streams to binary files without dropping frames or losing synchronization.
I'm not trying to render during the recording.
The code for the start button:
colorVid = videoinput('kinect',1);
depthVid = videoinput('kinect',2);
colorVid.FramesPerTrigger = inf;
depthVid.FramesPerTrigger = inf;
triggerconfig([colorVid depthVid],'manual');
iatconfigLogging(colorVid,'Video/Color.bin');
iatconfigLogging(depthVid,'Video/Depth.bin');
start([colorVid depthVid]);
pause(2); % make sure both sensors have started before the trigger
trigger([colorVid depthVid]);
where iatconfigLogging() is from here
and the stop button just does
stop([colorVid depthVid]);
Since the frame rate of the Kinect is 30 FPS and we can't change this, I'm using FrameGrabInterval to emulate lower frame rates.
But when I go above about 5 FPS, I can't log depth and color and keep the frames synchronized for more than 20-25 seconds. And at anything except 1 FPS, the sync is gone after 2-3 minutes, while I'm looking for at least a 10-15 minute acquisition.
I'm looking at something like flushdata(obj,'triggers') right now, but I can't figure out how to keep the 30 FPS while logging.
Thanks in advance to anyone who can help.
As far as I know you cannot synchronize the streams by triggering, because they are not synchronized in hardware. I tried it, and the best I could come up with was timestamping each stream and throwing away frame pairs that were too far apart in time. I noticed the classic beat effect whereby the streams move in and out of sync at a rate set by the difference between the two streams' periods. The obvious disadvantage of throwing away frames like this is that you get a non-continuous stream.
You can get timestamp information using
[data time] = getdata(vid,1);
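The pairing I have in mind is essentially the following (a language-agnostic sketch shown in Python for brevity; in Matlab you would do the same thing with the timestamps returned by getdata):

# Keep a color/depth pair only if their timestamps differ by less than,
# say, half a frame period; otherwise drop the unmatched frame.
MAX_SKEW_S = 1.0 / 60.0

def pair_frames(color, depth):
    # color, depth: lists of (timestamp_s, frame) sorted by timestamp.
    pairs, i, j = [], 0, 0
    while i < len(color) and j < len(depth):
        dt = color[i][0] - depth[j][0]
        if abs(dt) <= MAX_SKEW_S:
            pairs.append((color[i][1], depth[j][1]))
            i += 1
            j += 1
        elif dt < 0:
            i += 1    # color frame has no close depth partner; drop it
        else:
            j += 1    # depth frame has no close color partner; drop it
    return pairs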