I have a little project I am working on in which I need to measure the time difference between several Raspberry Pi modules with nanosecond resolution. My question is: what timing mechanism can give me this kind of measurement capability, other than a GPS clocking system (which I am trying to avoid)?
Among the options I am familiar with:
PTP sync
Linux time since the epoch, in nanoseconds (see the sketch below)
GPS - trying to avoid if possible
Other suggestions are welcome.
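For reference, this is what I mean by the Linux-time option -- a minimal sketch that just reads the local clock with nanosecond resolution (it does not synchronise anything across the Pis by itself; that is what PTP or similar would be for):

    // Read a local timestamp with nanosecond resolution on Linux.
    // This only gives the local clock value; synchronisation across the Pis
    // still has to come from PTP, NTP, GPS, etc.
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec ts;
        // CLOCK_REALTIME is the wall clock (the one PTP disciplines);
        // CLOCK_MONOTONIC is better for measuring intervals on one machine.
        clock_gettime(CLOCK_REALTIME, &ts);
        long long ns = (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
        printf("%lld ns since the epoch\n", ns);
        return 0;
    }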
I am currently developing a model in Simulink with three different main functions (let's call them A, B and C for now), where one of them runs at a different sample time than the other two. However, when I tried to simulate this system on the Raspberry Pi via external mode, I got a lot of overruns and a high CPU load. Now I am trying to split the model so that, for example, functions A and B are executed on one core and function C is executed on another core.
For this, I used this article from MathWorks, but I think that you can't actually assign a task to a core, only specify its periodic execution. As a result I could reduce the CPU load to a maximum of 40%, but I still get a lot of overruns (in my opinion, this also contradicts itself).
As a second approach, I tried this article, but I think this is not possible for Raspberry Pis, since I cannot add and assign cores in the concurrent execution tab.
My goal is to assign each task to a core on the Raspberry Pi and to see the CPU load on the Raspberry Pi.
Many thanks in advance!
I have optimized my deep learning model with TensorRT. A C++ interface runs inference on images with the optimized model on a Jetson TX2. This interface delivers 60 FPS on average (but it is not stable; inference rates range between 50 and 160 FPS). I need to run this system in real time on a real-time-patched Jetson.
So what are your thoughts on real-time inference with TensorRT? Is it possible to develop a real-time inference system with TensorRT, and how?
I have tried setting high priorities on the process and its threads to get preemption. I expect approximately the same FPS value on every inference, i.e. I need deterministic inference times, but the system does not behave deterministically.
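For reference, this is roughly what I mean by setting high priorities -- a minimal sketch that moves a thread into the SCHED_FIFO real-time class (the priority value 80 is only an example, and the call needs root or CAP_SYS_NICE):

    // Move a thread into the SCHED_FIFO real-time scheduling class.
    // The priority value is an example; valid SCHED_FIFO priorities on Linux are 1-99.
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static int make_thread_realtime(pthread_t thread, int priority) {
        struct sched_param param;
        param.sched_priority = priority;
        int err = pthread_setschedparam(thread, SCHED_FIFO, &param);
        if (err != 0) {
            // Fails with EPERM unless run as root or with CAP_SYS_NICE.
            fprintf(stderr, "pthread_setschedparam failed: %d\n", err);
            return -1;
        }
        return 0;
    }

    int main(void) {
        if (make_thread_realtime(pthread_self(), 80) == 0)
            printf("thread is now SCHED_FIFO\n");
        return 0;
    }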
Have you tried setting the clocks on the Jetson: sudo nvpmodel -m 0
Here are some links with more information:
https://elinux.org/Jetson/Performance
https://devtalk.nvidia.com/default/topic/999915/jetson-tx2/how-do-you-switch-between-max-q-and-max-p/post/5109507/#5109507
https://devtalk.nvidia.com/default/topic/1000345/jetson-tx2/two-cores-disabled-/post/5110960/#5110960
I'm trying to evaluate an application that runs on a vehicular network using OMNeT++, Veins and SUMO. Because the application relies on realistic traffic behavior, I decided to use the LuST Scenario, which seems to be the state of the art for such data. However, I'd like to use specific parts of this scenario rather than the entire scenario (e.g., a high and a low traffic load fragment, perhaps others). It would be nice to keep the bidirectional functionality that Veins offers, although I'm mostly interested in getting traffic data from SUMO into my simulation.
One obvious way to implement this would be to use a warm-up period. However, I'm wondering whether there is a more efficient way -- simulating 8 hours of traffic just to get a several-minute fragment feels inefficient and may be problematic for simulations with sufficient repetitions.
Does Veins have a built-in mechanism for warm-up periods, primarily one that avoids sending messages (which is by far the most time-consuming part of the simulation), or does it have a way to wait for SUMO to advance to, e.g., a specific time stamp (which would also avoid creating vehicle objects in OMNeT++ and thus all the initialization code)?
In case it's relevant -- I'm using the latest stable versions of OMNeT++ and SUMO (OMNeT++ 4.6 with SUMO 0.25.0) and my code base is based on VEINS 4a2 (with some changes, notably accepting the TraCI API version 10).
There are two things you can do here to reduce the number of sent messages in Veins:
Use the OMNeT++ warm-up period as described here in the manual. Basically, it means setting warmup-period in your .ini file and making sure your code checks this with if (simTime() >= simulation.getWarmupPeriod()) (see the sketch below). The OMNeT++ signals for result collection are aware of this.
The TraCIScenarioManager offers a parameter double firstStepAt @unit("s") which you can use to delay its start. Again, this can be set in the .ini file.
In addition, as the Veins FAQ states, the TraCIScenarioManagerLaunchd offers two variables to configure the region of interest, based on rectangles or roads (string roiRoads and string roiRects). To reduce the simulated area, you can restrict the simulation to a specific rectangle; for example, *.manager.roiRects="1000,1000-3000,3000" simulates a 2x2 km area between the two supplied coordinates.
With these solutions (best used in combination) you still have to run SUMO, but Veins barely consumes any of the time.
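As a rough illustration of the warm-up check in the first point, here is a minimal sketch (the module name MyVeinsApp and its message handling are made up; only simTime() and simulation.getWarmupPeriod() are standard OMNeT++ 4.x API):

    // Sketch: suppress application messages while the warm-up period is running.
    #include <omnetpp.h>

    class MyVeinsApp : public cSimpleModule {   // hypothetical application module
      protected:
        virtual void handleMessage(cMessage* msg) {
            if (simTime() < simulation.getWarmupPeriod()) {
                // Still warming up: drop the trigger instead of sending anything,
                // so no application traffic enters the network yet.
                delete msg;
                return;
            }
            // ... normal message handling / sending after the warm-up period ...
        }
    };

    Define_Module(MyVeinsApp);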
Looking for some help to be honest; this is not my area of knowledge at all.
I've read around the question of powering my Pi with a battery, and I nabbed one of these guys for my phone:
http://www.amazon.co.uk/13000mAh-Portable-External-Technology-Motorola-Black/dp/B00BQ5KHJW/ref=sr_1_cc_1?s=aps&ie=UTF8&qid=1420826597&sr=1-1-catcorr&keywords=anker+astro+e4
In case the link dies in the future:
Item model number: AK-79AN13K2-BA
Anker® 2nd Gen Astro E4 13000mAh 2-Port (3A Output) Fast
Max 3A Out
5V Out
Now, from what I've read there have been mixed notes: don't use batteries, only use this battery, don't do this, don't exceed this magical number (which was different each time). So any help would be greatly appreciated. If I were to power my Pi via this thing, am I going to get a poof of smoke and need to replace the poor Pi? :(
A Raspberry Pi is powered via USB, which means that it simply takes the 5V supplied via USB to run. As long as your power source is stable (i.e. its voltage doesn't sag when you draw current from it), no device will care whether it is a battery or a switching power supply. Now, a bare Raspberry Pi B uses less than 2W of power, and 2W / 5V = 0.4A = 400mA, so if that battery pack lives up to its specification, you are going to be fine. The device is spec'ed to provide 13000mAh, so at a constant current of 400mA it would last you more than 32 hours.
Now, most people attach something to the Raspberry Pi, and that something will also draw power; just add that power to the calculations above to see if it's going to work out.
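As a rough worked example (the 100mA figure for an attached peripheral is an assumed value, not a measurement):

    // Back-of-the-envelope runtime estimate for the battery pack above.
    #include <stdio.h>

    int main(void) {
        const double battery_mAh   = 13000.0; // pack capacity from the spec
        const double pi_mA         = 400.0;   // bare Pi B: 2W / 5V
        const double peripheral_mA = 100.0;   // assumed extra draw, e.g. a Wi-Fi dongle
        printf("estimated runtime: %.1f hours\n",
               battery_mAh / (pi_mA + peripheral_mA)); // about 26 hours
        return 0;
    }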
I need to optimise some code for a Cortex-M3 processor, which doesn't have an FP unit. I'm completely new to the domain of optimisation. Anyway, I use VS 2012 Release Candidate to compile the code natively on my PC (Intel Core i5, Windows 7) and then port it to the Cortex-M3. I tried to write my code so that it uses floating point arithmetic as little as possible, but I still have a few uses, so I know that when I run it on the Cortex-M3 it will fall back to emulated FP code (software floating point). Since I'm not able to profile on the Cortex-M3, I profiled on my PC using VS 2012 (instrumentation method) to find out which functions take more time and need more optimisation.
I think the profiling results on my PC could be proportional to those on the Cortex-M3 if I don't use my PC's FP unit.
Is there a keyword or option in Visual Studio (2008 Pro or 2012 RC) which lets me skip the (hardware) FP unit?
Your insights are very much appreciated.
Your PC is so very different from a Cortex-M3 that optimisations performed there are unlikely to be of any relevance. Some of the differences:
PC can issue more than one instruction per cycle
PC has some billions of those cycles per second vs. some tens of millions
PC likely has more cache than your M3 has RAM
As you observe - the floating point unit
The M3 is an embedded processor - if you can't profile in the traditional way, either get a better toolset so that you can, or do it by hand using the hardware timers in the device to time your functions. Or toggle some port pins and hang an oscilloscope off them - that's proper embedded :)
EDIT:
You can profile without an OS - higher-end embedded toolchains can instrument the code, run it and pull the results back for post-processing.
There are hardware timers other than the watchdog. At the simplest level, write some functions to read the timer value before you perform some task, read it again afterwards, subtract the two and print the result. More complex schemes can also be done, logging many iterations, keeping track of statistics, etc.
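For example, on a Cortex-M3 one readily available counter is the DWT cycle counter -- here is a minimal CMSIS-based sketch (note the cycle counter is optional on some M3 implementations, and measureMe() is just a placeholder for the code under test):

    // Time a function in CPU cycles using the Cortex-M3 DWT cycle counter (CMSIS).
    #include <stdint.h>
    #include "core_cm3.h"                 // CMSIS core header, normally pulled in via your device header

    extern void measureMe(void);          // placeholder for the code under test

    void dwt_init(void) {
        CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   // enable the DWT unit
        DWT->CYCCNT = 0;                                  // reset the cycle counter
        DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;             // start counting cycles
    }

    uint32_t profile_once(void) {
        uint32_t start = DWT->CYCCNT;
        measureMe();
        uint32_t end = DWT->CYCCNT;
        return end - start;               // elapsed cycles (wraps after 2^32 cycles)
    }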
If you have a few spare port pins, just set one before the function(s) you want to profile and clear it when they complete.
With a 4-channel scope you can see the execution times (and when they happen relative to each other, which can be useful if one interrupts another) of 4 sections of code at a time. If you have more, get a logic analyser and you can do loads of them!
You can also see the jitter, or variation, in execution time, which can be instructive. Try it on the libc trig functions as the angle varies; you'll see that at some angles the sin/cos functions (for example) take far longer to run than at other angles. This can be a significant problem in a real-time system.