aligning omnet++ clock with the system clock - simulation

I'm trying to integrate omnet++ with a 3d robot simulator, and this is roughly what I'm picturing.
So there are a number of objects in the robot simulator, and they communicate with each other over 802.11, which will be simulated by omnet++. Each node in omnet++ corresponds to an object in the robot simulator, and each object's movement will be synchronized with its corresponding node in omnet++.
But since omnet++ is a discrete event simulator, I need to deal with the clock mismatch problem between omnet++ and the robot simulator.
I know omnet++ has a cRealTimeScheduler class for synchronizing the simulation clock to the wall clock, but I'm not sure if this will do what I want.
I'm a noob when it comes to network simulation, so I want to know if this is even possible. Does using the cRealTimeScheduler class take care of clock synchronization, or do I need to take a different tack (a different scheduler, or even a different simulator)?
Any help will be greatly appreciated. Thank you.

If the robot simulator itself is running in real time, then you are fine with the cRealTimeScheduler approach: cRealTimeScheduler synchronizes the simulation clock with the wall-clock time, so if the robot simulator also keeps pace with the wall clock, the two will be implicitly synchronized.
If the robot simulator has its own simulation time (i.e. it can run faster than real time), then you should create your own scheduler class that synchronizes the two simulations. This is called co-simulation: two simulations running in tandem. Veins (SUMO + OMNeT++) does this too, coupling SUMO (a road traffic simulator) with OMNeT++ (a network simulator).
What you are trying to achieve is possible. I'm not familiar with the robot simulator side, but as long as the other simulator also communicates with messages at discrete time points and you can query its simulation time, you should be fine.
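To make the custom-scheduler idea concrete, here is a minimal sketch against the cScheduler interface of OMNeT++ 5.x (this is not from the answer above: RobotSimScheduler and waitForRobotSimTime are made-up names, and the actual coupling to the robot simulator is left as a placeholder):

```cpp
// Sketch of a co-simulation scheduler for OMNeT++ 5.x. Before handing
// out the next event, it blocks until the (hypothetical) robot
// simulator has advanced to that event's timestamp.
#include <omnetpp.h>

using namespace omnetpp;

class RobotSimScheduler : public cScheduler
{
  public:
    virtual cEvent *guessNextEvent() override {
        return sim->getFES()->peekFirst();
    }

    virtual cEvent *takeNextEvent() override {
        cEvent *event = sim->getFES()->peekFirst();
        if (!event)
            throw cTerminationException("no more events");

        // Placeholder: poll or RPC into the robot simulator until its
        // simulation time reaches the next OMNeT++ event time.
        waitForRobotSimTime(event->getArrivalTime());

        return sim->getFES()->removeFirst();
    }

    virtual void putBackEvent(cEvent *event) override {
        sim->getFES()->putBackFirst(event);
    }

  private:
    void waitForRobotSimTime(simtime_t t) {
        // The socket/RPC call into the robot simulator goes here.
    }
};

Register_Class(RobotSimScheduler);
```

You would then select it in omnetpp.ini with scheduler-class = "RobotSimScheduler"; cRealTimeScheduler itself is enabled through the same option.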

Related

Change parameters during robot learning using simulators (Webots, Gazebo, etc)

I am searching for a simulator for my robot learning research.
In the learning process, I need to change parameters of both the environment (friction coefficients, terrain height in the world) and the robot itself (mass, inertia).
Can simulators like Gazebo and Webots do this, and how?
(Another problem: besides the physics engine, I also need virtual reality for computer-vision-aided algorithms.
Is there any simulator that provides both functions?)
Webots allows you to easily change any parameter of a simulation (including friction coefficients) from a supervisor program while it is running. Moreover, it has a VR interface. I don't know about Gazebo.
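To give a flavor of the supervisor approach, here is a minimal sketch using the Webots C++ Supervisor API (this is not part of the answer above; the DEF name ROBOT_PHYSICS and the change-the-mass-every-1000-steps schedule are invented and must be adapted to your own world file):

```cpp
// Webots C++ supervisor that changes a simulation parameter while the
// simulation is running. Build and assign it as the supervisor
// controller of your world.
#include <webots/Supervisor.hpp>

int main() {
    webots::Supervisor supervisor;
    const int timeStep = (int)supervisor.getBasicTimeStep();

    // Look up a node in the scene tree by its DEF name and grab one of
    // its fields (here the mass of a Physics node).
    webots::Node *physics = supervisor.getFromDef("ROBOT_PHYSICS");
    webots::Field *mass = physics->getField("mass");

    int steps = 0;
    while (supervisor.step(timeStep) != -1) {
        // Example schedule: bump the mass every 1000 steps.
        if (++steps % 1000 == 0)
            mass->setSFFloat(mass->getSFFloat() + 0.1);
    }
    return 0;
}
```

The same pattern works for other fields, e.g. the friction parameters of a ContactProperties node.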

Sumo simulator for urban traffic simulation

For urban traffic simulation, we can couple the SUMO simulator with other simulators like OMNeT++, Matlab, or NS-2/3.
I know SUMO can model mobility, with the other simulator coupled to SUMO handling the communication protocols or communication networks.
At the same time, it is possible to simulate a VANET using just Matlab.
What is the difference between these approaches (SUMO coupled with another simulator, versus just Matlab)?
How can we tell which is better?
Thank you
It really depends on how much influence the traffic situation has on your scenario. If you are just interested in checking whether your protocol works even when two vehicles drive at 200 km/h in opposite directions, and there is no interaction with other vehicles, you do not need SUMO. But if your scenarios involve jams or complex junctions and you want (more or less) realistic trajectories for interacting vehicles, you are better off with a traffic simulation like SUMO (especially if you want to run real-world scenarios, importing data from OpenStreetMap etc.).
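To make the coupling concrete (this goes beyond the answer above): frameworks like Veins talk to SUMO over the TraCI protocol, stepping the traffic simulation and reading back vehicle positions every tick. Below is a rough sketch of that loop using SUMO's libsumo C++ API; the config file name is a placeholder, and the exact headers and signatures may differ between SUMO versions:

```cpp
// Stepping SUMO from C++ via libsumo and reading vehicle positions,
// essentially what a coupling framework does for the network simulator.
#include <libsumo/Simulation.h>
#include <libsumo/Vehicle.h>
#include <cstdio>
#include <string>

using namespace libsumo;

int main() {
    Simulation::load({"-c", "scenario.sumocfg"});  // placeholder config

    while (Simulation::getMinExpectedNumber() > 0) {
        Simulation::step();  // advance SUMO by one simulation step

        // A coupling framework would feed these positions into the
        // mobility model of the network simulator.
        for (const std::string &id : Vehicle::getIDList()) {
            TraCIPosition pos = Vehicle::getPosition(id);
            std::printf("%s at (%.1f, %.1f)\n", id.c_str(), pos.x, pos.y);
        }
    }
    Simulation::close();
    return 0;
}
```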

Obstetrician needs help! How to exactly synchronize system time between iPhone and Windows 10 Laptop

We are trying to develop a low-cost ultrasound device that can be used by inexperienced operators for health care in developing countries. We have created a low-profile optical tracking system that connects to the ultrasound probe. It outputs positional data from both a binocular camera and an on-board 9-axis IMU. The ultrasound pictures are collected on an iPhone at 60 frames per second and are time-stamped to the millisecond based on the iPhone system time. The optical tracker collects positional data onto a Windows 10 laptop. We need to synchronize the system time of the two devices (iPhone, laptop) exactly, to at least 1/10 s and preferably to the millisecond.
Is there a way to access the precise system time on the iPhone and synchronize this with the laptop?
Full disclosure: I am an obstetrician and not an engineer. But I’m not satisfied with the story I’m getting from the developers about this. It must be possible.
We've tried pointing the laptop to the same internet time server as the iPhone, but the sync is not good enough. Maybe because of Wi-Fi latency?
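For context on what the syncing is doing: pointing both devices at the same internet time server means each one independently estimates its offset from a remote clock, and the two errors do not cancel. The classic NTP-style estimate is a two-way timestamp exchange, and doing that exchange directly between the laptop and the iPhone over the local network bounds the error by half the (small) round-trip time. A toy sketch of just the arithmetic, with made-up timestamp values:

```cpp
// NTP-style offset estimation. A real implementation exchanges these
// four timestamps over UDP between the two devices; the values below
// are fabricated purely to exercise the formula.
#include <cstdio>

int main() {
    // t0: request sent (laptop clock), t1: request received (iPhone
    // clock), t2: reply sent (iPhone clock), t3: reply received
    // (laptop clock). Units: seconds.
    double t0 = 100.000, t1 = 100.812, t2 = 100.813, t3 = 100.030;

    // Offset of the iPhone clock relative to the laptop clock,
    // assuming the network delay is symmetric in both directions.
    double offset = ((t1 - t0) + (t2 - t3)) / 2.0;
    // Round-trip delay; half of it bounds the error of the estimate.
    double roundTrip = (t3 - t0) - (t2 - t1);

    std::printf("offset = %.3f s, round trip = %.3f s\n", offset, roundTrip);
    return 0;
}
```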

Raspberry Pi Zero Embedded

First I want to point out that I'm very new to Raspberry Pis. I have bought a Raspberry Pi Zero for my project, because Arduino did not have enough horsepower.
My project involves an I2C sensor and audio output (I2S). The audio is generated on the Raspberry Pi, which is why I need the computational power.
Now I'd like to know what would be a good choice of operating system. I don't really need anything beyond I2C, I2S, and some math to generate the sound. The Raspberry Pi will be embedded in the system and battery powered, so it should be able to survive sudden power loss.
I found something relating to real-time operating systems, but I'm not sure I need it to be exactly real-time, since I can buffer the generated sound data. I do need the system to be fast, though, and as light as possible, as the sound generation is a rather heavy process.
I understand this is a somewhat vague question; I'm happy with any information I can get, and if you could just point me in the right direction, that would be appreciated.
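To illustrate the buffering point: on a stock Linux image (no RTOS) the usual pattern is to synthesize audio in blocks and let a buffered sound API pace the generator, so hard real-time guarantees are not needed. A minimal sketch against ALSA, where the device name, sample rate, and half-second buffer are all assumptions to adapt:

```cpp
// Buffered tone generator for ALSA (build with: g++ tone.cpp -lasound).
// snd_pcm_writei() blocks when the buffer is full, which paces the
// synthesis loop; the 0.5 s of buffering absorbs scheduling hiccups.
#include <alsa/asoundlib.h>
#include <cmath>
#include <cstdint>
#include <vector>

int main() {
    snd_pcm_t *pcm = nullptr;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    const unsigned rate = 44100;
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED,
                       1 /*mono*/, rate, 1 /*allow resampling*/,
                       500000 /*buffer latency, microseconds*/);

    std::vector<int16_t> block(1024);
    double phase = 0.0, freq = 440.0;
    for (;;) {
        // Fill the next block; your actual synthesis math replaces
        // this 440 Hz test tone.
        for (int16_t &s : block) {
            s = static_cast<int16_t>(32000 * std::sin(phase));
            phase += 2 * M_PI * freq / rate;
            if (phase > 2 * M_PI)
                phase -= 2 * M_PI;
        }
        if (snd_pcm_writei(pcm, block.data(), block.size()) < 0)
            snd_pcm_prepare(pcm);  // recover from an underrun
    }
}
```

As for surviving sudden power loss, the OS choice matters less than a common configuration trick: mounting the filesystem read-only, which any minimal Raspbian or Buildroot image can do.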

How does this iPhone application work?

I was just looking at iPhone apps on the App Store, and I found this great app, http://www.dynolicious.com/. Can you give some idea of the logic behind how this app works? I mean, how is it possible to measure car performance without using or communicating with any external hardware, using just the hardware built into the iPhone, i.e. the accelerometer?
It works as follows: the results it reports are actually estimates, not exact values measured by attaching external hardware to the car. The estimates come from the GPS unit and the accelerometer embedded in the iPhone. From the positions the GPS reports at different instants, you can estimate the distance travelled and therefore the velocity; the acceleration can then be estimated using the accelerometer.
This is just a guess, but I would expect that you tell it your car's make and model, which gives it the manufacturer's performance and fuel consumption figures. Then, using the accelerometer and positional information in the phone, it can calculate the speed of the car. A relatively simple equation can then be used to calculate the expected performance.
I would guess it uses the GPS to measure the 0-60 acceleration (start a stopwatch, and stop it when the GPS says you're moving at 60 mph) and the built-in accelerometer to detect G-force. The horsepower estimate is just that: an estimate. They may have a performance table of various known cars with their 0-60 times and horsepower; based on that, they can give an estimate for yours.
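To make the GPS-speed idea concrete, here is a toy sketch of the estimation logic the answers describe (not the app's actual code): speed is the haversine distance between consecutive GPS fixes divided by the time between them, and a 0-60 timer just watches that estimate cross the thresholds.

```cpp
// Toy speed estimate from two GPS fixes; the fix values are fabricated
// purely to exercise the math.
#include <cmath>
#include <cstdio>

struct Fix { double lat, lon, t; };  // degrees, degrees, seconds

// Haversine great-circle distance between two fixes, in meters.
double distanceMeters(const Fix &a, const Fix &b) {
    const double R = 6371000.0, rad = M_PI / 180.0;
    double dlat = (b.lat - a.lat) * rad, dlon = (b.lon - a.lon) * rad;
    double h = std::sin(dlat / 2) * std::sin(dlat / 2) +
               std::cos(a.lat * rad) * std::cos(b.lat * rad) *
               std::sin(dlon / 2) * std::sin(dlon / 2);
    return 2 * R * std::asin(std::sqrt(h));
}

int main() {
    Fix prev {37.00000, -122.0, 0.0};
    Fix cur  {37.00005, -122.0, 1.0};

    double mps = distanceMeters(prev, cur) / (cur.t - prev.t);
    double mph = mps * 2.23694;  // meters/second to miles/hour
    std::printf("estimated speed: %.1f mph\n", mph);

    // A 0-60 timer records the timestamp when this estimate first
    // leaves ~0 and again when it first reaches 60 mph.
    return 0;
}
```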