simulation of sensors using cooja simulator - android-sensors

I am trying to simulate a sensor node with limited computing resources and limited memory using the Cooja simulator. I would like to know how I can vary the memory size and find its effect on speed.

You can run COOJA with the ant run_bigmem command. This command assigns the maximum available memory to the COOJA simulator.

Related

Obstetrician needs help! How to exactly synchronize system time between iPhone and Windows 10 Laptop

We are trying to develop a low-cost ultrasound device that can be used by inexperienced operators for health care in developing countries. We have created a low-profile optical tracking system that connects to the ultrasound probe. It outputs positional data from both the binocular camera and an on-board 9-axis IMU. The ultrasound pictures are collected on an iPhone at a frame rate of 60 per second and are time-stamped to the millisecond based on the iPhone system time. The optical tracker collects positional data onto a Windows 10 laptop. We need to synchronize the system time of the two devices (iPhone, laptop) to within at least 1/10 of a second, and preferably to the millisecond.
Is there a way to access the precise system time on the iPhone and synchronize this with the laptop?
Full disclosure: I am an obstetrician and not an engineer. But I’m not satisfied with the story I’m getting from the developers about this. It must be possible.
We've tried pointing the laptop to the same internet clock as the iPhone, but the sync is not good enough. Maybe because of wifi latency?

Aligning the OMNeT++ clock with the system clock

I'm trying to integrate OMNeT++ with a 3D robot simulator, and this is roughly what I'm picturing.
There are a number of objects in the robot simulator, and they communicate with each other using 802.11, which will be simulated by OMNeT++. Each node in OMNeT++ corresponds to an object in the robot simulator, and an object's movement will be synchronized with its corresponding node in OMNeT++.
But since OMNeT++ is a discrete-event simulator, I need to deal with the clock mismatch problem between OMNeT++ and the robot simulator.
I know OMNeT++ has a cRealTimeScheduler class for synchronizing the simulation clock to the wall clock, but I'm not sure if this will do what I want.
I'm a noob when it comes to network simulation, so I want to know whether this is even possible. Does using the cRealTimeScheduler class take care of clock synchronization, or do I need to take a different tack (a different scheduler, or even a different simulator)?
Any help will be greatly appreciated. Thank you.
If the robot simulator itself runs in real time, then you are fine with the cRealTimeScheduler approach: cRealTimeScheduler synchronizes the simulation clock with wall-clock time, so if the robot simulator is also running in real time, the two will be implicitly synchronized.
If the robot simulator has its own simulation time (i.e. it can run faster than real time), then you should create your own scheduler class that synchronizes the two simulations. This is called co-simulation, where two simulations run in tandem. Veins (SUMO + OMNeT++) does this as well: SUMO (a road traffic simulator) and OMNeT++ (the network simulator) work together.
What you are trying to achieve is possible. I'm not familiar with the robot simulator side, but as long as it also communicates with messages at discrete time points and you can get the simulation time from the robot simulator, you should be fine.
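To make the custom-scheduler idea a bit more concrete, here is a rough sketch, assuming the OMNeT++ 5.x cScheduler interface; the RobotSimConnection type and its advanceTo() method are hypothetical placeholders for whatever time-sync API your robot simulator exposes:

    // Sketch of a co-simulation scheduler (assumes the OMNeT++ 5.x cScheduler API).
    // RobotSimConnection is a hypothetical stand-in for the robot simulator's interface.
    #include <omnetpp.h>
    using namespace omnetpp;

    struct RobotSimConnection {
        // Block until the robot simulator's clock has reached time t (placeholder).
        void advanceTo(double t) { /* talk to the robot simulator here */ }
    };

    class RobotSimScheduler : public cScheduler
    {
        RobotSimConnection robotSim;

      public:
        virtual void startRun() override {
            // Connect to the robot simulator here.
        }

        virtual cEvent *guessNextEvent() override {
            return sim->getFES()->peekFirst();
        }

        virtual cEvent *takeNextEvent() override {
            cEvent *event = sim->getFES()->peekFirst();
            if (!event)
                throw cTerminationException("no more events");
            // Let the robot simulator catch up to the timestamp of the next
            // network event before OMNeT++ processes it.
            robotSim.advanceTo(event->getArrivalTime().dbl());
            return sim->getFES()->removeFirst();
        }

        virtual void putBackEvent(cEvent *event) override {
            sim->getFES()->putBackFirst(event);
        }
    };

    Register_Class(RobotSimScheduler);

The scheduler would then be selected in omnetpp.ini via the scheduler-class option, the same way cRealTimeScheduler is normally enabled.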

How does the OS interact with peripherals like sound cards/video cards, etc.?

As far as I understand it, any program gets compiled to a series of assembly instructions for the architecture it is running on. What I fail to understand is how the operating system interacts with peripherals such as a video card. Isn't the driver itself a series of assembly instructions for the CPU?
The only thing I can think of is that it uses regions of memory that are then monitored by the peripheral, or that it uses the bus to communicate operations and receive results. Is there a simple explanation of this process?
Sorry if this question is too general, it's something that's been bothering me.
You're basically right in your guess. Depending on the CPU architecture, peripherals might respond to "memory-mapped I/O" (where they watch for reads and writes to specific memory addresses), or to other specific I/O instructions (such as the x86 IN and OUT instructions).
Device drivers are OS-specific software, and provide an interface between the OS and the hardware.
A specific physical device either has hardware that knows how to respond to whatever signals from the CPU it monitors, or it has its own CPU and software, often called firmware. The firmware of a device is not specific to any operating system and is usually stored in persistent memory on the device, so it is retained even when the device is powered off. However, some peripherals have firmware that is loaded by the device driver when the OS boots.
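As a toy illustration of the memory-mapped I/O case (the base address 0x40001000 and register layout below are invented, and code like this only makes sense inside a kernel driver or on bare metal, where those addresses are actually decoded by a device), a driver-level register access might look roughly like this:

    #include <cstdint>

    // Invented addresses: on real hardware these come from the bus/board documentation.
    constexpr std::uintptr_t DEVICE_BASE = 0x40001000;

    // 'volatile' forces every access to actually reach the bus, so the compiler
    // cannot cache or optimize away the register reads and writes.
    volatile std::uint32_t *const CTRL =
        reinterpret_cast<volatile std::uint32_t *>(DEVICE_BASE + 0x0);
    volatile std::uint32_t *const STATUS =
        reinterpret_cast<volatile std::uint32_t *>(DEVICE_BASE + 0x4);

    void start_device()
    {
        *CTRL = 0x1;                   // this store is seen by the peripheral, not by RAM
        while ((*STATUS & 0x1) == 0)   // poll until the device reports it is ready
            ;                          // a real driver would sleep or wait for an interrupt
    }

The CPU executes an ordinary store and load, but because the address is decoded by the peripheral rather than by RAM, the device sees the write and supplies the value that is read back.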
There are simple explanations and there are truthful explanations - choose one!
I'll try a simple one: among the assembly instructions, there are some that are specialized for talking to peripherals. The hardware interprets them not by, say, adding values in registers or writing something to RAM, but by moving some data from a register or a region of RAM to a peripheral (or the other way around).
Inside the OS, the sound driver, for example, is responsible for assembling some sound data along with some command data in RAM; the OS then invokes the bus driver to issue these special instructions and move the command and data to the sound card. The sound card hardware will (hopefully) understand the command and interpret the data as sound it should play.
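For the "special instructions" flavour, here is a minimal sketch of x86 port I/O using GCC/Clang inline assembly; port 0x3F8 (the classic first serial port) is just a familiar example, and code like this needs kernel mode or I/O privileges to actually run:

    #include <cstdint>

    // Write one byte to an x86 I/O port with the OUT instruction.
    static inline void outb(std::uint16_t port, std::uint8_t value)
    {
        asm volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
    }

    // Read one byte from an x86 I/O port with the IN instruction.
    static inline std::uint8_t inb(std::uint16_t port)
    {
        std::uint8_t value;
        asm volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
        return value;
    }

    void send_byte_to_serial(std::uint8_t byte)
    {
        while ((inb(0x3FD) & 0x20) == 0)   // wait until the UART's transmit buffer is empty
            ;
        outb(0x3F8, byte);                 // hand the byte to the device, not to memory
    }

Here the data never touches a memory address at all; the IN/OUT instructions move it directly between a CPU register and the device's I/O port.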

iPhone Simulator - allocates way too much memory and runs slowly compared to the device

I've seen plenty of posts about the simulator running slow, but my problem is different.
I ran my app with Instruments and saw that on the device the app uses about 8 MB of live memory while running. In the Simulator the live memory is about 50 MB, and I have no idea why this is.
This causes the Simulator to lag, and I need it to run smoothly so I can take a nice screen-capture video of my app.
Any ideas?
There are a number of steps in the OpenGL ES 1.1/2.0 pipeline that are done in software when running in the Simulator (as the Mac's GPUs expose plain OpenGL, not OpenGL ES) but are hardware-accelerated when running on the device (hence the app actually running faster on the device).
From the documentation:
Important: Rendering performance of OpenGL ES in Simulator has no relation to the performance of OpenGL ES on an actual device. Simulator provides an optimized software rasterizer that takes advantage of the vector processing capabilities of your Macintosh computer. As a result, your OpenGL ES code may run faster or slower in iOS simulator (depending on your computer and what you are drawing) than on an actual device. Always profile and optimize your drawing code on a real device and never assume that Simulator reflects real-world performance.
This definitely explains the speed discrepancy, and it might also explain the extra memory taken up when running in the Simulator.

Emulate a limited-resource device on Android

I am trying to track down a NullPointerException that I get in my app when the phone releases memory. I was testing on a Samsung G3, but now that I have changed to a GS2, which has more RAM, the variable is still there when I minimize/maximize.
Is there any way to simulate my old phone and its limited RAM? A bit ironic, but now I miss it... In the SDK emulators I can set the SD card size, but not the RAM, which I guess is the key problem here.
There is a program on the Android Market called CPU Master. You can set the CPU speed, so you can test your program as if it were running on a phone with limited resources.