Orientation of body - accelerometer

We need to build a patient monitoring system that monitors the position of the patient's body while the patient is in bed. What we want is this: while the patient is sleeping, we want to know which position he/she is in, whether lying facing the ceiling, lying on the left side, or lying on the right side.
We are using an IMU sensor, so we have gyroscope, accelerometer, and magnetometer readings on the x, y, and z axes. Please suggest what to do next in order to detect the positions above.

With only one (simple) sensor attached to a body, it's rather difficult. With multiple sensors (and the ability to distinguish between them), you could possibly triangulate each of them to get relative distances on the three spatial axes.
But, without more information than you currently have (including perhaps some details on what the sensor actually senses), I don't think it's possible.
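That said, if the accelerometer mainly reports gravity while the patient is lying still, one rough heuristic for the facing-up / left-side / right-side distinction might look like the sketch below. This is a minimal sketch, assuming a chest-mounted sensor with a known axis orientation; the axis mapping and thresholds are illustrative, not from the question.

```python
import numpy as np

def classify_posture(accel_xyz):
    """Classify lying posture from a single accelerometer reading (in g).

    Assumed mounting: the IMU is strapped to the patient's chest, with the
    sensor's x-axis toward the head, y-axis toward the patient's left, and
    z-axis out of the chest. Adjust the axis mapping for your own mounting.
    While the patient is still, the reading is dominated by gravity (~1 g).
    """
    g = np.linalg.norm(accel_xyz)
    if g < 0.5:  # reading should be close to 1 g when the patient is still
        return "unknown"
    ax, ay, az = np.asarray(accel_xyz) / g  # normalize so thresholds are scale-free
    if az > 0.7:    # z-axis points at the ceiling -> lying on the back
        return "supine (facing ceiling)"
    if az < -0.7:   # chest faces the mattress
        return "prone"
    if ay < -0.7:   # patient's left side is down
        return "lying on left side"
    if ay > 0.7:    # patient's right side is down
        return "lying on right side"
    return "intermediate / sitting up"

print(classify_posture([0.05, 0.92, 0.15]))  # -> lying on right side
```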

Related

Gather multiple kinects v2 data in one computer

I would like to use three Kinects v2 running on three computers and then gather their data on one computer (real-time 3D reconstruction using Unity3D). Is it possible to do so, and how? Thank you.
So what you're asking is very doable; it just takes a lot of work.
For reference, I'm referring to the frames of the 3D point cloud gathered by the Kinect as your image.
All you need is to set up a program on each of your Kinect computers that runs it as a client. The remaining computer runs as a server and has the clients send it packets of images with some other data attached.
The data you'll need at a minimum will be angle and position from the 'origin'.
For this to work properly you need to be able to reference the data from all your Kinects to each other. The easiest way to do this is to have a known point and measure each Kinect's distance from that point and the angle it is facing relative to north and sea level.
Once you have all that data you can take each image from each computer and rotate the point clouds using trigonometry, then combine all the data. Combining the data is something you'll have to play with, as there are loads of different ways to do it and it will depend on your application.
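As a rough illustration of the "rotate, then combine" step, here is a minimal sketch (not the networking part). It assumes each client already sends its point cloud plus the yaw angle and position you measured for that Kinect relative to the shared origin; the poses and clouds below are made up.

```python
import numpy as np

def to_world(points, yaw_deg, position):
    """Rotate a Kinect's point cloud (N x 3, metres, sensor frame) about the
    vertical axis by its measured yaw, then translate by its measured position
    relative to the shared origin. Assumes the sensors are level; add roll and
    pitch rotations the same way if they are tilted."""
    yaw = np.radians(yaw_deg)
    Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0,          0.0,         1.0]])
    return points @ Rz.T + np.asarray(position)

# One (hypothetical) frame from each client, plus the pose measured for it.
cloud_a = to_world(np.random.rand(1000, 3), yaw_deg=0.0,   position=[0.0, 0.0, 0.0])
cloud_b = to_world(np.random.rand(1000, 3), yaw_deg=120.0, position=[2.5, 1.0, 0.0])
merged = np.vstack([cloud_a, cloud_b])   # naive merge: just concatenate the clouds
```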

Get position from accelerometer

I am working on a monocular 3D mapping project, and at every time step I need both position and rotation (angle).
To filter the gyroscope data, I decided to use the "compass" and set the angle to 0 when it points north.
But to get the position, I will need to double-integrate the accelerometer values with a small sampling step (1 ms) and a 7-value mean filter.
I think this will make the position more accurate. But does anyone have an idea of the error range? For example, over 10 meters, how large will the error be?
And does anyone have a better idea?
The sensors are from the STM32F3 Discovery board.
Thanks
The STM32F3 Discovery board has two sensors you'd be using:
LSM303DLHC accelerometer and magnetometer
L3GD20 3-axis digital gyroscope.
The sensor accuracy should appear somewhere in the datasheets. Since you'll be using several sensors, you'll have to calculate the total error over the time you're measuring. Note that the error won't be a single number like 10 meters, because it accumulates over time. If you had a GPS or some other way of determining your position, you'd be able to limit your accumulated error.
What you're doing sounds like an Inertial Measurement Unit. If you haven't already, I'd recommend reading up on that and also Dead Reckoning.
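To get a feel for how fast that accumulated error grows, here is a minimal sketch. The bias and noise values are made up for illustration; only the 1 ms step and the 7-value mean filter come from the question.

```python
import numpy as np

dt = 0.001                              # 1 ms sampling step, as in the question
t = np.arange(0, 10, dt)
true_accel = np.zeros_like(t)           # the board is actually held still
bias = 0.01 * 9.81                      # hypothetical 0.01 g accelerometer bias (m/s^2)
measured = true_accel + bias + np.random.normal(0.0, 0.05, t.size)

# 7-sample mean filter, then double integration via cumulative sums.
filtered = np.convolve(measured, np.ones(7) / 7, mode="same")
velocity = np.cumsum(filtered) * dt
position = np.cumsum(velocity) * dt

print(f"position error after {t[-1]:.0f} s: {position[-1]:.2f} m")
# A constant bias b grows as 0.5 * b * t**2, i.e. roughly 4.9 m after 10 s for
# a 0.01 g bias, which is why the error cannot be quoted as one fixed number.
```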

How does multitouch IR touch screen work

I am doing research on touch screens and I could not find a good source except for the image below, which could explain how multitouch IR systems work. Basically, single-touch IR systems are pretty simple: on two sides of the panel, let's say the left and top, are the IR transmitters, and on the right and bottom are the receivers. So if a user touches somewhere in the middle, the path of the IR light is disrupted and the ray does not reach the receiving end, so the processor can pick up the coordinates. But this will not work for multitouch systems, as there is an issue of ghost points with this approach.
Below I have an image of PQ Labs' multitouch IR system working, but as no explanation is given, I am not able to understand how it works. Any help will be greatly appreciated.
I believe they have a special algorithm to avoid the points caused by the crossing of the emitters' light. But this algorithm will not work every time, so sometimes, if you put your fingers very close to each other, the ghost points may still show up.
My guess:
1. The sensors are analog (there must be an analog-to-digital converter to read each of the opto-transistors (IR receivers)).
2. LEDa and LEDb are not on at the same time.
3. The opto-transistors are running in a linear range (not in saturation) when no object is present.
One object:
4. When one object is placed on the surface, there will be less light reaching some of the opto-transistors. This will be reflected by a reading that is lower than the reading when no object is present.
The reading of the opto-transistor array (an array reflecting the reading from each opto-transistor) will provide information about:
4.1. How many opto-transistors are completely shaded (off).
4.2. Which opto-transistors are affected.
Please note: a reading from one LED is not sufficient to know the object's position.
To get the object location we need two readings (one from LEDa and one from LEDb). Now we can calculate the object's position and size, since we know the geometry of the screen (a rough sketch of this calculation follows after this answer).
Two Objects:
Now each array may have "holes" in the shaded area (there will be two groups). These holes indicate that there is an additional object.
If the objects are close to each other, the holes may not be seen. However, there are many LEDs, so there will be multiple arrays (one for each LED), and based on the presented geometry these holes may be seen by some of the LEDs.
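Here is a minimal sketch of the "two readings give you a position" step for a single object. The frame coordinates, LED positions, and shadow centres below are invented for illustration; a real controller would also have to handle object size and the ghost-point disambiguation described above.

```python
import numpy as np

def locate(led_a, shadow_a, led_b, shadow_b):
    """Estimate an object's position from two shadow readings.

    led_a / led_b: (x, y) positions of the two emitters on the frame.
    shadow_a / shadow_b: (x, y) centres of the shaded opto-transistors seen
    while the respective LED was lit. The object lies on the line from each
    LED to its shadow centre, so we intersect the two lines.
    All coordinates come from the known screen geometry.
    """
    def line(p, q):
        # Line through p and q expressed as a*x + b*y = c.
        d = np.subtract(q, p)
        a, b = d[1], -d[0]
        return a, b, a * p[0] + b * p[1]

    a1, b1, c1 = line(led_a, shadow_a)
    a2, b2, c2 = line(led_b, shadow_b)
    return np.linalg.solve([[a1, b1], [a2, b2]], [c1, c2])

# Hypothetical frame: LEDs on the top edge, receivers along the bottom edge.
print(locate(led_a=(0.0, 10.0), shadow_a=(6.0, 0.0),
             led_b=(16.0, 10.0), shadow_b=(2.0, 0.0)))  # -> [4.8, 2.0]
```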
For more information please see US patent#: US7932899
Charles Bibas

Visualizing/plotting location based on accelerometer/gyro readings over time

What's the easiest way to plot a location track based on a series of readings from accelerometer/gyro/compass sensors taken over time? Let's say I have the following data taken every second:
ElapsedTime(s) xMag(uT) yMag(uT) zMag(uT) xAccel(g) yAccel(g) zAccel(g) xRate(rad/sec) yRate(rad/sec) zRate(rad/sec) roll(rad) pitch(rad) yaw(rad)
...
Is there an easy way to draw a location plot for any given time? I'm using an iPhone 4 with the xSensor app to capture the data, but can't just use GPS. I would appreciate any hints. Both standalone applications and Java libraries would be good.
Double integration of the acceleration vector (after gyro and compass direction correction) will give you a location relative to some initial or arbitrary offset. The problem is that any small offsets or errors in the acceleration data, and there will be some, will result in a rapidly diverging position.
What might be more possible is to get frequent and precise GPS position data, and use the acceleration data to estimate the route in between two very nearby GPS fixes.
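For what the plotting itself could look like, here is a minimal sketch (Python/NumPy and matplotlib rather than Java; the sample rows, axis convention, and units are assumptions about the log format above). It rotates each body-frame acceleration into the world frame using the logged roll/pitch/yaw, integrates twice, and plots the track, which also makes the drift described above visible quickly.

```python
import numpy as np
import matplotlib.pyplot as plt

def body_to_world(accel_g, roll, pitch, yaw):
    """Rotate one body-frame accelerometer sample (in g) into the world
    frame using the logged roll/pitch/yaw (radians), then remove gravity."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    # Z-Y-X (yaw-pitch-roll) rotation, a common convention; check your log's.
    R = np.array([[cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
                  [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
                  [-sp,   cp*sr,            cp*cr           ]])
    world = R @ (np.asarray(accel_g) * 9.81)
    return world - np.array([0.0, 0.0, 9.81])   # subtract gravity

# Hypothetical rows parsed from the log: (dt_seconds, accel_xyz_g, roll, pitch, yaw)
samples = [(1.0, [0.02, 0.01, 1.00], 0.0, 0.0, 0.0)] * 60
pos, vel, track = np.zeros(3), np.zeros(3), []
for dt, a, roll, pitch, yaw in samples:
    vel += body_to_world(a, roll, pitch, yaw) * dt
    pos += vel * dt
    track.append(pos.copy())
track = np.array(track)

plt.plot(track[:, 0], track[:, 1])
plt.xlabel("x (m)"); plt.ylabel("y (m)")
plt.show()
```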

Detect the iPhone rotation spin?

I want to create an application that can detect the number of spins when the user rotates the iPhone. Currently, I am using the Compass API to get the angle and have tried many ways to detect spins. Below is the list of solutions I've tried:
1/ Create two angle traps (points on the full circle) to detect whether the angle we get from the compass has passed them or not.
2/ Sum all the angular distances between compass updates (in the updateHeading function), then divide the summed angle by 360 => we get the number of spins.
The problem is that when the phone is rotated too fast, the compass cannot keep up with the speed of the phone, and it only returns the angle at the latest update (not continuously, as in the real rotation).
We also tried to use the accelerometer to detect spins. However, this cannot work when you rotate the phone on a flat plane.
If you have any solution or experience on this issue, please help me.
Thanks so much.
The iPhone 4 contains a MEMS gyroscope, so that's the most direct route.
As you've noticed, the magnetometer has sluggish response. This can be reduced by using an anticipatory algorithm that uses the sluggishness to make an educated guess about what the current direction really is.
First, you need to determine the actual performance of the sensor. To do this, you need to rotate it at a precise rate at each of several rotational speeds, and record the compass behavior. The rotational platform should have a way to read the instantaneous position.
At slower speeds, you will see a varying degree of fixed lag. As the speed increases, the lag will grow until it approaches 180 degrees, at which point the compass will suddenly flip. At higher speeds, all you will see is flipping, though it may appear to not flip when the flips repeat at the same value. At some of these higher speeds, the compass may appear to rotate backwards, opposite to the direction of rotation.
Getting a rotational table can be a hassle, and ensuring it doesn't affect the local magnetic field (making the compass useless) is a challenge. The ideal table will be made of aluminum, and if you need to use a steel table (most common), you will need to mount the phone on a non-magnetic platform to get it as far away from the steel as possible.
A local machine shop will be a good place to start: CNC machines are easily capable of doing what is needed.
Once you get the compass performance data, you will need to build a model of the observed readings vs. the actual orientation and rotational rate. Invert the model and apply it to the readings to obtain a guess of the actual readings.
A simple algorithm implementation will be to keep a history of the readings, and keep a list of the difference between sequential readings. Since we know there is compass lag, when a difference value is non-zero, we will know the current value has some degree of inaccuracy due to lag.
The next step is to create a list of 'corrected' readings, where the known lag of the prior actual values is used to generate an updated value that is added to the last value in the 'corrected' list and stored as the newest value.
When the cumulative correction (the difference between the latest values in the actual and corrected lists) exceeds 360 degrees, that means we basically don't know where the compass is pointing. Hopefully that point won't be reached, since most rotational motion should generally last only a fairly short duration.
However, since your goal is only to count rotations, you will be off by less than a full rotation until the accumulated error reaches a substantially higher value. I'm not sure what this value will be, since it depends on both the actual compass lag and the actual rate of rotation. But if you care only about a small number of rotations (5 or so), you should be able to obtain usable results.
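Method 2/ from the question can at least be made robust against the 0/360 wrap with a simple unwrapping step. This is a minimal sketch (the heading values are made up, and it does nothing about the compass lag itself; it still assumes the heading never jumps by more than half a turn between updates, which is exactly the fast-rotation failure mode described above):

```python
def count_spins(headings_deg):
    """Count full rotations from a sequence of compass headings (degrees)."""
    total = 0.0
    for prev, curr in zip(headings_deg, headings_deg[1:]):
        delta = curr - prev
        if delta > 180.0:      # wrapped past 0 going counter-clockwise
            delta -= 360.0
        elif delta <= -180.0:  # wrapped past 360 going clockwise
            delta += 360.0
        total += delta         # accumulate the signed, unwrapped angle
    return int(abs(total) // 360)

# Two and a bit clockwise turns, sampled slowly enough for the compass:
print(count_spins([0, 90, 180, 270, 0, 90, 180, 270, 0, 45]))  # -> 2
```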
You could use the accelerometer readings to estimate how fast the phone is spinning and use that to fill in the blanks until the phone has stopped, at which point you could query the compass again.
If you're using an iPhone 4, the problem has been solved and you can use Core Motion to get rotational data.
For earlier devices, I think an interesting approach would be to try to detect wobbling as the device rotates, using UIAccelerometer on a very fine reporting interval. You might be able to get some reasonable patterns detected from the motion at right angles to the plane of rotation.