Recently I came across this app "Instant Blood Pressure - Monitor Blood Pressure Using Only Your Phone by Aura Labs, Inc."
https://appsto.re/us/jWIYX.i which claims to measure your blood pressure just by using the camera of your iPhone. I also use a meditation app called Sattva which can measure my heart rate pretty accurately just by placing my finger against the camera.
Although the two applications are doing slightly different things (pressure vs heart rate), how does this technology work?
I very much doubt that you can measure blood pressure using the camera. Pulse is different: the app measures the amount of red colour in the skin and finds the rate at which it changes, a technique known as photoplethysmography.
Blood pressure is another matter. The way doctors measure it is to apply pressure to the blood vessels with a cuff and then decrease it, noting the pressure at which the blood can first be heard pulsing and the pressure at which it no longer can. Without knowing these pressures you cannot measure blood pressure (by definition).
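To make the pulse part concrete: once you have the mean red-channel intensity of each camera frame at a known frame rate, estimating beats per minute takes very little code. A minimal Swift sketch, assuming you already extract those per-frame means yourself (the zero-crossing approach is just an illustration; a real app would band-pass filter the signal first):

```swift
import Foundation

/// Estimate heart rate (BPM) from per-frame mean red-channel values
/// sampled at `frameRate` frames per second. Illustrative sketch only.
func estimateBPM(redMeans: [Double], frameRate: Double) -> Double? {
    guard redMeans.count > 1, frameRate > 0 else { return nil }

    // Remove the slowly varying baseline by subtracting the overall mean.
    let mean = redMeans.reduce(0, +) / Double(redMeans.count)
    let detrended = redMeans.map { $0 - mean }

    // Count upward zero crossings; each marks roughly one heartbeat.
    var beats = 0
    for i in 1..<detrended.count where detrended[i - 1] < 0 && detrended[i] >= 0 {
        beats += 1
    }

    // beats over (sampleCount / frameRate) seconds, scaled to a minute.
    let seconds = Double(detrended.count) / frameRate
    return Double(beats) / seconds * 60.0
}
```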
I'm doing a robot project. It needs to measure subtle movements in the XY direction while driving in the Z direction.
So I was thinking of using a camera with MATLAB and a blinking LED attached to a wall; that way, using image subtraction, I can identify the LED and, with a weight matrix, locate the center of the light.
Then, at regular intervals, I can log how many pixels the center has moved in the left-right or up-down direction and check the accuracy of the motion.
But when attempting this sensing solution I ran into some challenges I couldn't overcome:
a light source like an LED or laser has soft edges, so the computed center is not accurate
the camera is not calibrated (and I'm not sure how to calibrate it)
Is there another simple solution to this problem?
Note: the measured motion only needs to be proportional to the real motion, not absolute.
You might be able to improve the accuracy of the LED's location by applying some kind of peak interpolation.
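For example, a common form of peak interpolation is a three-point parabolic fit around the brightest pixel, which yields a sub-pixel offset. A minimal sketch (written in Swift here; the formula ports to MATLAB in one line):

```swift
/// Sub-pixel peak location along one axis via a three-point parabolic fit.
/// `left`, `center`, `right` are intensities at pixels p-1, p and p+1,
/// where p is the brightest pixel. Returns an offset in (-0.5, 0.5) to add to p.
func subPixelOffset(left: Double, center: Double, right: Double) -> Double {
    let denominator = left - 2 * center + right
    guard denominator != 0 else { return 0 }   // flat top: no refinement possible
    return 0.5 * (left - right) / denominator
}
```

Apply it separately along x and y around the intensity maximum and add the offsets to the integer peak position.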
For the calibration: MATLAB offers an app for camera calibration (the Camera Calibrator in the Computer Vision Toolbox); maybe that helps you.
I'm running a CMDeviceMotion processing queue on an iPhone 4, which gives me user-induced acceleration along with the rotation rates. I can filter this data myself.
What I'm trying to understand is how to convert these discrete samples of acceleration, device attitude, and rotation rate into a three-dimensional displacement. This is possible with classical mechanics for straight lines, but I'm thinking of more advanced calculations, for example along curves. This can be handled with GPS, but I'm looking for much better resolution, let's say within 10 feet; GPS under a clear sky has an average accuracy of about 30 feet.
Is there some sort of physics engine or physics processor that can take a set of device motion or acceleration/turn-rate events and tell me how far the phone is from its original location?
I know that there are various pedometer and bike GPS trackers for iPhone. Are they based on GPS, or do they actually do the acceleration integration I'm describing?
Unfortunately, the acceleration integration you are describing won't work by itself: sensor noise and bias are integrated twice, so the position error grows quadratically with time and swamps the signal within seconds.
However, you may improve the accuracy by fusing with the GPS signal and/or making domain-specific assumptions. For details, see the above link.
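To get a feel for why, here is a toy Swift simulation (all numbers invented) of what a small constant accelerometer bias does under double integration:

```swift
import Foundation

// Simulate double integration for a stationary device whose accelerometer
// reports a small constant bias. Values are illustrative only.
let bias = 0.01          // m/s^2, well within typical MEMS sensor spec
let dt = 0.01            // 100 Hz sample interval
var velocity = 0.0       // m/s
var position = 0.0       // m

for step in 1...6000 {   // 60 seconds of samples
    velocity += bias * dt
    position += velocity * dt
    if step % 1000 == 0 {
        print("t = \(Double(step) * dt) s, position error = \(position) m")
    }
}
// After 60 s the error is bias * t^2 / 2 = 0.01 * 3600 / 2 = 18 m,
// already worse than GPS, and real noise and attitude errors add to that.
```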
I'm working on an iPhone app for motorcyclists that will detect a crash after it has occurred. Currently we're in the data acquisition process, plotting graphs and looking at the data. What I need to log is the forward user acceleration and the tilt angle of the bike relative to the bike standing upright on the road.
I can get the magnitude of the user acceleration vector, i.e. of the rider's forward acceleration, as the square root of the sum of the squared x, y and z accelerometer values. But for the tilt angle I need a reference that is constant, so I thought: let's use the gravity vector.
Now, I realize that the deviceMotion API has gravity and user acceleration values. Where do these values come from and what do they mean? If I take the square root of the sum of the squared x, y and z components of the gravity, will that always give me my up direction? How can I use that to find the tilt angle of the bike relative to an upright bike on the road? Thanks.
Setting aside "why" do this...
You need a very low-pass filter. Once the phone is put wherever it rides on the bike, you'll see various accelerations from maneuvers, with the acceleration from gravity ever present in the background. That gives you an ongoing vector for "down", and you can then interpret the acceleration data in that context... Forward acceleration would tip that vector the opposite way from braking, so I think you could sort out the forward direction in real time too.
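A simple exponential smoothing filter is the usual way to pull that "down" vector out of raw accelerometer samples. A minimal Swift sketch (the smoothing factor is an invented starting point you would tune):

```swift
import CoreMotion

/// Estimates the gravity ("down") vector by heavily low-pass filtering
/// raw accelerometer samples. Illustrative sketch only.
final class GravityEstimator {
    private let alpha = 0.02   // smoothing factor; smaller = slower, smoother
    private(set) var gravity = CMAcceleration(x: 0, y: 0, z: -1)

    func update(with sample: CMAcceleration) {
        gravity.x = (1 - alpha) * gravity.x + alpha * sample.x
        gravity.y = (1 - alpha) * gravity.y + alpha * sample.y
        gravity.z = (1 - alpha) * gravity.z + alpha * sample.z
    }
}
```

Note that CMDeviceMotion already exposes a gravity property computed by proper sensor fusion, so a hand-rolled filter like this mainly matters if you work from raw CMAccelerometerData.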
Very interesting idea.
Assuming that it's not a "joke question", you will need a reference point to compare with, i.e. the gravity vector taken when the user taps "start". Then you can take the tilt angle as arccos(currentGravity.z / |referenceGravity|), with |referenceGravity| == 1 because Core Motion measures accelerations in g.
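In code, the idea looks roughly like the Swift sketch below (names are illustrative; the reference vector would be captured at the "start" tap). Using the full dot product instead of just the z component also works if the reference gravity is not aligned with the device's z axis:

```swift
import Foundation
import CoreMotion

/// Tilt angle (radians) between the current gravity vector and a reference
/// gravity vector captured while the bike stood upright. Core Motion
/// reports gravity in units of g, so both vectors are roughly unit length.
func tiltAngle(current: CMAcceleration, reference: CMAcceleration) -> Double {
    let dot = current.x * reference.x + current.y * reference.y + current.z * reference.z
    let normC = sqrt(current.x * current.x + current.y * current.y + current.z * current.z)
    let normR = sqrt(reference.x * reference.x + reference.y * reference.y + reference.z * reference.z)
    return acos(max(-1, min(1, dot / (normC * normR))))   // clamp for safety
}
```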
But to be honest there are a couple of problems, for instance:
The device has to be in a fixed position when taking the reference frame; if you put it in a pocket and it moves around even a little bit, your measurement is rubbish.
Hmm, the driver is dead but the device is alive? Chances are good that the iPhone won't survive the crash either.
If the app goes to the background, Core Motion falls asleep and stops delivering values.
It would have to be an in-house app, because forget about getting approval for the App Store.
Or did we misunderstand you and it's just a game?
Since this is not a joke, I would like to address the mounting issue. How to interpret the data depends largely on how the iPhone is positioned, and some issues might not be apparent to those who don't actually ride motorcycles.
This is particularly true when it comes to going around curves and corners. In low-speed turns the motorcycle leans but the rider does not, or leans only slightly; in higher-speed turns both the rider and the motorcycle lean. This could present an issue if not addressed. I won't cover all scenarios, but...
For example, most modern textile motorcycle jackets have a cell phone pocket just inside on the left. If the rider puts their phone in this pocket, you could expect to see only 'accelerating' and 'braking' (~z) acceleration. In this scenario you would almost never see significant side-to-side (~x) acceleration, because the rider leans proportionally into the g-force of the turn. So while going around a curve one would expect to see an increase in the (y) 'down' reading from its general 1 g state. Essentially, the rider's torso is indexed to gravity as far as (x) measurements go.
If the device were mounted to the bike, you would have to adjust for what you would expect to see given that mounting point.
As far as the heuristics of the crash-detection algorithm go, they are very hard to define. Some crashes are like you see on television, the bike flipping and ripping into a million pieces; that crash should be extremely easy to detect: huh, 3 g measured... crash! But what about simple downs (the bike lies on its side, oops, the rider gets up, picks up the bike and rides away)? Those might occur without any particularly remarkable g-forces (with the exception of about 1 g left or right on the x axis).
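As a starting point only, here is a hypothetical Swift sketch of that naive heuristic; every threshold in it is invented and would have to come out of your data acquisition runs:

```swift
import Foundation
import CoreMotion

/// Naive crash heuristic: flag a crash on a large acceleration spike, or
/// when the bike sits on its side (gravity mostly along the x axis) for a
/// few seconds. All thresholds are illustrative, not validated.
final class CrashDetector {
    private let spikeThreshold = 3.0       // g; the "television" crash
    private let laidDownThreshold = 0.9    // fraction of gravity on the x axis
    private var laidDownSince: TimeInterval?

    /// Feed this from CMDeviceMotion updates; returns true when a crash is suspected.
    func process(_ motion: CMDeviceMotion) -> Bool {
        let a = motion.userAcceleration
        let totalG = sqrt(a.x * a.x + a.y * a.y + a.z * a.z)
        if totalG > spikeThreshold { return true }

        // "Simple down": gravity stays mostly on the x axis for 3+ seconds.
        if abs(motion.gravity.x) > laidDownThreshold {
            if let since = laidDownSince {
                if motion.timestamp - since > 3.0 { return true }
            } else {
                laidDownSince = motion.timestamp
            }
        } else {
            laidDownSince = nil
        }
        return false
    }
}
```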
A couple more suggestions:
Sensitivity adjustment, maybe even with some sort of learn mode (where the user puts the device in this mode and rides; the device then records/learns average riding for that user).
An "I've stopped" or similar button; maybe the rider didn't crash, maybe he or she just broke down. It does happen, and since you have some sort of ad-hoc network set up, it should be easy to spread the news.
I have a GPS app in which I would like to detect whether the user is standing still and not moving. Using Core Location works for this, but it is sometimes inaccurate because successive location updates jump around and give the illusion of speed and motion.
So I am wondering if, in addition to that, I can also use Core Motion. Is this a good idea for detecting motion such as someone walking, running, or driving, and for knowing when they are no longer doing that motion? Or is Core Motion only for small movements such as tilting the device or lifting it to your ear?
I wanted to tell others who visit this question what I've learned and what I think about this approach.
I have been doing some research of my own to find out whether this is possible and, more importantly, what the battery consumption and the accuracy of the detected location change would be. For Android, this question was asked quite some time back; the answer provides links to this Google Tech Talk. At 23:20, the speaker talks about how difficult it is to achieve this and about the accuracy you can expect in the results.
Even though I have come to realize that the battery consumption of the sensors on the iPhone is a little lower than on most Android phones, I still think this is a costly affair in terms of accuracy and battery consumption.
You can use the GPS together with the sensor readings to distinguish between walking, running, etc., if you combine the change in tilt-angle frequency with the GPS speed information (you need to do some work to get some of this info, of course, but that's the way to do it).
You are talking about four different measurements from four different sensors (technically more than four, but...):
Latitude & longitude - from Core Location. It uses a mix of GPS and cell tower triangulation.
Accelerometer - the acceleration acting on the device (including gravity), from which its orientation in 3D space can be derived.
Gyroscope - the rate of rotation of the device around its own axes.
Magnetometer - tells you which direction the device is pointing with respect to north, south, east, and west.
Of all these, I think only latitude & longitude are of use to you. Basically, what you do is relax the sensitivity (i.e. the update rate and distance filter of the sensor) a bit. With some tweaking you should be able to tell with good accuracy whether a person is standing still or moving.
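A rough Swift sketch of that idea (the distance filter and speed threshold are invented numbers to tune, and a real app must request location authorization first):

```swift
import CoreLocation

final class StationaryDetector: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private var lastLocation: CLLocation?

    override init() {
        super.init()
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyNearestTenMeters
        manager.distanceFilter = 20   // metres; suppresses small GPS jitter
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        guard let newest = locations.last else { return }
        if let previous = lastLocation {
            let elapsed = newest.timestamp.timeIntervalSince(previous.timestamp)
            let moved = newest.distance(from: previous)
            // Under ~0.5 m/s averaged over 10+ seconds, treat as standing still.
            let stationary = elapsed > 10 && moved / elapsed < 0.5
            print(stationary ? "standing still" : "moving")
        }
        lastLocation = newest
    }
}
```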
How does one detect the amount of time the device has been in motion, using the accelerometer?
If I wanted to detect continuous motion on the device, such as 10 seconds of motion, how would that be done? When I use the accelerometer delegate method, it keeps being called, and it is difficult to handle the time calculation.
As commented, the accelerometer delivers acceleration values, not speed, so you will have to integrate over the incoming values. But beware:
- earth gravity is always there
- there is a lot of noise (the accelerometer chips built into phones are really cheap and crappy, and sell for $10 a bucket)
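That said, for the original question you don't actually need to integrate: to detect something like "10 seconds of motion" it is enough to track how long the raw acceleration magnitude keeps deviating from 1 g. A minimal Swift sketch with a made-up threshold (a real version should also tolerate brief dips below it):

```swift
import Foundation
import CoreMotion

/// Tracks how long the device has been in continuous motion by checking
/// how far the raw acceleration magnitude deviates from 1 g.
final class MotionTimer {
    private let deviationThreshold = 0.05   // g; tune for your noise level
    private var motionStart: TimeInterval?

    /// Feed accelerometer samples; returns seconds of continuous motion so far.
    func process(_ data: CMAccelerometerData) -> TimeInterval {
        let a = data.acceleration
        let magnitude = sqrt(a.x * a.x + a.y * a.y + a.z * a.z)
        if abs(magnitude - 1.0) > deviationThreshold {
            if motionStart == nil { motionStart = data.timestamp }
            return data.timestamp - (motionStart ?? data.timestamp)
        } else {
            motionStart = nil
            return 0
        }
    }
}
```

Once process(data) returns a value of 10 or more, you have your 10 seconds of continuous motion.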