Detect Heart Rate using iPhone Camera [duplicate]

Possible Duplicate:
Detecting heart rate using the camera
I am working on detecting pulse rate in iOS. I have done some searching and I am now able to read heart beats using an external Bluetooth device that is capable of measuring them. But now I am very curious about detecting the pulse using the iPhone camera. I am trying to understand how it can be done and what the actual theory behind it is. Does anyone have an idea how this works?
From my searching I found that I need to use the camera in video mode and compare successive frames of that video for colour changes. When the heart pumps blood through the body, the skin colour changes slightly with every beat. So how can I capture that colour change using the camera, or is there another way to do this?

Someone at MIT Media Labs beat you to it :P
http://www.cardiio.com/
Click on "How it works".
I believe the gist of it was that the app measures the amount of light reflected off your face due to increase/decrease in blood. Based on this, they can determine your heart rate.
I don't know about the underlying algorithm. If I knew, I wouldn't be sitting here, I'd be writing MIT apps :D
Apparently there is a threshold for how close the measurement has to be to a "standard" (clinical) heart rate reading:
Studies have shown that our heart rate measurements are within 3 bpm of a clinical pulse oximeter when performed at rest in a well-lit environment (Poh et al., Opt. Express 2010; Poh et al., IEEE Trans Biomed Eng 2011).
You'll probably need some reference equipment, like a real heart rate monitor, so you can compare the RGB values (as 0-255 triplets) between frames at different heart rates, and you also have to make sure you're sitting in about the same environment with controlled lighting.
If you try to do it casually at home, you'll get nowhere. If the sky dims, for example, you're going to get different RGB values.
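A minimal sketch of the frame-comparison idea described above, assuming you already collect one average red-channel value per video frame (for example from an AVCaptureVideoDataOutput callback, with a finger covering the lens and the torch on); the function and its parameters are illustrative, not a standard API:

import Foundation

/// Estimate beats per minute from per-frame average red values.
/// `samples` holds one brightness value per video frame; `fps` is the capture frame rate.
func estimateBPM(samples: [Double], fps: Double) -> Double? {
    guard samples.count > Int(fps * 5) else { return nil }        // need a few seconds of data

    // Remove the slow-moving baseline with a one-second moving average,
    // leaving only the small periodic pulse component.
    let window = Int(fps)
    var detrended = [Double](repeating: 0, count: samples.count)
    for i in 0..<samples.count {
        let lo = max(0, i - window / 2)
        let hi = min(samples.count - 1, i + window / 2)
        let mean = samples[lo...hi].reduce(0, +) / Double(hi - lo + 1)
        detrended[i] = samples[i] - mean
    }

    // Each heartbeat produces one upward zero crossing in the detrended signal.
    var crossings = 0
    for i in 1..<detrended.count where detrended[i - 1] < 0 && detrended[i] >= 0 {
        crossings += 1
    }

    let seconds = Double(samples.count) / fps
    return Double(crossings) / seconds * 60.0                      // beats per minute
}

In practice you would also band-limit the detrended signal to roughly 0.7-3.5 Hz (about 40-210 bpm) and discard stretches where the finger moves, otherwise lighting changes swamp the pulse signal.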

Related

Is Deconvolution possible for video in iOS?

I want to film a batter swinging at a baseball, but the bat is blurry. The video is 30 fps.
Through research I have found that deconvolution seems to be the way to minimize motion blur, but I have no idea whether or how I can implement it as post-processing in my iOS app.
I was hoping someone could point me in the right direction, like how to apply a deconvolution algorithm on iOS or what I might need to do...or whether it is even possible. I imagine it takes some processing power.
Any suggestions at all are welcome...
Thanks, this is driving me crazy...
After a lot of research and talks with developers about deconvolution on iOS (thanks to Brad Larson for taking the time to give me detailed information), I am confident that it is not possible and/or not worth the time. If the hardware can handle the computations (no guarantee), it would be EXTREMELY slow and consume much of the device's battery. I have also been told it could take months to implement the algorithms...if it is possible at all.
Here is the response I received from Apple...
Deconvolution algorithms are generally difficult to implement and can be very computationally intensive. I suggest you start with a simple sharpening technique. Depending on the amount of motion blur in your video, it might just suffice.
The sharpen filters, including CISharpenLuminance and CIUnsharpMask, are now available in iOS 6, so it is moderately easy to test them out.
Core Image Filter Reference
https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html
There is Core Image sample code from this year's WWDC session 511, "Core Image Techniques"; it's called "Attempt3". This sample demonstrates best practices for applying CIFilters to live video from the iPhone/iPad camera. You may download the session video from the following page: https://developer.apple.com/videos/wwdc/2012/.
Just wanted to pass this information along.
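If you want to try Apple's suggestion quickly, here is a minimal sketch applying CIUnsharpMask to a single captured frame with Core Image; the radius and intensity values are illustrative, not recommendations:

import CoreImage
import CoreVideo

// Reuse one CIContext; creating a context per frame is expensive.
let ciContext = CIContext()

/// Sharpen one captured video frame and render the result into `outputBuffer`.
func sharpen(frame pixelBuffer: CVPixelBuffer, into outputBuffer: CVPixelBuffer) {
    let input = CIImage(cvPixelBuffer: pixelBuffer)

    guard let filter = CIFilter(name: "CIUnsharpMask") else { return }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(2.5, forKey: kCIInputRadiusKey)      // blur radius in pixels (illustrative)
    filter.setValue(0.8, forKey: kCIInputIntensityKey)   // sharpening strength (illustrative)

    guard let output = filter.outputImage else { return }
    ciContext.render(output, to: outputBuffer)
}

Whether the result reads as "deblurred" depends entirely on how much motion blur is in the 30 fps footage; unsharp masking only boosts edge contrast, it cannot recover detail the way true deconvolution would.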

Use iPhone to recognize sound frequency in range 20-24 Hz

My boss wants me to develop an app that uses the iPhone to recognize sound frequencies in the 20-24 Hz range, which humans cannot hear. (The iPhone's frequency response is 20 Hz to 20 kHz.)
Is this possible? If yes, can anyone give me some advice? Where to start?
Before you start working on this you need to make sure that the iPhone hardware is physically capable of detecting such low frequencies. Most microphones have very poor sensitivity at low frequencies, and consumer analogue input stages typically have a high pass filter which attenuates frequencies below ~ 30 Hz. You need to try capturing some test sounds containing the signals of interest with an existing audio capture app on an iPhone and see whether the low frequency components get recorded. If not then your app is a non-starter.
What you're looking for is a fast Fourier transform (FFT). This is the main algorithm used for converting a time-domain signal into a frequency-domain one.
It seems the Accelerate framework has some FFT support, so I'd start by looking at that; there are several posts about it already.
Apple has some sample OpenCL code for doing this on a Mac, but AFAIK OpenCL isn't available on iOS yet.
You'd also want to check the frequency response of the microphone (I think there are some apps out there that do oscilloscope displays from the mic, which would help here).
Your basic method would be to take a chunk of sound from the mic, filter it, and then maybe shift it down in frequency, depending on what you need to do with it.
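As a sketch of that "grab a chunk, then analyse it" step: for a single narrow band like 20-24 Hz you don't even need a full FFT; a Goertzel filter (a one-bin DFT) measures the energy at one target frequency and needs no framework calls. The sample rate and block length below are illustrative:

import Foundation

/// Relative power at `targetHz` in one block of mono samples (Goertzel algorithm).
func goertzelPower(samples: [Float], sampleRate: Float, targetHz: Float) -> Float {
    let n = samples.count
    let k = (Float(n) * targetHz / sampleRate).rounded()   // nearest DFT bin
    let omega = 2 * Float.pi * k / Float(n)
    let coeff = 2 * cos(omega)

    var s1: Float = 0, s2: Float = 0
    for x in samples {
        let s0 = x + coeff * s1 - s2
        s2 = s1
        s1 = s0
    }
    return s1 * s1 + s2 * s2 - coeff * s1 * s2              // squared magnitude of that bin
}

// Example: look for a 22 Hz component in a 4-second block captured at 44.1 kHz.
// Long blocks are needed: frequency resolution is sampleRate / blockLength,
// and separating 20 Hz from 24 Hz needs bins well under 4 Hz wide.
// let power = goertzelPower(samples: micBlock, sampleRate: 44_100, targetHz: 22)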

Talking Tom Cat Clone App [duplicate]

Possible Duplicate:
iPhone voice changer
I want to make an application similar to Talking Tom Cat, but I am not able to identify the approach for the sound.
How can I convert my voice into a cat-like sound?
Look up independent time pitch stretching of audio. It's a digital signal processing technique. One method that can be used is the phase vocoder technique of sound analysis/resynthesis in conjunction with resampling. There seem to be a couple companies selling libraries to do suitable time pitch modification.
The technique for converting a normal voice into a squeaky voice is called time-scale modification of speech. One way is to take the speech and raise the pitch by a certain amount. Another approach is to stretch/compress the speech so that the frequencies in the voice get scaled by an appropriate amount. These are techniques in digital signal processing.
Here is a great link to download the sample code that provides all you need; also refer here for more background on your question.
In the init method of HelloWorldLayer.mm in the sample code, you can see three float values:
time = 0.7;
pitch = 0.8;
formant = pow(2., 0./12.);
Just adjust the pitch value to 1.9 and it gives a really nice cat-like sound!
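If you would rather not depend on that sample project, a hedged alternative on later iOS versions (iOS 8 and up) is AVFoundation's built-in time/pitch effect; this sketch plays a recorded file back with the pitch raised, which gives a similar squeaky result. The 1000-cent value and the "recording.caf" filename are illustrative:

import AVFoundation

// Play a recorded voice file with the pitch shifted up, Talking-Tom style.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let pitchEffect = AVAudioUnitTimePitch()
pitchEffect.pitch = 1000          // in cents; +1200 would be a full octave up
pitchEffect.rate = 1.0            // keep the speaking speed unchanged

engine.attach(player)
engine.attach(pitchEffect)
engine.connect(player, to: pitchEffect, format: nil)
engine.connect(pitchEffect, to: engine.mainMixerNode, format: nil)

// "recording.caf" stands in for whatever file you captured from the microphone.
if let url = Bundle.main.url(forResource: "recording", withExtension: "caf"),
   let file = try? AVAudioFile(forReading: url) {
    try? engine.start()
    player.scheduleFile(file, at: nil, completionHandler: nil)
    player.play()
}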

iPhone: CPU power to do DSP/Fourier transform/frequency domain?

I want to analyze mic audio on an ongoing basis (not just a snippet or prerecorded sample), display a frequency graph, and filter out certain aspects of the audio. Is the iPhone powerful enough for that? I suspect the answer is yes, given the Google and iPhone voice recognition, Shazam and other music recognition apps, and guitar tuner apps out there. However, I don't know what limitations I'll have to deal with.
Anyone play around with this area?
Apple's sample code aurioTouch has a FFT implementation.
The apps that I've seen that do some sort of music/voice recognition need an internet connection, so it's highly likely that they just do some sort of feature calculation on the audio and send these features via HTTP to do the recognition on the server.
In any case, frequency graphs and filtering have been done before on lesser CPUs a dozen years ago. The iPhone should be no problem.
"Fast enough" may be a function of your (or your customer's) expectations on how much frequency resolution you are looking for and your base sample rate.
An N-point FFT is on the order of N*log2(N) computations, so if you don't have enough MIPS, reducing N is a potential area of concession for you.
In many applications the sample rate is non-negotiable, but where it is flexible, lowering it would be another possibility.
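For example, assuming a 44.1 kHz sample rate: a 1024-point FFT gives bins of 44100/1024 ≈ 43 Hz and costs on the order of 1024 × log2(1024) = 10,240 operations per frame, while a 4096-point FFT narrows the bins to about 10.8 Hz at roughly 4.8 times the cost (4096 × 12 = 49,152 operations). Halving the sample rate halves the bin width again for the same N.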
I made an app that calculates the FFT live
http://www.itunes.com/apps/oscope
You can find my code for the FFT on GitHub (although it's a little rough)
http://github.com/alexbw/iPhoneFFT
Apple's new iPhone OS 4.0 SDK allows for built-in computation of the FFT with the "Accelerate" library, so I'd definitely start working with the new OS if it's a central part of your app's functionality.
You can't just port FFT code written in C straight into your app: the Thumb compiler option complicates floating-point arithmetic, so you need to compile that code in ARM mode.

An iPhone library for shape recognition via the camera

I hope this falls within the "programming question" category.
I'm all light-headed from Googling (and reading every post on here) about computer vision, but I'm getting more confused than enlightened.
I have 6 abstract shapes printed on a piece of paper, and I would like to have the camera on the iPhone identify these shapes (from different angles, lighting, etc.).
I used OpenCV a while back (in Java), and I have looked at other libraries out there. The caveat is that they either seem to rely on a jailbroken iPhone or are so experimental and hard to use that I would probably spend days learning a library only to figure out it didn't work.
I have thought of taking 1000+ images of my shapes and training a Haar classifier. But again, if there is anything out there that is a bit easier to work with, I would really appreciate the advice and suggestions of people with a bit of experience.
Thank you for any suggestions or pieces of advice you might have :)
Have a look at OpenCV's SURF feature extraction (they also have a demo which uses it to detect objects).
SURF features are salient image features that are invariant to rotation and scale. Many algorithms detect objects by extracting such features from an image and then using a simple "bag of words" classification, comparing the set of extracted image features to the features of each of your shapes. Even without considering their spatial alignment, you can get good detection rates when you only have 6 shapes.
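To illustrate that matching idea (this is not OpenCV's API, just a self-contained sketch with made-up descriptor arrays): count how many features of a candidate shape find a clearly best nearest neighbour among the features extracted from the camera frame, and pick the shape with the most such matches.

import Foundation

/// Squared Euclidean distance between two feature descriptors.
func distanceSquared(_ a: [Float], _ b: [Float]) -> Float {
    zip(a, b).reduce(0) { $0 + ($1.0 - $1.1) * ($1.0 - $1.1) }
}

/// Count descriptors of `shape` whose best match in `frame` is clearly better than
/// the second best (Lowe's ratio test), i.e. likely true correspondences.
func matchCount(shape: [[Float]], frame: [[Float]], ratio: Float = 0.7) -> Int {
    var matches = 0
    for descriptor in shape {
        let dists = frame.map { distanceSquared(descriptor, $0) }.sorted()
        guard dists.count >= 2 else { continue }
        if dists[0] < ratio * ratio * dists[1] { matches += 1 }   // squared-distance ratio test
    }
    return matches
}

// Usage sketch: `shapeDescriptors` holds one descriptor set per printed shape,
// `frameDescriptors` is the set extracted from the current camera frame.
// let best = shapeDescriptors.indices.max {
//     matchCount(shape: shapeDescriptors[$0], frame: frameDescriptors) <
//     matchCount(shape: shapeDescriptors[$1], frame: frameDescriptors)
// }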
While not a library, Chris Greening explains how iPhone Sudoku Grab does its image recognition of puzzles in his post here. He does seem to recommend OpenCV, and not just for jailbroken devices.
Also Glen Low talks a bit about how Instaviz does its shape recognition in an interview for the Mobile Orchard podcast.
I do shape recognition in my iPhone app Instaviz and the routines are actually packaged into a library I call "Recog". Only problem is that it is meant for finger or mouse gesture recognition rather than image recognition. You pass the routines a set of points representing the gesture and it tells you whether it's a square, circle etc.
I haven't yet decided on a licensing model but probably use a minimal per-seat royalty.