How to generate and play white noise on the fly with OpenAL? - iphone

I'm using OpenAL in my app to play sounds based on *.caf audio files.
There's a tutorial which describes how to generate white noise in OpenAL:
amplitude - rand(2*amplitude)
But they create a buffer with 1000 samples and then just loop that buffer with
alSourcei(source, AL_LOOPING, AL_TRUE);
The problem with this approach: looping white noise just doesn't work like this because of DC offset. There will be a noticeable wobble in the sound. I know because I tried looping dozens of white noise regions generated in different applications, and all of them had the same problem, even after trying to crossfade and making sure the regions were cut at zero crossings.
Since (from my understanding) OpenAL is lower-level than Audio Units or Audio Queues, there must be a way to generate white noise on the fly in a continuous manner so that no looping is required.
Maybe someone can point out some helpful resources on that topic.

The solution with the least change might just be to create a much longer OpenAL noise buffer (several seconds) so that the wobble is at too low a rate to hear easily. Any waveform hidden in a 44 Hz repeat (1000 samples at a 44.1 kHz sample rate) is within the normal human hearing range.
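Not iPhone code, but the effect is easy to hear in a quick MATLAB sketch (the amplitude and lengths are arbitrary): a 1000-sample loop repeats about 44 times per second, so any residual DC in the segment becomes an audible buzz, while a multi-second loop pushes the repeat rate far below anything you can hear.

fs  = 44100;
amp = 0.8;
shortLoop = amp - 2*amp*rand(1, 1000);   % the tutorial's formula, MATLAB-style
longLoop  = amp - 2*amp*rand(1, 5*fs);   % several seconds of noise instead
longLoop  = longLoop - mean(longLoop);   % remove the DC offset as well
sound(repmat(shortLoop, 1, 220), fs);    % ~5 s of a 44 Hz repeat: the "wobble"
pause(6);
sound(repmat(longLoop, 1, 2), fs);       % repeat rate of 0.2 Hz: effectively continuous

On the device the same idea applies when filling the longer OpenAL buffer: generate the samples once with the formula above and subtract their mean before handing the block to alBufferData.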

Related

Record screen in real time using MATLAB?

I am using an optical microscope and camera to record some videos which are post-processed in MATLAB.
Real-time acquisition and pixel statistics would be extremely helpful, because some of what I am looking at absorbs very little light (I am using transmission mode). For example, a blank (background) sample would give me an average pixel value across a 512x512 CCD array of something like 144 (grayscale), while an actual sample might have an average value of 140 or so. This subtle shift in pixel intensity would be useful in helping me focus the microscope.
Unfortunately, my camera setup is not supported by MATLAB, so I cannot use the Image Acquisition Toolbox for real-time capture. So I was wondering: is there a way I could 'fake' real-time image acquisition by selecting, say, a rectangle of my current desktop (the rectangle that is the video output of the microscope's camera) for MATLAB to record in real time?
Thanks
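One way to prototype the rectangle-grab idea from the question is MATLAB's Java bridge: java.awt.Robot can capture any part of the desktop, including the window your camera software draws into. A rough sketch (the rectangle coordinates and the byte-order conversion are assumptions to verify on your setup):

x = 100; y = 100; w = 512; h = 512;      % rectangle where the camera video appears
robot = java.awt.Robot();
cap   = robot.createScreenCapture(java.awt.Rectangle(x, y, w, h));
pix   = typecast(cap.getRGB(0, 0, w, h, [], 0, w), 'uint8');  % packed ARGB pixels as bytes
img   = cat(3, reshape(pix(3:4:end), w, h)', ...              % red channel
               reshape(pix(2:4:end), w, h)', ...              % green channel
               reshape(pix(1:4:end), w, h)');                 % blue channel
avg = mean2(rgb2gray(img));              % mean gray level, e.g. for focusing

Calling this from a timer or a simple while loop approximates real-time acquisition; the achievable frame rate depends on the capture size and the machine.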

Matlab video processing of a beating heart (code included)

I'm trying to write some code that helps me in my biology work.
The concept of the code is to analyze a video file of contracting cells in a tissue:
Example 1
Example 2: youtube.com/watch?v=uG_WOdGw6Rk
And plot out the following:
Count of beats per min.
Strength of beat
Regularity of beating
So I wrote MATLAB code that loops through a video, compares each frame with the one that follows it, checks whether anything changed between the frames, and plots these changes as a curve.
Example of my code's results
Core of the current code I wrote:
vidObj = VideoReader('cells.avi');            %% file name is a placeholder
totalframes = vidObj.NumberOfFrames;
framerate = vidObj.FrameRate;
firstFrame = read(vidObj, 1);
vid = im2bw(firstFrame, graythresh(rgb2gray(firstFrame))); %% binarized first frame as the initial reference
amp = [];                                     %% frame-to-frame differences
time = [];                                    %% time stamp (seconds) of each recorded difference
for i = 2:totalframes
    compared = read(vidObj, i);
    ref = rgb2gray(compared);                 %% convert to gray
    level = graythresh(ref);                  %% calculate threshold
    compared = im2bw(compared, level);        %% convert to binary
    differ = sum(sum(imabsdiff(uint8(vid), uint8(compared)))); %% sum of differences between the 2 frames
    if (differ ~= 0) && (any(amp == differ) == 0) %% 0 = no change happened, so don't record that
        amp(end+1) = differ;                  %% save the difference to the amp array
        time(end+1) = i/framerate;            %% save the time in seconds; separate array so both can be filtered later
        vid = compared;                       %% save the current frame as the reference for the next comparison
    end
end
figure, plot(time, amp);                      %% time on the x-axis, difference on the y-axis
=====================
So that's my code. Is there a way I can improve it so I can get better results?
I get the feeling that imabsdiff is not exactly what I should use, because my video contains a lot of noise that strongly affects my results, and I think all my amp data is actually spurious!
Also, I can really only extract the beating rate out of this, by counting peaks. How can I improve my code so it can give me all the required data?
Thanks, I really appreciate your help. This is a small portion of the code; if you need more info, please let me know.
You say you are trying to write a "simple code", but this is not really a simple problem. If you want to measure the motion accurately, you should use an optical flow algorithm or look at the deformation field from a registration algorithm.
EDIT: As Matt is saying, and as we see from your curve, your method is suitable for extracting the number of beats and the regularity. To accurately find the strength of the beats, however, you need to calculate the movement of the cells (more movement = stronger beat). Unfortunately, this is not straightforward, and that is why I gave you links to two algorithms that can calculate the movement for you.
A few fairly simple things to try that might help:
I would look in detail at what your thresholding is doing, and whether that's really what you want to do. I don't know what graythresh does exactly, but it's possible it's lumping different features that you would want to distinguish into the same pixel values. Have you tried plotting the differences between images without thresholding? Or you could threshold into multiple classes, rather than just black and white.
If noise is the main problem, you could try smoothing the images before taking the difference, so that differences in noise would be evened out but differences in large features, caused by motion, would still be there.
You could try edge-detecting your images before taking the difference.
As a previous answerer mentioned, you could also look into motion-tracking and registration algorithms, which would estimate the actual motion between each image, rather than just telling you whether the images are different or not. I think this is a decent summary on Wikipedia: http://en.wikipedia.org/wiki/Video_tracking. But they can be rather complicated.
I think if all you need is to find the time and period of contractions, though, then you wouldn't necessarily need to do a detailed motion tracking or deformable registration between images. All you need to know is when they change significantly. (The "strength" of a contraction is another matter, to define that rigorously you probably would need to know the actual motion going on.)
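For the smoothing suggestion above, a small MATLAB sketch (the kernel size and sigma are guesses to tune; prevFrame and currFrame stand for two consecutive frames from the question's loop):

h = fspecial('gaussian', [9 9], 2);                  % Gaussian blur kernel
prevS = imfilter(rgb2gray(prevFrame), h, 'replicate');
currS = imfilter(rgb2gray(currFrame), h, 'replicate');
differ = sum(sum(imabsdiff(prevS, currS)));          % pixel noise largely cancels, motion survives

Plotting these smoothed differences without any thresholding is also a quick way to check whether im2bw/graythresh is throwing away the signal you care about.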
What are the structures we see in the video? For example, what is the big dark object in the lower part of the image? This object would be relatively easy to track, but would data from this object be relevant to cell contraction?
Is this image from a light microscope? At what magnification? What is the scale?
From the video it looks like there are several motions and regions of motion. So should you focus on a smaller or larger area to get your measurements? Per-cell contraction or region contraction? From experience I know that changing what you do at the microscope might be much better than complex image processing ;)
I had success with Gunn and Nixon's dual snake for a similar problem:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.6831
I placed the first approximation in the first frame by hand and used the segmentation result as the starting curve for the next frame, and so on. My implementation of this is from 2000 and I only have it on paper, but if you find Gunn and Nixon's paper interesting I can probably find my code and scan it.
@Matt suggested smoothing and edge detection to improve your results. This is good advice. You can combine smoothing, thresholding and edge detection in one function call, the Canny edge detector. Then you can dilate the edges to get greater overlap between frames. Little overlap will probably mean a big movement between frames. You can use this the same way as before to find the beat. You can then make a second pass and add up all the dilated edge images related to one beat. This should give you an idea of the area traced out by the cells as they move through a contraction. Maybe this can be used as a useful measure for the contraction of a large cluster of cells.
I don't have access to Matlab and the Image Processing Toolbox now, so I can't give you tested code. Here are some hints: http://www.mathworks.se/help/toolbox/images/ref/edge.html , http://www.mathworks.se/help/toolbox/images/ref/imdilate.html and http://www.mathworks.se/help/toolbox/images/ref/imadd.html.
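A rough sketch of that edge/dilate/add idea, reusing vidObj, totalframes and framerate from the question's code (the disk radius and the xor-based overlap measure are guesses to tune):

se = strel('disk', 3);
prevE = imdilate(edge(rgb2gray(read(vidObj, 1)), 'canny'), se);
traced = zeros(size(prevE));                    % area swept by the moving edges
amp = []; time = [];
for i = 2:totalframes
    currE = imdilate(edge(rgb2gray(read(vidObj, i)), 'canny'), se);
    amp(end+1)  = sum(sum(xor(prevE, currE)));  % little overlap => large movement
    time(end+1) = i/framerate;
    traced = traced + currE;                    % second-pass idea: restrict this sum to the frames of one beat
    prevE = currE;
end
figure, plot(time, amp);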

Encoding an image into the Fourier domain of a sound

I'm trying to convert an image to a sound such that you can see the image if you view the spectrogram of that sound, kind of like what Aphex Twin did in Windowlicker.
So far I have written an iPhone app that takes a photograph and then converts it to grayscale. I then want to use this grayscale image as a magnitude spectrum and feed it back through an inverse FFT.
The problem I have, though, is how to go from the magnitude to the real and imaginary parts.
mag = sqrtf( (imag * imag) + (real * real));
Obviously I can't solve for 2 unknowns. Furthermore I can't find out if those real and imaginary parts are negative or not.
So I'm at a bit of a loss. It must be possible. Can anyone point me in the direction of some useful information?
A spectrogram contains no phase information, so you can just set the imaginary parts to 0 and set the real parts equal to the magnitude. Remember that you need to maintain complex conjugate symmetry if you want to end up with a purely real time domain signal after you have applied the inverse FFT.
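The recipe above, sketched in MATLAB rather than iPhone code (grayImage stands for the grayscale photo; the frame length and the choice to put low frequencies at the bottom of the image are assumptions): each image column becomes one frame of magnitudes, the imaginary part stays at zero, and the 'symmetric' flag makes ifft treat the spectrum as conjugate-symmetric so each frame comes out purely real.

img = double(grayImage);                   % H x W grayscale image
[H, W] = size(img);
N = 2*H;                                   % frame length: H magnitude bins below Nyquist
signal = [];
for col = 1:W
    spectrum = zeros(N, 1);                % real part = magnitude, imaginary part = 0
    spectrum(2:H+1) = img(end:-1:1, col);  % bottom row of the image = lowest frequency
    frame  = ifft(spectrum, 'symmetric');  % conjugate symmetry enforced, output is real
    signal = [signal; frame];              % concatenate the frames column by column
end
signal = signal / max(abs(signal));        % normalize
% audiowrite('image.wav', signal, 44100);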
The math wonks are right about regenerating from greyscale, but why limit yourself thus? Have you considered keeping a portion of the phase information in the color channels?
Specifically, why not process the LEFT channel into BLUE, the RIGHT channel into RED, and for the GREEN color element, run the transform again on (LEFT-RIGHT), so that you have three spectra.
In one version of "Surround Sound", L-R encodes the rear channel - there is good stuff there.
When regenerating your sound, assign the "real" values to the corresponding channels.
Try the following formulas:
LEFT.real=+BLUE
RIGHT.real=+RED
LEFT.imag=+GREEN
RIGHT.imag=-GREEN
Experiment with variations on this while listening through some sort of surround sound setup, to see which provides the most pleasing results. Make sure not to drive the thing into clipping: since phase changes occur, regenerating a saturated complex signal is likely to create clipping.

iPhone - recognizing a wave form/frequency

I capture a sound using my app. Suppose this sound is a sinusoidal 1 kHz tone and there is a background sound present. How do I identify that this 1 kHz tone is present in the sound?
I mean, I can imagine how elements can be found in images: for example, if you are looking for a yellow square in an image, all you have to do is specify the color you want, give it a certain tolerance, and look for a group of pixels that have that color and form a square shape. But what about sounds? How do you identify a waveform and a frequency when all you get is a stream of amplitude values, each representing 1/44,000th of a second of the waveform?
I don't want the code, as this is too complex for this post, but if you guys can point me in the right direction of how this is done, or to free source code that illustrates the techniques or the math behind it, I'd appreciate it. Thanks
You just need to do an FFT (Fast Fourier Transform) of the wave to get the frequencies it is composed of. This is a classic task in signal processing (Fourier transforms switch between the time and frequency domains), so you should find a lot of resources on the subject.
Maybe look at the CoreAudio framework and sample codes too.
(See also CoreAudio overview and the Audio and Video topics in Apple's doc)
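Sketched in MATLAB rather than iPhone code, the FFT approach boils down to this (x is one captured buffer, fs its sample rate, and the 10x threshold is a guess to tune): take the FFT, look at the bin nearest 1 kHz, and compare its magnitude to the level of the rest of the spectrum.

fs = 44100;
x  = double(x(:));                          % one buffer of captured samples
N  = length(x);
w  = 0.5 - 0.5*cos(2*pi*(0:N-1)'/(N-1));    % Hann window to reduce spectral leakage
X  = abs(fft(x .* w));
f  = (0:N-1)' * fs / N;                     % frequency of each FFT bin
[~, k] = min(abs(f - 1000));                % bin closest to 1 kHz
tonePresent = X(k) > 10 * median(X(2:floor(N/2)));  % well above the background level

On the iPhone, the same computation is usually done with the Accelerate framework's FFT routines, but the logic is identical.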

Lightweight algorithm for generating clouds on the iOS platform

I'd like to fill the background of my app with animated clouds. I did some research and stumbled upon the Perlin noise algorithm, which seems to fit. However, even in a first test it was extremely expensive to generate a 512x512 (2D) cloud map. I tried simplex noise, but that didn't fix it.
According to http://freespace.virgin.net/hugo.elias/models/m_clouds.htm, clouds are generated by adding several Perlin/simplex noise maps together. That is impossible to do on an iPhone in my app: I need fluid graphics (my optimistic expectation is 60 FPS on an A4).
So my question: Is there a lighter algorithm to generate animated clouds that does not make my frame rate drop too much?
Thanks in advance!
Paul
Unless all you're doing is generating clouds, you'll definitely want them precomputed. Perlin noise can make for nice 2D animations by traversing a set of 3D data, but you could just scroll a 2D image of some noise, or a fractal like the one generated by the diamond-square algorithm. Either way, you should probably precompute it.
If you want some more variation, I would experiment with putting a noise filter over the precomputed clouds.
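One way to do the precomputation offline, sketched in MATLAB (octave count, sizes and the file name are arbitrary): sum a few upsampled random grids (value noise, a cheap stand-in for Perlin noise) into a single cloud texture, save it as an image, and let the app merely scroll or cross-fade it.

sz = 512;
clouds = zeros(sz);
for octave = 0:4
    n = 2^(octave + 2);                                % 4, 8, 16, 32, 64 control points
    layer = imresize(rand(n), [sz sz], 'bilinear');    % smooth random layer
    clouds = clouds + layer / 2^octave;                % finer octaves contribute less
end
clouds = (clouds - min(clouds(:))) / (max(clouds(:)) - min(clouds(:)));
imwrite(clouds, 'clouds.png');                         % ship this texture with the app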
Pre-generate the clouds and create 2D sprites using Core Animation or otherwise. You can then animate these around cheaply. You may not get 60 fps, but you should get close, depending on how complex the movement is and what other animations are going on at the time. Either way, it's going to be faster than generating the clouds yourself.