How to stretch a waveform using AudioKit - Swift

I'm tracking the amplitude while outputting it as a waveform in real time. I'm using AudioKitUI's RollingViewData class to do this, with the results as seen below.
My issue is that the waveform is very small and insensitive to input; maybe the node needs to go through a booster before being plotted? How would I achieve this, and are there any methods that do so in AudioKit v5? Thanks


How to generate an animation as MPG or speed up a GIF in MATLAB

I am plotting randomly walking points on a figure and trying to capture the motion at each step with getframe. After I collect all the frames, I output the result as an AVI with movie2avi, but the output file was too big to fit into my presentation. I am looking for a way to export the movie to MP4; does anyone have any idea? I also tried the third-party movie2gif, which reduces the size to a huge extent, but when I play the GIF it looks very unsmooth.
In later versions of MATLAB (e.g. 2012) this is done by creating and writing to a video object. For example, the following code generates a movie of a randomly moving circle. You can adjust the speed of the movie with the FrameRate property and the file size with the Quality property. For more details see the MATLAB documentation.
vobj = VideoWriter('MyMovieFile', 'Motion JPEG AVI');
vobj.FrameRate = 4;
vobj.Quality = 75;
open(vobj);
for i = 1:100
    plot(rand, rand, 'o')
    F = getframe(gcf);
    writeVideo(vobj, F);
    cla(gca)
end
close(vobj)
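If MP4 output specifically is what you're after, recent MATLAB versions also offer an 'MPEG-4' profile for VideoWriter (availability depends on the platform), so the same loop can write a compressed .mp4 directly. A minimal sketch along the same lines:
vobj = VideoWriter('MyMovieFile', 'MPEG-4');  % writes MyMovieFile.mp4 where the MPEG-4 profile is supported
vobj.FrameRate = 10;                          % higher frame rate = faster playback
vobj.Quality = 75;                            % lower quality = smaller file
open(vobj);
for i = 1:100
    plot(rand, rand, 'o')
    writeVideo(vobj, getframe(gcf));
    cla(gca)
end
close(vobj)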

Offline (visualisation) rendering in scientific computing

I have already asked this question on the Scientific Computing site and wondered if this forum could offer alternatives.
I need to simulate the movement of a large number of agents undergoing soft body deformation. The processes that govern the agents' movement are complex, so the entire process requires parallelisation.
The simulation needs to be visualised in 3D. As I will be running this simulation across many different nodes (MPI or even MPI+GPGPU), I do not want the visualisation to run in real time; rather, the simulation should output a video file after it is finished.
(I'm not looking for awesome AAA video-game-quality graphics; in addition, the movement code will take up enough CPU time, so I don't want to slow the application down further by adding heavyweight rendering code.)
I think there are three ways of producing such a video:
Write raw pixel information to BMPs and stitch them together - I have done this in 2D, but I don't know how it would work in 3D.
Use an offline analogue of OpenGL/Direct3D, rendering to a buffer instead of the screen.
Write some sort of telemetry data to a file, indicating each agent's position, deformation etc. for each time interval, and then after the simulation has finished use it as input to an OpenGL/Direct3D program.
This problem MUST have been solved before - there's plenty of visualisation in HPC.
In summary: how does one easily render to a video in an offline manner (very basic graphics, not Toy Story - I just need 3D blobs) without impacting performance in a big way?
My idea would be to store the different states/positions of the vertices as single frames of a vertex animation in a suitable file format. A suitable format would be COLLADA, an intermediate format for 3D scenes based on XML, so it can easily be parsed and written with general-purpose XML libraries. There are also special-purpose libraries for COLLADA, such as COLLADA DOM and pycollada. The COLLADA file containing the vertex animation could then be rendered directly to a video file with the rendering software of your choice (3D Studio Max, Blender, Maya ...).

Matlab video processing of heart beating (code supplied)

I'm trying to write code that helps me in my biology work.
The concept of the code is to analyze a video file of contracting cells in a tissue.
Example 1
Example 2: youtube.com/watch?v=uG_WOdGw6Rk
And plot out the following:
Count of beats per min.
Strength of beat
Regularity of beating
And so I wrote a MATLAB code that loops through a video, compares each frame against the one that follows it, checks whether there were any changes between frames, and plots these changes on a curve.
Example of my code's results
Core of the current code I wrote:
%% initialise the reference frame and the result arrays before the loop
first = read(vidObj, 1);
vid   = im2bw(rgb2gray(first), graythresh(rgb2gray(first))); %% binary reference frame
amp   = [];   %% frame-to-frame differences
time  = [];   %% corresponding times in seconds
for i = 2:totalframes
    compared = read(vidObj, i);
    ref      = rgb2gray(compared);                    %% convert to gray
    level    = graythresh(ref);                       %% calculate threshold
    compared = im2bw(ref, level);                     %% convert to binary
    differ   = sum(sum(imabsdiff(vid, compared)));    %% get sum of difference between the 2 frames
    if (differ ~= 0) && ~any(amp == differ)           %% 0 = no change happened, so don't record that
        amp(end+1)  = differ;                         %% save difference to array amp
        time(end+1) = i/framerate;                    %% save time in seconds; separate array so both can be filtered later
        vid = compared;                               %% save current frame as reference to compare the next frame against
    end
end
figure, plot(time, amp);
=====================
So that's my code, but is there a way I can improve it to get better results?
I get the feeling that imabsdiff is not exactly what I should use, because my video contains a lot of noise, and that affects my results a lot; I think all my amp data is actually fake!
Also, at the moment I can only extract the beating rate out of this by counting peaks, so how can I improve my code to be able to get all of the required data out of it?
Thanks, I really appreciate your help. This is a small portion of the code; if you need more info, please let me know.
You say you are trying to write a "simple code", but this is not really a simple problem. If you want to measure the motion accurately, you should use an optical flow algorithm or look at the deformation field from a registration algorithm.
EDIT: As Matt is saying, and as we see from your curve, your method is suitable for extracting the number of beats and the regularity. To accurately find the strength of the beats, however, you need to calculate the movement of the cells (more movement = stronger beat). Unfortunately, this is not straightforward, and that is why I gave you links to two algorithms that can calculate the movement for you.
A few fairly simple things to try that might help:
I would look in detail at what your thresholding is doing, and whether that's really what you want to do. I don't know what graythresh does exactly, but it's possible it's lumping different features that you would want to distinguish into the same pixel values. Have you tried plotting the differences between images without thresholding? Or you could threshold into multiple classes, rather than just black and white.
If noise is the main problem, you could try smoothing the images before taking the difference, so that differences due to noise would be evened out but differences in large features, caused by motion, would still be there (a rough sketch of this follows after these suggestions).
You could try edge-detecting your images before taking the difference.
As a previous answerer mentioned, you could also look into motion-tracking and registration algorithms, which would estimate the actual motion between each image, rather than just telling you whether the images are different or not. I think this is a decent summary on Wikipedia: http://en.wikipedia.org/wiki/Video_tracking. But they can be rather complicated.
I think if all you need is to find the time and period of contractions, though, then you wouldn't necessarily need to do a detailed motion tracking or deformable registration between images. All you need to know is when they change significantly. (The "strength" of a contraction is another matter, to define that rigorously you probably would need to know the actual motion going on.)
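Putting the first two of these suggestions together, here is a rough, untested sketch: smooth each frame, then take the raw (unthresholded) difference. It assumes vidObj, totalframes and framerate exist as in the question's code.
h    = fspecial('gaussian', [5 5], 2);          % small Gaussian smoothing kernel
prev = imfilter(rgb2gray(read(vidObj, 1)), h);  % smoothed first frame as reference
amp  = zeros(1, totalframes - 1);
for i = 2:totalframes
    curr     = imfilter(rgb2gray(read(vidObj, i)), h);
    amp(i-1) = sum(sum(imabsdiff(prev, curr))); % difference without any thresholding
    prev     = curr;
end
figure, plot((2:totalframes)/framerate, amp), xlabel('time (s)'), ylabel('frame-to-frame change');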
What are the structures we see in the video? For example, what is the big dark object in the lower part of the image? This object would be relatively easy to track, but would data from this object be relevant to cell contraction?
Is this image from a light microscope? At what magnification? What is the scale?
From the video it looks like there are several motions and regions of motion. So should you focus on a smaller or a larger area to get your measurements? Per-cell contraction or region contraction? From experience I know that changing what you do at the microscope might be much better than complex image processing ;)
I had success with Gunn and Nixon's dual snake for a similar problem:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.6831
I placed the first approximation in the first frame by hand and used the segmentation result as the starting curve for the next frame, and so on. My implementation of this is from 2000 and I only have it on paper, but if you find Gunn and Nixon's paper interesting I can probably find my code and scan it.
Matt suggested smoothing and edge detection to improve your results. This is good advice. You can combine smoothing, thresholding and edge detection in one function call, the Canny edge detector. Then you can dilate the edges to get greater overlap between frames. Little overlap will probably mean a big movement between frames. You can use this the same way as before to find the beat. You can then make a second pass and add all the dilated edge images related to one beat. This should give you an idea of the area traced out by the cells as they move through a contraction. Maybe this can be used as a useful measure for the contraction of a large cluster of cells.
I don't have access to Matlab and the Image Processing Toolbox now, so I can't give you tested code. Here are some hints: http://www.mathworks.se/help/toolbox/images/ref/edge.html , http://www.mathworks.se/help/toolbox/images/ref/imdilate.html and http://www.mathworks.se/help/toolbox/images/ref/imadd.html.
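To make the idea above more concrete, here is a rough, untested sketch of the Canny / dilate / accumulate approach. It assumes vidObj, totalframes and framerate from the question, and for simplicity it accumulates the dilated edges over the whole clip rather than per beat.
se   = strel('disk', 3);                                 % structuring element for dilation
prev = imdilate(edge(rgb2gray(read(vidObj, 1)), 'canny'), se);
acc  = zeros(size(prev));                                % accumulated dilated edge images
amp  = zeros(1, totalframes - 1);                        % per-frame edge change
for i = 2:totalframes
    curr     = imdilate(edge(rgb2gray(read(vidObj, i)), 'canny'), se);
    amp(i-1) = sum(sum(xor(prev, curr)));                % little overlap -> big movement between frames
    acc      = imadd(acc, double(curr));                 % add the dilated edge images (the second-pass idea)
    prev     = curr;
end
figure, plot((2:totalframes)/framerate, amp), xlabel('time (s)'), ylabel('edge change');
figure, imagesc(acc), axis image, title('area traced out by the moving cells');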

How to generate and play white noise on the fly with OpenAL?

I'm using OpenAL in my app to play sounds based on *.caf audio files.
There's a tutorial which describes how to generate white noise in OpenAL:
amplitude - rand(2*amplitude)
But they're creating a buffer with 1000 samples and then just looping that buffer with
alSourcei(source, AL_LOOPING, AL_TRUE);
The problem with this approach: Looping white noise just doesn't work like this because of DC offset. There will be a noticeable wobble in the sound. I know because I tried looping dozens of white noise regions generated in different applications and all of them had the same problem. Even after trying to crossfade and making sure the regions are cut to zero crossings.
Since (from my understanding) OpenAL is more low-level than Audio Units or Audio Queues, there must be a way to generate white noise on the fly in a continuous manner such that no looping is required.
Maybe someone can point out some helpful resources on that topic.
The solution with the least change might just be to create a much longer OpenAL noise buffer (several seconds), so that the wobble repeats at too low a rate to hear easily. Any waveform hidden in a 44 Hz repeat (1000 samples at a 44.1 kHz sample rate) is within the normal human hearing range.

iPhone - recognizing a wave form/frequency

I capture a sound using my app. Suppose this sound is a sinusoidal 1 kHz sound and there is a background sound present. How do I identify that this 1 kHz sound is present in the recording?
I mean, I can imagine how elements can be found in images: for example, if you are looking for a yellow square in an image, all you have to do is specify the color you want, give it a certain tolerance, and look for a group of pixels that have that color and form a square shape. But what about sounds? How do you identify a waveform and its frequency when all you get is an amplitude value that represents 1/44,000 of the waveform in one second?
I don't want the code, as this is too complex for this post, but if you can point me in the right direction of how this is done, or to free source code that exemplifies the techniques or the math behind it, I'd appreciate it. Thanks
You just need to do an FFT (Fast Fourier Transform) of the wave to get the frequencies it is composed of. This is a classic task in signal processing (Fourier transforms switch between the time and frequency domains), so you should find a lot of resources on the subject.
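As a toy illustration of the math (sketched in MATLAB/Octave rather than iPhone code): generate a noisy 1 kHz tone, take its FFT, and look for the peak near 1 kHz in the magnitude spectrum.
fs = 44100;                                  % sample rate in Hz
t  = 0:1/fs:1;                               % one second of samples
x  = sin(2*pi*1000*t) + 0.5*randn(size(t));  % 1 kHz tone plus background noise
X  = abs(fft(x));                            % magnitude spectrum
f  = (0:length(X)-1) * fs / length(X);       % frequency axis in Hz
[~, k] = max(X(1:floor(end/2)));             % strongest component (first half = positive frequencies)
fprintf('dominant frequency: %.1f Hz\n', f(k));
On the iPhone itself, the equivalent step would be running an FFT (for example via the Accelerate framework) over the captured sample buffer and checking the magnitude in the bins around 1 kHz.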
Maybe look at the CoreAudio framework and sample codes too.
(See also CoreAudio overview and the Audio and Video topics in Apple's doc)