Why is soundfiler outputting the sample rate, not number of samples, in Pure Data?

I tried copying the help file example for soundfiler, but my soundfiler keeps outputting the sample rate of the file rather than the total number of samples read, which is what the example shows and what the documentation says it should output.
(I know the output is not the total number of samples because I've used different files and it keeps outputting 44104.)
Can anyone see what I'm doing wrong?

By default, soundfiler has a maximum length it will load.
In the Pd console you will see the message
soundfiler_read: truncated to 4000000 elements
alerting you to the truncation: soundfiler simply stops reading when it hits that maximum.
To fix this, add -resize to your read message so the array is resized to fit the file, and raise the cap with the -maxsize flag; alternatively, without -resize, increase the array's size in its properties to something large enough (e.g. 400000000).
Background info: soundfiler reads sound files into memory as fast as possible; it does not yield to other processes, so audio playback stops and the GUI is unresponsive for the duration of the read. This arbitrary limit exists to avoid unintentionally locking up Pd when loading a long sample, and the user can override it.
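For example, a read message like the following loads the whole file and resizes the array to fit (the file name, array name, and -maxsize value here are hypothetical placeholders):
read -resize -maxsize 1e+07 mylongsample.wav array1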

Related

serial input buffer size Matlab

I'm trying to read a lot of data coming from my Arduino, so I've set my input buffer to 500000 to make sure it can handle all of it. My data are readings from 4 sensors, each sampled at 250 Hz. With the default buffer size (712) I used to get snags when plotting the readings in real time: the samples would arrive out of order, which made the plot go crazy. I solved this by increasing the buffer size to 50000. But now, although this works for a while, if I run it for 15 minutes I get the same misbehavior after 5 minutes, and in addition the plotting gets slower. I do have some processing code along with the live plotting, but it shouldn't behave like this with such a big buffer.
I want to know whether the buffer will contain all the data from the beginning until it's full, or whether it will keep erasing older data as it fills (knowing that I have already saved the data in another vector and plotted it). I truly don't understand why this keeps happening.
kind regards
I.H
When the buffer gets full, new incoming data erases the old data. The behavior you are seeing happens because your processing and plotting are slower than the flow of incoming data.
Try to make sure that your processing is optimized.
Make sure the plotting is flushed with "drawnow", so that anything pending in the graphics queue gets processed instead of piling up.
Try to avoid saving and keeping all the data.
If the problem is still there, you can try to implement a timer to make sure that you read your data at a consistent rate.
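A minimal sketch of that pattern (the COM port, baud rate, and sample count are hypothetical; the serial interface shown is the pre-R2019b one used in this thread):
s = serial('COM6','BaudRate',115200);
s.InputBufferSize = 500000;        % room for bursts from 4 sensors at 250 Hz
fopen(s);
h = animatedline;                  % live plot without keeping growing arrays
for k = 1:5000
    v = fscanf(s,'%f');            % one reading per line from the Arduino
    if isscalar(v)
        addpoints(h,k,v);
        drawnow limitrate          % flush graphics without building a backlog
    end
end
fclose(s); delete(s);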

How to write "Big Data" to a text file using Matlab

I am getting some readings off an accelerometer connected to an Arduino which is in turn connected to MATLAB through serial communication. I would like to write the readings into a text file. A 10 second reading will write around 1000 entries that make the text file size around 1 kbyte.
I will be using the following code:
%%%%%// Communication %%%%%
arduino = serial('COM6','BaudRate',9600);
fopen(arduino);
fileID = fopen('Readings.txt','w');
%%%%%// Reading from Serial %%%%%
Samples = 1000;                     % about 10 s of data at this sketch's rate
vib = [];                           % grows every iteration; see the answer below
for i = 1:Samples
    scan = fscanf(arduino,'%f');
    if isfloat(scan)
        vib = [vib; scan];          % keep a copy in memory
        fprintf(fileID,'%0.3f\r\n',scan);
    end
end
fclose(fileID);
fclose(arduino);
Any suggestions on improving this code? Will it hit a time or size limit? This code is to be run for 3 days.
Do not use text files, use binary files. 42718123229.123123 is 18 bytes in ASCII, but 4 bytes as a single-precision value (8 as a double) in a binary file. Don't waste space unnecessarily. If your data is going to be used later in MATLAB, then I suggest you just save it in .mat files.
Do not use a single file! Choose a reasonable file size (e.g. 100 MB) and make sure that when you reach that much data you switch to another file. You could do this by e.g. saving one file per hour. This way you minimize what you lose if the software crashes 2 minutes before finishing.
Now, knowing the real dimensions of your problem, writing a text file is totally fine; nothing special is required to process such small data. But there is a problem with your code: you are growing the variable vib over time. That can cause bad performance, because you are not preallocating, and it may consume a lot of memory. I strongly recommend not keeping this variable; if you need the data, read it back from the file afterwards.
Another thing you should consider is verification of your data. What do you do when you receive fewer samples than you expect? Include timestamps! Be aware that these timestamps are not precise, because you add them after reception, but they let you identify whether just some random samples are missing (these may be interpolated afterwards) or a consecutive run of maybe 100 samples is missing.
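A minimal sketch combining those suggestions (the port name and the two-double record layout are hypothetical): binary output via fwrite, one file per hour, and a timestamp stored with every sample:
arduino = serial('COM6','BaudRate',9600);
fopen(arduino);
fid = -1;
currentHour = -1;
while true                              % long-running; stop with Ctrl-C
    scan = fscanf(arduino,'%f');
    if isempty(scan), continue; end
    c = clock;                          % [y m d h min s] wall-clock time
    if c(4) ~= currentHour              % roll over to a new file each hour
        if fid > 0, fclose(fid); end
        fid = fopen(sprintf('readings_%04d%02d%02d_%02d.bin', ...
                            c(1),c(2),c(3),c(4)),'w');
        currentHour = c(4);
    end
    fwrite(fid,[now; scan(1)],'double'); % 16 bytes per record: time + value
end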

reading images from VideoReader gets progressively slower

I've been trying to read an MP4 file using VideoReader. Matlab is able to read the images, but the further a frame is into the video, the longer it takes to read:
tic;I=read(v,1);toc
Elapsed time is 0.264011 seconds.
tic;I=read(v,2000);toc
Elapsed time is 32.859614 seconds.
Also, I'm not sure if this is related, but Matlab cannot determine the number of frames in the file:
v=VideoReader('S1140007 (~200 cubes, large).MP4');
Warning: Unable to determine the number of frames in this file.
I've tried using two versions R2012b and R2015a, and the problem persists.
On a different machine, however, the number of frames can be determined and the reading times don't get longer, so obviously there's something configured wrong on my machine.
Is there a known solution for this problem (can this be related to codecs somehow?), or maybe an alternative method of reading one image at a time (readFrame is not relevant for my needs)?
Any help would be appreciated,
Aviram
OK, so this is not exactly an answer, but a workaround...
It seems that to set the NumberOfFrames property in the VideoReader object created for a video with an undetermined number of frames, one needs to read the last frame using the following code (as mentioned in the VideoReader documentation):
v=VideoReader('path.mp4');
l=read(v,inf);
This sets the number of frames in the video and allows for indexing and quick reading of single frames from the video. However, this only works in MATLAB R2012b. In R2015a, the NumberOfFrames property is set by the read(v,inf) trick, but reading is still very time-consuming, for some reason.
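Put together, the workaround pattern looks like this (the file name is a placeholder; the timing behavior is as observed on R2012b):
v = VideoReader('path.mp4');
l = read(v,inf);           % forces a scan to the last frame
n = v.NumberOfFrames;      % now populated; was empty before the read
I = read(v,2000);          % indexed single-frame reads are quick again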
I'm not sure why this happens, and as I've said, some of the other machines I've checked were able to read my files properly (but some weren't), so this is far from a complete solution. It is not clear why the number of frames cannot be determined on some machines, why there is this variability between computers, or why read(v, inf) works fully in some versions and only partially in others.

Modifying Every Column of Every Frame of a Video

I would like to write a program that will take a video as input, create an output video file, and will (starting after a certain number of frames), begin writing modified frames to the output file frame by frame.
The modification will need to work on individual columns of pixels, one at a time.
Viewing this as a problem to be solved in Matlab, with each frame as a matrix... I cannot think of a way to make this computationally tractable.
I am hoping that someone might be able to offer suggestions on how I might begin to approach this problem.
Here are some details, in case it helps:
I'm interested in transforming a video in the following way:
Viewing a video as a sequence of (MxN) matrices, where each matrix is called a frame:
Take an input video and create new file for output video
For each column V in frame(i) of output video, replace this column by
column V in frame(i + V - N) of the input video.
For example: the new right-most column (column N) of frame(i) will contain column N of frame(i + N - N) = frame(i)... so that there is no replacement. The new 2nd to right-most column (column N-1) of frame(i) will contain column N-1 of [frame(i+N-1-N) = frame(i-1)].
In order to make this work (i.e. in order to not run out of previous frames), this column replacement will start on frame N of the video.
So... This is basically a variable delay running from left to right?
As you say, you do have two ways of going about this:
a) Use lots of memory
b) Use lots of file access
Your memory requirements increase as the cube of the video's linear size: the size of each frame grows, AND the number of previous frames you need to keep open or reference grows. I.e. doubling the frame size requires 4x the memory per frame and 2x the number of frames open.
I think that Matlab's memory management will probably make this hard to do for e.g. a 1080p video, unless you have a pretty high-end workstation. Do you? A quick test-read of a 720p video gives 1.2MB per frame. 1080p would then be approx 5MB per frame, and you would need to have 1920 frames open: approx 10GB needed.
It will be more efficient to load frames individually, if you don't have enough memory - otherwise you will be using pagefiles and that'll be slower than loading frame-by-frame.
Your basic code reading each frame individually could be something like this:
VR=VideoReader('My_Input_Video_Filename.avi');
VW=VideoWriter('My_Output_Video_Filename.mp4','MPEG-4');
open(VW);                               % must open before writeVideo
NumInFrames=get(VR,'NumberOfFrames');
InWidth=get(VR,'Width');
InHeight=get(VR,'Height');
OutFrame=zeros(InHeight,InWidth,3,'uint8');
for frame=InWidth+1:NumInFrames
    for subindex=1:InWidth
        % column V of output frame i comes from input frame(i + V - N)
        CData=read(VR,frame+subindex-InWidth);
        OutFrame(:,subindex,:)=CData(:,subindex,:);
    end
    writeVideo(VW,OutFrame);
end
close(VW);
This will probably be slow, and I haven't fully checked it works, but it does use a minimum amount of memory.
The best case for minimum file acess is probably using a ring buffer arrangement and the maximum amount of memory, which would look something like this:
VR=VideoReader('My_Input_Video_Filename.avi');
VW=VideoWriter('My_Output_Video_Filename.mp4','MPEG-4');
open(VW);                               % must open before writeVideo
NumInFrames=get(VR,'NumberOfFrames');
InWidth=get(VR,'Width');
InHeight=get(VR,'Height');
CDatas=read(VR,[1 InWidth]);            % preload the first N frames
BufferIndex=1;
OutFrame=zeros(InHeight,InWidth,3,'uint8');
for frame=InWidth+1:NumInFrames
    CDatas(:,:,:,BufferIndex)=read(VR,frame);  % overwrite the oldest frame
    tempindices=circshift(1:InWidth,[1,-1*BufferIndex]);
    for subindex=1:InWidth
        OutFrame(:,subindex,:)=CDatas(:,subindex,:,tempindices(subindex));
    end
    writeVideo(VW,OutFrame);
    BufferIndex=mod(BufferIndex,InWidth)+1;     % wrap around 1..InWidth
end
close(VW);
The buffer indexing code may need some tweaking there, but something along those lines would be a minimum file access, maximum memory use solution.
For a given PC with more or less memory, the best compromise is something in between these two (i.e. buffering somewhere between 1 and all N frames per iteration).
Matlab will be quite slow for this kind of task, but it is a good way of getting your algorithm right and working out indexing bugs and the like. Converting to a compiled language will give a good increase in speed: I once converted a Matlab script to a C# program in a couple of hours and got a 10x speed increase over an optimised script, even though the time taken was dominated by the number of file reads.
Hope this helps, good luck!

Mixing sound files of different size

I want to mix audio files of different sizes into one single .wav file without cutting any file short, i.e. the resulting file's length should equal that of the largest of all the input files.
There is a sample through which we can mix files of the same size
(http://www.modejong.com/iOS/#ex4, Example 4).
I modified the code to get the mixed file as a .wav file.
But I am not able to understand how to modify this code for unequal sized files.
If someone can help me out with a code snippet, I'll be really thankful.
It should be as easy as sending all the files to the mixer simultaneously. When any single file gets to the end, just treat it as if the remainder is filled with zeroes. When all files get to the end, you are done.
Note that the example code says it returns an error if there would be clipping (the sum of the waves is greater than the maximum representable value). This condition is more likely when you are mixing multiple inputs. The best way around it is to create some "headroom" in the input waves. You can either do this in preprocessing, by ensuring that each wave's volume is no more than X% of maximum (~80-90%, depending on the number of inputs), or do it dynamically in the mixer code by multiplying each sample by some value < 1.0 as you add it to the mix.
If you are selecting the waves to mix at runtime and failure due to clipping is unacceptable, you will need to modify the sample code to pin the values at max/min instead of returning an error. Don't just let them overflow or you will get noisy artifacts.
Clipping creates artifacts as well, but when you haven't created enough headroom before mixing, it is definitely preferable to overflow: it is a more familiar-sounding type of distortion, similar to what you get when you overdrive your speakers. As the Wikipedia article on clipping puts it:
Clipping is preferable to the alternative in digital systems—wrapping—which occurs if the digital hardware is allowed to "overflow", ignoring the most significant bits of the magnitude, and sometimes even the sign of the sample value, resulting in gross distortion of the signal.
How I'd do it:
Much like the mix_buffers function that you linked to, but pass in two separate lengths instead of the single mixbufferNumSamples. Iterate over the whole of the longer of the two buffers; once the index has gone beyond the end of the shorter buffer, simply treat that buffer's samples as 0 for the rest of the loop.
If you must avoid clipping and do it in real-time and you know nothing else about the two sounds, you must provide enough headroom. The simplest method is by halving each of the samples before mixing:
mixed = s1/2 + s2/2;
This ensures that the resultant mixed sample won't overflow an int16_t. It will have the side effect of making everything quieter though.
If you can run it offline, you can calculate a scale factor to apply to both waveforms which will keep the peaks when summed below the maximum allowed value.
Or you could mix them all at full volume to an int32_t buffer, keeping track of the largest (magnitude) mixed sample and then go back through the buffer multiplying each sample by a scale factor which will make that extreme sample just reach the +32767/-32768 limits.
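A minimal C sketch of that last offline approach, which also handles the unequal lengths by zero-padding the shorter buffer (the function name and signature are illustrative, not taken from the linked example code):
#include <stdint.h>
#include <stdlib.h>

/* Mix two int16 buffers of different lengths at full volume into 32 bits,
 * then rescale only if the peak would clip. dst must hold max(na, nb)
 * samples. */
void mix_and_normalize(const int16_t *a, size_t na,
                       const int16_t *b, size_t nb,
                       int16_t *dst)
{
    size_t n = (na > nb) ? na : nb;
    int32_t *tmp = malloc(n * sizeof *tmp);
    int32_t peak = 0;
    for (size_t i = 0; i < n; i++) {
        /* treat the shorter buffer as zero-padded past its end */
        int32_t s = (i < na ? a[i] : 0) + (i < nb ? b[i] : 0);
        int32_t mag = (s < 0) ? -s : s;
        if (mag > peak) peak = mag;
        tmp[i] = s;
    }
    for (size_t i = 0; i < n; i++) {
        /* scale down only when the full-volume mix would exceed int16 */
        if (peak > 32767)
            dst[i] = (int16_t)((int64_t)tmp[i] * 32767 / peak);
        else
            dst[i] = (int16_t)tmp[i];
    }
    free(tmp);
}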