I am using MATLAB code to record a video file with the VideoWriter function. I want to change the code to record only a certain part of the video rather than the whole thing. What command can I use to record only the first 40 seconds of the data at normal speed? I would also like to know whether I can record only a small part from the middle of the data.
You can add conditional statements (such as if) to control the writing of the video file.
Alternatively, you can put the video-writing call inside a wrapper function that accepts your actual data and a control boolean.
If you mean you want to record 40 seconds of each data set, at different frame rates, a wrapper function that takes the frame rate and the time length and calculates the frame count for itself may work.
If you mean you are frequently switching between data sets that add up to one piece of video, and you want that video to be 40 seconds long, then you need a 'global' variable storing how many seconds you have recorded so far, plus a function to calculate each time increment.
Edit: Based on your refined details, you may find the following necessary and, hopefully, helpful:
- the exact frame rate of each data set;
- a variable (kept in your controlling function/script) that stores how many milliseconds you have recorded so far;
- a wrapper function that does the following (and takes arguments accordingly):
  1. check whether the recorded time already exceeds 40,000 milliseconds (if so, do nothing and return);
  2. calculate the duration in milliseconds of the data to be added: (number of frames to record) / (frame rate per second) * 1000;
  3. call the video writer to record your data set, either frame by frame or all at once;
  4. add that duration to the recorded time.
You can make it fancier by having it cut a data series somewhere in the middle, so that, for example, a 10-second data set does not add the extra 4 seconds when you already have 34 seconds on file.
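Here is a minimal MATLAB sketch of such a wrapper; the function name, the H-by-W-by-3-by-N frame layout, and the 40,000 ms cap are illustrative assumptions, not a definitive implementation:

function elapsedMs = writeClipped(vw, frames, frameRate, elapsedMs, maxMs)
% Append frames to the already-open VideoWriter vw, stopping once
% maxMs milliseconds of video have been written in total.
if elapsedMs >= maxMs
    return;                                  % cap reached: do nothing
end
msPerFrame = 1000 / frameRate;
nAllowed = min(size(frames,4), floor((maxMs - elapsedMs) / msPerFrame));
for k = 1:nAllowed                           % write only what still fits
    writeVideo(vw, frames(:,:,:,k));
end
elapsedMs = elapsedMs + nAllowed * msPerFrame;
end

The controlling script keeps the running total, e.g. elapsedMs = 0; followed by elapsedMs = writeClipped(VW, data1, 30, elapsedMs, 40000); for each data set.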
I am having problems with the 'windowing', as it's called, of pitch-shifted grains of sound using the Web Audio API. When I play the sound, I successfully set the initial gain to 0 using setValueAtTime, and successfully fade the sound in using linearRampToValueAtTime. However, I am not able to schedule a fade-out to occur slightly before the sound ends. It may be because the sound is pitch-shifted, although in the code below I believe I have set the parameters correctly. Or maybe the final parameter of setTargetAtTime is unsuitable; I don't fully understand that parameter, despite having read Chris's remarks on the matter. How does one successfully schedule a fade-out with setTargetAtTime when the sound is pitch-shifted? My code is below. You can see the piece itself at https://vispo.com/animisms/enigman2/2022
source.buffer = audioBuffer;
source.connect(this.GrainObjectGain);
this.GrainObjectGain.connect(app.mainGain);
source.addEventListener('ended', this.soundEnded);
//------------------------------------------------------------------------
// Now we do the 'windowing', ie, when we play it, we fade it in,
// and we set it to fade out at the end of play. The fade duration
// in seconds is 10 milliseconds or 10% of the duration of the
// sound, whichever is smaller. This helps eliminate pops.
source.playbackRate.value = this.playbackRate;
var fadeDurationInSeconds = Math.min(0.01,0.1*duration*this.playbackRate);
this.GrainObjectGain.gain.setValueAtTime(0, app.audioContext.currentTime);
this.GrainObjectGain.gain.linearRampToValueAtTime(app.constantGrainGain, app.audioContext.currentTime+fadeDurationInSeconds);
this.GrainObjectGain.gain.setTargetAtTime(0, app.audioContext.currentTime+duration*this.playbackRate-fadeDurationInSeconds, fadeDurationInSeconds);
source.start(when, offset, duration);
Given your comment below I guess you want to schedule the fade out fadeDurationInSeconds before the sound ends.
Since you change playbackRate you need to divide the original duration by that playbackRate to get the actual duration.
Changing your setTargetAtTime() call as follows should schedule the fade out at the desired point in time.
this.GrainObjectGain.gain.setTargetAtTime(
0,
app.audioContext.currentTime + duration / this.playbackRate - fadeDurationInSeconds,
fadeDurationInSeconds
);
Please note that setTargetAtTime() never actually reaches the target, at least in theory. It does come reasonably close after some time, but that time is longer than the timeConstant.
The relevant text from the spec can be found here: https://webaudio.github.io/web-audio-api/#dom-audioparam-settargetattime
Start exponentially approaching the target value at the given time with a rate having the given time constant. Among other uses, this is useful for implementing the "decay" and "release" portions of an ADSR envelope. Please note that the parameter value does not immediately change to the target value at the given time, but instead gradually changes to the target value.
The timeConstant parameter is roughly the time it takes the value to cover about two thirds of the distance to the target.
It's mentioned here: https://webaudio.github.io/web-audio-api/#dom-audioparam-settargetattime-timeconstant
More precisely, timeConstant is the time it takes a first-order linear continuous time-invariant system to reach the value 1 - 1/e (around 63.2%) given a step input response (transition from 0 to 1 value).
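Concretely, the curve that setTargetAtTime() follows is the exponential v(t) = target + (startValue - target) * e^(-(t - startTime)/timeConstant), so the value only approaches the target asymptotically: after one timeConstant about 63.2% of the gap is closed, after three about 95%, and after five over 99%. Choosing a timeConstant a few times smaller than the intended fade length (say fadeDurationInSeconds / 3) therefore makes the fade effectively complete within the fade window; that divisor is a rule of thumb, not something the spec prescribes.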
I'm trying to read a lot of data coming from my Arduino, and I've set my input buffer to 500000 to make sure it can handle all of it. My data are readings from 4 sensors, each sampled at 250 Hz. With the default buffer size (712), I used to get snags when plotting the readings in real time: the samples get disordered, which makes the plot go crazy. I solved this by increasing the buffer size to 50000. But now, although this works for a while, if I run it for 15 minutes I get the same misbehavior after 5 minutes, and in addition the plotting gets slower. I do have some processing code along with the live plotting, but it shouldn't behave like this with such a big buffer. I want to know whether the buffer holds all the data from the beginning until it's full, or whether it keeps erasing older data when it gets full (knowing that I already saved the data in another vector and plotted it). I truly don't understand why this keeps happening.
When the buffer gets full and new data arrives, the old data is erased. The behavior you are seeing occurs because your processing and plotting are slower than the flow of the data.
Try to make sure that you optimize your processing.
Make sure the plotting is driven by "drawnow", so the graphics queue is flushed under your control rather than piling up.
Try to avoid saving and keeping all the data.
If the problem is still there, try implementing a timer to make sure you read your data at a consistent rate, as sketched below.
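A minimal sketch of that timer approach using MATLAB's classic serial interface; the port name, baud rate, and the assumption of 4 int16 values per sample are illustrative:

s = serial('COM3', 'BaudRate', 115200);
s.InputBufferSize = 500000;
fopen(s);

t = timer('Period', 0.1, 'ExecutionMode', 'fixedRate', ...
          'TimerFcn', @(~,~) readAndPlot(s));
start(t);

function readAndPlot(s)
% Drain whole samples from the serial buffer and refresh the plot.
nSamples = floor(s.BytesAvailable / 8);    % 4 sensors * 2 bytes each
if nSamples == 0
    return;
end
raw = fread(s, nSamples * 4, 'int16');     % one read call per tick
data = reshape(raw, 4, []).';              % one column per sensor
plot(data);                                % or update existing line objects
drawnow limitrate                          % flush graphics without backlog
end

Reading in whole-sample chunks keeps the samples ordered, and driving acquisition from a timer decouples it from however long the processing takes.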
I have an AB PLC where I am trying to read analog values to see whether they vary by more than 1 V in 5 minutes. I have 10 sets of values I need to read. What would be the easiest way to implement this? I can think of creating arrays to save the values each time I read them, but the part I am having trouble with is how to keep a running average of the values and compare against it each time I read them. Any help with this would be greatly appreciated!
If I understand correctly, all you want to do is see whether your analog input is more or less than 1 V from your set value. Just check whether the value is greater than (set value + 1 V) or less than (set value - 1 V) every PLC scan, and set a bool to true if so. That should be it.
I think finding an average of the analog input is not the way to go here. But if you did want to average an analog input over time, you would need three things: a sample time, an interval time, and a total number of intervals. Say you set a sample time of 12 seconds: you read the analog value every 12 seconds, and after 60 seconds you take the total and divide by (60/12 = 5) to get that interval's average. You then add that interval average to the running total of interval averages and divide by the number of intervals accumulated so far. Hope I didn't make that too complicated; the arithmetic is sketched just below.
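The threshold check and the interval averaging above, written out in MATLAB purely to illustrate the arithmetic (all numbers are made up):

% the simple +/- 1 V check
setValue = 5.0;                          % volts
analogIn = 5.6;                          % latest reading
overLimit = abs(analogIn - setValue) > 1.0;

% the interval averaging
intervalSum = 0; intervalCnt = 0;        % persist these between intervals
samples = [4.9 5.0 5.1 5.0 5.2];         % five 12-second samples in one minute
intervalAvg = sum(samples) / 5;          % this interval's average
intervalSum = intervalSum + intervalAvg;
intervalCnt = intervalCnt + 1;
runningAvg = intervalSum / intervalCnt;  % average of the interval averages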
What I understood from your question is that you want to check whether the input voltage has changed, using the analog value you read; in my case I'm using 0 to 10 V. Simply store the analog value at maximum input (10 V), do the same for 0 V, and from those you can calculate the raw value corresponding to 1 V. All you have to do then is compare the current value against the +/- 1 V range you calculated. You can do this dynamically with n analog inputs (n = the maximum number of analog inputs supported by your PLC).
Have a look at FFL and FFU, the FIFO Load and FIFO Unload instructions. You specify the length of the buffer you want and use FFL and FFU in pairs on the same buffer. Running averages are not that difficult to compute, and there are a number of ways to implement one depending on the platform (SLC vs. CLX). The simplest method that works on both platforms is to use a counter's .ACC to indirectly reference the element number of the FIFO for an addition, then divide by the number of elements in your FIFO. This can all be done in a single multi-branch rung.
1. Load your value into the FIFO buffer at some timer interval using FFL.
2. If you don't need the FIFO values 'popped out' for use elsewhere, just set .POS to 0 when the FIFO is full and let it continue to update with new values; the values aren't cleared, so they remain readable for your running average. But you MUST either use FFU to step the .POS back or use a MOV to change the .POS once it's full, or it will stop taking values.
3. Create a counter with a .PRE equal to the .LEN of your FIFO.
4. On a parallel rung, with each increment of counter.ACC, use an ADD function. Here's an example assuming CLX; if you're using SLC you can do the same thing, but obviously you can't use tag names:
ADD
Value1: AllValues
Value2: FIFO[IndexCounter.ACC]
Destination: AllValues
5. When your counter.DN bit is set, divide AllValues by FIFO.LEN, store the result in a RunningAverage tag, then reset the counter. Have the counter step once per scan, or put it all in a periodic task to execute the routine. The same logic is sketched after this list.
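For clarity, here is the FIFO running average from steps 1-5 sketched in MATLAB; the tag names and buffer length are illustrative, not CLX code:

fifoLen = 10;                       % FIFO.LEN
fifo = zeros(1, fifoLen);           % the FIFO buffer
pos = 1;                            % plays the role of .POS

% each timer interval: load the newest sample, wrapping .POS around
newSample = 5.1;                    % made-up reading
fifo(pos) = newSample;
pos = mod(pos, fifoLen) + 1;

% steps 4-5: total the elements, then divide by the length
AllValues = sum(fifo);              % the ADD loop over FIFO[counter.ACC]
RunningAverage = AllValues / fifoLen;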
I would like to write a program that will take a video as input, create an output video file, and (starting after a certain number of frames) begin writing modified frames to the output file, frame by frame.
The modification will need to work on individual columns of pixels, one at a time.
Viewing this as a problem to be solved in Matlab, with each frame as a matrix... I cannot think of a way to make this computationally tractable.
I am hoping that someone might be able to offer suggestions on how I might begin to approach this problem.
Here are some details, in case it helps:
I'm interested in transforming a video in the following way:
Viewing a video as a sequence of (MxN) matrices, where each matrix is called a frame:
Take an input video and create new file for output video
For each column V in frame(i) of output video, replace this column by
column V in frame(i + V - N) of the input video.
For example: the new right-most column (column N) of frame(i) will contain column N of frame(i + N - N) = frame(i)... so that there is no replacement. The new 2nd to right-most column (column N-1) of frame(i) will contain column N-1 of [frame(i+N-1-N) = frame(i-1)].
In order to make this work (i.e. in order to not run out of previous frames), this column replacement will start on frame N of the video.
So... This is basically a variable delay running from left to right?
As you say, you do have two ways of going about this:
a) Use lots of memory
b) Use lots of file access
Your memory requirements increase as the cube of the video's linear resolution: the size of each frame grows, AND the number of previous frames you need to keep open grows. I.e. doubling the frame dimensions requires 4x the memory per frame, and 2x the number of frames open.
I think that Matlab's memory management will probably make this hard to do for e.g. a 1080p video, unless you have a pretty high-end workstation. Do you? A quick test-read of a 720p video gives 1.2MB per frame. 1080p would then be approx 5MB per frame, and you would need to have 1920 frames open: approx 10GB needed.
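As a back-of-envelope check in MATLAB, counting raw 24-bit RGB frames directly lands in the same ballpark as the estimate above:

InWidth = 1920; InHeight = 1080;
bytesPerFrame = InWidth * InHeight * 3;        % uncompressed RGB24, ~5.9 MB
framesNeeded = InWidth;                        % one buffered frame per column of delay
totalGB = bytesPerFrame * framesNeeded / 2^30  % roughly 11 GB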
If you don't have enough memory, it will be more efficient to load frames individually; otherwise you will be hitting the pagefile, which is slower than loading frame by frame.
Your basic code reading each frame individually could be something like this:
VR = VideoReader('My_Input_Video_Filename.avi');
VW = VideoWriter('My_Output_Video_Filename.avi','MPEG-4');
open(VW);                                    % VideoWriter must be opened before writing
NumInFrames = get(VR,'NumberOfFrames');
InWidth = get(VR,'Width');
InHeight = get(VR,'Height');
OutFrame = zeros(InHeight,InWidth,3,'uint8');
for frame = InWidth+1:NumInFrames
    for subindex = 1:InWidth
        % column subindex is delayed by (InWidth-subindex) frames,
        % so the rightmost column comes from the current frame
        CData = read(VR, frame-(InWidth-subindex));
        OutFrame(:,subindex,:) = CData(:,subindex,:);
    end
    writeVideo(VW,OutFrame);
end
close(VW);
This will probably be slow, and I haven't fully checked it works, but it does use a minimum amount of memory.
The best case for minimum file access is probably a ring buffer arrangement using the maximum amount of memory, which would look something like this:
VR = VideoReader('My_Input_Video_Filename.avi');
VW = VideoWriter('My_Output_Video_Filename.avi','MPEG-4');
open(VW);
NumInFrames = get(VR,'NumberOfFrames');
InWidth = get(VR,'Width');
InHeight = get(VR,'Height');
CDatas = read(VR,[1 InWidth]);               % preload the first InWidth frames
BufferIndex = 1;
OutFrame = zeros(InHeight,InWidth,3,'uint8');
for frame = InWidth+1:NumInFrames
    CDatas(:,:,:,BufferIndex) = read(VR,frame);    % overwrite the oldest slot
    % map each output column to the buffer slot holding its delayed frame
    tempindices = circshift(1:InWidth,[0,-1*BufferIndex]);
    for subindex = 1:InWidth
        OutFrame(:,subindex,:) = CDatas(:,subindex,:,tempindices(subindex));
    end
    writeVideo(VW,OutFrame);
    BufferIndex = mod(BufferIndex,InWidth) + 1;    % keep the index in 1..InWidth
end
close(VW);
The buffer indexing code may need some tweaking there, but something along those lines would be a minimum file access, maximum memory use solution.
For a given PC with more or less memory, you can implement something in between these two (i.e. reading somewhere between one and all frames per iteration) as a best case.
Matlab will be quite slow for this kind of task, but it is a good way of getting your algorithm right and working out indexing bugs and that kind of thing. Converting to a compiled language would give a good increase in speed: I once converted a Matlab script to a C# program in a couple of hours and got a 10x speed increase over an optimised script, in a case where the time taken was dominated by the number of file reads.
Hope this helps, good luck!
I've been working on a very specific project for iOS lately, and my research has led me to almost-final code. I've solved all the extreme difficulties I've found until now, but on this one I don't seem to have a clue, neither about the reason nor about how to solve it.
I set up my audioqueue (sample rate 44100, format LinearPCM, 16 bits per channel, 2 bytes per frame, 1 channel per frame...) and start recording the sound with 12 audio buffers. However, there seems to be a delay after every 4 callbacks.
The situation is the following: the first 4 callbacks are called with an interval of about 2 ms each. However, between the 4th and the 5th there is a delay of about 60 ms. The same thing happens between the 8th and the 9th, the 12th and the 13th, and so on.
There seems to be a relation between the bytes per frame and the moment of the delay. I know this because if I change to 4 bytes per frame, I start having the delay between the 8th and the 9th, then between the 16th and the 17th, the 24th and the 25th... Nonetheless, there doesn't seem to be any relation between the moment of the delay and the number of buffers.
The callback function does only two things: store the audio data (inBuffer->mAudioData) in an array my class can use, and call AudioQueueEnqueueBuffer to put the current buffer back on the queue.
Did anyone go through this problem already? Does anyone know, at least, what could be the cause of it?
Thank you in advance.
The Audio Queue API seems to run on top of the RemoteIO Audio Unit API, whose real audio buffer size is probably unrelated to, and in your example larger than, whatever size your Audio Queue buffers are. So whenever a RemoteIO buffer is ready, a bunch of your smaller AQ buffers quickly get filled from it, and then you get a longer delay while waiting for the next larger buffer to fill with samples.
If you want better-controlled (more evenly spaced) buffer latency, try using the RemoteIO Audio Unit directly.