I am having a problem creating an AVI file using MATLAB. My aim is to apply an edge filter to an entire video and save the result as an AVI. The filter works fine; my problem is writing the AVI file.
My code:
vidFile = VideoReader('video.avi');
edgeMov = avifile('edges', 'fps', 30);
for i = 1:vidFile.NumberOfFrames
    frameI = read(vidFile, i);                 % read one RGB frame
    frameIgray = rgb2gray(frameI);             % convert to grayscale
    edgeI = edge(frameIgray, 'canny', 0.6);    % binary edge map
    edgeIuint8 = im2uint8(edgeI);              % logical -> uint8
    edgeIuint8(:,:,2) = edgeIuint8(:,:,1);     % replicate into 3 channels
    edgeIuint8(:,:,3) = edgeIuint8(:,:,1);
    edgeMov = addframe(edgeMov, edgeIuint8);
end
edgeMov = close(edgeMov);
When the loop finishes and the AVI file is closed, I go to play the video and it says "Windows Media Player encountered a problem while playing this file". I've also tried Media Player Classic and VLC without success, which leads me to believe that the problem must be the file itself. I checked the file with GSpot and it said that the AVI header is corrupt.
Rerunning the loop produced exactly the same problem. What's confusing me is that when I run the loop for a smaller number of frames, 30 for example, the video writes fine and I can watch it. The video I am trying to convert is in excess of 1000 frames, so I don't know whether size is the problem.
Any help would be greatly appreciated, thank you.
I've used the following to create the AVI:
edgeMov = avifile('video.avi','compression','Indeo5','fps',15,'quality',95);
Give it a try.
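If avifile keeps producing a corrupt header on long clips, a possible alternative (not from the original answer) is the newer VideoWriter class, assuming a MATLAB release that includes it. A minimal sketch that mirrors the loop from the question:

vidFile = VideoReader('video.avi');
writerObj = VideoWriter('edges.avi');              % Motion JPEG AVI by default
writerObj.FrameRate = 30;
open(writerObj);
for i = 1:vidFile.NumberOfFrames
    frameI = read(vidFile, i);
    edgeI = edge(rgb2gray(frameI), 'canny', 0.6);  % binary edge map
    writeVideo(writerObj, im2uint8(edgeI));        % grayscale frames are accepted
end
close(writerObj);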
I am having trouble taking the output of the PiCamera capture function (directed into a BytesIO stream) and opening it with the PIL library. Here is the code (based on the PiCamera basic examples):
import io
from time import sleep

from picamera import PiCamera
from PIL import Image

# Camera stuff
camera = PiCamera()
camera.resolution = (640, 480)
stream = io.BytesIO()
sleep(2)  # give the camera time to warm up
try:
    for frame in camera.capture_continuous(stream, format="jpeg", use_video_port=True):
        frame.seek(0)
        image = Image.open(frame)  # THIS IS WHERE IT CRASHES
        # OTHER STUFF THAT IS NOT IMPORTANT GOES HERE
        frame.truncate(0)
finally:
    camera.close()
    stream.close()
The error is: PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0xaa01cf00>
Any help would be greatly appreciated :)
Have a nice day!
The problem is simple, but I am wondering why the io library works that way.
You simply need to seek the stream back to 0 after truncating it, or seek to 0 and then call truncate() with no argument (in both cases after you are done opening the image). Like so:
for frame in camera.capture_continuous(stream, format="jpeg", use_video_port=True):
    stream.seek(0)
    image = Image.open(stream)
    # Do stuff with image
    stream.seek(0)
    stream.truncate()
Basically, when you open the image and do some operations on it, the BytesIO pointer can move around and end up somewhere other than position zero. After that, when you call truncate(0), it does not move the pointer back to zero as I thought it would (it seemed logical to me that it would move the pointer back to where the truncation occurs). When the code runs once more, the capture writes into the stream, but this time it does not start writing at the beginning, and everything breaks after that.
Hope this can help someone in the future :)
I just received a 30-day trial of the Computer Vision System Toolbox and tested it out. I found this code online that separates video from audio:
file='movie.AVI';
file1='targetfile.wav';
hmfr= video.MultimediaFileReader(file,'AudioOutputPort',true,'VideoOutputPort',false);
hmfw = video.MultimediaFileWriter(file1,'AudioInputPort',true,'FileFormat','WAV');
while ~isDone(hmfr)
    audioFrame = step(hmfr);
    step(hmfw, audioFrame);
end
close(hmfw);
close(hmfr);
but I can't run it; I only get this error:
Undefined variable "video" or class "video.MultimediaFileReader".
I'm not quite sure what this means: does it refer to my code or to the Computer Vision System Toolbox? I checked, and I have all the requirements, and the Add-On Manager says the toolbox is properly installed, so I'm not sure why I get this error.
I think your task is easier than you think it is. It can be done without relying on any toolboxes.
Here's how:
1. Read your video file and get its sample rate using audioread.
2. Then use audiowrite to write it as an audio file.
[input_file, Fs] = audioread('movie.AVI');
audiowrite('target_file.WAV', input_file, Fs);
% If your current folder is the default one, MATLAB may give you a 'Permission Denied' error.
% Change folders or give a full path such as 'D:\target_file.WAV' when calling audiowrite.
I have a weird problem. I want to create a video with the extension .tif containing a lot of frames. My script runs fine about two times out of three, but it sometimes crashes randomly.
I have a loop over the total number of frames in the video, and on each iteration I append a TIFF image to my multipage TIFF.
Here is my code to create the new video:
% --- Create the new frame
newVid.cData = iL(y0:y0end, x0:x0end);
% --- Create the new video
if nbrFrames == 1
    imwrite(newVid.cData, dataOutVid);
else
    imwrite(newVid.cData, dataOutVid, 'WriteMode', 'append');
end
On each iteration I change the value of newVid.cData. In fact, the new video is a portion of the original video in which I focus on a specific object (a mouse, in my case). dataOutVid is the path where I store the new video, and the extension of the path is .tif.
How I obtain the path:
disp('Where do you want to save the new video and under which name ?');
[name, path] = uiputfile({'.tif'}, 'Save Video');
dataOutVid = strcat(path,name);
Here is the error I sometimes get, at random:
Error using imwrite (line 454)
Unable to open file "D:\Matlab\Traitement Vidéo\test.tif" for writing. You might not have write permission.
Error in mouseExtraction(line 164)
imwrite(newVid.cData,dataOutVid,'WriteMode','append');
Well, I don't understand why this error appears randomly (once at frame 270, another time at frame 1250, etc.). How is it possible that I suddenly lose the right to write to my own file?
Edit: I already checked whether I had a RAM problem, but I only use 20% of it during the execution of the script.
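Not part of the original post, but since the failure is intermittent, one hedged workaround is to retry the append after a short pause, in case another process (antivirus, indexing, backup software) briefly locks the file. The helper name and retry parameters below are illustrative assumptions:

function appendTiffWithRetry(frameData, dataOutVid, isFirstFrame)
% Illustrative helper (assumption, not from the original code): retry
% imwrite a few times in case the output file is briefly locked.
maxAttempts = 5;                 % assumed retry budget
for attempt = 1:maxAttempts
    try
        if isFirstFrame
            imwrite(frameData, dataOutVid);
        else
            imwrite(frameData, dataOutVid, 'WriteMode', 'append');
        end
        return;                  % success
    catch err
        if attempt == maxAttempts
            rethrow(err);        % give up after the last attempt
        end
        pause(0.2);              % wait briefly before retrying
    end
end
end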
I've been looking through the Swift documentation for a way to save the audio output of AVAudioEngine, but I couldn't find any useful tip.
Any suggestions?
Solution
I found a way around this thanks to matt's answer.
Here is sample code showing how to save audio after passing it through an AVAudioEngine (I think that technically it's before):
newAudio = AVAudioFile(forWriting: newAudio.url, settings: nil, error: NSErrorPointer())
// Your new file, on which you want to save some changed audio, prepared to be buffered with some new data...
var audioPlayerNode = AVAudioPlayerNode() // or your time-pitch unit if the pitch was changed
// Now install a tap on the output bus to "record" the transformed file into our newAudio file.
audioPlayerNode.installTapOnBus(0, bufferSize: AVAudioFrameCount(audioPlayer.duration), format: opffb) {
    (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) in
    if (self.newAudio.length) < (self.audioFile.length) { // lets us know when to stop saving the file, otherwise it saves infinitely
        self.newAudio.writeFromBuffer(buffer, error: NSErrorPointer()) // write the buffer result into our file
    } else {
        audioPlayerNode.removeTapOnBus(0) // if we don't remove it, it will keep on tapping infinitely
        println("Did you like it? Please, vote up for my question")
    }
}
Hope this helps!
One issue to solve:
Sometimes your output node is shorter than the input: if you speed the time rate up by 2, your audio will be 2 times shorter. This is the issue I'm facing for now, since my condition for saving the file is the if check in the tap block above:
if (newAudio.length) < (self.audioFile.length) // audioFile being the original (long) audio and newAudio being the new, changed (shorter) audio
Any help here?
Yes, it's quite easy. You simply put a tap on a node and save the buffer into a file.
Unfortunately this means you have to play through the node. I was hoping that AVAudioEngine would let me process one sound file into another directly, but apparently that's impossible - you have to play and process in real time.
Offline rendering worked for me, using the GenericOutput AudioUnit. Please check this link; I have mixed two or three audio files offline and combined them into a single file. It's not the same scenario, but it may give you some ideas: core audio offline rendering GenericOutput
I want to play an MP4 file showing a reaching task for an experiment. I am not sure how to formulate the syntax. So far I have:
moviefile = 'GOPR0056.MP4';
screenNum = 0;
[window, rect] = Screen('OpenWindow', screenNum, 1);
moviePtr = Screen('OpenMovie', window, moviefile);
Screen('PlayMovie', moviePtr, 1);
But I'm getting an issue:
PTB-ERROR: Could not open movie file [GOPR0056.MP4] for playback! No such moviefile with the given path and filename.
PTB-ERROR: The specific file URI of the missing movie was: file:///GOPR0056.MP4.
The file is located in the directory. Could I be getting a video driver error because this is an MP4 file? Thanks.
When playing videos with Psychtoolbox, always provide the full path, even if the video is in the current directory. Try this:
moviefile = [pwd filesep 'GOPR0056.MP4'];
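Not part of the original answer, but a minimal playback sketch, assuming the standard Psychtoolbox movie calls (Screen('GetMovieImage'), 'DrawTexture', 'CloseMovie'):

moviefile = [pwd filesep 'GOPR0056.MP4'];             % full path to the movie
screenNum = 0;
[window, rect] = Screen('OpenWindow', screenNum, 1);
moviePtr = Screen('OpenMovie', window, moviefile);
Screen('PlayMovie', moviePtr, 1);
while true
    tex = Screen('GetMovieImage', window, moviePtr);  % fetch the next frame
    if tex <= 0                                       % no more frames
        break;
    end
    Screen('DrawTexture', window, tex);               % draw the frame
    Screen('Flip', window);                           % show it
    Screen('Close', tex);                             % release the texture
end
Screen('PlayMovie', moviePtr, 0);
Screen('CloseMovie', moviePtr);
sca;                                                  % close the on-screen window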