I am creating a video for a presentation with MATLAB. The video shows an optimization algorithm moving towards the minimum of a cost function. I want the last seconds of the video to be a freeze-frame, which means that I use the writeVideo function to write a couple of identical frames. However, when I play the video with VLC player or with PowerPoint, the correct length of the video is shown, but the identical frames of the video are "skipped". Is there any way I can stop this?
My code looks like this:
graph = figure('units','pixels','position',[0 0 1920 1080]);
scatter3... %(first frame)
v = VideoWriter('Presentation.avi');
v.Quality = 95;
v.FrameRate = 1;
open(v);
frame = getframe(graph);
writeVideo(v,frame);
for i = 1:10
    plot3... %(changing frames)
    frame = getframe(graph);
    writeVideo(v,frame);
end
for j = 1:5
    %(identical frames)
    frame = getframe(graph);
    writeVideo(v,frame);
end
close(v);
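For context, a variant I am considering (just a sketch; the 10 fps rate, the 3 s freeze length and the output file name are arbitrary choices, and the plotting calls are still placeholders) writes the file at a more common frame rate and simply repeats each frame, so the freeze-frame section is nothing unusual for the player:
graph = figure('units','pixels','position',[0 0 1920 1080]);
v = VideoWriter('PresentationSketch.avi');   % illustrative file name
v.Quality = 95;
v.FrameRate = 10;                            % more player-friendly than 1 fps
open(v);
for i = 1:10
    % plot3 ...                              % changing frames (placeholder)
    frame = getframe(graph);
    for r = 1:v.FrameRate                    % hold each step for one second
        writeVideo(v,frame);
    end
end
lastFrame = getframe(graph);
for j = 1:3*v.FrameRate                      % three seconds of freeze-frame
    writeVideo(v,lastFrame);
end
close(v);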
Thank you for answering!
Related
I am reading an .mp4 video file in MATLAB. I want to edit the images, but I want to keep the audio intact. VideoReader and VideoWriter only handle the image part, so I used vision.VideoFileReader and vision.VideoFileWriter. I read the video and audio, take each frame and add a picture next to it, then write the frame and the audio associated with it. The final video shows the picture I added, but not the original image. Any help appreciated.
v = VideoReader('movie.mp4');
nfr = v.NumberOfFrames;
clear v;
vR = vision.VideoFileReader('movie.mp4','AudioOutputPort',1);
fr = vR.info.VideoFrameRate;
vW = vision.VideoFileWriter('filename.avi','AudioInputPort',1,'FrameRate',fr);
pic = imread('picture.png'); % read picture
[a1,b1,~] = size(pic); % get picture size to be resized
for i = 1:nfr
    [I,audio] = vR();
    I = permute(I,[2,1,3]); % rotate 90 degrees
    if i == 1 % resize the picture
        [a,b,~] = size(I);
        pic = imresize(pic,[a,a/a1*b1]);
    end
    I = [I pic]; % combine picture and movie frame
    vW(I,audio); % write frame and audio
end
release(vR);
release(vW);
I figured it out.
v = VideoReader('movie.mp4');
nfr = v.NumberOfFrames;
clear v;
vR = vision.VideoFileReader('movie.mp4','AudioOutputPort',1,'VideoOutDataType','uint8');
% the default VideoOutDataType is 'single'; converting it to uint8, the same class as the picture, is essential
fr = vR.info.VideoFrameRate;
vW = vision.VideoFileWriter('filename.avi','AudioInputPort',1,'FrameRate',fr);
pic = imread('picture.png'); % read picture
[a1,b1,~] = size(pic); % get picture size to be resized
for i = 1:nfr
    [I,audio] = vR();
    I = permute(I,[2,1,3]); % rotate 90 degrees
    if i == 1 % resize the picture
        [a,b,~] = size(I);
        pic = imresize(pic,[a,a/a1*b1]); % resize the pic to the same height as the movie frame, with proportional width
    end
    I = [I pic]; % combine picture and movie frame
    vW(I,audio); % write frame and audio
end
release(vR);
release(vW);
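If you would rather keep the reader's default 'single' output, an alternative sketch (same placeholder file names as above; not tested against these exact files) converts each decoded frame with im2uint8, which rescales [0,1] single data to uint8 so it matches the class of the imread picture:
vR = vision.VideoFileReader('movie.mp4','AudioOutputPort',1);   % frames come out as 'single' in [0,1]
fr = vR.info.VideoFrameRate;
vW = vision.VideoFileWriter('filename.avi','AudioInputPort',1,'FrameRate',fr);
pic = imread('picture.png');
while ~isDone(vR)
    [I,audio] = vR();
    I = im2uint8(I);                          % rescale [0,1] single -> uint8
    I = permute(I,[2,1,3]);                   % same transpose as above
    I = [I imresize(pic,[size(I,1) NaN])];    % match heights, keep aspect ratio
    vW(I,audio);                              % write frame and audio
end
release(vR);
release(vW);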
I wrote a script that presents a movie and records from the webcam. The problem is that it plays the movie in slow motion. I don't think the problem comes from taking the snapshot (I checked that). This is the code:
clear cam
v = VideoReader('movie.MP4');
cam = webcam;
vidWriter = VideoWriter('webcam.avi');
open(vidWriter);
% pre-load the frames
for i = 1:50
    vtemp = readFrame(v);
    vid{i} = vtemp;
end
for index = 1:50
    % Acquire frame for processing
    img = snapshot(cam);
    % Write frame to video
    writeVideo(vidWriter, img);
    % show the vid frame
    imshow(vid{index});
end
close(vidWriter);
clear cam
Any assistance will be much appreciated.
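A sketch of a faster display loop that I am considering (imshow rebuilds the axes on every call; creating the image object once and then only updating its CData, followed by drawnow limitrate, is usually much faster; the 50-frame count is kept from above):
clear cam
v = VideoReader('movie.MP4');
cam = webcam;
vidWriter = VideoWriter('webcam.avi');
open(vidWriter);
% pre-load the frames, as above
for i = 1:50
    vid{i} = readFrame(v);
end
hIm = imshow(vid{1});                % create the image object once
for index = 1:50
    img = snapshot(cam);             % acquire webcam frame
    writeVideo(vidWriter, img);      % write it to the output video
    set(hIm,'CData',vid{index});     % update only the pixels of the shown movie frame
    drawnow limitrate                % flush graphics without a full redraw
end
close(vidWriter);
clear cam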
I want to make a video with MATLAB using the VideoWriter class. I have several frames of an object as single point clouds, and I want to put the frames one after another into my video. For now, I am doing it like this:
function [] = makeMyVideo(videoPath, framerate, filenamestoplot)
writerObj = VideoWriter(videoPath);
writerObj.FrameRate = framerate;
open(writerObj);
figure;
ptHandles = onePlot(filenamestoplot);
axis off; view(54,12);
axis tight
set(gca,'nextplot','add');
set(gcf,'Renderer','zbuffer');
firstCameraPos = campos;
for k = 1:numel(filenamestoplot)
    pause(0.1);
    delete(ptHandles);
    ptHandles = onePlot(filenamestoplot(k));
    axis off; view(54,12);
    campos(firstCameraPos);
    frame = getframe(gcf);
    writeVideo(writerObj,frame);
end
close(writerObj);
end
This works, but my 3D object makes little jumps between frames. I tried to fix this by setting the camera position for every frame, but unfortunately that did not solve the problem. Do you have any idea how to fix this?
Thanks!
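For reference, a sketch of the change I have in mind (the little jumps usually come from axis tight recomputing the limits for every new point cloud; freezing the limits and the camera view angle after the first frame should keep the object steady; onePlot and filenamestoplot are the same helper and inputs as above, and the first frame is assumed to roughly cover the full extent of the object):
function makeMyVideoFixed(videoPath, framerate, filenamestoplot)
% Same structure as makeMyVideo, but the axis limits and the camera
% view angle are frozen after the first frame.
writerObj = VideoWriter(videoPath);
writerObj.FrameRate = framerate;
open(writerObj);
figure;
ptHandles = onePlot(filenamestoplot);
axis off; view(54,12); axis tight;
axis manual                      % keep the limits found by 'axis tight' fixed
camva('manual');                 % fix the camera view angle as well
firstCameraPos = campos;
set(gca,'nextplot','add');
for k = 1:numel(filenamestoplot)
    delete(ptHandles);
    ptHandles = onePlot(filenamestoplot(k));
    axis off; view(54,12);
    campos(firstCameraPos);
    frame = getframe(gcf);
    writeVideo(writerObj,frame);
end
close(writerObj);
end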
I am currently trying to figure out how to detect a face that is 5 m away from the camera while keeping its facial features clear enough for the user to see. The code I am working on is shown below.
faceDetector = vision.CascadeObjectDetector();
% Get the input device using the Image Acquisition Toolbox; resolution = 640x480 to improve performance
obj = imaq.VideoDevice('winvideo', 1, 'YUY2_640x480','ROI', [1 1 640 480]);
set(obj,'ReturnedColorSpace', 'rgb');
figure('menubar','none','tag','webcam');
while (true)
    frame = step(obj);
    bbox = step(faceDetector,frame);
    boxInserter = vision.ShapeInserter('BorderColor','Custom',...
        'CustomBorderColor',[255 255 0]);
    videoOut = step(boxInserter, frame, bbox);
    imshow(videoOut,'border','tight');
    f = findobj('tag','webcam');
    if (isempty(f))
        [hueChannel,~,~] = rgb2hsv(frame);
        % Display the hue channel data and draw the bounding box around the face.
        figure, imshow(hueChannel), title('Hue channel data');
        rectangle('Position',bbox,'EdgeColor','r','LineWidth',1)
        hold off
        noseDetector = vision.CascadeObjectDetector('Nose');
        faceImage = imcrop(frame,bbox);
        imshow(faceImage)
        noseBBox = step(noseDetector,faceImage);
        noseBBox(1:1) = noseBBox(1:1) + bbox(1:1);
        videoInfo = info(obj);
        ROI = get(obj,'ROI');
        VideoSize = [ROI(3) ROI(4)];
        videoPlayer = vision.VideoPlayer('Position',[300 300 VideoSize+30]);
        tracker = vision.HistogramBasedTracker;
        initializeObject(tracker, hueChannel, bbox);
        while (1)
            % Extract the next video frame
            frame = step(obj);
            % RGB -> HSV
            [hueChannel,~,~] = rgb2hsv(frame);
            % Track using the hue channel data
            bbox = step(tracker, hueChannel);
            % Insert a bounding box around the object being tracked
            videoOut = step(boxInserter, frame, bbox);
            % Insert text coordinates
            % Display the annotated video frame using the video player object
            step(videoPlayer, videoOut);
            pause(.2)
        end
        % Release resources
        release(obj);
        release(videoPlayer);
        close(gcf)
        break
    end
    pause(0.05)
end
release(obj)
% remove video object from memory
delete(handles.vid);
I am trying to work out from this code how far away a face can be while still being tracked, but I couldn't figure out which part handles that. Thanks!
Not sure what your question is, but try this example. It uses the KLT algorithm, which, IMHO, is more robust for face tracking than CAMShift. It also uses the webcam interface in base MATLAB, which is very easy.
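The gist of that approach, condensed (this is only a sketch of the KLT idea, not the full MathWorks example; it assumes the MATLAB webcam support package is installed and that a face is visible in the first frame):
cam = webcam;                                          % webcam interface in base MATLAB
faceDetector = vision.CascadeObjectDetector;
tracker = vision.PointTracker('MaxBidirectionalError',2);
% Detect a face once and pick corner points inside its bounding box
frame = snapshot(cam);
gray = rgb2gray(frame);
bbox = step(faceDetector, gray);
points = detectMinEigenFeatures(gray, 'ROI', bbox(1,:));
initialize(tracker, points.Location, gray);
player = vision.VideoPlayer;
for k = 1:200                                          % arbitrary number of frames
    frame = snapshot(cam);
    [pts, valid] = step(tracker, rgb2gray(frame));     % track the points with KLT
    out = insertMarker(frame, pts(valid,:), '+');      % mark the tracked points
    step(player, out);
end
release(player);
clear cam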
I need to detect the number of frames in which a face appears in a video. I looked into the sample code using the CAMShift algorithm provided on the MathWorks site (http://www.mathworks.in/help/vision/examples/face-detection-and-tracking-using-camshift.html). Is there a way of knowing whether a face has appeared in a particular frame?
I'm new to MATLAB. I'm assuming the step function will return a false value if no face is detected (the condition fails, similar to C). Is there a possible solution? I think using MinSize might also be part of the solution.
I am not concerned about the computational burden, although a faster approach would be appreciated. My current code is given below:
clc;
clear all;
videoFileReader = vision.VideoFileReader('Teapot.mp4', 'VideoOutputDataType', 'uint8', 'ImageColorSpace', 'Intensity');
video = VideoReader('Teapot.mp4');
numOfFrames = video.NumberOfFrames;
faceDetector = vision.CascadeObjectDetector();
opFolder = fullfile(cd, 'Face Detected Frames');
frameCount = 0;
shotCount = 0;
while ~isDone(videoFileReader)
    videoFrame = step(videoFileReader);
    bbox = step(faceDetector, videoFrame);
    frameCount = frameCount + 1;   % was "framCount", which left frameCount stuck at 0
    for i = 1:size(bbox,1)
        shotCount = shotCount + 1;
        rectangle('Position',bbox(i,:),'LineWidth', 2, 'EdgeColor', [1 1 0]);
        videoOut = insertObjectAnnotation(videoFrame,'rectangle',bbox,'Face');
        progIndication = sprintf('Face has been detected in frame %d of %d frames', frameCount, numOfFrames);
        figure, imshow(videoOut), title(progIndication);
    end
end
release(videoFileReader);
You can use the vision.CascadeObjectDetector object to detect faces in any particular frame. If it does not detect any faces, its step method will return an empty array. The problem is that the face detection algorithm is not perfect. Sometimes it detects false positives, i.e. it detects faces where there are none. You can try to mitigate that by setting the MinSize and MaxSize properties, assuming you know what size faces you expect to find.
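A minimal sketch of that counting idea (the MinSize value is only an example; tune it, and optionally MaxSize, to the face sizes you expect in your video):
videoFileReader = vision.VideoFileReader('Teapot.mp4','VideoOutputDataType','uint8');
faceDetector = vision.CascadeObjectDetector('MinSize',[60 60]);   % example minimum face size
frameCount = 0;        % all frames
faceFrameCount = 0;    % frames with at least one detection
while ~isDone(videoFileReader)
    videoFrame = step(videoFileReader);
    bbox = step(faceDetector, videoFrame);
    frameCount = frameCount + 1;
    if ~isempty(bbox)                  % empty bbox means no face in this frame
        faceFrameCount = faceFrameCount + 1;
    end
end
release(videoFileReader);
fprintf('Faces were detected in %d of %d frames.\n', faceFrameCount, frameCount);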