I want to make a video in MATLAB using the VideoWriter class. I have several frames of an object as single point clouds, and I want to put the frames one after another into my video. At the moment I am doing it like this:
function [] = makeMyVideo(videoPath, framerate, filenamestoplot)
writerObj = VideoWriter(videoPath);
writerObj.FrameRate = framerate;
open(writerObj);
figure;
ptHandles = onePlot(filenamestoplot);
axis off;
view(54,12);
axis tight
set(gca,'nextplot','add');
set(gcf,'Renderer','zbuffer');
firstCameraPos = campos;
for k = 1:numel(filenamestoplot)
    pause(0.1);
    delete(ptHandles);
    ptHandles = onePlot(filenamestoplot(k));
    axis off;
    view(54,12);
    campos(firstCameraPos);
    frame = getframe(gcf);
    writeVideo(writerObj,frame);
end
close(writerObj);
end
This works, but my 3D object "makes little jumps" between frames. I tried to fix this by setting the camera position for every frame, but unfortunately that did not solve the problem. Do you have any idea how to fix this?
Thanks!
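One common cause of such jumps is that MATLAB recomputes the axis limits for every new plot, so the scene shifts slightly between frames even when the camera position is fixed. A minimal sketch of locking the limits and camera once, before the loop (`onePlot` and `filenamestoplot` are the helpers from the question above):

```matlab
% Sketch: freeze axis limits and camera before writing any frames, so
% later plots cannot trigger automatic rescaling of the axes.
figure;
ptHandles = onePlot(filenamestoplot);
axis off;
axis vis3d;                          % keep the aspect ratio fixed during 3-D camera work
view(54, 12);
xlim(xlim); ylim(ylim); zlim(zlim);  % re-apply current limits, switching them to 'manual' mode
```

With the limits in 'manual' mode, the per-frame calls inside the loop can stay as they are; the axes will no longer rescale when a new point cloud is plotted.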
I am relatively new to image processing and have never attempted to do anything with images in MATLAB, so forgive me if I am making some very rookie errors.
I am attempting to make a program that will track ants in a video. The video is taken from a stationary camera and records the ants from a bird's-eye perspective. However, I am having trouble producing reliable tracks of the ants. Initially, I used the vision.ForegroundDetector object, but there were multiple issues:
1.) Stationary ants were not detected
2.) There was too much overlap between objects (high levels of occlusion)
A friend of mine recommended using a larger gap between the compared frames: instead of subtracting frame 1 from frame 2, subtract frame 1 from frame 30 (one second apart), since ants that move very little are then more likely to show up in the difference image.
Below is the code I have so far. It is a bit of a shot-in-the-dark attempt to solve the problem, as I am running out of ideas:
i = 1;
k = 1;
n = 1;
Video = {};
SubtractedVideo = {};
FilteredVideo = {};
videoFReader = vision.VideoFileReader('001.mp4',...
'ImageColorSpace', 'Intensity', 'VideoOutputDataType', 'uint8');
videoPlayer = vision.VideoPlayer;
blobby = vision.BlobAnalysis('BoundingBoxOutputPort', true, ...
'AreaOutputPort', true, 'CentroidOutputPort', true, ...
'MinimumBlobArea', 1);
shapeInserter = vision.ShapeInserter('BorderColor','White');
while ~isDone(videoFReader) % read all frames of the video
    frame = step(videoFReader);
    Video{1, i} = frame;
    i = i+1;
end
% Perform subtraction
for j = 1:1:numel(Video)-60
    CurrentFrame = Video{1,j};
    FutureFrame = Video{1,j+60};
    SubtractedImage = imsubtract(CurrentFrame, FutureFrame);
    SubtractedVideo{1,j} = SubtractedImage;
    ImFiltered = imgaussfilt(SubtractedImage, 2);
    BWIm = im2bw(ImFiltered, 0.25);
    FilteredVideo{1,j} = BWIm;
end
for a = n:numel(FilteredVideo)
    frame = Video{1, n};
    bbox = step(blobby, FilteredVideo{1, k});
    out = step(shapeInserter, frame, bbox);
    step(videoPlayer, out);
    k = k+1;
end
Currently, when I run the code, I get the following error on the line out = step(shapeInserter, frame, bbox):
'The Points input must be a 4-by-N matrix. Each column specifies a different rectangle and is of the form [row column height width].'
My questions are:
1.) Is this the best way to try and solve the problem I'm having? Is there potentially an easier solution?
2.) What does this error mean? How do I solve the issue?
I appreciate any help anyone can give, thank you!
I'm using MATLAB to show the frames of a video sequence; below is my code:
seq=sprintf('walk%d.avi',v); % video's name
videoReader = vision.VideoFileReader(seq);
vidObj = VideoReader(seq);
numFrames = vidObj.NumberOfFrames
for i = 1:numFrames
    frame = step(videoReader); % read the next video frame
    imshow(frame)
end
Actually it worked fine previously, and I have no idea when or why it started showing a rotated image. I hope you guys can help me. Thank you.
The most up-to-date MATLAB function to vertically flip the frame back is:
FlippedFrame = flip(frame,1);
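Applied inside the read loop from the question, it would look like this (a sketch reusing the question's variable names):

```matlab
for i = 1:numFrames
    frame = step(videoReader);   % read the next video frame
    frame = flip(frame, 1);      % flip vertically (along the first dimension) before display
    imshow(frame)
end
```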
I am currently trying to figure out how to detect a face that is 5 m away from the source while keeping its facial features clear enough for the user to see. The code I am working on is shown below.
faceDetector = vision.CascadeObjectDetector();
%Get the input device using image acquisition toolbox,resolution = 640x480 to improve performance
obj = imaq.VideoDevice('winvideo', 1, 'YUY2_640x480', 'ROI', [1 1 640 480]);
set(obj, 'ReturnedColorSpace', 'rgb');
figure('menubar','none','tag','webcam');
while (true)
    frame = step(obj);
    bbox = step(faceDetector, frame);
    boxInserter = vision.ShapeInserter('BorderColor','Custom',...
        'CustomBorderColor',[255 255 0]);
    videoOut = step(boxInserter, frame, bbox);
    imshow(videoOut, 'border', 'tight');
    f = findobj('tag', 'webcam');
    if (isempty(f))
        [hueChannel,~,~] = rgb2hsv(frame);
        % Display the hue channel data and draw the bounding box around the face.
        figure, imshow(hueChannel), title('Hue channel data');
        rectangle('Position', bbox, 'EdgeColor', 'r', 'LineWidth', 1)
        hold off
        noseDetector = vision.CascadeObjectDetector('Nose');
        faceImage = imcrop(frame, bbox);
        imshow(faceImage)
        noseBBox = step(noseDetector, faceImage);
        noseBBox(1:1) = noseBBox(1:1) + bbox(1:1);
        videoInfo = info(obj);
        ROI = get(obj, 'ROI');
        VideoSize = [ROI(3) ROI(4)];
        videoPlayer = vision.VideoPlayer('Position', [300 300 VideoSize+30]);
        tracker = vision.HistogramBasedTracker;
        initializeObject(tracker, hueChannel, bbox);
        while (1)
            % Extract the next video frame
            frame = step(obj);
            % RGB -> HSV
            [hueChannel,~,~] = rgb2hsv(frame);
            % Track using the hue channel data
            bbox = step(tracker, hueChannel);
            % Insert a bounding box around the object being tracked
            videoOut = step(boxInserter, frame, bbox);
            % Insert text coordinates
            % Display the annotated video frame using the video player object
            step(videoPlayer, videoOut);
            pause(.2)
        end
        % Release resources
        release(obj);
        release(videoPlayer);
        close(gcf)
        break
    end
    pause(0.05)
end
release(obj)
% remove video object from memory
delete(handles.vid);
I am working through this code to figure out the distance it can cover when tracking a face, but I couldn't figure out which part handles that. Thanks!
I'm not sure what your question is, but try this example. It uses the KLT algorithm, which, IMHO, is more robust for face tracking than CAMShift. It also uses the webcam interface in base MATLAB, which is very easy to use.
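A minimal sketch of that KLT approach (assuming the Computer Vision Toolbox and the webcam support package are installed; the tracker parameters and the frame count are illustrative, and at least one face must be visible in the first frame):

```matlab
cam = webcam();                                    % base MATLAB webcam interface
faceDetector = vision.CascadeObjectDetector();     % Viola-Jones face detector
pointTracker = vision.PointTracker('MaxBidirectionalError', 2);

% Detect a face once, then pick corner points inside its bounding box to track.
frame = snapshot(cam);
gray = rgb2gray(frame);
bbox = step(faceDetector, gray);
points = detectMinEigenFeatures(gray, 'ROI', bbox(1,:));
initialize(pointTracker, points.Location, gray);

for k = 1:100                                      % track for 100 frames
    frame = snapshot(cam);
    [trackedPoints, validity] = step(pointTracker, rgb2gray(frame));
    out = insertMarker(frame, trackedPoints(validity, :), '+');
    imshow(out);
end
clear cam;
```

Unlike CAMShift, the KLT tracker follows individual corner points, so it degrades gracefully when part of the face is occluded; in a fuller implementation you would re-detect the face once too many points are lost.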
I need to detect the number of frames in which a face appears in a video. I looked into the sample code using the CAMShift algorithm provided on the MathWorks site (http://www.mathworks.in/help/vision/examples/face-detection-and-tracking-using-camshift.html). Is there a way of knowing whether a face has appeared in a particular frame?
I'm new to MATLAB. I'm assuming the step function will return a false value if no face is detected (the condition fails, similar to C). Is there a possible solution? I think using MinSize is also a possible option.
I am not concerned about the computational burden - although a faster approach for the same would be appreciated. My current code is given below:
clc;
clear all;
videoFileReader = vision.VideoFileReader('Teapot.mp4', 'VideoOutputDataType', 'uint8', 'ImageColorSpace', 'Intensity');
video = VideoReader('Teapot.mp4');
numOfFrames = video.NumberOfFrames;
faceDetector = vision.CascadeObjectDetector();
opFolder = fullfile(cd, 'Face Detected Frames');
frameCount = 0;
shotCount = 0;
while ~isDone(videoFileReader)
    videoFrame = step(videoFileReader);
    bbox = step(faceDetector, videoFrame);
    frameCount = frameCount + 1;
    for i = 1:size(bbox,1)
        shotCount = shotCount + 1;
        rectangle('Position', bbox(i,:), 'LineWidth', 2, 'EdgeColor', [1 1 0]);
        videoOut = insertObjectAnnotation(videoFrame, 'rectangle', bbox, 'Face');
        progIndication = sprintf('Face has been detected in frame %d of %d frames', shotCount, numOfFrames);
        figure, imshow(videoOut), title(progIndication);
    end
end
release(videoFileReader);
You can use the vision.CascadeObjectDetector object to detect faces in any particular frame. If it does not detect any faces, its step method will return an empty array. The problem is that the face detection algorithm is not perfect. Sometimes it detects false positives, i.e. it detects faces where there are none. You can try to mitigate that by setting the MinSize and MaxSize properties, assuming you know what size faces you expect to find.
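For counting frames, it is enough to test whether the returned bounding-box array is empty. A minimal sketch along those lines (reusing 'Teapot.mp4' from the question; the MinSize value is illustrative):

```matlab
% Count the frames in which at least one face is detected.
faceDetector = vision.CascadeObjectDetector('MinSize', [50 50]);  % tune to the expected face size
reader = vision.VideoFileReader('Teapot.mp4');
framesWithFaces = 0;
totalFrames = 0;
while ~isDone(reader)
    frame = step(reader);
    totalFrames = totalFrames + 1;
    bbox = step(faceDetector, frame);
    if ~isempty(bbox)                 % empty array => no face in this frame
        framesWithFaces = framesWithFaces + 1;
    end
end
release(reader);
fprintf('Face detected in %d of %d frames\n', framesWithFaces, totalFrames);
```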
I have an inverted pendulum video here which is 33 seconds long. The objective is to plot a red point at the center of the moving rectangular part of the pendulum and to draw a line along the black stick, calculating its angle for every frame.
I have processed the video frame by frame. Then I used Object Detection In A Cluttered Scene Using Point Feature Matching. It would be good if I had access to the matched points' indices; then I could easily calculate the angle.
I also thought I could take the moving rectangular part's region and search for similar regions in the following frames, but that solution seems too local.
I do not know which techniques to apply.
clear all;
clc;

hVideoFileReader = vision.VideoFileReader;
hVideoPlayer = vision.VideoPlayer;
hVideoFileReader.Filename = 'inverted-pendulum.avi';
hVideoFileReader.VideoOutputDataType = 'single';
isFirstFrame = true;   % must be initialized before the loop

while ~isDone(hVideoFileReader)
    frame = step(hVideoFileReader);       % read each frame only once
    grayFrame = rgb2gray(frame);
    if isFirstFrame
        part = grayFrame(202:266, 202:282);   % moving part's region
        isFirstFrame = false;
        subplot(1,2,1);
        imshow(part);
    end
    partPoints = detectSURFFeatures(part);
    grayFramePoints = detectSURFFeatures(grayFrame);
    hold on;
    subplot(1,2,1), plot(partPoints.selectStrongest(10));
    subplot(1,2,2), imshow(grayFrame);
    subplot(1,2,2), plot(grayFramePoints.selectStrongest(20));
    frame2 = pointPendulumCenter(frame);
    frame3 = plotLineAlongStick(frame2);
    step(hVideoPlayer, frame3);
    hold off;
end
release(hVideoFileReader);
release(hVideoPlayer);
%% Function to find the moving part's center point and plot a red dot on it.
function f = pointPendulumCenter(frame)
    f = frame;   % not implemented yet
end

%% Function to plot a red line along the stick after calculating its angle.
function f = plotLineAlongStick(frame)
    f = frame;   % not implemented yet
end
It would make the problem much easier if your camera did not move. If you take the video with a stationary camera (e.g. mounted on a tripod), then you can use vision.ForegroundDetector to segment moving objects from the static background.
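A minimal sketch of that approach (the file name is taken from the question; the NumTrainingFrames and MinimumBlobArea values are illustrative and would need tuning):

```matlab
% Segment moving objects from the static background and mark their centroids.
reader = vision.VideoFileReader('inverted-pendulum.avi');
detector = vision.ForegroundDetector('NumTrainingFrames', 40);
blob = vision.BlobAnalysis('AreaOutputPort', false, ...
    'BoundingBoxOutputPort', false, 'CentroidOutputPort', true, ...
    'MinimumBlobArea', 100);
player = vision.VideoPlayer;
while ~isDone(reader)
    frame = step(reader);
    mask = step(detector, frame);      % logical foreground mask
    centroids = step(blob, mask);      % centroids of the moving blobs
    out = insertMarker(frame, centroids, 'circle', 'Color', 'red');
    step(player, out);
end
release(reader);
release(player);
```

The largest blob's centroid would give you the red point for the moving rectangle; fitting a line to the foreground pixels of the stick region would then give you the angle.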