Matlab, vision.peopledetector with Motion-Based Multiple Object Tracking? - matlab

I have been researching methods to detect only people in video from a security camera. I want to combine vision.PeopleDetector with vision.BlobAnalysis and vision.ForegroundDetector, but it doesn't work.
It should behave like the Motion-Based Multiple Object Tracking example, but detect only humans. I can't seem to get it to work.
Here is what I have done so far without using vision.BlobAnalysis and vision.ForegroundDetector. It is not accurate at all and can't count people:
video = VideoReader('atrium.mp4');
peopleDetector = vision.PeopleDetector;
videoPlayer = vision.VideoPlayer;
while hasFrame(video)
    img = readFrame(video);
    [bboxes, scores] = step(peopleDetector, img);
    D = 1;
    frame = insertObjectAnnotation(img, 'rectangle', bboxes, D);
    step(videoPlayer, frame);
end

OK. So here's what I think is happening: the resolution of the atrium.mp4 video is not high enough to make reliable detections using vision.PeopleDetector. Here's what I did to modify your code:
video = VideoReader('atrium.mp4');
peopleDetector = vision.PeopleDetector;
videoPlayer = vision.VideoPlayer;
while hasFrame(video)
    img = readFrame(video);
    img = imresize(img, 3); % resize the frame to make people large enough
    [bboxes, scores] = step(peopleDetector, img);
    D = 1;
    frame = insertObjectAnnotation(img, 'rectangle', bboxes, D);
    step(videoPlayer, frame);
end
I now see fairly consistent detections in the video, but they are still not tracked continuously, and there seem to be some erroneous detections (one, specifically, in the bottom-right corner of the video). To avoid these issues, I would do something like what this demo does:
https://www.mathworks.com/help/vision/examples/track-face-raspberry-pi2.html
In essence, this demo runs face detection only when there is no active track, and switches over to tracking once a detection has been made. That way, your processing loop is significantly faster (tracking is less computationally demanding than detection), and you generally get higher-fidelity tracking than independent detections in each frame. You could also add heuristics, such as treating a detection that does not move at all for more than 50 frames as a false positive.
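A minimal sketch of that detect-then-track pattern for your atrium video, using vision.PointTracker (KLT) to follow corner features inside the last detected box. The single-track simplification, the valid-point threshold, and the re-detection policy are my assumptions, not taken from the demo:

```matlab
% Sketch: detect people only when no track is active, otherwise track (KLT).
video = VideoReader('atrium.mp4');
detector = vision.PeopleDetector;
tracker = vision.PointTracker('MaxBidirectionalError', 2);
player = vision.VideoPlayer;
tracking = false;

while hasFrame(video)
    img = readFrame(video);
    img = imresize(img, 3);          % as above: enlarge people for the detector
    gray = rgb2gray(img);
    if ~tracking
        bboxes = step(detector, img);
        if ~isempty(bboxes)
            % Track corner features inside the first detection only (a
            % simplification; the real demo manages one track per object).
            points = detectMinEigenFeatures(gray, 'ROI', bboxes(1,:));
            if points.Count > 0
                initialize(tracker, points.Location, gray);
                tracking = true;
            end
        end
    else
        [points, valid] = step(tracker, gray);
        if nnz(valid) < 10           % track lost: fall back to detection
            tracking = false;
            release(tracker);
        else
            img = insertMarker(img, points(valid, :), '+');
        end
    end
    step(player, img);
end
```

The detector then only runs on the (few) frames where no track exists, which is where most of the speedup comes from.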


Fingers movement tracing in a video

Please, I'm trying to detect the fingers' movement in a video. First, I'd like to apply skin-color detection to separate the hand from the background; then I'll find the hand contour and calculate the convex points to detect the fingers. What I want now from this video is a new video showing only the movement of the two tapping fingers (or their contour), as shown in this figure.
I used this code to detect the skin color:
function OutImg = Skin_Detect(I)
% I = imread('D:\New Project\Movie Frames from RLC_L_FTM_IP60\Frame 0002.png');
I = double(I);
if max(I(:)) <= 1       % frames from vision.VideoFileReader come in [0,1]
    I = I * 255;        % rescale so the 0-255 Cb/Cr thresholds below apply
end
hsv = rgb2hsv(I / 255); % rgb2hsv returns one HSV array, not three outputs
hue = hsv(:,:,1);
cb = 0.148*I(:,:,1) - 0.291*I(:,:,2) + 0.439*I(:,:,3) + 128;
cr = 0.439*I(:,:,1) - 0.368*I(:,:,2) - 0.071*I(:,:,3) + 128;
% Vectorized form of the original per-pixel loop:
segment = 138 <= cr & cr <= 169 & ...
          136 <= cb & cb <= 200 & ...
          0.01 <= hue & hue <= 0.2;
% imshow(segment);
OutImg = I .* repmat(segment, [1 1 3]);
% figure, imshow(uint8(OutImg));
The same code works perfectly when I apply it to an image, but I detect nothing when I apply it to a video, as follows:
videoFReader = vision.VideoFileReader('RLC_L_FT_IP60.m4v');
% Create a video player object for displaying video frames.
videoPlayer = vision.DeployableVideoPlayer;
% Display the original video
while ~isDone(videoFReader)
    videoFrame = step(videoFReader);
    % Track using the Hue channel data
    Out = Skin_Detect(videoFrame);
    step(videoPlayer, Out);
end
Please, any suggestions or ideas to solve this problem?
I'll be very grateful if anyone can help with this, even with different code.
Thank you in advance.
I had a similar problem. I think the proper way is to use a classifier, even if it is a "simple" one. Here are the steps I followed in my solution:
1) I used the RGB color space and a Mahalanobis distance for the skin-color model. It is fast and works pretty well.
2) Connected components: a simple morphological close operation with a small structuring element can be used to join areas that might become disconnected during imperfect thresholding, like the fingers of the hand.
3) Feature extraction: area, perimeter, and ratio of area over perimeter, for example.
4) Classification: use an SVM classifier to perform the final classification. I hope you have labeled training data for the process.
I am not solving exactly your concrete problem, but maybe it gives you some ideas... :)
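Steps 1) and 2) above can be sketched as follows. The labeled skin-pixel sample skinRGB, the test image name, and the distance threshold are assumptions for illustration, not part of the original solution:

```matlab
% Sketch: skin segmentation by Mahalanobis distance in RGB space.
% skinRGB is an N-by-3 matrix of labeled skin pixels on a 0-255 scale
% (assumed available from your training data).
mu = mean(skinRGB);                  % 1x3 mean of the skin model
Sigma = cov(skinRGB);                % 3x3 covariance of the skin model

I = im2double(imread('hand.png'));   % hypothetical test image
pix = reshape(I, [], 3) * 255;       % all pixels as rows, 0-255 scale

d = pix - mu;                        % implicit expansion (R2016b+)
d2 = sum((d / Sigma) .* d, 2);       % squared Mahalanobis distance per pixel

% Threshold at ~3 standard deviations (an assumed value), then close gaps:
mask = reshape(d2 < 9, size(I,1), size(I,2));
mask = imclose(mask, strel('disk', 3)); % step 2: join broken skin regions
```

From the mask you can then extract the connected-component features (area, perimeter, etc.) mentioned in step 3) with regionprops and feed them to the SVM.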
If you don't insist on writing it yourself, you can use Google's MediaPipe for hand and finger tracking.
Info:
https://ai.googleblog.com/2019/08/on-device-real-time-hand-tracking-with.html
Examples for Desktop and Android:
https://github.com/google/mediapipe/blob/master/mediapipe/docs/hand_tracking_mobile_gpu.md

Matlab: How to distribute workload?

I am currently trying to record footage from a camera and display it in a MATLAB figure window using the image command. The problem I'm facing is the slow redraw of the image, which of course affects my whole script. Here's some quick pseudocode to explain my program:
figure
while(true)
    Frame = AcquireImageFromCamera(); % Mex, returns current frame
    image(Frame);
end
AcquireImageFromCamera() is a mex coming from an API for the camera.
Now, without displaying the acquired image, the script easily grabs all frames coming from the camera (it records with a limited frame rate). But as soon as I display every image for a real-time video stream, it slows down terribly, and frames are lost because they are not captured.
Does anyone have an idea how I could split the process of acquiring images and displaying them, for example to use multiple CPU cores? Parallel computing is the first thing that pops into my mind, but the Parallel Computing Toolbox works entirely differently from what I want here...
Edit: I'm a student, and in my faculty's MATLAB version all toolboxes are included :)
Running two threads or workers is going to be a bit tricky. Instead of that, can you simply update the screen less often? Something like this:
figure
count = 0;
while(true)
    Frame = AcquireImageFromCamera(); % Mex, returns current frame
    count = count + 1;
    if count == 5
        count = 0;
        image(Frame);
    end
end
Another thing to try is to call image() just once to set up the plot, then update pixels directly. This should be much faster than calling image() every frame. You do this by getting the image handle and changing the CData property.
h = image(Frame);           % first frame only
set(h, 'CData', newPixels); % later frames: update pixels like this
Note that updating pixels like this may then require a call to drawnow to show the change on screen.
I'm not sure how precise your pseudo code is, but creating the image object takes quite a bit of overhead. It is much faster to create it once and then just set the image data.
figure
Frame = AcquireImageFromCamera(); % first frame
himg = image(Frame);              % create the image object once
while(true)
    Frame = AcquireImageFromCamera(); % Mex, returns current frame
    set(himg, 'CData', Frame);
    drawnow; % also make sure the screen is actually updated
end
MATLAB has a video player in the Computer Vision Toolbox, vision.VideoPlayer, which would be faster than using image().
player = vision.VideoPlayer;
while(true)
    Frame = AcquireImageFromCamera(); % Mex, returns current frame
    step(player, Frame);
end

Varying intensity in the frames captured by camera (uEye)

I have been using MATLAB to capture images from a uEye camera at regular intervals and use them for processing. The following is the small piece of code that I am using to achieve that:
h=actxcontrol('UEYECAM.uEyeCamCtrl.1','position',[250 100 640 480]);
d=h.InitCamera(1);
check = 1;
str_old = 'img000.jpeg';
m = h.SaveImage('img000.jpeg');
pause(60);
And the following are the images captured by the camera. There was no change in the lighting conditions outside, but you can notice the difference in the intensity levels between the captured images.
Is there any reason for this?
Solved thanks to Zaphod:
Allow some time for the camera to adjust its exposure. I did it by moving the pause statement to just after the InitCamera() command, to delay the image capture and give the camera enough time to adjust itself.
h=actxcontrol('UEYECAM.uEyeCamCtrl.1','position',[250 100 640 480]);
d=h.InitCamera(1);
pause(60);
check = 1;
str_old = 'img000.jpeg';
m = h.SaveImage('img000.jpeg');

How to speed up frame rates in MATLAB? [duplicate]

The getsnapshot function takes a lot of time to execute, since (I guess) it initializes the webcam every time it is called. This is a problem if you want to acquire images at a high frame rate.
A trick I discovered by accident is to call the preview function, which keeps the webcam handle open and makes getsnapshot almost instantaneous, but it leaves a small preview window open:
% dummy example
cam = videoinput(...);
preview(cam);
while(1)
    img = getsnapshot(cam);
    % do stuff
end
Is there a "cleaner" way to speedup getsnapshot? (without preview window opened)
You could use the "machine vision" toolbox (the Computer Vision System Toolbox), which is specially built for vision applications. See the code below:
vid = videoinput('winvideo', 1, 'RGB24_320x240'); % select input device
hvpc = vision.VideoPlayer;                        % create video player object
src = getselectedsource(vid);
vid.FramesPerTrigger = 1;
vid.TriggerRepeat = Inf;
vid.ReturnedColorspace = 'rgb';
src.FrameRate = '30';
start(vid)
% main loop for image acquisition
for t = 1:500
    imgO = getdata(vid, 1, 'uint8'); % get image from camera
    hvpc.step(imgO);                 % show current image in player
end
As you can see, you can acquire the image with getdata. The bottleneck in video applications in MATLAB was the preview window, which slowed the code down substantially. The new vision.VideoPlayer is a lot faster (I have used this code in real-time vision applications in MATLAB; my first version, written without the vision toolbox, achieved frame rates of about 18 fps, and with the new toolbox I got to around 70!).
Note: if you need speed in image applications using MATLAB, you should really consider using the OpenCV libraries through mex to get decent performance in image manipulation.

Improve animation rendering in Matlab

I have written a code to create an animation (satellite movement around the Earth). When I run it on its own, it works fine. However, when it is modified to be part of a much more complex code in a MATLAB GUI, the results change (mainly because of the bigger number of points to plot). I have also noticed that with the OpenGL renderer the movement of the satellite is quicker than with the other renderers (Painters and Zbuffer). I do not know if there are further possibilities to improve the rendering of the satellite movement. I think the key is, perhaps, changing the code that creates the actual position of the satellite (handles.psat) and its trajectory over time (handles.tray):
handles.tray = zeros(1,Fin);
handles.psat = line('parent',ah4,'XData',Y(1,1),'YData',Y(1,2),...
    'ZData',Y(1,3),'Marker','o','MarkerSize',10,'MarkerFaceColor','b');
...
while (k < Fin)
    az = az + 0.01745329252;
    set(hgrot,'Matrix',makehgtform('zrotate',az));
    handles.tray(k) = line([Y(k-1,1) Y(k,1)],[Y(k-1,2) Y(k,2)],...
        [Y(k-1,3) Y(k,3)],'Color','red','LineWidth',3);
    set(handles.psat,'XData',Y(k,1),'YData',Y(k,2),'ZData',Y(k,3));
    pause(0.02);
    k = k + 1;
    if (state == 1)
        state = 0;
        break;
    end
end
...
Did you consider applying a rotation transform matrix to your data instead of to the axes?
I think (though I haven't checked) that it could speed up your code.
You've used the typical tricks that I use to speed things up, like precomputing the frames, setting XData and YData rather than replotting, and selecting a renderer. Here are a couple more tips though:
1) One thing I noticed in your description is that different renderers and different complexities changed how fast your animation appeared to run. This is often undesirable. Consider using the actual interval between frames (i.e. use tic; dt = toc) to calculate how far to advance the animation, rather than relying on pause(0.02) to generate a steady frame rate.
2) If the complexity is such that your frame rate is undesirably low, consider replacing pause(0.02) with drawnow, or at least calculate how long to pause on each frame.
3) Try to narrow down the source of your bottleneck a bit further by measuring how long the various steps take. That will let you optimize the right stage of the operation.
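Tips 1) and 2) above can be sketched like this; the target frame period and the circular orbit data are made up for illustration and stand in for your Y trajectory:

```matlab
% Sketch: pace the animation by measured wall-clock time (tic/toc) and use
% drawnow instead of a fixed pause.
target = 1/30;                      % desired frame period in seconds (assumed)
theta = linspace(0, 2*pi, 500);
orbit = [cos(theta); sin(theta)];   % dummy trajectory standing in for Y

figure
h = line(orbit(1,1), orbit(2,1), 'Marker', 'o');
axis([-1.2 1.2 -1.2 1.2]);

t0 = tic;
for k = 2:numel(theta)
    set(h, 'XData', orbit(1,k), 'YData', orbit(2,k));
    drawnow;                        % flush graphics without a fixed pause
    elapsed = toc(t0);
    t0 = tic;
    if elapsed < target             % only sleep if we are ahead of schedule
        pause(target - elapsed);
    end
end
```

Measuring elapsed per frame keeps the apparent speed roughly constant regardless of renderer or scene complexity; if a frame takes longer than the target period, the loop simply skips the pause instead of falling further behind.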