vision.PeopleDetector function in Matlab

Has anyone ever used the vision.PeopleDetector function from the Computer Vision System Toolbox in Matlab?
I've installed it and tried to apply to images I have.
Although it detects people in the training image, it detects nothing in real photos. Either it doesn't detect people at all, or it detects people in parts of the image where none are present.
Could anyone share their experience of using this function?
Thanks a lot!
Here is a sample image:

The vision.PeopleDetector object does indeed detect upright standing people in images. However, like most computer vision algorithms it is not 100% accurate. Can you post a sample image where it fails?
There are several things you can try to improve performance.
Try changing the ClassificationModel parameter to 'UprightPeople_96x48'. There are two models that come with the object, trained on different data sets.
How big (in pixels) are the people in your image? If you use the default 'UprightPeople_128x64' model, then you will not be able to detect a person smaller than 128x64 pixels. Similarly, for the 'UprightPeople_96x48' model the smallest detectable person is 96x48 pixels. If the people in your image are smaller than that, you can up-sample the image using imresize.
Try reducing the ClassificationThreshold parameter to get more detections. A sketch combining these options follows below.
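A minimal sketch combining these options (the file name, scale factor and threshold value are assumptions to adapt):
im = imread('scene.jpg');                          % hypothetical input image
detector = vision.PeopleDetector('ClassificationModel', 'UprightPeople_96x48', ...
    'ClassificationThreshold', 0.5);               % lower threshold => more detections
imLarge = imresize(im, 2);                         % up-sample if people are smaller than 96x48 pixels
bboxes = step(detector, imLarge);
imshow(insertObjectAnnotation(imLarge, 'rectangle', bboxes, 'person'));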
Edit:
Some thoughts on your particular image. My guess would be that the people detector is not working well here because it was not trained on this kind of image. The training sets for both models consist of natural images of pedestrians. Ironically, the fact that your image has a perfectly clean background may be throwing the detector off.
If this image is typical of what you have to deal with, then I have a few suggestions. One possibility is to use simple thresholding to segment out the people. The other is to use vision.CascadeObjectDetector to detect the faces or the upper bodies, which happens to work perfectly on this image:
im = imread('postures.jpg');
detector = vision.CascadeObjectDetector('ClassificationModel', 'UpperBody'); % pre-trained upper-body model
bboxes = step(detector, im);  % one bounding box per detection
im2 = insertObjectAnnotation(im, 'rectangle', bboxes, 'person', 'Color', 'red');
imshow(im2);
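For the thresholding route, a minimal sketch, assuming the figures are darker than the clean background (the polarity and minimum blob size are assumptions to adapt):
gray = rgb2gray(im);
bw = ~im2bw(gray, graythresh(gray));   % dark figures on a light background become white
bw = bwareaopen(bw, 200);              % remove small specks
stats = regionprops(bw, 'BoundingBox', 'Centroid'); % one entry per person-sized blob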

Related

Stitching overlapping images from different cameras in Matlab

Two cameras take two images of a wooden plank. The images overlap on the plank, and I need to stitch them together so that the result looks natural and preferably seamless to the human eye, for inspection purposes. The images are cropped to the same size and masked to remove the background and most of the non-overlapping areas, but the plank can have a slight tilt on the conveyor belt.
Currently I'm using the normxcorr2 function on the general overlap area, following ideas from the Matlab tutorial for normxcorr2, to identify one image within the other and work out an overlay offset. However, this fails quite often, as normxcorr2 returns a zero offset, resulting in a bad stitch:
c = normxcorr2(plank_part1, plank_part2);
% Find the peak in the cross-correlation:
[ypeak, xpeak] = find(c == max(c(:)));
% Account for the padding that normxcorr2 adds:
yoffSet = ypeak - size(plank_part1, 1);
xoffSet = xpeak - size(plank_part1, 2);
[xoffSet, yoffSet]
ans =
     0     0
It would seem normxcorr2 cannot always find the correct overlay of the images, or any overlay at all, even though I try to make it easier by increasing the grayscale contrast with histeq. My guess is that the amount of grayish area from the sapwood overwhelms the distinct knots, which are the important parts to stitch properly.
Does anyone know of a way to make this stitching process more reliable, maybe with some more preprocessing, or any other Matlab skills/functions that would make this work better?
P.S. I cannot use anything but freely accessible scripts, as anything else would probably cause license/copyright issues for my project.
Thank you for your time in trying to help!
You should look at the following link. The term that you should be looking for is image registration. There are more advanced methods than normxcorr2.
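For instance, intensity-based registration from the Image Processing Toolbox can recover the offset directly; a minimal sketch, assuming both crops come from the same sensor (variable names follow the question):
fixed  = im2double(plank_part1);                % reference image
moving = im2double(plank_part2);                % image to align
[optimizer, metric] = imregconfig('monomodal'); % same sensor and modality
tform = imregtform(moving, fixed, 'translation', optimizer, metric);
registered = imwarp(moving, tform, 'OutputView', imref2d(size(fixed)));
imshowpair(fixed, registered, 'blend');         % visually check the alignment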

Matlab - Center of mass of object having only its edge

I'm trying to make an object recognition program using a k-NN classifier. I've got a bunch of images for the training part of the classifier and a bunch of images to recognize. The images are in grayscale, with one object per image. The problem is that only the edge of each object is present (it is not filled), so I don't think using regionprops(img,'centroid') will work properly, from what I understand...
So how can I get their center of mass?
xenoclast's answer should be quite clear; just to add something extra: once you have created the binary image from your grayscale image using im2bw, if the edge of the object forms a closed boundary that fully encloses it, you may use regionprops(bw,'centroid') directly, without going through imfill.
The first step would be to binarise the image with im2bw. Then you can use imfill(img, 'holes') to turn it from an outline into a filled solid. After that regionprops will work as expected.
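A minimal sketch of that pipeline ('outline.png' is a hypothetical file name):
img = imread('outline.png');           % grayscale image containing only the edge
bw = im2bw(img, graythresh(img));      % binarise
filled = imfill(bw, 'holes');          % turn the closed outline into a solid region
stats = regionprops(filled, 'Centroid');
centroid = stats(1).Centroid;          % [x y] center of mass of the first object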

Detecting shape from the predefined shape and cropping the background

I have several images of a pugmark with a lot of irrelevant background region. I cannot use intensity-based algorithms to separate the background from the foreground.
I have tried several methods. One of them is detecting the object in a homogeneous-intensity image,
but this does not work with rough-textured images like
http://img803.imageshack.us/img803/4654/p1030076b.jpg
http://imageshack.us/a/img802/5982/cub1.jpg
http://imageshack.us/a/img42/6530/cub2.jpg
There could be three possible methods:
1) If I can reduce the roughness factor of the image and obtain a smoother texture, i.e. a flatter surface.
2) If I could detect the pugmark-like shape in these images by defining a rough pugmark shape in a database and then removing the background, to obtain an image like http://i.imgur.com/W0MFYmQ.png
3) If I could detect the regions with depth and separate them from the background based on the difference in their depths.
Please tell me if any of these methods would work and, if yes, how to implement them.
I have a hunch that this problem could benefit from using polynomial texture maps.
See here: http://www.hpl.hp.com/research/ptm/
You might want to consider top-down information in the process. See, for example, this work.
It looks like you're close enough to the pugmark, so I think you should be able to detect pugmarks using the Viola-Jones algorithm. Maybe a PCA-like algorithm such as Eigenfaces would work too; even if you're not trying to recognize a particular pugmark, it can still be used to tell whether or not there is a pugmark in the image.
Have you tried edge detection on your image? I guess it should be possible to fine-tune the Canny edge detector thresholds to get rid of the noise (if that's not good enough, low-pass filter your image first), then do shape recognition on what remains (you would then be in the field of geometric feature learning and structural matching). Viola-Jones and possibly a PCA-like algorithm would be my first try, though. A sketch of the edge-detection step follows below.
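A minimal sketch of that edge-detection step (the file name, filter size and thresholds are assumptions to tune):
img = rgb2gray(imread('pugmark.jpg'));       % hypothetical input image
h = fspecial('gaussian', [7 7], 2);          % low-pass filter to flatten the rough texture
smoothed = imfilter(img, h, 'replicate');
edges = edge(smoothed, 'canny', [0.1 0.3]);  % [low high] hysteresis thresholds
imshow(edges);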

Matlab video processing of heart beating, code supplemented

I'm trying to write code that helps me in my biology work.
The concept of the code is to analyze a video file of contracting cells in a tissue.
Example 1
Example 2: youtube.com/watch?v=uG_WOdGw6Rk
And plot out the following:
Count of beats per minute.
Strength of each beat.
Regularity of the beating.
And so I wrote Matlab code that loops through a video, compares each frame with the one that follows it, checks whether there were any changes between the frames, and plots these changes on a curve.
Example of my code's results
Core of the current code I wrote:
for i = 2:totalframes
    compared = read(vidObj, i);
    ref = rgb2gray(compared);                    %% convert to grayscale
    level = graythresh(ref);                     %% calculate threshold
    compared = im2bw(compared, level);           %% convert to binary
    differ = sum(sum(imabsdiff(vid, compared))); %% sum of the differences between the 2 frames
    if (differ ~= 0) && (any(amp == differ) == 0) %% 0 = no change happened, so don't record it
        amp(end+1) = differ;        %% save the difference to the amp array
        time(end+1) = i/framerate;  %% save the time in seconds; separate array so both can be filtered later
        vid = compared;             %% save the current frame as the reference to compare the next frame against
    end
end
figure, plot(time, amp);  %% time on the x-axis
=====================
So that's my code, but is there a way I can improve it to get better results?
I get the feeling that imabsdiff is not exactly what I should use, because my video contains a lot of noise, which affects my results a lot, and I think all my amp data is actually spurious!
Also, I can currently only extract the beating rate from this, by counting peaks. How can I improve my code so it can extract all the required data?
Thanks, I really appreciate your help. This is a small portion of the code; if you need more info, please let me know.
You say you are trying to write "simple code", but this is not really a simple problem. If you want to measure the motion accurately, you should use an optical flow algorithm or look at the deformation field from a registration algorithm.
EDIT: As Matt is saying, and as we see from your curve, your method is suitable for extracting the number of beats and the regularity. To accurately find the strength of the beats, however, you need to calculate the movement of the cells (more movement = stronger beat). Unfortunately, this is not straightforward, and that is why I gave you links to two algorithms that can calculate the movement for you.
A few fairly simple things to try that might help:
I would look in detail at what your thresholding is doing, and whether that's really what you want. I don't know exactly what graythresh does, but it may be lumping different features that you would want to distinguish into the same pixel values. Have you tried plotting the differences between images without thresholding? Or you could threshold into multiple classes, rather than just black and white.
If noise is the main problem, you could try smoothing the images before taking the difference, so that differences due to noise are evened out while differences in large features, caused by motion, remain.
You could try edge-detecting your images before taking the difference (a sketch of the smoothing variant follows below).
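A minimal sketch of smoothing before differencing (the kernel size and the frame variable names are assumptions):
h = fspecial('gaussian', [7 7], 1.5);                  % Gaussian smoothing kernel
prevSmooth = imfilter(prevGray, h, 'replicate');       % previous frame, grayscale
currSmooth = imfilter(currGray, h, 'replicate');       % current frame, grayscale
differ = sum(sum(imabsdiff(prevSmooth, currSmooth)));  % noise evens out, motion remains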
As a previous answerer mentioned, you could also look into motion-tracking and registration algorithms, which would estimate the actual motion between each image, rather than just telling you whether the images are different or not. I think this is a decent summary on Wikipedia: http://en.wikipedia.org/wiki/Video_tracking. But they can be rather complicated.
I think if all you need is to find the time and period of contractions, though, then you wouldn't necessarily need to do a detailed motion tracking or deformable registration between images. All you need to know is when they change significantly. (The "strength" of a contraction is another matter, to define that rigorously you probably would need to know the actual motion going on.)
What are the structures we see in the video? For example, what is the big dark object in the lower part of the image? This object would be relatively easy to track, but would data from this object be relevant for measuring cell contraction?
Is this image from a light microscope? At what magnification? What is the scale?
From the video it looks like there are several motions and regions of motion. So should you focus on a smaller or larger area to get your measurements? Per-cell contraction or region contraction? From experience I know that changing what you do at the microscope might be much better than complex image processing ;)
I had success with Gunn and Nixon's dual snake for a similar problem:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.6831
I placed the first approximation in the first frame by hand and used the segmentation result as the starting curve for the next frame, and so on. My implementation of this is from 2000 and I only have it on paper, but if you find Gunn and Nixon's paper interesting I can probably find my code and scan it.
@Matt suggested smoothing and edge detection to improve your results. This is good advice. You can combine smoothing, thresholding and edge detection in one function call: the Canny edge detector. Then you can dilate the edges to get greater overlap between frames. Little overlap will probably mean a big movement between frames. You can use this the same way as before to find the beat. You can then make a second pass and add up all the dilated edge images related to one beat. This should give you an idea of the area traced out by the cells as they move through a contraction. Maybe this can be used as a useful measure for the contraction of a large cluster of cells.
I don't have access to Matlab and the Image Processing Toolbox now, so I can't give you tested code. Here are some hints: http://www.mathworks.se/help/toolbox/images/ref/edge.html , http://www.mathworks.se/help/toolbox/images/ref/imdilate.html and http://www.mathworks.se/help/toolbox/images/ref/imadd.html.
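A minimal sketch of that second pass (the frame range of one beat and the structuring-element size are assumptions):
firstFrame = rgb2gray(read(vidObj, beatStart));
acc = zeros(size(firstFrame));            % accumulator for one beat
se = strel('disk', 3);                    % structuring element for the dilation
for i = beatStart:beatEnd                 % frame range of one beat, assumed known
    e = edge(rgb2gray(read(vidObj, i)), 'canny'); % smoothing, thresholding and edges in one call
    acc = imadd(acc, double(imdilate(e, se)));    % dilate the edges and accumulate
end
imagesc(acc); axis image;                 % area traced out during the contraction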

Calculating corresponding pixels

I have a computer vision setup with two cameras. One of these cameras is a time-of-flight camera: it gives me the depth of the scene at every pixel. The other camera is a standard camera giving me a colour image of the scene.
We would like to use the depth information to remove some areas from the colour image. We plan on object, person and hand tracking in the colour image and want to remove far-away background pixels with the help of the time-of-flight camera. It is not yet certain whether the cameras can be aligned in a parallel setup.
We could use OpenCV or Matlab for the calculations.
I have read a lot about rectification, epipolar geometry, etc., but I still have trouble seeing the steps I have to take to calculate the correspondence for every pixel.
What approach would you use, and which functions can be used? Into which steps would you divide the problem? Is there a tutorial or sample code available somewhere?
Update: We plan on doing automatic calibration using known markers placed in the scene.
If you want robust correspondences, you should consider SIFT. There are several implementations in MATLAB; I use Vedaldi and Fulkerson's VLFeat library.
If you really need fast performance (and I think you don't), you should think about using OpenCV's SURF detector.
If you have any other questions, do ask. This other answer of mine might be useful.
PS: By correspondences, I'm assuming you want to find the coordinates of the projection of the same 3D point in both your images, i.e. the coordinates (i,j) of a pixel u_A in image A and u_B in image B that are projections of the same 3D point.
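A minimal sketch of SIFT matching with VLFeat (assumes VLFeat is installed and vl_setup has been run; file names are placeholders):
Ia = single(rgb2gray(imread('imageA.jpg')));
Ib = single(rgb2gray(imread('imageB.jpg')));
[fa, da] = vl_sift(Ia);                   % frames [x; y; scale; orientation] and descriptors
[fb, db] = vl_sift(Ib);
[matches, scores] = vl_ubcmatch(da, db);  % nearest-neighbour descriptor matching
uA = fa(1:2, matches(1, :));              % matched pixel coordinates in image A
uB = fb(1:2, matches(2, :));              % corresponding coordinates in image B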