ROI of palmprint - MATLAB

I'm doing project on personal verification using palmprint biometrics. I want to extract a Region of Interest (ROI) of palmprints in MATLAB.

The Image Processing Toolbox in MATLAB provides several ways to manually extract a region of interest. Personally, I often use roipoly.
(http://www.mathworks.com/help/toolbox/images/ref/roipoly.html)
Just type
m = roipoly(I);
where I is your image.
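Here is a minimal sketch of how the returned mask is typically applied (the file name is hypothetical):

```matlab
% Interactively select a polygonal ROI and apply it as a mask.
I = imread('palm.jpg');        % hypothetical palmprint image file
mask = roipoly(I);             % draw a polygon; returns a logical mask
roi = I;
roi(~repmat(mask, [1 1 size(I,3)])) = 0;  % zero out pixels outside the ROI
imshow(roi);
```

The repmat call extends the 2-D mask across all color channels, so the same snippet works for grayscale and RGB images.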

Auto ROI Extraction
Locating the ROI of Palmprint images is a popular problem in biometrics and image processing. This is the primary step in developing a biometric system based on palmprint image recognition.
Many methods have already been published.
This is one method I used:
https://www.mathworks.com/matlabcentral/fileexchange/46573-roi-of-palmprint-images
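As a rough illustration (not the exact method from the File Exchange link above), many automatic palmprint ROI methods start by segmenting the hand and tracing its contour, then locate the valleys between the fingers to establish a coordinate system for cropping a fixed square ROI. A sketch of that preprocessing stage, with a hypothetical input file:

```matlab
% Sketch of a typical preprocessing stage for automatic palmprint ROI.
I = imread('palm.jpg');                 % hypothetical input image
G = rgb2gray(I);
bw = imbinarize(G);                     % segment the hand from the background
bw = bwareafilt(bw, 1);                 % keep only the largest connected component
B = bwboundaries(bw);                   % hand contour, used to find finger valleys
% The finger-valley keypoints along the contour define two reference points;
% a fixed-size square between them, rotated into a canonical orientation,
% is then cropped as the ROI.
```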

Related

How to get data for Matlab cascade Object Detector?

I want to use trainCascadeObjectDetector in MATLAB. It requires an array with the regions of interest of the images. I found two apps where you can draw boxes around the ROIs and the array gets created automatically:
Cascade Trainer: Specify Ground Truth, Train a Detector
Training Image Labeler
Unfortunately, they both require MATLAB R2014 and I only have R2013.
Is there another way to define the ROIs without manually creating the array?
Regards
Philip
I did not find another solution, so I wrote a custom MATLAB script for the job. The imrect function in MATLAB is well suited to this. After the image is shown, the user can drag a rectangle over the region of interest. The coordinates of the region then get stored in a structure together with the path to the image file. Additionally, the parts of the image that do not belong to the ROI are stored in the negative-sample folder.
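The core of such a script might look like this (file names and variable names are illustrative, not the author's actual script):

```matlab
% Sketch: collect bounding boxes with imrect in the format
% expected by trainCascadeObjectDetector.
imageFiles = {'img1.jpg', 'img2.jpg'};   % hypothetical file list
positiveInstances = struct('imageFilename', {}, 'objectBoundingBoxes', {});
for k = 1:numel(imageFiles)
    imshow(imread(imageFiles{k}));
    h = imrect;                          % drag a rectangle over the ROI
    pos = round(wait(h));                % [x y width height] after double-click
    positiveInstances(end+1) = struct( ...
        'imageFilename', imageFiles{k}, ...
        'objectBoundingBoxes', pos);     %#ok<SAGROW>
end
% positiveInstances can now be passed to trainCascadeObjectDetector.
```

wait(h) blocks until the user double-clicks the rectangle, which makes the labeling loop interactive without extra callback code.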

vision.PeopleDetector function in Matlab

Has anyone ever used the vision.PeopleDetector function from the Computer Vision System Toolbox in MATLAB?
I've installed it and tried to apply it to my images.
Although it detects people in the training image, it detects nothing in real photos. Either it doesn't detect people at all, or it detects people in parts of the image where they are not present.
Could anyone share the experience of using this function?
Thanks a lot!
Here is a sample image:
The vision.PeopleDetector object does indeed detect upright standing people in images. However, like most computer vision algorithms it is not 100% accurate. Can you post a sample image where it fails?
There are several things you can try to improve performance.
Try changing the ClassificationModel parameter to 'UprightPeople_96x48'. There are two models that come with the object, trained on different data sets.
How big (in pixels) are the people in your image? If you use the default 'UprightPeople_128x64' model, then you will not be able to detect a person smaller than 128x64 pixels. Similarly, for the 'UprightPeople_96x48' model the smallest size person you can detect is 96x48. If the people in your image are smaller than that, you can up-sample the image using imresize.
Try reducing the ClassificationThreshold parameter to get more detections.
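Putting those suggestions together, a sketch might look like this (the image file and parameter values are illustrative):

```matlab
% Sketch: try the smaller model, a lower threshold, and up-sampling.
detector = vision.PeopleDetector( ...
    'ClassificationModel', 'UprightPeople_96x48', ...
    'ClassificationThreshold', 2);   % lower value => more (looser) detections
im = imread('scene.jpg');            % hypothetical image
im = imresize(im, 2);                % up-sample if the people are small
[bboxes, scores] = step(detector, im);
out = insertObjectAnnotation(im, 'rectangle', bboxes, scores);
imshow(out);
```

Annotating with the scores rather than a fixed label makes it easier to judge where to set the threshold.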
Edit:
Some thoughts on your particular image. My guess is that the people detector is not working well here because it was not trained on this kind of image. The training sets for both models consist of natural images of pedestrians. Ironically, the fact that your image has a perfectly clean background may be throwing the detector off.
If this image is typical of what you have to deal with, then I have a few suggestions. One possibility is to use simple thresholding to segment out the people. The other is to use vision.CascadeObjectDetector to detect the faces or the upper bodies, which happens to work perfectly on this image:
im = imread('postures.jpg');
% Detect upper bodies with a pretrained cascade classifier
detector = vision.CascadeObjectDetector('ClassificationModel', 'UpperBody');
bboxes = step(detector, im);
% Draw labeled rectangles around the detections
im2 = insertObjectAnnotation(im, 'rectangle', bboxes, 'person', 'Color', 'red');
imshow(im2);

How to match an object within an image to other images using SURF features (in MATLAB)?

My problem is how to match one image against a set of images and display the matched images. I am using SURF features for feature extraction.
If you have the Computer Vision System Toolbox, take a look at the following examples:
Object Detection In A Cluttered Scene Using Point Feature Matching
Image Search using Point Features
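The basic pipeline from those examples can be sketched as follows (file names are hypothetical; assumes the Computer Vision System Toolbox):

```matlab
% Sketch: match SURF features between a query object and one scene image.
query = rgb2gray(imread('object.jpg'));   % hypothetical images
scene = rgb2gray(imread('scene.jpg'));
ptsQ = detectSURFFeatures(query);
ptsS = detectSURFFeatures(scene);
[fQ, vQ] = extractFeatures(query, ptsQ);
[fS, vS] = extractFeatures(scene, ptsS);
pairs = matchFeatures(fQ, fS);
showMatchedFeatures(query, scene, vQ(pairs(:,1)), vS(pairs(:,2)), 'montage');
```

To match against a set of images, repeat the scene side of the loop for each image and rank the images by the number of matched pairs, size(pairs, 1).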

Matlab face alignment code

I am attempting to do some face recognition and hallucination experiments and in order to get the best results, I first need to ensure all the facial images are aligned. I am using several thousand images for experimenting.
I have been scouring the Internet for the past few days and have found many different programs which claim to do this; however, due to MATLAB's poor backwards compatibility, many of them no longer work. I have tried several different programs which don't run because they call MATLAB functions that have since been removed.
The closest I found uses the SIFT algorithm; the code is here:
http://people.csail.mit.edu/celiu/ECCV2008/
It does help align the images, but unfortunately it also downsamples them, so the result ends up quite blurry-looking, which would have a negative effect on any experiments I ran.
Does anyone have any MATLAB code samples, or could you point me toward code that actually aligns faces in a database?
Any help would be much appreciated.
You can find this recent work on Face Detection, Pose Estimation and Landmark Localization in the Wild. It has a working Matlab implementation and it is quite a good method.
Once you identify keypoints on all your faces you can morph them into a single reference and work from there.
The easiest way is with PCA and the eigenvectors, to find the most representative X and Y directions of the data. This gives you the orientation of the face.
You can find an explanation in this document: PCA Alignment
Do you need to detect the faces first, or are they already cropped? If you need to detect the faces, you can use the vision.CascadeObjectDetector object in the Computer Vision System Toolbox.
To align the faces you can try the imregister function in the Image Processing Toolbox. Alternatively, you can use a feature-based approach. The Computer Vision System Toolbox includes a number of interest point detectors, feature descriptors, and a matchFeatures function to match the descriptors between a pair of images. You can then use the estimateGeometricTransform function to estimate an affine or even a projective transformation between two images. See this example for details.
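A minimal sketch of the feature-based approach described above (file names are hypothetical; assumes the Computer Vision System Toolbox):

```matlab
% Sketch: align one face to a reference face via matched features.
fixed  = rgb2gray(imread('face_ref.jpg'));   % hypothetical reference face
moving = rgb2gray(imread('face_new.jpg'));   % face to be aligned
p1 = detectSURFFeatures(fixed);
p2 = detectSURFFeatures(moving);
[f1, v1] = extractFeatures(fixed, p1);
[f2, v2] = extractFeatures(moving, p2);
pairs = matchFeatures(f1, f2);
% Estimate a transform mapping the moving points onto the fixed points;
% the estimator is robust to mismatched pairs.
tform = estimateGeometricTransform( ...
    v2(pairs(:,2)), v1(pairs(:,1)), 'affine');
aligned = imwarp(moving, tform, 'OutputView', imref2d(size(fixed)));
```

The OutputView argument keeps the warped image in the reference frame, so all the aligned faces end up with the same size, which matters when you run experiments over several thousand images.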

SIFT keypoint detector

It seems that nothing changes on a jpg file after running the SIFT demo program here http://www.cs.ubc.ca/~lowe/keypoints/. Does anyone know how it works?
Thanks a lot.
SIFT is an algorithm that generates keypoints using its renowned automatic feature detection capabilities. Those keypoints are typically used to compare with, or match against, other images. The image itself is not modified. Rather, the algorithm looks for 'distinguishable clusters of pixels' so that an image can 1) be distinguished from other pictures and 2) be matched to similar images. I have used this beautifully crafted algorithm on several occasions in my research. If you need more clarification, let me know.
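In other words, the demo writes the keypoints to a data file rather than changing the JPEG. If you want to see the keypoints, recent MATLAB releases (R2021b or later, Computer Vision Toolbox) can detect and overlay them directly; a sketch with a hypothetical image file:

```matlab
% Sketch: detect and visualize SIFT keypoints (requires R2021b or later).
I = rgb2gray(imread('photo.jpg'));   % hypothetical image
pts = detectSIFTFeatures(I);
imshow(I); hold on;
plot(pts.selectStrongest(50));       % overlay the 50 strongest keypoints
hold off;                            % the image file itself is unchanged
```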
@Gary Tsui: can we use SIFT to search for similar parts within one image? If we copy some part of an image and paste it into another part of the same image, is it possible to detect the copy-pasted area using SIFT?