Bald detection using image processing - MATLAB

I was wondering if someone could give me a guideline for detecting whether a person in a picture is bald or not, or even better, how much hair s/he has.
So far I have tried to detect the face and the eye positions. From that information, I roughly estimate the forehead and bald area by cutting out the area above the eyes, up to some fraction of the face height.
Then I extract HOG features and train the system with bald and not-bald images using SVM.
Now when I look at the test results, I see some pictures classified as bald, but some of them actually have blonde hair or a high forehead, so the hair is not visible after the cutting process. I'm using MATLAB for these operations.
I know the method seems a bit naive, but can you suggest a way of finding the bald area or extracting the hair, if it exists? What method would be the most appropriate for this kind of problem?

The question is very general, so the answer is general too, unless further info is provided:
Use computer vision (e.g. the MATLAB Computer Vision Toolbox) to detect the face/head.
The human head has known proportions; using these, one can get the area of the head where hair or baldness lies (it seems you already have this).
Calculate the range (a probabilistic color-space model) in which the person's skin lies (most people's skin falls in a similar color-space range).
Calculate the percentage of skin versus other colors (meaning hair) in that area.
You have it!
To estimate a skin color model, check the following papers:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.56.8637&rep=rep1&type=pdf
http://infoscience.epfl.ch/record/135966
http://www.eurasip.org/Proceedings/Eusipco/Eusipco2010/Contents/papers/1569293757.pdf
If an area does not fit the skin model well, it can be taken as non-skin, meaning hair, assuming no hats etc. are present in the samples; see the sketch below.
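As a rough illustration of the last two steps, here is a minimal MATLAB sketch, assuming you already have an RGB image and a rectangle [x y w h] for the region above the eyes; the YCbCr thresholds are common literature values, not tuned for your data:

img = imread('person.jpg');                      % hypothetical file name
roi = imcrop(img, [x y w h]);                    % region above the eyes (assumed known)
ycc = rgb2ycbcr(roi);
cb = ycc(:,:,2);  cr = ycc(:,:,3);
skin = cb >= 77 & cb <= 127 & cr >= 133 & cr <= 173;  % generic skin mask in YCbCr
skinRatio = nnz(skin) / numel(skin);             % fraction of skin-colored pixels
% high skinRatio suggests a bald region; low suggests hair (assuming no hats)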

The head region is very small, so using HOG for classification doesn't make much sense.
You can use prior information, like detecting faces first; baldness/hair is certain to be found in the area above the face. Also, use some denser feature descriptors, as in the sketch below.
You are probably ending up with a very sparse representation, or equivalently too little information, which is why your classifier is not able to classify correctly.
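One hedged way to get a denser descriptor in MATLAB is to shrink the HOG cell size, so the small head region yields more histogram cells; img and headBox are assumed to come from your face detection step, and the cell size is a guess to tune:

roiGray = rgb2gray(imcrop(img, headBox));        % headBox assumed from your face detector
hogDense = extractHOGFeatures(roiGray, 'CellSize', [4 4]);  % default is [8 8]; smaller = denser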


Static image calibration

I am capturing static images of particulate biological materials on the millimeter scale, and then processing them in MATLAB. My routine is working well so far, but I am using a rudimentary calibration procedure where I include some coins in the image, automatically find them based on their size and circularity, count their pixels, and then remove them. This allows me to generate a calibration line with input "area (mm^2)" and output "area (pixels)," which I then use to convert the pixel area of the particles into physical units of square millimeters.
My question is: is there a better calibrant object that I can use, such as a stage graticule or "phantom" as some people seem to call them? Do you know where I could purchase such a thing? I can't even seem to find a possible vendor. Is there another rigorous way to approach this problem without using calibrant objects in the field of view?
Thanks in advance.
Clay
Image calibration is always done using features of known size or distance.
You could calculate the scale based on nominal specifications, but your imaging equipment will always have some production tolerances, and your object distance is only known to a certain accuracy...
So it's always safer and simpler to actually calibrate your scale.
As a calibrant you can use anything that meets your requirements. If you know its size well enough and are able to extract its dimensions in pixels properly, you can use it...
I don't know your requirements and your budget, but if you want something very precise and fancy you can use glass masks.
There are temperature-stable glass slides that are coated with chrome, for example. Many companies produce such masks custom-made (IMT AG, BVM Maskshop, ...), and most optics lab equipment suppliers also have such things in stock: Edmund Optics, Newport, ...
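For what it's worth, here is a minimal MATLAB sketch of the coin-based calibration described in the question, assuming the coins are the only highly circular blobs in the frame; the circularity cutoff and the coin areas are illustrative values, and matching coins to blobs by sorted size assumes the coins have distinct sizes:

g   = rgb2gray(imread('sample.tif'));            % hypothetical image name
bw  = imfill(im2bw(g, graythresh(g)), 'holes');
s   = regionprops(bw, 'Area', 'Perimeter');
circ = 4*pi*[s.Area] ./ [s.Perimeter].^2;        % equals 1.0 for a perfect circle
coinPx = sort([s(circ > 0.9).Area]);             % pixel areas of the circular blobs (coins)
coinMM = sort([285 353 462]);                    % known coin areas in mm^2 (example values)
p = polyfit(coinMM, coinPx, 1);                  % calibration line: pixels = p(1)*mm^2 + p(2)
particleMM2 = (particlePx - p(2)) / p(1);        % particlePx assumed: a particle's pixel area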

How to detect if an image is a texture or a pattern-based image?

I have a question regarding computer vision; it seems to be a general question, but I'm wondering if you might have a clue. Is there an efficient way to distinguish texture images (or photos with repetitive patterns) from, say, realistic photos? The patterns could have exact repetitions, or just strong similarity. Essentially, given an image, I'm trying to detect whether it is a texture or a pattern-based image, possibly in real time.
For instance, these are considered textures in our context:
http://www.bigchrisart.com/sites/default/files/video/TR_Texture_RockWall.jpg
http://www.colourbox.com/preview/4440275-144135-seamless-geometric-op-art-texture.jpg
Thank you!
I cannot open your first image. I applied the Fourier transform to your second one, and you can see frequency responses at specific points:
You can further process the image by extracting the local maxima of the magnitude and checking that they share the same distance to the center (zero frequency). This may be taken as evidence of a repetitive pattern.
For the case where patterns share strong similarity rather than exact repetition, it is hard to say whether the frequency magnitude still shows such an evident response; it depends on what the pattern looks like.
Another possible approach is computing the auto-correlation of your image.
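A minimal MATLAB sketch of the Fourier idea above, assuming a grayscale image; the peak threshold and count are assumptions you would have to tune on your own data:

img = im2double(rgb2gray(imread('texture.jpg'))); % hypothetical file name
F = fftshift(abs(fft2(img)));                     % centred magnitude spectrum
F = log(1 + F);                                   % compress the dynamic range
F(floor(end/2)+1, floor(end/2)+1) = 0;            % suppress the DC term at the centre
peaks = imregionalmax(F) & (F > 0.6*max(F(:)));   % strong local maxima
isTexture = nnz(peaks) > 4;                       % many strong off-centre peaks -> repetitive pattern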

What are the features in feature detection algorithms and other doubts

I am going through feature detection algorithms and a lot of things seem unclear. The original paper is quite complicated for beginners in image processing to understand. I would be glad if the following were answered:
What are the features which are being detected by SURF and SIFT?
Is it necessary that these have to be computed on gray scale images?
What does the term "descriptor" mean in simple words.
Generally, how many features are selected/extracted? Is there a criterion for that?
What does the size of Hessian matrix determine?
What is the size of the features being detected? It is said that the size of a feature is the size of the blob. So, if the size of the image is M*N, will there be M*N features?
These questions may seem too trivial, but please help.
I will try to give an intuitive answer to some of your questions; I don't know the answers to all of them.
(You didn't specify which paper you are reading)
What are the features and how many features are being detected by SURF and SIFT?
Normally, features are any parts of an image around which you select a small block. You move that block by a small distance in all directions. If you find considerable variation between the block you selected and its surroundings, it is considered a feature. Suppose you moved your camera a little to take the image; you would still detect this feature. That is their importance. The best example of such a feature is a corner in the image. Even edges are not such good features: when you move your block along an edge line, you don't find any variation, right?
Check this image to understand what I said: only at the corner do you get considerable variation while moving the patches; in the other two cases you won't get much.
Image link : http://www.mathworks.in/help/images/analyzing-images.html
A very good explanation is given here : http://aishack.in/tutorials/features-what-are-they/
This is the basic idea, and the algorithms you mentioned make it more robust to several variations and solve many issues. (You can refer to their papers for more details.)
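To see this patch-shift idea in practice, a quick MATLAB sketch (Computer Vision Toolbox); the image name is a placeholder:

img = rgb2gray(imread('building.jpg'));
corners = detectHarrisFeatures(img);   % Harris formalises the patch-shift test above
imshow(img); hold on;
plot(corners.selectStrongest(50));     % overlay the 50 strongest corners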
Is it necessary that these have to be computed on gray scale images?
I think so. In any case, OpenCV works on grayscale images.
What does the term "descriptor" mean in simple words?
Suppose you found features in one image, say an image of a building. Now you take another image of the same building, but from a slightly different direction, and find features in it as well. But how can you match these features? Which feature in image 2 corresponds to feature 1 in image 1? (As a human, you can do it easily, right? This corner of the building in the first image corresponds to that corner in the second image, and so on. Very easy.)
A feature just gives you a pixel location. You need more information about that point to match it with others, so you have to describe the feature; this description is called a "descriptor". There are algorithms to describe features, and you can find them in the SIFT paper.
Check this link also : http://aishack.in/tutorials/sift-scale-invariant-feature-transform-introduction/
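To make the descriptor idea concrete, a small MATLAB sketch (Computer Vision Toolbox) that detects, describes, and matches features between two views; the file names are placeholders, and SURF stands in for SIFT here:

I1 = rgb2gray(imread('building1.jpg'));
I2 = rgb2gray(imread('building2.jpg'));
pts1 = detectSURFFeatures(I1);  pts2 = detectSURFFeatures(I2);
[f1, vpts1] = extractFeatures(I1, pts1);   % f1/f2 are the descriptors
[f2, vpts2] = extractFeatures(I2, pts2);
idx = matchFeatures(f1, f2);               % descriptors, not raw pixels, are compared
showMatchedFeatures(I1, I2, vpts1(idx(:,1)), vpts2(idx(:,2)));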
Generally, how many features are selected/extracted? Is there a criterion for that?
During processing you can see different thresholds being applied, weak keypoints being removed, etc. It is all part of the plan; you need to understand the algorithm to understand these things. Yes, you can specify these thresholds and other parameters (in OpenCV), or you can leave them at their defaults. If you check SIFT in the OpenCV docs, you can see function parameters to specify the number of features, number of octave layers, edge threshold, etc.
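The same kind of knobs exist in MATLAB; for example with SURF, where img is assumed grayscale and the values shown are the documented defaults, given only to illustrate the parameters:

pts = detectSURFFeatures(img, 'MetricThreshold', 1000, 'NumOctaves', 3);
% raising MetricThreshold rejects weak keypoints, so fewer features are returned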
What does the size of Hessian matrix determine?
That I don't know exactly; it is just a threshold for the keypoint detector. Check the OpenCV docs: http://docs.opencv.org/modules/nonfree/doc/feature_detection.html#double%20hessianThreshold

MATLAB video processing of heart beating (code included)

I'm trying to write code that helps me in my biology work.
The idea is to analyze a video file of contracting cells in a tissue.
Example 1
Example 2: youtube.com/watch?v=uG_WOdGw6Rk
And plot out the following:
Count of beats per min.
Strength of beat
Regularity of beating
And so I wrote MATLAB code that loops through a video, compares each frame with the one that follows it, checks whether there were any changes between frames, and plots these changes as a curve.
Example of my code's results
Core of the current code I wrote:
%% assumed setup (not in the original snippet): open the video and
%% initialise the reference frame and result arrays
vidObj = VideoReader('cells.avi');   % hypothetical file name
framerate = vidObj.FrameRate;
totalframes = vidObj.NumberOfFrames;
g = rgb2gray(read(vidObj, 1));
vid = im2bw(g, graythresh(g));       % first reference frame, binarized
amp = [];                            % frame-difference values
time = [];                           % corresponding times in seconds

for i = 2:totalframes
    compared = read(vidObj, i);
    ref = rgb2gray(compared);                     %% convert to gray
    level = graythresh(ref);                      %% calculate threshold (Otsu)
    compared = im2bw(ref, level);                 %% convert to binary (threshold the gray image, not the RGB one)
    differ = sum(sum(imabsdiff(vid, compared)));  %% sum of absolute differences between the 2 frames
    if (differ ~= 0) && ~any(amp == differ)       %% 0 = no change happened, so don't record that (also skips duplicate values)
        amp(end+1) = differ;       % save difference to array amp
        time(end+1) = i/framerate; % save time in seconds; separate array so both can be filtered later
        vid = compared;            %% save current frame as reference to compare the next frame against
    end
end
figure, plot(time, amp);                          % time belongs on the x-axis
=====================
So that's my code, but is there a way I can improve it to get better results?
I get the feeling that imabsdiff is not exactly what I should use, because my video contains a lot of noise, which affects my results a lot; I think all my amp data is actually fake!
Also, I can currently only extract the beating rate from this, by counting peaks. How can I improve my code to get all the required data out of it?
Thanks, I really appreciate your help. This is a small portion of the code; if you need more info please let me know.
You say you are trying to write "simple code", but this is not really a simple problem. If you want to measure the motion accurately, you should use an optical flow algorithm or look at the deformation field from a registration algorithm.
EDIT: As Matt says, and as we see from your curve, your method is suitable for extracting the number of beats and their regularity. To accurately find the strength of the beats, however, you need to calculate the movement of the cells (more movement = stronger beat). Unfortunately, this is not straightforward, and that is why I gave you links to two algorithms that can calculate the movement for you.
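If you want to try optical flow in MATLAB, a minimal sketch with the built-in Lucas-Kanade estimator (Computer Vision Toolbox, newer releases); the file name and noise threshold are assumptions:

opticFlow = opticalFlowLK('NoiseThreshold', 0.01);
vidObj = VideoReader('cells.avi');               % hypothetical file name
strength = [];
while hasFrame(vidObj)
    frame = rgb2gray(readFrame(vidObj));
    flow  = estimateFlow(opticFlow, frame);      % per-pixel motion vectors vs. previous frame
    strength(end+1) = mean(flow.Magnitude(:));   % mean motion as a beat-strength proxy
end
plot(strength);                                  % peaks = contractions; peak height = strength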
A few fairly simple things to try that might help:
I would look in detail at what your thresholding is doing, and whether that's really what you want. I don't know exactly what graythresh does, but it may be lumping different features that you would want to distinguish into the same pixel values. Have you tried plotting the differences between images without thresholding? Or you could threshold into multiple classes, rather than just black and white.
If noise is the main problem, you could try smoothing the images before taking the difference, so that differences due to noise are evened out but differences in large features, caused by motion, remain; see the sketch after this list.
You could try edge-detecting your images before taking the difference.
As a previous answerer mentioned, you could also look into motion-tracking and registration algorithms, which would estimate the actual motion between each pair of images, rather than just telling you whether the images differ. I think this is a decent summary on Wikipedia: http://en.wikipedia.org/wiki/Video_tracking. But they can be rather complicated.
I think if all you need is to find the time and period of contractions, though, then you wouldn't necessarily need to do a detailed motion tracking or deformable registration between images. All you need to know is when they change significantly. (The "strength" of a contraction is another matter, to define that rigorously you probably would need to know the actual motion going on.)
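A sketch of the smoothing suggestion from the list above, assuming prevGray and currGray are consecutive grayscale frames from your loop; the kernel size and sigma are guesses to tune:

h = fspecial('gaussian', [9 9], 2);            % 9x9 Gaussian kernel, sigma = 2
prevS = imfilter(prevGray, h, 'replicate');
currS = imfilter(currGray, h, 'replicate');
differ = sum(sum(imabsdiff(prevS, currS)));    % noise is evened out, motion survives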
What are the structures we see in the video? For example, what is the big dark object in the lower part of the image? This object would be relatively easy to track, but would data from this object be relevant for cell contraction?
Is this image from a light microscope? At what magnification? What is the scale?
From the video it looks like there are several motions and regions of motion. So should you focus on a smaller or larger area to get your measurements? Per-cell contraction or region contraction? From experience I know that changing what you do at the microscope might be much better than complex image processing ;)
I had success with Gunn and Nixon's dual snake for a similar problem:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.6831
I placed the first approximation in the first frame by hand and used the segmentation result as the starting curve for the next frame, and so on. My implementation is from 2000 and I only have it on paper, but if you find Gunn and Nixon's paper interesting I can probably find my code and scan it.
@Matt suggested smoothing and edge detection to improve your results. This is good advice. You can combine smoothing, thresholding, and edge detection in one function call: the Canny edge detector. Then you can dilate the edges to get greater overlap between frames. Little overlap probably means a big movement between frames. You can use this the same way as before to find the beat. You can then make a second pass and add up all the dilated edge images belonging to one beat. This should give you an idea of the area traced out by the cells as they move through a contraction. Maybe this can be used as a useful measure of contraction for a large cluster of cells.
I don't have access to MATLAB and the Image Processing Toolbox right now, so I can't give you tested code. Here are some hints: http://www.mathworks.se/help/toolbox/images/ref/edge.html , http://www.mathworks.se/help/toolbox/images/ref/imdilate.html and http://www.mathworks.se/help/toolbox/images/ref/imadd.html.
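Along those lines, an untested sketch of the Canny/dilate/add idea, assuming currGray and prevGray are consecutive grayscale frames and acc has been preallocated; the dilation radius is illustrative:

se = strel('disk', 3);                           % dilation radius to tune
eCur  = imdilate(edge(currGray, 'canny'), se);
ePrev = imdilate(edge(prevGray, 'canny'), se);
overlap = nnz(eCur & ePrev) / nnz(eCur | ePrev); % little overlap = big movement
% second pass: accumulate dilated edges over the frames of one beat
acc = imadd(acc, uint8(eCur));                   % acc = zeros(h, w, 'uint8') beforehand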

How do I count the number of persons in an image using MATLAB?

Has anyone ever tried doing this kind of job?
Before trying to count the number of people in an image, it is easier to first solve the decision problem: does this region contain a person? And it is even simpler if you reduce it to a specific size: does this 20x20-pixel image contain a person? It's easier still if you only strive to detect faces, in which case you can use the Viola-Jones face detection algorithm to determine whether there is a face in the image.
Once you can say yes/no there is or is not a person (or face) in a region, you can use a sliding-window approach to cover each possible region and say yes/no there is/isn't a person (or face) in that region. To count the number of people (or faces), you simply tally the number of yes responses.
As I've already stated, detecting faces is a little bit easier than detecting people, and detecting front-facing faces is easier than detecting faces in any orientation. That said, it is possible to create object detectors for pretty much any object provided that you have a large enough dataset. These techniques can be fairly accurate, but they aren't 100% reliable, so you will get false positives and false negatives.
You can find a MATLAB face detector at the given link.
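A minimal face-counting sketch with MATLAB's built-in Viola-Jones detector (Computer Vision Toolbox); the file name is a placeholder, and the default model only finds frontal faces:

detector = vision.CascadeObjectDetector();       % Viola-Jones, frontal-face model
img   = imread('crowd.jpg');
boxes = step(detector, img);                     % one bounding box per detected face
numFaces = size(boxes, 1);                       % the tally of yes responses
imshow(insertShape(img, 'Rectangle', boxes));
title(sprintf('%d face(s) detected', numFaces));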
There's an active community of research into this problem. Here's a good start:
http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf
More info and links to implementations of HOG:
http://en.wikipedia.org/wiki/Histogram_of_oriented_gradients
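If you want to experiment with HOG in MATLAB before training a detector in the style of Dalal and Triggs, a hedged starting point (Computer Vision Toolbox; the file name is a placeholder):

img = imread('pedestrian.jpg');                  % hypothetical image
[hog, vis] = extractHOGFeatures(rgb2gray(img));  % descriptor vector + visualisation
imshow(img); hold on; plot(vis);                 % overlay the gradient histograms
% 'hog' would be the feature vector fed to an SVM, as in the paper above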