I have a problem with shape detection in MATLAB. I have two types of circular cell shapes: erythrocytes and leukocytes. The erythrocyte differs only slightly from the leukocyte, which is also circular. How could I distinguish them from each other with image processing?
Would a parent-child relationship be useful for detecting the circle inside an erythrocyte? Or are there other techniques?
There are four types of cell detection/segmentation: pixel-based, region-based, edge-based, and contour-based. You may use one of them or a combination for your task, but relying on shape alone may be insufficient.
The main difference between erythrocytes and leukocytes is the presence of a nucleus. To my knowledge, nucleus staining is often applied in microscopy. If that is the case,
(i) the ratio between the green and blue channel intensities of each pixel can be used as a discriminating feature to separate the nucleus pixels from the other foreground pixels (a minimal sketch follows this list);
(ii) after that, it is possible to extract the leukocyte plasma based on the hue-value similarity between the pixels of that region and the nucleus region;
(iii) contour-based methods such as active contours (snakes) and level-set approaches can be used to refine the boundaries of the white blood cells;
(iv) what is left after (i)-(iii) is probably the erythrocytes. If your task also includes segmenting the erythrocytes, you may threshold them easily (or search the literature for more accurate segmentation algorithms).
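As a minimal MATLAB sketch of step (i), assuming a stained RGB image where nuclei come out relatively blue; the file name, ratio threshold, and speck size are placeholders to tune on your own data:

    rgb = im2double(imread('smear.png'));   % hypothetical file name, assumed RGB
    G = rgb(:,:,2);
    B = rgb(:,:,3);
    ratio = B ./ (G + eps);                 % blue-to-green ratio per pixel; eps avoids division by zero
    nucleus = ratio > 1.2;                  % assumed threshold: stained nuclei are relatively blue
    nucleus = bwareaopen(nucleus, 50);      % drop tiny specks (minimum size is also a guess)
    imshow(nucleus)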
I would recommend T. Bergen et al., Segmentation of leukocytes and erythrocytes in blood smear images. The description above is included and detailed in this paper, and they apply more sophisticated strategies to improve boundary accuracy. You may try to follow their steps and reproduce similar results if your ultimate goal is also segmentation. Detection alone, without extraction, should be much easier.
Which method is commonly used to evaluate the remaining 'boundary' pixels after an initial segmentation (based on thresholds)?
I thought about classification based on a standard deviation from the threshold values, but I don't know if that is common practice in image analysis. That would be a region-growing method, but based on the answer to this question ( http://www.mathworks.com/matlabcentral/answers/53351-how-can-i-segment-a-color-image-with-region-growing ) it is not sensible to use a region-growing algorithm. Someone suggested imdilate, but that method seems arbitrary: useful when enhancing images for aesthetic purposes or to improve visibility. For my problem the assignment of the pixels has to be correct, because I have to do measurements on the extracted objects/features, and a few pixels make a huge difference.
What I was looking for:
A way to collect the boundary pixels of the BW image from the first segmentation (which I found: http://nl.mathworks.com/help/images/ref/bwboundaries.html)
A decision rule (nearest neighbor?) to classify those boundary pixels. It would be helpful if there were multiple methods to do this, because that would make a relative accuracy check of the classification possible.
I would really appreciate input/advice from someone with more experience in this area to point me in the right direction (functions, tutorials, etc.).
Thank you!
What will work for you depends very much on the images you have. There is no one-size-fits-all algorithm.
First, you need to answer the question: Given a pixel close to a segmented feature, what would make you believe that this pixel belongs to the feature? Also: what is "close"?
The answer to the second question determines your search area. Here, imdilate is useful for identifying candidate pixels: dilate your feature, subtract the feature, and you are left with a ring of candidate pixels around each feature (as sketched below). If you test all pixels instead, the risk is not so much that it could take forever, but that for some images your region-growing mechanism expands to the entire image.
The answer to the first question determines what algorithm you'll use. Do you look for a gradient, i.e. "if pixel p is closer in intensity to the adjacent feature than to most of its neighbors, then I take it"? Do you look for texture? Do you look for a local threshold (hysteresis thresholding)? The answer, again, depends very much on the images you are segmenting. Make sure you test on a large set of images, because what looks good on one image may totally fail on another.
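To make the imdilate idea concrete, here is a minimal MATLAB sketch, assuming a logical mask bw and a grayscale double image I; the disk radii and the intensity rule are placeholders for whatever criterion you settle on:

    ring = imdilate(bw, strel('disk', 3)) & ~bw;            % dilate, then subtract: a ring of candidates
    featMean = mean(I(bw));                                  % mean intensity of the segmented feature
    bgMean   = mean(I(~imdilate(bw, strel('disk', 10))));    % rough background intensity, away from the feature
    cand = find(ring);
    grow = abs(I(cand) - featMean) < abs(I(cand) - bgMean);  % assumed rule: closer to the feature than to the background
    bw(cand(grow)) = true;                                   % accept those candidates into the feature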
I have an image with noise. I want to remove all background variation from the image and get a plain image. My image is a retinal image, and I want only the blood vessels and the retinal ring to remain. How do I do it? Image 1 is my original image and image 2 is how I want it to be.
This is my convolved image with noise:
There are multiple approaches for blood vessel extraction in retina images.
You can find a thorough overview of different approaches in Review of Blood Vessel Extraction Techniques and Algorithms. It covers prominent works for many approaches.
As Martin mentioned, there is the Hessian-based multiscale vessel enhancement filtering by Frangi et al., which has been shown to work well for many vessel-like structures, both in 2D and 3D. There is a MATLAB implementation, FrangiFilter2D, that works on 2D vessel images. The overview fails to mention Frangi but covers other works that use Hessian-based methods. I would still recommend trying Frangi's vesselness approach, since it is both powerful and simple.
Aside from the Hessian-based methods, I would recommend looking into morphology-based methods, since MATLAB provides a good base for morphological operations. One such method is presented in An Automatic Hybrid Method for Retinal Blood Vessel Extraction. It uses a morphological approach with openings/closings together with the top-hat transform, and then complements this with fuzzy clustering and some post-processing. I haven't tried to reproduce their method, but the results look solid and the paper is freely available online.
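As a quick taste of the morphological route (a minimal sketch, not the paper's pipeline), a top-hat transform already enhances thin vessels against a slowly varying background; the file name and structuring-element radius are assumptions to tune to your vessel width:

    I = im2double(rgb2gray(imread('retina.png')));  % hypothetical RGB fundus image
    Ic = imcomplement(I);                 % vessels are dark in fundus images, so work on the complement
    enh = imtophat(Ic, strel('disk', 8)); % top-hat: keeps bright structures thinner than the disk radius
    bw = imbinarize(enh, 'adaptive');     % rough vessel mask; the paper replaces this with fuzzy clustering
    imshow(bw)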
This is not an easy task.
Detecting the boundaries of the blood vessels: try edge( I, 'canny' ) and play with the threshold parameters to see what you can get (see the sketch after these steps).
A more advanced option is to use this method for detecting faint curves in noisy images.
Once you have reasonably good edges of the blood vessels, you can do the segmentation using watershed / NCuts or a boundary-sensitive version of mean shift.
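A minimal sketch of the Canny step, with placeholder thresholds to play with (the file name is hypothetical):

    I  = im2double(rgb2gray(imread('retina.png')));  % hypothetical RGB input
    BW = edge(I, 'canny', [0.05 0.20]);              % [low high] hysteresis thresholds; tune per image
    imshowpair(I, BW, 'montage')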
Some pointers:
- The blood vessels seem to have relatively constant thickness, much like text strokes. Would you consider using the Stroke Width Transform (SWT) to identify them? A mex implementation of SWT can be found here.
- If you have reasonably good boundaries, you can consider this approach for segmentation.
Good luck.
I think you would be better served using a filter based on tubes. There is a filter based on the work of Frangi, often dubbed the Frangi filter, which can help you identify the vasculature in the retina. The filter is already written for MATLAB and a public version is available here. If you would like to read about the underlying research, search for 'Multiscale vessel enhancement' by Frangi (1998). Another group that has done work in the same field is Sato et al.
Sorry for the lack of a link for the last one; I could only find paywalled sites for the research paper on this computer.
Hope this helps.
Here is what I would do: basically, traditional image arithmetic to estimate the background and then subtract it from the input image. This gives you the desired result without the background. Below are the steps (a sketch follows them):
Use a median filter with a large kernel as the first step. This estimates the background.
Divide the input image by the output of step 1. You may have to shift the denominator a little (+1) to avoid division by zero.
Quantize to an 8-bit (or n-bit) integer, matching the bit depth of the original image.
The output of step 3 above is the background. Subtract it from the original image to get the desired result; this clips all the negative values as well.
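A minimal MATLAB sketch of these steps as written, assuming a uint8 grayscale input; the file name and kernel size are placeholders:

    I  = imread('retina.png');                % hypothetical uint8 grayscale input
    bg = medfilt2(I, [61 61], 'symmetric');   % step 1: large median kernel estimates the background
    r  = double(I) ./ (double(bg) + 1);       % step 2: divide; +1 shifts the denominator away from zero
    q  = im2uint8(mat2gray(r));               % step 3: requantize to 8 bits
    out = imsubtract(I, q);                   % step 4: uint8 subtraction clips negative values to zero
    imshow(out)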
I am going through feature detection algorithms and a lot of things seem to be unclear. The original paper is quite complicated for beginners in image processing to understand. I would be glad if these could be answered:
What are the features which are being detected by SURF and SIFT?
Is it necessary that these have to be computed on gray scale images?
What does the term "descriptor" mean in simple words?
Generally, how many features are selected/extracted? Is there a criterion for that?
What does the size of the Hessian matrix determine?
What is the size of the features being detected? It is said that the size of a feature is the size of the blob. So, if the size of the image is M*N, will there be M*N features?
These questions may seem too trivial, but please help.
I will try to give an intuitive answer to some of your questions; I don't know the answers to all of them.
(You didn't specify which paper you are reading)
What are the features and how many features are being detected by SURF and SIFT?
Normally a feature is any part of an image around which you select a small block. You move that block by a small distance in all directions; if you find considerable variation between the block you selected and its surroundings, it is considered a feature. Suppose you moved your camera a little to take the image: you would still detect this feature. That is their importance. The best example of such a feature is a corner in the image. Even edges are not such good features: when you move your block along an edge line, you don't find any variation, right?
Check this image to understand what I said: only at the corner do you get considerable variation while moving the patches; in the other two cases you won't get much.
Image link : http://www.mathworks.in/help/images/analyzing-images.html
A very good explanation is given here : http://aishack.in/tutorials/features-what-are-they/
This is the basic idea, and the algorithms you mentioned make it more robust to several variations and solve many issues. (You can refer to their papers for more details.)
Is it necessary that these have to be computed on gray scale images?
I think so. In any case, OpenCV computes them on grayscale images.
What does the term "descriptor" mean in simple words?
Suppose you found features in one image, say an image of a building. Now you take another image of the same building, but from a slightly different direction, and you find features in the second image as well. But how can you match these features? Which feature in image 2 does feature 1 in image 1 correspond to? (As a human, you can do it easily, right? This corner of the building in the first image corresponds to that corner in the second image, and so on. Very easy.)
A feature by itself just gives you a pixel location. You need more information about that point to match it with others, so you have to describe the feature; this description is called a "descriptor". There are algorithms for describing features, and you can see them in the SIFT paper.
Check this link also : http://aishack.in/tutorials/sift-scale-invariant-feature-transform-introduction/
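If you happen to have MATLAB's Computer Vision Toolbox, the whole detect-describe-match pipeline can be sketched in a few lines (the image names are placeholders); OpenCV's SIFT/SURF interfaces follow the same pattern:

    I1 = rgb2gray(imread('building1.png'));   % hypothetical image pair of the same scene
    I2 = rgb2gray(imread('building2.png'));
    p1 = detectSURFFeatures(I1);              % detection: interest points (corners, blobs)
    p2 = detectSURFFeatures(I2);
    [f1, v1] = extractFeatures(I1, p1);       % description: a descriptor vector per point
    [f2, v2] = extractFeatures(I2, p2);
    pairs = matchFeatures(f1, f2);            % matching: compare descriptors between images
    showMatchedFeatures(I1, I2, v1(pairs(:,1)), v2(pairs(:,2)), 'montage')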
Generally, how many features are selected/extracted? Is there a criterion for that?
During processing you will see different thresholds being applied, weak keypoints being removed, etc. That is all part of the plan; you need to understand the algorithm to understand these things. Yes, you can specify these thresholds and other parameters (in OpenCV), or you can leave the defaults. If you check SIFT in the OpenCV docs, you can see function parameters to specify the number of features, the number of octave layers, the edge threshold, etc.
What does the size of the Hessian matrix determine?
That I don't know exactly; it is just a threshold for the keypoint detector. Check the OpenCV docs: http://docs.opencv.org/modules/nonfree/doc/feature_detection.html#double%20hessianThreshold
I have several images of a pugmark with a lot of irrelevant background. I cannot use intensity-based algorithms to separate the background from the foreground.
I have tried several methods; one of them is detecting the object in a homogeneous-intensity image,
but this does not work with rough-texture images like:
http://img803.imageshack.us/img803/4654/p1030076b.jpg
http://imageshack.us/a/img802/5982/cub1.jpg
http://imageshack.us/a/img42/6530/cub2.jpg
There could be three possible methods:
1) reduce the roughness of the image to obtain a smoother texture, i.e. a flatter surface;
2) detect the pugmark-like shape in these images by defining a rough pugmark shape in a database and then removing the background, to obtain an image like http://i.imgur.com/W0MFYmQ.png;
3) detect the regions with depth and separate them from the background based on the difference in their depths.
Please tell me if any of these methods would work, and if so, how to implement them.
I have a hunch that this problem could benefit from using polynomial texture maps.
See here: http://www.hpl.hp.com/research/ptm/
You might want to consider top-down information in the process. See, for example, this work.
It looks like you're close enough to the pugmark, so I think you should be able to detect pugmarks using the Viola-Jones algorithm. Maybe a PCA-like algorithm such as Eigenfaces would work too; even if you're not trying to recognize a particular pugmark, it can still be used to tell whether or not there is a pugmark in the image.
Have you tried edge detection on your image? I guess it should be possible to fine-tune the Canny edge detector thresholds to get rid of the noise (if that's not good enough, low-pass filter your image first), then do shape recognition on what remains (you would then be in the field of geometric feature learning and structural matching). Viola-Jones, and possibly a PCA-like algorithm, would be my first try, though.
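A minimal sketch of the low-pass-then-Canny idea in MATLAB; the file name, Gaussian sigma, and thresholds are guesses to tune against the texture noise:

    I  = im2double(rgb2gray(imread('pugmark.jpg')));  % hypothetical input
    Is = imgaussfilt(I, 2);                           % low-pass first to suppress the rough texture
    BW = edge(Is, 'canny', [0.10 0.30]);              % raise the thresholds until the noise drops out
    imshowpair(I, BW, 'montage')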
I am currently doing a project on recognizing vehicle license plates from the rear side. I have done the OCR as a preliminary step, but I have no idea how to detect the rectangular license plate (the area of the car I am concerned with). I have read a lot of papers, but nowhere did I find useful information about recognizing the rectangular area of the license plate. I am doing my project using MATLAB. Please, anyone, help me with this...
Many thanks
As you alluded to, there are at least two distinct phases:
Locating the number plate in the image
Recognising the license number from the image
Since number plates do not embed any location marks (as found in QR codes, for example), the complexity of recognising the number plate within the image is reduced by limiting the range of transformations in the incoming image.
The success of many ANPR systems relies on the accuracy of the position and timing of the capturing equipment to obtain an image which places the number plate within a predictable range of distortion.
Once the image is captured, the location phase can be handled by using statistical analysis to locate a "number plate"-shaped region within the image, i.e. one which has the correct proportions for the perspective. This article describes one such approach.
This paper and another one describe using a Sobel edge detector to locate the vertical edges in the number plate. The reasoning is that the letters produce more vertical lines than the background does.
Another paper compares the effectiveness of several techniques (including Sobel detection and Haar wavelets) and may be a good starting point.
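In MATLAB, the vertical-edge cue is easy to try; here is a hedged sketch that scores each row by its density of vertical edges (the file name and smoothing window are assumptions):

    I = rgb2gray(imread('car.jpg'));             % hypothetical rear-view image
    V = edge(I, 'sobel', [], 'vertical');        % keep only vertical edges: letters are rich in them
    rowScore = movmean(sum(V, 2), 15);           % density of vertical edges per row, smoothed
    [~, r] = max(rowScore);                      % the plate band should sit near the densest rows
    imshow(I); hold on
    line([1 size(I,2)], [r r], 'Color', 'r');    % mark the candidate row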
I did my project on 'OCR-based Vehicle Identification'.
In general, LPR consists of three main phases: license plate extraction from the captured image, image segmentation to extract individual characters, and character recognition. All of these phases are challenging, as they are highly sensitive to weather conditions, lighting conditions, license plate placement, and other artefacts like frames, symbols or logos placed on the licence plate. In India, the license number is written either in one row or in two rows.
For an LPR system, both speed and accuracy are very important factors. In some of the literature the accuracy level is good but the speed of the system is low: fuzzy logic and neural network approaches, for example, reach good accuracy but are very time-consuming and complex. In our work we maintained a balance between time complexity and accuracy. We used an edge detection method with vertical and horizontal processing for number plate localization; the edge detection is done with the Roberts operator. Connected component analysis (CCA) with appropriate thresholding is used for segmentation. For character recognition we used template matching with a correlation function, and to improve the matching we used an enhanced database.
My Approach for Project
Input image from webcam/camera.
Convert image into binary.
Detect number plate area.
Segmentation.
Number identification.
Display on GUI.
My Approach for Number Plate Extraction
Take input from webcam/camera.
Convert it into gray-scale image.
Calculate threshold value.
Edge detection using the Roberts operator.
Calculate horizontal projection.
Crop the image horizontally by comparing against 1.3 times the threshold value (see the sketch after this list).
Calculate vertical projection.
Crop image vertically.
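A hedged MATLAB sketch of the projection-based cropping above; note that I am interpreting the "threshold value" as the mean projection, which is my assumption, not necessarily the author's:

    G  = rgb2gray(imread('car.jpg'));          % hypothetical input
    E  = edge(G, 'roberts');                   % Roberts edge detection
    hp = sum(E, 2);                            % horizontal projection (edge count per row)
    rows = hp > 1.3 * mean(hp);                % keep rows above 1.3x the (assumed) threshold
    vp = sum(E(rows, :), 1);                   % vertical projection within the kept band
    cols = vp > 1.3 * mean(vp);                % crop vertically the same way
    plate = G(rows, cols);                     % candidate plate region
    imshow(plate)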
My Approach for Segmentation
Convert the extracted image into a binary image.
Take the complement of the extracted binary image.
Remove connected components whose pixel count is less than 2% of the area.
Calculate the number of connected components.
For each connected component, find the row and column values.
Calculate the dynamic threshold (DM).
Remove unwanted characters from the segmented characters by applying certain conditions.
Store the segmented characters' coordinates.
My Approach for Recognition
Initialize the templates.
For each segmented character, repeat steps 2 to 7.
Resize the segmented character to the database image size, i.e. 24x42.
Find the correlation coefficient of the segmented character with each database image and store the values in an array (see the sketch after this list).
Find the index position of the maximum value in the array.
Find the letter linked to that index value.
Store that letter in an array.
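A hedged MATLAB sketch of the correlation matching above, assuming chars and templates are cell arrays of segmented character images and database images, and that the templates are ordered A-Z then 0-9 (both assumptions; the row/column orientation of 24x42 is also my guess):

    labels = ['A':'Z', '0':'9'];                  % assumed ordering of the template database
    plate = '';
    for k = 1:numel(chars)                        % 'chars' = cell array of segmented character images
        c = imresize(chars{k}, [42 24]);          % step 3: resize to the database size (24x42)
        scores = zeros(1, numel(templates));      % 'templates' = cell array of database images
        for t = 1:numel(templates)
            scores(t) = corr2(double(c), double(templates{t}));  % step 4: correlation coefficient
        end
        [~, idx] = max(scores);                   % step 5: index of the best match
        plate(end+1) = labels(idx);               % steps 6-7: map the index to a letter and store it
    end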
Check out OpenALPR (http://www.openalpr.com). It recognizes plate regions using OpenCV and the LBP/Haar algorithm, which allows it to recognize both light-on-dark and dark-on-light plate regions. After it finds the general region, it uses OpenCV to localize the plate based on strong lines/edges in the image.
It's written in C++, so hopefully you can use it. If not, at least it's a reference.