It seems that nothing changes in a .jpg file after running the SIFT demo program from http://www.cs.ubc.ca/~lowe/keypoints/. Does anyone know how it works?
Thanks a lot.
SIFT is an algorithm that generates keypoints through automatic feature detection. Those keypoints are typically used to compare with, or match against, other images. The image itself is not modified. Rather, the algorithm looks for 'distinguishable clusters of pixels' so that an image can 1) distinguish itself from other pictures and 2) be likened to similar images. I have used this beautifully crafted algorithm on several occasions in my research. If you need more clarification, let me know.
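For a quick illustration of what the demo computes, here is a minimal MATLAB sketch; detectSIFTFeatures requires the Computer Vision Toolbox (R2021b or later), and 'scene.jpg' is a hypothetical filename. Note that the source image is never altered; the keypoints live in a separate object:

```matlab
% Minimal sketch: detect SIFT keypoints and overlay them on the image.
% The .jpg file itself is left untouched.
I = im2gray(imread('scene.jpg'));   % SIFT operates on grayscale
points = detectSIFTFeatures(I);     % locations, scales, and orientations
imshow(I); hold on;
plot(points.selectStrongest(50));   % draw the 50 strongest keypoints
hold off;
```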
@Gary Tsui: Can we use SIFT to search for similar parts within one image? If we copy a part of the image and paste it into another part of the same image, is it possible to detect the copy-pasted area using SIFT?
I'm working on an image processing project that has a 6-step algorithm, and I'm stuck on one of the steps.
First of all, the platform I'm using is MATLAB, so if you can supply some samples, that would be great. But if you don't want to write code samples, please just give me some hints, techniques, etc.
Let me explain my problem. I've segmented a .jpg image and cut out some areas of it. Then I save the result as a .png using a mask. The result looks like this (the black part is actually transparent; I made it black to show the problem better):
As you can see in the picture, there are some irrelevant areas. I need to get rid of them, because I want the foreground to be as smooth as possible. As a first attempt, I applied a Gaussian blur to the mask and saved the image as a .png again, but the result isn't satisfying, as you can imagine. I suppose this situation needs a more solid solution than what I have tried.
Edit 1: I used spectral matting, but it doesn't help. The best result I can get looks like this:
As you can see, there are some problems on the face and many problems at the bottom of the picture. I guess I need some kind of edge fixer or edge smoother for the first image above, and it should be faster than matting.
Any MATLAB code samples, techniques, or approaches would be great. If you need further explanation, feel free to ask.
You do not want to just Gaussian-blur the result; you want soft segmentation, a.k.a. matting. As a first stop for image matting I would recommend Spectral Matting by Levin, Rav-Acha, and Lischinski. You'll find some Matlab code there (I used it in the past; very impressive results).
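To see why a soft matte beats a blurred hard mask, here is a minimal compositing sketch; the filenames are hypothetical, and the alpha matte would come from a matting method such as Spectral Matting:

```matlab
% Minimal compositing sketch: a matte gives per-pixel alpha in [0,1], so
% edges blend smoothly instead of being cut hard and then blurred.
F     = im2double(imread('foreground.jpg'));   % segmented foreground
B     = im2double(imread('background.jpg'));   % replacement background
alpha = im2double(imread('alpha.png'));        % soft matte in [0,1]
C = alpha .* F + (1 - alpha) .* B;             % the standard matting equation
imshow(C);
```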
I have a question regarding computer vision; it seems to be a general question, but I am wondering if you might have a clue. Is there an efficient way to distinguish texture images (or photos with repetitive patterns) from, say, realistic photos? The patterns could have exact repetitions, or just major similarity. Essentially, given an image, I am trying to detect whether it is a texture or a pattern-based image, ideally in real time.
For instance, these are considered textures in our context:
http://www.bigchrisart.com/sites/default/files/video/TR_Texture_RockWall.jpg
http://www.colourbox.com/preview/4440275-144135-seamless-geometric-op-art-texture.jpg
Thank you!
I cannot open your first image. I applied the Fourier transform to your second one, and you can see frequency responses at specific points:
You can further process the image by extracting the local maxima of the magnitude; note that they share the same distance to the center (zero frequency). This can be taken as evidence of a repetitive pattern.
For the case where patterns share major similarity rather than exact repetition, it is hard to tell whether the frequency magnitude still shows such an evident response; it depends on what the pattern looks like.
Another possible approach is autocorrelation of your image.
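A minimal MATLAB sketch of both ideas; the filename is hypothetical, and xcorr2 comes from the Signal Processing Toolbox (normxcorr2 in the Image Processing Toolbox is an alternative):

```matlab
% Minimal sketch: look for repetitive structure via the FFT magnitude and
% the autocorrelation. 'texture.jpg' is a hypothetical filename.
I = im2double(im2gray(imread('texture.jpg')));
I = I - mean(I(:));                          % remove the DC component

% Periodic patterns show up as isolated peaks arranged around the
% zero-frequency center of the spectrum.
F = fftshift(abs(fft2(I)));
subplot(1, 2, 1); imshow(log(1 + F), []); title('log FFT magnitude');

% Repetition also appears as regularly spaced side peaks here.
A = xcorr2(I);
subplot(1, 2, 2); imshow(A, []); title('autocorrelation');
```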
I am attempting to do some face recognition and hallucination experiments, and in order to get the best results, I first need to ensure all the facial images are aligned. I am using several thousand images for my experiments.
I have been scouring the Internet for the past few days and have found many different programs that claim to do this; however, due to MATLAB's poor backwards compatibility, many of them no longer work. I have tried several different programs that don't run because they call MATLAB functions that have since been removed.
The closest I found uses the SIFT algorithm; the code can be found here:
http://people.csail.mit.edu/celiu/ECCV2008/
It does help align the images, but unfortunately it also downsamples them, so the result ends up looking quite blurry, which would have a negative effect on any experiments I ran.
Does anyone have any MATLAB code samples, or could you point me in the right direction to code that actually aligns the faces in a database?
Any help would be much appreciated.
Take a look at this recent work on Face Detection, Pose Estimation and Landmark Localization in the Wild. It has a working Matlab implementation, and it is quite a good method.
Once you identify keypoints on all your faces you can morph them into a single reference and work from there.
The easiest way is with PCA and the eigenvectors: find the most representative X and Y directions of the data, which gives you the orientation of the face.
You can find an explanation in this document: PCA Alignment. A rough sketch of the idea follows.
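As an illustration, here is a rough MATLAB sketch of the PCA idea, assuming you already have a binary face mask ('face_mask.png' is a hypothetical filename); it estimates the in-plane orientation from the eigenvectors of the pixel-coordinate covariance:

```matlab
% Rough sketch: estimate in-plane orientation from a binary face mask via
% PCA on the pixel coordinates.
mask = imread('face_mask.png') > 0;
[y, x] = find(mask);                 % coordinates of face pixels
pts = [x, y] - mean([x, y], 1);      % center the point cloud
[V, D] = eig(cov(pts));              % eigenvectors of the 2x2 covariance
[~, k] = max(diag(D));               % index of the principal axis
theta = atan2(V(2, k), V(1, k));     % its angle from the x-axis (radians)
% Rotate so the principal axis is vertical; the sign may need flipping
% because the image y-axis points downward.
aligned = imrotate(double(mask), 90 - rad2deg(theta));
imshow(aligned);
```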
Do you need to detect the faces first, or are they already cropped? If you need to detect the faces, you can use the vision.CascadeObjectDetector object in the Computer Vision System Toolbox.
To align the faces you can try the imregister function in the Image Processing Toolbox. Alternatively, you can use a feature-based approach. The Computer Vision System Toolbox includes a number of interest point detectors, feature descriptors, and a matchFeatures function to match the descriptors between a pair of images. You can then use the estimateGeometricTransform function to estimate an affine or even a projective transformation between two images. See this example for details.
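A minimal sketch of the feature-based route (SURF is used for illustration, and the filenames are hypothetical):

```matlab
% Minimal feature-based alignment sketch with the Computer Vision System
% Toolbox. 'ref.jpg' (reference face) and 'face2.jpg' are hypothetical.
fixed  = im2gray(imread('ref.jpg'));
moving = im2gray(imread('face2.jpg'));

% Detect interest points and extract descriptors in both images.
p1 = detectSURFFeatures(fixed);
p2 = detectSURFFeatures(moving);
[f1, v1] = extractFeatures(fixed,  p1);
[f2, v2] = extractFeatures(moving, p2);

% Match descriptors, then robustly fit a similarity transform.
idx = matchFeatures(f1, f2);
m1  = v1(idx(:, 1));
m2  = v2(idx(:, 2));
tform = estimateGeometricTransform(m2, m1, 'similarity');

% Warp the moving face into the reference frame at full resolution,
% avoiding the downsampling problem mentioned in the question.
aligned = imwarp(moving, tform, 'OutputView', imref2d(size(fixed)));
imshowpair(fixed, aligned, 'montage');
```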
I have an image with noise. I want to remove all background variation from the image and get a plain image. My image is a retinal image, and I want only the blood vessels and the retinal ring to remain. How do I do it? The first image is my original image, and the second is how I want it to be.
This is my convolved image with noise:
There are multiple approaches for blood vessel extraction in retina images.
You can find a thorough overview of different approaches in Review of Blood Vessel Extraction Techniques and Algorithms. It covers prominent works from many approaches.
As Martin mentioned, we have the Hessian-based Multiscale Vessel Enhancement Filtering by Frangi et al., which has been shown to work well for many vessel-like structures, both in 2D and 3D. There is a Matlab implementation, FrangiFilter2D, that works on 2D vessel images. The overview fails to mention Frangi but covers other works that use Hessian-based methods. I would still recommend trying Frangi's vesselness approach, since it is both powerful and simple.
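If you try that implementation, usage is roughly as follows; the option names are taken from the third-party FrangiFilter2D code, so double-check them against the version you download ('retina.png' is a hypothetical filename):

```matlab
% Rough usage sketch for the third-party FrangiFilter2D implementation.
% Option names follow that File Exchange code; verify against your copy.
I = double(im2gray(imread('retina.png')));
options = struct('FrangiScaleRange', [1 8], ... % range of vessel widths to probe
                 'FrangiScaleRatio', 2, ...     % step between scales
                 'BlackWhite', true, ...        % dark vessels on a bright background
                 'verbose', false);
V = FrangiFilter2D(I, options);                 % per-pixel vesselness response
imshow(V, []);
```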
Aside from the Hessian-based methods, I would recommend looking into morphology-based methods, since Matlab provides a good basis for morphological operations. One such method is presented in An Automatic Hybrid Method for Retinal Blood Vessel Extraction. It uses a morphological approach with openings/closings together with the top-hat transform, and then complements the morphological approach with fuzzy clustering and some post-processing. I haven't tried to reproduce their method, but the results look solid and the paper is freely available online.
This is not an easy task.
Detecting the boundaries of the blood vessels: try edge(I, 'canny') and play with the threshold parameters to see what you can get (a minimal sketch follows below).
A more advanced option is to use this method for detecting faint curves in noisy images.
Once you have reasonably good edges for the blood vessels, you can do the segmentation using watershed, NCuts, or a boundary-sensitive version of mean shift.
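Here is the minimal sketch of the edge-detection step; the filename and thresholds are placeholders to tune:

```matlab
% Minimal sketch: Canny edges on the green channel, which usually has the
% best vessel contrast in retinal images. Tune the thresholds by hand.
rgb = imread('retina.jpg');              % hypothetical filename
G   = rgb(:, :, 2);                      % green channel
E   = edge(G, 'canny', [0.05 0.20]);     % [low high] thresholds to play with
imshowpair(G, E, 'montage');
```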
Some pointers:
- The blood vessels seem to have roughly the same thickness, much like text strokes. Would you consider using the Stroke Width Transform (SWT) to identify them? A MEX implementation of SWT can be found here.
- If you have reasonably good boundaries, you can consider this approach for segmentation.
Good luck.
I think you'll be better served using a tube-based (vesselness) filter. There is a filter available based on the work of a man called Frangi, and it is often dubbed the Frangi filter. It can help you identify the vasculature in the retina. The filter is already written for Matlab, and a public version is available here. If you would like to read the underlying research, search for 'Multiscale vessel enhancement filtering' by Frangi (1998). Another group that has done work in the same field is Sato et al.
Sorry for the lack of a link for the last one; I could only find paywalled sites for that paper on this computer.
Hope this helps
Here is what I would do: basically, traditional image arithmetic to estimate the background and then remove it from the input image. This will give you the desired result without the background. The steps are below, with a sketch after the list:
Use a median filter with a large kernel as the first step. This estimates the background.
Divide the input image by the output of step 1. (You may have to shift the denominator a little, e.g. +1, to avoid division by zero.)
Quantize the quotient back to an 8-bit (or n-bit) integer, matching the bit depth of the original image.
The output of step 3 is the image with the background variation flattened out. Alternatively, subtract the step-1 background estimate directly from the original image, clipping the negative values, to get the desired result.
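A minimal sketch of these steps, assuming a grayscale 8-bit retinal image ('retina.png' is a hypothetical filename):

```matlab
% Background flattening by median filtering, as in the steps above.
I  = im2double(imread('retina.png'));

% Step 1: estimate the smooth background with a large median filter.
bg = medfilt2(I, [51 51], 'symmetric');

% Step 2: divide by the background; the small offset avoids division by zero.
flat = I ./ (bg + 1/255);

% Step 3: quantize back to 8 bits.
flat8 = im2uint8(mat2gray(flat));

% Alternative: subtract the background estimate, clipping negative values.
fg = max(I - bg, 0);

imshowpair(flat8, im2uint8(fg), 'montage');
```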
I am going through feature detection algorithms, and a lot of things seem unclear. The original paper is quite complicated for beginners in image processing to understand. I would be glad if these questions were answered:
What are the features which are being detected by SURF and SIFT?
Is it necessary that these be computed on grayscale images?
What does the term "descriptor" mean in simple words?
Generally, how many features are selected/extracted? Is there a criterion for that?
What does the size of Hessian matrix determine?
What is the size of the features being detected? It is said that the size of a feature is the size of the blob. So, if the size of the image is M*N, will there be M*N features?
These questions may seem trivial, but please help.
I will try to give an intuitive answer to some of your questions; I don't know the answers to all of them.
(You didn't specify which paper you are reading)
What are the features, and how many features are being detected by SURF and SIFT?
Normally, a feature is any part of an image around which you select a small block. You move that block by a small distance in all directions. If you find considerable variation between the block you selected and its surroundings, it is considered a feature. Suppose you moved your camera a little to take the image; you would still detect this feature. That is their importance. The best example of such a feature is a corner in the image. Even edges are not such good features: when you move your block along an edge line, you don't find any variation, right?
Check this image to understand what I said: only at the corner do you get considerable variation while moving the patches; in the other two cases you won't get much.
Image link: http://www.mathworks.in/help/images/analyzing-images.html
A very good explanation is given here: http://aishack.in/tutorials/features-what-are-they/
This is the basic idea, and the algorithms you mentioned make it more robust to several variations and solve many issues. (You can refer to their papers for more details.)
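You can check this intuition numerically: slide a patch by one pixel in several directions and compare the sum of squared differences (SSD). At a corner the SSD is large for every shift; along an edge it stays small for shifts parallel to the edge. A small sketch (the image name and patch location are hypothetical):

```matlab
% SSD between a patch and its one-pixel-shifted copies. Large SSD in
% *all* directions indicates a corner-like feature.
I = im2double(im2gray(imread('building.jpg')));
r = 100; c = 150; w = 10;                  % patch center and half-size (hypothetical)
P = I(r-w:r+w, c-w:c+w);                   % reference patch
shifts = [0 1; 1 0; 1 1; -1 1];            % a few test directions
for k = 1:size(shifts, 1)
    dr = shifts(k, 1); dc = shifts(k, 2);
    Q = I(r-w+dr : r+w+dr, c-w+dc : c+w+dc);   % shifted patch
    fprintf('shift (%2d,%2d): SSD = %.4f\n', dr, dc, sum((P(:) - Q(:)).^2));
end
```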
Is it necessary that these have to be computed on gray scale images?
I think so. In any case, OpenCV works on grayscale images.
What does the term "descriptor" mean in simple words?
Suppose you found features in one image, say an image of a building. Now you take another image of the same building, but from a slightly different direction, and find features in the second image as well. But how can you match these features? Say, which feature in image 2 does feature 1 in image 1 correspond to? (As a human, you can do it easily, right? This corner of the building in the first image corresponds to that corner in the second image, and so on. Very easy.)
A feature by itself just gives you a pixel location. You need more information about that point to match it with others, so you have to describe the feature, and this description is called a "descriptor". There are algorithms for describing features; you can see one in the SIFT paper.
Check this link also : http://aishack.in/tutorials/sift-scale-invariant-feature-transform-introduction/
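To make the idea concrete in MATLAB terms: extractFeatures returns one descriptor vector per keypoint, and matchFeatures compares those vectors between two images (SURF is used here for illustration; the filenames are hypothetical):

```matlab
% Each keypoint gets a descriptor vector summarizing its neighborhood;
% matching compares these vectors between the two images.
I1 = im2gray(imread('building1.jpg'));
I2 = im2gray(imread('building2.jpg'));
[d1, p1] = extractFeatures(I1, detectSURFFeatures(I1));  % d1 is N1-by-64
[d2, p2] = extractFeatures(I2, detectSURFFeatures(I2));  % d2 is N2-by-64
pairs = matchFeatures(d1, d2);       % rows of matched descriptor indices
showMatchedFeatures(I1, I2, p1(pairs(:, 1)), p2(pairs(:, 2)), 'montage');
```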
Generally, how many features are selected/extracted? Is there a criterion for that?
During processing, the algorithm applies different thresholds, removes weak keypoints, etc.; it is all part of the design. You need to understand the algorithm to understand these things. Yes, you can specify these thresholds and other parameters (in OpenCV), or you can leave them at their defaults. If you check SIFT in the OpenCV docs, you can see function parameters to specify the number of features, the number of octave layers, the edge threshold, etc.
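For reference, MATLAB's detectSIFTFeatures (Computer Vision Toolbox, R2021b or later) exposes similar knobs; the values below are placeholders for tuning, not recommendations:

```matlab
% Illustrative parameter tuning for detectSIFTFeatures; 'scene.jpg' is a
% hypothetical filename and the thresholds are placeholder values.
I = im2gray(imread('scene.jpg'));
points = detectSIFTFeatures(I, ...
    'ContrastThreshold', 0.02, ... % higher -> fewer, stronger keypoints
    'EdgeThreshold',     10, ...   % rejects poorly localized edge responses
    'NumLayersInOctave', 3);       % scale-space sampling per octave
fprintf('%d keypoints detected\n', points.Count);
```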
What does the size of Hessian matrix determine?
That I don't know exactly; it is just a threshold for the keypoint detector. Check the OpenCV docs: http://docs.opencv.org/modules/nonfree/doc/feature_detection.html#double%20hessianThreshold