I've been trying to do quadrangle detection and localization for weeks. My goal is to have a robust way of getting the 4 points of a quadrangle (rectangle), so I can apply a projective transform to an image and then attach it to the source image. I have tried the classic OpenCV contour method, and also using the Hough transform to find lines and then calculating intersections, but both methods are unusable when applied to real-life images.
So I turned to CNNs for help, but so far I haven't found anyone trying to use a CNN to solve this seemingly simple problem.
My first attempt was to use state-of-the-art object detection and localization methods to get the quadrangle's bounding box, so I could narrow the search for the 4 points and then use image processing and computer vision methods to refine it. But after trying YOLOv2 and Faster R-CNN, the prediction accuracy was not ideal.
So I'm wondering whether there is any way to do this end to end, with training and the forward pass all handled by a single neural network. It must also be able to deal with occlusion reasonably well.
Currently my idea is to remove the fully connected layers and produce a large activation map with the same width and height as the input layer (e.g. 448x448), then optimize the 4 most highly activated areas and use argmax to get the positions. But this method only works for one quadrangle, and it doesn't handle corner occlusions well either.
I'd appreciate it if anyone could provide any suggestions. Thanks a lot!
You are absolutely right about the first methods you mentioned. Hough-transform-like methods are old and not very useful for images in the wild. And of course, the computer vision field has turned toward object detection and recognition with the rise of deep learning.
However, a very nice discussion came up recently:
Have we forgotten about Geometry in Computer Vision?
My suggestion would be contour detection, followed by a Hough transform (use a state-of-the-art variant) to detect the rectangles you want. As for occlusion, you can tune the Hough transform's parameters to be more forgiving of missing edge pixels; a minimal sketch of this pipeline follows below.
You can, for example, check the most recent contour detection methods, as in this recent CVPR paper.
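A minimal sketch of that pipeline in MATLAB (Image Processing Toolbox assumed; the file name and all thresholds are placeholder values, not tuned ones):

    I  = rgb2gray(imread('scene.jpg'));      % hypothetical input image
    BW = edge(I, 'canny');

    [H, theta, rho] = hough(BW);
    peaks = houghpeaks(H, 4, 'Threshold', 0.3 * max(H(:)));
    lines = houghlines(BW, theta, rho, peaks, 'FillGap', 20, 'MinLength', 40);

    % Each line satisfies rho = x*cos(theta) + y*sin(theta); intersect
    % pairs of lines by solving the 2x2 linear system.
    corners = [];
    for i = 1:numel(lines)
        for j = i+1:numel(lines)
            t1 = deg2rad(lines(i).theta);  t2 = deg2rad(lines(j).theta);
            A  = [cos(t1) sin(t1); cos(t2) sin(t2)];
            if abs(det(A)) > 0.1             % skip near-parallel pairs
                corners(end+1, :) = (A \ [lines(i).rho; lines(j).rho])';
            end
        end
    end

Lowering the 'Threshold' of houghpeaks and raising the 'FillGap' of houghlines makes the detection more forgiving of occluded edge pixels.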
Recently, I have had to do a project on multi-view 3D scanning within two weeks, and I have searched through books, journals and websites for 3D reconstruction, including the MathWorks examples. I wrote code to track matched points between two images and reconstruct them into a 3D plot. However, despite using the detectSURFFeatures() and extractFeatures() functions, some of the object points are still not tracked. How can I reconstruct them as well in my 3D model?
What you are looking for is called "dense reconstruction". The best way to do this is with calibrated cameras. Then you can rectify the images, compute disparity for every pixel (in theory), and then get 3D world coordinates for every pixel. Please check out this Stereo Calibration and Scene Reconstruction example.
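A rough sketch of that calibrated pipeline in MATLAB (Computer Vision Toolbox assumed; 'stereoParams.mat' and the image file names are placeholders, with the parameters coming from a prior calibration session such as the Stereo Camera Calibrator app):

    s = load('stereoParams.mat');            % hypothetical saved calibration
    stereoParams = s.stereoParams;

    I1 = imread('left.png');                 % hypothetical file names
    I2 = imread('right.png');

    [J1, J2] = rectifyStereoImages(I1, I2, stereoParams);

    % Semi-global matching gives a disparity value for (nearly) every pixel.
    disparityMap = disparitySGM(rgb2gray(J1), rgb2gray(J2));

    % Back-project every pixel to 3D world coordinates (units follow the
    % calibration, typically millimetres).
    xyzPoints = reconstructScene(disparityMap, stereoParams);
    pcshow(pointCloud(xyzPoints));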
The tracking approach you are using is fine but will only get sparse correspondences. The idea is that you would use the best of these to determine the difference in camera orientation between the two images. You can then use the camera orientation to get better matches and ultimately to compute dense matches, from which you can produce a depth image.
Tracking every point in an image from frame to frame is hard (it's called scene flow), and you won't achieve it by identifying individual features (such as SURF, ORB, FREAK, SIFT, etc.), because these features are by definition 'special' in that they can be clearly identified between images.
If you have access to the Computer Vision Toolbox in MATLAB, you could use its matching functions.
You can start, for example, by checking out this article about disparity and the related MATLAB functions.
In addition, you can read about different matching techniques such as block matching, semi-global block matching and global optimization procedures, just to name a few keywords. But be aware that the topic of stereo matching is a huge one.
I am attempting to do some face recognition and hallucination experiments and in order to get the best results, I first need to ensure all the facial images are aligned. I am using several thousand images for experimenting.
I have been scouring the Internet for the past few days and have found many different programs which claim to do so; however, due to MATLAB's poor backwards compatibility, many of them no longer work. I have tried several different programs which don't run because they call MATLAB functions that have since been removed.
The closest I found was using the SIFT algorithm; the code can be found here:
http://people.csail.mit.edu/celiu/ECCV2008/
It does help align the images, but unfortunately it also downsamples them, so the result ends up looking quite blurry, which would have a negative effect on any experiments I ran.
Does anyone have any MATLAB code samples, or could anyone point me in the right direction to code that actually aligns faces in a database?
Any help would be much appreciated.
You can find this recent work on Face Detection, Pose Estimation and Landmark Localization in the Wild. It has a working MATLAB implementation and it is quite a good method.
Once you identify keypoints on all your faces you can morph them into a single reference and work from there.
The easiest way is with PCA and the eigenvectors: find the directions that best represent the X and Y spread of the data, and you'll get the orientation of the face.
You can find an explanation in this document: PCA Alignment
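If the suggestion is to run PCA on the pixel coordinates, a minimal hedged sketch could look like this (the binary face mask BW and the rotation convention are assumptions):

    % Estimate the face's dominant orientation from foreground pixel
    % coordinates via PCA, assuming 'BW' is a binary mask of the face.
    [y, x] = find(BW);
    pts = [x, y] - mean([x, y], 1);      % centre the pixel coordinates
    [V, D] = eig(cov(pts));              % principal directions
    [~, k] = max(diag(D));
    ang = atan2d(V(2, k), V(1, k));      % angle of the dominant axis
    aligned = imrotate(BW, ang - 90);    % make the dominant axis vertical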
Do you need to detect the faces first, or are they already cropped? If you need to detect the faces, you can use the vision.CascadeObjectDetector object in the Computer Vision System Toolbox.
To align the faces you can try the imregister function in the Image Processing Toolbox. Alternatively, you can use a feature-based approach. The Computer Vision System Toolbox includes a number of interest point detectors, feature descriptors, and a matchFeatures function to match the descriptors between a pair of images. You can then use the estimateGeometricTransform function to estimate an affine or even a projective transformation between two images. See this example for details.
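A hedged sketch of the feature-based route just described (Computer Vision System Toolbox assumed; the file names are placeholders):

    fixed  = rgb2gray(imread('face_ref.jpg'));   % hypothetical file names
    moving = rgb2gray(imread('face_new.jpg'));

    p1 = detectSURFFeatures(fixed);
    p2 = detectSURFFeatures(moving);
    [f1, v1] = extractFeatures(fixed,  p1);
    [f2, v2] = extractFeatures(moving, p2);

    pairs = matchFeatures(f1, f2);
    m1 = v1(pairs(:, 1));
    m2 = v2(pairs(:, 2));

    % Robustly estimate a projective transform (RANSAC under the hood),
    % then warp the new face into the reference frame.
    tform   = estimateGeometricTransform(m2, m1, 'projective');
    aligned = imwarp(moving, tform, 'OutputView', imref2d(size(fixed)));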
I have an image with noise. I want to remove all background variation from the image and get a plain image. My image is a retinal image, and I want only the blood vessels and the retinal ring to remain; how do I do it? The first image is my original image and the second is how I want it to be.
This is my convolved image with noise:
There are multiple approaches for blood vessel extraction in retina images.
You can find a thorough overview of different approaches in Review of Blood Vessel Extraction Techniques and Algorithms. It covers prominent works from many approaches.
As Martin mentioned, there is the Hessian-based Multiscale Vessel Enhancement Filtering by Frangi et al., which has been shown to work well for many vessel-like structures, both in 2D and 3D. There is a MATLAB implementation, FrangiFilter2D, that works on 2D vessel images. The overview fails to mention Frangi but covers other works that use Hessian-based methods. I would still recommend trying Frangi's vesselness approach, since it is both powerful and simple.
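A hedged sketch using that File Exchange FrangiFilter2D (the scale range, polarity, threshold and file name are assumptions to start from):

    I = im2double(rgb2gray(imread('retina.jpg')));   % hypothetical file name
    opts = struct('FrangiScaleRange', [1 8], ...
                  'FrangiScaleRatio', 2, ...
                  'BlackWhite', true);    % vessels darker than background
    V  = FrangiFilter2D(I, opts);         % vesselness response
    BW = V > 0.05;                        % assumed threshold
    imshowpair(I, BW, 'montage');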
Aside from the Hessian-based methods, I would recommend looking into morphology-based methods, since MATLAB provides a good base for morphological operations. One such method is presented in An Automatic Hybrid Method for Retinal Blood Vessel Extraction. It uses a morphological approach with openings/closings together with the top-hat transform, and then complements it with fuzzy clustering and some post-processing. I haven't tried to reproduce their method, but the results look solid and the paper is freely available online.
This is not an easy task.
Detecting the boundary of blood vessels: try edge(I, 'canny') and play with the threshold parameters to see what you can get (a sketch follows below).
A more advanced option is to use this method for detecting faint curves in noisy images.
Once you have reasonably good edges of the blood vessels, you can do the segmentation using watershed/NCuts or a boundary-sensitive version of mean shift.
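A hedged starting point for the Canny step above (the thresholds, cleanup size and file name are assumptions to experiment from):

    I  = rgb2gray(imread('retina.jpg'));      % hypothetical file name
    BW = edge(I, 'canny', [0.04 0.10]);       % [low high] hysteresis thresholds
    BW = bwareaopen(BW, 30);                  % drop tiny noise components
    imshowpair(I, BW, 'montage');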
Some pointers:
- The blood vessels seem to have relatively uniform thickness, much like text strokes. Would you consider using the Stroke Width Transform (SWT) to identify them? A MEX implementation of SWT can be found here.
- If you have reasonably good boundaries, you can consider this approach for segmentation.
Good luck.
I think you'd be better served using a filter based on tubular structures. There is a filter available based on the work of Frangi, often dubbed the Frangi filter, which can help you identify the vasculature in the retina. The filter is already written for MATLAB and a public version is available here. If you would like to read the underlying research, search for 'Multiscale vessel enhancement' by Frangi (1998). Another group that has done work in the same field is Sato et al.
Sorry for the lack of a link for the last one; I could only find paywalled versions of the paper on this computer.
Hope this helps
Here is what I would do: basically, traditional image arithmetic to estimate the background and then remove it from the input image. This will give you the desired result without the background. Below are the steps, followed by a rough sketch:
1. Use a median filter with a large kernel as the first step. This will estimate the background.
2. Divide the input image by the output of step 1. (You may have to add a small offset (+1) to the denominator to avoid division by zero.)
3. Quantize back to 8-bit (or n-bit) integers, depending on the bit depth of the original image.
4. The output of step 3 above is the background. Subtract it from the original image to get the desired result; this also clips all the negative values.
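A rough sketch of these steps as written (the kernel size, offset and file name are assumptions):

    I  = im2double(imread('retina_noisy.png'));   % hypothetical file name

    % 1. Large-kernel median filter to estimate the background.
    bg = medfilt2(I, [51 51], 'symmetric');

    % 2. Divide by the (slightly offset) background.
    flat = I ./ (bg + 1e-3);

    % 3. Requantize to 8 bits.
    flat8 = im2uint8(mat2gray(flat));

    % 4. Subtract from the original; uint8 arithmetic clips negatives to 0.
    result = im2uint8(I) - flat8;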
In an image I need to find a "table": a simple rectangle.
The problem is edge recognition, because the potential photos will be "dark".
I tried edge detection (Sobel, Canny, LoG, ...) followed by a Hough transform and line finding, but these algorithms are not enough for this task.
Some things that can help me:
- it is a rectangle, but seen in perspective (something like fitting a perspective rectangle?)
- the object MUST cover at least, say, 90% of the photo (so I know I should look near the photo edges)
- the rectangle has almost uniform color (for example a wooden dining table)
- I need to find at least "only" the 4 corners (though yes, it would be better to find the edges of the table)
I know how the Sobel, Canny and LoG algorithms work, and the Hough transform as well, and naturally those algorithms fail on dark or low-contrast images. But is there some other method, for example one based on "fitting"?
Images showing the kind of photo I can get (as you can see, it would be dark) and what I need to find:
And this is a really "nice" picture (without noise). I tested it on noisier pictures and the result was simply horrible.
Result for this picture with the current algorithm, LoG (with the other ones it looks the same):
I know image and edge recognition is not a simple challenge, but are there some newer, better methods or anything else I could try?
In one of the posts here I found the LSD algorithm. It seems very nicely described, and it appears to recognize straight lines really well. Do you think it would be better to use it instead of Canny or Sobel detection?
Another solution would be corner detection; on my sample images it works better, but it finds too many points and there would be a problem with runtime, since I would need to connect all the points and "find" the table.
Another solution:
I thought about point-to-point mapping: I would have some "virtual" table and try to map the table above onto that "virtual" table (a simple 2D square :] ). But I think point-to-point mapping will give me big errors, or it will not work.
Does anyone have any advice on which algorithm to use?
I tried detecting edges in FIJI and then loading the edge-detected image into MATLAB, but with Hough it works badly as well.
What do you think would be best to use? In short, I need an algorithm that works on low-contrast, dark images.
I'd try some modified snakes algorithm:
You parameterize your rectangle with 4 points and initialize them somewhere in the image corners. Then you move the points towards image features using some optimization algorithm (e.g. gradient descent, simulated annealing, etc.).
The image features could be a combination of edge features (e.g. Sobel directly, or Sobel of a Gaussian-filtered image) evaluated on the lines between those four points, and corner features evaluated at those 4 points.
Additionally you can penalize unlikely rectangles (maybe depending on the angles between the points or on the distance to the image boundary).
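A hedged sketch of this idea in MATLAB (everything here, from the smoothing scale to the coordinate-descent moves, is an assumption-level illustration rather than tuned code):

    % Four-point "snake": maximize the summed edge strength along the
    % quadrilateral's sides via simple coarse-to-fine coordinate descent.
    I = im2double(rgb2gray(imread('table.jpg')));  % hypothetical file name
    E = imgradient(imgaussfilt(I, 2));             % smoothed edge magnitude

    [h, w] = size(I);
    P = [1 1; w 1; w h; 1 h];                      % init at the image corners

    step = 8;
    while step >= 1
        improved = true;
        while improved
            improved = false;
            for k = 1:4                            % try moving each corner
                for d = step * [1 0; -1 0; 0 1; 0 -1]'
                    Q = P;  Q(k, :) = Q(k, :) + d';
                    if quadEdgeScore(Q, E) > quadEdgeScore(P, E)
                        P = Q;  improved = true;
                    end
                end
            end
        end
        step = step / 2;                           % refine the move size
    end

    function s = quadEdgeScore(P, E)
    % Mean edge magnitude sampled along the four sides of quadrilateral P.
    s = 0;
    for k = 1:4
        a = P(k, :);  b = P(mod(k, 4) + 1, :);
        n = max(2, round(norm(b - a)));
        x = linspace(a(1), b(1), n);  y = linspace(a(2), b(2), n);
        s = s + mean(interp2(E, x, y, 'linear', 0));
    end
    end

A penalty term for unlikely rectangles (odd angles, corners far from the image boundary) could be added to quadEdgeScore in the same way.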
I have two images. In one of the images, my eye is in the center position and in the other image, it is in the left. How do I find out whether my eye is in the left or the right?
I am using MATLAB. Are there any functions for this?
A simple solution is to try to detect the iris using the circular Hough transform.
You can find a lot of material out there. To name a few, these two File Exchange submissions:
Hough Transform for circle detection
Circle Detection via Standard Hough Transform
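On a recent MATLAB release, a hedged alternative sketch using the built-in imfindcircles (the radius range and file name are assumptions):

    I = rgb2gray(imread('eye.jpg'));                  % hypothetical file name
    [centers, radii] = imfindcircles(I, [15 40], ...  % assumed iris radius range
                                     'ObjectPolarity', 'dark');
    if ~isempty(centers)
        side = "right";
        if centers(1, 1) < size(I, 2) / 2    % strongest circle's x-coordinate
            side = "left";
        end
        disp("Eye is on the " + side)
    end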
This sounds like eye tracking implemented in MATLAB, which is a fairly popular research topic.
If you want a more detailed answer, please answer the following questions:
Do you know the coordinates of your eye in the first image?
What kind of motion is there between the two images? Rotation/translation/scaling/...?
Do you want this to be real-time?
What is the resolution of the images?
Are there going to be more eyes in the image apart from yours?
If you are willing to select the eye in one image you can use template matching to find it in others (for example you can mark it in the first frame of a video and then find it in all other frames).
Look at the normxcorr2 function in MATLAB:
http://www.nd.edu/~hpcc/solaris8_usr_local/src/matlab6.1/help/toolbox/images/normxcorr2.html
This technique is robust to constant illumination change, but will fail if the appearance of the eye changes significantly between the image you took the template from and the image you are searching in.
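A hedged sketch of the template-matching step (the file names are placeholders; the offset arithmetic follows the standard normxcorr2 usage):

    frame    = rgb2gray(imread('frame.png'));
    template = rgb2gray(imread('eye_template.png'));

    c = normxcorr2(template, frame);           % correlation surface
    [~, idx] = max(c(:));
    [ypeak, xpeak] = ind2sub(size(c), idx);

    % normxcorr2 pads the result, so shift back by the template size to
    % get the top-left corner of the match in frame coordinates.
    yoff = ypeak - size(template, 1);
    xoff = xpeak - size(template, 2);

    imshow(frame); hold on
    rectangle('Position', [xoff + 1, yoff + 1, ...
              size(template, 2), size(template, 1)], 'EdgeColor', 'r');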
If you are going to search for the eye in a lot of frames (for example, eye tracking from a webcam), then you should look at stronger techniques such as the Kalman filter or the particle filter (a.k.a. the condensation filter in computer vision).
By using color distance maps, the skin and non-skin areas can be differentiated; the non-skin area then contains the iris. From the iris, the whole eye can be detected. Hope it works.
You should also have a look at Eye Ball Detection in MATLAB; they detected the eyes first and then detected the eyeball.