Edge Detection Along Blood Vessel in Matlab

I am trying to specify a line across a blood vessel in an image, and then have matlab specify the edges of the vessel (which are contained within the line). The next part will be comparing changes in the distance between these edges over time (so across 1000x more images).
I have tried the following code to get started:
I = imread('Obj1.tif');   % load the vessel image
imshow(I,[]);             % display with automatic intensity scaling
improfile                 % interactively draw a line and plot its intensity profile
And I was looking at available methods to detect the edges from the intensity along that plotted line (tangents, maxima/minima, etc.), but I am not convinced this is the best method. I looked into other tools in MATLAB such as the Canny method, Sobel, etc., but the examples for all of these only show how to detect edges throughout the entire image. My coding skills are not sufficient to apply those algorithms along a single line of the user's choosing. Methods that I have looked at on PubMed also seem more complicated than perhaps I need.
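For reference, this rough sketch is the kind of thing I have in mind (the line endpoints are made up, it assumes the vessel is darker than the background, and I'm not sure the smoothing/gradient approach is the right one):
I = imread('Obj1.tif');
x = [100 180];  y = [120 140];                % made-up endpoints of a line across the vessel
c = improfile(I, x, y);                       % intensity values sampled along that line
c = smoothdata(double(c), 'gaussian', 5);     % light smoothing before differentiating
g = gradient(c);                              % slope of the intensity profile
[~, e1] = min(g);                             % steepest drop: one edge of the (dark) vessel
[~, e2] = max(g);                             % steepest rise: the other edge
widthInSamples = abs(e2 - e1);                % edge-to-edge distance along the line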
Does anybody have any ideas or suggestions from the point that I am currently at?
Thank you

Related

ideas on quadrangle/rectangle detection using convolutional neural networks

I've been trying to do quadrangle detection and localization for weeks. My goal is to have a robust way of getting the 4 points of a quadrangle (rectangle), so I can apply a projective transform to an image and then attach it to the source image. I have tried the classic OpenCV contour method, and also using the Hough transform to find lines and then calculating intersections, but those two methods are unusable when applied to real-life images.
So I turned to CNNs for help, but so far I haven't found anyone trying to use a CNN to solve this simple problem.
My first attempt was to use state-of-the-art object detection and localization methods to get the quadrangle's bounding box, so I could narrow the search for the 4 points, and then use image processing and computer vision methods to refine that search. But after trying YOLOv2 and Faster-RCNN, the prediction accuracy was not ideal.
So I'm wondering if there is any way I can do this end to end, with training and feedforward all in a single neural network. It also must be able to deal with occlusion reasonably well.
Currently my idea is to remove the fc-layers and produce a large activation map with the same width and height as the input layer (e.g. 448x448), then optimize the 4 most highly activated areas, using argmax to get their positions. But this method only works for one quadrangle, and it does not handle corner occlusions well either.
I'd appreciate it if anyone can provide any suggestions. Thanks a lot!
You are absolutely right about the first methods you mentioned. Hough-transform-like methods are old and not very useful for images in the wild, and of course the computer vision field turned its attention to object detection and recognition with the rise of deep learning.
However, a very nice discussion came up recently:
Have we forgotten about Geometry in Computer Vision?
My suggestion would be contour detection followed by a (state-of-the-art) Hough transform to detect the rectangles you want. Regarding occlusion, you can set the Hough transform's parameters to be more forgiving of missing edge pixels.
You can, for example, check the most recent contour detection methods, as in a recent CVPR paper.
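For illustration, in MATLAB's Hough implementation the forgiveness for missing edge pixels is mainly the FillGap parameter of houghlines; this is only a sketch, and the image name and parameter values are placeholders:
I  = imread('scene.jpg');                      % placeholder image
if size(I,3) == 3, I = rgb2gray(I); end
BW = edge(I, 'canny');
[H, theta, rho] = hough(BW);
peaks = houghpeaks(H, 8, 'Threshold', 0.3*max(H(:)));
% FillGap merges collinear segments separated by gaps (tolerates missing edge pixels);
% MinLength drops short spurious segments.
lines = houghlines(BW, theta, rho, peaks, 'FillGap', 40, 'MinLength', 60);
Intersections of the strongest, mutually non-parallel lines then give candidate corners of the quadrangle.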

Is there any regularity-detection tool for regions inside an image?

I'm working in MATLAB on some regions inside an image. I'm at a point where I would like to be able to separate regions which exhibit some kind of regularity (e.g., being circle-ish or square-ish) from regions which do not resemble any known figure and which, for my application, are mere noise. I'll illustrate this using a descriptive MS Paint image:
Is there any tool that, most of the time (or even less often, I know this can't be 100/100), will recognize the red thing as being different?
I'll deal with many shapes in a single image, so I don't mind if I carry some red monsters along the way, as long as the majority of them are kicked out. Of course I know the indices of these regions, so I can manipulate them in MATLAB.
Many algorithms come to mind, e.g., getting the boundary and checking for its regularity/the number of times it changes curvature/..., checking for variations in vertical length through different columns (nearly 0 for the linear feature, really high for the red stuff), ...
However, I was hoping for some help from a tool out there. It doesn't matter if this tool won't cover all cases (for example, if it kicks out circles); I've been deliberately broad to get the maximum number of inputs from you guys, since any tool will be inspiring and helpful (and, in any case, we can't expect a perfect answer to the deeper question of recognizing regular shapes, which seems more like an AI research field). I also think that, while broad, this is totally non-subjective, so it should fit on SO. Thank you.
Side note 1: I'll deal mostly with elongated, extended features like the top-right one, so circles are not that relevant.
Side note 2: To be 100% clear, I would need something (be it an already existing tool, or some ideas pointed out by you) that acts on the indices of the shapes, in terms of row-column positions in the original image, or on the boundary of the shape itself.
Side note 3: Apart from tools/suggestions/ideas, you are welcome to write down some lines of code ;) I'm getting the regions as connected components from bwconncomp.
I had to solve a similar problem recently that involved counting the number of indentations on blobs within an image (basically, the connected components returned by bwconncomp). The method I used was to look at curvature changes along the boundary, calculated via the FFT. In your case, the red blobs would have a large number of curvature variations, whereas the black regions would not. It's a pretty easy calculation and relatively fast. The code is on GitHub here:
https://github.com/mjsottile/blobdents
The file of interest is src/countindents.m. A short description of the approach is here:
http://arxiv.org/abs/1501.07692
I went for the easier road as suggested by @Mikhail in the comments.
I found out that regionprops has a really helpful measure called Solidity. Quoting the docs,
Returns a scalar specifying the proportion of the pixels in the convex hull that are also in the region. Computed as Area/ConvexArea.
The convex hull is defined as the smallest convex polygon that can contain the region. So Solidity goes up towards 1 if the shape is fairly regular and has no concavities, and drops towards 0 for my red shape, which leaves a lot of space between itself and its convex hull.
Of course it never reaches 0; the lowest values should belong to something like a plus-shaped sign.
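A minimal sketch of the filtering, assuming BW is the binary image whose components come from bwconncomp as in the question (the 0.85 cutoff is just an illustrative value):
CC    = bwconncomp(BW);
stats = regionprops(CC, 'Solidity', 'PixelIdxList');
keep  = [stats.Solidity] > 0.85;                  % regular-ish regions pass the test
clean = false(size(BW));
clean(vertcat(stats(keep).PixelIdxList)) = true;  % binary image with the irregular blobs removed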

Automated placement of points/landmarks on shape outline using MATLAB

I'm just beginning with Image analysis in MATLAB.
My goal is to do automated image segmentation on images of plant leaves.
I have had reasonable success here thanks to multiple online resources.
The current objective, and the reason why I'm posting this question here, is to be able to place 25 equidistant points along each half of the margin/outline of the leaf, as described in the following image:
For the script to be able to recognize each half of the leaf, the user can place two points within the GUI. One of these user-defined points will be on the base of the leaf and the other on its tip. It would be even better if the script could automatically recognize these two features of the leaf.
For the output, I would like a plain-text file containing the image coordinates of each point.
I'm not asking for a ready-made script here, but looking for a starting point.
One way I think this can be done is by linearizing/opening up the outline so that it becomes a straight line. This can be done by treating one of the user-placed points/landmarks as a breakpoint. Once a linear outline is obtained, it can be broken into two halves at the other user-defined point, and the points can then be placed. One thing to bear in mind is that the placement of points in each half should start from the end that corresponds to the same breakpoint/user-defined point. These straight lines can then be superimposed on the original image for reconstruction.
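A very rough sketch of that idea, assuming a binary leaf mask BW and two clicked points pBase and pTip (both [x y]); the variable names and the output file name are only illustrative, and I don't know yet if this is the right approach:
B = bwboundaries(BW);  b = fliplr(B{1});                 % ordered outline as [x y] rows
if isequal(b(1,:), b(end,:)), b = b(1:end-1,:); end      % drop the duplicated closing point
[~, iBase] = min((b(:,1)-pBase(1)).^2 + (b(:,2)-pBase(2)).^2);   % boundary point nearest the base
[~, iTip ] = min((b(:,1)-pTip(1)).^2  + (b(:,2)-pTip(2)).^2);    % boundary point nearest the tip
b = circshift(b, -(iBase-1), 1);                         % "break" the outline at the base point
iTip = mod(iTip - iBase, size(b,1)) + 1;                 % tip index after the shift
halves = {b(1:iTip,:), flipud(b([iTip:end 1],:))};       % both halves run from base to tip
pts = cell(1,2);
for h = 1:2
    xy = halves{h};
    s  = [0; cumsum(hypot(diff(xy(:,1)), diff(xy(:,2))))];       % arc length along this half
    sq = linspace(0, s(end), 25)';                               % 25 equidistant stations
    pts{h} = [interp1(s, xy(:,1), sq), interp1(s, xy(:,2), sq)]; % landmark coordinates
end
allPts = [pts{1}; pts{2}];
save('leaf_landmarks.txt', 'allPts', '-ascii');                  % plain-text x y coordinates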
Thank you very much.
Parashar

Matlab face alignment code

I am attempting to do some face recognition and hallucination experiments and in order to get the best results, I first need to ensure all the facial images are aligned. I am using several thousand images for experimenting.
I have been scouring the Internet for the past few days and have found many different programs which claim to do so; however, due to MATLAB's poor backwards compatibility, many of the programs no longer work. I have tried several different programs which don't run because they call MATLAB functions that have since been removed.
The closest I found was one using the SIFT algorithm, with code found here:
http://people.csail.mit.edu/celiu/ECCV2008/
That does help align the images, but unfortunately it also downsamples them, so the result ends up looking quite blurry, which would have a negative effect on any experiments I ran.
Does anyone have any MATLAB code samples, or could anyone point me in the right direction towards code that actually aligns faces in a database?
Any help would be much appreciated.
You can look at this recent work on Face Detection, Pose Estimation and Landmark Localization in the Wild. It has a working MATLAB implementation and it is quite a good method.
Once you identify keypoints on all your faces you can morph them into a single reference and work from there.
The easiest way is with PCA and the eigenvectors, to find the most representative X and Y directions of the data. That gives you the orientation of the face.
You can find an explanation in this document: PCA Alignment
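A rough sketch of that idea, assuming you already have an approximate binary mask BW of the face region in image I (the names and the 90-degree offset are illustrative, and the rotation sign may need adjusting for your coordinate convention):
[r, c] = find(BW);                               % coordinates of the face pixels
X = [c - mean(c), r - mean(r)];                  % centred point cloud, [x y] columns
[V, D] = eig(cov(X));                            % principal axes of the face region
[~, i] = max(diag(D));                           % index of the dominant axis
ang = atan2d(V(2,i), V(1,i));                    % its orientation in degrees
aligned = imrotate(I, ang - 90, 'bilinear', 'crop');   % rotate so that axis is roughly vertical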
Do you need to detect the faces first, or are they already cropped? If you need to detect the faces, you can use the vision.CascadeObjectDetector object in the Computer Vision System Toolbox.
To align the faces you can try the imregister function in the Image Processing Toolbox. Alternatively, you can use a feature-based approach. The Computer Vision System Toolbox includes a number of interest point detectors, feature descriptors, and a matchFeatures function to match the descriptors between a pair of images. You can then use the estimateGeometricTransform function to estimate an affine or even a projective transformation between two images. See this example for details.
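As a minimal sketch of the feature-based route (the file names, SURF as the detector, and the 'similarity' transform type are illustrative choices, not requirements):
fixed  = imread('ref_face.png');                 % reference face (placeholder name)
moving = imread('face_001.png');                 % face to be aligned (placeholder name)
if size(fixed,3)  == 3, fixed  = rgb2gray(fixed);  end
if size(moving,3) == 3, moving = rgb2gray(moving); end
ptsF = detectSURFFeatures(fixed);
ptsM = detectSURFFeatures(moving);
[fF, vF] = extractFeatures(fixed,  ptsF);        % descriptors and their valid points
[fM, vM] = extractFeatures(moving, ptsM);
pairs = matchFeatures(fM, fF);                   % match moving -> fixed descriptors
tform = estimateGeometricTransform(vM(pairs(:,1)), vF(pairs(:,2)), 'similarity');
aligned = imwarp(moving, tform, 'OutputView', imref2d(size(fixed)));   % full resolution, no downsampling
imshowpair(fixed, aligned, 'montage');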

"Simple" edge - line - detection

In an image I need to find a "table": a simple rectangle.
The problem is with edge recognition, because the potential photos will be "dark".
I tried edge detection (Sobel, Canny, LoG, ...) and after that a Hough transform and line finding, but these algorithms are not enough for this task.
Some things that can help me:
- it is a rectangle!, but only in perspective view (something like fitting a perspective rectangle?)
- the object MUST cover at least, for example, 90% of the photo (so I know I need to look near the photo edges)
- the rectangle has nearly the same color throughout (for example a wooden dining table)
- I need to find at least "only" the 4 corners (but yes, it would be better to find the edges of the table)
I know how, for example, the Sobel, Canny or LoG algorithms work, and the Hough transform as well. And naturally those algorithms fail on dark or low-contrast images. But is there some other method, for example one based on "fitting"?
Images showing the kind of photo I can get (you can see it would be dark) and what I need to find:
and this is a really "nice" picture (without noise). I tested it on noisier pictures and the result was... simply horrible.
The result for this picture with the current algorithm, LoG (with the other ones it looks the same):
I know image and edge recognition is not a simple challenge, but are there some newer, better methods or something similar that I could try?
In one of the posts here I found the LSD algorithm. It seems very nicely described, and it seems to recognize straight lines really well. Do you think it would be better to use it instead of Canny or Sobel detection?
Another option would be corner detection; on my sample images it works better, but it finds too many points and there would be a problem with running time... I would need to connect all the points and "find" the table.
Another solution:
I thought about point-to-point mapping, i.e. I would have some "virtual" table and try to map the table in the photo onto that "virtual" table (a simple 2D square drawn in Paint :])... but I think point-to-point mapping would give me big errors or would not work.
Does someone have any advice on which algorithm to use?
I tried recognizing the edges in FIJI and then loading the edge-detected image into MATLAB, but with Hough it works badly as well :/
What do you think would be best to use? In short, I need to find some algorithm that works on low-contrast, dark images.
I'd try a modified snakes algorithm:
you parameterize your rectangle with 4 points and initialize them somewhere in the image corners. Then you move the points towards image features using some optimization algorithm (e.g. gradient descent, simulated annealing, etc.).
The image features could be a combination of edge features (e.g. Sobel directly, or Sobel of a Gaussian-filtered image) to be evaluated on the lines between those four points, and corner features to be evaluated at those 4 points.
Additionally you can penalize unlikely rectangles (maybe depending on the angles between the points or on the distance to the image boundary).
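A minimal sketch of that idea, using fminsearch as the optimizer and the mean Sobel magnitude along the four sides as the (negative) energy; the file name, initialization, and smoothing scale are all illustrative, and a real implementation would also add the penalty terms mentioned above:
function fitTableQuad()                               % illustrative name
    I = im2double(imread('dark_table.png'));          % placeholder file name
    if size(I,3) == 3, I = rgb2gray(I); end
    Gmag = imgradient(imgaussfilt(I, 2), 'sobel');    % smoothed Sobel edge-strength map
    [h, w] = size(I);
    % Initialize the 4 corner points close to the image border ([x y] per row),
    % since the table is assumed to cover most of the photo.
    p0 = [0.05*w 0.05*h; 0.95*w 0.05*h; 0.95*w 0.95*h; 0.05*w 0.95*h];
    % Move the corners so that the quad's sides lie on strong edges.
    cost = @(p) quadCost(reshape(p, 4, 2), Gmag);
    pOpt = reshape(fminsearch(cost, p0(:)), 4, 2);
    imshow(I, []), hold on
    plot(pOpt([1:4 1],1), pOpt([1:4 1],2), 'g-', 'LineWidth', 2);
end
function c = quadCost(P, Gmag)
    % Negative mean edge strength sampled along the 4 sides of quad P (4x2, [x y] rows).
    c = 0;
    t = linspace(0, 1, 100)';
    for k = 1:4
        a  = P(k,:);  b = P(mod(k,4)+1,:);
        xy = (1-t)*a + t*b;                                % sample points on side a -> b
        v  = interp2(Gmag, xy(:,1), xy(:,2), 'linear', 0); % edge strength at those points
        c  = c - mean(v);                                  % strong edges give low cost
    end
end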