I am trying to segment gray matter from a T1-weighted brain MRI scan, but I have not found a good tutorial to follow. There are several tools that segment gray matter in MATLAB, but what I need is the underlying algorithm. Please suggest an algorithm that segments gray matter alone from a T1-weighted MRI image accurately.
Why reinvent the wheel? SPM does a good job of segmentation and the MATLAB source code is freely available: http://www.fil.ion.ucl.ac.uk/spm/
You can examine the algorithm it uses and customize it for your own purposes if you wish. It produces probabilistic maps of gray matter, white matter, and CSF that you can use in subsequent analyses. There are also options to run the segmentation in both normalized and native space. I highly recommend it as a starting point; you can branch off from there depending on your needs.
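If you do go the SPM route, working with its output downstream is simple. Here is a minimal sketch in Python (nibabel) rather than MATLAB, assuming SPM has already written a native-space gray-matter probability map (SPM names these with a "c1" prefix); the filename and the 0.5 cutoff are placeholders you would adjust:

```python
# A minimal sketch, not SPM's own code: load a gray-matter probability
# map and binarize it for later analyses.
import nibabel as nib
import numpy as np

gm_img = nib.load("c1subject.nii")   # placeholder filename
gm_prob = gm_img.get_fdata()         # voxel-wise GM probability in [0, 1]

gm_mask = gm_prob > 0.5              # arbitrary cutoff; adjust as needed
print("GM voxels:", int(gm_mask.sum()))

# Save the mask with the original image geometry.
nib.save(nib.Nifti1Image(gm_mask.astype(np.uint8), gm_img.affine),
         "gm_mask.nii")
```

The point is that SPM hands you the probabilistic map, not a binary mask; the thresholding step is yours to choose.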
My project is to count whiteflies in an image using MATLAB. I am new to image processing, so I do not know where to start. I searched for papers on the topic but could not find anything useful. Could you help me find a starting point and suggest some papers that might help? Thanks.
There is a lot of research in this area, but not specific to MATLAB. Try using Google Scholar to search for papers with "computer vision counting" or similar keywords.
Richard Szeliski also has a very good, free to download textbook on Computer Vision which could be helpful: http://szeliski.org/Book/. Topics like Feature Detection, Segmentation and Recognition might be useful, depending on your exact problem.
When you have an idea of what you want to do, have a look at Matlab's Computer Vision toolbox: http://au.mathworks.com/help/vision/index.html
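To make the simplest approach concrete, here is a rough blob-counting sketch. It is in Python/OpenCV rather than MATLAB (the Image Processing Toolbox equivalents are imbinarize, bwconncomp, and regionprops); the filename, threshold polarity, and area bounds are assumptions you would tune on real leaf photos:

```python
# A rough sketch, not a finished solution: threshold, label connected
# components, and count the ones in a plausible size range.
import cv2

img = cv2.imread("flies.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder filename

# Otsu threshold; whiteflies are usually brighter than the leaf, so
# THRESH_BINARY keeps them white. Flip to THRESH_BINARY_INV if not.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

n_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)
min_area, max_area = 20, 2000   # assumed pixel-area range for one fly
count = sum(1 for i in range(1, n_labels)   # label 0 is the background
            if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area)
print("fly count:", count)
```

Counting connected components breaks down when flies touch or overlap; at that point you would look at watershed segmentation or the recognition techniques in Szeliski's book.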
Basically, I need to extract the identification numbers of marathon runners from an image. So far I have been able to isolate the bib region from the whole image. Now I need to extract the numbers from that crop; for example, I need to extract 1430 from the image shown. I have tried methods such as OCR and blob-detection techniques, but they are not successful for all images.
Have you tried using the Stroke Width Transform (SWT)? You can find a Matlab implementation of the first stages of SWT here.
Take a look at this example in the Computer Vision System Toolbox.
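For a rough idea of what such text-detection pipelines do first, here is a sketch of an MSER-based detection stage with OpenCV (not the SWT itself, and not the MATLAB example's code); the filename and the digit-like shape limits are hypothetical, and the surviving boxes would still need grouping and an OCR pass (e.g. Tesseract with a 0-9 whitelist):

```python
# A rough sketch of the detection stage only; recognition comes after.
import cv2

img = cv2.imread("bib.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder filename
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(img)

# Keep boxes with roughly digit-like proportions (assumed limits).
for (x, y, w, h) in bboxes:
    if h > 10 and 0.2 < w / h < 1.2:
        print("candidate digit box:", (x, y, w, h))
```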
I have tried manual detection using least-squares (LS) polynomial fitting here. But that cannot be used in my project, as mine has to be a fully automated system.
Take a look at the Scale-Invariant Feature Transform (SIFT). This video explains it well. You "train" a detector with one or more images of eyelids, and the detector locates similar regions in the input images. It is the de facto general-purpose feature detector, although more specialized tools such as face detectors are faster.
The "scale-invariant" part means it can detect the same object at different sizes; SIFT's orientation assignment also makes the descriptor robust to rotation.
I want to make a program that takes an image as input and outputs text. I know that I can use a neural network to turn an image of a single character into that character. The difficult part is: given an image with text in it, how would I produce the rectangles around each individual character? What method could I use to do that?
A basic approach is to build projection histograms of the black pixels. First, project all pixels onto the vertical axis: the deep valleys in the histogram indicate the separation between text lines (try different angles if the page might be tilted). Then, per line (or per page, if you know the font is monospaced), project the pixels onto a horizontal histogram. This gives a strong indication of the inter-character spaces. At a minimum, this gives you values for the average character height and width that will help you in the next steps.
After that, you need to take care of kerning (where characters overlap horizontally). Find the connected pixels, possibly after first applying dilation or erosion to the image to compensate for scanning artifacts.
Depending on the quality of the scan image you may have to use more advanced techniques, but this will get you going.
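Here is a rough sketch of the projection-profile idea in Python/numpy, assuming a binarized page with dark text on a light background (the filename and the 128 ink threshold are placeholders):

```python
# A rough sketch: row projection to find text lines, then a column
# projection inside each line to find inter-character gaps.
import cv2
import numpy as np

img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename
ink = img < 128                     # assumed "ink" threshold

# Rows with no ink separate the text lines.
row_has_ink = ink.sum(axis=1) > 0
lines, start = [], None
for y, has_ink in enumerate(row_has_ink):
    if has_ink and start is None:
        start = y
    elif not has_ink and start is not None:
        lines.append((start, y))
        start = None

# Inside each line, empty columns are candidate character boundaries.
for top, bottom in lines:
    col_profile = ink[top:bottom].sum(axis=0)
    gaps = np.flatnonzero(col_profile == 0)
    print(f"line rows {top}-{bottom}: {len(gaps)} empty columns")
```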
This doesn't sound like artificial intelligence; it sounds like you're talking about OCR:
http://en.wikipedia.org/wiki/Optical_character_recognition
See Google's Tesseract OCR engine:
http://code.google.com/p/tesseract-ocr/
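Note that Tesseract can also hand you the per-character rectangles the question asks about. A minimal sketch with the pytesseract Python wrapper ("text.png" is a placeholder):

```python
# A minimal sketch: Tesseract reports one box per recognized character.
import pytesseract
from PIL import Image

img = Image.open("text.png")
# image_to_boxes yields one line per character:
# "<char> <left> <bottom> <right> <top> <page>", origin at bottom-left.
for line in pytesseract.image_to_boxes(img).splitlines():
    char, left, bottom, right, top, _ = line.split()
    print(char, (int(left), int(bottom), int(right), int(top)))
```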
EDIT: The original, unedited question was asking about artificial intelligence.
To me, the question as written is not entirely clear.
Since it talks about OCR, I will leave a couple of articles here that may help (they helped me, at least):
Improve OCR Accuracy
How to use image preprocessing to improve the accuracy of Tesseract
Also, as mentioned above, Tesseract is a good open-source OCR engine (the one I personally use as well; from Python you can drive it through the pytesseract wrapper). Another approach you could take is through sklearn; see the sketch below.
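For the sklearn route, here is a minimal sketch on its built-in 8x8 digits toy dataset; real OCR would need your own training data and a segmentation step in front of the classifier:

```python
# A minimal sklearn sketch: an SVM classifier on the toy digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(gamma=0.001)      # a common setting for this toy dataset
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```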
You may also want to check this Stack Overflow post.
I am also pretty sure you can use ResearchGate to look for relevant papers (I found some; I am just not sure they are what you need).
I think this generic answer suits the generic question.
I was looking at CamScanner, Genius Scan, and JotNot and trying to figure out how they work.
They are known as "mobile pocket document scanners." Each of them takes a picture of a document through the iPhone camera, finds the angle/position of the document (because it is nearly impossible to shoot it straight on), straightens the photo, readjusts the brightness, and then turns it into a PDF. The end result looks like a scanned document.
Take a look here at one of the apps, Genius Scan, in action:
http://www.youtube.com/watch?v=DEJ-u19mulI
It looks pretty difficult to implement, but I'm hoping someone smart on Stack Overflow can point me in the right direction!
Does anyone know how one would go about developing something like that? What sort of libraries or image-processing technologies do you think they're using? Does anyone know if there is something open source available?
I found an open source library that does the trick:
http://code.google.com/p/simple-iphone-image-processing
It probably is pretty difficult. You will likely need algorithms or libraries capable of detecting distorted text within bitmaps, analyzing the likely 2D and 3D geometric distortion of the text image, applying image processing to correct that distortion with its inverse, and performing DSP filtering to adaptively adjust the image contrast. On top of that, you will need the iOS APIs to take the photos in the first place.
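To give a feel for the geometric part, here is a rough OpenCV sketch of the detect-and-flatten step; the filename, Canny thresholds, and output size are placeholders, and a real app would also order the detected corners robustly and clean up contrast afterwards:

```python
# A rough sketch: find the page's outline and warp it flat.
import cv2
import numpy as np

img = cv2.imread("photo.jpg")                      # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 75, 200)

# Take the largest contour that simplifies to four corners.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
page = None
for c in sorted(contours, key=cv2.contourArea, reverse=True):
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:
        page = approx.reshape(4, 2).astype(np.float32)
        break

if page is not None:
    # NOTE: a robust version must sort the corners (top-left, top-right,
    # bottom-right, bottom-left) first; contour order is not guaranteed.
    w, h = 800, 1100                               # assumed output size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(page, dst)
    cv2.imwrite("scanned.png", cv2.warpPerspective(img, M, (w, h)))
```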