Segregating digits from an image in matlab [duplicate]

This question already has answers here:
segment digits in an image - Matlab
(2 answers)
Closed 9 years ago.
I am doing a project to recognize the digits in a calculator screen. The full image is shown below.
After some image processing, I have extracted only the screen from the full image, as shown below.
But the digits are overlapping. Can anyone suggest how to remove the bridges or connections between the digits? Would any morphological image processing using bwmorph() be of use? Please help.

Maybe you can try these two ideas and see where they take you:
1) I would clean up the salt-and-pepper noise first.
2) Then use erodedBW = imerode(originalBW, se).
Of course, you have to play around with se to find the structuring element that works best for you; a rough sketch is below.
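As a minimal sketch of both steps (the file name, the area threshold, and the disk radius are guesses you will need to tune for your image):
% Load the already-binarized screen image (file name is hypothetical)
bw = imread('screen.png') > 0;
% 1) Remove small white specks (salt noise): drop connected
%    components smaller than 20 pixels (threshold chosen arbitrarily)
bwClean = bwareaopen(bw, 20);
% 2) Erode with a small disk to break thin bridges between digits
se = strel('disk', 2);            % radius 2 is just a starting point
erodedBW = imerode(bwClean, se);
% Each digit should now be its own connected component
cc = bwconncomp(erodedBW);
fprintf('Found %d connected components\n', cc.NumObjects);
imshow(erodedBW);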

Related

How do you slow down a matlab script so that you can view graphs updating [duplicate]

This question already has answers here:
How to do an animated plot in matlab
(3 answers)
Closed 4 years ago.
Is there a way in matlab to slow down the execution of a script so that you can view the graphs? I currently use breakpoints and step through the code, but that is not ideal for showing demos.
You can try the pause function:
pause(1)
This makes MATLAB sleep for 1 second, so you can watch the graphic being constructed step by step.
https://it.mathworks.com/matlabcentral/answers/55874-how-to-stop-delay-execution-for-specified-time
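For example, a minimal animation loop (the sine plot is just a stand-in for your own graphs) might look like this:
% Draw a sine wave point by point, pausing so each update is visible
x = linspace(0, 2*pi, 50);
figure; hold on;
for k = 1:numel(x)
    plot(x(k), sin(x(k)), 'b.');
    drawnow;        % flush the graphics queue so the point appears now
    pause(0.1);     % slow the loop to a speed the eye can follow
end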

How to separate very close characters in binary image for OCR in matlab? [duplicate]

This question already has an answer here:
What is the typical method to separate connected letters in a word using OCR
(1 answer)
Closed 5 years ago.
I made a basic OCR system in Matlab using correlation. (It's not a professional project, only an exercise, and I am not using Matlab's ocr() function.) My code works almost correctly for clean text images. But if I make the job a little harder (photographing the text from the side, at an angle), my code does not give good results. I use Principal Component Analysis to correct the text alignment, but after this (photo taken at an angle) the characters are very close together and I can't separate them for the recognition process.
Original image, and the result after preprocessing (adaptive thresholding, adjusting, PCA).
How can I separate the characters correctly?
An alternative to what Yves suggests is to erode the image, which is implemented as imerode in Matlab. Perhaps scale the image up first (though it is not needed here).
For example, with this code
ocr(imerode(I,strel('disk',3)))
where I is your "BOOLEAN" black-and-white image, I get
ocrText with properties:
Text: 'BOOLEAN↵↵'
CharacterBoundingBoxes: [9×4 double]
CharacterConfidences: [9×1 single]
Words: {'BOOLEAN'}
WordBoundingBoxes: [14 36 208 43]
WordConfidences: 0.5477
Splitting characters is a pretty difficult problem.
Unless the character widths are constant (which is the case for this image but might not be true with other letters), methods based on projection analysis (the vertical extent of the characters as a function of abscissa) will fail.
Actually, for a method to be effective, it must be font-aware, i.e. know in advance what the alphabet looks like. In other words, you can't separate segmentation from recognition.
A possibility is to attempt to decompose the blob assumed to be made of touching characters (possibly based on projections or known character sizes), perform the recognition, and check the recognition results. Preferably, try several decompositions and keep the best.
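For reference, a minimal sketch of the projection-analysis idea mentioned above (which, as noted, fails when characters actually touch, since no empty columns remain between them):
% bw: binary text image with foreground pixels set to true
proj = sum(bw, 1);                   % ink count per column
gaps = proj == 0;                    % columns with no ink at all
% Candidate split points: centers of the zero-ink runs
d        = diff([0, gaps, 0]);
runStart = find(d == 1);
runEnd   = find(d == -1) - 1;
splits   = round((runStart + runEnd) / 2);
% Cut the image into character candidates at the split points
edges = unique([1, splits, size(bw, 2)]);
for k = 1:numel(edges) - 1
    glyph = bw(:, edges(k):edges(k+1));
    % ... pass glyph to the correlation-based recognizer here
end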

Use importdata with more than 4 digits of precision [duplicate]

This question already has an answer here:
Importing Data to Matlab
(1 answer)
Closed 8 years ago.
My data is a tab-delimited mixture of strings and values. importdata is working pretty well, but it doesn't seem to give more than 4 digits of precision. How can I fix that? I really need more.
Thanks in advance!
Matlab by default shows you only 4 digits of precision, but calculates with many more digits internally.
Try
format long
to see a more precise representation of your data.
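For example:
format short
pi        % displays 3.1416
format long
pi        % displays 3.141592653589793, the full stored precision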

Matrix content is changed when saved as image [duplicate]

This question already has an answer here:
GetPixel after SetPixel gives incorrect result
(1 answer)
Closed 8 years ago.
I applied an encryption technique to the image LENA.jpg and saved the result as an encrypted image. When I read the same matrix back for the decryption process, I noticed a change in the values of the matrix; the image had lost some of its characteristics. When I decrypt the encrypted matrix without saving it as a picture, the output is perfect, but once it is saved it loses its quality. Why is this happening?
I am attaching the decrypted image; you can clearly see some pixels missing.
When you "save as picture" what format are you using? Some are lossy but it sounds like you need a lossless format for this.
What you are looking for is aptly explained in this SO answer.
If you saved the image in .jpg format, this is to be expected: JPEG is by default a lossy format. It does have some lossless variants, though.
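A minimal sketch of the difference, assuming enc is your uint8 encrypted image matrix:
% JPEG quantizes pixel values, so the round trip changes the matrix
imwrite(enc, 'encrypted.jpg');
back = imread('encrypted.jpg');
isequal(enc, back)        % typically false
% PNG is lossless: every pixel value is preserved exactly
imwrite(enc, 'encrypted.png');
back = imread('encrypted.png');
isequal(enc, back)        % true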

Check whether query image exists in template database for matching purpose [duplicate]

This question already has answers here:
Image comparison - fast algorithm
(12 answers)
Closed 9 years ago.
I am working on a dorsal hand vein recognition system. I have already binarised and pre-processed the image, followed by feature extraction (white-pixel coordinates) of the thinned vein patterns, as shown below in the figure (Image 1). These steps were repeated for 10 images, and their coordinates were stored in a .txt file.
Now, let's say I have a query image (Image 2), as below, where all the above-mentioned steps have been applied and the coordinates retrieved.
For the matching purpose, I want to adopt this paper's matching strategy, which states: "An algorithm that, somehow, does the exact same thing is implemented in order to do a similarity matching between binary images. The matching is a two-way process. In the first step, the algorithm scans through the query image and takes every foreground pixel (background pixels can also be taken) value and compares this with the pixel value in the database image at the corresponding location. If it finds the same value at the same position in the database image, this will be taken as a hit count. Otherwise, it will be taken as a miss count, and finally the difference of the hit and the miss count is divided by the total number of foreground pixels in the query image. The result of this division gives a number that indicates how Similar the Query image is to the Database image (SQD). In the second step, the database image is scanned and its foreground pixel elements are compared against the query image as is done in the first step. This will give us a result that indicates how Similar the Database image is to the Query image (SDQ). Then the average of the SQD and the SDQ, the Average Similarity Measure (ASM), is taken as a ranking measure for the retrieval process."
Thank You.
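For reference, a minimal sketch of the SQD/SDQ/ASM measure quoted above, assuming query and db are logical matrices of the same size (the function name asmScore is made up):
function asm = asmScore(query, db)
% Hit: a query foreground pixel that is also foreground in db at the
% same position; miss: a query foreground pixel that is not.
hits = nnz(query & db);
miss = nnz(query & ~db);
sqd  = (hits - miss) / nnz(query);   % similarity of query to database
% Second pass with the roles swapped
hits = nnz(db & query);
miss = nnz(db & ~query);
sdq  = (hits - miss) / nnz(db);      % similarity of database to query
asm = (sqd + sdq) / 2;               % Average Similarity Measure
end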
That is a very challenging problem. When you skeletonize the image you are potentially throwing away information that might be helpful. If you have to work with the skeletonized images, I would extract features of interest and then try to match on them. For example, you could identify all intersections of the veins to get a set of points. You could then do a best fit between the points in two different images to provide a metric of how similar they are.
Pixel-by-pixel comparison is easy: you just hash the image and store the hash, then hash the new image and compare the hashes. But that is going to fail if the images are scaled or, in the case of lossy compression, re-saved.
That's going to essentially match if somebody uploads the same file twice. This may or may not be what you are after.
If not, you need some sort of image similarity algorithm. There is already a question about that here: Image comparison - fast algorithm
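As a rough sketch of the exact-duplicate check (base MATLAB has no built-in file hash, so this leans on the JVM that ships with MATLAB; the file name is a placeholder):
% SHA-256 of a file's raw bytes via Java
md  = java.security.MessageDigest.getInstance('SHA-256');
fid = fopen('query.png', 'r');
data = fread(fid, inf, '*uint8');
fclose(fid);
md.update(typecast(data, 'int8'));   % Java bytes are signed
hash = sprintf('%02x', typecast(md.digest(), 'uint8'));
% Two files are byte-identical exactly when their hashes match
disp(hash);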