How to extract digits (numbers) using Matlab

At work I have to record a lot of data from PNG images. Every time, I have to manually type the digits (e.g. mean\SD 101.1\11) into an Excel sheet and then read it with Matlab. Would it be possible for Matlab to read the digits directly from the PNG image, so that a lot of work could be saved?
I know it might involve pattern recognition, but I still hope someone here has done this before.

You can make use of Optical Character Recognition (OCR). The code for it is available here.
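If you have the Computer Vision Toolbox, a minimal sketch might look like the following ('readings.png' is a hypothetical file name; restricting the character set to digits tends to help on numeric readouts):

img = imread('readings.png');                                % hypothetical file name
res = ocr(img, 'CharacterSet', '0123456789.\');              % limit recognition to digits and separators
nums = str2double(regexp(res.Text, '\d+\.?\d*', 'match'));   % pull the numbers out of the text
disp(nums)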

Related

How to separate very close characters in binary image for OCR in matlab? [duplicate]

(This question was closed as a duplicate of "What is the typical method to separate connected letters in a word using OCR".)
I made a basic OCR system in Matlab using correlation (it's not a professional project, only an exercise, and I am not using Matlab's ocr() function). My code works almost correctly on clean text images. But if I make the job a little harder, taking the photo of the text from the side at an angle, my code no longer gives good results. I use Principal Component Analysis to correct the text alignment, but after this step the characters end up very close together and I can't separate them for the recognition stage.
[Images: the original photo, and the result after preprocessing (adaptive thresholding, adjustment, PCA)]
How can I separate the characters correctly?
An alternative to what Yves suggests is to erode the image, which is implemented as imerode in Matlab. Perhaps scale the image up first (though it is not needed here).
e.g. with this code
ocr(imerode(I,strel('disk',3)))
where I is your "BOOLEAN" black-and-white image, I receive:
ocrText with properties:
Text: 'BOOLEAN↵↵'
CharacterBoundingBoxes: [9×4 double]
CharacterConfidences: [9×1 single]
Words: {'BOOLEAN'}
WordBoundingBoxes: [14 36 208 43]
WordConfidences: 0.5477
Splitting characters is a pretty difficult problem.
Unless the character widths are constant (which is the case for this image but might not be true with other letters), methods based on projection analysis (the vertical extent of the characters as a function of the abscissa) will fail.
Actually, for a method to be effective, it must be font-aware, i.e. know in advance what the alphabet looks like. In other words, you can't separate segmentation from recognition.
A possibility is to attempt to decompose the blob assumed to be made of touching characters (possibly based on projections or known character sizes), perform the recognition, and check the recognition results. Preferably, try several decompositions and keep the best.
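As a rough illustration of the projection-plus-recognition idea, a minimal Matlab sketch (assuming bw is a logical image of one text line, white characters on black) might start like this:

profile = sum(bw, 1);                            % ink per column (vertical projection)
candidates = find(profile <= min(profile) + 1);  % columns with the least ink are cut candidates
% Group neighbouring candidate columns, cut the blob at each group, run the
% recognizer on the pieces, and keep the decomposition with the best score,
% as described above.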

Matlab noise source discontinuity

Using Matlab, I've generated some random noise, filtered it, and then successfully saved it as a GNU Radio-readable file for a file source. In GNU Radio, I set the file source to repeat and viewed the output with a QT GUI Frequency Sink. I can see the filtered noise fine, but every now and then (every 10 seconds or so) the spectrum drops in power and jumps around for about a tenth of a second, then returns to normal. The sample rate for the Matlab filter is 320k, and my GNU Radio sample rate is the same, if that matters.
I think it may have to do with the fact that the noise generated in Matlab is a finite sequence that GNU Radio repeats; the discontinuity seems to happen right when the sequence wraps around. Any idea how I can stop this discontinuity so I can transmit without having to worry about it? If I'm missing any info, please let me know and I'll edit the question. Thanks in advance.
NOTE: I needed to create a Matlab binary file to be able to read it in GNU Radio. GNU Radio reads the binary file from my desktop and uses its contents as the file source.
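For reference, GNU Radio's stock file source expects raw interleaved float32 I/Q samples with no header, so the Matlab side might be a sketch like this (b and a stand in for whatever filter you designed):

fs = 320e3;                                            % sample rate from the question
noise = filter(b, a, randn(1, fs) + 1j*randn(1, fs));  % b, a: your filter coefficients (assumed)
iq = zeros(1, 2*numel(noise), 'single');
iq(1:2:end) = single(real(noise));                     % interleave I...
iq(2:2:end) = single(imag(noise));                     % ...and Q
fid = fopen('noise.bin', 'w');
fwrite(fid, iq, 'float32');                            % headerless binary, as the file source expects
fclose(fid);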

Generating a txt file in a complex format from Matlab data

I'm relatively new to Matlab and am currently using it to calculate pressure cards for rapid dynamic applications in RADIOSS.
The function is done and can calculate time-pressure points.
For the moment I generate only .ascii files to import as curves into the software, but I'd like to write a text file directly readable by RADIOSS (after conversion).
The formatting I need is very specific, and I'd like to know whether such a thing is possible in Matlab. I've been searching on my own for some time now and haven't found really specific formatting options, so I come seeking your advice.
For example, I have n time arrays Te{1 to n} and n pressure arrays Pr{1 to n}; the format needed is shown in the linked image. How can it be done, if it is possible?
The sprintf function is quite powerful and should provide all the facilities you need. Having looked at the image you linked, I don't see anything particularly special.
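Since the exact card layout is only visible in the linked image, the header and field widths below are placeholders, but the general pattern is a sketch like this:

fid = fopen('pressure_cards.txt', 'w');
for i = 1:n
    fprintf(fid, '/FUNCT/%d\n', i);                            % hypothetical card header
    fprintf(fid, '%20.13E%20.13E\n', [Te{i}(:), Pr{i}(:)].');  % one time-pressure pair per line
end
fclose(fid);

fprintf cycles its format over the data in column order, so transposing the [time, pressure] matrix prints one pair per line.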

Fuzzy c-means tcp dump clustering in matlab

Hi, I have some data that's represented like this:
0,tcp,http,SF,239,486,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,8,8,0.00,0.00,0.00,0.00,1.00,0.00,0.00,19,19,1.00,0.00,0.05,0.00,0.00,0.00,0.00,0.00,normal.
It's from the KDD Cup 1999, which was based on the DARPA set.
The text file I have has rows and rows of data like this. In Matlab there is a generic clustering tool you can use by typing findcluster, but it only accepts .dat files.
I'm also not very sure whether it will accept a format like this. I'm also not sure why there are so many trailing zeros in the dump files.
Can anyone help me read this text document and run it through an FCM clustering method in Matlab? Code help is really needed.
FINDCLUSTER is simply a GUI interface for two clustering algorithms: FCM and SUBCLUST.
You first need to read the data from the file; look into the TEXTSCAN function for that.
Then you need to deal with non-numeric attributes; either remove them or convert them somehow. As far as I can tell, the two algorithms mentioned only support numeric data.
Visit the original website of the KDD cup dataset to find out the description of each attribute.
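A minimal sketch, assuming the Fuzzy Logic Toolbox and a file 'kddcup.data' (hypothetical name) whose rows have 41 comma-separated features plus a label, as in the sample above:

fid = fopen('kddcup.data', 'r');
fmt = ['%f %s %s %s' repmat(' %f', 1, 37) ' %s'];  % fields 2-4 and the trailing label are text
C = textscan(fid, fmt, 'Delimiter', ',');
fclose(fid);
X = [C{[1, 5:41]}];                                % keep only the numeric attributes
[centers, U] = fcm(X, 5);                          % 5 clusters, chosen arbitrarily here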

Efficient way to fingerprint an image (jpg, png, etc)?

Is there an efficient way to get a fingerprint of an image for duplicate detection?
That is, given an image file, say a JPG or PNG, I'd like to be able to quickly calculate a value that identifies the image content and is fairly resilient to other aspects of the image (e.g. the image metadata) changing. If it deals with resizing, that's even better.
[Update] Regarding the metadata in JPG files, does anyone know if it's stored in a specific part of the file? I'm looking for an easy way to ignore it - e.g. can I skip the first x bytes of the file or take x bytes from the end of the file to ensure I'm not getting metadata?
Stab in the dark, if you are looking to circumvent metadata and size-related things:
Edge Detection and scale-independent comparison
Sampling and statistical analysis of grayscale/RGB values (average lum, averaged color map)
FFT and other transforms (Good article Classification of Fingerprints using FFT)
And numerous others.
Basically:
Convert the JPG/PNG/GIF or whatever into an RGB byte array, which is independent of the encoding
Use a fuzzy pattern classification method to generate a 'hash of the pattern' in the image ... not a hash of the RGB array as some suggest
Then you want a distributed method of fast hash comparison based on a matching threshold on the encapsulated hash or encoding of the pattern. Erlang would be good for this :)
Advantages are:
Will, if you use any AI/training, spot duplicates regardless of encoding, size, aspect ratio, hue and luminance modification, dynamic range/subsampling differences, and in some cases perspective
Disadvantages:
Can be hard to code... something like OpenCV might help
Probabilistic ... false positives are likely but can be reduced with neural networks and other AI
Slow unless you can encapsulate pattern qualities and distribute the search (MapReduce style)
Check out image analysis books such as:
Pattern Classification 2ed
Image Processing Fundamentals
Image Processing - Principles and Applications
And others
If you are scaling the image, then things are simpler. If not, then you have to contend with the fact that scaling is lossy in more ways than sample reduction.
Using the byte size of the image for comparison would be suitable for many applications. Another way would be to:
Strip out the metadata.
Calculate the MD5 (or other suitable hash) of the image.
Compare that to the MD5 (or whatever) of the potential dupe image (provided you've stripped out the metadata for that one too).
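In Matlab, decoding the image first takes care of the metadata stripping, since imread returns only pixel data; a minimal sketch of the hashing step, using Java's MessageDigest from a Java-enabled Matlab:

function h = imageHash(filename)
    px = imread(filename);                             % decoded pixels only; metadata is ignored
    md = java.security.MessageDigest.getInstance('MD5');
    md.update(typecast(px(:), 'int8'));                % Java bytes are signed
    h = sprintf('%02x', typecast(md.digest(), 'uint8'));
end

Two files whose pixels are byte-identical will then produce the same hash regardless of their metadata.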
You could use an algorithm like SIFT (Scale Invariant Feature Transform) to determine key points in the pictures and match these.
See http://en.wikipedia.org/wiki/Scale-invariant_feature_transform
It is used e.g. when stitching images in a panorama to detect matching points in different images.
You want to perform an image hash. Since you didn't specify a particular language, I'm guessing you don't have a preference. At the very least there's a Matlab toolbox (beta) that can do it: http://users.ece.utexas.edu/~bevans/projects/hashing/toolbox/index.html. Most of the Google results on this are research papers rather than actual libraries or tools.
The problem with MD5ing it is that MD5 is very sensitive to small changes in the input, and it sounds like you want to do something a bit "smarter."
Pretty interesting question. The fastest and easiest option would be to calculate a CRC32 of the content byte array, but that would work only on 100% identical images. For a more intelligent comparison you would probably need some kind of fuzzy-logic analysis...
I've implemented at least a trivial version of this. I transform and resize all images to a very small (fixed-size) black-and-white thumbnail, then compare those. It detects exact duplicates, resized duplicates, and duplicates converted to black and white. It gets a lot of duplicates without a lot of cost.
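A minimal sketch of that thumbnail approach (assuming the Image Processing Toolbox; the 16x16 size is an arbitrary choice, as is any threshold you put on d):

function d = thumbDistance(fileA, fileB)
    ta = makeThumb(fileA);
    tb = makeThumb(fileB);
    d = mean(abs(ta(:) - tb(:)));              % small d suggests a duplicate
end

function t = makeThumb(filename)
    img = imread(filename);
    if ndims(img) == 3
        img = rgb2gray(img);                   % collapse colour to grayscale
    end
    t = double(imresize(img, [16 16])) / 255;  % fixed-size, normalized thumbnail
end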
The easiest thing to do is to do a hash (like MD5) of the image data, ignoring all other metadata. You can find many open source libraries that can decode common image formats so it's quite easy to strip metadata.
But that doesn't work when the image itself is manipulated in any way, including scaling or rotation.
To do exactly what you want, you would have to use image watermarking, but that is patented and can be expensive.
This is just an idea: possibly the low-frequency components present in the DCT of the JPEG could be used as a size-invariant identifier.
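A minimal sketch of that idea (essentially a perceptual hash; the 32x32 and 8x8 block sizes are arbitrary, and imresize/dct2 need the Image Processing Toolbox):

img = imread('photo.jpg');               % hypothetical file name
if ndims(img) == 3
    img = rgb2gray(img);
end
img = imresize(img, [32 32]);            % fixed resampling gives size invariance
D = dct2(double(img));                   % 2-D discrete cosine transform
low = D(1:8, 1:8);                       % keep only the low-frequency block
bits = low(:) > median(low(:));          % 64-bit fingerprint; compare by Hamming distance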