Apple Vision – Can't recognize a single number as region - swift

I want to use VNDetectTextRectanglesRequest from the Vision framework to detect regions in an image containing only one character, the number '9', on a white background. I'm using the following code to do this:
private func performTextDetection() {
    let textRequest = VNDetectTextRectanglesRequest(completionHandler: self.detectTextHandler)
    textRequest.reportCharacterBoxes = true
    textRequest.preferBackgroundProcessing = false
    let handler = VNImageRequestHandler(cgImage: loadedImage.cgImage!, options: [:])
    DispatchQueue.global(qos: .userInteractive).async {
        do {
            try handler.perform([textRequest])
        } catch {
            print("Error")
        }
    }
}

func detectTextHandler(request: VNRequest, error: Error?) {
    guard let observations = request.results, !observations.isEmpty else {
        fatalError("no results")
    }
    print("there is result")
}
The number of observations I get is 0; however, if I provide an image with the text '123' on a black background, '123' is detected as a region with text. The same problem occurs for 2-digit numbers: '22' on a white background also doesn't get detected.
Why does the Vision API only detect numbers of 3+ digits on a white background in my case?

Lone characters continue to be a problem for VNRecognizeTextRequest and VNDetectTextRectanglesRequest in Xcode 12.5 and Swift 5.
I've seen VNDetectTextRectanglesRequest find virtually all the individual words on a sheet of paper, but fail to detect lone characters [when processing the entire image]. Setting the property VNDetectTextRectanglesRequest.regionOfInterest to a smaller region may help.
What has worked for me is to have the single characters occupy more of the region of interest (ROI) for VNRecognizeTextRequest. I tested single characters at a variety of heights, and it became clear that single characters would start reading once they reached a certain size within the ROI.
For some single characters, detection seems to occur when the ROI is roughly three times the width and three times the height of the character itself. That's a rather tight region of interest. Placing it correctly is another problem, but also solvable.
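As a rough illustration of that ROI approach, here is a minimal Swift sketch of running VNRecognizeTextRequest over a tight region of interest. The rect passed in is hypothetical: Vision expects it in normalized coordinates (0...1, origin at the lower left), and here it is assumed to be roughly three character-widths and -heights around the glyph you expect to find.

import Vision

// Sketch only: run text recognition restricted to a small, hypothetical ROI.
// `roi` is a normalized CGRect (0...1, lower-left origin) that you computed elsewhere.
func recognizeLoneCharacter(in cgImage: CGImage, roi: CGRect) {
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        for observation in observations {
            if let candidate = observation.topCandidates(1).first {
                print("Read '\(candidate.string)' with confidence \(candidate.confidence)")
            }
        }
    }
    request.recognitionLevel = .accurate
    request.regionOfInterest = roi   // shrink the area Vision has to consider

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}

In practice the ROI would come from whatever heuristic you use to guess where lone characters sit.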
If processing time isn't an issue for your application, you can create an array [CGRect] spanning a region suspected to contain lone characters.
My suspicion is that when VNRecognizeTextRequest performs an initial check for edge content, edge density, and/or image features that resemble strokes, it exits early if it doesn't find enough candidates. That initial check may simply be an embedded VNDetectTextRectanglesRequest. Whatever the initial check is, it runs fast, so I don't imagine it's that complicated.
For more about stroke detection to find characters, search for SO posts and articles about the Stroke Width Transform. Also this: https://www.microsoft.com/en-us/research/publication/detecting-text-in-natural-scenes-with-stroke-width-transform/. The SWT is meant to work on "natural" images, such as text seen outdoors.
There are some hacks to get around the problem. Some of these hacks are unpleasant, but for a particular application they may be worth it.
Create a grid of small regions of interest (ROIs). Run the text request on one ROI after the other.
As a cheap substitute for VNDetectTextRectanglesRequest, look for regions of the image with edge content that suggests a single character may be present. If nothing else, this could help ignore regions where there is no edge content.
Try using a scaling filter to scale up the image before processing it; that could ensure single characters are big enough to read. (For CIFilters, a very handy resource is https://cifilter.io/.) A sketch of this appears after the list below.
Run multiple passes on your image. First, run OCR on the full image. Then get the bounding boxes for words that were read. Search for suspicious gaps between boxes. Run grids of small ROIs on the suspiciously blank regions.
Use Tesseract as a backup. (https://www.seemuapps.com/swift-optical-character-recognition-tutorial)
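As an example of the scale-up hack mentioned above, here is a hedged Swift sketch using Core Image's CILanczosScaleTransform filter. The scale factor, and the idea of upscaling before handing the image to Vision, are assumptions you would tune and test for your own images.

import CoreImage

// Sketch: upscale a CGImage with Lanczos resampling before running Vision on it,
// on the assumption that larger glyphs are more likely to be detected.
// The factor (e.g. 3.0) is a guess, not a documented requirement.
func upscaled(_ cgImage: CGImage, by factor: CGFloat) -> CGImage? {
    let ciImage = CIImage(cgImage: cgImage)
    guard let filter = CIFilter(name: "CILanczosScaleTransform") else { return nil }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(factor, forKey: kCIInputScaleKey)
    filter.setValue(1.0, forKey: kCIInputAspectRatioKey)
    guard let output = filter.outputImage else { return nil }
    // Render the scaled result back to a CGImage for VNImageRequestHandler.
    let context = CIContext()
    return context.createCGImage(output, from: output.extent)
}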

Related

Detecting text in image using MSER

I'm trying to follow this tutorial http://www.mathworks.com/help/vision/examples/automatically-detect-and-recognize-text-in-natural-images.html to detect text in an image using MATLAB.
As a first step, the tutorial uses detectMSERFeatures to detect textual regions in the image. However, when I use this step on my image, the textual regions aren't detected.
Here is the snippet I'm using:
colorImage = imread('demo.png');
I = rgb2gray(colorImage);
% Detect MSER regions.
[mserRegions] = detectMSERFeatures(I, ...
    'RegionAreaRange',[200 8000],'ThresholdDelta',4);
figure
imshow(I)
hold on
plot(mserRegions, 'showPixelList', true,'showEllipses',false)
title('MSER regions')
hold off
And here is the original image
and here is the image after the first step
Update
I've played around with the parameters, but none seem to detect the textual regions perfectly. Is there a better way to accomplish this than tweaking numbers? Tweaking the parameters won't work for the wide array of images I might have.
Some parameters I've tried and their results:
[mserRegions] = detectMSERFeatures(I, ...
    'RegionAreaRange',[30 100],'ThresholdDelta',12);
[mserRegions] = detectMSERFeatures(I, ...
    'RegionAreaRange',[30 600],'ThresholdDelta',12);
Disclaimer: completely untested.
Try reducing MaxAreaVariation, since your text & background have very little variation (reduce false positives). You should be able to crank this down pretty far since it looks like the text was digitally generated (wouldn't work as well if it was a picture of text).
Try reducing the minimum value for RegionAreaRange, since small characters may be smaller than 200 pixels (increase true positives). At 200, you're probably filtering out most of your text.
Try increasing ThresholdDelta, since you know there is stark contrast between the text and background (reduce false positives). This won't be quite as effective as MaxAreaVariation for filtering, but should help a bit.

Extract Rectangular Image from Scanned Image

I have scanned copies of currency notes from which I need to extract only the rectangular notes.
Although the scanned copies have a mostly blank background, the note itself may be rotated or not aligned correctly. I'm using MATLAB.
Example input:
Example output:
I have tried using thresholding and canny/sobel edge detection to no avail.
I also tried the solution given here but it detects the entire image for cropping and it would not work for rotated images.
PS: My primary objective is to determine the denomination of the currency. There are a couple of methods I thought I could use:
Color based, since all currency notes have varying primary colors. The advantage of this method is that it's independent of the rotation or scale of the input image.
Detect the small black triangle on the lower left corner of the note. This shape is unique for each denomination.
Calculating the difference between 2 images. Since this is a small project, all input images will be of the same dpi and resolution and hence, once aligned, the difference between the input and the true images can give a rough estimate.
Which method do you think is the most viable?
It seems you are further along than you let on (judging by your comments), which is good! I'm going to show you more or less the way you can go about solving your problem; however, I'm not posting the whole code, just the important parts.
You have an image quite cropped and segmented. First you need to ensure that your image is without holes. So fill them!
Iinv=I==0; % you want 1 in money, 0 in not-money;
Ifill=imfill(Iinv,8,'holes'); % Fill holes
After that, you want to get only the boundary of the image:
Iedge=edge(Ifill);
And in the end you want to get the corners of that square:
C=corner(Iedge);
Now that you have 4 corners, you should be able to know the angle of this rotated "square". Once you get it do:
Irotate=imrotate(Icroped,angle);
Once here you may want to crop it again to end up just with the money! (aaah money always as an objective!)
Hope this helps!

artifacts in processed images

This question is related to my previous post, Image Processing Algorithm in Matlab, on Stack Overflow, for which I already got the results that I wanted.
But now I am facing another problem: I'm getting some artifacts in the processed images. In my original images (a stack of 600 images) I can't see any artifacts; please see the original image of a fingernail:
But in my 10 processed results I can see these lines:
I really don't know where they come from.
Also, if they come from the camera's sensor, why can't I see them in my original images? Any ideas?
Edit:
I have added the following code suggested by @Jonas. It reduces the artifacts, but does not completely remove them.
% averaging of images
im = D{1}(:,:);
for i = 2:100
    im = imadd(im,D{i}(:,:));
end
im = im/100;
imshow(im,[]);

for i = 1:100
    SD{i}(:,:) = imsubtract(D{i}(:,:),im(:,:));
end
@belisarius has asked for more images, so I am going to upload 4 images of my finger with the speckle pattern and 4 images of the black background (size 1280x1024):
And here is the black background:
Your artifacts are in fact present in your original image, although not visible.
Code in Mathematica:
i = Import@"http://i.stack.imgur.com/5hM3u.png"
EntropyFilter[i, 1]
The lines are faint, but you can see them by binarization with a very low level threshold:
Binarize[i, .001]
As for what is causing them, I can only speculate. I would start tracing from the camera output itself. Also, you may post two or three images "as they come straight from the camera" to allow us some experimenting.
The camera you're using most likely has a CMOS chip. Since CMOS sensors have independent column (and possibly row) amplifiers, which may have slightly different electronic properties, the signal from one column can be amplified more than from another.
Depending on the camera, this variability in column intensity can be stable. In that case, you're in luck: take ~100 dark images (tape something over the lens), average them, and then subtract the average from each image before running the analysis. This should make the lines disappear. If the lines do not disappear (or if there are additional lines), use the post-processing scheme proposed by Amro to remove the lines after binarization.
EDIT
Here's how you'd do the background subtraction, assuming that you have taken 100 dark images and stored them in a cell array D with 100 elements:
% take the mean; convert to double for safety reasons
meanImg = mean( double( cat(3,D{:}) ), 3);
% then you can subtract the mean from the original (non-dark-frame) image
correctedImage = rawImage - meanImg; % (you may need to re-cast meanImg first)
Here is an answer that, in my opinion, will remove the lines more gently than the above-mentioned methods:
im = imread('image.png');     % Original image
imFiltered = im;              % The filtered image will end up here
imChanged = false(size(im));  % To document the filter performance

% 1)
% Compute the histograms for each column in the lower part of the image
% (where the columns are most clear) and compute the mean and std of each
% bin in the histogram.
histograms = hist(double(im(501:520,:)),0:255);
colMean = mean(histograms,2);
colStd = std(histograms,0,2);

% 2)
% Now loop through each gray level above zero and...
for grayLevel = 1:255
    % Find the columns where the number of 'grayLevel' pixels is larger than
    % mean_n_graylevel + 3*std_n_graylevel - that is, columns that contain
    % statistically 'many' pixels with the current 'grayLevel'.
    lineColumns = find(histograms(grayLevel+1,:)>colMean(grayLevel+1)+3*colStd(grayLevel+1));
    % Now remove all 'grayLevel' pixels in lineColumns in the original image
    if(~isempty(lineColumns))
        for col = lineColumns
            imFiltered(:,col) = im(:,col).*uint8(~(im(:,col)==grayLevel));
            imChanged(:,col) = im(:,col)==grayLevel;
        end
    end
end
imshow(imChanged)
figure,imshow(imFiltered)
Here is the image after filtering
And this shows the pixels affected by the filter
You could use some sort of morphological opening to remove the thin vertical lines:
img = imread('image.png');
SE = strel('line',2,0);
img2 = imdilate(imerode(img,SE),SE);
subplot(121), imshow(img)
subplot(122), imshow(img2)
The structuring element used was:
>> SE.getnhood
ans =
1 1 1
Without really digging into your image processing, I can think of two reasons for this to happen:
The processing introduced these artifacts. This is unlikely, but it's an option. Check your algorithm and your code.
This is a side-effect because your processing reduced the dynamic range of the picture, just like quantization. So in fact, these artifacts may have already been in the picture itself prior to the processing, but they couldn't be noticed because their level was very close to the background level.
As for the source of these artifacts, it might even be the camera itself.
This is a VERY interesting question. I used to deal with this type of problem with live IR imagers (video systems). We actually had algorithms built into the cameras to deal with this problem prior to the user ever seeing or getting their hands on the image. Couple questions:
1) Are you dealing with RAW images, or are you dealing with already pre-processed grayscale (or RGB) images?
2) What is your ultimate goal with these images? Is the goal simply to get rid of the lines regardless of the resulting quality in the rest of the image, or is the point to preserve the absolute best image quality? Are you going to perform other processing afterwards?
I agree that those lines are most likely in ALL of your images. There are 2 reasons for those lines ever showing up in an image: one would be a bright scene where the op amps for some columns get saturated, causing whole columns of your image to take on the brightest value the camera can output. The other reason could be bad op amps or ADCs (analog-to-digital converters) themselves (most likely not an ADC, as there is normally essentially 1 ADC for the whole sensor, which would make the whole image bad, not your case). The saturation case is actually much more difficult to deal with (and I don't think it is your problem). Note: too much saturation on a sensor can cause bad pixels and columns to arise in your sensor (which is why they say never to point your camera at the sun).
The bad column problem can be dealt with. In another answer above, someone had you averaging images. While this may be good for finding out where the bad columns (or bad single pixels, or the noise matrix of your sensor) are (and you would have to average while pointing the camera at black, white, and essentially solid colors), it isn't the correct way to get rid of them. By the way, what I am describing with the black and white images, averaging, and finding bad pixels is called calibrating your sensor.
OK, so assuming you are able to get this calibration data, you WILL be able to find out which columns are bad, even down to single pixels.
If you have this data, one way that you could erase the columns out is to:
for each bad column
    for each pixel (x, y) on the bad column
        pixel(x, y) = Average(pixel(x+1,y), pixel(x+1,y-1), pixel(x+1,y+1),
                              pixel(x-1,y), pixel(x-1,y-1), pixel(x-1,y+1))
What this essentially does is replace the bad pixel with a new pixel that is the average of the 6 remaining good pixels around it. The above is an over-simplified version of the algorithm. There are certainly cases where a single bad pixel could be right next to the bad column and shouldn't be used for averaging, or where two or three bad columns sit right next to each other. One could imagine that you would calculate the values for a bad column, then consider that column good in order to move on to the next bad column, and so on.
Now, the reason I asked about RAW versus B/W or RGB: if you were processing a RAW image, then depending on the build of the sensor itself, it could be that only one sub-pixel (if you will) of the Bayer-filtered image sensor has the bad op amp. If you could detect this, then you wouldn't necessarily have to throw out the other good sub-pixels' data. Secondly, if you are using an RGB sensor to take a grayscale photo and you shot it in RAW, then you may be able to calculate your own grayscale pixels. Many sensors, when returning a grayscale image from an RGB sensor, will simply pass back the green pixel as the overall pixel, because it largely serves as the luminance of the image. This is why most image sensors implement 2 green sub-pixels for every r or b sub-pixel. If this is what your sensor is doing (not ALL sensors do this), then you may have better luck getting rid of just the bad channel's column and performing your own grayscale conversion using:
gray = (0.299*r + 0.587*g + 0.114*b)
Apologies for the long-winded answer, but I hope this is still informative to someone :-)
Since you cannot see the lines in the original image, they are either present with a low intensity difference relative to the image's original dynamic range, or they were added by your processing algorithm.
The shape of the disturbance hints at the first option... (unless you have an algorithm that processes each row separately).
It seems your sensor's columns are not uniform. Try taking a picture without the finger (background only) using the same exposure (and other) settings, then subtract it from the photo of the finger (prior to other processing). (Make sure the background is uniform before taking both images.)

FreeType2: Get global font bounding box in pixels?

I'm using FreeType2 for font rendering, and I need to get a global bounding box for all fonts, so I can align them in a nice grid. I call FT_Set_Char_Size followed by extracting the global bounds using
int pixels_x = ::FT_MulFix((face->bbox.xMax - face->bbox.xMin), face->size->metrics.x_scale );
int pixels_y = ::FT_MulFix((face->bbox.yMax - face->bbox.yMin), face->size->metrics.y_scale );
return Size (pixels_x / 64, pixels_y / 64);
which works, but it's quite a bit too large. I also tried computing it with doubles (as described in the FreeType2 tutorial), but the results are practically the same. Even using just face->bbox.xMax results in bounding boxes that are too wide. Am I doing the right thing, or is there simply some huge glyph in my font (Arial.ttf in this case)? Is there any way to check which glyph is supposedly that big?
Why not calculate the min/max from the characters that you are actually using in the string you want to align? Just loop through the characters and store the maximum and minimum among them. You can store these values after you have rendered the characters, so you don't need to look them up every time you render the glyphs.
I have a similar problem using freetype to render a bunch of text elements that will appear in a grid. Not all of the text elements are the same size, and I need to prerender them before I know where they would be laid out. The different sizes were the biggest problem when the heights changed, such as for letters with descending portions (like "j" or "Q").
I ended up using the height on the face (much like you did with the bbox). But as you mentioned, that value is much too big. It's supposed to be the baseline-to-baseline distance, but it appeared to be about twice that distance. So I took the easy way out and divided the reported height by 2, using that as a general height value. Most likely the height is too big because some characters in the font go way high or way low.
I suppose a better way might be to loop through all the characters expected to be used, get their glyph metrics and store the largest height found. But that doesn't seem all that robust either.
Your code is right, and the result is not actually too large: fonts contain many special symbols that are far larger than ASCII characters, and it's easy to traverse all Unicode charcodes to find those large glyphs.
If you only need ASCII, my hack method is:
FT_MulFix(face_->units_per_EM, face_->size->metrics.x_scale ) >> 6
FT_MulFix(face_->units_per_EM, face_->size->metrics.y_scale ) >> 6

iPhone: How to Determine Average Light/Dark of an Area of an UIImage

I need to place labels with a transparent background over a variable-content UIImage. Readability will vary significantly depending on the relationship between the color of the label's text and the color/luminosity of the area of the image displayed under the label. Since the image will be constantly changing, the color of the label's text needs to change in sync.
I have found several techniques for determining the color, perceived luminosity etc of a single pixel. However, I need to rather quickly (while a view loads) determine the rough perceived color/luminosity of an area of the UIImage under the frame of the UILabel. I presume I will also need to measure the alpha because the same color/luminosity looks different at different alpha values.
Is there a way to calculate such a value for an area? Will I be reduced to simply summing pixels? If it comes to that, is there an algorithm to accomplish this?
I've thought of two possible approaches:
Perform some "folding" operations i.e. combining pixels from one half of the area to the other half. Then repeat until I get a single value. Would this be practical? How would you logically combine pixels to average their perceived color/luminosity?
Sample a statistically significant number of pixels in the area and then combine them (somehow) to get a rough measure.
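For what it's worth, here is a rough Swift sketch of the sampling idea (option 2), not a drop-in implementation. It assumes `region` is the label's frame already converted into the image's pixel coordinate space; it renders that region into an RGBA buffer and averages a perceived-luminance value over every few pixels.

import UIKit

// Sketch: average perceived luminance (0 = black, 1 = white) of a region of a UIImage,
// sampling every `step`-th pixel. `region` is assumed to be in pixel coordinates.
func averageLuminance(of image: UIImage, in region: CGRect, step: Int = 4) -> CGFloat? {
    guard let cropped = image.cgImage?.cropping(to: region) else { return nil }

    let width = cropped.width
    let height = cropped.height
    var pixels = [UInt8](repeating: 0, count: width * height * 4)

    // Render the region into a known RGBA8 buffer so the bytes can be read directly.
    let drewImage = pixels.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width * 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue)
        else { return false }
        context.draw(cropped, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    guard drewImage else { return nil }

    var total: CGFloat = 0
    var count: CGFloat = 0
    for y in stride(from: 0, to: height, by: step) {
        for x in stride(from: 0, to: width, by: step) {
            let i = (y * width + x) * 4
            let r = CGFloat(pixels[i])     / 255
            let g = CGFloat(pixels[i + 1]) / 255
            let b = CGFloat(pixels[i + 2]) / 255
            // Standard perceived-luminance weights (the same formula quoted in the
            // sensor-artifact answer above).
            total += 0.299 * r + 0.587 * g + 0.114 * b
            count += 1
        }
    }
    return count > 0 ? total / count : nil
}

You would then pick a light or dark text colour depending on whether the returned value falls above or below some threshold (say 0.5).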
I think this problem comes up a lot these days with people being so fond of customizing backgrounds. Seems like something that would be worth my time to bang out a category or class to handle this and then share it around.
What about simply outlining your text in a way that it will show on both dark and light backgrounds?
This is how it is handled in other situations where text must be displayed over a background with unknown content (for example, films with subtitles).
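As a hedged illustration of the outlining idea, here is a small Swift sketch using NSAttributedString stroke attributes; the colours and the stroke width are arbitrary choices, not part of the original answer.

import UIKit

// Sketch: stroke the label text so it stays readable over both light and dark
// backgrounds. A negative strokeWidth strokes *and* fills the glyphs.
func outlinedText(_ string: String, font: UIFont) -> NSAttributedString {
    let attributes: [NSAttributedString.Key: Any] = [
        .font: font,
        .foregroundColor: UIColor.white,
        .strokeColor: UIColor.black,
        .strokeWidth: -3.0
    ]
    return NSAttributedString(string: string, attributes: attributes)
}

// Usage:
// label.attributedText = outlinedText("Caption", font: .boldSystemFont(ofSize: 17))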