How to auto-crop a barrel-distorted image using ImageMagick?

Using ImageMagick's convert to barrel-distort a photo in order to correct a strongly visible pincushion distortion, I provide positive a, b, or c values (from a database for my lens + focal length). This results in an image that is corrected and has the original width and height, but includes a non-rectangular, bent/distorted border, because the image is corrected towards its center. Simplified example:
convert rose: -virtual-pixel black -distort Barrel '+0.0 +0.1 +0.0' out.png
How can I automatically crop the black, bent border to the largest possible rectangle in the original aspect ratio within the rose?
The ImageMagick website says that a parameter "d" is automatically calculated which could do this (resulting in a linear distortion that effectively zooms into the image and pushes the bent border right outside the image bounds), but the ImageMagick-calculated value seems to aim for something different (v6.6.9 on Ubuntu 12.04). If I guess and manually specify a "d", I can get the intended result:
convert rose: -virtual-pixel black -distort Barrel '+0.0 +0.1 +0.0 +0.6' out.png
The given formula a+b+c+d=1 does not seem to produce a proper d for my cropping case. Also, d seems to depend on the aspect ratio of the image and not only on a/b/c. How do I make ImageMagick crop the image, or how do I calculate a proper d?
Update
I found Fred's ImageMagick script innercrop (http://www.fmwconcepts.com/imagemagick/innercrop/index.php), which does part of what I need, but it has drawbacks and is not a solution for me. It assumes arbitrary outer areas, so it takes a long time to find the cropping rectangle. It does not work within Unix pipes, and it does not keep the original aspect ratio.
Update 2
Contemplating the problem makes me think that calculating a "d" is not the solution, as changing d introduces more or less bending and seems to do more than just zoom. The d=1-(a+b+c) that is calculated by ImageMagick results in the bent image touching the upper/lower bounds (for landscape images) or the left/right bounds (for portrait images). So I think the proper solution would be to calculate where the 4 new corners end up for given a/b/c/d, and then crop to those new corners.
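As a rough, untested sketch of that idea (assuming the reverse mapping Rsrc = r*(a*r^3 + b*r^2 + c*r + d) documented for -distort Barrel, with radii normalized so that r = 1 corresponds to half the smaller image dimension, if I read the docs right), the new corner position and the matching crop could be computed along these lines in MATLAB:
a = 0.0; b = 0.1; c = 0.0; d = 1 - (a + b + c);  % the coefficients used above
w = 700; h = 700;                                % image dimensions
halfmin = min(w, h) / 2;                         % radius normalization assumed from the docs
Rcorner = hypot(w/2, h/2) / halfmin;             % normalized radius of an original corner
f = @(r) r .* (a*r.^3 + b*r.^2 + c*r + d) - Rcorner;
rNew = fzero(f, Rcorner);                        % destination radius where the old corner lands
s = rNew / Rcorner;                              % shrink factor, identical in x and y
fprintf('-gravity center -crop %dx%d+0+0 +repage\n', floor(s*w), floor(s*h));
The printed geometry could then be appended to the convert command; since both dimensions are scaled by the same factor, the original aspect ratio is kept.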

The way I understand the docs, you do not use commas to separate the parameters for the barrel-distort operator.
Here is an example image, alongside the output of the two commands you gave:
convert o.png -virtual-pixel black -distort Barrel '+0.0 +0.1 +0.0' out0.png
convert o.png -virtual-pixel black -distort Barrel '+0.0 +0.1 +0.0 +0.6' out1.png
I created the example image in order to better visualize what you possibly want to achieve.
However, I do not see the point you stated about the automatically calculated parameter 'd', and I do not see the effect you stated about using 'd=+0.6'...
I'm not sure I understand your desired result correctly, so I'm assuming you want the area marked by the yellow rectangle cropped.
The image on the left is out0.png as created by the first command above.
In order to guess the required coordinates, we have to determine the image dimensions first:
identify out0.png
out0.png PNG 700x700 700x700+0+0 8-bit sRGB 36KB 0.000u 0:00.000
The image in the center is marked up with the white rectangle. The rectangle is there so you can look at it and tell me if that is the region you want cropped. The image on the right is the cropped image (without scaling it back to the original size).
Is this what you want? If yes, I can possibly update the answer in order to automatically determine the required coordinates of the cropping. (For now I've done it based on guessing.)
Update
I think you may have misunderstood the purpose of the barrel-distortion operation. It is meant for correcting a (slight) barrel distortion, as produced by camera lenses. The 3 parameters a, b and c to be used for any specific combination of camera, lens and current zoom could possibly be stated in your photo's EXIF data. The formula a+b+c+d = 1 is meant to be used when the new, distortion-corrected image should have the same dimensions as the original (distorted) image.
So to imitate the barrel-correction, we should probably use the second image from the last row above as our input:
convert out3.png -virtual-pixel gray -distort barrel '0 -0.2 0' corrected.png
Result:

Related

Cropping the minimum sized rectangle of a shape from an image

I am making a card recognition project in MATLAB and I am stuck at this point. There are images of cards, and in an image I want to define the smallest rectangle that encloses the card. An example is below:
Original image
Converted image
I am currently able to convert the image to black and white (leaving me only the cards as white regions), and I want to define the rectangles from those white regions. E.g., if I have 3 non-overlapping cards in my image, I want to get 3 images like the one above (it doesn't matter if another card's edge appears in the image; the important part is that the rectangle must pass through the edges of the selected card).
I have tried edge detection methods but wasn't successful. Thanks in advance for your help.
I recommend you use the regionprops function from the Image Processing Toolbox, e.g.,
bb = regionprops(yourImage, 'boundingbox');
which will return the bounding box. There is a nice MathWorks video here, and you can jump to about minute 26 for what you need.
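A minimal sketch of how that could look end to end (the file name and thresholding step are placeholders, since you already have the binary image):
bw = im2bw(imread('cards.png'));              % hypothetical input, already converted to black and white
bw = imfill(bw, 'holes');                     % make each card one solid white blob
stats = regionprops(bw, 'BoundingBox');       % one axis-aligned bounding box per white region
for k = 1:numel(stats)
    card = imcrop(bw, stats(k).BoundingBox);  % crop the k-th card into its own image
    figure, imshow(card)
end
Cropping from the original image instead of bw with the same bounding boxes gives you the card contents rather than the mask.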

Resizing command changes image shape

I have to resize an image, e.g. from dimensions of 3456x5184 to 700x700, because my code needs an image with fewer pixels; otherwise it takes too much time to give results. When I use the imresize command it changes the dimensions of the image, but at the same time it changes the shape of the image: the circle in the image, which I also need to detect, looks like an oval instead of a circle. I need your suggestions to resolve this problem. I am really grateful to you.
Resizing images is done by either subsampling (to get smaller images) or some kind of interpolation (to get larger images).
Input is either a factor or a final dimension for width and height.
The only way to fit a rectangle into a square by simply resizing it is to use different scales for width and height, which of course yields a distorted image.
To achieve what you want you can either crop a 700x700 region from your image, or resize the image using the same factor for width and height. Then you can fit the larger dimension into 700 and fill the rest around the other dimension with black or whatever you prefer.
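A small sketch of the second option (one scale factor, then pad to 700x700 with black; the names are made up):
img = imread('input.jpg');                              % hypothetical 3456x5184 photo
target = 700;
scale = target / max(size(img, 1), size(img, 2));       % the same factor for width and height
small = imresize(img, scale);                           % circles stay circular
out = zeros(target, target, size(img, 3), class(img));  % black 700x700 canvas
r = floor((target - size(small, 1)) / 2) + 1;
c = floor((target - size(small, 2)) / 2) + 1;
out(r:r+size(small,1)-1, c:c+size(small,2)-1, :) = small;  % paste the resized image centered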

Extract Rectangular Image from Scanned Image

I have scanned copies of currency notes from which I need to extract only the rectangular notes.
Although the scanned copies have a very blank background, the note itself can appear either rotated or correctly aligned. I'm using MATLAB.
Example input:
Example output:
I have tried using thresholding and canny/sobel edge detection to no avail.
I also tried the solution given here but it detects the entire image for cropping and it would not work for rotated images.
PS: My primary objective is to determine the denomination of the currency. There are a couple of methods I thought I could use:
Color based, since all currency notes have varying primary colors. The advantage of this method is that it is independent of the rotation or scale of the input image.
Detect the small black triangle on the lower left corner of the note. This shape is unique for each denomination.
Calculating the difference between 2 images. Since this is a small project, all input images will be of the same dpi and resolution and hence, once aligned, the difference between the input and the true images can give a rough estimate.
Which method do you think is the most viable?
It seems you are further along than it looked (judging from your comments), which is good! I'm going to show you more or less the way you can go about solving your problem; however, I'm not posting the whole code, just the important parts.
You already have an image that is fairly well cropped and segmented. First you need to ensure that your image is without holes. So fill them!
Iinv=I==0; % you want 1 in money, 0 in not-money;
Ifill=imfill(Iinv,8,'holes'); % Fill holes
After that, you want to get only the boundary of the image:
Iedge=edge(Ifill);
And in the end you want to get the corners of that square:
C=corner(Iedge);
Now that you have 4 corners, you should be able to know the angle of this rotated "square". Once you get it do:
Irotate=imrotate(Icroped,angle);
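If pairing up the corners to get the angle turns out to be fiddly, one possible shortcut (just a sketch, not the only way) is to let regionprops estimate the orientation of the filled blob and rotate against it:
stats = regionprops(Ifill, 'Orientation');  % angle of the region's major axis, in degrees
angle = stats(1).Orientation;               % assumes the note is the only (or first) region
Irotate = imrotate(Icroped, -angle);        % rotate back so the long side ends up horizontal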
Once here you may want to crop it again to end up just with the money! (aaah money always as an objective!)
Hope this helps!

MATLAB: How do I resize (connected) components in a 3D binary image sequence without changing the dimensions of the sequence?

I'd like to resize the components contained in a 3D binary image sequence without changing any of the dimensions of the sequence itself.
I'm not sure if I need to do it on a component-by-component basis, if yes, then how do I create a transform such that the resized components are re-positioned 'correctly' in the image sequence? By 'correctly', I mean with the same centre of mass as the original unprocessed components.
(If that last paragraph doesn't make sense then please ignore)
A 2D example: suppose I wanted to enlarge the white blobs in the following [295x445] image by 10%.
How would you do this without making the image itself larger?
You could use the imdilate function to dilate the regions of interest. The examples on its documentation page show how to use this function.
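For the 2D example a minimal sketch could look like this (the structuring-element radius is a guess you would tune to your blob sizes; for a 3D sequence a 3D neighbourhood such as ones(3,3,3) works the same way):
bw = imread('blobs.png') > 0;   % hypothetical 295x445 binary image
se = strel('disk', 3);          % the radius controls how far each blob grows
bwBig = imdilate(bw, se);       % same 295x445 size, blobs a few pixels larger
Note that dilation grows every blob by a fixed margin rather than by a percentage of its size, so a strict 10% enlargement would need per-component scaling instead.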

How to convert a black and white photo that was originally colored, back to its original color?

I've converted a colored photo to black and white, and bolded the edges. Now I need to convert it back to its original color, keeping the bolded edges. Is there any function in MATLAB which allows me to do so?
Once you remove the colour from an image, there is no possible way to automatically put it back. You're basically reducing a set of 16,777,216 colours to a set of 256 - on average each shade of grey has 65,536 equivalent colours, and without the original image there's no way to guess which it could be.
Now, if you were to take the bolded lines from your black-and-white image and paint them on top of the original coloured image, that might end up producing what you're looking for.
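A quick sketch of that overlay idea (the file names are made up):
rgb = imread('original_colour.jpg');      % the colour original you started from
edges = imread('bolded_edges.png') > 0;   % the bolded edges as a logical mask
out = rgb;
out(repmat(edges, [1 1 3])) = 0;          % paint the edge pixels black in all three channels
imshow(out)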
If what you are trying to do is to apply some filter to the B/W image and then use that with the original color, I suggest you convert your image to a color space with a lightness channel that suits your needs (for example L*a*b*, if you need the lightness to be uniformly distributed with regard to human perception of differences) and apply your filter only to the lightness channel.
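For example (just a sketch; rgb2lab/lab2rgb need a reasonably recent Image Processing Toolbox, and boldEdges stands in for whatever edge-bolding filter you used):
rgb = imread('original_color.jpg');   % the color original
lab = rgb2lab(rgb);                   % lightness in channel 1, color in channels 2 and 3
lab(:,:,1) = boldEdges(lab(:,:,1));   % hypothetical filter, applied to the lightness channel only
out = lab2rgb(lab);                   % the colors come back untouched
imshow(out)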