Please look at the attached image. It is a GPR profile, and I am trying to use image processing techniques to divide the image into 3 zones, marked by coloured labels along the top:
where the parabolas in the image are very clear and distinct, with high pixel values - green zone/line at the top
where the parabolas in the image are blurred but still visible - yellow zone
where the parabolas are distorted, or no parabolas are present - red zone
What techniques should I use? What's the best approach to solve it?
I have tried various techniques, but not with success in every case because, as you can see in the following image, sometimes the parabolas are too close to one another and identifying them becomes an issue.
A sample of how I want to zone it: https://www.dropbox.com/s/9zm9epgf0gt7591/sample.png?dl=0
One of the pieces of code I tried (the simplest one):
clear all
clc
% read the PNG image
H = imread('origpng.png');
% convert to grayscale
I = rgb2gray(H);
% keep only mid-range intensities (100..150); zero everything else
I(I > 150) = 0;
I(I < 100) = 0;
figure, imshow(I)
% invert the image
J = 255 - I;
figure, imshow(J)
% keep only pure white, i.e. the pixels that were zero before inverting
J(J < 255) = 0;
figure, imshow(J)
Your question is not very clearly posed, but I spent some time on it and felt like sharing my thoughts. I am not pretending for an instant that this is anywhere near a complete or rigorous answer - just some musings that might give you some ideas. Also, I use ImageMagick, but if you have and know Matlab, you should use that - I am not suggesting you switch tools.
First, I did a Canny Edge detection like this:
convert http://i.stack.imgur.com/XITAE.png -canny 0x1+15%+50% canny.jpg
that gives me this:
I then "squash" that down till it is just 1 pixel high, which effectively totals up and averages all the columns - I make it 10 pixels high here so you can see it. Where it is white, there are lots of parabolas, elsewhere there are fewer.
Then I stretch that back up to the full height of the original image and blur it a bit - note that everything up to the following image is just one line of "code":
convert http://i.stack.imgur.com/XITAE.png -canny 0x1+15%+50% -resize x1! -normalize -resize 827x310! -blur 0x11 -colorspace gray mask.png
I then use the above as an opacity mask for a red image the same size as your original like this:
convert -size 827x310! xc:red mask.png -compose copy-opacity -composite colouredmask.png
Then I took your original image and coloured it with yellow like this by first creating a yellow image and then blending it onto your image and then I blended the red image from above on top of that:
convert -size 827x310! xc:yellow yellow.png
convert http://i.stack.imgur.com/XITAE.png yellow.png -compose colorize -composite colouredmask.png -compose overlay -composite result.png
giving
Obviously you can set different parameters and use different thresholds and things, but it kind of heads towards the sort of thing you are aiming at.
So the entire process is:
# Make mask of peaky areas - line 1
convert http://i.stack.imgur.com/XITAE.png -canny 0x1+15%+50% -resize x1! -normalize -resize 827x310! -blur 0x11 -colorspace gray mask.png
# Colour mask with red - line 2
convert -size 827x310! xc:red mask.png -compose copy-opacity -composite colouredmask.png
# Tint original image with yellow and then overlay semi-transparent red area
convert -size 827x310! xc:yellow yellow.png
convert http://i.stack.imgur.com/XITAE.png yellow.png -compose colorize -composite colouredmask.png -compose overlay -composite result.png
Notes
Squashing pixels... sorry for confusing you with my terminology! Basically, when I squash the pixels down to a single row, you need to imagine dropping a brick on top of the image and flattening it down to just one pixel tall. So, essentially, you draw an imaginary line underneath the image and then work across the image, totalling up the number of WHITE (i.e. edge) pixels in each vertical column. Columns that have more white pixels will add up to larger numbers. Columns that have no white pixels will add up to zero.

Once you have the totals for each column, you find the highest total - let's say it is 32 - and then you multiply all the totals by 255/32 so that everything is normalized to 255, or white. Now the squashed strip represents the edge energy in each column.

I then use that as the opacity for the red when I overlay - so columns with more white edges in the Canny image will show up with more red in the result.
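If you want to check those totals numerically rather than visually, you can dump the squashed, normalized strip as ImageMagick's text format - one line per column, values 0-255 (the head is just to keep the output short):

# print the per-column edge energy of the 1-pixel-high strip as text
convert http://i.stack.imgur.com/XITAE.png -canny 0x1+15%+50% -resize x1! -normalize -depth 8 txt:- | head -20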
Let's demo what happens if I squash down to 10 pixels wide and 1 pixel high before scaling back up to the original size - basically it means that my resulting mask will have only 10 possible values (one per column), and each column will be a single constant brightness. I'll put the Canny image underneath so you can see that the brightness of the squashed strip represents the edge energy:
convert http://i.stack.imgur.com/XITAE.png -canny 0x1+15%+50% -resize 10x1! -normalize -scale 827x310! mask.png
If you want to introduce another colour, you need to work out what your algorithm is for controlling where that colour should appear. You then do exactly the same thing again - you make a mask that is light where you want that colour in your output image and dark where you don't want that colour. Then you use that mask as the opacity for your new colour (as I did at the line labelled line 2 above) and then you overlay it like I did in the last line of my code above.
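For example, here is a hedged sketch for adding a green zone - whether green should mark high or low edge energy is your algorithm choice to make; this sketch simply negates the existing mask so that green lands where edge energy is low:

# negate the edge-energy mask, colour it green, and overlay on the result
convert mask.png -negate maskinv.png
convert -size 827x310! xc:green maskinv.png -compose copy-opacity -composite greenmask.png
convert result.png greenmask.png -compose overlay -composite result2.png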
Related
I want to add a repeating pattern image as the background of another image that has space around it. For example, I have a 200x200 pattern and a 1200x800 image. I have managed to add a background colour.
The command I'm using to add a background colour:
convert -background "#333" -resize 768x450 -gravity Center -extent 768x450
Now I need to add a pattern instead of that colour. Some suggest that I should make the pattern into a single image of the maximum size and then use that as the background image.
Is it possible to do this with convert or any other ImageMagick command?
Not sure what you mean, but I am guessing it's this. Let's make a 200x200 tile to start off with:
convert -size 200x200 radial-gradient:red-blue t200x200.png
And now you want to make a 1200x800 image by tiling that basic unit:
convert -size 1200x800 tile:t200x200.png BigBoy.png
If you now want to overlay a fine-art, high-quality, centred portrait over the top of your harmonious, subtle background, you can do this:
convert -size 1200x800 tile:t200x200.png -gravity center smiley.gif -composite BigBoy.png
Using ImageMagick's convert to barrel-distort a photo to correct a strongly visible pincushion distortion, I provide positive a, b or c values (from a database for my lens + focal length). This results in an image that is corrected, has the original width and height, but includes a non-rectangular, bent/distorted border, as the image is corrected towards its center. Simplified example:
convert rose: -virtual-pixel black -distort Barrel '+0.0 +0.1 +0.0' out.png
How can I automatically crop the black, bent border to the largest possible rectangle in the original aspect ratio within the rose?
The ImageMagick website says that a parameter "d" is automatically calculated which could do this (resulting in a linear distortion that effectively zooms into the image and pushes the bent border right outside the image bounds), but the ImageMagick-calculated value seems to aim for something different (v6.6.9 on Ubuntu 12.04). If I guess and manually specify a "d", I can get the intended result:
convert rose: -virtual-pixel black -distort Barrel '+0.0 +0.1 +0.0 +0.6' out.png
The given formula a+b+c+d=1 does not seem to produce a proper d for my cropping case. Also, d seems to depend on the aspect ratio of the image and not only on a/b/c. How do I make ImageMagick crop the image, or how do I calculate a proper d?
Update
I found Fred's ImageMagick script innercrop (http://www.fmwconcepts.com/imagemagick/innercrop/index.php), which does a bit of what I need, but it has drawbacks and is not a solution for me. It assumes arbitrary outer areas, so it takes a long time to find the cropping rectangle. It does not work within Unix pipes, and it does not keep the original aspect ratio.
Update 2
Contemplating the problem makes me think that calculating a "d" is not the solution, as changing d introduces more or less bending and seems to do more than just zoom. The d=1-(a+b+c) that is calculated by ImageMagick results in the bent image touching the upper/lower bounds (for landscape images) or the left/right bounds (for portrait images). So I think the proper solution would be to calculate where one of the 4 new corners will end up given a/b/c/d, and then crop to those new corners.
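Here is a rough sketch of that corner calculation in the shell. I am assuming (from my reading of the docs, not verified) that the radius is measured from the image centre and normalized by half the smaller image dimension; rose: is 70x46, and a/b/c/d match the example above (d = 1-(a+b+c) = 0.9):

# find the destination radius rd at which the source corner (radius rc)
# appears, by bisecting rd*(a*rd^3 + b*rd^2 + c*rd + d) = rc
awk -v a=0 -v b=0.1 -v c=0 -v d=0.9 -v w=70 -v h=46 'BEGIN {
  half = ((w < h) ? w : h) / 2
  rc = sqrt((w/2)^2 + (h/2)^2) / half   # normalized radius of a corner
  lo = 0; hi = rc
  for (i = 0; i < 60; i++) {            # bisection; f is monotone here
    rd = (lo + hi) / 2
    f  = rd * (a*rd^3 + b*rd^2 + c*rd + d)
    if (f < rc) lo = rd; else hi = rd
  }
  printf "source corner appears at normalized radius %.4f\n", rd
}'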
The way I understand the docs, you do not use commas to separate the parameters for the barrel-distort operator.
Here is an example image, alongside the output of the two commands you gave:
convert o.png -virtual-pixel black -distort Barrel '+0.0 +0.1 +0.0' out0.png
convert o.png -virtual-pixel black -distort Barrel '+0.0 +0.1 +0.0 +0.6' out1.png
I created the example image in order to better visualize what you possibly want to achieve.
However, I do not see the point you stated about the automatically calculated parameter 'd', and I do not see the effect you stated about using 'd=+0.6'...
I'm not sure I understand your wanted result correctly, so I'm assuming you want the area marked by the yellow rectangle cropped.
The image on the left is out0.png as created by the first command above.
In order to guess the required coordinates, we have to determine the image dimensions first:
identify out0.png
out0.png PNG 700x700 700x700+0+0 8-bit sRGB 36KB 0.000u 0:00.000
The image in the center is marked up with the white rectangle. The rectangle is there so you can look at it and tell me if that is the region you want cropped. The image on the right is the cropped image (without scaling it back to the original size).
Is this what you want? If yes, I can possibly update the answer in order to automatically determine the required coordinates of the cropping. (For now I've done it based on guessing.)
Update
I think you may have misunderstood the purpose of the barrel-distortion operation. It is meant for correcting a slight barrel distortion, such as is produced by camera lenses. The 3 parameters a, b and c to be used for any specific combination of camera, lens and current zoom can possibly be found in your photo's EXIF data. The formula a+b+c+d = 1 is meant to be used when the new, distortion-corrected image should have the same dimensions as the original (distorted) image.
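In symbols (the normalization here is my reading of the docs, not gospel): the operator maps each destination radius r, measured from the centre and scaled so that r = 1 at the nearest image edge, to a source radius

$$ R_{\mathrm{src}} = r \, (a r^3 + b r^2 + c r + d) $$

With d = 1-(a+b+c) you get R_src(1) = 1, so points on the inscribed circle stay fixed - which would explain your observation that the bent image touches the upper/lower bounds for landscape images.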
So to imitate the barrel-correction, we should probably use the second image from the last row above as our input:
convert out3.png -virtual-pixel gray -distort barrel '0 -0.2 0' corrected.png
Result:
I have thousands of images shot for 3D reconstruction using photogrammetry, and I want to evaluate which images are too blurry using ImageMagick or any other command-line capable software. What "too blurry" means: based on the average blurriness/sharpness of all the images, the worst images can be picked out easily. But how to evaluate the blurriness? I have looked into the FFT, the Fast Fourier Transform, and think the solution can be found there. The frequencies can be calculated by the IM -fft command, which produces the magnitude and phase images. How can one use these images to calculate an overall blurriness/sharpness factor?
Update: Here are some of the images I have to treat. The real challenge is that all the images sit alongside many others of these kinds in a single folder and need to be checked for motion blur issues. I have to detect excessive motion blur and exclude those images from further production.
The next 3 images have the lowest deviation of all the images, but they are very sharp in the original full-resolution version.
These 2 images have a lower deviation because of the white areas, but they do not lack sharpness either.
Here the edge detection finds many edges because of the mosaic. Of all the images, the first one is blurry.
This image has low blurriness.
I have an idea using ImageMagick. I take an original image as follows:
then I put it into Photoshop and blurred it with Motion Blur of 5 pixels and 10 pixels, saving the results as blur5.jpg and blur10.jpg.
Now, I use ImageMagick to compare the statistics:
identify -verbose original.jpg > orig.txt
identify -verbose blur5.jpg > blur5.txt
identify -verbose blur10.jpg > blur10.txt
Then I use opendiff (on Mac) to compare the statistics:
opendiff orig.txt blur5.txt
I note that the blurrier the image, the lower the standard deviation - so that seems to measure relative blurriness.
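If you would rather have the number directly than diff the whole verbose listing, ImageMagick's fx escapes expose it - a one-line sketch using the same filenames, sorted blurriest first:

identify -format "%[fx:standard_deviation] %f\n" original.jpg blur5.jpg blur10.jpg | sort -n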
I then used a Canny Edge Detector, and you can see that the sharpest image gets the most edges, not unexpectedly. So, you could count the white pixels in the Canny Edge detected image as a measure of your sharpness.
Like this:
convert original.jpg -canny 0x1+10%+30% -format %c histogram:info:-
875184: ( 0, 0, 0) #000000 gray(0)
72576: (255,255,255) #FFFFFF gray(255) <--- sharp image has high white pixel count
convert blur5.jpg -canny 0x1+10%+30% -format %c histogram:info:-
912322: ( 0, 0, 0) #000000 gray(0)
35438: (255,255,255) #FFFFFF gray(255) <--- slightly blurry has lower white pixel count
convert blur10.jpg -canny 0x1+10%+30% -format %c histogram:info:-
925759: ( 0, 0, 0) #000000 gray(0)
22001: (255,255,255) #FFFFFF gray(255) <--- blurriest has lowest white pixel count
If you want a single line that calculates the number of white pixels and echoes the filename, you can do this:
convert original.jpg -canny 0x1+10%+30% -format "%[fx:mean*h*w] %f\n" info:
72576 original.jpg
That will allow you to analyse all your images and sort them into order of sharpness like this:
find . -name "*.jpg" -exec convert "{}" -canny 0x1+10%+30% -format "%[fx:mean*h*w] %f\n" info: \; | sort
22001 blur10.jpg
35438 blur5.jpg
72576 original.jpg
I have switched from ImageMagick version 5 to ImageMagick version 6 and noticed the following.
While using the command:
convert -gravity SouthEast -draw 'image Over 0,0 0,0 overlay.png'
In version 5, overlay.png is added to the bottom right corner (SouthEast), as expected!
But in version 6 of ImageMagick this fails, and overlay.png is positioned at the top left corner!
The command is used in TYPO3's "imgResource.params": http://docs.typo3.org/typo3cms/TyposcriptReference/Functions/Imgresource/Index.html
But I think this has nothing to do with the CMS, rather with compatibility between IM5 and IM6.
Does anyone know how to solve this?
You can use this command instead:
convert background.jpg foreground.jpg -gravity SouthEast -compose Src_Over -composite output.jpg
So if this is our background:
and this is our foreground:
we get the following result:
Actually, I think he looks better on the other side of the image, but flopped to still face inwards :-)
convert background.jpg \( tiger.png -flop \) -gravity SouthWest -compose Src_Over -composite out.jpg
Updated Answer
Sorry to hear that the command doesn't work inside TYPO3. There is another version here that may work for you...
First get the width and height of the background and foreground images - I guess there is a way to do this in typo3, but I'll do it like this:
identify -format "%w %h" background.jpg
906 603
So the background is 906 px wide and 603 px high, and
identify -format "%w %h" tiger.png
258 296
the tiger is 258 px by 296 px. Then we can overlay using geometry, subtracting the width and height of the tiger from the width and height of the background to give an offset from the top left of the image:
convert background.jpg tiger.png -geometry +648+307 -composite out.png
which gives the same effect as gravity southeast. Maybe that will get you there...
Updated One Last Time
This one must get you there... just put the correct offsets in your original draw command rather than relying on gravity. So the first two numbers are the x,y offsets of the top left corner of the overlaid image from the top left corner of the background image, and the second x,y pair are the offsets of the bottom right corner of the overlaid image. So basically,
x1,y1 = width background - width overlay, height background - height overlay
x2,y2 = width background, height background
convert background.jpg -draw 'image Over 648,307 906,603 tiger.png' out.jpg
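And if you would rather not hard-code those numbers, here is a small shell sketch (assuming bash; same filenames and the same draw syntax as above):

# look up both sizes, then compute the draw offsets automatically
read bw bh <<< "$(identify -format "%w %h" background.jpg)"   # background size
read ow oh <<< "$(identify -format "%w %h" tiger.png)"        # overlay size
convert background.jpg -draw "image Over $((bw-ow)),$((bh-oh)) $bw,$bh tiger.png" out.jpg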
I've converted a coloured photo to black and white and bolded the edges. Now I need to convert it back to its original colour, keeping the bolded edges. Is there any function in MATLAB which allows me to do so?
Once you remove the colour from an image, there is no possible way to automatically put it back. You're basically reducing a set of 16,777,216 colours to a set of 256 - on average each shade of grey has 65,536 equivalent colours, and without the original image there's no way to guess which it could be.
Now, if you were to take the bolded lines from your black-and-white image and paint them on top of the original coloured image, that might end up producing what you're looking for.
If what you are trying to do is to apply some filter over the B/W image and then use that with the original colour, I suggest you convert your image to a colour space with a lightness channel that suits your needs (for example L*a*b*, if you need the lightness to be perceptually uniform) and apply your filter only to the lightness channel.
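For instance, a hedged ImageMagick sketch of that idea (after -colorspace Lab the lightness sits in the first channel, which -channel addresses as R; in MATLAB the equivalent steps would be rgb2lab, filter channel 1, lab2rgb):

# sharpen only the lightness channel, leaving the colour untouched
convert input.png -colorspace Lab -channel R -unsharp 0x2 +channel -colorspace sRGB output.png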