I am performing the following steps:
I insert a mark in an original image img1 to obtain a watermarked image img2.
I crop the watermarked image to obtain img3.
I want to crop the corresponding part of the original image img1, i.e. the same region as img3.
My question is: how do I find where the cropped part is located in the original image?
You could use normalized cross-correlation: http://www.mathworks.ch/ch/help/images/ref/normxcorr2.html
Here is an example: http://www.mathworks.ch/products/demos/image/cross_correlation/imreg.html
If the cropped image has also been resized, I think your problem is closer to this one: http://thydzik.com/matlab-scaled-image-normalized-cross-correlation/
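A minimal sketch of the idea, assuming img1 (the original) and img3 (the cropped part) are grayscale arrays already in the workspace:

c = normxcorr2(img3, img1);               % correlation surface
[ypeak, xpeak] = find(c == max(c(:)));    % position of the best match
% normxcorr2 pads the output, so subtract the template size
% to get the top-left corner of the match inside img1.
rowTop  = ypeak - size(img3,1) + 1;
colLeft = xpeak - size(img3,2) + 1;
img1Crop = img1(rowTop:rowTop+size(img3,1)-1, colLeft:colLeft+size(img3,2)-1);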
I performed superpixel segmentation on a specific image. I have the original RGB image and the labeled contour image. How can I draw the superpixels on the image?
The solution is in the MATLAB documentation for superpixels. The relevant part is:
Read image into the workspace.
A = imread('kobi.png');
Calculate superpixels of the image.
[L,N] = superpixels(A,500);
Display the superpixel boundaries overlaid on the original image.
figure
BW = boundarymask(L);
imshow(imoverlay(A,BW,'cyan'),'InitialMagnification',67)
I am trying to write a matlab program for image blurring. I am required to use fspecial('average') and conv2 function. So far I have written the following code:
x=imread('ghoul.jpg');
subplot(211),imshow(x)
h=fspecial('average');
y=conv2(double(x),double(h));
subplot(212),imshow(y)
The size of x is 250x250 uint8.
The problem with the code is that it displays the original image fine, but the second image appears blurred only at the bottom and white everywhere else.
So far I have guessed that I haven't specified the size of h, but I am having trouble working out how to define it. Should it be the size of x or not? It would be helpful if someone could tell me how to set the size, or give me another tip.
Thanks for your help.
The problem with the MATLAB code is that it used imshow on a double array. imshow expects double images to lie in the range [0, 1], so the intensity values were distorted (barely visible or invisible in certain areas of the image). The filtered image needed its intensity values rescaled, and as @eigenchris pointed out, using:
imshow(y,[])
rescales the intensity values and the image is blurred perfectly.
Side note: the size of the filter didn't have any effect on the distortion; the size only controls how much you blur the image.
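For reference, a minimal corrected version of the original script (a sketch, assuming ghoul.jpg is a grayscale image) might look like:

x = imread('ghoul.jpg');
subplot(211), imshow(x)
h = fspecial('average');           % 3x3 averaging kernel by default
y = conv2(double(x), h, 'same');   % 'same' keeps the output the same size as x
subplot(212), imshow(y, [])        % rescale intensities for display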
We can extract the edges of an image in MATLAB using the function edge().
My question is: how can I recombine the edges with the original image to get an image with enhanced edges, i.e. to increase the sharpness of the image?
What you are looking for already exists!
The image filter in question is called an 'unsharp mask'. It basically uses the edge data of an image to sharpen it: it takes the difference between the image and a blurred version of it, and then uses that difference to sharpen the image. You can read more about it here.
To use it, simply do something like the following:
>> my_image = imread('lena.jpg');
>> subplot(1,2,1);
>> imshow(my_image);
>> subplot(1,2,2);
>> imshow(imfilter(my_image,fspecial('unsharp')));
This would yield:
As you can see, the second image is visibly sharper and this is done by "adding" the edge data to the original image through the use of the unsharp mask.
Forget edge(). Just call imsharpen().
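For example (a minimal sketch; 'Amount' is an optional name-value pair that controls the sharpening strength):

sharpened = imsharpen(my_image);               % default unsharp masking
stronger  = imsharpen(my_image, 'Amount', 2);  % stronger sharpening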
I'm using the MNIST digit images for a machine learning experiment, and I'm trying to center each image based on position, rather than the center of mass that they are centered on by default.
I'm using the regionprops function with the BoundingBox property to extract the images. I create a B&W copy of the greyscale image, use it to determine the BoundingBox properties (regionprops works only on B&W images), and then apply that to the greyscale original to extract the precise image rectangle. This works fine on ~98% of the images.
The problem I have is that the other ~2% of images has some kind of noise or errant pixel in the upper left corner, and I end up extracting only that pixel, with the rest of the image discarded.
How can I incorporate all elements of the image into a single rectangle?
EDIT: Further research has made me realise that I can summarise and rephrase this question as "How do I find the bounding box for all regions?". I've tried adjusting a label matrix so that all regions are the same label, to no avail.
You can use an erosion mask the same size as the noise to make it disappear completely (imerode followed by imdilate to reverse the erosion), or you can use a median filter.
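A minimal sketch of that idea, assuming bw is the binary image and the errant specks are no larger than 2x2 pixels (the structuring-element size is a guess you would tune):

se = strel('square', 3);                  % slightly larger than the noise
cleaned = imdilate(imerode(bw, se), se);  % erosion then dilation, i.e. an opening
% equivalent one-liner: cleaned = imopen(bw, se);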
I am working on an Optical Character Recognition system.
I want to convert a license plate image from binary to grayscale.
Let's look at the following example:
this is the binary image:
and this is the grayscale:
What I want to know is whether there is a way to convert it from binary to grayscale, or whether this is impossible because I lost the information when I converted the picture to binary in the first place.
Any idea how to do this? Thanks.
To convert a binary image of class logical to a grayscale image of class double, you simply call
double(yourBinaryImage)
EDIT
Reverting from a binary image to the grayscale image you had before thresholding is impossible without the original grayscale image, since thresholding drops all the grayscale texture information.
Maybe you can use the distance transform to obtain a grayscale image from a binary image. In MATLAB, try bwdist.
The result, of course, will not be the original grayscale image.
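A minimal sketch, assuming bw is the logical plate image; bwdist(~bw) gives, for each character pixel, the distance to the nearest background pixel, and mat2gray rescales it to [0, 1] for display:

D = bwdist(~bw);   % distance from each foreground pixel to the background
g = mat2gray(D);   % rescale to [0, 1] so it displays as grayscale
imshow(g)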
I don't think you can get exactly the grayscale image you have shown from the binary image. What you can do is convert the image to grayscale, apply Gaussian blurring to spread the edges, and then add random noise to the whole image. The new grayscale image will then look quite different from the binary one.
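A minimal sketch of that idea, assuming bw is the logical binary image; the blur sigma and noise level are arbitrary choices to tune:

g = im2double(bw);              % logical -> double grayscale
g = imgaussfilt(g, 2);          % Gaussian blur to spread the edges
g = g + 0.05*randn(size(g));    % add random noise to the whole image
g = min(max(g, 0), 1);          % clamp back to the [0, 1] display range
imshow(g)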