Strange behaviour when changing xlim [duplicate] - matlab

When I display a bitmap image using image in a Matlab figure window, I'm experiencing strange artifacts:
What I'm referring to are the cross-shaped structures which are particularly visible at the edges of the brain slice, but present throughout.
These structures are not in the underlying image data, which are exactly identical to those in this image:
I assume the artifacts have to do with the slight rescaling that is necessary to match the image to the given axes size.
Does someone have an idea how to avoid these artifacts? I tried changing the figure 'Renderer', which does indeed affect the artifact, but does not make it disappear.
How to reproduce the effect:
save the second image as "image.png"
execute:
im = imread('image.png');
image(im)
set(gca, 'Units', 'pixels')
set(gca, 'Position', [76.1094204520027 576.387782501678 343.969568136048 357.502797046319])
maximize the figure window, so that the axes with the image becomes visible
Native image dimensions are 306 x 318, but it is displayed at about 344 x 358 pixels.
I did some further experiments and found that the effect is not specific to this image, the particular positioning, or the colormap:
[x, y] = meshgrid(-1:0.01:1);
imagesc(cos(10*sqrt(x.^2 + y.^2)))
giving a plot that, for a specific size of the figure window, shows the same kind of artifacts.
It would be interesting to know whether the artefact is specific to my Matlab version (2013a) or platform (Debian Linux, kernel 3.14 with NVidia graphics).

It looks to me as if the artifacts are caused by Matlab interpolating to translate picture pixels into screen pixels.
It would be nice to be able to change the interpolation method used by Matlab when displaying the image, but that doesn't seem to be possible (changing 'renderer' doesn't help). So you could manually interpolate the image to match the display size, and then display that interpolated image, for which one image pixel now corresponds to one screen pixel. That way Matlab doesn't have to interpolate.
To do the interpolation I've used the imresize function. I find all available interpolation methods give more or less the same results, except 'box', which is even worse than Matlab's automatic screen interpolation. I attach some results:
First picture is obtained with your approach. You can see artifacts in its left and right edges, and in the lower inner diagonal edges. Code:
m = 344;
n = 358;
image(im)
set(gca, 'units', 'pixels', 'Position', [40 40 m n])
Second picture applies manual interpolation with imresize using the 'box' option. The artifacts are similar, or even more pronounced.
imr = imresize(double(im)/255, [m n], 'box'); %// convert to double and
%// interpolate to size [m, n]
image(imr/max(imr(:))) %// display with image size matching display size.
%// Normalization is required because the interpolation may give values
%// out of the interval [0 1]
set(gca, 'units', 'pixels', 'Position', [40 40 m n])
Third figure is as the second but with the 'bilinear' option. The artifacts are greatly attenuated, although still visible in some parts. Other interpolation methods give similar results.
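For reference, the third figure is obtained by simply swapping the interpolation method in the snippet above (only the imresize option changes):
imr = imresize(double(im)/255, [m n], 'bilinear'); %// as before, but bilinear
image(imr/max(imr(:)))
set(gca, 'units', 'pixels', 'Position', [40 40 m n])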

As has been mentioned, MATLAB uses a nearest-neighbor interpolation for both upsampling and downsampling images for display. Because the image window is user-resizeable, the artifacts due to this can change just by moving the window around.
One solution is to write a wrapper class for image display that monitors window events and resizes using imresize to more accurately display the data to the screen. I've written such a class, and it's publicly available. I do image processing work all the time, and MATLAB's inbuilt display system is very irritating. I use this one:
http://www.mathworks.com/matlabcentral/fileexchange/46051-rviewer
It's designed to be a drop-in replacement for image, and will properly resample the images.

Related

Maintaining correct aspect ratio while displaying multiple images together

I am expecting the following output,
but I am getting the following output instead.
That is, the displayed images have an incorrect aspect ratio.
What is the reason, and how can I fix this?
Source Code
main.m
clear_all();
image_name = 'woman.png';
I = gray_imread(image_name);
K = {I, I, I, I, ...
I, I, I, I, ...
I, I, I, I};
draw_cell(K);
draw_cell.m
function draw_cell(image_list)
    if(iscell(image_list))
        figure;
        hold all
        colormap(gray(256));
        N = length(image_list);
        [m, n] = factor_out(N);
        display('cell');
        for k=1:N
            h = subplot(m,n,k);
            image(image_list{k},'parent',h);
            set(gca,'xtick',[],'ytick',[])
        end
        hold off
    else
        error('''image_list'' is not a cell array');
    end

function [m, n] = factor_out(input_number)
    sqrtt = ceil(sqrt(input_number));
    m = sqrtt;
    n = sqrtt;
Two possible options for maintaining aspect ratio of images
axis equal or axis image
For most plotting functions, you can use the axis equal command to set the same scale on the x and y axes. While plotting images, this is equivalent to maintaining the aspect ratio. You need to call this command for every subplot, so I would suggest using it immediately after the subplot command.
For plotting images specifically, the axis equal command will leave white space around the image. axis image will maintain aspect ratio and remove white space.
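Applied to the question's draw_cell loop, the change would look roughly like this (a sketch; only the axis image line is new):
for k = 1:N
    h = subplot(m,n,k);
    image(image_list{k}, 'parent', h);
    axis image % keep the aspect ratio and trim the white space
    set(gca, 'xtick', [], 'ytick', [])
end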
imshow instead of image
If you have the Image Processing Toolbox, you can substitute the imshow function for the image function. imshow makes the assumption that you want to display an image and restricts both the colormap and the aspect ratio accordingly. Despite its name, image is designed to visualize any matrix data, not just images; therefore, it scales pixels to fully utilize screen real estate. You'll run into the same problem if you use imagesc, along with the additional problem of color scaling. To be on the safe side, always use imshow when displaying RGB and grayscale images unless you have an explicit reason not to.
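In the question's loop, that substitution would be roughly as follows (a sketch; it assumes gray_imread returns doubles in [0,1], which is what imshow expects for double images):
for k = 1:N
    subplot(m,n,k);
    imshow(image_list{k}); % imshow preserves the aspect ratio by itself
end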

Upscale watershed output to match original image size

Introduction
Background: I am segmenting images using the watershed algorithm in MATLAB. Due to memory and time constraints, I prefer to perform this segmentation on subsampled images, say with a resize factor of 0.45.
The problem: I can't properly re-scale the output of the segmentation to the original image scale, both for visualization purposes and other post processing steps.
Minimal Working Example
For example, I have this image:
I run this minimal script and get a watershed segmentation output L, which is a label image where each connected component is identified by a natural number and the borders between connected components are zero-valued:
im_orig = imread('kitty.jpg'); % Load image [530x530]
im_res = imresize(im_orig, 0.45); % Resize image [239x239]
im_res = rgb2gray(im_res); % Convert to grayscale
im_blur = imgaussfilt(im_res, 5); % Gaussian filtering
L = watershed(im_blur); % Watershed algorithm
Now I have L, which has the same dimensions as im_res. How can I use the result stored in L to actually segment the original im_orig image?
Wrong solution
The first approach I tried was to resize L to the original scale by using imresize.
L_big = imresize(L, [size(im_orig,1), size(im_orig,2)]); % Upsample L
Unfortunately the upsampling of L produces a series of unwanted artifacts. It especially loses some of the fundamental zeros that represent the boundaries between the image segments. Here is what I mean:
figure; imagesc(imfuse(im_res, L == 0)); axis auto equal;
figure; imagesc(imfuse(im_orig, L_big == 0)); axis auto equal;
I know that this is due to the blurring caused by the upscaling process, but so far I haven't been able to think of anything else that would succeed.
The only other approach I thought of involves using mathematical morphology to "enlarge" the boundaries of the resized image before upsampling, but this would still lead to some unwanted artifacts.
TL;DR (or recap)
Is there a way to perform watershed on a downscaled image in MATLAB and then upscale the result to the original image, keeping the crisp region boundaries outputted by the algorithm? Is what I am looking for a completely absurd thing to ask?
If you only need the watershed segment borders after upsizing the image, then just make these little changes:
L_big = ~imresize(L==0, [size(im_orig,1), size(im_orig,2)]); % Upsample L
and here are the results:
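To check the result, the question's own overlay snippet can be reused; with this mask the borders are simply the pixels where L_big is zero:
figure; imagesc(imfuse(im_orig, L_big == 0)); axis auto equal;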
You can use nearest-neighbor interpolation when resizing:
L_big = imresize(L, [size(im_orig,1), size(im_orig,2)],'nearest'); % Upsample L
Normally when we resize images we start from the destination, iterate over x, y, and find the best matching pixel in the source. Here you want to do the reverse: iterate over the source in x, y and write to the destination buffer, with 0 taking priority (so initialise to 0xFF, then don't overwrite any zeros with other values).
There's unlikely to be a function that does this in the toolbox; you'll have to roll your own.
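A rough sketch of that zero-priority idea in MATLAB (my own construction, not a toolbox function): start from a nearest-neighbour upscale, then stamp every boundary zero of L onto the whole block of destination pixels it covers, so the zeros always win and cannot be averaged away. Boundaries come out at most one pixel thicker than strictly necessary.
[hSrc, wSrc] = size(L);
hDst = size(im_orig, 1);
wDst = size(im_orig, 2);
L_big = imresize(L, [hDst wDst], 'nearest'); % every destination pixel gets a label
for ySrc = 1:hSrc
    for xSrc = 1:wSrc
        if L(ySrc, xSrc) == 0
            % destination block covered by this source pixel
            y1 = floor((ySrc - 1) * hDst / hSrc) + 1;
            y2 = ceil(ySrc * hDst / hSrc);
            x1 = floor((xSrc - 1) * wDst / wSrc) + 1;
            x2 = ceil(xSrc * wDst / wSrc);
            L_big(y1:y2, x1:x2) = 0; % boundary takes priority over labels
        end
    end
end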

Subpixel edge detection for almost vertical edges

I want to detect edges (with sub-pixel accuracy) in images like the one displayed:
The resolution would be around 600 X 1000.
I came across a comment by Mark Ransom here, which mentions edge detection algorithms for vertical edges. I haven't come across any yet. Will they be useful in my case (since the edge isn't strictly a straight line)? It will always be a vertical edge, though. I want it to be accurate to at least 1/100th of a pixel, and I also want to have access to these sub-pixel coordinate values.
I have tried "Accurate subpixel edge location" by Agustin Trujillo-Pino. But this does not give me a continuous edge.
Are there any other algorithms available? I will be using MATLAB for this.
I have attached another similar image which the algorithm has to work on:
Any inputs will be appreciated.
Thank you.
Edit:
I was wondering if I could do this:
Apply Canny / Sobel in MATLAB and get the edges of this image (note that it won't be a continuous line). Then, somehow interpolate these Sobel edges and get the coordinates with subpixel precision. Is this possible?
A simple approach would be to project your image vertically and fit the projected profile with an appropriate function.
Here is a try, with an atan shape:
% Load image
Img = double(imread('bQsu5.png'));
% Project
x = 1:size(Img,2);
y = mean(Img,1);
% Fit
f = fit(x', y', 'a+b*atan((x0-x)/w)', 'Startpoint', [150 50 10 150])
% Display
figure
hold on
plot(x, y);
plot(f);
legend('Projected profile', 'atan fit');
And the result:
I get x_0 = 149.6 pix for your first image.
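For reference, the fitted coefficients can be read directly off the returned cfit object (assuming the Curve Fitting Toolbox, which fit requires anyway):
x0 = f.x0; % fitted edge position, in pixels
ci = confint(f); % confidence intervals for all four coefficients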
However, I doubt you will be able to achieve a subpixel accuracy of 1/100th of a pixel with those images, for several reasons:
As you can see on the profile, your whites are saturated (grey levels at 255). Since this cuts off the real atan profile, the fit is biased. If you have control over the experiments, I suggest you redo them with a smaller exposure time, for instance.
There are not many points on the transition, so there is not much information on where the transition is. Typically, your resolution will be the square root of the width of the atan (or whatever shape you prefer). In your case this limits the subpixel resolution to 1/5th of a pixel, at best.
Finally, your edges are not strictly vertical; they are slightly tilted. If you choose to use this projection method, you should look for a way to correct this tilt before projecting in order to increase the accuracy. This won't improve your accuracy by several orders of magnitude, though.
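A rough sketch of one way to estimate and remove that tilt before projecting (my own construction, not part of the original answer): fit the same atan model to the top and bottom halves of the image separately, convert the shift between the two fitted edge positions into an angle, and rotate the image back before redoing the projection.
Img = double(imread('bQsu5.png'));
rows = size(Img, 1);
x = (1:size(Img, 2))';
ft = fittype('a + b*atan((x0 - x)/w)', 'independent', 'x');
fTop = fit(x, mean(Img(1:floor(rows/2), :), 1)', ft, 'StartPoint', [150 50 10 150]);
fBot = fit(x, mean(Img(floor(rows/2)+1:end, :), 1)', ft, 'StartPoint', [150 50 10 150]);
% horizontal shift of the edge over half the image height -> tilt angle
tiltAngle = atand((fBot.x0 - fTop.x0) / (rows/2));
ImgCorrected = imrotate(Img, -tiltAngle, 'bilinear', 'crop'); % sign may need flipping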
There is a problem with your image. At pixel level, it seems like there are four interlaced subimages (odd and even rows and columns). Look at this zoomed area close to the edge.
In order to avoid this artifact, I have just taken the even rows and columns of your image and computed subpixel edges. Finally, I look for the best-fitting straight line using the function clsq, whose code is on this page:
%load image
url='http://i.stack.imgur.com/bQsu5.png';
image = imread(url);
imageEvenEven = image(1:2:end,1:2:end);
imshow(imageEvenEven, 'InitialMagnification', 'fit');
% subpixel detection
threshold = 25;
edges = subpixelEdges(imageEvenEven, threshold);
visEdges(edges);
% compute fit line
A = [ones(size(edges.x)) edges.x edges.y];
[c n] = clsq(A,2);
y = [1,200];
x = -(n(2)*y+c) / n(1);
hold on;
plot(x,y,'g');
When executing this code, you can see the green line that best approximates all the edge points. The line is given by the equation c + n(1)*x + n(2)*y = 0.
Take into account that this image has been scaled by 1/2 when taking only even rows and columns, so the right coordinates must be scaled.
Besides, you can try the other three subimages (imageEvenOdd, imageOddEven and imageOddOdd) and combine the four straight lines to obtain the best solution.
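For reference, the other three subimages can be extracted with the same indexing (a sketch; the even/odd naming follows the convention used above):
imageEvenOdd = image(1:2:end, 2:2:end);
imageOddEven = image(2:2:end, 1:2:end);
imageOddOdd = image(2:2:end, 2:2:end);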

abs function for fft2 is not working in MATLAB

I am trying to plot the FFT magnitude of an image using the following code in the command window:
a= imread('lena','png')
figure,imshow(a)
ffta=fft2(a)
fftshift1=fftshift(ffta)
magnitude=abs(fftshift1)
figure,imshow(magnitude),title('magnitude')
However, the figure with the title magnitude shows nothing, even though MATLAB shows that it has computed abs() on the fftshift result. The figure is still empty, and there is no error. Also, why do we need to apply fftshift before computing the magnitude?
This is probably happening because of the following:
When you take the 2D fft of your image, it will produce a double valued result, even though your image is mostly unsigned 8-bit integer. MATLAB assumes that double formatted images have their intensities / colours between [0,1]. By doing imshow on just the magnitude itself, you will most likely get an entirely white image because I suspect a good majority of the FFT coefficients are bigger than 1. This is probably the blank figure that you're referring to.
Even if you rescale the magnitude so that it is between [0,1], the DC coefficient will be so large that if you try to display the image, you'll only see a white dot in the middle while every other component will be black.
As a side note, the reason why you are doing fftshift is because by default, MATLAB assumes that the origin of the FFT for 2D is located at the top left corner. Doing fftshift will allow the origin to be in the middle, which is what we would intuitively expect of the 2D FFT.
In order to remedy this situation, I would suggest doing a log transformation on the FFT coefficients so you can visually see the results. I would also normalize the coefficients once you log-transform them so that they go between [0,1]. Do not actually modify the FFT coefficients themselves, as this would be improper. You need to leave them as they are, because if you intend to do any processing on the spectrum, you would start by working on the raw image. Doing filter design or anything of that sort will require the raw spectrum, as the final filter will depend on these coefficients untouched. Unless you actually want a log operation as part of your pipeline, leave these coefficients as they are. As such, the display can be done through the following MATLAB code:
imshow(log(1 + magnitude), []);
I'm going to show an example using the code you have provided, but with another image, as you haven't provided one here. I'm going to use the cameraman.tif image that's part of the MATLAB system path. As such:
a= imread('cameraman.tif');
figure,imshow(a);
ffta=fft2(a);
fftshift1=fftshift(ffta);
magnitude=abs(fftshift1);
figure;
imshow(log(1 + magnitude), []); %// NEW
title('magnitude')
This is what I get:
As you can see, the magnitude is displayed more nicely. Also, the DC coefficient is in the middle of the spectrum thanks to fftshift.
If you want to apply this for colour images, fft2 should still work. It will apply the 2D fft to each colour plane by itself. However, if you want this to work, you'll not only need to take the log transform, but you'll also need to normalize each plane separately. You have to do this because if we tried doing the imshow command we did earlier, it would normalize it so that the greatest value in the spectrum of the colour image gets normalized to 1. This will inevitably produce that same small dot effect that we talked about earlier.
Let's try a colour image that's built-in to MATLAB: onion.png. We will use the same code that you used above, but we need an additional step of normalizing each colour plane by itself. As such:
a = imread('onion.png');
figure,imshow(a);
ffta=fft2(a);
fftshift1=fftshift(ffta);
magnitude=abs(fftshift1);
logMag = log(1 + magnitude); %// New
for c = 1 : size(a,3); %// New - normalize each plane
logMag(:,:,c) = mat2gray(logMag(:,:,c));
end
figure; imshow(logMag); title('magnitude');
Note that I had to loop through each colour plane and use mat2gray to normalize each plane to [0,1]. Also, I had to create a new variable called logMag because I have to modify each colour plane individually, and you can't do this with a single imshow call.
With this, these are the results I get:
What's different with this spectrum is that we are applying the FFT to each colour plane separately, and so you'll see a whole bunch of colour spatters because for each location in this image, we are visualizing a linear combination of components from the red, green and blue channels. For each location, we have a value in between [0,1] for each colour plane, and the combination of these give you a colour at this location. You could say that darker colours are for locations that have a relatively low magnitude for at least one of the colour channels, while locations that are brighter have a relatively high magnitude for at least one of the colour channels.
Hope this helps!
Can't be sure about your version of "lena.png", but if it's a color RGB picture, you need to convert it first to grayscale, or at least select which RGB plane you want to examine.
I.e., the following works for http://optipng.sourceforge.net/pngtech/img/lena.png (color png):
clear; close all;
a = imread('lena','png');
ag = rgb2gray(a);
ag = im2double(ag);
figure(1);
imshow(ag);
F = fftshift( fft2(ag) ); % also try fft2(ag, N, N) where N < image size. Say N=128.
magnitude=abs(F);
figure(2);
imshow(magnitude);
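As explained in the earlier answer, the raw magnitude display will be dominated by the DC term, so the same log remedy applies here too (a minor variation on the code above):
figure(3);
imshow(log(1 + magnitude), []); % log-scale and let imshow rescale the display range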

Making a 2D Matlab figure that contains multiple images which start at a given point (x,y)

Problem
I am trying to make a 2D figure in Matlab which consists of multiple images and a graph with plot data (which I could eventually convert into an image too). For these images and the graph, I need to be able to specify where they are located in my Cartesian coordinate system.
For my specific case, it is sufficient to be able to "tell" Matlab where the left-bottom corner of the image is.
So, for the example above, I would need some "trick" to make "bird1.jpg" start at position (a,b), "bird2.jpg" at position (c,d), and my plot at position (e,f), all in one Matlab figure.
Solution to problem
Thanks to chappjc I was able to find a solution for my problem. Here is the code I used such that other people can use it in the future too.
figure_color = [.741 .717 .42];
axe_color = [1 1 1];
Screen.fig = figure('units','pixels',...
'name','Parallel projection',...
'menubar','none',...
'numbertitle','off',...
'position',[100 100 650 720],...
'color',figure_color,...
'busyaction','cancel',...
'renderer','opengl');
Screen.axes = axes('units','pix',...
'position',[420 460 200 200],... % (420,460) is the position of the first image
'ycolor',axe_color,...
'xcolor',axe_color,...
'color',axe_color,...
'xtick',[],'ytick',[],...
'xlim',[-.1 7.1],...
'ylim',[-.1 7.1],...
'visible','On');
Screen.img = imshow(phantom);
Screen.axes2 = axes('units','pix',...
'position',[0 0 200 200],... % (0,0) is the position of the second image
'ycolor',axe_color,...
'xcolor',axe_color,...
'color',axe_color,...
'xtick',[],'ytick',[],...
'xlim',[-.1 7.1],...
'ylim',[-.1 7.1],...
'visible','On');
Screen.img2 = imshow(phantom);
Basically what I do is first create a (big) figure, then create a first axes at a certain position in this big figure and make it the default axes. In these axes I display my first image (made with the phantom function). After that I make a new axes at another position and make it the default axes in turn. After I have done that, I place an image there too (the same picture, but you can also use another one if you want). You can also use handles, which is the cleaner method, as chappjc describes.
Positioning axes in a figure
One approach would be to manipulate the Position property of multiple axes in a figure. To make multiple axes in a figure:
hf = figure;
ha0 = axes('parent',hf,'Position',[x0 y0 w0 h0]);
ha1 = axes('parent',hf,'Position',[x1 y1 w1 h1]);
Then display your images and plots into the axes by specifying the handle (i.e. ha0 or ha1). For example: image(img0,'Parent',ha0) or imshow(img1,'parent',ha1).
Single Large Image
Another approach is to make a single large image and simply display it with image/imshow/etc.
First, for the plots, you can use getframe followed by frame2im to get an image in matrix format.
Next, decide what goes into your combined image and compute the largest box required to circumscribe the images (using their origins and sizes, find the largest x and y coordinates), which presumably includes the origin. Use this info to make a blank image (e.g. img = zeros(h,w,3) for an RGB image).
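A rough sketch of that combined-image approach (the names img0 and img1 come from the axes example above, the origins a, b, c, d, e, f are the placeholders from the question, and all images are assumed to be m-by-n-by-3 uint8):
% convert the current plot into an RGB image
fr = getframe(gca);
plotImg = frame2im(fr);
% images to place and their origins in pixel coordinates
imgs = {img0, img1, plotImg};
origins = [a b; c d; e f]; % one [x y] row per image, origins assumed >= 1
% canvas large enough to circumscribe all images
h = 0; w = 0;
for k = 1:numel(imgs)
    h = max(h, origins(k,2) + size(imgs{k},1) - 1);
    w = max(w, origins(k,1) + size(imgs{k},2) - 1);
end
canvas = zeros(h, w, 3, 'uint8');
% paste each image at its origin (rows counted from the top here; flip the y origins if you prefer a bottom-left convention)
for k = 1:numel(imgs)
    r = origins(k,2);
    c = origins(k,1);
    canvas(r:r+size(imgs{k},1)-1, c:c+size(imgs{k},2)-1, :) = imgs{k};
end
imshow(canvas);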