plot polar grey values in matrix without interpolating in every for loop - MATLAB

I have a matrix with grey values between 0 and 1. For every entry in the matrix, there are polar coordinates that indicate the position of the grey value. I have the Theta and Rho values (polar) in two separate 512×960 matrices, and the grayscale values (in a matrix called C) for every Theta and Rho combination. I have the same for X and Y, as I just use pol2cart for the transformation. The problem is that I cannot directly plot these values, as they do not yet fit in the 'bins' of the new matrix.
What I want: to put the grey values in a square matrix of size 1024×1024. I cannot do this directly, because the polar coordinates fall in between the grid of this matrix. Therefore, we now use interpolation, but this is extremely time consuming and has to be done separately for every dataset, although the transformation from the original matrices to this final one will always be the same. Therefore, I'd like to solve this matrix once (either analytically or numerically) and use a matrix multiplication or something similar to apply the manipulation efficiently in every cycle of the code.
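For what it's worth, here is a minimal sketch of that precompute-once idea (my own suggestion, not part of the original setup): triangulate the scattered source points a single time, store the linear (barycentric) interpolation weights in a sparse matrix W, and then each dataset costs only one sparse multiply. This assumes X and Y are the 512×960 Cartesian coordinate matrices from pol2cart:
[Xq, Yq] = meshgrid(linspace(min(X(:)), max(X(:)), 1024), ...
                    linspace(min(Y(:)), max(Y(:)), 1024));
DT = delaunayTriangulation(X(:), Y(:));          % done once per geometry
[ti, bc] = pointLocation(DT, [Xq(:), Yq(:)]);    % enclosing triangle + barycentric weights
valid = ~isnan(ti);                              % grid points outside the hull give NaN
verts = DT.ConnectivityList(ti(valid), :);       % 3 source indices per valid grid point
rows  = repmat(find(valid), 1, 3);
bcv   = bc(valid, :);
W = sparse(rows(:), verts(:), bcv(:), numel(Xq), numel(X));
% Then, in every cycle of the code, linear interpolation reduces to:
img1024 = reshape(W * C(:), 1024, 1024);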
One example of what one of these transformations could look like:
The zeros in the first matrix are the grid, and the value 1 (in between the grid) is the grey value that falls in between four grid points, then I'd like to transform to the second matrix (don't mind the visual spacing between the points).
For every dataset, I have hundreds of these matrices, so I would like to make the code more efficient.
Background: I'm using TriScatteredInterp now for the interpolation. We tried scatteredInterpolant as well, but that is slower. I also posted a related question, but decided to split the two possible solutions, because the solution I ask for here is also applicable to non-MATLAB code and will probably be faster and makes for a smoother (no continuous popping up of figures) execution of the code.

Using the image processing toolbox
Images work a bit differently than the data you have. However, it's fairly straightforward to map one representation into the other.
There is only one problem I see: wrapping. Obviously, θ = 2π is the same angle as θ = 0, but MATLAB does not know that. AFAIK, there is no easy way to tell MATLAB that.
Why does this matter? Well, simply put, inter-pixel interpolation uses information from the nearest N neighbors to find intermediate colors, with N depending on the interpolation kernel. When doing this somewhere in the middle of the image there is no problem, but at the edges MATLAB has to know that the left edge equals the right edge. This is not standard image processing, and I'm not aware of any function that is capable of this.
Implementation
Now, when disregarding the wrapping problem, this is one way to do it:
function resize_polar()
%% ORIGINAL IMAGE
% ==========================================================================
% Some random greyscale data
C = double(rgb2gray(imread('stars.png')))/255;
% Your current size, and desired size
sz_x = size(C,2); new_sz_x = 1024;
sz_y = size(C,1); new_sz_y = 1024;
% Ranges for theta and rho;
% replace with your actual values
rho_start = 0; theta_start = 0;
rho_end = 10; theta_end = 2*pi;
% Generate regularly spaced grid;
theta = linspace(theta_start, theta_end, sz_x);
rho = linspace(rho_start, rho_end, sz_y);
[theta, rho] = meshgrid(theta,rho);
% Make plot of generated data
plot_polar(theta, rho, C, 'Original image');
% Resize data
[theta,rho,C] = resize_polar_data(theta, rho, C, [new_sz_y new_sz_x]);
% Make plot of generated data
plot_polar(theta, rho, C, 'Rescaled image');
end
function [theta,rho,data] = resize_polar_data(theta,rho,data, new_dims)
% Create fake RGB image cube
IMG = cat(3, theta,rho,data);
% Rescale as if theta and rho are RG color data in the RGB
% image cube
IMG = imresize(IMG, new_dims, 'nearest');
% Split up the data again
theta = IMG(:,:,1);
rho = IMG(:,:,2);
data = IMG(:,:,3);
end
function plot_polar(theta, rho, data, label)
[X,Y] = pol2cart(theta, rho);
figure('renderer', 'opengl')
clf, hold on
surf(X,Y,zeros(size(X)), data, ...
'edgecolor', 'none');
colormap gray
title(label);
end
The images used and plotted:
Le awesomely-drawn 512×960 PNG image
Now, the two look the same (couldn't really come up with a better-suited image), so you'll have to believe me that the 512×960 has indeed been rescaled to 1024×1024, with nearest-neighbor interpolation.
Here are some timings for the actual imresize() operation for some simple kernels:
nearest : 0.008511 seconds.
bilinear: 0.019651 seconds.
bicubic : 0.025390 seconds. <-- default kernel
But this depends strongly on your hardware; I believe imresize offloads a lot of work to the GPU, so if you have a crappy one, it'll be slower.
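(These numbers can be reproduced along the following lines; timeit averages over many runs, and IMG here would be the theta/rho/data cube from the function above:)
t = timeit(@() imresize(IMG, [1024 1024], 'nearest'));
fprintf('nearest : %f seconds.\n', t);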
Wrapping
If the wrapping problem is really important to you, you can modify the function above to do the following:
1. first, rescale the image with imresize() like before
2. horizontally concatenate the second half of the grayscale data and the first half; that is, swap the two halves so that the left and right edges (0 and 2π) touch in the middle
3. rescale this intermediate image with imresize()
4. extract the central vertical strip of the rescaled intermediate image
5. split that up in two equal-width strips
6. replace the edge strips of the output image with the two strips you just created
Now, this is kind of a brute-force approach: you are rescaling the image twice, and most of the pixels from the second rescale will be discarded. If performance is a problem, you can of course apply the rescale to only the central strip of that intermediate image. But, well, that will be a bit more complicated.
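A rough sketch of those steps, assuming theta runs along the columns (so the wrap seam lies at the left/right image edges):
A = imresize(C, [new_sz_y new_sz_x]);      % step 1: plain rescale
half = floor(size(C,2)/2);
Cw = [C(:, half+1:end), C(:, 1:half)];     % step 2: swap halves; the seam moves to the centre
B = imresize(Cw, [new_sz_y new_sz_x]);     % step 3: rescale the swapped image
w = 8;                                     % half-width of the strip to transplant
mid = floor(new_sz_x/2);
strip = B(:, mid-w+1 : mid+w);             % step 4: central strip of the intermediate
A(:, end-w+1:end) = strip(:, 1:w);         % steps 5-6: left half of the strip is the
A(:, 1:w)         = strip(:, w+1:end);     % 2*pi side, the right half is the 0 side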

Related

Upscale watershed output to match original image size

Introduction
Background: I am segmenting images using the watershed algorithm in MATLAB. Due to memory and time constraints, I prefer to perform this segmentation on subsampled images, let's say with a resize factor of 0.45.
The problem: I can't properly re-scale the output of the segmentation to the original image scale, both for visualization purposes and other post processing steps.
Minimal Working Example
For example, I have this image:
I run this minimal script and get a watershed segmentation output L, which is a label image where each connected component is assigned a natural number and the borders between the connected components are zero-valued:
im_orig = imread('kitty.jpg'); % Load image [530x530]
im_res = imresize(im_orig, 0.45); % Resize image [239x239]
im_res = rgb2gray(im_res); % Convert to grayscale
im_blur = imgaussfilt(im_res, 5); % Gaussian filtering
L = watershed(im_blur); % Watershed algorithm
Now I have L that has the same dimension of im_res. How can I use the result stored in L to actually segment the original im_orig image?
Wrong solution
The first approach I tried was to resize L to the original scale by using imresize.
L_big = imresize(L, [size(im_orig,1), size(im_orig,2)]); % Upsample L
Unfortunately the upsampling of L produces a series of unwanted artifacts. It especially loses some of the fundamental zeros that represent the boundaries between the image segments. Here is what I mean:
figure; imagesc(imfuse(im_res, L == 0)); axis auto equal;
figure; imagesc(imfuse(im_orig, L_big == 0)); axis auto equal;
I know that this is due to the blurring caused by the upscaling process, but so far I couldn't think of anything else that might work.
The only other approach I thought about involves using Mathematical Morphology to "enlarge" the boundaries of the resized image and then upsample, but this would still lead to some unwanted artifacts.
TL;DR (or recap)
Is there a way to perform watershed on a downscaled image in MATLAB and then upscale the result to the original image, keeping the crisp region boundaries outputted by the algorithm? Is what I am looking for a completely absurd thing to ask?
If you only need the watershed segment borders after upsizing the image, then just make these little changes:
L_big = ~imresize(L==0, [size(im_orig,1), size(im_orig,2)]); % Upsample L
and here are the results:
You can use nearest-neighbor interpolation when resizing:
L_big = imresize(L, [size(im_orig,1), size(im_orig,2)],'nearest'); % Upsample L
Normally when we resize images we start from the destination, iterate over x, y, and find the best matching pixel in the source. Here you want to do the reverse: iterate over the source in x, y and write to the destination buffer, with 0 taking priority (so initialise to 0xFF, then don't overwrite any zeroes with other values).
There's unlikely to be a function that does this in the toolbox; you'll have to roll your own.
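A rough sketch of that reverse mapping (hypothetical; I pre-fill with a plain nearest-neighbour upscale so destination pixels that no source pixel maps to are still covered, then stamp the zeros so borders always win):
dst_sz = [size(im_orig,1), size(im_orig,2)];
L_big = imresize(L, dst_sz, 'nearest');   % pre-fill; avoids holes in the destination
scaleY = dst_sz(1) / size(L,1);
scaleX = dst_sz(2) / size(L,2);
for y = 1:size(L,1)                       % iterate over the SOURCE, not the destination
    for x = 1:size(L,2)
        if L(y,x) == 0
            Y = min(max(round((y-0.5)*scaleY + 0.5), 1), dst_sz(1));
            X = min(max(round((x-0.5)*scaleX + 0.5), 1), dst_sz(2));
            L_big(Y,X) = 0;               % zeros (region borders) take priority
        end
    end
end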

Draw evenly-spaced height lines of a function in MATLAB

I would like to draw height lines of a function (represented by matrices, of course), using MATLAB.
I'm familiar with contour, but contour draws lines at evenly-spaced heights, while I would like to see lines (with height labels) at a constant distance from one another when plotted.
This means that if a function grows rapidly in one area, I won't get a plot with dense height lines, but only a few lines, at evenly spaced distances.
I tried to find such an option in the contour help page, but couldn't see anything. Is there a built in function which does it?
There is no built-in function to do this (to my knowledge). You have to realize that in the general case you can't have lines that both represent iso-values and that are spaced with a fixed distance. This is only possible with plots that have special scaling properties, and again, this is not the general case.
This being said, you can approximate your desired plot by using the syntax in which you specify the levels to plot:
...
contour(Z,v) draws a contour plot of matrix Z with contour lines at the data values specified in the monotonically increasing vector v.
...
So all you need is a good vector v of height values. For this we can take the classic MATLAB example:
[X,Y,Z] = peaks;
contour(X,Y,Z,10);
axis equal
colorbar
and transform it in:
[X,Y,Z] = peaks;
[~, I] = sort(Z(:));
v = Z(I(round(linspace(1, numel(Z),10))));
contour(X,Y,Z,v);
axis equal
colorbar
The result may not be as nice as what you expected, but this is the best I can think of given that what you ask is, again, not possible.
One thing you could do is, instead of plotting the contours at equally spaced levels (this is what happens when you pass an integer to contour), plot the contours at fixed percentiles of your data (this requires passing a vector of levels to contour):
Z = peaks(100); % generate some pretty data
nlevel = 30;
subplot(121)
contour(Z, nlevel) % spaced equally between min(Z(:)) and max(Z(:))
title('Contours at fixed height')
subplot(122)
levels = prctile(Z(:), linspace(0, 100, nlevel));
contour(Z, levels); % at given levels
title('Contours at fixed percentiles')
Result:
For the right figure, the lines have roughly equal spacing over most of the image. Note that the spacing is only approximately equal, and it is impossible to get exactly equal spacing over the complete image, except in some trivial cases.

abs function for fft2 is not working in MATLAB

I am trying to plot the figure of the FFT magnitude of an image using the following code in the command window:
a= imread('lena','png')
figure,imshow(a)
ffta=fft2(a)
fftshift1=fftshift(ffta)
magnitude=abs(fftshift1)
figure,imshow(magnitude),title('magnitude')
However, the figure with the title magnitude shows nothing, even though MATLAB shows that it has computed abs() on fftshift1. The figure is still empty, and there is no error. Also, why do we need to compute fftshift before the magnitude?
The reason why this is probably happening is because of the following things:
When you take the 2D fft of your image, it will produce a double valued result, even though your image is mostly unsigned 8-bit integer. MATLAB assumes that double formatted images have their intensities / colours between [0,1]. By doing imshow on just the magnitude itself, you will most likely get an entirely white image because I suspect a good majority of the FFT coefficients are bigger than 1. This is probably the blank figure that you're referring to.
Even if you rescale the magnitude so that it is between [0,1], the DC coefficient will be so large that if you try to display the image, you'll only see a white dot in the middle while every other component will be black.
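(An illustrative check of how dominant the DC term is, assuming magnitude comes from the code above; after fftshift the DC coefficient sits at the centre pixel:)
dc = magnitude(floor(end/2)+1, floor(end/2)+1);   % centre pixel after fftshift
fprintf('DC = %g, median coefficient = %g\n', dc, median(magnitude(:)));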
As a side note, the reason why you are doing fftshift is because by default, MATLAB assumes that the origin of the FFT for 2D is located at the top left corner. Doing fftshift will allow the origin to be in the middle, which is what we would intuitively expect of the 2D FFT.
In order to remedy this situation, I would suggest doing a log transformation on the FFT coefficients so you can visually see the results. I would also normalize the coefficients once you log transform them so that they go between [0,1]. Do not actually modify the FFT coefficients themselves, as this would be improper: if you intend to do any processing on the spectrum, such as filter design, you will need the raw, untouched coefficients. Unless you actually want a log operation as part of your pipeline, apply it for display only. As such, this can be done through the following MATLAB code:
imshow(log(1 + magnitude), []);
I'm going to show an example using the code you provided, but with another image, as you haven't provided one here. I'm going to use the cameraman.tif image that's part of the MATLAB system path. As such:
a= imread('cameraman.tif');
figure,imshow(a);
ffta=fft2(a);
fftshift1=fftshift(ffta);
magnitude=abs(fftshift1);
figure;
imshow(log(1 + magnitude), []); %// NEW
title('magnitude')
This is what I get:
As you can see, the magnitude is displayed more nicely. Also, the DC coefficient is in the middle of the spectrum thanks to fftshift.
If you want to apply this for colour images, fft2 should still work. It will apply the 2D fft to each colour plane by itself. However, if you want this to work, you'll not only need to take the log transform, but you'll also need to normalize each plane separately. You have to do this because if we tried doing the imshow command we did earlier, it would normalize it so that the greatest value in the spectrum of the colour image gets normalized to 1. This will inevitably produce that same small dot effect that we talked about earlier.
Let's try a colour image that's built-in to MATLAB: onion.png. We will use the same code that you used above, but we need an additional step of normalizing each colour plane by itself. As such:
a = imread('onion.png');
figure,imshow(a);
ffta=fft2(a);
fftshift1=fftshift(ffta);
magnitude=abs(fftshift1);
logMag = log(1 + magnitude); %// New
for c = 1 : size(a,3) %// New - normalize each plane
logMag(:,:,c) = mat2gray(logMag(:,:,c));
end
figure; imshow(logMag); title('magnitude');
Note that I had to loop through each colour plane and use mat2gray to normalize each plane to [0,1]. Also, I had to create a new variable called logMag because I have to modify each colour plane individually, and you can't do this with a single imshow call.
With this, these are the results I get:
What's different with this spectrum is that we are applying the FFT to each colour plane separately, and so you'll see a whole bunch of colour spatters because for each location in this image, we are visualizing a linear combination of components from the red, green and blue channels. For each location, we have a value in between [0,1] for each colour plane, and the combination of these give you a colour at this location. You could say that darker colours are for locations that have a relatively low magnitude for at least one of the colour channels, while locations that are brighter have a relatively high magnitude for at least one of the colour channels.
Hope this helps!
Can't be sure about your version of "lena.png", but if it's a color RGB picture, you need to convert it first to grayscale, or at least select which RGB plane you want to examine.
I.e., the following works for http://optipng.sourceforge.net/pngtech/img/lena.png (color png):
clear; close all;
a = imread('lena','png');
ag = rgb2gray(a);
ag = im2double(ag);
figure(1);
imshow(ag);
F = fftshift( fft2(ag) ); % also try fft2(ag, N, N) where N < image size. Say N=128.
magnitude=abs(F);
figure(2);
imshow(magnitude);

Hough transform in MATLAB without using hough function

I found an implementation of the Hough transform in MATLAB at Rosetta Code, but I'm having trouble understanding it. Also I would like to modify it to show the original image and the reconstructed lines (de-Houghing).
Any help in understanding it and de-Houghing is appreciated. Thanks
Why is the image flipped?
theImage = flipud(theImage);
I can't wrap my head around the norm function. What is its purpose, and can it be avoided?
EDIT: norm here just computes the Euclidean length of the vector: sqrt(width^2 + height^2)
rhoLimit = norm([width height]);
Can someone provide an explanation of how/why rho, theta, and houghSpace are calculated?
rho = (-rhoLimit:1:rhoLimit);
theta = (0:thetaSampleFrequency:pi);
numThetas = numel(theta);
houghSpace = zeros(numel(rho),numThetas);
How would I de-Hough the Hough space to recreate the lines?
Calling the function using a 10x10 image of a diagonal line created using the identity (eye) function
theImage = eye(10)
thetaSampleFrequency = 0.1
[rho,theta,houghSpace] = houghTransform(theImage,thetaSampleFrequency)
The actual function
function [rho,theta,houghSpace] = houghTransform(theImage,thetaSampleFrequency)
%Define the hough space
theImage = flipud(theImage);
[width,height] = size(theImage);
rhoLimit = norm([width height]);
rho = (-rhoLimit:1:rhoLimit);
theta = (0:thetaSampleFrequency:pi);
numThetas = numel(theta);
houghSpace = zeros(numel(rho),numThetas);
%Find the "edge" pixels
[xIndicies,yIndicies] = find(theImage);
%Preallocate space for the accumulator array
numEdgePixels = numel(xIndicies);
accumulator = zeros(numEdgePixels,numThetas);
%Preallocate cosine and sine calculations to increase speed. In
%addition to precalculating sine and cosine we are also multiplying
%them by the proper pixel weights such that the rows will be indexed by
%the pixel number and the columns will be indexed by the thetas.
%Example: cosine(3,:) is 2*cosine(0 to pi)
% cosine(:,1) is (0 to width of image)*cosine(0)
cosine = (0:width-1)'*cos(theta); %Matrix Outerproduct
sine = (0:height-1)'*sin(theta); %Matrix Outerproduct
accumulator((1:numEdgePixels),:) = cosine(xIndicies,:) + sine(yIndicies,:);
%Scan over the thetas and bin the rhos
for i = (1:numThetas)
houghSpace(:,i) = hist(accumulator(:,i),rho);
end
pcolor(theta,rho,houghSpace);
shading flat;
title('Hough Transform');
xlabel('Theta (radians)');
ylabel('Rho (pixels)');
colormap('gray');
end
The Hough Transform is a "voting" approach where each image point casts a vote on the existence of a certain line (not a line segment) in an image. The voting is carried out in the parameter space for a line: the polar coordinate representation of normal vectors.
We discretize the parameter space and allow each image point to suggest parameters which would be compatible with a line through the point. Each of your questions can be addressed in terms of how the parameter space is treated in code. Wikipedia has a good article with worked examples that might clarify things (if you are having any conceptual troubles).
For your specific questions:
The image is flipped so the origin is the bottom left corner. As far as I can tell this step is not technically necessary. It does change the outcome somewhat due to discretization issues. The other implementations on Rosetta Code do not flip the image.
rhoLimit holds the maximum radius of an image point in polar coordinates (recall the norm of a vector is its magnitude).
rho and theta are discretizations of the polar coordinate plane according to a sampling rate. houghSpace creates a matrix with an element for each possible combination of the discrete rho/theta values.
The Hough Transform does not specify the lengths of putative lines; the peaks in the voting space just specify the polar coordinates of the normal vector of the line. You can "de-Hough" by selecting the peaks and drawing the corresponding lines, or perhaps by drawing every possible line and using the number of votes as a grayscale weight. It is not possible to re-create the original image from the Hough Transform, just the lines identified by the transform (and your thresholding scheme on the votes).
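A minimal de-Hough sketch under those caveats, using the outputs of houghTransform above (note that in this implementation x indexes rows and y indexes columns, both 0-based, and that sin(theta0) near zero would need a separate branch for near-vertical lines):
[~, idx] = max(houghSpace(:));                % pick the highest-voted bin
[iRho, iTheta] = ind2sub(size(houghSpace), idx);
rho0 = rho(iRho); theta0 = theta(iTheta);
% Points on the voted line satisfy x*cos(theta0) + y*sin(theta0) = rho0
x = 0:size(theImage,1)-1;
y = (rho0 - x*cos(theta0)) / sin(theta0);
figure, imagesc(flipud(theImage)), axis image, hold on  % flipud: the function flips internally
plot(y+1, x+1, 'r', 'LineWidth', 1.5)         % +1 for MATLAB's 1-based indexing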
Following the example from the question produces the following graph. The placement of grid lines and the datatips cursor can be a bit misleading (though the variable values in the tip are correct). Since this is an image of the parameter space and not the image space, the sampling rate we chose determines the number of bins in each variable. At this sampling rate, the image points are compatible with more than one possible line; in other words, our lines have subpixel resolution, in the sense that they cannot be drawn without overlap in a 10x10 image.
Once we have chosen a peak, such as that corresponding to the line with normal (rho,theta) = (6.858,0.9), we can draw that line in an image however we choose. Automated peak picking, that is, thresholding to find the highly up-voted lines, is its own problem - you could ask another question about the topic in DSP or about a particular algorithm here.
For example methods see the code and documentation of MATLAB's houghpeaks and houghlines functions.

Detecting center point of cross using Matlab

Hello, I have an image as shown above. Is it possible for me to detect the center point of the cross and output the result using Matlab? Thanks.
Here you go. I'm assuming that you have the image toolbox because if you don't then you probably shouldn't be trying to do this sort of thing. However, I believe all of these functions can be implemented with convolutions. I did this process on the image you presented above and obtained the point (139,286), where 139 is the row and 286 is the column.
1.Convert the image to a binary image:
bw = im2bw(img, 0.25);
where img is the original image. Depending on the image you might have to adjust the second parameter (which ranges from 0 to 1) so that you only get the cross. Don't worry about the cross not being fully connected because we'll remedy that in the next step.
2.Dilate the image to join the parts. I had to do this twice because I had to set the threshold so low on the binary image conversion (some parts of your image were pretty dark). Dilation essentially just adds pixels around existing white pixels (I'll also be inverting the binary image as I send it into bwmorph because the operations are made to act on white pixels which are the ones that have a value of 1).
bw2 = bwmorph(~bw, 'dilate', 2);
The last parameter says how many times to do the dilation operation.
3.Shrink the image to a point.
bw3 = bwmorph(bw2, 'shrink',Inf);
Again, the last parameter says how many times to perform the operation. In this case I put in Inf which shrinks until there is only one pixel that is white (in other words a 1).
4.Find the pixel that is still a 1.
[i,j] = find(bw3);
Here, i is the row and j is the column of the pixel in bw3 such that bw3(i,j) is equal to 1. All the other pixels should be 0 in bw3.
There might be other ways to do this with bwmorph, but I think that this way works pretty well. You might have to adjust it depending on the picture too. I can include images of each step if desired.
I just encountered the same kind of problem, and I found other solutions that I would like to share:
Assume image file name is pict1.jpg.
1.Read input image, crop relevant part and convert to grayscale:
origI = imread('pict1.jpg'); %Read input image
I = origI(32:304, 83:532, :); %Crop relevant part
I = im2double(rgb2gray(I)); %Convert to grayscale and to double (set pixel range [0, 1]).
2.Convert image to binary image in robust approach:
%Subtract from each pixel the median of its 21x21 neighbors
%Emphasize pixels that are deviated from surrounding neighbors
medD = abs(I - medfilt2(I, [21, 21], 'symmetric'));
%Set threshold to 5 sigma of medD
thresh = std2(medD(:))*5;
%Convert image to binary image using above threshold
BW = im2bw(medD, thresh);
BW Image:
3.Now I suggest two approaches for finding the center:
Find the centroid (the center of mass of the white cluster)
Find two lines using Hough transform, and find the intersection point
Both solutions return a sub-pixel result.
3.1.Find cross center using regionprops (find centroid):
%Find centroid of the cross (centroid of the cluster)
s = regionprops(BW, 'centroid');
centroids = cat(1, s.Centroid);
figure;imshow(BW);
hold on, plot(centroids(:,1), centroids(:,2), 'b*', 'MarkerSize', 15), hold off
%Display cross center in original image
figure;imshow(origI), hold on, plot(82+centroids(:,1), 31+centroids(:,2), 'b*', 'MarkerSize', 15), hold off
Centroid result (BW image):
Centroid result (original image):
3.2 Find cross center by intersection of two lines (using Hough transform):
%Create the Hough transform using the binary image.
[H,T,R] = hough(BW);
%Find peaks in the Hough transform of the image.
P = houghpeaks(H,2,'threshold',ceil(0.3*max(H(:))));
x = T(P(:,2)); y = R(P(:,1));
%Find lines and plot them.
lines = houghlines(BW,T,R,P,'FillGap',5,'MinLength',7);
figure, imshow(BW), hold on
L = cell(1, length(lines));
for k = 1:length(lines)
xy = [lines(k).point1; lines(k).point2];
plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');
% Plot beginnings and ends of lines
plot(xy(1,1),xy(1,2),'x','LineWidth',2,'Color','yellow');
plot(xy(2,1),xy(2,2),'x','LineWidth',2,'Color','red');
%http://robotics.stanford.edu/~birch/projective/node4.html
%Find lines in homogeneous coordinates (using cross product):
L{k} = cross([xy(1,1); xy(1,2); 1], [xy(2,1); xy(2,2); 1]);
end
%https://en.wikipedia.org/wiki/Line%E2%80%93line_intersection
%Lines intersection in homogeneous coordinates (using cross product):
p = cross(L{1}, L{2});
%Convert from homogeneous coordinate to euclidean coordinate (divide by last element).
p = p./p(end);
plot(p(1), p(2), 'x', 'LineWidth', 1, 'Color', 'white', 'MarkerSize', 15)
Hough transform result:
I think that there is a far simpler way of solving this. The lines which form the cross-hair are of equal length, so the shape is symmetric in all orientations. If we do a simple line scan, horizontally as well as vertically, to find the extremities of the lines forming the cross-hair, the median of these values will give the x and y co-ordinates of the center. Simple geometry.
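A minimal sketch of that idea, assuming BW is a binary image in which the cross pixels are 1:
[r, c] = find(BW);                  % all cross pixels
xCenter = (min(c) + max(c)) / 2;    % midpoint of the horizontal extremities
yCenter = (min(r) + max(r)) / 2;    % midpoint of the vertical extremities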
I just love these discussions of how to find something without defining first what that something is! But, if I had to guess, I’d suggest the center of mass of the original gray scale image.
What about this:
a) convert to binary just to make the algorithm faster.
b) perform a find on the resulting array.
c) choose the element which has either the lowest or highest row/column index (you would have four points to choose from then).
d) now keep searching neighbours; have a global criterion for the search: if the search terminates after only a few iterations, the point selected is false, so choose another extreme point.
e) going along the neighbouring points, you will end up at a point where you have three possible neighbours. That is your intersection.
I would start by using the grayscale image map. The darkest points are on the cross, so discriminating on those values is a starting point. After discrimination, set all the lighter points to white and leave the rest as they are. This maximizes the contrast between points on the cross and points in the image. Next up is to come up with a filter for determining the position with the most extreme average value. I would step through the entire image with an N×M window and take the mean value at its center point. Create a new array of these means; the intersection should stand out as the extreme (darkest) mean. I'm curious to see how someone else may try this!
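A sketch of how that might look (hypothetical; conv2 plays the role of the stepping N×M window, and I invert the image so the dark cross scores high):
G = im2double(rgb2gray(img));       % img: the original picture
G(G > 0.5) = 1;                     % discriminate: push lighter points to white
K = ones(15) / 15^2;                % 15x15 averaging window (choose N, M to taste)
score = conv2(1 - G, K, 'same');    % mean darkness around every pixel
[~, idx] = max(score(:));           % darkest neighbourhood = the intersection
[yCenter, xCenter] = ind2sub(size(score), idx);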