I want to obtain an n*m matrix with an approximated "height" at each discrete point. The input is a picture (see link below) of the contours from a map; each contour line represents a 5 m increase or decrease in height.
My thoughts:
I imported the picture as a logical PNG into a matrix called A, which means that every contour line in the matrix is a connected strip of 1s and everything else is 0.
My initial thought was to start in the upper left corner of the matrix, set that height to zero, declare a new matrix height, and fill in height(:,1) by adding 5 m each time we meet a '1' in the A matrix. Knowing the whole first column, I would then, for each row, start from the left and add 5 m each time we meet a '1' (roughly as sketched below).
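Something like this, just to show the intent (a rough sketch, not code that ever produced correct heights):
height = zeros(size(A));
height(:,1) = 5*cumsum(A(:,1)); % walk down the first column, +5 m per contour crossed
for r = 1:size(A,1)             % then walk right along every row
    height(r,2:end) = height(r,1) + 5*cumsum(A(r,2:end));
end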
I quickly realized, however, that this wouldn't work, since there is no way for the algorithm to know whether it should add or subtract height, i.e. whether we are going uphill or downhill.
If I could somehow approximate the gradient from the density of the contour lines, that would be great. It would always remain possible for an uphill to actually be a downhill and vice versa, but in that case I could manually decide which of the two is true.
Picture:
WORK IN PROGRESS
%% Read and binarize the image
I=imread('https://i.stack.imgur.com/pRkiY.jpg');
I=rgb2gray(I);
I=I>graythresh(I)*255;
%% Get skeleton, i.e. the lines!
sk=bwmorph(~I,'skel',Inf);
%% lines are too thin, dilate them (then invert so the flat areas between the lines become the foreground)
dilated=~imdilate(sk, strel('disk', 2, 4));
%% label the image!
test=bwlabel(dilated,8);
imshow(test,[]); colormap(plasma); % plasma is from the File Exchange; use colormap(parula) if you prefer a built-in.
Missing: label each adjacent area with a number one higher (or lower) than its neighbours (no idea how to do this yet).
Missing: interpolate the flat areas. This should be doable once the altitudes are known: set the pixels in the skeleton image to their altitudes and interpolate the rest using griddata, which will be slow but still doable; a rough sketch of this step follows below.
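An untested sketch of that interpolation step, assuming a matrix alt (my own placeholder name) of the same size as the image that holds the altitude at each skeleton pixel and NaN everywhere else:
[rows, cols] = find(~isnan(alt));                       % known points: the contour-line pixels
vals = alt(~isnan(alt));                                % their altitudes
[cq, rq] = meshgrid(1:size(alt,2), 1:size(alt,1));      % query every pixel of the map
height = griddata(cols, rows, vals, cq, rq, 'natural'); % slow, but doable
surf(height, 'EdgeColor', 'none');                      % quick look at the result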
Disclaimer: not full answer yet, feel free to edit or reuse the code in this answer to further it!
Related
Let S be the matrix holding the sign of a data matrix D, that is, S=sign(D). I want to find vertical changes between two consecutive cells in S (I am trying to detect edges after a difference of Gaussians). To avoid noise I want to perform this only if I find an edge in two consecutive pixels.
I have implemented the code below, which is mathematically sound (it answered my requirements) and applies a weight to the edges:
[D,S]=DogCalc(FileName); % the function is at the end of this file
[rowSize,columnSize]=size(D); % image dimensions used for the linear indexing below
edges=zeros(size(D));
for i=1:rowSize*columnSize-columnSize
    if(S(i)~=S(i+1)&&S(i+columnSize)~=S(i+1+columnSize)) % apply weighted edge for horizontal edge
        edges(i)=abs((D(i)*S(i+1)-D(i+1)*S(i))/(S(i+1)-S(i)));
    elseif(S(i)~=S(i+columnSize)&&S(i+1)~=S(i+1+columnSize)) % apply weighted edge for vertical edge
        edges(i)=abs((D(i)*S(i+columnSize)-D(i+columnSize)*S(i))/(S(i+columnSize)-S(i)));
    end
end
imshow(edges);
I tried using a filter to avoid the for loop, to no avail. The filter was supposed to produce a matrix with 1 at the interesting pixels that satisfy the conditions (replacing the if statements), but everything goes wrong:
Tester=[1 -1; 1 -1];
%V for vertical and H for horizontal
VEdges=abs(imfilter(S,0.25*Tester,'same'));
HEdges=abs(imfilter(S,0.25*Tester.','same'));
VEdges(VEdges<1)=0;
HEdges(HEdges<1)=0;
Is this doable/correct using a filter?
If so, how, and what am I doing wrong?
If I understand your problem correctly, this worked for me for finding changes in sign at two adjacent locations in the matrix. I assume that S contains only the values 0 and 1:
abs(imfilter(S, [1 -1; 1 -1]/4, 'replicate'))>0.5-eps
Explanation: your filter shape is correct, but I'm not sure exactly what you are looking for. An exact match (an edge in two adjacent locations) gives a value of 0.5 or -0.5, but only approximately, due to numerical round-off in the filtering, so you need to look for anything that is close to 0.5 (within epsilon).
'replicate' causes the pixels next to the sides of the image to be replicated, but you might want to experiment with the other boundary options of that function.
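A tiny demo on a toy matrix (my own example, not your data), again assuming S holds only 0s and 1s:
S = [0 0 1 1;
     0 0 1 1;
     0 0 0 0];
mask = abs(imfilter(S, [1 -1; 1 -1]/4, 'replicate')) > 0.5 - eps;
% mask is true only at (1,2): the value changes between two neighbouring
% columns and the change is present in two consecutive rows, which is the
% kind of two-pixel coincidence your loop checks for.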
I tried to implement the integral image in MATLAB as follows:
im = imread('image.jpg');
ii_im = cumsum(cumsum(double(im)')');
im is the original image and ii_im is the integral image.
The problem here is that the values in ii_im flow out of the 0 to 255 range.
When using imshow(ii_im), I always get a very bright image, which I am not sure is the correct result. Am I doing this correctly?
You're implementing the integral image calculation correctly, but I don't understand why you would want to visualize it, especially since the sums go beyond any normal integer range. This is expected: you are summing intensities over larger and larger rectangular neighbourhoods as you move towards the bottom right of the image, so large numbers there are inevitable. You will also obviously get a white image when trying to show it, because most of the values go beyond 255, which is visualized as white.
If I can add something, one small optimization I have is to get rid of the transposing and use cumsum to specify the dimension you want to work on. Specifically, you can do this:
ii_im = cumsum(cumsum(double(im), 1), 2);
It doesn't matter what direction you specify first (2 then 1, or 1 then 2). The summation of all pixels within each bounded area, as long as you specify all directions to operate on, should be the same.
Back to your question for display, if you really, really, really really... I mean really want to, you can normalize the contrast by doing:
imshow(ii_im, []);
However, what you should expect is a gradient image which starts to be dark from the top, then becomes brighter when you get to the bottom right of the image. Remember, each point in the integral image calculates the total summation of pixel intensities bounded by the top left corner of the image to this point, thus forming a rectangle of intensities you need to sum over. Therefore, as we move further down and to the right of the integral image, the total summation should increase.
With the cameraman.tif image, this is the original image, as well as its integral image visualized using the above command:
Either way, there is absolutely no reason why you would want to visualize it. You would use this directly with whatever application requires it (adaptive thresholding, Viola-Jones detector, etc.)
Another option could be applying a log operation for each value in the integral image. Something like:
imshow(log(1 + ii_im), []);
However, this will make most of the pixels have the same contrast and this is probably not useful. This is what I get with cameraman.tif:
The moral of this story is that you need some sort of contrast normalization so that you can fit all of the values in your integral image within the confines of the data type that is used to display the image on the screen using imshow.
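As an aside, here is a minimal sketch (my own illustration, not something you asked about) of how the integral image is normally consumed, namely reading the sum over any rectangular window in constant time:
im = imread('cameraman.tif');
ii_im = cumsum(cumsum(double(im), 1), 2);
ii_p = padarray(ii_im, [1 1], 0, 'pre');  % pad a zero row and column to avoid border special cases
r1 = 50; r2 = 80; c1 = 60; c2 = 120;      % example window (arbitrary values)
winSum = ii_p(r2+1,c2+1) - ii_p(r1,c2+1) - ii_p(r2+1,c1) + ii_p(r1,c1);
isequal(winSum, sum(sum(double(im(r1:r2,c1:c2)))))  % sanity check, returns logical 1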
I want to overlay vectors on a contourf graph, in order to show the direction and magnitude of the wind.
For this I am using contourf(A) and quiver(x,y), where A is a 151x401 matrix and x and y are matrices of the same size (151x401) holding the magnitude and direction respectively.
When I use large maps I get the positions of the arrows, but they are too densely placed and that makes the graph look bad.
The final graph has the arrows as desired, but there are too many of them and they are too close together. I would like them to be sparser, with more of a gap between them, so that I can increase their length and still keep the features of the contour map visible.
Can anyone help? Any pointers would be appreciated.
I know it's been a long time since the question was asked, but I think I found a way to make it work.
I attach the code in case someone encounters the same issue.
[nx,ny]=size(A); % A is the matrix used as the base of the contour plot
xx=1:1:ny; % x-axis coordinates (one per column of A)
yy=1:1:nx; % y-axis coordinates (one per row of A)
contourf(xx,yy,A)
hold on
delta=8; % delta is the spacing between arrows
quiver(xx(1:delta:end),yy(1:delta:end),B(1:delta:end,1:delta:end),C(1:delta:end,1:delta:end),1) % the 1 at the end is the arrow scale factor
set(gca,'fontsize',12); hold off
A, B, and C are the corresponding matrices one wants to use (A for the filled contours, B and C for the arrow components).
My goal is to make a ridge (mountain)-like shape from a given line. For that purpose, I applied a Gaussian filter to the line. In the example below, one line is vertical and one has some slope. (Here, the background values are 0 and the line pixels have value 1.)
Given line:
Ridge shape:
When I applied the Gaussian filter, the peak heights came out different. I guess this results from rasterization: the image matrix itself is a discrete integer grid, the Gaussian filter is not exactly circular (it is an s-by-s matrix), and the two lines also suffer from rasterization.
How can I get two nice-looking ridges (mountains) with the same peak height?
Is there a more appropriate way to apply the filter?
Should I make a larger canvas (image matrix) and then shrink it back down by interpolation? Is that a good approach?
Moreover, I would appreciate a suggestion for how to make ridges with a specific peak height. With a Gaussian filter, all we can decide is the size and sigma of the filter, and the peak height varies with those parameters.
For information, image matrix size is 250x250 here.
You can give the distance transform a try. Your image is binary (it has only two types of values, 0 and 1), so you can generate a similar effect with a distance transform.
%Create an image similar to yours
img=false(250,250);
img(sub2ind(size(img),180:220,linspace(20,100,41)))=1;
img(1:200,150)=1;
%Distance transform
distImg=bwdist(img);
distImg(distImg>5)=0; %5 is set manually to achieve similar results to yours
distImg=5-distImg; %Get high values for the pixels inside the tube as shown
%in your figure
distImg(distImg==5)=0; %Making background pixels zero
%Plotting
surf(1:size(img,2),1:size(img,1),double(distImg));
To get images with a certain peak height, you can change the threshold of 5 to a different value. If you set it to 10, you get peaks whose height equals the next largest value present in the distance transform matrix. In the cases of 5 and 10, I found the actual peak heights to be around 3.5 and 8, respectively.
Again, if you want the peaks to be exactly 5 and 10, you can multiply the distance transform matrix by a normalization factor as follows.
normalizationFactor=(newValue-minValue)/(maxValue-minValue) %self-explanatory
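For example, applying that factor (with hypothetical target values) would look like:
newValue = 5;               % desired exact peak height
minValue = 0;               % background level
maxValue = max(distImg(:)); % current tallest peak
distImg = distImg*(newValue - minValue)/(maxValue - minValue);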
The only disadvantage I see is that I don't get as smooth a surface as you have. I tried with a Gaussian filter too, but did not get a smooth result.
My result:
If I explain why, this might make more sense.
I have a logical matrix (103x3488), the output of a photo of a measuring staff that has been run through edge detection (1 = edge, 0 = no edge). The aim is to calculate the distance in pixels between the graduations on the staff. The problem is that the staff sags in the middle.
Idea: the user inputs the coordinates (using ginput or something) of each end of the staff and of the midpoint of the sag; then, if the edges between these points can be extracted into arrays, I can easily find the locations of the edges.
Is there any way of extracting an array from a matrix in this manner?
I'm also open to other ideas; I've only been using MATLAB for a month, so most functions are unknown to me.
edit:
Link to image
It shows a small area of the matrix, so in this example 1 and 2 are the points I want to sample between, and I'd want to return the points that occur along the red line.
Cheers
Try this
dat=imread('83zlP.png');
figure(1)
pcolor(double(dat))
shading flat
axis equal
% get the line ends
gi=floor(ginput(2))
x=gi(:,1);
y=gi(:,2);
xl=min(x):max(x); % line pixel x coords
yl=floor(interp1(x,y,xl)); % line pixel y coords
pdat=nan(length(xl),1);
for i=1:length(xl)
pdat(i)=dat(yl(i),xl(i));
end
figure(2)
plot(1:length(xl),pdat)
peaks=find(pdat>40); % threshold for peak detection
bigpeak=peaks(diff(peaks)>10); % threshold for selecting only edge of peak
hold all
plot(xl(bigpeak),pdat(bigpeak),'x')
meanspacex=mean(diff(xl(bigpeak)));
meanspacey=mean(diff(yl(bigpeak)));
meanspace=sqrt(meanspacex^2+meanspacey^2);
The vector pdat gives the pixel values along the line you have selected, and meanspace is the edge spacing in pixel units. The thresholds might need fiddling with, depending on the image.
After seeing the image, I'm not sure where the "sagging" you're referring to is taking place. The image is rotated, but you can fix that using imrotate. Finding the angle it needs to be rotated by should be easy enough: just take the coordinates of A and B and use the inverse tangent to find the offset from 0 degrees.
Regarding the points, once the image is aligned straight, all you need to do is pick a row of the image matrix (it would be a 1 x 3488 vector) and use find to get the non-zero indices. As the rotation may have interpolated the pixels somewhat, you may get more than one index per "line", but they'll be identifiable as consecutive numbers, and you can just average them to get an approximate value. A rough sketch of this approach follows below.
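A minimal sketch of that approach, assuming M is the logical edge matrix and A, B are the [x y] endpoint coordinates picked with ginput (all hypothetical names):
theta = atan2d(B(2)-A(2), B(1)-A(1)); % staff angle relative to horizontal (flip the sign if it rotates the wrong way)
M2 = imrotate(M, theta);              % rotate so the staff lies horizontally
row = round(size(M2,1)/2);            % pick a row that crosses the staff
idx = find(M2(row,:));                % column indices of edge pixels in that row
d = diff(idx);
spacing = mean(d(d > 1));             % approximate pixel spacing between graduations (skipping runs of adjacent pixels)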