Heatmap in Matlab: heatmap_overlay() offset from picture

My goal is to create a heatmap in Matlab using an existing floor plan, and manually collected RF RSSI values.
I have marked numerical values (~30 to ~80) on a floor plan using pen and paper after recording signal strength in various areas of a building. I am looking to plot this in a heatmap overlay to the original floor plan.
What I have done so far is the following:
After researching on Google, YouTube, and the man pages, I have accomplished the following:
Importing an image into Matlab by asking the user to select a photo
Reading it in with the imread() function
Setting up an array the size of the image, all zeroed out
This is currently done manually, but it is not a concern right now
Manually placing heatMap(x,y) = val; at the locations marked on the original plan
I found the x,y pixel coordinates from Photoshop's Info palette
Applying a Gaussian low-pass filter to the points set in heatMap
The filter argument includes the image dimensions in pixels
At this point, I try to display the filtered heatMap points over the original floor plan. I am using the same dimensions for the filter and for the array made from the image size, in pixels. However, the heatmap points do not overlay where I specified them by hard coding.
The code is as follows:
%Import an image***********************************************
%Ask a user to import the image they want
%**************************************************************
%The commented out line will not show the file type when prompted to select
%an image
%[fn,pn] = uigetfile({'*.TIFF,*.jpg,*.JPG,*.jpeg,*.bmp','Image files'}, 'Select an image');
%This line will select any file type, want to restrict in future
[fn,pn] = uigetfile({'*.*','Image files'}, 'Select an image');
importedImage = imread(fullfile(pn,fn));
%Create size for heat map**************************************
%Setting the size for the map, see comments below
%**************************************************************
%What if I wanted an arbitrary dimension
%Or better yet, get the dimensions from the imported file
heatMap = zeros(1512,1080);
%Manually placing the heatmap values along a grid
%Want to set zones for this, maybe plot out in excel and use the cells to
%define the image size?
heatMap(328,84) = .38;
heatMap(385,132) = .42;
heatMap(418,86) = .40;
heatMap(340,405) = .60;
heatMap(515,263) = .35;
heatMap(627,480) = .40;
heatMap(800,673) = .28;
heatMap(892,598) = .38;
heatMap(1020,540) = .33;
heatMap(1145,684) = .38;
heatMap(912,275) = .44;
heatMap(798,185) = .54;
%Generate the Map**********************************************
%Making the density and heat map
%**************************************************************
gaussiankernel = fspecial('gaussian', [1512 1080], 60);
density = imfilter(heatMap, gaussiankernel, 'replicate');
%imshow(density, []);
oMask = heatmap_overlay(importedImage, density, 'summer');
set(figure(1), 'Position', [0 0 1512 1080]);
imshow(oMask,[]);
colormap(summer);
colorbar;
Any idea why the overlaid filter is offset and not where specified?
This can be reproduced with any 1512 x 1080 image.
Please let me know if you want the original image used.

The array dimensions were off.
What I did was the following:
Output the density map only
Verified the dimension sizes in the Matlab Workspace
I noticed that the density, heatMap, importedImage and oMask arrays were not all n x m in size: heatMap and density were n x m, while importedImage and oMask were m x n.
I swapped the dimensions so that everything is n x m, and now the image overlays just fine.
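To avoid the mismatch entirely, the heat map and the Gaussian kernel can be sized from the imported image instead of being hard coded. A minimal sketch, assuming importedImage has already been read in as above:
%Sketch only: derive the heat map size from the imported image so the
%rows and columns always agree with the floor plan
[nRows, nCols, ~] = size(importedImage); %rows first, then columns
heatMap = zeros(nRows, nCols);
%Place the measured points as heatMap(row, col) = value, for example:
heatMap(328,84) = .38;
%Build the kernel and density map with the same row/column order
gaussiankernel = fspecial('gaussian', [nRows nCols], 60);
density = imfilter(heatMap, gaussiankernel, 'replicate');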

Related

Contrast Stretching for color images with MATLAB

I'm working in Matlab and I wanted to apply contrast stretching to a greyscale image and also to an RGB image.
For the greyscale image I've tried this, and it worked:
clear all;
clc;
itemp = imread('cameraman.tif'); %read the image
i = itemp(:,:,1);
rtemp = min(i); % find the min. value of pixels in all the columns (row vector)
rmin = min(rtemp); % find the min. value of pixel in the image
rtemp = max(i); % find the max. value of pixels in all the columns (row vector)
rmax = max(rtemp); % find the max. value of pixel in the image
m = 255/(rmax - rmin); % find the slope of line joining point (0,255) to (rmin,rmax)
c = 255 - m*rmax; % find the intercept of the straight line with the axis
i_new = m*i + c; % transform the image according to new slope
figure,imshow(i); % display original image
figure,imshow(i_new); % display transformed image
This is for a greyscale image.
The problem is that I don't know how to do it for an RGB image.
Any idea how to implement that?
Thank you :)
Could the function stretchlim (reference) be useful for your purpose?
Find limits to contrast stretch image.
Low_High = stretchlim(RGB,Tol) returns Low_High, a two-element vector
of pixel values that specify lower and upper limits that can be used
for contrast stretching truecolor image RGB.
img = imread('myimg.png');
lohi = stretchlim(img,[0.2 0.8]);
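Those limits are typically fed into imadjust. A small sketch of that pairing (peppers.png is just a stand-in image and the tolerance is arbitrary):
RGB = imread('peppers.png'); % any truecolor image
lohi = stretchlim(RGB, [0.01 0.99]); % per-channel low/high limits
RGB_stretched = imadjust(RGB, lohi, []);
figure, imshow(RGB); % original
figure, imshow(RGB_stretched); % contrast stretched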
If you write
rmin = min(i(:));
then it computes the minimum over all values in i. This also works for RGB images, which are simply 3D matrices with 3 values along the 3rd dimension.
The rest of your code then applies directly to such images.
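Put together, a minimal sketch of the same min/max stretch applied to an RGB image (working in double to avoid integer saturation, and stretching all three channels with the same global limits):
rgb = imread('peppers.png'); % example truecolor image
i = double(rgb);
rmin = min(i(:)); % global minimum over all channels
rmax = max(i(:)); % global maximum over all channels
m = 255/(rmax - rmin);
c = 255 - m*rmax; % same intercept as in the greyscale code
i_new = uint8(m*i + c);
figure, imshow(rgb);
figure, imshow(i_new);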

Matlab Template Matching Using FFT

I am struggling with template matching in the Fourier domain in Matlab. Here are my images (the artist is RamalamaCreatures on DeviantArt):
My aim is to place a bounding box around the ear of the possum, like this example (where I performed template matching using normxcorr2):
Here is the Matlab code I am using:
clear all; close all;
template = rgb2gray(imread('possum_ear.jpg'));
background = rgb2gray(imread('possum.jpg'));
%% calculate padding
bx = size(background, 2);
by = size(background, 1);
tx = size(template, 2); % used for bbox placement
ty = size(template, 1);
%% fft
c = real(ifft2(fft2(background) .* fft2(template, by, bx)));
%% find peak correlation
[max_c, imax] = max(abs(c(:)));
[ypeak, xpeak] = find(c == max(c(:)));
figure; surf(c), shading flat; % plot correlation
%% display best match
hFig = figure;
hAx = axes;
position = [xpeak(1)-tx, ypeak(1)-ty, tx, ty];
imshow(background, 'Parent', hAx);
imrect(hAx, position);
The code is not functioning as intended - it is not identifying the correct region. This is the failed result - the wrong area is boxed:
This is the surface plot of the correlations for the failed match:
Hope you can help! Thanks.
What you're doing in your code is actually not correlation at all. You are using the template and performing convolution with the input image. If you recall from the Fourier Transform, the multiplication of the spectra of two signals is equivalent to the convolution of the two signals in time/spatial domain.
Basically, what you are doing is using the template as a kernel and filtering the image with it. You then find the maximum response of this output, and that is deemed to be where the template is. The region that gets boxed makes sense because it is entirely white, and using the template as a kernel over a region that is entirely white will give you a very large response, which is why it was most likely identified as the maximum. Specifically, that region has a lot of high values (~255 or so), and performing convolution between the template patch and this region naturally gives a very large output because the operation is a weighted sum. Conversely, if you used the template over a dark area of the image, the output would be small - which is misleading, because the template itself also consists of dark pixels.
However, you can certainly use the Fourier Transform to locate where the template is, but I would recommend you use Phase Correlation instead. Basically, instead of computing the multiplication of the two spectra, you compute the cross power spectrum instead. The cross power spectrum R between two signals in the frequency domain is defined as (source: Wikipedia):
R = (Ga o Gb*) / |Ga o Gb*|
Ga and Gb are the original image and the template in the frequency domain, and the * is the conjugate. The o is what is known as the Hadamard product or element-wise product. I'd also like to point out that the division of the numerator and denominator of this fraction is also element-wise. Using the cross power spectrum, if you find the (x,y) location here that produces the absolute maximum response, this is where the template should be located in the background image.
As such, you simply need to change the line of code that computes the "correlation" so that it computes the cross power spectrum instead. However, I'd like to point out something very important. When you perform normxcorr2, the correlation starts right at the top-left corner of the image. The template matching starts at this location and it gets compared with a window that is the size of the template, where the top-left corner is the origin. When finding the location of the template match, the location is with respect to the top-left corner of the matched window. Once you compute normxcorr2, you traditionally add half of the rows and half of the columns of the template to the peak location to find the centre location.
Because we are more or less doing the same operations for template matching (sliding windows, correlation, etc.) with the FFT / frequency domain, when you finish finding the peak in this correlation array, you must take this into account as well. However, your call to imrect to draw a rectangle around where the template matches takes in the top-left corner of a bounding box anyway, so there is no need to do the offset here. As such, we're going to modify that code slightly, but keep the offset logic in mind if you later want to find the centre location of the match.
I've modified your code as well to read in the images directly from StackOverflow so that it's reproducible:
clear all; close all;
template = rgb2gray(imread('http://i.stack.imgur.com/6bTzT.jpg'));
background = rgb2gray(imread('http://i.stack.imgur.com/FXEy7.jpg'));
%% calculate padding
bx = size(background, 2);
by = size(background, 1);
tx = size(template, 2); % used for bbox placement
ty = size(template, 1);
%% fft
%c = real(ifft2(fft2(background) .* fft2(template, by, bx)));
%// Change - Compute the cross power spectrum
Ga = fft2(background);
Gb = fft2(template, by, bx);
c = real(ifft2((Ga.*conj(Gb))./abs(Ga.*conj(Gb))));
%% find peak correlation
[max_c, imax] = max(abs(c(:)));
[ypeak, xpeak] = find(c == max(c(:)));
figure; surf(c), shading flat; % plot correlation
%% display best match
hFig = figure;
hAx = axes;
%// New - no need to offset the coordinates anymore
%// xpeak and ypeak are already the top left corner of the matched window
position = [xpeak(1), ypeak(1), tx, ty];
imshow(background, 'Parent', hAx);
imrect(hAx, position);
With that, I get the following image:
I also get the following when showing a surface plot of the cross power spectrum:
There is a clear defined peak where the rest of the output has a very small response. That's actually a property of Phase Correlation and so obviously, the location of the maximum value is clearly defined and this is where the template is located.
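If you do want the centre of the matched window rather than its top-left corner, a rough sketch of the offset discussed earlier, reusing xpeak, ypeak, tx and ty from the code above:
%// Sketch: centre of the matched window is the top-left corner plus half
%// the template size in each direction
xcentre = xpeak(1) + tx/2;
ycentre = ypeak(1) + ty/2;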
Hope this helps!
Just ended up implementing the same thing in Python, with similar ideas to #rayryeng's answer, using the scipy.fftpack.fftn() / ifftn() functions, with the following result on the same target and template images:
import numpy as np
import scipy.fftpack as fp
from skimage.io import imread
from skimage.color import rgb2gray, gray2rgb
import matplotlib.pylab as plt
from skimage.draw import rectangle_perimeter
im = 255*rgb2gray(imread('http://i.stack.imgur.com/FXEy7.jpg')) # target
im_tm = 255*rgb2gray(imread('http://i.stack.imgur.com/6bTzT.jpg')) # template
# FFT
F = fp.fftn(im)
F_tm = fp.fftn(im_tm, shape=im.shape)
# compute the best match location
F_cc = F * np.conj(F_tm)
c = (fp.ifftn(F_cc/np.abs(F_cc))).real
i, j = np.unravel_index(c.argmax(), c.shape)
print(i, j)
# 214 317
# draw rectangle around the best match location
im2 = (gray2rgb(im)).astype(np.uint8)
rr, cc = rectangle_perimeter((i,j), end=(i + im_tm.shape[0], j + im_tm.shape[1]), shape=im.shape)
for x in range(-2,2):
    for y in range(-2,2):
        im2[rr + x, cc + y] = (255,0,0)
# show the output image
plt.figure(figsize=(10,10))
plt.imshow(im2)
plt.axis('off')
plt.show()
Also, the below animation shows the result obtained while locating a bird's template image inside a set of (target) frames extracted from a video with a flock of birds.
One thing to note: the output depends heavily on how similar the size and shape of the object to be matched are to those of the template; if they are quite different from the template image, the template may not be matched at all.

Matlab - Trying to use vectors with grid coordinates and value at each point for a color plot

I'm trying to make a color plot in Matlab using output data from another program. What I have are 3 vectors indicating the x-position, the y-position (both in milliarcseconds, since this represents an image of the surroundings of a black hole), and the value (which will be assigned a color) of every point in the desired image. I apparently can't use pcolor, because the values that indicate the color of each "pixel" are not in a matrix, and I don't know a way other than meshgrid to create a matrix out of the vectors, which didn't work due to the size of the vectors.
Thanks in advance for any help, I may not be able to reply immediately.
If we make no assumptions about the arrangement of the x,y coordinates (i.e. non-monotonic) and the sparsity of the data samples, the best way to get a nice image out of your vectors is to use TriScatteredInterp. Here is an example:
% samplesToGrid.m
function [vi,xi,yi] = samplesToGrid(x,y,v)
F = TriScatteredInterp(x,y,v);
[yi,xi] = ndgrid(min(y(:)):max(y(:)), min(x(:)):max(x(:)));
vi = F(xi,yi);
Here's an example of taking 500 "pixel" samples on a 100x100 grid and building a full image:
% exampleSparsePeakSamples.m
x = randi(100,[500 1]); y = randi(100,[500 1]);
v = exp(-(x-50).^2/50) .* exp(-(y-50).^2/50) + 1e-2*randn(size(x));
vi = samplesToGrid(x,y,v);
imagesc(vi); axis image
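In newer Matlab releases, TriScatteredInterp has been superseded by scatteredInterpolant; a rough equivalent of the helper above would be:
% Sketch: same gridding with scatteredInterpolant (newer releases)
F = scatteredInterpolant(x, y, v);
[yi, xi] = ndgrid(min(y(:)):max(y(:)), min(x(:)):max(x(:)));
vi = F(xi, yi);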
Gordon's answer will work if the coordinates are integer-valued, but the image will be sparse.
You can assign your values to a matrix based on the x and y coordinates and then use imagesc (or a similar function).
% Assuming the X and Y coords start at 1
max_x = max(Xcoords);
max_y = max(Ycoords);
data = nan(max_y, max_x); % Note the order of y and x
indexes = sub2ind(size(data), Ycoords, Xcoords);
data(indexes) = Values;
imagesc(data); % note that NaN values will be colored with the minimum colormap value
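For illustration, an end-to-end sketch with made-up stand-ins for the three vectors (real coordinates would first need rounding or rescaling to positive integer pixel indices):
% Hypothetical example data standing in for the asker's three vectors
Xcoords = randi(300, [2000 1]);
Ycoords = randi(200, [2000 1]);
Values = rand(2000, 1);
data = nan(max(Ycoords), max(Xcoords));
indexes = sub2ind(size(data), Ycoords, Xcoords);
data(indexes) = Values;
imagesc(data); axis image; colorbar;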

remove the holes in an image by average values of surrounding pixels

Can anyone please help me fill these black holes with values taken from the neighboring non-zero pixels?
Thanks
One nice way to do this is to solve the linear heat equation. What you do is fix the "temperature" (intensity) of the pixels in the good area and let the heat flow into the bad pixels. A passable, but somewhat slow, way to do this is to repeatedly average the image and then set the good pixels back to their original values with newImage(~badPixels) = myData(~badPixels);.
I do the following steps:
Find the bad pixels where the image is zero, then dilate to be sure we get everything
Apply a big blur to get us started faster
Average the image, then set the good pixels back to their original values
Repeat step 3
Display
You could repeat the averaging until the image stops changing, and you could use a smaller averaging kernel for higher precision, but this already gives good results:
The code is as follows:
numIterations = 30;
avgPrecisionSize = 16; % smaller is better, but takes longer
% Read in the image grayscale:
originalImage = double(rgb2gray(imread('c:\temp\testimage.jpg')));
% get the bad pixels where = 0 and dilate to make sure they get everything:
badPixels = (originalImage == 0);
badPixels = imdilate(badPixels, ones(12));
%# Create a big gaussian and an averaging kernel to use:
G = fspecial('gaussian',[1 1]*100,50);
H = fspecial('average', [1,1]*avgPrecisionSize);
%# Use a big filter to get started:
newImage = imfilter(originalImage,G,'same');
newImage(~badPixels) = originalImage(~badPixels);
% Now iteratively average, resetting the good pixels after each pass:
for count = 1:numIterations
    newImage = imfilter(newImage, H, 'same');
    newImage(~badPixels) = originalImage(~badPixels);
end
%% Plot the results
figure(123);
clf;
% Display the mask:
subplot(1,2,1);
imagesc(badPixels);
axis image
title('Region Of the Bad Pixels');
% Display the result:
subplot(1,2,2);
imagesc(newImage);
axis image
set(gca,'clim', [0 255])
title('Infilled Image');
colormap gray
But you can get a similar solution using roifill from the image processing toolbox like so:
newImage2 = roifill(originalImage, badPixels);
figure(44);
clf;
imagesc(newImage2);
colormap gray
Notice I'm using the same badPixels defined before.
There is a file on the Matlab File Exchange, inpaint_nans, that does exactly what you want. The author explains why and in which cases it is better than Delaunay triangulation.
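A hedged usage sketch, assuming the holes are exact zeros and inpaint_nans has been downloaded onto the path (the file name here is hypothetical):
img = double(imread('myimage.png')); % hypothetical greyscale image with zero-valued holes
img(img == 0) = NaN; % mark the holes as NaN
filled = inpaint_nans(img); % default method; see the submission's help for others
imagesc(filled); axis image; colormap gray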
To fill one black area, do the following:
1) Identify a sub-region containing the black area, the smaller the better. The best case is just the boundary points of the black hole.
2) Create a Delaunay triangulation of the non-black points inside the sub-region by:
tri = DelaunayTri(x,y); %# x, y (column vectors) are coordinates of the non-black points.
3) Determine which Delaunay triangle each black point falls in by:
[t, bc] = pointLocation(tri, [x_b, y_b]); %# x_b, y_b (column vectors) are coordinates of the black points
tri = tri(t,:);
4) Interpolate:
v_b = sum(v(tri).*bc,2); %# v contains the pixel values at the non-black points, and v_b are the interpolated values at the black points.
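Putting steps 2) through 4) together, a rough self-contained sketch, assuming the holes are exact zeros in a greyscale image and that every hole point lies inside the convex hull of the non-black points (the file name is hypothetical):
v_img = double(imread('holes.png')); %# hypothetical greyscale image with zero-valued holes
[yb, xb] = find(v_img == 0); %# coordinates of the black (hole) points
[y, x] = find(v_img ~= 0); %# coordinates of the non-black points
v = v_img(sub2ind(size(v_img), y, x)); %# pixel values at the non-black points
tri = DelaunayTri(x, y); %# step 2
[t, bc] = pointLocation(tri, [xb, yb]); %# step 3
verts = tri(t, :); %# vertex indices of the enclosing triangles
v_img(sub2ind(size(v_img), yb, xb)) = sum(v(verts).*bc, 2); %# step 4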

matlab: how to plot multidimensional array

Let's say I have 9 MxN black and white images that are in some way related to one another (i.e. time lapse of some event). What is a way that I can display all of these images on one surface plot?
Assume the MxN matrices only contain 0's and 1's. Assume the images simply contain white lines on a black background (i.e. pixel value == 1 if that pixel is part of a line, 0 otherwise). Assume images are ordered in such a way as to suggest movement progression of line(s) in subsequent images. I want to be able to see a "side-view" (or volumetric representation) of these images which will show the surface that a particular line "carves out" in its movement across the images.
Coding is done in MATLAB. I have looked at plot (but it only does 2D plots) and surf, which does 3D plots but doesn't work for my MxNx9 matrix of images. I have also tried to experiment with contourslice, but not sure what parameters to pass it.
Thanks!
Mariya
Are these images black and white with simple features on a "blank" field, or greyscale, with more dense information?
I can see a couple of approaches.
You can use movie() to display a sequence of images as an animation.
For a static view of sparse, simple data, you could plot each image as a separate layer in a single figure, giving each layer a different color for the foreground, and using AlphaData to make the background transparent so all the steps in the sequence show through. The gradient of colors corresponds to position in the image sequence. Here's an example.
function plotImageSequence
% Made-up test data
nLayers = 9;
x = zeros(100,100,nLayers);
for i = 1:nLayers
    x(20+(3*i),:,i) = 1;
end
% Plot each image as a "layer", indicated by color
figure;
hold on;
for i = 1:nLayers
    layerData = x(:,:,i);
    alphaMask = layerData == 1;
    layerData(logical(layerData)) = i; % So each layer gets its own color
    image('CData',layerData,...
        'AlphaData',alphaMask,...
        'CDataMapping','scaled');
end
hold off
Directly showing the path of movement a "line" carves out is hard with raster data, because Matlab won't know which "moved" pixels in two subsequent images are associated with each other. Don't suppose you have underlying vector data for the geometric features in the images? Plot3() might allow you to show their movement, with time as the z axis. Or you could use the regular plot() and some manual fiddling to plot the paths of all the control points or vertexes in the geometric features.
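For what it's worth, a minimal sketch of that plot3() idea with made-up vector data (a single line whose y-position shifts each frame, with the z axis as the frame index):
% Sketch only: made-up vector data for one moving line across 9 frames
nFrames = 9;
t = linspace(0, 100, 50); % points along the line (x direction)
figure; hold on; grid on;
for k = 1:nFrames
    y = (20 + 3*k) * ones(size(t)); % the line moves down a bit each frame
    plot3(t, y, k*ones(size(t))); % z = frame index
end
xlabel('x'); ylabel('y'); zlabel('frame');
view(3);
hold off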
EDIT: Here's a variation that uses patch() to draw each pixel as a little polygon floating in space at the Z level of its index in the image sequence. I think this will look more like the "surface" style plots you are asking for. You could fiddle with the FaceAlpha property to make dense plots more legible.
function plotImageSequencePatch
% Made-up test data
nLayers = 6;
sz = [50 50];
img = zeros(sz(1),sz(2),nLayers);
for i = 1:nLayers
    img(20+(3*i),:,i) = 1;
end
% Plot each image as a "layer", indicated by color
% With each "pixel" as a separate patch
figure;
set(gca, 'XLim', [0 sz(1)]);
set(gca, 'YLim', [0 sz(2)]);
hold on;
for i = 1:nLayers
    layerData = img(:,:,i);
    [x,y] = find(layerData); % X,Y of all pixels
    % Reshape into patch outlines
    x = x';
    y = y';
    patch_x = [x; x+1; x+1; x];
    patch_y = [y; y; y+1; y+1];
    patch_z = repmat(i, size(patch_x));
    patch(patch_x, patch_y, patch_z, i);
end
hold off
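As a small follow-up on the FaceAlpha suggestion above, one simple option (a sketch, not the only way) is to make every patch semi-transparent after the loop:
% Make overlapping layers semi-transparent so dense plots stay legible
set(findobj(gca, 'Type', 'patch'), 'FaceAlpha', 0.5);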