Is it possible to create a custom heatmap in MATLAB? - matlab

I'd like to know if it is possible to create a custom heatmap in MATLAB. What I mean is: can I load an image into MATLAB (the image in this case would be the state I reside in) and produce a heat color over a particular region of the map? If it is possible, please send me a link from which I can teach myself how to write a script for this.

Totally possible! Instructions here: https://www.mathworks.com/help/thingspeak/create-heatmap-overlay-image.html
The first step is reading in an image file and showing it on the "graph" that you are forming. This step is detailed by the link as follows:
% Read in the background image and note its dimensions
picture = imread('https://www.mathworks.com/help/examples/optim/win64/officeassign_01.png');
[height,width,depth] = size(picture);
% Show it and hold the axes so the overlay can be drawn on top
imshow(picture);
hold on
The next step, presuming you already have a state map, is a little trickier: you need to know the x,y positions you would like to map heat onto, and the intensity at each of those points. You also need to decide on the form of the overlay: either blocks (like a grid) or a smooth map. I assume you'll want some smoothing, so the example uses linear interpolation between points. Once you've decided on the x, y, and heat-intensity values, you can do the following:
OverlayImage = [];
% X, Y and strengthPercent are the point coordinates and intensities you supply yourself
F = scatteredInterpolant(Y, X, strengthPercent, 'linear');
for i = 1:height-1
    for j = 1:width-1
        OverlayImage(i,j) = F(i,j);
    end
end
% Transparency layer: 60% opaque wherever the interpolant produced a value
alpha = (~isnan(OverlayImage))*0.6;
To deconstruct what they were doing here a little: they first made an empty overlay, then built a map that "blends" between points using linear interpolation. Finally, they made an alpha (transparency) layer that is 60% opaque wherever the overlay has data.
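Note that X, Y, and strengthPercent are not defined in the snippet above; they are the coordinates and intensity values you choose for your own map. A purely illustrative sketch of what they might look like (hypothetical values, not from the linked example):
% Hypothetical sample points -- replace with the locations and intensities for your state map
X = [100; 250; 400];                % column (horizontal) pixel positions
Y = [120; 300; 180];                % row (vertical) pixel positions
strengthPercent = [0.2; 0.9; 0.5];  % heat intensity at each point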
The final and most important step -- more central to what you are trying to do -- is place the "image" of that heat matrix over the actual image:
OverlayImage = imshow( OverlayImage );
% Set the color limits to be relative to the data values
caxis auto
colormap( OverlayImage.Parent, jet );
colorbar( OverlayImage.Parent );
% Set the AlphaData to be the transparency matrix created earlier
set( OverlayImage, 'AlphaData', alpha );
The final line sets the transparency of the layer (to alpha), allowing the image underneath the heat map to show through. Really, in combination with the first code block, these are the two steps that should set you on your way. Let me know if you need any help!

Related

Active contour snake

I am trying to use MATLAB's activecontour code to segment a region. The example uses a grayscale image, while I am using a binary image. It turns out OK when I run the code on the binary image by itself. However, when I combine the code, nothing happens: it skips the iteration part and generates the same binary image. For your reference, below is my code.
%% snake
figure
x = imread('1.jpg');
threshold = 160;
Igray = rgb2gray(x);
I = Igray > threshold;   % binarize
imshow(I);
I = imresize(I, .5);
imshow(I)
title('Original Image')
mask = zeros(size(I));
mask(25:end-25, 25:end-25) = 1;
imshow(mask)
title('Initial Contour Location')
bw = activecontour(I, mask, 1300);
imshow(bw)
title('Segmented Image, 1300 Iterations')
No processing happens starting from the snake code; it eventually just generates the same binary image. I hope someone could try running this and help me find my mistake. Thank you in advance.
Matlab's activecontour function uses the Chan–Vese (active contours without edges) method by default, like Cris said. The implementation "uses the Sparse-Field level-set method, similar to the method described in [3]", citing Whitaker, "A level-set approach to 3d reconstruction from range data". (Besides Chan–Vese, activecontour has an optional method arg that can be set to 'edge' to use an alternative "edge-based model" based on the (older) geodesic active contours method of Caselles, Kimmel, and Sapiro.)
The Chan–Vese method segments a grayscale image by looking for a binary image equal to "c1" inside the contour and "c2" outside the contour that both has a smooth contour and is a good approximation to the original image. The method optimizes c1, c2, and the shape of the contour, beginning from some initial contour and evolving it by an iterative process.
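For reference, the energy being minimized has roughly the following form (taken from the Chan–Vese formulation, not from the MATLAB documentation), where $u_0$ is the input image and $\mu$, $\nu$, $\lambda_1$, $\lambda_2$ are weighting parameters:
$$F(c_1, c_2, C) = \mu\,\mathrm{Length}(C) + \nu\,\mathrm{Area}(\mathrm{inside}(C)) + \lambda_1 \int_{\mathrm{inside}(C)} |u_0(x,y) - c_1|^2\,dx\,dy + \lambda_2 \int_{\mathrm{outside}(C)} |u_0(x,y) - c_2|^2\,dx\,dy$$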
If you'll excuse a self-citation, you can find an article, open source C code, and online demo about Chan–Vese on the IPOL journal at http://www.ipol.im/pub/art/2012/g-cv/, which you may find helpful.
So why is it not working in your case? Some thoughts:
In your use, since the input image is already binary, it is clearly tempting for the method to simply set c1=0, c2=1, and the contour to the edges of the input, so "nothing happens". Try setting the optional 'SmoothFactor' arg (possibly to a large value) to force the method to look for a smoother contour.
It's conceivably a datatype problem, since image I is passed as a logical array to activecontour, but normally the function takes a numeric array. Try casting I to a double array before passing.
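A minimal sketch combining both suggestions (the 'SmoothFactor' name-value pair is documented for activecontour, though it may require a relatively recent release; the value 5 is only a guess to experiment with):
I2 = double(I);   % cast the logical image to double
bw = activecontour(I2, mask, 1300, 'Chan-Vese', 'SmoothFactor', 5);   % larger values force a smoother contour
imshow(bw)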

Superimposing Multiple Images + Adding Colormap

I'm an undergrad student working in a cell biology lab with a basic background in MATLAB. I'm working on a project of tracking cell trajectories (time lapse) on a petri dish. Below are two example images that I separated from the background using the watershed feature. The original pictures had neon green cells; now everything is in black and white.
Let's say I have 20 pictures like this; how might I superimpose one on top of another so they all have equal transparency?
Then, how can I add a colormap that represents time? (The bottom-most picture is one end of the colormap and the most recent picture is the opposite end.) <- This is extremely challenging, as it often treats the background as black and not NaN.
The Basic Idea
Probably the easiest way to do this is to take the binary image for each layer and multiply it by the time at which it was acquired (or its index in time). Then you can concatenate all images along the third dimension (using cat) and compute the maximum value along that dimension using max. This makes the newer time points appear "on top" of the older time points. You can then display the resulting flattened matrix using imagesc and it will automatically map to the colormap of the current figure. Typically we would refer to this as a maximum intensity projection.
Creating Some Data
First, since you've only provided two images, I'm going to create some shifted versions of the first image you provided for the demonstration.
% Create some pseudo-data in a cell array that represents the image over time
im = imread('http://i.imgur.com/xTurvfO.jpg');
im = im(:,:,1);
ims = cell(1, 5);
% Create some shifted versions of im
shifts = round(linspace(0, 1000, 5));
for k = 1:numel(shifts)
    ims{k} = circshift(im > 100, shifts([k k]));
end
Implementing the Method
Now for the application of the method I discussed
% For each image, multiply the binary mask by its time index
for k = 1:numel(ims)
    ims{k} = ims{k} * k;
end
% Concatenate all images along the third dimension
IMS = cat(3, ims{:});
% Flatten by taking the maximum value along the third dimension
MIP = max(IMS, [], 3);
% Display the resulting flattened image using imagesc
imagesc(MIP);
% Create a custom colormap with black at the end to create our black background
colormap(cat(1, [0 0 0], parula))
The Result
I have used imfuse to create composite images, which is similar to combining multiple channels on a fluorescent microscope. The Mathworks documentation is http://www.mathworks.com/help/images/ref/imfuse.html.
The tricky part is choosing the vector for the color channels. The 'ColorChannels' value [R,G,B] specifies which image (1 or 2, or 0 for neither) is assigned to each output channel. For example, [2,1,2] puts image 1 into the G(reen) channel and image 2 into the R(ed) and B(lue) channels; this green-magenta scheme is the one recommended for colorblind people and gives the figure on the left of this image. Using [1,0,2] for red/blue gives the figure on the right.
fig1 = imread([basepath filesep 'fig.jpg']);  % basepath is the folder containing your images; white --> black
fig2 = imread([basepath filesep 'fig2.jpg']);
fig_overlay = imfuse(fig1, fig2,'falsecolor','Scaling','joint', 'ColorChannels', [1,0,2]);
imshow(fig_overlay)

Matlab: separate connected components

I was working on my image processing problem with detecting coins.
I have some images like this one here:
and wanted to separate the falsely connected coins.
We already tried the watershed method as described on the MathWorks homepage:
the-watershed-transform-strategies-for-image-segmentation.html
especially since the first example there is exactly our problem.
But instead we get a very messed-up separation, as you can see here:
We already extracted the area of the coins using the regionprops Extrema parameter and applied the watershed only to the needed area.
I'd appreciate any help with the problem or even another method of getting it separated.
If you have the Image Processing Toolbox, I can also suggest the Circular Hough Transform through imfindcircles. However, this requires at least version R2012a, so if you don't have it, this won't work.
For the sake of completeness, I'll assume you have it. This is a good method if you want to leave the image untouched. If you don't know what the Hough Transform is, it is a method for finding straight lines in an image. The circular Hough Transform is a special case that aims to find circles in the image.
The added advantage of the circular Hough Transform is that it is able to detect partial circles in an image. This means that even regions in your image that are connected can be detected as separate circles. You'd call imfindcircles in the following fashion:
[centers,radii] = imfindcircles(A, radiusRange);
A would be your binary image of objects, and radiusRange is a two-element array that specifies the minimum and maximum radii of the circles you want to detect in your image. The outputs are:
centers: An N x 2 array that tells you the (x,y) co-ordinates of the centre of each circle detected in the image - x being the column and y being the row.
radii: For each corresponding centre detected, this also gives the radius of each circle detected. This is an N x 1 array.
There are additional parameters to imfindcircles that you may find useful, such as the Sensitivity. A higher sensitivity means that it is able to detect circular shapes that are more non-uniform, such as what you are showing in your image. They aren't perfect circles, but they are round shapes. The default sensitivity is 0.85. I set it to 0.9 to get good results. Also, playing around with your image, I found that the radii ranged from 50 pixels to 150 pixels. Therefore, I did this:
im = im2bw(imread('http://dennlinger.bplaced.net/t06-4.jpg'));
[centers,radii] = imfindcircles(im, [50 150], 'Sensitivity', 0.9);
The first line of code reads in your image directly from StackOverflow. I also convert this to logical or true black and white as the image you uploaded is of type uint8. This image is stored in im. Next, we call imfindcircles in the method that we described.
Now, if we want to visualize the detected circles, simply use imshow to show your image, then use the viscircles to draw the circles in the image.
imshow(im);
viscircles(centers, radii, 'DrawBackgroundCircle', false);
viscircles by default draws the circles with a white background over the contour. I want to disable this because your image has white circles and I don't want to show false contouring. This is what I get with the above code:
Therefore, what you can take away from this is the centers and radii variables. centers will give you the centre of each detected circle while radii will tell you what the radii is for each circle.
Now, if you want to simulate what regionprops is doing, we can iterate through all of the detected circles and physically draw them onto a 2D map where each circle would be labeled by an ID number. As such, we can do something like this:
[X,Y] = meshgrid(1:size(im,2), 1:size(im,1));
IDs = zeros(size(im));
for idx = 1 : numel(radii)
    r = radii(idx);
    cen = centers(idx,:);
    loc = (X - cen(1)).^2 + (Y - cen(2)).^2 <= r^2;
    IDs(loc) = idx;
end
We first define a rectangular grid of points using meshgrid and initialize an IDs array of all zeroes that is the same size as the image. Next, for each pair of radii and centres for each circle, we define a circle that is centered at this point that extends out for the given radius. We then use these as locations into the IDs array and set it to a unique ID for that particular circle. The result of IDs will be that which resembles the output of bwlabel. As such, if you want to extract the locations of where the idx circle is, you would do:
cir = IDs == idx;
For demonstration purposes, this is what the IDs array looks like once we scale the IDs such that it fits within a [0-255] range for visibility:
imshow(IDs, []);
Therefore, each shaded circle of a different shade of gray denotes a unique circle that was detected with imfindcircles.
However, the shades of gray are probably a bit ambiguous for certain coins, as they blend into the background. Another way that we could visualize this is to apply a different colour map to the IDs array. We can try using the cool colour map, with the total number of colours equal to the number of unique circles plus 1 for the background. Therefore, we can do something like this:
cmap = cool(numel(radii) + 1);
RGB = ind2rgb(IDs, cmap);
imshow(RGB);
The above code will create a colour map such that each circle gets mapped to a unique colour in the cool colour map. The next line applies a mapping where each ID gets associated with a colour with ind2rgb and we finally show the image.
This is what we get:
Edit: the following solution is better suited to scenarios where one does not require fitting the exact circumferences, although simple heuristics could be used to approximate the radii of the coins in the original image based on the centers found in the eroded one.
Assuming you have access to the Image Processing toolbox, try imerode on your original black and white image. It will apply an erosion morphological operator to your image. In fact, the Matlab webpage with the documentation of that function has an example strikingly similar to your problem/image and they use a disk structure.
Run the following code (based on the example linked above) assuming the image you submitted is called ima.jpg and is local to the code:
ima=imread('ima.jpg');
se = strel('disk',50);
eroded = imerode(ima,se);
imshow(eroded)
and you will see the image that follows as output. After you do this, you can use bwlabel to label the connected components and compute whatever properties you may want, for example, count the number of coins or detect their centers.
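As a rough follow-up sketch of that counting step (assuming the eroded image is RGB and needs binarizing first; skip the conversion if it is already logical):
gray = rgb2gray(eroded);               % convert to grayscale if the eroded image is RGB
bw = im2bw(gray, graythresh(gray));    % binarize
[L, numCoins] = bwlabel(bw);           % label connected components and count the coins
stats = regionprops(L, 'Centroid');    % per-coin properties
centers = cat(1, stats.Centroid);      % numCoins x 2 array of (x,y) centres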

Running feature extraction on region within a boundary

The image below shows a cow where the boundary has been detected using a combination of thresholding and subtracting a background from a 3D depth image.
My goal is to perform feature extraction on the area INSIDE the boundary. I have read the other questions and have struggled to implement the steps referred to in similar questions. I do not want to extract the area inside the boundary; I simply want to use it for feature extraction.
Could someone please offer a simpler solution? For example, is there a way to give detectSURFFeatures the boundary coordinates within which to work?
Below is my boundary code, which receives my processed thresholded image (BW1).
figure(1);
imshow(ImageCell_int{i-269});
%title('Outlines, from bwboundaries()'); axis square;
hold on;
boundaries = bwboundaries(BW1);
numberOfBoundaries = size(boundaries, 1);   % number of boundaries found
for k = 1 : numberOfBoundaries
    thisBoundary = boundaries{k};
    plot(thisBoundary(:,2), thisBoundary(:,1), 'g', 'LineWidth', 2);
end
hold off;
I would be extremely grateful for any assistance on this.
Great, now I see the cow! :)
You cannot specify an irregularly-shaped region of interest for the detectSURFFeatures function. However, you can detect the features in the whole image, and then create a binary mask of the region of interest, and use it to exclude keypoints, which are outside it.
Edit: If your boundary is represented as a polygon, you can use roipoly function to create a binary mask from it.
Having said that, features that are outside your object's boundary can actually be useful, because they capture information about the shape of the object.
Also, what is your final goal? If you want to recognize individual cows, then local features may not be the best approach. You may do better with a global HOG descriptor (extractHOGFeatures) or with a color histogram, or both.
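As a rough sketch of the mask-then-filter idea (graySnap is a hypothetical grayscale frame, and mask a hypothetical logical image of the cow region, e.g. from roipoly or poly2mask applied to your boundary):
points = detectSURFFeatures(graySnap);                     % detect over the whole image
locs = round(double(points.Location));                     % keypoint (x,y) positions
locs(:,1) = min(max(locs(:,1), 1), size(mask,2));          % clamp to image bounds
locs(:,2) = min(max(locs(:,2), 1), size(mask,1));
inside = mask(sub2ind(size(mask), locs(:,2), locs(:,1)));  % true where a keypoint falls inside the mask
pointsInside = points(inside);                             % keep only keypoints inside the boundary
[features, validPts] = extractFeatures(graySnap, pointsInside);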
This answer was discovered on Matlab Central and completely solves the problem above for anyone struggling with a similar issue.
Start with a grey scale outline of the object of interest (BW1).
% Make the mask black and white (logical)
BW2 = logical(BW1);
Next, the mask is cast to the same class as the original image and applied element-wise:
mask = cast(BW2, class(normalImage));   % normalImage is the original image being masked
maskedImage = normalImage .* mask;
imshow(maskedImage);
Yields the following result:
It is now possible to perform feature extraction on the object of interest.
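From here, a minimal sketch of feature extraction on the masked object might look like this (assuming maskedImage is grayscale; convert with rgb2gray first if it is not):
pts = detectSURFFeatures(maskedImage);                  % detect interest points on the masked image
[features, validPts] = extractFeatures(maskedImage, pts);
imshow(maskedImage); hold on;
plot(validPts.selectStrongest(50));                     % show the 50 strongest keypoints
hold off;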

Separate two overlapping circles in an image using MATLAB

How do I separate the two connected circles in the image below, using MATLAB? I have tried using imerode, but this does not give good results. Eroding does not work, because in order to erode enough to separate the circles, the lines disappear or become mangled. In other starting pictures, a circle and a line overlap, so isolating the overlapping objects won't work either.
The image shows objects identified by bwboundaries, each object painted a different color. As you can see, the two light blue circles are joined, and I want to disjoin them, producing two separate circles. Thanks
I would recommend you use the Circular Hough Transform through imfindcircles. However, you need version 8 of the Image Processing Toolbox, which was available from version R2012a and onwards. If you don't have this, then unfortunately this won't work :(... but let's go with the assumption that you do have it. However, if you are using something older than R2012a, Dev-iL in his/her comment above linked to some code on MATLAB's File Exchange on an implementation of this, most likely created before the Circular Hough Transform was available: http://www.mathworks.com/matlabcentral/fileexchange/9168-detect-circles-with-various-radii-in-grayscale-image-via-hough-transform/
This is a special case of the Hough Transform where you are trying to find circles in your image rather than lines. The beauty with this is that you are able to find circles even when the circle is partially completed or overlapping.
I'm going to take the image that you provided above and do some post-processing on it. I'm going to convert the image to binary and remove the border, which is white and contains the title. I'm also going to fill in any holes that result so that all of the objects are filled in with solid white. There is some residual quantization noise after this step, so I'm going to apply a small opening with a 3 x 3 square element. After that, I'm going to close the shapes with a 3 x 3 square element, as I see that there are noticeable gaps in the shapes. Therefore:
Therefore, directly reading in your image from where you've posted it:
im = imread('http://s29.postimg.org/spkab8oef/image.jpg'); %// Read in the image
im_gray = im2double(rgb2gray(im)); %// Convert to grayscale, then [0,1]
out = imclearborder(im_gray > 0.6); %// Threshold using 0.6, then clear the border
out = imfill(out, 'holes'); %// Fill in the holes
out = imopen(out, strel('square', 3));
out = imclose(out, strel('square', 3));
This is the image I get:
Now, apply the Circular Hough Transform. The general syntax for this is:
[centres, radii, metric] = imfindcircles(img, [start_radius, end_radius]);
img would be the binary image that contains your shapes, start_radius and end_radius would be the smallest and largest radius of the circles you want to find. The Circular Hough Transform is performed such that it will find any circles that are within this range (in pixels). The outputs are:
centres: Which returns the (x,y) positions of the centres of each circle detected
radii: The radius of each circle
metric: A measure of purity of the circle. Higher values mean that the shape is more probable to be a circle and vice-versa.
I searched for circles having a radius between 30 and 60 pixels. Therefore:
[centres, radii, metric] = imfindcircles(out, [30, 60]);
We can then demonstrate the detected circles, as well as the radii by a combination of plot and viscircles. Therefore:
imshow(out);
hold on;
plot(centres(:,1), centres(:,2), 'r*'); %// Plot centres
viscircles(centres, radii, 'EdgeColor', 'b'); %// Plot circles - Make edge blue
Here's the result:
As you can see, even with the overlapping circles towards the top, the Circular Hough Transform was able to detect two distinct circles in that shape.
Edit - November 16th, 2014
You wish to ensure that the objects are separated before you do bwboundaries. This is a bit tricky to do. The only way I can see you do this is if you don't even use bwboundaries at all and do this yourself. I'm assuming you'll want to analyze each shape's properties by themselves after all of this, so what I suggest you do is iterate through every circle you have, then place each circle on a new blank image, do a regionprops call on that shape, then append it to a separate array. You can also keep track of all of the circles by having a separate array that adds the circles one at a time to this array.
Once you've finished with all of the circles, you'll have a structure array that contains all of the measured properties for all of the measured circles you have found. You would use the array that contains only the circles from above, then use these and remove them from the original image so you get just the lines. You'd then call one more regionprops on this image to get the information for the lines and append this to your final structure array.
Here's the first part of the procedure I outlined above:
num_circles = numel(radii); %// Get number of circles
struct_reg = []; %// Save the shape analysis per circle / line here
%// For creating our circle in the temporary image
[X,Y] = meshgrid(1:size(out,2), 1:size(out,1));
%// Storing all of our circles in this image
circles_img = false(size(out));
for idx = 1 : num_circles %// For each circle we have...
    %// Place our circle inside a temporary image
    r = radii(idx);
    cx = centres(idx,1); cy = centres(idx,2);
    tmp = (X - cx).^2 + (Y - cy).^2 <= r^2;
    %// Save in master circle image
    circles_img(tmp) = true;
    %// Do regionprops on this image and save
    struct_reg = [struct_reg; regionprops(tmp)];
end
The above code may be a bit hard to swallow, but let's go through it slowly. I first figure out how many circles we have, which is simply the number of radii detected. I keep a separate array called struct_reg that appends a regionprops struct for each circle and line in our image. I use meshgrid to determine the (X,Y) co-ordinates with respect to the image containing our shapes, so that I can draw one circle onto a blank image at each iteration. To do this, you simply find the Euclidean distance of each pixel with respect to the centre of the current circle, and set a pixel to true only if its distance is at most r. After doing this, you will have an image containing just that one circle. You then add this circle to the master circles_img image, which will contain only the circles, run regionprops on the single-circle image and append the result to struct_reg, then continue with the rest of the circles.
At this point, we will have saved all of our circles. This is what circles_img looks like so far:
You'll notice that the circles drawn are clean, but the actual circles in the original image are a bit jagged. If we tried to remove the circles with this clean image, you will get some residual pixels along the border and you won't completely remove the circles themselves. To illustrate what I mean, this is what your image looks like if I tried to remove the circles with circles_img by itself:
... not good, right?
If you want to completely remove the circles, then do a morphological reconstruction through imreconstruct where you can use this image as the seed image, and specify the original image to be what we're working on. The job of morphological reconstruction is essentially a flood fill. You specify seed pixels, and an image you want to work on, and the job of imreconstruct is from these seeds, flood fill with white until we reach the boundaries of the objects that the seed pixels resided in. Therefore:
out_circles = imreconstruct(circles_img, out);
Therefore, we get this for our final reconstructed circles image:
Great! Now, use this and remove the circles from the original image. Once you do this, run regionprops again on this final image and append to your struct_reg variable. Obviously, save a copy of the original image before doing this:
out_copy = out;
out_copy(out_circles) = false;
struct_reg = [struct_reg; regionprops(out_copy)];
Just for sake of argument, this is what the image looks like with the circles removed:
Now, we have analyzed all of our shapes. Bear in mind I did the full regionprops call because I don't know exactly what you want in your analysis... so I just decided to give you everything.
Hope this helps!
Erosion is the way to go. You should probably use a larger structuring element. How about:
1. erode
2. detect your objects
3. dilate each object separately using the same structuring element
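A minimal sketch of that erode/detect/dilate idea (bw is assumed to be your binary coin image, and the disk radius is only a guess to tune):
se = strel('disk', 30);             % structuring element -- tune the radius to your coin size
eroded = imerode(bw, se);           % 1) erode so touching coins separate
[L, num] = bwlabel(eroded);         % 2) detect (label) the separated objects
separated = false(size(bw));
for k = 1:num
    obj = imdilate(L == k, se);     % 3) dilate each object back, one at a time
    separated = separated | obj;    %    accumulate into a combined mask
end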