Combine 2 images as a single planar image - matlab

I have two separate grayscale images im1 (Fig1) and im2 (Fig2), each of size 50 by 50, that are displayed here by color coding them. When I combine them using the cat() command and then display the result of the concatenated images, they get displayed side by side (Fig3). However, if I create a third image by replicating either the first or the second image and then display the concatenation of the three images, I get a single image (Fig4). I don't understand why the merging is possible for RGB (3 dimensions), whereas for the grayscale concatenation the merging did not happen. How can I get a single image with im1 and im2 merged or overlaid (whichever is legally possible), not side by side? So, how do I overlay im1 and im2 to get a single image and display it by color coding?
% Grayscale concatenation along dimension 2: the images appear side by side
imgGray = cat(2,im1,im2);
imshow(imgGray)
imagesc(imgGray)
% Replicate one image as a third plane and concatenate along dimension 3: a single color image
im3 = im1;
imgColor = cat(3,im1,im2,im3);
imagesc(imgColor)

You can also just add them together (after normalizing them) and have a single colormap represent it all:
imagesc(I1+I2);
Or, if you want to set transparency according to the color and intensity, you can add:
alpha color
alpha scaled
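For example, a minimal sketch of that idea (using mat2gray for the normalization is my assumption; im1 and im2 are the images from the question):
I1 = mat2gray(im1);                  % normalize to [0, 1] so neither image dominates
I2 = mat2gray(im2);
figure;
imagesc(I1 + I2); colormap(parula); colorbar;
% Optional transparency, as suggested above:
alpha color
alpha scaled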

You can do it "manually":
Use ind2rgb to convert each grayscale image to RGB with "color coding".
Compute the average of the two RGB images to get the "fused image".
Here is a code sample:
% Use cameraman as first image, and resized moon for the second image (just for example).
I1 = imread('cameraman.tif'); % I1 is uint8 Grayscale
I2 = imresize(imread('moon.tif'), size(I1)); % I2 is uint8 Grayscale
% Convert images to RGB using parula color map (you may choose any color map).
J1 = ind2rgb(I1, parula(256)); % J1 is double RGB (three color planes in range [0, 1]).
J2 = ind2rgb(I2, parula(256)); % J2 is double RGB (three color planes in range [0, 1]).
% Fuse J1 and J2 "manually".
alpha = 0.5; % Using alpha=0.5 gives the average of J1 and J2.
K = J1*alpha + J2*(1-alpha); % K is double RGB.
K = im2uint8(K); % Convert K to uint8 (im2uint8 multiplies pixels by 255 and converts to uint8).
%Display result
figure;imshow(K);
Result:
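As an aside (my addition, not part of the answer above): the Image Processing Toolbox function imfuse offers similar one-line blends:
K2 = imfuse(J1, J2, 'blend');        % average of the two color-coded RGB images
K3 = imfuse(I1, I2, 'falsecolor');   % green-magenta false-color composite of the grayscale pair
figure; imshow(K2); figure; imshow(K3);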

Related

Contrast Stretching for color images with MATLAB

I'm working in MATLAB and I wanted to apply contrast stretching to a grayscale image and also to an RGB image.
For the grayscale image I've tried this and it worked:
clear all;
clc;
itemp = imread('cameraman.tif'); %read the image
i = itemp(:,:,1);
rtemp = min(i);         % find the min. value of pixels in all the columns (row vector)
rmin = min(rtemp);      % find the min. value of pixel in the image
rtemp = max(i);         % find the max. value of pixels in all the columns (row vector)
rmax = max(rtemp);      % find the max. value of pixel in the image
m = 255/(rmax - rmin);  % find the slope of line joining point (0,255) to (rmin,rmax)
c = 255 - m*rmax;       % find the intercept of the straight line with the axis
i_new = m*i + c;        % transform the image according to new slope
figure,imshow(i); % display original image
figure,imshow(i_new); % display transformed image
This is for the grayscale image;
the problem is that I don't know how to do it for the RGB image.
Any idea how to implement that?
Thank you :)
Could the function stretchlim (reference) be useful for your purpose?
Find limits to contrast stretch image.
Low_High = stretchlim(RGB,Tol) returns Low_High, a two-element vector
of pixel values that specify lower and upper limits that can be used
for contrast stretching truecolor image RGB.
img = imread('myimg.png');
lohi = stretchlim(img,[0.2 0.8]);
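You could then apply those limits with imadjust, e.g. (a minimal sketch; the [] output range is my addition and defaults to [0, 1]):
img_adj = imadjust(img, lohi, []);   % stretchlim returns 2-by-3 limits for a truecolor image
figure; imshow(img_adj);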
If you write
rmin = min(i(:));
then it computes the minimum over all values in i. This will work for RGB images as well, which are simply 3-D matrices with 3 values along the 3rd dimension.
The rest of your code also applies directly to such images.
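For example, a minimal sketch of that idea for an RGB image (peppers.png is just a stand-in demo image, and converting to double first is my addition to sidestep uint8 arithmetic):
rgb = im2double(imread('peppers.png'));     % demo RGB image, values in [0, 1]
rmin = min(rgb(:));                         % global minimum over all channels
rmax = max(rgb(:));                         % global maximum over all channels
stretched = (rgb - rmin) / (rmax - rmin);   % linear stretch of all channels jointly
figure; imshow(rgb); figure; imshow(stretched);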

Watershed Algorithm setting removing all connected components

I'm using the watershed algorithm to try and segment touching nuclei. A typical image may look like:
or this:
I'm trying to apply the watershed algorithm with this code:
show(RGB_img)
%Convert to grayscale image
I = rgb2gray(RGB_img);
%Take structuring element of a disk of size 10, for the morphological transformations
%Attempt to subtract the background from the image: top hat is the
%subtraction of the open image from the original
%Morphological transformation to subtract background noise from the image
%Tophat is the subtraction of an opened image from the original. Remove all
%images smaller than the structuring element of 10
I1 = imtophat(I, strel('disk', 10));
%Increases contrast
I2 = imadjust(I1);
%show(I2,'contrast')
%Assume we have background and foreground and assess thresh as such
level = graythresh(I2);
%Convert to binary image based on graythreshold
BW = im2bw(I2,level);
show(BW,'C');
BW = bwareaopen(BW,8);
show(BW,'C2');
BW = bwdist(BW) <= 1;
show(BW,'joined');
%Complement because we want image to be black and background white
C = ~BW;
%Use distance transform to find nearest nonzero values from every pixel
D = -bwdist(C);
%Assign Minus infinity values to the values of C inside of the D image
% Modify the image so that the background pixels and the extended maxima
% pixels are forced to be the only local minima in the image (So you could
% hypothetically fill in water on the image
D(C) = -Inf;
%Gets 0 for all watershed lines and integers for each object (basins)
L = watershed(D);
show(L,'L');
%Takes the labels and converts to an RGB (Using hot colormap)
fin = label2rgb(L,'hot','w');
% show(fin,'fin');
im = I;
%Superimpose ridgelines,L has all of them as 0 -> so mark these as 0(black)
im(L==0)=0;
clean_img = L;
show(clean_img)
For whatever reason after C = ~BW; the whole image goes dark. This very same code block has worked on a handful of other images, all of which were more "solid" or not as grainy as these. However, I thought I compensated for this with BW = bwdist(BW) <= 1;. I've experimented a ton and I don't really know what's happening. Any help would be great!
P.S. This is the image after BW = bwareaopen(BW,8);
Before the top-hat, you should perform a closing and an opening in order to reduce the noise.
If you perform an area opening on a noisy image, you may end up with the result on your black and white image.
So it would be:
1. Closing and opening
2. Top-hat
3. Area opening, if still necessary
4. Thresholding
5. Erosion and dilation to find the inner and outer markers respectively
6. Watershed (never use a watershed without markers)
A sketch of this pipeline is given below.
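A minimal sketch of that pipeline (hedged: the structuring-element sizes, the use of imbinarize, and the particular choice of inner/outer markers are my assumptions, not taken from the question code):
I  = rgb2gray(RGB_img);
se = strel('disk', 3);
Ic = imopen(imclose(I, se), se);        % closing and opening to reduce noise
It = imtophat(Ic, strel('disk', 10));   % top-hat to remove uneven background
bw = imbinarize(It);                    % thresholding (im2bw(It, graythresh(It)) on older releases)
bw = bwareaopen(bw, 8);                 % area opening, if still necessary
inner = imerode(bw, strel('disk', 5));  % inner markers: eroded foreground
outer = bwmorph(~bw, 'skel', Inf);      % outer markers: skeleton of the background
D = -bwdist(~bw);                       % negative distance transform inside the objects
D = imimposemin(D, inner | outer);      % force minima only at the markers
L = watershed(D);                       % marker-controlled watershed
figure; imshow(label2rgb(L, 'jet', 'w'));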

Negative values in Watershed algorithm leading to black image

I'm using the watershed algorithm to try and segment touching nuclei. A typical image may look like:
or this:
I'm trying to apply the watershed algorithm with this code:
show(RGB_img)
%Convert to grayscale image
I = rgb2gray(RGB_img);
%Take structuring element of a disk of size 10, for the morphological transformations
%Attempt to subtract the background from the image: top hat is the
%subtraction of the open image from the original
%Morphological transformation to subtract background noise from the image
%Tophat is the subtraction of an opened image from the original. Remove all
%images smaller than the structuring element of 10
I1 = imtophat(I, strel('disk', 10));
%Increases contrast
I2 = imadjust(I1);
%show(I2,'contrast')
%Assume we have background and foreground and assess thresh as such
level = graythresh(I2);
%Convert to binary image based on graythreshold
BW = im2bw(I2,level);
show(BW,'C');
BW = bwareaopen(BW,8);
show(BW,'C2');
BW = bwdist(BW) <= 1;
show(BW,'joined');
%Complement because we want image to be black and background white
C = ~BW;
%Use distance transform to find nearest nonzero values from every pixel
D = -bwdist(C);
%Assign Minus infinity values to the values of C inside of the D image
% Modify the image so that the background pixels and the extended maxima
% pixels are forced to be the only local minima in the image (So you could
% hypothetically fill in water on the image
D(C) = -Inf;
%Gets 0 for all watershed lines and integers for each object (basins)
L = watershed(D);
show(L,'L');
%Takes the labels and converts to an RGB (Using hot colormap)
fin = label2rgb(L,'hot','w');
% show(fin,'fin');
im = I;
%Superimpose ridgelines,L has all of them as 0 -> so mark these as 0(black)
im(L==0)=0;
clean_img = L;
show(clean_img)
After C = ~BW; the whole image goes dark. I believe this is because the image pixels are all -Inf or some small negative number. Is there a way around this, and if so, what could I change in my code to get this algorithm working? I've experimented a ton and I don't really know what's happening. Any help would be great!
The problem is with your show command. As you said in the comments this uses imshow under the hood. If you try imshow directly you'll see you also get a black image. However, if you call it with appropriate limits:
imshow(clean_img,[min(clean_img(:)), max(clean_img(:))])
you'll see everything you expect to see.
In general I usually prefer imagesc for that reason. imshow makes arbitrary judgements as to what range to represent, and I usually can't be bothered to keep up with it. I think in your case your end image is uint16, so imshow chooses to represent the range [0, 65535]. Since all your pixel values are below 400, they look black to the naked eye over that range.
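For what it's worth, the shorter built-in shortcut does the same scaling (this is documented imshow behavior, not something from the question code):
imshow(clean_img, []);   % [] scales the display range to [min, max] of the data
% or, as suggested above:
figure; imagesc(clean_img); colorbar;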

what is the right way to compute the measures for images with different color properties

I need a little help, guys, with matrix dimensions in MATLAB.
I Have two images imported by imread function:
im1 = imread('1.jpg');
im2 = imread('2.jpg');
im1 is the reference image, while im2 is the noisy image.
In the workspace window, MATLAB shows the im1 dimensions as 768x1024x3,
while im2 is displayed as 768x1024.
They are both RGB; there are no grayscale images.
In fact, the second image is a compressed image (a compression algorithm was performed on it), while the first image is a natural JPEG image, untouched.
For calculating MSE/PSNR of both images, the matrix dimensions must be the same,
so I will need to transform im2's dimensions to be 3-D like the first image (768x1024x3).
I tried the functions squeeze and reshape with no success.
You were on the right track with repmat. Here's the correct syntax:
im2 = repmat(im2, [1 1 3]);
This says you want 1 replicate along the first dimension, 1 replicate along the second dimension, and 3 replicates along the third dimension.
Are you sure that both are RGB images? im2 has only one channel, so it looks grayscale, but it could also be an indexed (colormap) image. In that case, try
[im2, map] = imread('im2.jpg');
and see if anything appears in the map variable. If the image is indeed an indexed image, the map variable should be of size 256 x 3.
What donda has suggested is repeating the grayscale channel 3 times to make it of size 768x1024x3. Another possibility is that the noisy image was created by converting an RGB image to grayscale or by taking the green channel of an RGB image. Verify the source of the image in that case.
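If map does turn out to be non-empty, a possible follow-up (my addition, not from the answer) is to convert the indexed image to RGB before comparing:
[im2, map] = imread('im2.jpg');
if ~isempty(map)
    im2 = im2uint8(ind2rgb(im2, map));   % ind2rgb returns double RGB in [0, 1]; convert back to uint8
end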
About the PSNR computation: I have a feeling that there is some problem with your code. I have given my code below; use this and see if it works. Get back to me if you face any problem.
function [Psnr_DB] = psnr(I, I_out)
I = double(I);
I_out = double(I_out);
total_error = 0;
% Sum of squared differences over every pixel of every channel
for iterz = 1:size(I,3)
    for iterx = 1:size(I,1)
        for itery = 1:size(I,2)
            total_error = total_error + (I(iterx,itery,iterz) - I_out(iterx,itery,iterz))^2;
        end
    end
end
MSE = total_error/numel(I);
Psnr = (255^2)/MSE; % assumes 8-bit images (peak value 255)
Psnr_DB = 10*log10(Psnr) %#ok<NOPRT>
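As a side note, the triple loop can be replaced by a vectorized version that computes the same value (a sketch assuming 8-bit images, as the peak value of 255 implies):
I = double(I);
I_out = double(I_out);
MSE = mean((I(:) - I_out(:)).^2);   % same as total_error/numel(I)
Psnr_DB = 10*log10(255^2 / MSE);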

MATLAB Auto Crop

I am trying to automatically crop the image below to a bounding box. The background will always be the same colour. I have tried the answers at
Find the edges of image and crop it in MATLAB
and various applications and examples on MathWorks' File Exchange, but I get stuck at getting a proper bounding box.
I was thinking of converting the image to black and white, converting it to binary, and removing everything that's closer to white than black, but I'm not sure how to go about it.
Here's a nice way
img = im2double(imread('http://i.stack.imgur.com/ZuiEt.jpg')); % read image and convert it to double in range [0..1]
b = sum( (1-img).^2, 3 ); % check how far each pixel from "white"
% display
figure; imshow( b > .5 ); title('non background pixels');
% use regionprops to get the bounding box
st = regionprops( double( b > .5 ), 'BoundingBox' ); % convert to double to avoid bwlabel of logical input
rect = st.BoundingBox; % get the bounding box
% display
figure; imshow( img );
hold on; rectangle('Position', rect );
Following Jak's request, here's the second line explained
After converting img to double type (using im2double), the image is stored in memory as an h-by-w-by-3 matrix of type double. Each pixel has 3 values between 0 and 1 (not 255!), representing its RGB values, 0 being dark and 1 being bright.
Thus (1-img).^2 checks, for each pixel and each channel (RGB), how far it is from 1 (bright). The darker the pixel, the larger this distance.
Next, we sum the distances per channel into a single value per pixel using the sum( . , 3 ) command, leaving us with an h-by-w 2D matrix of distances of each pixel from white.
Finally, assuming the background is bright white, we select all pixels that are significantly far from bright: b > .5. This threshold is not perfect, but it captures the boundary of the object well.
Following the answer of Shai, I present a way to circumvent regionprops (Image Processing Toolbox), based only on find applied to the black-and-white image.
% load
img = im2double(imread('http://i.stack.imgur.com/ZuiEt.jpg'));
% black-white image by threshold on check how far each pixel from "white"
bw = sum((1-img).^2, 3) > .5;
% show bw image
figure; imshow(bw); title('bw image');
% get bounding box (first row, first column, number rows, number columns)
[row, col] = find(bw);
bounding_box = [min(row), min(col), max(row) - min(row) + 1, max(col) - min(col) + 1];
% display with rectangle
rect = bounding_box([2,1,4,3]); % rectangle wants x,y,w,h we have rows, columns, ... need to convert
figure; imshow(img); hold on; rectangle('Position', rect);
To crop an image, first create the bounding box where you want to crop:
crp = imcrop(original_image_name, bounding_box);
I have done this in my assignment. It really works!
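For instance, tying this to the answers above (assuming rect is the bounding box computed earlier with regionprops or find):
cropped = imcrop(img, rect);   % rect is [x, y, width, height]
figure; imshow(cropped); title('cropped');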