Calculate the contact length between segments of image in Matlab - matlab

I have images made of four colors. Each color represents a specific matter phase. I can segment the image based on the color and calculate the perimeter of the segmented images. Now, I need to calculate the contact length between different phases. An example of the image is shown here. For example, the contact length between the blue phase and the yellow phase is very small, while blue and gray phases have significant contact.
% aa is the image
oil = (aa(:,:,3)==255);
rock =(aa(:,:,2)==179);
gas =(aa(:,:,2)==255);
water = (aa(:,:,2)==0 & aa(:,:,3)==0); % element-wise AND; && only works on scalars
O = bwboundaries(oil);
R = bwboundaries(rock);
G = bwboundaries(gas);
W = bwboundaries(water);

A brute-force way would be to iterate through each pixel in your image and, based on that pixel's value and the value of the pixel to its right, increment the counter for that phase-transition pair if there is a transition. Then do the same for the pixel below. Do this for every pixel in the image, treating the edges as special cases.
A more efficient way would be to process whole rows and columns at a time, incrementing the counters as you go.
Or you could make copies of your image shifted by one pixel, compare them with the original to find your phase transitions (which yields binary matrices), and simply sum those binary matrices; a sketch of this idea follows.
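For example, here is a minimal sketch of the shifted-comparison idea, reusing the oil and rock masks from the question (the choice of phase pair and 4-connected transitions are my assumptions):
% Compare each mask against its one-pixel-shifted neighbour; summing the
% resulting logical matrices counts the horizontal and vertical transitions.
A = oil;  B = rock;   % any two of the phase masks defined above
horiz = (A(:,1:end-1) & B(:,2:end)) | (B(:,1:end-1) & A(:,2:end));
vert  = (A(1:end-1,:) & B(2:end,:)) | (B(1:end-1,:) & A(2:end,:));
contactLength = sum(horiz(:)) + sum(vert(:));   % contact length in pixel edges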

Here is the final code that I eventually wrote. The results look correct.
% aa is the image
oil = (aa(:,:,3)==255);
rock = (aa(:,:,2)==179);
gas = (aa(:,:,2)==255);
water = (aa(:,:,2)==0 & aa(:,:,3)==0); % element-wise AND
phases(1,:,:) = rock;
phases(2,:,:) = water;
phases(3,:,:) = oil;
phases(4,:,:) = gas;
outMat = ContactMatrix(phases);
The function ContactMatrix is shown below:
function [ContactMat] = ContactMatrix(phases)
%CONTACTMATRIX measures the contact length between different phases
% The output is a 2D matrix which holds the lengths:
%          Phase1  Phase2  Phase3
% Phase1     L1      L2      L4
% Phase2     L2      L3      L5
% Phase3     L4      L5      L6
% L1 is zero
% L2 is the contact length of Phase1 and Phase2, etc.
nph = size(phases,1);
imSize = size(phases(1,:,:));
xmax = imSize(2);   % number of image rows
ymax = imSize(3);   % number of image columns
% Ideally we would also check that all phase masks have the same size :)
ContactMat = zeros(nph);
for i = 1:nph
    counts = zeros(1,nph);
    dd2 = bwmorph(squeeze(phases(i,:,:)), 'dilate', 1);
    B = bwboundaries(dd2);
    nB = size(B,1);
    coefs = 1:nph;
    coefs(i) = [];
    for j = 1:nB
        fd = B{j};
        % Ignore the points at the boundary of the image
        fd(fd(:,1)==1 | fd(:,1)==xmax | fd(:,2)==1 | fd(:,2)==ymax, :) = [];
        nL = size(fd,1);
        for k = 1:nL
            % Linear index of this boundary point into the rows-by-columns plane
            mat = fd(k,1) + (fd(k,2)-1)*xmax;
            bufCheck = phases(coefs, mat);
            counts(coefs) = counts(coefs) + bufCheck';
        end
    end
    ContactMat(i,coefs) = counts(coefs);
end
ContactMat = 0.5*(ContactMat + ContactMat');
end
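For example, with the phase ordering used above (rock, water, oil, gas), a hypothetical lookup into the result would be:
rockWaterContact = outMat(1,2);   % contact length between rock and water, in boundary pixels
The matrix is symmetric, so outMat(2,1) returns the same value.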

Related

Two-dimensional matched filter

I want to implement two dimensional matched filter for blood vessel extraction according to the paper "Detection of Blood Vessels in Retinal Images Using Two-Dimensional Matched Filters" by Chaudhuri et al., IEEE Trans. on Medical Imaging, 1989 (there's a PDF on the author's web site).
A brief description is that a blood vessel's cross-section has a Gaussian profile, so I want to use a Gaussian matched filter to increase the SNR. Such a kernel may be mathematically expressed as:
K(x,y) = -exp(-x^2/(2*sigma^2))   for |x| <= 3*sigma, |y| <= L/2
Here L is the length of the vessel segment with fixed orientation. Experimentally, sigma = 1.5 and L = 7.
My MATLAB code for this part is:
s = 1.5; %sigma
t = -3*s:3*s;
theta=0:15:165; %different rotations
%one dimensional kernel
x = 1/sqrt(6*s)*exp(-t.^2/(2*s.^2));
L=7;
%two dimensional gaussian kernel
x2 = repmat(x,L,1);
Consider the response of this filter for a pixel belonging to the background retina. Assuming the background to have constant intensity with zero mean additive Gaussian white noise, the expected value of the filter output should ideally be zero. The convolution kernel is, therefore, modified by subtracting the mean value of s(t) from the function itself. The mean value of the kernel is determined as: m = Sum(K(x,y))/(number of points).
Thus, the convolutional mask used in this algorithm is given by: K(x, y) = K(x,y) - m.
My MATLAB code:
m = sum(x2(:))/(size(x2,1)*size(x2,2));
x2 = x2-m;
A vessel may be oriented at any angle 0 < theta < 180, and the matched filter response is maximum when the filter is aligned at theta ± 90 (the cross-section profile is Gaussian, not the vessel itself).
Thus we need to rotate the matched filter 12 times in 15-degree increments.
My MATLAB code is attached here but I don't get a desirable result. Any help is appreciated.
%apply rotated matched filter on image
r = {};
for k = 1:12
    x3 = imrotate(x2, theta(k), 'crop'); %figure;imagesc(x3);colormap gray;
    r{k} = conv2(img, x3);
end
w = []; h = zeros(584,565);
for i = 1:565
    for j = 1:584
        for k = 1:12
            w = [w, r{k}(j,i)];
        end
        h(j,i) = max(abs(w));
        w = [];
    end
end
%show result
figure('Name','after matched filter');imagesc(h);colormap gray
For the rotation I used imrotate, which seems more sensible to me, but the paper does it differently: suppose p = [x, y] is a discrete point in the kernel. To compute the coefficients of the rotated kernel we have [u, v] = p * Rotation_Matrix, where
Rotation_Matrix = [cos(theta), sin(theta); -sin(theta), cos(theta)]
And the kernel is:
K(x,y) = -exp(-u^2/(2*s^2))
But the new kernel doesn't have a Gaussian shape anymore, whereas using imrotate preserves the Gaussian shape. So what is the benefit of using the rotation matrix?
Input image is:
Output:
Matched filtering helps increase SNR but background noise is amplified too.
Am I right to use imrotate to rotate the kernel? My main problem is with the rotation matrix: why is it used, and what is the right code to implement it?
The reason to build the filter from its analytic expression for each rotation, rather than using imrotate, is that the filter extent is not circular, and therefore rotating brings in "new" pixel values and pushes some other pixels out of the kernel. Furthermore, rotating a kernel constructed as here (smooth transition along one direction, step edge along the other dimension) requires different interpolation methods along each dimension, which imrotate cannot do. The resulting rotated kernel will always be wrong.
Both these issues can be easily seen when displaying the kernel you make together with two rotated versions:
This display brings an additional issue to the front: the kernel is not centered on a pixel, causing it to shift the output by half a pixel.
Note also that, when subtracting the mean, it is important that this mean be computed only over the original domain of the filter, and that any zeros used to pad this domain to a rectangular shape remain zero (these should not become negative).
The rotated kernels can be constructed as follows:
m = max(ceil(3*s),(L-1)/2);
[x,y] = meshgrid(-m:m,-m:m); % non-rotated coordinate system, contains (0,0)
t = pi/6; % angle in radian
u = cos(t)*x - sin(t)*y; % rotated coordinate system
v = sin(t)*x + cos(t)*y; % rotated coordinate system
N = (abs(u) <= 3*s) & (abs(v) <= L/2); % domain
k = exp(-u.^2/(2*s.^2)); % kernel
k = k - mean(k(N));
k(~N) = 0; % set kernel outside of domain to 0
This is the result for the three rotations used in the example above (the grey around the edges of the kernel corresponds to the value 0, the black pixels have a negative value):
Another issue is that you use conv2 with the default 'full' output shape; you should be using 'same' here, so that the output of the filter has the same size as the input.
Note that, instead of computing all filter responses, and computing the max afterwards, it is much easier to compute the max as you compute each filter response. All of the above leads to the following code:
img = im2double(rgb2gray(img));
s = 1.5; %sigma
L = 7;
theta = 0:15:165; %different rotations
out = zeros(size(img));
m = max(ceil(3*s),(L-1)/2);
[x,y] = meshgrid(-m:m,-m:m); % non-rotated coordinate system, contains (0,0)
for t = theta
    t = t / 180 * pi; % angle in radian
    u = cos(t)*x - sin(t)*y; % rotated coordinate system
    v = sin(t)*x + cos(t)*y; % rotated coordinate system
    N = (abs(u) <= 3*s) & (abs(v) <= L/2); % domain
    k = exp(-u.^2/(2*s.^2)); % kernel
    k = k - mean(k(N));
    k(~N) = 0; % set kernel outside of domain to 0
    res = conv2(img,k,'same');
    out = max(out,res);
end
out = out/max(out(:)); % force output to be in [0,1] interval that MATLAB likes
imwrite(out,'so_result.png')
I get the following output:

How to identify letters on a license plate with varying perspectives

I am making a script in Matlab that takes in an image of the rear of a car. After some image processing I would like to output the original image of the car with a rectangle around the license plate of the car. Here is what I have written so far:
origImg = imread('CAR_IMAGE.jpg');
I = imresize(origImg, [500, NaN]); % easier viewing and edge connecting
G = rgb2gray(I);
M = imgaussfilt(G); % blur to remove some noise
E = edge(M, 'Canny', 0.4);
% I can assume all letters are somewhat upright
RP = regionprops(E, 'PixelIdxList', 'BoundingBox');
W = vertcat(RP.BoundingBox); W = W(:,3); % get the widths of the BBs
H = vertcat(RP.BoundingBox); H = H(:,4); % get the heights of the BBs
FATTIES = W > H; % find the BBs that are more wide than tall
RP = RP(FATTIES);
E(vertcat(RP.PixelIdxList)) = false; % remove more wide than tall regions
D = imdilate(E, strel('disk', 1)); % dilate for easier viewing
figure();
imshowpair(I, D, 'montage'); % display original image and processed image
Here are some examples:
From here I am unsure how to isolate the letters of the license plate, particularly as in the second example above where each letter has a decreased area due to the perspective of the image. My first idea was to get the bounding box of all regions and keep only the regions where the perimeter-to-area ratio is "similar", but this resulted in removing letters of the plate that became connected when I dilate the image, like the K and V in the fourth example above.
I would appreciate some suggestions on how I should go about isolating these letters. No code is necessary, and any advice is appreciated.
So I continued to work despite not receiving any answers here on SO and managed to get a working version through trial and error. All of the following code comes after the code in my original question and all plots below are from the first example image above. First, I found the variance for every single pixel row of the image and plotted them like so:
V = var(D, 0, 2);
X = 1:length(V);
figure();
hold on;
scatter(X, V);
I then fit a very high order polynomial to this scatter plot and saved the values where the slope of the polynomial was zero and the variance value was very low (i.e. the dark row of pixels immediately before or after a row with some white):
P = polyfit(X', V, 25);
PV = polyval(P, X);
Z = X(find(PV < 0.03 & abs(gradient(PV)) < 0.0001));
plot(X, PV); % red curve on plot
scatter(Z, zeros(1,length(Z))); % orange circles on x-axis
I then calculate the integral of the polynomial between any consecutive Z values (my dark rows), and save the two Z values between which the integral is the largest, which I mark with lines on the plot:
MAX_INTEG = -1;
MIN_ROW = -1;
MAX_ROW = -1;
for i = 1:(length(Z)-1)
    TEMP_MIN = Z(i);
    TEMP_MAX = Z(i+1);
    Q = polyint(P);
    TEMP_INTEG = diff(polyval(Q, [TEMP_MIN, TEMP_MAX]));
    if (TEMP_INTEG > MAX_INTEG)
        MAX_INTEG = TEMP_INTEG;
        MIN_ROW = TEMP_MIN;
        MAX_ROW = TEMP_MAX;
    end
end
line([MIN_ROW, MIN_ROW], [-0.1, max(V)+0.1]);
line([MAX_ROW, MAX_ROW], [-0.1, max(V)+0.1]);
hold off;
Since the X-values of these lines correspond to row numbers in the original image, I can crop my image between MIN_ROW and MAX_ROW:
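For example, a sketch of that cropping step (assuming the crop is applied to the processed image D):
plateRows = D(MIN_ROW:MAX_ROW, :);   % keep only the rows between the two detected dark rows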
I repeat the above steps now for the columns of pixels, crop, and remove any excess black rows and columns to arrive at the identified plate:
I then perform 2D cross-correlation between this cropped image and the edge image D using MATLAB's xcorr2 to locate the plate in the original image. After finding the location I just draw a rectangle around the discovered plate like so:
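A minimal sketch of that localization step (my own reconstruction rather than the author's exact code; PLATE stands for the cropped plate image, and I use normxcorr2 in place of plain xcorr2 so the peak position is easy to read off):
C = normxcorr2(double(PLATE), double(D));     % cross-correlate the template with the edge image
[~, idx] = max(C(:));                         % correlation peak
[peakRow, peakCol] = ind2sub(size(C), idx);   % peak marks the bottom-right corner of the match
rowStart = peakRow - size(PLATE,1) + 1;
colStart = peakCol - size(PLATE,2) + 1;
figure(); imshow(I); hold on;                 % I is the resized image from the question
rectangle('Position', [colStart, rowStart, size(PLATE,2), size(PLATE,1)], ...
          'EdgeColor', 'r', 'LineWidth', 2);
hold off;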

MATLAB - How to calculate uniformity of an image

I want to get the uniformity value of an image's intensity. In the description sheet it is described as follows:
General Description
If Z is a random variable indicating image intensity, its nth moment around the mean is
mu_n(Z) = sum_{i=0..L-1} (z_i - m)^n * p(z_i)
where m is the mean of Z, p(.) its histogram and L is the number of intensity levels.
For Uniformity
Uniformity is defined as
U = sum_{i=0..L-1} p(z_i)^2
Uniformity's maximum value is reached when all intensity levels are equal.
What I do not understand, and want to know, is what p and L actually are. I do not know how to calculate them. Additionally, what does the (.) stand for in p(.)?
EDIT
% V component of HSV image, contains 0~1 values in double type
% size(Iv) = 960 720
Z = Iv(:);
IL = unique(Z);       % Get all intensity levels
noinle = numel(IL);   % Number of intensity levels
p = hist(Z, noinle);  % Histogram (raw counts, not normalized)
U = 0;
for i = 1:noinle
    U = U + p(i)^2;
end
This operation results in very big numbers, which are not compatible with the given sample dataset, whose values are all doubles between 0 and 1.
Assuming your image is stored in an array called Img, you can obtain L and p with the following code:
[IL, ~, idx] = unique(Img(:));        % Get all intensity levels
L = numel(IL);                        % Number of intensity levels
p = accumarray(idx, 1) / numel(Img);  % Normalized histogram: p(i) is the fraction of pixels at level IL(i)
Unless it's defined otherwise in the paper, these are generally the definitions:
p is the histogram of the image: a breakdown of the intensities in the image put into buckets (bins). For the formula above it should be normalized so that its values sum to 1. In MATLAB the function for this is h = histogram(x,nbins) (or histcounts if you want the bin counts themselves).
L is the number of buckets. Generally this is something you decide based on your application.
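Putting this together, a minimal sketch (my own, assuming the V channel Iv from the question's edit, with double values in [0,1]):
Z = Iv(:);                           % all pixel intensities as a column vector
[IL, ~, idx] = unique(Z);            % intensity levels z_i
L = numel(IL);                       % number of intensity levels
p = accumarray(idx, 1) / numel(Z);   % normalized histogram, sum(p) == 1
U = sum(p.^2);                       % uniformity; maximal (U = 1) when every pixel has the same intensity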

Efficient inpaint with neighbouring pixels

I am implementing a simple algorithm to do in-painting on a "damaged" image. I have a predefined mask that specifies the area which needs to be fixed. My strategy is to start at the border of the masked area and in-paint each pixel with the mean of its non-zero neighbouring pixels, repeating until there are no unknown pixels left.
function R = inPainting(I, mask)
H = [1 2 1; 2 0 2; 1 2 1]; % weighting kernel (defined but not used below)
R = I;
n = 1;
[row,col,~] = find(~mask); %Find zeros in mask (area to be inpainted)
unknown = horzcat(row, col)';
while size(unknown,2) > 0
    new_unknown = [];
    new_R = R;
    for u = unknown
        r = u(1);
        c = u(2);
        nb = R(max((r-n), 1):min((r+n), end), max((c-n),1):min((c+n),end));
        nz = nb~=0;
        nzs = sum(nz(:));
        if nzs ~= 0 %We have non-zero neighbouring pixels. In-paint with their average.
            new_R(r,c) = sum(nb(:)) / nzs;
        else
            new_unknown = horzcat(new_unknown, u);
        end
    end
    unknown = new_unknown;
    R = new_R;
end
This works well, but it's not very efficient. Is it possible to vectorize such an approach, using mostly matrix operations? Does someone know of a more efficient way to implement this algorithm?
If I understand your problem statement, you are given a mask and you wish to fill in the pixels in this mask with the mean of the neighbourhood pixels that surround each pixel in the mask. Another constraint is that the image is defined such that any pixels at the masked spatial locations are zero in the image itself. You are starting from the border of the mask and are propagating information towards the inside of the mask. Given this algorithm, there is unfortunately no way you can do this with standard filtering techniques, as the current time step is dependent on the previous time step.
Image filtering mechanisms, like imfilter or conv2, can't work here because of this dependency.
As such, what I can do is help you speed up what is going on inside your loop, and hopefully this will give you some speed-up overall. I'm going to introduce you to a function called im2col. This is from the Image Processing Toolbox, and given that you have access to imfilter, you can use this function too.
im2col creates a 2D matrix such that each column is a pixel neighbourhood unrolled into a single vector. How it works is that each pixel neighbourhood in column major order is grabbed, so we get a pixel neighbourhood at the top left corner of the image, then move down one row, and another row and we keep going until we reach the last row. We then move one column over and repeat the same process. For each pixel neighbourhood that we have, it gets unrolled into a single vector, and the output would be a MN x K matrix where you have a neighbourhood size of M x N for each pixel neighbourhood and there are K neighbourhoods.
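For instance, a tiny illustrative example of that layout (mine, not from the original answer):
A = magic(4);                         % 4x4 test matrix
cols = im2col(A, [3 3], 'sliding');   % 9x4 result: four 3x3 neighbourhoods, each unrolled column-major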
Therefore, at each iteration of your loop, we can unroll the current inpainted image's pixel neighbourhoods into single vectors, determine which pixel neighborhoods are non-zero and from there, determine how many zero values there are for each of these selected pixel neighbourhood. After, we compute the mean for these non-zero columns disregarding the zero elements. Once we're done, we update the image and move to the next iteration.
What we're going to need to do first is pad the image with a 1 pixel border so that we're able to grab neighbourhoods that extend beyond the borders of the image. You can use padarray, also from the image processing toolbox.
Therefore, we can simply do this:
function R = inPainting(I, mask)
R = double(I); %// For precision
n = 1;
%// Change - column major indices
unknown = find(~mask); %Find zeros in mask (area to be inpainted)
%// Until we have searched all unknown pixels
while numel(unknown) ~= 0
    new_R = R;
    %// Change - take image at current iteration and
    %// create columns of pixel neighbourhoods
    padR = padarray(new_R, [n n], 'replicate');
    cols = im2col(padR, [2*n+1 2*n+1], 'sliding');
    %// Change - Access the right pixel neighbourhoods
    %// denoted by unknown
    nb = cols(:,unknown);
    %// Get total sum of each neighbourhood
    nbSum = sum(nb, 1);
    %// Get total number of non-zero elements per pixel neighbourhood
    nzs = sum(nb ~= 0, 1);
    %// Replace the right pixels in the image with the mean
    new_R(unknown(nzs ~= 0)) = nbSum(nzs ~= 0) ./ nzs(nzs ~= 0);
    %// Find new unknown pixels to look at
    unknown = unknown(nzs == 0);
    %// Update image for next iteration
    R = new_R;
end
%// Cast back to the right type
R = cast(R, class(I));
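A hypothetical usage example (the test image and mask region are my own, just to show the calling convention: pixels to be inpainted are zero in the image and false in the mask):
I = im2double(imread('cameraman.tif'));   % any grayscale test image
mask = true(size(I));
mask(100:140, 120:180) = false;           % false marks the area to be inpainted
J = I;  J(~mask) = 0;                     % "damaged" image: masked pixels set to zero
R = inPainting(J, mask);
figure; imshowpair(J, R, 'montage');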

Model division of cancer cells on a grid

I have a 5000x5000 grid, and I'm trying to implement a simple model of cancer division in MATLAB. Initially, it picks a random point (x,y) and makes that cell a cancer cell. On the first iteration, it divides: the parent cell stays in its place, and the daughter cell is randomly assigned to any neighbouring cell.
Easy so far.
My problem is this: on successive iterations, a daughter cell will often be assigned to a cell that already has a cancer cell. In this case, I want the daughter cell to take its place and "bump" the cell already there to an adjacent cell. If that adjacent cell is empty, it is filled and the process stops. If not, the cell already in that place is bumped and so on until the last cell finds an empty space and the process stops.
This should be simple, but I have no idea how to code it up and what kind of loops to use.
I'm a physical scientist rather than a programmer, so please treat me like a simpleton!
Here is a function I hacked together that roughly meets the specs you provided.
It does slow down as the number of cancerous cells gets large.
Basically I have a few variables: an NxN matrix that represents the grid of cell locations (I call this a plate, since grid is the name of an existing MATLAB function), and a vector of points that I can iterate through quickly. I pick a seed location and then run a while loop until the grid is full.
On each loop iteration I perform the following for each cell:
Generate a random number to determine if that cell should divide
Generate a random direction to divide
Find the first open plate position in that direction
Populate that position
I haven't tested it extensively but it appears to work.
function simulateCancer(plateSize, pDivide)
plate = zeros(plateSize, plateSize);
nCells = 1;
currentGeneration = 0; % generation counter
cellLocations = zeros(plateSize*plateSize,2);
initX = randi(plateSize);
initY = randi(plateSize);
cellLocations(nCells,:) = [initX, initY];
plate(initX, initY) = 1;
f = figure;
a = axes('Parent', f);
im = imagesc(plate, 'Parent', a);
while(nCells < (plateSize * plateSize))
    currentGeneration = currentGeneration+1;
    for i = 1:nCells
        divide = rand();
        if divide <= pDivide
            divideLocation = cellLocations(i,:);
            divideDir = randi(4);
            [x, y, v] = findNewLocation(divideLocation(1), divideLocation(2), plate, divideDir);
            if (v==1)
                nCells = nCells+1;
                plate(x,y) = 1;
                cellLocations(nCells,:) = [x,y];
            end
        end
    end
    set(im,'CData', plate);
    pause(.1);
end
end
function [x, y, valid] = findNewLocation(xin, yin, plate, direction)
x = xin;
y = yin;
valid = 1;
% keep looking for a new spot while the current spot is occupied
while( plate(x, y) == 1)
    switch direction
        case 1 % divide up
            y = y-1;
        case 2 % divide down
            y = y+1;
        case 3 % divide left
            x = x-1;
        case 4 % divide right
            x = x+1;
        otherwise
            warning('Invalid direction')
            x = xin;
            y = yin;
            return;
    end
    % if there has been a collision with a wall then just quit
    if y==0 || y==size(plate,2)+1 || x==0 || x==size(plate,1)+1 % hit a wall
        x = xin; % return original values to say no division happened
        y = yin;
        valid = 0;
        return;
    end
end
end
Note: Instead of thinking of pushing cells, I coded this in a way that leaves cells where they currently are and creates the new cell at the end of the row/column. Semantically it's different, but logically it has the same end result, as long as you don't care about the generations.
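A hypothetical call, assuming both functions are saved together in simulateCancer.m (the parameter values are mine):
simulateCancer(100, 0.3);   % 100x100 plate, 30% chance of division per cell per iteration
On the full 5000x5000 grid from the question this will be very slow, as noted above.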
Inspired by another question, I thought of using image processing techniques to implement this simulation. Specifically, we can use morphological dilation to spread the cancerous cells.
The idea is to dilate each pixel using a structuring element that looks like:
1 0 0
0 1 0
0 0 0
where the center is fixed, and the other 1 is placed at random at one of the other eight remaining positions. This would effectively extend the pixel in that direction.
The way the dilation is performed is by creating a blank image with only one pixel set, dilating it, then accumulating all the results using a simple OR operation.
To speed things up, we don't need to consider every pixel, only those on the perimeter of the current blocks formed by the clusters of cancerous cells. The pixels on the inside are already surrounded by cancer cells, and would have no effect if dilated.
To speed even further, we perform the dilation on all pixels that are chosen to be extended in the same direction in one call. Thus every iteration, we perform at most 8 dilation operations.
This made the code relatively fast (I tested up to 1000x1000 grid). Also it maintains the same timing across all iterations (will not slow down as the grid starts to fill up).
Here is my implementation:
%# initial grid
img = false(500,500);
%# pick 10 random cells, and set them as cancerous
img(randi(numel(img),[10 1])) = true;
%# show initial image
hImg = imshow(img, 'Border','tight', 'InitialMag',100);
%# build all possible structuring elements
%# each one dilates in one of the 8 possible directions
SE = repmat([0 0 0; 0 1 0; 0 0 0],[1 1 8]);
SE([1:4 6:9] + 9*(0:7)) = 1;
%# run simulation until all cells have cancer
BW = false(size(img));
while ~all(img(:)) && ishandle(hImg)
    %# find pixels on the perimeter of all "blocks"
    on = find(bwperim(img,8));
    %# percentage chance of division
    on = on( rand(size(on)) > 0.5 ); %# 50% probability of cell division
    if isempty(on), continue; end
    %# decide on a direction for each pixel
    d = randi(size(SE,3),[numel(on) 1]);
    %# group pixels according to direction chosen
    dd = accumarray(d, on, [8 1], @(x){x});
    %# dilate each group of pixels in the chosen directions
    %# to speed up, we perform one dilation for all pixels with same direction
    for i=1:8
        %# start with an image with only those pixels set
        BW(:) = false;
        BW(dd{i}) = true;
        %# dilate in the specified direction
        BW = imdilate(BW, SE(:,:,i));
        %# add results to final image
        img = img | BW;
    end
    %# show new image
    set(hImg, 'CData',img)
    drawnow
end
I also created an animation of the simulation on a 500x500 grid, with 10 random initial cancer cells (warning: the .gif image is approximately 1MB in size, so may take some time to load depending on your connection)