How to place an image A onto image B but ignore all 0 pixels of image A - matlab

I want to place a modelImage (RGB image) on baseImage (RGB image) where the center of modelImage will be placed at pPoint.
I already wrote the function, and it works. However, there are some 0 pixels in modelImage, and I do not want to place those 0 pixels onto baseImage. Can you help me modify my function?
% Put the modelImage onto baseImage at pPoint
function newImage = imgTranslate(modelImage,baseImage, pPoint)
[nRow nCol noDim] = size(modelImage);
pPointX = pPoint(1);
pPointY = pPoint(2);
startColumn = pPointY - nCol/2;
startRow = pPointX - nRow/2;
startColumn = round(startColumn);
startRow = round(startRow);
endColumn = startColumn+ nCol;
endRow = startRow+nRow;
%% Place modelImage onto baseImage, but ignore the 0 pixels of modelImage
baseImage(startRow:(endRow-1),startColumn:(endColumn-1),:) = modelImage;
newImage = baseImage;
Using FOR and IF works, but it will slow the program:
%%
for i = startRow: (endRow-1)
x = (i-startRow +1);
for j = startColumn : (endColumn-1)
y = j-startColumn + 1;
if modelImage(x,y,:)~=0
baseImage(i,j,:) = modelImage(x,y,:);
end
end
end
Is there any way to avoid using FOR and IF?

There may be a way to simplify this, but something like (untested):
% Indicator map of non-zeros
nzs = any(modelImage,3);
nzs = repmat(nzs, [1 1 3]);
% Crop destination image
tmpImage = baseImage(startRow:(endRow-1),startColumn:(endColumn-1),:);
% Copy the non-zero model pixels into the crop
tmpImage(nzs) = modelImage(nzs);
% Place into output image as before
baseImage(startRow:(endRow-1),startColumn:(endColumn-1),:) = tmpImage;

Or, with a per-channel mask (a zero in one channel is skipped only in that channel, rather than treating the whole pixel as non-zero when any channel is):
% Indicator map of non-zero entries, per channel
zs = (modelImage == 0);
nzs = ~zs;
% Crop destination image
tmpImage = baseImage(startRow:(endRow-1),startColumn:(endColumn-1),:);
% Copy the non-zero model entries into the crop
tmpImage(nzs) = modelImage(nzs);
% Place into output image as before
baseImage(startRow:(endRow-1),startColumn:(endColumn-1),:) = tmpImage;
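Putting the pieces together, the modified function might look like this (untested sketch; like the original, it assumes the model image fits entirely inside the base image):
% Put the modelImage onto baseImage at pPoint, skipping the 0 pixels of modelImage
function newImage = imgTranslate(modelImage, baseImage, pPoint)
[nRow, nCol, nDim] = size(modelImage);
startRow = round(pPoint(1) - nRow/2);
startColumn = round(pPoint(2) - nCol/2);
endRow = startRow + nRow;
endColumn = startColumn + nCol;
% mask of pixels that are non-zero in at least one channel, replicated to all channels
nzs = repmat(any(modelImage, 3), [1 1 nDim]);
% crop the destination region, copy only the non-zero model pixels, and put it back
tmpImage = baseImage(startRow:(endRow-1), startColumn:(endColumn-1), :);
tmpImage(nzs) = modelImage(nzs);
baseImage(startRow:(endRow-1), startColumn:(endColumn-1), :) = tmpImage;
newImage = baseImage;
end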


How to process image before applying bwlabel?

I = imread('Sub1.png');
figure, imshow(I);
I = imcomplement(I);
I = double(I)/255;
I = adapthisteq(I,'clipLimit',0.0003,'Distribution','exponential');
k = 12;
beta = 2;
maxIter = 100;
for i=1:length(beta)
[seg,prob,mu,sigma,it(i)] = ICM(I, k, beta(i), maxIter,5);
pr(i) = prob(end);
hold on;
end
figure, imshow(seg,[]);
and ICM function is defined as
function [segmented_image,prob,mu,sigma,iter] = ICM(image, k, beta, max_iterations, neigh)
[width, height, bands] = size(image);
image = imstack2vectors(image);
segmented_image = init(image,k,1);
clear c;
iter = 0;
seg_old = segmented_image;
while(iter < max_iterations)
[mu, sigma] = stats(image, segmented_image, k);
E1 = energy1(image,mu,sigma,k);
E2 = energy2(segmented_image, beta, width, height, k);
E = E1 + E2;
[p2,~] = min(E2,[],2);
[p1,~] = min(E1,[],2);
[p,segmented_image] = min(E,[],2);
prob(iter+1) = sum(p);
%find mismatch with previous step
[c,~] = find(seg_old~=segmented_image);
mismatch = (numel(c)/numel(segmented_image))*100;
if mismatch<0.1
iter
break;
end
iter = iter + 1;
seg_old = segmented_image;
end
segmented_image = reshape(segmented_image,[width height]);
end
The output of my algorithm is a segmentation matrix (seg) of size 305-by-305. When I use
imshow(seg,[]);
I am able to display the image. It shows the different components with varying gray values, but bwlabel returns 1. I want to display the connected components; I think bwlabel thresholds the image to 1. unique(seg) returns values 1 to 10, since the number of classes used in k-means is 10. I used
[label n] = bwlabel(seg);
RGB = label2rgb(label);
figure, imshow(RGB);
I need all the ellipse-like structures which are in between the two squares close to the middle of the image. I don't know the number of classes present in it.
Input image:
Ground truth:
My output:
If you want to split the label image into its connected components, you need to use a loop: extract the mask for each class, run bwlabel on it, and accumulate the per-class label images into one output label image.
u = unique(seg(:));
out = zeros(size(seg));
num_objs = 0;
for k = 1: numel(u)
mask = seg==u(k);
[L,N] = bwlabel(mask);
L(mask) = L(mask) + num_objs;
out = out + L;
num_objs = num_objs + N ;
end
mp = jet(num_objs);
figure,imshow(out,mp)
Something like this is produced:
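If you prefer the label2rgb display used in the question, the same out matrix can be shown that way too (untested; label2rgb accepts any matrix of nonnegative integer labels):
figure, imshow(label2rgb(out, 'jet', 'k'));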
I have tried to do everything from scratch. I hope it is of some help.
I have a processing chain that first extracts contours, with parameters tuned on a trial-and-error basis, I confess. The last "image" is given at the bottom; with it, you can easily select the connected components and do, for example, a reconstruction by markers using the imreconstruct operator.
clear all;close all;
I = imread('C:\Users\jean-marie.becker\Desktop\imagesJPG10\spinalchord.jpg');
figure,imshow(I);
J = I(:,:,1); % select one channel (here the first) of the jpg image
J = double(J < 50); % threshold the dark pixels (the image has not been inverted)
figure, imshow(J);
se = strel('disk',5);
J=J-imopen(J,se);
figure, imshow(J);
J = imopen(J,ones(1,15)); % favors long horizontal strokes
figure, imshow(J);
K = imdilate(J,ones(20,1),'same');
% connects vertically horizontal "segments" that are not too far apart
figure, imshow(K);
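For the reconstruction by markers mentioned at the top, a minimal sketch could look like this (the seed coordinates are hypothetical; put the seed inside the component you want to keep):
marker = false(size(K));
marker(150, 200) = true; % hypothetical seed point inside the component of interest
selected = imreconstruct(marker & logical(K), logical(K));
figure, imshow(selected);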

Convert image from sagittal view to transverse view in matlab

I have a coronary artery image which consists of 271 frames. I have converted the images into the sagittal view and found the inner surface of the artery by converting it into binary. After that I want to convert it back to the transverse view to carry over the sagittal-view information, but it gives me the wrong result: I want just two points in the transverse view, but it gives me whole lines.
Below is the image with the edge in the sagittal view:
Below is what I am getting:
clear , clc
mri = uint8(zeros(512,512,271)); % preallocate 3-D array
for frame=1:271
[mri(:,:,frame),map] = imread('image.tif',frame);
end
for angle = 0:1:180
if angle == 0
Img(:,:,:) = mri(:,:,:);
else
Img(:,:,:) = imrotate(mri(:,:,:),angle,'bilinear','crop');
end
ImgSg = squeeze(permute(Img, [1 3 2]));
for slice = 1: 271
Image(:,:,:) = Img(:,:,slice);
xcentre = floor(size(Image, 2) / 2);
ycentre = floor(size(Image, 1) / 2);
sliceC =(xcentre+ycentre)/2;
sliceimage = squeeze(permute(Image(:, sliceC, :), [1 3 2]));
if slice == 1
image3D = sliceimage;
else
image3D = cat(2, image3D, sliceimage);
end
end
ImgSg1 = ImgSg;
image3D(226:286,1:271)=10;
% imshow(ImgSg(:,:,256));
[counts,x] = imhist(image3D,16);
T = otsuthresh(counts);
BW = imbinarize(image3D,T);
% imshow3D(BW);
[rows, columns, numberOfColorChannels] = size(BW);
middleRow = floor(rows/2);
topHalf = BW(1:middleRow, :,:);
bottomHalf = BW(1+middleRow:end, :,:);
filterImg = bwareaopen(bottomHalf,25);
output_img = zeros(size(bottomHalf));
for col = 1: size(bottomHalf,2)
first_nnz_row = find(filterImg(:,col),1,'first');
output_img(first_nnz_row:end,col) = 1;
end
filterImg = bwareaopen(topHalf,25);
output_img1 = zeros(size(topHalf));
for col = 1: size(topHalf,2)
first_nnz_row1 = find(filterImg(:,col),1,'last');
output_img1(1:first_nnz_row1,col) = 1;
end
fullImage = [output_img1; output_img];
fullImage=logical(1 - fullImage);
i2 = bwperim(fullImage,8);
im1 = i2;
other_images = ImgSg;
blend = 0.2;
mask1 = im1;
fused = other_images;
for slice = 1 : size(fused, 2)
this_slice = fused(:,:,slice);
this_slice(mask1)=double(this_slice(mask1))*blend+double(im1(mask1)*(1-blend));
% imshow(this_slice)
fused(:,:,slice) = this_slice;
end
imshow(fused(:,:,256));
% imshow3D(fullImage);
ImgSg = squeeze(permute(fused, [1 3 2]));
imshow3D(ImgSg)
end
Here I have attached my code. It first reads the images, gets the sagittal view in 2D, finds the edge, and after finding the edge combines it with the original sagittal image and converts it back to the transverse view. I want the result like this:
Below is the output image:
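One remark on the code above: the sagittal/transverse switch is only an axis permutation, and permute with the order [1 3 2] is its own inverse, so a round trip should give back the original volume exactly (a minimal check, using the fused volume from the code):
sagittal = permute(fused, [1 3 2]); % transverse -> sagittal
roundTrip = permute(sagittal, [1 3 2]); % sagittal -> transverse again
isequal(roundTrip, fused) % returns true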

how to find the corners of rotated object in matlab?

I want to find the corners of objects.
I tried the following code:
Vstats = regionprops(BW2,'Centroid','MajorAxisLength','MinorAxisLength',...
'Orientation');
u = [Vstats.Centroid];
VcX = u(1:2:end);
VcY = u(2:2:end);
[VcY id] = sort(VcY); % sorting regions by vertical position
VcX = VcX(id);
Vstats = Vstats(id); % permute according sort
Bv = Bv(id);
Vori = [Vstats.Orientation];
VRmaj = [Vstats.MajorAxisLength]/2;
VRmin = [Vstats.MinorAxisLength]/2;
% find corners of vertebrae
figure,imshow(BW2)
hold on
% C = corner(VER);
% plot(C(:,1), C(:,2), 'or');
C = cell(size(Bv));
Anterior = zeros(2*length(C),2);
Posterior = zeros(2*length(C),2);
for i = 1:length(C) % for each region
cx = VcX(i); % centroid coordinates
cy = VcY(i);
bx = Bv{i}(:,2); % edge points coordinates
by = Bv{i}(:,1);
ux = bx-cx; % move to the origin
uy = by-cy;
[t, r] = cart2pol(ux,uy); % convert to polar coordinates
t = t - deg2rad(Vori(i)); % unrotate
for k = 1:4 % find corners (look each quadrant)
fi = t( (t>=(k-3)*pi/2) & (t<=(k-2)*pi/2) );
ri = r( (t>=(k-3)*pi/2) & (t<=(k-2)*pi/2) );
[rp, ip] = max(ri); % find farthest point
tc(k) = fi(ip); % save coordinates
rc(k) = rp;
end
[xc,yc] = pol2cart(tc+1*deg2rad(Vori(i)) ,rc); % re-apply the rotation, convert back to Cartesian coordinates
C{i}(:,1) = xc + cx; % return to previous place
C{i}(:,2) = yc + cy;
plot(C{i}([1,4],1),C{i}([1,4],2),'or',C{i}([2,3],1),C{i}([2,3],2),'og')
% save coordinates :
Anterior([2*i-1,2*i],:) = [C{i}([1,4],1), C{i}([1,4],2)];
Posterior([2*i-1,2*i],:) = [C{i}([2,3],1), C{i}([2,3],2)];
end
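(Note: Bv is not defined in the snippet above; it presumably holds one boundary point list per region. A minimal sketch of how it might be built, assuming bwboundaries on the same binary image and the same region ordering:)
Bv = bwboundaries(BW2, 'noholes'); % one [row col] boundary list per region (hypothetical reconstruction)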
My input image is:
I got the following output image:
The bottommost object in the image is not detected properly. How can I correct the code? It fails to work for a rotated image.
You can get all the points from the image and use kmeans clustering to partition the points into 8 groups. Once the partitioning is done, you have the points in each group and you can pick whichever points you want.
rgbImage = imread('your image') ;
%% crop out the unwanted white background from the image
grayImage = min(rgbImage, [], 3);
binaryImage = grayImage < 200;
binaryImage = bwareafilt(binaryImage, 1);
[rows, columns] = find(binaryImage);
row1 = min(rows);
row2 = max(rows);
col1 = min(columns);
col2 = max(columns);
% Crop
croppedImage = rgbImage(row1:row2, col1:col2, :);
I = rgb2gray(croppedImage) ;
%% Get the white regions
[y,x,val] = find(I) ;
%% use kmeans clustering
[idx,C] = kmeans([x,y],8) ;
%%
figure
imshow(I) ;
hold on
for i = 1:8
xi = x(idx==i) ; yi = y(idx==i) ;
id1=convhull(xi,yi) ;
coor = [xi(id1) yi(id1)] ;
[id,c] = kmeans(coor,4) ;
plot(coor(:,1),coor(:,2),'r','linewidth',3) ;
plot(c(:,1),c(:,2),'*b')
end
Now we are able to capture the regions; the boundary/convex hull points are in hand. You can do whatever math you want with the points.
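As a purely illustrative example of such math (using the c left over from the last loop iteration), you could pick the corner estimate closest to the top-left of the image:
[~, iTL] = min(c(:,1) + c(:,2)); % hypothetical choice: smallest x+y is nearest the top-left
plot(c(iTL,1), c(iTL,2), 'ys', 'MarkerSize', 10, 'LineWidth', 2);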
Did you solve the problem? I looked into it, and it seems that the rotation given by 'regionprops' is off. To fix that I've prepared a quick solution: I've dilated the image to close the gaps, found the 4 most distant peaks of each vertebra, and then checked whether a peak is on the left or on the right of the centerline (which I obtained by extrapolating from the sorted centroids). This method seems to work for this particular problem.
BW2 = rgb2gray(Image);
BW2 = imbinarize(BW2);
%dilate and erode will help to remove extra features of the vertebra
se = strel('disk',4,4);
BW2_dilate = imdilate(BW2,se);
BW2_erode = imerode(BW2_dilate,se);
sb = bwboundaries(BW2_erode);
figure
imshow(BW2)
hold on
centerLine = [];
corners = [];
for bone = 1:length(sb)
x0 = sb{bone}(:,2) - mean(sb{bone}(:,2));
y0 = sb{bone}(:,1) - mean(sb{bone}(:,1));
%save the position of the centroid
centerLine = [centerLine; [mean(sb{bone}(:,1)) mean(sb{bone}(:,2))]];
[th0,rho0] = cart2pol(x0,y0);
%make sure that the indexing starts at the dip, not at the corner
lowest_val = find(rho0==min(rho0));
rho1 = [rho0(lowest_val:end); rho0(1:lowest_val-1)];
th00 = [th0(lowest_val:end); th0(1:lowest_val-1)];
y1 = [y0(lowest_val:end); y0(1:lowest_val-1)];
x1 = [x0(lowest_val:end); x0(1:lowest_val-1)];
%detect corners, using smooth data to remove noise
[pks,locs] = findpeaks(smooth(rho1));
[pksS,idS] = sort(pks,'descend');
%4 most pronounced peaks are where the corners are
edgesFndCx = x1(locs(idS(1:4)));
edgesFndCy = y1(locs(idS(1:4)));
edgesFndCx = edgesFndCx + mean(sb{bone}(:,2));
edgesFndCy = edgesFndCy + mean(sb{bone}(:,1));
corners{bone} = [edgesFndCy edgesFndCx];
end
[~,idCL] = sort(centerLine(:,1),'descend');
centerLine = centerLine(idCL,:);
%extrapolate the spine centerline
yDatExt= 1:size(BW2_erode,1);
extrpLine = interp1(centerLine(:,1),centerLine(:,2),yDatExt,'spline','extrap');
plot(centerLine(:,2),centerLine(:,1),'r')
plot(extrpLine,yDatExt,'r')
%find edges to the left, and to the right of the centerline
for bone = 1:length(corners)
x0 = corners{bone}(:,2);
y0 = corners{bone}(:,1);
for crn = 1:4
xCompare = extrpLine(y0(crn));
if x0(crn) < xCompare
plot(x0(crn),y0(crn),'go','LineWidth',2)
else
plot(x0(crn),y0(crn),'ro','LineWidth',2)
end
end
end
Solution:

How project Velodyne point clouds on image? (KITTI Dataset)

Here is my code to project Velodyne points into the images:
cam = 2;
frame = 20;
% compute projection matrix velodyne->image plane
R_cam_to_rect = eye(4);
[P, Tr_velo_to_cam, R] = readCalibration('D:/Shared/training/calib/',frame,cam)
R_cam_to_rect(1:3,1:3) = R;
P_velo_to_img = P*R_cam_to_rect*Tr_velo_to_cam;
% load and display image
img = imread(sprintf('D:/Shared/training/image_2/%06d.png',frame));
fig = figure('Position',[20 100 size(img,2) size(img,1)]); axes('Position',[0 0 1 1]);
imshow(img); hold on;
% load velodyne points
fid = fopen(sprintf('D:/Shared/training/velodyne/%06d.bin',frame),'rb');
velo = fread(fid,[4 inf],'single')';
% keep only every 5th point, for display speed
velo = velo(1:5:end,:);
fclose(fid);
% remove all points behind image plane (approximation)
idx = velo(:,1)<5;
velo(idx,:) = [];
% project to image plane (exclude luminance)
velo_img = project(velo(:,1:3),P_velo_to_img);
% plot points
cols = jet;
for i=1:size(velo_img,1)
col_idx = round(64*5/velo(i,1));
plot(velo_img(i,1),velo_img(i,2),'o','LineWidth',4,'MarkerSize',1,'Color',cols(col_idx,:));
end
where the readCalibration function is defined as:
function [P, Tr_velo_to_cam, R_cam_to_rect] = readCalibration(calib_dir,img_idx,cam)
% load 3x4 projection matrix
P = dlmread(sprintf('%s/%06d.txt',calib_dir,img_idx),' ',0,1);
Tr_velo_to_cam = P(6,:);
R_cam_to_rect = P(5,1:9);
P = P(cam+1,:);
P = reshape(P ,[4,3])';
Tr_velo_to_cam = reshape(Tr_velo_to_cam ,[3,4])';
R_cam_to_rect = reshape(R_cam_to_rect ,[3,3])';
end
But here is the result:
What is wrong with my code? I changed the "cam" variable from 0 to 3 and none of them worked. You can find a sample calibration file at this link:
How to understand KITTI camera calibration files
I fixed it myself. Here is the modification in the readCalibration function:
Tr_velo_to_cam = P(6,:);
Tr_velo_to_cam = reshape(Tr_velo_to_cam ,[4,3])';
Tr_velo_to_cam = [Tr_velo_to_cam;0 0 0 1];
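For reference, the reason the [4,3] reshape followed by a transpose is needed is that MATLAB's reshape fills column-first, while the calibration line stores the 3x4 matrix row by row. A quick sanity check with a dummy vector:
v = 1:12; % stands in for one 12-number calibration line
reshape(v,[4,3])' % 3x4 matrix in row-major order: [1 2 3 4; 5 6 7 8; 9 10 11 12]
reshape(v,[3,4])' % wrong: a 4x3 matrix with the entries scrambled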

Motion History Image in Matlab

I am working on a project about action recognition using motion history images (MHI) in matlab. I am new to this field. I did background subtraction using the frame-differencing method to get images that contain only the moving person. Now I want to compute the MHI. I found the following code for MHI, but I did not understand what fg{1} is and how to use it. Any help will be appreciated. Thank you.
vid= VideoReader('PS7A1P1T1.avi');
n = vid.NumberOfFrames;
fg = cell(1, n);
for i = 1:n
frame = read(vid,i);
frame = rgb2gray(frame);
fg{i} = frame;
end
%---------------------------------------------------------------
%background subtraction using frame differencing method
thresh = 25;
bg = fg{1}; % read in 1st frame as background frame
% ----------------------- set frame size variables -----------------------
fr_size = size(bg);
width = fr_size(2);
height = fr_size(1);
% --------------------- process frames -----------------------------------
for i = 2:n
fr = fg{i}; % read in frame
fr_diff = abs(double(fr) - double(bg)); % cast operands as double to avoid negative overflow
for j=1:width % if fr_diff > thresh pixel in foreground
for k=1:height
if ((fr_diff(k,j) > thresh))
fg{i}(k,j) = fr(k,j);
else
fg{i}(k,j) = 0;
end
end
end
bg = fr;
imshow(fg{i})
end
out = MHI(fg);
%----------------------------------------
function MHI = MHI(fg)
% Initialize the output, MHI a.k.a. H(x,y,t,T)
MHI = fg;
% Define MHI parameter T
T = 15; % # of frames being considered; maximal value of MHI.
% Load the first frame
frame1 = fg{1};
% Get dimensions of the frames
[y_max x_max] = size(frame1);
% Compute H(x,y,1,T) (the first MHI)
MHI{1} = fg{1} .* T;
% Start global loop for each frame
for frameIndex = 2:length(fg)
%Load current frame from image cell
frame = fg{frameIndex};
% Begin looping through each point
for y = 1:y_max
for x = 1:x_max
if (frame(y,x) == 255)
MHI{frameIndex}(y,x) = T;
else
if (MHI{frameIndex-1}(y,x) > 1)
MHI{frameIndex}(y,x) = MHI{frameIndex-1}(y,x) - 1;
else
MHI{frameIndex}(y,x) = 0;
end
end
end
end
end
fg{1} is most likely the first frame of a grayscale video. Given your comments, you are using the VideoReader class to read in frames. As such, read in each frame individually, convert it to grayscale, then place it in a cell of a cell array. When you're done, call the function.
Here's the code modified from your comments to suit this task:
vid = VideoReader('PS7A1P2T1.avi');
n = vid.NumberOfFrames;
fg = cell(1, n);
for i = 1:n
frame = read(vid,i);
frame = rgb2gray(frame);
fg{i} = frame;
end
You can then call the MHI script:
out = MHI(fg);
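Each cell of out then holds the motion history image for one frame, where larger values correspond to more recent motion (the maximum is the parameter T = 15 inside the function). To inspect the last one with automatic intensity scaling, something like this should work:
figure, imshow(out{end}, []); % brighter pixels = more recent motion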