Suppose we take any image from the Internet and copy or move some part of it to another area inside the same image. Using MATLAB, the result should show where that part was copied/moved from and where it was pasted.
a = imread('obama.jpg');
a = rgb2gray(a);
[x1, y1] = size(a);
% Crop a region (width 200, height 150) starting at (x = 170, y = 110)
b = imcrop(a, [170 110 200 150]);
[x2, y2] = size(b);
% Paste the cropped pixels onto a black canvas of the original size
c = uint8(zeros(x1, y1));
for i = 1:x2
    for j = 1:y2
        c(i+169, j+109) = b(i,j);
    end
end
[x3, y3] = size(c)
subplot(1,3,1), imshow(a);
subplot(1,3,2), imshow(b);
subplot(1,3,3), imshow(c);
Code
%%// Input data and params
a = imread('Lenna.png');
a = rgb2gray(a);
src_xy = [300,300]; %% Starting X-Y of the source from where the portion would be cut from
src_dims = [100 100]; %% Dimensions of the portion to be cut
tgt_xy = [200,200]; %% Starting X-Y of the target to where the portion would be put into
%%// Get masks
msrc = false(size(a));
msrc(src_xy(1):src_xy(1)+src_dims(1)-1,src_xy(2):src_xy(2)+src_dims(2)-1)=true;
mtgt = false(size(a));
mtgt(tgt_xy(1):tgt_xy(1)+src_dims(1)-1,tgt_xy(2):tgt_xy(2)+src_dims(2)-1)=true;
%%// If you would like to have a cursor based cut, explore ROIPOLY, GINPUT - e.g. - mask1 = roipoly(a)
mask1 = msrc;
a2 = double(a);
%%// Get crop-mask boundary and dilate it a bit to show it as the "frame" on the original image
a2(imdilate(edge(mask1,'sobel'),strel('disk',2))) = 0;
a2 = uint8(a2);
%%// Display original image with cropped portion being highlighted
figure,imshow(a2);title('Cropped portion highlighted')
figure,imshow(a);title('Original Image')
figure,imshow(mask1);title('Mask that was cropped')
img1 = uint8(bsxfun(@times, double(a), mask1));
figure,imshow(img1);title('Masked portion of image')
%%// Get and display original image with cropped portion being overlayed at the target coordinates
a_final = a;
a_final(mtgt) = a(msrc);
figure,imshow(uint8(a_final));title('Original image with the cut portion being overlayed')
Output
Please note that to use RGB images, you would need to tinker a bit more with the above code.
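For reference, here is one way the overlay step could be adapted for RGB input. This is only a minimal sketch, assuming the same src_xy, src_dims and tgt_xy as above, with the 2-D masks simply applied channel by channel:
%%// Minimal RGB sketch (assumption: reuses src_xy, src_dims, tgt_xy from above)
a_rgb = imread('Lenna.png'); %%// keep all three channels this time
msrc2 = false(size(a_rgb,1), size(a_rgb,2));
msrc2(src_xy(1):src_xy(1)+src_dims(1)-1, src_xy(2):src_xy(2)+src_dims(2)-1) = true;
mtgt2 = false(size(a_rgb,1), size(a_rgb,2));
mtgt2(tgt_xy(1):tgt_xy(1)+src_dims(1)-1, tgt_xy(2):tgt_xy(2)+src_dims(2)-1) = true;
a_final_rgb = a_rgb;
for ch = 1:size(a_rgb,3) %%// copy the patch channel by channel
    src = a_rgb(:,:,ch);
    tmp = a_final_rgb(:,:,ch);
    tmp(mtgt2) = src(msrc2);
    a_final_rgb(:,:,ch) = tmp;
end
figure,imshow(a_final_rgb);title('RGB image with the cut portion being overlayed')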
I am writing a short MATLAB script that uses im2bw to calculate the area of objects in an image. I am trying to validate that the areas are being calculated correctly, but in the reference image I am using, the last square's area is 2% off and I cannot figure out why.
myFolder = 'Your Directory';
% Get a list of all files in the folder with the desired file name pattern.
filePattern = fullfile(myFolder, '*.png'); % Change to whatever pattern you need.
theFiles = dir(filePattern);
for k = 1 : length(theFiles)
    baseFileName = theFiles(k).name;
    fullFileName = fullfile(theFiles(k).folder, baseFileName);
    J = imread(fullFileName);
    %J = imcrop(I,[500,200,4400,3500]);
    BW = im2bw(J,0.3);
    BW2 = imcomplement(BW);
    BW3 = imfill(BW2,4,'holes');
    %figure(1)
    %imshowpair(I,BW3,'montage')
    stats = regionprops(BW3,'area','PixelList','majoraxislength','minoraxislength');
    area = zeros(size(stats));
    for i = 1:size(stats,1)
        area(i) = stats(i).Area;
    end
    scale = 1e-6;
    %scale = 3.134755344e-8;
    cutoff = 0;
    area = area*scale;
    stats(area<cutoff)=[];
    area(area<cutoff)=[];
    writematrix(area,'Your Directory','WriteMode','append')
end
figure(1)
imshowpair(J,BW3,'montage')
The image is 1000x1000; the squares are 20x20, 100x100, 100x200, and 200x200, respectively.
I assumed the reference image in reality was 1 m x 1 m and scaled accordingly, but the 200x200 square does not seem to come out exactly right.
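For a quick sanity check of the measured values, the expected areas can be computed directly from the stated assumptions (a 1000x1000 image representing 1 m x 1 m, i.e. scale = 1e-6 m^2 per pixel, with the square sizes listed above):
% Expected areas under the 1 m x 1 m assumption (1e-6 m^2 per pixel)
square_sides = [20 20; 100 100; 100 200; 200 200]; % square dimensions in pixels
expected_px  = prod(square_sides, 2);              % 400, 10000, 20000, 40000 pixels
expected_m2  = expected_px * 1e-6;                 % 4e-4, 0.01, 0.02, 0.04 m^2
disp([expected_px expected_m2])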
clear_all();
image_name = 'canny_test.png';
% no of pixels discarded on border areas
discard_pixels = 10;
% read image and convert to grayscale
input_image = gray_imread(image_name);
% discard border area
input_image = discard_image_area(input_image, discard_pixels);
% create a binary image
binary_image = edge(input_image,'canny');
imshow(binary_image);
Input
Expected Outcome
Actual Outcome
Here, we see that the borderlines of the image are being detected by the Canny Edge Detector which is not my expected outcome.
How can I achieve this?
Source Code
function [output_image] = discard_image_area( input_image, pixel_count)
    output_image = input_image;
    [height, width] = size(input_image);
    % discard top
    output_image(1:pixel_count, :) = 0;
    % discard bottom
    h = height - pixel_count;
    output_image(h:height, :) = 0;
    % discard left
    output_image(:,1:pixel_count) = 0;
    % discard right
    output_image(:,(width-pixel_count):width) = 0;
end

function img = gray_imread( image_name )
    I = imread(image_name);
    if(is_rgb(I))
        img = rgb2gray(I);
    elseif (is_gray(I))
        img = I;
    end
end
Apply discard_image_area after applying the edge function; otherwise the zeroed-out border area creates an apparent boundary of its own. That is, do this:
image_name = 'canny_test.png';
discard_pixels = 10;
input_image = rgb2gray(imread(image_name)); % there is no such thing as gray_imread
% create a binary image
binary_image = edge(input_image,'canny');
% discard border area
binary_image = discard_image_area(binary_image , discard_pixels);
imshow(binary_image);
Output:
The answer is simple: your function discard_image_area sets the image values to 0 near the borders of the image.
Hence it creates a step between the image values and 0.
This is exactly what the Canny edge detector looks for.
You can easily see it if you display the image after applying the function.
Just don't use that function.
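To see those artificial step edges for yourself, a minimal check (reusing input_image and discard_pixels from the question) could be:
% Show the black frame that discard_image_area introduces, and how Canny reacts to it
masked = discard_image_area(input_image, discard_pixels);
figure, imshow(masked);                % note the zeroed border
figure, imshow(edge(masked, 'canny')); % Canny fires on the border step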
For some reason, the following code only displays the masked image with the ROI at (10, 10) and a width and height of 100, 100, which are the initial values. It seems the mask does not update even after the getPosition call. Could anyone explain this issue?
I = imread('/Users/imageuser/Documents/PT300.tif');
h = imshow(I);
% define circular roi by square bounding box
x = 10;
y = 10;
d1 = 100;
d2 = 100;
e = imellipse(gca, [x y d1 d2]);
% roi can be interactively moved/adjusted
% do not close figure window before createMask is called
%%% these lines are only needed if you move or resize the roi
pos = getPosition(e);
x = pos(1);
y = pos(2);
d1 = pos(3);
d2 = pos(4);
%%%
BW = createMask(e,h);
pause;
imshow(BW);
You need to put these lines (note that I inverted their order):
pause;
BW = createMask(e,h);
before calling getPosition, otherwise the new position is not updated.
Whole code:
clear
clc
close all
I = imread('coins.png');
h = imshow(I);
% define circular roi by square bounding box
x = 10;
y = 10;
d1 = 100;
d2 = 100;
e = imellipse(gca, [x y d1 d2]);
pause;
BW = createMask(e,h);
% roi can be interactively moved/adjusted
% do not close figure window before createMask is called
%%% these lines are only needed if you move or resize the roi
pos = getPosition(e)
x = pos(1);
y = pos(2);
d1 = pos(3);
d2 = pos(4);
%%%
figure
imshow(BW);
sample output after dragging the ROI:
Yay!
Note: As mentioned by juicestain, instead of using pause to halt execution of the program until the user is done adjusting the ROI, you can use wait, which waits until the user double-clicks the ROI object rather than requiring a key press as pause does.
Therefore, you could replace the call to pause with a call to wait(e).
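A minimal sketch of that variant, with the rest of the setup unchanged:
e = imellipse(gca, [x y d1 d2]);
wait(e);               % blocks until the ROI is double-clicked
BW = createMask(e,h);
pos = getPosition(e);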
I have an image of a product on a solid background that I would like to crop as close as possible to the product.
I brighten it and find the edges with the following code:
limits = stretchlim(original, 0.01);
img1 = imadjust(original, limits, []);
img = rgb2gray(img1);
BW = edge(img,'canny',0.2);
[B,L,N,A] = bwboundaries(BW);
figure; imshow(BW); hold on;
for k = 1:length(B)
    if(~sum(A(k,:)))
        boundary = B{k};
        plot(boundary(:,2), boundary(:,1), 'r', 'LineWidth', 2); hold on;
    end
end
Which gives me the following image:
The following code gives me rectangles on every blob/line detected:
blobMeasurements = regionprops(logical(BW), 'BoundingBox');
numberOfBlobs = size(blobMeasurements, 1);
rectCollection = [];
for k = 1 : numberOfBlobs % Loop through all blobs.
    rects = blobMeasurements(k).BoundingBox; % Get list of pixels in current blob.
    x1 = rects(1);
    y1 = rects(2);
    x2 = x1 + rects(3);
    y2 = y1 + rects(4);
    x = [x1 x2 x2 x1 x1];
    y = [y1 y1 y2 y2 y1];
    rectCollection(k,:,:,:) = [x1; y1; x2; y2];
end
I'm able to then draw a bounding rectangle and crop with all these points collected with the following code:
% get min max
xmin=min(rectCollection(:,1))-1;
ymin=min(rectCollection(:,2))-1;
xmax=max(rectCollection(:,3))+1;
ymax=max(rectCollection(:,4))+1;
% define outer rect:
outer_rect=[xmin ymin xmax-xmin ymax-ymin];
crop = imcrop(original,outer_rect);
Which gives me the following result:
My question is how can I get a polygon as close as possible to the product and crop it with the polygon or, alternatively, just crop as close as possible to the product and its cap?
If you want a polygon rather than a bounding box, I think you need to generate a mask: a matrix the same size as your image, with value 1 if the pixel is on your object and 0 if not.
I heard about an algorithm (sorry, I can't find the name; I'll edit this post if I find it) which works with a lasso:
step 0: your lasso is your bounding box.
step i: segment the lasso and, for each part, retract it if the color (or any other) gradient in the image is below a fixed value.
step n (last): when you cannot retract any part of the lasso, it is finished. Inside your lasso: your object. Outside: the background.
As I remember, there is a lot of work involved in this method: the definition of the lasso, the retraction step, and the rigidity of the lasso (to avoid too much deformation).
Besides the lasso method, you can look into the watershed transform; it can also work for your problem.
Finally, if you generate the pictures yourself, take the shots against a plain background (green, pink, blue, etc.) and use a simple chroma key.
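As an illustration of that last suggestion, here is a minimal chroma-key sketch; the file name product_green.jpg, the uniformly green background, and the 0.25 threshold on the normalized channel difference are all assumptions:
rgb = im2double(imread('product_green.jpg')); % hypothetical shot against a green background
% background pixels: green channel clearly dominates red and blue
bgMask = rgb(:,:,2) - max(rgb(:,:,1), rgb(:,:,3)) > 0.25; % threshold is an assumption
fgMask = imfill(~bgMask, 'holes'); % keep the product, fill interior gaps
% tight crop around the largest foreground region
stats = regionprops(fgMask, 'BoundingBox', 'Area');
[~, idx] = max([stats.Area]);
cropped = imcrop(imread('product_green.jpg'), stats(idx).BoundingBox);
figure, imshow(cropped);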
Using active contours also looks like a good approach but getting a nice mask is troublesome.
original = imread('1.jpg');
level = graythresh(original);
img = rgb2gray(original);
mask = im2bw(img,level+0.1);
mask = imfill(~mask,'holes');
bw = activecontour(img,mask);
rows = numel(original(:,1,1));
columns = numel(original(1,:,1));
for i = 1:rows
    for j = 1:columns
        if ( bw(i,j,1) == 0 )
            original(i,j,:) = 255; % set background pixels to white
        end
    end
end
imshow(original);
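If the goal is also to crop tightly around the product, one possible follow-up (a sketch, assuming bw contains a single dominant blob) is to take the bounding box of the largest region in the active-contour mask:
% Tight crop around the largest blob of the active-contour mask
stats = regionprops(bw, 'BoundingBox', 'Area');
[~, idx] = max([stats.Area]);
crop = imcrop(original, stats(idx).BoundingBox);
figure, imshow(crop);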
I have this code for converting a fisheye image into rectangular form, but the code can only perform this operation on a grayscale image. Can anybody help convert the code to perform the operation on an RGB image? The code is as follows:
edit: I have updated the code to include functionality that performs interpolation in each color channel, but this seems to distort the output image. See the pictures below.
function imP = FISHCOLOR (imR)
rMin=0.1;
rMax=1;
[Mr, Nr, Dr] = size(imR); % size of rectangular image
xRc = (Mr+1)/2; % co-ordinates of the center of the image
yRc = (Nr+1)/2;
sx = (Mr-1)/2; % scale factors
sy = (Nr-1)/2;
M=size(imR,1);N=size(imR,2);
dr = (rMax - rMin)/(M-1);
dth = 2*pi/N;
r=rMin:dr:rMin+(M-1)*dr;
th=(0:dth:(N-1)*dth)';
[r,th]=meshgrid(r,th);
x=r.*cos(th);
y=r.*sin(th);
xR = x*sx + xRc;
yR = y*sy + yRc;
imP = zeros(M, N); % initialize the final matrix
for k = 1:3 % colors
    T = imR(:,:,k);
    Ichannel = interp2(T, xR, yR);
    imP(:,:,k) = Ichannel; % add k channel
end
SOLVED
Input image <- Image link
Grayscale output, what I would like in color <- Image link
Try changing these three lines:
[Mr Nr] = size(imR); % size of rectangular image
...
imP = zeros(M, N);
...
imP = interp2(imR, xR, yR); %interpolate (imR, xR, yR);
...to these:
[Mr Nr Pr] = size(imR); % size of rectangular image
...
imP = zeros(M, N, Pr);
...
for dim = 1:Pr
    imP(:,:,dim) = interp2(imR(:,:,dim), xR, yR); %interpolate (imR, xR, yR);
end