I am working on dress feature identification using opencv.
As a first step, I need to segment the t-shirt by removing the face and hands from the image.
Any suggestion is appreciated.
I suggest the following approach:
Use Adrian Rosebrock's skin detection algorithm to detect the skin (thanks to Rosa Gronchi for his comment).
Use a region growing algorithm on the variance map. The initial seeds can be calculated using stage 1 (see the attached code for more information).
code:
%stage 1: skin detection - Adrian Rosebrock solution
im = imread(<path to input image>);
hsb = rgb2hsv(im)*255;
skinMask = hsb(:,:,1) > 0 & hsb(:,:,1) < 20;
skinMask = skinMask & (hsb(:,:,2) > 48 & hsb(:,:,2) < 255);
skinMask = skinMask & (hsb(:,:,3) > 80 & hsb(:,:,3) < 255);
skinMask = imclose(skinMask,strel('disk',6));
%stage 2: calculate top, left and right centroid from the different connected
%components of the skin
stats = regionprops(skinMask,'centroid');
topCentroid = stats(1).Centroid;
rightCentroid = stats(1).Centroid;
leftCentroid = stats(1).Centroid;
for x = 1 : length(stats)
    centroid = stats(x).Centroid;
    if topCentroid(2) > centroid(2)
        topCentroid = centroid;
    elseif centroid(1) < leftCentroid(1)
        leftCentroid = centroid;
    elseif centroid(1) > rightCentroid(1)
        rightCentroid = centroid;
    end
end
%first seed - the average of the leftmost and rightmost centroids.
centralSeed = int16((rightCentroid+leftCentroid)/2);
%second seed - a pixel which is right below the face centroid.
faceSeed = int16(topCentroid);
faceSeed(2) = faceSeed(2)+40;
%stage 3: std filter
varIm = stdfilt(rgb2gray(im));
%stage 4 - region growing on varIm using faceSeed and centralSeed
res1=regiongrowing(varIm,centralSeed(2),centralSeed(1),8);
res2=regiongrowing(varIm,faceSeed(2),faceSeed(1),8);
res = res1|res2;
%noise reduction
res = imclose(res,strel('disk',3));
res = imopen(res,strel('disk',2));
result after stage 1 (skin detection):
final result:
Comments:
Stage 1 is calculated using the following algorithm.
The region growing function can be downloaded here.
The solution is not perfect. For example, it may fail if the texture of the shirt is similar to the texture of the background. But I think that it can be a good start.
Another possible improvement is to use a better region growing algorithm, one which doesn't grow into the skinMask area. Also, instead of running the region growing algorithm twice independently, the second call could be seeded from the result of the first one.
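To illustrate the first improvement, here is a minimal sketch of a mask-aware region growing. The function name, the fixed-difference growth criterion and the 4-connectivity are my own assumptions; the downloadable regiongrowing function mentioned above behaves differently:

function res = regionGrowingMasked(varIm, seedRow, seedCol, thresh, forbiddenMask)
%BFS-style region growing from (seedRow,seedCol) that never enters forbiddenMask
res = false(size(varIm));
queue = [seedRow, seedCol];
seedVal = varIm(seedRow, seedCol);
while ~isempty(queue)
    p = queue(1,:);
    queue(1,:) = [];
    if p(1) < 1 || p(2) < 1 || p(1) > size(varIm,1) || p(2) > size(varIm,2)
        continue; %outside the image
    end
    if res(p(1),p(2)) || forbiddenMask(p(1),p(2))
        continue; %already visited, or inside the forbidden (skin) mask
    end
    if abs(varIm(p(1),p(2)) - seedVal) > thresh
        continue; %too different from the seed value
    end
    res(p(1),p(2)) = true;
    %push the 4-connected neighbours
    queue = [queue; p(1)+1 p(2); p(1)-1 p(2); p(1) p(2)+1; p(1) p(2)-1];
end
end

Stage 4 could then call, for example, res1 = regionGrowingMasked(varIm, centralSeed(2), centralSeed(1), 8, skinMask); so that the grown shirt region never leaks into the detected skin.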
I am performing SIFT matching with VLFEAT in Matlab.
A single match is simple to display: I followed the tutorial.
Update 1: (extracting the problem from my needs)
Next, I consider 4 different views of the scene: I want to match the features found in the first camera (bottom left) with the others.
Images are already undistorted.
I could match the third image: I managed to correct the coordinates with offsets for a proper display.
I set a high threshold (fewer points) to make the image easier to read.
My code is posted below, followed by the question.
A note on naming (it does not affect the question or the answer, only the variable names in my code):
since my 4 cameras are in fact a stereo camera moving in space, the 4 cameras (and their outputs) are:
Bottom left: left camera, named a. The features in this image are fa, descriptors da...
Bottom right: right camera, named b. The features in this image are fb, descriptors db...
Top left: the left camera a in the previous instant. The features in this image are fa_old, descriptors da_old...
Top right: the right camera b in the previous instant. The features in this image are fb_old, descriptors db_old...
Movements are small, so I expected that SIFT could retrieve the same points.
This code finds the points and performs the "blue matching" and the "red matching":
%classic instruction for searching feature
[fa,da] = vl_sift((Ia_f),'NormThresh', thresh_N, 'EdgeThresh', thresh_E) ;
% with the same line I obtain
%fa are features in the current left image (da are descriptors)
%fb are features in the current right image (db... )
%fa_old are features in the previous left image
%fb_old are features in the previous right image
%code from tutorials (find the feature)
[matches, scores] = vl_ubcmatch(da,db,thresh_SIFT) ;
[drop, perm] = sort(scores, 'descend') ;
matches = matches(:, perm);
%my code
figure(1) ; %clf ;
axis equal;
%prepare the image
imshow(cat(1,(cat(2, Ia_v_old, Ib_v_old)),cat(2,Ia_v,Ib_v)));
%matching between the left frames (current and previous)
[matches_prev, scores_prev] = vl_ubcmatch(da,da_old,thresh_SIFT) ;
[drop_prev, perm_prev] = sort(scores_prev, 'descend') ;
matches_prev = matches_prev(:, perm_prev) ;
%find index of descriptors in common, write them in order
I = intersect(matches(1,:), matches_prev(1,:),'stable');
MI_1 = arrayfun(@(x)find(matches(1,:)==x,1),I);
MI_2 = arrayfun(@(x)find(matches_prev(1,:)==x,1),I);
matches_M = matches(:,MI_1(:));
matches_prev_M = matches_prev(:,MI_2(:));
%features coordinates in the current images (bottom)
xa = fa(1,matches_M(1,:)) + offset_column ;
xb = fb(1,matches_M(2,:)) + size(Ia,2); %+offset_column-offset_column ;
ya = fa(2,matches_M(1,:)) + offset_row + size(Ia,1);
yb = fb(2,matches_M(2,:)) + offset_row + size(Ia,1);
%matching "in space" (blue lines)
space_corr = line([xa ; xb], [ya ; yb]) ;
set(space_corr,'linewidth', 1, 'color', 'b') ;
%plotting features
fa(1,:) = fa(1,:) + offset_column ;
fa(2,:) = fa(2,:) + offset_row + size(Ia,1);
vl_plotframe(fa(:,matches_M(1,:))) ;
fb(1,:) = fb(1,:) + size(Ia,2) ;
fb(2,:) = fb(2,:) + offset_row + size(Ia,1);
vl_plotframe(fb(:,matches_M(2,:))) ;
%matching "in time" %corrx and coor y are corrected offsets
xa2 = fa_old(1,matches_prev_M(2,:)) + corrx; %coordinate per display
ya2 = fa_old(2,matches_prev_M(2,:)) - size(Ia,1) + corry;
fa_old(1,:) = fa_old(1,:) + corrx;
fa_old(2,:) = fa_old(2,:) - size(Ia,1) + corry;
fb_old(1,:) = fb_old(1,:) + corrx ;
fb_old(2,:) = fb_old(2,:) - size(Ia,1) + corry;
%plot red lines
time_corr = line([xa ; xa2], [ya ; ya2]) ;
set(time_corr,'linewidth', 1, 'color', 'r') ;
%plot feature in top left image
vl_plotframe(fa_old(:,matches_prev_M(2,:))) ;
%plot feature in top right image
vl_plotframe(fb_old(:,matches_ex_M(2,:))) ;
Everything works. I thought I could repeat a few lines of code, build the proper matches_ex_M index array in the proper order, and finally connect the features in the last (top right) image (with any one of the other images):
% one of many tries (all wrong)
[matches_ex, scores_ex] = vl_ubcmatch(da_old,db_old,thresh_SIFT) ;
[drop_ex, perm_ex] = sort(scores_ex, 'descend') ;
matches_ex = matches_ex(:, perm_ex);
Ib = intersect(matches_prev_M(2,:), matches_ex(1,:),'stable');
MIb_2 = arrayfun(@(x)find(matches_ex(1,:)==x,1),Ib);
matches_ex_M = matches_ex(:,MIb_2(:));
The problem is that a new intersection causes a new reordering, and all the matches end up wrong.
I have to admit I have no more ideas, after trying all possible combinations of matching index arrays. The difficulty is that I can neither intersect 3 arrays simultaneously nor change their order. Features are displayed correctly in all 4 images, and I can perform single matches from any image to another in separate scripts. The top right image contains the same features, but in a different order.
What I obtain (obviously wrong)
Summarizing my problem:
I thought I should change the order of the points in the top right
frame to get a good "yellow" matching, but I don't know how to do
it without changing the order in the top left (that would destroy the
"red" matching and/or the "blue" matching).
Any idea? Any different strategies?
Thank you all in advance.
UPDATE 2: After considering a switch from MATLAB + VLFEAT to Python (2.7) + OpenCV (2.4.13) (I'd still prefer a solution in Matlab and VLFEAT), I found this answer.
Someone did it in C++, but I'm unable to convert it to either Matlab or Python.
A pythonic solution could be accepted as well (added proper tags for that reason).
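For what it's worth, one strategy that might avoid the re-ordering problem (a sketch based on my own assumptions, not verified against the images above) is to chain all pairs of matches through the fa indices, keeping matches_ex from vl_ubcmatch(da_old,db_old) as in the attempt above, so that every filtered index array ends up aligned column by column:

%chains: fa->fb (matches), fa->fa_old (matches_prev), fa_old->fb_old (matches_ex)
I = intersect(matches(1,:), matches_prev(1,:), 'stable');
%fa_old index paired with each surviving fa index
faOld = arrayfun(@(x) matches_prev(2, find(matches_prev(1,:)==x,1)), I);
%keep only fa indices whose fa_old partner is also matched in matches_ex
keep = ismember(faOld, matches_ex(1,:));
I = I(keep);
faOld = faOld(keep);
%all three filtered arrays are now ordered by the same fa index
MI_1 = arrayfun(@(x) find(matches(1,:)==x,1), I);
MI_2 = arrayfun(@(x) find(matches_prev(1,:)==x,1), I);
MI_3 = arrayfun(@(x) find(matches_ex(1,:)==x,1), faOld);
matches_M = matches(:, MI_1);
matches_prev_M = matches_prev(:, MI_2);
matches_ex_M = matches_ex(:, MI_3);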
After 2 or 3 days of searching, I still haven't found a solution to my problem.
I want to create a segmentation of the mouse without the shadow. The problem is that if I manage to remove the shadow, I also remove the tail and the feet. The shadow comes from the wall of the arena in which the mouse is.
I want to remove the shadow from a grayscale image, but I have no clue how to do it. First I removed the background of the image and obtained the following picture.
Edit 1: Thank you for the answer; it works well when the shadow doesn't touch the mouse. This is what I get otherwise:
from this original image:
I am extracting each frame from a tif file and applying your code to each frame. This is the code I use:
for k = 1:1000
    %reads frame k of the multi-page tif
    I = imread('souris3.tif',k);
    %first stage: perform thresholding and fill holes
    seg = I > 20000;
    seg = imfill(seg,'holes');
    %fixes the missing tail problem:
    %extract edges, and add them to the segmentation.
    edges = edge(I);
    seg = seg | edges;
    %fill holes (again)
    seg = imfill(seg,'holes');
    %find all the connected components
    CC = bwconncomp(seg,8);
    %keep only the biggest CC
    numPixels = cellfun(@numel,CC.PixelIdxList);
    [biggest,idx] = max(numPixels);
    seg = zeros(size(edges));
    seg(CC.PixelIdxList{idx}) = 1;
    imshow(seg);
end
I chose 20000 as the threshold (found with the command impixelinfo) because the image is in uint16 and that is the mean value of the mouse.
This is the link if you want the tif file:
souris3.tif
Thank you for helping.
I suggest the following approach:
Perform thresholding on the image to get a mask which contains most of the mouse's body, without its tail and legs.
Perform hole filling by using MATLAB's imfill function. At this stage, the segmentation is almost perfect, except for a part of the tail which is missing.
Use the edge map to find the boundaries of the tail. This can be done by adding the edge map to the segmentation and performing hole filling once again. Keep only the biggest connected component at this stage.
Code:
%reads image
I = rgb2gray(imread('mSWm4.png'));
%defines thresholds (you may want to tweak these thresholds, or find
%a way to calculate them automatically).
FIRST_STAGE_THRESHOLD = 70;
IM_BOUNDARY_RELEVANCE_THRESHOLD = 10;
%perform thresholding and fill holes; the tail is still missing
seg = I > FIRST_STAGE_THRESHOLD;
seg = imfill(seg,'holes');
%second stage fixes the missing tail problem:
%extract edges from relevant areas (in which the matter is not too dark), and add them to the segmentation.
%image boundaries which are close enough to edges are also considered as edges
edges = edge(I);
imageBoundries = ones(size(I));
imageBoundries(2:end-1,2:end-1) = 0;
relevantDistFromEdges = bwdist(edges) > IM_BOUNDARY_RELEVANCE_THRESHOLD;
imageBoundries(relevantDistFromEdges) = 0;
seg = seg | (edges | imageBoundries);
%fill holes (again) and perform noise cleaning
seg = imfill(seg,'holes');
seg = getBiggestCC(imopen(seg,strel('disk',1)));
getBiggestCC function:
function [ res ] = getBiggestCC(mask)
    CC = bwconncomp(mask,8);
    numPixels = cellfun(@numel,CC.PixelIdxList);
    [~,idx] = max(numPixels);
    res = zeros(size(mask));
    res(CC.PixelIdxList{idx}) = 1;
end
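Regarding the comment about calculating the threshold automatically: one option (an assumption on my part, not part of the solution above) is Otsu's method via MATLAB's graythresh, which returns a normalized level that can be scaled back to the integer range of the image:

%possible automatic threshold via Otsu's method (assumes an integer-typed image)
level = graythresh(I); %normalized threshold in [0,1]
FIRST_STAGE_THRESHOLD = level * double(intmax(class(I)));
seg = I > FIRST_STAGE_THRESHOLD;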
results of each stage:
results
image 1 results:
image 2 results:
Another view (segmentation is in red):
I am attempting to create a radial gradient image to look like the following using Matlab. The image needs to be of size 640*640*3 as I have to blend it with another image of that size. I have written the following code but the image that prints out is simply a grey circle on a black background with no fading around the edges.
p = zeros(640,640,3);
for i = 1:640
    for j = 1:640
        d = sqrt((i-320)^2+(j-320)^2);
        if d < 640/3
            p(i,j,:) = .5;
        elseif d > 1280/3
            p(i,j,:) = 0;
        else
            p(i,j,:) = (1 + cos(3*pi)*(d-640/3))/4;
        end
    end
end
imshow(p);
Any help would be greatly appreciated as I am new to Matlab.
Change:
p(i,j,:) = (1 + cos(3*pi)*(d-640/3))/4;
to
p(i,j,:) = .5-( (.5-0)*(d-640/3)/(640/3)) ;
This is an example of linear interpolation, where the grey value from the inner rim drops linearly to the background.
You can try other equations to have different kinds of gradient fading!
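For reference, here is a vectorized sketch of the same linear falloff without the double loop (the radii and the 640x640x3 size come from the question; the min/max clamping is my own way of folding the three cases into one expression):

[X, Y] = meshgrid(1:640, 1:640);
d = sqrt((X-320).^2 + (Y-320).^2);
inner = 640/3; %flat grey disc up to this radius
outer = 1280/3; %fully black beyond this radius
%0.5 inside, 0 outside, dropping linearly in between
g = 0.5 * (1 - min(max((d - inner)/(outer - inner), 0), 1));
p = repmat(g, [1 1 3]);
imshow(p);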
If you look more closely at your third case (which, by the way, should be a simple else instead of an elseif), you can see that you have
= (1 + cos(3*pi))*...
Since cos(3*pi) = -1, this will always be 0, thus making all pixels within that range black. I assume that you would want a "d" in there somewhere.
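If a cosine falloff was the intent, one guess (an assumption on my part) is to put d inside the cosine, so the value sweeps smoothly from .5 at d = 640/3 down to 0 at d = 1280/3:

p(i,j,:) = (1 + cos(3*pi*(d-640/3)/640))/4;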
This question is based on modified Matlab code from the online documentation for the optical flow system objects in version 2015a, as it appears in the opticalFlowLK class:
clc; clearvars; close all;
inputVid = VideoReader('viptraffic.avi');
opticFlow = opticalFlowLKDoG('NumFrames',3);
inputVid.currentTime = 2;
k = 1;
while inputVid.currentTime<=2 + 1/inputVid.FrameRate
frameRGB{k} = readFrame(inputVid);
frameGray{k} = rgb2gray(frameRGB{k});
flow{k} = estimateFlow(opticFlow,frameGray{k});
k = k+1;
end
By looking at flow{2}.Vx and flow{2}.Vy I get the motion maps U and V that describe the motion from frameGray{1} to frameGray{2}.
I want to use flow{2}.Vx and flow{2}.Vy directly on the data in frameGray{1} in order to warp frameGray{1} to appear visually similar to frameGray{2}.
I tried this code:
[x, y] = meshgrid(1:size(frameGray{1},2), 1:size(frameGray{1},1));
frameGray1Warped = interp2(double(frameGray{1}) , x-flow{2}.Vx , y-flow{2}.Vy);
But it doesn't seem to do much at all except ruin the image quality; the objects don't display any real motion towards their locations in frameGray{2}.
I added 3 images showing the 2 original frames followed by frame 1 warped using the motion field to appear similar to frame 2:
It can be seen easily that frame 1 warped to 2 is essentially frame 1 with degraded quality but the cars haven't moved at all. That is - the location of the cars is the same: look at the car closest to the camera with respect to the road separation line near it; it's virtually the same in frame 1 and frame 1 warped to 2, but is quite different in frame 2.
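For comparison, here is an equivalent backward-warping sketch using imwarp's displacement-field syntax (available since R2014b); the sign convention, chosen to match the interp2 attempt above, is my assumption:

%D tells imwarp where to sample the input relative to each output pixel
D = cat(3, -flow{2}.Vx, -flow{2}.Vy);
frameGray1Warped = imwarp(frameGray{1}, D);
%side-by-side check against the target frame
imshowpair(frameGray1Warped, frameGray{2}, 'montage');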
I am trying to rotate the image manually using the following code.
clc;
m1 = imread('owl','pgm'); % a simple gray scale image of order 260 X 200
newImg = zeros(500,500);
newImg = int16(newImg);
rotationMatrix45 = [cos((pi/4)) -sin((pi/4)); sin((pi/4)) cos((pi/4))];
for x = 1:size(m1,1)
    for y = 1:size(m1,2)
        point = [x; y];
        product = rotationMatrix45 * point;
        product = int16(product);
        newx = product(1,1);
        newy = product(2,1);
        newImg(newx,newy) = m1(x,y);
    end
end
imshow(newImg);
Simply put, I am iterating through every pixel of image m1, multiplying its coordinates [x;y] by the rotation matrix to get x',y', and storing the value of m1(x,y) into newImg(x',y'). BUT it gives the following error:
??? Attempted to access newImg(0,1); index must be a positive integer or logical.
Error in ==> at 18
newImg(newx,newy) = m1(x,y);
I don't know what I am doing wrong.
Part of the rotated image will get negative (or zero) newx and newy values, since the corners rotate out of the original image coordinates. You can't assign a value to newImg if newx or newy is nonpositive; those aren't valid matrix indices. One solution would be to check for this situation and skip such pixels (with continue).
Another solution would be to enlarge the newImg sufficiently, but that will require a slightly more complicated transformation.
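A minimal sketch of that enlargement for the 45-degree case in the question (the bounding-box arithmetic is my own): size the canvas from the rotated bounding box and shift the row indices so they stay positive:

theta = pi/4;
R = [cos(theta) -sin(theta); sin(theta) cos(theta)];
[h0, w0] = size(m1);
%canvas large enough for the rotated bounding box
newImg = zeros(ceil(h0*cos(theta) + w0*sin(theta)), ...
               ceil(h0*sin(theta) + w0*cos(theta)), 'int16');
rowOffset = ceil(w0*sin(theta)); %lifts negative row indices above zero
for x = 1:h0
    for y = 1:w0
        q = int16(R * [x; y]);
        newImg(q(1) + rowOffset, q(2)) = m1(x,y);
    end
end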
This is assuming that you can't just use imrotate because this is homework?
The problem is simple, the answer maybe not: Matlab arrays are indexed from 1 to N (whereas in many programming languages it's from 0 to N-1).
Try clamping the indices, for example newImg( min(max(1,newx), size(newImg,1)), min(max(1,newy), size(newImg,2)) ) = m1(x,y); (I don't have Matlab at work so I can't tell if it's going to work), but the resulting image will be cropped.
This is an old post, so I guess it won't help the OP, but as I was helped by his attempt, I post my corrected code here.
There is basically some freedom in the implementation regarding how you deal with unassigned pixels, as well as whether you wish to keep the original size of the picture, which will force you to crop areas falling "outside" of it.
The following function rotates the image around its center, leaves unassigned pixels "burned" (white), and crops the edges.
function [h] = rot(A,ang)
    rotMat = [cos(pi.*ang/180) sin(pi.*ang/180); -sin(pi.*ang/180) cos(pi.*ang/180)];
    centerW = round(size(A,1)/2);
    centerH = round(size(A,2)/2);
    %start from an all-white ("burned") canvas of the original size
    h = 255.*uint8(ones(size(A)));
    for x = 1:size(A,1)
        for y = 1:size(A,2)
            %rotate the coordinates around the image center
            point = [x-centerW; y-centerH];
            product = int16(rotMat * point);
            newx = product(1,1);
            newy = product(2,1);
            %keep only pixels that land inside the original frame
            if newx+centerW <= size(A,1) && newx+centerW > 0 && newy+centerH <= size(A,2) && newy+centerH > 0
                h(newx+centerW,newy+centerH) = A(x,y);
            end
        end
    end
end
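A possible usage sketch (reusing the owl image from the question):

m1 = imread('owl','pgm'); %the 260 x 200 grayscale image from the question
h = rot(m1,45); %rotate by 45 degrees
imshow(h);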