cvBlobsLib Filter functionality and MATLAB's bwareaopen - matlab

I am trying to delete edges whose length is under a limit in OpenCV. In MATLAB, a Canny edge detector + bwareaopen does the job; however, I could not get the cvBlobsLib CBlobResult::Filter function to work, and I cannot show the results using FillBlob. Here is my code:
Mat image = imread(imageflname, CV_LOAD_IMAGE_COLOR);
Mat gray, edge_im;
cvtColor(image, gray, CV_BGR2GRAY);
float min_thr = 54;
GaussianBlur(gray, gray, Size(5,5), 0.8, 0, BORDER_REPLICATE);
imwrite("gray.png", gray);
Canny(gray, edge_im, min_thr, min_thr*3, 3);
imwrite("edge_im.png", edge_im);
CBlobResult blobs;
Mat binary_image(edge_im.size(), CV_8UC1);
blobs = CBlobResult(binary_image, Mat(), 4);
cout << "# blobs found: " << blobs.GetNumBlobs() << endl;
blobs.Filter( blobs, B_INCLUDE, CBlobGetArea(), B_GREATER, 70 );
cout << "After deletion found: " << blobs.GetNumBlobs() << endl;
Mat edge_open(edge_im.size(), image.type());
edge_open.setTo(0);
for (int i = 0; i < blobs.GetNumBlobs(); i++) {
    blobs.GetBlob(i)->FillBlob(edge_open, CV_RGB(255,255,255));
}
imwrite("edge_im_open.png", edge_open);
The edge image comes out fine, but the area-opening part does not work properly: 'edge_open' is not what it is supposed to be. What could be the reason for this? Any coding suggestions?
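For reference, the MATLAB pipeline I am trying to reproduce looks roughly like this (the smoothing, thresholds and minimum area shown here are approximate, not my exact script):
% Rough MATLAB reference: Canny edges, then drop small connected
% components with bwareaopen (parameters are approximate).
gray   = rgb2gray(imread('image.png'));
gray   = imgaussfilt(gray, 0.8);      % Gaussian smoothing, as in the OpenCV code
edges  = edge(gray, 'canny');         % Canny edge map (logical image)
opened = bwareaopen(edges, 70, 8);    % remove components smaller than 70 pixels
imwrite(opened, 'edge_open_matlab.png');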
Thanks.

Related

Matlab - Removing image background using normalization

I have an image like this:
My goal is to get the output shown under "background normalization" at this link.
Following the above link, I did the following:
(1) I first dilate the image to get the background,
(2) then try to remove it via normalization.
I got the background:
However, when I try to do the normalized division, I get this:
(black borders added to make the boundary of the image clear)
This is my code:
image = imread('image.png');
image = rgb2gray(image);
se = offsetstrel('ball',9,9);
dilatedI = imdilate(image,se);
output = imdivide(image,dilatedI);
imshow(output,[]);
using
imshow(output)
just gives a black image.
I thought it might be a type conversion issue, but based on the resources mentioned earlier, I am uncertain if it is the case...
Any advice would be appreciated
Just make sure you don't do integer division! Your images are an integer type, so the element-wise division is rounded to whole numbers rather than returning floating-point values - here the ratios all collapse to 0 or 1, which is why the result looks black. Just convert to double before dividing:
image = imread('https://i.stack.imgur.com/bIVRT.png');
% image = rgb2gray(image); % the image hosted online is not RGB, so no conversion is needed
se = offsetstrel('ball',21,21);
dilatedI = imdilate(image,se);
output = imdivide(double(image),double(dilatedI));
figure
subplot(121)
imshow(image);
subplot(122)
imshow(output);

Smooth and fit edge of binary images

I am working on research about the swimming of fish using video analysis, so I need to be careful with the images (obtained from video frames), with emphasis on the tail.
The images are high-resolution and the software that I customize works with binary images, because it is easy to apply math operations to them.
To obtain these binary images I use two methods:
1) Convert the image to gray, invert the colors, then to bw and finally to binary with a threshold, which gives me images like this, with almost no noise. The images sometimes lose a bit of area and are not very exact around the tail (and now I need more accuracy to determine the amplitude of the tail movements).
image 1
2) I use this code to cut off the border that increases the threshold. This gives me a good image of the edge, but I don't know how to join these points and smooth the image, or how to fit binary images; the fitting app of MATLAB R2012b doesn't give me a good graph and I don't have access to the MATLAB toolboxes.
s4 = imread('arecorte.bmp');
A = [90 90 1110 550];
s5 = imcrop(s4,A);
E = edge(s5,'canny',0.59);
image2
My questions are:
How can I fit the binary image or join the points and smooth it without disturbing the tail?
Or how can I use the edge of image 2 to increase the accuracy of image 1?
I will upload an image in the comments that gave me the idea for method 2), because I can't post more links. Please remember that I am working with iterations and I can't work frame by frame.
Note: I am asking this because I am at a dead end and I don't have the resources to pay someone to do this. Until now I was able to write the code myself, but I can't solve this final problem alone.
I think you should use connected component labeling, discard the small labels, and then extract the label boundaries to get the pixels of each part.
The code:
clear all
% Read image
I = imread('fish.jpg');
% You don't need to do this if you already have a bw image
Ibw = rgb2gray(I);
Ibw(Ibw < 100) = 0;
% Find size of image
[row,col] = size(Ibw);
% Find connected components
CC = bwconncomp(Ibw,8);
% Find the area of the components
stats = regionprops(CC,'Area','PixelIdxList');
areas = [stats.Area];
% Sort the areas
[val,index] = sort(areas,'descend');
% Take the two largest components' ids and create a filtered image
IbwFilterd = zeros(row,col);
IbwFilterd(stats(index(1,1)).PixelIdxList) = 1;
IbwFilterd(stats(index(1,2)).PixelIdxList) = 1;
imshow(IbwFilterd);
% Find the pixels of the border of the main component and tail
boundries = bwboundaries(IbwFilterd);
yCorrdainteOfMainFishBody = boundries{1}(:,1);
xCorrdainteOfMainFishBody = boundries{1}(:,2);
linearCorrdMainFishBody = sub2ind([row,col],yCorrdainteOfMainFishBody,xCorrdainteOfMainFishBody);
yCorrdainteOfTailFishBody = boundries{2}(:,1);
xCorrdainteOfTailFishBody = boundries{2}(:,2);
linearCorrdTailFishBody = sub2ind([row,col],yCorrdainteOfTailFishBody,xCorrdainteOfTailFishBody);
% For visualization, put color on the boundaries
IFinal = zeros(row,col,3);
IFinalChannel = zeros(row,col);
IFinal(:,:,1) = IFinalChannel;
IFinalChannel(linearCorrdMainFishBody) = 255;
IFinal(:,:,2) = IFinalChannel;
IFinalChannel = zeros(row,col);
IFinalChannel(linearCorrdTailFishBody) = 125;
IFinal(:,:,3) = IFinalChannel;
imshow(IFinal);
The final image:

RGB Depth Alignment [duplicate]

I am trying to align two images - one RGB and another depth - using MATLAB. Please note that I have checked several places for this - like here, here which requires a Kinect device, and here and here which say that camera parameters are required for calibration. I was also suggested to use EPIPOLAR GEOMETRY to match the two images, though I do not know how. The dataset I am referring to is given in the rgb-d-t face dataset. One such example is illustrated below:
The ground truth, which basically means the bounding boxes that specify the face region of interest, is already provided and I use it to crop the face regions only. The MATLAB code is illustrated below:
I = imread('1.jpg');
I1 = imcrop(I,[218,198,158,122]);
I2 = imcrop(I,[243,209,140,108]);
figure, subplot(1,2,1),imshow(I1);
subplot(1,2,2),imshow(I2);
The two cropped images, RGB and depth, are shown below:
Is there any way by which we can register/align the images? I took the hint from
here, where a basic Sobel operator has been used on both the RGB and depth images to generate an edge map, and then keypoints will need to be generated for matching purposes. The edge maps for both images are generated here.
However, they are so noisy that I do not think we will be able to do keypoint matching for these images.
Can anybody suggest some algorithms in MATLAB to do the same?
prologue
This answer is based on my previous answer:
Does Kinect Infrared View Have an offset with the Kinect Depth View
I manually cropped your input image to separate the color and depth images (as my program needs them separated). This could cause a minor offset change of a few pixels. Also, as I do not have the real depths (the depth image is 8-bit only, due to the grayscale RGB), the depth accuracy I work with is very poor, see:
So my results are negatively affected by all this. Anyway, here is what you need to do:
determine FOV for both images
So find some measurable feature visible in both images. The bigger it is, the more accurate the result. For example, I chose these:
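The FOV ratios used later in copy_images() (98.0/108.0 and 106.0/119.0) presumably come from this step: measure the same feature in pixels in both images and take the ratio. A sketch of the idea only, with assumed example values:
// Sketch of the FOV-ratio idea; the concrete numbers are assumptions chosen
// to match the constants used later in copy_images().
float feature_w_depth = 108.0f;  // width of the chosen feature in the depth image [px]
float feature_w_color =  98.0f;  // width of the same feature in the color image [px]
float scale_x = feature_w_color / feature_w_depth;  // depth x -> color x scale (~98/108)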
form a point cloud or mesh
I use the depth image as reference, so my point cloud is in its FOV. As I do not have the distances but 8-bit values instead, I converted them to some distance by multiplying by a constant. So I scan the whole depth image and for every pixel I create a point in my point cloud array. Then I convert the depth pixel coordinate to the color image FOV and copy its color too. Something like this (in C++):
picture rgb,zed; // your input images
struct pnt3d { float pos[3]; DWORD rgb; pnt3d(){}; pnt3d(pnt3d& a){ *this=a; }; ~pnt3d(){}; pnt3d* operator = (const pnt3d *a) { *this=*a; return this; }; /*pnt3d* operator = (const pnt3d &a) { ...copy... return this; };*/ };
pnt3d **xyz=NULL; int xs,ys,ofsx=0,ofsy=0;
void copy_images()
{
int x,y,x0,y0;
float xx,yy;
pnt3d *p;
for (y=0;y<ys;y++)
for (x=0;x<xs;x++)
{
p=&xyz[y][x];
// copy point from depth image
p->pos[0]=2.000*((float(x)/float(xs))-0.5);
p->pos[1]=2.000*((float(y)/float(ys))-0.5)*(float(ys)/float(xs));
p->pos[2]=10.0*float(DWORD(zed.p[y][x].db[0]))/255.0;
// convert depth image x,y to color image space (FOV correction)
xx=float(x)-(0.5*float(xs));
yy=float(y)-(0.5*float(ys));
xx*=98.0/108.0;
yy*=106.0/119.0;
xx+=0.5*float(rgb.xs);
yy+=0.5*float(rgb.ys);
x0=xx; x0+=ofsx;
y0=yy; y0+=ofsy;
// copy color from rgb image if in range
p->rgb=0x00000000; // black
if ((x0>=0)&&(x0<rgb.xs))
if ((y0>=0)&&(y0<rgb.ys))
p->rgb=rgb2bgr(rgb.p[y0][x0].dd); // OpenGL has reverse RGB order compared to my image
}
}
where **xyz is my point cloud 2D array allocated to the depth image resolution. The picture is my image class for DIP, so here are some relevant members:
xs,ys is the image resolution in pixels
p[ys][xs] is the direct pixel access, as a union of DWORD dd; BYTE db[4]; so I can access the color as a single 32-bit variable or each color channel separately.
rgb2bgr(DWORD col) just reorders the color channels from RGB to BGR.
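For readers without the author's DIP library, a rough, hypothetical stand-in for the picture class can be inferred from the member list above (the real class may differ):
// Hypothetical stand-in for the picture class, inferred from the members
// described above; the actual implementation is not part of this answer.
typedef unsigned int  DWORD;               // assuming Windows-style typedefs
typedef unsigned char BYTE;
union color32 { DWORD dd; BYTE db[4]; };   // whole pixel or per-channel access
struct picture
{
    int       xs, ys;                      // image resolution in pixels
    color32 **p;                           // p[y][x] direct pixel access
};
DWORD rgb2bgr(DWORD col)                   // swap the R and B channels
{
    return ((col&0x00FF0000)>>16)|(col&0x0000FF00)|((col&0x000000FF)<<16);
}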
render it
I use OpenGL for this, so here is the code:
glBegin(GL_QUADS);
for (int y0=0,y1=1;y1<ys;y0++,y1++)
for (int x0=0,x1=1;x1<xs;x0++,x1++)
{
float z,z0,z1;
z=xyz[y0][x0].pos[2]; z0=z; z1=z0;
z=xyz[y0][x1].pos[2]; if (z0>z) z0=z; if (z1<z) z1=z;
z=xyz[y1][x0].pos[2]; if (z0>z) z0=z; if (z1<z) z1=z;
z=xyz[y1][x1].pos[2]; if (z0>z) z0=z; if (z1<z) z1=z;
if (z0 <=0.01) continue;
if (z1 >=3.90) continue; // 3.972 for everything above ~3.95 m, and 4.000 if nothing is detected at all
if (z1-z0>=0.10) continue;
glColor4ubv((BYTE* )&xyz[y0][x0].rgb);
glVertex3fv((float*)&xyz[y0][x0].pos);
glColor4ubv((BYTE* )&xyz[y0][x1].rgb);
glVertex3fv((float*)&xyz[y0][x1].pos);
glColor4ubv((BYTE* )&xyz[y1][x1].rgb);
glVertex3fv((float*)&xyz[y1][x1].pos);
glColor4ubv((BYTE* )&xyz[y1][x0].rgb);
glVertex3fv((float*)&xyz[y1][x0].pos);
}
glEnd();
You need to add the OpenGL initialization and camera settings etc., of course. Here is the unaligned result:
align it
If you notice, I added the ofsx,ofsy variables to copy_images(). This is the offset between the cameras. I change them by 1 pixel on arrow keystrokes and then call copy_images and render the result. This way I manually found the offset very quickly:
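A minimal, hypothetical sketch of what that key handling could look like (the author's actual windowing/event code is not shown in the answer):
// Hypothetical arrow-key handler for tuning the camera offset by hand;
// ofsx, ofsy and copy_images() are the globals from the code above.
enum arrow_key { KEY_LEFT, KEY_RIGHT, KEY_UP, KEY_DOWN };
void on_arrow_key(arrow_key key)
{
    if (key == KEY_LEFT ) ofsx--;
    if (key == KEY_RIGHT) ofsx++;
    if (key == KEY_UP   ) ofsy--;
    if (key == KEY_DOWN ) ofsy++;
    copy_images();   // rebuild the colored point cloud with the new offset
    // ... then trigger an OpenGL redraw
}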
As you can see, the offset is +17 pixels on the x axis and +4 pixels on the y axis. Here is a side view to better see the depths:
Hope it helps a bit.
Well, I have tried doing it after reading lots of blogs. I am still not sure whether I am doing it correctly or not. Please feel free to comment if something is amiss. For this I used a MathWorks File Exchange submission that can be found here: ginputc function.
The MATLAB code is as follows:
clc; clear all; close all;
% no of keypoint
N = 7;
I = imread('2.jpg');
I = rgb2gray(I);
[Gx, Gy] = imgradientxy(I, 'Sobel');
[Gmag, ~] = imgradient(Gx, Gy);
figure, imshow(Gmag, [ ]), title('Gradient magnitude')
I = Gmag;
[x,y] = ginputc(N, 'Color' , 'r');
matchedpoint1 = [x y];
J = imread('2.png');
[Gx, Gy] = imgradientxy(J, 'Sobel');
[Gmag, ~] = imgradient(Gx, Gy);
figure, imshow(Gmag, [ ]), title('Gradient magnitude')
J = Gmag;
[x, y] = ginputc(N, 'Color' , 'r');
matchedpoint2 = [x y];
[tform,inlierPtsDistorted,inlierPtsOriginal] = estimateGeometricTransform(matchedpoint2,matchedpoint1,'similarity');
figure; showMatchedFeatures(J,I,inlierPtsOriginal,inlierPtsDistorted);
title('Matched inlier points');
I = imread('2.jpg'); J = imread('2.png');
I = rgb2gray(I);
outputView = imref2d(size(I));
Ir = imwarp(J,tform,'OutputView',outputView);
figure; imshow(Ir, []);
title('Recovered image');
figure,imshowpair(I,J,'diff'),title('Difference with original');
figure,imshowpair(I,Ir,'diff'),title('Difference with restored');
Step 1
I used the Sobel edge detector to extract the edges for both the depth and RGB images and then used a threshold to get the edge map. I will be working primarily with the gradient magnitude only. This gives me two images like this:
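The thresholding mentioned here is not shown in the listing above; it would just be a comparison against the gradient magnitude, something like this (the 0.2 factor is only an illustrative guess):
% Hypothetical threshold step (not in the original listing); 0.2 is a guess.
edgeMap = Gmag > 0.2 * max(Gmag(:));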
Step 2
Next I use the ginput or ginputc function to mark keypoints in both images. The correspondence between the points is established by me beforehand. I tried using SURF features, but they do not work well on depth images.
Step 3
Use estimateGeometricTransform to get the transformation matrix tform, and then use this matrix to recover the original position of the moved image. The next set of images tells this story.
Granted, I still believe the results can be improved further if the keypoint selections in either of the images are done more judiciously. I also think @Specktre's method is better. I just noticed that I used a separate image pair in my answer compared to that of the question. Both images come from the same dataset, which can be found here: vap rgb-d-t dataset.


How to smooth the perimeter output of bwperim?

I have a segmented image
When I apply the bwperim function to this, I get the output below.
I want a thin perimeter line - just one pixel thick. This is essential for further processing work. What is the best approach?
Please suggest.
======
BoundingBox
%%% ComputeBoundingBox
%%%
function [stats, statsAlreadyComputed] = ...
ComputeBoundingBox(imageSize,stats,statsAlreadyComputed)
% [minC minR width height]; minC and minR end in .5.
if ~statsAlreadyComputed.BoundingBox
statsAlreadyComputed.BoundingBox = 1;
[stats, statsAlreadyComputed] = ...
ComputePixelList(imageSize,stats,statsAlreadyComputed);
num_dims = numel(imageSize);
for k = 1:length(stats)
list = stats(k).PixelList;
if (isempty(list))
stats(k).BoundingBox = [0.5*ones(1,num_dims) zeros(1,num_dims)];
else
min_corner = min(list,[],1) - 0.5;
max_corner = max(list,[],1) + 0.5;
stats(k).BoundingBox = [min_corner (max_corner - min_corner)];
end
end
end
That is happening because your image has quantization errors from when it was saved. Did you save your image using a lossy compression algorithm, like JPEG? If you want to preserve the intensities so that they don't change when you save the image, use a lossless compression format, like PNG.
To remove these "noisy" effects, first threshold your image to eliminate the quantization errors and set the affected pixels to completely white, then try using bwperim again. In other words, do something like this:
im = im2bw(imread('http://i.stack.imgur.com/dagEc.png'));
im_noborder = imclearborder(im);
out = bwperim(im_noborder);
imshow(out);
The first line of code reads your image directly from Stack Overflow and uses im2bw to threshold it. The image was originally grayscale, so we want to convert it to black and white only. This also removes any quantization artifacts, as it thresholds anything higher than 128 to white. The next line of code uses imclearborder to remove the white border that surrounds your shape, because the image you uploaded has a white border around it for some reason. Once we remove this border, we then apply bwperim and show the image.
This is the image I get: