Ellipsoid fitting for 3D data in MATLAB

I am working on a 3D volume of CT lung images. In order to detect nodules, I need to fit an ellipsoid model to each suspected nodule. How can I write code for that?
A nodule is an object suspected to be a tumor. My algorithm needs to check every object, approximate it with an ellipsoid, and from the ellipsoid parameters calculate 8 features to build a classifier that decides, through training and testing data, whether it is a nodule or not. So I need to fit such an ellipsoid.
Here is one slice of the CT lung volume.
Here is another slice of the same volume, but it contains a nodule (the yellow circle there marks a nodule), so I need my code to check every shape to determine whether it is a nodule or not.

As we do not have a 3D dataset at our disposal, I start with 2D.
So first we need to select the lungs so we do not count any objects other than those inside them. As this is a grayscale image, we first need to binarize it somehow. I use my own picture class for DIP, and this will heavily use my growfill, so I strongly recommend first reading this:
Fracture detection in hand using image processing
where you will find all the explanations you need for this. Now, for your task I would:
turn RGBA to grayscale <0,765>
I just compute the intensity i = R+G+B; as the channels of a 24-bit image are 8-bit each, the result is up to 3*255 = 765. As the input image was compressed with JPEG, there are color distortions and noise in the image, so do not forget about that.
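A minimal MATLAB sketch of this step (the file name is hypothetical):
im    = imread('slice.png');            % hypothetical file name of one CT slice
inten = sum(double(im), 3);             % per-pixel intensity i = R+G+B, range 0..765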
crop out the white border
Just cast rays (scan lines) from the middle of each outer border line towards the center, and stop when a non-white-ish pixel is hit. I threshold with 700 instead of 765 to compensate for the noise in the image. Now you have the bounding box of the usable image, so crop out the rest.
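In MATLAB the same crop can be sketched, a bit more simply than explicit ray casting, by taking the bounding box of all non-white-ish pixels (assuming the inten image from the previous step):
usable = inten < 700;                                   % non-white-ish pixels (700 tolerates the noise)
cols   = find(any(usable, 1));                          % columns that contain usable pixels
rows   = find(any(usable, 2));                          % rows that contain usable pixels
inten  = inten(rows(1):rows(end), cols(1):cols(end));   % crop to the usable bounding box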
compute histogram
To compensate for distortions in the image, smooth the histogram enough to remove all unwanted noise and gaps. Then find the local maximum from the left and from the right (red). These will be used for the binarization threshold (the middle between them, green). This is my final result:
binarise image
Just threshold the image against the green intensity from the histogram. So if i0, i1 are the local-maximum intensities from the left and right of the histogram, then threshold against (i0+i1)/2. This is the result:
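A rough MATLAB sketch of the histogram, smoothing, peak and threshold steps (the smoothing window and the split-at-the-middle peak search are simplifications of the description above; tune them to your data):
h  = histcounts(inten(:), -0.5:1:765.5);     % raw histogram over intensities 0..765
h  = movmean(h, 51);                         % heavy smoothing to remove noise and gaps
[~, i0] = max(h(1:383));   i0 = i0 - 1;      % dark-side peak intensity (left half)
[~, i1] = max(h(384:end)); i1 = i1 + 382;    % bright-side peak intensity (right half)
thr = (i0 + i1) / 2;                         % threshold halfway between the two peaks
bw  = inten >= thr;                          % binarized image: true = bright matter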
remove everything except lungs
That is easy: just fill the black from the outside with some predefined background color, then in the same way fill all the white stuff neighboring the background color. That will remove the human surface, skeleton, organs and the CT machine, leaving just the lungs. Now recolor the remaining black with some predefined lungs color.
There should be no black left, and the remaining white regions are the possible nodules.
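A hedged MATLAB equivalent of this fill-based cleanup, with imclearborder and imfill standing in for the grow/flood fills used above (bw is the binarized image, true = bright):
dark    = ~bw;                          % dark regions: outer background + lungs
lungs   = imclearborder(dark);          % dark regions not touching the image border ~ the lungs
inside  = imfill(lungs, 'holes');       % lung regions with their enclosed bright blobs filled in
nodules = inside & bw;                  % bright blobs inside the lungs = nodule candidates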
process all remaining white pixels
So just loop through the image and, on each white pixel hit, flood fill it with a predefined nodule color or a distinct object index for later use. I also distinguish the surface (aqua) from the inside (magenta). This is the result:
Now you can compute your features per nodule. If you code a custom flood fill for this, then you can obtain from it directly things like:
Volume in [pixels]
Surface area in [pixels]
Bounding box
Position (relative to Lungs)
Center of gravity or centroid
All of which can be used as your feature variables and also to help with the fitting (see the regionprops sketch below).
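In MATLAB, most of these can be read off directly with regionprops (2D) or regionprops3 (3D); a sketch assuming the nodules mask from the previous step:
cc    = bwconncomp(nodules);                      % label each candidate blob
stats = regionprops(cc, 'Area', 'Perimeter', 'BoundingBox', 'Centroid');
% for a 3D volume, regionprops3 returns 'Volume', 'SurfaceArea', 'BoundingBox' and 'Centroid'
for k = 1:numel(stats)
    fprintf('candidate %d: area %d px, centroid (%.1f, %.1f)\n', ...
            k, stats(k).Area, stats(k).Centroid(1), stats(k).Centroid(2));
end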
fit the found surface points
There are many methods for this, but I would simplify it as much as possible to improve performance and accuracy. For example, you can use the centroid as your ellipsoid center. Then find the minimally and maximally distant points from it and use them as semi-axes (plus/minus some orthogonality corrections). And then just search around these initial values. For more info see:
How approximation search works
You will find examples of use in linked Q/As there.
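As a starting point before any refinement, here is a minimal PCA-style sketch (not the approximation-search method itself): the centroid is used as the ellipsoid centre and the principal axes of the surface points give the axis directions, with rough semi-axis lengths from the per-axis spread. The pts cloud here is a synthetic stand-in for one candidate's surface voxels.
[sx, sy, sz] = sphere(20);                              % synthetic test cloud ...
pts = [30*sx(:), 20*sy(:), 10*sz(:)] + [64 64 32];      % ... an ellipsoidal surface around (64,64,32)
c = mean(pts, 1);                        % ellipsoid centre ~ centroid of the surface points
P = pts - c;                             % centred points
[V, D] = eig(cov(P));                    % columns of V = axis directions, diag(D) = variances
r = 2 * sqrt(diag(D));                   % rough semi-axis lengths (about two standard deviations)
% V(:,k) is the k-th axis direction and r(k) its semi-axis length; use these as the
% initial values to search around, as described above.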
[Notes]
All of the bullets are applicable in 3D. While constructing a custom flood fill, be careful with the recursion depth: too much data on the stack will overflow it very quickly and also slow things down considerably. Here is a small example of how I deal with it, with a few custom return parameters, plus the growfill I used:
//---------------------------------------------------------------------------
void growfill(DWORD c0,DWORD c1,DWORD c2); // grow/flood fill c0 neighbouring c1 with c2
void floodfill(int x,int y,DWORD c); // flood fill from (x,y) with color c
DWORD _floodfill_c0,_floodfill_c1; // recursion filled color and fill color
int _floodfill_x0,_floodfill_x1,_floodfill_n; // recursion bounding box and filled pixel count
int _floodfill_y0,_floodfill_y1;
void _floodfill(int x,int y); // recursion for floodfill
//---------------------------------------------------------------------------
void picture::growfill(DWORD c0,DWORD c1,DWORD c2)
    {
    int x,y,e;
    // repeat full passes over the image until no pixel changes anymore
    for (e=1;e;)
     for (e=0,y=1;y<ys-1;y++)
      for (     x=1;x<xs-1;x++)
       if (p[y][x].dd==c0)              // c0 pixel ...
        if ((p[y-1][x].dd==c1)          // ... with a c1 neighbor in its 4-neighborhood
          ||(p[y+1][x].dd==c1)
          ||(p[y][x-1].dd==c1)
          ||(p[y][x+1].dd==c1)) { e=1; p[y][x].dd=c2; }   // recolor to c2 and mark change
    }
//---------------------------------------------------------------------------
void picture::_floodfill(int x,int y)
    {
    if (p[y][x].dd!=_floodfill_c0) return;      // stop at pixels that do not have the filled color
    p[y][x].dd=_floodfill_c1;                   // recolor
    _floodfill_n++;                             // update filled pixel count
    if (_floodfill_x0>x) _floodfill_x0=x;       // update bounding box
    if (_floodfill_y0>y) _floodfill_y0=y;
    if (_floodfill_x1<x) _floodfill_x1=x;
    if (_floodfill_y1<y) _floodfill_y1=y;
    if (x>   0) _floodfill(x-1,y);              // recurse into the 4-neighborhood
    if (x<xs-1) _floodfill(x+1,y);
    if (y>   0) _floodfill(x,y-1);
    if (y<ys-1) _floodfill(x,y+1);
    }
void picture::floodfill(int x,int y,DWORD c)
    {
    if ((x<0)||(x>=xs)||(y<0)||(y>=ys)) return; // ignore seeds outside the image
    _floodfill_c0=p[y][x].dd;                   // color to be filled
    _floodfill_c1=c;                            // fill color
    _floodfill_n=0;                             // reset pixel count and bounding box
    _floodfill_x0=x;
    _floodfill_y0=y;
    _floodfill_x1=x;
    _floodfill_y1=y;
    _floodfill(x,y);
    }
//---------------------------------------------------------------------------
And here is the C++ code I produced the example images with:
picture pic0,pic1;
// pic0 - source img
// pic1 - output img
int x0,y0,x1,y1,x,y,i,hist[766];
color c;
// copy source image to output
pic1=pic0;
pic1.pixel_format(_pf_u); // grayscale <0,765>
// 0xAARRGGBB
const DWORD col_backg=0x00202020; // gray
const DWORD col_lungs=0x00000040; // blue
const DWORD col_out =0x0000FFFF; // aqua nodule surface
const DWORD col_in =0x00800080; // magenta nodule inside
const DWORD col_test =0x00008040; // green-ish distinct color just for safe recoloring
// [remove white background]
// find white background area (by casting rays from middle of border into center of image till non white pixel hit)
for (x0=0 ,y=pic1.ys>>1;x0<pic1.xs;x0++) if (pic1.p[y][x0].dd<700) break;
for (x1=pic1.xs-1,y=pic1.ys>>1;x1> 0;x1--) if (pic1.p[y][x1].dd<700) break;
for (y0=0 ,x=pic1.xs>>1;y0<pic1.ys;y0++) if (pic1.p[y0][x].dd<700) break;
for (y1=pic1.ys-1,x=pic1.xs>>1;y1> 0;y1--) if (pic1.p[y1][x].dd<700) break;
// crop it away
pic1.bmp->Canvas->Draw(-x0,-y0,pic1.bmp);
pic1.resize(x1-x0+1,y1-y0+1);
// [prepare data]
// raw histogram
for (i=0;i<766;i++) hist[i]=0;
for (y=0;y<pic1.ys;y++)
for (x=0;x<pic1.xs;x++)
hist[pic1.p[y][x].dd]++;
// smooth histogram a lot (remove noise and fill gaps due to compression and scanning nature of the image)
for (x=0;x<100;x++)
{
for (i=0;i<765;i++) hist[i]=(hist[i]+hist[i+1])>>1;
for (i=765;i>0;i--) hist[i]=(hist[i]+hist[i-1])>>1;
}
// find peaks in histogram (for thresholding)
for (x=0,x0=x,y0=hist[x];x<766;x++)
{
y=hist[x];
if (y0<y) { x0=x; y0=y; }
if (y<y0>>1) break;
}
for (x=765,x1=x,y1=hist[x];x>=0;x--)
{
y=hist[x];
if (y1<y) { x1=x; y1=y; }
if (y<y1>>1) break;
}
// binarize image (thresholding)
i=(x0+x1)>>1; // threshold with the middle intensity between peaks
pic1.pf=_pf_rgba; // result will be RGBA
for (y=0;y<pic1.ys;y++)
for (x=0;x<pic1.xs;x++)
if (pic1.p[y][x].dd>=i) pic1.p[y][x].dd=0x00FFFFFF;
else pic1.p[y][x].dd=0x00000000;
pic1.save("out0.png");
// recolor outer background
for (x=0;x<pic1.xs;x++) pic1.p[ 0][x].dd=col_backg; // render rectangle along outer border so the filling starts from there
for (x=0;x<pic1.xs;x++) pic1.p[pic1.ys-1][x].dd=col_backg;
for (y=0;y<pic1.ys;y++) pic1.p[y][ 0].dd=col_backg;
for (y=0;y<pic1.ys;y++) pic1.p[y][pic1.xs-1].dd=col_backg;
pic1.growfill(0x00000000,col_backg,col_backg); // fill its black content outside in
// recolor white human surface and CT machine
pic1.growfill(0x00FFFFFF,col_backg,col_backg);
// recolor Lungs dark matter
pic1.growfill(0x00000000,col_backg,col_lungs); // outer border
pic1.growfill(0x00000000,col_lungs,col_lungs); // fill its black content outside in
pic1.save("out1.png");
// find/recolor individual nodules
for (y=0;y<pic1.ys;y++)
for (x=0;x<pic1.xs;x++)
if (pic1.p[y][x].dd==0x00FFFFFF)
{
pic1.floodfill(x,y,col_test);
pic1.growfill(col_lungs,col_test,col_out);
pic1.floodfill(x,y,col_in);
}
pic1.save("out2.png");
// render histogram
for (x=0;(x<766)&&(x>>1<pic1.xs);x++)
for (y=0;(y<=hist[x]>>6)&&(y<pic1.ys);y++)
pic1.p[pic1.ys-1-y][x>>1].dd=0x000040FF;
for (x=x0 ,y=0;(y<=100)&&(y<pic1.ys);y++) pic1.p[pic1.ys-1-y][x>>1].dd=0x00FF0000;
for (x=x1 ,y=0;(y<=100)&&(y<pic1.ys);y++) pic1.p[pic1.ys-1-y][x>>1].dd=0x00FF0000;
for (x=(x0+x1)>>1,y=0;(y<=100)&&(y<pic1.ys);y++) pic1.p[pic1.ys-1-y][x>>1].dd=0x0000FF00;

You may be interested in a recent plugin that we developed for the open-source software Icy: http://icy.bioimageanalysis.org/
The plugin is named FitEllipsoid; it allows you to rapidly fit an ellipsoid to the image contents by first clicking a few points on the orthogonal views.
A tutorial is available here: https://www.youtube.com/watch?v=MjotgTZi6RQ
Also note that we provide MATLAB and Java source code on GitHub (but I cannot provide the links since this is my first post on the website).

Related


Align already captured rgb and depth images

I am trying to align two images, one RGB and another depth, using MATLAB. Please note that I have checked several places for this, like here and here, which require a Kinect device, and here and here, which say that camera parameters are required for calibration. I was also suggested to use epipolar geometry to match the two images, though I do not know how. The dataset I am referring to is the RGB-D-T face dataset. One such example is illustrated below:
The ground truth, which basically means the bounding boxes that specify the face region of interest, is already provided, and I use it to crop the face regions only. The MATLAB code is illustrated below:
I = imread('1.jpg');
I1 = imcrop(I,[218,198,158,122]);
I2 = imcrop(I,[243,209,140,108]);
figure, subplot(1,2,1),imshow(I1);
subplot(1,2,2),imshow(I2);
The two cropped images, RGB and depth, are shown below:
Is there any way by which we can register/align the images? I took the hint from here, where a basic Sobel operator has been used on both the RGB and depth images to generate an edge map, and then keypoints would need to be generated for matching purposes. The edge maps for both images are generated here.
However, they are so noisy that I do not think we will be able to do keypoint matching for these images.
Can anybody suggest some algorithms in MATLAB to do the same?
prologue
This answer is based on my previous answer:
Does Kinect Infrared View Have an offset with the Kinect Depth View
I manually cropped your input image to separate the color and depth images (as my program needs them separated). This could cause a minor offset change of a few pixels. Also, as I do not have the real depths (the depth image is 8-bit only because it is a grayscale RGB image), the depth accuracy I work with is very poor; see:
So my results are negatively affected by all of this. Anyway, here is what you need to do:
determine FOV for both images
So find some measurable feature visible in both images. The bigger it is, the more accurate the result. For example, I chose these:
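A hedged MATLAB sketch of how such a measurement turns into the per-axis FOV correction used in the code below (all numbers here are placeholders; the 98/108 and 106/119 ratios correspond to the constants hard-coded there):
wDepth = 108; hDepth = 119;              % feature width/height measured in the depth image [px]
wRgb   =  98; hRgb   = 106;              % the same feature measured in the color image [px]
szDepth = [480 640];                     % hypothetical depth image size [rows cols]
szRgb   = [480 640];                     % hypothetical color image size [rows cols]
sx = wRgb / wDepth;  sy = hRgb / hDepth; % per-axis FOV scale, depth -> color
x = 100; y = 200;                        % example depth pixel
xc = (x - szDepth(2)/2) * sx + szRgb(2)/2;   % corresponding color-image column
yc = (y - szDepth(1)/2) * sy + szRgb(1)/2;   % corresponding color-image row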
form a point cloud or mesh
I use the depth image as the reference, so my point cloud is in its FOV. As I do not have the distances but 8-bit values instead, I converted them to some distance by multiplying by a constant. So I scan the whole depth image, and for every pixel I create a point in my point cloud array. Then I convert the depth pixel coordinate to the color image FOV and copy its color too. Something like this (in C++):
picture rgb,zed; // your input images
struct pnt3d { float pos[3]; DWORD rgb; pnt3d(){}; pnt3d(pnt3d& a){ *this=a; }; ~pnt3d(){}; pnt3d* operator = (const pnt3d *a) { *this=*a; return this; }; /*pnt3d* operator = (const pnt3d &a) { ...copy... return this; };*/ };
pnt3d **xyz=NULL; int xs,ys,ofsx=0,ofsy=0;
void copy_images()
    {
    int x,y,x0,y0;
    float xx,yy;
    pnt3d *p;
    for (y=0;y<ys;y++)
     for (x=0;x<xs;x++)
        {
        p=&xyz[y][x];
        // copy point from depth image
        p->pos[0]=2.000*((float(x)/float(xs))-0.5);
        p->pos[1]=2.000*((float(y)/float(ys))-0.5)*(float(ys)/float(xs));
        p->pos[2]=10.0*float(DWORD(zed.p[y][x].db[0]))/255.0;
        // convert depth image x,y to color image space (FOV correction)
        xx=float(x)-(0.5*float(xs));
        yy=float(y)-(0.5*float(ys));
        xx*=98.0/108.0;
        yy*=106.0/119.0;
        xx+=0.5*float(rgb.xs);
        yy+=0.5*float(rgb.ys);
        x0=xx; x0+=ofsx;
        y0=yy; y0+=ofsy;
        // copy color from rgb image if in range
        p->rgb=0x00000000; // black
        if ((x0>=0)&&(x0<rgb.xs))
         if ((y0>=0)&&(y0<rgb.ys))
          p->rgb=rgb2bgr(rgb.p[y0][x0].dd); // OpenGL has reversed RGB order compared to my image
        }
    }
where **xyz is my point cloud 2D array, allocated at the depth image resolution. The picture is my image class for DIP, so here are some relevant members:
xs,ys is the image resolution in pixels
p[ys][xs] is direct pixel access to the image as a union of DWORD dd; BYTE db[4]; so I can access a color as a single 32-bit variable or each color channel separately.
rgb2bgr(DWORD col) just reorders the color channels from RGB to BGR.
render it
I use OpenGL for this, so here is the code:
glBegin(GL_QUADS);
for (int y0=0,y1=1;y1<ys;y0++,y1++)
 for (int x0=0,x1=1;x1<xs;x0++,x1++)
    {
    float z,z0,z1;
    // min/max depth of the quad's four corners
    z=xyz[y0][x0].pos[2]; z0=z; z1=z0;
    z=xyz[y0][x1].pos[2]; if (z0>z) z0=z; if (z1<z) z1=z;
    z=xyz[y1][x0].pos[2]; if (z0>z) z0=z; if (z1<z) z1=z;
    z=xyz[y1][x1].pos[2]; if (z0>z) z0=z; if (z1<z) z1=z;
    if (z0 <=0.01) continue;
    if (z1 >=3.90) continue;            // 3.972 for everything above ~3.95 m and 4.000 if nothing is hit at all
    if (z1-z0>=0.10) continue;          // skip quads spanning a depth discontinuity
    glColor4ubv((BYTE* )&xyz[y0][x0].rgb);
    glVertex3fv((float*)&xyz[y0][x0].pos);
    glColor4ubv((BYTE* )&xyz[y0][x1].rgb);
    glVertex3fv((float*)&xyz[y0][x1].pos);
    glColor4ubv((BYTE* )&xyz[y1][x1].rgb);
    glVertex3fv((float*)&xyz[y1][x1].pos);
    glColor4ubv((BYTE* )&xyz[y1][x0].rgb);
    glVertex3fv((float*)&xyz[y1][x0].pos);
    }
glEnd();
You need to add the OpenGL initialization and camera settings etc., of course. Here is the unaligned result:
align it
If you notice, I added the ofsx,ofsy variables to copy_images(). This is the offset between the cameras. I change them by 1 pixel on arrow keystrokes and then call copy_images and render the result. This way I manually found the offset very quickly:
As you can see, the offset is +17 pixels along the x axis and +4 pixels along the y axis. Here is a side view to better show the depths:
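If you only need to apply such a fixed offset in MATLAB, here is a small hedged sketch (imtranslate needs the Image Processing Toolbox; the file names are hypothetical and [17 4] is the offset found above):
rgbImg   = imread('rgb.png');                    % hypothetical file names
depthImg = imread('depth.png');
depthAligned = imtranslate(depthImg, [17 4]);    % shift by +17 px in x and +4 px in y
imshowpair(rgbImg, depthAligned, 'falsecolor');  % visually check the alignment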
Hope it helps a bit.
Well, I have tried doing it after reading lots of blogs and such. I am still not sure whether I am doing it correctly or not. Please feel free to comment if something is found amiss. For this I used a MathWorks File Exchange submission that can be found here: ginputc function.
The MATLAB code is as follows:
clc; clear all; close all;
% no of keypoint
N = 7;
I = imread('2.jpg');
I = rgb2gray(I);
[Gx, Gy] = imgradientxy(I, 'Sobel');
[Gmag, ~] = imgradient(Gx, Gy);
figure, imshow(Gmag, [ ]), title('Gradient magnitude')
I = Gmag;
[x,y] = ginputc(N, 'Color' , 'r');
matchedpoint1 = [x y];
J = imread('2.png');
[Gx, Gy] = imgradientxy(J, 'Sobel');
[Gmag, ~] = imgradient(Gx, Gy);
figure, imshow(Gmag, [ ]), title('Gradient magnitude')
J = Gmag;
[x, y] = ginputc(N, 'Color' , 'r');
matchedpoint2 = [x y];
[tform,inlierPtsDistorted,inlierPtsOriginal] = estimateGeometricTransform(matchedpoint2,matchedpoint1,'similarity');
figure; showMatchedFeatures(J,I,inlierPtsOriginal,inlierPtsDistorted);
title('Matched inlier points');
I = imread('2.jpg'); J = imread('2.png');
I = rgb2gray(I);
outputView = imref2d(size(I));
Ir = imwarp(J,tform,'OutputView',outputView);
figure; imshow(Ir, []);
title('Recovered image');
figure,imshowpair(I,J,'diff'),title('Difference with original');
figure,imshowpair(I,Ir,'diff'),title('Difference with restored');
Step 1
I used the Sobel edge detector to extract the edges for both the depth and RGB images and then used thresholding to get the edge map. I will be working primarily with the gradient magnitude only. This gives me two images like this:
Step 2
Next I use the ginput or ginputc function to mark keypoints on both images. The correspondence between the points is established by me beforehand. I tried using SURF features, but they do not work well on depth images.
Step 3
Use estimateGeometricTransform to get the transformation matrix tform, and then use this matrix to recover the original position of the moved image. The next set of images tells this story.
Granted, I still believe the results can be further improved if the keypoint selections in either of the images are done more judiciously. I also think @Spektre's method is better. I just noticed that I used a separate image pair in my answer compared to that of the question. Both images come from the same dataset, which can be found here: vap rgb-d-t dataset.

Finding dark purple pixels in an image

I am doing research for my higher studies in automation. I have done the automation part of the microscope, but I need help in MATLAB. An example of what I would like to segment is shown here:
I need to extract the dark purple pixels from this image and display only those in a figure. It is almost like colour-based segmentation, but I just want to take only the dark purple pixels from the whole image.
What would I do in this case?
Here's something to get you started. Let's go with the theme of colour segmentation, where you only want to extract pixels that are a deep purple. I would like to point you to the HSV colour space before we get started. The HSV colour space is ideal for representing colours in a way that is most intuitive to humans. We tend to describe colours by their dominant colour, followed by attributes such as how washed out or how pure the colour is, and how bright or dark the colour is. The dominant colour is represented by the Hue, how washed out or pure the colour appears is represented by the Saturation, and the intensity of the colour is represented by the Value, hence Hue-Saturation-Value, or the HSV colour space.
We can transform an RGB image into HSV with rgb2hsv. This returns a 3D matrix that has the hue, saturation and value as 2D slices, much like an RGB image where each slice represents the red, green and blue channels. Let's see what each component looks like once we transform the image into HSV:
im = imread('https://www.cdc.gov/dpdx/images/malaria/ovale/Po_gametocyte_thickB.jpg');
hsv = rgb2hsv(im2double(im));
figure;
for idx = 1 : 3
subplot(1,3,idx);
imshow(hsv(:,:,idx));
end
The first line of code reads in an image from a URL. I'm going to use the one that Hoki referred you to, as it's the simplest one to deal with. For self-containment, this is what the original image looks like:
Once we do this, we convert the image into the HSV colour space. It is important that you convert the image to double precision and you normalize each component to [0,1], and that is performed by im2double. Next, we spawn a new figure, and place each component in a single row over three columns. The first column represents the hue, next column the saturation and finally the last column being the value. This is the figure that we see:
With the first figure, it looks like the dominant colour is purple, whether it's a light shade or a dark shade of the colour, so the hue won't help us here. If you look at a HSV colour wheel:
(source: hobbitsandhobos.com)
Normalize the wheel so that it falls between [0,1] instead of 0 to 360 degrees. The hue is actually represented as degrees due to the nature of the colour space, but MATLAB normalizes this to [0,1]. You can see that purple falls within a hue of [0.6,0.8], which corresponds to the first figure I showed you that displays the hue for our image. If you examine the pixels around the image, they fluctuate between this range. Therefore, the hue won't help us much here.
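You can check this yourself by building a hue-only mask for that range; it fires on the light and the dark purple alike, which is why hue alone is not discriminative here (hsv is the converted image from above):
hMask = hsv(:,:,1) >= 0.6 & hsv(:,:,1) <= 0.8;   % everything whose hue reads as "purple"
figure; imshow(hMask);                           % selects light and dark purple alike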
What will certainly help us are the saturation and value components. If you take a look, the deep purple pixels have a higher saturation than the rest of the background, which makes sense because the deep purple has a much more pure version of purple than the rest of the background. For the value, you can see that the brightness of the dark purple is darker than the background.
We can use these two points as an exploit to segment out the purple colour in the image. The easiest thing to do would be to threshold the saturation and value planes so that any values that are within a certain range you keep while those that are outside you throw away. Therefore, you can do something like this:
sThresh = hsv(:,:,2) > 0.6 & hsv(:,:,2) < 0.9;
vThresh = hsv(:,:,3) > 0.4 & hsv(:,:,3) < 0.65;
I used impixelinfo and I hovered my mouse over the saturation and value components to examine what the values were for the deep purple regions. It looks like those pixels that are deep purple have a saturation value between 0.6 and 0.9, while the value component has values between 0.4 and 0.65. The above code will create two binary masks where true means that the pixel satisfies our criteria while false means it doesn't. Because I want to combine both things together and not leave any stone unturned, let's logical OR the masks together for the final result:
figure;
result = sThresh | vThresh;
imshow(result);
We will also show the result too. This is what we get:
As you can see, this does a pretty good job, but we have remnants of the red arrow that we don't want in the final result. To do a bit of cleanup, we can use morphology - specifically an opening filter of a small window so that we don't affect the pixels that we want as much. We can use imopen to perform our opening operation for us. A morphological opening removes isolated pixels that appear around your image. You use what is called a structuring element that is used to look at local neighbourhoods of your image. For the basics, any pixel regions that are as small as the shape that is contained within the structuring element get removed. Because we want to preserve the shape of the other objects, we can try using a 5 x 5 disk structuring element to clean these pixels up:
figure;
se = strel('disk', 2, 0);
final = imopen(result, se);
imshow(final);
This is what we get:
Not bad! There are some holes that we need to patch up, so let's fill in those holes with imfill:
figure;
final_noholes = imfill(final, 'holes');
imshow(final_noholes);
This is what we get:
OK! So we have our mask. The last thing we need to do is present the image so that you only show the deep purple colours from the original image, and nothing else. That can easily be achieved with bsxfun:
figure;
out = bsxfun(@times, im, uint8(final_noholes));
imshow(out);
The above operation takes your mask and multiplies every pixel in your image by it. One small thing I'd like to point out is that the mask we found in the previous step needs to be cast to uint8, because bsxfun requires that the operands of the multiplication (or whatever operation you perform) be of the same type. The mask is implicitly replicated in 3D so that you mask out the unwanted RGB pixels and only keep the ones you are looking for.
This is what we finally get:
As you can see, it isn't perfect, but it's certainly enough to get you started. Those thresholds are what are important, but with some very simple thresholding, I extracted most of the purple pixels out.
To make it easier for you, here's the code that I wrote above that can easily be copied and pasted into MATLAB for you to run:
clear all; close all; clc;
im = imread('https://www.cdc.gov/dpdx/images/malaria/ovale/Po_gametocyte_thickB.jpg');
hsv = rgb2hsv(im2double(im));
figure;
for idx = 1 : 3
subplot(1,3,idx);
imshow(hsv(:,:,idx));
end
sThresh = hsv(:,:,2) > 0.6 & hsv(:,:,2) < 0.9;
vThresh = hsv(:,:,3) > 0.4 & hsv(:,:,3) < 0.65;
figure;
result = sThresh | vThresh;
imshow(result);
figure;
se = strel('disk', 2, 0);
final = imopen(result, se);
imshow(final);
figure;
final_noholes = imfill(final, 'holes');
imshow(final_noholes);
figure;
out = bsxfun(@times, im, uint8(final_noholes));
imshow(out);
Good luck!
Try this:
function main
clc,clear
A = imread('https://www.cdc.gov/dpdx/images/malaria/ovale/Po_gametocyte_thickB.jpg');
subplot(1,2,1)
imshow(A)
RGB = [230 210 200]; % color you want
e = 40; % color shift
B = pix_in(A,RGB,e);
B = B + 255.*uint8(~B); % choosing white background
subplot(1,2,2)
imshow(B)
end
function B = pix_in(A,RGB,e)
% select specific pixels in image
% A - color image (3D matrix uint8)
% RGB - [R G B] - color to select
% e - color shift/deviation
A = double(A); % for same class operations (RGB - double)
[m, n, ~] = size(A);
RGB = reshape(RGB,1,1,3);
RGB = repmat(RGB,m,n,1); % creating 3D matrix
b = abs(A-RGB) < e; % logical 3D
b = sum(b,3) == 3; % if [R,G,B] of a pixel in range
B = A.*repmat(b,1,1,3); % selecting pixels those in range
B = uint8(B);
end

Matlab and OpenCV calculate different image moment m00 for the same image

For exactly the same image
Opencv Code:
Mat img = imread("testImg.png",0);
Mat img_bw;
threshold(img, img_bw, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
Mat tmp;
img_bw.copyTo(tmp);
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(tmp, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// Get the moment
vector<Moments> mu(contours.size() );
for( int i = 0; i < contours.size(); i++ )
{ mu[i] = moments( contours[i], false );
}
// Display area (m00)
for( int i = 0; i < contours.size(); i++ )
{
cout<<mu[i].m00 <<endl;
// I also tried the code
//cout<<contourArea(contours.at(i))<<endl;
// But the result is the same
}
Matlab code:
Img = imread('testImg.png');
lvl = graythresh(Img);
bw = im2bw(Img,lvl);
stats = regionprops(bw,'Area');
for k = 1:length(stats)
Area = stats(k).Area; %m00
end
Does anyone have any thoughts on it? How can I unify them? I think they use different methods to find contours.
I uploaded the test image at the link below so that someone who is interested in this can reproduce the procedure
It is a 100 by 100 small 8 bit grayscale image with only 0 and 255 pixel intensity. For simplicity, it only has one blob on it.
For OpenCV, the area of contour (image moment m00) is 609.5 (Very odd value)
For Matlab, the area of contour (image moment m00) is 763.
Thanks
Many different definitions exist of how contours should be extracted from a binary image. For example, a contour can be the polygon that forms the perimeter of a white object in a binary image. If this definition were used by OpenCV, then the areas of the contours would be the same as the areas of the connected components found by MATLAB. But this is not the case. The contour found by the findContours() function is the polygon that connects the centers of neighboring "edge pixels". An edge pixel is a white pixel that has a black neighbor in its N4 neighborhood.
Example: suppose you have an image whose size is 100x100 pixels. Every pixel above the diagonal is black. Every pixel below or on the diagonal is white (a black triangle and a white triangle). The exact separation polygon will have almost 200 vertices at a distance of 1 pixel: (0,0), (1,0), (1,1), (2,1), (2,2), ..., (100,99), (100,100), (0,100). As you can see, this definition is not very good from a practical point of view. The polygon returned by OpenCV will have exactly the 3 vertices needed to define the triangle: (0,0), (99,99), (0,99). Its area is (99 x 99 / 2) pixels. It is not equal to the number of white pixels. It is not even an integer. But this polygon is more practical than the previous one.
Those are not the only possible definitions for polygon extraction. Many other definitions exist. Some of them (in my opinion) may be better than the one used by OpenCV. But this is the one that was implemented and is used by a lot of people.
Currently there is no effective workaround for your problem. If you want to get exactly the same numbers from MATLAB and OpenCV, you will have to draw the contours found by findContours on some black image and use the function moments() on that image. I know the upcoming OpenCV 3 has a function that finds connected components, but I haven't tried it myself.
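If you want to see the two area definitions side by side in MATLAB, here is a hedged sketch: regionprops counts pixels, while polyarea over a boundary traced through pixel centers (similar in spirit to what findContours returns) gives the smaller polygon area.
bw = false(100); bw(30:70, 30:70) = true;     % a 41x41 square blob in a 100x100 image
s  = regionprops(bw, 'Area');                 % pixel-count area, as MATLAB reports: 41*41 = 1681
B  = bwboundaries(bw); B = B{1};              % boundary traced through pixel centers
areaPolygon = polyarea(B(:,2), B(:,1));       % polygon area, closer to findContours/moments: 40*40 = 1600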

Extract black objects from color background

It is easy for human eyes to tell black from other colors. But how about computers?
I printed some color blocks on normal A4 paper. Since there are three kinds of ink composing a color image, cyan, magenta and yellow, I set the color of each block to C=20%, C=30%, C=40%, C=50%, with the other two colors at 0. That is the first column of my source image. So far, no black ink (the K of CMYK) is supposed to be printed. After that, I set the color of each dot to K=100% with the other colors at 0 to print black dots.
You may feel my image is weird and awful. In fact, the image is magnified 30 times, and how the ink cheats our eyes can be seen clearly. The color strips hamper my ability to recognize these black dots (each dot is printed as just one pixel at 800 dpi). Without the color background, I used to blur and run a Canny edge detector to extract the edges. However, when the color background is added, simply converting to grayscale and running the edge detector cannot get good results because of the strips. How do my eyes manage to solve such a problem?
I decided to check the brightness of the source image. I referred to this article and formula:
brightness = sqrt( 0.299 R * R + 0.587 G * G + 0.114 B * B )
This brightness is closer to human perception, and it works very well on the yellow background because the brightness of yellow is the highest compared with cyan and magenta. But how can I make the cyan and magenta strips as bright as possible? The expected result is that all the strips disappear.
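For reference, that perceived-brightness formula can be computed per pixel in MATLAB like this (a small sketch; the file name is hypothetical):
im     = im2double(imread('sample.png'));        % hypothetical file name
bright = sqrt(0.299*im(:,:,1).^2 + 0.587*im(:,:,2).^2 + 0.114*im(:,:,3).^2);
figure; imshow(bright, []);                      % perceived-brightness image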
More complicated image:
C=40%, M=40%
C=40%, Y=40%
Y=40%, M=40%
FFT result of C=40%, Y=40% brightness image
Anyone can give me some hints to remove the color strips?
@natan I tried the FFT method you suggested, but I was not lucky enough to get a peak on both the x and y axes. In order to plot the frequency as you did, I resized my image to be square.
I would convert the image to the HSV colour space and then use the Value channel. This basically separates colour and brightness information.
This is the 50% cyan image
Then you can just do a simple threshold to isolate the dots.
I just did this very quickly and I'm sure you could get better results. Maybe find contours in the image and then remove any contours with a small area, to filter any remaining noise.
After inspecting the images, I decided that a robust threshold will be simpler than anything else. For example, looking at the C=40%, M=40% photo, I first inverted the intensities so that black (the signal) becomes white, using just
im=(abs(255-im));
we can inspect its RGB histograms using this :
hist(reshape(single(im),[],3),min(single(im(:))):max(single(im(:))));
colormap([1 0 0; 0 1 0; 0 0 1]);
So we see that there is a large contribution at some middle intensity, whereas the "signal", which is now white, is mostly separated towards higher values. I then applied a simple threshold as follows:
thr = @(d) (max([min(max(d,[],1)) min(max(d,[],2))])) ;
for n=1:size(im,3)
imt(:,:,n)=im(:,:,n).*uint8(im(:,:,n)>1.1*thr(im(:,:,n)));
end
imt=rgb2gray(imt);
and got rid of objects smaller than some typical area size
min_dot_area=20;
bw=bwareaopen(imt>0,min_dot_area);
imagesc(bw);
colormap(flipud(bone));
here's the result together with the original image:
The origin of this threshold is from this code I wrote that assumed sparse signals in the form of 2-D peaks or blobs in a noisy background. By sparse I mean that there is no pile-up of peaks. In that case, when projecting max(image) onto the x or y axis (by max(im,[],1) or max(im,[],2)) you get a good measure of the background. That is because you take the minimal intensity of the max(im) vector.
If you want to look at this differently, you can look at the histogram of the intensities of the image. The background is supposed to be a roughly normal distribution around some intensity, and the signal should be higher than that intensity but with a much lower number of occurrences. By finding max(im) along one of the axes (x or y) you discover what the maximal noise level was.
You'll see that the threshold picks the point in the histogram where there is still some noise above it, but ALL of the signal is above it too. That's why I adjusted it to 1.1*thr. Lastly, there are many fancier ways to obtain a robust threshold; this is a quick and dirty way that in my view is good enough...
Thanks to everyone for posting their answers! After some searching and experimenting, I also came up with an adaptive method to extract these black dots from the color background. It seems that considering only the brightness cannot solve the problem perfectly. Therefore natan's method, which calculates and analyzes the RGB histogram, is more robust. Unfortunately, I still cannot obtain a robust threshold to extract the black dots in other color samples, because things get more and more unpredictable when we add a deeper color (e.g. cyan > 60) or mix two colors together (e.g. cyan = 50, magenta = 50).
One day, I googled "extract color" and TinEye's color extraction and Color Thief inspired me. Both of them are very cool applications, and the image processed by the former website is exactly what I want. So I decided to implement something similar on my own. The algorithm I used here is k-means clustering. Some other related keywords to search for are color palette, color quantization and dominant color.
I firstly apply Gaussian filter to smooth the image.
GaussianBlur(img, img, Size(5, 5), 0, 0);
OpenCV has a kmeans function, which saved me a lot of coding time. I modified this code.
// Input data should be float32
Mat samples(img.rows * img.cols, 3, CV_32F);
for (int i = 0; i < img.rows; i++) {
for (int j = 0; j < img.cols; j++) {
for (int z = 0; z < 3; z++) {
samples.at<float>(i + j * img.rows, z) = img.at<Vec3b>(i, j)[z];
}
}
}
// Select the number of clusters
int clusterCount = 4;
Mat labels;
int attempts = 1;
Mat centers;
kmeans(samples, clusterCount, labels, TermCriteria(CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 10, 0.1), attempts, KMEANS_PP_CENTERS, centers);
// Draw clustered result
Mat cluster(img.size(), img.type());
for (int i = 0; i < img.rows; i++) {
for(int j = 0; j < img.cols; j++) {
int cluster_idx = labels.at<int>(i + j * img.rows, 0);
cluster.at<Vec3b>(i, j)[0] = centers.at<float>(cluster_idx, 0);
cluster.at<Vec3b>(i, j)[1] = centers.at<float>(cluster_idx, 1);
cluster.at<Vec3b>(i, j)[2] = centers.at<float>(cluster_idx, 2);
}
}
imshow("clustered image", cluster);
// Check centers' RGB value
cout << centers;
After clustering, I convert the result to grayscale and find the darkest color which is more likely to be the color of the black dots.
// Find the minimum value
cvtColor(cluster, cluster, CV_RGB2GRAY);
Mat dot = Mat::zeros(img.size(), CV_8UC1);
cluster.copyTo(dot);
int minVal = (int)dot.at<uchar>(dot.rows / 2, dot.cols / 2);
for (int i = 0; i < dot.rows; i += 3) {
for (int j = 0; j < dot.cols; j += 3) {
if ((int)dot.at<uchar>(i, j) < minVal) {
minVal = (int)dot.at<uchar>(i, j);
}
}
}
inRange(dot, minVal - 5 , minVal + 5, dot);
imshow("dot", dot);
Let's test two images.
(clusterCount = 4)
(clusterCount = 5)
One shortcoming of k-means clustering is that one fixed clusterCount cannot be applied to every image. Also, clustering is not so fast for larger images. That's the issue that annoys me a lot. My dirty method for better real-time performance (on iPhone) is to crop 1/16 of the image and cluster the smaller area. Then compare all the pixels in the original image with each cluster center, and pick the pixels that are nearest to the "black" cluster center. I simply calculate the Euclidean distance between two RGB colors.
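The "cluster a small crop, then compare every pixel with each cluster center" idea can be sketched in MATLAB like this (kmeans and pdist2 need the Statistics and Machine Learning Toolbox; the file name and cluster count are placeholders):
im   = im2double(imread('dots.png'));               % hypothetical file name
crop = im(1:round(end/4), 1:round(end/4), :);       % cluster only a small crop for speed
[~, centers] = kmeans(reshape(crop, [], 3), 4);     % 4 RGB cluster centers
[~, darkest] = min(sum(centers.^2, 2));             % the center nearest to black
pix  = reshape(im, [], 3);                          % all pixels of the full image
d    = pdist2(pix, centers);                        % Euclidean distance of each pixel to each center
[~, nearest] = min(d, [], 2);                       % nearest center per pixel
mask = reshape(nearest == darkest, size(im,1), size(im,2));   % pixels belonging to the "black" cluster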
A simple method is to just threshold all the pixels. Here is this idea expressed in pseudo code.
for each pixel in image
if brightness < THRESHOLD
pixel = BLACK
else
pixel = WHITE
Or if you're always dealing with cyan, magenta and yellow backgrounds then maybe you might get better results with the criteria
if pixel.r < THRESHOLD and pixel.g < THRESHOLD and pixel.b < THRESHOLD
This method will only give good results for easy images where nothing except the black dots is too dark.
You can experiment with the value of THRESHOLD to find a good value for your images.
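In MATLAB the pseudo code above collapses to a few vectorized lines; a small sketch (the file name and threshold value are placeholders to experiment with):
im  = imread('dots.png');                 % hypothetical file name
thr = 60;                                 % experiment with this value for your images
g   = rgb2gray(im);                       % brightness
out = repmat(uint8(255), size(g));        % start all white
out(g < thr) = 0;                         % dark pixels become black
imshow(out);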
I suggest converting to some chroma-based color space, like LCH, and adjusting simultaneous thresholds on lightness and chroma. Here is the resulting mask for L < 50 & C < 25 for the input image:
Seems like you need adaptive thresholds since different values work best for different areas of the image.
You may also use HSV or HSL as a color space, but they are less perceptually uniform than LCH, derived from Lab.
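A hedged MATLAB sketch of the LCH idea (Lab via rgb2lab, chroma = sqrt(a^2 + b^2); the L < 50 & C < 25 values are the ones quoted above and may need tuning):
im   = im2double(imread('dots.png'));     % hypothetical file name
lab  = rgb2lab(im);                       % CIE Lab (L in 0..100, a/b signed)
L    = lab(:,:,1);
C    = hypot(lab(:,:,2), lab(:,:,3));     % chroma component of LCH
mask = (L < 50) & (C < 25);               % dark and nearly colorless = candidate black dots
imshow(mask);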