Matlab Kinect Depth Imaging

I'm working with the Kinect camera and trying to display real-life depth imaging using the ptCloud method, combining the RGB and depth sensors. However, with just the initial setup my image is disfigured and missing pertinent information. Is there any way to improve this so that it captures more data? I have also attached an image of what I mean. Any help would be great, thank you!
colorDevice = imaq.VideoDevice('kinect',1);
depthDevice = imaq.VideoDevice('kinect',2);

% Warm up both devices before the first real capture.
step(colorDevice);
step(depthDevice);

colorImage = step(colorDevice);
depthImage = step(depthDevice);

gridstep = 0.1;   % currently unused; would be the grid size for pcdownsample

ptCloud = pcfromkinect(depthDevice,depthImage,colorImage);

player = pcplayer(ptCloud.XLimits,ptCloud.YLimits,ptCloud.ZLimits,...
    'VerticalAxis','y','VerticalAxisDir','down');
xlabel(player.Axes,'X (m)');
ylabel(player.Axes,'Y (m)');
zlabel(player.Axes,'Z (m)');

for i = 1:1000
    colorImage = step(colorDevice);
    depthImage = step(depthDevice);
    ptCloud = pcfromkinect(depthDevice,depthImage,colorImage);
    ptCloudOut = pcdenoise(ptCloud);   % remove outliers before display
    view(player,ptCloudOut);
end

release(colorDevice);
release(depthDevice);

From the looks of the image, you are trying to capture a cabinet with a TV screen in the middle. In cases like these, the TV screen absorbs the IR emitted by the sensor, or reflects it at oblique angles or through multiple reflections, so the Kinect is unable to capture depth data there. Furthermore, when you display the RGB data on top of the point cloud, it tries to align the two and rejects any depth data that is not aligned with the RGB image pixels.
So, to improve your depth data acquisition, make sure there are no reflective surfaces such as screens or mirrors in the scene. Also, try displaying the depth data without the RGB overlay, which should improve the point cloud shown.
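If it helps, here is a minimal sketch of the depth-only display, assuming the same device setup as in the question (pcfromkinect also accepts just the depth image, in which case no RGB alignment is performed):
% Capture and show an uncolored point cloud; no RGB alignment involved.
depthImage = step(depthDevice);
ptCloud = pcfromkinect(depthDevice,depthImage);
player = pcplayer(ptCloud.XLimits,ptCloud.YLimits,ptCloud.ZLimits);
view(player,ptCloud);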

Related

Detect the position, orientation and color in Matlab of non-overlapped tiles to be picked by a robot

I am currently working on a project where I need to find the square tiles in a pile that are not overlapped, and to determine the orientation, position (center), and color of each tile. These orientations and positions will be used as input for a robot to pick the tiles and sort them into specific locations. I am using Matlab, and I have to transfer the data over TCP/IP.
I've experimented with edge detection (Canny, Sobel), boundary finding, and segmentation using thresholding and FCM, but I haven't found a reliable way to determine which tiles are not overlapped. I am trying to use template shape matching, but I don't know how to do that. This needs to be done in real time, as I will be working on frames taken from a USB camera attached to a PC.
I was wondering if someone could offer a reliable solution to determine the square tiles which are not overlapped? Here is a sample image.
You've separated the image into tiles and background, so now simply label all the connected components. Take each one and test it for single-tile-ness. If you know the approximate size of the tiles, first exclude components by area. Then calculate the centroid and the extreme left, right, top, and bottom points. If the component is a single tile, the lines joining top to bottom and left to right will intersect approximately at the centroid, and their half-angles will be perpendicular to the tile edges. So rotate the component upright, take the bounding box, and count the unset pixels, which should be almost zero for a rectangular tile.
(You'll probably need to do a morphological operation or two to clean up the image if the tile/background separation is a bit dicey.)
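A minimal Matlab sketch of that test, assuming bw is your binary tile/background image and tileArea is the expected single-tile area in pixels (both names are placeholders):
stats = regionprops(bw,'Area','Image');
for k = 1:numel(stats)
    s = stats(k);
    % 1) Exclude blobs whose area is far from a single tile's.
    if s.Area < 0.8*tileArea || s.Area > 1.2*tileArea
        continue;
    end
    % 2) Sweep rotations; at the right angle a lone square tile
    %    fills almost all of its bounding box.
    best = 0;
    for ang = 0:5:85
        rot = imrotate(s.Image,ang);
        [r,c] = find(rot);
        crop = rot(min(r):max(r),min(c):max(c));
        best = max(best, nnz(crop)/numel(crop));
    end
    if best > 0.9
        fprintf('Component %d looks like a single tile.\n',k);
    end
end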
Check out the binary image processing library: http://malcolmmclean.github.io/binaryimagelibrary
Thanks for your quick reply. I already did some morphological operations and found the connected components; below is my code in Matlab. Each tile has an area of 2.5 x 2.5 cm.
a = imread('origenal image.jpg');
I = rgb2gray(a);
imshow(I)

% Binarize with Otsu's threshold so the morphology runs on a binary image.
threshold = graythresh(I);
bw0 = im2bw(I,threshold);

se1 = strel('diamond',2);
I1 = imerode(bw0,se1);
figure(1)
imshow(I1);

bw = imclose(I1,ones(25));
imshow(bw)

CC = bwconncomp(bw);
L = labelmatrix(CC);
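From there, one possible way to read off position, orientation, and color per labelled component (a sketch; it reuses a, the original RGB image from above):
% Measure each labelled component and sample its mean color.
R = a(:,:,1); G = a(:,:,2); B = a(:,:,3);
stats = regionprops(L,'Centroid','Orientation');
for k = 1:numel(stats)
    mask = (L == k);
    meanColor = [mean(double(R(mask))), mean(double(G(mask))), mean(double(B(mask)))];
    fprintf('Tile %d: center (%.1f, %.1f), angle %.1f deg, RGB (%.0f %.0f %.0f)\n', ...
        k, stats(k).Centroid, stats(k).Orientation, meanColor);
end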

Offline point cloud creation from Kinect V2 RGB and Depth images

I have a saved set of data captured with a Kinect V2 using the Kinect SDK. Data are in the form of RGB image, depth image, and colored point cloud. I used C# for this.
Now I want to create the point cloud separately using only the saved color and depth images, but in Matlab.
The pcfromkinect Matlab function requires a live Kinect. But I want to generate the point cloud without a connected Kinect.
Any ideas, please?
I found the following related questions, but none of them gives a clear answer:
Convert kinect RGB and depth values to XYZ coordinates
How to Convert Kinect rgb and depth images to Real world coordinate xyz?
generate a point cloud from a given depth image-matlab Computer Vision System Toolbox
I have done the same for my application. So here is a brief overview of what I have done:
Save the data (C#/Kinect SDK):
How to save a depth Image:
MultiSourceFrame mSF = (MultiSourceFrame)reference;
var frame = mSF.DepthFrameReference.AcquireFrame();
if (frame != null)
{
    using (KinectBuffer depthBuffer = frame.LockImageBuffer())
    {
        Marshal.Copy(depthBuffer.UnderlyingBuffer, targetBuffer, 0, DEPTH_IMAGESIZE);
    }
    frame.Dispose();
}
Write the buffer to a file:
File.WriteAllBytes(filePath + fileName, targetBuffer.Buffer);
For fast saving, consider a ring buffer.
Read in the data (Matlab)
How to get the z-data:
fid = fopen(fileNameImage,'r');
img = fread(fid,[IMAGE_WIDTH*IMAGE_HEIGHT,1],'uint16');
fclose(fid);
img = reshape(img,IMAGE_WIDTH,IMAGE_HEIGHT);
How to get the XYZ data:
For that, think of the pinhole camera model, which converts uv-coordinates plus depth to xyz. With focal lengths fx, fy and principal point cx, cy, each depth pixel (u, v) with depth z back-projects as:
x = (u - cx) * z / fx
y = (v - cy) * z / fy
To get the camera matrix K, you need to calibrate your camera (Matlab calibration app) or get the camera parameters from the Kinect SDK (var cI = kinectSensor.CoordinateMapper.GetDepthCameraIntrinsics();).
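A minimal Matlab sketch of that back-projection, assuming img is the depth image read in above (values in millimeters) and fx, fy, cx, cy are the intrinsics from your calibration or from GetDepthCameraIntrinsics (all placeholder names):
% Back-project every depth pixel through the pinhole model.
[u,v] = meshgrid(1:IMAGE_WIDTH, 1:IMAGE_HEIGHT);
z = double(img')/1000;            % mm -> m; transpose to height-by-width
x = (u - cx).*z/fx;
y = (v - cy).*z/fy;
xyz = [x(:), y(:), z(:)];
pcshow(pointCloud(xyz));          % Computer Vision System Toolbox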
Using the CoordinateMapper from the SDK:
Getting XYZ directly from the Kinect SDK is quite a bit easier; this link could help you. Just get the buffer via the Kinect SDK and convert the raw data with the CoordinateMapper to xyz. Afterwards, save it to CSV or TXT so it is easy to read into Matlab.

vision.PeopleDetector function in Matlab

Has anyone ever used the vision.PeopleDetector function from the Computer Vision System Toolbox in Matlab?
I've installed it and tried applying it to images I have.
Although it detects people in the training image, it detects nothing in real photos: either it doesn't detect people at all, or it detects people in parts of the image where none are present.
Could anyone share the experience of using this function?
Thanks a lot!
Here is a sample image:
The vision.PeopleDetector object does indeed detect upright standing people in images. However, like most computer vision algorithms it is not 100% accurate. Can you post a sample image where it fails?
There are several things you can try to improve performance.
Try changing the ClassificationModel parameter to 'UprightPeople_96x48'. There are two models that come with the object, trained on different data sets.
How big (in pixels) are the people in your image? If you use the default 'UprightPeople_128x64' model, then you will not be able to detect a person smaller than 128x64 pixels. Similarly, for the 'UprightPeople_96x48' model the smallest size person you can detect is 96x48. If the people in your image are smaller than that, you can up-sample the image using imresize.
Try reducing the ClassificationThreshold parameter to get more detections.
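For reference, a minimal sketch combining those suggestions (the file name, scale factor, and threshold value here are just placeholders to experiment with):
% Smaller model plus a lower threshold, on an up-sampled image.
detector = vision.PeopleDetector('ClassificationModel','UprightPeople_96x48', ...
    'ClassificationThreshold',0.5);
img = imresize(imread('people.jpg'),2);
bboxes = step(detector,img);
out = insertObjectAnnotation(img,'rectangle',bboxes,'person');
imshow(out);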
Edit:
Some thoughts on your particular image. My guess would be that the people detector is not working well here because it was not trained on this kind of image. The training sets for both models consist of natural images of pedestrians. Ironically, the fact that your image has a perfectly clean background may be throwing the detector off.
If this image is typical of what you have to deal with, then I have a few suggestions. One possibility is to use simple thresholding to segment out the people. The other is to use vision.CascadeObjectDetector to detect the faces or the upper bodies, which happens to work perfectly on this image:
im = imread('postures.jpg');
detector = vision.CascadeObjectDetector('ClassificationModel', 'UpperBody');
bboxes = step(detector, im);
im2 = insertObjectAnnotation(im, 'rectangle', bboxes, 'person', 'Color', 'red');
imshow(im2);

Varying intensity in the frames captured by camera (uEye)

I have been using Matlab to capture images from a uEye camera at regular intervals and use them for processing. The following is the small piece of code that I am using to achieve that:
h=actxcontrol('UEYECAM.uEyeCamCtrl.1','position',[250 100 640 480]);
d=h.InitCamera(1);
check = 1;
str_old = 'img000.jpeg';
m = h.SaveImage('img000.jpeg');
pause(60);
And the following are the images captured by the camera. There was no change in the lighting conditions outside, but you can notice the difference in the intensity levels of the images captured by the camera.
Is there any reason for this?
Solved thanks to Zaphod:
Allow some time for the camera to adjust its exposure. I did this by moving the pause statement to just after the InitCamera() command, to delay the image capture and give the camera enough time to adjust itself.
h = actxcontrol('UEYECAM.uEyeCamCtrl.1','position',[250 100 640 480]);
d = h.InitCamera(1);
pause(60);   % give the camera time to auto-adjust its exposure before capturing
check = 1;
str_old = 'img000.jpeg';
m = h.SaveImage('img000.jpeg');

How do I binarize a CGImage using OpenCV on iOS?

In my iOS project, I have a CGImage in RGB that I'd like to binarize (convert to black and white). I would like to use OpenCV to do this, but I'm new to OpenCV. I found a book on OpenCV, but it was not for iPhone.
How can I binarize such an image using OpenCV on iOS?
If you don't want to set up OpenCV in your iOS project, my open source GPUImage framework has two threshold filters within it for binarization of images, a simple threshold and an adaptive one based on local luminance near a pixel.
You can apply a simple threshold to an image and then extract a resulting binarized UIImage using code like the following:
UIImage *inputImage = [UIImage imageNamed:@"inputimage.png"];
GPUImageLuminanceThresholdFilter *thresholdFilter = [[GPUImageLuminanceThresholdFilter alloc] init];
thresholdFilter.threshold = 0.5;
UIImage *thresholdedImage = [thresholdFilter imageByFilteringImage:inputImage];
(release the above filter if not using ARC in your application)
If you wish to display this image to the screen instead, you can send the thresholded output to a GPUImageView. You can also process live video with these filters, if you wish, because they are run entirely on the GPU.
Take a look at cv::threshold() and pass thresholdType as cv::THRESH_BINARY:
double cv::threshold(const cv::Mat& src,
                     cv::Mat& dst,
                     double thresh,
                     double maxVal,
                     int thresholdType)
This example uses the C interface of OpenCV to convert an image to black & white.
What you want to do is remove the low rate of changes and keep the high rate of changes; this is a high-pass filter. I only have experience with audio signal processing, so I don't really know what options are available to you, but that is the direction I would be looking in.