Offline point cloud creation from Kinect V2 RGB and Depth images - matlab

I have a saved set of data captured with a Kinect V2 using the Kinect SDK. Data are in the form of RGB image, depth image, and colored point cloud. I used C# for this.
Now I want to create the point cloud separately using only the saved color and depth images, but in Matlab.
The pcfromkinect Matlab function requires a live Kinect. But I want to generate the point cloud without a connected Kinect.
Any ideas, please?
I found the following related questions, but none of them gives a clear answer:
Convert kinect RGB and depth values to XYZ coordinates
How to Convert Kinect rgb and depth images to Real world coordinate xyz?
generate a point cloud from a given depth image-matlab Computer Vision System Toolbox

I have done the same for my application. So here is a brief overview of what I have done:
Save the data (C#/Kinect SDK):
How to save a depth Image:
// Grab the depth frame from the multi-source frame and copy its raw
// 16-bit pixels into targetBuffer.
MultiSourceFrame mSF = (MultiSourceFrame)reference;
var frame = mSF.DepthFrameReference.AcquireFrame();
if (frame != null)
{
    using (KinectBuffer depthBuffer = frame.LockImageBuffer())
    {
        Marshal.Copy(depthBuffer.UnderlyingBuffer, targetBuffer, 0, DEPTH_IMAGESIZE);
    }
    frame.Dispose();
}
write buffer to file:
File.WriteAllBytes(filePath + fileName, targetBuffer.Buffer);
For fast saving, think about a ring buffer.
Read in the data (MATLAB)
How to get the z-data:
fid = fopen(fileNameImage, 'r');
img = fread(fid, [IMAGE_WIDTH*IMAGE_HEIGHT, 1], 'uint16');   % raw 16-bit depth values
fclose(fid);
img = reshape(img, IMAGE_WIDTH, IMAGE_HEIGHT);   % transpose (img') if you need height x width
How to get the XYZ data:
For that, use the pinhole-model formula to convert the uv pixel coordinates plus depth into xyz:
x = (u - cx) * z / fx,   y = (v - cy) * z / fy,   z = depth(u, v)
To get the camera matrix K (the focal lengths fx, fy and the principal point cx, cy) you need to calibrate your camera (MATLAB Camera Calibrator app) or get the camera parameters from the Kinect SDK (var cI = kinectSensor.CoordinateMapper.GetDepthCameraIntrinsics();).
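A minimal MATLAB sketch of that back-projection, assuming img is the depth image from the read-in step above (values in millimetres, oriented height x width) and that the intrinsics come from your calibration or from GetDepthCameraIntrinsics (the numbers below are placeholders):

% Placeholder intrinsics - replace with your calibrated values.
fx = 365.0; fy = 365.0;              % focal lengths in pixels (assumed)
cx = 256.0; cy = 212.0;              % principal point (assumed)

z = double(img) / 1000;              % Kinect v2 depth is in millimetres -> metres
[u, v] = meshgrid(1:size(img,2), 1:size(img,1));

% Pinhole back-projection of every pixel.
x = (u - cx) .* z / fx;
y = (v - cy) .* z / fy;

xyz = cat(3, x, y, z);               % organized H x W x 3 point array
ptCloud = pointCloud(xyz);           % Computer Vision Toolbox
pcshow(ptCloud);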
coordinateMapper with the SDK:
The way to get XYZ directly from the Kinect SDK is even easier. For that, this link could help you. Just get the buffer via the Kinect SDK and convert the raw data with the coordinateMapper to xyz. Afterwards save it to csv or txt, so it is easy to read in MATLAB.
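On the MATLAB side, reading such a file back is then a small sketch like the following, assuming the C# side wrote one x,y,z row per point to a CSV (the file name is a placeholder):

xyz = readmatrix('cameraSpacePoints.csv');    % N x 3 camera-space points (assumed layout)
xyz = xyz(all(isfinite(xyz), 2), :);          % drop invalid points (the mapper often returns -Inf)
ptCloud = pointCloud(xyz);
pcshow(ptCloud);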

Related

MATLAB: webcam video acquisition

The Logitech C910 webcam spec indicates image and video capture. Because image and video capture are listed separately, I assume they are encoded and sent differently, effectively forming two different 'channels' to select from. If this understanding is incorrect, please explain how it actually works.
This reference indicates a maximum frame rate of 15 fps because of Windows.
My search turned up webcam video acquisition that consists of a series of time-lapse images 'stitched' together:
% Connect to the webcam.
cam = webcam;

% Open video file.
vidWriter = VideoWriter('frames.avi');
open(vidWriter);

% Write image frames to the file.
for index = 1:20
    % Acquire frame for processing.
    img = snapshot(cam);
    % Write frame to video.
    writeVideo(vidWriter, img);
end

% Close file and camera.
close(vidWriter);
clear cam
MATLAB has successfully captured images with the C910.
Question
If possible within MATLAB, how does one configure the webcam's *video* frame rate and save the video stream to .avi or the like (rather than writing still images to a video file as depicted above)?
Perhaps someone with experience or sharper Google skills can provide an example of bridging the webcam's video (vs. the image) stream into MATLAB. Any example that can be tested is greatly appreciated.
A good way to 'jumpstart' or accelerate newbie familiarity is with the image acquisition tool:
imaqtool
The tool seems to be a wrapper that reduces the command-line syntax to a GUI. Note that the bottom-right panel shows the command-line equivalent of each GUI interaction.
Configurable video capture parameters include:
Resolution & ROI (Region of Interest)
Frame Rate
Video type (.avi, .mp4, etc.)
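To answer the frame-rate question at the command line, here is a hedged sketch using the Image Acquisition Toolbox: videoinput plus disk logging writes the stream straight to an AVI instead of snapshotting individual frames. The adapter name, device ID and the availability of a FrameRate source property depend on your camera and driver, so treat those values as assumptions:

% Acquire from the first 'winvideo' device (adapter name and device ID are assumptions).
vid = videoinput('winvideo', 1);
src = getselectedsource(vid);

% FrameRate is a device-specific source property; it may not exist, or may only
% accept certain values for your camera - check propinfo(src) first.
% set(src, 'FrameRate', '30');

% Log directly to an AVI file instead of keeping frames in memory.
vid.LoggingMode = 'disk';
vid.DiskLogger  = VideoWriter('capture.avi', 'Motion JPEG AVI');
vid.FramesPerTrigger = 200;                % number of frames to record

start(vid);
wait(vid, 60);                             % block until acquisition finishes (up to 60 s)
while vid.DiskLoggerFrameCount ~= vid.FramesAcquired
    pause(0.1);                            % let the disk logger flush remaining frames
end
delete(vid);

Use propinfo(src) to list the properties (and the frame rates) your particular camera actually exposes.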

Kinect v1 with Processing to .obj to unity [closed]

I want to create a hologram, captured with the Kinect, that is displayed on a HoloLens. But the pipeline is very slow.
I use this tutorial to collect point cloud data, and this library to export my data as a 3D object in .obj format. The library that exports the .obj doesn't accept points, so I had to draw little triangles. I save the .obj, .png and .mtl files on my local XAMPP server.
Next, I download the files with a Unity script and a WWW object. I also use the Runtime OBJ Importer from Unity's Asset Store to create a 3D object at runtime.
The last part is to deploy the Unity app to a HoloLens (I will do that next).
But before that,
The process works, but it is very slow, and I want the hologram to be fluid. A lot of time is wasted on:
taking the depth and RGB data from the Kinect
exporting the data to .obj, .png and .mtl files
downloading the files into Unity as frequently as possible
rendering the files
I am thinking about streaming, but does Unity need a complete .obj file to render? If I compress the .png to .jpg, will I gain some time?
Do you have some pointers to help me ?
Currently the way your question is phrased is confusing: it's unclear whether you want to record point clouds that you later load and render in Unity, or whether you want to stream the point cloud with the aligned RGB texture to Unity in close to real time.
Your initial attempt uses Processing.
In terms of recording data, I recommend using the SimpleOpenNI library which can record both depth and RGB data to an .oni file (see the RecorderPlay example).
Once you have a recording, you can loop through each frame and for each frame store the vertices to a file.
In terms of saving to .obj you'll need to convert the point cloud to a mesh (triangulate the vertices in 3D).
Another option would be to store the point cloud to a format like .ply.
You can find more info on writing to a .ply file in Processing in this answer.
In terms of streaming the data, this will be complicated:
if you stream all the vertices, that's a lot of data: up to 921,600 floats ((640 x 480 = 307,200 points) * 3)
if you stream both the depth (11-bit 640x480) and RGB (8-bit 640x480) images, that is even more data.
One option might be to send only the vertices that have a valid depth and to skip points overall (e.g. send every 3rd point). In terms of sending the data you can try OSC.
Once you get the points into Unity, you should be able to render them as a point cloud there.
What would be ideal in terms of network performance is a codec (compressor/decompressor) for the depth data. I haven't used one so far, but doing a quick search I see there are options like this one (quite dated).
You'll need to do a bit of research, see what Kinect v1 depth streaming libraries are already out there, and test which works best for your scenario.
Ideally, if the library is written in C#, there's a chance you'll be able to use it to decode the received stream in Unity.

Matlab Kinect Depth Imaging

I'm working with the Kinect camera and trying to display real-life depth imaging using the ptCloud method, combining the RGB and depth sensors. However, with just the initial setup my image is disfigured and missing pertinent information. Is there any way to improve this so that it captures more data? I have also attached an image of what I mean. Any help would be great, thank you!
colorDevice = imaq.VideoDevice('kinect',1);
depthDevice = imaq.VideoDevice('kinect',2);

step(colorDevice);
step(depthDevice);

colorImage = step(colorDevice);
depthImage = step(depthDevice);

gridstep = 0.1;
ptCloud = pcfromkinect(depthDevice,depthImage,colorImage);

player = pcplayer(ptCloud.XLimits,ptCloud.YLimits,ptCloud.ZLimits,...
    'VerticalAxis','y','VerticalAxisDir','down');

xlabel(player.Axes,'X (m)');
ylabel(player.Axes,'Y (m)');
zlabel(player.Axes,'Z (m)');

for i = 1:1000
    colorImage = step(colorDevice);
    depthImage = step(depthDevice);
    ptCloud = pcfromkinect(depthDevice,depthImage,colorImage);
    ptCloudOut = pcdenoise(ptCloud);
    view(player,ptCloudOut);
end

release(colorDevice);
release(depthDevice);
From the looks of the image, you are trying to capture a cabinet with a TV screen in the middle. In cases like these, the TV screen absorbs the IR emitted by the sensor or reflects it at oblique angles or through multiple reflections, so the Kinect is unable to capture depth data there. Furthermore, when you display the RGB data on top of the point cloud, it tries to align the two and rejects any depth data that is not aligned with the RGB image pixels.
So, to improve your depth data acquisition, make sure there are no reflective surfaces such as screens or mirrors in the scene. Also, try displaying the depth data without the RGB overlay, which should improve the point cloud shown.
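As a minimal sketch of that last suggestion, the depth-only variant of pcfromkinect simply drops the colour argument (reusing the depthDevice and depthImage variables from the code above):

% Build the point cloud from depth alone, so no depth pixels are rejected
% for lacking an aligned colour pixel.
depthImage = step(depthDevice);
ptCloud = pcfromkinect(depthDevice, depthImage);
pcshow(ptCloud, 'VerticalAxis', 'y', 'VerticalAxisDir', 'down');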

How do I interpret an Intel Realsense camera depth map in MATLAB?

I was able to view and capture the image from the depth stream in MATLAB (using the webcam from the Hardware Support Package) from an F200 Intel Realsense camera. However, it does not look the same way as it does in the Camera Explorer.
What I see from MATLAB -
I have also linked Depth.mat that contains the image in the variable "D".
The image is returned as a 3-dimensional array of uint8. I assumed that each depth value is a larger number broken into bytes across the planes, so I tried bit-shifting each plane and adding it to the next while taking care of the data types, then displayed the result using imagesc, but did not get a proper depth image.
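For reference, a minimal sketch of that reconstruction attempt (the byte order, i.e. which plane holds the high byte, is an assumption and may itself be the problem):

% Load the captured frame and recombine two 8-bit planes into a 16-bit value
% (assumed: plane 1 = low byte, plane 2 = high byte).
load('Depth.mat');                                   % provides the variable D
D16 = bitor(uint16(D(:,:,1)), bitshift(uint16(D(:,:,2)), 8));
imagesc(D16); axis image; colorbar;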
How do I properly interpret this image? Or, is there an alternate way to capture images in MATLAB?

Camera Calibration Kinect Vision Caltech IR camera

I am trying to calibrate the IR camera of the new Kinect v2 sensor, following all the steps from here: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html
The problem I am having is the following:
The IR image looks fine, but once I put it through the program, the image I am getting is mostly white (bright). See the pictures below.
Anyone encountered this issue before?
Thanks
You are not reading the IR image pixels correctly. The format is 16 bits per pixel, with only the high 10 ones used (see specification here). You are probably visualizing them as if they were 8bpp images, and therefore they end up white-saturated.
The simplest thing you can do is downshift the values by 8 bits (i.e. divide by 256) before interpreting them in a "standard" 8bpp image.
However, in Matlab you can simply use imagesc to display them with color scaling.
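A minimal MATLAB sketch of both suggestions, assuming the raw IR frame has already been loaded into a 16-bit array ir16 (the capture step is not shown):

% Option 1: downshift by 8 bits so the values fit into a standard 8bpp image.
ir8 = uint8(bitshift(ir16, -8));     % equivalent to dividing by 256
imshow(ir8);

% Option 2: let MATLAB scale the raw 16-bit values for display.
figure;
imagesc(ir16); colormap(gray); axis image; colorbar;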