Get Intel RealSense Depth Stream in Unity

I'm currently trying to get the depth stream of the new RealSense generation (D435, SDK 2.0) as a Texture2D in Unity. I can easily access the regular RGB stream as a WebCamTexture, but when I try to get the depth stream, I get this error:
Could not connect pins - RenderStream()
Unity recognizes the depth camera but can't display it.
I also tried the prefabs of the Unity Wrapper, but they don't really work for my project. With the prefabs, I can get the data into an R16 texture. Does anyone have an idea how I can get the depth information at a certain point in the image (GetPixel() doesn't work for R16 textures...)? I'd prefer a WebCamTexture stream; if that doesn't work, I'll have to store the information in a different way...

What I did to get depth data was to create my own class inheriting from RawImage. I used my custom class as the target for the depth render stream and got the image from the texture component of my class.
Binding to custom class
In my case I wanted to convert the 16-bit depth data to an 8-bit-per-channel RGB PNG so that I could export it as a greyscale image. Here's how I parsed the image data:
//get the raw 16-bit depth data from the texture
//(remember to convert to Texture2D first)
byte[] input = texture.GetRawTextureData();
int width = texture.width;
int height = texture.height;
//create an array of pixels to hold the converted values
Color[] pixels = new Color[width * height];
//converts R16 values to 32-bit greyscale pixels
for (int i = 0; i < input.Length; i += 2)
{
    //combine two bytes into a 16-bit number
    UInt16 num = System.BitConverter.ToUInt16(input, i);
    //turn it into a float in the range 0->1
    float greyValue = (float)num / 2048.0f;
    //make pixels outside the measuring range invisible
    float alpha = (num >= 2048 || num <= 0) ? 0.0f : 1.0f;
    //set the grey value of the pixel based on the float
    pixels[i / 2] = new Color(greyValue, greyValue, greyValue, alpha);
}
To get the pixels, you can simply access the new pixels array.
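If you only need the depth at a single point (as in the original question), you can also index the raw data directly instead of building a full Color array. A minimal sketch, assuming the texture really is R16 and (x, y) lies inside its bounds:

//read the 16-bit depth value at (x, y) straight from the raw R16 data
//(2 bytes per pixel, row-major, as returned by GetRawTextureData)
ushort DepthAt(Texture2D depthTexture, int x, int y)
{
    byte[] raw = depthTexture.GetRawTextureData();
    int byteIndex = (y * depthTexture.width + x) * 2;
    return System.BitConverter.ToUInt16(raw, byteIndex);
}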

Related

Set pixel in RFloat texture

I want to implement an algorithm on the GPU using Graphics.Blit. The input values are floats and the output values are also floats. I create a texture with the RFloat format and want to set values for every pixel. How can I do that? According to the Unity manual, SetPixels doesn't work:
This function works only on ARGB32, RGB24 and Alpha8 texture formats.
For other formats SetPixels is ignored.
The algorithm needs float precision, so neither of these formats is usable. So how can it be done?
EDIT: After more struggling with Unity RenderTextures, here is the code I came up with to transfer data to the GPU.
int res = 512;
Texture2D tempTexture = new Texture2D(res, res, TextureFormat.RFloat, false);

public void ApplyHeightsToRT(float[,] heights, RenderTexture renderTexture)
{
    RenderTexture.active = renderTexture;
    tempTexture.ReadPixels(new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0, false);
    for (int i = 0; i < tempTexture.width; i++)
        for (int j = 0; j < tempTexture.height; j++)
        {
            tempTexture.SetPixel(i, j, new Color(heights[i, j], 0, 0, 0));
        }
    tempTexture.Apply();
    RenderTexture.active = null;
    Graphics.Blit(tempTexture, renderTexture);
}
This code successfully uploads the tempTexture to RenderTexture. The inverse operation is similarly done with the following method (RenderTexture is copied to tempTexture):
public void ApplyRTToHeights(RenderTexture renderTexture, float[,] heights)
{
    RenderTexture.active = renderTexture;
    tempTexture.ReadPixels(new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0, false);
    for (int i = 0; i < tempTexture.width; i++)
        for (int j = 0; j < tempTexture.height; j++)
        {
            heights[i, j] = tempTexture.GetPixel(i, j).r;
        }
    RenderTexture.active = null;
}
To test the code, I get the heightmap of a terrain, call the first method to fill the RenderTexture with the heightmap, then call the second method to get the pixels back from the RenderTexture and put them on the terrain. It should do nothing, right?
Actually, calling the two methods one after another flips the terrain heightmap and also creates banding artifacts. Very weird. After further investigation, the reason for the flip turned out to be a format problem: the tempTexture created above the two methods is actually an ARGB32 texture, not the RFloat I hoped it would be.
This explains the flips. After changing tempTexture to an ARGB32 texture and the RenderTexture to RGBA32, the flipping went away. Now there are only banding artifacts:
That is understandable, since I'm using only 8 bits (the red channel) of both tempTexture and the RenderTexture.
So the problem is no longer about setting data on an RFloat texture. The problem is that RFloat textures are not supported on my graphics card (and probably on many other graphics devices). The problem is to find a way to transfer float arrays to the RenderTexture.
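One workaround when RFloat isn't supported is to pack each float into the four 8-bit channels of an ARGB32 texture and decode it on the GPU. Below is a minimal sketch, assuming the height values lie in [0, 1); the packing mirrors the EncodeFloatRGBA/DecodeFloatRGBA helpers from UnityCG.cginc:

//pack a float in [0, 1) into four 8-bit channels (C# port of EncodeFloatRGBA)
static Color EncodeFloatRGBA(float v)
{
    Vector4 enc = new Vector4(1f, 255f, 65025f, 16581375f) * v;
    //frac() of each component
    enc = new Vector4(enc.x - Mathf.Floor(enc.x), enc.y - Mathf.Floor(enc.y),
                      enc.z - Mathf.Floor(enc.z), enc.w - Mathf.Floor(enc.w));
    //subtract the bits carried into the next channel (enc.yzww / 255)
    enc -= new Vector4(enc.y, enc.z, enc.w, enc.w) / 255f;
    return new Color(enc.x, enc.y, enc.z, enc.w);
}

//fill an ARGB32 texture with the packed floats and upload it
var packed = new Texture2D(res, res, TextureFormat.ARGB32, false);
for (int i = 0; i < res; i++)
    for (int j = 0; j < res; j++)
        packed.SetPixel(i, j, EncodeFloatRGBA(heights[i, j]));
packed.Apply();
Graphics.Blit(packed, renderTexture);

On the GPU side, DecodeFloatRGBA(tex2D(...)) recovers the full-precision value, so the 8-bit banding disappears at the cost of using all four channels per float.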

Microsoft Kinect V2 + Unity 3D Depth = Warping

I've been working on a scene in Unity3D where I have the Kinect v2 depth information coming in at 512 x 424, and I'm converting that in real time to a mesh that is also 512 x 424. So there is a 1:1 ratio of pixel data (depth) and vertices (mesh).
My end goal is to recreate the 'Monitor 3D View' scene found in 'Microsoft Kinect Studio v2.0' using the depth data.
I've pretty much got it working in terms of the point cloud. However, there is a large amount of warping in my Unity scene. I thought it might have been down to my maths, etc.
However, I noticed that it's the same for the Kinect Unity demo supplied in their development kit.
I'm just wondering if I'm missing something obvious here? Each of my pixels (or vertices in this case) is mapped out in a 1:1 fashion.
I'm not sure if it's because I need to process the data from the DepthFrame before rendering it to the scene, or if there's some additional step I've missed to get a true representation of my room, because it looks like there's a slight 'spherical' effect being added right now.
These two images are a top down shot of my room. The green line represents my walls.
The left image is the Kinect in a Unity scene, and the right is within Microsoft Kinect Studio. Ignoring the colour difference, you can see that the left (Unity) is warped, whereas the right is linear and perfect.
I know it's quite hard to make out, especially since you don't know the layout of the room I'm sitting in :/ Side view too. Can you see the warping on the left? Use the green lines as a reference - these are straight in the actual room, as shown correctly in the right image.
Check out my video to get a better idea:
https://www.youtube.com/watch?v=Zh2pAVQpkBM&feature=youtu.be
Code C#
Pretty simple to be honest. I'm just grabbing the depth data straight from the Kinect SDK, and placing it into a point cloud mesh on the Z axis.
//called on application start
void Start() {
    _Reader = _Sensor.DepthFrameSource.OpenReader();
    _Data = new ushort[_lengthInPixels];
    _Sensor.Open();
}

//called once per frame
void Update() {
    if (_Reader != null) {
        var dep_frame = _Reader.AcquireLatestFrame();
        dep_frame.CopyFrameDataToArray(_Data);
        dep_frame.Dispose();
        dep_frame = null;
        UpdateScene();
    }
}

//update point cloud in scene
void UpdateScene() {
    //scale factor applied to the raw depth values
    const float depthAdjust = 0.1f;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int index = (y * width) + x;
            points[index] = new Vector3(points[index].x, points[index].y, _Data[index] * depthAdjust);
        }
    }
}
Kinect API can be found here:
https://msdn.microsoft.com/en-us/library/windowspreview.kinect.depthframe.aspx
Would appreciate any advice, thanks!
With thanks to Edward Zhang, I figured out what I was doing wrong.
It's down to me not projecting my depth points correctly: I need to use the CoordinateMapper to map my DepthFrame into camera space.
Currently, my code assumes an orthographic depth projection instead of a perspective depth camera. I just needed to implement this:
https://msdn.microsoft.com/en-us/library/windowspreview.kinect.coordinatemapper.aspx
//holds the mapped camera-space points
//(a class field, so UpdateScene can see it)
CameraSpacePoint[] _CameraSpace;

//called once per frame
void Update() {
    if (_Reader != null) {
        var dep_frame = _Reader.AcquireLatestFrame();
        dep_frame.CopyFrameDataToArray(_Data);
        dep_frame.Dispose();
        dep_frame = null;
        _CameraSpace = new CameraSpacePoint[_Data.Length];
        _Mapper.MapDepthFrameToCameraSpace(_Data, _CameraSpace);
        UpdateScene();
    }
}

//update point cloud in scene
//CameraSpacePoint coordinates are already in metres,
//so no extra depth scaling is needed here
void UpdateScene() {
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int index = (y * width) + x;
            points[index] = new Vector3(_CameraSpace[index].X, _CameraSpace[index].Y, _CameraSpace[index].Z);
        }
    }
}

How to get depth value of specific RGB pixels in Kinect v2 images using Matlab

I'm working with the Kinect v2 and I have to map the depth information onto the RGB images to process them: in particular, I need to know which pixels in the RGB images are within a certain range of distance (depth) along the Z axis. I'm acquiring all the data with a C# program and saving them as images (RGB) and txt files (depth).
I've followed the instructions from here and here (and I thank them for sharing), but I still have some problems I don't know how to solve.
I have calculated the rotation (R) and translation (T) matrix between the depth sensor and the RGB camera, as well as their intrinsic parameters.
I have created P3D_d (depth pixels in world coordinates relative to the depth sensor) and P3D_rgb (depth pixels in world coordinates relative to the RGB camera).
row_num = 424;
col_num = 512;
P3D_d = zeros(row_num,col_num,3);
P3D_rgb = zeros(row_num,col_num,3);
for row = 1:row_num
    for col = 1:col_num
        P3D_d(row,col,1) = (row - cx_d) * depth(row,col) / fx_d;
        P3D_d(row,col,2) = (col - cy_d) * depth(row,col) / fy_d;
        P3D_d(row,col,3) = depth(row,col);
        temp = [P3D_d(row,col,1); P3D_d(row,col,2); P3D_d(row,col,3)];
        P3D_rgb(row,col,:) = R*temp + T;
    end
end
I have created P2D_rgb_x and P2D_rgb_y.
P2D_rgb_x(:,:,1) = (P3D_rgb(:,:,1)./P3D_rgb(:,:,3))*fx_rgb+cx_rgb;
P2D_rgb_y(:,:,2) = (P3D_rgb(:,:,2)./P3D_rgb(:,:,3))*fy_rgb+cy_rgb;
but now I don't understand how to continue.
Assuming that the calibration parameters are correct, I've tried clicking on a defined point in both the depth image (coordinates: row_d, col_d) and the RGB image (coordinates: row_rgb, col_rgb), but P2D_rgb_x(row_d, col_d) is totally different from row_rgb, and P2D_rgb_y(row_d, col_d) is totally different from col_rgb.
So, what exactly do P2D_rgb_x and P2D_rgb_y mean? How can I use them to map depth values onto the RGB images, or just to get the depth of a certain RGB pixel?
I'd appreciate any suggestions or help!
PS: I also have a related post on MathWorks at this link
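In principle, P2D_rgb_x and P2D_rgb_y hold the projected pixel coordinates of each depth sample in the RGB image (the pinhole projection through fx_rgb/cx_rgb and fy_rgb/cy_rgb), so the usual final step is to round them and use them as indices. A minimal sketch of that lookup, written here in C# (the language used for acquisition) with hypothetical names mirroring the MATLAB arrays:

//build a depth image registered to the RGB frame
//(p2dRgbX/p2dRgbY are the projected coordinates computed above;
//rgbWidth/rgbHeight and depthWidth/depthHeight are image sizes - all assumed names)
float[,] registered = new float[rgbHeight, rgbWidth];
for (int row = 0; row < depthHeight; row++)
{
    for (int col = 0; col < depthWidth; col++)
    {
        int u = (int)System.Math.Round(p2dRgbX[row, col]); //RGB column
        int v = (int)System.Math.Round(p2dRgbY[row, col]); //RGB row
        //keep only projections that land inside the RGB image
        if (u >= 0 && u < rgbWidth && v >= 0 && v < rgbHeight)
            registered[v, u] = depth[row, col];
    }
}
//registered[row_rgb, col_rgb] now gives the depth (if any) of that RGB pixel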

Unity3D new UI Mask behaves wrong with runtime-generated Texture2D

In my project I have two Texture2Ds under a mask: one generated at runtime, and one stored in my project, not generated at runtime. When I try to increase their scale and move them with a scroll rect under this mask, it behaves strangely on some devices. The runtime-generated texture becomes partially invisible, though the other one works as needed.
I've already tried changing texture formats, filter modes and every property a texture has. I've configured the runtime-generated texture to have exactly the same properties as the preloaded one that works fine, but it still behaves the same.
In my code I load all textures from a specified folder with Resources.LoadAll(), and then change every visible pixel of every loaded texture to white.
maskTexturesObj is an Object[] array returned by the Resources.LoadAll() method.
Here is the code where I create my texture:
processedTexture = maskTexturesObj[i] as Texture2D;
for (int y = 0; y < processedTexture.height; y++)
{
    for (int x = 0; x < processedTexture.width; x++)
    {
        //turn every visible pixel white, keeping its transparency
        if (processedTexture.GetPixel(x, y).a > 0)
            processedTexture.SetPixel(x, y, Color.white);
    }
}
processedTexture.Apply();
lessonPartImage.sprite = Sprite.Create(processedTexture, new Rect(0, 0, processedTexture.width, processedTexture.height), Vector2.zero);
The result is shown in the screenshot:
And here is what it is supposed to look like:

(Unity3D) Paint with soft brush (logic)

During the last few days I've been coding a painting behavior for a game I'm working on, and I'm currently at a very advanced stage: I'd say 90% of the work is done and working perfectly. What I need now is to be able to draw with a "soft brush", because for now I'm painting in a "pixel style", and that was totally expected, since that's what I wrote.
My current goal consists of using this solution:
import a brush texture, this image
create an array that contains all the alpha values of that texture
when drawing, use the array elements to define the new pixels' alpha
And this is my code to do that (it's not very long; there are a lot of comments):
//The main painting method
//theObject = the object to be painted
//tmpTexture = the object's current texture
//targetTexture = the new texture
void paint(GameObject theObject, Texture2D tmpTexture, Texture2D targetTexture)
{
    //x and y are 2 floats from another class;
    //they store the coordinates of the pixel hit by the Raycast
    int x = (int)(coordinates.pixelPos.x);
    int y = (int)(coordinates.pixelPos.y);
    //iterate through a block of pixels that starts at (x, y) and goes
    //#brushHeight pixels up and #brushWidth pixels right
    for (int tmpY = y; tmpY < y + brushHeight; tmpY++) {
        for (int tmpX = x; tmpX < x + brushWidth; tmpX++) {
            //check if the current pixel is different from the target pixel
            if (tmpTexture.GetPixel(tmpX, tmpY) != targetTexture.GetPixel(tmpX, tmpY)) {
                //create a temporary color from the target pixel at the given coordinates
                Color tmpCol = targetTexture.GetPixel(tmpX, tmpY);
                //change the alpha of that pixel based on the brush alpha;
                //myBrushAlpha is a 2-dimensional array that contains
                //the different alpha values of the brush;
                //the subtractions keep the index in range
                if (myBrushAlpha[tmpY - y, tmpX - x].a > 0) {
                    tmpCol.a = myBrushAlpha[tmpY - y, tmpX - x].a;
                }
                //set the new pixel on the current texture
                tmpTexture.SetPixel(tmpX, tmpY, tmpCol);
            }
        }
    }
    //apply the changes
    tmpTexture.Apply();
    //change the object's main texture
    theObject.renderer.material.mainTexture = tmpTexture;
}
Now the fun (and bad) part: the code did exactly what I asked for, but there is something I didn't think of, and I couldn't solve it after spending the whole night trying.
By drawing with the brush alpha every time, I created a very weird effect: the alpha value of an "old" pixel could actually decrease. I tried to fix that by adding an if statement that checks whether the pixel's current alpha is less than the corresponding brush alpha pixel; if it is, raise the alpha to equal the brush's, and if the pixel's alpha is bigger, keep adding the brush alpha value to it in order to get that "soft brushing" effect. In code it becomes this:
if (myBrushAlpha[tmpY - y, tmpX - x].a > tmpCol.a) {
    tmpCol.a = myBrushAlpha[tmpY - y, tmpX - x].a;
} else {
    tmpCol.a += myBrushAlpha[tmpY - y, tmpX - x].a;
}
But after I did that, I got the "pixelized brush" effect back. I'm not sure, but I think it may be because I'm making these checks inside a for loop, so everything is executed before the end of the current frame and I don't see the effect. Could it be that?
I'm really lost here and hope you can put me in the right direction.
Thank you very much and have a great day.