I want to implement an algorithm on the GPU using Graphics.Blit. The input values are floats and the output values are also floats. I create a texture with the RFloat format and want to set a value for every pixel. How can I do that? According to the Unity manual, SetPixels doesn't work:
This function works only on ARGB32, RGB24 and Alpha8 texture formats.
For other formats SetPixels is ignored.
The algorithm needs float precision, so none of these formats is usable. So how can it be done?
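(Side note: if the device does support TextureFormat.RFloat, which, as it turns out further down, mine doesn't, you can bypass SetPixels entirely and upload the raw IEEE-754 bytes with LoadRawTextureData. A minimal sketch:)
// Minimal sketch, assuming RFloat is supported: reinterpret the float array
// as raw bytes and upload it directly, bypassing SetPixels.
int res = 512;
float[] data = new float[res * res];      // fill with your values
Texture2D floatTex = new Texture2D(res, res, TextureFormat.RFloat, false);
byte[] raw = new byte[data.Length * sizeof(float)];
System.Buffer.BlockCopy(data, 0, raw, 0, raw.Length);
floatTex.LoadRawTextureData(raw);         // byte layout must match RFloat: 4 bytes per pixel
floatTex.Apply();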
EDIT: After more struggle with Unity RenderTextures, here is the code I came up with to transfer data to the GPU.
int res = 512;
Texture2D tempTexture = new Texture2D(res, res, TextureFormat.RFloat, false);

public void ApplyHeightsToRT(float[,] heights, RenderTexture renderTexture)
{
    RenderTexture.active = renderTexture;
    Texture2D tempTexture = new Texture2D(res, res, TextureFormat.RFloat, false);
    tempTexture.ReadPixels(new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0, false);
    for (int i = 0; i < tempTexture.width; i++)
        for (int j = 0; j < tempTexture.height; j++)
        {
            tempTexture.SetPixel(i, j, new Color(heights[i, j], 0, 0, 0));
        }
    tempTexture.Apply();
    RenderTexture.active = null;
    Graphics.Blit(tempTexture, renderTexture);
}
This code successfully uploads the tempTexture to RenderTexture. The inverse operation is similarly done with the following method (RenderTexture is copied to tempTexture):
public void ApplyRTToHeights(RenderTexture renderTexture, float[,] heights)
{
    RenderTexture.active = renderTexture;
    tempTexture.ReadPixels(new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0, false);
    for (int i = 0; i < tempTexture.width; i++)
        for (int j = 0; j < tempTexture.height; j++)
        {
            heights[i, j] = tempTexture.GetPixel(i, j).r;
        }
    RenderTexture.active = null;
}
To test the code, I get the heightmap of a terrain and call the first method to fill the RenderTexture with it. Then I call the second method to read the pixels back from the RenderTexture and put them on the terrain. It should do nothing, right?
Actually, calling the two methods one after another flips the terrain heightmap and also creates banding artifacts. Very weird. After further investigation, the reason for the flip turned out to be a format problem: the tempTexture created above the two methods is actually an ARGB32 texture, not the RFloat I hoped it would be.
This explains the flip. After changing tempTexture to be an ARGB32 texture and the RenderTexture to be RGBA32, the flipping went away. Now there are only banding artifacts:
And that would be understandable since I'm using only 8 bits (red channel) of both tempTexture and RenderTexture.
Now the problem is not about setting data on an RFloat texture. The problem is that RFloat textures are not supported on my graphics card, and probably not on many other graphics devices either. The problem is finding a way to transfer float arrays to the RenderTexture.
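One possible workaround, only a sketch and assuming point filtering, no blending, and a linear (non-sRGB) RGBA32 pipeline so the bytes survive the Blit untouched: pack the raw bits of each 32-bit float into the four 8-bit channels of an RGBA32 pixel.
// Sketch: pack raw float bits into RGBA32 bytes so that an 8-bit-per-channel
// texture can carry full float precision through Graphics.Blit.
Texture2D PackHeights(float[,] heights, int res)
{
    Texture2D packed = new Texture2D(res, res, TextureFormat.RGBA32, false, true);
    packed.filterMode = FilterMode.Point;

    byte[] bytes = new byte[res * res * 4];
    for (int i = 0; i < res; i++)
        for (int j = 0; j < res; j++)
        {
            byte[] b = System.BitConverter.GetBytes(heights[i, j]);
            int o = (j * res + i) * 4;              // one pixel = 4 bytes
            bytes[o] = b[0]; bytes[o + 1] = b[1]; bytes[o + 2] = b[2]; bytes[o + 3] = b[3];
        }

    packed.LoadRawTextureData(bytes);
    packed.Apply();
    return packed;                                  // then Graphics.Blit(packed, renderTexture)
}
Reading back would be the reverse: ReadPixels into an RGBA32 Texture2D, GetRawTextureData(), and BitConverter.ToSingle on each group of four bytes. A shader that needs the numeric value has to decode the channels itself; for heights normalized to [0,1) the usual alternative is a fixed-point split in the style of UnityCG's EncodeFloatRGBA/DecodeFloatRGBA.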
I just started learning MonoGame a couple of days ago, and I was wondering how I can draw a line from a start Vector2 to an end Vector2 with a specific line thickness.
Should I use a one-pixel image drawn onto the screen and use Bresenham's line algorithm to find the positions, or is there a more optimized and less complicated way to do this with MonoGame's built-in functions?
One way is to create a Texture2D whose width is the distance between the two Vector2s and whose height is the desired thickness. Then you apply a rotation to the texture when you draw it.
Here is an example (as a SpriteBatch extension method):
public static void DrawLineBetween(
    this SpriteBatch spriteBatch,
    Vector2 startPos,
    Vector2 endPos,
    int thickness,
    Color color)
{
    // Create a texture as wide as the distance between two points and as high as
    // the desired thickness of the line.
    var distance = (int)Vector2.Distance(startPos, endPos);
    var texture = new Texture2D(spriteBatch.GraphicsDevice, distance, thickness);

    // Fill texture with given color.
    var data = new Color[distance * thickness];
    for (int i = 0; i < data.Length; i++)
    {
        data[i] = color;
    }
    texture.SetData(data);

    // Rotate about the beginning middle of the line.
    var rotation = (float)Math.Atan2(endPos.Y - startPos.Y, endPos.X - startPos.X);
    var origin = new Vector2(0, thickness / 2);

    spriteBatch.Draw(
        texture,
        startPos,
        null,
        Color.White,
        rotation,
        origin,
        1.0f,
        SpriteEffects.None,
        1.0f);
}
Example of use:
var startPos = new Vector2(0, 0);
var endPos = new Vector2(800, 480);
_spriteBatch.DrawLineBetween(startPos, endPos, 12, Color.White);
How it looks:
It's not a perfect solution. You'll want to modify it if you want to draw connected lines at different angles without visible seams. I'm also not sure about the performance, since a new Texture2D is created on every call.
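A possible variant (sketch only, untested): cache a single 1x1 white texture and stretch it with the scale parameter of SpriteBatch.Draw, tinting it per call, so no texture is allocated per line.
private static Texture2D _pixel;

public static void DrawLineCached(
    this SpriteBatch spriteBatch,
    Vector2 startPos,
    Vector2 endPos,
    int thickness,
    Color color)
{
    // Lazily create the shared 1x1 white texture.
    if (_pixel == null)
    {
        _pixel = new Texture2D(spriteBatch.GraphicsDevice, 1, 1);
        _pixel.SetData(new[] { Color.White });
    }

    var distance = Vector2.Distance(startPos, endPos);
    var rotation = (float)Math.Atan2(endPos.Y - startPos.Y, endPos.X - startPos.X);

    // Stretch the 1x1 texture to (distance x thickness) and tint it with the color.
    spriteBatch.Draw(
        _pixel,
        startPos,
        null,
        color,
        rotation,
        new Vector2(0f, 0.5f),            // origin: middle of the left edge
        new Vector2(distance, thickness), // scale
        SpriteEffects.None,
        0f);
}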
I use a library I found. It draws a number of 2D primitives, like lines, boxes, etc. Really easy to use. Called "C3.MonoGame.Primitives2D", it can be found here:
https://github.com/z2oh/C3.MonoGame.Primitives2D
Here's a screenshot of a demo I wrote using many of its methods:
It's just one file of around 500 lines. If you don't like Git or using libraries, you can just copy & paste it into your project.
I'm currently trying to get the depth stream of the new RealSense generation (D435, SDK 2) as a Texture2D in Unity. I can easily access the regular RGB stream as a WebCamTexture, but when I try to get the depth stream, I get this error:
Could not connect pins - RenderStream()
Unity recognizes the depth camera but can't display it.
I also tried the prefabs of the Unity wrapper, but they don't really work for my project. If I use the prefabs, I can get the data into an R16 texture. Does anyone have an idea how I can get the depth information at a certain point in the image (GetPixel() doesn't work for R16 textures...)? I'd prefer to get a WebCamTexture stream; if that doesn't work, I'll have to save the information in a different way...
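(For a single point, the raw bytes of an R16 Texture2D can be read directly; a sketch, assuming the frame has already been copied into a Texture2D called depthTex:)
// Sketch: sample one depth value from an R16 Texture2D.
// Each pixel is two bytes; Unity's rows start at the bottom-left.
byte[] raw = depthTex.GetRawTextureData();
int index = (y * depthTex.width + x) * 2;
ushort depth = System.BitConverter.ToUInt16(raw, index);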
What I did to get depth data was create my own class inheriting from RawImage. I used my custom class as the target for the depth render stream and got the image from the texture property of my class.
Binding to custom class
In my case I wanted to convert the 16-bit depth data to an 8-bit-per-channel RGB PNG so that I could export it as a greyscale image. Here's how I parsed the image data:
byte[] input = texture.GetRawTextureData();
// create an array of pixels from the texture
// (remember to convert the stream's texture to a Texture2D first)
Color[] pixels = new Color[width * height];
// converts R16 bytes to 32-bit colour values
for (int i = 0; i < input.Length; i += 2)
{
    // combine two bytes into a 16-bit number
    UInt16 num = System.BitConverter.ToUInt16(input, i);
    // turn it into a float with range 0->1
    float greyValue = (float)num / 2048.0f;
    // make pixels outside the measuring range invisible
    float alpha = (num >= 2048 || num <= 0) ? 0.0f : 1.0f;
    Color grey = new Color(greyValue, greyValue, greyValue, alpha);
    // set the grey value of the pixel based on the float
    pixels[i / 2] = grey;
}
To get the pixels, you can simply access the new pixels array.
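To actually write the greyscale image out, the pixels array can then be copied into a new Texture2D and encoded; a short sketch (the names and output path are placeholders):
// Sketch: copy the converted pixels into a texture and save it as a PNG.
Texture2D output = new Texture2D(width, height, TextureFormat.RGBA32, false);
output.SetPixels(pixels);
output.Apply();
byte[] png = output.EncodeToPNG();
System.IO.File.WriteAllBytes(Application.persistentDataPath + "/depth.png", png);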
I've been working on a scene in Unity3D where I have the Kinect v2 depth information coming in at 512 x 424, and I'm converting that in real time to a mesh that is also 512 x 424, so there is a 1:1 ratio of pixel data (depth) to vertices (mesh).
My end goal is to make the 'Monitor 3D View' scene found in 'Microsoft Kinect Studio v2.0' with the Depth.
I've pretty much got it working in terms of the point cloud. However, there is a large amount of warping in my Unity scene. I thought it might have been down to my maths, etc.
However, I noticed that it's the same for the Kinect Unity demo supplied in their development kit.
I'm just wondering if I'm missing something obvious here? Each of my pixels (or vertices in this case) is mapped out in a 1-by-1 fashion.
I'm not sure if it's because I need to process the data from the DepthFrame before rendering it to the scene, or if there's some additional step I've missed to get a true representation of my room, because it looks like there's a slight 'spherical' effect being added right now.
These two images are a top down shot of my room. The green line represents my walls.
The left image is the Kinect in a Unity scene, and the right is within Microsoft Kinect Studio. Ignoring the colour difference, you can see that the left (Unity) is warped, whereas the right is linear and perfect.
I know it's quite hard to make out, especially as you don't know the layout of the room I'm sat in :/ Side view too. Can you see the warping on the left? Use the green lines as a reference; these are straight in the actual room, as shown correctly in the right image.
Check out my video to get a better idea:
https://www.youtube.com/watch?v=Zh2pAVQpkBM&feature=youtu.be
Code C#
Pretty simple to be honest. I'm just grabbing the depth data straight from the Kinect SDK, and placing it into a point cloud mesh on the Z axis.
//called on application start
void Start(){
    _Reader = _Sensor.DepthFrameSource.OpenReader();
    _Data = new ushort[_lengthInPixels];
    _Sensor.Open();
}

//called once per frame
void Update(){
    if(_Reader != null){
        var dep_frame = _Reader.AcquireLatestFrame();
        dep_frame.CopyFrameDataToArray(_Data);
        dep_frame.Dispose();
        dep_frame = null;
        UpdateScene();
    }
}

//update point cloud in scene
void UpdateScene(){
    float depthAdjust = 0.1f; // scale the raw depth values down for the scene
    for(int y = 0; y < height; y++){
        for(int x = 0; x < width; x++){
            int index = (y * width) + x;
            Vector3 new_pos = new Vector3(points[index].x, points[index].y, _Data[index] * depthAdjust);
            points[index] = new_pos;
        }
    }
}
Kinect API can be found here:
https://msdn.microsoft.com/en-us/library/windowspreview.kinect.depthframe.aspx
Would appreciate any advice, thanks!
With thanks to Edward Zhang, I figured out what I was doing wrong.
It's down to me not projecting my depth points correctly: I need to use the CoordinateMapper to map my DepthFrame into CameraSpace.
Currently, my code assumes an orthographic depth projection instead of a perspective depth camera. I just needed to implement this:
https://msdn.microsoft.com/en-us/library/windowspreview.kinect.coordinatemapper.aspx
//called once per frame
void Update(){
    if(_Reader != null){
        var dep_frame = _Reader.AcquireLatestFrame();
        dep_frame.CopyFrameDataToArray(_Data);
        dep_frame.Dispose();
        dep_frame = null;

        // Map the raw depth frame into camera space (perspective-corrected points).
        CameraSpacePoint[] _CameraSpace = new CameraSpacePoint[_Data.Length];
        _Mapper.MapDepthFrameToCameraSpace(_Data, _CameraSpace);

        UpdateScene(_CameraSpace);
    }
}

//update point cloud in scene
void UpdateScene(CameraSpacePoint[] _CameraSpace){
    for(int y = 0; y < height; y++){
        for(int x = 0; x < width; x++){
            int index = (y * width) + x;
            Vector3 new_pos = new Vector3(_CameraSpace[index].X, _CameraSpace[index].Y, _CameraSpace[index].Z);
            points[index] = new_pos;
        }
    }
}
In my project I have a Texture2D that is generated at runtime and a Texture2D that is stored in my project (not generated at runtime), both under a mask. When I try to increase their scale and move them with a scroll rect under this mask, it behaves strangely on some devices: the runtime-generated texture becomes partially invisible, while the other one works as expected.
I've already tried changing texture formats, filter modes and every other property a texture has. I've configured the runtime-generated texture to have exactly the same properties as the preloaded one that works fine, but it still behaves the same.
In my code I load all textures from a specified folder with Resources.LoadAll(), and then I change every visible pixel of every loaded texture to white.
maskTexturesObj is the Object[] array returned by the Resources.LoadAll() call.
Here is the code where I create my texture:
processedTexture = maskTexturesObj[i] as Texture2D;
for (int y = 0; y < processedTexture.height; y++)
{
    for (int x = 0; x < processedTexture.width; x++)
    {
        if (processedTexture.GetPixel(x, y).a > 0)
            processedTexture.SetPixel(x, y, Color.white);
    }
}
processedTexture.Apply();
lessonPartImage.sprite = Sprite.Create(processedTexture, new Rect(0, 0, processedTexture.width, processedTexture.height), Vector2.zero);
The result is on the screenshot:
And here is what it is supposed to be:
I have particles whose color I want to be able to change in code, so any color can be used. So I have only one texture, which basically holds luminance.
I've been using glColor4f(1.0f, 0.0f, 0.0f, 1.0f); to apply the color.
Every blendfunc I've tried that has come close to working ends up like the last picture below. I still want to preserve luminance, like in the middle picture. (This is like the Overlay or Soft Light filters in Photoshop, if the color layer was on top of the texture layer.)
Any ideas for how to do this without programmable shaders? Also, since these are particles, I don't want a black box behind it, I want it to add onto the scene.
Here is a solution that might be close to what you're looking for:
glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
glActiveTexture( GL_TEXTURE0 );
glEnable( GL_TEXTURE_2D );
glBindTexture(GL_TEXTURE_2D, spriteTexture);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
glActiveTexture( GL_TEXTURE1 );
glEnable( GL_TEXTURE_2D );
glBindTexture(GL_TEXTURE_2D, spriteTexture);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD );
What it does is multiply the original texture by the specified color and then add the pixel values of the original texture on top:
final_color.rgba = original_color.rgba * color.rgba + original_color.rgba;
This will result in a brighter image than what you've asked for but might be good enough with some tweaking.
Should you want to preserve the alpha value of the texture, you'll need to use GL_COMBINE instead of GL_ADD (+ set GL_COMBINE_RGB and GL_COMBINE_ALPHA properly).
Here are some results using this technique on your texture.
NONSENSE! You don't have to use multi-texturing. Just premultiply your alpha.
If you premultiply alpha on the image after you load it in and before you create the GL texture for it then you only need one texture unit for the GL_ADD texture env mode.
If you're on iOS then Apple's libs can premultiply for you. See the example Texture2D class and look for the kCGImageAlphaPremultipliedLast flag.
If you're not using an image loader that supports premultiply then you have to do it manually after loading the image. Pseudo code:
uint8* LoadRGBAImage(const char* pImageFileName) {
    Image* pImage = LoadImageData(pImageFileName);
    if (pImage->eFormat != FORMAT_RGBA)
        return NULL;

    // allocate a buffer to store the pre-multiply result
    // NOTE that in a real scenario you'll want to pad pDstData to a power-of-2
    uint8* pDstData = (uint8*)malloc(pImage->rows * pImage->cols * 4);
    uint8* pSrcData = pImage->pBitmapBytes;
    uint32 bytesPerRow = pImage->cols * 4;

    for (uint32 y = 0; y < pImage->rows; ++y) {
        uint8* pSrc = pSrcData + y * bytesPerRow;
        uint8* pDst = pDstData + y * bytesPerRow;
        for (uint32 x = 0; x < pImage->cols; ++x) {
            // modulate src rgb channels with alpha channel
            // store result in dst rgb channels
            uint8 srcAlpha = pSrc[3];
            *pDst++ = Modulate(*pSrc++, srcAlpha);
            *pDst++ = Modulate(*pSrc++, srcAlpha);
            *pDst++ = Modulate(*pSrc++, srcAlpha);
            // copy src alpha channel directly to dst alpha channel
            *pDst++ = *pSrc++;
        }
    }

    // don't forget to free() the pointer!
    return pDstData;
}

uint8 Modulate(uint8 u, uint8 uControl) {
    // fixed-point multiply the value u with uControl and return the result
    return ((uint16)u * ((uint16)uControl + 1)) >> 8;
}
Personally, I'm using libpng and premultiplying manually.
Anyway, after you premultiply, just bind the byte data as an RGBA OpenGL texture. Using glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD); with a single texture unit should be all you need after that. You should get exactly (or pretty damn close to) what you want. You might have to use glBlendFunc(GL_SRC_ALPHA, GL_ONE); as well if you really want to make the thing look shiny, btw.
This is subtly different from the Ozirus method. He's never "reducing" the RGB values of the texture by premultiplying, so the RGB channels get added too much and look sort of washed out/overly bright.
I suppose the premultiply method is more akin to Overlay whereas the Ozirus method is Soft Light.
For more, see:
http://en.wikipedia.org/wiki/Alpha_compositing
Search for "premultiplied alpha"