Unity3D new UI Mask behaves wrong with runtime-generated Texture2D

In my project I have two Texture2Ds under a mask: one generated at runtime and one stored in the project, not generated at runtime. When I increase their scale and move them with a ScrollRect under this mask, it behaves strangely on some devices: the runtime-generated texture becomes partially invisible, while the other one works as expected.
I've already tried changing texture formats, filter modes, and every other property a texture has. I've configured the runtime-generated texture to have exactly the same properties as the preloaded one that works fine, but it still behaves the same.
In my code I load all textures from a specified folder with Resources.LoadAll(), and then change every visible pixel of every loaded texture to white.
maskTexturesObj is the Object[] array returned by Resources.LoadAll().
Here is the code where I create my texture:
processedTexture = maskTexturesObj[i] as Texture2D;
// turn every pixel that has any opacity fully white
for (int y = 0; y < processedTexture.height; y++)
{
    for (int x = 0; x < processedTexture.width; x++)
    {
        if (processedTexture.GetPixel(x, y).a > 0)
            processedTexture.SetPixel(x, y, Color.white);
    }
}
processedTexture.Apply();
lessonPartImage.sprite = Sprite.Create(processedTexture, new Rect(0, 0, processedTexture.width, processedTexture.height), Vector2.zero);
The result is on the screenshot:
And here is what it is supposed to be:
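As an aside, the GetPixel/SetPixel loop above makes one native call per pixel; the same whitening pass can be batched with GetPixels32/SetPixels32. A sketch with identical behavior (it does not address the masking bug itself):

// Equivalent whitening pass, batched: one call reads all pixels,
// one call writes them back.
Color32[] px = processedTexture.GetPixels32();
for (int i = 0; i < px.Length; i++)
{
    if (px[i].a > 0)
        px[i] = new Color32(255, 255, 255, 255); // same as Color.white
}
processedTexture.SetPixels32(px);
processedTexture.Apply();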

Related

Get Intel RealSense Depth Stream in Unity

I'm currently trying to get the depth stream of the new RealSense generation (D435, SDK2) as a Texture2D in Unity. I can easily access the regular RGB stream as a WebCamTexture, but when I try to get the depth stream, I get this error:
Could not connect pins - RenderStream()
Unity recognizes the depth camera but can't display it.
I also tried to use the prefabs of the Unity Wrapper, but they don't really work for my project. If I use the prefabs, I can get the data into an R16 texture. Does anyone have an idea how I can get the depth information at a certain point in the image (GetPixel() doesn't work for R16 textures...)? I'd prefer to get a WebCamTexture stream; if that doesn't work, I'll have to save the information in a different way...
What I did to get depth data was create my own class inheriting from RawImage. I used my custom class as the target for the depth render stream and got the image from the texture component of my class.
Binding to custom class
In my case I wanted to convert the 16-bit depth data to an 8-bit-per-channel RGB PNG so that I could export it as a greyscale image. Here's how I parsed the image data:
byte[] input = texture.GetRawTextureData();
// create an array of pixels from the texture
// (remember to convert to Texture2D first; width/height are the texture dimensions)
Color[] pixels = new Color[width * height];
// converts R16 bytes to PNG32
for (int i = 0; i < input.Length; i += 2)
{
    // combine two bytes into one 16-bit number
    UInt16 num = System.BitConverter.ToUInt16(input, i);
    // turn it into a float in the range 0->1
    float greyValue = (float)num / 2048.0f;
    float alpha = 1.0f;
    // make pixels outside the measuring range invisible
    if (num >= 2048 || num <= 0)
        alpha = 0.0f;
    Color grey = new Color(greyValue, greyValue, greyValue, alpha);
    // set the grey value of the pixel based on the float
    pixels[i / 2] = grey;
}
To get the pixels you can simply access the new pixels array.
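To actually export the greyscale image, a minimal follow-up sketch (assuming the pixels, width and height variables from above): write the parsed pixels into a fresh Texture2D and encode it with EncodeToPNG.

// Upload the parsed colors into a texture and save it as a PNG file.
Texture2D output = new Texture2D(width, height, TextureFormat.RGBA32, false);
output.SetPixels(pixels);
output.Apply();
byte[] png = output.EncodeToPNG();
System.IO.File.WriteAllBytes(Application.persistentDataPath + "/depth.png", png);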

Set pixel in RFloat texture

I want to implement an algorithm on the GPU using Graphics.Blit. The input values are floats and the output values are also floats. I create a texture with the RFloat format and want to set a value for every pixel. How can I do that? According to the Unity manual, SetPixels doesn't work:
This function works only on ARGB32, RGB24 and Alpha8 texture formats.
For other formats SetPixels is ignored.
The algorithm needs float precision, so none of these formats is usable. How can it be done?
EDIT: After more struggling with Unity RenderTextures, here is the code I came up with to transfer data to the GPU.
int res = 512;
Texture2D tempTexture = new Texture2D(res, res, TextureFormat.RFloat, false);

public void ApplyHeightsToRT(float[,] heights, RenderTexture renderTexture)
{
    RenderTexture.active = renderTexture;
    Texture2D tempTexture = new Texture2D(res, res, TextureFormat.RFloat, false);
    tempTexture.ReadPixels(new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0, false);
    for (int i = 0; i < tempTexture.width; i++)
        for (int j = 0; j < tempTexture.height; j++)
        {
            tempTexture.SetPixel(i, j, new Color(heights[i, j], 0, 0, 0));
        }
    tempTexture.Apply();
    RenderTexture.active = null;
    Graphics.Blit(tempTexture, renderTexture);
}
This code successfully uploads tempTexture to the RenderTexture. The inverse operation is done similarly with the following method (the RenderTexture is copied into tempTexture):
public void ApplyRTToHeights(RenderTexture renderTexture, float[,] heights)
{
    RenderTexture.active = renderTexture;
    tempTexture.ReadPixels(new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0, false);
    for (int i = 0; i < tempTexture.width; i++)
        for (int j = 0; j < tempTexture.height; j++)
        {
            heights[i, j] = tempTexture.GetPixel(i, j).r;
        }
    RenderTexture.active = null;
}
To test the code I get the heightmap of a terrain, call the first method to fill the RenderTexture with the heightmap, then call the second method to get the pixels back from the RenderTexture and put them on the terrain. It should do nothing, right?
Actually, calling the two methods one after another flips the terrain heightmap and also creates banding artifacts. Very weird. After further investigation, the reason for the flip turned out to be a format problem: the tempTexture created above the two methods is actually an ARGB32 texture, not the RFloat I hoped it would be.
This explains the flip. After changing tempTexture to be an ARGB32 texture and the RenderTexture to be RGBA32, the flipping went away. Now there are only banding artifacts:
That is understandable, since I'm using only 8 bits (the red channel) of both tempTexture and the RenderTexture.
So the problem is no longer about setting data on an RFloat texture. The problem is that RFloat textures are not supported on my graphics card, and probably on many other graphics devices. The problem is to find a way to transfer float arrays to the RenderTexture.
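One common workaround, sketched below under the assumption that the height values are normalized to [0,1): pack each float across the four 8-bit channels of an ARGB32 color, the same trick as the EncodeFloatRGBA/DecodeFloatRGBA helpers in Unity's UnityCG.cginc shader include, so the full precision survives the trip through an RGBA32 RenderTexture.

// CPU-side equivalents of UnityCG.cginc's EncodeFloatRGBA/DecodeFloatRGBA:
// pack one float in [0,1) into four 8-bit channels and back.
Color EncodeFloatRGBA(float v)
{
    Vector4 enc = new Vector4(1f, 255f, 65025f, 16581375f) * v;
    // keep only the fractional part of each component
    enc = new Vector4(enc.x - Mathf.Floor(enc.x), enc.y - Mathf.Floor(enc.y),
                      enc.z - Mathf.Floor(enc.z), enc.w - Mathf.Floor(enc.w));
    enc -= new Vector4(enc.y, enc.z, enc.w, enc.w) / 255f;
    return new Color(enc.x, enc.y, enc.z, enc.w);
}

float DecodeFloatRGBA(Color c)
{
    return Vector4.Dot(new Vector4(c.r, c.g, c.b, c.a),
                       new Vector4(1f, 1f / 255f, 1f / 65025f, 1f / 16581375f));
}

With these, SetPixel(i, j, EncodeFloatRGBA(heights[i, j])) and heights[i, j] = DecodeFloatRGBA(tempTexture.GetPixel(i, j)) replace the single-channel reads and writes above, and any shader used in the Blit has to decode and re-encode the same way.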

Gradient not rendering correctly on a Unity3D GUI object

I am attempting to apply a gradient effect to a Unity3D (5.2) GUI object, but it's as if one of the gradient color keys is being completely ignored. I have tried both instantiating a new gradient field and declaring a gradient field public and editing its keys in the editor, yet the effect remains the same.
I'm beginning to think that I am not supposed to use Gradients in a BaseMeshEffect the way I am using them. If I only have 2 keys, the colors render properly. Where am I wrong?
Here is a code sample followed by a screen shot.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

public class GradientUI : BaseMeshEffect
{
    [SerializeField]
    public Gradient Grad;

    public override void ModifyMesh(VertexHelper vh)
    {
        if (!IsActive())
        {
            return;
        }
        List<UIVertex> vertexList = new List<UIVertex>();
        vh.GetUIVertexStream(vertexList);
        ModifyVertices(vertexList);
        vh.Clear();
        vh.AddUIVertexTriangleStream(vertexList);
    }

    void ModifyVertices(List<UIVertex> vertexList)
    {
        // find the vertical extent of the element
        int count = vertexList.Count;
        float bottomY = vertexList[0].position.y;
        float topY = vertexList[0].position.y;
        for (int i = 1; i < count; i++)
        {
            float y = vertexList[i].position.y;
            if (y > topY)
            {
                topY = y;
            }
            else if (y < bottomY)
            {
                bottomY = y;
            }
        }
        float uiElementHeight = topY - bottomY;
        // color each vertex by its relative height
        for (int i = 0; i < count; i++)
        {
            UIVertex uiVertex = vertexList[i];
            float percentage = (uiVertex.position.y - bottomY) / uiElementHeight;
            // Debug.Log(percentage);
            Color col = Grad.Evaluate(percentage);
            uiVertex.color = col;
            vertexList[i] = uiVertex;
            Debug.Log(uiVertex.position);
        }
    }
}
Screen shot
Your script is actually OK; there is no problem with it. The problem is that UI elements simply don't have enough geometry for you to see the whole gradient.
Let me explain. In a nutshell, each UI element is actually a mesh made of several 3D triangles, each one rotated to face the camera with its front so it looks 2D. Your filter works by assigning a color value to each vertex of those triangles. The color values in the middle of a triangle are interpolated based on the proximity to each of the colored vertices.
If you look at a UI element in wireframe, you will see that its geometry is very simple. This is, for example, how a sliced image looks:
As you can see, all of its vertices are concentrated at the corners, and there are no vertices in the middle. So, let's assume your gradient has 2 keys, WHITE=>RED. The upper vertices get WHITE or a value close to WHITE, and the bottom vertices get RED or a value close to RED. This works fine for 2 keys.
Now assume you have 3 keys, WHITE=>BLUE=>RED. The top is still WHITE or close to WHITE and the bottom is RED or close to RED, but BLUE is supposed to land somewhere in the middle, where there is no vertex, so it is never assigned to anything. You still get a WHITE to RED gradient.
Now, what you can do:
1) You can add some more geometry programmatically, for example by simply subdividing the whole mesh (a sketch follows after this list). This may help you: http://answers.unity3d.com/questions/259127/does-anyone-have-any-code-to-subdivide-a-mesh-and.html. Note that in this case, the more keys your gradient has, the more subdivisions are required.
2) Use a texture that looks like a gradient.
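A minimal sketch of option 1, on the assumption that it runs inside ModifyMesh on the triangle stream; Midpoint is a hypothetical helper, not part of Unity's API. Each pass splits every triangle into four using the edge midpoints, which creates the interior vertices the gradient needs:

// Split each triangle of the stream into 4 smaller ones, `levels` times.
List<UIVertex> Subdivide(List<UIVertex> tris, int levels)
{
    for (int level = 0; level < levels; level++)
    {
        var result = new List<UIVertex>(tris.Count * 4);
        for (int i = 0; i < tris.Count; i += 3)
        {
            UIVertex a = tris[i], b = tris[i + 1], c = tris[i + 2];
            UIVertex ab = Midpoint(a, b), bc = Midpoint(b, c), ca = Midpoint(c, a);
            // four triangles replacing the original one
            result.AddRange(new[] { a, ab, ca,   ab, b, bc,   ca, bc, c,   ab, bc, ca });
        }
        tris = result;
    }
    return tris;
}

// Hypothetical helper: average position, UV and color of two vertices.
UIVertex Midpoint(UIVertex v0, UIVertex v1)
{
    UIVertex m = v0;
    m.position = (v0.position + v1.position) * 0.5f;
    m.uv0 = (v0.uv0 + v1.uv0) * 0.5f;
    m.color = Color32.Lerp(v0.color, v1.color, 0.5f);
    return m;
}

Calling vertexList = Subdivide(vertexList, 2); before ModifyVertices in ModifyMesh gives the BLUE key vertices to land on; more gradient keys need more levels.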

Unity 2D: dynamically create a hole in an image

I am using Unity 2D and I am trying to achieve a simple effect.
I am making a Candy Crush Saga-like game.
I have a grid with all the items. In every level the grid field can have a different shape, created at runtime (a dark grey area in the middle of the background).
The background is a still image (blue sky and green hills). When the pieces (for example a blue pentagon) fall down from the top, they must stay hidden until they enter the grid field area (dark grey); so in practice the background image (sky and hills) is no longer a background but a foreground with a hole represented by the grey area. The grey field is likewise composed of tiles from a sprite sheet.
I have prepared a picture, but unfortunately I cannot upload it here yet. How can I achieve this effect in Unity?
The simplest solution would be to create all the static level graphics with the hole already in them, but I do not want to do that because it is a waste of time and memory; I want to be able to create this effect at runtime.
I was thinking of creating a dynamic bitmap mask for the hole shape using a sprite sheet, then applying this bitmap mask, for example as a material, to the foreground image to make the hole.
Click on your texture in the Unity editor.
Change "texture type" from "texture" to "advanced".
Check the "read/write enabled" checkbox.
Change "format" from "automatic compressed" to "RGBA 32 bit".
I attached this component to a RawImage (you can attach it to something else; just change the "RawImage image" part).
This will create a hole with dimensions 100x100 at position 10,10 of the image, so make sure your texture is at least 110x110.
using UnityEngine;
using UnityEngine.UI;

public class HoleInImage : MonoBehaviour
{
    public RawImage image;

    void Start()
    {
        // this will change the original:
        Texture2D texture = image.texture as Texture2D;
        // use this instead to change a copy (and not the original):
        //Texture2D texture = Instantiate(image.texture) as Texture2D;
        //image.texture = texture;

        // overwrite a 100x100 block starting at (10,10) with transparent pixels
        Color[] colors = new Color[100 * 100];
        for (int i = 0; i < 100 * 100; ++i)
            colors[i] = Color.clear;
        texture.SetPixels(10, 10, 100, 100, colors);
        texture.Apply(false);
    }
}
EDIT:
To define the hole with one or more sprites, apply the same import settings to those sprites (advanced, read/write enabled, RGBA 32 bit).
For example, if the sprite is white and the hole is defined with black, take the loop:
for (int i = 0; i < 100 * 100; ++i)
    colors[i] = Color.clear;
change to:
Texture2D holeTex; // set this in the editor; it must have the same dimensions (100x100 in this example)
Color[] hole = holeTex.GetPixels();
for (int i = 0; i < 100 * 100; ++i)
{
    if (hole[i] == Color.black) // where the sprite is black, there will be a hole
        colors[i] = Color.clear;
}

(Unity3D) Paint with soft brush (logic)

Over the last few days I have been coding painting behaviour for a game I'm working on, and I'm currently in a very advanced phase; I'd say 90% of the work is done and working perfectly. What I need now is the ability to draw with a "soft brush", because right now I'm painting in a "pixel style", which was totally expected, since that's what I wrote.
My current plan is the following solution:
Import a brush texture (this image).
Create an array that contains all the alpha values of that texture.
When drawing, use the array elements to define the new pixels' alpha.
And this is my code to do that (it's not very long; there are a lot of comments):
// The main painting method
// theObject = the object to be painted
// tmpTexture = the object's current texture
// targetTexture = the new texture
void paint(GameObject theObject, Texture2D tmpTexture, Texture2D targetTexture)
{
    // x and y are 2 floats from another class;
    // they store the coordinates of the pixel hit by the raycast
    int x = (int)(coordinates.pixelPos.x);
    int y = (int)(coordinates.pixelPos.y);
    // iterate through a block of pixels that starts at x and y
    // and goes brushHeight pixels up and brushWidth pixels right
    for (int tmpY = y; tmpY < y + brushHeight; tmpY++) {
        for (int tmpX = x; tmpX < x + brushWidth; tmpX++) {
            // check if the current pixel is different from the target pixel
            if (tmpTexture.GetPixel(tmpX, tmpY) != targetTexture.GetPixel(tmpX, tmpY)) {
                // create a temporary color from the target pixel at the given coordinates
                Color tmpCol = targetTexture.GetPixel(tmpX, tmpY);
                // change the alpha of that pixel based on the brush alpha;
                // myBrushAlpha is a 2-dimensional array that contains
                // the different alpha values of the brush;
                // the subtractions keep the index in range
                if (myBrushAlpha[tmpY - y, tmpX - x].a > 0) {
                    tmpCol.a = myBrushAlpha[tmpY - y, tmpX - x].a;
                }
                // set the new pixel on the current texture
                tmpTexture.SetPixel(tmpX, tmpY, tmpCol);
            }
        }
    }
    // apply
    tmpTexture.Apply();
    // change the object's main texture
    theObject.renderer.material.mainTexture = tmpTexture;
}
Now the fun (and bad) part: the code did exactly what I asked for, but there is something I didn't think of and couldn't solve after spending the whole night trying.
By always drawing with the brush alpha, I created a very weird effect: the alpha value of an "old" pixel could decrease. I tried to fix that by adding an if statement that checks whether the current alpha of the pixel is less than the corresponding brush alpha; if it is, raise the pixel's alpha to equal the brush's, and if the pixel's alpha is bigger, keep adding the brush alpha to it to get that "soft brushing" effect. In code it becomes this:
if (myBrushAlpha[tmpY - y, tmpX - x].a > tmpCol.a) {
    tmpCol.a = myBrushAlpha[tmpY - y, tmpX - x].a;
} else {
    tmpCol.a += myBrushAlpha[tmpY - y, tmpX - x].a;
}
But after I did that, I got the "pixelized brush" effect back. I'm not sure, but I think it may be because these conditions run inside a for loop, so everything is executed before the end of the current frame and I don't see the effect. Could it be that?
I'm really lost here and hope that you can point me in the right direction.
Thank you very much and have a great day.
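For reference, the usual way to get a soft falloff is conventional "source-over" blending: blend toward the brush's target color by the brush alpha instead of writing or accumulating the alpha directly. A sketch of what the inner loop could look like with that approach, reusing myBrushAlpha, tmpTexture and targetTexture from the question (an illustration, not the asker's code):

float brushA = myBrushAlpha[tmpY - y, tmpX - x].a;
Color dst = tmpTexture.GetPixel(tmpX, tmpY);    // what is already painted
Color src = targetTexture.GetPixel(tmpX, tmpY); // what the brush paints
// blend toward the target by the brush alpha: soft edges, and repeated
// strokes build up coverage instead of erasing it
Color blended = Color.Lerp(dst, src, brushA);
blended.a = Mathf.Max(dst.a, brushA); // alpha only ever increases
tmpTexture.SetPixel(tmpX, tmpY, blended);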