Unity shader error, presumably in an if statement

I am doing two line-plane intersections in a shader; however, I need to take into account whether a ray missed and which of the two rays has the shortest distance.
The following code, however, throws an error that doesn't give me any useful information (and points me in the wrong direction). If I set tex and selectN to, say, intersection.xy and N, it works fine (but of course doesn't give the result I need).
I'm working in Unity.
float3 selectN;
float2 tex;
if (dist == 0.0) {
selectN = N2;
tex = intersection2.xy;
} else if (dist2 == 0.0) {
selectN = N;
tex = intersection.xy;
} else if (dist < dist2) {
selectN = N;
tex = intersection.xy;
} else {
selectN = N2;
tex = intersection2.xy;
}

I needed to add #pragma target 3.0 because my shader was getting "too complex".

Related

AR camera distance measurement

I have a question about AR (augmented reality).
I want to know how to show the distance (in centimeters, for example) between the AR camera and a target object, using a smartphone.
Can I do that in Unity? Should I use AR Foundation, and with ARCore? How do I write the code?
I tried to find some related code (below), but it only seems to print the distance between two objects, nothing about the "AR camera"...
var other : Transform;
if (other) {
var dist = Vector3.Distance(other.position, transform.position);
print ("Distance to other: " + dist);
}
Thanks again!
Here is how to do it in Unity with AR Foundation 4.1.
This example script prints the depth in meters at the depth texture's center and works both with ARCore and ARKit:
using System;
using System.Collections;
using UnityEngine;
using UnityEngine.Assertions;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;
public class GetDepthOfCenterPixel : MonoBehaviour {
// assign this field in inspector
[SerializeField] AROcclusionManager manager = null;
IEnumerator Start() {
while (ARSession.state < ARSessionState.SessionInitializing) {
// manager.descriptor.supportsEnvironmentDepthImage will return a correct value if ARSession.state >= ARSessionState.SessionInitializing
yield return null;
}
if (!manager.descriptor.supportsEnvironmentDepthImage) {
Debug.LogError("!manager.descriptor.supportsEnvironmentDepthImage");
yield break;
}
while (true) {
if (manager.TryAcquireEnvironmentDepthCpuImage(out var cpuImage) && cpuImage.valid) {
using (cpuImage) {
Assert.IsTrue(cpuImage.planeCount == 1);
var plane = cpuImage.GetPlane(0);
var dataLength = plane.data.Length;
var pixelStride = plane.pixelStride;
var rowStride = plane.rowStride;
Assert.AreEqual(0, dataLength % rowStride, "dataLength should be divisible by rowStride without a remainder");
Assert.AreEqual(0, rowStride % pixelStride, "rowStride should be divisible by pixelStride without a remainder");
var numOfRows = dataLength / rowStride;
var centerRowIndex = numOfRows / 2;
var centerPixelIndex = rowStride / (pixelStride * 2);
var centerPixelData = plane.data.GetSubArray(centerRowIndex * rowStride + centerPixelIndex * pixelStride, pixelStride);
var depthInMeters = convertPixelDataToDistanceInMeters(centerPixelData.ToArray(), cpuImage.format);
print($"depth texture size: ({cpuImage.width},{cpuImage.height}), pixelStride: {pixelStride}, rowStride: {rowStride}, pixel pos: ({centerPixelIndex}, {centerRowIndex}), depthInMeters of the center pixel: {depthInMeters}");
}
}
yield return null;
}
}
float convertPixelDataToDistanceInMeters(byte[] data, XRCpuImage.Format format) {
switch (format) {
case XRCpuImage.Format.DepthUint16:
return BitConverter.ToUInt16(data, 0) / 1000f;
case XRCpuImage.Format.DepthFloat32:
return BitConverter.ToSingle(data, 0);
default:
throw new Exception($"Format not supported: {format}");
}
}
}
I'm working on AR depth images as well, and the basic idea is:
Acquire an image using the API; normally it's in the Depth16 format.
Split the image into short buffers, as Depth16 means each pixel is 16 bits.
Get the distance value, which is stored in the lower 13 bits of each short. You can do this with (shortBuffer & 0x1FFF), which gives you the distance for each pixel, normally in millimeters.
By doing this for all the pixels, you can create a depth image and store it as JPEG or another format. Here's sample code using AR Engine to get the distance:
try (Image depthImage = arFrame.acquireDepthImage()) {
int imwidth = depthImage.getWidth();
int imheight = depthImage.getHeight();
Image.Plane plane = depthImage.getPlanes()[0];
ShortBuffer shortDepthBuffer = plane.getBuffer().asShortBuffer();
File sdCardFile = Environment.getExternalStorageDirectory();
Log.i(TAG, "The storage path is " + sdCardFile);
File file = new File(sdCardFile, "RawdepthImage.jpg");
Bitmap disBitmap = Bitmap.createBitmap(imwidth, imheight, Bitmap.Config.RGB_565);
for (int i = 0; i < imheight; i++) {
for (int j = 0; j < imwidth; j++) {
int index = (i * imwidth + j) ;
shortDepthBuffer.position(index);
short depthSample = shortDepthBuffer.get();
short depthRange = (short) (depthSample & 0x1FFF);
//If you only want the distance value, here it is
byte value = (byte) depthRange;
disBitmap.setPixel(j, i, Color.rgb(value, value, value));
}
}
//I rotate the image for a better view
Matrix matrix = new Matrix();
matrix.setRotate(90);
Bitmap rotatedBitmap = Bitmap.createBitmap(disBitmap, 0, 0, imwidth, imheight, matrix, true);
try {
FileOutputStream out = new FileOutputStream(file);
rotatedBitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
out.flush();
out.close();
MainActivity.num++;
} catch (Exception e) {
e.printStackTrace();
}
} catch (Exception e) {
e.printStackTrace();
}
}
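For reference, the same 13-bit decode on the Unity/C# side would look roughly like this (a sketch; it assumes you already have a single raw 16-bit depth sample):

// Decode one Depth16 sample: lower 13 bits = range in millimeters,
// upper 3 bits = confidence, as described in the steps above.
static float Depth16SampleToMeters(ushort depthSample)
{
    int rangeMillimeters = depthSample & 0x1FFF;
    return rangeMillimeters / 1000f;
}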
While the answers are great, they may be too complicated and advanced for this question, which is about the distance between the ARCamera and another object, and not about the depth of pixels and their occlusion.
transform.position gives you the position of whatever game object you attach the script to in the hierarchy. So attach the script to the ARCamera object. And obviously, other should be the target object.
Alternatively, you can get references to the two game objects using inspector variables or GetComponent.
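For illustration, a minimal sketch of that approach (the class name and the target field are placeholders; attach the script to the AR camera and assign the target in the inspector):

using UnityEngine;

public class DistanceToTarget : MonoBehaviour
{
    [SerializeField] Transform target; // the object to measure against

    void Update()
    {
        if (target == null) return;
        // transform.position is the AR camera's position because this script is attached to it
        float dist = Vector3.Distance(target.position, transform.position);
        Debug.Log("Distance to target: " + dist.ToString("N2") + " m");
    }
}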
Another approach is raycasting from the AR camera (the raycast should run in Update):

Ray ray = new Ray(cam.transform.position, cam.transform.forward);
if (Physics.Raycast(ray, out RaycastHit info, 50f, layerMaskAR)) // 50f = 50-meter detection range
{
    distanca.text = string.Format("{0}: {1:N2}m", info.collider.name, info.distance);
}

This does what you need; distanca is a UI Text element, and the layer is assigned to the target object/prefab. The layer mask is declared as

int layerMaskAR = 1 << 6; // 6 because my custom layer is the 6th one

With the layer mask, the raycast only hits objects on that layer and everything else is ignored (if you don't want to ignore anything, remove the layer mask from the raycast and it will report the name of anything with a collider).
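Putting those pieces together, a self-contained sketch (the class name and serialized fields are placeholders, not from the original answer):

using UnityEngine;
using UnityEngine.UI;

public class ARDistanceByRaycast : MonoBehaviour
{
    [SerializeField] Camera cam;      // the AR camera
    [SerializeField] Text distanca;   // UI text that displays the result
    int layerMaskAR = 1 << 6;         // only hit objects on layer 6

    void Update()
    {
        Ray ray = new Ray(cam.transform.position, cam.transform.forward);
        if (Physics.Raycast(ray, out RaycastHit info, 50f, layerMaskAR))
        {
            distanca.text = string.Format("{0}: {1:N2}m", info.collider.name, info.distance);
        }
    }
}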
This is totally doable with one line of code:
Vector3.Distance(gameObject.transform.position, Camera.main.transform.position)

sometimes my meshes are black for large textures in a texture array, sometimes the textures render

I'm trying to shade meshes I generated with a noise heightmap using an array of textures. With a smaller texture size (e.g. 512px*512px) everything works completely fine. However, if I use larger textures, for example 1024px*1024px or 2048px*2048px, my meshes usually render black. The textures render correctly around 5% of the time, and around 20% of the time they seem to render correctly for the first frame and then switch to black.
This issue appears no matter how long my texture array is (a size-1 array still causes the same behavior). I also see the same issue regardless of whether the images are JPGs or PNGs, and I have tried a variety of different images as textures and reproduced the same problem. There are no errors or warnings in my console.
Below are simplified versions of the relevant code, which suffer from the same issue. This version just additively blends the textures, but in the full version of the code the height of the mesh is used to determine which texture(s) to use and the degree of blending between nearby textures. My code is based on Sebastian Lague's procedural landmass generation YouTube tutorial series, which only deals with 512px*512px textures.
The code that puts the texture array and layer number into the shader:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.Linq;
[CreateAssetMenu()]
public class TextureData : UpdatableData {
const int textureSize = 2048;
const TextureFormat textureFormat = TextureFormat.RGB565;
public Layer[] layers;
public void UpdateMeshHeights(Material material, float minHeight, float maxHeight) {
material.SetInt("layerCount", layers.Length);
Texture2DArray texturesArray = GenerateTextureArray(layers.Select(x => x.texture).ToArray());
material.SetTexture("baseTextures", texturesArray);
}
Texture2DArray GenerateTextureArray(Texture2D[] textures) {
Texture2DArray textureArray = new Texture2DArray(textureSize, textureSize, textures.Length, textureFormat, true);
for (int i=0; i < textures.Length; i++) {
textureArray.SetPixels(textures[i].GetPixels(), i);
}
textureArray.Apply();
return textureArray;
}
[System.Serializable]
public class Layer {
public Texture2D texture;
}
}
The shader itself:
Shader "Custom/Terrain" {
SubShader {
Tags { "RenderType"="Opaque" }
CGPROGRAM
#pragma surface surf Standard fullforwardshadows
#pragma target 3.0
int layerCount;
UNITY_DECLARE_TEX2DARRAY(baseTextures);
struct Input {
float3 worldPos;
float3 worldNormal;
};
float3 triplanar(float3 worldPos, float scale, float3 blendAxes, int textureIndex) {
float3 scaledWorldPos = worldPos / scale;
float3 xProjection = UNITY_SAMPLE_TEX2DARRAY(baseTextures, float3(scaledWorldPos.y, scaledWorldPos.z, textureIndex)) * blendAxes.x;
float3 yProjection = UNITY_SAMPLE_TEX2DARRAY(baseTextures, float3(scaledWorldPos.x, scaledWorldPos.z, textureIndex)) * blendAxes.y;
float3 zProjection = UNITY_SAMPLE_TEX2DARRAY(baseTextures, float3(scaledWorldPos.x, scaledWorldPos.y, textureIndex)) * blendAxes.z;
return xProjection + yProjection + zProjection;
}
void surf (Input IN, inout SurfaceOutputStandard o) {
float3 blendAxes = abs(IN.worldNormal);
blendAxes /= blendAxes.x + blendAxes.y + blendAxes.z;
for (int i = 0; i < layerCount; i++) {
float3 textureColor = triplanar(IN.worldPos, 1, blendAxes, i);
o.Albedo += textureColor;
}
}
ENDCG
}
FallBack "Diffuse"
}
Here is a screenshot of the problem in action:
I had a similar problem, so I thought I'd document this here.
Keeping a reference (in a field) to the Texture2DArray created within GenerateTextureArray(...) fixed this for me:
Texture2DArray textureArray;
Texture2DArray GenerateTextureArray(Texture2D[] textures) {
textureArray = new Texture2DArray(textureSize, textureSize, textures.Length, textureFormat, true);
for (int i=0; i < textures.Length; i++) {
textureArray.SetPixels(textures[i].GetPixels(), i);
}
textureArray.Apply();
return textureArray;
}
(Of course you can then remove the return value and use the reference instead)
I can only guess at the reason, but as far as I know the Apply() function uploads the data to the GPU. Since the data then appears no longer needed on the CPU side, the garbage collector removes it, which causes problems when the texture is used later. Why exactly the reference is still needed is still unclear to me, though.
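As the parenthetical above suggests, a minimal sketch of the variant without a return value (same class and field names as in the answer above):

Texture2DArray textureArray; // kept in a field so it cannot be garbage collected

void GenerateTextureArray(Texture2D[] textures) {
    textureArray = new Texture2DArray(textureSize, textureSize, textures.Length, textureFormat, true);
    for (int i = 0; i < textures.Length; i++) {
        textureArray.SetPixels(textures[i].GetPixels(), i);
    }
    textureArray.Apply();
}

// ...and in UpdateMeshHeights:
// GenerateTextureArray(layers.Select(x => x.texture).ToArray());
// material.SetTexture("baseTextures", textureArray);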

(Unity) How to bake data (Vector3 and Color32) onto render textures?

With the recent introduction of VFX Graph, attribute maps are being used to 'Set Position/Color from Map'.
In order to get an attribute map, one must bake position and color data into render textures. But I couldn't find any reference on how to do this, not even in the Unity docs.
Any help on how to do this will be appreciated!
Most of the time you would want to use a Compute Shader to bake a list of points into your textures. I'd suggest you check these repositories for reference:
Bake Skinned Mesh Renderer Data into textures
https://github.com/keijiro/Smrvfx
Bake Kinect data into textures
https://github.com/roelkok/Kinect-VFX-Graph
Bake pointcloud data into texture:
https://github.com/keijiro/Pcx
Personally, I'm using these scripts which work for my purpose though I'm no expert in Compute Shaders:
using UnityEngine;

public class FramePositionBaker
{
ComputeShader bakerShader;
RenderTexture VFXpositionMap;
RenderTexture inputPositionTexture;
private ComputeBuffer positionBuffer;
const int texSize = 256;
public FramePositionBaker(RenderTexture _VFXPositionMap)
{
inputPositionTexture = new RenderTexture(texSize, texSize, 0, RenderTextureFormat.ARGBFloat);
inputPositionTexture.enableRandomWrite = true;
inputPositionTexture.Create();
bakerShader = (ComputeShader)Resources.Load("FramePositionBaker");
if (bakerShader == null)
{
Debug.LogError("[FramePositionBaker] baking shader not found in any Resources folder");
}
VFXpositionMap = _VFXPositionMap;
}
public void BakeFrame(ref Vector3[] vertices)
{
int pointCount = vertices.Length;
positionBuffer = new ComputeBuffer(pointCount, 3 * sizeof(float));
positionBuffer.SetData(vertices);
//Debug.Log("Length " + vertices.Length);
bakerShader.SetInt("dim", texSize);
bakerShader.SetTexture(0, "PositionTexture", inputPositionTexture);
bakerShader.SetBuffer(0, "PositionBuffer", positionBuffer);
bakerShader.Dispatch(0, (texSize / 8) + 1, (texSize / 8) + 1, 1);
Graphics.CopyTexture(inputPositionTexture, VFXpositionMap);
positionBuffer.Dispose();
}
}
The compute shader:
// Each #kernel tells which function to compile; you can have many kernels
#pragma kernel CSMain
// Create a RenderTexture with enableRandomWrite flag and set it
// with cs.SetTexture
RWTexture2D<float4> PositionTexture;
uint dim;
Buffer<float3> PositionBuffer;
[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
// TODO: insert actual code here!
uint index = id.y * dim + id.x;
uint lastIndex = PositionBuffer.Length - 1;
// Trick for generating a pseudo-random number.
// Inspired by a similar trick in Keijiro's PCX repo (BakedPointCloud.cs).
// The points that are in excess because of the square texture, point randomly to a point in the texture.
// e.g. if (index > lastIndex) index = 0 generates excessive particles in the first position, resulting in a visible artifact.
//if (index > lastIndex) index = ( index * 132049U ) % lastIndex;
float3 pos;
if (index > lastIndex && lastIndex != 0) {
//pos = 0;
index = ( index * 132049U ) % lastIndex;
}
pos = PositionBuffer[index];
PositionTexture[id.xy] = float4 (pos.x, pos.y, pos.z, 1);
}
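For context, a rough usage sketch of the baker above (the MonoBehaviour, its field names, and the way the vertices are obtained are assumptions, not part of the original scripts):

using UnityEngine;

public class PositionBakerExample : MonoBehaviour
{
    // Must match the baker's internal texture (256x256, ARGBFloat), since Graphics.CopyTexture needs compatible textures.
    [SerializeField] RenderTexture vfxPositionMap;
    [SerializeField] MeshFilter meshFilter;

    FramePositionBaker baker;
    Vector3[] vertices;

    void Start()
    {
        baker = new FramePositionBaker(vfxPositionMap);
        vertices = meshFilter.mesh.vertices;
    }

    void Update()
    {
        // Writes the current points into the position map, which a VFX Graph can read via 'Set Position from Map'.
        baker.BakeFrame(ref vertices);
    }
}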

How to modify a Texture pixels from a compute shader in unity?

I stumbled upon a strange problem in Vuforia. When I request a camera image using CameraDevice.GetCameraImage(mypixelformat), the image returned is both flipped sideways and rotated 180 degrees. Because of this, to obtain a normal image I have to first rotate the image and then flip it sideways. The approach I am using is simply iterating over the pixels of the image and modifying them, which is very poor performance-wise. Below is the code:
Texture2D image;
CameraDevice cameraDevice = Vuforia.CameraDevice.Instance;
Vuforia.Image vufImage = cameraDevice.GetCameraImage(pixelFormat);
image = new Texture2D(vufImage.Width, vufImage.Height);
vufImage.CopyToTexture(image);
Color32[] colors = image.GetPixels32();
System.Array.Reverse(colors, 0, colors.Length); //rotate 180deg
image.SetPixels32(colors); //apply rotation
image = FlipTexture(image); //flip sideways
//***** THE FLIP TEXTURE METHOD *******//
private Texture2D FlipTexture(Texture2D original, bool upSideDown = false)
{
Texture2D flipped = new Texture2D(original.width, original.height);
int width = original.width;
int height = original.height;
for (int col = 0; col < width; col++)
{
for (int row = 0; row < height; row++)
{
if (upSideDown)
{
flipped.SetPixel(row, (width - 1) - col, original.GetPixel(row, col));
}
else
{
flipped.SetPixel((width - 1) - col, row, original.GetPixel(col, row));
}
}
}
flipped.Apply();
return flipped;
}
To improve the performance I want to somehow schedule these pixel operations on the GPU. I have heard that a compute shader can be used, but I have no idea where to start. Can someone please help me write the same operations in a compute shader so that the GPU can handle them? Thank you!
Compute shaders are new to me too, but I took the occasion to research them a little myself. The following works for flipping a texture vertically (rotating 180 degrees and then flipping horizontally should amount to just a vertical flip).
Someone might have a more elaborate solution for you, but maybe this is enough to get you started.
The Compute shader code:
#pragma kernel CSMain
// Create a RenderTexture with enableRandomWrite flag and set it
// with cs.SetTexture
RWTexture2D<float4> Result;
Texture2D<float4> ImageInput;
float2 flip;
[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
flip = float2(512 , 1024) - id.xy ;
Result[id.xy] = float4(ImageInput[flip].x, ImageInput[flip].y, ImageInput[flip].z, 1.0);
}
and called from any script:
public void FlipImage()
{
    // "shader" (the ComputeShader asset), "myTexture" (the input Texture2D) and "result"
    // (the output Texture2D) are fields assigned elsewhere, e.g. in the inspector.
    int kernelHandle = shader.FindKernel("CSMain");
    RenderTexture tex = new RenderTexture(512, 1024, 24);
    tex.enableRandomWrite = true;
    tex.Create();
    shader.SetTexture(kernelHandle, "Result", tex);
    shader.SetTexture(kernelHandle, "ImageInput", myTexture);
    shader.Dispatch(kernelHandle, 512 / 8, 1024 / 8, 1);
    RenderTexture.active = tex;
    result.ReadPixels(new Rect(0, 0, tex.width, tex.height), 0, 0);
    result.Apply();
}
This takes an input Texture2D, flips it in the shader, and writes the result to a RenderTexture and a Texture2D, whatever you need.
Note that the image sizes are hardcoded in my example and should be replaced by whatever size you need (to pass them into the shader, use shader.SetInt()).
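Along those lines, a small sketch of how the hardcoded sizes could be passed in instead (the parameter names width/height are assumptions):

// C# side: pass the real texture size instead of hardcoding 512x1024.
shader.SetInt("width", myTexture.width);
shader.SetInt("height", myTexture.height);
shader.Dispatch(kernelHandle, Mathf.CeilToInt(myTexture.width / 8f), Mathf.CeilToInt(myTexture.height / 8f), 1);

On the shader side, matching width/height variables would then replace the literals in the flip calculation.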

Multiple textures doesn't show

I'm a DirectX 10 newbie. I'm developing a Direct3D 10 application that mixes two textures which are filled manually according to the user's input. The current implementation is:
Create two empty textures with usage D3D10_USAGE_STAGING.
Create two shader resource views to bind to the pixel shader, because the shader needs them.
Copy the textures to GPU memory by calling CopyResource.
Now the problem is that I can only see the first texture; I don't see the second. It looks to me as if the binding doesn't work for the second texture.
I don't know what's wrong with it. Can anyone here shed some light on it?
Thanks,
Marshall
The class COverlayTexture is responsible for creating the texture, creating the resource view, filling the texture with the bitmap mapped from another application, and binding the resource view to the pixel shader.
HRESULT COverlayTexture::Initialize(VOID)
{
D3D10_TEXTURE2D_DESC texDesStaging;
texDesStaging.Width = m_width;
texDesStaging.Height = m_height;
texDesStaging.Usage = D3D10_USAGE_STAGING;
texDesStaging.BindFlags = 0;
texDesStaging.ArraySize = 1;
texDesStaging.MipLevels = 1;
texDesStaging.SampleDesc.Count = 1;
texDesStaging.SampleDesc.Quality = 0;
texDesStaging.MiscFlags = 0;
texDesStaging.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
texDesStaging.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
HR( m_Device->CreateTexture2D( &texDesStaging, NULL, &m_pStagingResource ) );
D3D10_TEXTURE2D_DESC texDesShader;
texDesShader.Width = m_width;
texDesShader.Height = m_height;
texDesShader.BindFlags = D3D10_BIND_SHADER_RESOURCE;
texDesShader.ArraySize = 1;
texDesShader.MipLevels = 1;
texDesShader.SampleDesc.Count = 1;
texDesShader.SampleDesc.Quality = 0;
texDesShader.MiscFlags = 0;
texDesShader.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
texDesShader.Usage = D3D10_USAGE_DEFAULT;
texDesShader.CPUAccessFlags = 0;
HR( m_Device->CreateTexture2D( &texDesShader, NULL, &m_pShaderResource ) );
D3D10_SHADER_RESOURCE_VIEW_DESC viewDesc;
ZeroMemory( &viewDesc, sizeof( viewDesc ) );
viewDesc.Format = texDesShader.Format;
viewDesc.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2D;
viewDesc.Texture2D.MipLevels = texDesShader.MipLevels;
HR( m_Device->CreateShaderResourceView( m_pShaderResource, &viewDesc, &m_pShaderResourceView ) );
}
HRESULT COverlayTexture::Render(VOID)
{
m_Device->PSSetShaderResources(0, 1, &m_pShaderResourceView);
D3D10_MAPPED_TEXTURE2D lockedRect;
m_pStagingResource->Map( 0, D3D10_MAP_WRITE, 0, &lockedRect );
// Fill in the texture with the bitmap mapped from shared memory view
m_pStagingResource->Unmap(0);
m_Device->CopyResource(m_pShaderResource, m_pStagingResource);
}
I use two instances of the class COverlayTexture, each of which fills its own bitmap into its own texture, and render them in the sequence COverlayTexture[1] then COverlayTexture[0].
COverlayTexture* pOverlayTexture[2];
for( int i = 1; i >= 0; i--)
{
    pOverlayTexture[i]->Render();
}
The blend state setting in the FX file is defined as below:
BlendState AlphaBlend
{
AlphaToCoverageEnable = FALSE;
BlendEnable[0] = TRUE;
SrcBlend = SRC_ALPHA;
DestBlend = INV_SRC_ALPHA;
BlendOp = ADD;
BlendOpAlpha = ADD;
SrcBlendAlpha = ONE;
DestBlendAlpha = ZERO;
RenderTargetWriteMask[0] = 0x0f;
};
The pixel shader in the FX file is defined as below:
Texture2D txDiffuse;
float4 PS(PS_INPUT input) : SV_Target
{
float4 ret = txDiffuse.Sample(samLinear, input.Tex);
return ret;
}
Thanks again.
Edit for Paulo:
Thanks a lot, Paulo. The problem is which instance of the object should be bound to the alpha texture and which to the diffuse texture. As a test, I bind COverlayTexture[0] to the alpha texture and COverlayTexture[1] to the diffuse texture.
Texture2D txDiffuse[2];
float4 PS(PS_INPUT input) : SV_Target
{
float4 ret = txDiffuse[1].Sample(samLinear, input.Tex);
float alpha = txDiffuse[0].Sample(samLinear, input.Tex).x;
return float4(ret.xyz, alpha);
}
I call PSSetShaderResources with the two resource views:
g_pShaderResourceViews[0] = overlay[0].m_pShaderResourceView;
g_pShaderResourceViews[1] = overlay[1].m_pShaderResourceView;
m_Device->PSSetShaderResources(0, 2, g_pShaderResourceViews);
The result is that I don't see anything. I also tried the x, y, z, and w channels.
Post some more code.
I'm not sure how you mean to mix these two textures. If you want to mix them in the pixel shader, you need to sample both of them and then add them (or whatever operation you require) together.
How do you add the textures together? By setting an ID3D10BlendState, or in the pixel shader?
EDIT:
You don't need two textures in every class: if you want to write to your texture your usage should be D3D10_USAGE_DYNAMIC. When you do this, you can also have this texture as your shader resource so you don't need to do the m_Device->CopyResource(m_pShaderResource, m_pStagingResource); step.
Since you're using alpha blending you must control the alpha value output in the pixel shader (the w component of the float4 that the pixel shader returns).
Bind both textures to your pixel shader and use one texture's value as the alpha component:
Texture2D txDiffuse;
Texture2D txAlpha;
float4 PS(PS_INPUT input) : SV_Target
{
float4 ret = txDiffuse.Sample(samLinear, input.Tex);
float alpha=txAlpha.Sample(samLinear,input.Tex).x; // Choose the proper channel
return float4(ret.xyz,alpha); // Alpha is the 4th component
}