I already posted this question on Unity Answers yesterday, but maybe someone here can help? I've been trying to do something that involves getting an image from a native plugin (in the form of a .dll file). I load the image data into a native buffer and then push it to the GPU as a structured compute buffer. From there, I display the image using a shader (basically just doing something like uint idx = x + y * width to get the correct index). And this works great on my laptop (ignore the low resolution, I lowered it to be able to inspect the values for each pixel; this is exactly how it's supposed to look).
But when I try it on my desktop, all I get is this mess:
It's clearly displaying something; I can almost make out the contours of the text (so it doesn't seem like I'm just getting random noise). But I can't work out what's wrong here.
So far I've tried:
syncing the code across the two devices (it's exactly the same)
changing the unity version (tried 2020.3.26f1 and 2021.2.12f on both machines)
updating the graphics drivers
checking the directx version (DirectX 12 on both)
changing the editor game window resolution
comparing the contents of the buffer (the ComputeBuffer.GetData method is getting the same completely valid values on both machines)
building the project on both machines (both builds are working on my laptop and broken on my desktop)
The last point especially confused me. I'm running the same executable on both machines, and it works on my laptop with integrated graphics (not sure whether that could be relevant) but not on my desktop with a more modern dedicated GPU. The only idea I have left is that there might be some kind of optimization going on with my desktop's AMD GPU that's not happening on my laptop's Intel GPU. Any ideas on what I could try in the Radeon software? Maybe it could even be some sort of bug (with Unity or with my graphics driver)?
I'd be more than happy about any ideas on what the problem could be (because I have no clue at this point). And sorry if my grammar is a bit off at times, I'm not a native speaker.
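For context, the C# side that creates and binds the buffer looks roughly like this (a minimal sketch with placeholder names, not the actual plugin code):
using UnityEngine;
// Minimal sketch of the upload path (assumed, not the actual plugin code). Each
// buffer element is 3 uints = 12 bytes, i.e. 4 pixels of 3 bytes each, matching
// StructuredBuffer<packed4> in the shader below.
public class PackedImageUploader : MonoBehaviour
{
    public Material blitMaterial; // material using the "Hidden/ReadUnpacked" shader
    ComputeBuffer buffer;
    public void Upload(uint[] packedPixels, int width, int height)
    {
        int packCount = (width * height + 3) / 4; // 4 pixels per packed4 element
        buffer = new ComputeBuffer(packCount, 3 * sizeof(uint), ComputeBufferType.Structured);
        buffer.SetData(packedPixels); // data copied out of the native buffer
        blitMaterial.SetBuffer("InputBuffer", buffer);
        blitMaterial.SetVector("Resolution", new Vector4(width, height, 0, 0));
    }
    void OnDestroy()
    {
        if (buffer != null) buffer.Release();
    }
}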
EDIT: Here's the shader I use to display the image.
Shader "Hidden/ReadUnpacked"
{
Properties
{
_MainTex("Texture", 2D) = "white" {}
}
SubShader
{
// No culling or depth
Cull Off ZWrite Off ZTest Always
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
static const uint PACKED_SIZE = 3; // uints per buffer element
static const uint PIXELS_PER_PACK = 4; // pixels stored per element
static const uint BYTES_PER_PIXEL = 8; // bits to shift per packed byte
static const uint PERCISION = 0xFF; // 0xFF = 2^8 - 1, i.e. one full byte
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
};
struct packed4
{
uint p[PACKED_SIZE];
};
struct unpacked4
{
fixed4 p[PIXELS_PER_PACK];
};
StructuredBuffer<packed4> InputBuffer;
uint ImgIdx;
float2 Resolution;
float2 TexelOffset;
fixed unpackSingle(packed4 val, uint idx)
{
uint pid = idx / PIXELS_PER_PACK; // which uint of the pack
uint sid = idx % PIXELS_PER_PACK * BYTES_PER_PIXEL; // bit shift within that uint
return ((val.p[pid] >> sid) & PERCISION) / (half)PERCISION;
}
unpacked4 unpack(packed4 packed)
{
unpacked4 unpacked;
half r, g, b;
uint idx = 0;
[unroll(PIXELS_PER_PACK)] for (uint i = 0; i < PIXELS_PER_PACK; i++)
{
fixed4 upx = fixed4(0, 0, 0, 1);
[unroll(PACKED_SIZE)] for (uint j = 0; j < PACKED_SIZE; j++)
{
upx[j] = unpackSingle(packed, idx++);
}
unpacked.p[i] = upx;
}
return unpacked;
}
fixed4 samplePackedBuffer(float2 uv)
{
int2 tc = float2(uv.x, 1 - uv.y) * Resolution;
uint idx = tc.x + tc.y * Resolution.x; // image pixel index
idx += Resolution.x * Resolution.y * ImgIdx;
uint gid = floor(idx / PIXELS_PER_PACK); // packed global index
uint lid = idx % PIXELS_PER_PACK; // packed local index
packed4 ppx = InputBuffer[gid];
unpacked4 upx = unpack(ppx);
return upx.p[lid];
}
v2f vert(appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = v.uv;
return o;
}
fixed4 frag(v2f i) : SV_Target
{
fixed4 col = samplePackedBuffer(i.uv);
return col;
}
ENDCG
}
}
}
You should also try the other graphics APIs (D3D11, Vulkan, OpenGL core, ...) in the Player settings to see whether the problem is specific to one of them.
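To narrow it down, you can also log which API and device are actually in use on each machine (a quick check using the standard SystemInfo API):
using UnityEngine;
// Logs the graphics API and device the player actually picked on this machine,
// so the laptop and desktop results can be compared directly.
public class GraphicsApiLogger : MonoBehaviour
{
    void Start()
    {
        Debug.Log("Graphics API: " + SystemInfo.graphicsDeviceType +
                  ", device: " + SystemInfo.graphicsDeviceName +
                  ", version: " + SystemInfo.graphicsDeviceVersion);
    }
}
Launching the standalone build with -force-d3d11, -force-vulkan or -force-glcore and comparing the logged API between the two machines would show whether the corruption follows a particular API.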
Related
I am trying to learn about shaders by following this tutorial. I finished the first part, where I was able to draw the outlines using depth. But even though I copied everything accordingly during the normal map phase, I only get a black screen. Is there something wrong with what I have?
My outline shader looks like this; currently I am only rendering the normals, but the screen is all black. I've modified the other files as well according to the tutorial. Maybe there's a problem with the new version of Unity? I don't know. Help would be much appreciated.
Shader "Hidden/Roystan/Outline Post Process"
{
SubShader
{
Cull Off ZWrite Off ZTest Always
Pass
{
// Custom post processing effects are written in HLSL blocks,
// with lots of macros to aid with platform differences.
// https://github.com/Unity-Technologies/PostProcessing/wiki/Writing-Custom-Effects#shader
HLSLPROGRAM
#pragma vertex VertDefault
#pragma fragment Frag
#include "Packages/com.unity.postprocessing/PostProcessing/Shaders/StdLib.hlsl"
TEXTURE2D_SAMPLER2D(_MainTex, sampler_MainTex);
// _CameraNormalsTexture contains the view space normals transformed
// to be in the 0...1 range.
TEXTURE2D_SAMPLER2D(_CameraNormalsTexture, sampler_CameraNormalsTexture);
TEXTURE2D_SAMPLER2D(_CameraDepthTexture, sampler_CameraDepthTexture);
// Data pertaining to _MainTex's dimensions.
// https://docs.unity3d.com/Manual/SL-PropertiesInPrograms.html
float4 _MainTex_TexelSize;
float _Scale;
float _DepthThreshold;
float _NormalThreshold;
// Combines the top and bottom colors using normal blending.
// https://en.wikipedia.org/wiki/Blend_modes#Normal_blend_mode
// This performs the same operation as Blend SrcAlpha OneMinusSrcAlpha.
float4 alphaBlend(float4 top, float4 bottom)
{
float3 color = (top.rgb * top.a) + (bottom.rgb * (1 - top.a));
float alpha = top.a + bottom.a * (1 - top.a);
return float4(color, alpha);
}
float4 Frag(VaryingsDefault i) : SV_Target
{
float halfScaleFloor = floor(_Scale * 0.5);
float halfScaleCeil = ceil(_Scale * 0.5);
float2 bottomLeftUV = i.texcoord - float2(_MainTex_TexelSize.x, _MainTex_TexelSize.y) * halfScaleFloor;
float2 topRightUV = i.texcoord + float2(_MainTex_TexelSize.x, _MainTex_TexelSize.y) * halfScaleCeil;
float2 bottomRightUV = i.texcoord + float2(_MainTex_TexelSize.x * halfScaleCeil, -_MainTex_TexelSize.y * halfScaleFloor);
float2 topLeftUV = i.texcoord + float2(-_MainTex_TexelSize.x * halfScaleFloor, _MainTex_TexelSize.y * halfScaleCeil);
float depth0 = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, bottomLeftUV).r;
float depth1 = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, topRightUV).r;
float depth2 = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, bottomRightUV).r;
float depth3 = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, sampler_CameraDepthTexture, topLeftUV).r;
float3 normal0 = SAMPLE_TEXTURE2D(_CameraNormalsTexture, sampler_CameraNormalsTexture, bottomLeftUV).rgb;
float3 normal1 = SAMPLE_TEXTURE2D(_CameraNormalsTexture, sampler_CameraNormalsTexture, topRightUV).rgb;
float3 normal2 = SAMPLE_TEXTURE2D(_CameraNormalsTexture, sampler_CameraNormalsTexture, bottomRightUV).rgb;
float3 normal3 = SAMPLE_TEXTURE2D(_CameraNormalsTexture, sampler_CameraNormalsTexture, topLeftUV).rgb;
float3 normalFiniteDifference0 = normal1 - normal0;
float3 normalFiniteDifference1 = normal3 - normal2;
float edgeNormal = sqrt(dot(normalFiniteDifference0, normalFiniteDifference0) + dot(normalFiniteDifference1, normalFiniteDifference1));
edgeNormal = edgeNormal > _NormalThreshold ? 1 : 0;
return edgeNormal;
float depthFiniteDifference0 = depth1 - depth0;
float depthFiniteDifference1 = depth3 - depth2;
float edgeDepth = sqrt(pow(depthFiniteDifference0, 2) + pow(depthFiniteDifference1, 2)) * 100;
edgeDepth = edgeDepth > _DepthThreshold ? 1 : 0;
float edge = max(edgeDepth, edgeNormal);
//return edge;
}
ENDHLSL
}
}
}
Just after the HLSLPROGRAM you have two #pragma directives, one for the vertex method and one for the fragment method, but later in the code you only implement the fragment function (Frag); the vertex one (VertDefault) is missing. As far as I know, you must have both methods implemented for the shader to work properly. Hope I helped.
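If VertDefault really isn't being picked up from the included StdLib.hlsl in your package version, you could point the pragma at your own pass-through vertex function instead (a sketch; the UV mapping here is a simplified assumption):
// Hypothetical fallback vertex function; would be referenced with
// #pragma vertex VertSimple instead of VertDefault.
VaryingsDefault VertSimple(AttributesDefault v)
{
    VaryingsDefault o = (VaryingsDefault)0;   // zero-init all fields
    o.vertex = float4(v.vertex.xy, 0.0, 1.0); // fullscreen triangle position
    o.texcoord = (v.vertex.xy + 1.0) * 0.5;   // assumed mapping to 0..1 UVs
#if UNITY_UV_STARTS_AT_TOP
    o.texcoord.y = 1.0 - o.texcoord.y;        // flip for D3D-style APIs
#endif
    return o;
}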
I started learning to code shaders a couple of weeks ago. I wanted to create a basic shader for my 2D project, but while doing it I ran into the problem shown here.
There are weird gaps between light points. Fragments get a lower light value where two or more light sources touch each other. I can't think of a way to fix this problem.
Here is my code (part of it):
struct Light{
float4 position;
float size;
};
//appdata and v2f
StructuredBuffer<Light> _LightData;
fixed4 frag (v2f i) : SV_Target{
    float _distance = 0;
    fixed4 color = tex2D(_MainTex, i.uv);
    // keep the smallest normalized distance to any light
    for(int a = 0; a < _LightArraySize; a++){
        float d = distance(i.worldSpacePos, _LightData[a].position + 0.5) / _LightData[a].size;
        if(_distance > 0){
            if(_distance > d){
                _distance = d;
            }
        }else{
            _distance = d;
        }
    }
    // darken the sampled texture by that distance
    return fixed4(color.x - _distance, color.y - _distance, color.z - _distance, color.w);
}
Hello, I have followed a video series on YouTube by Sebastian Lague on procedural generation. I have gone through his whole series, but in my case there are black spots in the mesh, only in water regions. I'm using global mode, for those wondering, and Unity 2019.4.6f1. I want to get rid of the black spots; I have also tried to build and run the project and the black spots were still there.
Link to his series: https://www.youtube.com/watch?v=wbpMiKiSKm8&list=PLFt_AvWsXl0eBW2EiBtl_sxmDtSgZBxB3
I have downloaded his project from GitHub and he doesn't seem to have this problem. Here is his GitHub page: https://github.com/SebLague/Procedural-Landmass-Generation
Also here is a picture -> here
I'm creating my own custom shader for the terrain, here it is
Shader "Custom/terrain"
{
// these properties will be added to our meshMaterial
Properties {
testTexture("Texture", 2D) = "white"{}
testScale("Scale", Float) = 1
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 200
CGPROGRAM
// Physically based Standard lighting model, and enable shadows on all light types
#pragma surface surf Standard fullforwardshadows
// Use shader model 3.0 target, to get nicer looking lighting
#pragma target 3.0
const static int maxLayerCount = 8;
const static float epsilon = 1E-4;
int layerCount;
// float3 because of RGB
float3 baseColors[maxLayerCount];
float baseStartHeights[maxLayerCount];
float baseBlends[maxLayerCount];
float baseColorStrength[maxLayerCount];
float baseTextureScales[maxLayerCount];
float minHeight;
float maxHeight;
sampler2D testTexture;
float testScale;
UNITY_DECLARE_TEX2DARRAY(baseTextures);
struct Input {
float3 worldPos;
float3 worldNormal;
};
// float a is min value, float b is max value and value is current value
float inverseLerp(float a, float b, float value) {
// saturate means clamp the value between 0 and 1
return saturate((value - a)/(b - a));
}
// Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
// See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
// #pragma instancing_options assumeuniformscaling
UNITY_INSTANCING_BUFFER_START(Props)
// put more per-instance properties here
UNITY_INSTANCING_BUFFER_END(Props)
float3 triplanar(float3 worldPos, float scale, float3 blendAxis, int textureIndex) {
float3 scaledWorldPos = worldPos / scale;
// triplanar mapping
float3 xProjection = UNITY_SAMPLE_TEX2DARRAY(baseTextures,
float3(scaledWorldPos.y, scaledWorldPos.z, textureIndex)) * blendAxis.x;
float3 yProjection = UNITY_SAMPLE_TEX2DARRAY(baseTextures,
float3(scaledWorldPos.x, scaledWorldPos.z, textureIndex)) * blendAxis.y;
float3 zProjection = UNITY_SAMPLE_TEX2DARRAY(baseTextures,
float3(scaledWorldPos.x, scaledWorldPos.y, textureIndex)) * blendAxis.z;
return xProjection + yProjection + zProjection;
}
// this function is called for every visible pixel of our mesh;
// we want to set the surface color there
void surf (Input IN, inout SurfaceOutputStandard o) {
float heightPercent = inverseLerp(minHeight, maxHeight, IN.worldPos.y);
float3 blendAxis = abs(IN.worldNormal);
blendAxis /= blendAxis.x + blendAxis.y + blendAxis.z;
for (int i = 0; i < layerCount; i++) {
float drawStrength = inverseLerp(-baseBlends[i]/2 - epsilon, baseBlends[i]/2, (heightPercent - baseStartHeights[i]));
float3 baseColor = baseColors[i] * baseColorStrength[i];
float3 textureColor = triplanar(IN.worldPos, baseTextureScales[i], blendAxis, i) * (1-baseColorStrength[i]);
// if drawStrength is 0 we would otherwise overwrite the albedo with black;
// blending the existing albedo with the new color by drawStrength
// keeps the previous color when drawStrength is 0 (which is what we want)
o.Albedo = o.Albedo * (1-drawStrength) + (baseColor + textureColor) * drawStrength;
}
}
ENDCG
}
FallBack "Diffuse"
}
I thought the problem was the code, but I compared my code against Sebastian Lague's code (which is available on GitHub) and there was no difference. The problem turned out to be the animation curve we used to assign base heights. Just make sure that it starts a bit below zero; that was the solution in my case.
Github Link:
https://github.com/SebLague/Procedural-Landmass-Generation/tree/master/Proc%20Gen%20E21
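For context, the height curve the answer refers to is applied per height-map sample in the tutorial's mesh generator, roughly like this (paraphrased from the series; the exact names may differ):
// Paraphrased from the tutorial: the curve remaps the 0..1 noise sample before
// the height multiplier is applied, so the curve's starting value determines the
// height of the flat water regions where the black spots appear.
float vertexHeight = heightCurve.Evaluate(heightMap[x, y]) * heightMultiplier;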
I am developing a native (C++) plugin (Windows only for now) for Unity (2018.1.0f2).
The plugin downloads textures and meshes and provides them to Unity.
There is a LOT of boilerplate code that I would like to spare you of.
Anyway, the rendering is done like this:
void RegenerateCommandBuffer(CommandBuffer buffer, List<DrawTask> tasks)
{
buffer.Clear();
buffer.SetProjectionMatrix(cam.projectionMatrix); // protected Camera cam; cam = GetComponent<Camera>();
foreach (DrawTask t in tasks)
{
if (t.mesh == null)
continue;
MaterialPropertyBlock mat = new MaterialPropertyBlock();
bool monochromatic = false;
if (t.texColor != null)
{
var tt = t.texColor as VtsTexture;
mat.SetTexture(shaderPropertyMainTex, tt.Get());
monochromatic = tt.monochromatic;
}
if (t.texMask != null)
{
var tt = t.texMask as VtsTexture;
mat.SetTexture(shaderPropertyMaskTex, tt.Get());
}
mat.SetMatrix(shaderPropertyUvMat, VtsUtil.V2U33(t.data.uvm));
mat.SetVector(shaderPropertyUvClip, VtsUtil.V2U4(t.data.uvClip));
mat.SetVector(shaderPropertyColor, VtsUtil.V2U4(t.data.color));
// flags: mask, monochromatic, flat shading, uv source
mat.SetVector(shaderPropertyFlags, new Vector4(t.texMask == null ? 0 : 1, monochromatic ? 1 : 0, 0, t.data.externalUv ? 1 : 0));
buffer.DrawMesh((t.mesh as VtsMesh).Get(), VtsUtil.V2U44(t.data.mv), material, 0, -1, mat);
}
}
There are two control modes: either the Unity camera is controlled by the camera in the plugin, or the plugin camera is controlled by the Unity camera. In my current scenario, the plugin camera is controlled by the Unity camera. There is no special magic behind the scenes, but some of the transformations need to be done in double precision to avoid meshes 'jumping' around.
void CamOverrideView(ref double[] values)
{
Matrix4x4 Mu = mapTrans.localToWorldMatrix * VtsUtil.UnityToVtsMatrix;
// view matrix
if (controlTransformation == VtsDataControl.Vts)
cam.worldToCameraMatrix = VtsUtil.V2U44(Math.Mul44x44(values, Math.Inverse44(VtsUtil.U2V44(Mu))));
else
values = Math.Mul44x44(VtsUtil.U2V44(cam.worldToCameraMatrix), VtsUtil.U2V44(Mu));
}
void CamOverrideParameters(ref double fov, ref double aspect, ref double near, ref double far)
{
// fov
if (controlFov == VtsDataControl.Vts)
cam.fieldOfView = (float)fov;
else
fov = cam.fieldOfView;
// near & far
if (controlNearFar == VtsDataControl.Vts)
{
cam.nearClipPlane = (float)near;
cam.farClipPlane = (float)far;
}
else
{
near = cam.nearClipPlane;
far = cam.farClipPlane;
}
}
And a shader:
Shader "Vts/UnlitShader"
{
SubShader
{
Tags { "RenderType" = "Opaque" }
LOD 100
Pass
{
Blend SrcAlpha OneMinusSrcAlpha
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct vIn
{
float4 vertex : POSITION;
float2 uvInternal : TEXCOORD0;
float2 uvExternal : TEXCOORD1;
};
struct v2f
{
float4 vertex : SV_POSITION;
float2 uvTex : TEXCOORD0;
float2 uvClip : TEXCOORD1;
};
struct fOut
{
float4 color : SV_Target;
};
sampler2D _MainTex;
sampler2D _MaskTex;
float4x4 _UvMat;
float4 _UvClip;
float4 _Color;
float4 _Flags; // mask, monochromatic, flat shading, uv source
v2f vert (vIn i)
{
v2f o;
o.vertex = UnityObjectToClipPos(i.vertex);
o.uvTex = mul((float3x3)_UvMat, float3(_Flags.w > 0 ? i.uvExternal : i.uvInternal, 1.0)).xy;
o.uvClip = i.uvExternal;
return o;
}
fOut frag (v2f i)
{
fOut o;
// texture color
o.color = tex2D(_MainTex, i.uvTex);
if (_Flags.y > 0)
o.color = o.color.rrra; // monochromatic texture
// uv clipping
if ( i.uvClip.x < _UvClip.x
|| i.uvClip.y < _UvClip.y
|| i.uvClip.x > _UvClip.z
|| i.uvClip.y > _UvClip.w)
discard;
// mask
if (_Flags.x > 0)
{
if (tex2D(_MaskTex, i.uvTex).r < 0.5)
discard;
}
// uniform tint
o.color *= _Color;
return o;
}
ENDCG
}
}
}
It all works perfectly in the editor. It also works well in a standalone development build. But the transformations go wrong in 'deploy' (non-development) builds. The rendered parts look as if they were rotated around the wrong axes or with flipped signs.
Can you spot some obvious mistakes?
My first suspect was OpenGL vs DirectX differences, but the 'deploy' and 'development' builds should use the same API, should they not? Moreover, I have tried changing the player settings to force one or the other, but it made no difference.
Edit:
Good image: https://drive.google.com/open?id=1RTlVZBSAj7LIml1sBCX7nYTvMNaN0xK-
Bad image: https://drive.google.com/open?id=176ahft7En6MqT-aS2RdKXOVW68NmvK2L
Note how the terrain is correctly aligned with the atmosphere.
Steps to reproduce
1) Create a new project in unity
2) Download the assets https://drive.google.com/open?id=18uKuiya5XycjGWEcsF-xjy0fn7sf-D82 and extract them into the newly created project
3) Try it in editor -> should work ok (it will start downloading meshes and textures from us, so be patient; the downloaded resources are cached in eg. C://users//.cache/vts-browser)
The plane is controlled by mouse with LMB pressed.
4) Build as a development build and run -> should work ok too
5) Build NOT as a development build and run -> the terrain transformations behave incorrectly.
Furthermore, I have published the repository. Here is the Unity-specific code: https://github.com/Melown/vts-browser-unity-plugin
Unfortunately, I did not intend to publish it this soon, so the repository is missing some formal things like a readme and build instructions. Most of the information can, however, be found in the submodules.
CommandBuffer.SetProjectionMatrix apparently needs a matrix that has been adjusted by GL.GetGPUProjectionMatrix.
buffer.SetProjectionMatrix(GL.GetGPUProjectionMatrix(cam.projectionMatrix, false));
Unfortunately, I still do not understand why this causes different behavior between deploy and development builds. I would have expected it to only make a difference on different platforms.
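For what it's worth, GL.GetGPUProjectionMatrix converts the camera's OpenGL-convention projection matrix into whatever the active graphics API expects (z-range and y-flip), so it is generally needed whenever a projection matrix is fed to the GPU manually; its second argument indicates whether you are rendering into a RenderTexture. A sketch of that call:
// Sketch: adjust the projection matrix for the active graphics API; pass true
// when the command buffer renders into a RenderTexture (which may be y-flipped
// on non-OpenGL APIs).
bool intoTexture = cam.targetTexture != null;
buffer.SetProjectionMatrix(GL.GetGPUProjectionMatrix(cam.projectionMatrix, intoTexture));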
I've written a shader and it works fine when I add it to a plane located in front of the camera (in this case the camera does not have the shader). But when I add this shader to the camera, it does not show anything on the screen. Here is my code; could you let me know how I can change it to be compatible with the Camera.RenderWithShader method?
Shader "Custom/she1" {
Properties {
top("Top", Range(0,2)) = 1
bottom("Bottom", Range(0,2)) = 1
}
SubShader {
// Draw ourselves after all opaque geometry
Tags { "Queue" = "Transparent" }
// Grab the screen behind the object into _GrabTexture
GrabPass { }
// Render the object with the texture generated above
Pass {
CGPROGRAM
#pragma debug
#pragma vertex vert
#pragma fragment frag
#pragma target 3.0
sampler2D _GrabTexture : register(s0);
float top;
float bottom;
struct data {
float4 vertex : POSITION;
float3 normal : NORMAL;
};
struct v2f {
float4 position : POSITION;
float4 screenPos : TEXCOORD0;
};
v2f vert(data i){
v2f o;
o.position = mul(UNITY_MATRIX_MVP, i.vertex);
o.screenPos = o.position;
return o;
}
half4 frag( v2f i ) : COLOR
{
float2 screenPos = i.screenPos.xy / i.screenPos.w;
float _half = (top + bottom) * 0.5;
float _diff = (bottom - top) * 0.5;
screenPos.x = screenPos.x * (_half + _diff * screenPos.y);
screenPos.x = (screenPos.x + 1) * 0.5;
screenPos.y = 1-(screenPos.y + 1) * 0.5 ;
half4 sum = half4(0.0h,0.0h,0.0h,0.0h);
sum = tex2D( _GrabTexture, screenPos);
return sum;
}
ENDCG
}
}
Fallback Off
}
I think what you're asking for is a replacement shader that shades everything in the camera with your shader.
Am I correct?
If so, this should work:
Camera.main.SetReplacementShader(Shader.Find("Your Shader"), "RenderType");
here is some more info:
http://docs.unity3d.com/Documentation/Components/SL-ShaderReplacement.html
Edit: Are you expecting the entire camera view to warp like a lens effect? Because you're not going to get that using a shader like this by itself; as it stands it will only apply to objects like your plane, not the full camera view. That requires a post-processing image effect. First you'd need Unity Pro. If you have it, import the Image Effects package and look at the fisheye script; see if you can duplicate it with your own shader. When I attached the fisheye shader without its corresponding script, I got the same exact results as you are getting with your current shader code. If you don't have access to the Image Effects package, let me know and I'll send you the fisheye scripts and shaders.
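A minimal full-screen image effect along those lines would look roughly like this (a sketch; effectMaterial is assumed to be a material using your distortion shader, and the component goes on the camera):
using UnityEngine;
// Sketch of a post-processing image effect: the whole camera image is blitted
// through a material, which is what a lens-style distortion needs (a replacement
// shader only re-shades the objects themselves).
[RequireComponent(typeof(Camera))]
public class DistortionEffect : MonoBehaviour
{
    public Material effectMaterial; // material using the distortion shader
    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        Graphics.Blit(source, destination, effectMaterial);
    }
}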
I have tried several ways so far. The shader itself works very well when I add it to a plane located in front of the main camera. But when I add it to the main camera with the code below, nothing is visible on the screen (just a blank screen) and there is no error message. I assign the above shader to the repl variable.
using UnityEngine;
using System.Collections;
public class test2 : MonoBehaviour {
// Use this for initialization
public Shader repl = null;
void Start () {
Camera.main.SetReplacementShader(repl,"Opaque");
}
// Update is called once per frame
void Update () {
}
}
Just for your information, the above shader distorts the scene to a trapezium shape.