I'm writing Cg shaders using Unity3D.
I'm trying to use the fmod function to repeat a texture along an axis (basically the same effect I can achieve by setting the texture scale in the Material with TextureWrapMode.Repeat).
This is the fragment shader code that can reproduce the error:
half4 frag (v2f i) : COLOR
{
float u_sample_coord = fmod(i.uv.x, period) / period;
half4 col = tex2D(myTexture, float2(u_sample_coord, i.uv.y));
return col;
}
Basically it seems to work, but it produces some sort of aliasing (a strip pattern) in correspondence with the zeros of the fmod function.
Here are some screenshots:
The image above shows how the texture repeats correctly.
Here's a zoom on the strip aliasing pattern that emerged:
I tried to debug it but I can't figure out what's going on exactly.
Could anyone tell me what the problem is and, if possible, how to solve it?
EDIT:
I found out that disabling mipmap generation solves this problem. However, I'd like to use mipmaps to avoid minification aliasing as the viewing distance increases. Does anyone have an idea?
You'll need to explicitly set the u and v derivatives in the tex2D() call. The GPU picks a mip level from the screen-space derivatives of the texture coordinates; at the wrap point your fmod-based coordinate jumps from 1 back to 0, the derivative becomes huge for that column of pixels, and the hardware drops to the smallest mip level there, which is the strip you're seeing. But: why are you using fmod at all? If the sampler's repeat mode is set to wrap, then you can let the UV coordinates roam far beyond the 0-1 range. It will wrap by itself.
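A sketch of that derivative fix, keeping the variable names from the question (myTexture, period): compute the derivatives from the continuous, un-wrapped coordinate and pass them to the gradient overload of tex2D, so the seam no longer forces the lowest mip level.

```hlsl
half4 frag (v2f i) : COLOR
{
    float u_sample_coord = fmod(i.uv.x, period) / period;
    // Derivatives of the *continuous* coordinate: i.uv.x / period changes
    // at the same rate as the wrapped coordinate everywhere except at the
    // seam, where the wrapped one jumps and breaks mip selection.
    float2 uv = float2(u_sample_coord, i.uv.y);
    float2 dx = ddx(float2(i.uv.x / period, i.uv.y));
    float2 dy = ddy(float2(i.uv.x / period, i.uv.y));
    // Gradient overload of tex2D: mip level is chosen from dx/dy,
    // not from the discontinuous uv itself.
    half4 col = tex2D(myTexture, uv, dx, dy);
    return col;
}
```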
Related
I want to customize Unity's standard shader. I have downloaded its source code and just want to customize it so that the smoothness of the albedo alpha property changes by x (for example 0.1f) per frame. Sorry if this is a bad question, but I googled everything I could and found nothing. Thanks in advance.
Here is the image to make my question more clear, I want to change this value on every frame from shader script
It's called _Glossiness inside the standard shader. You can change it using this:
material.SetFloat("_Glossiness", .25f);
However, if you're using LWRP, then it's more conveniently called _Smoothness.
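Since the question asks for a per-frame change from script, a minimal MonoBehaviour sketch (the class and field names are mine) might look like this:

```csharp
using UnityEngine;

public class SmoothnessAnimator : MonoBehaviour
{
    Material material;
    float smoothness;

    void Start()
    {
        // .material (rather than .sharedMaterial) gives this object
        // its own instance, so other renderers are unaffected.
        material = GetComponent<Renderer>().material;
    }

    void Update()
    {
        // Increase by 0.1 per frame, wrapping back to 0 at 1.
        smoothness = Mathf.Repeat(smoothness + 0.1f, 1f);
        material.SetFloat("_Glossiness", smoothness);
    }
}
```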
It is possible to do animations inside the shader if that's what you want, but keep in mind that a shader cannot remember any kind of information about the state of your model except what is going on at this exact frame. If you want that, you'll need to use Iggy's solution, by setting values from code.
You can, however, animate things using the current time, potentially together with some math like sine waves for a pulsating effect. I wouldn't try to modify the standard shader if I were you, though; that is a can of worms you do not want to open. In fact, Unity designed surface shaders for this exact purpose. A surface shader can be configured to do anything the standard shader does, but with far less code.
If you go Create > Shader > Surface Shader, you get a good base. You can also find more information here and here. For readability I will not write down the code for all the parameters in my examples, but you can see roughly how it works in the template.
In your case, you can use the _Time variable to animate the glossiness. _Time is a 4D vector whose x, y, z, and w components represent different speeds of time (t/20, t, t*2, t*3). For instance, you could do something like this:
void surf (Input IN, inout SurfaceOutput o) {
// This will make the smoothness pulsate with time.
o.Smoothness = sin(_Time.z);
}
If you really need an effect to start at a certain time, you can define an offset timestamp and set it from code, as in Iggy's answer. If you want the shader to only trigger once you set the stamp, you can use a condition based on the value of the timestamp.
float _StartTime;

void surf (Input IN, inout SurfaceOutput o) {
// Ramps the smoothness up once _StartTime has been set from script.
// saturate() clamps the value between 0 and 1.
if (_StartTime > 0)
o.Smoothness = sin(saturate(_Time.y - _StartTime));
else
o.Smoothness = 0;
}
I'm currently trying to change my asset style from realistic to low-poly / cartoonish.
For example, I have a toon surface shader:
half4 LightingRamp(SurfaceOutput s, half3 lightDir, half atten) {
half NdotL = dot(s.Normal, lightDir);
half diff = NdotL * 0.5 + 0.5;
half3 ramp = tex2D(_LightingTex, float2(diff, 0)).rgb;
half4 c;
c.rgb = _LightColor0.rgb * atten * ramp * _Color.rgb;
c.a = s.Alpha;
return c;
}
where _LightingTex is a 2D ramp texture. This works fine for lighting effects on the objects themselves.
As I have multiple objects with this shader in my scene, some of them are casting a shadow onto my wall.
As you can see, the shadow here is not a ramp but a continuous gradient, as it is (probably) produced by Unity's built-in shadowing. My question is now: is there an option to create this color-ramp effect on the global shadows as well? Something like this:
Can I do it material shader based, or is it a post processing effect?
Thanks
With surface shaders: no, you can't do it in the shader. Actually, I think the best way to get a unified cartoon effect is to use a color grading LUT as a post effect. The great thing about LUTs is that you can create one easily in Photoshop by first applying some cool effects to a regular image until it looks the way you want (such as "Posterize"), and then copying the effect stack onto a LUT texture, like this one. When you use this LUT in Unity, everything will look as it would with your Photoshop filters applied. One small caveat I've noticed, though, is that some standard LUT textures need to be flipped vertically to work with the Post Processing Stack. Here is a nice tutorial on how to create posterized LUTs.
If you want to get the toon-like shadows directly in the shader, it is not any harder than writing a regular forward-rendered vertex/fragment shader, though this by itself requires a bit of knowledge of how these work - I can recommend looking at the standard shader source code, this, or this (somewhat outdated) tutorial. You can find the details surrounding how to add shadow support in my post here. The only thing you need to change is to add a similar color ramp to the shadow mask:
half shadow = SHADOW_ATTENUATION(IN);
shadow = tex2D(_ShadowRamp, float2(shadow, 0)).r;
For this, you can set the shadow ramp as a global shader variable from script, so you won't have to assign it for each material.
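A sketch of that global setup from script (the component name and the inspector-assigned texture are my own; _ShadowRamp must match the name declared in the shader):

```csharp
using UnityEngine;

public class ShadowRampSetter : MonoBehaviour
{
    // Assign the ramp texture in the inspector.
    public Texture2D shadowRamp;

    void Start()
    {
        // _ShadowRamp should be declared in the CGPROGRAM block but NOT
        // listed as a ShaderLab property, so it behaves as a shader
        // global shared by every material using the shader.
        Shader.SetGlobalTexture("_ShadowRamp", shadowRamp);
    }
}
```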
Say you have a trivial Unity shader. It does not use any texture at all. It simply grabs the position:
void vert (inout appdata_full v, out Input o)
{
UNITY_INITIALIZE_OUTPUT(Input,o);
o.localPos = v.vertex.xyz;
}
and then draws a square.
The quad in the example has been stretched about 3:1 using the transform.
If in the shader we simply knew the scaling (or just the ratios), we could very easily draw "square squares".
This is obviously a common, everyday technique for things like billboarding, 2D images and backgrounds etc.
In current unity (2018) how the heck do you simply get the current scaling of the object, in Cg?
This is one of those crazy Unity things that is (1) totally undocumented, (2) where the only information available about it is as much as 13 years old (I mean, some of the folks involved may be deceased), and (3) has changed drastically in different Unity versions, so often the discussion about it is just totally wrong. Sigh.
How to do it? Can you?
Currently I just have a trivial script pass in the value, which is OK but a bit shoddy.
The scale of the transform is baked into the world matrix. If your object is not rotated then you can fetch it directly with a little bit of swizzling, but most likely you want to do something like this:
half3 ObjectScale() {
return half3(
length(unity_ObjectToWorld._m00_m10_m20),
length(unity_ObjectToWorld._m01_m11_m21),
length(unity_ObjectToWorld._m02_m12_m22)
);
}
Heads-up: this implementation depends on the graphics API; you might need a #define to handle DX vs. OpenGL, since the matrix format differs (row vs. column order).
There are also different ways to access matrix components: https://learn.microsoft.com/en-us/windows/desktop/direct3dhlsl/dx-graphics-hlsl-per-component-math
See also the examples in this thread: https://forum.unity.com/threads/can-i-get-the-scale-in-the-transform-of-the-object-i-attach-a-shader-to-if-so-how.418345/
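To connect this back to the "square squares" question, a sketch of how ObjectScale() might be used, assuming a standard surface shader with a _MainTex property and a matching uv_MainTex input:

```hlsl
void surf (Input IN, inout SurfaceOutput o)
{
    // Multiply the UVs by the object's world x/y scale so one texture
    // tile stays square no matter how the quad is stretched.
    half3 scale = ObjectScale();
    float2 uv = IN.uv_MainTex * scale.xy;
    o.Albedo = tex2D(_MainTex, uv).rgb;
}
```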
I am new to Unity3D shaders. I am writing a custom unlit shader now. I know I can use _WorldSpaceLightPos0.xyz and _LightColor0.rgb to get the information of a directional light source.
However, if I have a skybox instead of a light source, how can I get the lighting information? The radiance (Li) is coming from my skybox now. How can I get its value and compute Li * my_custome_brdf?
Thanks
The skybox does not have a light position since it is omnidirectional, so you can't calculate proper directional lighting from it (only ambient). However, if you just want the skybox color in any direction, there's unity_SpecCube0, unity_SpecCube1, etc. for reflection probes listed by weight. You can use something like this:
inline half3 SurfaceReflection(half3 viewDir, half3 worldNormal, half roughness) {
half3 worldRefl = reflect(-viewDir, worldNormal);
half r = roughness * (1.7 - 0.7 * roughness); // Unity's remap for picking a MIP level
float4 reflData = UNITY_SAMPLE_TEXCUBE_LOD(
unity_SpecCube0, worldRefl, r * 6
);
return DecodeHDR (reflData, unity_SpecCube0_HDR);
}
If your reflection probe is different from your skybox, just define the skybox as a texCUBE uniform in your shader instead and use that. If you don't want to have to set it for each material using this shader, just declare it in the CGPROGRAM block as a uniform, but not as a ShaderLab property - it will then be considered a shader global. You can then set it through script using Shader.SetGlobalTexture. Check out the documentation for this here:
https://docs.unity3d.com/ScriptReference/Shader.SetGlobalTexture.html
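For example, a global cubemap might be declared and sampled like this (the name _GlobalSkybox is my own; note it is deliberately absent from the Properties block):

```hlsl
// In the CGPROGRAM block, with no matching ShaderLab property,
// so Unity treats it as a shader global.
samplerCUBE _GlobalSkybox;

half3 SampleSkybox(half3 dir)
{
    // Sample the globally-set cubemap in the given world-space direction.
    return texCUBE(_GlobalSkybox, dir).rgb;
}
```

It can then be set once for all materials from script with Shader.SetGlobalTexture("_GlobalSkybox", mySkyboxCubemap).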
I'm working on creating a 3D landscape. So far I've got a mesh created with vertices and faces, and it all looks decent. I've applied a single texture to it, but want to have multiple textures perhaps based on the height. I'm thinking I need a shader to do this.
I'm new to shaders, but so far I've followed this tutorial and have two textures blended together. However, I want to fade one texture out completely at certain heights (or positions) and have the other completely show.
I'm really not sure how to approach this using a shader. Could someone give some advice on how to start?
For argument's sake, suppose your source geometry runs from y=0 to y=1 and you want the second texture to be completely transparent at y=0 and completely opaque at y=1. Then you could add an additional varying, named secondTextureAlpha or something like that. Load it with your pre-transformation y values in the vertex shader, then combine the incoming values from the source textures: either multiply the second by secondTextureAlpha, if you want to stick with additive blending, or use the mix function to interpolate between the two.
So e.g. the fragment shader might end up looking like:
varying vec3 lightDir,normal;
varying lowp float secondTextureAlpha;
uniform sampler2D tex,l3d;
void main()
{
vec3 ct,cf;
vec4 texel;
float intensity,at,af;
intensity = max(dot(lightDir,normalize(normal)),0.0);
cf = intensity * (gl_FrontMaterial.diffuse).rgb +
gl_FrontMaterial.ambient.rgb;
af = gl_FrontMaterial.diffuse.a;
texel = mix(texture2D(tex,gl_TexCoord[0].st),
texture2D(l3d,gl_TexCoord[0].st), secondTextureAlpha);
ct = texel.rgb;
at = texel.a;
gl_FragColor = vec4(ct * cf, at * af);
}
To achieve a more complicated mapping, just adjust how you load secondTextureAlpha in your vertex shader, or perhaps take it as an input attribute.
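A minimal vertex-shader sketch for loading it, assuming the same legacy GLSL pipeline as the fragment shader above (the lightDir and normal varyings are omitted for brevity):

```glsl
varying lowp float secondTextureAlpha;

void main()
{
    // Pre-transformation y becomes the blend factor directly;
    // clamp in case the geometry strays outside [0, 1].
    secondTextureAlpha = clamp(gl_Vertex.y, 0.0, 1.0);
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```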