Decal texture in GLScene

I want to put a visual highlight (really a selection box) onto one of many TGLPlanes, which have many different textures assigned to them. How would I apply a second decal texture to that plane using GLScene?
Some background: the various textures applied to the planes are all stored in a MaterialLibrary and assigned via each plane's Material.MaterialLibrary and Material.LibMaterialName. This is the proper, efficient way to reuse textures, as they are loaded only once regardless of how many times they are used.
What you seemingly can't do is use any of the material properties on the TGLPlane itself, because they are ignored once you assign a MaterialLibrary texture to it.
The methods I can find for doing so seem to require altering the LibMaterial, which of course then affects all the other planes that share that particular texture, so that's a no-go.
Another method I spotted at Google Code (Checkers) solves the issue by creating a second plane with its own partially transparent 'highlight' texture, placed slightly above the original object (a cube, as it happens). This seems like a hack to get around the problem, and I'm hoping to avoid it if possible.
If it's not a built-in capability of GLScene, is there a way to intercept the rendering when it reaches that particular plane and use some OpenGL primitives to apply the decal texture after the MaterialLibrary texture has been applied?

Here is one method that makes a visible marker without applying a second texture.
When loading the textures, create a second TGLLibMaterial without a texture, but with whatever 'highlight' modifications you want for your 'selected' look, and give it the same name with '-selected' appended.
// Load the shared texture once.
fMatLib.AddTextureMaterial('empty', 'empty.bmp', False);
// Create a companion material that reuses the texture but adds the highlight.
with TGLLibMaterial.Create(fMatLib.Materials) do begin
  Material.MaterialLibrary := fMatLib;
  Texture2Name := 'empty';
  Name := 'empty-selected';
  // Tint and brighten so the selected plane stands out.
  Material.FrontProperties.Emission.Color := clrRed;
  Material.Texture.ImageBrightness := 1.5;
end;
This doesn't consume texture memory because you're not reloading the texture.
Then in your code identify the object you wish to highlight, and do something like this.
// Remember the original material name so it can be restored later.
fPickedMaterial := fPickedObject.Material.LibMaterialName;
// Switch the picked object to the '-selected' variant.
fPickedObject.Material.LibMaterialName := fPickedMaterial + '-selected';
fPickedMaterial saves the original material name so that I can restore it later.
Not exactly what I want, but it works. Call it a workaround for now.

Related

Separate shadow-casting from "shadow-clipping" in a ShadowCaster pass

I am using a single surface shader with a custom vertex function, and I tried to use macros like UNITY_PASS_SHADOWCASTER to add pass-specific code to the shadow processing, for example moving the vertices away from the light source to fix self-shadowing. However, I discovered that doing so has weird effects on how the shadows are rendered on the object, and even on when some of its pixels are displayed.
Eventually, I managed to work out that the ShadowCaster pass must be executed at least twice even if there is a single light source: once with the virtual camera matching the light source, but also a second time when the shadow is to be applied to the object. This second call controls the visibility of the shadows behind the object.
Now I have two questions:
What is this mode of execution called?
How do I make code branch depending on which of these modes is executing? In other words, I want to move the vertices to a different position when casting the shadow, but leave them in place when the shadows are applied to the object. At the moment I am checking whether ObjSpaceLightDir matches ObjSpaceViewDir, but that doesn't sound like the best idea. Considering the shader pass is probably compiled only once, I suppose I would have to look for a runtime variable, but I am not sure there even is one...
I managed to find mentions of a ShadowCollector pass for older versions of Unity. Is this the same thing?
I am using Unity 2020.3.32f1 with the built-in render pipeline.

How can I find, for every pixel on the screen, which object it belongs to?

Each frame, Unity generates an image. I want it to also create an additional array of ints, and every time it decides to write a new color to the generated image, it should write the ID of the object at the corresponding place in the int array.
I know this is pretty common in OpenGL, and I found a lot of tutorials for this kind of thing: basically, based on the depth map, you decide which ID should be written at each pixel of the helper array. But in Unity I am using a given shader, and I didn't find a proper way to do just that. I would think there should be built-in functions for such a common problem.
My goal is to know, for every pixel on the screen, which object it belongs to.
Thanks.
In forward rendering, if you don't use it for another purpose, you could store the ID in the alpha channel of the back buffer (it would only be valid for opaque objects): up to 256 IDs without HDR. In deferred rendering, you could potentially use an unused channel of the G-buffer.
That is if you want to minimize overhead; otherwise you could have a more generic system that re-renders specific objects into a screen-space texture, with a very simple shader that just outputs the ID in whatever format you need, using command buffers.
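For illustration, here is a rough C# sketch of that command-buffer variant. The "_IdColor" property, the ID material's shader, and the ID encoding are all assumptions made for this example, not Unity built-ins:
using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class ObjectIdCommandBuffer : MonoBehaviour
{
    public Material idMaterial;  // assumed: simple unlit shader that outputs "_IdColor"
    public Renderer[] targets;   // the renderers whose IDs we care about

    void OnEnable()
    {
        var cb = new CommandBuffer { name = "Object IDs" };

        // Screen-sized temporary target that will hold the ID image.
        int idRT = Shader.PropertyToID("_ObjectIdTexture");
        cb.GetTemporaryRT(idRT, -1, -1, 24, FilterMode.Point, RenderTextureFormat.ARGB32);
        cb.SetRenderTarget(idRT);
        cb.ClearRenderTarget(true, true, Color.clear);

        // Re-draw each renderer with a material instance carrying its ID,
        // packed into the RGB channels (8 bits per channel).
        foreach (var r in targets)
        {
            var m = new Material(idMaterial);
            int id = r.gameObject.GetInstanceID();
            m.SetColor("_IdColor", new Color(
                (id & 0xFF) / 255f, ((id >> 8) & 0xFF) / 255f, ((id >> 16) & 0xFF) / 255f, 1f));
            cb.DrawRenderer(r, m);
        }

        // Sketch only: removing the buffer again in OnDisable is omitted.
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterForwardOpaque, cb);
    }
}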
You'll want to make a custom shader that renders the default textures and colors to the main camera, and renders an ID color to a RenderTexture through another camera.
Here's an example of how it works: Implementing Watering in my Farming Game!
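To make the second-camera idea concrete, here is a hedged C# sketch. The idShader (an unlit shader assumed to output an "_IdColor" property) and the packing of the instance ID into RGB are illustrative assumptions, not a built-in Unity facility; the low-24-bit packing can collide, so a lookup table would be more robust in practice:
using UnityEngine;

public class IdPicker : MonoBehaviour
{
    public Camera mainCamera;
    public Shader idShader;       // assumed: unlit, outputs "_IdColor"
    Camera idCamera;
    RenderTexture idTexture;
    Texture2D readback;

    void Start()
    {
        idTexture = new RenderTexture(Screen.width, Screen.height, 24);
        readback = new Texture2D(1, 1, TextureFormat.RGBA32, false);
        idCamera = new GameObject("IdCamera").AddComponent<Camera>();
        idCamera.enabled = false; // rendered on demand only

        // Give every renderer a color encoding its instance ID
        // (low 24 bits packed into RGB, 8 bits per channel).
        foreach (var r in FindObjectsOfType<Renderer>())
        {
            int id = r.gameObject.GetInstanceID();
            var block = new MaterialPropertyBlock();
            block.SetColor("_IdColor", new Color(
                (id & 0xFF) / 255f, ((id >> 8) & 0xFF) / 255f, ((id >> 16) & 0xFF) / 255f, 1f));
            r.SetPropertyBlock(block);
        }
    }

    public int IdAtPixel(int x, int y)
    {
        // Match the player's view, then render once with the ID shader.
        idCamera.CopyFrom(mainCamera);
        idCamera.transform.SetPositionAndRotation(
            mainCamera.transform.position, mainCamera.transform.rotation);
        idCamera.targetTexture = idTexture;
        idCamera.clearFlags = CameraClearFlags.SolidColor;
        idCamera.backgroundColor = Color.clear;
        idCamera.RenderWithShader(idShader, ""); // replace every shader

        RenderTexture.active = idTexture;
        readback.ReadPixels(new Rect(x, y, 1, 1), 0, 0);
        RenderTexture.active = null;

        Color32 c = readback.GetPixel(0, 0);
        return c.r | (c.g << 8) | (c.b << 16);   // 0 means "no object here"
    }
}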

Why, in a 3D game, do we need to separate a material into so many textures for a static object?

Perhaps the question is not quite correct; the textures could be called a kind of channel, although I know they will be mixed in the shader in the end.
I know that understanding the various textures is very important, but it is also a bit hard to grasp completely.
From my understanding:
diffuse - the 'real' color of an object, without lighting involved.
light - for static objects; lighting is rendered into the texture beforehand (a lightmap).
specular - the areas that have direct reflection.
ao - to occlude indirect light in different areas of an object.
alpha - to 'shape' the object.
emissive - self-illumination.
normal - per-pixel normal vectors used in lighting calculations.
bump - (I don't know the exact difference from a normal map).
height - stores Z-range values; used to generate terrain, displace vertices, etc.
And the items below should be related to PBR material which I'm not familiar with:
translucency / cavity / metalness / roughness etc...
Please correct me if some misunderstandings there.
But whatever the case, my question is: why do we need to separate these textures for a material, rather than just rendering them all together into the diffuse map for a static object?
Some examples (especially for PBR) would be appreciated. Thank you very much.
I can bake everything into the diffuse map beforehand and apply it to my mesh, so why do I need to apply so many different textures?
Re-usability:
Most games re-use textures to reduce the size of the game, which you can't do if you combine them. For example, when you have two similar objects but want to randomize their looks (an aging effect), you can make them share the same color (albedo) map but use different ao maps. This becomes important when there are hundreds of objects: you can use different combinations of texture maps on similar objects to create unique objects. If you had combined everything into one map, it would be impossible to share it with other similar objects that should look slightly different.
Customizable:
If you separate them, you can change how strongly each texture affects the object. For example, the slider on the metallic slot of the Standard shader. There are more of these sliders on other map slots, but they only appear once you plug a texture into the slot. You can't do this when you combine the textures into one. (See the sketch after this list.)
Shader:
The Standard shader can't work with such combined maps, so you would have to learn to write shaders: you can't use one image to get the effects of all those texture maps with the Standard shader. A custom shader is required, along with a way to read each map's information out of the combined texture.
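As a small, concrete illustration of the "customizable" point, here is a minimal C# sketch that adjusts the metallic contribution at runtime. It assumes the built-in Standard shader, whose metallic slider is exposed as the float property "_Metallic":
using UnityEngine;

public class MaterialTuner : MonoBehaviour
{
    [Range(0f, 1f)] public float metallic = 0.5f;

    void Update()
    {
        // "_Metallic" is the Standard shader's metallic slider. Accessing
        // .material instantiates a per-object copy, so the change affects
        // only this object, not everything sharing the base material.
        GetComponent<Renderer>().material.SetFloat("_Metallic", metallic);
    }
}
This kind of per-slot tweaking is exactly what becomes impossible once everything is baked into a single diffuse map.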
This seems like a reasonable place to start:
https://en.wikipedia.org/wiki/Texture_mapping
A texture map is an image applied (mapped) to the surface of a shape or polygon. This may be a bitmap image or a procedural texture. They may be stored in common image file formats, referenced by 3d model formats or material definitions, and assembled into resource bundles.
I would add that the shape or polygon doesn't have to belong to a 3D object as one might imagine it. If you render two triangles as a rectangle, you can run all sorts of computations and store the results in a "live" texture.
Texture mapping is a method for defining high frequency detail, surface texture, or color information on a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Edwin Catmull in 1974.
What this detail represents is either some agreed-upon format for a given property (say, "roughness" within some BRDF model), which you will encounter if you are using some kind of engine; or whatever you decide that detail to be, if you are writing your own engine. You can store whatever you want, however you want.
You'll notice on that page that different "mapping" techniques are mentioned, each with its own page. Each is the result of a person or group doing research and publishing a paper detailing the technique. Other people adopt it, and that's how these techniques find their way into engines.
There is no rule saying these can't be combined.
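As an example of defining your own convention, here is a hypothetical C# utility that packs three single-channel maps into one RGB texture. The channel assignments are arbitrary choices for this sketch, and it assumes the source textures are imported with Read/Write enabled:
using UnityEngine;

public static class ChannelPacker
{
    // Pack AO into R, roughness into G, and metalness into B of one texture.
    public static Texture2D Pack(Texture2D ao, Texture2D roughness, Texture2D metalness)
    {
        int w = ao.width, h = ao.height;
        var packed = new Texture2D(w, h, TextureFormat.RGB24, false);
        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                packed.SetPixel(x, y, new Color(
                    ao.GetPixel(x, y).r,          // R = ambient occlusion
                    roughness.GetPixel(x, y).r,   // G = roughness
                    metalness.GetPixel(x, y).r)); // B = metalness
            }
        }
        packed.Apply();
        return packed;
    }
}
Any shader consuming this texture then has to agree on the same channel layout, which is the point: the split into maps is a convention, not a law.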

HLSL (Unity-specific ok, not necessary) combining Stencil and worldspace "reverse" clipping

I've built a working surface shader (call it "wonderland") that renders as invisible unless a companion "lookingGlass" shader intersects with it from the viewpoint of the camera. Simple stencil shader arrangement.
Easy peasy.
I can add shader settings to specify a plane, or even just a minimum worldspace Z value, and use clip() to only render pixels on one side of that plane... (in other words, I could use that to trim the content that's allowed by the Stencil.)
What I want is to use the stencil on surfaces "through the looking glass" (to reveal geometry that's inside the looking glass), and to always render those surfaces when they're on "our" side of the looking glass (to always show them on this side of the portal). E.g., if z < 0, render only if the Stencil Ref value is satisfied; if z >= 0, render regardless.
Now, in Unity I can attach two materials to the MeshRenderer component (one with a stencil shader, one with a "plane cutoff" shader), and that works fine. It's actually pretty awesome, at least visually. But while I haven't benchmarked it yet, I instinctively suspect it will seriously impact framerate once a number of objects with fairly complicated geometry are set up this way.
(I can also manage shader attachment in code, and only do this when I expect something to transition, but I'm really hoping to get a unified shader out of this to avoid unnecessary draw calls.)
As it turns out, what I was looking to do is impossible.
The two shaders I wish to combine are both surface shaders. While you can combine multiple surface shaders into a multipass shader, you cannot combine multiple surface shaders with a Stencil and with a clip() where the clip applies to passes that the Stencil does not, and vice versa.
There are combinations that can achieve parts of this, or achieve the entire goal with surface and vertex (or other non-surface) shaders, but the combination of requirements stipulated by this question isn't supported as desired.
While this does not answer the question, the workaround in Unity is to create two materials that each provide one piece of the functionality. Both can exist on the item that needs them, and code can manage whether one, the other, or both are actively in use, as sketched below.
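A minimal C# sketch of that management code, assuming the two materials ("stencilMaterial" and "clipMaterial") are assigned in the Inspector:
using UnityEngine;

public class PortalMaterialSwitcher : MonoBehaviour
{
    public Material stencilMaterial; // renders only where the stencil test passes
    public Material clipMaterial;    // clips against the worldspace plane

    MeshRenderer meshRenderer;

    void Awake()
    {
        meshRenderer = GetComponent<MeshRenderer>();
    }

    // Use both passes while the object may straddle the portal plane.
    public void UseBothPasses()
    {
        meshRenderer.sharedMaterials = new[] { stencilMaterial, clipMaterial };
    }

    // Drop back to a single material once the object is fully on "our" side.
    public void UseClipOnly()
    {
        meshRenderer.sharedMaterials = new[] { clipMaterial };
    }
}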
Similar solutions would be available in other packages.

Making multiple objects with the same shader fade at different times

I have a death transformation for one of my GameObjects, which goes from a spherical ball to a bunch of small individual blocks. I want each of these blocks to fade at different times, but since they all use the same shader, I cannot figure out how to keep them from all fading out at the same time.
This first picture is the Spherical Ball in its first step for when it turns from a spherical ball to a Minecraft'ish looking block ball and to the right of it is one of the blocks that make up the Minecraft'ish looking ball shown by the red arrow.
Now this is my Inspector for one of the little blocks that make up the Minecraft'ish looking ball.
I have an arrow pointing to what makes the object fade, but that setting applies globally across all of the blocks, since they use the same shader. Is it possible to have each block fade separately, or am I stuck and need to find a new disappearing act for the little block dudes?
You need to modify the material property by script at runtime, and you need to do it through the Renderer.material property. When you access Renderer.material, Unity will automatically create a copy of the material for you that is handled separately -- including getting its own draw call, if you care about performance. You can tell this has happened because the material name in the renderer will change to "Materialname (Instance)".
Set the material's fade property using Renderer.material.SetFloat() (or whatever the appropriate Set... function is). Unfortunately the property's name isn't "Fade Factor". You can find the property's name by looking at the shader script, or by switching the inspector to debug mode and digging through the Saved Properties array for one that looks right.
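As a hedged illustration, a coroutine like the following could fade each block on its own schedule. "_FadeAmount" is a placeholder property name; substitute whatever name you find in the shader source or the debug inspector:
using System.Collections;
using UnityEngine;

public class BlockFader : MonoBehaviour
{
    public float duration = 1f;

    public IEnumerator FadeOut(float delay)
    {
        yield return new WaitForSeconds(delay);

        // Accessing .material (not .sharedMaterial) clones the material
        // for this renderer only, so other blocks keep their own values.
        Material mat = GetComponent<Renderer>().material;

        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            mat.SetFloat("_FadeAmount", 1f - t / duration); // placeholder name
            yield return null;
        }
        mat.SetFloat("_FadeAmount", 0f);
    }
}
Each block can then start the coroutine with a different delay, e.g. StartCoroutine(FadeOut(Random.Range(0f, 2f))), so the blocks disappear at different times.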