Tiling atlas textures correctly with a custom shader in Unity - unity3d

This is a bit complicated, but I hope it boils down to quite a simple problem. Here is how it goes: I am using Unity to generate a map GameObject at runtime from a BSP file, which contains a whole bunch of vertices, faces, UVs, texture references, and so on. The meshes come out exactly as they should, and all the textures come out fine. There is one problem, though: so many meshes are created, with so many materials, that the resulting draw calls make the program slow. So I searched for a way to reduce the draw calls and found a solution: combine all the meshes into one big mesh and create a texture atlas by combining all the textures used. Combining the meshes works fine, and combining the textures comes out great as well. Then I faced the problem of UV mapping. I found a solution in an NVIDIA white paper: write a custom shader that uses the tex2D function to sample the texel using the wrapped UV positions together with the derivatives of the original UVs. I think this would have worked, but my meshes have really weird triangles and I think they are ruining this solution. In the images below you can see the difference between the combined meshes and the separate ones:
Combined Meshes with Changed UVs and Custom Shader
Separate Meshes with original UVs
This is the code I am using in the shader to set the color of the model:
o.Albedo = tex2D (_MainTex, IN.uv2_BlendTex, ddx(IN.uv_MainTex), ddy(IN.uv_MainTex)).rgb;
As you can see, I have added a second UV which is the non-tiled version of the original UV. I do that by using the frac() function, but in the C# code rather than in the shader. Since the textures can be different sizes, I had to calculate the UV before getting to the shader because I have access to the texture sizes at that time.
Here is the code I used to calculate the 2 UVs:
Rect surfaceTextureRect = uvReMappers[textureIndex];
Mesh surfaceMesh = allFaces[i].mesh;
Vector2[] atlasTiledUVs = new Vector2[surfaceMesh.uv.Length];
Vector2[] atlasClampedUVs = new Vector2[surfaceMesh.uv.Length];
for (int j = 0; j < atlasClampedUVs.Length; j++)
{
    // Wrap the original UV into the 0..1 range (same as frac()).
    Vector2 clampedUV = new Vector2((surfaceMesh.uv[j].x - Mathf.Floor(surfaceMesh.uv[j].x)), (surfaceMesh.uv[j].y - Mathf.Floor(surfaceMesh.uv[j].y)));
    // Remap both the tiled and the wrapped UV into this texture's sub-rectangle of the atlas.
    float atlasClampedX = (clampedUV.x * surfaceTextureRect.width) + surfaceTextureRect.x;
    float atlasClampedY = (clampedUV.y * surfaceTextureRect.height) + surfaceTextureRect.y;
    atlasTiledUVs[j] = new Vector2((surfaceMesh.uv[j].x * surfaceTextureRect.width) + surfaceTextureRect.x, (surfaceMesh.uv[j].y * surfaceTextureRect.height) + surfaceTextureRect.y);
    atlasClampedUVs[j] = new Vector2(atlasClampedX, atlasClampedY);
    if (i < 10) { Debug.Log(i + " Original: " + surfaceMesh.uv[j] + " ClampedUV: " + clampedUV); }
}
surfaceMesh.uv = atlasTiledUVs;
surfaceMesh.uv2 = atlasClampedUVs;
The array uvReMappers is an array of Rects returned by the Texture2D function PackTextures().
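For reference, here is a minimal sketch of how that array can be produced (sourceTextures and atlasMaterial are placeholder names, not from my actual code):
// Packs every source texture into "atlas" and returns one Rect per texture,
// i.e. each texture's position and size in normalized (0..1) atlas UV space.
Texture2D atlas = new Texture2D(2048, 2048);
Rect[] uvReMappers = atlas.PackTextures(sourceTextures, 2, 4096);
atlasMaterial.mainTexture = atlas;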
Sorry for taking so long, but here is my question: Why do the textures come out contorted? Is it because of the way the meshes are triangulated, or is it because of the way I wrote the custom shader? And finally, how can I fix it?
Thank you for your time. I am sorry for writing so much, but I have never posted a question before. I always find answers to almost all my problems online, but I have been searching for days on how to fix this problem. I feel it might be too specific to be able to find an answer for. I hope I have provided enough information.

I finally solved the problem! It turns out I should not pre-calculate the final UVs before the shader runs. Instead, I pass the information the shader needs through extra UV channels so that it can calculate the new texel positions directly.
Here is the code before the shader:
Rect surfaceTextureRect = uvReMappers[textureIndex];
Mesh surfaceMesh = allFaces[i].mesh;
Vector2[] atlasTexturePosition = new Vector2[surfaceMesh.uv.Length];
Vector2[] atlasTextureSize = new Vector2[surfaceMesh.uv.Length];
for (int j = 0; j < atlasTexturePosition.Length; j++)
{
    atlasTexturePosition[j] = new Vector2(surfaceTextureRect.x, surfaceTextureRect.y);
    atlasTextureSize[j] = new Vector2(surfaceTextureRect.width, surfaceTextureRect.height);
}
surfaceMesh.uv2 = atlasTexturePosition;
surfaceMesh.uv3 = atlasTextureSize;
Here is the shader code:
tex2D(_MainTex, float2((frac(IN.uv.x) * IN.uv3.x) + IN.uv2.x, (frac(IN.uv.y) * IN.uv3.y) + IN.uv2.y));

I took a different approach and created the texture atlas on the CPU. From there, UV mapping worked just like normal UV mapping: all I had to do was assign the UVs from my atlas to the vertex info.
My scenario is a custom voxel engine that can handle anything from Minecraft-style worlds to rendering voxel-based planets, and I haven't found a scenario it can't handle yet.
Here's my code for the atlas ...
using UnityEngine;
using Voxels.Objects;
namespace Engine.MeshGeneration.Texturing
{
/// <summary>
/// Packed texture set to be used for mapping texture info on
/// dynamically generated meshes.
/// </summary>
public class TextureAtlas
{
/// <summary>
/// Texture definitions within the atlas.
/// </summary>
public TextureDef[] Textures { get; set; }
public TextureAtlas()
{
SetupTextures();
}
protected virtual void SetupTextures()
{
// default for the base atlas is a material with a single texture filling the whole atlas
Textures = new TextureDef[]
{
new TextureDef
{
VoxelType = 0,
Faces = new[] { Face.Top, Face.Bottom, Face.Left, Face.Right, Face.Front, Face.Back },
Bounds = new[] {
new Vector2(0,1),
new Vector2(1, 1),
new Vector2(1,0),
new Vector2(0, 0)
}
}
};
}
public static TextureDef[] GenerateTextureSet(IntVector2 textureSizeInPixels, IntVector2 atlasSizeInPixels)
{
int x = atlasSizeInPixels.X / textureSizeInPixels.X;
int z = atlasSizeInPixels.Z / textureSizeInPixels.Z;
int i = 0;
var result = new TextureDef[x * z];
var uvSize = new Vector2(1f / ((float)x), 1f / ((float)z));
for (int tx = 0; tx < x; tx++)
for (int tz = 0; tz < z; tz++)
{
// for perf, types are limited to 255 (1 byte)
if(i < 255)
{
result[i] = new TextureDef
{
VoxelType = (byte)i,
Faces = new[] { Face.Top, Face.Bottom, Face.Left, Face.Right, Face.Front, Face.Back },
Bounds = new[] {
new Vector2(tx * uvSize.x, (tz + 1f) * uvSize.y),
new Vector2((tx + 1f) * uvSize.x, (tz + 1f) * uvSize.y),
new Vector2((tx + 1f) * uvSize.x, tz * uvSize.y),
new Vector2(tx * uvSize.x, tz * uvSize.y)
}
};
i++;
}
else
break;
}
return result;
}
}
}
And for a texture definition within the atlas ...
using UnityEngine;
using Voxels.Objects;
namespace Engine.MeshGeneration.Texturing
{
/// <summary>
/// Represents an area within the atlas texture
/// from which a single texture can be pulled.
/// </summary>
public class TextureDef
{
/// <summary>
/// The voxel block type to use this texture for.
/// </summary>
public byte VoxelType { get; set; }
/// <summary>
/// Faces this texture should be applied to on voxels of the above type.
/// </summary>
public Face[] Faces { get; set; }
/// <summary>
/// UV-space bounds of this texture within the atlas (its four corners).
/// </summary>
public Vector2[] Bounds { get; set; }
}
}
For custom scenarios where I need direct control of the UV mappings, I inherit TextureAtlas and override the SetupTextures() method (see the sketch below). In pretty much all cases, though, I create atlases where the textures are all the same size, so simply calling GenerateTextureSet will do the UV mapping calculations I believe you need.
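Here is a hypothetical example of such an override; how IntVector2 is constructed here is an assumption (adjust it to the real type), and the sizes are made up:
// 1024x1024 atlas holding 64x64 tiles -> up to 255 texture definitions
// (the byte limit enforced inside GenerateTextureSet).
public class BlockTextureAtlas : TextureAtlas
{
    protected override void SetupTextures()
    {
        Textures = GenerateTextureSet(
            new IntVector2 { X = 64, Z = 64 },
            new IntVector2 { X = 1024, Z = 1024 });
    }
}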
The UV coords for a given face of a given voxel type are then ...
// Requires "using System.Linq;" for Where/Contains/First.
IEnumerable<Vector2> UVCoords(byte voxelType, Face face, TextureAtlas atlas)
{
return atlas.Textures
.Where(a => a.VoxelType == voxelType && a.Faces.Contains(face))
.First()
.Bounds;
}
In your case you probably have a different way to pick the texture of choice from your pack, but in my case it is essentially the combination of a face and a voxel type that determines which UV mapping set I want.
This then allows you to use your mesh with any standard shader instead of relying on custom shader logic.

You have to turn the incoming TEXCOORD0 from a percentage of the image space into a pixel value, use the modulus to figure out which pixel that lands on in the tiled texture, and then turn it back into a percentage of the image.
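As a worked example of that mapping (with made-up sizes: a 1024 px main texture and a 256 px pattern), here is the same math done on the CPU side:
float mainWidth = 1024f, patternWidth = 256f;
float u = 0.30f;                               // incoming TEXCOORD0.x (percentage of the main image)
float mainPixel = u * mainWidth;               // 307.2 -> pixel position on the main texture
float patternPixel = mainPixel % patternWidth; // 51.2  -> pixel position on the tiled pattern
float patternU = patternPixel / patternWidth;  // 0.2   -> back to a percentage for the pattern sample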
Here's the code:
You need the 2D variables _MainTex and _PatternTex to be defined, along with their float4 _MainTex_TexelSize and _PatternTex_TexelSize counterparts (Unity fills <TextureName>_TexelSize with 1/width, 1/height, width, height).
struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
};
float modFunction(float number, float divisor){
//2018-05-24: copied from an answer by Nicol Bolas: https://stackoverflow.com/questions/35155598/unable-to-use-in-glsl
return (number - (divisor * floor(number/divisor)));
}
fixed4 frag (v2f i) : SV_Target
{
fixed4 curColor = tex2D(_MainTex, i.uv);
fixed4 pattern = tex2D(_PatternTex,
float2(
modFunction(i.uv.x*_MainTex_TexelSize.z,_PatternTex_TexelSize.z) *_PatternTex_TexelSize.x,
modFunction(i.uv.y*_MainTex_TexelSize.w,_PatternTex_TexelSize.w) *_PatternTex_TexelSize.y
)
);
fixed4 col = curColor * pattern;
col.rgb *= col.a;
return col;
}


sometimes my meshes are black for large textures in a texture array, sometimes the textures render

I'm trying to shade meshes I generated with a noise heightmap using an array of textures. With a smaller texture size (e.g. 512px*512px) everything works completely fine. However, if I use larger textures, for example 1024px*1024px or 2048px*2048px, my meshes usually render black. Around 5% of the time the textures render correctly, and around 20% of the time they seem to render correctly for the first frame and then switch to black.
This issue appears no matter how long my texture array is (an array of size 1 still causes the same behavior). I also see the same issue regardless of whether the images are JPGs or PNGs, and I reproduced the problem with a variety of different images. I have no errors or warnings in my console.
Below are simplified versions of the relevant code, which suffer from the same issue. This version just additively blends the textures, but in the full version of the code the height of the mesh is used to determine the texture(s) to use and the degree of blending between nearby textures. My code is based on Sebastian Lague's procedural landmass generation YouTube tutorial series, which only deals with 512px*512px textures.
The code that puts the texture array and layer number into the shader:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.Linq;
[CreateAssetMenu()]
public class TextureData : UpdatableData {
const int textureSize = 2048;
const TextureFormat textureFormat = TextureFormat.RGB565;
public Layer[] layers;
public void UpdateMeshHeights(Material material, float minHeight, float maxHeight) {
material.SetInt("layerCount", layers.Length);
Texture2DArray texturesArray = GenerateTextureArray(layers.Select(x => x.texture).ToArray());
material.SetTexture("baseTextures", texturesArray);
}
Texture2DArray GenerateTextureArray(Texture2D[] textures) {
Texture2DArray textureArray = new Texture2DArray(textureSize, textureSize, textures.Length, textureFormat, true);
for (int i=0; i < textures.Length; i++) {
textureArray.SetPixels(textures[i].GetPixels(), i);
}
textureArray.Apply();
return textureArray;
}
[System.Serializable]
public class Layer {
public Texture2D texture;
}
}
The shader itself:
Shader "Custom/Terrain" {
SubShader {
Tags { "RenderType"="Opaque" }
CGPROGRAM
#pragma surface surf Standard fullforwardshadows
#pragma target 3.0
int layerCount;
UNITY_DECLARE_TEX2DARRAY(baseTextures);
struct Input {
float3 worldPos;
float3 worldNormal;
};
float3 triplanar(float3 worldPos, float scale, float3 blendAxes, int textureIndex) {
float3 scaledWorldPos = worldPos / scale;
float3 xProjection = UNITY_SAMPLE_TEX2DARRAY(baseTextures, float3(scaledWorldPos.y, scaledWorldPos.z, textureIndex)) * blendAxes.x;
float3 yProjection = UNITY_SAMPLE_TEX2DARRAY(baseTextures, float3(scaledWorldPos.x, scaledWorldPos.z, textureIndex)) * blendAxes.y;
float3 zProjection = UNITY_SAMPLE_TEX2DARRAY(baseTextures, float3(scaledWorldPos.x, scaledWorldPos.y, textureIndex)) * blendAxes.z;
return xProjection + yProjection + zProjection;
}
void surf (Input IN, inout SurfaceOutputStandard o) {
float3 blendAxes = abs(IN.worldNormal);
blendAxes /= blendAxes.x + blendAxes.y + blendAxes.z;
for (int i = 0; i < layerCount; i++) {
float3 textureColor = triplanar(IN.worldPos, 1, blendAxes, i);
o.Albedo += textureColor;
}
}
ENDCG
}
FallBack "Diffuse"
}
Here is a screenshot of the problem in action:
I had a similar problem, so I thought I'd document this here.
Keeping a class-level reference to the Texture2DArray created within GenerateTextureArray(..) fixed this for me:
Texture2DArray textureArray;
Texture2DArray GenerateTextureArray(Texture2D[] textures) {
textureArray = new Texture2DArray(textureSize, textureSize, textures.Length, textureFormat, true);
for (int i=0; i < textures.Length; i++) {
textureArray.SetPixels(textures[i].GetPixels(), i);
}
textureArray.Apply();
return textureArray;
}
(Of course you can then remove the return value and use the reference instead)
I can only guess at the reason, but as far as I know the Apply() function moves the data to the GPU. Since the data then no longer appears relevant to the CPU side, the garbage collector may collect it, which causes problems when the texture is updated. Why exactly the reference is still needed is still unclear to me.

Why Does My Mesh Combiner Script Make Child Objects Invisible And Change Their Placements?

To further optimize my game, I wanted to combine the meshes of the hallways so that the frame rate would be higher. The code is intended to take all of the children inside an empty game object (titled Walls, Floors, etc.) and combine their meshes into one. However, whenever I ran the script, all of the child objects would appear in completely random positions and were invisible. How can I make it so that the objects all appear at their original positions and are visible?
How I set up the code is that I would place any repeated objects in an empty gameObject to easily categorize them (titled as Walls, Floors). Afterwards, I would assign the script to the empty gameObject and expect every repeated object in the empty gameObject to combine.
Here's an example:
Here's The Code:
using UnityEngine;
using System.Collections;
// Copy meshes from children into the parent's Mesh.
// CombineInstance stores the list of meshes. These are combined
// and assigned to the attached Mesh.
[RequireComponent(typeof(MeshFilter))]
[RequireComponent(typeof(MeshRenderer))]
public class CombineMesh : MonoBehaviour
{
void Update()
{
if(Input.GetKeyDown(KeyCode.J))
{
CombineMeshes();
}
}
void CombineMeshes()
{
Quaternion oldRot = transform.rotation;
Vector3 oldPos = transform.position;
transform.rotation = Quaternion.identity;
transform.position = Vector3.zero;
MeshFilter[] meshFilters = GetComponentsInChildren<MeshFilter>();
CombineInstance[] combine = new CombineInstance[meshFilters.Length];
int i = 0;
while (i < meshFilters.Length)
{
combine[i].mesh = meshFilters[i].sharedMesh;
combine[i].transform = meshFilters[i].transform.localToWorldMatrix;
meshFilters[i].gameObject.SetActive(false);
i++;
}
var MeshFilter = transform.GetComponentInChildren<MeshFilter>();
MeshFilter.mesh = new Mesh();
MeshFilter.mesh.CombineMeshes(combine);
GetComponentInChildren<MeshCollider>().sharedMesh = MeshFilter.mesh;
transform.gameObject.SetActive(true);
transform.rotation = oldRot;
transform.position = oldPos;
}
}
Answering late in case anyone happens to be searching for this information.
Unity's documentation is uncharacteristically unhelpful, as is the internet in general.
Lengthy Explanation
Kayra's Code Corrected
Working Code: Helper Functions
Working Code: CombineMeshes
Lengthy Explanation:
Two things to keep in mind:
Meshes are defined via a Vector3[] (see meshFilter.mesh.vertices) describing the coordinates of each vertex relative to its MeshFilter's transform (so, local space). This is very important later on.
There is no black magic involved - math is not black magic ;)
First, you should understand "transform.localToWorldMatrix" and "transform.worldToLocalMatrix"
These are very misleading names - all these Matrix4x4s actually do in Unity is describe the linear transformation between
{position: Vector3.zero, rotation: Quaternion.identity, scale: Vector3.one} and
the given transform's {position, rotation, scale}.
It's telling you by how much to move in which direction, by how much to rotate in which direction, and by how much to scale.
In fact: transform.localToWorldMatrix == transform.worldToLocalMatrix.inverse -- they're the exact same thing, just flipped around.
See "Khan Academy" or "3blue1brown" on YouTube for great explanations of Linear Transformation
We usually don't use Matrices in Unity, because we can access/modify the position, rotation, and scale directly -- for us, it's just a different way of storing the same information.
I wrote some working code to visualise what's actually happening in Unity:
/// Demonstration of how transform.localToWorldMatrix works - very important for understanding CombineInstance.transform
/// Place two 3D objects in your scene, attach this script somewhere, and assign the 3D Objects to the public variables
/// This script will auto-run inside a coroutine 2 seconds after Start(), because I use the InputSystem package and maybe you still use if(Input.getKeyDown()).
/// It's the easiest way I know of to guarantee working code in your scene
public class StackOverflowExample : MonoBehaviour
{
public Transform parent;
public Transform child;
private IEnumerator exampleMethod;
/// <summary>
/// Just setting some conditions, in case you try out this code
/// </summary>
private void Start()
{
parent.position = new Vector3(5f, 0f, 3f);
child.parent = parent;
child.localPosition = new Vector3(2f, 0f, 2f);
//Only doing this because I use the InputSystem package, and others maybe don't (yet...)
exampleMethod = TransformMatrixExample();
StartCoroutine(exampleMethod);
}
//Only doing this because I use the InputSystem package, and others maybe don't (yet...)
private IEnumerator TransformMatrixExample()
{
//wait for 2 seconds just because...
yield return new WaitForSeconds(2f);
//parent.position = parent.position - parent.localToWorldMatrix.GetPosition();
// Commented out on purpose. Would result in 0,0,0 parent.position, because parent has no gameObject parent...
//and therefore break the demonstration:
child.position = child.position - parent.localToWorldMatrix.GetPosition();
// child.localPosition (see the inspector) now states (-3f,0f,-1f)
// however, child.Position (just unparent the gameObject in the inspector) is now (2f,0f,2f)
}
}
When you tell each CombineInstance what Matrix4x4 to use in
combine[i].transform = meshFilters[i].transform.localToWorldMatrix;
you are describing by how much to offset each soon-to-be-combined mesh's vertices' coordinates before combining it.
Once again, we usually access/modify the position, rotation, and scale directly - Mesh.CombineMeshes() uses a 4x4 matrix.
Here Matrix4x4.GetPosition() returns the vector leading from Vector3.zero to parent.position - which is contained inside that matrix
We are effectively moving the child object to the place it would be if the parent were at coordinates (0f,0f,0f).
The same thing happens with rotation and scale.
In the while-loop in your code, there's a statement:
combine[i].transform = meshFilters[i].transform.localToWorldMatrix;
The problem with this is that if the parent object (or the mesh you want to merge into) is not positioned at Vector3.zero, all the meshes will still pretend otherwise and offset themselves by the wrong amount. That is why you have to move the parent.transform.position to vector3.zero before assigning to the CombineInstances[].
So in my code example above:
parent.position = new Vector3(5f, 0f, 3f);
child.localPosition = new Vector3(2f, 0f, 2f);
If I first move the parent to V3.zero, the results given by transform.localToWorldMatrix.GetPosition() are:
combine[i].transform = meshFilters[i].transform.localToWorldMatrix :
parent's offset: V3(0f,0f,0f) --> parent.position
child's offset: V3(2f,0f,2f) --> child.position
This works because now, the vector from child.position to V3.zero == vector from child.position to parent.position.
If I were to use the transforms without first moving the parent to V3.zero, I would get the following results:
combine[i].transform = meshFilters[i].transform.localToWorldMatrix :
parent's offset: V3(5f, 0f, 3f) --> parent.position
child's offset: V3(7f, 0f, 5f) --> parent.position + child.localPosition
Because transform.localToWorldMatrix returns the vector from zero to that transform.position.
Remember Point1 of things to remember?
Vertex coordinates are defined in local space relative to their meshFilter's transform.
In other words:
all of parent mesh's vertices will offset by an additional V3(5f,0f,3f) --> parent.position
all of child mesh's vertices will be offset by an additional V3(7f,0f,5f) --> parent.position + child.localPosition
The statement is effectively telling Unity the following (pseudocode):
foreach (Vector3 v in meshFilters[i].mesh.vertices)
{
v += meshFilters[i].localToWorldMatrix.GetPosition();
//and now append my vertex to the new MeshFilter's mesh...
//in other words: pretend that my parent is at 0,0,0 and I'm in the right spot already
}
The exact same principle holds true for rotation and scale. You can probably decipher that from the working code later on.
Kayra Yorulmaz's Corrected Code
I'm assuming that CombineMesh.cs is attached to the "Floors" gameObject,
and that "Floors" transform.rotation = (0f,0f,0f).
The Quaternion operations will be explained shortly.
So kayra yorulmaz's code would have to be written as follows:
using UnityEngine;
// Copy meshes from children into the parent's Mesh.
// CombineInstance stores the list of meshes. These are combined
// and assigned to the attached Mesh.
[RequireComponent(typeof(MeshFilter))]
[RequireComponent(typeof(MeshRenderer))]
[RequireComponent(typeof(MeshCollider))] //because otherwise the GetComponentInChildren<MeshCollider>() call below might throw an exception...
public class CombineMesh_Corrected : MonoBehaviour
{
void Update()
{
if(Input.GetKeyDown(KeyCode.J))
{
CombineMeshes();
}
}
void CombineMeshes()
{
Vector3 transformOffset = transform.position;
MeshFilter[] meshFilters = GetComponentsInChildren<MeshFilter>();
CombineInstance[] combine = new CombineInstance[meshFilters.Length];
int i = 0;
while (i < meshFilters.Length)
{
Quaternion rotationOffset = Quaternion.FromToRotation(transform.eulerAngles, meshFilters[i].transform.eulerAngles);
meshFilters[i].transform.position -= transformOffset;
meshFilters[i].transform.rotation = Quaternion.Euler(meshFilters[i].transform.eulerAngles) * Quaternion.Inverse(rotationOffset);
combine[i].mesh = meshFilters[i].sharedMesh;
combine[i].transform = meshFilters[i].transform.localToWorldMatrix;
meshFilters[i].gameObject.SetActive(false);
//we already stored the 4x4Matrix in combine[i].transform, so it's safe to change back now
meshFilters[i].transform.position += transformOffset;
meshFilters[i].transform.rotation *= rotationOffset;
i++;
}
MeshFilter meshFilter = transform.GetComponent<MeshFilter>();
meshFilter.mesh = new Mesh();
meshFilter.mesh.CombineMeshes(combine);
GetComponentInChildren<MeshCollider>().sharedMesh = meshFilter.mesh;
transform.gameObject.SetActive(true);
}
}
Working Code Example
However, if you create your own Matrix4x4 to describe the necessary linear transformation, you don't have to touch the gameObjects' transforms at all.
Remember that Vertices (and therefore the meshes you're combining) are described relative to the meshFilter's transform.
So if we create a Matrix4x4 for each child meshFilter, describing how that child.meshFilter.transform is located relative to parent.meshFilter.transform, we can tell Unity where to place the vertices for the combined mesh:
So here is the code, based on what I just wrote for my own project.
Necessary Helper Functions
using UnityEngine;
public static class StackoverflowHelpers
{
/// <summary>
/// Returns the difference between quaternions, treated as local rotations because of the order...
/// https://answers.unity.com/questions/810579/quaternion-multiplication-order.html
/// </summary>
/// <param name="from"></param>
/// <param name="to"></param>
/// <returns></returns>
public static Quaternion FromTo(Quaternion from, Quaternion to)
{
return Quaternion.Inverse(from) * to;
}
public static Quaternion Add(Quaternion start, Quaternion difference)
{
return start * difference;
}
public static Quaternion Subtract(Quaternion start, Quaternion difference)
{
return start * Quaternion.Inverse(difference);
}
public static Vector3 RatioBetween( Vector3 fromScale, Vector3 toScale)
{
return new Vector3(
toScale.x/fromScale.x,
toScale.y/fromScale.y,
toScale.z/fromScale.z );
}
}
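As a quick sanity check of the helpers (hypothetical values), FromTo and Add undo each other:
Quaternion a = Quaternion.Euler(0f, 30f, 0f);
Quaternion b = Quaternion.Euler(0f, 90f, 0f);
Quaternion d = StackoverflowHelpers.FromTo(a, b);       // local rotation leading from a to b
Debug.Log(StackoverflowHelpers.Add(a, d).eulerAngles);  // ~ (0, 90, 0), i.e. back at b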
Actual CombineMesh Method
using System.Collections.Generic;
using UnityEngine;
public static class StackOverflow_CombineMesh
{
public static void Simple(List<MeshFilter> _meshFilters, bool _deleteOriginals = true)
{
CombineInstance[] combineInstances = new CombineInstance[_meshFilters.Count];
Transform parent = _meshFilters[0].transform;
for (int i = 0; i < _meshFilters.Count; i++)
{
/// set up the matrix describing the step from the parent mesh to the child mesh
Transform child = _meshFilters[i].transform;
Vector3 posOffset = child.position - parent.position;
posOffset.x *= 1/parent.localScale.x;
posOffset.y *= 1/parent.localScale.y;
posOffset.z *= 1/parent.localScale.z;
Matrix4x4 ParentToChildMatrix = Matrix4x4.TRS(
posOffset,
StackoverflowHelpers.FromTo(parent.rotation, child.rotation),
StackoverflowHelpers.RatioBetween(parent.lossyScale, child.lossyScale));
combineInstances[i].mesh = _meshFilters[i].mesh;
combineInstances[i].transform = ParentToChildMatrix;
child.gameObject.SetActive(false);
}
_meshFilters[0].mesh = new Mesh();
_meshFilters[0].mesh.CombineMeshes(combineInstances, true, true);
_meshFilters[0].gameObject.SetActive(true);
    }
}
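For completeness, here is a hypothetical caller in the style of the original script (the component placement and the key binding are assumptions):
using System.Collections.Generic;
using UnityEngine;

public class CombineOnKey : MonoBehaviour
{
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.J))
        {
            // Includes this object's own MeshFilter (typically first in the array),
            // which Simple() uses as the target for the combined mesh.
            var filters = new List<MeshFilter>(GetComponentsInChildren<MeshFilter>());
            StackOverflow_CombineMesh.Simple(filters);
        }
    }
}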
Bear in mind that meshes in Unity have a maximum number of vertices with the default 16-bit index format (65,535 to be exact) - if you cross that limit, your combined mesh won't render properly after all.
Have fun, keep learning!
Gecko
I think this is a common issue with Unity mesh combining. Try changing this line (assuming this is all on the parent game object):
combine[i].transform = meshFilters[i].transform.localToWorldMatrix;
to this:
combine[i].transform = transform.worldToLocalMatrix * meshFilters[i].transform.localToWorldMatrix;
where transform.worldToLocalMatrix is the parent object. You could also try something like:
combine[i].transform = meshFilters[i].transform.parent.worldToLocalMatrix * meshFilters[i].transform.localToWorldMatrix;
It depends on how you have it set up.

(Unity) How to bake data (Vector3 and Color32) onto render textures?

With the recent introduction of VFX Graph, attribute maps are being used to 'Set Position/Color from Map'.
In order to get an attribute map, one must bake position and color data into render textures, but I could not find any reference on how to do this, not even in the Unity docs.
Any help on how to do this will be appreciated!
Most of the time you would want to use a Compute Shader to bake a list of points into your textures. I'd suggest you check these repositories for reference:
Bake Skinned Mesh Renderer Data into textures
https://github.com/keijiro/Smrvfx
Bake Kinect data into textures
https://github.com/roelkok/Kinect-VFX-Graph
Bake pointcloud data into texture:
https://github.com/keijiro/Pcx
Personally, I'm using these scripts which work for my purpose though I'm no expert in Compute Shaders:
using UnityEngine;

public class FramePositionBaker
{
ComputeShader bakerShader;
RenderTexture VFXpositionMap;
RenderTexture inputPositionTexture;
private ComputeBuffer positionBuffer;
const int texSize = 256;
public FramePositionBaker(RenderTexture _VFXPositionMap)
{
inputPositionTexture = new RenderTexture(texSize, texSize, 0, RenderTextureFormat.ARGBFloat);
inputPositionTexture.enableRandomWrite = true;
inputPositionTexture.Create();
bakerShader = (ComputeShader)Resources.Load("FramePositionBaker");
if (bakerShader == null)
{
Debug.LogError("[FramePositionBaker] baking shader not found in any Resources folder");
}
VFXpositionMap = _VFXPositionMap;
}
public void BakeFrame(ref Vector3[] vertices)
{
int pointCount = vertices.Length;
positionBuffer = new ComputeBuffer(pointCount, 3 * sizeof(float));
positionBuffer.SetData(vertices);
//Debug.Log("Length " + vertices.Length);
bakerShader.SetInt("dim", texSize);
bakerShader.SetTexture(0, "PositionTexture", inputPositionTexture);
bakerShader.SetBuffer(0, "PositionBuffer", positionBuffer);
bakerShader.Dispatch(0, (texSize / 8) + 1, (texSize / 8) + 1, 1);
Graphics.CopyTexture(inputPositionTexture, VFXpositionMap);
positionBuffer.Dispose();
}
}
The compute shader:
// Each #kernel tells which function to compile; you can have many kernels
#pragma kernel CSMain
// Create a RenderTexture with enableRandomWrite flag and set it
// with cs.SetTexture
RWTexture2D<float4> PositionTexture;
uint dim;
Buffer<float3> PositionBuffer;
[numthreads(8,8,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
uint index = id.y * dim + id.x;
uint lastIndex = PositionBuffer.Length - 1;
// Trick for generating a pseudo-random number.
// Inspired by a similar trick in Keijiro's PCX repo (BakedPointCloud.cs).
// The points that are in excess because of the square texture, point randomly to a point in the texture.
// e.g. if (index > lastIndex) index = 0 generates excessive particles in the first position, resulting in a visible artifact.
//if (index > lastIndex) index = ( index * 132049U ) % lastIndex;
float3 pos;
if (index > lastIndex && lastIndex != 0) {
//pos = 0;
index = ( index * 132049U ) % lastIndex;
}
pos = PositionBuffer[index];
PositionTexture[id.xy] = float4 (pos.x, pos.y, pos.z, 1);
}
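For completeness, here is a hedged sketch of how these pieces could be wired together; the exposed property name "PositionMap", the 256x256 size and the MeshFilter source are assumptions, not part of the original setup:
using UnityEngine;
using UnityEngine.VFX;

public class PositionMapDriver : MonoBehaviour
{
    public MeshFilter source;   // mesh whose vertices get baked each frame
    public VisualEffect vfx;    // graph with an exposed Texture property "PositionMap"

    RenderTexture positionMap;
    FramePositionBaker baker;

    void Start()
    {
        // Must match the baker's texSize and format so Graphics.CopyTexture succeeds.
        positionMap = new RenderTexture(256, 256, 0, RenderTextureFormat.ARGBFloat);
        positionMap.Create();

        baker = new FramePositionBaker(positionMap);
        vfx.SetTexture("PositionMap", positionMap);
    }

    void Update()
    {
        Vector3[] vertices = source.mesh.vertices;
        baker.BakeFrame(ref vertices);
    }
}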

OpenGL ES 2.0 / MonoTouch: Rendering GUI Textures shows nothing

I'm building a simple framework for OpenGL UIs for MonoTouch. I set up everything and also succeeded in rendering 3D models, but a simple 2D texture object fails. The texture has a size of 256x256, so it's not too large, and it is a power of two.
Here is some rendering code (note: I removed the existing, working code):
// Render the gui objects ( flat )
Projection = Matrix4x4.Orthographic(0, WindowProperties.Width, WindowProperties.Height, 0);
View = new Matrix4x4();
GL.Disable(All.CullFace);
GL.Disable(All.DepthTest);
_Stage.RenderGui();
Stage:
public void RenderGui ()
{
Draw(this);
// Renders every child control, all of them call "DrawImage" when rendering something
}
public void DrawImage (Control caller, ITexture2D texture, PointF position, SizeF size)
{
PointF gposition = caller.GlobalPosition; // Resulting position is 0,0 in my tests
gposition.X += position.X;
gposition.Y += position.Y;
// Renders the ui model, this is done by using a existing ( and working vertex buffer )
// The shader gets some parameters ( this works too in 3d space )
_UIModel.Render(new RenderParameters() {
Model = Matrix4x4.Scale(size.Width, size.Height, 1) * Matrix4x4.Translation(gposition.X, gposition.Y, 0),
TextureParameters = new TextureParameter[] {
new TextureParameter("texture", texture)
}
});
}
The model uses a vector2 for positions; no other attributes are given to the shader.
The shader below should render the texture.
Vertex:
attribute vec2 position;
uniform mat4 modelViewMatrix;
varying mediump vec2 textureCoordinates;
void main()
{
gl_Position = modelViewMatrix * vec4(position.xy, -3.0, 1.0);
textureCoordinates = position;
}
Fragment:
varying mediump vec2 textureCoordinates;
uniform sampler2D texture;
void main()
{
gl_FragColor = texture2D(texture, textureCoordinates) + vec4(0.5, 0.5, 0.5, 0.5);
}
I found out that the drawing issue is caused by the shader. This line produces a GL_INVALID_OPERATION (it works with other shaders):
GL.UniformMatrix4(uni.Location, 1, false, (parameters.Model * _Device.View * _Device.Projection).ToArray());
EDIT:
It turns out that the shader uniform locations changed (yes, I'm wondering about this too, because the lookup happens after the shader is completely initialized). I changed it, and now everything works.
As mentioned in the other thread, the texture is wrong, but that is another issue (OpenGL ES 2.0 / MonoTouch: Texture is colorized red).
The shader initialization with the GL.GetUniformLocation problem mentioned above:
[... Compile shaders ...]
// Attach vertex shader to program.
GL.AttachShader (_Program, vertexShader);
// Attach fragment shader to program.
GL.AttachShader (_Program, pixelShader);
// Bind attribute locations
for (int i = 0; i < _VertexAttributeList.Length; i++) {
ShaderAttribute attribute = _VertexAttributeList [i];
GL.BindAttribLocation (_Program, i, attribute.Name);
}
// Link program
if (!LinkProgram (_Program)) {
GL.DeleteShader (vertexShader);
GL.DeleteShader (pixelShader);
GL.DeleteProgram (_Program);
throw new Exception ("Shader could not be linked");
}
// Get uniform locations
for (int i = 0; i < _UniformList.Length; i++) {
ShaderUniform uniform = _UniformList [i];
uniform.Location = GL.GetUniformLocation (_Program, uniform.Name);
Console.WriteLine ("Uniform: {0} Location: {1}", uniform.Name, uniform.Location);
}
// Detach shaders
GL.DetachShader (_Program, vertexShader);
GL.DetachShader (_Program, pixelShader);
GL.DeleteShader (vertexShader);
GL.DeleteShader (pixelShader);
// Shader is initialized add it to the device
_Device.AddResource (this);
I don't know what Matrix4x4.Orthographic uses as near-far range, but if it's something simple like [-1,1], the object may just be out of the near-far-interval, since you set its z value explicitly to -3.0 in the vertex shader (and neither the scale nor the translation of the model matrix will change that). Try to use a z of 0.0 instead. Why is it -3, anyway?
EDIT: So if the GL.UniformMatrix4 function throws a GL_INVALID_OPERATION, it seems you didn't retrieve the corresponding uniform location successfully. So the code where you do this might also help to find the issue.
Or it may also be that you call GL.UniformMatrix4 before the corresponding shader program is used. Keep in mind that uniforms can only be set once the program is active (GL.UseProgram or something similar was called with the shader program).
And by the way, you're multiplying the matrices in the wrong order, anyway (given your shader and matrix setting code). If it really works this way for other renderings, then you either were just lucky or you have some severe conceptual and mathematical inconsistency in your matrix library.
It turns out that the shader uniform locations change at an unknown time. Everything is created and initialized when I ask OpenGL ES for the uniform location, so it must be a bug in OpenGL.
Calling GL.GetUniformLocation(..) each time I set the shader uniforms solves the problem.
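As a hedged sketch of that workaround, mirroring the snippets above (the uniform name and surrounding variables come from the earlier code):
// Make sure the program is active, then look the location up again right before setting the uniform.
GL.UseProgram(_Program);
int location = GL.GetUniformLocation(_Program, "modelViewMatrix");
if (location != -1)
{
    GL.UniformMatrix4(location, 1, false, (parameters.Model * _Device.View * _Device.Projection).ToArray());
}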

Multiple textures don't show

I'm a newbie to DirectX 10. I'm developing a Direct3D 10 application that mixes two textures, which are filled manually according to the user's input. The current implementation is:
Create two empty textures with usage D3D10_USAGE_STAGING.
Create two shader resource views to bind to the pixel shader, because the shader needs them.
Copy the textures to the GPU memory by calling CopyResource.
Now the problem is that I can only see the first texture but not the second. It looks to me like the binding doesn't work for the second texture.
I don't know what's wrong with it. Can anyone here shed me a light on it?
Thanks,
Marshall
The class COverlayTexture is responsible for creating the texture, creating the resource view, filling the texture with the bitmap mapped from another application, and binding the resource view to the pixel shader.
HRESULT COverlayTexture::Initialize(VOID)
{
D3D10_TEXTURE2D_DESC texDesStaging;
texDesStaging.Width = m_width;
texDesStaging.Height = m_height;
texDesStaging.Usage = D3D10_USAGE_STAGING;
texDesStaging.BindFlags = 0;
texDesStaging.ArraySize = 1;
texDesStaging.MipLevels = 1;
texDesStaging.SampleDesc.Count = 1;
texDesStaging.SampleDesc.Quality = 0;
texDesStaging.MiscFlags = 0;
texDesStaging.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
texDesStaging.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
HR( m_Device->CreateTexture2D( &texDesStaging, NULL, &m_pStagingResource ) );
D3D10_TEXTURE2D_DESC texDesShader;
texDesShader.Width = m_width;
texDesShader.Height = m_height;
texDesShader.BindFlags = D3D10_BIND_SHADER_RESOURCE;
texDesShader.ArraySize = 1;
texDesShader.MipLevels = 1;
texDesShader.SampleDesc.Count = 1;
texDesShader.SampleDesc.Quality = 0;
texDesShader.MiscFlags = 0;
texDesShader.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
texDesShader.Usage = D3D10_USAGE_DEFAULT;
texDesShader.CPUAccessFlags = 0;
HR( m_Device->CreateTexture2D( &texDesShader, NULL, &m_pShaderResource ) );
D3D10_SHADER_RESOURCE_VIEW_DESC viewDesc;
ZeroMemory( &viewDesc, sizeof( viewDesc ) );
viewDesc.Format = texDesShader.Format;
viewDesc.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2D;
viewDesc.Texture2D.MipLevels = texDesShader.MipLevels;
HR( m_Device->CreateShaderResourceView( m_pShaderResource, &viewDesc, &m_pShaderResourceView ) );
return S_OK;
}
HRESULT COverlayTexture::Render(VOID)
{
m_Device->PSSetShaderResources(0, 1, &m_pShaderResourceView);
D3D10_MAPPED_TEXTURE2D lockedRect;
m_pStagingResource->Map( 0, D3D10_MAP_WRITE, 0, &lockedRect );
// Fill in the texture with the bitmap mapped from shared memory view
m_pStagingResource->Unmap(0);
m_Device->CopyResource(m_pShaderResource, m_pStagingResource);
return S_OK;
}
I use two instances of the class COverlayTexture, each of which fills its own bitmap into its own texture, and render them in the sequence COverlayTexture[1] then COverlayTexture[0].
COverlayTexture* pOverlayTexture[2];
for( int i = 1; i >= 0; i-- )
{
pOverlayTexture[i]->Render()
}
The blend state setting in the FX file is defined as below:
BlendState AlphaBlend
{
AlphaToCoverageEnable = FALSE;
BlendEnable[0] = TRUE;
SrcBlend = SRC_ALPHA;
DestBlend = INV_SRC_ALPHA;
BlendOp = ADD;
BlendOpAlpha = ADD;
SrcBlendAlpha = ONE;
DestBlendAlpha = ZERO;
RenderTargetWriteMask[0] = 0x0f;
};
The pixel shader in the FX file is defined as below:
Texture2D txDiffuse;
float4 PS(PS_INPUT input) : SV_Target
{
float4 ret = txDiffuse.Sample(samLinear, input.Tex);
return ret;
}
Thanks again.
Edit for Paulo:
Thanks a lot, Paulo. The question is which instance of the object should be bound to the alpha texture and which to the diffuse texture. As a test, I bind COverlayTexture[0] to the alpha and COverlayTexture[1] to the diffuse texture.
Texture2D txDiffuse[2];
float4 PS(PS_INPUT input) : SV_Target
{
float4 ret = txDiffuse[1].Sample(samLinear, input.Tex);
float alpha = txDiffuse[0].Sample(samLinear, input.Tex).x;
return float4(ret.xyz, alpha);
}
I called the PSSetShaderResources for the two resource views.
g_pShaderResourceViews[0] = overlay[0].m_pShaderResourceView;
g_pShaderResourceViews[1] = overlay[1].m_pShaderResourceView;
m_Device->PSSetShaderResources(0, 2, g_pShaderResourceViews);
The result is that I don't see anything. I also tried the channels x, y, z, and w.
Post some more code.
I'm not sure how you mean to mix these two textures. If you want to mix them in the pixel shader you need to sample both of them and then add them (or apply whatever operation you require) together.
How do you add the textures together? By setting an ID3D10BlendState or in the pixel shader?
EDIT:
You don't need two textures in every class: if you want to write to your texture your usage should be D3D10_USAGE_DYNAMIC. When you do this, you can also have this texture as your shader resource so you don't need to do the m_Device->CopyResource(m_pShaderResource, m_pStagingResource); step.
Since you're using alpha blending you must control the alpha value output in the pixel shader (the w component of the float4 that the pixel shader returns).
Bind both textures to your pixel shader and use one texture's value as the alpha component:
Texture2D txDiffuse;
Texture2D txAlpha;
float4 PS(PS_INPUT input) : SV_Target
{
float4 ret = txDiffuse.Sample(samLinear, input.Tex);
float alpha=txAlpha.Sample(samLinear,input.Tex).x; // Choose the proper channel
return float4(ret.xyz,alpha); // Alpha is the 4th component
}