In Unity3D, what would "setting" the bounds of a mesh do or achieve?

In a Unity code base, I saw this:
// the game object currently has no mesh attached
MeshFilter mFilter = gameObject.AddComponent<MeshFilter>();
gameObject.AddComponent<MeshRenderer>();
// no problem so far
mFilter.mesh = MakeASmallQuadMesh(0.1f);
// great stuff
mFilter.mesh.bounds = SomeSpecificBounds();
// what ?
The function "MakeASmallQuadMesh" has the usual completely normal code for making a mesh, so
Mesh mesh = new Mesh();
mesh.SetVertices(verts);
mesh.SetIndices(indices, MeshTopology.Triangles, 0);
mesh.SetUVs(0, uvs);
mesh.RecalculateNormals();
mesh.RecalculateBounds();
return mesh;
No worries there. It makes a quad 10cm across.
But what about the line of code
mFilter.mesh.bounds = SomeSpecificBounds();
I was amazed to learn you can set mesh.bounds; I had assumed it would be read-only.
What possible "meaning" is there to "setting" the bounds? It would be like this: there's a written measurement in a doctor's office stating that Jane is 6'. You change the record to 5'10". Of course, Jane's height does not change at all. You've just, bizarrely, made the record wrong.
Could it be this: the bounds of a mesh are used by various, indeed many, Unity systems (culling, etc.). Could the pattern be that by "forcing" the bounds like this (the bounds are now "totally wrong" for the object; they're just some value you forced in there), the programmer wanted, for some reason, Unity's systems (say, culling) to use those forced values, nonsensical as they are for the actual object?
Wild guess: maybe there's a pattern (one I have never heard of) where you "force" the bounds of object A to be the same as the bounds of object B, for some reason I cannot guess at?
What could possibly be the pattern / reason here?
I would just assume it's a basic mistake, but assumptions kill.

I was actually curious about this myself, and I happened to have a program that explicitly generated large numbers of custom meshes, so I decided to test a few things.
The first thing I wanted to confirm was that the bounds were in fact set automatically. A rudimentary inspection revealed that this was indeed the case: specifically, a mesh's bounds are automatically recalculated any time you set the mesh.vertices property. This probably explains why the property is a fixed-length array rather than a list. (Fun side note: if you try to assign secondary properties like UV coords or normals to the mesh before assigning vertex positions, Unity gently nags you about mismatched array lengths before promptly exploding. So Don't Do That.)
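For anyone who wants to reproduce that check, here's a minimal sketch (the three-vertex mesh is just an arbitrary example):
using UnityEngine;

public class BoundsCheck : MonoBehaviour
{
    void Start()
    {
        Mesh m = new Mesh();
        m.vertices = new[] { new Vector3(-1f, 0f, 0f), new Vector3(1f, 0f, 0f), new Vector3(0f, 1f, 0f) };
        m.triangles = new[] { 0, 1, 2 };
        // Prints bounds that already fit the vertices, even though we never
        // called RecalculateBounds() ourselves.
        Debug.Log(m.bounds);
    }
}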
As for what this actually impacts, I had a hunch: I set the extents of my mesh bounds to 0. Immediately, meshes at the corner of my viewport started getting visually culled. This tells us a few things:
Setting bounds explicitly does have an effect.
Unity does actually make use of custom bounds data.
Unity uses mesh bounds to perform frustum culling.
According to Unity's manual, there are three cases where the Bounds class is used: Mesh.bounds, Renderer.bounds, and Collider.bounds. Of those three, Mesh.bounds is the only property that isn't read-only.
As for the question of why anybody would want to set mesh bounds explicitly: it's not impossible that you could perform some clever culling optimization, like looking at a complex mesh through a window or some such, but if I had to guess, whoever wrote that code simply didn't trust Unity to set the mesh bounds accurately, and forced them explicitly.
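To make that guess concrete, here's a minimal sketch of the one legitimate use of forced bounds I'm aware of: inflating them so that an object whose vertices get displaced (by a shader, say) isn't frustum-culled when its real geometry leaves the original box. The component name is mine, not from the code base in question:
using UnityEngine;

public class ForceMeshBounds : MonoBehaviour
{
    void Start()
    {
        // Assumes a MeshFilter with a mesh is already attached.
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        // Fit the bounds to the actual vertices first...
        mesh.RecalculateBounds();
        // ...then deliberately overwrite them with an inflated box, so the
        // culling system treats the object as much bigger than it really is.
        mesh.bounds = new Bounds(mesh.bounds.center, mesh.bounds.size * 10f);
    }
}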

Related

Get world space Oriented Bounding Box 8 points in Unreal (C++)

Does anybody know how to retrieve an actor's world space oriented bounding box 8 points in C++? I'm reading the official documentation but it's a bit vague, as it never specifies whether the bounds objects (FBox, FBoxSphereBounds) are local space, world space, axis-aligned, etc.
I'm thinking something like below but I'm not sure if that's right
UStaticMeshComponent* pMesh = Cast<UStaticMeshComponent>(actor->GetComponentByClass(UStaticMeshComponent::StaticClass()));
if (pMesh)
{
    UStaticMesh* pStaticMesh = pMesh->GetStaticMesh();
    if (pStaticMesh && pStaticMesh->GetRenderData())
    {
        FStaticMeshRenderData* pRenderData = pStaticMesh->GetRenderData();
        FBoxSphereBounds bounds = pRenderData->Bounds;
        // TransformBy returns a new FBoxSphereBounds; it does not mutate in place
        bounds = bounds.TransformBy(actor->GetActorTransform());
    }
}
Unreal maintains its bounds as axis-aligned (AABB). This is often done in game engines for efficiency in the physics/collision subsystem. To get an AABB for an actor, you can use the following function; it is essentially equivalent to what you did above with pRenderData->Bounds, but is independent of the actor implementation.
FBox GetActorAABB(const AActor& Actor)
{
    FVector ActorOrigin;
    FVector BoxExtent;
    // First argument is bOnlyCollidingComponents - if you want to get the bounds
    // for components that don't have collision enabled, set it to false.
    // Last argument is bIncludeFromChildActors. Usually this won't do anything,
    // but if we've childed an actor - like a gun childed to a character - we
    // wouldn't want the gun to be part of the bounds, so set it to false.
    Actor.GetActorBounds(true, ActorOrigin, BoxExtent, false);
    return FBox::BuildAABB(ActorOrigin, BoxExtent);
}
From the code above it looks like you want an oriented bounding box (OBB), since you are applying the actor's transform to the bounds. The trouble is that the AABB Unreal maintains is "fit" to the world-space axes; what you are essentially doing just rotates the center point of the AABB, which will not give a "tight fit" for rotation angles far from the world axes. The following two UE forum posts provide some insight into how you might do this:
https://forums.unrealengine.com/t/oriented-bounding-box-from-getlocalbounds/241396
https://forums.unrealengine.com/t/object-oriented-bounding-box-from-either-aactor-or-mesh/326571/4
If you want a true OBB, FOrientedBox is what you need. Depending on what you are trying to do, note that the engine lacks documented utilities for intersection or overlap tests with this structure; these things do exist in the engine, but you have to hunt through the source code to find them. In general, the separating axis theorem (SAT) can be used to find collisions between two convex hull shapes, which an OBB is by definition.
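Since the rest of this page is Unity, here is the corner-transform idea from those forum posts sketched in Unity C# rather than Unreal API, purely to illustrate the math: take the local-space bounds, enumerate their 8 corners, and push each corner through the full object transform so rotation is preserved (unlike transforming the AABB's center alone):
using UnityEngine;

public static class ObbCorners
{
    public static Vector3[] GetWorldCorners(Transform t, Bounds localBounds)
    {
        Vector3[] corners = new Vector3[8];
        Vector3 min = localBounds.min;
        Vector3 max = localBounds.max;
        int i = 0;
        for (int x = 0; x < 2; x++)
            for (int y = 0; y < 2; y++)
                for (int z = 0; z < 2; z++)
                {
                    Vector3 local = new Vector3(x == 0 ? min.x : max.x,
                                                y == 0 ? min.y : max.y,
                                                z == 0 ? min.z : max.z);
                    // TransformPoint applies translation, rotation and scale,
                    // so the resulting 8 points form an oriented box.
                    corners[i++] = t.TransformPoint(local);
                }
        return corners;
    }
}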

Unity: Create AnimationClip With World Scale AnimationCurves

I've been looking for a solution to this for quite a while now (meaning several days) and I haven't found anything yet. Maybe I'm thinking about it wrong and there isn't a way, but let's try!
I'm recording hand-data on a Hololens (the Unity Hololens Input Simulation for now). This essentially gives me one float AnimationCurve for each hand joint for each transform.position.x to z and rotation.x to w. Now my goal is to put these curves into an AnimationClip and add it to an AnimatorController (via an AnimatorOverrideController) that animates a hand rig and replay the recordings. Everything so far works!
However, the recorded hand-data from the Hololens is in world scale, not in local scale (which makes sense, since you usually want absolute coordinates when you want to know where the hand is). But to animate the hand, it seems I'm only able to set local coordinates, which I don't have.
Example:
clip.SetCurve("", typeof(Transform), "localPosition.x", curve.PositionX);
Here, the clip takes the x-coordinates from some hand joint and puts them into the localPosition.x of the corresponding hand rig joint. The problem: curve.PositionX is world-scale (absolute coordinates), but localPosition.x takes local-scale (coordinates relative to its parent).
I can't simply change "localPosition.x" to "position.x", like so:
clip.SetCurve("", typeof(Transform), "position.x", curve.PositionX);
even though the Transform class has both properties and position is the object's world scale position. I'm not sure why this doesn't work, but it gives me the following error:
Cannot bind generic curve on Transform component, only position, rotation and scale curve are supported.
I'm aware that it doesn't make much sense to use absolute coordinates for an animation, but I simply don't have anything else.
Does anyone have an approach how I can deal with this in a sensible, not-too-cumbersome way? It seems I have all the important parts, I just can't figure out how to put them together. Thanks so much already! :)
From my basic understanding, it seems like you are using the input animation recording service provided by MRTK. Unfortunately, MRTK does not provide a localPosition version of the curve data. However, you can modify the data from the recordingBuffer after the InputRecordingService stops recording.
So, this is a method worth trying: in the handJointCurves dictionary property of the recordingBuffer field, a set of pose curves is stored for each joint. Then, based on this table: Joint pose curves, subtract the position value of the key None from the position value of every other joint at each keyframe, so that you obtain positions local to the key None.
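A rough sketch of that subtraction using plain Unity AnimationCurve types (the method name and the "reference curve" parameter are mine for illustration; MRTK's actual buffer types differ):
using UnityEngine;

public static class CurveUtils
{
    // Rebuilds a joint's position curve relative to a reference joint's
    // position curve, keyframe by keyframe. Call once per axis (x, y, z).
    public static AnimationCurve MakeRelative(AnimationCurve jointCurve, AnimationCurve referenceCurve)
    {
        Keyframe[] keys = jointCurve.keys;
        for (int i = 0; i < keys.Length; i++)
        {
            // Sample the reference joint at the same time and subtract it.
            keys[i].value -= referenceCurve.Evaluate(keys[i].time);
        }
        return new AnimationCurve(keys);
    }
}
Note this handles position only; if the reference joint also rotates, you'd need to apply its inverse rotation as well.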

Unity3d: How to stretch (or share) a shader across multiple objects

I am tinkering around with cubes trying to build variations of 'block types' (in an effort to get more familiar with Unity's abilities, shaders, editor tools etc).
I have a generic cube:
That I want to add a material/shader to... which I have done (no problem there):
Which looks good enough (for my purposes) when it's just one block. But when I stick them all together, I don't like the effect; you can see the individual boxes, and the shader (which you can't see in the still image) is actually animated water, so when it's animating it looks... pretty ugly.
(Bad/undesired)
I am trying to STRETCH or share the shader/material across all the selected blocks. See the below example (in this case, I have taken a SINGLE block and stretched it, but that's not keeping with the spirit of having individual blocks, so also not what I want).
(better/more desired)
I have thought the following may help, but they all seem overly complicated (aka I think I'm going about it incorrectly)
Have the individual blocks, but stretch a single plane across them and then apply the material.
I have found examples of programmatically joining meshes, and then applying the material/shader to the single combined object.
Take a single block and stretch it to the dimensions needed.
Maybe (not sure if I can), but have a plane with the water material applied to it and use the blocks as masks to only display water for those blocks? Not sure how that works...
In the end I am hoping to have the following:
Individual blocks (so I can interact with them).
Shader animations/colors are shared across the shared/connected blocks.
It won't always be a 2x3 grid... it could be diagonal, or contain odd shapes of connected blocks...
(this is all in EDITOR mode).
Any thoughts on how I might approach this?
Phrases you could try searching are "converting from world space to uv space", "transforming uv coordinates", "uv math". UV is the name for coordinates in textures that a shader samples from, and if you take already existing shader code, you can do interesting things by changing the UV(s) it uses. One of those things is letting you "stretch" it.
In your 2x3 cube example, instead of each cube's UVs going from 0 to 1, you could tell each cube to treat its U value as going from 0 to 0.5 or from 0.5 to 1, and its V value as going from 0 to 0.33, 0.33 to 0.67, or 0.67 to 1, depending on where it sits in the grid. You could do this by having properties on the shader telling it where to start the UV (a) and where to end it (b), and lerping from the (0,0) - (1,1) range to the a - b range.
My answer to a different question uses some similar logic to that by comparing the world position of the pixel vs a range of world positions to get a UV. The relevant shader code is:
fixed4 colorizedMapUV = (IN.worldPos.xz-_WorldSpaceRange.xy)
/ (_WorldSpaceRange.zw-_WorldSpaceRange.xy);
Another option is to only look at the world position, and completely disregard any notion of where the "corners" of the UV should be. A method called "triplanar mapping" might guide you to a solution that does this.
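For what it's worth, here's a sketch of how the C# side could feed that shader: compute the combined world-space XZ range of the group once and hand it to every block's renderer. _WorldSpaceRange matches the property in the shader snippet above; the component and field names are illustrative:
using UnityEngine;

public class SharedWaterRange : MonoBehaviour
{
    public Renderer[] blocks;  // assign the connected water blocks here

    void Start()
    {
        // Grow one bounds to enclose every block in the group.
        Bounds combined = blocks[0].bounds;
        foreach (Renderer r in blocks)
            combined.Encapsulate(r.bounds);

        // Pack min XZ into xy and max XZ into zw, as the shader expects.
        Vector4 range = new Vector4(combined.min.x, combined.min.z,
                                    combined.max.x, combined.max.z);

        var props = new MaterialPropertyBlock();
        foreach (Renderer r in blocks)
        {
            r.GetPropertyBlock(props);
            props.SetVector("_WorldSpaceRange", range);
            r.SetPropertyBlock(props);
        }
    }
}
Using a MaterialPropertyBlock avoids duplicating the material per block, so all connected blocks keep sharing the one water material.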

Weird Lines 3D Unity

I'm working on a project, using Unity 5.4.
In this project, blocks are stacked next to each other.
However, some annoying weird lines appear. On Android these
lines occur more often than on PC.
For illustration purposes I added an image and video.
Please zoom in on the picture to see the line I'm speaking of clearly.
Could anyone please provide a solution to get rid of this nuisance?
Thanks in advance.
Literature:
Block alignment code snippet:
for (int x = 0; x < xSize; x++)
{
    for (int z = 0; z < zSize; z++)
    {
        Vector3 pos = new Vector3(x, -layerDepth, z);
        InstantiateBlock(pos);
    }
}
Video link: https://youtu.be/5wN1Wn51d_Y
You have object seams!
This occurs when there is a physical or perceived gap between objects.
There are multiple causes for this.
1. Floating Point Imprecision
This could be because you are setting the positions of the cubes to ints while they have floating-point dimensions. The symptom is usually no white seams when the camera is close to the objects, with the seams gradually appearing as you get further away, due to floating-point imprecision. More.
Most of these blocks appear to line up exactly, from most camera positions. But from the occasional unfortunate position, the exact value for A's position plus its vertex at (0.5,0.5,-0.5) might be slightly different to object B's position plus its vertex at (-0.5,0.5,-0.5). The result is that Unity shows a tiny gap, within which you can see the shadowed side of cube A.
If you consider the following on paper, 3 == 1/3 * 3 is mathematically correct. However, using floats, 1/3 == 0.333333..., and subsequently 3 * 0.333333... == 0.999999... BINGO! Random gap between objects!
So how to solve it? Use floats to calculate the positions of your objects: new Vector3(1,1,1); should be new Vector3(1f,1f,1f); - for example. For further reading on this, try this SO post.
2. Texture Wrap-mode
If you are using textures on your objects, try changing the Wrap-Mode of your texture from wrap to clamp, or try upping the texture padding.
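If you'd rather do it from script than in the texture import settings, the wrap mode is exposed on the texture itself (a small sketch; grabbing the texture via mainTexture is just one way to reach it):
using UnityEngine;

public class ClampTextureWrap : MonoBehaviour
{
    void Start()
    {
        Texture tex = GetComponent<Renderer>().material.mainTexture;
        // Clamp stops the sampler bleeding in the opposite edge of the
        // texture at the borders of each face.
        tex.wrapMode = TextureWrapMode.Clamp;
    }
}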
3. Shadow Acne - (Lighting and Shadow artifacts)
These are arbitrary patterns of pixels in shadow when they should really be lit, or lit when they should be in shadow.
To prevent shadow acne, a Bias value can be added to the distance in the shadow map to ensure that pixels on the borderline definitely pass the comparison as they should (source).
In Unity, go to your light source and then increase the Bias under Shadow Type. I would suggest doubling the default value of 0.05, and continuing from there until it's fixed. You don't want to crank this value to max, because...
Do not set the Bias value too high, because areas around a shadow near the GameObject casting it are sometimes falsely illuminated. This results in a disconnected shadow, making the GameObject look as if it is flying above the ground.
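The bias can also be set from script if you want to experiment at runtime (a sketch; 0.1 is simply double the 0.05 default suggested above):
using UnityEngine;

public class ShadowBiasTuner : MonoBehaviour
{
    void Start()
    {
        // Assumes this component sits on the shadow-casting light.
        Light sun = GetComponent<Light>();
        sun.shadowBias = 0.1f;
    }
}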
Are you using different blocks that you put against each other? Your problem sounds like the blocks are not completely flush against each other, which causes you to see the side of the next block (this explains the camera Y changing things: you might see the side better from higher up). That side will have different lighting and appear as a different/lighter colour. To check whether this is the problem, try overlapping them slightly manually in the editor and see if the problem still occurs.
Making the blocks kinematic solves that. The issue is the rigid bodies bumping up against one another.

Drawing a 3D arc and helix in SceneKit

A recent question here made me think of SceneKit again, and I remembered a problem I never solved.
My app displays antenna designs using SK. Most antennas use metal rods and mesh reflectors so I used SCNCylinder for the rods, SCNPlane for the reflector and SCNFloor for the ground. The whole thing took a couple of hours, and I'm utterly noob at 3D.
But some antennas use wires bent into arcs or helixes, and I punted here and made crappy segmented objects using several cylinders end-to-end. It looks ass-tastic.
Ideally I would like a single object that renders the arc or helix with a cylindrical cross section. Basically SCNTorus, but with a start and end angle. This post talks about using a UIBezierPath in SK, but it uses extrude to produce a ribbon-like shape. Is there a way to do something similar but with a cylinder cross section (like a partial SCNTorus)?
I know I can make a custom shape by creating the vertexes (and normals and such) but I'm hoping I missed a simpler solution.
An arc you can do with SCNShape. Start with the technique from my other answer to get an extruded, ribbon-like arc. You'll want to make sure that the part where your path traces back on itself is offset by a distance the same as your extrusion depth, so you end up with a shape that's square in cross section.
To make it circular in cross section, use the chamferProfile property — give it a path that's a quarter circle, and set the chamfer radius equal to half the extrusion depth, and the four quarter-circle chamfers will meet, forming a circular cross section.
A helix is another story. SCNShape takes a planar path — one that varies in only two dimensions — and extrudes it to make a three-dimensional solid. A helix is a path that varies in three dimensions to start with. SceneKit doesn't have anything that describes a shape in such terms, so there's no super simple answer here.
The shader modifier solution @HalMueller alludes to is interesting, but problematic. It's simple to use a modifier at the geometry entry point to make a simple bend: say, offset every y coordinate by some amount, even by an amount that's a function of y. But that's a one-dimensional transform, so you can't use it to wrap a wire around on itself. (It also changes the cross section.) And on top of that, shader modifiers happen on the GPU at render time, so their effects are an illusion: the "real" geometry in SceneKit's model is still a cylinder, so features like hit testing apply to that and not to the transformed geometry.
The best solution to making something like a helix is probably custom geometry — generating your own vertex data (SCNGeometrySource). The math for finding the set of points on a helix is pretty simple if you follow that shape's definition. To wrap a cross section around it, follow the Frenet formulas to create a local coordinate frame at each point on the helix. Then make an index buffer (SCNGeometryElement) to stitch all those points into a surface with triangles or tristrips. (Okay, that's a lot of hand-waving around a deep topic, but a full tutorial is too big for an SO answer. This should be enough of a breadcrumb to get started, though...)
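The centerline math itself is engine-agnostic, so purely as a starting breadcrumb (sketched in C# for consistency with the rest of this page, not in SceneKit API): generate the helix points from the parametric definition, and the curve's tangent gives you the first axis of the Frenet frame you'd wrap the cross section around.
using UnityEngine;

public static class HelixPoints
{
    // radius: helix radius; pitch: height gained per full turn;
    // turns: number of revolutions; segments: sample count along the curve.
    public static Vector3[] Centerline(float radius, float pitch, float turns, int segments)
    {
        Vector3[] pts = new Vector3[segments + 1];
        for (int i = 0; i <= segments; i++)
        {
            float t = turns * 2f * Mathf.PI * i / segments;
            pts[i] = new Vector3(radius * Mathf.Cos(t),
                                 pitch * t / (2f * Mathf.PI),
                                 radius * Mathf.Sin(t));
        }
        return pts;
    }

    // Normalized derivative of the same curve; the normal and binormal of the
    // Frenet frame follow from further derivatives and cross products.
    public static Vector3 Tangent(float radius, float pitch, float t)
    {
        return new Vector3(-radius * Mathf.Sin(t),
                           pitch / (2f * Mathf.PI),
                           radius * Mathf.Cos(t)).normalized;
    }
}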
Here are some starting points that might help.
One approach would be to use more cylinders and make them shorter. That's the same idea behind the various segmentCount properties on the SCNGeometry primitives. Can we see a screenshot of the current linked cylinders version?
If you increase the heightSegmentCount, you could use the approach outlined here: scenekit, how to bend an object.
I just took a look at SCNShape. I was thinking you could use a shader modifier to warp the extruded shape into a circular cross section. But SCNShape doesn't seem to expose a segment count property, which I think you'd need to create enough extrusion segments for a good look. The chamferRadius and chamferProfile properties look interesting. I wonder if you could use those to create an extrusion that looks good.