Unity and VR for complex construction CAD models

I work in construction, and we are trying to visualize our projects using Unity and the Oculus Rift.
All of our models are created in Revit. For each discipline model we have (electrical, mechanical, architectural, facade, ...), we export an FBX from Revit and import it into Unity.
The models contain roughly 3,000 to 60,000 objects (meshes) and 3 million to 40 million polygons. When we try to visualize them in Unity we get very low frame rates, around 2 to 3 FPS, with roughly 15,000 to 20,000 batched draw calls.
I believe the problem is the combined complexity of all the models we bring into Unity. Is there any way to optimize this? I have already tried decimation, disabling shadows, and occlusion culling, but nothing seems to help. Collapsing the models into a single object is not an option, because the user has to be able to select and inspect individual elements.

I am working on something similar and can share some experience with tasks like this that involve many vertices or meshes. I am trying to visualize point clouds in Unity, which is a very challenging task. In my case, though, I create the point clouds myself and do not triangulate them. That helps, but I still have to apply optimizations.
In my experience, once you render more than about 10 million vertices in a frame you start to have FPS issues. This varies with your hardware, of course, and I am sure it is even worse with triangulated meshes. What I have done to optimize things is the following:
First, I only render objects that are inside the camera frustum. To do this I use a function called IsVisibleFrom, implemented as an extension method on Renderer:
using UnityEngine;

public static class RendererExtensions
{
    // True when the renderer's bounding box intersects the camera's view frustum.
    public static bool IsVisibleFrom(this Renderer renderer, Camera camera)
    {
        Plane[] planes = GeometryUtility.CalculateFrustumPlanes(camera);
        return GeometryUtility.TestPlanesAABB(planes, renderer.bounds);
    }
}
Then you can use it like this, traversing all the meshes you have (here PointCloud is the parent object of the mesh chunks and cam is the camera, both assigned elsewhere):

GameObject PointCloud;
Camera cam;

IEnumerator RenderVisibleGameObject()
{
    while (true)
    {
        for (int i = 0; i < PointCloud.transform.childCount; i++)
        {
            Renderer grid = PointCloud.transform.GetChild(i).GetComponent<Renderer>();
            // Enable a chunk only while its bounds intersect the camera frustum.
            grid.gameObject.SetActive(grid.IsVisibleFrom(cam));
        }
        // Wait one frame, then re-evaluate visibility.
        yield return null;
    }
}
The second option, if you are able to create lower-detail versions of your meshes, is Level of Detail (LOD). This renders progressively lower-detail meshes for objects that are further away from the camera.
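In Unity this is handled by the LODGroup component. A minimal sketch, assuming you have already produced a high-detail and a low-detail renderer for the same object (the field names are placeholders):

using UnityEngine;

public class LodSetup : MonoBehaviour
{
    public Renderer highDetail; // hypothetical pre-made detail levels
    public Renderer lowDetail;

    void Start()
    {
        LODGroup group = gameObject.AddComponent<LODGroup>();
        LOD[] lods = new LOD[2];
        // High-detail mesh while the object covers more than 50% of screen height...
        lods[0] = new LOD(0.5f, new Renderer[] { highDetail });
        // ...low-detail mesh down to 10%, below which the object is culled entirely.
        lods[1] = new LOD(0.1f, new Renderer[] { lowDetail });
        group.SetLODs(lods);
        group.RecalculateBounds();
    }
}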
The last option I can recommend is occlusion culling. It is similar to the first option, but it additionally skips objects that are hidden behind other geometry; this did not apply to me, because I only had points. In Unity you bake it in the editor's Occlusion Culling window after marking the relevant geometry as static.

You may also find the Forge Unity AR/VR toolkit of interest; there is an overview, an introduction, and a 23-minute video. As you probably know, Forge is highly optimised for professional visualisation of large CAD models.

Is inverse kinematics possible without additional reference objects?

I was looking at this tutorial: https://docs.unity3d.com/Manual/InverseKinematics.html
In that tutorial, they change the pose of the hands, head, etc. by setting a target object to hold or look at.
In this project: https://hackaday.com/2016/01/23/amazing-imu-based-motion-capture-suit-turns-you-into-a-cartoon/
the author accesses the Blender API and directly sets the transforms of several bones.
Is it possible to do the same in Unity? I do not need any assistance with getting data from sensors; I am just looking for the equivalent Unity API for directly setting the orientation of specific body parts of a skeleton at runtime.
You are probably looking for SkinnedMeshRenderer.
When you import a model from a 3D package such as Blender, it will have a SkinnedMeshRenderer component.
What you want to check out is SkinnedMeshRenderer.bones, which gets you the array of bones (as an array of Transform) used to control the pose. You can modify its elements and thereby affect the pose, so you can do things like this:
// Grab the bone transforms; the array order matches the mesh's bind poses.
var bones = this.GetComponent<SkinnedMeshRenderer>().bones;
// Rotate the first bone an extra 45 degrees around its local Y axis.
bones[0].localRotation = bones[0].localRotation * Quaternion.Euler(0f, 45f, 0f);
Just play around with it; that is the best way to get a feel for it.
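One caveat worth knowing: if an Animator is driving the same rig, it will overwrite your changes every frame, so apply manual bone rotations in LateUpdate, after the animation has been evaluated. A minimal sketch, assuming a bone named "Head" exists in the rig (the name and the fixed rotation are placeholders for real sensor data):

using UnityEngine;

public class BoneDriver : MonoBehaviour
{
    Transform head; // hypothetical: the bone we want to drive directly

    void Start()
    {
        var smr = GetComponent<SkinnedMeshRenderer>();
        // Look the bone up by name; the names come from the rig authored in Blender.
        foreach (Transform bone in smr.bones)
            if (bone.name == "Head")
                head = bone;
    }

    void LateUpdate()
    {
        // LateUpdate runs after the Animator, so this rotation is not overwritten.
        // Replace the hard-coded angle with the orientation from your IMU data.
        if (head != null)
            head.localRotation = Quaternion.Euler(0f, 45f, 0f);
    }
}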
For more advanced manipulation you can also assign your own array of bones, or drive blend-shape weights with SetBlendShapeWeight / GetBlendShapeWeight, but this is probably more than what you need.
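For completeness, driving a blend-shape weight looks like this (weights run from 0 to 100 by default, and the index follows the order in which the shapes were authored):

var smr = GetComponent<SkinnedMeshRenderer>();
// Half-apply the first blend shape defined on the mesh, if there is one.
if (smr.sharedMesh.blendShapeCount > 0)
    smr.SetBlendShapeWeight(0, 50f);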

Unity spawning lots of objects at runtime running slow

I've created a simple project where approximately 7,000 cubes are created in the scene (with a VR camera). The problem is that when I move the camera to see all the cubes, the FPS becomes very bad, around 5 or 6 frames per second. My PC has an i7 and a GTX 1070, and I thought it should be able to draw hundreds of thousands of cubes without any problem; after all, Minecraft seems to have no trouble drawing cubes.
So the question is: is it possible to optimize the scene so that all the cubes are drawn in one call, or something similar, to get good performance?
I have actually made all the cubes static, and there are no textures, only the standard material.
I am using the default Directional Light, and I would prefer not to change the standard shader because it works so well with lighting.
Here is how I am generating the cubes:
private void AddCube(Vector3 coords)
{
    // Instantiate a copy of the prototype under the shared holder object.
    var particle = Object.Instantiate(prototype.transform, holder.transform);
    SetScale(particle);
    SetPosition(particle, coords);
    cubes.Add(particle.gameObject);
    // Note: setting isStatic at runtime does not by itself enable static batching
    // for spawned objects.
    particle.gameObject.isStatic = true;
}

private void SetScale(Transform particle)
{
    particle.localScale = new Vector3(Scale, Scale, Scale);
}

private void SetPosition(Transform particle, Vector3 coords)
{
    particle.position = coords;
}
In the Stats panel I see 41 FPS, but only because I moved the camera away from the cubes to get a clean background for the screenshot. After making the cubes static, the FPS depends on whether the cubes are visible on screen or not.
The problem is most likely caused by the number of individual objects you are instantiating. If the cubes don't change their transforms after generation, you should be able to use StaticBatchingUtility to combine them into one batch.
Call this after generating the cubes, passing the parent GameObject of all the cubes:
StaticBatchingUtility.Combine(cubesRoot);
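For example, a minimal sketch assuming the holder object from the question's generation code is the parent of every cube:

void Start()
{
    // Hypothetical grid; AddCube parents each cube under 'holder'.
    for (int x = 0; x < 70; x++)
        for (int z = 0; z < 100; z++)
            AddCube(new Vector3(x * Scale, 0f, z * Scale));

    // Combine all meshes under 'holder' into static batches. Each cube keeps its
    // own GameObject (still individually selectable) but must never move again.
    StaticBatchingUtility.Combine(holder);
}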

Speeding up rendering in SceneKit

So, I am using SceneKit to render a collection of parametric surfaces (the sum of which make an object). To put these on screen I am creating custom geometries by sampling the points and creating triangles. Here is a quick over view of how I do it.
Loop through the collection of surfaces
Generate a random color C
For each surface, calculate an N x N grid of points (both positions and normals)
Assign all vertices of that surface the color C
Add groups of 3 vertices from this surface to the face index list
That seems to work. Once I have all this data, I put it into the proper structures (SCNGeometrySource and SCNGeometryElement) and make an SCNGeometry like so:
SCNGeometry(sources: [vertexSource, normalSource, colorSource], elements: [element])
This works and displays my surfaces on screen fine as one single geometry element. My problem is that some of the objects I am working with are really complicated, and moving the camera around while looking at them is very slow: rendering takes around 500 ms per frame, which makes the frame rate and the experience awful.
So the question is: what steps can I take to speed up SceneKit's performance? I did this same project in WebGL using Three.js with the same amount of data and could orbit the camera fine, so I can't believe SceneKit couldn't at least compete with that. What features can I tweak or turn off to improve performance? I am using the triangle primitive type, allowsCameraControl = true for the orbiting camera, and Metal for the SCNView.
For those curious, the model I am struggling with has 231,900 vertices and 347,850 face indices: 11.13 MB of vertex data (positions and normals) and 1.39 MB of face data (essentially just the indices of the vertices, in order, for the triangles).
1) If you are "standing" at the center of your generated surface, your problem may be that you are drawing a lot offscreen (no frustum culling), and you need to split your surface (a single node) into subsurfaces (child nodes) so that only the nodes visible in the camera's view space are drawn.
That being said, 231,900 vertices is really not much; I draw several million at 60 FPS with the SceneKit Metal renderer (about 20% faster than the OpenGL renderer) on OS X.
2) If you are looking at your surfaces from a distance and see bad performance, check what bytesPerComponent: you are feeding in when creating the SCNGeometrySource. I experienced a big performance drop when using CGFloat (double) instead of plain float on a GeForce GTX (while it was fine on integrated Intel graphics).

Tile Grid Data storage for 3D Space in Unity

This question is (mostly) game-engine independent, but I have been unable to find a good answer.
I'm creating a turn-based tile game in 3D space using Unity. The levels will have slopes, occasional non-planar geometry, depressions, tunnels, stairs, etc. Each level is static and handcrafted, so tiles should never move. I need a good way to keep track of tile-specific variables for these static levels, and I'd like to verify whether my approaches make sense.
My ideas are:
1. Create two meshes: the first is the complex game world; the second is a reference overlay mesh with minimal geometry that is never rendered and is used only for the tiles. I would overlay the two and use the second mesh as a grid reference.
2. Hard-code the tiles for each level. While tedious, it would work as a brute-force approach. I'd like to avoid this, however, since it is not easy to deal with visually.
3. Workaround approach: convert the 3D information to 2D textures and use only one mesh.
4. "Project" a plane down onto the level and record height/slope to minimize complexity. Also not ideal.
5. Manually create an individual (non-rendered) tile object for each tile. The easiest solution I could think of.
Now for the Unity3D-specific question:
Does Unity allow selecting individual vertices/triangles/quads of a mesh and attaching components, scripts, or variables to those selections? For example, can I select one square of the default 10x10 Unity plane and tell Unity that this square now has a boolean attached to it? This mostly refers to idea #1 above, where the reference mesh would hold positional and variable information assigned directly to the mesh. I have a feeling that if I do choose a reference mesh, I'd need to make the tiles individual objects, snap them into place using the reference, and then attach the relevant scripts to those tiles, as sketched below.
I have found a ton of excellent resources (like http://www-cs-students.stanford.edu/~amitp/gameprog.html) on tile generation (mostly procedural), but I'm a bit stuck on the basics, being new to Unity, and I'm not looking for procedural design.
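A minimal sketch of that last tile-object approach, with hypothetical names: one small, non-rendered Tile component per square, indexed by grid coordinates for fast lookup.

using System.Collections.Generic;
using UnityEngine;

// Hypothetical per-tile component: one tiny GameObject per square, snapped to the
// reference mesh, carrying whatever per-tile state the game needs.
public class Tile : MonoBehaviour
{
    public Vector2Int gridCoords;
    public float height;
    public bool walkable = true;
}

public class TileGrid : MonoBehaviour
{
    readonly Dictionary<Vector2Int, Tile> tiles = new Dictionary<Vector2Int, Tile>();

    void Awake()
    {
        // Index every child Tile by its grid coordinates for O(1) lookup.
        foreach (Tile t in GetComponentsInChildren<Tile>())
            tiles[t.gridCoords] = t;
    }

    public Tile GetTile(Vector2Int coords)
    {
        Tile t;
        return tiles.TryGetValue(coords, out t) ? t : null;
    }
}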

Automatically generating Level of Detail information

I have a very simple but large scene containing lots of objects, many of which are small but curved, so they have large polygon counts. The FPS in the scene is horrible. I have learned that a Level of Detail optimization should help a lot.
I am using three.js, and it has an option to set up LOD. But the model doesn't have any LOD information (alternate meshes for each object corresponding to distance from the camera). Is there a tool to generate this information automatically by decimating the original mesh to create the alternate meshes?
I also can't imagine how textures would be mapped onto the decimated meshes. Do I have to create the LOD information manually? 3D editors like Blender, 3ds Max, and the Unity editor let me set these meshes up individually, but I have about 200 meshes in my scene.
Level of Detail information cannot, in general, be generated automatically, and yes, it is a painstaking process to create the LOD info. You can look at the LOD Book site for help.
The accepted answer to this question is actually no longer quite correct.
While it is true that creating LOD data used to be a painstaking process, it becomes easy when using InstaLOD. InstaLOD is a fully automatic 3D optimization solution that can optimize any static or skeletal mesh while maintaining all vertex attributes, such as texture coordinates. Besides polygon optimization, InstaLOD also features remeshing, occlusion culling, imposter creation, and other methods for optimizing individual 3D models and complex scenes.
DISCLAIMER: I am one of the developers of InstaLOD.