CAD issue in Unity 2019.3

I am trying to fully understand the process of importing CAD models into Unity and the capabilities of the program. Is it possible to show the PMI of a CAD model in Unity? I am working on an application for industrial purposes that should help operators investigate parts or components of an assembly, and I need a way to show them dimensions, tolerances, etc.
Thanks in advance!

For investigating dimensions and tolerances in CAD data (NURBS, B-splines, IGES), the open-source McNeel Rhino.Inside® example allows the CAD editor Rhinoceros 3D and Grasshopper to be embedded and run inside Unity.
Rhinoceros 7 does not provide fatigue-resistance testing, and for dimension and tolerance investigation you will need a Rhino license.
If you just want to export a CAD model from your favorite 3D CAD editor and convert it to FBX (for visualization only), InstaLOD or the much more expensive PiXYZ (around $2,000 per year, not perpetual) can do the conversion. As a UV-mapping tool specific to FBX converted from CAD, RizomUV RealSpace (not Virtual Space) has the precision needed to preserve vertex normals.

Related

How to make a complex robot model in webots?

I am making my robot in Cyberbotics Webots, and I can't figure out how to build a good-looking 3D model - at least at the level of the mantis hexapod.
I understand that you can only import ready-made models in VRML97 format, but that format is not supported by Fusion 360 and other programs.
In Webots itself, I did not find a way to build a model more complicated than one assembled from cubes, pyramids, and other simple objects.
I also had the idea of assembling a model from a large number of rectangular boxes using grouping, but it seems to me that such a model would slow the simulation down considerably.
Is it possible to see how the finished robots are made, and make changes to them?
The node you are looking for is IndexedFaceSet (https://www.cyberbotics.com/doc/reference/indexedfaceset). It allows you to efficiently model a shape as a set of triangular faces. You will find an example of it in this simulation world: https://cyberbotics.com/doc/guide/samples-geometries#high_resolution_indexedfaceset-wbt
One possible workflow to do this is to use Blender to create your mesh and then use the Webots exporter to export it to Webots: https://github.com/cyberbotics/blender-webots-exporter

How to save modified mesh at runtime?

In my game, I modify meshes at runtime using a damage algorithm.
After that, I would like to save them.
I get all meshes with:
MeshFilter[] meshfilters = MyObject.GetComponentsInChildren<MeshFilter>();
Then I modify a single mesh.
How can I save that back into my original FBX file?
Thanks
Unity's internal mesh type is not the same as an FBX: FBX is a file storage format, while Unity uses a different internal structure at runtime for greater efficiency.
As far as I know, there is no standard way to save out a runtime mesh as an FBX; that is a feature of 3D modelling software, not of a game engine.
You can save your deformations somehow and re-apply them when you reload the model. When I did a similar thing (custom-generated terrain meshes at runtime), I would generate a mesh at runtime, save the data required to generate that mesh, and later pass that data into the same flow I used to generate the mesh in the first place. A rough sketch of this idea is shown below.
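A minimal sketch of that save/re-apply idea, assuming the damage algorithm only moves vertices (the class and method names here are my own, not a Unity API):

using System.IO;
using UnityEngine;

public static class MeshDeformationIO
{
    // Hypothetical helper: write the deformed vertex positions to a file.
    public static void SaveDeformation(MeshFilter mf, string path)
    {
        Vector3[] verts = mf.mesh.vertices;
        using (var w = new BinaryWriter(File.Open(path, FileMode.Create)))
        {
            w.Write(verts.Length);
            foreach (var v in verts) { w.Write(v.x); w.Write(v.y); w.Write(v.z); }
        }
    }

    // Re-apply the saved positions to a freshly loaded copy of the model.
    // Assumes the vertex count and order are unchanged since saving.
    public static void LoadDeformation(MeshFilter mf, string path)
    {
        using (var r = new BinaryReader(File.Open(path, FileMode.Open)))
        {
            var verts = new Vector3[r.ReadInt32()];
            for (int i = 0; i < verts.Length; i++)
                verts[i] = new Vector3(r.ReadSingle(), r.ReadSingle(), r.ReadSingle());
            Mesh m = mf.mesh;
            m.vertices = verts;
            m.RecalculateNormals();
            m.RecalculateBounds();
        }
    }
}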
It is possible to serialise mesh data in the Unity editor, since mesh data is actually a different format from FBX (Unity reads an FBX, generates a mesh asset from the FBX's data, then associates that mesh with the FBX). You do this with AssetDatabase.CreateAsset(): https://docs.unity3d.com/ScriptReference/AssetDatabase.CreateAsset.html
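A minimal sketch of that editor-only route (the wrapper class is hypothetical; AssetDatabase.CreateAsset is the real API):

#if UNITY_EDITOR
using UnityEditor;
using UnityEngine;

public static class MeshAssetSaver
{
    // Save a (possibly runtime-modified) mesh as a standalone .asset file.
    // The mesh is duplicated first so the asset is not tied to the scene instance.
    public static void SaveMeshAsAsset(Mesh mesh, string assetPath)
    {
        Mesh copy = Object.Instantiate(mesh);
        AssetDatabase.CreateAsset(copy, assetPath); // e.g. "Assets/SavedMesh.asset"
        AssetDatabase.SaveAssets();
    }
}
#endif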
But I don't think this is what you're intending to do.
If you want the full FBX suite of features (skinning, animation, use in other Unity projects) and you want them at runtime, you don't really have any options that I know of, short of implementing your own FBX serialisation library (not really a reasonable solution).

Extracting 2D surface from 3D STEP model

I'm trying to figure out a good way to programmatically generate contours describing a 2D surface from a 3D STEP model. The application is generating NC code for a laser-cutting program from a 3D model.
Note: it's easy enough to do this in a wide variety of CAD systems. I am writing software that needs to do it automatically.
For example, a STEP model of a part needs to become a vector file, like an SVG or a DXF, describing the contour to cut.
Perhaps the most obvious way of tackling the problem is to parse the STEP model and run some kind of algorithm to detect planes and select the largest as the cut surface, then generate the contour. Not a simple task!
I've also considered using a pre-existing SDK to render the model with an orthographic camera, capture a high-resolution image, and then operate on the image to generate the appropriate contours. This method would work, but it would be CPU-heavy, and its accuracy would be limited to the pixel resolution of the rendered image - not ideal.
This is perhaps a long shot, but does anyone have thoughts about this? Cheers!
I would use a CAD library to load the STEP file (not a CAD API), look for the planar face with the highest number of edge curves in its face loop, and transform those curves onto the XY plane; a sketch of this selection logic follows. Afterward, finding the 2D geometry's min/max for centering, etc. would be pretty easy.
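As a sketch of that selection logic - written in C# here, with hypothetical ICadModel/ICadFace types standing in for whatever the chosen CAD library actually exposes:

using System.Collections.Generic;

// Hypothetical stand-ins for the chosen CAD library's API.
public interface ICadCurve { }
public interface ICadFace
{
    bool IsPlanar { get; }
    IReadOnlyList<ICadCurve> OuterLoop { get; } // edge curves of the face loop
}
public interface ICadModel
{
    IEnumerable<ICadFace> Faces { get; }
}

public static class CutFaceFinder
{
    // Pick the planar face whose loop has the most edge curves; that is
    // typically the profile to cut. The library's transform facilities
    // would then be used to map its loop onto the XY plane.
    public static ICadFace FindCutFace(ICadModel model)
    {
        ICadFace best = null;
        foreach (var face in model.Faces)
        {
            if (!face.IsPlanar)
                continue;
            if (best == null || face.OuterLoop.Count > best.OuterLoop.Count)
                best = face;
        }
        return best;
    }
}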
Depending on the programming language you are using, I would search Google for "CAD control" or "CAD component" combined with "STEP import".

Offline (visualisation) rendering in scientific computing

I have already asked this question on scientific computing and wondered if this forum could offer alternatives.
I need to simulate the movement of a large number of agents undergoing soft body deformation. The processes that govern the agents' movement are complex and so the entire process requires parallelisation.
The simulation needs to be visualised in 3D. As I will be running this simulation across many different nodes (MPI or even MPI+GPGPU) I do not want the visualisation to run in real time, rather the simulation should output a video file after it is finished.
(I'm not looking for awesome AAA video-game-quality graphics. In addition, the movement code will take up enough CPU time as it is, so I don't want to slow the application down further by adding heavyweight rendering code.)
I think there are three ways of producing such a video:
Write raw pixel information to BMPs and stitch them together - I have done this in 2D, but I don't know how this would work in 3D.
Use an offline analogue of OpenGL/Direct3D, rendering to a buffer instead of the screen.
Write some sort of telemetry data to a file, indicating each agent's position, deformation, etc. for each time interval, and after the simulation has finished, use it as input to an OpenGL/Direct3D program.
This problem MUST have been solved before - there's plenty of visualisation in HPC.
In summary: how does one easily render to a video in an offline manner (very basic graphics, not Toy Story - I just need 3D blobs) without impacting performance in a big way?
My idea would be to store the different states/positions of the vertices as individual frames of a vertex animation in a suitable file format. A suitable format would be COLLADA, an intermediate format for 3D scenes based on XML, so it can easily be parsed and written with general-purpose XML libraries. There are also special-purpose libraries for COLLADA, such as COLLADA DOM and pycollada. The COLLADA file containing the vertex animation could then be rendered directly to a video file with the rendering software of your choice (3D Studio Max, Blender, Maya, ...).
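To illustrate how little code the recording side needs with a general-purpose XML writer, here is a sketch in C# (matching the code elsewhere on this page); the toy schema is my own stand-in - a real exporter, e.g. via COLLADA DOM or pycollada, would emit the actual COLLADA schema:

using System.Xml;

public static class VertexFrameDump
{
    // Write every simulation frame's vertex positions into one XML file.
    // frames[f] is a flat x,y,z array for frame f; dt is the time step.
    public static void Write(string path, float[][] frames, float dt)
    {
        var settings = new XmlWriterSettings { Indent = true };
        using (XmlWriter w = XmlWriter.Create(path, settings))
        {
            w.WriteStartElement("vertex_animation");
            for (int f = 0; f < frames.Length; f++)
            {
                w.WriteStartElement("frame");
                w.WriteAttributeString("time", (f * dt).ToString("R"));
                w.WriteString(string.Join(" ", frames[f]));
                w.WriteEndElement();
            }
            w.WriteEndElement();
        }
    }
}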

iPhone GLGravity example using quaternions

The GLGravity iPhone example, which shows how to use the accelerometer and OpenGL, suffers from the gimbal lock problem. I'm wondering whether there is any code available that uses quaternion rotation instead of Euler angles? Any help will be greatly appreciated; I've been struggling with this for a long time without success...
It helps to have a good grasp of the theory before trying to implement and use it yourself. Below are two introductory articles about using quaternions for rotations. Both are primarily concerned with smooth rotation interpolation and avoiding gimbal lock in accumulated rotations (a short sketch of the core math follows the links):
Gamedev.net - Quaternion Powers
Gamasutra - Rotating Objects Using Quaternions
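To make the theory concrete before reaching for a library, here is a minimal, dependency-free sketch of the core operations - building a quaternion from an axis and angle, composing rotations, and converting the result to the column-major matrix layout OpenGL expects. It is written in C# to match the other code on this page, and the struct is my own illustration, not any particular library's API:

using System;

// Minimal quaternion type for illustration; use a tested library in practice.
struct Quat
{
    public float X, Y, Z, W;

    // Unit quaternion for a rotation of 'angle' radians about a normalised
    // axis (ax, ay, az): q = (axis * sin(angle/2), cos(angle/2)).
    public static Quat FromAxisAngle(float ax, float ay, float az, float angle)
    {
        float s = (float)Math.Sin(angle * 0.5f);
        return new Quat { X = ax * s, Y = ay * s, Z = az * s,
                          W = (float)Math.Cos(angle * 0.5f) };
    }

    // Compose rotations: the product a*b applies b first, then a.
    public static Quat Multiply(Quat a, Quat b)
    {
        return new Quat {
            W = a.W * b.W - a.X * b.X - a.Y * b.Y - a.Z * b.Z,
            X = a.W * b.X + a.X * b.W + a.Y * b.Z - a.Z * b.Y,
            Y = a.W * b.Y - a.X * b.Z + a.Y * b.W + a.Z * b.X,
            Z = a.W * b.Z + a.X * b.Y - a.Y * b.X + a.Z * b.W
        };
    }

    // Convert a unit quaternion to a 4x4 column-major matrix, the layout
    // glMultMatrixf expects (each group of four below is one column).
    public float[] ToOpenGLMatrix()
    {
        float xx = X * X, yy = Y * Y, zz = Z * Z;
        float xy = X * Y, xz = X * Z, yz = Y * Z;
        float wx = W * X, wy = W * Y, wz = W * Z;
        return new float[] {
            1 - 2 * (yy + zz), 2 * (xy + wz),     2 * (xz - wy),     0,
            2 * (xy - wz),     1 - 2 * (xx + zz), 2 * (yz + wx),     0,
            2 * (xz + wy),     2 * (yz - wx),     1 - 2 * (xx + yy), 0,
            0,                 0,                 0,                 1
        };
    }
}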
Now as far as actual code goes, I would suggest getting, and using, an "industry strength" vector math library rather than rolling your own. My suggestion would be to grab the LinearMath part of the Bullet Physics middleware project. Bullet physics, and the included linear math library, is developed by some of Sony's top engineers and has been in active development for years. It's freely available under the permissive zlib license and is used by professional game developers all over the world. The lib is cross-platform/architecture and compiles on anything from iPhone to PS3.
The lib offers a Quaternion class that allows you to create quaternions from Euler angles or from a rotation about an arbitrary axis, e.g. using setEulerZYX. Once you have your quaternions, there are built-in functions for all the common operations on them: plus, minus, mul, normalize, slerp, and much more.
For actually applying your final quaternion to OpenGL rendering, the Transform class allows you to construct a matrix from a quaternion. The Transform class in turn includes a function getOpenGLMatrix that directly gives you a compatible matrix to pass to OpenGL.
The lib also includes a host of other very useful matrix and vector classes and functions.
Grab the latest Bullet distribution from Google Code, or grab just the LinearMath portion of the code directly from Subversion using: svn checkout http://bullet.googlecode.com/svn/trunk/src/LinearMath