Simplify a very complex/detailed 3D model for mobile apps - unity3d

tl;dr;
How can a software developer who gets a very highly detailed 3D model quickly and easily optimize it for mobile apps, so that he can focus his time and energy on developing the app logic?
I think it's a pretty common use case and there may be a tool for this already out somewhere.
Long story
I have a 3D model (Collada) of a machine. This model, created by the machine's engineering team, contains a lot of minute details essential for building the machine hardware.
Now, I am developing a mobile app with unity that needs to render this machine along with 10 other machines in a single scene. The thing I like about the available models is that they look exactly like the real stuff. At the same time, I am not interested in the internal stuff, the external shell is just enough for me. I have no interaction with the 3D modelling team (let's assume I downloaded the model from some archive), and hence can't ask them to make any changes for me. The model is all I have. I am on my own.
There are two problems I am facing
How to get rid of the interiors of the models?
How to get rid of the high resolutions details in the external shell which the human eye can't detect in a mobile phone?
To give a sense of the scale, the real equipment and hence the model can be as big as 100 ft. (30 m.) while these will never occupy more than a 5 inch HD display. The size of the models ranges from 50MB to 400MB. The entire scene hence can go up to 2GB. Each model has nearly 300k faces.
The other challenge I am facing is that I am a software developer: I am familiar with code, my familiarity with 3D modelling tools is very limited, and I would like it to stay that way :) I can play around with these tools, but I don't want to start spending half my time in them.
I have tried Blender's decimate modifier, but the results aren't good: the amount of detail lost is very uniform, instead of being targeted at the interiors. I don't want to spend time going through each mesh and deleting faces manually.
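(For reference, the decimate pass itself can be scripted and run headless, which at least avoids the UI. A rough sketch for Blender 2.8+, run with blender --background --python decimate.py; the file names and the ratio are placeholders.)

import bpy

# Start from an empty scene, import the Collada file, decimate every mesh,
# and re-export.
bpy.ops.wm.read_factory_settings(use_empty=True)
bpy.ops.wm.collada_import(filepath="machine.dae")

for obj in bpy.context.scene.objects:
    if obj.type == 'MESH':
        bpy.context.view_layer.objects.active = obj
        mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
        mod.ratio = 0.1  # keep roughly 10% of the faces
        bpy.ops.object.modifier_apply(modifier=mod.name)

bpy.ops.export_scene.fbx(filepath="machine_decimated.fbx")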
Also, for some reason, when I import a model exported from Blender into Unity, it looks horrible (some faces/polygons that I can see in Blender are missing in Unity), even with zero decimation.
I find it hard to accept that the manual process is the only way. I feel that with today's technology this would be a simple automation. The steps as I see them (sketched in code after this list) are:
Detect polygons that aren't directly reachable by any exterior raycast. If required, I can define the set of ray origin points (14 may be enough), basically camera locations.
Delete these polygons
Detect polygons with dimensions less than a threshold
Delete these polygons
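A rough sketch of what I have in mind, written against the trimesh Python library (just one library that could do it; the ray count and area threshold are placeholders, and loading .dae needs pycollada):

import numpy as np
import trimesh

# Flatten the Collada file into a single mesh
mesh = trimesh.load("machine.dae", force="mesh")

# Steps 1 & 2: shoot rays inward from points surrounding the model and keep
# only the triangles some ray hits first, i.e. the externally visible shell.
# (Denser sampling than the 14 camera points mentioned above, for coverage.)
rng = np.random.default_rng(0)
dirs = rng.normal(size=(5000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
origins = mesh.centroid + dirs * mesh.extents.max() * 2.0
hits = mesh.ray.intersects_first(ray_origins=origins,
                                 ray_directions=mesh.centroid - origins)
visible = np.zeros(len(mesh.faces), dtype=bool)
visible[hits[hits >= 0]] = True

# Steps 3 & 4: also drop triangles below an area threshold (model units; tune)
big_enough = mesh.area_faces > 1e-4

mesh.update_faces(visible & big_enough)
mesh.remove_unreferenced_vertices()
mesh.export("machine_shell.obj")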

Blender-to-Unity models can have slight problems if you don't export them the right way. How to do that is outside my field, as I am also a developer and personally prefer 3ds Max.
What I would recommend is doing exactly what you don't want to do, because it is the easiest way. Select the faces (just drag and select) and then delete them all (the inside faces, that is). You should be able to hide the outer shell if you have a proper 3D program; just google how to do it if it seems complicated.
If you want to delete smaller details on the outside, do exactly the same... just select the polys and delete them. I wouldn't recommend using a built-in tool, because most of those take the whole object and reduce or increase its polygon count uniformly, depending on which tool you use.
In the end, just try to get into the program. As a programmer I dislike having to use 3D modelling software as well, but it is part of the job, so put some effort in and learn the tools. It's less work than it seems.
Edit: As for the tools you are asking about, those do not really exist; you don't normally take a high-poly model and turn it into a low-poly model for a mobile game. Instead, you usually get a 3D artist to make a low-poly model. The fact that you have no communication with the team is a bit odd, but so be it. I'd recommend either getting in touch with them or, like I said before, putting the effort in and learning a 3D program. What you want to do literally sounds like click, drag, select, and press delete to remove some polys you wouldn't see anyway.
-Lars

with vcglib
vcglib may work for you; you can see its sample for simplifying a PLY 3D file. It can also be applied to many other 3D file formats such as STL, OBJ, etc. As vcglib is a C++ library, you can write a simple program that uses it to simplify your STL model. This method works on an OS without X, such as Ubuntu Server. You can refer to my question Failed to simplify 3D models with vcglib, Assertion `0' failed on how to use vcglib to simplify a PLY 3D file.
with meshlabserver
If you want to do the automatic simplification on an OS with X, or on Windows or macOS, it's much easier; you can use meshlabserver (MeshLab is also built on vcglib). You can run a command like the following, where PLYmesher_script.mlx is the filter file; you can write this file yourself or generate it with MeshLab (refer here).
meshlabserver -i ./option-0000.ply -o ./meshed.ply -m vc vn -s scripts/PLYmesher_script.mlx
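If you have many models to process, the same command can be looped from a small script, for example (a Python sketch; the paths and the .mlx script location are placeholders for your setup):

import subprocess
from pathlib import Path

# Batch version of the meshlabserver command above over a folder of .ply files
script = "scripts/PLYmesher_script.mlx"
for src in Path("models").glob("*.ply"):
    dst = src.with_name(src.stem + "_simplified.ply")
    subprocess.run(
        ["meshlabserver", "-i", str(src), "-o", str(dst),
         "-m", "vc", "vn", "-s", script],
        check=True,
    )
    print(f"{src} -> {dst}")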

Related

Making an overlay with vulkan [duplicate]

I am a mathematician, not a programmer. I have a notion of the basics of programming and am a fairly advanced power user on both Linux and Windows.
I know some C and some Python, but not much more.
I would like to make an overlay so that when I start a game it can get info about AMD and NVIDIA GPUs, like frame time and FPS. I am quite certain that the current system of benchmarks used to compare two GPUs is flawed, because brief instances and scenes that bump up the FPS momentarily (but are totally irrelevant in terms of user experience) result in a higher average FPS number and mislead the market, either unintentionally or intentionally. (For example, in a game whose name I can't remember, probably a COD title, there was a highly tessellated entity on the map that wasn't even visible to the player, which made AMD GPUs seemingly underperform when roaming through that area, leading to a lower average FPS count.)
I have an idea of how to calculate GPU performance in theory, but I don't know how to harvest the data from the GPU. Could you refer me to API manuals or references that would help me build such an overlay?
I would like to study as little as possible (by that I mean I would like to learn only what I absolutely have to in order to get the job done; I don't intend to become a coder).
I thank you in advance.
This is generally what the Vulkan layer system is for: it allows you to intercept API commands and inject your own. But it is nontrivial to code yourself. Here are some pre-existing open-source options for you:
To get at the timing info and draw your custom overlay, you can use (and modify) a tool like OCAT. It supports Direct3D 11, Direct3D 12, and Vulkan apps.
To just get the timing (and other interesting info) as CSV, you can use a command-line tool like PresentMon. It should work with D3D, and I have been using it with Vulkan apps too; it seems to accept them.
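Once you have a PresentMon capture as CSV, you can compute frame-time statistics that are much harder to skew with short FPS bursts than a plain average (a rough Python sketch; the MsBetweenPresents column name is assumed from PresentMon's CSV output, so check the header of your capture):

import csv
import statistics

with open("capture.csv", newline="") as f:
    frame_ms = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]

avg_fps = 1000.0 / statistics.mean(frame_ms)
# "1% low": the FPS during the worst 1% of frames, which brief high-FPS
# moments cannot inflate the way they inflate the average.
worst = sorted(frame_ms, reverse=True)[: max(1, len(frame_ms) // 100)]
low_1pct_fps = 1000.0 / statistics.mean(worst)
print(f"average FPS: {avg_fps:.1f}, 1% low FPS: {low_1pct_fps:.1f}")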

Hololens-SpatialMapping (Unity3D)

I'm currently doing a project with Microsoft's HoloLens. The problem is that the HoloLens memory is limited, so I can only make a spatial mapping of a room and not of a building, because it can't remember the whole building. I had an idea: maybe I can create several objects and assemble them? But no one talks about this... Do you think it's possible?
Thanks for reading me.
Y.P
Since you don’t have a compass, you could establish some convention to help. For example, you could start the scanning by giving a voice command (and stop it with another one), and decide to only start scanning when you’re facing north. Then it would be easy to know the orientation of each room. What may be harder is to get the angle exactly right. Your head might be off by a few degrees and you may have to work some “magic” (post-processing) to correct it.
Or placing QR codes on a wall (printer paper + scotch tape) and using something like Vuforia can help you avoid this orientation problem altogether (you would get the QR code’s orientation which would match that of the wall).
You can also simplify the scanned mesh and convert it to planes. That way you can remember simpler objects instead of the raw spatial mapping mesh. (Search for the SurfaceToPlanes script in the Holographic Academy tutorials).
Scanning, the first layer, meaning the HoloLens trying to reason about the environment, is an unstoppable process. There is no API for starting or stopping it, and as far as I know that process also slowly consumes more and more memory. The only things you can do are deleting space (i.e. deleting holograms) or covering the sensors. But that's the OS/hardware level, not the app level, which is presumably what you want.
Layer two, which is probably what you are talking about, is the spatial reconstruction process, where that raw spatial data is processed into a low-poly mesh (aka spatial mapping). This process can be started and stopped, for example through Unity's SpatialMappingCollider and SpatialMappingRenderer components, if you use Unity.
Finally, the third layer is extracting objects/segments from that spatial mapping mesh into primitives, like that SurfaceToPlanes script. You can also fully control when that happens.
There has been a lot of confusion, especially due to the renaming sprees in MixedRealityToolkit (overuse of the word Scanning) and Unity (SpatialAnchor to WorldAnchor, etc.) and misleading tutorials that use a lot of colloquialisms instead of crisp terminology.
Theory aside: if you want the HoloLens to treat your entire building as one continuous space in terms of the first layer, you're out of luck. It was designed for a living room, and there is a lot of voodoo involved in making it work stably in facilities of 30x30 meters. You probably want to rely on disjoint "islands" with specific detection anchors to identify where you are, or on markers and coordinates relative to them.
Cheers

Generate a real time 3D (mesh) model in Unity using Kinect

I'm currently developing an application with the initial goal of obtaining, in real time, a 3D model of the environment "seen" by a Kinect device. This information would later be used for projection mapping, but that's not an issue for the moment.
There are a couple of challenges to overcome, namely the fact that the Kinect will be mounted on a mobile platform (robot) and the model generation has to be in real-time (or close to it).
After a long research on this topic, I came up with several possible (?) architectures:
1) Use the depth data obtained from the Kinect, convert it into a point cloud (using PCL for this step; the back-projection itself is sketched after this list), then into a mesh, and then export it into Unity for further work.
2) Use the depth data obtained from Kinect, convert it into a point cloud (using PCL for this step), export it into Unity and then convert it into a Mesh.
3) Use KinectFusion, which already has the option of creating a mesh model, and (somehow) automatically load the created mesh model into Unity.
4) Use OpenNI+ZDK (+ wrapper) to obtain the depth map and generate the Mesh using Unity.
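(For reference, by "convert it into a point cloud" in options 1 and 2 I mean the usual pinhole back-projection of the depth map, roughly as below; the intrinsics are placeholders and should come from the Kinect's calibration.)

import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5  # placeholder intrinsics for a 640x480 depth map

def depth_to_points(depth_m):
    # depth_m: (480, 640) array of depths in meters, 0 where there is no reading
    v, u = np.indices(depth_m.shape)
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid pixels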
Quite honestly, I'm kind of lost here. My main issue is that the real-time requirement, along with being forced to integrate several software components, makes this a tricky problem. I don't know which, if any, of these solutions are viable, and the information/tutorials on these issues aren't exactly abundant like they are, for example, for skeleton tracking.
Any sort of help would be greatly appreciated.
Regards,
Nuno
Sorry, I might not be providing a solution for realtime mesh creation within Unity - but the process discussion here was interesting enough for me to reply.
In the hard science novel Memories with Maya - there is discussion of exactly such a scenario:
"“Point taken,” he said. “So… Satish showed me a demo of the Quad [Quad=Drone] acquiring real-time depth and texture maps.”
“Nothing new in that,” I said.
“Yeah, but look above us.”
I tilted my head up. The crude shape of the Quad came into view.
“The Quad is here, but you can't see it because the FishEye [Fisheye=Kinect 2] is on it aimed straight ahead.”
“So it's mapping video texture over live geometry? Cool,” I said.
“Yeah, the breakthrough is I can freeze a frame… freeze real life as it were, step out of the scene and study it.”
“All you do is block out the live world with the cross polarizers?”
“Yeah,” he said. “It's a big deal for AYREE to be able to use such data-sets.”
“The resolution has improved,” I said.
“Good observation,” he said. “So has the range sensing. The lens optics have also been upgraded.”
“I noticed that if I turn around I don't see the live feed, just the empty street,” I said.
“Yes, of course,” he replied. “The Quad is facing the other way around. It's why I'm standing in front of you. The whole street, however, is a 3D model done by a standard laser scan taken from the top of that high tower.”
Krish pointed to a building block at the far end of the street. I turned back to the live 3D view again. He walked in front of me.
“This is uber cool. Everyone looks so real.”
“Haha. You should see how cool it is when you're here in person with the Wizer on,” he said. “I'm here watching these real people pass by, only they have a mesh of themselves mapped onto them.”
“Ahhh! Yes.”
“Yeah, it's like they have living paint on them. I feel like reaching out and touching, just to feel the texture.”...
The work that you're thinking of doing in this area, and this use of a live mesh, goes far beyond projection mapping for events, for sure!
Wishing you the best on the project, and I will be following your updates.
Some of the science behind the story is on www.dirrogate.com if the topic interests you.
Kind Regards.
I would use Kinect Fusion, as it has a sample with the ability to export to .obj, which Unity supports. You can save it automatically and import it into Unity to generate a mesh automatically. If you have multiple Kinects, Microsoft even has a sample showing the basics of Kinect Fusion with multiple Kinects. Also, since Fusion is already written for you, there is not much code you will have to write.
Here is an example of a mesh from Fusion with one camera:
I do want you to notice how many vertices there are though... This could cause performance problems later on.
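A quick way to see how heavy the exported mesh is before pulling it into Unity is to count the vertex and face lines in the .obj (a small sketch; the file name is a placeholder). Unity, at least in older versions, caps meshes at roughly 65k vertices, so a dense Fusion scan may need splitting or decimation first.

# .obj is plain text: lines starting with "v " are vertices, "f " are faces
v_count = f_count = 0
with open("fusion_scan.obj") as obj_file:
    for line in obj_file:
        if line.startswith("v "):
            v_count += 1
        elif line.startswith("f "):
            f_count += 1
print(f"{v_count} vertices, {f_count} faces")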
Good luck!

Modeling a Physical Place inside iPhone Application

I need to find a way to model a physical place inside an iPhone application. For example, I want to be able to take images of a restaurant and then use some tools or a programming API to model this restaurant as a 3D place and let the user navigate and explore the place and its rooms.
I have thought about HTML5 inside a web view, but I don't think WebGL is compatible with the iPhone web view (the Safari engine).
Can you please recommend a method, API, Commercial Library or anything to help me achieve this task?
First, you need to be able to display 3D models on the iPhone. One of the most popular 3D engines is Unity3D:
http://unity3d.com/
It is extremely easy to start playing with Unity3D. You even have a free license with limited features:
http://unity3d.com/unity/licenses
Then you need to reconstruct a 3D model from pictures. This is not a trivial problem, so it is better if you know some computer vision. You can try playing with OpenCV:
http://opencv.willowgarage.com/wiki/
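As a very first step toward reconstruction from pictures, you can detect and match features between two photos, which OpenCV's Python bindings make short (a sketch; the image paths are placeholders, and a full reconstruction pipeline needs much more than this):

import cv2

img1 = cv2.imread("room_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("room_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect ORB features in both photos and brute-force match their descriptors
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matched features")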
Best regards.
Actually, Nuke from The Foundry has made a decent start toward the future of creating computer models from images.
Basically, it takes a high-contrast point and tracks it through successive frames. Given hundreds or thousands of tracked points, the next step is to calculate the perspective change between points.
Say two points are a known pixel distance apart at time zero, and some time later they are a different distance apart. This change could simply be a bad tracking point, but assuming the two points are tracked perfectly, the change in distance could be caused by the camera moving laterally or rotationally. In real space, a point further away from you shifts differently in perspective than a closer point, and this perspective change is a mathematical certainty.
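To make that concrete with a toy example: for a pinhole camera that moves sideways, a tracked point shifts in the image by roughly focal_length_px * lateral_move / depth, so nearer points shift more pixels than farther ones (the numbers below are made up):

# Parallax from a purely lateral camera move, pinhole camera model
focal_px = 1000.0   # focal length in pixels
baseline = 0.5      # sideways camera move in meters

for depth in (2.0, 5.0, 20.0):
    shift_px = focal_px * baseline / depth
    print(f"point at {depth:>4.1f} m shifts by {shift_px:6.1f} px")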
Initially the tracking is typically used to re-film a piece of footage to stabilize it. But the data the software produces while analyzing the film can be saved; it is often called a point cloud. Clusters of nearby points that track very closely together usually do so because they are parts of the same surface, so a model can be built.
But my friend, we are still barbarians compared to the speed and software that could do that perfectly. Otherwise all the CG artists out there would have nothing left to model in Maya except fantasy monsters and spaceships that don't exist yet...

OpenGL ES and real world development

I'm trying to learn OpenGL ES quickly (I know, I know, but these are the pressures that have been thrust upon me). I have read around a fair bit, with lots of success at rendering basic models, some basic lighting, and 'some' texturing success too.
But this is CONSTANTLY the point at which all OpenGL ES tutorials end; they never say more about what a real-life app may need. So I have a few questions that I'm hoping aren't too difficult.
How do people get 3D models from their favorite 3D modeling tool into an iPhone/iPad application? I have seen a couple of blog posts where people have written Python scripts for tools like Blender which create .h files that you can use. Is this what people do every time, or do the "big" tooling suites (3ds Max, Maya, etc.) have exporting features?
Say I have my model in a nice .h file, with all the vertices, texture coordinates, etc. lined up; how do I make my model (say, of a basic person) walk? Or, more generally, how do you animate "part" of a model (legs only, turn the head, etc.)? Does it need to be a massive mash-up of many different tiny models, or can you pre-bake animations these days "into" models (somehow)?
Truly great 3D games for the iPhone are (I'm sure) unbelievably complex, but how do people (game dev firms) manage that designer/developer workflow? Surely not all the animations, textures, etc. are done programmatically.
I hope these are not stupid questions. In actual fact, the app I'm trying to figure out how to make is really quite simple: just a basic 3D model that I want to be able to pan/tilt around using touch. Has anyone ever done/seen anything like this that I might be able to read up on?
Thanks for any help you can give, I appreciate all types of response big or small :)
Cheers,
Mark
Let me try to explain why the answer to this question will always be vague.
OpenGL ES is very low level. It's all about pushing triangles to the screen and filling pixels, and basically nothing else.
What you need to create a game is, as you've realised, a lot of code for managing assets, loading objects and worlds, managing animations, textures, sound, maybe network, physics, etc.
These parts are the "game engine".
Development firms have their own preferences. Some buy their game engine, others like to develop their own. Most use some combination of bought tech, open source, and in-house tech and tools. There are many engines on the market, and everyone has their own opinion on which is best...
Workflow and tools used vary a lot from large firms with strict roles and big budgets to small indie teams of a couple of guys and gals that do whatever is needed to get the game done :-)
For the hobbyist and indie dev, there are several cheap or open-source engines you can use, of varying maturity and amounts of documentation/support. There too, you have to look around until you find one you like.
On top of the game engine, you write your game code, which uses the engine (and any other libraries you might need) to create whatever game it is you want to make.
Something many people are surprised by when starting OpenGL development is that there's no such thing as an "OpenGL file format" for models, let alone animated ones (DirectX, for example, comes with a .x file format supported right away). This is because OpenGL acts at a somewhat lower level. Of course, as tm1rbrt mentioned, there are plenty of libraries available. You can easily create your own file format, though, if you only need geometry; things get more complex when you also want to take animation and shading into account. Take a look at Collada for that sort of thing.
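For instance, the kind of Blender export script the question mentions can be as small as dumping the triangles of the active mesh into a C header (a rough sketch for modern Blender versions, run inside Blender; the output path is a placeholder):

import bpy

# Write the active mesh's triangles into a flat float array in a C header
mesh = bpy.context.active_object.data
mesh.calc_loop_triangles()

with open("model.h", "w") as out:
    out.write("static const float modelVertices[] = {\n")
    for tri in mesh.loop_triangles:
        for vi in tri.vertices:
            x, y, z = mesh.vertices[vi].co
            out.write(f"    {x:.6f}f, {y:.6f}f, {z:.6f}f,\n")
    out.write("};\n")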
Again, animation can be done in several ways. Characters are often animated with skeletal animation; have a look at the cal3d library as a starting point for this.
You definitely want to spend some time creating a good content-creation pipeline. Artists must have a set of tools to create their models and animations and to test them in the game engine. Artists must also be instructed about the limits of the engine, both in terms of polygons and shading. Sometimes complex custom editors are coded to create levels, worlds, etc. in a way compatible with your specific needs.
Write or use a model loading library. Or use an existing graphics library; this will have routines to load models/textures already.
Animating models is done with bones in the 3D model editor. The graphics library will take care of moving the vertices, etc., for you.
No, artists create art and programmers create engines.
This is a link to my favourite graphics engine.
Hope that helps