Multiple materials or texture on low poly meshes? - unity3d

I'm creating a low poly environment in Blender and importing it into Unity. In Blender I just have a mesh with a few materials assigned to different faces. I know two approaches for getting it into Unity: export it with the materials, or bake a texture and then assign that to the object. My question is: which option is better in terms of performance?
From what I've read, using multiple materials is worse for performance (correct me if I'm wrong), but when I add the texture to the imported object, the Mesh Renderer still shows that it uses several materials. Am I importing it wrong, or should it be like this?
Here are screenshots of the Mesh Renderer before and after adding the texture:

Multiple materials hurt performance: each material on a mesh becomes its own sub-mesh and its own draw call. If your game is very small this isn't much of an issue, but it's still not recommended.
When exporting with a single texture, remember to delete the extra materials in Blender. Alternatively, in the Mesh Renderer you can set the Size value under Materials to 1; that removes the other material slots and keeps the first one, or you can assign the textured material to that slot and remove the rest.
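If you want to verify what the import actually produced, a minimal sketch like the one below (the script and component names are my own) logs the sub-mesh and material counts; each material slot from Blender becomes one sub-mesh in Unity, and each sub-mesh costs one draw call for that renderer:

    using UnityEngine;

    // Minimal sketch: attach to the imported object to see how many sub-meshes
    // (and therefore per-renderer draw calls) it really has.
    public class MaterialCountCheck : MonoBehaviour
    {
        void Start()
        {
            var mesh = GetComponent<MeshFilter>().sharedMesh;
            var renderer = GetComponent<MeshRenderer>();

            // One material slot in Blender -> one sub-mesh -> one draw call here.
            Debug.Log("Sub-meshes: " + mesh.subMeshCount +
                      ", materials: " + renderer.sharedMaterials.Length);
        }
    }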

Related

Unity Point-cloud to mesh with texture/color

I have a point cloud and an RGB texture from a depth camera that fit together. I procedurally created a mesh from a selected part of the point cloud by implementing the 3D quickhull algorithm for mesh creation.
Now I somehow need to apply the texture I have to that mesh. Note that there can be multiple selected parts of the point cloud, and thus multiple objects that need the texture. The texture is just a basic 720p file that should be applied to the mesh material.
Basically I have to do this: https://www.andreasjakl.com/capturing-3d-point-cloud-intel-realsense-converting-mesh-meshlab/ but inside Unity. (I'm also using a RealSense camera)
I tried a decal shader, but the result is not precise. The UV map is completely twisted by the creation process, and I'm not sure how to generate a correct one.
UV and the mesh
I only have two ideas but don't really know if they'll work/how to do them.
Try to create a correct UV and then wrap the texture around somehow
Somehow bake colors to vertices and then use vertex colors to create the desired effect.
What other things could I try?
I'm working on quite a similar problem, but in my case I just want to create a complete mesh from the point cloud, not just a quickhull, because I don't want to lose any depth information.
I'm nearly done with the mesh algorithm (I just need to do some optimizations). The challenging part now is matching the RGB camera's texture to the depth sensor's point cloud, because the two cameras of course have different viewpoints.
Intel provides an interesting RealSense whitepaper about this problem, and as far as I know the SDK corrects these different perspectives with UV mapping and provides a red/green UV-map stream for your shader.
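If you end up computing the UVs yourself, the basic idea is a pinhole projection of each vertex into the color camera. A rough sketch follows, assuming the mesh vertices are already expressed in the color camera's space; fx, fy, cx, cy and the image size are placeholder intrinsics, and real alignment still needs the extrinsics between the two sensors as described in the whitepaper:

    using UnityEngine;

    public static class PointCloudUv
    {
        // Rough sketch: project each vertex through a pinhole model of the RGB
        // camera to get normalized UVs for the 720p texture. The intrinsics are
        // placeholders, not real RealSense values.
        public static void AssignProjectedUvs(Mesh mesh, float fx, float fy,
            float cx, float cy, float imageWidth, float imageHeight)
        {
            Vector3[] vertices = mesh.vertices;
            Vector2[] uvs = new Vector2[vertices.Length];

            for (int i = 0; i < vertices.Length; i++)
            {
                Vector3 v = vertices[i];
                if (v.z <= 0f) continue; // behind the camera, leave UV at (0,0)

                float u = (fx * v.x / v.z + cx) / imageWidth;
                float w = (fy * v.y / v.z + cy) / imageHeight;
                uvs[i] = new Vector2(u, 1f - w); // flip V, image rows run top-down
            }

            mesh.uv = uvs;
        }
    }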
Maybe the short report can help you out. Here's the link. I'm also very interested in what you are doing. Please keep us up to date.
Regards

How to import complex .dae model to SceneKit?

I know how to export a single model (like a car) from Blender as a .dae file and then import it and show it using SceneKit, including an animation for that model.
But I'm wondering what the best way is to import more complex models, like a small part of a city: a scene with multiple cars, buildings, and people, with different animations.
Is there a way to do this without exporting everything as one model with one animation and then combining and placing everything through code in SceneKit, so that as much as possible is defined in Blender or another 3D tool?
You generally have to export complex models from 3D packages divided into smaller parts, though you do not necessarily need to export every 3D model separately (one file per model). In any case, preparing all your 3D models for use in a game engine is an extremely time-consuming process; there is no one-button solution.
A complex scene like a city can be logically divided into groups of static objects: skyscrapers, posts, asphalt, houses, benches, etc. Animated objects, like people, trees, or cars, must be exported from Blender and imported into SceneKit separately.
Remember, all the corresponding textures for these 3D objects (whether a single object or a group of objects) must be saved as UV-mapped square JPEG or PNG files (e.g. 512x512 or 1024x1024 pixels). And do not forget low-poly collision meshes for dynamics.
Look at the WWDC 2015 SceneKit session; you'll see how to build a 3D scene in Xcode's Scene Editor.
To accomplish your goal you need to export the smaller parts (logically divided, as mentioned earlier) of your 3D scene from Blender, import all the parts into your SceneKit (or ARKit) project, and then combine them through Swift code. Also, many 3D packages can export multiple animations as a single animation with so-called sub-animations; this SO post shows how to handle that.
Actually, there is a way to do this.
If you watch the WWDC video about Model I/O, the guy demonstrates how to iterate through a .USD file to easily capture the nodes, geometries, associated hierarchies, materials, animations, etc.
Unfortunately, he didn’t do this for a .dae file.
The process goes like this:
Create an array of the nodes in the scene file.
Create an array that describes the parent nodes of each node.
Create an array that describes the instances of those nodes.
Create an array that describes the materials for the nodes...
Create an array that describes the animations... do this by creating an array of the bones, their attached vertices, transforms, etc.
After all that, you have to code a function that reassembles the scene that has been described as an array of arrays.
I’m not skilled enough to do this... and I hope somebody creates an example so I can study it.
But that’s the logic.

Why do we need to separate a material into so many textures for a static object in a 3D game?

Perhaps the question isn't phrased quite correctly; the textures might better be called channels, although I know they are ultimately mixed together in the shader.
I know that understanding the various textures is very important, but it's also a bit hard to understand completely.
From my understanding:
diffuse - the 'real' color of an object without light involved.
light - for static objects; lighting is rendered into a texture (lightmap) beforehand.
specular - the areas that receive direct reflection (highlights).
ao - ambient occlusion; how much indirect light different areas of an object receive.
alpha - to 'shape' the object.
emissive - self-illumination.
normal - per-pixel normal vectors used in the lighting calculation.
bump - (I don't know the exact difference from a normal map).
height - stores height (Z) values, used to generate terrain, displace vertices, etc.
And the items below are, I believe, related to PBR materials, which I'm not familiar with:
translucency / cavity / metalness / roughness etc...
Please correct me if I've misunderstood anything.
In any case, my question is: why do we need to keep these textures separate for a material, instead of just baking them all into the diffuse map directly for a static object?
Some examples (especially for PBR) would be appreciated. Thank you very much.
I can bake everything into the diffuse map beforehand and apply it to my mesh, so why do I need to apply so many different textures?
Re-usability:
Most games re-use textures to reduce the size of the game, and you can't do that if you combine them. For example, when you have two similar objects but want to vary their looks (an aging effect, say), you can make them share the same color (albedo) map but use different AO maps. This becomes important when there are hundreds of objects: you can use different combinations of texture maps on similar objects to create unique-looking ones. If you had combined everything into one texture, it would be impossible to share it with other similar objects that should look slightly different.
Customize-able:
If you separate them, you can change how strongly each texture affects the object. For example, there is a slider on the metallic slot of the Standard shader, and there are more of these sliders on the other map slots, but they only appear once you plug a texture into the slot. You can't do this when the textures are combined into one.
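As a rough illustration (the texture fields and values here are placeholders, not from the question), keeping the maps separate lets you assign and scale each channel independently on the Standard shader from script:

    using UnityEngine;

    // Minimal sketch: because the maps are separate assets, each channel can be
    // swapped or scaled on its own without touching the others.
    public class SeparateMapsExample : MonoBehaviour
    {
        public Texture2D albedo, normal, occlusion;

        void Start()
        {
            var material = GetComponent<MeshRenderer>().material;

            material.SetTexture("_MainTex", albedo);           // color only
            material.SetTexture("_BumpMap", normal);           // surface detail
            material.EnableKeyword("_NORMALMAP");              // needed when assigned at runtime
            material.SetTexture("_OcclusionMap", occlusion);   // baked shadowing

            // The per-map "sliders" mentioned above:
            material.SetFloat("_BumpScale", 1.5f);             // stronger normals
            material.SetFloat("_OcclusionStrength", 0.7f);     // weaker AO
            // With everything baked into one diffuse map, none of this is possible.
        }
    }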
Shader:
The Standard shader can't work with a combined map, so you would have to learn to write shaders: you can't use one image to get the effects you get from all those texture maps with the Standard shader. A custom shader is required, along with a way to read each map's information out of the combined texture.
This seems like a reasonable place to start:
https://en.wikipedia.org/wiki/Texture_mapping
A texture map is an image applied (mapped) to the surface of a shape or polygon. This may be a bitmap image or a procedural texture. They may be stored in common image file formats, referenced by 3d model formats or material definitions, and assembled into resource bundles.
I would add that the shape or polygon doesn't have to belong to a 3D object as one might imagine it. If you render two triangles as a rectangle, you can run all sorts of computations and store the results in a "live" texture.
Texture mapping is a method for defining high frequency detail, surface texture, or color information on a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Edwin Catmull in 1974.
What this detail represents is either some agreed-upon format for a particular property (say, "roughness" within some BRDF model), which is what you encounter when using an engine, or whatever you decide that detail to be, if you are writing your own engine. You can store whatever you want, however you want.
You'll notice on that page that different "mapping" techniques are mentioned, each with its own article. Each is the result of someone doing research and publishing a paper detailing the technique; other people adopt it, and that's how these techniques find their way into engines.
There is no rule saying these can't be combined.
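One common way to combine them is "channel packing": storing several grayscale maps in the color channels of a single texture. A sketch under assumptions (same-sized source textures with Read/Write enabled; the particular maps chosen are just examples):

    using UnityEngine;

    public static class ChannelPacker
    {
        // Sketch of channel packing: put three grayscale maps (e.g. metallic,
        // roughness, AO) into the R, G and B channels of one texture.
        public static Texture2D Pack(Texture2D metallic, Texture2D roughness, Texture2D ao)
        {
            int width = metallic.width, height = metallic.height;
            var packed = new Texture2D(width, height, TextureFormat.RGB24, false);

            Color[] m = metallic.GetPixels();
            Color[] r = roughness.GetPixels();
            Color[] o = ao.GetPixels();
            Color[] result = new Color[m.Length];

            for (int i = 0; i < m.Length; i++)
                result[i] = new Color(m[i].r, r[i].r, o[i].r);

            packed.SetPixels(result);
            packed.Apply();
            return packed; // a custom shader then reads each channel separately
        }
    }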

Flatten 3D object to create a template for a 2D texture map

I would like to create a texture map for a 3D car model I have. I am not sure where to start. I thought maybe I could unwrap the 3D object to a 2D image and then use this as an outline to draw my texture. Is this possible, or is there a simpler solution?
Thank you in advance!
I would like to create a texture map for a 3D car model I have. I am not sure where to start
What you are asking about is called UV mapping.
"UV mapping is the 3D modeling process of projecting a 2D image to a 3D model's surface for texture mapping."
Source: https://en.wikipedia.org/wiki/UV_mapping
UV mapping is normally done when creating the model in 3D modelling software, although there may be assets for Unity that can do the same. To my knowledge, Unity itself cannot UV-map directly.
You can, however, change the texture of an object inside Unity, as well as assign objects various colours and materials.
maybe I could unwrap the 3D object to a 2D image and then use this as an outline to draw my texture
To my knowledge you need 3D modelling software to do so, but yes, it is possible.
You can try to do it through scripting, but I'd recommend looking into 3D modelling software instead, as even if it is possible through scripting it will likely be bothersome.
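For completeness, here is roughly what scripted UVs could look like. This is only a minimal sketch of a top-down planar projection (the class name and approach are my own, not a proper unwrap); for a car body you would still want to unwrap in a modelling tool:

    using UnityEngine;

    // Minimal sketch: each vertex's X/Z position, normalized by the mesh bounds,
    // becomes its UV. Fine for flat-ish surfaces, unsuitable for a full car body.
    [RequireComponent(typeof(MeshFilter))]
    public class PlanarUvProjector : MonoBehaviour
    {
        void Start()
        {
            Mesh mesh = GetComponent<MeshFilter>().mesh;
            Bounds bounds = mesh.bounds;
            Vector3[] vertices = mesh.vertices;
            Vector2[] uvs = new Vector2[vertices.Length];

            for (int i = 0; i < vertices.Length; i++)
            {
                uvs[i] = new Vector2(
                    (vertices[i].x - bounds.min.x) / bounds.size.x,
                    (vertices[i].z - bounds.min.z) / bounds.size.z);
            }

            mesh.uv = uvs;
        }
    }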
3D modelling software I know of:
Blender - Free
Maya - Licensed
3DS Max - Licensed

Automatically generating Level of Detail information

I have a very simple but large scene containing lots of objects, and many of them are small but curved, so they have high polygon counts. The FPS in the scene is really horrible. I learned that a Level of Detail (LOD) optimization should help a lot.
I am using three.js, which has an option to set LOD, but the model doesn't have any LOD information (alternate meshes for each object corresponding to viewing distance). Is there a tool to generate this information automatically by decimating the original mesh to create the alternate meshes?
But I can't imagine how textures would be mapped onto the decimated meshes. Do I have to create the LOD information manually? 3D editors like Blender, 3ds Max, and the Unity editor let me set these meshes up individually, but I have about 200 meshes in my scene.
Level of Detail information generally cannot be generated automatically, and yes, creating the LOD info is a painstaking process. You can look at the LOD Book site for help.
The accepted answer to this question is actually not quite correct anymore.
While it's true that creating LOD data is a painstaking process, it becomes easy when using InstaLOD. InstaLOD is a fully automatic 3D optimization solution that can optimize any static or skeletal mesh while maintaining all vertex attributes such as texture coordinates. Besides polygon optimization, InstaLOD also features remeshing, occlusion culling, impostor creation, and other methods for optimizing individual 3D models and complex scenes.
DISCLAIMER: I am one of the devs of InstaLOD.