Geometry objects created in Blender lose detail when imported into SceneKit

I'm using Blender to design a custom geometry object which will be used in my SceneKit scene. I import it using childNodeWithName:. I experience a significant loss in the accuracy of the object's representation: in Blender it looks terrific (see the upper picture), but in SceneKit it is full of sharp edges (bottom image). What must be done to avoid such a loss of detail? I'm importing a cube with smooth edges, so I'm guessing that shouldn't be too hard to represent in SceneKit.
EDIT: I added a visual representation of my problem.

You probably have a smoothing modifier (such as Subdivision Surface, or something like that) that you have to apply (bake) before you export your model.
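If the model still looks faceted after applying the modifier, here is a minimal Swift sketch of the import side (the file name "model.dae" and node name "Cube" are assumptions, not taken from the question); as a fallback, SceneKit can also subdivide the imported geometry at runtime:

    import SceneKit

    // Hypothetical file and node names; adjust to your asset.
    let scene = SCNScene(named: "model.dae")!
    if let cube = scene.rootNode.childNode(withName: "Cube", recursively: true) {
        // Optional fallback: ask SceneKit to subdivide the mesh at load time.
        // Applying the smoothing modifier in Blender before export remains
        // the primary fix described above.
        cube.geometry?.subdivisionLevel = 2
    }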


Related

FBX from Maya imports into Unity with wrong joint rotations

I'm working on a project that needs to read joint data from a motion capture device to reconstruct the motion.
I already have a model in Maya where all the joints have their Y axis pointing up. However, when I import that model into Unity, many joints' rotations change. I've tried different importing methods, including exporting an FBX and then loading it into Unity, and exporting to Unity directly. I've also changed the default rotation order in Maya from XYZ to ZXY. But the problem still exists.
The mesh twists due to the wrong rotations, as shown in the picture below. I've googled a lot but still haven't found a solution.
[Screenshot: errors listed]
Usually the joint direction doesn't matter, though I used to set "z" instead of "y" in my character rigs.
Go to your FBX import settings and choose Humanoid as the Rig/Animation Type. Then create a new Avatar Definition from this model.
You should now be able to remap the wrong or missing joints. Most likely the character's joints are mapped in the wrong order.

Casting a Civilization V-style hex grid onto Unity terrain and selecting certain areas of the grid

I am looking for an approach to casting a hex-based grid onto terrain. For now the terrain is pre-made, but eventually it will be procedural, for my exploration game where you can scan the planet and elements will be highlighted (hex grid cells selected). What could be the approach to making this kind of hex grid, given that my terrain is uneven?
I have seen approaches like mesh creation, using a tile-map, and Unity projectors, but eventually I feel this should be done with shaders. But then what about selection?
Can someone please guide me in the right direction.
I think this topic is better suited for https://gamedev.stackexchange.com .
My tips for you:
I think the hex grid projection can be solved with Unity's built-in Projector. You can use an orthographic projection with it, so it does not matter if your terrain is uneven, and it also has a convenient way to select which layers are affected (terrain, your buildings, etc.).
(The Projector is shader magic, though: it blends the texture you give it with the layer below it.)
If the Projector does not satisfy your needs, I'm pretty sure there are grid shaders already written for Unity.
About the selection: I think you could also solve that with a Projector, or add some trail effect to the grid boundaries? I guess you're still going to store the boundaries anyway.
About country borders in Civ:
I think they cast a spline using the hex grid border points, then blend it onto the terrain. I saw a shader that could draw lines on a terrain, so you might find it!
Keywords to search for: Bézier, Catmull–Rom, spline, terrain shader
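Whichever way the grid is drawn (Projector or shader), selection ultimately needs a mapping from a world-space point to a hex cell. A minimal sketch of that mapping, written here in Swift and assuming pointy-top hexes of circumradius hexSize laid out on the X/Z plane (all names are illustrative, not from the question):

    import Foundation

    struct HexGrid {
        let hexSize: Double

        // Convert a terrain-space point (height ignored) to axial hex coordinates.
        func cell(forX x: Double, z: Double) -> (q: Int, r: Int) {
            let qf = (sqrt(3.0) / 3.0 * x - z / 3.0) / hexSize
            let rf = (2.0 / 3.0 * z) / hexSize
            return rounded(q: qf, r: rf)
        }

        // Standard cube-coordinate rounding so points near cell edges snap correctly.
        private func rounded(q: Double, r: Double) -> (q: Int, r: Int) {
            let s = -q - r
            var rq = q.rounded()
            var rr = r.rounded()
            let rs = s.rounded()
            let dq = abs(rq - q), dr = abs(rr - r), ds = abs(rs - s)
            if dq > dr && dq > ds {
                rq = -rr - rs
            } else if dr > ds {
                rr = -rq - rs
            }
            return (Int(rq), Int(rr))
        }
    }

    // Example: which hex contains the terrain point (12.3, 7.8)?
    let grid = HexGrid(hexSize: 2.0)
    let picked = grid.cell(forX: 12.3, z: 7.8)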

How to import a complex .dae model into SceneKit?

I know how to export a single model (like a car) from Blender as a .dae file, and then import it and show it using SceneKit, including an animation for that model.
But I'm wondering what the best way is to import more complex models, like a small part of a city: a scene with multiple cars, buildings and people, with different animations.
Is there a way to do this without exporting everything as one model with one animation, and then combining and placing everything through code in SceneKit? So that as much as possible is defined in Blender or another 3D tool.
It's obvious that you must export complex models from 3D packages divided into smaller parts. And, of course, you do not necessarily need to export every 3D model separately (on a per-model basis). In any case, preparing all your 3D models for use in a game engine is an extremely time-consuming process. There's no one-button solution.
A complex scene like a city can be logically divided into groups of static objects: skyscrapers, posts, asphalt, houses, benches, etc. But animated objects, like people, trees or cars, must be exported from Blender and imported into SceneKit separately.
Remember, all corresponding textures for these 3D objects (whether a single object or a group of objects) must be saved as UV-mapped square JPEG or PNG files (like 512x512 or 1024x1024 pixels). And do not forget about low-poly collision meshes for dynamics.
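For the collision part, a hedged Swift sketch of attaching a separate low-poly mesh as the physics shape of a detailed render node (the node name "car_collision" is hypothetical):

    import SceneKit

    func attachPhysics(to renderNode: SCNNode, in scene: SCNScene) {
        // A hidden low-poly proxy exported alongside the detailed mesh.
        guard let proxy = scene.rootNode.childNode(withName: "car_collision",
                                                   recursively: true) else { return }
        proxy.isHidden = true   // the collision proxy should not be drawn
        let shape = SCNPhysicsShape(node: proxy,
                                    options: [.type: SCNPhysicsShape.ShapeType.convexHull])
        renderNode.physicsBody = SCNPhysicsBody(type: .dynamic, shape: shape)
    }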
Look at the WWDC 2015 SceneKit session. You'll see how to build a 3D scene in Xcode's Scene Editor.
To accomplish your goal you need to export smaller parts (logically divided, as I mentioned earlier) of your 3D scene from Blender, import all the parts into your SceneKit (or ARKit) project and then combine them all through Swift code. Also, many 3D packages can export multiple animations as a single animation with so-called sub-animations. In this SO post you can find how to handle it.
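For example, a minimal Swift sketch of that combining step, assuming the parts were exported to hypothetical files named buildings.scn, cars.scn and people.scn:

    import SceneKit

    func assembleCity(into scene: SCNScene) {
        let partNames = ["buildings", "cars", "people"]   // hypothetical assets
        for name in partNames {
            guard let part = SCNScene(named: "\(name).scn") else { continue }
            // Re-parent every top-level node of the part into the main scene.
            for node in part.rootNode.childNodes {
                scene.rootNode.addChildNode(node)
            }
        }
    }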
Actually, there is a way to do this.
If you watch the WWDC video about Model I/O, the guy demonstrates how to iterate through a .USD file to easily capture the nodes, geometries, associated hierarchies, materials, animations, etc.
Unfortunately, he didn’t do this for a .dae file.
The process goes like this:
Create an array of the nodes in the scene file.
Create an array that describes the parent nodes of each node.
Create an array that describes the instances of those nodes.
Create an array that describes the materials for the nodes...
Create an array that describes the animations... do this by creating an array of the bones, their attached vertices, transforms, etc.
After all that, you have to code a function that reassembles the scene that has been described as an array of arrays.
I’m not skilled enough to do this... and I hope somebody creates an example so I can study it.
But that’s the logic.
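As a starting point for step 1 of that list, a hedged Swift sketch (the file path and names are illustrative) that uses Model I/O to walk an asset's object hierarchy and print its nodes and meshes:

    import ModelIO
    import SceneKit
    import SceneKit.ModelIO

    // Recursively visit each MDLObject in the hierarchy.
    func walk(_ object: MDLObject, depth: Int = 0) {
        let indent = String(repeating: "  ", count: depth)
        print("\(indent)\(object.name) [\(type(of: object))]")
        if let mesh = object as? MDLMesh {
            print("\(indent)  vertices: \(mesh.vertexCount)")
        }
        for child in object.children.objects {
            walk(child, depth: depth + 1)
        }
    }

    // Hypothetical path; Model I/O reads formats like .usd/.obj directly,
    // and an SCNScene loaded from a .dae can be bridged with MDLAsset(scnScene:).
    let asset = MDLAsset(url: URL(fileURLWithPath: "/path/to/scene.usdz"))
    for index in 0..<asset.count {
        walk(asset.object(at: index))
    }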

Importing objects from Blender to Unity correctly

I have recently made a few 3D objects in Blender and I want to import them into Unity3D. I know the basics: I should export them to an FBX file. But I wonder if there are any other things that are important when exporting to Unity? For example, where should I position the origin of my object (near the ground or in the center)?
Typically you want the origin to be the bottom of the character (feet), and another good idea when importing is to make the model a child of an empty object before saving it as a prefab. That parent empty object will then have driver scripts that control stuff like movement and animation.
EDIT: This is one example of why you might want to do the empty parent method, but it isn't directly applicable to your question: http://docs.unity3d.com/Manual/HOWTO-FixZAxisIsUp.html
1) Apply location, rotation and scale.
2) Set the origin of the model to the bottom.
3) Delete the camera and lights.

Automatically generating Level of Detail information

I have a very simple but large scene containing lots of objects, and many of these objects are small but curved, so they have large polygon counts. The FPS on the scene is really horrible. I learned that a Level of Detail optimization should help a lot.
I am using three.js and it has an option to set LOD. But the model doesn't have any LOD information (alternate meshes for each object corresponding to distance from the object). Is there something like a tool to generate this information automatically, by decimating the original mesh to create the alternate meshes?
Also, I can't imagine how textures will be mapped onto the decimated meshes. Do I have to create the LOD information manually? 3D editors like Blender, 3ds Max and the Unity editor let me set these meshes up individually, but I have about 200 meshes in my scene.
Level of Detail information generally cannot be generated automatically. And yes, it is a painstaking process to create the LOD info. You can look at the LOD Book site for help.
The accepted answer to this question is actually not quite correct anymore.
While it's true that it's a painstaking process to create LOD data, it gets easy when using InstaLOD. InstaLOD is a fully automatic 3D optimization solution that's able to optimize any static or skeletal mesh while maintaining all vertex attributes like texture coordinates. Besides polygon optimization, InstaLOD also features remeshing, occlusion culling, imposter creation and other methods for optimizing individual 3D models and complex scenes.
DISCLAIMER: I am one of the devs of InstaLOD.