Moving/dressing a mesh in a 3D engine (iPhone)

I'm new to 3D game development, so I have a rather basic question: how do I move a mesh in a 3D engine (by "move" I mean play a walking animation)? And how do I put a skin on it?
What I have:
- An open-source 3D OpenGL engine, NinevehGL (http://nineveh.gl/). It's super easy to load a mesh into, and I'm pretty sure it will be an awesome engine once it's released!
- A mesh model of a human: http://www.2shared.com/file/RTBEvSbf/female.html (a female mesh I downloaded from an open-source website).
- A website from which I can download skeleton animations in DAE (COLLADA), XML, and BVH formats: http://www.animeeple.com/details/bcd6ac4b-ebc9-465e-9233-ed0220387fb9
What I'm stuck on: see the attached image.
So, how can I join all these pieces and make a simple game in which the dressed human walks forward and backward?

The question is difficult to answer because it would require knowledge of the engine's API. Also, you can't just stick a skeletal animation onto an arbitrary mesh. You need a connection between the two, a process called rigging, in which you add "bones" (also called armatures) to the mesh. This is an artistic process, done in a 3D modeller. Then you need to implement a skeletal animation system, which is far too complex a task to cover in a single Stack Overflow answer (it involves animation curve evaluation, quaternion interpolation, skinning matrices, etc.).
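To give a rough idea of the skinning part only (this is a sketch of standard linear blend skinning, not anything NinevehGL-specific): each skinned vertex v is transformed by a weighted sum of its bones' current transforms M_i composed with their inverse bind-pose transforms B_i^{-1},

    v' = \sum_i w_i \, M_i \, B_i^{-1} \, v

where the per-vertex weights w_i are assigned during rigging and sum to 1. Most of the rest of a skeletal animation system exists just to produce the M_i for the current frame.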
You should break down your question into smaller pieces.

I'm a beta tester for the NinevehGL engine.
Currently, the engine does not support bones/skeleton animation. This will be part of their next release, which is expected in the next 4-8 months.
Future (Roadmap): Version 0.9.3, Q4 2011 - Q1 2012:
- Bones, rigging and mesh animations
- Mesh morphing
You might want to check out http://nineveh.gl/docs/changelog/

Related

Implementing paid 3D models into Unity games

I've started learning game development with Unity, and there's something I haven't been able to fully understand. I stumbled across the Sketchfab website and noticed its cool market of 3D models, and I was wondering what the requirements are for importing such a model into an actual game.
For example this one already has animations:
https://sketchfab.com/3d-models/royal-knight-895d1c1d222d4efd9f264318e8ab0cb2
But, on the other hand, others don't:
https://sketchfab.com/3d-models/crusader-knight-b079a8e34f454836bc8107c21c8c47fe
I have basically 2 questions:
If I buy the first model, will that save me a lot of time, so I can jump straight into implementing the character in an actual game, adding custom scripts to it, etc.?
If I buy the second one, what would I need to do to actually animate this character? Is this something I can learn from Unity tutorials, or would I need to import it into a tool like Blender to add the animations?
This question invites a lot of answers. The first model you show comes in .fbx format, and the animations should work fine; this format is typically what you want to use with Unity.
The second model is not rigged (see the product description). That means you will have to rig every bone yourself (in Blender) and make it compatible with Unity. I never buy a model that isn't rigged.
To add animations to the second character, you can download some from www.mixamo.com or use many of the animations you will find in the Unity Asset Store.
Personally, I prefer getting my models from www.turbosquid.com. You can search by format, including .unitypackage.
As Jiveturkey said, the first model is directly compatible with Unity and doesn't require any additional steps, so if you're looking to focus solely on building the game without worrying about animation, you might want to go with the first model.
The second model isn't rigged, so you would have to manage all the rigging and animating yourself. Unity does have a built-in rigging package, so you could do that within Unity rather than using Blender (there are tutorials for rigging in Unity, including one directly from Unity).
Unity can read .fbx, .dae (COLLADA), .3ds, .dxf, .obj, and .skp files for 3D models, and that's pretty much the only requirement. There are also plenty of sites with free 3D assets if you don't want to spend the money: Itch.io, the Unity Asset Store, and many more; these are just the ones that come to mind.

ocean simulation in Blender vs. Unity3D

I want to make an ocean simulation that is physically accurate.
The height and speed of the waves should be controlled by the keyboard at runtime.
In the ocean, there needs to be a boat that either moves along a path or is controlled by the keyboard.
So far I have made this simulation in Blender:
https://youtu.be/LJ6ncxv-k7w
The problems are as follows:
1. There is no collision with the ocean
2. There are no controllers for the boat's movement
3. I am able to control the waves, but not at runtime
I thought about switching to Unity because the user interface is obviously better, as it is a game engine. I do not want to use Blender's game engine as its future is uncertain at this point.
After reviewing the various Unity water simulation plugins, I came to these conclusions:
1. The buoyancy is great in most of them, such as Aquas and SUIMONO.
2. None of them seems to offer physically realistic collision with the boat.
3. They do offer wave-height control, but not much else as far as wave properties go.
4. Some of the plugins can be combined to get closer to satisfactory results.
My question is:
Should I go with Unity completely?
It seems perfect for my user control needs, but the plugins are lacking in the collision aspect. I came across this video, but no tutorial: https://www.youtube.com/watch?v=T0D_vrYm4FQ
Even if there were one, how could I combine it with the plugins?
Is there a way to build the scene in Blender and then import it into Unity?
Would I be able to control the waves and boat after importing them?
Thank you very much for your time and knowledge.
If you really mean an ocean, I suggest you check out NVIDIA WaveWorks. It's a C library and doesn't have an official integration with Unity3D, but since you've come this far, I imagine you'll have enough determination to try turning it into a usable plugin yourself.
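As a rough sketch of what the buoyancy in those plugins approximates (standard Archimedes' principle, not tied to any particular asset): the upward force on the boat is proportional to the volume of water it currently displaces,

    F_b = \rho_{water} \, g \, V_{displaced}

applied at the centroid of the submerged volume, usually with an extra velocity-proportional damping term so the hull doesn't oscillate forever. How carefully a plugin estimates that submerged volume is largely what determines how "physical" the boat interaction feels.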

3D AR Markers with Project Tango

I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough, the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 pro and are trying to figure out if Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location, but the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images as Vuforia does.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. That definitely sounds worth testing.
Option B) Tango
Tango doesn't natively support markers other than AR tags and QR codes.
It also doesn't cope well with the area-learnt scene moving. If your 3D-printed objects stay stationary, you can scan an ADF and should get good-quality tracking; if everything stays still, you should see a little drift, but not too much.
However, if you move those 3D-printed objects, it will definitely throw the tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with AR markers using Tango's AR marker detection (I'm not sure; is that what you already tried?). If that approach doesn't work, I think your only Tango option is to add more features, lighting, etc. to the space to make the tracking more solid.
Overall, natural feature tracking with Vuforia (or marker tracking, for robustness) sounds better suited to what I think your project is doing, since users will mostly be looking at the AR tag/NFT objects. However, if its robustness isn't up to scratch, Tango could provide a similar solution.

iPhone app 3d engine or not

I am developing a simple iPhone app, which:
retrieves data from the server
presents the data
In order to present the data better, I want to add nice 3D dynamic objects, for example:
a car with spinning wheels next to a car-sales bar chart;
a power plant with smoke coming out of its chimney next to CO2 emission figures.
The questions are:
How do I work with the designer on this? What output (in what format) should they provide for me?
How do I put it into my application? Should I involve a 3D engine/framework?
The team behind cocos2d has just announced cocos3d, and it seems really promising.
The first public beta can be downloaded here:
http://www.cocos2d-iphone.org/archives/1274
You can use cocos2d for the iPhone and fake the 3D with the art. So you have a car that is drawn to look 3D, but you're only using 2D to display it. The effects you want don't require full 3D models.
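For example, here is a minimal sketch of that idea in cocos2d-iphone (the asset names, positions, and class name are placeholders, not from any real project): a pre-rendered "3D-looking" car image plus a separately spinning wheel sprite.

    // CarLayer.m - a cocos2d layer that fakes a 3D car using 2D sprites.
    #import "cocos2d.h"

    @interface CarLayer : CCLayer
    @end

    @implementation CarLayer
    - (id)init {
        if ((self = [super init])) {
            // The car body is just artwork drawn to look 3D.
            CCSprite *car = [CCSprite spriteWithFile:@"car_side.png"];
            car.position = ccp(240, 160);
            [self addChild:car];

            // A wheel sprite rotating forever sells the motion
            // without any actual 3D geometry.
            CCSprite *wheel = [CCSprite spriteWithFile:@"wheel.png"];
            wheel.position = ccp(200, 130);
            [self addChild:wheel];
            [wheel runAction:[CCRepeatForever actionWithAction:
                              [CCRotateBy actionWithDuration:1.0f angle:360.0f]]];
        }
        return self;
    }
    @end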
You may also have a look at this one I discovered recently:
http://nineveh.gl/
It's pretty new, but well documented and with video demos.

What framework should I use to develop a game for the iPhone?

I want to develop this game for the iPhone. Which framework would be best to use (e.g., Cocoa, cocos2d, OpenGL ES)?
I would look at
http://code.google.com/p/cocos2d-iphone/
You get a whole engine to help with your app and to get some pretty tricky stuff working.
Cocos2d provides the following:
- Scene management (workflow)
- Transitions between scenes
- Sprites and sprite sheets
- Effects: Lens, Ripple, Waves, Liquid, Twirl, etc.
- Actions (behaviors):
  - Transformation actions: Move, Rotate, Scale, Jump, etc.
  - Composable actions: Sequence, Spawn, Repeat, Reverse
  - Ease actions: Exp, Sin, Cubic, etc.
  - Misc actions: CallFunc, OrbitCamera
- Basic menus and buttons
- Integrated Chipmunk 2D physics engine
- Particle system
- Text rendering support
- Texture atlas support
- Tile map support
- Parallax scrolling support
- High-score server (Cocos Live)
- Touch/accelerometer support
- Portrait and landscape mode
- Integrated pause/resume
- PowerVR Texture Compression (PVRTC) support
- Language: Objective-C
- Open source: compatible with open- and closed-source projects
- OpenGL ES 1.1 based
Cocos2d uses OpenGL ES, and given that the game you're looking to build is a simple sprite-based game, I think it would be a fine toolkit to use.
On the other hand, for performance you might find that Quartz 2D is significantly faster, and it gives you useful things like keyframe animation for free. The reason it can be faster is that OpenGL must always handle the possibility of 3D rendering and all the ways your polygons could interact with the image space: projections, z-ordering along vertices, and so on. Quartz 2D, however, is fixed in 2D space, so it provides a good 2D space-management toolkit with the extra overhead OpenGL carries stripped out.
I've done both, and I've found Quartz 2D to be a simple and fast toolkit to learn and definitely easy to program in. In the future, when I build simple 2D sprite-based apps, I'll be using Quartz 2D myself.
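As a taste of the "keyframe animation for free" part, here is a minimal sketch using CAKeyframeAnimation from the QuartzCore framework, which is most likely the facility this answer has in mind; the image name and coordinates are placeholders:

    #import <QuartzCore/QuartzCore.h>

    // Inside a UIViewController subclass: slide a sprite layer along a
    // keyframe path without writing any per-frame drawing code.
    - (void)viewDidLoad {
        [super viewDidLoad];

        CALayer *sprite = [CALayer layer];
        sprite.bounds = CGRectMake(0, 0, 64, 64);
        sprite.position = CGPointMake(40, 200);
        // Under ARC, cast with (__bridge id) instead.
        sprite.contents = (id)[UIImage imageNamed:@"car.png"].CGImage;
        [self.view.layer addSublayer:sprite];

        CAKeyframeAnimation *move =
            [CAKeyframeAnimation animationWithKeyPath:@"position"];
        move.values = [NSArray arrayWithObjects:
                       [NSValue valueWithCGPoint:CGPointMake(40, 200)],
                       [NSValue valueWithCGPoint:CGPointMake(160, 170)],
                       [NSValue valueWithCGPoint:CGPointMake(280, 200)],
                       nil];
        move.duration = 2.0;
        // Note: the layer snaps back to its model position when the
        // animation ends unless you also update sprite.position.
        [sprite addAnimation:move forKey:@"drive"];
    }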
I would go with OpenGL ES. It would allow you to do 3D and some cool stuff later on if you wanted.
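For comparison, here is a rough sketch (not from either answer) of what drawing a single flat 2D quad looks like in raw OpenGL ES 1.1; it assumes an EAGL context and framebuffer have already been set up and bound elsewhere, and the angle parameter is just a placeholder:

    #import <OpenGLES/ES1/gl.h>

    // Draw one flat-colored quad, rotated by `angle` degrees around z.
    // Assumes an ES 1.1 context is current and the projection is already set.
    static void DrawSpriteQuad(GLfloat angle) {
        static const GLfloat vertices[] = {
            -0.5f, -0.5f,
             0.5f, -0.5f,
            -0.5f,  0.5f,
             0.5f,  0.5f,
        };

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef(angle, 0.0f, 0.0f, 1.0f);

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, vertices);
        glColor4f(1.0f, 0.2f, 0.2f, 1.0f);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glDisableClientState(GL_VERTEX_ARRAY);
    }

Even this trivial case needs explicit vertex arrays and matrix state, which is the overhead the previous answer is talking about; the payoff is that the same setup scales to real 3D later on.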