iPhone app: 3D engine or not?

I am developing a simple iPhone app, which:
retrieves data from the server
presents the data
To present the data better I want to add nice 3D dynamic objects, for example:
a car with spinning wheels next to a car sales bar chart
a power plant with smoke coming out of the chimney next to CO2 emission numbers
The questions are:
How do I work with the designer on this: what output (in what format) should he provide for me?
How do I put it into my application; should I involve some 3D engine/framework?

The team behind cocos2d has just announced cocos3d, and this seems really promising.
The first public beta can be downloaded here:
http://www.cocos2d-iphone.org/archives/1274

You can use cocos2d for the iPhone and fake the 3D with the art. So you have a car that is drawn to look 3D, but you're only using 2D to display it. The effects you want don't require full 3D models.
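To make the trick concrete: it is just a pre-rendered frame sequence played back as a 2D animation. The answer suggests cocos2d (an Objective-C framework); below is a minimal sketch of the same idea in Swift with SpriteKit, since the technique itself is framework-agnostic. The frame names (car_wheel_0 and so on) are hypothetical assets your designer would export:

```swift
import SpriteKit

// "Fake 3D" car: the designer pre-renders the car (with the wheels at several
// rotation angles) as a sequence of 2D images; looping through them makes the
// wheels appear to spin, with no 3D engine involved.
func makeSpinningCarNode() -> SKSpriteNode {
    // Hypothetical assets: car_wheel_0 ... car_wheel_7 in the app bundle.
    let frames = (0..<8).map { SKTexture(imageNamed: "car_wheel_\($0)") }
    let car = SKSpriteNode(texture: frames[0])
    car.run(.repeatForever(.animate(with: frames, timePerFrame: 0.05)))
    return car
}
```

This also answers the format question: the designer only needs to deliver transparent PNG frame sequences rendered from whatever 3D tool they use; the app never sees a 3D model.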

You may also have a look at this one, which I discovered recently:
http://nineveh.gl/
It's pretty new, but well documented and with video demos.

Related

Implementing paid 3D models into Unity games

I've started learning game development using Unity, and there's a thing I wasn't able to fully understand. I stumbled across the Sketchfab website, noticed its cool market with 3D models, and wondered what the requirements are to import such a model into an actual game.
For example this one already has animations:
https://sketchfab.com/3d-models/royal-knight-895d1c1d222d4efd9f264318e8ab0cb2
But on the other hand others don't have:
https://sketchfab.com/3d-models/crusader-knight-b079a8e34f454836bc8107c21c8c47fe
I basically have 2 questions:
If I buy the first model, is this going to save me a lot of time, so that I can jump straight into implementing the character into an actual game, adding custom scripts to it, etc.?
If I buy the second one, what would I need to do to actually animate this character? Is this something I can learn from Unity tutorials, or would I need to import it into a tool like Blender to further improve the model with animations?
This question invites a lot of answers. The first model you show does come in .fbx format, and the animations will hopefully work fine. This format is typically what you want to use with Unity.
The second model is not rigged (look at the product description). This means you will have to rig every bone yourself (in Blender) and make it compatible with Unity. I never buy a model that isn't rigged.
To add animations to the second character, you can download some from www.mixamo.com or use many of the animations you will find in the Unity Asset Store.
Personally, I prefer getting my models from www.turbosquid.com. You can search by format there, including .unitypackage.
As Jiveturkey said, the first model is directly compatible with Unity and doesn't require any additional steps, so if you're looking to focus solely on building the game without worrying about animation, you might want to go with the first model.
The second model isn't rigged, so you would have to manage all the rigging and animating yourself. Unity does have a built-in rigging package, so you would be able to do that within Unity rather than using Blender (there is a tutorial for rigging in Unity, and a rigging tutorial directly from Unity).
Unity can read .fbx, .dae (Collada), .3ds, .dxf, .obj, and .skp files for 3D models, and that's pretty much the only requirement. There are also tons of sites with free 3D assets if you don't want to spend the money: Itch.io, the Unity Asset Store, and many more; these are just the ones that come to mind.

3D AR Markers with Project Tango

I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough, the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 pro and are trying to figure out if Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, by letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location; however, the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images like Vuforia.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. This definitely sounds worth testing.
Option B) Tango
Tango doesn't natively support markers other than AR tags and QR codes.
It also doesn't support the area-learned scene moving (much). If your 3D-printed objects stay stationary, you could scan an ADF and should get good-quality tracking; if all the objects stay still, you should see a little drift, but not too much.
However, if you move those 3D-printed objects, it will definitely throw the tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with AR markers using Tango's ARMarker detection (unsure: is that what you tried already?). If that approach doesn't work, I think your only Tango option is to add more features, lighting, etc. to the space to make the tracking more solid.
Overall, natural-feature tracking with Vuforia (or marker tracking, for robustness) sounds better suited to what I think your project is doing, as users will mostly be looking at the AR-tag/NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.

How to mask 3D Objects with Real-World-Objects in front of them? (Tango, Unity)

All the Tango apps and demos I have seen so far have one major limitation: 3D objects are always "on top" of the real-world camera image. They are placed correctly in 3D space, but a real object in front of a virtual object will not overlap it!
Question:
Is it possible to mask 3D objects or parts of them in realtime by real world objects in front of them?
In theory, the 3D data delivered by the Tango sensors should be sufficient to do this. But I wonder if anyone has done it before, or if there might be performance limitations that make this impossible? Thanks for your advice!
One approach is to use the 3D Reconstruction library (search for "Unity How-to Guide: Meshing with Color") to pre-scan the environment, and then use this model to provide depth data when rendering the AR scene. There is a video of an AR game that appears to use this technique. It's not perfect for sure, but it does sort of work.
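Tango/Unity specifics aside, the underlying trick is renderer-agnostic: draw the pre-scanned environment mesh into the depth buffer only, so real-world geometry occludes any virtual object behind it. Here is a sketch of such a depth-only "occluder" material in Swift with SceneKit (my assumption as an illustration; in Unity the equivalent would be a shader that writes depth with color output masked off):

```swift
import SceneKit

// Turn a pre-scanned environment mesh into an invisible occluder:
// it writes depth but no color, so virtual objects behind real-world
// geometry fail the depth test and are masked out.
func makeOccluderNode(from environmentMesh: SCNGeometry) -> SCNNode {
    let material = SCNMaterial()
    material.colorBufferWriteMask = []    // no color output at all
    material.writesToDepthBuffer = true   // but still fill the depth buffer
    environmentMesh.materials = [material]

    let node = SCNNode(geometry: environmentMesh)
    node.renderingOrder = -1  // draw occluders before the virtual content
    return node
}
```

The rendering order matters: the occluder must be drawn first so its depth values are already in the buffer when the virtual objects are tested against it.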
This question has been asked before.

iPhone iOS how to create a room for an iPhone game demo?

I'm building an iPhone Core Motion game demo and would like to have a virtual "room" around the user. The user would use the phone, with Core Motion, to "look around" the room through the phone. Attached is an example.
I'm not looking for anything fancy. Four solid-color panels for walls and two panels for the floor and ceiling would do: pretty much a large cube with its middle at the user's location.
What is the quickest way for me to create a room with a box geometry, putting the user in the middle? Can this be done with UIKit objects, or do I need to use OpenGL to render the panels? Maybe there's some kind of game engine that I can use for these purposes?
I would also want to rotate the room in the future.
Thank you for your input!
You won't be able to create a 3-dimensional environment without using OpenGL in some form. The best way to get started is to follow a good tutorial on OpenGL, such as this one. You could even take that tutorial and put the camera inside the cube, and voila: instant room. You would just need to add view-rotation logic from Core Motion and you would be set.
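This answer predates SceneKit, which now wraps the GPU for you and makes the "large cube with the user in the middle" nearly trivial. A rough sketch in Swift under that assumption, with arbitrary colors and dimensions; the Core Motion axis mapping below is also an assumption and may need adjusting for your interface orientation:

```swift
import SceneKit
import CoreMotion
import UIKit

// A solid-color cube "room" with the camera at its center; Core Motion
// drives the camera orientation so the user can look around the room.
func makeRoomScene(cameraNode: SCNNode) -> SCNScene {
    let scene = SCNScene()

    let box = SCNBox(width: 10, height: 10, length: 10, chamferRadius: 0)
    // One flat-colored material per face; cull front faces so only the
    // interior walls are visible from inside the cube.
    box.materials = [UIColor.red, .green, .blue, .yellow, .gray, .white].map { color -> SCNMaterial in
        let m = SCNMaterial()
        m.diffuse.contents = color
        m.cullMode = .front
        m.lightingModel = .constant
        return m
    }
    scene.rootNode.addChildNode(SCNNode(geometry: box))

    cameraNode.camera = SCNCamera()
    cameraNode.position = SCNVector3Zero  // user stands in the middle
    scene.rootNode.addChildNode(cameraNode)
    return scene
}

// Feed the device attitude into the camera so tilting the phone pans the view.
func startLookAround(_ motionManager: CMMotionManager, cameraNode: SCNNode) {
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let q = motion?.attitude.quaternion else { return }
        cameraNode.orientation = SCNQuaternion(Float(q.x), Float(q.y), Float(q.z), Float(q.w))
    }
}
```

Rotating the room later is then just a matter of animating the box node's rotation rather than the camera's.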

3D free rotation of object

I have a 3D CAD file of a set of products. I want to create a viewer so that the user can freely rotate the object in 3D.
How would I best go about this?
1) I had thought about exporting a series of images around the object, one every 30 degrees, but that would be around 360 images per product. Then I'd write the code to handle the matrix that would be required to rotate the object. Seems very excessive, but doable (a sketch of this approach follows the question).
2) OpenGL: I have never done any 3D animation using this, though.
We are using LightWave 3D, if that helps.
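For reference, option 1 needs no 3D code at all: a pan gesture simply indexes into the pre-rendered frame array. A minimal sketch in Swift/UIKit, where the asset names and the one-frame-per-30-degrees count (12 per ring) are hypothetical:

```swift
import UIKit

// Option 1 as code: swap pre-rendered frames as the user drags.
final class TurntableView: UIImageView {
    // Hypothetical assets: product_00 ... product_11, one per 30 degrees.
    private let frames = (0..<12).map { UIImage(named: String(format: "product_%02d", $0)) }
    private var index = 0

    override init(image: UIImage?) {
        super.init(image: image)
        isUserInteractionEnabled = true
        addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(pan(_:))))
    }
    required init?(coder: NSCoder) { fatalError("not supported in this sketch") }

    @objc private func pan(_ gesture: UIPanGestureRecognizer) {
        // Every ~20 points of horizontal drag advances one 30-degree frame.
        let step = Int(gesture.translation(in: self).x / 20)
        let current = ((index + step) % frames.count + frames.count) % frames.count
        image = frames[current]
        if gesture.state == .ended { index = current }
    }
}
```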
I'd recommend going with the 3-D rendering route, even though it might require more upfront work than the multiple sliced images approach. It will provide much greater flexibility over the long run, and I think you'll be able to generate a more pleasing experience in the end (small application binary size, smoother rotation, etc.). Also, once you have the display code done, you'll be able to pull in arbitrary models to add on to the ones you started with, and make tweaks to those models more easily.
This question points out a number of ways that you might be able to import LightWave models into formats usable by an OpenGL ES application. It looks like you'll probably need to pass through Blender or another intermediary to accomplish this.
Once you have the model in a form that you can work with, you can build off of several open source 3-D rendering applications for the iPhone / iPad, such as my Molecules application. My application is built for displaying 3-D molecular structures, but people have modified it to support rendering other models for their own needs, so I know that's possible. I go into detail on how this application works in the video for the OpenGL ES session of my class on iTunes U.
OpenGL ES may seem intimidating at first, but it only took me three weeks of nights-and-weekends development to build the initial version of Molecules, and I had no real OpenGL experience before starting that project. There are many great resources out there now, so it's easier than ever to get started.
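The 3-D route this answer recommends has since become even easier with SceneKit. Assuming the LightWave model has been converted (e.g. via Blender, per the linked question) to a Collada file, a free-rotation viewer is only a few lines of Swift; the asset name product.dae is a placeholder:

```swift
import SceneKit
import UIKit

// Minimal free-rotation model viewer: SceneKit's built-in camera control
// provides the orbit and zoom gestures for free.
final class ViewerViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let scnView = SCNView(frame: view.bounds)
        scnView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        scnView.scene = SCNScene(named: "product.dae")  // placeholder asset
        scnView.allowsCameraControl = true              // drag to rotate freely
        scnView.autoenablesDefaultLighting = true
        view.addSubview(scnView)
    }
}
```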