How to prepare my game for VR? [closed] - virtual-reality

Let's imagine we have a C++ OpenGL game. It uses our own engine for rendering (not Unity, not UE, etc.). Let's simplify the problem.
For example, we need to render a simple cube in VR mode. What should we do to achieve that?
I know we need to split the screen into two parts. But what then? How do we calculate the rotation and camera offset for each part?
By VR I mean devices such as VR Box, Gear VR, Oculus Rift, etc.

All the major headsets have API documentation on their sites that explains how to integrate VR support into your engine. You should refer to that documentation for details. My experience is mostly with the Oculus SDK, but other SDKs are similar.
You generally don't split the screen in two yourself. Instead, you provide left- and right-eye images to the SDK; the SDK performs the warping for the lens optics and sends the output to the HMD display(s).
The SDK provides APIs to get the camera and viewport parameters you need to render each eye's view. With the Oculus SDK you also obtain the render target for each eye through API calls. You build view and projection matrices and set viewports for each eye based on the information the APIs give you about the HMD position, orientation, field of view, target resolution, and so on.
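For concreteness, here is roughly how an off-axis (asymmetric) projection matrix can be built from the per-eye field-of-view tangents that such SDKs typically report. This is only a sketch: the EyeFov struct and its field names are placeholders, not any particular SDK's types.

// Hypothetical per-eye FOV description: tangents of the half-angles from the
// eye's view axis to each frustum plane (the kind of data VR SDKs hand you).
struct EyeFov {
    float upTan, downTan, leftTan, rightTan;
};

// Build a column-major, OpenGL-style off-axis projection matrix for one eye.
void BuildEyeProjection(const EyeFov& fov, float zNear, float zFar, float out[16])
{
    const float left   = -fov.leftTan  * zNear;
    const float right  =  fov.rightTan * zNear;
    const float bottom = -fov.downTan  * zNear;
    const float top    =  fov.upTan    * zNear;

    for (int i = 0; i < 16; ++i) out[i] = 0.0f;
    out[0]  = 2.0f * zNear / (right - left);    // x scale
    out[5]  = 2.0f * zNear / (top - bottom);    // y scale
    out[8]  = (right + left) / (right - left);  // x offset (asymmetric frustum)
    out[9]  = (top + bottom) / (top - bottom);  // y offset (asymmetric frustum)
    out[10] = -(zFar + zNear) / (zFar - zNear);
    out[11] = -1.0f;
    out[14] = -2.0f * zFar * zNear / (zFar - zNear);
}

The view matrix for each eye is then simply the inverse of that eye's world transform, i.e. the tracked head pose offset sideways by half the interpupillary distance reported by the SDK.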
Rendering each eye is essentially the same as whatever your engine already does; you just render twice (once per eye) using the camera and viewport information supplied by the SDK, and you may wish to render a third view for the regular monitor. Since the left and right eye views are very similar, you may want to restructure parts of your engine for efficiency rather than naively rendering the entire scene twice, but that is not strictly necessary.
There will probably be a call at the end of a frame to tell the SDK you've finished rendering and to submit the completed eye buffers for display. Other than that, there's not that much to it. Most of the challenge of VR rendering lies in achieving the required performance, not in integrating the SDKs, which are fairly simple on the display side of things.
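Put together, a frame loop looks something like the sketch below. Treat it purely as an outline: HeadPose, EyeTarget, Mat4 and every vrsdk_* or helper function are stand-ins for whatever your particular SDK and math library provide; only the GL calls are real.

// Sketch of one VR frame against a generic, Oculus-style SDK (names are placeholders).
void RenderVrFrame(Engine& engine)
{
    // Ask the SDK where the head/eyes will be when this frame reaches the display.
    HeadPose pose = vrsdk_GetPredictedPose();

    for (int eye = 0; eye < 2; ++eye) {
        // The SDK owns (or at least describes) the render target for this eye.
        EyeTarget target = vrsdk_GetEyeRenderTarget(eye);
        glBindFramebuffer(GL_FRAMEBUFFER, target.fbo);
        glViewport(0, 0, target.width, target.height);

        // View = inverse of this eye's world transform (head pose offset by half the IPD);
        // projection comes from the SDK-reported FOV (see the sketch above).
        Mat4 view = Inverse(EyeWorldTransform(pose, eye));
        Mat4 proj = EyeProjection(eye);

        // Render exactly as the engine normally would, just with this camera and viewport.
        engine.RenderScene(view, proj);
    }

    // Hand the finished eye buffers back; the SDK applies the lens warp and
    // presents them on the HMD.
    vrsdk_SubmitFrame();
}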


3D AR Markers with Project Tango

I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough, the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 pro and are trying to figure out if Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, by letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location, however the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images like Vuforia does.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. That definitely sounds worth testing.
Option B) Tango
Tango doesn't natively support markers other than AR tags and QR codes.
It also doesn't cope well with the area-learnt scene moving. If your 3D-printed objects stay stationary, you could scan an ADF and should get good-quality tracking; with everything still you should see a little drift, but not too much.
However, if you move those 3D-printed objects around, it will definitely throw the tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with AR markers using Tango's AR Marker detection (unsure - is that what you tried already?). If that approach doesn't work, I think your only Tango option is to add more features, lighting, etc. to the space to make the tracking more solid.
Overall, natural feature tracking with Vuforia (or marker tracking for robustness) sounds better suited to what I think your project is doing, as users will mostly be looking at the AR tag/NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.

Unity 5 - 2D game - separate scenes vs panels [closed]

I have tried to read all the posts like
http://forum.unity3d.com/threads/scenes-vs-canvases-vs-panels.279890/
http://forum.unity3d.com/threads/with-new-ui-separate-scenes-or-just-separate-panels.281013/
and others, and I still don't understand which is the better way:
scenes vs. panels.
I personally think scenes are better - they feel like separate levels.
The game I'm trying to write is similar to Candy Crush, and as I understand it they use scenes - because going from the map into a new match you see a loading screen, and I think that is a separate scene.
Thank you
because going from the map into a new match you see a loading screen
Just because you see a loading screen does not mean the game is actually loading a scene. Anyone can put up a loading screen while doing something else, such as loading AssetBundles, without actually loading a scene.
Your question will mostly attract opinion-based answers since you don't really describe a concrete problem.
I do have my own rules, which I arrived at after experimenting with Unity.
If you are making a 3D game with color maps, normal maps, height maps and other textures, use scenes. Separating the levels into scenes speeds up loading each one.
Now, if you are making a 2D game as simple as Candy Crush that uses a few sprites and images, simply use Canvases/Panels. Make Level 1 a parent Canvas GameObject. For Level 2, duplicate the Level 1 parent Canvas, rename it to Level 2, then make a few changes to it. You can switch levels by enabling and disabling the parent Canvas GameObjects.
This makes modifying your 2D game very easy. You can compare what two levels look like by enabling/disabling them, and you don't have to load another scene to get to another level. Loading a scene also takes time, and this approach eliminates that cost.
Another advantage is that you can always convert each level's parent Canvas into a prefab and bring it into another scene if you later decide to use separate scenes instead.

Unity3D Applying Texture - Model turns black [closed]

I am following a course to learn Unity3D.
Game Development for iPhone/iPad Using Unity iPhone Tutorials
http://www.vtc.com/products/Game-Development-for-iPhone-iPad-Using-Unity-iPhone-Tutorials.htm
I am following along with whatever the author does on screen. He is using Unity 1.6 and I am using Unity 3.40f5.
When I try to apply the texture as he does in the movie, my model turns black. Is there something trivial that I am missing here?
Please also find the screenshots attached.
What's happening in the movie with the author:
What's happening on my screen:
It's hard to tell from screen grabs, but your material looks correct, assuming it says "Bumped Diffuse" after Shader - I can't quite tell.
When you first drag your model into the scene, before applying a texture, it should self-shade. If it doesn't, you need to regenerate your model's normals: click on the model, then in the Inspector look for "Normals and Tangents". After Normals, choose "Calculate", then click "Apply" at the bottom and see what happens. I don't know your model type, but Unity has given me trouble in the past with Wavefront .obj files that predefine their normals.
The other possible issue is a faulty UV import. If the tutorial is from v1.6, it is possible the model included with it isn't importing correctly. I've had a similar issue where the UVs were all set to '0 0', so only the lowest corner pixel of my texture was used. Unity can't do anything for you there. You can test this by creating a new material: set the shader to Diffuse, set the 'Base (RGB)' texture to 'none', and set 'Main Color' to something like blue, then apply it to your model. With no texture defined, your model should appear blue; if it does (yet still turns black once a texture is applied), you likely have a UV import problem.
It looks like you created the material, but I think you did not actually assign it to the object.

Treasure hunt in Augmented Reality

I'm looking for an augmented reality browser/toolkit/api that supports the following:
Adding fixed 3d models such as a treasure-chest.
Possible image recognition of this treasure-chest so the iPhone knows when you're looking at it.
Specify altitude on a 3d model so it can be positioned on the ground or the second floor in an apartment building for example.
It must have support for "migrating" it to a standalone app that can be published on the app store.
The ability to customize the camera overlay with my own buttons, HUDs, text and other UIViews.
Support for both iPhone and Android.
I have tried Wikitude which doesn't have support for 3d models in iPhone.
I have tried Junaio which doesn't support to create a standalone app using their browser.
I have tried the Layar Player SDK, and asked on their community whether I can customize the interface with my own buttons, etc.
I have tried the artoolkit on github.
None of the libraries I've tried have support for all my demands.
Am I looking for too much here?
Is there something I've missed using Layar, Wikitude and Junaio?
Specify altitude on a 3d model so it can be positioned on the ground or the second floor in an apartment building for example.
Can you break this down? Do you want the phone to recognize that it's on the second floor, at a particular location within the building? In general, altitude is surprisingly tricky and indoor positioning is very approximate. In the absence of indoor GPS repeaters or other indoor positioning mechanisms that would probably require a lot of additional effort (Bluetooth beacons, Wi-Fi triangulation, etc.), this might be infeasible - in general, and not just with a particular AR library.
I think the Junaio libraries cover the other bases - CV recognition of a (prepared) object, stand-alone application packaging, customizable UI, and iPhone and Android support.

Best process to show an OpenGL Animation in iPhone [closed]

I'm having issues being able to import a bone and skeletal animation from Maya to Blender to iPhone. Here's what I've done:
installed the ColladaMaya plugin to export a DAE for Blender to import
used Jeff LeMarche's script to export a single keyframe of the model and import that .h file into the iPhone game
set up the GLView using more of Jeff LeMarche's steps and rendered into our game, so the model displays next to the actual game (not in 3D)
researched Oolong Engine, SIO2 (applied but haven't yet gotten an e-mail back from them), and other SO questions for solutions, including mine from Game Dev
reviewed using the FBX SDK content pipeline to dynamically generate class files for the animations
I can import a model and display it. A lot of these processes address that issue and then leave the developer to manipulate the game object programmatically.
My main issue is finding a well-defined process for importing an animation into an iPhone app alongside the existing game. I don't need a whole game or a whole scene, just one animating model and some steps to follow.
This animation is meant to play in a loop. There are 3 more animations that will play on different game states (good move, bad move, etc.). So I'm concerned that LeMarche's keyframe solution (which basically means exporting EVERY keyframe as a .h file) will be incredibly time-intensive and memory-intensive. I'm definitely willing to do it, but after all the research I've done (additional links not included), I'm lost as to where to go next besides hand-exporting each keyframe and importing them.
EDIT:
I've added a bounty to this for anyone who can give me a clearly defined process for importing an animation from a 3D application into an iPhone app. NOT the entire application itself (i.e. Unity, SIO2, etc.), but just showing a 3D overlay in an application (like an animating model next to a bejeweled-esque game, not interacting with the world).
People keep saying, "create your own model loader". Are there scripts, examples, tutorials - anything that walks through this "model loader" process, from EXPORTING from the 3D application (preferably Maya or Blender) to IMPORTING the animation and rendering it in Objective-C?
Animation export really is a big problem. I ran into it recently and ended up using Assimp.
However, Assimp also has problems with skeletal animation exported from Maya and Blender. Personally I prefer 3ds Max (don't forget to reset XForm before rigging); it has no problems with Collada and animation.
If you want to use those models in your game, though, I suggest you write a custom exporter for Maya or Blender. Also consider mesh (morph) animation; if you don't use inverse kinematics or anything like that, it may be all you need.
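If you do try the Assimp route, getting at the skeletal animation data is fairly direct. Below is a minimal sketch using Assimp's C++ API; the file path you pass in and the printf dump are only illustrative, and sampling the keys at runtime to drive your bone hierarchy is still up to you.

#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>
#include <cstdio>

// Load a model and list its skeletal animation channels with Assimp.
bool DumpAnimations(const char* path)
{
    Assimp::Importer importer;
    const aiScene* scene = importer.ReadFile(
        path, aiProcess_Triangulate | aiProcess_LimitBoneWeights);
    if (!scene) {
        std::printf("Import failed: %s\n", importer.GetErrorString());
        return false;
    }

    for (unsigned a = 0; a < scene->mNumAnimations; ++a) {
        const aiAnimation* anim = scene->mAnimations[a];
        std::printf("Animation '%s': %.1f ticks at %.1f ticks/sec\n",
                    anim->mName.C_Str(), anim->mDuration, anim->mTicksPerSecond);

        // Each channel animates one node (bone) with position/rotation/scaling keys.
        for (unsigned c = 0; c < anim->mNumChannels; ++c) {
            const aiNodeAnim* chan = anim->mChannels[c];
            std::printf("  bone %s: %u position keys, %u rotation keys\n",
                        chan->mNodeName.C_Str(),
                        chan->mNumPositionKeys, chan->mNumRotationKeys);
        }
    }
    return true;
}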
I have written code that reads Blender files and parses the mesh, bone and animation data, then applies the bone animations to the meshes' world transforms, etc. It's a lot of work, but certainly doable. If you have any questions about Blender peculiarities, just ask me.
The other option is to use a library like OgreKit, which can read .blend files and does skeletal animation for you out of the box.
AnimKit is a small animation library that can read a .blend file and play an animation using GLUT. It extracts all the data needed for skeletal animation and more (bones, meshes, animation channels, etc.) from a Blender 2.5 .blend file.
AnimKit is open source (under the permissive zlib license) and you can check it out using Subversion:
svn co http://code.google.com/p/gamekit/source/browse/#svn%2Fbranches%2FAnimKit AnimKit
AnimKit doesn't run on iPhone yet, but I'll port it soon, using Oolong Engine.
Another interesting resource is Proton SDK (protonsdk.com); it has a customized Irrlicht SDK that runs on iPhone, Android, etc. Its RT3DApp sample can play an animation.