I need to use an image sequence as a VR view on iOS. Is there a component in the OS that does that? If not, is there any third-party view for that? I'm very interested in learning more about it!
Thanks!
Here is a tutorial for creating your own OpenGL (iOS or Android) VR engine.
The principle is to create a 3D skybox to which you apply your images to build your environment.
There are also some ready-to-use libraries, like this one: Open AR
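The linked tutorial builds this in raw OpenGL; purely as an illustration of the skybox principle, here is a minimal sketch in Unity C# (the engine used in the other answers on this page) that assigns six images to a six-sided skybox. The texture slots are the real Skybox/6 Sided shader properties; the field names are assumptions for the example.

```csharp
using UnityEngine;

// Builds a skybox from six images (one per cube face) and makes it the
// scene's environment.
public class ImageSkybox : MonoBehaviour
{
    public Texture front, back, left, right, up, down; // your environment images

    void Start()
    {
        Material sky = new Material(Shader.Find("Skybox/6 Sided"));
        sky.SetTexture("_FrontTex", front);
        sky.SetTexture("_BackTex", back);
        sky.SetTexture("_LeftTex", left);
        sky.SetTexture("_RightTex", right);
        sky.SetTexture("_UpTex", up);
        sky.SetTexture("_DownTex", down);
        RenderSettings.skybox = sky;
    }
}
```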
I am involved in a virtual-reality project using the HTC Vive headset, Unity and the SteamVR SDK, which is used to communicate with the Vive.
Using the joysticks, the end user must draw shapes (for example a circle); the movement begins when they press a joystick button.
From all the generated data (the joystick output), how could I detect a circle?
Do you have any documentation on this?
Please correct me if I have misunderstood your concern here:
You use the joysticks to draw shapes such as circles in apps like SteamVR Home, and you want to detect what you have drawn in software, perhaps showing the result on screen in real time or saving it to a file.
That means you need the ability to get the rendered images and to detect the image content with algorithms such as deep learning.
HTC Vive devices are compatible with the OpenVR SDK:
https://github.com/ValveSoftware/openvr
You can use the OpenVR SDK to write your own SteamVR driver and get images in real time using the direct mode component of the SDK. That is a lot of work even before you add the detection algorithm, because you need a SteamVR driver that SteamVR can actually run.
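If you only need to recognise the shape (rather than the rendered image), a much lighter alternative is a geometric test on the controller positions you record while the button is held. Below is a hedged C# sketch of that idea for Unity: the points are assumed to be controller positions already projected onto the drawing plane, and the thresholds are arbitrary values you would need to tune.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Geometric circle test: the stroke is a circle if the sampled points stay
// at a roughly constant distance from their centroid and sweep a full turn.
public static class CircleDetector
{
    public static bool IsCircle(List<Vector2> points, float tolerance = 0.15f)
    {
        if (points.Count < 16) return false;

        // Centroid of the stroke.
        Vector2 centroid = Vector2.zero;
        foreach (var p in points) centroid += p;
        centroid /= points.Count;

        // Mean distance to the centroid (the candidate radius).
        float meanRadius = 0f;
        foreach (var p in points) meanRadius += Vector2.Distance(p, centroid);
        meanRadius /= points.Count;
        if (meanRadius < 1e-4f) return false;

        // Average relative deviation from that radius.
        float deviation = 0f;
        foreach (var p in points)
            deviation += Mathf.Abs(Vector2.Distance(p, centroid) - meanRadius);
        deviation /= points.Count * meanRadius;

        // Total swept angle should be close to a full revolution (360 degrees).
        float swept = 0f;
        for (int i = 1; i < points.Count; i++)
            swept += Vector2.SignedAngle(points[i - 1] - centroid, points[i] - centroid);

        return deviation < tolerance && Mathf.Abs(swept) > 300f;
    }
}
```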
I would like to implement a letter-detection feature for my "guess the drawing" game in Unity: it should detect when someone draws a letter, which I would count as cheating. That way, people can only draw the requested word as a picture and cannot write out the letters of the word itself.
I would like to know your opinion and what technology I can use for this task in Unity3D. Thanks in advance.
Vuforia is a good pick for your requirement.
You can download the Vuforia SDK for Unity from the link below.
https://developer.vuforia.com/downloads/sdk
Steps to follow:
1. Download the SDK for Unity.
2. Draw the patterns you want as answers in a tool like Paint or Photoshop and take screenshots of them.
3. Remove the main camera and add the AR camera from the Prefabs folder of the Vuforia library.
4. Drag and drop image targets into your project and add the screenshots you have taken.
5. Implement code to broadcast a message when an image target is detected (a sketch follows this list).
6. Use this broadcast message to implement your post-game logic.
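For step 5, here is a minimal sketch of such a detection broadcast, based on the classic Vuforia Unity API (ITrackableEventHandler; newer Vuforia Engine versions replaced it with DefaultObserverEventHandler). The receiver name OnLetterDetected is a placeholder for this example.

```csharp
using UnityEngine;
using Vuforia;

// Attach to an Image Target: broadcasts a message when the target is detected.
public class LetterTargetHandler : MonoBehaviour, ITrackableEventHandler
{
    private TrackableBehaviour mTrackableBehaviour;

    void Start()
    {
        mTrackableBehaviour = GetComponent<TrackableBehaviour>();
        if (mTrackableBehaviour != null)
            mTrackableBehaviour.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        if (newStatus == TrackableBehaviour.Status.DETECTED ||
            newStatus == TrackableBehaviour.Status.TRACKED)
        {
            // Tell the rest of the game which letter target was found;
            // "OnLetterDetected" is a hypothetical handler used in step 6.
            SendMessage("OnLetterDetected", mTrackableBehaviour.TrackableName,
                        SendMessageOptions.DontRequireReceiver);
        }
    }
}
```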
There is a lot to learn about image recognition and Vuforia, and adding image targets requires a few additional steps, so go through the tutorials for a better understanding of how to use the Vuforia SDK.
The Vuforia tutorials are linked below:
https://library.vuforia.com/tutorials
Happy Game Development
I want to create an Augmented Reality app using Unity and ARToolKit, but when it is done and I press Play, the camera is blank. I followed this tutorial from YouTube.
Alternatively, you can use one of the example scenes that come with the ARToolKit Unity package as a starting point.
Those scenes should work out of the box, so you can compare them with yours to spot the differences.
I am trying to build a Photo Sphere-like application with Unity3D and use it with Google Cardboard.
I need to load different panoramic photos and be able to view them stereoscopically through the Cardboard goggles.
I am having trouble using the pano images and rendering them into a stereoscopic view in Unity.
Any suggestions will be gratefully received.
The simplest solution I can give you:
A. Install Unity Pro with the Android Pro plugins, set up the Cardboard SDK for Unity, and install the Android build tools and SDK.
B. Set up the skybox
Get a stereoscopic panorama image (it might take a while to load, as it is a high-resolution image).
In Unity, import the image and change its Texture Type to Cubemap. Select the Mapping as Cylindrical (Lat and Long).
Create a Material and change its Shader to Skybox/Cubemap.
Assign the texture to the material.
In the Unity 5 Pro top bar, select Window -> Lighting and drag the material onto the Skybox property. This step can also be done programmatically (see the sketch below). Combine these steps with the Cardboard assets and game objects. Voila, you've made a VR panoramic stereoscopic Cardboard app! The whole setup takes just five minutes (excluding setting up your tools :D).
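As mentioned, the Lighting step can also be done in code. A minimal sketch of the programmatic version, assuming you already created the Skybox/Cubemap material ("panoSkyboxMaterial" is an assumed field name):

```csharp
using UnityEngine;

// Assigns the panorama skybox material at runtime instead of dragging it
// into Window -> Lighting.
public class PanoSkyboxLoader : MonoBehaviour
{
    public Material panoSkyboxMaterial; // a material using the Skybox/Cubemap shader

    void Start()
    {
        RenderSettings.skybox = panoSkyboxMaterial;
        DynamicGI.UpdateEnvironment(); // refresh ambient lighting from the new skybox
    }
}
```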
If you're familiar with Unity, you will know exactly what to do from these instructions. If you get stuck at any step, feel free to ask me. Happy coding :)
Extra tips:
You can pack the large texture files into asset bundles that act as dynamic content stored on a server. Your app then stays a small, almost empty app: when it launches, it requests and downloads the asset bundles from the server and then manipulates the textures (see the sketch after these tips). :)
Super extra tip: don't forget to generate the asset bundles under the Android build settings. If you don't, your textures will be corrupted when the bundle is downloaded to an Android phone.
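A hedged sketch of the asset-bundle tip, using the Unity 5-era WWW API (newer versions use UnityWebRequestAssetBundle instead). The URL, bundle version and asset name are placeholders.

```csharp
using System.Collections;
using UnityEngine;

// Downloads a texture bundle from a server at launch and applies one of its
// cubemaps to the skybox material.
public class PanoBundleLoader : MonoBehaviour
{
    public Material skyboxMaterial; // a material using the Skybox/Cubemap shader

    IEnumerator Start()
    {
        // The version number lets Unity cache the bundle between launches.
        WWW www = WWW.LoadFromCacheOrDownload("http://example.com/panos.unity3d", 1);
        yield return www;

        if (string.IsNullOrEmpty(www.error))
        {
            AssetBundle bundle = www.assetBundle;
            var pano = bundle.LoadAsset<Cubemap>("pano_01"); // placeholder asset name
            skyboxMaterial.SetTexture("_Tex", pano);         // Skybox/Cubemap texture slot
            RenderSettings.skybox = skyboxMaterial;
            bundle.Unload(false);
        }
    }
}
```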
Make a sphere and write a shader so that it is not back-face culled and can be seen from the inside. Use the pano image as a texture on the sphere. Place a VR camera rig in the center of the sphere. If you want true stereo, create two such spheres with separate textures for the left and right eye, place the spheres at the locations of the cameras in the rig, and use layer culling so that each camera only sees its own sphere.
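If you would rather not write a custom shader, flipping the sphere mesh's triangle winding (and normals) at startup achieves the same inside-out view with a standard shader. A hedged sketch; attach it to the pano sphere:

```csharp
using UnityEngine;

// Inverts a mesh so it is visible from the inside: reverses the triangle
// winding (so back-face culling keeps the inner faces) and flips the normals.
[RequireComponent(typeof(MeshFilter))]
public class InvertSphere : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;

        // Reverse winding order of each triangle.
        int[] tris = mesh.triangles;
        for (int i = 0; i < tris.Length; i += 3)
        {
            int tmp = tris[i];
            tris[i] = tris[i + 2];
            tris[i + 2] = tmp;
        }
        mesh.triangles = tris;

        // Flip normals so lighting points inward.
        Vector3[] normals = mesh.normals;
        for (int i = 0; i < normals.Length; i++)
            normals[i] = -normals[i];
        mesh.normals = normals;
    }
}
```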
I am trying to edit the multi-target and image-target sample apps of the Qualcomm (QCAR) SDK so that, instead of using OpenGL, I only overlay UIKit content such as buttons and text, just for a simple demo. So far I have been unable to do so.
Please guide me on where to make changes or how to go about it.
I have also referred to the forums and tried the examples, but they all use OpenGL, which I want to get rid of.
Please help me out.
The main duty of QCAR is to give you a 4x4 matrix, called the ModelView matrix, with which you can superimpose your graphics. This is an OpenGL matrix, and I don't think you would be able to use it in UIKit.
If you only want to overlay some UI when a target is detected, then you don't need that matrix. But the UI will sit on the screen in 2D (not on the target image in the 3D scene), and that's not really AR at all.
Another option is to incorporate a rendering engine with QCAR to avoid using the raw OpenGL ES APIs. For iPhone, I think openFrameworks should do the business.