I am involved in a virtual-reality project using the HTC Vive, Unity, and the SteamVR SDK to communicate with the Vive.
Using the joysticks, the end user has to draw shapes (for example a circle); the movement begins when they press a joystick button.
From all the generated data (output from the joysticks), how could I detect a circle?
Do you have some documentation on this?
Please correct me if I've understood your concern incorrectly here:
You use the joysticks to draw shapes like circles in an app such as SteamVR Home,
and you want to detect what you have drawn in software, perhaps showing the result on screen in real time or saving it to a file.
That would mean you need the ability to grab rendered images and detect their content using algorithms such as deep learning.
The HTC Vive is compatible with the OpenVR SDK:
https://github.com/ValveSoftware/openvr
You can use the OpenVR SDK to write your own SteamVR driver and get images in real time using the direct-mode component of the SDK. That is a lot of work even before adding the detection algorithm, because you need a working SteamVR driver just to run SteamVR.
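If you only need the controller positions rather than rendered images, a purely geometric check on the recorded stroke may be enough. Below is a minimal sketch, assuming you collect controller positions into a list while the button is held; the thresholds are assumptions you'd tune. For documentation, the $1 Unistroke Recognizer (Wobbrock et al.) is a common reference for this kind of gesture detection.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal sketch: test whether a recorded stroke approximates a circle.
public static class CircleDetector
{
    // Returns true if the points lie roughly on a circle around their centroid.
    // 'tolerance' is the allowed relative deviation from the mean radius (assumed value, tune it).
    public static bool IsCircle(List<Vector3> points, float tolerance = 0.15f)
    {
        if (points.Count < 8) return false;

        // Centroid of the stroke.
        Vector3 centroid = Vector3.zero;
        foreach (var p in points) centroid += p;
        centroid /= points.Count;

        // Mean distance to the centroid = estimated radius.
        float meanRadius = 0f;
        foreach (var p in points) meanRadius += Vector3.Distance(p, centroid);
        meanRadius /= points.Count;
        if (meanRadius < 0.05f) return false; // stroke too small to judge

        // Every point must stay close to the mean radius.
        foreach (var p in points)
        {
            float deviation = Mathf.Abs(Vector3.Distance(p, centroid) - meanRadius);
            if (deviation > tolerance * meanRadius) return false;
        }

        // A circle should also roughly close on itself.
        return Vector3.Distance(points[0], points[points.Count - 1]) < 2f * tolerance * meanRadius;
    }
}
```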
Related
I'm working in Unity (2018) and building for the HTC Vive VR headset. I had an idea to use the small camera on the front of the headset to make an AR system: run the video from the headset's camera to the headset view, so I can then overlay things from a Unity environment on top of it. But unfortunately, I can't seem to find any examples of others doing this (other than the Tron-style blue outline system that the Vive comes with), though perhaps I'm not looking with the right keywords.
If anyone has seen something like this or knows if it can be done, I'd greatly appreciate it.
The front camera is registered as a standard webcam, so you should be able to use Unity's WebCamTexture; see the sketch below.
But the camera's resolution is very low.
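A minimal sketch (it assumes the camera is enabled in SteamVR's settings; the logged device list shows which name to pass if the default device isn't the right one):

```csharp
using UnityEngine;

// Minimal sketch: show the headset's front camera on whatever renderer this is attached to.
public class ViveCameraFeed : MonoBehaviour
{
    private WebCamTexture camTexture;

    void Start()
    {
        // List the available cameras so you can find the Vive's device name.
        foreach (var device in WebCamTexture.devices)
            Debug.Log("Camera found: " + device.name);

        camTexture = new WebCamTexture(); // default device; pass a name to pick a specific camera
        GetComponent<Renderer>().material.mainTexture = camTexture;
        camTexture.Play();
    }
}
```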
I am working with the Vive and a mobile tablet. The tablet has a tracker attached and then there is another tracker in the room.
On the tablet I output the device's camera on the screen and adjust the position and rotation according to the device's position. What I want to do now is render the other tracker's position, AR-like, on top of the camera output.
I tried googling this, but so far I could only find how to make AR with Vuforia, which I don't need.
I really just need some keywords to start searching, because I don't really know how to begin.
There are a lot of ways to make AR.
I suggest using an API, like:
Vuforia AR package
ARToolKit
Wikitude
etc. (search for "Augmented reality API")
Another way (I have tried it a lot):
use corner detection and feature-extraction methods (everything under the umbrella of image processing).
If you don't want to work with an image marker or target, you can get the data from sensors instead; in a mobile application that's easy, but on other hardware you have to add the sensors yourself (gyroscope and accelerometer). A gyroscope sketch follows below.
I hope I got what you want.
For image processing, use OpenCV in C++ or Java, or EmguCV in C#.
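As a starting point for the sensor route, here is a minimal rotation-only sketch in Unity that drives the camera from the gyroscope; the axis conversion is the commonly used mapping from the device's right-handed frame to Unity's left-handed one.

```csharp
using UnityEngine;

// Minimal sketch: markerless, rotation-only "AR" camera driven by the device gyroscope.
public class GyroCamera : MonoBehaviour
{
    void Start()
    {
        Input.gyro.enabled = true;
    }

    void Update()
    {
        Quaternion attitude = Input.gyro.attitude;
        // Convert the right-handed device rotation to Unity's left-handed coordinates.
        transform.localRotation = Quaternion.Euler(90f, 0f, 0f) *
                                  new Quaternion(attitude.x, attitude.y, -attitude.z, -attitude.w);
    }
}
```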
If your problem is the shader in Unity:
you can add a background layer with an unlit shader,
and put a plane (textured with the camera feed) in front of your Unity camera object, as in the sketch below.
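A minimal sketch of that background-plane idea, assuming a WebCamTexture feed; the distance and scale values are placeholders you'd adjust to match your camera's FOV and aspect ratio.

```csharp
using UnityEngine;

// Minimal sketch: put the device camera feed on a quad parented to the Unity camera,
// with an unlit shader so scene lighting doesn't affect it.
public class CameraBackground : MonoBehaviour
{
    void Start()
    {
        var feed = new WebCamTexture();
        feed.Play();

        // Quad placed far enough in front of the camera that scene objects render on top of it.
        var quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        Destroy(quad.GetComponent<Collider>()); // don't block raycasts
        quad.transform.SetParent(Camera.main.transform, false);
        quad.transform.localPosition = new Vector3(0f, 0f, 50f);
        quad.transform.localScale = new Vector3(100f, 75f, 1f); // match your camera aspect/FOV

        var mat = new Material(Shader.Find("Unlit/Texture"));
        mat.mainTexture = feed;
        quad.GetComponent<Renderer>().material = mat;
    }
}
```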
I'm developing a VR app in Unity for the Samsung Gear VR and I'm trying to implement a pointer so the user can interact with the objects in the scene. When you look at distant objects it looks fine, but when you focus on close objects (which is highly needed for the app mechanics) the pointer appears duplicated, so you need to center the desired object between the two pointers :P
What I've tried
-Using the GvrReticlePointer that comes with the GoogleVR package for cardboard
-Creating my own pointer by adding a canvas to the main camera with an image in the center
-Changing some of the Camera settings like field of view, stereo separation, etc.
-Configuring my phone via a QR code http://imgur.com/fVrNrQk
Steps to reproduce (With canvas added to camera)
1.- Create a simple scene with a few objects to look at in Unity
2.- Set the build settings for Android
3.- Configure the player settings to enable "Virtual Reality Supported"
4.- Add Oculus as Virtual Reality SDK
5.- Set package name and minimum API level
6.- Add a canvas to the camera
7.- Add an image to the canvas, a cross will do the job
Observations
I'm using Unity 5.6.0b10, since Google Cardboard's site recommends this version for the GoogleVR package. And I'm using the Samsung Gear VR with a Samsung Galaxy S6 Edge+ phone.
Solved
Apparently this is a well-documented issue called voluntary diplopia, and it's a human bug, not a software one (see Unity's documentation, the section "The Reticle Interaction in VR").
The problem is putting the reticle at a fixed point in the user interface, as in traditional 3D games. When looking at close objects in VR, this causes the double-vision problem.
The solution is to position the reticle at the point in 3D space the user is looking at: if they're looking at something closer, the reticle is drawn closer. Of course, you then also have to scale the reticle accordingly, so users see it at the same size no matter where they're looking.
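A minimal sketch of that approach (this is not the VR Samples script itself; the default distance and scale factor are assumptions to tune):

```csharp
using UnityEngine;

// Minimal sketch: attach to the VR camera. Moves a world-space reticle to the point
// the user is looking at and scales it with distance so it keeps a constant apparent size.
public class WorldSpaceReticle : MonoBehaviour
{
    public Transform reticle;            // a quad/sprite in the scene, NOT a screen-space canvas element
    public float defaultDistance = 10f;  // where to put the reticle when nothing is hit
    public float sizePerMeter = 0.02f;   // reticle scale per meter of distance

    void Update()
    {
        Ray gaze = new Ray(transform.position, transform.forward);
        RaycastHit hit;
        float distance = Physics.Raycast(gaze, out hit) ? hit.distance : defaultDistance;

        reticle.position = gaze.GetPoint(distance);                  // draw at the gazed point
        reticle.rotation = Quaternion.LookRotation(gaze.direction);  // face the user
        reticle.localScale = Vector3.one * sizePerMeter * distance;  // constant apparent size
    }
}
```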
Unity also provides some example scripts for this; you can find them in the Asset Store, in a package called VR Samples.
Now I have performance issues (I'm working on mobile platforms): sometimes, when you turn your head fast, you can see the reticle where it was drawn before. But it looks way better than the double-reticle version.
I am trying to build a Photosphere-like application with Unity3D and use it along with Google Cardboard.
I need to load different panoramic view photos and be able to view them stereoscopically, by using the Cardboard goggles.
I am having trouble using the pano images and rendering them as a stereoscopic view in Unity.
Any suggestions will be gratefully received.
Simplest solution that I can give to you:
A. Install Unity Pro with the Android Pro plugins, set up the Cardboard SDK for Unity, and install the Android build tools and SDK.
B. Set up the skybox
Get a stereoscopic panorama image (it might take a while to load, as it is a high-resolution image).
In Unity, import the image, change the Texture Type to Cubemap. Select the Mapping as Cylindrical (Lat and Long).
Create a Material, change the Shader to Skybox/Cubemap.
Assign the texture to the material.
In the Unity 5 Pro top bar, select Window -> Lighting, and drag the material to the Skybox property. You can also do this step programmatically; see the sketch after these steps. Combine these steps with the Cardboard assets and game objects. Voila, you've made a VR panorama stereoscopic Cardboard app! The whole setup takes just five minutes (excluding setting up your tools :D).
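For the programmatic variant, a minimal sketch, assuming the skybox material lives in a Resources folder under the assumed name "PanoSkybox":

```csharp
using UnityEngine;

// Minimal sketch: assign the skybox material from code instead of the Lighting window.
public class SkyboxLoader : MonoBehaviour
{
    void Start()
    {
        Material pano = Resources.Load<Material>("PanoSkybox"); // assumed asset name
        RenderSettings.skybox = pano;
        DynamicGI.UpdateEnvironment(); // refresh ambient lighting for the new skybox
    }
}
```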
If you're familiar with Unity, you know exactly what to do with my instructions. If you get stuck somewhere in my steps, feel free to ask me. Happy coding :)
Extra tips:
You can pack the large texture files into asset bundles that act as dynamic content stored on a server. Your app is then just a small, empty app: when it launches, it requests and downloads the asset bundles from the server and then uses the textures. :)
Super extra tip: don't forget to generate the asset bundles under the Android build settings. If not, your textures will be corrupted when the bundle is downloaded to Android phones. A minimal download sketch follows.
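This sketch uses the modern UnityWebRequest asset-bundle API; the URL and asset name are placeholders.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Minimal sketch: fetch a texture from an asset bundle hosted on a server.
public class PanoDownloader : MonoBehaviour
{
    IEnumerator Start()
    {
        // Placeholder URL: point this at your own bundle, built for the Android target.
        var request = UnityWebRequestAssetBundle.GetAssetBundle("http://yourserver.example/panos.bundle");
        yield return request.SendWebRequest();

        AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(request);
        Texture2D pano = bundle.LoadAsset<Texture2D>("pano_01"); // placeholder asset name
        GetComponent<Renderer>().material.mainTexture = pano;
        bundle.Unload(false); // keep the loaded texture, free the bundle itself
    }
}
```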
Make a sphere and write a shader so that it is not back-face culled and can be seen from the inside. Use the pano image as a texture on the sphere. Place a VR camera rig in the center of the sphere. If you want true stereo, create two such spheres with separate textures for the left and right eye, place the spheres at the locations of the cameras in the rig, and use layer culling so that each camera only sees its own sphere.
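If you'd rather not write a custom shader, an alternative with the same effect is to flip the sphere's mesh from C# so the default pipeline renders its inside; a minimal sketch:

```csharp
using UnityEngine;

// Minimal sketch: attach to a sphere with the pano texture; turns it inside out
// so it is visible from a camera placed at its center.
public class InsideOutSphere : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;

        // Point the normals inward.
        Vector3[] normals = mesh.normals;
        for (int i = 0; i < normals.Length; i++) normals[i] = -normals[i];
        mesh.normals = normals;

        // Reverse triangle winding so back-face culling keeps the inside visible.
        int[] triangles = mesh.triangles;
        for (int i = 0; i < triangles.Length; i += 3)
        {
            int tmp = triangles[i];
            triangles[i] = triangles[i + 1];
            triangles[i + 1] = tmp;
        }
        mesh.triangles = triangles;
    }
}
```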
I need to use an image sequence as VR on iOS. Is there a component in the OS that does that? If not, is there any third-party view for it? I'm very interested in learning more about this!
Thanks!
Here is a tutorial to create your own OpenGL (iOS or Android) VR engine.
The principle is to create a 3D skybox and apply your images to it to build your environment.
There are also some ready-to-use libraries, like this one: Open AR