Is it possible to add distortion to the display in the HMD with OpenVR? - virtual-reality

I'm trying to change the barrel distortion coefficients for the HTC Vive to create a distortion in the HMD. Is OpenVR the best way to do this?

The only thing I can suggest at this point is to search for OculusRiftEffect.
This is an old plugin for THREE.js that is now useless for normal use, because it shows the pre-distorted view directly on your screen. In most applications you don't want that, but it is exactly what you might want to show to students. The example was hardcoded to the lenses of the Oculus Rift DK2 (or the DK1, if you uncomment some code inside), but the optics don't differ that much from the Vive's, and the effect should be even more visible.
It has been removed from the current THREE.js version, so check out old THREE.js revisions or stale demos on the internet; search around three years back and you'll find something.
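For context, the warp such plugins apply is the standard two-coefficient radial (barrel) distortion. Here is a minimal C# sketch of that math; the k1/k2 values are made up for illustration and are not the HTC Vive's actual coefficients:

    using UnityEngine;

    // Minimal sketch of a two-coefficient barrel distortion, the same kind
    // of pre-warp OculusRiftEffect applied per pixel. The coefficients are
    // illustrative placeholders, not the HTC Vive's real values.
    public static class BarrelDistortion
    {
        const float k1 = 0.22f; // illustrative first radial coefficient
        const float k2 = 0.24f; // illustrative second radial coefficient

        // Warps a UV coordinate (0..1 range) around the given lens center.
        public static Vector2 Distort(Vector2 uv, Vector2 lensCenter)
        {
            Vector2 d = uv - lensCenter;        // offset from the lens center
            float r2 = d.sqrMagnitude;          // squared radius
            float scale = 1f + k1 * r2 + k2 * r2 * r2;
            return lensCenter + d * scale;      // pushed outward = barrel warp
        }
    }

In practice this runs per pixel in a fragment shader rather than in C#; the snippet only shows the formula that the coefficients feed into.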

Related

ARFoundation detecting vertical planes and placing objects - Unity

I'm finding it difficult to find information on detecting vertical planes and placing objects on walls.
I see a lot about ARCore and the HelloAR example app, but I get loads of compile errors, as I'm mainly building the app for iOS (although I will do it for Android too at some point).
While I don't mind editing C#, I can't actually read or write it, so the simpler the resource/answer the better.
I also wouldn't mind being able to design/change the detection indicator, i.e. the thing that shows up when a surface is detected. For horizontal planes there's just a simple square/crosshair, and I love that.
Thanks in advance.
ARFoundation 1.0.0 preview 22 will surface the ability to select your plane detection mode: horizontal, vertical, or both. To use this mode, you will have to upgrade to the newly released Unity 2018.3: https://blogs.unity3d.com/2018/12/13/introducing-unity-2018-3/
For more about plane detection in ARFoundation, refer to the following link:
AR Plane Manager
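For illustration, selecting the mode from script looks roughly like this in current ARFoundation versions (a minimal sketch; the property was a differently named flags field in the early preview builds, so check your package version):

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;
    using UnityEngine.XR.ARSubsystems;

    // Sketch: restrict plane detection to vertical planes (walls).
    // Assumes an ARPlaneManager component on the same GameObject.
    [RequireComponent(typeof(ARPlaneManager))]
    public class VerticalPlanesOnly : MonoBehaviour
    {
        void Start()
        {
            var planeManager = GetComponent<ARPlaneManager>();
            planeManager.requestedDetectionMode = PlaneDetectionMode.Vertical;
            // Or: PlaneDetectionMode.Horizontal | PlaneDetectionMode.Vertical
        }
    }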

Unity 2018: 2D Object - SpriteMesh

OK, so I have looked around the internet, but I cannot find the SpriteMesh option. I should be able to right-click my sprite > 2D Object > SpriteMesh.
The problem is that I don't see the "SpriteMesh" option anywhere.
Here's the deal: I created a bunch of 2D pieces for a character: head, body, two arms, two legs, two hands, and two feet. I imported the sprite as a PNG file, changed its Sprite Mode to Multiple, and used the Sprite Editor to slice the character into pieces automatically. There's also nothing inside the Sprite Editor that allows me to rig bones.
Now I need to rig the toon with bones and skin, but I cannot find a way to do this. In the tutorials I've watched, the guy adds a SpriteMesh to each of the parts; when I try to do this, the option just doesn't exist. I see SpriteMask but no SpriteMesh.
I'm using Unity 2018.2.18f1.
I have zero experience with animations like this. Normally I create a player/enemy without legs/arms, so they just float, and I use the animation tab to change size/shape to suggest movement. However, I'd like to take the next step and make the game look better.
How can I rig my toon? What steps do I need to follow?
All help is appreciated!
I guess you want to use Unity's new 2D animation features to rig your 2D character.
"I'm using Unity 2018.2.18f1."
You need Unity 2018.3 or later to use these tools.
I suggest using Unity Hub to manage multiple installed versions, including beta versions.
There is also a really nice video from Brackeys on this subject.
Once you have 2018.3 or later installed, open your project, go to the Window > Package Manager window, and install the 2D packages there (the 2D Animation and 2D IK preview packages are the ones needed for rigging).
I don't think you need 2D Pixel Perfect, but it's always nice to have.

Building custom Mixed Reality (Augmented Reality) setup in Unity

I'm tasked with developing an application that emulates augmented reality inside a virtual reality application. We are using Google Cardboard (Google VR) and want to show camera images to the user (don't mind the actual camera setup; assume I already have the images).
I'm wondering about ways to implement this. Some ideas I had:
Substituting the images rendered for each eye with my custom camera images.
Here I have the following problems: I don't know how to actually replace the images that are rendered to the screen, let alone per eye, and I don't know how to afterwards show some models overlaid on top of the image (I would assume by using the stencil buffer?).
Placing two planes in front of the camera with custom images rendered onto them (see the sketch below).
In this case, I'm not sure about the overall comfort of the user experience, as the planes would most likely be placed really close, so that each eye sees only its own plane and not the other. It seems like this might strain your eyes, because they would converge on something that is really close to you.
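For reference, a minimal Unity sketch of that two-plane idea (the "LeftEye"/"RightEye" layer names, and feeding each quad its own texture, are my assumptions for illustration):

    using UnityEngine;

    // Sketch of the two-plane idea: one textured quad per eye, each visible
    // to only one of two stereo cameras via layer culling. The layers
    // "LeftEye" and "RightEye" must exist in the project settings.
    public class PerEyeImagePlanes : MonoBehaviour
    {
        public Camera leftCam;     // stereoTargetEye set to Left
        public Camera rightCam;    // stereoTargetEye set to Right
        public Texture leftImage;  // your custom left-eye camera image
        public Texture rightImage; // your custom right-eye camera image

        void Start()
        {
            int left = LayerMask.NameToLayer("LeftEye");
            int right = LayerMask.NameToLayer("RightEye");

            // Each camera sees its own layer and not the other eye's.
            leftCam.cullingMask = (leftCam.cullingMask | (1 << left)) & ~(1 << right);
            rightCam.cullingMask = (rightCam.cullingMask | (1 << right)) & ~(1 << left);

            CreateQuad(leftCam, leftImage, left);
            CreateQuad(rightCam, rightImage, right);
        }

        static void CreateQuad(Camera cam, Texture img, int layer)
        {
            var quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
            quad.layer = layer;
            quad.transform.SetParent(cam.transform, false);
            quad.transform.localPosition = Vector3.forward * 0.5f; // just past the near plane
            // Scale the quad so it fills the eye's frustum at that distance.
            quad.GetComponent<Renderer>().material.mainTexture = img;
        }
    }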
Somehow I haven't found a project that tries to achieve something like this, especially with all the Windows Mixed Reality content polluting the search results.
You can use Vuforia's Digital Eyewear support; here is the documentation for it.
There is also a simple tutorial on YouTube.

3D AR Markers with Project Tango

I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough, the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 pro and are trying to figure out if Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location; however, the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images like Vuforia.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking (see the sketch below). That definitely sounds worth testing.
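For what it's worth, in the Vuforia Unity API of that era, Extended Tracking was switched on per target, roughly like this. This is a sketch from memory of the pre-7.x API (newer Vuforia versions replaced it with the Positional Device Tracker), so treat the exact calls as assumptions and verify them against your SDK version:

    using UnityEngine;
    using Vuforia;

    // Sketch: enable Extended Tracking on a target so its pose keeps
    // updating from device motion after the marker leaves the camera view.
    // Pre-7.x Vuforia API from memory; verify against your SDK version.
    public class EnableExtendedTracking : MonoBehaviour
    {
        void Start()
        {
            var behaviour = GetComponent<ImageTargetBehaviour>();
            var target = behaviour.Trackable as ObjectTarget;
            if (target == null || !target.StartExtendedTracking())
                Debug.LogWarning("Extended Tracking could not be started.");
        }
    }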
Option B) Tango
Tango doesn't natively support markers other than ARTags and QR codes.
It also doesn't cope well with the area-learnt scene moving. If your 3D-printed objects stay stationary, you can scan an ADF and should get good-quality tracking, with a little, but not too much, drift.
However, moving those 3D-printed objects will definitely throw the tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with ARMarkers using Tango's ARMarker detection (unsure: is that what you tried already?). If that approach doesn't work, I think your only Tango option is to add more features, lighting, etc. to the space to make the tracking more solid.
Overall, natural-feature tracking with Vuforia (or marker tracking, for robustness) sounds more suited to what I think your project is doing, as users will mostly be looking at the ARTag/NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.

AR Overlay Accuracy in Google Project Tango

I am experimenting with overlaying augmented reality objects over a pass-through image from the rear camera in Unity.
Has anyone experimented with overlaying objects with accurate tracking? I've tweaked the movement scale to get somewhat decent results, but rotation is still not accurate and drift is a big issue.
I've had good luck with the augmented reality sample that ships with the latest Tango. In my experience it does work the way you speculated: if you add items to the Unity scene, they are synced to the motion detected by the device.
I believe the tracking and syncing functions have improved since you originally asked this question; I've noticed an improvement since I got my Tango devkit a month or so ago, and there was an update a week or so later that brought an immediate improvement.
I have found that some scenes track better than others; it seems to help for there to be additional scenery to track. My workspace, a fairly cluttered apartment, tracks well, but the neighboring identical apartment unit, which is currently vacant and empty, does not track as well. That could also be a product of the blinds hanging in my unit (and not in the vacant unit), filtering out additional infrared.
I'm experimenting with placing 3D objects over the real-time input from the Tango color camera.
One problem here is that the hardware color camera points in a (strange) direction, and I haven't been able to get the direction vector from the API so far. Your virtual camera needs this rotation to render 3D objects properly.
There are augmented reality examples in Tango's Unity plugin:
https://developers.google.com/tango/apis/unity/unity-simple-ar
They solve this problem with a matrix that rotates the 3D camera. It can be found in the Unity script "TangoARPoseController" (C#), which, when attached to a Unity camera, rotates it so that it looks at the scene in the right direction. The matrix is obtained in the method "SetCameraExtrinsics" of that script.
Unfortunately, when I apply the matrix to my Unity scene it does not produce a perfect overlay (actually it's quite bad), though I have other sources of position input which may be the problem here.
So far I'm not sure whether the matrix used in the examples is good enough for accurate AR overlays, or whether it is only suitable for demonstration purposes; either way, it should be a good starting point for further investigation.
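For anyone digging into this, the pattern in that script boils down to composing the per-frame device pose with the fixed device-to-color-camera extrinsics and posing the Unity camera from the result. A simplified sketch follows (not the actual TangoARPoseController code; both input matrices are placeholders you would fill from the Tango API):

    using UnityEngine;

    // Simplified sketch of what TangoARPoseController-style code does:
    // compose the device pose with the fixed color-camera extrinsics,
    // then pose the Unity camera from the resulting matrix.
    public class ColorCameraPose : MonoBehaviour
    {
        public Matrix4x4 worldFromDevice;  // per-frame device pose (placeholder)
        public Matrix4x4 deviceFromCamera; // fixed extrinsics (placeholder)

        void LateUpdate()
        {
            Matrix4x4 worldFromCamera = worldFromDevice * deviceFromCamera;
            transform.position = worldFromCamera.GetColumn(3);
            transform.rotation = Quaternion.LookRotation(
                worldFromCamera.GetColumn(2),  // camera forward (z column)
                worldFromCamera.GetColumn(1)); // camera up (y column)
        }
    }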
Are we talking about displaying the 'webcam' in the background, as opposed to a skybox?
Take a look at my GhostHunter repo. It includes a shader and a script for displaying the rear-facing camera 'behind' the gameplay objects (like a skybox). It should be usable with Tango, and it is better than the 'display on a mesh' technique I've seen others use.
https://github.com/NVentimiglia/Augmented-Reality-Ghost-Hunter
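The gist of the technique, for anyone who doesn't want to dig through the repo: draw the camera feed on a background layer that everything else renders over, rather than mapping it onto a mesh placed in the scene. A rough sketch of one way to wire that up, with a WebCamTexture standing in for the Tango color feed and a "Background" layer assumed to exist in the project settings:

    using UnityEngine;

    // Rough sketch of a 'camera feed as background' setup: a dedicated
    // background camera draws one textured quad first, and the main camera
    // clears depth only, so gameplay objects render on top of the feed.
    public class BackgroundFeed : MonoBehaviour
    {
        void Start()
        {
            var feed = new WebCamTexture(); // stand-in for the Tango color feed
            int bgLayer = LayerMask.NameToLayer("Background");

            // Background camera renders first and sees only the quad.
            var bgCam = new GameObject("BackgroundCamera").AddComponent<Camera>();
            bgCam.depth = Camera.main.depth - 1;
            bgCam.cullingMask = 1 << bgLayer;

            // Main camera draws over the feed instead of clearing to a skybox.
            Camera.main.clearFlags = CameraClearFlags.Depth;
            Camera.main.cullingMask &= ~(1 << bgLayer);

            // One quad carrying the feed, parented to the background camera.
            var quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
            quad.layer = bgLayer;
            quad.transform.SetParent(bgCam.transform, false);
            quad.transform.localPosition = Vector3.forward * 2f;
            quad.GetComponent<Renderer>().material.mainTexture = feed;
            feed.Play();
        }
    }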