Building a custom Mixed Reality (Augmented Reality) setup in Unity - unity3d

I'm tasked with developing an application that emulates augmented reality inside a virtual reality application. We are using Google Cardboard (Google VR) and want to show the camera images to the user (never mind the actual camera setup; assume I already have the images).
I'm wondering about ways to implement it. Some ideas I had:

1. Substituting the images rendered for each eye with my custom camera images.
Here I have the following problems: I don't know how to actually replace the images that are rendered to the screen, let alone per eye, nor how to afterwards show some models overlaid on top of the image (I would assume by using the stencil buffer?).

2. Placing two planes in front of the camera with the custom images rendered onto them.
In this case, I'm not sure about the overall comfort of the user experience, as the planes would most likely be placed very close, so each eye only sees its own plane and not the other. It seems like this might strain the eyes, because they would have to converge on something that is very close to them.
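To make the second idea concrete, this is roughly what I have in mind. It is only a sketch: it assumes a rig that exposes separate left/right eye cameras (as older Google VR SDK versions did), and the layer names "LeftEyeImage" and "RightEyeImage" are placeholders I made up, not anything from the SDK.

```csharp
using UnityEngine;

// Rough sketch of idea 2: one textured quad per eye, hidden from the other
// eye via layers + culling masks. Assumes a rig with separate left/right eye
// cameras and that layers named "LeftEyeImage" and "RightEyeImage" exist in
// the project settings (both are assumptions, not SDK names).
public class PerEyeCameraFeed : MonoBehaviour
{
    public Camera leftEyeCamera;   // assumed to exist in the rig
    public Camera rightEyeCamera;
    public Texture leftImage;      // the camera frames I already have
    public Texture rightImage;
    public float planeDistance = 1.5f;

    void Start()
    {
        int leftLayer = LayerMask.NameToLayer("LeftEyeImage");
        int rightLayer = LayerMask.NameToLayer("RightEyeImage");

        // Each eye camera renders everything except the other eye's quad.
        leftEyeCamera.cullingMask &= ~(1 << rightLayer);
        rightEyeCamera.cullingMask &= ~(1 << leftLayer);

        CreateQuad(leftEyeCamera, leftImage, leftLayer);
        CreateQuad(rightEyeCamera, rightImage, rightLayer);
    }

    void CreateQuad(Camera eye, Texture image, int layer)
    {
        GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        quad.layer = layer;
        quad.transform.SetParent(eye.transform, false);
        quad.transform.localPosition = Vector3.forward * planeDistance;

        // Scale the quad so it roughly fills the eye's view at this distance.
        float height = 2f * planeDistance * Mathf.Tan(eye.fieldOfView * 0.5f * Mathf.Deg2Rad);
        quad.transform.localScale = new Vector3(height * eye.aspect, height, 1f);

        Renderer r = quad.GetComponent<Renderer>();
        r.material = new Material(Shader.Find("Unlit/Texture"));
        r.material.mainTexture = image;
    }
}
```

Because the quads are ordinary scene geometry, any models placed between the camera and the quads would simply render in front of the feed, so this variant might not need stencil tricks at all.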
Somehow I haven't found a project that tries to achieve something like this, especially with all the Windows Mixed Reality material polluting the search results.

You can use Vuforia Digital Eyewear; here is the documentation for it.
And a simple tutorial on YouTube.

Related

Is it possible to add distortion to the display in the HMD with OpenVR?

I'm trying to change the barrel distortion coefficients for the HTC Vive to create a distortion in the HMD. Is OpenVR the best way to do this?
The only thing I can suggest to you at this point is to search for OculusRiftEffect.
This is an old plugin for THREE.js that is now useless in normal use, because it showed the deformed (pre-distorted) view directly on your screen. In most applications you don't want that, but you might want to show exactly that to the students. The example was hardcoded to the lenses of the Oculus Rift DK2 (or the DK1 if you uncomment some stuff inside), but the optics don't differ that much, and the effect should be even more visible.
It has been removed from the current version, so check out older THREE.js revisions or some stale demos on the internet and you'll find something. Look around three years back.
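If you just need the warp itself rather than that particular plugin, the distortion these lens effects apply is usually the standard radial polynomial model. Here is a minimal sketch of it; the k1/k2 coefficients are made-up placeholders rather than values for any real headset, and a real implementation would apply this per fragment in a shader over the rendered eye texture.

```csharp
using UnityEngine;

// Minimal sketch of the radial ("barrel") distortion such lens effects
// typically apply. k1/k2 are placeholder coefficients, not values taken
// from any particular headset.
public static class BarrelDistortion
{
    // uv in [0,1]^2, distortion centred at (0.5, 0.5).
    public static Vector2 Distort(Vector2 uv, float k1 = 0.22f, float k2 = 0.24f)
    {
        Vector2 centered = uv - new Vector2(0.5f, 0.5f);
        float r2 = centered.sqrMagnitude;
        // Classic polynomial model: r' = r * (1 + k1*r^2 + k2*r^4)
        float scale = 1f + k1 * r2 + k2 * r2 * r2;
        return centered * scale + new Vector2(0.5f, 0.5f);
    }
}
```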

CSS3D StereoEffect creating dual non-synced webpages

This project is a combination of a few things I've found, first embedding webpages in Three.js:
http://adndevblog.typepad.com/cloud_and_mobile/2015/07/embedding-webpages-in-a-3d-threejs-scene.html
and the second is the custom CSS 3D Renderer I found on stackoverflow:
Three.js StereoEffect cannot be applied to CSS3DRenderer
The effect is almost exactly what I wanted, except that instead of simply redrawing the output from one side to the other, it loads two separate instances, which sort of defeats the point of going VR...
Any ideas? Here's the file:
https://drive.google.com/open?id=1UmXmdgyhZkbeuZlCrXFUXx-yYEKtLzFP
My goal was to render the Ace cloud editor in a VR environment using the stereo-effect algorithm (which seems like it could be a fun new way to develop code if you had a wireless keyboard/trackpad with a VR headset, locking the camera in one location of course, but still needing the mirrored view for the lenses)...
https://www.ebay.com/itm/Wireless-Bluetooth-Keyboard-with-Touchpad-for-iOS-Android-Smart-phone-Tablet-PC/112515393899?_trkparms=aid%3D222007%26algo%3DSIM.MBE%26ao%3D1%26asc%3D20161006002618%26meid%3D50ca4e61c27345df85a8461fb1a0e6d5%26pid%3D100694%26rk%3D7%26rkt%3D30%26sd%3D222631353709&_trksid=p2385738.c100694.m4598
https://www.walmart.com/ip/ONN-Virtual-Reality-Headset-White/187088616?wmlspartner=wlpa&selectedSellerId=0

3D AR Markers with Project Tango

I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough, the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 Pro and are trying to figure out whether Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate the device's trajectory, so even when the marker is lost it will continue to render the scene quite accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, by letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location; however, the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images as Vuforia does.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. This definitely sounds worth testing.
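If you'd rather enable it from code than via the inspector checkbox, older Vuforia releases (around the 6.x era) exposed StartExtendedTracking() on the trackable. The snippet below assumes that older API and may not match newer Vuforia versions, so treat it as a sketch only.

```csharp
using UnityEngine;
using Vuforia;

// Sketch: enable Extended Tracking from code on an Image Target.
// Assumes the older Vuforia API (circa 6.x) where trackables exposed
// StartExtendedTracking(); in current versions this may differ or simply
// be a checkbox on the ImageTargetBehaviour instead.
public class EnableExtendedTracking : MonoBehaviour
{
    void Start()
    {
        var target = GetComponent<ImageTargetBehaviour>();
        if (target != null && target.ImageTarget != null)
        {
            bool started = target.ImageTarget.StartExtendedTracking();
            Debug.Log("Extended tracking started: " + started);
        }
    }
}
```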
Option B) Tango
Tango doesn't natively support markers other than AR tags and QR codes.
It also doesn't support the area-learnt scene moving (much). If your 3D-printed objects stay stationary, you could scan an ADF and should get good-quality tracking; if all the objects stay still, you should see a little, but not too much, drift.
However, if you move those 3D-printed objects, it will definitely throw the tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with AR markers using Tango's ARMarker detection (unsure - is that what you tried already?). If that approach doesn't work, I think your only Tango option is to add more features, lighting, etc. to the space to make the tracking more solid.
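Whichever detection route you pick, the drift-correction step described in the question boils down to re-anchoring the virtual content whenever the marker is reacquired. Here is a Unity-only sketch of that idea; how markerPose is obtained is left to whatever detection API you settle on, so the callback name and fields are assumptions.

```csharp
using UnityEngine;

// Sketch of the drift-correction idea: whenever the marker is reacquired,
// blend the virtual scene root back onto the marker's measured pose.
// Obtaining markerPose (Tango marker detection, Vuforia, etc.) is up to
// the detection API you end up using.
public class MarkerAnchor : MonoBehaviour
{
    public Transform sceneRoot;   // parent of all virtual exhibit content
    public float blendSpeed = 5f; // higher = snappier correction

    Pose? lastMarkerPose;         // filled in by your detection callback

    public void OnMarkerDetected(Pose markerPose)
    {
        lastMarkerPose = markerPose;
    }

    void Update()
    {
        if (!lastMarkerPose.HasValue) return; // rely on motion tracking only

        // Blend towards the measured pose to avoid a visible jump.
        Pose target = lastMarkerPose.Value;
        float t = Time.deltaTime * blendSpeed;
        sceneRoot.position = Vector3.Lerp(sceneRoot.position, target.position, t);
        sceneRoot.rotation = Quaternion.Slerp(sceneRoot.rotation, target.rotation, t);
    }
}
```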
Overall, natural feature tracking with Vuforia (or marker tracking, for robustness) sounds better suited to what I think your project is doing, as users will mostly be looking at the AR tag / NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.

Tango Predefined Objects

I'm somewhat familiar with Tango and Unity. I have worked through the examples and can get them to work correctly. I have seen people doing AR-type examples where they have their custom objects in an area to interact with; another example would be directions, where you follow a line to a destination.
The one thing I cannot figure out is how to precisely place a 3D object into a scene. How are people getting that data so they can place it in the correct location within Unity? I have an area set up, and the AR demo seems promising, but I'm not placing objects with the tap of a finger. What I am looking for is that when people walk by, my 3D object will already be there and they can interact with it. Any ideas? I feel like I've been searching everywhere with little luck finding an answer to this question.
In my project, I have a specific space the user will always be in - so I place things in the (single room) scene when I compile.
I create an ADF using the provided apps, and then my app has a mode where it does the 3D reconstruction and saves off the mesh.
I then load the mesh into my Unity scene (I have to rotate it by 180° around the Y axis because of how I am saving the .obj files).
You now have a guide letting you place objects exactly where you want them, and a nice environment to build up your scene.
I disable the mesh before I build. When Tango localises, your Unity content matches up with the Tango world space.
If you want to place objects programmatically, you can place them in scripts using Instantiate.
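A minimal example of that follows; the class name, the prefab and the position list are all placeholders you would fill in from your own scene.

```csharp
using UnityEngine;

// Sketch: place prefabs at known positions (e.g. read off the scanned mesh
// in the editor, or loaded from a file of recorded marker positions).
public class ExhibitPlacer : MonoBehaviour
{
    public GameObject exhibitPrefab;   // the object visitors interact with
    public Vector3[] positions;        // filled in the inspector using the mesh guide

    void Start()
    {
        foreach (Vector3 p in positions)
        {
            // Positions are in the same world space the ADF localises into,
            // so once Tango localises, these line up with the real room.
            Instantiate(exhibitPrefab, p, Quaternion.identity, transform);
        }
    }
}
```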
I also sometimes have my app place markers with a touch, like in the examples, and record the positions to a file, which I then use to place objects precisely... but having a good mesh loaded into your scene is really the nicest way I've found.

scattering effect

I have gone through all my resources, but I am not getting a smooth scattering effect for an image. I am able to zoom it, and I have scattered it, but it is not as smooth as I want. I just want to tap a button so that one image zooms and the other image scatters.
I want the image to scatter as in the link given below. Is this possible on the iPhone?
http://www.touchmagix.com/templates/diamond.htm
This is a whole lot of graphics work. If you intend to achieve this with the iPhone SDK, start learning about CALayer in detail and then try to manipulate the movements of the different objects, which will require a lot of coding logic.
My suggestion would be to go for Cocos2D and try the different classes and APIs present there. In Cocos2D you can create a sprite that acts as an actual physical object and apply the laws of physics to it (through its Box2D/Chipmunk integration). Say you create a ball and push it: the ball will move in the direction of the push and bounce back if it hits any other physical object, and you don't have to write the physics code for this yourself; Cocos2D takes care of all these things.
Try it.