How to implement an indoor navigation application using Unity3d on Project Tango?

I'd like to implement an indoor navigation application using Unity3d on Project Tango.
Could anyone share their train of thought on this?
My rough idea is below:
1. Get the whole building mesh with the Tango Constructor app.
2. Import it into Unity3d as an .obj.
3. Bake the whole mesh as a NavMesh.
4. Name and mark all addresses/positions of interest together with the ADF and save them with the NavMesh.
5. Build a UI that receives the start/end addresses and generates the navigation path dynamically.
6. Use AR markers and draw the navigation path onto the floor plane.
Please correct my thoughts and share your experience; I'm a newbie with Unity3d/Tango.

I am doing what you are doing, but without the NavMesh and ADF. What you might want to consider is reducing the poly count of the 3D .obj using a program like MeshLab. I don't know if they fixed this in the Mira release, but previously a 'small' room would yield something like 1.2 million triangles, which I can only assume would slow down your NavMesh quite a bit.
With a NavMesh, generating navigation should be very easy, so I think 1, 2, 3, 5 and 6 are no problem at all.
However, for nr. 4 I have no idea whether it works; this you must explore on your own. Naming/marking an address using the ADF? Are you thinking it will recognize itself in the environment and then provide the address? How will that be saved with the NavMesh? I am sure you will be able to make it work.
Good luck.
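For generating the path and drawing it on the floor (steps 5 and 6), a minimal sketch using Unity's NavMesh API and a LineRenderer could look like this; the start/end transforms and the LineRenderer setup are placeholders for your own UI and markers:

    using UnityEngine;
    using UnityEngine.AI;

    // Minimal sketch: compute a path on the baked NavMesh between two points
    // and draw it on the floor with a LineRenderer.
    public class IndoorPathDrawer : MonoBehaviour
    {
        public Transform startPoint;   // e.g. the device's current position
        public Transform endPoint;     // e.g. the destination picked in the UI
        public LineRenderer line;      // configured to render slightly above the floor

        void Update()
        {
            NavMeshPath path = new NavMeshPath();
            bool found = NavMesh.CalculatePath(startPoint.position, endPoint.position,
                                               NavMesh.AllAreas, path);
            if (found && path.status == NavMeshPathStatus.PathComplete)
            {
                line.positionCount = path.corners.Length;
                line.SetPositions(path.corners);
            }
        }
    }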

Related

How to modify behaviour of a VR controller's pointer

My university colleagues and I are trying to develop a Virtual Reality project for university, using an Oculus headset, in which you can select, click and drag different objects in the scene with a mouse-style pointer. You are supposed to be stationary and move one of the controllers as if it were a mouse. However, we want to modify the behaviour of the controller's pointer to better fit the 3D environment. When no object is selected, we want to interpolate the depth of the cursor according to the nearest objects. There is a paper we were shown in class that we are supposed to draw inspiration from; it achieved this kind of cursor behaviour with a normal mouse, but I can't seem to find any information on how they did it. Our final goal is to compare both ways of managing the scene and assess which one is better. We are using Unity with VRTK, as suggested by our professor, but we can't really seem to find the code that controls how the pointer moves or behaves, and we are kind of lost on where to go. Could someone help with this?
Here is the paper where they talk about it:
https://dl.acm.org/doi/pdf/10.1145/3491102.3501884
So far we have tried creating a simple scene and adding objects with different behaviours, as well as a controller instance, but we only seem to be able to modify the pointer's events and not its specific movement behaviour.
Kind regards and thanks
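One way to prototype the depth-interpolated cursor described above (not taken from the paper) is to spherecast along the controller's forward ray and smooth the cursor's depth toward the nearest hit; all names below are illustrative:

    using UnityEngine;

    // Rough sketch: move the cursor's depth toward the nearest surface found
    // along the controller's forward ray, smoothing the change over time.
    public class DepthAdaptiveCursor : MonoBehaviour
    {
        public Transform controller;   // the tracked VR controller
        public Transform cursor;       // visual cursor placed along the ray
        public float maxDepth = 5f;    // depth used when nothing is hit
        public float smoothing = 8f;   // higher = snappier depth changes

        float currentDepth = 2f;

        void Update()
        {
            Ray ray = new Ray(controller.position, controller.forward);
            float targetDepth = maxDepth;

            // A spherecast gives some tolerance around the ray, so the cursor
            // also reacts to objects that are merely near the pointing direction.
            if (Physics.SphereCast(ray, 0.05f, out RaycastHit hit, maxDepth))
                targetDepth = hit.distance;

            currentDepth = Mathf.Lerp(currentDepth, targetDepth, Time.deltaTime * smoothing);
            cursor.position = ray.GetPoint(currentDepth);
        }
    }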

How to grab the 2D views/textures from a 3D Object in Unity

I am working on a projection mapping project and I am prototyping in Unity 3D. I have a cube-like object with a 3D terrain and characters in it.
To recreate the 3D perspective and feel, I am using two projectors which will project onto a real-world object that is exactly like the Unity object. In order to do this, I need to extract 2D views from the shape in Unity.
Is there an easy way to achieve this?
Interesting project. It sounds like you would need multiple displays, one for each projector, each using a separate virtual camera in Unity, as documented there.
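A minimal sketch of that setup, assuming one camera per projector (camera names are illustrative; additional displays only become active in a build, not in the editor's Game view):

    using UnityEngine;

    // Minimal sketch: drive two projectors as two Unity displays,
    // each fed by its own camera.
    public class ProjectorDisplays : MonoBehaviour
    {
        public Camera projectorCamA;
        public Camera projectorCamB;

        void Start()
        {
            // Secondary displays are inactive until explicitly activated.
            if (Display.displays.Length > 1)
                Display.displays[1].Activate();

            projectorCamA.targetDisplay = 0; // first projector
            projectorCamB.targetDisplay = 1; // second projector
        }
    }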
Not sure if I understood your concept correctly from the description above. If the spectator should be able to walk around the cube, onto which the rendered virtual scene should be projected, it would also be necessary to track a spectator's head/eyes to realize a convincing 3D effect. The virtual scene would need to be rendered from the matching point of view in virtual space (works for only one spectator). Otherwise the perspective would only be "right" from one single point in real space.
The effect would also only be convincing with stereo view, either by using shutter glasses or something similar. Shadows are another problem when projecting onto the cube from outside the scene. With only two projectors, you would also need to correct the perspective distortion when projecting onto multiple sides of the cube at the same time.
As an inspiration: There's also this fantastic experiment by Johnny Chung Lee demonstrating a head tracking technique using the Wii Remote, that might be useful in a projection mapping project like yours.
(In order to really solve this problem, it might be best to use AR glasses instead of conventional projectors, which have the projector built in, and use special projection surfaces that allow for multiple spectators at the same time (like CastAR). But I have no idea if these devices are already on the market... However, I see the appeal of a simple projection mapping without using special equipment. In that case it might be possible to move away from a realistic 3D scene and use more experimental/abstract graphics projected onto the cube...)

3D AR Markers with Project Tango

I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough; the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 pro and are trying to figure out if Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, by letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location; however, the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images as Vuforia does.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. This definitely sounds worth testing.
Option B) Tango
Tango doesn't natively support markers other than AR tags and QR codes.
It also doesn't support the area-learnt scene moving (much). If your 3D-printed objects stay stationary, you could scan an ADF and should get good-quality tracking. If all the objects stay still, you should have a little, but not too much, drift.
However, if you are moving those 3D-printed objects, it will definitely throw that tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with AR markers using Tango's ARMarker detection (unsure: is that what you tried already?). If that approach doesn't work, I think your only Tango option is to add more features/lighting etc. to the space to make the tracking more solid.
Overall, natural feature tracking by Vuforia (or marker tracking for robustness) sounds more suited to what I think your project is doing, as users will mostly be looking at the ARTag/NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.
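Whichever tracker you end up with, the drift correction mentioned in the question amounts to re-aligning the virtual content each time the marker is re-detected. A rough, library-agnostic sketch, assuming you can get the marker's detected pose from Vuforia or Tango's ARMarker detection (all names are illustrative):

    using UnityEngine;

    // Rough sketch: when the physical marker is re-detected, move the virtual
    // content root so its reference marker lines up with the detected pose again.
    public class MarkerRealign : MonoBehaviour
    {
        public Transform contentRoot;      // parent of all virtual exhibit objects
        public Transform markerInContent;  // where the marker sits inside contentRoot

        public void OnMarkerDetected(Pose detectedMarkerPose)
        {
            // Capture the marker's current pose in the virtual scene first.
            Vector3 markerPos = markerInContent.position;
            Quaternion markerRot = markerInContent.rotation;

            // Rigid correction that maps the marker's virtual pose onto the
            // freshly detected real-world pose.
            Quaternion rotFix = detectedMarkerPose.rotation * Quaternion.Inverse(markerRot);
            Vector3 posFix = detectedMarkerPose.position - rotFix * markerPos;

            // Apply the same correction to the whole content root.
            contentRoot.rotation = rotFix * contentRoot.rotation;
            contentRoot.position = rotFix * contentRoot.position + posFix;
        }
    }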

Tango Predefined Objects

I'm somewhat familiar with Tango and Unity. I have worked through the examples and can get them to work correctly. I have seen some people doing AR-type examples where they have custom objects in an area to interact with; another example would be directions where you follow a line to a destination.
The one thing I cannot figure out is how to precisely place a 3D object into a scene. How are people getting the data to place it within Unity in the correct location? I have an area set up, and the AR demo seems promising, but I'm not placing objects with the tap of a finger. What I am looking to do is this: when users walk by, my 3D object will already be there and they can interact with it. Any ideas? I feel like I've been searching everywhere with little luck finding an answer to this question.
In my project, I have a specific space the user will always be in - so I place things in the (single room) scene when I compile.
I create an ADF using the provided apps, and then my app has a mode where it does the 3D reconstruction and saves off the mesh.
I then load the mesh into my Unity scene (I have to rotate it by 180° on the Y axis because of how I am saving the .obj files).
You now have a guide letting you place objects exactly where you want them, and a nice environment to build up your scene.
I disable the mesh before I build. When Tango localises, your Unity content matches up with the Tango world space.
If you want to place objects programmatically, you can place them in scripts using Instantiate.
I also sometimes have my app place markers with a touch, like in the examples, and record the positions to a file, which I then use to place objects specifically... but having a good mesh loaded into your scene is really the nicest way I've found.
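A minimal sketch of that touch-to-place-and-save workflow, assuming a marker prefab and JsonUtility for the file (names and file path are illustrative):

    using System.Collections.Generic;
    using System.IO;
    using UnityEngine;

    // Minimal sketch: spawn a marker where the user taps, record its world
    // position to a JSON file, and re-spawn all saved markers on the next run.
    public class PlacedObjectStore : MonoBehaviour
    {
        [System.Serializable]
        public class SavedPositions { public List<Vector3> positions = new List<Vector3>(); }

        public GameObject markerPrefab;

        SavedPositions saved = new SavedPositions();
        string FilePath => Path.Combine(Application.persistentDataPath, "placed_objects.json");

        void Start()
        {
            if (!File.Exists(FilePath)) return;
            saved = JsonUtility.FromJson<SavedPositions>(File.ReadAllText(FilePath));
            foreach (var p in saved.positions)
                Instantiate(markerPrefab, p, Quaternion.identity);
        }

        // Call this from your touch-handling code with the tapped world position.
        public void Place(Vector3 worldPosition)
        {
            Instantiate(markerPrefab, worldPosition, Quaternion.identity);
            saved.positions.Add(worldPosition);
            File.WriteAllText(FilePath, JsonUtility.ToJson(saved));
        }
    }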

Positioning 3d objects for AR in Unity3d

I'm experimenting with an AR experience in Unity3D. I'd like to place models in my Unity scene and have them show up on top of real-world objects using Tango. I'm using Tango's AugmentedReality scene as a starting point.
Say there is a table in a room and I want a 3D cube to sit on top of it when it is in Tango's view. Do I need to use an ADF file to solve this problem, or is there something else I should be looking into?
Is there some way to test an ADF file locally in my Unity scene? This would be ideal for establishing and debugging the correct positions to place models in my scene.
Just trying to sort everything out.
If you want to keep your virtual objects' positions persistent between different runs of the application, you will need an ADF file to relocalize. Unfortunately, there are no in-editor debug functions for ADFs at the moment, so you will need to create a program to place the objects.
You could take a look at the Experiments/PersistentState example for reference. This example does not use AR; however, it saves object positions with respect to your ADF's origin and keeps them persistent.
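The core idea of storing poses relative to the ADF origin can be sketched like this, assuming some transform in your scene tracks the ADF frame (names are illustrative):

    using UnityEngine;

    // Rough sketch: convert positions into the ADF frame before saving,
    // and back into world space after the device relocalizes.
    public class AdfRelativePose : MonoBehaviour
    {
        public Transform adfOrigin;   // whatever transform tracks the ADF frame

        // World space -> ADF space, for saving.
        public Vector3 ToAdfSpace(Vector3 worldPosition)
        {
            return adfOrigin.InverseTransformPoint(worldPosition);
        }

        // ADF space -> world space, for restoring after relocalization.
        public Vector3 ToWorldSpace(Vector3 adfPosition)
        {
            return adfOrigin.TransformPoint(adfPosition);
        }
    }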