I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene; this is not something that can be changed, as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object in order to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough; the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 Pro and are trying to figure out whether Tango can improve on this solution. If I understand correctly, the advantage of Tango is that it can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location; however, the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images like Vuforia does.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. That definitely sounds worth testing.
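For reference, here is a hedged sketch of how extended tracking was enabled per target in the pre-7 Vuforia Unity API (Vuforia 7+ folded this into its device tracker); the exact call may differ with your version, so treat it as a starting point rather than tested code:

    // Hedged sketch, assuming the pre-7 Vuforia Unity API; in Vuforia 7+
    // extended tracking was replaced by the positional device tracker.
    using UnityEngine;
    using Vuforia;

    public class EnableExtendedTracking : MonoBehaviour
    {
        void Start()
        {
            var target = GetComponent<ImageTargetBehaviour>();
            if (target != null && target.ImageTarget != null)
            {
                // Keep estimating the pose from device motion after the
                // marker itself is lost from view.
                target.ImageTarget.StartExtendedTracking();
            }
        }
    }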
Option B) Tango
Tango doesn't natively support markers other than ARTags and QR codes.
It also doesn't cope with the area-learnt scene moving (much). If your 3D-printed objects stay stationary, you can scan an ADF and should get good-quality tracking, with a little but not too much drift.
However, if you move those 3D-printed objects around, it will definitely throw the tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with AR markers using Tango's ARMarker detection (not sure: is that what you tried already?). If that approach doesn't work, I think your only Tango option is to add more features, lighting, etc. to the space to make the tracking more solid.
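If you do try the ARMarker route, here is a sketch from memory of the marker-detection example that ships with the Tango Unity SDK; the exact type and field names may differ between SDK versions, so check against the example before relying on it:

    using System.Collections.Generic;
    using Tango;
    using UnityEngine;

    public class ExhibitMarkerLocator : MonoBehaviour, ITangoVideoOverlay
    {
        private const double MARKER_SIZE_M = 0.14;  // printed tag edge length
        private List<TangoSupport.Marker> m_markers = new List<TangoSupport.Marker>();

        public void OnTangoImageAvailableEventHandler(
            TangoEnums.TangoCameraId cameraId, TangoUnityImageData imageBuffer)
        {
            // Detect ARTags in the current color-camera frame.
            if (TangoSupport.DetectMarkers(imageBuffer, cameraId,
                    TangoSupport.MarkerType.ARTAG, MARKER_SIZE_M, m_markers)
                && m_markers.Count > 0)
            {
                // Anchor the exhibit's virtual content at the detected tag pose.
                transform.position = m_markers[0].m_translation;
                transform.rotation = m_markers[0].m_orientation;
            }
        }
    }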
Overall, natural feature tracking with Vuforia (or marker tracking, for robustness) sounds better suited to what I think your project is doing, as users will mostly be looking at the ARTag/NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.
I'm making an augmented reality app with Unity and Mapbox for both iOS and Android. I have data sets that I am using to create markers in the real world when someone uses the app. I collected JSON files, converted them to GeoJSON files, and then made a custom map in Mapbox Studio with these 4 different GeoJSON files. Basically, I want the markers from the datasets I collected to show up in the real world; I am not sure how to get them to show up in the real world rather than on building prefabs. Here is an example of my custom map made in Mapbox: each color shows a different category of markers, and there are four categories.
Here is an example of what I am referring to.
In this image skeletons can show up in the real world.
Here is an example of what I am not referring to.
In this image, droids are placed on a map, but it is not the real world. It is like Pokemon Go, where the map is generated from your location but you don't actually see the real world while you are playing.
I already have my Unity project set up and this is the final step, but I am having issues getting the markers to show up in the real world. So far, tutorials only show how to get it to reflect something like Pokemon Go.
You will have one scene with a stationary Camera. Your code will monitor the Mapbox data in Update(), constantly passing the current GPS position and receiving your list of markers/points of interest. Once you detect that the user's GPS position is within a certain distance of the center of a point of interest, you can simply spawn skeletons at random positions in a sphere around the Camera's transform position (see https://docs.unity3d.com/ScriptReference/Random-insideUnitSphere.html). Keep track of the spawned skeletons, destroy them once the user leaves the area, and have some way of making sure you only spawn them once per area.
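A minimal sketch of that loop; the GetUserGps()/GetPois()/DistanceMetres() helpers are placeholders for whatever your Mapbox queries return, not real API, so only the spawning pattern is the point here:

    using System.Collections.Generic;
    using UnityEngine;

    public class SkeletonSpawner : MonoBehaviour
    {
        public GameObject skeletonPrefab;
        public float triggerDistanceMetres = 50f;
        public float spawnRadius = 5f;
        public int skeletonsPerArea = 3;

        // Spawned skeletons per point of interest, so each area spawns once.
        private readonly Dictionary<string, List<GameObject>> m_spawned =
            new Dictionary<string, List<GameObject>>();

        void Update()
        {
            Vector2 userGps = GetUserGps();             // placeholder
            foreach (PointOfInterest poi in GetPois())  // placeholder
            {
                bool inside = DistanceMetres(userGps, poi.latLon) < triggerDistanceMetres;
                if (inside && !m_spawned.ContainsKey(poi.id))
                {
                    var list = new List<GameObject>();
                    for (int i = 0; i < skeletonsPerArea; i++)
                    {
                        // Random point in a sphere around the camera.
                        Vector3 pos = Camera.main.transform.position +
                                      Random.insideUnitSphere * spawnRadius;
                        list.Add(Instantiate(skeletonPrefab, pos, Quaternion.identity));
                    }
                    m_spawned[poi.id] = list;
                }
                else if (!inside && m_spawned.TryGetValue(poi.id, out List<GameObject> old))
                {
                    foreach (var go in old) Destroy(go);  // clean up on leaving
                    m_spawned.Remove(poi.id);
                }
            }
        }

        // --- placeholders, not real Mapbox API ---
        private Vector2 GetUserGps() { return Vector2.zero; }
        private IEnumerable<PointOfInterest> GetPois() { yield break; }
        private float DistanceMetres(Vector2 a, Vector2 b) { return float.MaxValue; }
        public class PointOfInterest { public string id; public Vector2 latLon; }
    }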
Your skeletons should have a NavMeshAgent, and you should generate a NavMesh on the ARFoundation plane for them to walk on. In this case the plane is created dynamically, so you may need the dynamic NavMesh components: https://github.com/Unity-Technologies/NavMeshComponents. If you tell the NavMeshAgent to go to a specific point, it will walk to the closest reachable point, so even though the random position from the sphere is in 3D, the skeleton will move or spawn onto the nearest point on the plane; there is no need to convert it to the 2D plane space yourself.
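One way to make that snapping explicit, assuming a NavMeshSurface (from the NavMeshComponents repo above) has already baked the ARFoundation plane:

    using UnityEngine;
    using UnityEngine.AI;

    public static class SkeletonNavigation
    {
        // Send the agent to a random point near 'centre', snapped onto the
        // NavMesh baked over the ARFoundation plane.
        public static void SendToRandomPoint(NavMeshAgent agent, Vector3 centre, float radius)
        {
            Vector3 random = centre + Random.insideUnitSphere * radius;
            NavMeshHit hit;
            // Project the 3D point onto the nearest walkable position.
            if (NavMesh.SamplePosition(random, out hit, radius, NavMesh.AllAreas))
            {
                agent.SetDestination(hit.position);
            }
        }
    }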
Your AR view, both the tracking of the camera position/angle and the generation of a plane representing the ground, will be handled by ARFoundation, and it is simple to add the basic functionality. They have a prefab that already includes the camera and generates the plane for you. You can get ARFoundation via the Unity Package Manager. It will work with many different types of devices.
You should start with a cheap Android phone or tablet, even if you own an iPhone, because it's easier to load the APK and debug/develop your app via an Android build.
This is a simplification. I recommend using Singletons, ScriptableObjects, Object Pooling, and plenty of other Unity paradigms that would help you here, but as another user pointed out, you may want to spend time learning Unity, ARFoundation, and Mapbox, and ask more specific programming questions when you are ready.
I am working on a Projection Mapping Project and I am prototyping in Unity 3D. I have a cube like object with a 3D terrain and characters in it.
To recreate the 3D perspective and feel, I am using two projectors which will project onto a real-world object that is exactly like the Unity object. In order to do this, I need to extract 2D views from the shape in Unity.
Is there an easy way to achieve this?
Interesting project. It sounds like you would need multiple displays, one for each projector, each fed by a separate virtual camera in Unity, as documented in Unity's multi-display manual.
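The basic wiring is small; targetDisplay and Display.displays are standard Unity APIs, so a minimal setup looks roughly like this:

    using UnityEngine;

    public class ProjectorDisplays : MonoBehaviour
    {
        public Camera projectorCamA;  // renders one side of the virtual cube
        public Camera projectorCamB;  // renders the other side

        void Start()
        {
            // Secondary displays must be activated explicitly (built players
            // only; in the editor use the Game view's display dropdown).
            for (int i = 1; i < Display.displays.Length; i++)
                Display.displays[i].Activate();

            projectorCamA.targetDisplay = 0;  // first projector
            projectorCamB.targetDisplay = 1;  // second projector
        }
    }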
Not sure if I understood your concept correctly from the description above. If the spectator should be able to walk around the cube, onto which the rendered virtual scene should be projected, it would also be necessary to track a spectator's head/eyes to realize a convincing 3D effect. The virtual scene would need to be rendered from the matching point of view in virtual space (works for only one spectator). Otherwise the perspective would only be "right" from one single point in real space.
The effect would also only be convincing with stereo view, either by using shutter glasses or something similar. Shadows are another problem when projecting onto the cube from outside the scene. By using only two projectors, you would also need to correct the perspective distortion when projecting onto multiple sides of the cube at the same time.
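If you do go down the head-tracking route, the standard recipe for rendering from the spectator's point of view onto one projection surface is a head-coupled off-axis frustum (Kooima's "generalized perspective projection"). A sketch for one face of the cube; the tracked head transform is an assumption to be fed by whatever tracker you use (e.g. the Wii Remote technique linked below):

    using UnityEngine;

    public class HeadCoupledCamera : MonoBehaviour
    {
        public Transform trackedHead;      // from your head tracker (assumed)
        public Transform screenLowerLeft;  // corners of one projection face
        public Transform screenLowerRight;
        public Transform screenUpperLeft;
        public Camera cam;

        void LateUpdate()
        {
            Vector3 pa = screenLowerLeft.position;
            Vector3 pb = screenLowerRight.position;
            Vector3 pc = screenUpperLeft.position;
            Vector3 pe = trackedHead.position;

            // Orthonormal basis of the screen plane; vn points at the viewer.
            Vector3 vr = (pb - pa).normalized;
            Vector3 vu = (pc - pa).normalized;
            Vector3 vn = Vector3.Cross(vu, vr).normalized;

            float n = cam.nearClipPlane, f = cam.farClipPlane;
            Vector3 va = pa - pe, vb = pb - pe, vc = pc - pe;
            float d = -Vector3.Dot(va, vn);  // eye-to-screen distance

            // Off-axis frustum extents, scaled back to the near plane.
            float l = Vector3.Dot(vr, va) * n / d;
            float r = Vector3.Dot(vr, vb) * n / d;
            float b = Vector3.Dot(vu, va) * n / d;
            float t = Vector3.Dot(vu, vc) * n / d;

            cam.transform.position = pe;
            cam.transform.rotation = Quaternion.LookRotation(-vn, vu);
            // Matrix4x4.Frustum needs Unity 2017.2+; on older versions build
            // the off-center matrix by hand (see the Camera.projectionMatrix docs).
            cam.projectionMatrix = Matrix4x4.Frustum(l, r, b, t, n, f);
        }
    }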
As an inspiration: There's also this fantastic experiment by Johnny Chung Lee demonstrating a head tracking technique using the Wii Remote, that might be useful in a projection mapping project like yours.
(In order to really solve this problem, it might be best to use AR glasses that have the projectors built in, instead of conventional projectors, together with special projection surfaces that allow for multiple spectators at the same time (like CastAR). But I have no idea if these devices are already on the market... However, I see the appeal of a simple projection mapping without special equipment. In that case it might be possible to move away from a realistic 3D scene and use more experimental/abstract graphics projected onto the cube...)
All the Tango apps and demos I have seen so far have one major limitation: 3D objects are always rendered "on top" of the real-world camera image. They are placed correctly in 3D space, but a real object in front of a virtual object will not occlude it!
Question:
Is it possible to mask 3D objects, or parts of them, in real time with real-world objects that are in front of them?
In theory, the 3D data delivered by the Tango sensors should be sufficient to do this. But I wonder if anyone has done it before, or if there might be performance limitations that make this impossible? Thanks for your advice!
One approach is to use the 3D Reconstruction library (search "Unity How-to Guide: Meshing with Color") to pre-scan the environment, and then use this model to provide depth data when rendering the AR scene. Here's a video of an AR game that appears to use this technique. It's not perfect for sure, but it does sorta work.
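The core of the trick is rendering the pre-scanned mesh invisibly but with depth writes, so real geometry wins the depth test against virtual objects behind it. A minimal sketch; "Custom/DepthOnly" is an assumed shader here (a single pass with ColorMask 0 and ZWrite On, queued just before Geometry), not something shipped with Tango:

    using UnityEngine;

    public class OcclusionMesh : MonoBehaviour
    {
        public Mesh scannedMesh;  // mesh saved out by the 3D Reconstruction scan

        void Start()
        {
            var go = new GameObject("EnvironmentOcclusion");
            go.AddComponent<MeshFilter>().sharedMesh = scannedMesh;
            var r = go.AddComponent<MeshRenderer>();
            // Invisible material that still writes depth, so virtual objects
            // behind real geometry fail the depth test and are hidden.
            r.sharedMaterial = new Material(Shader.Find("Custom/DepthOnly"));
        }
    }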
This question has been asked before.
I'm somewhat familiar with Tango and Unity. I have worked through the examples and can get them to work correctly. I have seen people build AR-type examples where they have their custom objects in an area to interact with; another example would be directions, where you follow a line to a destination.
The one thing I cannot figure out is how to precisely place a 3D object into a scene. How are people getting the data to place objects in the correct location within Unity? I have an area set up, and the AR demo seems promising, but I'm not placing objects with the tap of a finger. What I am looking for is that when users walk by, my 3D object will already be there and they can interact with it. Any ideas? I feel like I've been searching everywhere with little luck finding an answer to this question.
In my project, I have a specific space the user will always be in, so I place things in the (single-room) scene when I compile.
I create an ADF using the provided apps, and then my app has a mode where it runs the 3D Reconstruction and saves off the mesh.
I then load the mesh into my Unity scene (I have to rotate it by 180° around the Y axis because of how I save the .obj files).
You now have a guide letting you place objects exactly where you want them, and a nice environment to build up your scene.
I disable the mesh before I build. When Tango localises, your Unity content matches up with the Tango world space.
If you want to place objects programmatically, you can place them in scripts using Instantiate.
I also sometimes have my app place markers with a touch, like in the examples, and record the positions to a file, which I then use to place objects precisely (see the sketch below). But having a good mesh loaded into your scene is really the nicest way I've found.
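For completeness, a sketch of the programmatic route; LoadRecordedPositions() is a placeholder for however you serialised the touch-placed positions, and the localisation callback is assumed to be wired up to your Tango relocalisation event:

    using UnityEngine;

    public class PlaceSavedObjects : MonoBehaviour
    {
        public GameObject prefab;
        private bool m_placed;

        // Call this once Tango has localised against the ADF, so the recorded
        // positions line up with the current world space.
        public void OnTangoLocalised()
        {
            if (m_placed) return;
            foreach (Vector3 pos in LoadRecordedPositions())  // placeholder
            {
                Instantiate(prefab, pos, Quaternion.identity);
            }
            m_placed = true;
        }

        private Vector3[] LoadRecordedPositions() { return new Vector3[0]; }
    }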
I am experimenting with overlaying augmented reality objects over a pass-through image from the rear camera in Unity.
Has anyone experimented with overlaying objects with accurate tracking? I've tweaked the movement scale to get somewhat decent results, but rotation is still not accurate and drift is a big issue.
I've had good luck with the augmented reality sample that ships with the latest Tango. In my experience it works the way you speculated: if you add items to the Unity scene, they are synced to the motion detected by the device.
I believe the tracking and syncing functions have improved since you originally asked this question; I've noticed an improvement since I got my Tango devkit a month or so ago. There was an update a week or so later that brought an immediate improvement.
I have found that some scenes track better than others; it seems to help for there to be additional scenery for it to track. In my workspace, a fairly cluttered apartment, it tracks well, but in the neighboring identical apartment unit, which is currently vacant and empty, it does not track as well. That could also be a product of the blinds hanging in my unit that are not hanging in the vacant unit, filtering out additional infrared.
I'm experimenting with placing 3D objects over the real time input from the Tango color camera.
One problem here is that the hardware color camera points in a (strange) direction. So far I haven't been able to get the direction vector from the API. Your virtual camera for rendering the scene needs this rotation to render 3D objects properly.
There are augmented reality examples in Tango's Unity plugin:
https://developers.google.com/tango/apis/unity/unity-simple-ar
They solve this problem with a matrix that rotates the 3D camera. It can be found in the Unity script "TangoARPoseController" (C#) which, when attached to a Unity camera, rotates it so that it looks at the scene in the right direction. The matrix is obtained in the method "SetCameraExtrinsics" of that script.
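From memory of the Tango Unity examples, the extrinsics composition boils down to chaining two IMU-relative poses; a hedged sketch, where GetExtrinsicsMatrix() stands in for the real PoseProvider.GetPoseAtTime() queries:

    using UnityEngine;

    public class ColorCameraExtrinsics : MonoBehaviour
    {
        void Start()
        {
            // Both extrinsics are queried relative to the IMU frame (base
            // frame IMU, target frames DEVICE and CAMERA_COLOR).
            Matrix4x4 imu_T_device = GetExtrinsicsMatrix("DEVICE");        // placeholder
            Matrix4x4 imu_T_camera = GetExtrinsicsMatrix("CAMERA_COLOR");  // placeholder

            // Chain them: device_T_camera = inverse(imu_T_device) * imu_T_camera.
            // Applying this constant offset to the motion-tracked device pose
            // each frame gives the pose of the color camera, which is what
            // the virtual camera must follow for an accurate overlay.
            Matrix4x4 device_T_camera = imu_T_device.inverse * imu_T_camera;
        }

        private Matrix4x4 GetExtrinsicsMatrix(string targetFrame)
        {
            return Matrix4x4.identity;  // placeholder for the real pose query
        }
    }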
Unfortunately, when I apply the matrix to my Unity scene it does not produce a perfect overlay (actually it's quite bad). But I have other sources of position input, which may be the problem here.
However, so far I'm not sure whether the matrix used in the examples is good enough for accurate AR overlays; maybe it is only suitable for demonstration purposes. But it should be a good starting point for further investigation.
Are we talking about displaying the 'webcam' in the background, as opposed to a skybox?
Take a look at my GhostHunter repo. It includes a shader and a script for displaying the rear-facing camera 'behind' the gameplay objects (like the skybox). It should be usable with Tango, and it is better than the 'display on a mesh' technique I've seen others use.
https://github.com/NVentimiglia/Augmented-Reality-Ghost-Hunter
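For anyone who just wants the general idea without reading the repo: the technique amounts to drawing the camera texture as a full-screen quad from a background camera that renders before the main one (on Tango you would feed in the Tango camera texture rather than WebCamTexture). A rough sketch, not the GhostHunter code itself:

    using UnityEngine;

    // Attach to a secondary camera with depth below the main camera, and set
    // the main camera's clear flags to "Depth Only" so this quad shows through.
    public class CameraFeedBackground : MonoBehaviour
    {
        private WebCamTexture m_feed;
        private Material m_material;

        void Start()
        {
            m_feed = new WebCamTexture();
            m_feed.Play();
            // Unlit texture; a custom shader (as in the repo) can instead
            // force the feed behind the scene via its render queue.
            m_material = new Material(Shader.Find("Unlit/Texture"));
            m_material.mainTexture = m_feed;
        }

        void OnPostRender()
        {
            // Draw the feed as a full-screen quad in normalized clip space.
            GL.PushMatrix();
            GL.LoadOrtho();
            m_material.SetPass(0);
            GL.Begin(GL.QUADS);
            GL.TexCoord2(0, 0); GL.Vertex3(0, 0, 0.5f);
            GL.TexCoord2(1, 0); GL.Vertex3(1, 0, 0.5f);
            GL.TexCoord2(1, 1); GL.Vertex3(1, 1, 0.5f);
            GL.TexCoord2(0, 1); GL.Vertex3(0, 1, 0.5f);
            GL.End();
            GL.PopMatrix();
        }
    }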