I am trying to wrap my head around Vuforia's capabilities. I want to make an app which lets me place a 3D object into a camera view and have that 3D object stick to the world. I've been learning how to use Vuforia in Unity3D, and Vuforia seems to be slightly capable of this, but is severely limited by its craving for "Targets". It doesn't seem to be able to do much if I don't give it some sort of target.
One workaround I've found is to set the ARCamera's World Center Mode to DEVICE_TRACKING. This lets me place a 3D object into the world (in Unity) and have it overlaid on the camera feed, almost making it seem anchored to the real world. It doesn't work perfectly, though: it tracks properly when I angle the device up/down/left/right (rotation), but it does not seem to track the device's translational motion; that is, when I move the device forward/back/left/right, the overlaid object doesn't get closer/farther, nor does it rotate as I move around it.
Is it possible to get this sort of tracking out of Vuforia, or am I better off switching to something like Google Tango?
The difficulty with setting World Center Mode to CAMERA in Vuforia is that 3D objects apparently rotate around the camera based on its accelerometer/gyroscope changes. This doesn't allow objects to be anchored to the environment; instead, they follow the camera.
Kudan is a good markerless tracking option.
I'm making an augmented reality app with Unity and Mapbox for both iOS and Android. I have data sets that I am using to place markers in the real world when someone uses the app. I collected JSON files, converted them to GeoJSON files, and then made a custom map in Mapbox Studio from these four different GeoJSON files. Basically, I want the markers from the data sets I collected to show up in the real world, not on a map with building prefabs, and I am not sure how to do that. Example of my custom map made in Mapbox Studio: each color shows a different category of markers. There are four categories.
Here is an example of what I am referring to.
In this image, skeletons show up in the real world.
Here is an example of what I am not referring to.
In this image, droids are placed on a map, but it is not the real world. It is like Pokemon Go, where the map is generated from your location but you don't actually see the real world while you are playing.
I already have my Unity project set up and this is the final step, but I am having trouble getting the markers to show up in the real world. So far, tutorials only show how to get it to reflect something like Pokemon Go.
You will have one scene with a stationary Camera. Your code will monitor the Mapbox data in Update(), constantly passing the current GPS position and receiving your list of markers/points of interest. Once you detect that the user's GPS position is within a certain distance of the center of a point of interest, you can simply spawn skeletons at random positions in a sphere around the Camera's transform position (see https://docs.unity3d.com/ScriptReference/Random-insideUnitSphere.html). Keep track of that list, destroy the skeletons once they leave the area, and have some way of making sure you only spawn them once per area - something like the sketch below.
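A minimal sketch of that spawn/despawn loop, assuming a skeletonPrefab reference and a placeholder IsNearPointOfInterest() standing in for your actual Mapbox distance check:

    using System.Collections.Generic;
    using UnityEngine;

    public class SkeletonSpawner : MonoBehaviour
    {
        public GameObject skeletonPrefab;   // hypothetical prefab reference
        public float spawnRadius = 5f;
        public int spawnCount = 3;

        readonly List<GameObject> spawned = new List<GameObject>();
        bool hasSpawnedForThisArea;

        void Update()
        {
            bool near = IsNearPointOfInterest(); // your Mapbox query goes here

            if (near && !hasSpawnedForThisArea)
            {
                for (int i = 0; i < spawnCount; i++)
                {
                    // Random point in a sphere around the camera
                    Vector3 pos = Camera.main.transform.position
                                  + Random.insideUnitSphere * spawnRadius;
                    spawned.Add(Instantiate(skeletonPrefab, pos, Quaternion.identity));
                }
                hasSpawnedForThisArea = true; // only spawn once per area
            }
            else if (!near && hasSpawnedForThisArea)
            {
                foreach (var s in spawned) Destroy(s); // despawn on leaving the area
                spawned.Clear();
                hasSpawnedForThisArea = false;
            }
        }

        bool IsNearPointOfInterest()
        {
            // Placeholder: compare the device GPS position against your
            // Mapbox points of interest and return true within some distance.
            return false;
        }
    }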
Your skeletons should have a NavMeshAgent, and you should generate a NavMesh on the ARFoundation plane for them to walk on. In this case, the plane is probably created dynamically, so you may need the dynamic NavMesh components from https://github.com/Unity-Technologies/NavMeshComponents. If you tell the NavMeshAgent to go to a specific point, it will walk to the closest reachable point - so even though you get a random 3D position in the sphere, the skeleton will move or spawn onto the nearest point on the plane, and there is no need to figure out how to convert it to the 2D plane space yourself. See the sketch below.
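A sketch of that runtime bake plus destination call, assuming a NavMeshSurface component (from the NavMeshComponents repo above) is attached to the generated ground plane:

    using UnityEngine;
    using UnityEngine.AI;

    public class SkeletonNavigation : MonoBehaviour
    {
        public NavMeshSurface groundSurface; // on the generated AR plane
        public NavMeshAgent skeletonAgent;

        public void OnPlaneUpdated()
        {
            // Re-bake whenever ARFoundation grows or changes the plane
            groundSurface.BuildNavMesh();
        }

        public void SendToRandomPoint(Vector3 randomSpherePoint)
        {
            // SetDestination resolves the target to the closest reachable
            // NavMesh point, so a 3D point from Random.insideUnitSphere is fine.
            skeletonAgent.SetDestination(randomSpherePoint);
        }
    }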
Your AR view, both the tracking of the camera position/angle and the generation of a plane representing the ground, will be handled by ARFoundation, and it is simple to add the basic functionality: there is a prefab that already includes the camera and generates the plane for you. You can get ARFoundation via the Unity Package Manager, and it works with many different types of devices. The sketch below lists the minimal pieces.
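For orientation, a hedged sketch of the minimal scene wiring; the names here (ARSessionOrigin and so on) assume an ARFoundation 2.x-4.x era package, and in practice you would add the ready-made components or prefab rather than script them:

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    public class ARSceneChecklist : MonoBehaviour
    {
        public ARSession session;           // drives the underlying AR subsystem
        public ARPlaneManager planeManager; // lives on the ARSessionOrigin

        void OnEnable()  { planeManager.planesChanged += OnPlanesChanged; }
        void OnDisable() { planeManager.planesChanged -= OnPlanesChanged; }

        void OnPlanesChanged(ARPlanesChangedEventArgs args)
        {
            // args.added holds newly detected planes, e.g. the ground the
            // skeletons walk on; a good place to trigger the NavMesh re-bake.
            foreach (var plane in args.added)
                Debug.Log("Detected plane: " + plane.trackableId);
        }
    }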
You should start with a cheap Android phone or tablet, even if you own an iPhone, because it's easier to load the APK and debug/develop your app via an Android build.
This is a simplification. I recommend Singletons, ScriptableObjects, Object Pooling, and plenty of other Unity paradigms that would help you here, but as another user pointed out, you may want to spend time learning Unity, ARFoundation, and Mapbox, and ask more specific programming questions when you are ready.
I need to insert some virtual objects into an indoor environment, but I need the position of these objects to be fixed. I have already tried using markers with Vuforia, but it is complicated and takes time to recognize them. I'm thinking of using Google's ARCore. Does anyone know if this is possible and, if so, how to do it?
I'm using Unity to do this. Can someone help me?
ARCore places the camera relative to the detected plane, so you will need a plane at some point so that the application can locate the camera in the scene.
HelloAR shows how this works; you can test it in the Unity editor and watch how the camera moves around the feature points and the detected plane. A placement sketch in that style follows.
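A minimal sketch following the HelloAR pattern, assuming the ARCore SDK for Unity (GoogleARCore namespace) and a hypothetical objectPrefab: raycast a screen touch against detected planes and parent the content to an Anchor so it stays fixed in the world.

    using GoogleARCore;
    using UnityEngine;

    public class PlaceOnPlane : MonoBehaviour
    {
        public GameObject objectPrefab; // placeholder for your content

        void Update()
        {
            if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
                return;

            TrackableHit hit;
            if (Frame.Raycast(Input.GetTouch(0).position.x, Input.GetTouch(0).position.y,
                              TrackableHitFlags.PlaneWithinPolygon, out hit))
            {
                // The anchor keeps the object locked to the plane as
                // ARCore refines its understanding of the world.
                Anchor anchor = hit.Trackable.CreateAnchor(hit.Pose);
                GameObject placed = Instantiate(objectPrefab, hit.Pose.position, hit.Pose.rotation);
                placed.transform.parent = anchor.transform;
            }
        }
    }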
One solution for your problem may be ARCore's image detection combined with plane detection: you place the image on the floor, and when the image is detected, your objects stay in place while you move around. You will still need a detected plane to move against, not just the image detection; otherwise you will lose the objects once the camera loses sight of the image. Something like the sketch below.
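A hedged sketch of that image-anchoring step, again assuming the ARCore SDK for Unity and a hypothetical contentPrefab:

    using System.Collections.Generic;
    using GoogleARCore;
    using UnityEngine;

    public class ImageAnchoredContent : MonoBehaviour
    {
        public GameObject contentPrefab;
        readonly List<AugmentedImage> images = new List<AugmentedImage>();
        Anchor anchor;

        void Update()
        {
            // Collect images whose tracking state changed this frame
            Session.GetTrackables<AugmentedImage>(images, TrackableQueryFilter.Updated);
            foreach (var image in images)
            {
                if (image.TrackingState == TrackingState.Tracking && anchor == null)
                {
                    // Anchor the content to the image's pose once; plane
                    // tracking then keeps it stable if the image is lost.
                    anchor = image.CreateAnchor(image.CenterPose);
                    Instantiate(contentPrefab, anchor.transform);
                }
            }
        }
    }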
I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough, the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 pro and are trying to figure out if Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location, but the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images like Vuforia.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. This definitely sounds worth testing; a sketch of enabling it from script is below.
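A minimal sketch, assuming a Vuforia 5/6-era Unity API (the namespace and method names vary by version, and the same switch is also exposed as an "Enable Extended Tracking" option on the target's inspector, which may be the easier route):

    using UnityEngine;
    using Vuforia;

    public class EnableExtendedTracking : MonoBehaviour
    {
        void Start()
        {
            // Keep estimating the target's pose from device motion
            // after the marker leaves the camera frame.
            var target = GetComponent<ImageTargetBehaviour>();
            if (target != null && target.ImageTarget != null)
                target.ImageTarget.StartExtendedTracking();
        }
    }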
Option B) Tango
Tango doesn't natively support markers other than ARTags and QR codes.
It also doesn't support the Area Learnt scene moving (much). If your 3D-printed objects stay stationary, you can scan an ADF and should get good-quality tracking; if everything stays still, you should see a little drift, but not too much.
However, if you are moving those 3D-printed objects, that will definitely throw the tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with ARMarkers using Tango's ARMarker detection (unsure - is that what you tried already?). If that approach doesn't work, I think your only Tango option is to add more features/lighting etc. to the space to make the tracking more solid.
Overall, natural feature tracking with Vuforia (or marker tracking, for robustness) sounds better suited to what I think your project is doing, as users will mostly be looking at the ARTag/NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.
Kinematic-based world gets messed up on movement
Hello, I have been developing a humble AR-based game in Unity3D. Until this point, I have been using Vuforia to deploy my scene on a (multi)tracker. However, I have been doing tests with Kudan and I'm quite happy with its tracking performance when using a tracker.
http://i.imgur.com/nTHs6cM.png
My engine is based on collisions by raycasts rather than UnityEngine.Physics (almost everything is kinematic). I have stumbled into a problem when I deploy my 3D environment on a tracker using the Kudan engine: my whole physics setup gets messed up. If the marker is moved, the elements move with it, and the axes seem to change with the marker, but my physics still respond to the old axis orientation. My characters always stand upright along the world Y axis (not the local one inside the tracker). Another issue is that my player's 3D asset keeps switching between "standing" and "falling" states, eventually clipping through and falling below the floor (this is probably due to jitter in the camera detection).
http://i.imgur.com/ROn4uEz.png
One solution that comes to mind is to use a local coordinate system, but I hope there is an alternative, since when I was using Vuforia I did not have to make any such corrections.
Any links or feedback are appreciated.
You could use transform.InverseTransformPoint and transform.InverseTransformDirection combined with Quaternion.LookRotation to get the position and rotation of the Kudan camera relative to the MarkerTransformDriver object. This will allow you to position a camera in world space and keep whatever content you want to augment static at the Unity world origin:
    // Camera pose expressed in the marker's local coordinate system
    Vector3 cameraPos = markerTransform.InverseTransformPoint(kudanCamera.position);
    Quaternion cameraRot = Quaternion.LookRotation(markerTransform.InverseTransformDirection(kudanCamera.forward));
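Wrapped into a component, that might look like the sketch below; kudanCamera, markerTransform, and worldCamera are placeholder references, and mapping the camera's up vector as well (an addition to the two lines above) preserves roll:

    using UnityEngine;

    public class MarkerRelativeCamera : MonoBehaviour
    {
        public Transform kudanCamera;     // camera driven by the Kudan tracker
        public Transform markerTransform; // the MarkerTransformDriver object
        public Transform worldCamera;     // camera rendering your static scene

        void LateUpdate()
        {
            // Express the tracked camera's pose in marker-local space and
            // apply it to a camera in world space; content parented at the
            // Unity origin then stays static with world +Y as up.
            worldCamera.position = markerTransform.InverseTransformPoint(kudanCamera.position);
            worldCamera.rotation = Quaternion.LookRotation(
                markerTransform.InverseTransformDirection(kudanCamera.forward),
                markerTransform.InverseTransformDirection(kudanCamera.up));
        }
    }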
I am trying to make a mobile application that contains an AR (Augmented Reality) mode using Unity3D. I have connected my mobile device to my Unity3D program, and the camera works fine. But when I move the mobile device, the main camera inside the Unity program does not move along the same orbit as the mobile device. Does anyone know how to change or control the orbit of the main Camera in Unity3D?
This could be happening for a number of reasons, for example non-centered pivots or mismatched coordinate systems.
Could you please specify which AR system you are using? As a side note, at work we recently had a project involving Unity3D and Metaio, and it was a nightmare to bend the system to do what we needed, especially when we needed to do a lot of object positioning based on the local coordinate system.
When you refer to the orbit of the camera, I imagine the camera's pivot is somehow offset and the camera is rotating around that offset. Or maybe the camera is a child of the actual GameObject that is controlled by the AR system, in which case this parent node acts as a pivot for the camera.
In the picture below you can see that the camera sits away from that center point; when it rotates, it does so around that center point. In other words, the camera always tries to look at the center point, which gives that feeling of "orbiting" as it moves. A small sketch reproducing this follows the image link.
Here's the link to the image (I can't post pictures yet on this forum -.- )
http://i.stack.imgur.com/fIcY2.png
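For illustration, a small sketch that reproduces the orbiting symptom with an offset child camera (pivot and cam are hypothetical references):

    using UnityEngine;

    public class OrbitDemo : MonoBehaviour
    {
        public Transform pivot; // parent driven by the AR system
        public Transform cam;   // child camera, offset from the pivot

        void Start()
        {
            cam.SetParent(pivot);
            cam.localPosition = new Vector3(0f, 0f, -2f); // the offending offset
        }

        void Update()
        {
            // Rotating the parent makes the child orbit the pivot point.
            pivot.Rotate(0f, 30f * Time.deltaTime, 0f);
            // Fix: zero the camera's localPosition (or re-center the pivot)
            // so the camera rotates in place instead of orbiting.
        }
    }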