Is it possible with a Google Tango camera to create a situation where my player walks on a table and falls off when he steps over the edge? Has anyone done anything similar, and does anyone have references or ideas on how to do it?
In order to implement the functionality you described, you will need to detect planes in the real world and translate their positions into the Unity scene. The Tango SDK contains a class called TangoPointCloud with several methods for recognizing planes and translating their positions into Unity scene coordinates. Knowing the positions of the table and the floor, you should be able to implement the feature you want. In my case, TangoPointCloud helped me find the walls of a room and their positions in Unity scene units.
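For reference, here is a minimal sketch of that approach, assuming the TangoPointCloud prefab from the Tango Unity examples is in your scene and assigned in the Inspector (m_pointCloud and m_playerPrefab are placeholder names):

```csharp
// Minimal sketch: fit a plane to the point cloud under a screen touch with
// TangoPointCloud.FindPlane, then spawn an object aligned to that plane.
using Tango;
using UnityEngine;

public class TablePlaneFinder : MonoBehaviour
{
    public TangoPointCloud m_pointCloud;  // Tango Point Cloud prefab from the examples
    public GameObject m_playerPrefab;     // object to place on the detected surface

    void Update()
    {
        if (Input.touchCount == 1 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            Vector3 planeCenter;
            Plane plane;
            // FindPlane fits a plane to the depth points under the touched pixel.
            if (m_pointCloud.FindPlane(Camera.main, Input.GetTouch(0).position,
                                       out planeCenter, out plane))
            {
                // Align the spawned object's up-vector with the plane normal.
                Quaternion rotation = Quaternion.FromToRotation(Vector3.up, plane.normal);
                Instantiate(m_playerPrefab, planeCenter, rotation);
            }
        }
    }
}
```

Comparing the detected plane heights (table vs. floor) tells you where the table surface ends, so you can make the player fall once he leaves the table plane.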
I'm making an augmented reality app with Unity and Mapbox for both iOS and Android. I have data sets that I am using to create markers in the real world when someone uses the app. I collected JSON files, converted them to GeoJSON files, and then made a custom map in Mapbox Studio with these four GeoJSON files. Basically, I want the markers from the data sets I collected to show up in the real world, not on building prefabs, and I am not sure how to do that. Here is an example of my custom map made in Mapbox; each color shows a different category of markers. There are four categories.
Here is an example of what I am referring to.
In this image, skeletons show up in the real world.
Here is an example of what I am not referring to.
In this image, droids are placed on a map, but it is not the real world. It is like Pokemon Go, where the map is generated from your location but you don't actually see the real world while you are playing.
I already have my Unity project set up and this is the final step, but I am having issues getting the markers to show up in the real world. So far, tutorials only show how to get something like the Pokemon Go style.
You will have one scene with a stationary Camera. Your code will monitor the Mapbox data in Update(), constantly passing the current GPS position and receiving your list of markers/points of interest. Once you detect that the user's GPS position is within a certain distance of the center of a point of interest, you can simply spawn skeletons at random positions in a sphere around the Camera's transform position (see https://docs.unity3d.com/ScriptReference/Random-insideUnitSphere.html). Keep track of that list, destroy the skeletons once they leave the area, and have some way of making sure you only spawn them once for that area, as in the sketch below.
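A hedged sketch of that loop; PointOfInterest, GetPointsOfInterest, and GetCurrentGpsDistanceMeters are hypothetical stubs you would wire up to your Mapbox location provider and GeoJSON markers:

```csharp
// Sketch: spawn skeletons once per point of interest when the user gets close.
using System.Collections.Generic;
using UnityEngine;

public class SkeletonSpawner : MonoBehaviour
{
    public GameObject skeletonPrefab;
    public int skeletonsPerArea = 5;
    public float triggerRadiusMeters = 30f;  // how close the user must be to a POI
    public float spawnRadius = 5f;           // sphere radius around the camera

    // Hypothetical marker type - replace with your Mapbox marker data.
    public struct PointOfInterest { public string id; public Vector2 latLon; }

    private readonly HashSet<string> spawnedAreas = new HashSet<string>();

    void Update()
    {
        foreach (PointOfInterest poi in GetPointsOfInterest())
        {
            if (spawnedAreas.Contains(poi.id))
                continue;  // only spawn once for this area

            if (GetCurrentGpsDistanceMeters(poi.latLon) < triggerRadiusMeters)
            {
                spawnedAreas.Add(poi.id);
                for (int i = 0; i < skeletonsPerArea; i++)
                {
                    // Random point in a sphere around the camera, per the docs link.
                    Vector3 pos = Camera.main.transform.position
                                  + Random.insideUnitSphere * spawnRadius;
                    Instantiate(skeletonPrefab, pos, Quaternion.identity);
                }
            }
        }
    }

    // Stubs: hook these up to Mapbox's location provider and your marker list.
    private IEnumerable<PointOfInterest> GetPointsOfInterest() { yield break; }
    private float GetCurrentGpsDistanceMeters(Vector2 latLon) { return float.MaxValue; }
}
```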
Your skeletons should have a NavMeshAgent, and you should generate a NavMesh on the ARFoundation plane for them to walk on. In this case the plane is probably created dynamically, so you may need the dynamic NavMesh components: https://github.com/Unity-Technologies/NavMeshComponents. If you tell the NavMeshAgent to go to a specific point, it will walk to the closest reachable point - so even though you get a random 3D position in the sphere, the skeleton will move or spawn onto the nearest point on the plane, and there is no need to convert it to 2D plane space yourself.
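A small sketch of the agent side, using only the standard UnityEngine.AI API (wanderRadius is an arbitrary example value):

```csharp
// Sketch: send a skeleton's NavMeshAgent toward a random 3D point. The agent
// walks to the nearest reachable point on the NavMesh, so there is no need to
// project the random position onto the AR ground plane manually.
using UnityEngine;
using UnityEngine.AI;

[RequireComponent(typeof(NavMeshAgent))]
public class SkeletonWander : MonoBehaviour
{
    public float wanderRadius = 5f;
    private NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        if (!agent.pathPending && !agent.hasPath)
        {
            Vector3 target = transform.position + Random.insideUnitSphere * wanderRadius;
            NavMeshHit hit;
            // Snap the random 3D point to the nearest position on the NavMesh.
            if (NavMesh.SamplePosition(target, out hit, wanderRadius, NavMesh.AllAreas))
                agent.SetDestination(hit.position);
        }
    }
}
```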
Your AR view, both the tracking of the camera position/angle and the generation of a plane representing the ground, will be handled by ARFoundation, and it is simple to add the basic functionality: there is a prefab that already includes the camera and generates the plane for you. You can get ARFoundation via the Unity Package Manager, and it works with many different types of devices.
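If you need to react to the generated planes in code (for example, to rebuild a dynamic NavMesh when the ground appears), ARFoundation's ARPlaneManager exposes a planesChanged event; a minimal sketch:

```csharp
// Sketch: log newly detected ARFoundation planes. Assumes an AR Session Origin
// with an ARPlaneManager component, assigned in the Inspector.
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class PlaneListener : MonoBehaviour
{
    public ARPlaneManager planeManager;

    void OnEnable()  { planeManager.planesChanged += OnPlanesChanged; }
    void OnDisable() { planeManager.planesChanged -= OnPlanesChanged; }

    void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        foreach (ARPlane plane in args.added)
            Debug.Log("New plane detected: " + plane.trackableId);
        // Rebuild the dynamic NavMesh here if you use NavMeshSurface.
    }
}
```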
You should start with a cheap Android phone or tablet, even if you own an iPhone, because it's easier to load the APK and debug/develop your app via Android build.
This is a simplification. I recommend using Singletons, ScriptableObjects, Object Pooling, and other Unity paradigms; there are plenty of other things within Unity that would help you. But as another user pointed out, you may want to spend time learning Unity, ARFoundation, and Mapbox, and ask more specific programming questions when you are ready.
Following the google-vr sample, I managed to add a camera and controller to my scene.
The next thing I need is to get the distance between my controller to any pointed game object in the scene.
After searching for a while, I cannot find any tutorial nor information on how to get the distance.
So, is there any up-to-date working tutorial on how to do this? (Many tutorials on the internet are outdated, since Google updates its API so frequently.)
Or is it actually a simple task, i.e. can I get the value from GvrPointerInputModule.Pointer / GvrLaserPointer / some other GVR class?
Thanks in advance~
You need to do a raycast from the controller and measure the distance between the hit location and the origin of the raycast. Unity raycasts return this distance built in, via RaycastHit.distance.
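Something like this, using only the standard Unity physics API:

```csharp
// Sketch: raycast from the controller's position along its forward direction.
// RaycastHit.distance is the built-in distance from the ray origin to the hit.
using UnityEngine;

public class ControllerDistance : MonoBehaviour
{
    void Update()
    {
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit))
        {
            Debug.Log("Pointed at " + hit.collider.gameObject.name
                      + " at distance " + hit.distance + " m");
        }
    }
}
```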
Just as I suspected, GvrLaserPointer is the answer.
If its CurrentRaycastResult.gameObject is not null, then the laser is intersecting with something. Then, we can get the intersection point from CurrentRaycastResult.worldPosition.
Using this point, we can easily calculate the distance.
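For example (a sketch; laserPointer is assumed to be the GvrLaserPointer in your scene, assigned in the Inspector):

```csharp
// Sketch: distance from the laser pointer to whatever it currently hits,
// using GvrLaserPointer.CurrentRaycastResult as described above.
using UnityEngine;

public class LaserDistance : MonoBehaviour
{
    public GvrLaserPointer laserPointer;

    void Update()
    {
        if (laserPointer.CurrentRaycastResult.gameObject != null)
        {
            Vector3 hitPoint = laserPointer.CurrentRaycastResult.worldPosition;
            float distance = Vector3.Distance(laserPointer.transform.position, hitPoint);
            Debug.Log("Distance to pointed object: " + distance);
        }
    }
}
```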
Note: just in case anyone is failing with this method, like I did before: check your raycasting setup. Make sure that the Raycaster Event Mask in GvrPointerPhysicsRaycaster only includes the desired layers. And if you have any canvas in screen space, check the Blocking Mask in its Graphic Raycaster. It is Everything by default, so your pointer may keep intersecting with the canvas, resulting in a "weird" intersection point. This was the cause of my problem; to fix it, I selected Nothing for the Blocking Mask, and voila.
I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough, the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 pro and are trying to figure out if Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, by letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location; however, the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images like Vuforia does.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. This sounds definitely worth testing.
Option B) Tango
Tango doesn't natively support markers other than ARTags and QR codes.
It also doesn't support the Area Learnt scene changing (much). If your 3D-printed objects stay stationary, you can scan an ADF and should have good-quality tracking; if all the objects stay still, you should see a little drift, but not too much.
However, if you move those 3D-printed objects, it will definitely throw the tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with ARMarkers using Tango's ARMarker detection. (Unsure - is that what you tried already?) If that approach doesn't work, I think your only Tango option is to add more features/lighting etc. to the space to make the tracking more solid.
Overall, natural feature tracking with Vuforia (or marker tracking for robustness) sounds better suited to what I think your project is doing, as users will mostly be looking at the ARTag/NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.
I'm somewhat familiar with Tango and Unity. I have worked through the examples and can get them to work correctly. I have seen people doing AR-type examples where they have custom objects in an area to interact with; another example would be directions, where you follow a line to a destination.
The one thing I cannot figure out is how to precisely place a 3D object into a scene. How are people getting that data to place objects within Unity in the correct location? I have an area set up, and the AR demo seems promising, but I don't want to place objects with the tap of a finger. What I am looking for is that when users walk by, my 3D object will already be there and they can interact with it. Any ideas? I feel like I've been searching everywhere with little luck finding an answer to this question.
In my project, I have a specific space the user will always be in - so I place things in the (single room) scene when I compile.
I create an ADF using the provided apps, and then my app has a mode where it does the 3D reconstruction and saves off the mesh.
I then load the mesh into my Unity scene (I have to rotate it by 180° around the Y axis because of how I save the .obj files).
You now have a guide letting you place objects exactly where you want them, and a nice environment to build up your scene.
I disable the mesh before I build. When Tango localises, your Unity objects match up with the Tango world space.
If you want to place objects programmatically, you can place them in scripts using Instantiate, for example:
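```csharp
// Sketch: place an exhibit at a position measured off the imported room mesh.
// The coordinates below are example values, not from any real ADF.
using UnityEngine;

public class ScenePopulator : MonoBehaviour
{
    public GameObject exhibitPrefab;

    void Start()
    {
        Vector3 position = new Vector3(1.2f, 0.0f, 3.4f);     // read off the mesh
        Quaternion rotation = Quaternion.Euler(0f, 90f, 0f);  // face the walkway
        Instantiate(exhibitPrefab, position, rotation);
    }
}
```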
I also sometimes have my app place markers with a touch, like in the examples, and record the positions to a file, which I then use to place objects precisely... but having a good mesh loaded into your scene is really the nicest way I've found.
I have created a 3D environment full of 3D cubes. Does anyone have any idea how you would detect a touch on one of these cubes? I'm thinking that if I could get a cube's screen position (coords starting from the bottom left), it would be pretty easy.
UPDATE:
I added the function -(CGPoint)getScreenCoorOfPoint:(IMPoint3D)_point3D, which seems to give me my items' positions in the world, but the bit I am now stuck on is:
I have objects that have a position
I have my position in the world (gluLookAt eye[0], eye[1], eye[2])
and then I have where I tapped on the screen
How do I join all this up? It's the last thing in my way to achieving greatness!!!!
Look up OpenGL picking on Google. There are two main methods to accomplish this; I recommend the second one described at OpenGL.org, as it does not involve rendering anything offscreen:
[…] involves shooting a pick ray through the mouse location and testing for intersections with the currently displayed objects. OpenGL doesn't test for ray intersections, but you'll need to interact with OpenGL to generate the pick ray.
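In case it helps, here is the pick-ray math in outline, shown in C# with System.Numerics for consistency with the rest of this page; the same steps apply in Objective-C with your own matrix code (note that System.Numerics uses row-vector conventions, so check your matrix layout):

```csharp
// Sketch: build a pick ray by unprojecting the tap at the near and far planes,
// then test the ray against each cube's bounds and keep the closest hit.
using System.Numerics;

static class Picking
{
    // viewProj = view * proj in System.Numerics (row-vector) order.
    // screen is the tap in pixels, with y measured up from the bottom left.
    public static (Vector3 origin, Vector3 dir) PickRay(
        Matrix4x4 viewProj, Vector2 screen, Vector2 viewportSize)
    {
        Matrix4x4 inv;
        Matrix4x4.Invert(viewProj, out inv);

        // Tap position in normalized device coordinates, range [-1, 1].
        float x = 2f * screen.X / viewportSize.X - 1f;
        float y = 2f * screen.Y / viewportSize.Y - 1f;

        // Unproject the tap on the near (z = -1) and far (z = +1) planes.
        Vector4 near = Vector4.Transform(new Vector4(x, y, -1f, 1f), inv);
        Vector4 far  = Vector4.Transform(new Vector4(x, y,  1f, 1f), inv);
        Vector3 p0 = new Vector3(near.X, near.Y, near.Z) / near.W;
        Vector3 p1 = new Vector3(far.X,  far.Y,  far.Z)  / far.W;

        // Ray from the near point toward the far point; intersect it with each
        // cube's bounding box in world space to find what was tapped.
        return (p0, Vector3.Normalize(p1 - p0));
    }
}
```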
Also see this question for some discussion on the matter:
Screen-to-World coordinate conversion in OpenGLES an easy task?