Using the PointCloud prefab in Unity 3D

I tried to implement the "Measure It" app in Unity 3D. I started with the PointCloud example scene downloaded from Tango's website.
My problem is that in first-person view the point cloud doesn't fill the screen, and in third-person view I can see points outside the Unity camera's FOV.
I don't see this problem in the Explorer app, but that one appears to be written in Java, so I suspect it's a Unity compatibility problem.
Has anyone seen the same problem, or found a solution?
Unity 3D 5.1.1
Google Tango urquhart
Sorry for my poor English,
Regards.
EDIT:
It looks like the ExperimentalAugmentedReality scene uses the point cloud to place markers in the real world, and there the point cloud sits right in front of the camera. I don't see any script difference between the two scenes, so I don't understand why that one works. If you have any idea, please share.

I think it makes sense to divide your question into two parts.
Why the points don't fill the screen in the point cloud example.
To make the points fill the first-person view, the render camera's FOV needs to match the physical depth camera's FOV. In the point cloud example, I believe Tango just uses the default Unity camera FOV, which is why you see the points not filling the screen (render camera).
In the third-person camera view, the frustum is just a visual representation of the device's movement; it doesn't reflect the FOV or any other camera intrinsics of the device. For visualization purposes, Tango Explorer might specifically match the frustum size to the actual camera FOV, but that's not guaranteed to be 100% accurate.
Why the AR example works.
In the AR example, the virtual render camera's FOV must be set to match the physical camera's FOV, otherwise the AR view will be off. On the Tango hardware, the color camera and depth camera are the same sensor, so they share the same FOV. That's why the AR example works.
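As a sketch of that FOV matching: the vertical FOV follows from the pinhole model, fov_y = 2 * atan(h / (2 * fy)), and can be applied to the Unity render camera. The VideoOverlayProvider.GetIntrinsics call below is based on the Tango Unity SDK and may differ between SDK versions, so treat it as an assumption:

    using Tango;
    using UnityEngine;

    public class MatchCameraFov : MonoBehaviour
    {
        void Start()
        {
            // Query the physical color camera intrinsics (on Tango, the depth
            // camera shares the same sensor). GetIntrinsics is assumed from the
            // Tango Unity SDK; check your SDK version.
            TangoCameraIntrinsics intrinsics = new TangoCameraIntrinsics();
            VideoOverlayProvider.GetIntrinsics(TangoEnums.TangoCameraId.TANGO_CAMERA_COLOR,
                                               intrinsics);

            // Vertical FOV from the pinhole model: fov_y = 2 * atan(h / (2 * fy))
            float fovY = 2.0f * Mathf.Atan(0.5f * (float)intrinsics.height / (float)intrinsics.fy)
                         * Mathf.Rad2Deg;
            GetComponent<Camera>().fieldOfView = fovY;
        }
    }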

Related

Unity3d: How to map real-world coordinates to scene coordinates?

I have a physical (real-world) camera and a Unity3d Scene. I want to map the physical camera coordinate system to the virtual scene, 1:1.
For example, imagine the physical camera is pointed at the sky and an aircraft flies overhead. I want to have the physical aircraft appear in my virtual environment, at the correct location. I can get the ADS-B data (which describes position and altitude of the aircraft) and a generic 3D aircraft model. I can import that 3D aircraft model into my scene, but how do I know where to put it and at which height in the scene? And when I move the physical camera, I want the virtual camera to move in the same way.
Put another way, if you wanted to recreate the Earth (ignoring all textures, lighting, etc.) in Unity3D, how would you ensure that objects in the physical world appear in the same location as in your virtual Earth?
How can I do this?
Unity has a built-in LocationService class to determine your location on the globe, and Input.gyro can be used to determine approximately where you are pointing. Use this information together with the flight transponder data to compute your position relative to the aircraft. Obviously, this will be wildly inaccurate. But, as others suggested, you can gain additional accuracy by setting your virtual camera up as a Unity physical camera and matching it to your real-world camera. From your camera footage, extract the clip-space position of the aircraft using some kind of image recognition method (e.g. a ComputeShader locating a small dark spot), then use Camera.ScreenPointToRay on that position to get a ray toward where the airplane should be in the scene. Using this, correct your virtual camera position in the 3D scene so that the ray lines up with the vector from your virtual camera to the virtual aircraft.
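A minimal sketch of that last correction step, assuming you already have the aircraft's screen position from your image recognition step and its scene position from the ADS-B data (the field names here are hypothetical):

    using UnityEngine;

    public class CameraAlignment : MonoBehaviour
    {
        public Camera virtualCamera;      // set up as a physical camera matching the real one
        public Transform virtualAircraft; // placed from ADS-B position/altitude data

        // screenPos: pixel location of the aircraft found by image recognition
        public void CorrectCameraPosition(Vector2 screenPos)
        {
            // Ray from the virtual camera through the observed screen point
            Ray observed = virtualCamera.ScreenPointToRay(screenPos);

            // Keep the current camera-to-aircraft distance, and find the camera
            // position that puts the aircraft exactly on the observed ray
            float distance = Vector3.Distance(virtualCamera.transform.position,
                                              virtualAircraft.position);
            Vector3 correctedPos = virtualAircraft.position - observed.direction * distance;

            // Blend toward the corrected position to absorb recognition noise
            virtualCamera.transform.position =
                Vector3.Lerp(virtualCamera.transform.position, correctedPos, 0.1f);
        }
    }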
I think what you want is a 1:1 map of the real world in your Unity scene. For that you will need to read this documentation. Basically, 1 unit in Unity is 1 meter. To make the dimensions of 3D objects 1:1 with their real-life dimensions, you will need to resize/change the scale of those objects manually. Good luck with that!
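For example, if the imported model occupies exactly 1 x 1 x 1 units, scaling it to real-world meters is direct (the dimensions below are made up; measure your own model):

    using UnityEngine;

    public class RealWorldScale : MonoBehaviour
    {
        void Start()
        {
            // 1 Unity unit = 1 meter, so real dimensions can be written directly.
            // Hypothetical aircraft: 11 m wingspan, 3 m tall, 10 m long.
            transform.localScale = new Vector3(11.0f, 3.0f, 10.0f);
        }
    }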
If you want a virtual camera that behaves just like your physical one, there's a toggle in the Camera component called "Physical Camera". There you can set its sensor size, focal length, and related properties. If that's not it, I don't know.
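The same settings can also be applied from code; a minimal sketch (the focal length and sensor size are placeholder values you'd replace with your real camera's specs):

    using UnityEngine;

    public class PhysicalCameraSetup : MonoBehaviour
    {
        void Start()
        {
            Camera cam = GetComponent<Camera>();

            // Enable Unity's physical camera mode (available since Unity 2018.2)
            cam.usePhysicalProperties = true;

            // Placeholder values: take these from your real camera's spec sheet
            cam.focalLength = 26.0f;                    // millimeters
            cam.sensorSize = new Vector2(36.0f, 24.0f); // width/height in millimeters
        }
    }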

How to fix objects in the indoor environment with ArCore?

I need to insert some virtual objects in an indoor environment, but I need the position of these objects to be fixed. I have already tried using markers with Vuforia, but it is complicated and recognition takes time. I'm thinking of using Google's ARCore. Does anyone know if this is possible and, if so, how to do it?
I'm using Unity to do this. Can someone help me?
ARCore places the camera relative to the detected plane, so you will need a plane at some point so the application can locate the camera in the scene.
The HelloAR sample shows how this works; you can test it in the Unity editor and see how the camera moves around the points and the detected plane.
One solution for your problem may be ARCore's image detection combined with plane detection: you place the image on the floor, and once it is detected your objects stay in place while you move around. You still need a detected plane to move around, not only image detection, because otherwise you will lose the objects once the camera loses sight of the image.
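A minimal sketch of placing an object anchored to a detected plane, modeled on the HelloAR sample from the GoogleARCore Unity SDK (the prefab field is a placeholder, and API details may vary between SDK versions):

    using GoogleARCore;
    using UnityEngine;

    public class PlaceOnPlane : MonoBehaviour
    {
        public GameObject objectPrefab; // the virtual object to fix in the room

        void Update()
        {
            Touch touch;
            if (Input.touchCount < 1 || (touch = Input.GetTouch(0)).phase != TouchPhase.Began)
                return;

            // Raycast from the touch point against detected planes only
            TrackableHit hit;
            if (Frame.Raycast(touch.position.x, touch.position.y,
                              TrackableHitFlags.PlaneWithinPolygon, out hit))
            {
                // The anchor keeps the object fixed while ARCore refines its
                // understanding of the environment
                Anchor anchor = hit.Trackable.CreateAnchor(hit.Pose);
                GameObject obj = Instantiate(objectPrefab, hit.Pose.position, hit.Pose.rotation);
                obj.transform.parent = anchor.transform;
            }
        }
    }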

Can Vuforia track spatial location when using targetless device tracking?

I am trying to wrap my head around Vuforia's capabilities. I want to make an app which lets me place a 3D object into a camera view and have that 3D object stick to the world. I've been learning how to use Vuforia in Unity3D, and Vuforia seems to be slightly capable of this, but is severely limited by its craving for "Targets". It doesn't seem to be able to do much if I don't give it some sort of target.
One workaround I've found is to set the ARCamera's World Center Mode to DEVICE_TRACKING. This seems to let me place a 3D object into the world (in Unity) and have it overlaid on the camera feed, almost making it seem anchored to the real world. This doesn't work perfectly, though: it tracks properly when I angle the device up/down/left/right (rotation), but it does not seem to track the device's translational motion; that is, when I move the device forward/back/left/right, the overlaid object doesn't get closer/farther, nor does it rotate as I move around it.
Is it possible to get this sort of tracking out of Vuforia, or am I better off switching to something like Google Tango?
The difficulty with setting World Center Mode to CAMERA in Vuforia is that 3D objects apparently rotate around the camera based on its accelerometer/gyroscope changes. This doesn't allow objects to be anchored to the environment; instead, they follow the camera.
Kudan is a good markerless tracking option.

Camera-Offset | Project Tango

I am developing an augmented reality app for Project Tango using Unity3d.
Since I want virtual objects to interact with the real world, I use the Meshing with Physics scene from the examples as my basis and placed the Tango AR Camera prefab inside the Tango Delta Camera (at the relative position (0, 0, 0)).
I found that I have to rotate the AR Camera up by about 17 degrees so that the dynamic mesh matches the room; however, there is still a significant offset from the camera's live preview.
I was wondering if anyone who has dealt with this before could share their solution for aligning the dynamic mesh with the real world.
How can I align the virtual world with the camera image?
I'm having similar issues. It looks like this is related to a couple of previously-answered questions:
Point cloud rendered only partially
Point Cloud Unity example only renders points for the upper half of display
You need to take into account the color camera offset from the device origin, which requires you to get the color camera pose relative to the device. You can't do this directly, but you can get the device in the IMU frame, and also the color camera in the IMU frame, to work out the color camera in the device frame. The links above show example code.
You should be looking at something like (in Unity coordinates) a (0.061, 0.004, -0.001) offset and a 13-degree rotation up around the x-axis.
When I try to use the examples, I get broken rotations, so take these numbers with a pinch of salt. I'm also seeing small rotations around y and z, which don't match with what I'd expect.
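A sketch of the frame composition described above, device_T_color = (imu_T_device)^-1 * imu_T_color. The PoseProvider.GetPoseAtTime call and the pose field layout are based on the Tango SDK and may differ between releases, so treat the details as assumptions:

    using Tango;
    using UnityEngine;

    public static class ColorCameraOffset
    {
        public static Matrix4x4 GetDeviceToColorCamera()
        {
            TangoCoordinateFramePair pair = new TangoCoordinateFramePair();
            TangoPoseData imuToDevice = new TangoPoseData();
            TangoPoseData imuToColor = new TangoPoseData();

            // Device pose in the IMU frame
            pair.baseFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_IMU;
            pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_DEVICE;
            PoseProvider.GetPoseAtTime(imuToDevice, 0.0, pair);

            // Color camera pose in the IMU frame
            pair.targetFrame = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_CAMERA_COLOR;
            PoseProvider.GetPoseAtTime(imuToColor, 0.0, pair);

            // device_T_color = (imu_T_device)^-1 * imu_T_color
            return ToMatrix(imuToDevice).inverse * ToMatrix(imuToColor);
        }

        // Pose layout assumed from the Tango C API: translation (x, y, z),
        // orientation quaternion (x, y, z, w)
        static Matrix4x4 ToMatrix(TangoPoseData pose)
        {
            Vector3 t = new Vector3((float)pose.translation[0],
                                    (float)pose.translation[1],
                                    (float)pose.translation[2]);
            Quaternion r = new Quaternion((float)pose.orientation[0],
                                          (float)pose.orientation[1],
                                          (float)pose.orientation[2],
                                          (float)pose.orientation[3]);
            return Matrix4x4.TRS(t, r, Vector3.one);
        }
    }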

How to control the movement of the main camera in Unity3D

I am trying to make a mobile application with an AR (Augmented Reality) mode using Unity3D. I have connected my mobile device to my Unity3D program, and the camera works fine. But when I move the mobile device, the main camera inside the Unity program does not follow the same path that the device moves along. Does anyone know how to change or control the orbit of the main camera in Unity3D?
This could be happening for a number of reasons: non-centered pivots or mismatched coordinate systems, for example.
Could you please specify which AR system you are using? As a side note, at work we recently had a project involving Unity3D and Metaio, and it was a nightmare to bend the system to do what we needed, especially when we had to do a lot of object positioning based on the local coordinate system.
When you refer to the orbit of the camera, I imagine either that the camera's pivot is somehow offset and the camera is rotating around that offset, or that the camera is a child of the actual GameObject controlled by the AR system, in which case the parent node acts as a pivot for the camera.
In the picture below you can see that the camera sits away from the center point, and when it rotates it does so around that point; in other words, the camera always looks at the center point, which gives that feeling of "orbiting" as it moves.
Here's the link to the image (I can't post pictures yet on this forum -.- )
http://i.stack.imgur.com/fIcY2.png
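To make the pivot idea concrete, here is a tiny self-contained sketch (the object names are made up) showing how an offset child camera orbits when its parent rotates:

    using UnityEngine;

    public class PivotOrbitDemo : MonoBehaviour
    {
        void Start()
        {
            // Stand-in for the AR-controlled parent object
            GameObject pivot = new GameObject("Pivot");
            Camera cam = new GameObject("Camera").AddComponent<Camera>();

            // Parent the camera with an offset from the pivot
            cam.transform.parent = pivot.transform;
            cam.transform.localPosition = new Vector3(0f, 0f, -2f);

            // Rotating the pivot now swings the camera around the pivot's position,
            // producing the "orbiting" effect. To turn the camera in place instead,
            // zero out the local offset or rotate the camera's own transform.
            pivot.transform.Rotate(0f, 45f, 0f);
        }
    }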