Track real object movements via Vuforia and Unity, ignoring camera movements - unity3d

I am developing an augmented reality application that tracks a real object via the camera (using Vuforia); my aim is to measure the distance it travels.
I am using Unity + Vuforia.
For each frame, I calculate the distance between the first position and the current position (a vector calculation).
But I get wrong position values, and camera movement affects the result.
(I don't want to take the camera offset into account.)
Any solution?
To clarify, this is the experience I want to implement (video):
https://youtu.be/-c5GiXuATh4

From the comments and the question, I understand that the problem is using the camera as the origin. This means that in every frame of your application the camera is the origin, and the positions of all trackables are calculated relative to the camera. Therefore, even if you do not move your target, its position will change because of camera movement.
To eliminate this problem I would recommend using extended tracking. This will minimize the impact of camera movement on the position of your target. You can try and test this by adding a trail renderer to your image, and you will see that your image stays at a certain position regardless of camera movement.
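Once the target's pose is world-stable, you can measure the distance it travels by accumulating its frame-to-frame movement in world space. Here is a minimal sketch, assuming extended tracking is enabled and this (hypothetical) script is attached to the image/model target:

```csharp
using UnityEngine;

// Minimal sketch: accumulates the path length of a tracked target in world
// space. Assumes extended tracking is enabled so the target's pose stays
// stable under camera movement.
public class TargetDistanceTracker : MonoBehaviour
{
    public float totalDistance;   // accumulated path length, in scene units
    private Vector3 lastPosition;
    private bool hasLastPosition;

    void OnEnable()
    {
        hasLastPosition = false;  // reset when tracking (re)starts
    }

    void Update()
    {
        Vector3 current = transform.position;
        if (hasLastPosition)
        {
            totalDistance += Vector3.Distance(lastPosition, current);
        }
        lastPosition = current;
        hasLastPosition = true;
    }
}
```

You could additionally ignore per-frame movements below a small threshold to filter out tracking jitter.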

Related

Reset the Camera Transform with MRTK and HoloLens

I am currently developing an application for HoloLens 1 with Unity and MRTK, and I would like to perform a very simple task:
Resetting the camera transform to the origin.
I tried several approaches, but all without success:
Get the camera and play space and set their position and rotation to 0.
Get the "MixedRealityCameraSystem" via the MRTK and use its Reset() function.
Indeed, the camera position is controlled by the user's head, and once the app has started I don't know how to recenter this position.
Does anyone know if there is a way to simply reset the camera transform?
Thank you very much in advance for your time and help.
As mentioned above, you cannot modify the camera position at runtime.
But if all you are interested in is the position data, as a workaround we recommend offsetting the position data of the camera before outputting it. Specifically, first calculate the correction value between the camera and the origin of the coordinate system before loading your next scene. Then, after loading the new scene, subtract the correction value when outputting the head position log information.
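A minimal sketch of that workaround, assuming Camera.main is the head-tracked camera (the class and method names are hypothetical):

```csharp
using UnityEngine;

// Minimal sketch: capture the head position as a correction value before
// loading the next scene, then subtract it from logged head positions.
public static class HeadPositionLogger
{
    private static Vector3 correction;

    // Call just before loading the next scene.
    public static void CaptureCorrection()
    {
        correction = Camera.main.transform.position;
    }

    // Call in the new scene whenever you log the head position.
    public static Vector3 CorrectedHeadPosition()
    {
        return Camera.main.transform.position - correction;
    }
}
```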

Offsetting the rendered result of a camera in Unity

I am trying to make everything that is rendered by my perspective "Camera A" appear 100 points higher. This is because my app's interface leaves an open space in the upper part of the screen.
My app uses face detection to map the face movement onto an in-game avatar. To do this I compute the model-view matrix and set it as the camera's "worldToCameraMatrix".
So far this works well, but everything is rendered with the center as the origin; now I want to move this origin a certain distance up so that it matches my interface.
Is there a way to tell Unity to offset the rendered camera result?
An alternative I thought about is to render into a texture, then I can just move the texture itself, but I thought there must be an easier way.
By the way, my main camera is orthographic, and I use it to render the camera texture. In this case, simply moving the rendered quad game object up does the trick.
I found a property called "pixelRect", the description says:
Where on the screen is the camera rendered in pixel coordinates.
However, moving the center up seems to scale down my objects.
You can set the viewport rect/ortho size so that it is offset, or you can render to a render texture and render that as an overlay with an offset or a difference in scale.
Cheers
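To illustrate the viewport-rect approach: offsetting only the rect's position (not its size) shifts the output without the scaling you saw with pixelRect. A minimal sketch (the script and its pixelOffsetUp field are hypothetical):

```csharp
using UnityEngine;

// Minimal sketch: shift a camera's rendered output upward by a given
// number of pixels by offsetting its normalized viewport rect.
public class CameraViewportOffset : MonoBehaviour
{
    public int pixelOffsetUp = 100;  // how far to shift the output, in pixels

    void Start()
    {
        Camera cam = GetComponent<Camera>();
        Rect r = cam.rect;                            // default is (0, 0, 1, 1)
        r.y += (float)pixelOffsetUp / Screen.height;  // move the viewport up;
        cam.rect = r;                                 // the size is unchanged,
    }                                                 // so objects keep their scale
}
```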

Camera-Offset | Project Tango

I am developing an augmented reality app for Project Tango using Unity3d.
Since I want to have virtual objects interact with the real world, I used the Meshing with Physics scene from the examples as my basis and placed the Tango AR Camera prefab inside the Tango Delta Camera (at the relative position (0,0,0)).
I found out that I have to rotate the AR Camera up by about 17 degrees so that the Dynamic Mesh matches the room; however, there is still a significant offset from the camera's live preview.
I was wondering if anyone who has dealt with this before could share their solution for aligning the Dynamic Mesh with the real world.
How can I align the virtual world with the camera image?
I'm having similar issues. It looks like this is related to a couple of previously-answered questions:
Point cloud rendered only partially
Point Cloud Unity example only renders points for the upper half of display
You need to take into account the color camera offset from the device origin, which requires you to get the color camera pose relative to the device. You can't do this directly, but you can get the device in the IMU frame, and also the color camera in the IMU frame, to work out the color camera in the device frame. The links above show example code.
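In other words, the composition is device_T_color = inverse(imu_T_device) * imu_T_color. A minimal sketch of that composition (the names are hypothetical, and the two IMU-frame poses are assumed to have already been fetched from the Tango API as Matrix4x4 values):

```csharp
using UnityEngine;

// Minimal sketch: compose the color-camera-in-device-frame pose from the
// two poses the API exposes (device in IMU frame, color camera in IMU frame).
public static class TangoFrames
{
    public static Matrix4x4 ColorCameraInDeviceFrame(
        Matrix4x4 imuT_device,       // device pose in the IMU frame
        Matrix4x4 imuT_colorCamera)  // color camera pose in the IMU frame
    {
        // device_T_color = inverse(imu_T_device) * imu_T_color
        return imuT_device.inverse * imuT_colorCamera;
    }
}
```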
You should be looking at something like (in Unity coordinates) an offset of (0.061, 0.004, -0.001) and a 13-degree rotation up around the x axis.
When I try to use the examples, I get broken rotations, so take these numbers with a pinch of salt. I'm also seeing small rotations around y and z, which don't match what I'd expect.

Using PointCloud Prefab on Unity 3D

I tried to implement the "Measure It" app in Unity 3D. I started with the PointCloud example scene downloaded from Tango's website.
My problem is that when I look in first-person view, the point cloud doesn't fill the screen, and when I look in third-person view I can see points outside the Unity camera's FOV.
I don't see this problem in the Explorer app, but that looks to be made in Java, so I think it's a Unity compatibility problem.
Does someone have the same problem, or a solution?
Unity 3D 5.1.1
Google Tango urquhart
Sorry for my poor English,
Regards.
EDIT:
It looks like the ExperimentalAugmentedReality scene is using the point cloud to place markers in the real world, and this point cloud is right in front of the camera. I don't see any script difference between them, so I don't understand why it works. If you have any idea, please share.
I think it makes sense to divide your question into two parts.
Why the points are not filling the screen in the point cloud example.
To make the points fill the first-person view camera, the render camera's FOV needs to match the physical depth camera's FOV. In the point cloud example, I believe Tango is just using the default Unity camera's FOV; that's why you see the points not filling the screen (render camera).
In the third-person camera view, the frustum is just a visual representation of the device movement. It doesn't indicate the FOV or any camera intrinsics of the device. For visualization purposes, Tango Explorer might have specifically matched the camera frustum size to the actual camera FOV, but that's not guaranteed to be 100% accurate.
Why the AR example works.
In the AR example, we must set the virtual render camera's FOV to match the physical camera's FOV, otherwise the AR view will be off. On the Tango hardware, the color camera and the depth camera are the same camera sensor, so they share the same FOV. That's why the AR example works.
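To match the render camera's FOV to the physical camera, you can derive the vertical FOV from the camera intrinsics (image height and focal length fy, both in pixels), which the Tango API provides. A minimal sketch (the helper name is hypothetical):

```csharp
using UnityEngine;

// Minimal sketch: set a Unity camera's vertical FOV from physical camera
// intrinsics, using fov_v = 2 * atan(height / (2 * fy)).
public static class FovMatcher
{
    public static void MatchFov(Camera renderCamera, float imageHeight, float fy)
    {
        renderCamera.fieldOfView =
            2f * Mathf.Atan(imageHeight / (2f * fy)) * Mathf.Rad2Deg;
    }
}
```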

unity3d - how to control the movement of the main Camera in Unity3d

I am trying to make a mobile application that contains an AR (Augmented Reality) mode using Unity3D. I have connected my mobile device to my Unity3D program, and the camera works fine. But when I move the mobile device, the main camera inside the Unity program does not move along the same orbit that the mobile device moves. Does anyone know how to change or control the orbit of the main camera in Unity3D?
This could be happening for a number of reasons; it could be due to non-centered pivots or coordinate systems, for example.
Could you please specify which AR system you are using? As a side note, at work we recently had a project involving Unity3D and Metaio, and it was a nightmare to bend the system to do what we needed, especially when we needed to do a lot of object positioning based on the local coordinate system.
When you refer to the orbit of the camera, I imagine the pivot of the camera is somehow offset and the camera is rotating around that offset. Or maybe the camera is a child of the actual GameObject that is controlled by the AR system, in which case this parent node acts as a pivot for the camera.
In the picture below you can see that the camera is away from that center point, and when it rotates it does so around that center point; in other words, the camera always tries to look at that center point, which gives that feeling of "orbiting" when it moves.
Here's the link to the image (I can't post pictures yet on this forum -.- )
http://i.stack.imgur.com/fIcY2.png
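If the parent-pivot offset described above is the cause, a minimal sketch of the fix (the script name is hypothetical) is to zero the camera's local transform so it sits exactly at the pivot the AR system drives:

```csharp
using UnityEngine;

// Minimal sketch: if the camera is parented to the GameObject the AR system
// drives, any local offset makes the camera "orbit" that parent. Zeroing the
// local transform places the camera exactly at the pivot.
public class CenterCameraOnPivot : MonoBehaviour
{
    void Start()
    {
        transform.localPosition = Vector3.zero;         // remove the offset
        transform.localRotation = Quaternion.identity;  // align with the pivot
    }
}
```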