I am currently developing an application for HoloLens 1 with Unity and MRTK, and I would like to perform a very simple task: resetting the camera transform to the origin.
I have tried a few approaches, all without success:
Getting the camera and the play space and setting their position and rotation to zero.
Getting the "MixedRealityCameraSystem" via the MRTK and calling its Reset() function.
The camera position is driven by the user's head, and once the app has started I don't know how to recenter it.
Does anyone know if there is a way to simply reset the camera transform?
Thank you very much in advance for your time and help.
As mentioned above, you cannot modify the camera position at runtime.
However, if what you are interested in is only the position data, a workaround we recommend is to offset the camera's position data before outputting it. Specifically, first calculate the correction value between the camera and the origin of the coordinate system before loading your next scene. Then, after loading the new scene, subtract that correction value when outputting the head position log information.
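A minimal sketch of that workaround, assuming a hypothetical logger component (the class and field names are illustrative, not an MRTK API):

using UnityEngine;

// Hypothetical example: record the head position when the scene loads and
// subtract it when logging, so the log reads as if the user started at the origin.
public class HeadPositionLogger : MonoBehaviour
{
    private Vector3 correction;

    void Start()
    {
        // Correction value: where the head (main camera) was when the scene loaded.
        correction = Camera.main.transform.position;
    }

    void Update()
    {
        // Output the head position relative to the recorded starting point.
        Vector3 recentered = Camera.main.transform.position - correction;
        Debug.Log($"Head position (recentered): {recentered}");
    }
}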
I'm trying to figure out Cinemachine to create a kind of top-view perspective. I managed to find the settings to make sure the camera keeps its rotation and follows the player. My issue now is the little extra movement that happens when the player is moving. Is there a way for me to get rid of it, so that my camera stays still?
(Screenshots omitted: what I have, my camera settings, what I'm trying to achieve, and the camera settings added in an edit after a comment.)
You can set the Body to Transposer and the Binding Mode to World Space.
Have you tried changing the Aim setting to Do Nothing? If you leave it on Hard Look At, it will always keep your player at the center of the camera.
You can read the documentation here:
https://docs.unity3d.com/Packages/com.unity.cinemachine#2.6/manual/CinemachineVirtualCameraAim.html
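If you prefer to set this up from code instead of the inspector, a rough sketch along these lines should be equivalent (assuming Cinemachine 2.x and that the virtual camera and player references are assigned in the inspector):

using UnityEngine;
using Cinemachine;

// Sketch: make the virtual camera follow the player with a fixed world-space
// offset and no aiming, so it never rotates toward the player.
public class TopViewCameraSetup : MonoBehaviour
{
    public CinemachineVirtualCamera vcam;
    public Transform player;

    void Start()
    {
        vcam.Follow = player;

        // Body: Transposer with World Space binding keeps a constant offset.
        var transposer = vcam.AddCinemachineComponent<CinemachineTransposer>();
        transposer.m_BindingMode = CinemachineTransposer.BindingMode.WorldSpace;
        transposer.m_FollowOffset = new Vector3(0f, 10f, 0f);

        // Aim "Do Nothing" is simply the absence of an Aim component,
        // so remove any composer that would re-aim at the target.
        vcam.DestroyCinemachineComponent<CinemachineComposer>();
    }
}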
I have been struggling with some rotation maths for a feature in my project.
I am basically using the gyroscope input from a phone and combining it with touch input in order to recreate the same behaviour as the YouTube 360 video player (using Unity).
So, in other words, I'm trying to add the touch input (rotation on the X and Y axes only) to the gyroscope, which is free to rotate on all axes.
I tried building two quaternions, one for the gyro and one for the touch input. If I start the app and keep the phone pointed in the same direction, the two add up fine, but if I change the phone's orientation on the Y axis and then use the touch input, up and down becomes the roll instead of the yaw.
I tried changing the order in which I combine the quaternions, but it did not fix my issue.
After playing around with a minimal setup in Unity, I figured out that what I need to do is recreate the same relation a child and parent object have regarding rotation. A sketch of what I mean is below.
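A minimal sketch of the composition I have in mind, mirroring how Unity applies parent.rotation * localRotation for nested transforms (the gyro remapping and swipe sensitivity are assumptions, not tested values):

using UnityEngine;

// Sketch: treat the gyro attitude as the "parent" rotation and the accumulated
// touch rotation as a "child" local rotation applied in the gyro's frame.
public class GyroTouchLook : MonoBehaviour
{
    private float touchYaw;    // accumulated from horizontal swipes
    private float touchPitch;  // accumulated from vertical swipes

    void Update()
    {
        if (Input.touchCount == 1)
        {
            Vector2 delta = Input.GetTouch(0).deltaPosition;
            touchYaw   += delta.x * 0.1f;   // assumed sensitivity
            touchPitch -= delta.y * 0.1f;
        }

        // Commonly used remap of the gyro attitude into Unity's left-handed, Y-up space.
        Quaternion g = Input.gyro.attitude;
        Quaternion gyroInUnity = Quaternion.Euler(90f, 0f, 0f) *
                                 new Quaternion(g.x, g.y, -g.z, -g.w);

        // Parent (gyro) on the left, child (touch) on the right, so the touch
        // rotation happens in the gyro's local frame rather than world space.
        Quaternion touch = Quaternion.Euler(touchPitch, touchYaw, 0f);
        transform.rotation = gyroInUnity * touch;
    }
}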
Sorry for the lack of captures and screenshots; I'm trying to find the best way to document the issue.
Thanks
I am implementing a "companion map" for a HoloLens application using Unity and Visual Studio. My vision is for a small rectangular map to be affixed to the bottom right of the HoloLens view, and to follow the HoloLens user as they move about, much like the display of a video game.
At the moment my "map" is a .jpeg made into a material and put on an upright plane. Is there a way for me to affix the plane such that it is always in the bottom right of the user's view, as opposed to being fixed in the 3D space that the user moves through?
The Orbital Solver in MRTK can implement this idea without even writing any code. It can lock the map to a specified position and offset it from the player.
To use it, you need to:
Add the Orbital script component to your companion map.
Modify the Local Offset and World Offset properties to keep the map in the bottom right of the user's view.
Set the Orientation Type to Face Tracked Object.
Besides, the SolverExamples scene provided by the MRTK v2 SDK is an excellent starting point for becoming familiar with the Solver components.
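If you would rather wire this up from a script than through the inspector, a rough sketch using the MRTK v2 solver types could look like the following (the offset values are placeholders you would tune for your map):

using UnityEngine;
using Microsoft.MixedReality.Toolkit.Utilities;
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;

// Sketch: attach the solver components to the companion map at runtime
// and push it toward the bottom right of the user's view.
public class CompanionMapSetup : MonoBehaviour
{
    void Start()
    {
        // The SolverHandler tracks the head by default, but set it explicitly.
        var handler = gameObject.AddComponent<SolverHandler>();
        handler.TrackedTargetType = TrackedObjectType.Head;

        var orbital = gameObject.AddComponent<Orbital>();
        orbital.OrientationType = SolverOrientationType.FaceTrackedObject;

        // Placeholder offsets: right and down relative to the head, slightly forward.
        orbital.LocalOffset = new Vector3(0.25f, -0.2f, 0.6f);
        orbital.WorldOffset = Vector3.zero;
    }
}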
I am developing an augmented reality application that tracks an object via the camera (a real object, using Vuforia); my aim is to measure the distance it travels.
I am using Unity + Vuforia.
For each frame, I calculate the distance between the first position and the current position (vector calculation).
But I get wrong position details, and camera movements affect the result.
(I don't want to take the camera offset into account.)
Any solutions?
To make it clearer, this is the experience I want to implement (video):
https://youtu.be/-c5GiXuATh4
From the comments and the question I understood that the problem is using the camera as the origin. This means that in every frame of your application the camera is the origin, and the positions of all trackables are calculated relative to the camera. Therefore, even if you do not move your target, its position will change because of camera movement.
To eliminate this problem I would recommend using extended tracking. This will minimize the impact of camera movement on the position of your target. You can test this by adding a trail renderer to your image target: you will see that it stays at a certain position regardless of camera movement.
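Once extended tracking keeps the target's world position stable, accumulating the distance it has travelled could look like this minimal sketch (attached to the target itself; nothing here is Vuforia-specific):

using UnityEngine;

// Sketch: accumulate how far the tracked target has moved in world space,
// frame by frame, independent of where the camera is.
public class TravelledDistance : MonoBehaviour
{
    public float totalDistance;   // distance travelled so far, in scene units
    private Vector3 lastPosition;

    void Start()
    {
        lastPosition = transform.position;
    }

    void Update()
    {
        // Add the target's own frame-to-frame displacement,
        // not its position relative to the (moving) camera.
        totalDistance += Vector3.Distance(transform.position, lastPosition);
        lastPosition = transform.position;
    }
}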
So I wanted to edit the starting height of the camera in my scene and tried to move the camera up a bit. Nothing happened. I also tested it by moving the camera up a large amount. Am I right in assuming that the camera always 'spawns'/'initiates' at (0,0,0), never using its position in the scene?
If this assumption is correct, how can I get the camera to initiate at the camera position set in the scene? I don't really want to move the entire scene to reposition the camera.
I think there are two solutions for this (specifically for the Tango Unity SDK):
You could modify PoseController.cs. For example, in the example code you could add an offset vector.
Use the DeltaPoseController prefab instead of the PoseController. DeltaPoseController keeps the initial position as the starting point, so you can just place it where you want the application to start.
You could create a simple script that sets the position of your camera in the Start() method:
void Start()
{
    // Offset the camera upward once at startup (note: Vector3, not Vector).
    transform.position += new Vector3(0f, 100f, 0f);
}
Then just attach the script to your camera.
I just figured it out myself. Hope this helps you!
Vector3 newPose = new Vector3(0, 10, 0);
yourTangoCamera.GetComponent<TangoDeltaPoseController>().SetPose(newPose, yourTangoCamera.transform.rotation);
Read more about SetPose here:
https://developers.google.com/tango/apis/unity/reference/class/tango-delta-pose-controller#class_tango_delta_pose_controller_1a19c0ea02c4c538ffcf7cdc3423b222b8