I am trying to develop a HUD for my VR players using two cameras. One is a stationary camera that points at my HUD objects with depth = 1; the other is the main camera with depth = 0, which tracks the HMD and has the HUD overlaid onto it.
However, Unity, SteamVR, or OpenVR is forcing my camera to track the player's HMD. After an exhaustive search I was unable to find any way to stop this behavior, so I tried parenting all of my HUD objects to my HUD camera and hoped for the best.
This results in undesirable jerky motion in all HUD objects:
Project Hierarchy:
[WorldHUD]
|__ Container (HUD)
|__ Camera (eye)
    |__ TextMesh (Status) <- child of the camera
[CameraRig]
|__ Camera (Head)
    |__ Camera (eye)
[SteamVR]
I believe I really need control over my camera's transform so I can stop it from moving and from tracking the player's HMD.
I have tried updating the camera's position in Update() and LateUpdate(), but that has no effect. I also spent many hours modifying and debugging the SteamVR and OVR scripts, with no positive results; the cameras never stopped tracking.
I have tried using this suggested script on my camera's parent:
NegateTracking.cs:
using UnityEngine;
using UnityEngine.VR; // legacy VR API (Unity 5.x / 2017.x)

public class NegateTracking : MonoBehaviour
{
    void LateUpdate()
    {
        // Offsets the parent by the negated head-tracking pose so the
        // child camera ends up back near the parent's origin.
        // Note: the negated position is applied without first rotating it
        // by the inverse head rotation, which is the likely cause of the
        // residual translation described below.
        transform.position = -InputTracking.GetLocalPosition(VRNode.CenterEye);
        transform.rotation = Quaternion.Inverse(InputTracking.GetLocalRotation(VRNode.CenterEye));
    }
}
This kept the camera pointing in one direction, but it was still translating a bit. I also noticed that things seemed even more choppy this way, so I feel I'm either using it wrong or it just isn't sufficient.
I'm now back to looking for ways to disable head tracking on certain cameras so I can position them in my scene.
Does anyone know how SteamVR/OpenVR takes control of these cameras' transforms? Is there any way to disable or override this behavior?
Thank you.
So, I just spent the whole night developing a functional workaround and put together a project that uses a little post-processing magic to essentially "spoof" a stationary camera. My idea was to generate two RenderTextures from a custom stereo camera setup consisting of two separate cameras that are not being tracked (which allows control of position, rotation, convergence plane, FOV, and so on). I then pass those cameras' (left and right) RenderTextures to the left and right eyes of one of the "tracked" cameras inside the OnRenderImage() function. It worked better than I expected, so I decided to put everything up on GitHub and write a short tutorial for anyone else needing to do this.
Stationary Stereo Camera (Download From GitHub)
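The core of the trick is just a blit inside OnRenderImage(). Here is a minimal sketch of that hand-off, assuming two untracked cameras that each render into their own RenderTexture; the class and field names are illustrative rather than taken from the GitHub project:

StationaryStereoBlit.cs:
using UnityEngine;

// Sits on the tracked VR camera and replaces its output with the
// RenderTextures produced by the two untracked, stationary cameras.
public class StationaryStereoBlit : MonoBehaviour
{
    public RenderTexture leftEyeTexture;  // target texture of the untracked left camera
    public RenderTexture rightEyeTexture; // target texture of the untracked right camera

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        // Pick whichever eye Unity is currently rendering.
        bool leftEye = GetComponent<Camera>().stereoActiveEye ==
                       Camera.MonoOrStereoscopicEye.Left;
        Graphics.Blit(leftEye ? leftEyeTexture : rightEyeTexture, dest);
    }
}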
Or you can change the Target Display and the Target Eye (to None (Main Display)) on your Camera component; this can also be done from a script.
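If you want the script route, a minimal sketch (hudCamera is an assumed reference to your HUD Camera component):

using UnityEngine;

public class HudCameraSetup : MonoBehaviour
{
    public Camera hudCamera; // assign the HUD camera in the Inspector

    void Start()
    {
        // Target Eye = None makes Unity treat this as a non-VR camera,
        // so it stops tracking the HMD and renders to the main display.
        hudCamera.stereoTargetEye = StereoTargetEyeMask.None;
        hudCamera.targetDisplay = 0; // Display 1
    }
}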
I'm using Unity 2018.4.14f1 Personal (I don't use 2019 or 2020 because it lags my computer).
I'm using the Unity Standard Assets player prefab and a Cinemachine FreeLook camera. I have some water, and when my player walks into it, it's fine. However, when the camera goes into the water, the water stops rendering. Is there any way I can fix it?
Update: I've somewhat got it working; however, it's hollow when you're inside. Is there any way to fix that?
Video : https://easyupload.io/2b0p3a
(I'm quite a noob so if you need any screenshots please ask.)
The problem here is that the water is only rendered when viewed from the outside, because the normals are modeled that way. The engine culls faces it decides are not in view (backface culling). You can load the model into a 3D program, then copy and invert the model so your camera can see the water from inside, or I believe there are shader options (e.g. Cull Off) to disable this optimization. You can also look in this Reddit thread.
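If you'd rather stay inside Unity than edit the model in a 3D program, here is a rough sketch of the copy-and-invert idea done at runtime. Attach it to a duplicate of the water object; the class name is made up for this example:

FlipMeshInsideOut.cs:
using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class FlipMeshInsideOut : MonoBehaviour
{
    void Start()
    {
        // .mesh returns an instance copy, so the original asset is untouched.
        Mesh mesh = GetComponent<MeshFilter>().mesh;

        // Reversing the winding order of each triangle flips which side
        // Unity treats as the front face.
        int[] tris = mesh.triangles;
        for (int i = 0; i < tris.Length; i += 3)
        {
            int tmp = tris[i];
            tris[i] = tris[i + 1];
            tris[i + 1] = tmp;
        }
        mesh.triangles = tris;

        // Flip the normals too so lighting looks right from the inside.
        Vector3[] normals = mesh.normals;
        for (int i = 0; i < normals.Length; i++)
            normals[i] = -normals[i];
        mesh.normals = normals;
    }
}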
I'm very new to Unreal Engine 4 and have been following an FPS guide online!
I currently have an AK and an M4 in the game and can switch between the two using 1 / 2 on the keypad. I set up the first aim-down-sights camera for the AK and it works well! However, if I equip the M4 and aim down sights, the camera is no longer in the correct spot and doesn't line up at all with the iron sights. So I added another camera called M4A1 ADS Camera, but I can't figure out how to switch to that camera when aiming down sights, then back to the AK camera when using that weapon.
Is there a better way of doing this, or any tutorials / tips to help with the process in the future?
To try and answer your question directly: you could add a switch case, or make branches, to check which weapon is equipped at the time.
But I'd say a better way to do this would be to add a camera to your weapon blueprint; then you could access the camera from the weapon directly (assuming you have a master weapon class). This way you would configure one ADS camera per weapon and align it properly in its own blueprint.
You can use the "Set View Target with Blend" function to change your cameras; it gives you good control over the blend speed and other blending settings.
I know this is old but even cleaner than Deimos's suggestion would be to have an ADS camera component on your character and attach it to a socket you create on each of your weapons. You can adjust the socket position and rotation on each weapon's skeleton and then all you do from the character side is attach the camera to the weapon any time you equip one.
Kinematic-based world gets messed up on movement
Hello, I have been developing a humble AR-based game in Unity3D. Until this point, I have been using Vuforia to deploy my scene on a (multi)tracker. However, I have been doing tests with Kudan, and I'm quite happy with its tracking performance when using a tracker.
http://i.imgur.com/nTHs6cM.png
My engine is based on collisions by raycasts rather than UnityEngine.Physics (almost everything is kinematic). I have stumbled into a problem: when I deploy my 3D environment on a tracker using the Kudan engine, my whole physics setup gets messed up. If the marker is moved, the elements move with it, and the axes seem to change with the marker, but my physics still respond to the old axis orientation. My characters always stand upward along the world Y axis (not the local one inside the tracker). Another issue is that my player's 3D asset keeps switching between "standing" and "falling" states, eventually clipping through the floor and falling (this is probably due to jitter in the camera detection).
http://i.imgur.com/ROn4uEz.png
One solution that comes to mind is to use a local coordinate system, but I hope there is an alternative, since when I was using Vuforia I did not have to do any further corrections.
Any links or feedback are appreciated.
You could use transform.InverseTransformPoint and transform.InverseTransformDirection, combined with Quaternion.LookRotation, to get the position and rotation of the Kudan camera relative to the MarkerTransformDriver object. This will allow you to position a camera in world space and keep whatever content you want to augment static at the Unity3D world origin.
Vector3 cameraPos = markerTransform.InverseTransformPoint(kudanCamera.position);
Quaternion cameraRot = Quaternion.LookRotation(markerTransform.InverseTransformDirection(kudanCamera.forward));
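Putting those two lines in context, here is a minimal sketch of how they might drive the rendering camera each frame. The field names are illustrative; nothing here comes from the Kudan SDK itself:

RelativeCameraDriver.cs:
using UnityEngine;

// Moves the rendering camera relative to the marker, so the augmented
// content can stay static at the Unity world origin with world-space Y up.
public class RelativeCameraDriver : MonoBehaviour
{
    public Transform kudanCamera;     // tracked camera pose from the AR SDK
    public Transform markerTransform; // detected marker/tracker transform
    public Transform renderCamera;    // camera that actually renders the scene

    void LateUpdate()
    {
        // Express the tracked camera's pose in the marker's local space...
        Vector3 cameraPos = markerTransform.InverseTransformPoint(kudanCamera.position);
        Quaternion cameraRot = Quaternion.LookRotation(
            markerTransform.InverseTransformDirection(kudanCamera.forward),
            markerTransform.InverseTransformDirection(kudanCamera.up));

        // ...and apply it to the rendering camera.
        renderCamera.SetPositionAndRotation(cameraPos, cameraRot);
    }
}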
I am making a VR game in Unity. The problem is that after generating the APK and installing it on my phone, when I look through the Cardboard my first-person character is fixed at a single position.
When I look in different directions, the FPS arms remain in the same position; they don't rotate according to the direction I am facing.
I am using the Unity Cardboard asset and I am working in Unity 5.
I've had a similar problem before; make sure that your model is a child of the Head component, so that your model stays fixed to the head as it rotates.
EDIT
From the image you supplied in your question, you have the Unity Standard Assets FPS controller. This moves by mouse movement, which of course you cannot do on a phone. Because your arms are a child of the FPS Controller, they will only move if the mouse moves. Therefore you need to make your arms a child of the Head component.
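If you prefer to do the re-parenting from a script instead of dragging in the Hierarchy, here is a minimal sketch; the object names are assumptions, so use whatever your scene calls them:

AttachArmsToHead.cs:
using UnityEngine;

public class AttachArmsToHead : MonoBehaviour
{
    void Start()
    {
        // "Head" and "FirstPersonArms" are placeholder names.
        Transform head = GameObject.Find("Head").transform;
        Transform arms = GameObject.Find("FirstPersonArms").transform;

        // Parent the arms under the head, keeping their current offset,
        // so they rotate with the Cardboard head tracking.
        arms.SetParent(head, true);
    }
}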
I have successfully enabled 3D object detection through Vuforia in Unity, and I have attached a crosshair (reticle) at the centre of the screen on a Screen Space - Overlay canvas. When the user moves their phone over the 3D model that appears upon object detection, I want a label to appear when the crosshair crosses different parts of the 3D object. I have tried many methods, including collisions, cursors, and reticles, but none of them worked.
Is there an easy way to implement this, so that I can use an Event Trigger pointer-enter to make a few things happen in the game?
I successfully solved my problem. The solution is to use a world-space crosshair.
Most of the crosshairs available in the Asset Store are camera-space, so using a world-space crosshair solved my problem. It may be useful to someone in the future.
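For anyone who finds this later, here is a rough sketch of the world-space approach: cast a ray from the centre of the screen (where the crosshair sits) and show a label for whichever part of the detected object it hits. All names are illustrative and none of this comes from Vuforia's API; it also assumes the parts of the 3D object have colliders:

CrosshairLabel.cs:
using UnityEngine;
using UnityEngine.UI;

public class CrosshairLabel : MonoBehaviour
{
    public Camera arCamera; // the AR camera rendering the detected object
    public Text labelText;  // UI Text used as the label

    void Update()
    {
        // Ray through the middle of the viewport, i.e. under the crosshair.
        Ray ray = arCamera.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit))
        {
            labelText.text = hit.collider.name; // e.g. the part's name
            labelText.enabled = true;
        }
        else
        {
            labelText.enabled = false;
        }
    }
}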