Unity Mobile depth buffer precision

I'm a mobile game developer using Unity Engine.
Here is my problem:
I tried to render the static scene objects into a render target with a color buffer and a depth buffer, and then reuse that target on the following frames, before the dynamic objects are rendered, as long as the main player's viewpoint stays the same. My goal is to reduce draw calls and save power on mobile devices. For reference, this strategy cuts power consumption by up to 20% in our MMO mobile game on Android devices.
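Here is a minimal sketch of the caching half of that setup (component and field names are illustrative; the compose step that copies the cached color and depth back into the frame, which needs a custom blit shader that writes depth, is omitted):

    using UnityEngine;

    // Hypothetical sketch: render the static geometry once into a
    // RenderTexture with a 24-bit depth buffer and re-render it only
    // when the viewpoint changes.
    public class StaticSceneCache : MonoBehaviour
    {
        public Camera staticCamera;   // set up to render only the static layers
        RenderTexture cache;
        Vector3 lastPos;
        Quaternion lastRot;

        void Start()
        {
            // 24-bit depth; 16 bits is a common source of precision artifacts.
            cache = new RenderTexture(Screen.width, Screen.height, 24);
            staticCamera.targetTexture = cache;
            RenderStaticScene();
        }

        void RenderStaticScene()
        {
            staticCamera.Render();    // fills the cache's color and depth buffers
            lastPos = staticCamera.transform.position;
            lastRot = staticCamera.transform.rotation;
        }

        void Update()
        {
            // Re-render the cache only when the camera has moved.
            if (staticCamera.transform.position != lastPos ||
                staticCamera.transform.rotation != lastRot)
            {
                RenderStaticScene();
            }
        }
    }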
The following pics are screenshots from my test project. The sphere, cube and terrain are static objects, and the red cylinder is moving.
You can see that the depth test result is wrong on Android.
On iOS the depth test is correct and the result is almost the same as with the optimization off. (The shadow is not right either, but let's ignore that for now.)
On Android, however, the result is not good: the moving cylinder is partly occluded by the cube, and the occlusion is not stable between frames.
It looks as if the depth buffer precision is not sufficient. Any ideas about this problem?
I Googled this problem but found no straight answers. Some say we can't read the depth buffer on GLES:
https://forum.unity.com/threads/poor-performance-of-updatedepthtexture-why-is-it-even-needed.197455/
And then there are cases where platforms don't support reading from the Z buffer at all (GLES when no GL_OES_depth_texture is exposed; or Direct3D9 when no INTZ hack is present in the drivers; or desktop GL on Mac with some buggy Radeon drivers etc.).
Is this true?
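It is at least true that depth texture support is not universal on GLES. Before enabling an optimization like this, you can check at runtime whether the platform supports depth textures at all; a small sketch:

    using UnityEngine;

    public class DepthSupportCheck : MonoBehaviour
    {
        void Start()
        {
            // True when the platform/driver can expose the depth buffer as a
            // texture (e.g. GL_OES_depth_texture on GLES).
            bool depthOk = SystemInfo.SupportsRenderTextureFormat(RenderTextureFormat.Depth);
            Debug.Log("Depth texture support: " + depthOk);
            // If unsupported, fall back to the unoptimized render path.
        }
    }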

Related

Unity MiniMap not Rendering on Phone

I'm currently making a little 2D game where you can walk around and interact with things. Unfortunately, I encountered a problem. When I made the minimap and tested it, everything went smoothly.
Here I tested it on PC in Unity
But when I built it for my phone and tested it there, I just got a black square instead of my minimap.
On my phone
I searched online on Stack Overflow and in the Unity docs but didn't find anything.
Here are the settings for my cameras
My main camera
My MiniMap Camera
And the settings from the render texture
I solved the problem. I just had to set the depth buffer to "At least 24 bits depth" on the render texture.
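For anyone creating the render texture from script rather than as an asset, the equivalent fix is the depth parameter of the RenderTexture constructor (a small sketch; the camera reference is an assumption):

    using UnityEngine;

    public class MiniMapSetup : MonoBehaviour
    {
        public Camera miniMapCamera;   // assumed: the camera that renders the minimap

        void Start()
        {
            // The third argument is the depth buffer size in bits:
            // 0 = no depth, 16 = depth only, 24 = depth + stencil.
            var rt = new RenderTexture(256, 256, 24);
            miniMapCamera.targetTexture = rt;
        }
    }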

Reducing eye strain when displaying video in VR?

I'm working on a VR project where at some points we will display video in front of the user. I'm looking for recommendations on how to keep the video comfortable to focus on without causing eye strain. I'd like to know how others have tackled showing video in VR headset projects/games.
What I have tried so far is increasing/decreasing the size of the video and its distance from the player's eyes/head. Since you can't use Screen Space - Overlay or Screen Space - Camera canvases with VR, you have to resort to general world-space placement. Making the video too small and too close causes a splitting (double-image) effect and also blurs the video; increasing the size has produced no significantly noticeable change.
Headset: HTC Vive (Regular and Pro versions)
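One relationship worth exploiting here: if you scale the video surface proportionally with its distance, its apparent (angular) size stays constant, so you can move it further away, which tends to reduce vergence strain, without it looking smaller. A hedged sketch of that idea (the quad and head references are assumptions):

    using UnityEngine;

    public class VideoPlacement : MonoBehaviour
    {
        public Transform videoQuad;         // assumed: world-space quad showing the video
        public Transform head;              // assumed: the VR camera/head transform
        public float distance = 3f;         // meters in front of the user
        public float angularWidthDeg = 40f; // desired apparent width in degrees

        void LateUpdate()
        {
            // Keep the quad in front of the head at the chosen distance.
            videoQuad.position = head.position + head.forward * distance;
            videoQuad.rotation = Quaternion.LookRotation(videoQuad.position - head.position);

            // width = 2 * d * tan(theta / 2) keeps the angular size constant
            // regardless of distance (16:9 aspect assumed for the height).
            float width = 2f * distance * Mathf.Tan(0.5f * angularWidthDeg * Mathf.Deg2Rad);
            videoQuad.localScale = new Vector3(width, width * 9f / 16f, 1f);
        }
    }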

Physics messed up with cardboard scene in Unity

I am in the process of putting together an app using the Google Cardboard SDK. The user will be able to use the app with or without Cardboard, so there is a switch button inside the app that activates and deactivates stereo rendering.
The app also uses the Vuforia SDK to track image targets. If a specific target is recognized, some 3D objects appear above the target and a particle system starts to emit particles.
Everything works fine in non-stereo mode. The particles are emitted and fall correctly, as intended; they are meant to simulate snow. Also, if the user tilts the image target, the 3D objects above it fall down.
When switching to stereo mode, the physics are messed up completely. The snow particles no longer fall; they seem to "teleport" around the screen. The 3D objects fall upwards, as if under really heavy negative gravity. The timescale seems multiplied several times, but it is not; I double-checked that. Gravity also does not change when switching between non-stereo and stereo rendering.
Everything works fine in the Unity Editor in both modes. The issue only appears on the device, an iPhone 5.
Cardboard SDK is version 0.52, which is the newest.
Unity is version 5.3.1.
Vuforia is 5.0.6, which is not the newest, but the release notes do not indicate a fix concerning physics. I will update it anyway as a next step. (Update: Vuforia is now 5.0.10, the latest version.)
I double-checked gravity and timescale; neither changes when switching between modes. I have a hard time figuring out what might cause the physics to break.
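The check itself was essentially this trivial sketch (the toggle hook is hypothetical; it would be called from the stereo switch button):

    using UnityEngine;

    // Log the global physics/time settings whenever stereo is toggled,
    // to rule them out as the cause.
    public class PhysicsSanityCheck : MonoBehaviour
    {
        public void OnStereoToggled(bool stereoOn)
        {
            Debug.Log("stereo=" + stereoOn +
                      " timeScale=" + Time.timeScale +
                      " fixedDeltaTime=" + Time.fixedDeltaTime +
                      " gravity=" + Physics.gravity);
        }
    }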
EDIT:
I did some further investigation. I made a little gizmo that always sits in front of the camera but takes on the rotation of the Unity world-space axes, so I can see how the 3D world is oriented relative to the camera. It turns out that in VR mode, with the Google Cardboard camera system, the world spins heavily around the camera. I managed to hold the test device so that the spinning slowed down and almost froze, but I have no explanation for the effect yet.
I managed to get my setup working again. Unfortunately I did not find the source of the weird behavior, but deleting the Vuforia prefab and the Cardboard prefab and adding them back to the scene solved the problem.

Using PointCloud Prefab on Unity 3D

I tried to implement the "Measure It" app in Unity 3D. I started with the PointCloud example scene downloaded from Tango's website.
My problem is that in first-person view the point cloud doesn't fill the screen, and in third-person view I can see points outside the Unity camera's FOV.
I don't see this problem in the Explorer app, but that one appears to be written in Java, so I think it's a Unity compatibility problem.
Does anyone have the same problem, or a solution?
Unity 3D 5.1.1
Google Tango urquhart
Sorry for my poor English.
Regards.
EDIT :
It looks like the ExperimentalAugmentedReality scene uses the point cloud to place markers in the real world, and there the point cloud is right in front of the camera. I don't see any script difference between the two scenes, so I don't understand why it works there. If you have any idea, please share.
I think it makes sense to divide your question into two parts.
Why the points are not filling the screen in the point cloud example:
To make the points fill the first-person view, the render camera's FOV needs to match the physical depth camera's FOV. In the point cloud example, I believe Tango just uses the default Unity camera FOV; that's why you see the points not filling the screen (the render camera).
In the third-person camera view, the frustum is just a visual representation of the device's movement. It doesn't indicate the FOV or any other camera intrinsics of the device. For visualization purposes, Tango Explorer might have matched the frustum size to the actual camera FOV, but that's not guaranteed to be 100% accurate.
Why the AR example works:
In the AR example, we must set the virtual render camera's FOV to match the physical camera's FOV, otherwise the AR view will be off. On Tango hardware, the color camera and the depth camera are the same camera sensor, so they share the same FOV. That's why the AR example works.
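For reference, the render camera's vertical FOV can be derived from the physical camera intrinsics (focal length and image height in pixels); a hedged sketch with placeholder values, since the real intrinsics have to be queried from the Tango API:

    using UnityEngine;

    [RequireComponent(typeof(Camera))]
    public class FovFromIntrinsics : MonoBehaviour
    {
        // Placeholder intrinsics; query the actual values from the Tango API.
        public float focalLengthY = 1042f; // f_y in pixels
        public float imageHeight = 720f;   // image height in pixels

        void Start()
        {
            // vertical FOV = 2 * atan(h / (2 * f_y)), converted to degrees.
            float fovDeg = 2f * Mathf.Atan(imageHeight / (2f * focalLengthY)) * Mathf.Rad2Deg;
            GetComponent<Camera>().fieldOfView = fovDeg;
        }
    }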

AR Overlay Accuracy in Google Project Tango

I am experimenting with overlaying augmented reality objects over a pass-through image from the rear camera in Unity.
Has anyone experimented with overlaying objects with accurate tracking? I've tweaked the movement scale to get somewhat decent results but rotation is still not accurate and drift is a big issue.
I've had good luck with the augmented reality sample that ships with the latest Tango. In my experience it works the way you speculated: if you add items to the Unity scene, they are synced to the motion detected by the device.
I believe the tracking and syncing have improved since you originally asked this question, because I've noticed an improvement since I got my Tango devkit a month or so ago; there was an update a week or so later with an immediate improvement.
I have found that some scenes track better than others; it seems to help for there to be additional scenery to track. In my workspace, a fairly cluttered apartment, it tracks well, but in the neighboring identical apartment unit, which is currently vacant and empty, it does not track as well. That could also be a product of the blinds hanging in my unit (and not in the vacant unit) filtering out additional infrared.
I'm experimenting with placing 3D objects over the real time input from the Tango color camera.
One problem here is that the hardware color camera points in a (strange) direction. I haven't been able to get the direction vector from the API so far. Your virtual camera for rendering the scene needs this rotation to render 3D objects properly.
There are augmented reality examples of Tango's Unity plugin:
https://developers.google.com/tango/apis/unity/unity-simple-ar
They solve this problem with a matrix that rotates the 3D camera.
It can be found in the Unity script "TangoARPoseController" (C#), which, when attached to a Unity camera, rotates it so that it looks at the scene in the right direction. The matrix is obtained in the method "SetCameraExtrinsics" of that script.
Unfortunately, when I apply the matrix in my Unity scene, it does not produce a perfect overlay (actually it's quite bad). But I have other sources of position input, which may be the problem here.
So far I'm not sure whether the matrix used in the examples is good enough for accurate AR overlays; maybe it is only suitable for demonstration purposes. But it should be a good starting point for further investigation.
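If you want to apply such a rotation matrix to your own camera, the usual Unity idiom is to turn its basis vectors into a quaternion (a generic sketch; where the matrix comes from, e.g. the Tango extrinsics, is treated as given here):

    using UnityEngine;

    public class ApplyExtrinsics : MonoBehaviour
    {
        // Assumed input: a pure rotation matrix from the device/AR SDK.
        public void ApplyRotation(Matrix4x4 m)
        {
            // Build a quaternion from the matrix's forward (Z) and up (Y) columns.
            transform.rotation = Quaternion.LookRotation(
                m.GetColumn(2),   // forward
                m.GetColumn(1));  // up
        }
    }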
Are we talking about displaying the 'webcam' in the background, as opposed to a skybox?
Take a look at my GhostHunter repo. It includes a shader and a script for displaying the rear-facing camera 'behind' the gameplay objects (like a skybox). It should be usable with Tango, and it is better than the 'display on a mesh' technique I've seen others use.
https://github.com/NVentimiglia/Augmented-Reality-Ghost-Hunter