I'm creating a VR game for the Gear VR using Unity 2019.1.2. Even though the profiler shows a high frame rate and a low draw-call count, the game responds slowly during play. I reduced the number of materials and now use only 2 to 3 materials for the whole scene. I tested on a Samsung S6. I can't figure out what's wrong here.
Many thanks for your consideration.
Related
I have a problem with my VR game.
I have built a simple scene, and when I start the game RivaTuner shows 80-90 frames per second, but when I put on the headset the frame rate is very low. I want to know why it works like that.
I used Oculus Quest 2 with Oculus Link.
I am facing a problem after exporting the project. I do not know the reason, but my mobile phone becomes hot after a few minutes.
The project is new and does not have any scripts; I just added the ARKit XR Plugin.
It's a quite common "thermal condition" for any device running an Augmented Reality app. The tracking stage of ARKit, RealityKit, ARCore, Vuforia or MRTK is highly CPU-intensive. Your phone not only tracks and reconstructs the surrounding environment at 60 fps but also simultaneously renders 3D geometry with PBR shaders, textures, shadows, animation and physics.
In some cases, face tracking is even more CPU-intensive than world tracking. This is because the RGB channels coming from the selfie camera must be processed in tandem with a segmented alpha channel and a ZDepth channel coming from the TrueDepth sensor, and more than 50 facial blendshapes deform the geometry every 1/60 of a second.
Pay particular attention to the fact that native Xcode builds of ARKit apps written in Swift (using UIKit or, especially, SwiftUI) run considerably faster than Unity builds of ARKit apps.
I want to track the user's walking movement (positional tracking) and move the camera accordingly in a virtual reality app, not just head movements like rotation. So I am wondering whether the combination of a Gear VR with a Samsung S7 edge will support this requirement.
Any suggestion will be appreciated, thank you.
GearVR is not meant to be a 6-DoF device, which is what you are asking for (3 Degrees of Freedom means tracking rotation only; 6 Degrees of Freedom means rotation + position).
It is possible to achieve 6-DoF tracking with a Samsung Galaxy by using ARCore. A good starting point would be this GitHub project.
However, the results will most likely not meet your expectations in terms of usability. If you intend to use the original GearVR goggles, you'll notice that the camera is covered, so that would be a problem as well.
If you really want quality mobile 6-DoF VR (read: goggles, not a smartphone in your hands), you should look out for Vive Focus or Oculus Quest as a device.
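If you did go the ARCore route, the wiring could look roughly like the sketch below: let ARCore supply position while the Gear VR SDK keeps supplying rotation. This is a sketch only; `arTrackedPose` is a hypothetical reference to a Transform that ARCore drives (for example, a hidden AR-tracked camera), not part of any SDK.

```csharp
using UnityEngine;

// Sketch: copy the position tracked by ARCore onto the VR camera rig,
// while the Gear VR SDK keeps driving 3-DoF head rotation.
// "arTrackedPose" is an illustrative name, assumed to be a Transform
// updated by ARCore tracking; attach this script to the VR rig root.
public class ARCorePositionalTracking : MonoBehaviour
{
    [SerializeField] private Transform arTrackedPose; // hypothetical reference

    void LateUpdate()
    {
        // Apply only the position; rotation comes from the headset itself.
        transform.position = arTrackedPose.position;
    }
}
```

Even with something like this working, expect noticeable latency and drift compared to a dedicated 6-DoF headset, and remember the covered-camera problem mentioned above.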
I'm developing a VR app in Unity for the Samsung Gear VR and I'm trying to implement a pointer so the user can interact with the objects in the scene. When you look at distant objects it looks fine, but when you focus on close objects (which the app mechanics rely on heavily) the pointer appears to be duplicated, so you need to center the desired object between the two points :P
What I've tried
-Using the GvrReticlePointer that comes with the GoogleVR package for cardboard
-Creating my own pointer by adding a canvas to the main camera with an image in the center
-Changing some of the Camera settings like field of view, stereo separation, etc.
-Configuring my phone via a QR code http://imgur.com/fVrNrQk
Steps to reproduce (With canvas added to camera)
1.- Create a simple scene with a few objects to look at in Unity
2.- Set build settings for android
3.- Configure player settings to enable "Virtual Reality Supported"
4.- Add Oculus as Virtual Reality SDK
5.- Set package name and minimum API level
6.- Add a canvas to the camera
7.- Add an image to the canvas, a cross will do the job
Observations
I'm using Unity 5.6.0b10, since Google Cardboard's site recommends this version for the GoogleVR package. And I'm using the Samsung Gear VR with a Samsung Galaxy S6 edge+ phone.
Solved
Apparently this is a well-documented issue called voluntary diplopia, and it's a human bug, not a software one (read Unity's documentation, section The Reticle Interaction in VR).
The problem is placing the reticle at a fixed depth in the user interface, as traditional 3D games do. When looking at closer objects in VR, this causes the seeing-double problem.
The solution is to position the reticle at the point in 3D space the user is looking at: if they're looking at something closer, the reticle is drawn closer. Of course you then also have to scale the reticle accordingly, so users see it at the same apparent size no matter where they're looking.
Unity also provides some example scripts for this; you can find them in the Asset Store in a package called VR Samples.
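The approach described above can be sketched as follows. This is a minimal illustration, not Unity's VR Samples code; the field names (`reticle`, `defaultDistance`, `sizePerMeter`) and the scaling constant are illustrative assumptions.

```csharp
using UnityEngine;

// Sketch: raycast along the camera's gaze, draw the reticle at the hit
// depth so both eyes converge on it, and scale it with distance so its
// apparent size stays constant. Attach to the VR camera.
public class GazeReticle : MonoBehaviour
{
    [SerializeField] private Transform reticle;          // quad/sprite used as the pointer
    [SerializeField] private float defaultDistance = 5f; // depth when nothing is hit
    [SerializeField] private float sizePerMeter = 0.02f; // world scale per meter of depth

    void Update()
    {
        Ray gaze = new Ray(transform.position, transform.forward);

        RaycastHit hit;
        float distance = defaultDistance;
        if (Physics.Raycast(gaze, out hit))
            distance = hit.distance;

        // Place the reticle at the gaze depth, facing the camera.
        reticle.position = gaze.origin + gaze.direction * distance;
        reticle.rotation = Quaternion.LookRotation(gaze.direction);

        // Scale linearly with distance so the reticle looks the same size
        // whether the user focuses on near or far objects.
        reticle.localScale = Vector3.one * (sizePerMeter * distance);
    }
}
```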
Now I have performance issues (I'm working on mobile platforms): sometimes, when you turn your head fast, you can see the reticle where it was drawn the previous frame. But it looks way better than the double-reticle version.
I'm using Unity3D version 5.3. I'm working on a 2D endless-runner game. It works normally on PC, but when I build it for my phone, all of my GameObjects shake while they move. The GameObjects are in a respawn loop, and I'm increasing my camera's transform position on x. So when my camera moves, all the other objects look like they're shaking a lot, and my game runs slowly on my phone as a result. I tried playing my game on several Samsung demo phones: it works normally on some of them, but on some Samsung devices it still shakes. I don't understand what the problem is. Can you help me with this?
One thing you can do, if your game is finished or close to it, is start optimising. If you open the Profiler, click "Deep Profile" and run the game in the editor on your PC, you'll get a very detailed breakdown of what is using the most resources in your game. Generally it's something like draw calls or the physics engine doing unnecessary work.
Another thing that might help is to use Time.deltaTime, if you aren't already. If the script that increases the transform doesn't multiply the increase by Time.deltaTime, then you're moving your camera by an amount per frame rather than per second, which means that any framerate drop makes the camera move a smaller distance that frame, and that could be throwing off some of your other calculations. Using Time.deltaTime won't improve your framerate, but it will make your game framerate independent, which is very important.
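The difference can be sketched like this (the class and `speed` value are illustrative, not from the asker's project):

```csharp
using UnityEngine;

// Illustration: per-frame vs per-second camera movement.
// "speed" is intended as world units PER SECOND.
public class CameraScroller : MonoBehaviour
{
    [SerializeField] private float speed = 3f;

    void Update()
    {
        // Frame-dependent (bad): moves `speed` units every frame, so the
        // camera travels faster at high framerates and stutters on drops.
        // transform.position += Vector3.right * speed;

        // Frame-independent (good): Time.deltaTime is the duration of the
        // last frame, so this moves `speed` units per second regardless
        // of framerate.
        transform.position += Vector3.right * speed * Time.deltaTime;
    }
}
```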