I am facing a problem after exporting the project. I do not know what the reason is, but my mobile phone becomes hot after a few minutes.
The project is new and does not contain any scripts; I just added the ARKit XR Plugin.
This is a quite common "thermal condition" for any device running an Augmented Reality app. The tracking stage of ARKit, RealityKit, ARCore, Vuforia or MRTK is highly CPU-intensive. Your phone not only tracks and reconstructs the surrounding environment at 60 fps but also simultaneously renders 3D geometry with PBR shaders, textures, shadows, animation and physics.
In some cases, face tracking is even more CPU-intensive than world tracking. That's because the RGB channels coming from the selfie camera are processed in tandem with a segmented alpha channel and a ZDepth channel coming from the TrueDepth sensor, while more than 50 facial blendshapes deform the geometry every 1/60 of a second.
Pay particular attention to the fact that native Xcode builds of ARKit apps written in Swift (using UIKit or, especially, SwiftUI) run considerably faster than Unity builds of ARKit apps.
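If you go the native route, one mitigation worth trying is picking a lower-fps video format, which reduces the per-second tracking and rendering work and therefore the heat. Here is a minimal sketch of that idea (not from the original project; the session constant is an assumption standing in for your own ARSession):

import ARKit

let configuration = ARWorldTrackingConfiguration()

// Sketch: choose the supported video format with the lowest frame rate
// to reduce tracking and rendering load.
if let lowPowerFormat = ARWorldTrackingConfiguration.supportedVideoFormats
    .min(by: { $0.framesPerSecond < $1.framesPerSecond }) {
    configuration.videoFormat = lowPowerFormat
}

session.run(configuration)   // `session` is assumed to be your ARSession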
I'm creating a VR game for the Gear VR device, using Unity version 2019.1.2. Even though my game shows a higher frame rate and a lower draw call count, it responds slowly when playing. I reduced the number of materials and am now using only 2 to 3 materials for the whole scene. I tested on a Samsung S6. I can't figure out what's wrong here.
Many thanks for your consideration.
I am developing an app in Swift that needs to use the front-facing camera of the iPhone for an Augmented Reality experience. I have tried to use ARKit, but its front-facing-camera face tracking is only supported on the iPhone X.
So, which frameworks or libraries can I use with Swift to develop apps that have an AR experience, especially with the front-facing camera, other than ARKit?
ARKit isn't the only possible way to create "AR" experiences on iOS, nor is it the only way Apple permits "AR" apps in the App Store.
If you define "front-facing-camera AR" as something like "uses front camera, detects faces, allows placing virtual 2D/3D content overlays that appear to stay attached to the face", there are any number of technologies one could use. Apps like Snapchat have been doing this kind of "AR" since before ARKit existed, using technology they've either developed in-house or licensed from third parties. How you do it and how well it works depends on the technology you use. ARKit guarantees a certain precision of results by requiring a front-facing depth camera.
It's entirely possible to develop an app that uses ARKit for face tracking on TrueDepth devices and a different technology for other devices. For example, looking only at what you can do "out of the box" with Apple's SDK, there's the Vision framework, which locates and tracks faces in 2D. There are probably a few third-party libraries out there, too... or you could go looking through academic journals, since face detection/tracking is a pretty active area of computer vision research.
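To make the Vision route concrete, here is a minimal sketch (not part of the original answer) of 2D face detection on a camera frame; it assumes you already obtain CVPixelBuffers from your own AVCaptureSession:

import Vision

func detectFaces(in pixelBuffer: CVPixelBuffer) {
    // Detects faces and their landmarks in a single frame; the results are
    // normalized bounding boxes you can map onto your preview layer.
    let request = VNDetectFaceLandmarksRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            print(face.boundingBox)   // normalized (0...1) image coordinates
        }
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}

Unlike ARKit's face anchor, this gives you no depth or 3D pose, so any "attached" content has to be positioned from the 2D landmarks alone.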
ARKit 2.0
The TrueDepth front-facing camera of the iPhone X/XR/XS gives you a depth channel at a 15 fps frame rate, and the regular front-facing image camera gives you RGB channels at a 60 fps frame rate.
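On the API side, here is a minimal sketch (my addition, assuming a running face-tracking session) of how those two streams surface on each ARFrame; capturedDepthData is optional because depth is refreshed less often than the RGB image:

import ARKit
import AVFoundation
import CoreVideo

// ARSessionDelegate callback (sketch): both streams arrive with the ARFrame,
// but depth is updated at a lower rate, so capturedDepthData is optional.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let rgbBuffer: CVPixelBuffer = frame.capturedImage        // 60 fps RGB image
    if let depth: AVDepthData = frame.capturedDepthData {     // ~15 fps depth (face tracking only)
        print(CVPixelBufferGetWidth(rgbBuffer), depth.depthDataMap)
    }
}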
Principle of operation: it's like the depth-sensing system in the Microsoft Xbox Kinect, but more powerful. An infrared emitter projects over 30,000 dots in a known pattern onto the user's face. Those dots are then photographed by a dedicated infrared camera for analysis. There is a proximity sensor, presumably so that the system knows when a user is close enough to activate, and an ambient light sensor helps the system set output light levels.
At the moment only the iPhone X/XR/XS models have a TrueDepth camera. If your iPhone doesn't have a TrueDepth camera and sensor system (as the iPhone SE, iPhone 6s, iPhone 7 and iPhone 8 don't), you cannot use it for features such as Animoji, Face ID, or depth occlusion effects.
In the ARKit 2.0 framework, the configuration that tracks the movement and expressions of the user's face with the TrueDepth camera is the dedicated ARFaceTrackingConfiguration class.
So the answer is NO: you can only use the front-facing camera for this on iPhones with an A11 or A12 chipset (or a higher version), or on an iPhone with a TrueDepth camera and its sensor system.
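A quick runtime check is the safest way to handle this in practice. Here is a minimal sketch (the sceneView property is an assumption standing in for your own ARSCNView):

import ARKit

// ARFaceTrackingConfiguration.isSupported is true only on devices
// whose front camera supports ARKit face tracking.
if ARFaceTrackingConfiguration.isSupported {
    let configuration = ARFaceTrackingConfiguration()
    sceneView.session.run(configuration)
} else {
    // Fall back to a non-ARKit approach, e.g. the Vision framework.
}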
ARKit 3.0 (addition)
Now ARKit allows you to simultaneously track the surrounding environment with the back camera and track your face with the front camera. Also, you can track up to 3 faces at a time (see the short snippet after the second scenario).
Here are two code snippets showing how to set up your configuration.
First scenario:
let configuration = ARWorldTrackingConfiguration()
// supportsUserFaceTracking is a class property, so check it on the type
if ARWorldTrackingConfiguration.supportsUserFaceTracking {
    configuration.userFaceTrackingEnabled = true
}
session.run(configuration)

func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for anchor in anchors where anchor is ARFaceAnchor {
        // your code here...
    }
}
Second scenario:
let configuration = ARFaceTrackingConfiguration()
// supportsWorldTracking is also a class property
if ARFaceTrackingConfiguration.supportsWorldTracking {
    configuration.isWorldTrackingEnabled = true
}
session.run(configuration)

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let transform = frame.camera.transform
    // your code here...
}
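And the multi-face case mentioned above, as a small sketch of my own: cap the number of tracked faces at what the device actually supports.

let faceConfiguration = ARFaceTrackingConfiguration()
// supportedNumberOfTrackedFaces reports the device limit (up to 3 in ARKit 3)
faceConfiguration.maximumNumberOfTrackedFaces =
    min(3, ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces)
session.run(faceConfiguration)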
I'm a mobile game developer using Unity Engine.
Here is my problem:
I tried to render the static scene objects into a render target with a color buffer and a depth buffer, which I then reuse in the following frames before the dynamic objects are rendered, as long as the main player's viewpoint stays the same. My goal is to reduce some draw calls as well as to save power on mobile devices. FYI, this strategy saves up to 20% of power consumption in our MMO mobile game on Android devices.
The following pictures are screenshots from my test project. The sphere, cube and terrain are static objects, and the red cylinder is moving.
You can see that the depth test result is wrong on Android.
The iOS device works fine: the depth test is correct, and the render result is almost the same as when the optimization is off. Notice that the shadow is not right, but we can ignore that for now.
However, the result on Android is not good. The moving cylinder is partly occluded by the cube, and the occlusion is not stable between frames.
The results suggest that the depth buffer precision is not enough. Any ideas about this problem?
I googled this problem but found no straight answers. Some said we can't read the depth buffer on GLES.
https://forum.unity.com/threads/poor-performance-of-updatedepthtexture-why-is-it-even-needed.197455/
And then there are cases where platforms don't support reading from the Z buffer at all (GLES when no GL_OES_depth_texture is exposed; or Direct3D9 when no INTZ hack is present in the drivers; or desktop GL on Mac with some buggy Radeon drivers etc.).
Is this true?
I am in the process of putting together an app using the Google Cardboard SDK. The user will be able to use the app with or without Cardboard, so there is a switch button inside the app that activates and deactivates stereo rendering.
The app also uses the Vuforia SDK to track image targets. If a specific target is recognized, some 3D objects appear above the target and a particle system starts to emit particles.
Everything works fine in non-stereo mode. Particles are emitted and fall correctly as intended; they should simulate snow. Also, if the user tilts the image target, the 3D objects above it fall down.
When switching to stereo mode, the physics get messed up completely. The snow particles are not falling anymore; they seem to "teleport" around the screen. Also, the 3D objects fall upwards, as if under a really heavy negative gravity. The timescale seems to be multiplied several times, but it is not - I double-checked that. Gravity also does not change when switching between non-stereo and stereo rendering.
Everything works fine in the Unity Editor in both modes. The problem only appears on the device, which is an iPhone 5.
Cardboard SDK is version 0.52, which is the newest.
Unity is version 5.3.1.
Vuforia is 5.0.6, which is not the newest, but the release notes do not indicate a fix concerning physics. I will update it anyway as a next step.
Update: Vuforia is now 5.0.10, which is the latest version.
I double-checked gravity and timescale, which do not change when switching between modes. I have a hard time figuring out what might cause the physics to get messed up.
EDIT:
I did some further investigation. I made myself a little gizmo that always sits in front of the camera but takes on the rotation of the Unity world-space axes, so I know how the 3D world is oriented in relation to the camera. It turns out that, when in VR mode with the Google Cardboard camera system, the world spins heavily around the camera. I managed to hold the test device in a way that slows the spinning down until it almost freezes, but I have no explanation for the effect yet.
I managed to get my setup right again. Unfortunately, I did not find the source of the weird behavior, but by deleting the Vuforia prefab and the Cardboard prefab and adding them to the scene again, the problem was solved.
I have downloaded the Leap Motion core assets from the official website. What I am trying to do is see my hands in the Oculus Rift. There are some predefined scenes already included in the core assets, for example 500Blocks. However, when I try to load this scene I just get a scene with blocks, but my hands are not detected. I'm pretty sure that the Oculus Rift and the Leap Motion are turned on. You can see in the picture what I get.
What I want is simply to have my hands detected and to be able to interact with the cubes. How can I do this?
I have Leap Motion 2.2.7, Oculus Rift 2, and Unity 5.1.1. I built the scene and launched the version with directToRift.
Is your Leap Motion plugged into the Oculus Rift (won't work) or directly into your machine (more likely to work)?
Is your Leap Motion working fine? For example, try the Visualizer while the Oculus Rift is running.
Do you have "Allow Images" enabled in the Leap Motion Control Panel (accessible through the taskbar icon)? The white background suggests that passthrough is turned off. 500 Blocks uses our Image Hands assets, so passthrough is needed to see your hands.