AR model disappears at some distance though I set AR camera far clipping plane to 5000 - unity3d

I need help with a project based on AR + GPS.
I have to place a model at a GPS location at runtime, and the player has to find that model.
I have completed most of the project using the AR + GPS Location asset to place the model at a given latitude/longitude. I also use an asset named TriLib 2 - Model Loading Package to load the model from a URL.
I am facing an issue on some Android devices: the model does not appear beyond a certain distance (about 20 meters away), but if I am within 20 meters it appears on the screen.
I have set the AR camera's far clipping plane to 5000.
On iOS devices it works perfectly fine, and on some Android devices it also works fine.
What could the issue be?
https://www.youtube.com/watch?v=IDhH3SrzVFg
Please see the video (time: 3:08) for reference.
On the Realme XT and iPhone X it works.
On the Samsung Galaxy S10 and Xiaomi Redmi Note 9 Pro, the model disappears at some distance.
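For what it's worth, one thing to rule out is that some other component (for example, the AR session setup) overwrites the far plane after you set it in the inspector. A minimal sketch that re-asserts the value at runtime (the component name and the 5000 value are just illustrative):

```csharp
using UnityEngine;

// Attach to the AR Camera. Re-asserts the far clipping plane every frame,
// in case another script or the AR session resets it at runtime.
public class ForceFarClip : MonoBehaviour
{
    public float farPlane = 5000f;

    void LateUpdate()
    {
        var cam = GetComponent<Camera>();
        if (cam != null && cam.farClipPlane != farPlane)
            cam.farClipPlane = farPlane;
    }
}
```

Logging cam.farClipPlane on the affected Samsung and Xiaomi devices would also confirm whether the setting actually sticks there.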

Related

iPhone 12 Pro is not showing the 3D model instantly

I have developed an AR mobile application with object detection using TensorFlow. The app runs perfectly on the iPhone 12 Mini and other iPhones, but when I test it on the iPhone 12 Pro and iPad Pro it does not show the 3D model when the camera is far from the detected object. Whenever the app detects the trained object, it is supposed to show the 3D model and place it near that object, but on the iPhone 12 Pro it only shows the 3D object when the camera is near the detected object.
I think LiDAR may be creating the problem? If so, how do I stop the LiDAR using C# code? I developed the project in Unity using ARFoundation and TensorFlow, and I am using ARFoundation 1.0.
ARFoundation 1.0 was released in 2018, so it doesn't support Meshing (generation of triangle meshes that correspond to the physical space). So there may be time-lag problems: a device equipped with a LiDAR scanner must work out that there is no support for Scene Reconstruction in the current configuration and fall back to the common Plane Detection approach instead.
A simple solution is to use the latest version of ARFoundation, 4.1.5.
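If after upgrading you still want to make sure LiDAR meshing never kicks in, a minimal sketch (assuming ARFoundation 4.x, where meshing is driven by the ARMeshManager component; whether your scene contains one is an assumption):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Disables LiDAR-driven meshing so the session uses plain plane detection.
public class DisableMeshing : MonoBehaviour
{
    void Start()
    {
        // On non-LiDAR devices there is simply no ARMeshManager to disable.
        var meshManager = FindObjectOfType<ARMeshManager>();
        if (meshManager != null)
            meshManager.enabled = false;
    }
}
```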

How to reduce draw calls in Unity?

I'm a beginner Unity 3D developer working on a mobile game (Android). Everything works fine when I test my game in the editor (150 FPS) and on my newest phone (OnePlus 5, 60 FPS), but when I try it on my old phone (LG G5 with Android 6.0) I get only 15 FPS.
I tested an empty scene with only a 3D cube and could reach just 25 FPS. I used the profiler to inspect my game and saw that I have more than 1300 draw calls in my home scene (which uses about 40 different sprites and 30 different meshes). I tried marking some objects for static batching, enabled GPU instancing, and reduced most of the quality settings, but nothing solved the issue. I also tried disabling every GameObject in the scene (except the camera), but it barely increased the FPS (by 5 FPS at most).
Here's my profiler on the empty scene (on the LG G5):
I developed another game with only UI elements and it works fine on this LG G5.
Did I make a mistake in the settings? Or is my phone just too old for my game? (I tried Crossy Road, which was made with Unity, and it runs really nicely on my old phone.)
How can I improve the graphics performance?
I'm using the Universal Render Pipeline and Unity 2019.3.5f.
Thanks in advance! And please forgive me, my English isn't perfect.
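Not a diagnosis of this particular scene, but a common first step for cutting draw calls is to let Unity combine static geometry that shares materials. A minimal sketch using the built-in StaticBatchingUtility (the staticRoot field is an assumption about how the scene is organised):

```csharp
using UnityEngine;

// Combines all meshes under staticRoot into batches at scene start.
// Objects under staticRoot must not move afterwards, and batching only
// reduces draw calls for meshes that share the same material.
public class BatchStaticGeometry : MonoBehaviour
{
    public GameObject staticRoot;

    void Start()
    {
        StaticBatchingUtility.Combine(staticRoot);
    }
}
```

For URP specifically, enabling the SRP Batcher on the pipeline asset is usually the bigger win.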

Vuforia and Unity: Unable to place mid-air objects or use ground plane

I'm trying to place an object in mid-air and detect ground planes. When I follow the steps in the documentation, it doesn't work and I have the following error when debugging with adb:
PositionalDeviceTracker has not been Initialized correctly, unable to create anchors
I tried on an iPad running iOS 11.2, a Pixel XL running Android 8.1.0, and a OnePlus 3T on Android 8.0.0 (which does not support ground planes but should work with mid-air anchors).
I tried each on Unity 2017.3.0p4, 2017.3.0f3, 2017.3.1f1 and even 2018.1.0b7
None of the above combinations had either of these two features working.
I also use the image target feature, and that one works perfectly.
I once managed to detect ground planes a month ago and I haven't changed my Unity version since then. However I updated both my Android and iOS devices at least once since then.
Please could you let me know if I'm doing anything wrong or if there's a known issue about that?
Thanks

GoogleVR 1.0.1: stereo cameras are OK in the Unity Game View but too separated when compiled to Android

I'd say the value I have to change is the stereoMultiplier of the StereoController script attached to the main camera. Anyway, I think I have changed every single value of GvrViewer, MainCamera and StereoController; nothing seems to change the separation of the left and right cameras when compiled to the Android smartphone.
I can see a correct separation in the Unity Game View, but when I compile it to the smartphone, the cameras are too separated (see image below).
I think this issue started after updating the smartphone (a Samsung S4) to Android 6.0 Marshmallow (CyanogenMod 13.0).
UPDATE: I have updated to GoogleVR 1.0.1. The same problem is still happening.
Changing the scale to 0.007 (very similar to the scale of the objects in the demo scene provided with GoogleVR: 0.003) seems to fix the problem.
Note: discussed here: https://github.com/googlevr/gvr-unity-sdk/issues/351
UPDATE: in the previous link, somebody wrote that every GVR (Cardboard) app on Android reads its viewer parameters from the file /sdcard/Cardboard/current_device_params, and suggested setting up a device profile at https://vr.google.com/cardboard/viewerprofilegenerator/. In addition, you can also do this:
How to change Field of View in Google VR SDK for Unity

How to use the Cardboard SDK for a PC VR game?

So I want to create a VR game using Unity3D and the Cardboard SDK for PC (Windows), which I'll stream to my phone screen using KinoConsole. I created a simple scene; when I build it for Android it works fine, i.e. it shows the dual SBS camera (screen), but a Windows build shows only one normal camera (screen). Is there a way I can use the Cardboard SDK to show the SBS camera (screen) in a Windows build? If not, is there anything else available to achieve this?
Side by side is easy: just place two cameras where the eyes should be and change their viewport rects to half width. Now you have a side-by-side stereo renderer without any external library. Cardboard also adds some distortion matching the lenses, but that is not so important in your case.
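The two-camera setup described above can be sketched like this (the eye separation value is an assumption; tune it to taste):

```csharp
using UnityEngine;

// Builds a simple side-by-side stereo rig from two ordinary cameras,
// each rendering to one half of the screen via its viewport rect.
public class SideBySideRig : MonoBehaviour
{
    public float eyeSeparation = 0.064f; // ~average human IPD in metres (assumed)

    void Start()
    {
        CreateEye("LeftEye",  -eyeSeparation / 2f, new Rect(0f,   0f, 0.5f, 1f));
        CreateEye("RightEye",  eyeSeparation / 2f, new Rect(0.5f, 0f, 0.5f, 1f));
    }

    void CreateEye(string name, float xOffset, Rect viewport)
    {
        var eye = new GameObject(name).AddComponent<Camera>();
        eye.transform.SetParent(transform, false);
        eye.transform.localPosition = new Vector3(xOffset, 0f, 0f);
        eye.rect = viewport; // left or right half of the screen
    }
}
```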
Your second, and much bigger, problem is the gyroscope: you have to somehow communicate the orientation of your headset to your Unity app on your PC. This is not trivial and will probably require finding or building a persistent service on your Android device that sends the orientation data to your desktop app.
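As a sketch of the phone side of such a service (the host address and the semicolon-separated wire format are arbitrary choices, and the matching receiver on the PC is not shown):

```csharp
using System.Net.Sockets;
using System.Text;
using UnityEngine;

// Runs on the phone: streams the gyroscope attitude to a desktop app over UDP.
public class GyroSender : MonoBehaviour
{
    public string host = "192.168.1.100"; // placeholder: your PC's LAN address
    public int port = 9050;               // arbitrary port, must match receiver

    UdpClient client;

    void Start()
    {
        Input.gyro.enabled = true;
        client = new UdpClient();
    }

    void Update()
    {
        Quaternion q = Input.gyro.attitude;
        byte[] data = Encoding.ASCII.GetBytes($"{q.x};{q.y};{q.z};{q.w}");
        client.Send(data, data.Length, host, port);
    }
}
```

On the PC, a UdpClient receiver would parse the four components back into a Quaternion and apply it to the camera's rotation each frame.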