iOS AR build using ARKit, ARFoundation and LWRP renders meshes partially - unity3d

When I make my iOS build using ARFoundation, ARKit and LWRP, it renders meshes only partially in the AR view. I am using Unity 2018.3.11f1 and LWRP 4.8.0. Any help? Why are the meshes rendered partially?
See what I tried

Try adjusting the minimum and maximum render distance on your camera. Cameras only render objects that fall between their near and far clipping planes and cull everything else. From your video, it looks as though your meshes are either too large or too small, so they don't fit within your camera's render distance.
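A minimal sketch of that adjustment, attached to the AR camera (the values are illustrative and depend on the scale of your content):

    using UnityEngine;

    // Widens the camera's clipping range so large or distant meshes are not cut off.
    public class ExpandClipPlanes : MonoBehaviour
    {
        void Start()
        {
            Camera cam = GetComponent<Camera>();
            cam.nearClipPlane = 0.01f;   // render geometry very close to the camera
            cam.farClipPlane = 1000f;    // render geometry far away
        }
    }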

Related

Unity 3D Pixelate individual 3D objects but retain resolution over distance

I've been doing research on how to create a pixelated effect in Unity for 3D objects. I came across the following tutorials that were useful to me:
https://www.youtube.com/watch?v=dpNhymnBDQw
https://www.youtube.com/watch?v=Z8xB7i3W4CE
For the most part, it is what I am looking for, where I can pixelate certain objects in my scene (I don't want to render all objects as pixelated).
The effect, however, works based on the screen resolution, so when you are closer to objects they become less pixelated and vice versa:
Notice how the cube on the right consists of fewer pixels as it is further away.
Do you perhaps have an idea of how to keep the resolution of a pixelated object consistent regardless of the distance to it, such as below (this effect was done in Photoshop; I am unaware of how to actually implement it):
I'm not sure if this is even possible with the approach most pixel-art methods use.
I was thinking that if you could use a per-object shader that renders the pixelated object, you could do some fancy shader math to keep its resolution consistent per object. However, I have no idea how you would even render a pixel effect with just a shader. (The only method I can think of is the one described in the videos, in which you render the objects at a smaller resolution via a render texture and then upscale to screen resolution, which you can't really do with a shader assigned to a material.)
Another thought I had was to render each object separately, using a separate camera for each object I want pixelated. I could set each camera a fixed distance away from its object and blit the renders together onto the main camera. Since each pixelated object is rendered individually by its own camera at a fixed distance, it retains a fixed pixel resolution regardless of its distance from the main camera. (Essentially, you can think of it as converting each object into a sprite and rendering that sprite in the scene, thus keeping each object's resolution consistent despite distance.) But this obviously has its own set of problems, from performance to different orientations, etc.
Any ideas?
Ideally, I would be able to specify the resolution a specific 3D object in my scene should be pixelated to, and it would retain that resolution over any distance. That way I have the flexibility to render different objects at different resolutions.
I should mention that I am currently using the Universal Render Pipeline with a custom render feature to achieve my pixelated effect, downscaling to a render texture and upscaling back to screen resolution; I can change the resolution of the downscaled texture.
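A rough sketch of the per-object camera idea described in the question, assuming each pixelated object sits on its own layer (the layer name and resolution below are placeholders, and compositing the result back into the main view is left out):

    using UnityEngine;

    // A dedicated low-resolution camera renders only one object's layer into a
    // small RenderTexture, so that object's pixel density stays constant no
    // matter how far the main camera is.
    public class PerObjectPixelation : MonoBehaviour
    {
        public Camera objectCamera;       // camera parented at a fixed offset from the object
        public int pixelResolution = 64;  // desired pixel resolution for this object

        RenderTexture lowResTarget;

        void Start()
        {
            lowResTarget = new RenderTexture(pixelResolution, pixelResolution, 16);
            lowResTarget.filterMode = FilterMode.Point;   // keep hard pixel edges when upscaled

            objectCamera.cullingMask = LayerMask.GetMask("PixelatedObject"); // placeholder layer
            objectCamera.targetTexture = lowResTarget;
        }

        // The resulting texture can then be composited into the main view,
        // e.g. on a billboard quad or via a custom render feature.
        public RenderTexture Result => lowResTarget;
    }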

Is there a way to add dynamic clipping to cameras besides the scene view camera?

I have recently begun developing an XR experience in Unity. Everything looked fine in the Scene view, but when I ran it on the headset there was horrendous z-fighting. I learned that a fix for this is to raise the camera's near clipping plane to 0.3-0.7 and drop the far plane to about 1000. However, this has the issue of clipping into objects that come within about half a meter of the camera. I noticed the Scene camera has no z-fighting at all, and I wonder whether that has to do with its dynamic clipping and how I can re-create this effect in my project.
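One way to approximate the Scene view's behaviour is to adjust the near plane at runtime based on how close geometry is to the camera. A rough sketch (the thresholds and the raycast probe are illustrative assumptions, not how the Scene camera actually works):

    using UnityEngine;

    // Pulls the near clipping plane in when geometry is close to the camera and
    // pushes it out otherwise, keeping the near/far ratio favourable to reduce
    // depth-buffer z-fighting.
    [RequireComponent(typeof(Camera))]
    public class DynamicNearClip : MonoBehaviour
    {
        public float minNear = 0.05f;
        public float maxNear = 0.5f;
        public float probeDistance = 2f;   // how far ahead to look for nearby geometry

        Camera cam;

        void Awake()
        {
            cam = GetComponent<Camera>();
        }

        void LateUpdate()
        {
            // If something is close in front of the camera, shrink the near plane so
            // it is not clipped; otherwise keep the near plane large for depth precision.
            if (Physics.Raycast(transform.position, transform.forward, out RaycastHit hit, probeDistance))
                cam.nearClipPlane = Mathf.Clamp(hit.distance * 0.5f, minNear, maxNear);
            else
                cam.nearClipPlane = maxNear;
        }
    }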

Unity 5.6.0b7 VR stereoscopic panorama + 3D objects

I have two textures for creating a stereoscopic panorama in VR, and I want to make a 360° experience. In order to achieve this, I need to show one texture to the left eye (VR-LeftEye) and the other to the right eye (VR-RightEye). Additionally, I have to show 3D models in front of the panorama to interact with them.
I'm using Cardboard GoogleVR v1.20 with Unity 5.6.0b7. I have no problem with changing either version.
After some research I found a few possible solutions, but I don't know how to implement them completely:
Two spheres (with their faces pointing inward) and one camera at the center of the spheres, culling the left sphere for the right eye and vice versa. I don't know how to cull differently per eye, because only one camera is needed for stereo in 5.6.
Two textures in the same sphere material, with the shader selecting the required texture according to the eye being rendered. I don't know how to tell which eye is being rendered in the shader code.
Two spheres, two cameras. This is the most manual approach; I have some issues displaying the 3D objects, and I get double the rotation speed.
Any tips or solutions are welcome.
EDIT:
I'm looking for a solution in Unity 5.6.0 because it just implemented a feature that makes two projections with a distance between them, simulating both eyes.
I'm not familiar with VR in Unity, but the third option sounds better because of the additional 3D models in front of the panorama.
Furthermore, since the eyes are in the center of the spheres in this implementation, moving 3D objects in front of the cameras might be tricky.
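A minimal sketch of the two-camera route, assuming each panorama sphere is placed on its own layer ("LeftEyeSphere" and "RightEyeSphere" are placeholder layer names) and that the VR SDK honours Camera.stereoTargetEye:

    using UnityEngine;

    // Assigns each eye its own camera and hides the other eye's panorama sphere
    // from it via the culling mask.
    public class PerEyePanoramaSetup : MonoBehaviour
    {
        public Camera leftEyeCamera;
        public Camera rightEyeCamera;

        void Start()
        {
            // Each camera renders only one eye of the HMD.
            leftEyeCamera.stereoTargetEye = StereoTargetEyeMask.Left;
            rightEyeCamera.stereoTargetEye = StereoTargetEyeMask.Right;

            int leftSphereLayer = LayerMask.NameToLayer("LeftEyeSphere");    // placeholder layer
            int rightSphereLayer = LayerMask.NameToLayer("RightEyeSphere");  // placeholder layer

            // Hide the right-eye sphere from the left camera and vice versa;
            // 3D models on other layers stay visible to both cameras.
            leftEyeCamera.cullingMask &= ~(1 << rightSphereLayer);
            rightEyeCamera.cullingMask &= ~(1 << leftSphereLayer);
        }
    }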

Camera-Offset | Project Tango

I am developing an augmented reality app for Project Tango using Unity3d.
Since I want to have virtual objects interact with the real world, I used the Meshing with Physics scene from the examples as my basis and placed the Tango AR Camera prefab inside the Tango Delta Camera (at the relative position (0,0,0)).
I found out that I have to rotate the AR Camera up by about 17°, so the Dynamic Mesh matches the room; however, there is still a significant offset from the camera's live preview.
I was wondering if anyone who has had to deal with this before could share their solution for aligning the Dynamic Mesh with the real world.
How can I align the virtual world with the camera image?
I'm having similar issues. It looks like this is related to a couple of previously-answered questions:
Point cloud rendered only partially
Point Cloud Unity example only renders points for the upper half of display
You need to take into account the color camera offset from the device origin, which requires you to get the color camera pose relative to the device. You can't do this directly, but you can get the device in the IMU frame, and also the color camera in the IMU frame, to work out the color camera in the device frame. The links above show example code.
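The composition itself is just a matrix multiplication. A minimal sketch in Unity terms, assuming both poses have already been queried from the Tango API and converted to Matrix4x4 values:

    using UnityEngine;

    public static class TangoFrameMath
    {
        // Combines two poses expressed in the IMU frame into the pose of the
        // color camera in the device frame:
        //   device_T_camera = inverse(imu_T_device) * imu_T_camera
        public static Matrix4x4 ColorCameraInDeviceFrame(Matrix4x4 imuFromDevice,
                                                         Matrix4x4 imuFromColorCamera)
        {
            return imuFromDevice.inverse * imuFromColorCamera;
        }
    }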
You should be looking at something like (in unity coordinates) a (0.061, 0.004, -0.001) offset and a 13 degree rotation up around the x axis.
When I try to use the examples, I get broken rotations, so take these numbers with a pinch of salt. I'm also seeing small rotations around y and z, which don't match with what I'd expect.

Using PointCloud Prefab on Unity 3D

I tried to implement the "Measure It" app in Unity 3D. I started with the PointCloud example scene downloaded from Tango's website.
My problem is that when I look in first-person view, the point cloud doesn't fill the screen, and when I look in third-person view I can see points outside the Unity camera's FOV.
I don't see this problem in the Explorer app, but it seems to be made in Java, so I think it's a Unity compatibility problem.
Does someone have the same problem, or a solution?
Unity 3D 5.1.1
Google Tango urquhart
Sorry for my poor English.
Regards.
EDIT :
It looks like the ExperimentalAugmentedReality scene is using the point cloud to place markers in the real world, and that point cloud is right in front of the camera. I don't see any script difference between them, so I don't understand why it works. Let me know if you have any ideas.
I think it makes sense to divide your question into two parts.
Why the points are not filling the screen in the point cloud example.
To make the points fill the first-person view, the render camera's FOV needs to match the physical depth camera's FOV. In the point cloud example, I believe Tango is just using the default Unity camera FOV; that's why you saw the points not filling the screen (the render camera).
In the third-person camera view, the frustum is just a visual representation of the device movement. It doesn't indicate the FOV or any camera intrinsics of the device. For visualization purposes, Tango Explorer might specifically match the camera frustum size to the actual camera FOV, but that's not guaranteed to be 100% accurate.
Why the AR example works.
In the AR example, we must set the virtual render camera's FOV to match the physical camera's FOV, otherwise the AR view will be off. On the Tango hardware, the color camera and depth camera are the same camera sensor, so they share the same FOV. That's why the AR example works.
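A minimal sketch of that FOV matching, assuming the focal length and image height have been read from the device's camera intrinsics (the values below are placeholders):

    using UnityEngine;

    // Matches the Unity render camera's vertical FOV to the physical camera,
    // using the pinhole model: vertical FOV = 2 * atan(imageHeight / (2 * fy)).
    public class MatchPhysicalFov : MonoBehaviour
    {
        public Camera renderCamera;
        public double fy = 1042.0;        // focal length in pixels (placeholder)
        public double imageHeight = 720;  // image height in pixels (placeholder)

        void Start()
        {
            float verticalFov = 2.0f * Mathf.Atan((float)(imageHeight / (2.0 * fy))) * Mathf.Rad2Deg;
            renderCamera.fieldOfView = verticalFov;   // Unity's fieldOfView is the vertical FOV in degrees
        }
    }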