Currently, in my SceneKit scene for an iOS game written in Swift, the render distance is very limited: there is a noticeable cutoff in the terrain from the player's perspective. I can't find a "max render distance" setting anywhere, and the only option I've seen so far is to cover the cutoff with fog. I'm clearly missing something, since I've seen plenty of games with much larger render distances, but after searching Google, the documentation, and Stack Overflow I can't find an answer. Can anyone help?
Camera Far Clipping Plane
To adjust the maximum distance between the camera and a visible surface, use the zFar instance property. If a surface is farther from the camera than this distance, it is clipped and does not appear. The default value in SceneKit is 100.0 meters.
arscnView.pointOfView?.camera?.zFar = 500.0
I'm a dingdong and figured out what I was missing.
What I was looking for was a setting on the camera that your scene is using as the point of view. There's a setting called "Z clipping" which clips out anything closer than the "near" value or farther than the "far" value, and by default "far" is set to 100 units. Just raise that value, either in code or in Xcode's scene editor, to view the entire scene.
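For anyone who prefers doing it in code, a minimal sketch, assuming an SCNView called sceneView whose pointOfView is the camera node in question:

import SceneKit

// Assuming `sceneView` is your SCNView and its point of view holds the camera.
if let camera = sceneView.pointOfView?.camera {
    camera.zNear = 0.1      // anything closer than this is clipped
    camera.zFar = 2000.0    // raise the 100-unit default so distant terrain renders
}

One caveat: the larger the ratio between zFar and zNear, the less depth-buffer precision you get, which can show up as z-fighting on distant surfaces, so raise zFar only as far as the scene actually needs.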
Related
I apologize in advance for such a basic question. I haven't been able to find anything online referring to this. I'm probably just not using the right search terms, but I don't know what this is called.
This is zoomed out, showing my model (a large building) and what appears to be some sort of default terrain or horizon. I can't interact with it. What is it, and how do I get rid of it?
This view is a little closer, and the scene has a skybox applied.
This view is much closer, at an angle showing the skybox. You can see the gray circle cutting the skybox off at the horizon.
This might be the far clipping plane of the camera taking effect.
The near and far clipping planes are two planes set at fixed distances out from the camera's origin. Anything nearer than the near plane is culled, and anything farther than the far plane is culled.
If you're using a very wide-angle camera, you might get this sort of rounded clipping effect at the far plane.
Try setting the far plane to a much higher value to see if that helps or solves the problem.
Select your Main Camera in the Hierarchy and adjust the Clipping Planes values in the Inspector, about halfway down.
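The same change can be made from a script. A small sketch, assuming the camera being clipped is the one tagged MainCamera:

using UnityEngine;

public class ExtendFarPlane : MonoBehaviour
{
    void Start()
    {
        // Push the far clipping plane out so distant geometry isn't culled.
        // 1000 is Unity's default; raise it until the cutoff disappears.
        Camera.main.farClipPlane = 5000f;
    }
}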
I have a problem where my images change draw order depending on the x position of my camera (the linked image shows the problem). I want the images to keep the same drawing order regardless of the camera's position.
https://ibb.co/syHLY8d
Looks like the issue is that they're at exactly the same depth from the camera, and when the camera moves along the x-axis, floating-point rounding errors cause the depth values to come out slightly different and sort in a different order.
To fix this, you need to move one object slightly farther from the camera. Personally, though, I'd say this is generally not worth worrying about: lots of games have small sorting issues like this, and most players won't notice.
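If you do want to force a stable order, a tiny depth offset is enough. A sketch, assuming the script is attached to whichever image should draw behind, and a camera looking down the +z axis (Unity's 2D default):

using UnityEngine;

public class DepthNudge : MonoBehaviour
{
    void Start()
    {
        // Move this object a hair farther from the camera so the two
        // images never share exactly the same depth value.
        transform.position += new Vector3(0f, 0f, 0.01f);
    }
}

For 2D sprites, setting the SpriteRenderer's sorting layer or order in layer is the more robust fix, since it sidesteps the depth comparison entirely.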
How do I make a 3D game adapt to the screen resolution?
I tried changing the camera's fieldOfView, but this adjustment does not work correctly!
If you mean UI elements: there are little triangles, usually in the middle of the canvas the element is under. These are anchors that tell the element to try to stay in the same place on the canvas regardless of the screen resolution. You can read more about them here: https://docs.unity3d.com/Manual/UIBasicLayout.html and https://docs.unity3d.com/Manual/HOWTO-UIMultiResolution.html
If you mean your actual game view, you'd probably need to write a script that adjusts the camera's FOV at the start of the game based on the resolution, though I'm not sure of the exact formula you'd want.
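For what it's worth, the standard approach locks the horizontal field of view: since Unity's Camera.fieldOfView stores the vertical FOV, you can derive it from a target horizontal FOV and the current aspect ratio via vFov = 2 * atan(tan(hFov / 2) / aspect). A sketch, where the 90-degree target and the script setup are assumptions:

using UnityEngine;

public class AdaptiveFov : MonoBehaviour
{
    // Horizontal FOV to preserve on every resolution (assumed value).
    public float targetHorizontalFov = 90f;

    void Start()
    {
        Camera cam = GetComponent<Camera>();
        // Unity's fieldOfView is vertical, so convert from the target
        // horizontal FOV using the current aspect ratio.
        float hFovRad = targetHorizontalFov * Mathf.Deg2Rad;
        float vFovRad = 2f * Mathf.Atan(Mathf.Tan(hFovRad / 2f) / cam.aspect);
        cam.fieldOfView = vFovRad * Mathf.Rad2Deg;
    }
}

If your Unity version has it, the static helper Camera.HorizontalToVerticalFieldOfView performs the same conversion.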
When Unity builds a VR project, by default it is set to make the two views stereoscopic. It slightly offsets the camera position of one eye to give the user a sense of depth.
For example, a square will appear slightly to the left in the right-eye view compared to the left-eye view.
I want to make the camera truly monoscopic by removing the offset that is created when I build the project. Each camera should render all objects in exactly the same position for both eyes.
One of the things I tried was creating two cameras and assigning them to the left and right eye, then manually adjusting the position/rotation of one camera until it looked monoscopic.
It worked fine on my Pixel phone, but as soon as I put the project on my test phone, I noticed that the difference in resolution broke the view I was going for: the blocks were not in the same position when I compared the two renders.
If anyone has any solutions or ideas as to how I can go about this, I would greatly appreciate it.
Thank you!
You can still use two cameras, but instead of offsetting them, you can just make each camera's viewport half the screen wide:
Make two cameras and set their positions to exactly the same values.
On the left-eye camera, set the viewport rect's W to 0.5 and its X to 0.
On the right-eye camera, set the viewport rect's W to 0.5 and its X to 0.5.
You should now have two cameras rendering the exact same thing, twice across the screen, with no sense of depth.
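In code, that amounts to setting each camera's viewport rect. A sketch, assuming two Camera references assigned in the Inspector:

using UnityEngine;

public class MonoscopicRig : MonoBehaviour
{
    public Camera leftEye;    // assumed to be assigned in the Inspector
    public Camera rightEye;

    void Start()
    {
        // Identical pose for both cameras, so there is no stereo offset.
        rightEye.transform.SetPositionAndRotation(
            leftEye.transform.position, leftEye.transform.rotation);

        // Each camera draws to half the screen; Rect takes normalized
        // viewport coordinates (x, y, width, height).
        leftEye.rect  = new Rect(0f,   0f, 0.5f, 1f);
        rightEye.rect = new Rect(0.5f, 0f, 0.5f, 1f);
    }
}

Because the viewport rect is expressed in normalized (0 to 1) coordinates, the split is resolution-independent, which should avoid the mismatch you saw between the two phones.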
I'd like to develop an iPhone app that does the following:
1. Starts the device camera.
2. Places a layer on the screen containing a stretchable frame for the user to fit to a desired object.
3. Measures the object's width & height.
You may look at this app which does practically what I need and more:
http://itunes.apple.com/us/app/easymeasure-measure-your-camera!/id349530105?mt=8
Notice that it doesn't need to be super accurate and can definitely tolerate some error.
Any clue how to do it?
Thanks!
The clue: Geometry and Trigonometry.
By knowing the camera's field-of-view angles, entering the height of the camera above the ground, and assuming a planar (i.e., flat) ground, you can work everything out with basic geometry and trigonometry.
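As a concrete example of the trigonometry: for an ideal pinhole camera, an object that spans some fraction of the image spans the same fraction of the frustum's width at its distance, so its real width is 2 * distance * tan(hFov / 2) * (pixel span / image width). A Swift sketch, where every number is a made-up placeholder:

import Foundation

// Width covered by the full horizontal field of view at a given distance.
func visibleWidth(atDistance distance: Double, horizontalFovDegrees fov: Double) -> Double {
    let halfFov = fov * .pi / 180.0 / 2.0
    return 2.0 * distance * tan(halfFov)
}

// Real-world size of an object from the fraction of the image it spans.
func objectWidth(spanPixels: Double, imagePixels: Double,
                 distance: Double, horizontalFovDegrees fov: Double) -> Double {
    return visibleWidth(atDistance: distance, horizontalFovDegrees: fov)
        * (spanPixels / imagePixels)
}

// Example (all values assumed): a frame 800 px wide in a 3024 px image,
// object 3 m away, camera with a 60-degree horizontal field of view.
let width = objectWidth(spanPixels: 800, imagePixels: 3024,
                        distance: 3.0, horizontalFovDegrees: 60.0)
print(String(format: "Estimated width: %.2f m", width))

The distance itself is where the camera height comes in: with flat ground and a known tilt, the distance to a ground point seen at a given angle below the horizon is height / tan(angle), which is the same style of calculation.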