ARKit: Change FOV for rendered content

I'm looking to change the field of view for the rendered content in my AR session. Obviously we can't change the raw camera FOV, but I think it should be possible to change the field of view for the rendered SceneKit content.
Changing the camera field of view is trivial in a raw SceneKit SCNCamera... but I don't know how to do this within an ARSCNView.

You might be able to access the pointOfView property of your ARSCNView (and then retrieve the active SCNCamera).
If that doesn't work (for example, because ARKit resets the camera properties every frame), you can always write the rendering code yourself, using ARSession directly with an SCNView.
Note that unless you have a 3D scene covering the entire camera stream, changing the FoV of your virtual camera would break the AR registration (alignment).
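For what it's worth, a minimal sketch of that pointOfView suggestion might look like the following (the function name is just illustrative, and it assumes an existing ARSCNView called sceneView; ARKit may overwrite the value again on later frames):

import ARKit

func widenRenderedFOV(of sceneView: ARSCNView) {
    // pointOfView is the node ARKit drives; its camera renders the SceneKit content.
    guard let camera = sceneView.pointOfView?.camera else { return }
    // SCNCamera.fieldOfView is in degrees. Trivial to change on a plain SCNCamera,
    // but here ARKit may stomp on it as soon as the next frame is rendered.
    camera.fieldOfView = 90
}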

The developer documentation for ARSession suggests "If you build your own renderer for AR content, you'll need to instantiate and maintain an ARSession object yourself."
This repo does this: https://github.com/hanleyweng/iOS-ARKit-Headset-View
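As a rough, untested sketch of that route (the class name and structure below are mine, not taken from the repo): run the ARSession yourself, mirror its camera pose onto a camera node you own, and set whatever field of view you like on that camera. This omits drawing the captured camera image, which a real renderer would also have to handle.

import ARKit
import SceneKit

final class CustomARRenderer: NSObject, ARSessionDelegate {
    let session = ARSession()
    let sceneView = SCNView()
    private let cameraNode = SCNNode()

    override init() {
        super.init()
        // Our own camera: ARKit never touches it, so the FOV sticks.
        cameraNode.camera = SCNCamera()
        cameraNode.camera?.fieldOfView = 90
        sceneView.scene = SCNScene()
        sceneView.scene?.rootNode.addChildNode(cameraNode)
        sceneView.pointOfView = cameraNode

        session.delegate = self
        session.run(ARWorldTrackingConfiguration())
    }

    // Mirror ARKit's tracked camera pose onto our camera node every frame.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        cameraNode.simdTransform = frame.camera.transform
    }
}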

This code will retrieve the camera from an ARSCNView if there is one:
sceneView.scene.rootNode.childNodes.first(where: { $0.camera != nil})
Note that this will return the camera's associated node, which you may need if you want to control its position or angle directly. The SCNCamera itself is stored in the node's camera property.
It's best not to touch the AR camera if you can avoid it as it will mess up the association between the world model and the scene. I occasionally use this technique if I want a control system that can optionally use AR to track device motion and angle, but which doesn't have to translate into real-world coordinates (i.e. VR apps that don't display the camera feed).
Essentially, I'd only do this if you're using AROrientationTrackingConfiguration or similar.
EDIT: I should also mention that ARKit overrides the camera's projectionTransform property, so setting fieldOfView manually probably won't take effect. I've had some success setting xFov and yFov instead, but since those are deprecated you shouldn't rely on them.
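If you want to experiment anyway, here is a sketch of one thing you could try (the class name and zoom factor are made up, and whether this survives ARKit's per-frame projection update is something you'd need to verify): re-assert a modified projection just before each render from the scene renderer delegate, scaling the focal terms of the matrix ARKit has just written.

import ARKit
import SceneKit

final class ProjectionOverride: NSObject, ARSCNViewDelegate {
    // > 1 narrows the rendered view (zooms in), < 1 widens it.
    var zoomFactor: Float = 1.5

    func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
        guard let camera = renderer.pointOfView?.camera else { return }
        // m11 and m22 are the horizontal and vertical focal terms of the projection matrix.
        var projection = camera.projectionTransform
        projection.m11 *= zoomFactor
        projection.m22 *= zoomFactor
        camera.projectionTransform = projection
    }
}

// Usage sketch: sceneView.delegate = projectionOverride (keep a strong reference to it).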

Related

Change z-position of ARCamera in ARSCNView

I'm using ARSCNView to render scenes and am wondering whether there is an option to change the z-position of the camera. I need it to implement zooming in my AR drawing app.
I see that I cannot set any of the related properties on sceneView.session.currentFrame?.camera because they are all get-only. Is there a way to set the camera position on the session before calling run()?
There is an option to simply transform the output image, but that reduces image quality and the positioning is quite off, so it isn't a workable option for my case.

Make a URP renderer feature affect only the current camera

I'm making a renderer feature with a single ScriptableRenderPass. This renderer feature is present on a single 2D Renderer, like so:
and I have a single camera that uses this renderer and only affects a particular layer:
The camera only renders everything on the PixelPerfect layer, ignoring anything else. This camera is in a camera stack, like so:
But, somehow, the renderer feature on the Downscaled Camera affects the Background Camera. I suspect that the render pass somehow sees everything from the previous cameras, but I have no idea how that even makes sense: when singling out only the downscaled camera, I only see the layer I have set that camera to cull.
Here's how the Downscaled Camera is set up:
I'm blitting to renderingData.cameraData.renderer.cameraColorTarget in Execute.
I've found this post on the GameDev StackExchange; it predates URP and scriptable renderer features, but it describes my problem perfectly. Any thoughts?
I am having a similar issue where I have selected a custom renderer on my camera, but it refuses to use my custom renderer and only uses the default. I have yet to figure out why.
EDIT: For future reference, I fixed my problem. It turns out there was no problem: the Scene view (and likewise any camera previews for cameras you have selected) will always render with the default renderer. My render target was being rendered with the correct renderer.

Is there a way to apply bloom to a specific object?

I've noticed that if I uncheck the "Is Global" checkbox on the Bloom effect of a Post Processing Volume, even though I adjusted its layer to affect one in particular, the bloom doesn't apply to the layer I've set in the post-processing layer. In fact, it doesn't apply at all. Either it sets bloom for everything in the scene, or it doesn't apply bloom anywhere.
Extras: I have no pipeline asset, maybe that's the issue, but I've tried setting up an LWRP asset (because for some reason URP doesn't exist in my 2019.2.17f1 version) and it just breaks all the materials I use for Particle Systems (Particles/Standard Unlit), even if I upgrade them to LWRP materials.
Any ideas? A solution to both of these problems would be excellent, but the main one is the title question.
Note: The "camera stacking" approach mentioned here applies only to Unity URP. For the Unity Built-in Render Pipeline or Unity versions prior to 2019.3.0f3 you can achieve a similar effect with RenderTextures. Though Unity HDRP has no explicit "camera stacking" feature it does allow for the same net effect via the HDRP-specific Graphics Compositor.
"Is there a way to apply bloom to a specific object?"
You could take a leaf out of Unity's camera stacking book, whereby one set of objects is rendered by one camera and another set by a different camera. The results of each camera's rendering are merged together automatically by Unity and presented to the screen.
But don't take my word for it, this is what Unity has to say:
In the Universal Render Pipeline (URP), you use Camera Stacking to layer the output of multiple Cameras and create a single combined output. Camera Stacking allows you to create effects such as a 3D model in a 2D UI, or the cockpit of a vehicle. Tell me more...
...and (my emphasis):
A Camera Stack overrides the output of the Base Camera with the combined output of all the Cameras in the Camera Stack. As such, anything that you can do with the output of a Base Camera, you can do with the output of a Camera Stack. For example, you can render a Camera Stack to a given render target, apply post-process effects, and so on. Tell me more...
When you consider that each camera has the potential for its own rendering settings (including bloom) the solution is clear:
ensure there are two cameras in the scene, say My Default Camera and Bloomin' Camera
create a custom layer called "Bloom"
assign whatever objects you want to be rendered with a bloom to layer Bloom
set up the camera stack as per "Adding a Camera to a Camera Stack".
My Default Camera should be set to "Base":
Bloomin' Camera should be set to overlay:
Add Bloomin' Camera to My Default Camera Stack settings:
ensure that the Culling mask for My Default Camera has the Bloom layer unticked. This ensures that the objects to be bloomed are only drawn once on the Bloom layer
ensure that the Culling mask for Bloomin' Camera has a single ticked entry for the Bloom layer and nothing else. You don't want to double-up on rendering otherwise you will get funky and undesirable z-order effects apart from hurting game performance. Other layers will be rendered by My Default Camera.
apply bloom effects to camera Bloomin' Camera
run game, celebrate
The Is Global option might sound confusing at first. Ultimately it does not determine where the post-processing effect is applied, but when it is applied. If it is set to Global, it is always applied; otherwise you can set a layer and a boundary that trigger the effect.
The general approach is to set emission only on the materials where you want the effect to take place. If your materials are too dark otherwise, you should adjust the ambient lighting settings.
At least in URP there are some workarounds for older versions, like this one, but as far as I know this no longer works in 2020.3, since they made some changes to URP and the camera system.
Edit: in the video, Chris Hull gives an answer for how to do it with the new system:
@Mezzanine Add your actual game objects to a created bloom layer. Create two cameras and set one of them to cull everything except that bloom layer you made. Set the other to only cull the bloom layer. Then you can set your camera to overlay and it will be added to the other. You can then use separate post-process stacks on these cameras. Note that you can only bloom objects in the background with this technique, as if you add bloom to an overlay camera, for some reason it just adds bloom to everything rather than just the things in that camera's view. Doesn't make much sense and makes the purpose of the layers redundant in my opinion. If you can find a way to add post-processing to the overlay camera before it is added to the final image, do let me know.
I have not tested that yet, but I presume it's still valid.

Unity AR Foundation object anchor problem

I am new to Unity AR Foundation and I am trying to create a simple scene where, when the camera looks at the target image, my object becomes visible at that point and some other objects become visible relative to the first object's position (reference point).
When the camera looks at the target image, my object becomes visible, but when I move the camera, the object moves along with the camera, like this: check here to see video
What can I do to keep the object at the same point even when the camera moves (it could even be rendered once, with no updates)? Any advice? Thanks
Which AR platform are you working on (ARKit or ARCore)? You don't need an anchor if you use the tracked image, because it is already an anchor. Update your game object's transform only when the tracked image anchor's tracking state is tracking or limited. Also, the image you provided has few trackable features; you should test your code with an image that has more features than the current one. Setting the image's physical size correctly also greatly improves tracking.

HoloLens companion map

I am implementing a "companion map" for a HoloLens application using Unity and Visual Studio. My vision is for a small rectangular map to be affixed to the bottom right of the HoloLens view, and to follow the HoloLens user as they move about, much like the display of a video game.
At the moment my "map" is a .jpeg made into a material and put on an upright plane. Is there a way for me to affix the plane such that it is always in the bottom right of the user's view, as opposed to being fixed in the 3D space that the user moves through?
The Orbital Solver in MRTK can implement this idea without even writing any code. It can lock the map to a specified position and offset it from the player.
To use it what you need to do is:
Add Orbital Script Component to your companion map.
Modify the Local Offset and World Offset properties to keep the map in the bottom right of the user's view.
Set the Orientation Type to Face Tracked Object.
In addition, the SolverExamples scene provided by the MRTK v2 SDK is an excellent starting point for becoming familiar with the Solver components.