Change z-position of ARCamera in ARSCNView - arkit

I'm using ARSCNView to render scenes and am looking for a way to change the z-position of the camera. I need it to implement zooming in my AR drawing app.
I see that I cannot set any related property on sceneView.session.currentFrame?.camera because they are all get-only. Is there a way to set the camera position on the session before calling run()?
There is the option of simply transforming the output image, but that decreases the image quality and the positioning is quite off, so it isn't a workable option for my case.

Related

Is there a way to apply bloom to a specific object?

I've noticed that if I uncheck the "Is Global" checkbox on the Bloom effect of a Post Processing Volume, the bloom doesn't apply to the layer I've set in the post-processing layer, even though I adjusted it to affect one layer in particular. In fact, it doesn't apply at all. Either it sets bloom for everything in the scene, or it doesn't.
Extras: I have no Pipeline asset, maybe that's the issue, but I've tried setting one up for LWRP (because for some reason URP doesn't exist in my 2019.2.17f1 version) and it just breaks all the materials I use for Particle Systems (Particles/Standard Unlit), even if I upgrade them to LWRP materials.
Any ideas? If it's possible to deliver a solution to both of these problems, excellent, but the main one is the title question.
Note: The "camera stacking" approach mentioned here applies only to Unity URP. For the Unity Built-in Render Pipeline or Unity versions prior to 2019.3.0f3 you can achieve a similar effect with RenderTextures. Though Unity HDRP has no explicit "camera stacking" feature it does allow for the same net effect via the HDRP-specific Graphics Compositor.
"Is there a way to apply bloom to a specific object?"
You could take a leaf out of Unity's camera stacking book, whereby one set of objects is rendered by one camera and another set by a different camera. The results of each camera's rendering are merged together automatically by Unity and presented to the screen.
But don't take my word for it, this is what Unity has to say:
In the Universal Render Pipeline (URP), you use Camera Stacking to layer the output of multiple Cameras and create a single combined output. Camera Stacking allows you to create effects such as a 3D model in a 2D UI, or the cockpit of a vehicle. Tell me more...
...and (my emphasis):
A Camera Stack overrides the output of the Base Camera with the combined output of all the Cameras in the Camera Stack. As such, anything that you can do with the output of a Base Camera, you can do with the output of a Camera Stack. For example, you can render a Camera Stack to a given render target, apply post-process effects, and so on. Tell me more...
When you consider that each camera has the potential for its own rendering settings (including bloom), the solution is clear (a script-based sketch of the same setup follows the steps below):
ensure there are two cameras in the scene, say My Default Camera and Bloomin' Camera
create a custom layer called "Bloom"
assign whatever objects you want to be rendered with a bloom to layer Bloom
set up the camera stack as per "Adding a Camera to a Camera Stack".
My Default Camera should be set to "Base":
Bloomin' Camera should be set to "Overlay":
Add Bloomin' Camera to My Default Camera Stack settings:
ensure that the Culling mask for My Default Camera has the Bloom layer unticked. This ensures that the objects to be bloomed are only drawn once on the Bloom layer
ensure that the Culling mask for Bloomin' Camera has a single ticked entry for the Bloom layer and nothing else. You don't want to double-up on rendering otherwise you will get funky and undesirable z-order effects apart from hurting game performance. Other layers will be rendered by My Default Camera.
apply bloom effects to camera Bloomin' Camera
run game, celebrate
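If you prefer to wire the same setup up from a script rather than the Inspector, here is a minimal sketch assuming URP, a layer named "Bloom", and camera references assigned in the Inspector (the class and field names are just illustrative):

using UnityEngine;
using UnityEngine.Rendering.Universal;

public class BloomStackSetup : MonoBehaviour
{
    public Camera baseCamera;   // "My Default Camera"
    public Camera bloomCamera;  // "Bloomin' Camera"

    void Start()
    {
        int bloomMask = LayerMask.GetMask("Bloom");

        // The base camera renders everything except the Bloom layer.
        baseCamera.GetUniversalAdditionalCameraData().renderType = CameraRenderType.Base;
        baseCamera.cullingMask = ~bloomMask;

        // The overlay camera renders only the Bloom layer.
        bloomCamera.GetUniversalAdditionalCameraData().renderType = CameraRenderType.Overlay;
        bloomCamera.cullingMask = bloomMask;

        // Stack the overlay camera on top of the base camera.
        baseCamera.GetUniversalAdditionalCameraData().cameraStack.Add(bloomCamera);
    }
}

With that in place, enable post-processing (and the Bloom override on a Volume) for Bloomin' Camera only, exactly as in the steps above.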
The "Is Global" option might sound confusing at first. Ultimately it does not control where the post-processing effect is applied, but when it is applied. If it is set to Global, it will always be applied; otherwise you can set a layer and a boundary that triggers the effect.
The general approach is to only add emission to the materials where you want the effect to take place. If your materials are too dark otherwise, you should adjust the ambient lighting settings.
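For example, a small sketch of enabling emission on a material from a script (assuming the Standard or URP Lit shader, where the keyword and property names below apply):

using UnityEngine;

public class EnableEmission : MonoBehaviour
{
    void Start()
    {
        // Turn on emission and give the material a bright emissive colour
        // so the bloom effect has something to pick up.
        var material = GetComponent<Renderer>().material;
        material.EnableKeyword("_EMISSION");
        material.SetColor("_EmissionColor", Color.cyan * 2f);
    }
}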
At least in URP there are some workarounds for older versions, like this, but as far as I know this does not work in 2020.3 since they made some changes to URP and the camera system.
Edit: In the video, Chris Hull gave an answer on how to do it with the new system:
@Mezzanine Add your actual game objects to a created bloom layer. Create two cameras and set one of them to cull everything except that bloom layer you made. Set the other to only cull the bloom layer. Then you can set your camera to overlay and it will be added to the other. You can then use separate post-process stacks on these cameras. Note that you can only bloom objects in the background with this technique, as if you add bloom to an overlay camera, for some reason it just adds bloom to everything rather than just the things in that camera view. Doesn't make much sense and makes the purpose of the layers redundant in my opinion. If you can find a way to add post-process to the overlay camera before it is added to the final image, do let me know.
I have not tested that yet, but I presume it's still valid.

Unity VideoPlayer with Subtitles

I was going to use the VideoPlayer to render to Camera Near Plane, but I also want to display subtitles for the video for the sake of accessibility. I'm wondering what the best way to do that is.
I can't see anything on a canvas if I render to Near Plane. I'd like the video to appear in front of the scene so that I can have the scene there once the video is complete.
Do I need to be using a render texture to achieve this? Seems like a render texture might incur some unnecessary overhead for my purposes, but I could be wrong.
The idea is this:
Far Background - Scene
Background - Black Image (so I can fade to the scene)
Middleground - Video
Foreground - Subtitles
More info:
This is a 2D point and click adventure game with a pre-rendered cutscene.
You could do this with a render texture: place it in front of the camera at an exact distance and size. But I wouldn't; it would probably need to be a different camera anyway for lighting or clipping purposes.
I would use a second Camera, rendering over top of the Main Camera, with the subtitle UI's canvas targeting the second camera's screen space, and clearing depth only. It will render what it sees, but with a totally transparent background. Then, you can render your video on either the main camera's near plane or the new subtitle camera's far plane.
You could put your black square in front of this camera, too, though it would be in front of the video. It could be UI on the main camera, or stick a third camera in between them. You might have to worry about performance if there are too many cameras, but I have used two or three before to no noticeable performance hit.
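A minimal sketch of that camera setup in code (built-in render pipeline; the camera and canvas references are assumed to be assigned in the Inspector, and the field names are mine):

using UnityEngine;

public class SubtitleCameraSetup : MonoBehaviour
{
    public Camera mainCamera;      // renders the scene / video
    public Camera subtitleCamera;  // renders only the subtitle UI
    public Canvas subtitleCanvas;  // canvas holding the subtitle text

    void Start()
    {
        // Render the subtitle camera after the main camera and clear only
        // the depth buffer so its background stays transparent.
        subtitleCamera.depth = mainCamera.depth + 1;
        subtitleCamera.clearFlags = CameraClearFlags.Depth;
        subtitleCamera.cullingMask = LayerMask.GetMask("UI");

        // Point the canvas at the subtitle camera in screen-space mode.
        subtitleCanvas.renderMode = RenderMode.ScreenSpaceCamera;
        subtitleCanvas.worldCamera = subtitleCamera;
    }
}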
Robert Mocks's answer is perfectly tenable and makes sense to me. Thank you for that!
What I decided to do instead was use a RawImage so that I wouldn't have to deal with extra cameras. This way I can use the canvas as I normally would and don't have to deal with render textures.
This involves using the API Only setting along with the following code:
rawImage.texture = videoPlayer.texture;
That seems to work well for me.
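Fleshed out a little, the RawImage approach looks roughly like this (a sketch; the field names are mine and the VideoPlayer's clip is assumed to be assigned in the Inspector):

using UnityEngine;
using UnityEngine.UI;
using UnityEngine.Video;

public class VideoOnRawImage : MonoBehaviour
{
    public VideoPlayer videoPlayer;  // its Render Mode is forced to API Only below
    public RawImage rawImage;        // UI element that displays the video

    void Start()
    {
        videoPlayer.renderMode = VideoRenderMode.APIOnly;
        videoPlayer.prepareCompleted += OnPrepared;
        videoPlayer.Prepare();
    }

    void OnPrepared(VideoPlayer source)
    {
        // The player's internal texture only exists once it is prepared.
        rawImage.texture = source.texture;
        source.Play();
    }
}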

ARKit: Change FOV for rendered content

I'm looking to change the field of view for the rendered content in my AR session. Obviously we can't change the raw camera FOV, but I think it should be possible to change the field of view for the rendered SceneKit content.
Changing the camera field of view is trivial in a raw SceneKit SCNCamera... but I don't know how to do this within an ARSCNView.
You might be able to access the pointOfView property of your ARSCNView (and then retrieve the active SCNCamera).
If that doesn't work (ARKit changing the camera property every frame etc.), you can always go the path of writing the code yourself by using ARSession directly with SCNView.
Note that unless you have a 3D scene covering the entire camera stream, changing the FoV of your virtual camera would break the AR registration (alignment).
The developer documentation for ARSession suggests "If you build your own renderer for AR content, you'll need to instantiate and maintain an ARSession object yourself."
This repo does this: https://github.com/hanleyweng/iOS-ARKit-Headset-View
This code will retrieve the camera from an ARSCNView if there is one:
sceneView.scene.rootNode.childNodes.first(where: { $0.camera != nil})
Note that this will return the camera's associated node, which you may need if you want to control its position or angle directly. The SCNCamera itself is stored in the node's camera property.
It's best not to touch the AR camera if you can avoid it as it will mess up the association between the world model and the scene. I occasionally use this technique if I want a control system that can optionally use AR to track device motion and angle, but which doesn't have to translate into real-world coordinates (i.e. VR apps that don't display the camera feed).
Essentially, I'd only do this if you're using AROrientationTrackingConfiguration or similar.
EDIT I should probably mention that ARKit overrides the camera's projectionTransform property, so you probably won't be able to set fieldOfView manually. I've had some success setting xFov and yFov, but since these are deprecated you shouldn't rely on them.

I set the depth for the camera, but it is rendered first

I am using Unity 5.4.1f1.
There are multiple cameras in the scene, and there is one camera I would like to draw last for post effects.
But whatever value I set for that camera's depth,
it is rendered first according to the Profiler.
Therefore, the objects it renders are not displayed.
How can I change the rendering order?
I solved it myself: when Camera.targetTexture is set, the camera is rendered first.
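In other words, a camera with a targetTexture assigned is rendered before the cameras that draw to the screen, regardless of its depth. A small sketch of the fix, assuming a reference to the post-effect camera (the field name is mine):

using UnityEngine;

public class PostCameraOrder : MonoBehaviour
{
    public Camera postEffectCamera;

    void Start()
    {
        // A camera rendering into a RenderTexture is processed before the
        // screen cameras, so clear the target if you want its depth value
        // to control where it sits in the rendering order.
        postEffectCamera.targetTexture = null;
        postEffectCamera.depth = 10f;  // higher depth = drawn later
    }
}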

Offsetting the rendered result of a camera in Unity

I am trying to make everything that is rendered by my perspective "Camera A" appear 100 points higher. This is because my app has an interface with an open space in its upper part.
My app uses face detection to simulate the face movement on an in-game avatar. To do this I compute the "Model-View-Matrix" and set it as the camera's "worldToCameraMatrix".
So far this works well, but everything is rendered with the center as the origin; now I want to move this origin a certain distance "up" so that it matches my interface.
Is there a way to tell Unity to offset the rendered camera result?
An alternative I thought about is to render into a texture, then I can just move the texture itself, but I thought there must be an easier way.
By the way, my main camera is orthographic, and I use that one to render the camera texture. In that case, simply moving the rendered quad game object up does the trick.
I found a property called "pixelRect"; its description says:
Where on the screen is the camera rendered in pixel coordinates.
However, moving the center up seems to scale down my objects.
You can set the viewport rect / orthographic size so that it's offset, or you can render to a render texture and render that as an overlay with an offset or a difference in scale.
Cheers
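A minimal sketch of the viewport-rect approach, shifting Camera A's output up by roughly 100 screen points (the point-to-pixel conversion is assumed to be 1:1 here, and the field names are mine):

using UnityEngine;

public class OffsetCameraOutput : MonoBehaviour
{
    public Camera cameraA;           // the perspective camera to offset
    public float offsetInPixels = 100f;

    void Start()
    {
        // Shift the normalized viewport rect upward. The camera keeps its
        // full width and height, so the image is moved rather than scaled;
        // whatever ends up above the screen is simply clipped.
        float normalizedOffset = offsetInPixels / Screen.height;
        cameraA.rect = new Rect(0f, normalizedOffset, 1f, 1f);
    }
}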