Unreal Engine: How to render Stereo Panoramic Capture from a single eye - unreal-engine4

Unreal Engine has a plugin: Stereo Panoramic Capture. After enabling the plugin, the command:
SP.PanoramicScreenshot
renders two images, one from the left eye and one from the right eye.
Is there any way to render from only a single eye, to save render time?
I don't see any other command that allows me to do so; the only option seems to be hacking the source code.
Stereo Panoramic Capture Reference
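For anyone else hitting this: as far as I can tell the plugin only exposes SP.* console variables that control output location, angular increments and capture FOV, and none of them skips one eye, so editing the plugin source really does look like the only route to a mono capture. A minimal capture run from the console looks roughly like this (cvar names are from the reference above; the values are just examples):
    SP.OutputDir D:/PanoramicCaptures
    SP.HorizontalAngularIncrement 5
    SP.VerticalAngularIncrement 30
    SP.CaptureHorizontalFOV 30
    SP.PanoramicScreenshot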

Related

Why is the Motion Blur post processing effect not working in Unity (mac)?

I just can't seem to get motion blur to work in my Unity game. I have added the Post Processing package, created a custom layer for it, added a Post Process Layer to my camera and a Post Process Volume object to my scene, and linked them together using that dedicated layer. I've also added the effects I want to the Post Processing Profile.
I'm pretty sure I've done this all correctly as I have successfully added many post processing effects, including Bloom, Vignette, Depth of Field and Lens Distortion. These work fine. But when I add Motion Blur, despite turning the Sample Count to maximum, there simply doesn't seem to be a difference.
My scene is very simple at the moment, containing only a sphere with its normals inverted, centred around a user-controlled camera - the standard '360 Degree Photo' setup, basically. There are no lights and I am using a bright white ambient light to illuminate everything equally and optimally. The render pipeline is the default one for 3D games.
I have tried both spinning the camera AND spinning the sphere using the mouse. Neither seems to result in any appreciable blur. Anyone know why this is not working?
Unity Version: 2020.3.3f1 (Personal)
Computer: 2013 iMac
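In case it helps with debugging, here is a rough sanity-check script I'd use to rule out the profile itself (this assumes the built-in render pipeline with Post Processing Stack v2; the volume reference and values are placeholders):
    using UnityEngine;
    using UnityEngine.Rendering.PostProcessing;

    // Rough sanity check for the Post Processing Stack v2 motion blur settings.
    public class MotionBlurCheck : MonoBehaviour
    {
        public PostProcessVolume volume; // drag in the volume you already created

        void Start()
        {
            if (volume.profile.TryGetSettings(out MotionBlur blur))
            {
                blur.active = true;
                blur.enabled.Override(true);
                blur.shutterAngle.Override(270f); // longer shutter = stronger blur
                blur.sampleCount.Override(16);
            }
            // PPv2 motion blur relies on motion vectors, so check the platform supports them.
            Debug.Log("Motion vectors supported: " + SystemInfo.supportsMotionVectors);
        }
    }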

Are masked sprites and stencil buffers in Unity only visible in one eye when deployed on HoloLens 2?

Is it possible that Sprite Renderers in a HoloLens 2 Unity project, which are masked via a SpriteMask, are only visible in one eye in the final HoloLens 2 build (UWP via Visual Studio 2019, deployed on a HoloLens 2 device)?
I also experienced the same behaviour with elements that are masked with a stencil shader.
I am using a 24-bit depth buffer for my Unity project, if that helps; otherwise the stencil shader wouldn't work.
Displaying certain objects to only one eye requires different approaches depending on the stereo rendering configuration.
With multi-pass rendering, you should be able to set a camera to render to only one eye (Camera component -> Output -> Target Eye), so doing per-eye work is straightforward. With stereo instancing, you can branch in the shader instead, but you will need to multiply by the projection matrix of the current eye.
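For the multi-pass case, a rough sketch of a per-eye overlay camera could look like this (the layer name is just a placeholder):
    using UnityEngine;

    // Renders only the "LeftEyeOnly" layer, and only into the left eye (multi-pass stereo).
    public class LeftEyeOnlyCamera : MonoBehaviour
    {
        void Start()
        {
            Camera cam = GetComponent<Camera>();
            cam.cullingMask = 1 << LayerMask.NameToLayer("LeftEyeOnly");
            cam.clearFlags = CameraClearFlags.Nothing;       // draw on top of the main camera's image
            cam.stereoTargetEye = StereoTargetEyeMask.Left;  // same as Output -> Target Eye in the Inspector
        }
    }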

What objects and APIs would I use to fill the unity game window with custom graphics?

I need to build an app using Unity which doesn't use a traditional camera to generate the graphics. I'll build the visuals using some custom shaders and a few cameras whose results get stuffed into RenderTextures and then frobbed. (Think http://www.purplefrog.com/~thoth/art/kaleidescope/kaleid1.html but even weirder.)
I'm not sure what objects I would put in the scene to accomplish this. In any normal app you just put a camera and point it at the right spot and Unity gets the pixels into the window, but that is just not how this thing will work.
I'm not sure if I should be using a UI Canvas or what APIs would be used to copy various render textures into the proper locations.
If you are not targeting WebGL, you can create a RenderTexture of the proper size (maybe using RenderTexture.GetTemporary) and use Graphics.CopyTexture or other techniques to assemble the image you want displayed in the game window.
Once you have the pixels you want in the RenderTexture, you can use Graphics.Blit(src, (RenderTexture)null), which will copy the pixels into the game window. These pixels will be stretched if the game window is not the same size as the RenderTexture.
This technique worked for me in the editor's game window, but when I compile it to WebGL, all I get is a mostly-grey screen with a really big black rectangle in the bottom left.
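For reference, a minimal sketch of the non-WebGL path described above (the source texture and sizes are placeholders):
    using System.Collections;
    using UnityEngine;

    // Assembles a frame into a temporary RenderTexture and blits it straight to the game window.
    public class BlitToScreen : MonoBehaviour
    {
        public Texture sourceTex; // whatever you have composed elsewhere
        RenderTexture composed;

        IEnumerator Start()
        {
            composed = RenderTexture.GetTemporary(Screen.width, Screen.height, 0);
            while (true)
            {
                yield return new WaitForEndOfFrame();
                Graphics.Blit(sourceTex, composed);            // assemble / post-process here
                Graphics.Blit(composed, (RenderTexture)null);  // null target = the game window
            }
        }

        void OnDestroy()
        {
            if (composed != null) RenderTexture.ReleaseTemporary(composed);
        }
    }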

Vuforia Real time Augmented Reality

I am new to Unity and Vuforia. I would like to know: instead of using an image target in Vuforia, can I do something like augmenting a real-time hand-drawn illustration? For instance, when I start to draw something in the AR app, a 3D object representing what I am drawing would also appear?
Vuforia can recognize pre-defined images that have enough features to be detected. If you draw such an image by hand, that's fine. Otherwise, the answer is no. Just an FYI: Vuforia also has text recognition; if you'd like, take a look here: Vuforia's Text Recognition
Of course, once a detection is made, what happens and what is drawn is up to you, so a 3D object is certainly an option.
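As a rough illustration of the "what happens after detection is up to you" part, this is the kind of handler I'd attach to the image target. It assumes the older Vuforia Unity API (ITrackableEventHandler); newer Vuforia Engine versions use ObserverBehaviour.OnTargetStatusChanged instead, and the model reference is a placeholder:
    using UnityEngine;
    using Vuforia;

    // Shows a 3D object only while the (hand-drawn) image target is being tracked.
    public class ShowModelOnFound : MonoBehaviour, ITrackableEventHandler
    {
        public GameObject model; // the 3D object to reveal
        TrackableBehaviour trackable;

        void Start()
        {
            trackable = GetComponent<TrackableBehaviour>();
            trackable.RegisterTrackableEventHandler(this);
            model.SetActive(false);
        }

        public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                            TrackableBehaviour.Status newStatus)
        {
            bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                         newStatus == TrackableBehaviour.Status.TRACKED ||
                         newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;
            model.SetActive(found);
        }
    }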

Accessing Main Camera Left and Right in Unity / google-vr

This is a follow-up to this question regarding how to display objects on one camera only in Google VR and Unity.
In the current Unity/Google VR demo project, I can only access Main Camera Left and Main Camera Right while running the game. During runtime, I am able to disable a layer via the culling mask of one camera.
But I am not able to save those changes while running the game. If I stop, the two cameras (Main Camera Left/Right) disappear and I only see Main Camera with GvrReticle as its child.
I suspect the cameras are created or imported from a prefab during runtime.
What would be the right way to have the left/right cameras accessible when not running the scene?
It's mentioned in the guide:
Often you will wish to add the stereo rig to your scene in the Editor rather than at runtime. This allows you to tweak the default parameters of the various scripts provided in this plugin, and provides a framework on which to add additional scripts and objects, such as 3D user interface elements.
To turn your Camera into a stereo camera, select the Camera (or Cameras) in the Hierarchy panel, then execute the main menu command Component > Google VR > Update Stereo Cameras.
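Once the stereo rig exists in the scene at edit time, you can set the culling masks directly in the Inspector and they will be saved with the scene. If you still prefer to do it from a script, something along these lines works (the camera and layer names are assumptions based on the hierarchy described above):
    using UnityEngine;

    // Hides the "RightEyeOnly" layer from the left stereo camera created by Update Stereo Cameras.
    public class PerEyeCulling : MonoBehaviour
    {
        void Start()
        {
            Camera left = GameObject.Find("Main Camera Left").GetComponent<Camera>();
            left.cullingMask &= ~(1 << LayerMask.NameToLayer("RightEyeOnly"));
        }
    }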