Merge two cameras into one - Unity3D

I have two cameras in my scene - one that follows 2D objects and another for 3D models (the second camera has a different angle of view than the first).
I want to create a third camera that combines the views from those two cameras (i.e. that camera should "see" exactly what the user sees in the game). Is that possible?
Why do I want such a camera? I would like to take a screenshot. I know I can use Application.CaptureScreenshot, but it also captures UI elements. Why can't I just disable the UI elements for a few milliseconds while the screen is captured? Because the effect is ugly (the user sees the UI disappear for a moment). So my idea is to create a third camera that ignores the UI layers (but otherwise sees exactly what the user sees), render one frame from it to a file, and then destroy that camera.
My code so far:
private IEnumerator CaptureScreenCoroutine()
{
    // Wait till the last possible moment before screen rendering to hide the UI
    yield return null;
    this.EnableUi(false);

    // Wait for screen rendering to complete
    yield return new WaitForEndOfFrame();

    // Take screenshot
    Application.CaptureScreenshot("SomeScreenShot.png");
    this.EnableUi(true);
}
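For what it's worth, the "third camera that ignores the UI layer" idea can be approximated without an extra camera at all: temporarily remove the UI layer from the two existing cameras' culling masks and render them into one RenderTexture. The sketch below is untested and relies on assumptions not stated above (the field names, the built-in "UI" layer, and that the overlay camera's Clear Flags don't erase the first render):

using System.Collections;
using UnityEngine;

public class NoUiCapture : MonoBehaviour
{
    public Camera camera3D;   // background camera (assumed field name)
    public Camera camera2D;   // overlay camera, assumed not to clear the colour buffer

    public IEnumerator RenderWithoutUi(RenderTexture target)
    {
        // Wait until the normal frame has finished before doing extra renders
        yield return new WaitForEndOfFrame();

        int uiLayer = 1 << LayerMask.NameToLayer("UI");

        // Render both gameplay cameras into the same texture, skipping the UI layer
        foreach (Camera cam in new[] { camera3D, camera2D })
        {
            int originalMask = cam.cullingMask;
            RenderTexture originalTarget = cam.targetTexture;

            cam.cullingMask = originalMask & ~uiLayer;
            cam.targetTexture = target;
            cam.Render();

            cam.cullingMask = originalMask;
            cam.targetTexture = originalTarget;
        }

        // target now holds the combined frame without UI; read it back with
        // Texture2D.ReadPixels and save it however you like
    }
}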

With a MonoBehaviour attached to the last rendered camera, you will need something like this:
void LateUpdate()
{
    // Create a texture the size of the screen and copy the current
    // render target's pixels into it
    Texture2D tex = new Texture2D(Screen.width, Screen.height);
    tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
    tex.Apply();
}
This will dump all the pixels currently on screen into tex, and you can then do whatever you want with it. It might cause a hitch depending on the resolution you are rendering the cameras at.
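For example, if the goal is a screenshot file as in the question, a short follow-up (assuming you append it after tex.Apply()) could be:

// Encode the captured texture to PNG and write it to disk
byte[] png = tex.EncodeToPNG();
System.IO.File.WriteAllBytes(
    System.IO.Path.Combine(Application.persistentDataPath, "screenshot.png"),
    png);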

Related

Unity3D: changing camera from perspective to orthographic hides UI elements

In my project I'm using 2 cameras:
the Main Camera for the scene's 3D elements;
a secondary camera for UI elements, which I need(?*) since I want to render 3D objects inside the UI (I followed this YouTube tutorial).
?* - I'm actually not 100% sure if that's the only and/or correct way to render 3D objects in UI, but I haven't found any other solution.
Unity Inspector
(Inspector screenshots omitted: the Main Camera uses perspective projection, the secondary cameras are orthographic, the UI camera is also orthographic, and there is a Canvas set up for the UI.)
I'm trying to make a sort of switch between different orthographic projections that uses different cameras to change the view.
To that end I made a dropdown UI element that seems to work fine: I used the GameObject.SetActive(bool) method to switch cameras (even though the docs say to use the enabled property, because that wasn't working for me).
Switch Camera Script
using System.Collections.Generic;
using UnityEngine;

public class ChangeCamera : MonoBehaviour
{
    public List<Camera> cameras;

    public void SetCamera(int camera)
    {
        for (int i = 0; i < cameras.Count; i++)
        {
            // Activate only the selected camera, deactivate all others
            cameras[i].gameObject.SetActive(i == camera);
        }
    }
}
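For reference, the dropdown drives SetCamera through its On Value Changed event; wired up from code instead of the Inspector, that would look roughly like this (a sketch with assumed component names):

using UnityEngine;
using UnityEngine.UI;

public class CameraDropdown : MonoBehaviour
{
    public Dropdown dropdown;          // the UI dropdown driving the switch
    public ChangeCamera changeCamera;  // the script from above

    void Start()
    {
        // Forward the selected index straight to SetCamera
        dropdown.onValueChanged.AddListener(changeCamera.SetCamera);
    }
}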
Problem
So far so good.
The problem is that, when I switch the camera in Play Mode, the UI disappears.
Demo
Before switching (perspective camera) the UI is displayed correctly; after switching to one of the orthographic cameras, the UI disappears both in the Game view and in the Scene window (screenshots omitted).
Update
Setting the Canvas "Render Mode" to Screen Space - Overlay seems to solve the problem, but then I'm no longer able to see the 3D objects in the UI.
Turns out it was easier than I thought. Big thanks to derHugo :)
Problem
The problem here is the camera depth:
the Main Camera (perspective) has a depth of -1;
the UI camera has a depth of 0;
the orthographic cameras have a depth of 0.
Since the camera depth controls the cameras' rendering order, a camera with a lower depth gets rendered before a camera with a higher depth. Therefore, when I switched to one of the orthographic cameras, it was rendered at the same depth as the UI camera, masking the UI completely.
Solution
I changed the orthographic cameras' depth to -1 (the same as the Main Camera), and that solved the problem.
NB: this had nothing to do with the camera using perspective or orthographic projection.
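For completeness, the same fix can be applied from code; a trivial sketch, where sceneCameras stands for an assumed list holding the non-UI cameras:

// Render all scene cameras before the UI camera, which stays at depth 0
foreach (Camera cam in sceneCameras)
{
    cam.depth = -1f;
}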

How does one Render a texture directly to the Camera in Unity

In Unity, I have two images in the form of textures that I am merging together (one overlaying the other). I do this in a compute shader and put the result on a RenderTexture. I want this RenderTexture to be everything that the output camera sees.
I have found articles saying to use the camera's replacement shader feature, but I couldn't get that to work properly.
Currently I have simply put the RenderTexture onto a UI RawImage that covers the whole UI Canvas so that the entire camera view is filled. This, however, has a lot of lag and is obviously a suboptimal solution.
So how does one output the RenderTexture, or the compute shader result, directly onto the camera? Thanks.
You could probably use OnRenderImage
Event function that Unity calls after a Camera has finished rendering, that allows you to modify the Camera's final image.
and use Graphics.Blit
Copies source texture into destination render texture with a shader.
and do something like e.g.
using UnityEngine;

// This script goes onto the corresponding Camera
public class RenderReplacement : MonoBehaviour
{
    public RenderTexture replacement;

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        // To overwrite the entire screen
        Graphics.Blit(replacement, (RenderTexture)null);

        // Or to overwrite only what this specific Camera renders
        //Graphics.Blit(replacement, dest);
    }
}
Where dest is:
The destination RenderTexture. Set this to null to blit directly to the screen. See description for more information.
Note that, as mentioned in the API, OnRenderImage is called after this Camera has already finished rendering. So to make this more efficient - since we basically throw away that render - simply make sure the camera isn't rendering anything, by disabling all layers and, e.g., letting it only render a single-colour background.
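Set up from code, that could look roughly like this (a small sketch; the same settings can of course be made directly in the Inspector):

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class RenderNothing : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.cullingMask = 0;                          // render no layers at all
        cam.clearFlags = CameraClearFlags.SolidColor; // just clear to a flat colour
        cam.backgroundColor = Color.black;
    }
}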

How to make transparent UI visible in Unity Editor?

Background
In Unity 2020 LTS, I want to build a UI scene.
But in the Game panel I discovered that, although an animation is set to play from the beginning (with no conditions), the game first shows what I see in the Editor panel for a short time and only then plays the animation.
The state machine is Entry -> Target (Default).
I don't want to show the player what I see in the editor, only the first frame of the animation.
I guess this is because loading the level takes some time (almost 0.5 seconds).
Question
So I tried another way: make the initial state of all objects the same as the first frame of the animation.
That works - it just looks like the scene freezes on the first frame for 0.5 seconds. However, I can't edit those objects visually, because they are all transparent in the first frame.
I have tried Gizmos, but they don't work well. Besides, Gizmos force me to create lots of C# classes, one per object, even though those objects are only part of the animation and have no scripts of their own.
Is there a better way to show transparent (UI) objects in the editor scene only?
Assuming you have some GameManager script, you can add a GameObject field, assign the UI element to it, and make it inactive in the script's Start() function, like so:
using UnityEngine;

public class GameManager : MonoBehaviour
{
    public GameObject menu;

    void Start()
    {
        // Hide the menu as soon as the scene starts
        menu.SetActive(false);
        // other statements
    }
}
I'm not quite sure what you're asking for, but if you want to edit UI elements that are invisible, you can simply select them in the Hierarchy and edit them from there (see the linked image of an example scene with an invisible panel).
Image img;

// Start is called before the first frame update
void Start()
{
    img = GetComponent<Image>();
    // Make the panel fully transparent at runtime
    img.color = new Color32(0, 0, 0, 0);
}
The code above sets the panel's transparency to zero when the scene starts; make sure to add a using UnityEngine.UI; directive in order to access the Image component.

How to set the camera view to a specific area of the UI

My whole project works in the UI. One of my scenes has a VideoPlayer, and I would like to point my MainCamera at that specific area. This is my recording scene, and for recording the Canvas Render Mode has to be "Screen Space - Camera". How can I set my camera view to that specific area?
I tried a second camera with a culling mask, but it didn't work because the VideoPlayer is already a child of my UI, so when the camera renders the UI it shows all child objects.
Is it possible to point a camera wherever I want in the UI?
Yessir it sure is. I literally had to do this a couple of days ago. Here is a tutorial/video I followed to figure out how to do it for my application. From the sounds of it, all you need to do is follow the video up to ~3 minutes. Then utilize the end product of getting a camera to showcase something in a UI window for your application.
For reference, in case the video gets deleted, the process is as follows (a script version is also sketched below):
Create a new GameObject within the Canvas.
Add a Raw Image as a child of the new GameObject.
We want a Raw Image in particular because Raw Images support assigning a 2D texture, which is exactly what we need in order to display the camera view.
Add a new Camera as a child of the GameObject.
Create a new Render Texture in your Project folder.
Assign the Render Texture to the Target Texture field of your Camera's Camera component.
Assign the Render Texture to the Texture field of your image's Raw Image component.
Move the Camera to frame what you want to view, and it should project into your UI GameObject.
If it doesn't do this correctly, or the image has a weird aspect ratio, check the dimensions of the Render Texture and of your image. It took me a while to get everything looking normal, but it really just takes dimension adjustments.
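If you would rather wire this up from a script than in the Inspector, here is a minimal sketch (the component and field names are placeholders, not from the original setup):

using UnityEngine;
using UnityEngine.UI;

public class CameraToRawImage : MonoBehaviour
{
    public Camera viewCamera;   // the camera that should appear inside the UI
    public RawImage target;     // the RawImage on the Canvas

    void Start()
    {
        // Match the texture size to the RawImage so the aspect ratio stays correct
        Rect rect = target.rectTransform.rect;
        var rt = new RenderTexture((int)rect.width, (int)rect.height, 24);

        viewCamera.targetTexture = rt;
        target.texture = rt;
    }
}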
Hope this helps!

How to change cameras in Unity based on camera coordinates?

I'd like to create a movie in Unity, so I need several cameras and camera paths.
On top of this, of course, I'd like to switch between them. For example, if CameraPath1 reaches a significant point with Camera1, I'd like to change to CameraPath2 with Camera2, and so on.
I also have the Camera Path Animator asset installed. It works perfectly when I'm using it with only one camera on several camera paths, but I'm unable to switch between main cameras.
I'm a newcomer to Unity. I also know that I should do something like this:
...
camera1.camera.active = false;
camera2.camera.active = true;
...
...but where should I put these lines? And on top of this, how can I catch the event when a camera on a specific camera path reaches a particular point?
The way to go would be an animation controller on a parent object that has all the cameras as children and controls their active states. This gives you precise control over the behaviour.
Create an empty game object, add all cameras as children, and add an Animator to the parent object with one animation. This animation drives the active state of all the cameras. One extra bonus of this approach is the possibility to call methods as well, using AnimationEvents; within the animation you can still define triggered actions like explosions or movements of objects.
As I said, this gives you precise control, since you can easily define actions at specific times.
The downside is the rigidity of the process. It may not be as flexible as code, but since you are making a movie, you probably do not need that flexibility.
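For the AnimationEvent part, the method an event calls could even do the camera switching itself; a small sketch (class and field names are assumptions):

using UnityEngine;

public class CameraSwitcher : MonoBehaviour
{
    public Camera[] cameras;   // all movie cameras, children of this object

    // Call this from an AnimationEvent with the index of the camera to enable
    public void SwitchTo(int index)
    {
        for (int i = 0; i < cameras.Length; i++)
        {
            cameras[i].gameObject.SetActive(i == index);
        }
    }
}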
If you do want the more flexible, code-driven approach, you would give your cameras a collider and a rigidbody (with isKinematic set to true), and then place trigger boxes in the scene with a simple script:
using UnityEngine;

public class CameraTrigger : MonoBehaviour
{
    public GameObject nextCamera;

    void OnTriggerEnter(Collider col)
    {
        // When the currently active camera enters the trigger,
        // turn it off and enable the next one
        if (col.gameObject.CompareTag("Camera"))
        {
            col.gameObject.SetActive(false);
            nextCamera.SetActive(true);
        }
    }
}
Then you drag the camera that is meant to start next into the nextCamera field.