My goal is to show the world map (a group of 3D models, with lighting, shading, and a post-processing effect) inside the UI whenever the player wants to warp to a location.
The setup I came up with is that the world map lives in a separate scene, which is loaded when the game starts; that scene's main camera renders to a RenderTexture, which is then displayed with a RawImage on the UI of the main scene.
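In code, the setup boils down to something like this (simplified; the variable names are just placeholders for my camera and RawImage references):

// camera in the world map scene renders into a RenderTexture
var rt = new RenderTexture(1920, 1080, 24);
worldMapCamera.targetTexture = rt;
// RawImage on the main scene's UI displays it
warpMapRawImage.texture = rt;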
The problem with this is that the world map scene then also uses the lighting of the main scene. Additionally, the RawImage looks slightly distorted, even though the resolution is set to 1920x1080, the same as the game window.
Is there an easier way to achieve this? My current approach feels a bit 'hacky'.
I am making a 3D isometric game. There are a lot of holes in my game and the player can go into them, but the problem is that when he goes into one of them he becomes invisible. I can't move the camera, since that would change the concept. I tried using a shader like the one in this video, but it makes the entire terrain transparent and glitchy (I am using Unity terrain), so I am stuck and need an idea for how to keep the player visible inside the hole.
What you're asking can be achieved in multiple ways, depending on your render pipeline and requirements.
If you're working with the Universal Render Pipeline (URP), you could create a Forward Renderer asset and add a custom render pass that runs whenever your player is occluded by terrain.
You could assign a new layer to the player, such as "Player", then select or deselect that mask in the Filters > Layer Mask properties of the Forward Renderer Data, and assign either the same material or a custom one for when the player is occluded by terrain.
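At runtime, moving the player onto that layer is a one-liner (a sketch; "Player" must already exist in your project's layer list):

// put the player on its own layer so the render pass and
// layer mask filters can target it
playerObject.layer = LayerMask.NameToLayer("Player");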
Alternatively, you could create either a cutout or a dither shader using Shader Graph, for which there are many tutorials.
Camera GameObjects give you the option to select which layers you want them to "see" (render) using the culling mask. Think of layers as a way of grouping GameObjects and giving that group a name.
You can have multiple cameras at the same time, each one rendering a different set of layers, or you can even change the culling mask of a single camera at runtime in response to what is happening in the game.
Assign a layer to each of the terrain elements in your scene and have the camera render them accordingly, "culling" the rest.
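For example, switching what a camera renders at runtime could look like this (a sketch; the layer names are just examples):

// render only the "Terrain" layer and cull everything else
cam.cullingMask = 1 << LayerMask.NameToLayer("Terrain");

// or add the "Player" layer to whatever the camera already renders
cam.cullingMask |= 1 << LayerMask.NameToLayer("Player");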
There is very helpful documentation on layers and on the camera culling mask.
Ok so I've been trying to make a custom 2D lighting system in Unity, and I'm at that annoying stage where I know what I want to do but I'm not sure how to do it.
Here's the plan:
There will be dedicated light objects with their own meshes. These meshes determine the shape of the light.
Before the camera renders the whole scene, it does an extra render of just the light meshes with a black background to create a lightmap.
Then the camera renders the scene as normal (does NOT render the light meshes this time). Every object has a shader that will access the lightmap and shade itself appropriately depending on the color of the lightmap at that point.
That's the idea, anyway. I sorta threw together a botched form of this: I used a separate camera to render the lightmap into a render texture, with a culling mask so that it only rendered the light meshes, which are on their own layer. I then manually passed that texture to the shaders, which use their screen UVs to sample from it.
This works sorta OK, but the scene view is completely messed up, since it tries to light things as if you were looking at them from the perspective of the lighting camera. I feel like this would make the system hard to use, so I want to try to make something that feels a bit more cohesive.
Here's some screenshots to explain:
The tan-ish box is my "light," which gets rendered to the light cam, visible in scene. This next shot is what renders to the lightmap:
The black background is not from the big black box; the camera's clear flags are just set to a solid black color.
Now, according to this lightmap, the middle of the screen should be lit up. And that's exactly what happens:
Notice that in the game view, since the light camera is set up with the same position/rotation/perspective settings as the game camera, it looks fine:
The main problem is figuring out that extra render. Is there any way to create an extra pass for the main camera, before the scene render, that only renders the light meshes? I could probably figure out the rest from there. It would also be nice if I could make the lightmap a global shader variable, so I don't have to pass it to each individual material, but one thing at a time, right?
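For reference, here's roughly what I imagine that would look like with a command buffer in the built-in pipeline, based on what I've read so far (very much a sketch, and all the names are made up):

using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class LightMeshPass : MonoBehaviour
{
    public Renderer[] lightMeshes;  // the dedicated light objects
    public Material lightMaterial;  // material used to draw them into the lightmap

    CommandBuffer cb;
    RenderTexture lightmap;

    void OnEnable()
    {
        lightmap = new RenderTexture(Screen.width, Screen.height, 0);
        cb = new CommandBuffer { name = "Light Mesh Pass" };

        // draw only the light meshes into the lightmap, on a black background
        cb.SetRenderTarget(lightmap);
        cb.ClearRenderTarget(true, true, Color.black);
        foreach (var r in lightMeshes)
            cb.DrawRenderer(r, lightMaterial);

        // expose the result to every shader without per-material setup,
        // then restore the camera's own render target
        cb.SetGlobalTexture("_Lightmap2D", lightmap);
        cb.SetRenderTarget(BuiltinRenderTextureType.CameraTarget);

        // run it before the camera's normal opaque pass
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeForwardOpaque, cb);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.BeforeForwardOpaque, cb);
        lightmap.Release();
    }
}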
Thanks so much to anyone who can shed some light on this subject. I'm still pretty new to shaders and rendering, so any help is much appreciated.
If I understand correctly, your problem is the appearance of your lights in the Scene View, right?
For that, you can create custom gizmos for them and hide the original objects. There's a tutorial:
https://learn.unity.com/tutorial/creating-custom-gizmos-for-development-2019-2#5fa30655edbc2a002192105c
I was going to use the VideoPlayer to render to Camera Near Plane, but I also want to display subtitles for the video for the sake of accessibility. I'm wondering what the best way to do that is.
I can't see anything on a canvas if I render to Near Plane. I'd like the video to appear in front of the scene so that I can have the scene there once the video is complete.
Do I need to be using a render texture to achieve this? Seems like a render texture might incur some unnecessary overhead for my purposes, but I could be wrong.
The idea is this:
Far Background - Scene
Background - Black Image (so I can fade to the scene)
Middleground - Video
Foreground - Subtitles
More info:
This is a 2D point and click adventure game with a pre-rendered cutscene.
You could do this with a render texture, placing it in front of the camera at an exact distance and size, but I wouldn't. It would probably need to be a different camera anyway, for lighting or clipping purposes.
I would use a second camera rendering on top of the Main Camera, with the subtitle UI's canvas set to Screen Space - Camera targeting the second camera, and that camera clearing depth only. It will render what it sees, but with a totally transparent background. Then you can render your video on either the main camera's near plane or the new subtitle camera's far plane.
You could put your black square in front of this camera too, though it would then be in front of the video. It could instead be UI on the main camera, or you could stick a third camera in between them. You might have to worry about performance if there are too many cameras, but I have used two or three before with no noticeable performance hit.
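In code, that camera/canvas wiring amounts to something like this (a sketch; fill in your own references):

// the subtitle camera draws after the main one, keeping its colors
subtitleCamera.clearFlags = CameraClearFlags.Depth; // depth-only clear
subtitleCamera.depth = mainCamera.depth + 1;        // render on top

// the subtitle canvas targets the second camera
subtitleCanvas.renderMode = RenderMode.ScreenSpaceCamera;
subtitleCanvas.worldCamera = subtitleCamera;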
Robert Mocks's answer is perfectly tenable and makes sense to me. Thank you for that!
What I decided to do instead was use a RawImage so that I wouldn't have to deal with extra cameras. This way I can use the canvas as I normally would and don't have to deal with render textures.
This involves using the API Only setting along with the following code:
rawImage.texture = videoPlayer.texture;
That seems to work well for me.
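For anyone doing the same: the texture is only valid once the player has prepared, so in full it looks something like this (a sketch with my own names):

using UnityEngine;
using UnityEngine.UI;
using UnityEngine.Video;

public class VideoOnRawImage : MonoBehaviour
{
    public VideoPlayer videoPlayer;
    public RawImage rawImage;

    void Start()
    {
        videoPlayer.renderMode = VideoRenderMode.APIOnly;
        videoPlayer.prepareCompleted += source =>
        {
            rawImage.texture = source.texture; // only valid after preparation
            source.Play();
        };
        videoPlayer.Prepare();
    }
}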
I have a VR project for which I am trying to render a canvas to be "always on top" of everything using a separate UI camera as noted here.
I made my UICamera object a child of the main camera, which is the CenterEyeAnchor object in the OVRCameraRig.
And I set the culling mask for my UICamera to only show its layer and removed that layer from the main camera (CenterEyeAnchor).
But the perspective is weird, and it seems like the UICamera is offset a little bit, even though its transform is zeroed out in the Inspector, so I don't know why it's displaying so strangely.
If I set the culling mask to "Everything" for both cameras it's still offset a little.
In general you don't need the UI camera to be a child of CenterEyeAnchor. Move it out to the top level and zero out the coordinates. The Oculus rig might be doing some magic with IPD or something else that screws up the pixel-perfectness of the UI.
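In other words, something like this (a sketch of the hierarchy change):

// detach the UI camera from the rig and zero it out
uiCamera.transform.SetParent(null, false);
uiCamera.transform.position = Vector3.zero;
uiCamera.transform.rotation = Quaternion.identity;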
I'm fairly new to Unity and I'm trying to embed a 3d view inside a 2d one.
I'm working on an emulation app that has a 2d UI for controls and a preview of the result in a 3d box that should be embedded in the 2d one, sort of as a player.
What's the right approach to doing that in Unity? Is there a way of "embedding" one scene in another?
Thanks!
If you want to create a 3D effect with a UI canvas, you should look at this link.
If you are using a 2D project, it is basically a 3D scene with the camera set to use orthographic projection instead of perspective, so you can use 3D models as well.
You should look into Render Textures. These allow you to render a camera's view onto a texture in the scene. Say you have a part of a scene that you want to render onto a TV screen in your game: you would place that part of the scene somewhere and place a camera to view it, then create a render texture and apply it to the mesh that makes up your TV screen.
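A minimal sketch of that wiring (the names are just examples):

// render the TV camera into a texture and show it on the TV mesh
var rt = new RenderTexture(512, 512, 16);
tvCamera.targetTexture = rt;
tvScreenRenderer.material.mainTexture = rt;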
Now, if you wish to make a UI system like a radar with a top-down view, you would modify the viewport of your top-view camera (0, 0, 0.2, 0.2 would place it in the bottom-left corner at 20% of the screen's width and height) and make its depth higher so that it renders on top of the main camera.
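For the radar case, that amounts to (sketch):

// bottom-left corner, 20% of the screen's width and height
radarCamera.rect = new Rect(0f, 0f, 0.2f, 0.2f);
radarCamera.depth = mainCamera.depth + 1; // draw on top of the main camera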