I have three objects in a scene: a grass cube, a sky cube, and a brick cube. All the objects are on the same layer, and they appear to be correctly aligned in space to display properly. The camera preview shows them correctly, but in Game mode one of the cubes, the sky in this case, always shows in front and overlaps the brick.
What bothers me especially is that the tiny Camera Preview window shows it correctly, but the Game view doesn't.
Any idea why this could be happening?
http://prntscr.com/6iez5a
Ok so I've been trying to make a custom 2D lighting system in Unity, and I'm at that annoying stage where I know what I want to do but I'm not sure how to do it.
Here's the plan:
There will be dedicated light objects with their own meshes. These meshes determine the shape of the light.
Before the camera renders the whole scene, it does an extra render of just the light meshes with a black background to create a lightmap.
Then the camera renders the scene as normal (does NOT render the light meshes this time). Every object has a shader that will access the lightmap and shade itself appropriately depending on the color of the lightmap at that point.
That's the idea anyway. I sorta threw together a botched form of this. I used a separate camera to render the lightmap into a render texture, with a culling mask so that it only rendered the light meshes, which are on their own layer. I then manually passed that texture to the shaders, which use their screen UVs to sample from it.
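In case it helps, the setup boils down to something like this (a trimmed-down sketch; the "Lights" layer and "_LightMap" property are just the names I happen to use):

using UnityEngine;

// Trimmed-down sketch of the current setup: a second camera that renders only
// the light layer into a RenderTexture, which is then handed to each material.
// "Lights" (layer) and "_LightMap" (shader property) are placeholder names.
public class LightmapCamera : MonoBehaviour
{
    public Camera lightCamera;       // extra camera, matched to the main camera's transform/FOV
    public Material[] litMaterials;  // materials that sample the lightmap

    RenderTexture lightmap;

    void Start()
    {
        lightmap = new RenderTexture(Screen.width, Screen.height, 16);

        lightCamera.cullingMask = LayerMask.GetMask("Lights");  // only the light meshes
        lightCamera.clearFlags = CameraClearFlags.SolidColor;
        lightCamera.backgroundColor = Color.black;               // unlit areas stay black
        lightCamera.targetTexture = lightmap;

        foreach (var m in litMaterials)
            m.SetTexture("_LightMap", lightmap);                 // manual hand-off per material
    }
}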
This works sorta OK, but the Scene view is completely messed up, since it tries to light things as if you were looking at them from the perspective of the lighting camera. I feel like this would make the system hard to use, so I want to try to make something that feels a bit more cohesive.
Here are some screenshots to explain:
The tan-ish box is my "light," which gets rendered to the light cam, visible in scene. This next shot is what renders to the lightmap:
The black background is not from the big black box; the light camera's clear flag is just set to black.
Now according to this lightmap, the middle of the screen should be lit up, and that's exactly what happens:
Notice that in the game view, since the light camera is set up with the same position/rotation/perspective settings as the game camera, it looks fine:
The main problem is figuring out that extra render. Is there any way to create an extra pass for the main camera, before the scene render, that only renders the light meshes? I could probably figure out the rest from there. It would also be nice if I could make the lightmap a global shader variable, so that I don't have to pass it to each individual material, but one thing at a time, right?
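For what it's worth, what I'm picturing is something along these lines, assuming a CommandBuffer on the main camera can do this kind of pre-pass (untested sketch; "_LightMap" is just a placeholder property name):

using UnityEngine;
using UnityEngine.Rendering;

// Untested sketch: a CommandBuffer on the main camera that draws the light
// meshes into a temporary render target before the opaque pass and exposes it
// as a global shader texture. Attach to the main camera.
public class LightmapPass : MonoBehaviour
{
    public Renderer[] lightRenderers;  // the light meshes
    public Material lightMaterial;     // whatever material the lights are drawn with

    void OnEnable()
    {
        var cam = GetComponent<Camera>();
        var cb = new CommandBuffer { name = "Render lightmap" };

        int lightmapId = Shader.PropertyToID("_LightMap");
        cb.GetTemporaryRT(lightmapId, -1, -1, 16);      // -1, -1 = camera pixel size
        cb.SetRenderTarget(lightmapId);
        cb.ClearRenderTarget(true, true, Color.black);  // unlit areas stay black

        foreach (var r in lightRenderers)
            cb.DrawRenderer(r, lightMaterial);

        cb.SetGlobalTexture("_LightMap", lightmapId);   // global, so no per-material hand-off
        cb.SetRenderTarget(BuiltinRenderTextureType.CameraTarget);  // hand control back to the camera

        cam.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, cb);
    }
}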
Thanks so much to anyone who can shed some light on this subject. I'm still pretty new to shaders and rendering, so any help is much appreciated.
If I understand correctly, your problem is the appearance of your lights in the Scene view, right?
For that, you can create a custom Gizmo for them and hide the original objects. There's a tutorial:
https://learn.unity.com/tutorial/creating-custom-gizmos-for-development-2019-2#5fa30655edbc2a002192105c
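A minimal sketch of such a gizmo, assuming you attach it to each light object (the wire cube here is just a stand-in for your light's shape):

using UnityEngine;

// Minimal sketch: draw a wire outline for the light in the Scene view only,
// so the light mesh itself can be hidden there.
public class LightGizmo : MonoBehaviour
{
    public Color gizmoColor = Color.yellow;

    void OnDrawGizmos()
    {
        Gizmos.color = gizmoColor;
        Gizmos.matrix = transform.localToWorldMatrix;
        Gizmos.DrawWireCube(Vector3.zero, Vector3.one);  // stand-in for the light's shape
    }
}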
I have two cameras, and to each I've attached a Mesh Filter and Mesh Renderer (which are just red and green cubes). I notice that in the Scene view I can see one camera's position from the other camera. But when I actually play the scene, the camera I'm looking through cannot see the cube of the other camera.
EDIT: Adding some clarity I hope.
When I click the play symbol:
When I switch to the green camera, it cannot see the red camera.
When I switch to the red camera, it cannot see the green camera.
This is even though both cameras have a mesh attached to them, which appears in both the Scene and Game previews (screenshot above).
The camera at the position of the green cube cannot see the green cube because it is inside the cube, and the standard material uses backface culling, which means that only the fronts of the mesh's faces are drawn, not their backs. The standard Unity cube has all its faces facing outwards.
Also, I assume the cube would have been clipped regardless, since it's too close, i.e. within the near clipping distance of the camera. Anything outside the camera frustum will not be drawn.
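If you want to see the frustum part for yourself, a quick throwaway sketch like this (the field names are placeholders) logs whether a renderer's bounds fall inside a camera's frustum:

using UnityEngine;

// Throwaway sketch: logs whether a renderer's bounds fall inside a camera's
// frustum; e.g. a cube sitting closer than the near clip plane will be culled.
public class FrustumCheck : MonoBehaviour
{
    public Camera cam;        // the camera you're looking through
    public Renderer target;   // e.g. the cube attached to the other camera

    void Update()
    {
        Plane[] planes = GeometryUtility.CalculateFrustumPlanes(cam);
        bool visible = GeometryUtility.TestPlanesAABB(planes, target.bounds);
        Debug.Log(target.name + (visible ? " is inside " : " is outside ") + cam.name + "'s frustum");
    }
}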
We have built an app for the Hololens that has one or two 3D characters in the scene at any given time. What is the best practice for adding lighting to an AR scene for headsets like the Hololens? Should the scene be lit at all?
So here's the thing with the Hololens:
Black is Transparent
This means that any shadows on objects will make the object fade out into transparency when viewed on the real device (the emulator does not show simulated environments). As such, environments should be brightly lit from a source that is a child of the main camera (you may still use a directional light pointing down from an angle above), and objects should not cast shadows (as it will appear that those shadows are punching holes in objects).
This also means that you will want textures that are brightly colored as well.
Brightly lit (real world) backgrounds will exacerbate the transparency effect (as the Hololens can't reduce incoming light).
You'll likely have to experiment to find something that works best for your project.
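As a rough sketch of that setup (the component and field names here are placeholders, not from your project):

using UnityEngine;
using UnityEngine.Rendering;

// Rough sketch of the setup described above: a light parented to the main
// camera and shadows disabled everywhere. Field names are placeholders.
public class HololensLightingSetup : MonoBehaviour
{
    public Light sceneLight;               // e.g. a directional light angled downwards
    public Renderer[] characterRenderers;  // the one or two characters in the scene

    void Start()
    {
        // Keep the light with the camera so content stays brightly lit.
        sceneLight.transform.SetParent(Camera.main.transform, worldPositionStays: false);
        sceneLight.shadows = LightShadows.None;

        // No cast or received shadows, so dark patches don't read as holes.
        foreach (var r in characterRenderers)
        {
            r.shadowCastingMode = ShadowCastingMode.Off;
            r.receiveShadows = false;
        }
    }
}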
I've made a scene full of half-transparent cubes. On the inside of each cube there is another (slightly smaller) cube with its mesh flipped, so the walls of the cube are visible from both the inside and the outside. My problem is that the ordering of the cubes isn't correct. If I look around with the camera while the game is running, the order of the cubes starts to jump around. As you can see in the images, the gray cube in the top left is jumping back and forth.
I'm using the 3rd Person Controller + Fly Mode as my player controller. All cubes are using the Standard shader. Also worth mentioning is that the cubes and corresponding materials are generated from a C# script.
This is done for each cube to enable transparency:
material.SetFloat("_Mode", 3f);                                                      // Standard shader rendering mode 3 = Transparent
material.SetInt("_SrcBlend", (int)UnityEngine.Rendering.BlendMode.One);              // premultiplied alpha blending
material.SetInt("_DstBlend", (int)UnityEngine.Rendering.BlendMode.OneMinusSrcAlpha);
material.SetInt("_ZWrite", 0);                                                       // transparent surfaces don't write to the depth buffer
material.DisableKeyword("_ALPHATEST_ON");
material.DisableKeyword("_ALPHABLEND_ON");
material.EnableKeyword("_ALPHAPREMULTIPLY_ON");
material.renderQueue = 3000;                                                         // Transparent render queue
I've tried setting the camera to Camera.transparencySortMode = TransparencySortMode.Orthographic because I found this as a tip in the forums, but it made matters worse.
Actually I was expecting this to just work out of the box. But it seems I have to do something special to get my use case to work. But what? :)
I'm a real beginner in Unity, so please forgive me if this question isn't so hard to answer. :)
So, I have a Text on a Canvas in the editor; it's fine, and it shows up correctly both in the Scene editor and in the Game view.
But when I added two Sprites, which are going to be the player and the enemy, the positions of these sprites behave a bit weirdly.
The text position is x: -293, y: 195, and when I modify the position of the text it works fine.
When I add the sprites at x: 0, y: 0 and x: 1, y: 1, they appear in the bottom-left corner in the Scene editor, but when I check in the game, they are placed in the middle of the screen.
My question is: why are the coordinates and positions so different in the Scene view (grey) and the Game view (blue)?
Because the default render mode of a Canvas in Unity is "Screen Space - Overlay", it is shown over a very large area in the Scene view. If you want to work only within the camera's field of view, just change the Canvas render mode to "Screen Space - Camera" in the Inspector and drag your MainCamera onto the Render Camera field. Note that even if you use Screen Space - Camera, the coordinate system of a RectTransform (the transform of UI objects) is different from that of a Transform (the transform of normal game objects).
In the Scene view, if you zoom in toward the bottom-left corner of your scene, you will see your main camera's area and the Sprites in their correct positions.
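If you'd rather make the same change from a script, it looks roughly like this (a small sketch, assuming the script sits on the Canvas object):

using UnityEngine;

// Small sketch: switch the Canvas to Screen Space - Camera from code and
// assign the main camera as its render camera.
public class CanvasSetup : MonoBehaviour
{
    void Start()
    {
        var canvas = GetComponent<Canvas>();
        canvas.renderMode = RenderMode.ScreenSpaceCamera;
        canvas.worldCamera = Camera.main;  // the "Render Camera" field in the Inspector
    }
}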
I hope this helps.