In my AR application I want to render a model on top of another. Is there any method or variable to influence this?
Yes, there are at least two options:
z-test set to Always
This has to be set on the shader of the material your model is using. Here's the documentation from Unity, but there are also some guides online on how to write a shader with a custom z-test.
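As a minimal sketch, assuming your custom shader declares ZTest [_ZTest] and exposes a _ZTest property (both are assumptions about your shader, not Unity defaults), you could force the depth test from C# like this:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch: force this object's material to pass the depth test always.
// Assumes the shader declares "ZTest [_ZTest]" and exposes a "_ZTest" property.
public class AlwaysPassZTest : MonoBehaviour
{
    void Start()
    {
        foreach (var r in GetComponentsInChildren<Renderer>())
        {
            var mat = r.material; // instantiates a copy so other objects keep their depth test
            mat.SetInt("_ZTest", (int)CompareFunction.Always);
            mat.renderQueue = (int)RenderQueue.Overlay; // draw late so nothing overwrites it afterwards
        }
    }
}
```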
two Cameras
This is done by having two cameras: one is your ARCamera, and a second one that is always at the same pose as the ARCamera. This could, for example, be done by setting the second camera as a child of the ARCamera with identity pose. Then you can create a special layer, for example "Always In Front", and assign the model to it. Then set the culling masks accordingly for both cameras, such that the second camera renders only the model and the ARCamera renders everything else. There might be some overhead when rendering two cameras with this solution.
Your problem has a similar solution to preventing weapons from clipping in FPS games, as seen in this blog post.
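A minimal sketch of that two-camera setup (built-in render pipeline; "Always In Front" is the layer name suggested above, and the camera references are assumptions you wire up in the Inspector):

```csharp
using UnityEngine;

public class OverlayModelCamera : MonoBehaviour
{
    public Camera arCamera;    // your ARCamera
    public Camera modelCamera; // child of the ARCamera with identity local pose

    void Start()
    {
        int front = LayerMask.NameToLayer("Always In Front");

        arCamera.cullingMask &= ~(1 << front);   // ARCamera renders everything else
        modelCamera.cullingMask = 1 << front;    // second camera renders only the model
        modelCamera.clearFlags = CameraClearFlags.Depth; // keep the AR image, clear depth only
        modelCamera.depth = arCamera.depth + 1;  // draw after the ARCamera
    }
}
```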
MainCamera
SecondCamera
The two pictures show the properties I set in the inspector for each camera. I also tried it in two different ways, but neither made a difference.
I: SecondCamera is the Child of the MainCamera
II: SecondCamera and MainCamera are both on the same hierarchy level in the ARSessionOrigin.
I am making a 3D isometric game. There are a lot of holes in it and the player can go into them, but the problem is that when he goes into one of them he becomes invisible. I can't move the camera, as that would change the concept. I tried using a shader like the one in this video, but it makes the entire terrain transparent and glitchy (I am using Unity terrain). So I am stuck, and I need an idea for how to make the player visible inside the hole.
What you're asking can be achieved in multiple ways, depending on your render pipeline and requirements.
If you're working with the Universal Render Pipeline (URP), you could create a Forward Renderer asset and add a custom render pass that runs whenever your player is occluded by terrain.
You could assign a new layer to the player, such as "Player", then select or deselect that mask in the Filters > Layer Mask properties of the Forward Renderer Data. Then assign the same or a custom material for when the player is occluded by terrain.
Alternatively, you could create either a cutout or a dither shader using Shader Graph, for which there are many tutorials online.
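If you go the material-swap route, here is a minimal sketch under stated assumptions: "occludedMaterial" would be your cutout/dither material, the raycast stands in for URP's layer-mask filtering, and all field names are placeholders:

```csharp
using UnityEngine;

public class OccludedPlayerSwap : MonoBehaviour
{
    public Camera cam;
    public Renderer playerRenderer;
    public Material normalMaterial;
    public Material occludedMaterial; // e.g. a dither shader made in Shader Graph
    public LayerMask terrainMask;     // set to your terrain's layer in the Inspector

    void LateUpdate()
    {
        // If terrain blocks the camera's line of sight to the player, swap materials.
        Vector3 toPlayer = playerRenderer.bounds.center - cam.transform.position;
        bool occluded = Physics.Raycast(cam.transform.position, toPlayer.normalized,
                                        toPlayer.magnitude, terrainMask);
        playerRenderer.sharedMaterial = occluded ? occludedMaterial : normalMaterial;
    }
}
```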
Camera GameObjects give you the option to select what layer you want them to "see" (render) using the culling mask. Think of layers as grouping GameObjects and giving that group a name.
You can have multiple cameras at the same time, each one with a different name and a different layer to render, or even change the viewing layer of a single camera depending on changes happening in the game.
Assign a layer to each of the terrain elements in your scene and have the camera render them accordingly, "culling" the rest.
Very helpful documentation on layers and camera culling mask.
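For illustration, a minimal sketch of driving a camera's culling mask from code (the "HoleInterior" layer name is an assumption):

```csharp
using UnityEngine;

public class CullingMaskToggle : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        int holeInterior = LayerMask.NameToLayer("HoleInterior");

        cam.cullingMask |= 1 << holeInterior;       // start rendering that layer
        // cam.cullingMask &= ~(1 << holeInterior); // ...or stop rendering ("cull") it
    }
}
```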
I heard that game objects are drawn in the same order they appear in the Hierarchy, but in my case it doesn't work.
For example, I want the wolf to be placed in front of the rabbit, but it doesn't work.
Is there some way to sort objects according to the hierarchy, or can I only do it with layers?
The hierarchy sorting you speak of only works in the canvas, so for example with RectTransforms and Images. However, I guess you want to use Sprites. The SpriteRenderer component has an Order in Layer property. Plus, Sprites are more lightweight than Images with transparency. Or you could just move Transforms closer to and further away from the camera (even if your game is 2D / using an orthographic camera). If everything fails, you could change the render queue of the Materials.
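A minimal sketch of the Order in Layer approach (field names are placeholders; both sprites are assumed to share the same sorting layer):

```csharp
using UnityEngine;

public class SpriteDrawOrder : MonoBehaviour
{
    public SpriteRenderer wolf;
    public SpriteRenderer rabbit;

    void Start()
    {
        rabbit.sortingOrder = 0;
        wolf.sortingOrder = 1; // higher Order in Layer draws on top
    }
}
```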
I've noticed that if I uncheck the "Is Global" checkbox on the Bloom effect of a Post Processing Volume, the bloom doesn't apply to the layer I've set in the Post-process Layer, even though I adjusted it to affect that one in particular. In fact, it doesn't apply at all: either it sets bloom for everything in the scene, or for nothing.
Extras: I have no pipeline asset; maybe that's the issue, but I've tried setting up LWRP (because for some reason URP doesn't exist in my 2019.2.17f1 version) and it just breaks all the materials that I use for Particle Systems (Particles/Standard Unlit), even if I upgrade them to LWRP materials.
Any ideas? If it's possible to deliver a solution to both of these problems, excellent, but the main one is the title question.
Note: The "camera stacking" approach mentioned here applies only to Unity URP. For the Unity Built-in Render Pipeline or Unity versions prior to 2019.3.0f3 you can achieve a similar effect with RenderTextures. Though Unity HDRP has no explicit "camera stacking" feature it does allow for the same net effect via the HDRP-specific Graphics Compositor.
"Is there a way to apply bloom to a specific object?"
You could take a leaf out of Unity's camera stacking, whereby one set of objects is rendered by one camera and another set by a different camera. The results of each camera rendering are merged together automatically by Unity and presented to the screen.
But don't take my word for it, this is what Unity has to say:
In the Universal Render Pipeline (URP), you use Camera Stacking to layer the output of multiple Cameras and create a single combined output. Camera Stacking allows you to create effects such as a 3D model in a 2D UI, or the cockpit of a vehicle. Tell me more...
...and (my emphasis):
A Camera Stack overrides the output of the Base Camera with the combined output of all the Cameras in the Camera Stack. As such, anything that you can do with the output of a Base Camera, you can do with the output of a Camera Stack. For example, you can render a Camera Stack to a given render target, apply post-process effects, and so on. Tell me more...
When you consider that each camera has the potential for its own rendering settings (including bloom) the solution is clear:
ensure there are two cameras in the scene, say My Default Camera and Bloomin' Camera
create a custom layer called "Bloom"
assign whatever objects you want to be rendered with a bloom to layer Bloom
set up the camera stack as per "Adding a Camera to a Camera Stack".
My Default Camera should be set to "Base":
Bloomin' Camera should be set to overlay:
Add Bloomin' Camera to My Default Camera Stack settings:
ensure that the Culling mask for My Default Camera has the Bloom layer unticked. This ensures that the objects to be bloomed are only drawn once on the Bloom layer
ensure that the Culling mask for Bloomin' Camera has a single ticked entry for the Bloom layer and nothing else. You don't want to double up on rendering, otherwise you will get funky and undesirable z-order effects, apart from hurting game performance. Other layers will be rendered by My Default Camera.
apply bloom effects to camera Bloomin' Camera
run game, celebrate
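For reference, the same steps done in code; a minimal sketch assuming URP is installed (on older URP versions, use GetComponent<UniversalAdditionalCameraData>() instead of the extension method):

```csharp
using UnityEngine;
using UnityEngine.Rendering.Universal;

public class BloomStackSetup : MonoBehaviour
{
    public Camera baseCamera;  // "My Default Camera"
    public Camera bloomCamera; // "Bloomin' Camera"

    void Start()
    {
        int bloom = LayerMask.NameToLayer("Bloom");

        var baseData = baseCamera.GetUniversalAdditionalCameraData();
        baseData.renderType = CameraRenderType.Base;
        baseCamera.cullingMask &= ~(1 << bloom);   // Bloom layer unticked on the base camera

        var overlayData = bloomCamera.GetUniversalAdditionalCameraData();
        overlayData.renderType = CameraRenderType.Overlay;
        bloomCamera.cullingMask = 1 << bloom;      // only the Bloom layer on the overlay

        baseData.cameraStack.Add(bloomCamera);     // add the overlay to the base camera's stack
    }
}
```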
The Is Global option might sound confusing at first. Ultimately it does not control where the post-processing effect is applied, but when: if it is set to Global, it is always applied; otherwise you can set a layer and a boundary that triggers the effect.
The general approach is to set emission only on the materials where you want the effect to take place. If your materials are too dark otherwise, you should adjust the ambient lighting settings.
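A minimal sketch of the emission approach (the property and keyword names match the Standard/URP Lit shaders; the intensity value is an arbitrary example):

```csharp
using UnityEngine;

public class MakeEmissive : MonoBehaviour
{
    void Start()
    {
        var mat = GetComponent<Renderer>().material;
        mat.EnableKeyword("_EMISSION"); // required for the emission value to take effect
        mat.SetColor("_EmissionColor", Color.white * 2f); // HDR value above the bloom threshold
    }
}
```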
At least in URP there are some workarounds for older versions like this, but as far as I know this does not work in 2020.3, since they made some changes to URP and the camera system.
Edit: in a comment on the video, Chris Hull gave an answer for how to do it with the new system:
@Mezzanine Add your actual game objects to a created bloom layer. Create two cameras and set one of them to cull everything except that bloom layer you made. Set the other to only cull the bloom layer. Then you can set your camera to overlay and it will be added to the other. You can then use separate post process stacks on these cameras. Note that you can only bloom objects in the background with this technique, as if you add bloom to an overlay camera, for some reason it just adds bloom to everything rather than just the things in that camera view. Doesn't make much sense and makes the purpose of the layers redundant in my opinion. If you can find a way to add post process to the overlay camera before it is added to the final image, do let me know.
I have not tested that yet, but I presume it's still valid.
I have implemented a blast particle effect in my game, but when it spawns it gets cut off by the nearest environment object.
This is the problem I was getting:
Overall, there are multiple particle systems running to achieve this, but I am attaching the inspector panel details of one particle system:
A similar kind of renderer exists for almost all of the particle systems. So please guide me on how to solve the above problem of the effect being cut off by the wall.
EDIT-1:
I have added a VFX rendering camera and created a separate layer for the effects too, but there is no change in the result.
EDIT-2:
Here you have a screenshot for the Main Camera of the game:
You could add collisions to the particles so that they either bounce away from the world object or are just destroyed.
Example Collision Setup
It will add a little overhead, but depending on how many particles the effect uses it shouldn't be an issue; if it is, you could test it with a lower collision quality.
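A minimal sketch of enabling world collisions on the particle system from code (the same settings live in the Collision module of the Inspector; the bounce value is an example):

```csharp
using UnityEngine;

public class BlastParticleCollision : MonoBehaviour
{
    void Start()
    {
        var ps = GetComponent<ParticleSystem>();
        var collision = ps.collision;

        collision.enabled = true;
        collision.type = ParticleSystemCollisionType.World;
        collision.quality = ParticleSystemCollisionQuality.Medium; // lower quality = less overhead
        collision.bounce = 0.2f; // or set lifetimeLoss to 1 to destroy particles on contact
    }
}
```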
I found this advice on the polycount forums written by user AlecMoody:
[1.] Create a second camera parented to the primary camera...
[2.] Create a render layer for your [particle effect] (or assign it to an existing layer).
[3.] Set the main camera culling mask to everything except your explosion layer.
[4.] Set the child camera culling mask to only be your explosion layer. The child camera should have the clear flag set to "don't clear"
[5.] and put a positive value into the [child] camera depth.
Since, in your case, having a higher depth on the child camera doesn't make it render above the lower-depth cameras, setting the clear flags to "Depth only" may help.
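Put together as code, steps [1.] to [5.] plus the "depth only" suggestion might look like this (a sketch; "Explosion" is an assumed layer name):

```csharp
using UnityEngine;

public class ExplosionCameraRig : MonoBehaviour
{
    public Camera mainCamera;
    public Camera childCamera;

    void Start()
    {
        // [1.] parent the second camera to the primary one, with identity local pose
        childCamera.transform.SetParent(mainCamera.transform, false);
        childCamera.transform.localPosition = Vector3.zero;
        childCamera.transform.localRotation = Quaternion.identity;

        int explosion = LayerMask.NameToLayer("Explosion"); // [2.]
        mainCamera.cullingMask &= ~(1 << explosion);        // [3.]
        childCamera.cullingMask = 1 << explosion;           // [4.]
        childCamera.clearFlags = CameraClearFlags.Depth;    // "depth only", as suggested above
        childCamera.depth = mainCamera.depth + 1;           // [5.] positive depth value
    }
}
```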
First, I just want to introduce my problem to you guys, because it is really complex, so you need this context to understand it properly.
I am trying to do something with Scene Kit and Swift: I want to reproduce what we can see in the TV show Doctor Who, where the Doctor's spaceship is bigger on the inside, as you can see in this video.
Of course the Scene Kit framework doesn't support those kinds of unreal dimensions, so we need to do some sort of hackery to achieve that.
Now let's talk about my idea in plain English.
In fact, what we want to do is to display two completely different dimensions at the same place; so I was thinking of:
A first dimension for the inside of the spaceship.
A second dimension for the outside of the spaceship.
Now, let's say that you are outside of the ship: you would be in the outside dimension, and in this outside dimension my goal would be to display a portion of the inside dimension at the level of the door, to give this effect where the camera is outside but where we can clearly see that the inside is bigger:
We would use an equivalent principle from the inside.
Now let's talk about the game logic :
I think that a good way to represent these dimensions would be to use two scenes.
We will call outsideScene the scene for the outside, and insideScene the scene for the inside.
So if we take the picture again, this would give the following at the scene level:
To make it look realistic, the view of the inside needs to follow the movements of the outside camera; that's why I think that all the properties of these two cameras should be identical:
On the left is the outsideScene and on the right, the insideScene. I represent the camera field of view in orange.
If the outsideScene camera moves right, the insideScene camera will do exactly the same thing, if the outsideScene camera rotates, the insideScene camera will rotate in the same way... you get the principle.
So, my question is the following: what can I use to mask a certain portion of a certain scene (in this case the yellow zone in the outsideView) with what the camera of another view (the insideView) "sees"?
First, I thought that I could simply get an NSImage from the insideScene and then put it as the texture of a surface in the outsideScene, but the problem is that Scene Kit would compute its perspective, lighting, etc., so it would just look like we were displaying something on a screen, and that's not what I want.
There is no super easy way to achieve this in SceneKit.
If your "inside scene" is static and can be baked into a cube map texture you can use shader modifiers and a technique called interior mapping (you can easily find examples on the web).
If you need a live, interactive "inside scene" you can use the same technique but will have to render your scene to a texture first (or render your inside scene and outer scene one after the other with stencils). This can be done by leveraging SCNTechnique (new in Yosemite and iOS 8). On older versions you will have to write some OpenGL code in SCNSceneRenderer delegate methods.
I don't know if it's 'difficult'. As we often find in iOS, a lot of the time the simplest answer is the simplest answer.
Maybe consider this:
Map a texture onto a cylinder sector prescribed by the geometry of the Tardis cube shape. Make sure the cylinder radius is equal to the focal distance of the camera, and make sure you track the camera to the focal point.
The texture will be distorted because it is a cylinder mapping onto a cube. The actors' nodes in the Tardis will react properly to the camera, but there should be two groups of light sources: one set for the Tardis and one outside the Tardis.