In Unity3d, I'm trying to make a simple metaball effect using multiple sprites of a blurred disc that move around randomly.
I'd like the shader to apply its per-pixel color change to the rendering of all the sprites together, not to each sprite individually.
Here is an example:
The picture on the left shows four sprites using the blurred texture; the picture on the right is the result I want from the shader.
I have no clue how to do this.
If I understand your question correctly, what you can do is:
Create a new RenderTexture
Move these sprites off-screen, out of the main camera's view.
Point a new orthographic camera at all of the sprites that you've moved off-screen and set this camera's Target Texture field (in the Inspector view) to the render texture. This will save whatever the camera is seeing to that texture.
From here you can render that texture onto the surface of another game object (maybe a Quad?)
Attach a custom shader material to that quad that takes the render texture as input.
Perform whatever operations you wish on the render texture within this shader.
Position this quad object in front of your main camera so that the final result gets rendered to the screen (a setup sketch follows below).
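Not your exact scene, but a minimal C# sketch of the RenderTexture / camera / quad wiring, assuming a secondary orthographic camera, a quad and a threshold material already exist; the class and field names below are placeholders.

```csharp
using UnityEngine;

// Sketch: render the blurred sprites with a second camera into a RenderTexture,
// then feed that texture to the thresholding material on the quad.
public class MetaballCompositor : MonoBehaviour
{
    public Camera spriteCamera;        // orthographic camera that only sees the off-screen sprites
    public Material metaballMaterial;  // material using your custom threshold shader
    public int textureSize = 512;

    RenderTexture rt;

    void Start()
    {
        rt = new RenderTexture(textureSize, textureSize, 0);
        spriteCamera.orthographic = true;
        spriteCamera.targetTexture = rt;    // the camera now renders into the texture
        metaballMaterial.mainTexture = rt;  // the quad samples the combined sprite image
    }

    void OnDestroy()
    {
        if (rt != null) rt.Release();
    }
}
```

Inside the quad's shader, the usual metaball trick is to threshold the accumulated alpha of the blurred sprites, so overlapping blobs merge into one solid shape.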
Does this make sense?
I have two cameras in the same position. I want to use the first camera's depth buffer to render the scene again with the second camera.
Use case:
I want to detect whether a line that I draw in 3D space is rendered behind an object or not. To do that, I first draw the scene normally with the first camera. Then I render the scene again with the second camera, which renders to a texture and is set to cull everything except the line. The line uses a shader that shows red on the parts rendered behind an object, but to draw that correctly I need to render the second camera using the first camera's depth buffer. After that I just check whether the texture contains red.
Is it possible to do that in URP? Or do you have any other idea how to achieve what I want? (Image: https://i.stack.imgur.com/7hxDo.png)
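For the last step described above ("check if the texture has the red color"), here is a hedged sketch of one way to read the second camera's RenderTexture back on the CPU; the texture reference and the red threshold are assumptions, and reading pixels back every frame is expensive.

```csharp
using UnityEngine;

// Sketch: read the line camera's RenderTexture into a Texture2D and scan it
// for red pixels (the parts of the line drawn behind an object).
public class OcclusionCheck : MonoBehaviour
{
    public RenderTexture lineTexture;  // output of the second (line-only) camera

    public bool LineIsOccluded()
    {
        var previous = RenderTexture.active;
        RenderTexture.active = lineTexture;

        // Copy the render texture into a CPU-readable Texture2D.
        var tex = new Texture2D(lineTexture.width, lineTexture.height, TextureFormat.RGBA32, false);
        tex.ReadPixels(new Rect(0, 0, lineTexture.width, lineTexture.height), 0, 0);
        tex.Apply();
        RenderTexture.active = previous;

        bool occluded = false;
        foreach (var c in tex.GetPixels())
        {
            if (c.r > 0.9f && c.g < 0.1f && c.b < 0.1f) { occluded = true; break; }
        }
        Destroy(tex);
        return occluded;
    }
}
```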
I have a HoloLens app made in Unity using URP. I added a second camera with the Base render type and set its output texture to a custom render texture. I set the camera's background type to Uninitialized. I created a material and set the render texture as its surface input base map. I then created a plane, dragged the material onto it, and positioned it in the field of view of the main camera. I added a cube in the field of view of the second camera. I followed this link to do this... Rendering to a Render Texture
The result is that I see the plane and the output of the second camera (the cube) in the main camera, which is what I want. But I also see the entire plane with a black background. Is there a way to make the plane transparent so only the cube is displayed?
Camera
Render Texture
Material
Plane with render texture
Main camera with cube and black background
Your plane's material is set to Surface Type -> Opaque.
Changing that to
Surface Type -> Transparent
Blending Mode -> Alpha
should solve your issue.
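If you prefer to make the same change from a script at runtime, something like the sketch below should work for the URP/Lit shader. Note that the property names (_Surface, _Blend, _SrcBlend, _DstBlend, _ZWrite) are internal to URP and can differ between URP versions, so the Inspector settings above are the safer route.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: switch a URP/Lit material to Surface Type Transparent with Alpha blending.
public class MakePlaneTransparent : MonoBehaviour
{
    void Start()
    {
        var mat = GetComponent<Renderer>().material;
        mat.SetFloat("_Surface", 1f);  // 0 = Opaque, 1 = Transparent
        mat.SetFloat("_Blend", 0f);    // 0 = Alpha blending
        mat.SetInt("_SrcBlend", (int)BlendMode.SrcAlpha);
        mat.SetInt("_DstBlend", (int)BlendMode.OneMinusSrcAlpha);
        mat.SetInt("_ZWrite", 0);
        mat.renderQueue = (int)RenderQueue.Transparent;
    }
}
```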
I created an HLSL shader that renders a Sierpinski fractal using raymarching. Currently I have assigned the shader to a material, and this material is assigned to a cube placed in the scene, so the Sierpinski fractal is displayed / rendered on the cube geometry.
How can I use the whole screen / camera view to display my shader? I don't want to assign my shader to a material on a piece of geometry.
In case someone comes to this question and has the same problem I had, you can do the following:
use Graphics.Blit to execute a certain shader
store the result in a RenderTexture
assign this RenderTexture to e.g. a Canvas RawImage.texture
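A minimal sketch of those three steps, assuming your raymarching shader is already on a material and a RawImage is stretched over a screen-space canvas; the names below are placeholders.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: run a shader over a full-screen RenderTexture with Graphics.Blit
// and display the result on a RawImage.
public class FullscreenFractal : MonoBehaviour
{
    public Material raymarchMaterial;  // material using the Sierpinski shader
    public RawImage targetImage;       // RawImage stretched over the canvas

    RenderTexture rt;

    void Start()
    {
        rt = new RenderTexture(Screen.width, Screen.height, 0);
        targetImage.texture = rt;
    }

    void Update()
    {
        // Graphics.Blit draws a full-screen quad with the given material into rt.
        Graphics.Blit(null, rt, raymarchMaterial);
    }

    void OnDestroy()
    {
        if (rt != null) rt.Release();
    }
}
```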
I have a 3D animation and display it on a Raw Image.
The animation was quite OK when testing on a 3D plane; now that I have changed to a 2D Raw Image, the origins of the GameObjects look strange to me.
On the 3D plane, the animation models have their origin on the ground between the two legs.
Once changed to the 2D Raw Image, the origin is shifted to the chest. The first image is on the 3D plane and the second is on the 2D Raw Image.
All other positions of the club handle, club shaft and club head are also shifted toward the chest.
When I print the (x, y, z) position of the club head, I get 374.9705, 741.7168, -0.4869962. The values should be less than 2, as they were on the 3D plane.
How I run the 3D animation on the 2D Raw Image is discussed here and stated below:
(1) Put the object in a specific layer (called MyLayer for the sake of the example)
(2) Set the Culling mask of a new camera to render only this specific layer
(3) Uncheck MyLayer in the Culling mask of your main camera in order to prevent the latter from rendering your model
(4) Create a new Render texture in the project, and drag & drop it into the Render Texture field of your new Camera
(5) Add a new Raw Image to your UI canvas and assign the render texture in the Texture field
(6) Run my 3D animation
How can I get the position of the club head to be the same as it was on my previous 3D plane?
I can't shift the origin; when I drag it, the whole GameObject is shifted as well.
EDIT:
Let me add that the reason I run the animation on a Raw Image is that I need to display a 3D animation on a 2D canvas. For that I need a Raw Image and a RenderTexture to run the 3D animation.
Please see the image below.
EDIT 1:
I took it out of the canvas, but I don't see my model in the scene view to set its position. Why can't I see my model?
I see the model in the preview of my RawImage, but not in the scene.
The DisplayCamera can see the model, but when I run, I don't see the model on the green canvas either.
I'd like to play an animation on a Canvas.
I made a canvas as shown in the following image.
I'd like to play a golfer animation on the green canvas.
Is it possible?
I have the animation model shown in the second figure.
I'd like to play that golfer animation on the canvas.
How can I do that?
I dragged it under the canvas as a child object, but that doesn't work.
As I explained in my comment, I would do as follows:
Put your object in a specific layer (called MyLayer for the sake of the example)
Set the Culling mask of a new camera to render only this specific layer
Uncheck MyLayer in the Culling mask of your main camera in order to prevent the latter from rendering your model
Set the Clear flags of the camera to Depth only to prevent it from rendering the skybox
Create a new Render texture in your project, and drag & drop it in the Render Texture field of your new Camera
Add a new Raw Image to your UI canvas and assign the render texture in the Texture field
Run your 3D animation
Your camera will render the animation into the image on your UI
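The same setup can also be wired up from a script; this is only a sketch with placeholder names and sizes, assuming the new camera, the layer and the Raw Image already exist, and every one of these settings can just as well be made in the Inspector.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: have a dedicated camera render only "MyLayer" into a RenderTexture
// and show that texture on a Raw Image in the canvas.
public class AnimationToCanvas : MonoBehaviour
{
    public Camera animationCamera;  // the new camera that films the golfer
    public RawImage canvasImage;    // the Raw Image on the green canvas
    public string layerName = "MyLayer";

    void Start()
    {
        int layer = LayerMask.NameToLayer(layerName);

        // Render only MyLayer with the new camera and skip the skybox.
        animationCamera.cullingMask = 1 << layer;
        animationCamera.clearFlags = CameraClearFlags.Depth;

        // Hide that layer from the main camera.
        Camera.main.cullingMask &= ~(1 << layer);

        // Render into a texture and show it on the Raw Image.
        var rt = new RenderTexture(512, 512, 16);
        animationCamera.targetTexture = rt;
        canvasImage.texture = rt;
    }
}
```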