I have a 3D animation and display it on a Raw Image.
The animation looked fine when tested on a 3D plane; after switching to a 2D Raw Image, the origins of the GameObjects seem off.
On the 3D plane, the animated model's origin is on the ground between its two legs.
After changing to the 2D Raw Image, the origin is shifted to the chest. The first image is on the 3D plane and the second is on the 2D Raw Image.
The positions of the club handle, club shaft and club head are also shifted toward the chest.
When I print the (x, y, z) position of the club head, I get (374.9705, 741.7168, -0.4869962). On the 3D plane the values were less than 2, and less than 2 is correct.
How I run the 3D animation on a 2D Raw Image is discussed here and stated below:
(1) Put the object in a specific layer (called MyLayer for the sake of the example)
(2) Set the Culling Mask of a new camera to render only this specific layer
(3) Uncheck MyLayer in the Culling Mask of your main camera to prevent it from rendering your model
(4) Create a new Render Texture in the project, and drag & drop it into the Render Texture field of your new camera
(5) Add a new Raw Image to your UI canvas and assign the render texture in the Texture field
(6) Run my 3D animation
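For reference, the same setup can also be wired up from a script. This is just a minimal sketch, assuming a layer called MyLayer already exists and that the camera, RawImage and model references are assigned in the Inspector (the class and field names are illustrative):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Illustrative sketch: wires a dedicated camera to a RenderTexture and shows it on a RawImage.
// Assumes a layer called "MyLayer" exists and that the fields below are assigned in the Inspector.
public class ModelOnRawImage : MonoBehaviour
{
    public Camera displayCamera;   // the extra camera that only renders the model
    public Camera mainCamera;      // the main scene camera
    public RawImage rawImage;      // the RawImage on the UI canvas
    public GameObject model;       // the animated model

    void Start()
    {
        int myLayer = LayerMask.NameToLayer("MyLayer");

        // (1) put the model and all of its children on MyLayer
        foreach (Transform t in model.GetComponentsInChildren<Transform>(true))
            t.gameObject.layer = myLayer;

        // (2) the display camera renders only MyLayer
        displayCamera.cullingMask = 1 << myLayer;

        // (3) the main camera renders everything except MyLayer
        mainCamera.cullingMask &= ~(1 << myLayer);

        // (4) render the display camera into a texture
        var rt = new RenderTexture(512, 512, 16);
        displayCamera.targetTexture = rt;

        // (5) show that texture on the RawImage
        rawImage.texture = rt;
    }
}
```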
How can I get the position of the club head to be the same as on my previous 3D plane?
I can't shift the origin; when I drag it, the whole GameObject shifts as well.
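One thing that may be worth checking (only a guess, since I can't see your hierarchy): if the model ended up parented under the Canvas or the RawImage, transform.position is reported in the canvas's scaled space, which would explain values in the hundreds. A small hypothetical debug helper, with clubHead standing in for your club head transform:

```csharp
using UnityEngine;

// Hypothetical helper: logs the club head's world vs. local position and its parent,
// to check whether it is sitting under a scaled Canvas/RawImage.
public class ClubHeadDebug : MonoBehaviour
{
    public Transform clubHead; // assign the club head here (name is illustrative)

    void Update()
    {
        Debug.Log($"world {clubHead.position} | local {clubHead.localPosition} | parent {clubHead.parent.name}");
    }
}
```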
EDIT:
Let me add why I run the animation on a RawImage: I need to display the 3D animation on a 2D canvas, and for that I need a Raw Image and a RenderTexture.
Please see the image below.
EDIT 1:
I took the model out of the canvas, but I don't see it in the Scene view to set its position. Why can't I see my model?
I see the model in the preview of my RawImage, but not in the scene.
DisplayCamera can see the model, but when I run, I don't see the model on the green canvas either.
Related
Is there a way to render a camera only inside a sphere (or any 3D object)?
I want to render two cameras, each with different post-processing effects: one renders its effects in a spherical area around the player (with feathered edges if possible), and the other renders the rest of the screen.
I have a HoloLens app made in Unity using URP. I added a second camera with render type Base and set its output texture to a custom render texture. I set the camera's background type to Uninitialized. I created a material and set the render texture as the Surface Inputs base map. I then created a plane, dragged the material onto it, and positioned it in the field of view of the main camera. I added a cube in the field of view of the second camera. I followed this link to do this: Rendering to a Render Texture
The result is that I see the plane and the output of the second camera (the cube) in the main camera, which is what I want. But I also see the entire plane with a black background. Is there a way to make the plane appear transparent so only the cube is displayed?
(Screenshots: Camera, Render Texture, Material, Plane with render texture, Main camera with cube and black background)
Your plane's material is set to Surface Type -> Opaque.
Changing that to
Surface Type -> Transparent
Blending Mode -> Alpha
should solve your issue.
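If you ever need to make the same switch from a script rather than in the Inspector, something along these lines should work for the URP Lit shader; the property names below (_Surface, _Blend, _ZWrite, the blend factors and the render queue) are what URP Lit uses, but verify them against your URP version:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: switch a URP Lit material from Opaque to Transparent (alpha blended) at runtime.
// Property names follow the URP Lit shader; check them against your URP version.
public static class UrpMaterialUtil
{
    public static void MakeTransparent(Material mat)
    {
        mat.SetFloat("_Surface", 1f);                          // 0 = Opaque, 1 = Transparent
        mat.SetFloat("_Blend", 0f);                            // 0 = Alpha blending
        mat.SetFloat("_ZWrite", 0f);                           // transparent surfaces don't write depth
        mat.SetInt("_SrcBlend", (int)BlendMode.SrcAlpha);
        mat.SetInt("_DstBlend", (int)BlendMode.OneMinusSrcAlpha);
        mat.SetOverrideTag("RenderType", "Transparent");
        mat.EnableKeyword("_SURFACE_TYPE_TRANSPARENT");
        mat.renderQueue = (int)RenderQueue.Transparent;
    }
}
```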
In Unity3D, I'm trying to make a simple metaball effect using multiple sprites of a blurred circle, and those sprites move randomly.
I'd like the shader to change pixel colors based on the rendering of all the sprites together, not one by one.
Here is an example:
The picture on the left shows four sprites using the blurred sprite; the picture on the right is the result of the shader.
I have no clue how to do this.
If I understand your question correctly, what you can do is:
Create a new RenderTexture
Move these sprites off-screen, out of the main camera's view.
Point a new orthographic camera at all of the sprites that you've moved off-screen and set this camera's Target Texture field (in the Inspector view) to the render texture. This will save whatever the camera is seeing to that texture.
From here you can render that texture onto the surface of another game object (maybe a Quad?)
Attach a custom shader material to that quad that takes the render texture as input.
Perform whatever operations you wish to the render texture within this shader
Position this quad object in front of your main camera so that the final result gets rendered to screen
Does this make sense?
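In case it helps, here is a minimal sketch of the camera/RenderTexture part of those steps; the metaball threshold itself would live in the custom shader on the quad, and names like blobCamera and metaballQuad are placeholders:

```csharp
using UnityEngine;

// Sketch: render the blurred blob sprites into a RenderTexture with a dedicated
// orthographic camera, then feed that texture to the material on a screen-facing quad.
// The metaball "cutoff" is done by the custom shader assigned to that material.
public class MetaballCompositor : MonoBehaviour
{
    public Camera blobCamera;        // orthographic camera looking at the off-screen sprites
    public Renderer metaballQuad;    // quad in front of the main camera, using the threshold shader

    void Start()
    {
        var rt = new RenderTexture(Screen.width, Screen.height, 0);
        blobCamera.orthographic = true;
        blobCamera.targetTexture = rt;

        // The shader on the quad samples this texture and applies the color cutoff.
        metaballQuad.material.mainTexture = rt;
    }
}
```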
I'm using Unity3D and ARKit, and my goal is to project a camera image onto a plane detected with ARKit, so that I can render the plane with a second orthographic camera in a bird's-eye view. The result would be a textured map from a top-down view. The whole thing doesn't have to run in real time in the first version; it only needs to work for one frame on a button click.
My current steps are:
freeze the frame on button click (ReadPixels to a UI Image)
duplicate the ARKit plane mesh, so that the plane is no longer extended and tracked
Now comes the problem: how do I get the camera image (which is stored in my UI Image) correctly perspective-transformed onto my plane? Do I have to do the transformation on the texture or in the shader?
How do I handle the case where the plane is larger than the current camera image? Do I have to crop the plane first? As in the picture (case 2), only the green area can be textured.
From the ARKit plane geometry I can get the 3D vertices and the texture coordinates of the plane. I can also transform the world coordinates to screen space, but I'm struggling with how and where to do the image transformation from screen space to my detected plane.
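Since you can already transform world coordinates to screen space, one possible approach (just a sketch, not a complete solution) is to compute each plane vertex's normalized screen position at the moment of the snapshot and write it into the mesh UVs, then assign the frozen camera image as the plane's texture. Here planeMeshFilter and arCamera are placeholder names, and UVs outside [0,1] mark the part of the plane the camera image doesn't cover (your case 2):

```csharp
using UnityEngine;

// Sketch: bake the AR camera's view onto the duplicated plane by using each vertex's
// screen position (at the moment of the snapshot) as its UV into the frozen camera image.
// planeMeshFilter and arCamera are placeholder names.
public class ProjectCameraImageOntoPlane : MonoBehaviour
{
    public MeshFilter planeMeshFilter;  // the duplicated ARKit plane mesh
    public Camera arCamera;             // the camera whose image was frozen with ReadPixels

    public void Bake()
    {
        Mesh mesh = planeMeshFilter.mesh;
        Vector3[] vertices = mesh.vertices;
        Vector2[] uvs = new Vector2[vertices.Length];

        for (int i = 0; i < vertices.Length; i++)
        {
            Vector3 world = planeMeshFilter.transform.TransformPoint(vertices[i]);
            Vector3 screen = arCamera.WorldToScreenPoint(world);
            // Normalize to [0,1]; values outside this range lie outside the camera image (case 2).
            uvs[i] = new Vector2(screen.x / Screen.width, screen.y / Screen.height);
        }

        mesh.uv = uvs;
        // Then assign the frozen camera image as the plane material's main texture.
    }
}
```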
What I am doing is displaying the same 2D texture map on the screen as the one used by the model in the scene,
and fitting the model perfectly on the screen,
as the picture shows.
The red dot is the center of Bounds.
The blue dot is the center of the 2D texture image.
Since the 3D model has depth, when its center point is mapped onto the 2D image, the 2D image ends up offset from that position and the two cannot be lined up.
Edit
Finally I found the answer in this link.
You can check the Camera.WorldToScreenPoint method if you want to get a 2D screen position, as I understood it.
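For reference, a minimal example of that approach; targetRenderer is a placeholder for the model's Renderer, and the screen point of its bounds center is where the 2D image would need to be placed:

```csharp
using UnityEngine;

// Minimal example of Camera.WorldToScreenPoint: find where the model's bounds center
// lands on screen, so the 2D image can be placed at the same point.
// targetRenderer is a placeholder for the model's Renderer.
public class BoundsCenterToScreen : MonoBehaviour
{
    public Renderer targetRenderer;

    void Update()
    {
        Vector3 screenPos = Camera.main.WorldToScreenPoint(targetRenderer.bounds.center);
        Debug.Log("Bounds center on screen: " + screenPos);
    }
}
```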