I am developing an application for the Pico VR headset using Unity. The SDK has a prefab containing two cameras.
I have added a quad and a script to change its texture at runtime.
Since the quad is rectangular and the display is circular, my texture cannot fill the screen. If the quad is too close to the camera, the corners don't fit in the display. If it is far enough away that the corners fit inside the headset's display, I can see the background at the edges.
The HMD's display looks like a circle (maybe spherical, because the texture I apply to the quad looks zoomed in when I run the app on the device). I want to add such an object and configure my cameras so that they see the entire texture of that object and nothing else.
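For reference, this is a minimal sketch of how a quad could be scaled to exactly cover a camera's frustum at the quad's distance; the class and field names are hypothetical, and on a circular display there will still be a trade-off between cropped corners and visible background:

    using UnityEngine;

    // Hypothetical helper: scales a quad so it exactly fills the given camera's
    // view at the quad's current distance. Attach to the quad.
    public class FitQuadToFrustum : MonoBehaviour
    {
        public Camera targetCamera;   // assign the eye/main camera from the SDK prefab here

        void LateUpdate()
        {
            // Distance from the camera to the quad along the camera's forward axis.
            float distance = Vector3.Dot(
                transform.position - targetCamera.transform.position,
                targetCamera.transform.forward);

            // Frustum height/width at that distance, from the vertical FOV and aspect ratio.
            float height = 2f * distance * Mathf.Tan(targetCamera.fieldOfView * 0.5f * Mathf.Deg2Rad);
            float width  = height * targetCamera.aspect;

            transform.localScale = new Vector3(width, height, 1f);  // Unity's built-in Quad is 1x1 units
            transform.rotation   = targetCamera.transform.rotation; // keep the quad facing the camera
        }
    }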
Related
I have a HoloLens app made in Unity using URP. I added a second camera with Render Type set to Base and set its Output Texture to a custom render texture. I set the camera's Background Type to Uninitialized. I created a material and set the render texture as the Surface Inputs Base Map. I then created a plane, dragged the material onto it, and positioned it in the field of view of the main camera. I added a cube in the field of view of the second camera. I followed this link to do this... Rendering to a Render Texture
The result is that I see the plane and the output of the second camera (the cube) in the main camera, which is what I want. But I see the entire plane as well as a black background. Is there a way to make the plane appear transparent so only the cube is displayed?
Screenshots: Camera settings, Render Texture, Material, Plane with render texture, Main camera showing the cube and black background.
Your plane's material is set to Surface Type -> Opaque.
Changing that to
Surface Type -> Transparent
Blending Mode -> Alpha
should solve your issue.
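If you ever need to make the same change from a script instead of the Inspector, something along these lines can work for URP's Lit shader. The _Surface and _Blend property names and values here are assumptions and may differ between URP versions:

    using UnityEngine;
    using UnityEngine.Rendering;

    // Sketch: switches a URP Lit material to Surface Type = Transparent, Blending Mode = Alpha.
    public class MakeMaterialTransparent : MonoBehaviour
    {
        void Start()
        {
            Material mat = GetComponent<Renderer>().material;

            // Assumed URP Lit property values: _Surface 1 = Transparent, _Blend 0 = Alpha.
            mat.SetFloat("_Surface", 1f);
            mat.SetFloat("_Blend", 0f);

            // Standard alpha-blend state and queue, mirroring what the Inspector sets for you.
            mat.SetInt("_SrcBlend", (int)BlendMode.SrcAlpha);
            mat.SetInt("_DstBlend", (int)BlendMode.OneMinusSrcAlpha);
            mat.SetInt("_ZWrite", 0);
            mat.EnableKeyword("_SURFACE_TYPE_TRANSPARENT");
            mat.renderQueue = (int)RenderQueue.Transparent;
        }
    }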
In Unity3D, I'm trying to make a simple metaball effect using multiple sprites of a blurred circle, and those sprites move randomly.
So I'd like the shader to perform its pixel color change on the rendering of all the sprites together, not one by one.
Here is an example:
The picture on the left shows four sprites using the blurred image; the picture on the right is the desired result of the shader.
And I have no clue how to do this.
If I understand your question correctly, what you can do is
Create a new RenderTexture
Move these sprites off-screen, out of the main camera's view.
Point a new orthographic camera at all of the sprites that you've moved off-screen and set this camera's Target Texture field (in the Inspector view) to the render texture. This will save whatever the camera is seeing to that texture.
From here you can render that texture onto the surface of another game object (maybe a Quad?)
Attach a custom shader material to that quad that takes the render texture as input.
Perform whatever operations you wish on the render texture within this shader
Position this quad object in front of your main camera so that the final result gets rendered to screen
Does this make sense?
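If it helps, here is a rough C# sketch of that wiring; the field names are placeholders, and the actual metaball thresholding would live in the custom shader on the quad's material:

    using UnityEngine;

    // Sketch: renders an off-screen orthographic camera into a RenderTexture and
    // feeds that texture to the material of a quad in front of the main camera.
    public class MetaballSetup : MonoBehaviour
    {
        public Camera spriteCamera;   // orthographic camera pointed at the off-screen sprites
        public Renderer outputQuad;   // quad in front of the main camera, using the custom shader

        RenderTexture rt;

        void Start()
        {
            rt = new RenderTexture(Screen.width, Screen.height, 0);
            spriteCamera.orthographic = true;
            spriteCamera.targetTexture = rt;   // the camera now renders into the texture

            // "_MainTex" is the conventional texture property; your shader may name it differently.
            outputQuad.material.SetTexture("_MainTex", rt);
        }

        void OnDestroy()
        {
            if (rt != null) rt.Release();
        }
    }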
I need to build an app using Unity which doesn't use a traditional camera to generate the graphics. I'll build them using some custom shaders and a few cameras whose results get stuffed into render textures and then frobbed. (Think http://www.purplefrog.com/~thoth/art/kaleidescope/kaleid1.html but even weirder)
I'm not sure what objects I would put in the scene to accomplish this. In any normal app you just put a camera and point it at the right spot and Unity gets the pixels into the window, but that is just not how this thing will work.
I'm not sure if I should be using a UI Canvas or what APIs would be used to copy various render textures into the proper locations.
If you are not targeting WebGL you can create a RenderTexture of the proper size (maybe using RenderTexture.GetTemporary) and use Graphics.CopyTexture or other techniques to assemble the image you want displayed in the game window.
Once you have the pixels you want in the RenderTexture you can use Graphics.Blit(src, (RenderTexture)null); which will copy the pixels into the game window. These pixels will be stretched if the game window is not the same size as the RenderTexture.
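As a minimal sketch of that idea, assuming a coroutine running at end of frame is an acceptable place to present the result (the code that actually assembles the texture is left out):

    using System.Collections;
    using UnityEngine;

    // Sketch: blits an assembled RenderTexture straight to the game window each frame,
    // without relying on a scene camera for the final output.
    public class BlitToScreen : MonoBehaviour
    {
        RenderTexture rt;

        void Start()
        {
            rt = new RenderTexture(Screen.width, Screen.height, 0);
            StartCoroutine(PresentLoop());
        }

        IEnumerator PresentLoop()
        {
            while (true)
            {
                yield return new WaitForEndOfFrame();
                // ... assemble rt here with Graphics.CopyTexture / Graphics.Blit calls ...
                Graphics.Blit(rt, (RenderTexture)null);   // null destination = the game window
            }
        }

        void OnDestroy()
        {
            if (rt != null) rt.Release();
        }
    }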
This technique worked for me in the editor's game window, but when I compile it to WebGL, all I get is a mostly-grey screen with a really big black rectangle in the bottom left.
I'm fairly new to Unity and I'm trying to embed a 3D view inside a 2D one.
I'm working on an emulation app that has a 2D UI for controls and a preview of the result in a 3D box that should be embedded in the 2D one, sort of like a player.
What's the right approach to doing that in Unity? Is there a way of "embedding" one scene in another?
Thanks!
If you want to create a 3D effect with a UI canvas, you should look at this link.
If you are using a 2D project, it is basically a 3D scene with the camera set to use orthographic projection instead of perspective, so you can use 3D models as well.
You should look into Render Textures. Those allow you to render a camera's view onto a texture in the scene. Say you have a part of the scene that you want to show on a TV screen in your game. You would place that part of the scene somewhere, point a camera at it, then create a render texture and apply it to the mesh that makes up your TV screen.
Now if you wish to make a UI element, like a radar with a top-down view, you would modify the viewport rect of your top-view camera (0, 0, 0.2, 0.2 would place it in the bottom-left corner at 20% of the screen's width and height) and set its depth higher so that it renders on top of the main camera.
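A minimal sketch of that second approach, with placeholder field names:

    using UnityEngine;

    // Sketch: shows a secondary camera's view as an inset on top of the main camera,
    // e.g. a radar or an embedded 3D preview inside a 2D UI.
    public class InsetCamera : MonoBehaviour
    {
        public Camera mainCamera;    // the camera rendering the 2D UI/scene
        public Camera insetCamera;   // the camera rendering the 3D preview

        void Start()
        {
            // Bottom-left corner, 20% of the screen's width and height.
            insetCamera.rect = new Rect(0f, 0f, 0.2f, 0.2f);

            // Higher depth = drawn later = rendered on top of the main camera.
            insetCamera.depth = mainCamera.depth + 1;
        }
    }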
I am playing with Unity3D and have tried making a small and simple game. Now I want to include a splash screen. Since the free version of Unity3D doesn't allow directly choosing the splash screen image in Player Settings, I have followed this documentation: HOWTO-Splash screen in Unity3D
What I have done is create a new Scene, then drag and drop my PNG image of size 1024x512 px into the Assets. Then I clicked on this image and, in the Import Settings pane, set the Texture Type to Texture and hit the Apply button.
Then I created a new Cube object by going to Game Object --> Create Other --> Cube. For this cube, I set all three position coordinates to 0 and the x,y scaling to (16, 9).
Then I dragged and dropped this splash screen image from the Assets window onto this Cube. But the rendering shows the image inverted vertically! Also, the image has a white background with some text in it, but in the rendered window (i.e. the Game view), it appears in a faded color!
Where did I go wrong?
I suspect there are two issues:
I suspect the shader that your cube uses relies on lighting, which is why you're getting a faded colour. If that's the case, change the shader on the material to an unlit shader.
The vertically inverted image is a bit odd, but I suspect you could scale the cube negatively on whichever axis is incorrect.
I suspect the splash screen may be easier to create with Unity's sprite system.
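For the first two points, something along these lines could be attached to the cube; Unlit/Texture is a built-in shader, and flipping the texture's V tiling is an alternative to scaling the cube negatively:

    using UnityEngine;

    // Sketch: switches the cube's material to an unlit shader so lighting doesn't fade
    // the image, and mirrors the texture vertically via the material's tiling/offset.
    public class FixSplashCube : MonoBehaviour
    {
        void Start()
        {
            Material mat = GetComponent<Renderer>().material;

            mat.shader = Shader.Find("Unlit/Texture");

            // Negative Y tiling plus an offset of 1 mirrors the texture vertically.
            mat.mainTextureScale  = new Vector2(1f, -1f);
            mat.mainTextureOffset = new Vector2(0f, 1f);
        }
    }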