I want to create a mesh in Unity or OpenGL using an alpha mask, like the simple one I show below, so I can hide the part of a virtual object that is behind the mesh.
Is it possible? Is there any easier method to hide part of a virtual object?
“Stencil testing” is what you want. Look up how to do it in whatever game engine / graphics API you are using.
Is it possible to use the new UI Builder and UI Toolkit for world-space UI in virtual reality?
I have seen ways of doing it with render textures, but not only does it not seem to be the same as a World Space Canvas (which I did expect, but it's not even close), I also can't find a way to interact with it using the VR raycast method.
This isn't officially supported yet but is certainly possible for those willing to implement it themselves.
As the OP mentioned, rendering the UI is straightforward enough: the main idea is to set panelSettings.targetTexture to a RenderTexture, which you can then apply to a quad like any other texture.
Note that if you have multiple UIs you will need multiple instances of PanelSettings.
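A minimal sketch of that setup, assuming a PanelSettings asset assigned to your UIDocument and a quad with a MeshRenderer in the scene (the component and field names here are illustrative, not from the sample):

```csharp
using UnityEngine;
using UnityEngine.UIElements;

// Renders a UI Toolkit panel into a RenderTexture and shows it on a quad.
// Assumes 'panelSettings' is the PanelSettings asset used by your UIDocument
// and 'quadRenderer' is the MeshRenderer of a quad in the scene.
public class WorldSpaceUIPanel : MonoBehaviour
{
    public PanelSettings panelSettings;
    public Renderer quadRenderer;

    RenderTexture panelTexture;

    void Awake()
    {
        // One RenderTexture (and one PanelSettings instance) per world-space UI.
        panelTexture = new RenderTexture(1024, 768, 24);
        panelSettings.targetTexture = panelTexture;

        // Apply the panel texture to the quad like any other texture.
        quadRenderer.material.mainTexture = panelTexture;
    }

    void OnDestroy()
    {
        panelTexture.Release();
    }
}
```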
For raycasting there is a method, panelSettings.SetScreenToPanelSpaceFunction, which can be used to translate 2D screen coordinates to panel coordinates; here is an official Unity sample demonstrating how it can be implemented in camera space. The function is called every update, so it can be hijacked to use a raycast from a controller instead of screen coordinates, although I've had mixed results with this approach.
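A rough sketch of hijacking that callback with a controller ray, assuming the quad from the previous sketch has a MeshCollider (so textureCoord is valid); the UV-to-panel conversion approximates what the official sample does in camera space:

```csharp
using UnityEngine;
using UnityEngine.UIElements;

// Feeds controller-ray hits into the UI Toolkit panel instead of screen coordinates.
// Assumes 'controller' is the transform of your XR controller and the quad showing
// the panel texture has a MeshCollider.
public class VRPanelRaycaster : MonoBehaviour
{
    public PanelSettings panelSettings;
    public Transform controller;

    void Start()
    {
        panelSettings.SetScreenToPanelSpaceFunction((Vector2 _) =>
        {
            // Returning NaN tells the panel there is no pointer over it this frame.
            var invalid = new Vector2(float.NaN, float.NaN);

            if (!Physics.Raycast(controller.position, controller.forward,
                                 out RaycastHit hit, 10f))
                return invalid;

            // Convert the quad's UV hit point into panel pixel coordinates.
            Vector2 uv = hit.textureCoord;
            var tex = panelSettings.targetTexture;
            return new Vector2(uv.x * tex.width, (1f - uv.y) * tex.height);
        });
    }
}
```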
Check out this repo for an example implementation in XR; it is a more sophisticated solution that makes extensive use of the Input System.
I am working on a projection mapping project and I am prototyping in Unity 3D. I have a cube-like object with a 3D terrain and characters in it.
To recreate the 3D perspective and feel, I am using two projectors which will project onto a real-world object that exactly matches the Unity object. In order to do this I need to extract 2D views of the shape in Unity.
Is there an easy way to achieve this?
Interesting project. It sounds like you would need multiple displays, one for each projector, each fed by a separate virtual camera in Unity, as documented here.
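As a rough sketch of that setup, assuming both projectors are connected as separate displays (the camera references are placeholders for whatever cameras frame your cube):

```csharp
using UnityEngine;

// Activates a second display and routes one camera to each projector.
// 'projectorCameraA' and 'projectorCameraB' are placeholder references to the
// two cameras framing the sides of the virtual cube.
public class ProjectorDisplays : MonoBehaviour
{
    public Camera projectorCameraA;
    public Camera projectorCameraB;

    void Start()
    {
        // Display.displays[0] (the main display) is always active.
        // Extra displays only activate in a built player, not in the editor.
        if (Display.displays.Length > 1)
            Display.displays[1].Activate();

        projectorCameraA.targetDisplay = 0; // first projector
        projectorCameraB.targetDisplay = 1; // second projector
    }
}
```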
I'm not sure if I understood your concept correctly from the description above. If the spectator should be able to walk around the cube onto which the rendered virtual scene is projected, it would also be necessary to track the spectator's head/eyes to achieve a convincing 3D effect: the virtual scene would need to be rendered from the matching point of view in virtual space (which only works for a single spectator). Otherwise the perspective would only be "right" from one single point in real space.
The effect would also only be convincing with a stereo view, for example using shutter glasses or something similar. Shadows are another problem when projecting onto the cube from outside the scene. With only two projectors, you would also need to correct the perspective distortion when projecting onto multiple sides of the cube at the same time.
As an inspiration: there's also this fantastic experiment by Johnny Chung Lee demonstrating a head-tracking technique using the Wii Remote, which might be useful in a projection mapping project like yours.
(In order to really solve this problem, it might be best to use AR glasses instead of conventional projectors - glasses that have the projector built in and use special projection surfaces that allow for multiple spectators at the same time (like CastAR). But I have no idea if these devices are already on the market... However, I see the appeal of simple projection mapping without special equipment. In that case it might be possible to move away from a realistic 3D scene and use more experimental/abstract graphics projected onto the cube...)
I am trying to use the OpenVR Overlay API to overlay a 3D model on top of another VR application.
I have successfully used this API, with some help from this HeadlessOverlayToolkit to overlay planes.
I have arranged 6 planes to make a 3D cube and can overlay that.
I am trying to figure out if there is a way to overlay actual 3D models, and if so, how.
I see in the OpenVR docs that IVROverlay allows you to render 2D content through the compositor. However, if it is possible to construct 3D shapes out of 2D planes, then surely it should also be possible to overlay 3D models?
Any insight, experience or guidance here would be appreciated.
All the best,
Liam
It is possible. Create your overlay as usual, then call SetOverlayRenderModel. It takes a path to an .obj file as an argument. The only caveat is that for some reason you still need to provide a texture, otherwise the model will not appear, but it can be a transparent 1x1 one so that it is not visible - see this issue for details.
Note that it is currently impossible to use a dynamically generated mesh; you can only load from a file. It is also impossible to do animations.
There appears to be no error reporting anywhere when SteamVR does not like your model, even though the function is supposed to return an EVROverlayError - the model just won't appear. If this happens, double-check all paths and try loading the default controller model from C:\Program Files (x86)\Steam\steamapps\common\SteamVR\resources\rendermodels\vr_controller_vive_1_5\vr_controller_vive_1_5.obj, since that path is definitely correct. I had some problems loading models with no textures, so make sure your models are correctly textured and UV-mapped.
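For reference, a rough C# sketch using the Valve.VR bindings that ship with the SteamVR plugin; the overlay key and file paths are placeholders, error checks are trimmed, and it assumes OpenVR has already been initialized as an overlay application:

```csharp
using Valve.VR;

public static class OverlayModelExample
{
    // Creates an overlay and attaches a 3D model to it.
    // The overlay key and file paths are placeholders; adjust them for your setup.
    public static ulong CreateModelOverlay()
    {
        ulong handle = 0;
        OpenVR.Overlay.CreateOverlay("my.overlay.model", "My 3D Overlay", ref handle);

        // A texture must still be provided or the model will not appear;
        // a transparent 1x1 image is enough if you only want the .obj to show.
        OpenVR.Overlay.SetOverlayFromFile(handle, @"C:\path\to\transparent1x1.png");

        // Attach the .obj. SteamVR's bundled controller .obj files are a good
        // sanity check that the call itself works.
        var color = new HmdColor_t { r = 1f, g = 1f, b = 1f, a = 1f };
        EVROverlayError err = OpenVR.Overlay.SetOverlayRenderModel(
            handle, @"C:\path\to\model.obj", ref color);

        OpenVR.Overlay.ShowOverlay(handle);
        return handle;
    }
}
```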
How do I properly "attach" an image to a GameObject?
Without knowing what you want the image for, there are different ways to have an object render as an image.
Assuming you're trying to make a 2D game, it sounds like what you are looking for is the Sprite Renderer.
The GUI Texture you tried to use is deprecated and was part of an old system for menus and UI elements. If a UI is what you're after, take a look at UI Image.
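For the 2D-game case, a minimal sketch of attaching a sprite from code (the Resources path is a placeholder, and the texture must be imported with its type set to Sprite):

```csharp
using UnityEngine;

// Adds a SpriteRenderer to this GameObject and assigns a sprite loaded from Resources.
// "Sprites/MyImage" is a placeholder path.
public class AttachImage : MonoBehaviour
{
    void Start()
    {
        var spriteRenderer = gameObject.AddComponent<SpriteRenderer>();
        spriteRenderer.sprite = Resources.Load<Sprite>("Sprites/MyImage");
    }
}
```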
I want to make a circular progress bar for my video game in UnityScript. How can I achieve that using a 3D model/texture for it? I haven't written any code yet; this is just a question about how to do something like this in plain UnityScript.
You can use an alpha gradient texture that is then used as a cutoff for your progress bar image.
Note that you'll need a shader that supports alpha cutoff for this to work, and you'll have to set its _Cutoff property by hand.
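A minimal sketch of driving that cutoff from code (shown in C#, though the same calls exist in UnityScript), assuming a material whose shader exposes a _Cutoff float, as Unity's built-in Transparent/Cutout shaders do, and a radial alpha-gradient texture:

```csharp
using UnityEngine;

// Drives a circular progress bar by animating the alpha cutoff of a cutout material.
// Assumes the material's shader exposes a "_Cutoff" float and its texture is a
// radial alpha gradient.
public class RadialProgress : MonoBehaviour
{
    public Renderer progressRenderer;

    [Range(0f, 1f)]
    public float progress; // 0 = empty, 1 = full

    void Update()
    {
        // A higher cutoff discards more of the gradient; depending on how the
        // gradient is authored you may need to flip this mapping.
        progressRenderer.material.SetFloat("_Cutoff", 1f - progress);
    }
}
```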
Another approach would be to use the NGUI plugin and a UISprite with radial fill; that does the same thing without the need to modify shader parameters by hand. The downside is that the NGUI plugin is not free.