I am trying to use the OpenVR Overlay API to overlay a 3D model on top of another VR application.
I have successfully used this API, with some help from this HeadlessOverlayToolkit to overlay planes.
I have arranged 6 planes to make a 3d cube and can overlay that.
I am trying to figure out if there is a way to overlay actual 3D models, and if so, how?
I see in the OpenVR docs that IVROverlay allows you to render 2D content through the compositor. However, if it is possible to construct 3D shapes out of 2D planes, surely it should be possible to overlay 3D models too?
Any insight, experience or guidance here would be appreciated.
All the best,
Liam
It is possible. Create your overlay as usual, then call SetOverlayRenderModel. It takes a path to an .obj file as an argument. The only caveat is that for some reason you still need to provide a texture, otherwise the model will not appear, but it can be a transparent 1x1 one so that it is not visible - see this issue for details.
Note that currently it is impossible to add a dynamically generated mesh; you can only load from a file. Animations are also impossible.
SteamVR does not appear to report errors anywhere when it does not like your model, even though the function is supposed to return an EVROverlayError; the model just won't appear. If this happens, double-check all paths and try loading the default controller model from C:\Program Files (x86)\Steam\steamapps\common\SteamVR\resources\rendermodels\vr_controller_vive_1_5\vr_controller_vive_1_5.obj, which is definitely correct. I had some problems loading models with no textures, so make sure your models are correctly textured and UV-mapped.
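The transparent placeholder texture mentioned above is easy to generate yourself. Here is a minimal sketch in Python that writes the bytes of a 1x1 fully transparent RGBA PNG using only the standard library (the function name is mine, not part of any OpenVR API):

```python
import struct
import zlib

def _chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: length, type, data, CRC."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def transparent_1x1_png() -> bytes:
    """Return the bytes of a 1x1 fully transparent RGBA PNG."""
    # IHDR: width=1, height=1, bit depth=8, color type=6 (RGBA)
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 6, 0, 0, 0)
    # One scanline: filter byte 0, then a single (0, 0, 0, 0) pixel
    raw = b"\x00\x00\x00\x00\x00"
    return (b"\x89PNG\r\n\x1a\n"
            + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(raw))
            + _chunk(b"IEND", b""))
```

Write the result to disk once and point the overlay's texture at that file (for example via SetOverlayFromFile); the model then renders while the placeholder stays invisible.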
Related
I'm looking for either a 3D model or an image file over which I can apply my own custom graphical elements, such as eyeliner or lipstick.
In the ARCore docs, the solution to this issue is very well described. You can get either an FBX file or a PSD template, over which you place your own elements.
From what I can tell, the principles of ARCore and ARKit are very much the same - there's a standard face mesh which gets contorted to the shape of a detected face. However, I'm unable to find any such materials using Google.
Just use the same face model and use slightly larger copies of it for the makeup. No one is going to get close enough to see how thick it's caked on, because all the polys would start disappearing anyway...
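The "slightly larger copy" above is just a uniform scale of the face mesh about its centroid. A toy sketch in Python, using plain vertex lists rather than any engine API:

```python
def scale_about_centroid(vertices, factor):
    """Return vertices uniformly scaled about their centroid.

    vertices: list of (x, y, z) tuples.
    factor: e.g. 1.01 for a makeup shell sitting just above the skin.
    """
    n = len(vertices)
    cx = sum(v[0] for v in vertices) / n
    cy = sum(v[1] for v in vertices) / n
    cz = sum(v[2] for v in vertices) / n
    return [(cx + factor * (x - cx),
             cy + factor * (y - cy),
             cz + factor * (z - cz)) for (x, y, z) in vertices]
```

In practice you would apply this once to a copy of the face mesh, assign the makeup texture to the copy, and let the tracking deform both meshes identically.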
I am working on a Projection Mapping Project and I am prototyping in Unity 3D. I have a cube-like object with 3D terrain and characters in it.
To recreate the 3D perspective and feel I am using two projectors which will project onto a real-world object which is exactly like the Unity object. In order to do this I need to extract 2D views from the shape in Unity.
Is there an easy way to achieve this?
Interesting project. It sounds like you would need multiple displays, one for each projector, each using a separate virtual camera in Unity, as documented there.
Not sure if I understood your concept correctly from the description above. If the spectator should be able to walk around the cube, onto which the rendered virtual scene should be projected, it would also be necessary to track a spectator's head/eyes to realize a convincing 3D effect. The virtual scene would need to be rendered from the matching point of view in virtual space (works for only one spectator). Otherwise the perspective would only be "right" from one single point in real space.
The effect would also only be convincing with stereo view, either by using shutter glasses or something similar. Shadows are another problem when projecting onto the cube from outside the scene. By using only two projectors, you would also need to correct the perspective distortion when projecting onto multiple sides of the cube at the same time.
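One common way to render "from the matching point of view" is a generalized off-axis projection (Kooima-style): build an asymmetric frustum from the projection surface's corners and the tracked eye position. A sketch in Python with plain row-major 4x4 lists; all names here are mine, not from Unity:

```python
import math

def _sub(a, b): return [a[i] - b[i] for i in range(3)]
def _dot(a, b): return sum(a[i] * b[i] for i in range(3))
def _cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
def _normalize(a):
    length = math.sqrt(_dot(a, a))
    return [x / length for x in a]

def off_axis_projection(pa, pb, pc, pe, near, far):
    """Row-major 4x4 projection for a planar screen with corners
    pa (lower-left), pb (lower-right), pc (upper-left), seen from eye pe."""
    vr = _normalize(_sub(pb, pa))   # screen right axis
    vu = _normalize(_sub(pc, pa))   # screen up axis
    vn = _normalize(_cross(vr, vu)) # screen normal, toward the eye
    va, vb, vc = _sub(pa, pe), _sub(pb, pe), _sub(pc, pe)
    d = -_dot(va, vn)               # eye-to-screen distance
    l = _dot(vr, va) * near / d
    r = _dot(vr, vb) * near / d
    b = _dot(vu, va) * near / d
    t = _dot(vu, vc) * near / d
    # Off-axis frustum (OpenGL-style), composed with the screen-space
    # rotation and a translation moving the eye to the origin.
    P = [[2*near/(r-l), 0, (r+l)/(r-l), 0],
         [0, 2*near/(t-b), (t+b)/(t-b), 0],
         [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
         [0, 0, -1, 0]]
    M = [[vr[0], vr[1], vr[2], 0],
         [vu[0], vu[1], vu[2], 0],
         [vn[0], vn[1], vn[2], 0],
         [0, 0, 0, 1]]
    T = [[1, 0, 0, -pe[0]], [0, 1, 0, -pe[1]],
         [0, 0, 1, -pe[2]], [0, 0, 0, 1]]
    def matmul(A, B):
        return [[sum(A[i][k]*B[k][j] for k in range(4)) for j in range(4)]
                for i in range(4)]
    return matmul(P, matmul(M, T))
```

Each cube face would get its own corner triple; as the tracked head moves, you recompute the matrix per frame for each virtual camera.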
As an inspiration: There's also this fantastic experiment by Johnny Chung Lee demonstrating a head tracking technique using the Wii Remote, that might be useful in a projection mapping project like yours.
(In order to really solve this problem, it might be best to use AR glasses instead of conventional projectors, which have the projector built in, and use special projection surfaces that allow for multiple spectators at the same time (like CastAR). But I have no idea, if these devices are already on the market... - However, I see the appeal of a simple projection mapping without using special equipment. In that case it might be possible to get away from a realistic 3D scene, and use more experimental/abstract graphics, projected onto the cube...)
I'm tasked with developing an application, which would emulate augmented reality in a virtual reality application. We are using Google Cardboard (Google VR), and want to show the camera images (don't mind the actual camera setup, say I already have the images) to the user.
I'm wondering about the ways to implement it. Some ideas I had:
Substituting the images rendered for each eye with my custom camera images.
Here I have the following problems: I don't know how to actually replace the images that are rendered to the screen, let alone to each eye. And how to afterwards show some models overlayed on top of the image (I would assume by using the Stencil Buffer?).
Placing 2 planes in front of the camera with the custom images rendered onto them
In this case, I'm not sure about the overall comfort of the user experience, as the planes would most likely be placed really close, so you only see one plane with one eye, and not the other. It seems like it might strain your eyes, because they would converge on something that is really close to you.
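For the second idea, each per-eye quad has to exactly fill that eye camera's frustum; the required size at a given distance follows directly from the field of view. A quick sketch (the numbers in the test are only illustrative):

```python
import math

def fullscreen_quad_size(vertical_fov_deg, aspect, distance):
    """Width and height a quad needs in order to exactly fill a
    camera's view when placed `distance` units in front of it,
    given the camera's vertical FOV (degrees) and aspect ratio."""
    height = 2.0 * distance * math.tan(math.radians(vertical_fov_deg) / 2.0)
    return height * aspect, height
```

Parenting the quad to the eye camera at that distance and size makes it behave like a camera feed backdrop regardless of head rotation.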
Somehow I haven't found a project that tries to achieve something like this, especially with all the Windows Mixed Reality content polluting the search results.
You can use Vuforia digital eyewear, here is the documentation for it.
And a simple tutorial on YouTube.
I'm somewhat familiar with tango and unity. I have worked through the examples and can get them to work correctly. I have seen some people doing an AR type example where they have their custom objects in an area to interact with or another example would be directions where you follow a line to a destination.
The one thing I cannot figure out is how to precisely place a 3d object into a scene. How are people getting that data to place it within unity in the correct location? I have an area set up and the AR demo seems promising but I'm not placing objects with the click of a finger. What I am looking to do is when they walk by my 3d object will already be there and they can interact with it. Any ideas? I feel like I've been searching everywhere with little luck to an answer to this question.
In my project, I have a specific space the user will always be in - so I place things in the (single room) scene when I compile.
I create an ADF using the provided apps, and then my app has a mode where it does 3D reconstruction and saves off the mesh.
I then load the mesh into my Unity scene (I have to rotate it by 180° about the Y axis because of how I save the .obj files)
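That 180° Y rotation can also be baked straight into the .obj, since rotating about Y by 180° maps (x, y, z) to (-x, y, -z). A small sketch that rewrites the vertex and normal lines of an .obj (a simplification: it ignores optional extra components on `v` lines):

```python
def rotate_obj_y180(lines):
    """Rotate the vertices and normals of an .obj file 180 degrees
    about the Y axis: (x, y, z) -> (-x, y, -z)."""
    out = []
    for line in lines:
        parts = line.split()
        if parts and parts[0] in ("v", "vn"):
            x, y, z = map(float, parts[1:4])
            out.append(f"{parts[0]} {-x} {y} {-z}")
        else:
            out.append(line)  # faces, UVs, comments pass through unchanged
    return out
```

Face winding is unaffected by a pure rotation, so the mesh stays correctly oriented.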
You now have a guide letting you place objects exactly where you want them, and a nice environment to build up your scene.
I disable the mesh before I build. When tango localises, your unity stuff matches up with the tango world space.
If you want to place objects programmatically, you can place them in scripts using Instantiate.
I also sometimes have my app place markers with a touch, like in the examples, and record the positions to a file, which I then use to place objects specifically... But having a good mesh loaded into your scene is really the nicest way I've found.
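Recording touch-placed markers and replaying them later is just serializing positions; a minimal sketch of the file round-trip (the JSON schema here is made up for illustration, not any Tango format):

```python
import json

def save_markers(path, markers):
    """markers: list of dicts like {"name": "table", "pos": [x, y, z]},
    with positions expressed in ADF/area-learning space."""
    with open(path, "w") as f:
        json.dump({"markers": markers}, f, indent=2)

def load_markers(path):
    with open(path) as f:
        return json.load(f)["markers"]
```

On the next run, once Tango localizes against the same ADF, iterating over `load_markers(...)` and calling Instantiate at each position puts the objects back where they were recorded.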
I'm starting to use Maya to do some bone animations of a 2D character, which is composed of several different parts (legs, body, head, weapon, etc.). I have each part in a separate .PNG image file. Right now I have a polygon for each part, with its own material and texture:
I was wondering if there's a way to automatically combine the textures into an atlas, make all the polygons use the same material with the atlas, and correct their UV maps so they still point to the right part of the atlas. Right now, I can do it manually in reverse: I can make the atlas outside Maya with other tools, then use the atlas on a material and manually correct the UV maps of each polygon. But it's a very long process and if I need to change a texture, I have to do it all over again. So I was wondering if there's a way to automate it.
The reason why I'm trying to do this is to save draw calls in Unity. From what I understand, Unity can batch objects as long as they share the same material. So instead of having a draw call for each polygon in the character, I'd like to have a single draw call for the whole character. I'm pretty new to Maya, so any help would be greatly appreciated!
If you want to do the atlassing in Maya, you can do it by duplicating your mesh and using the Transfer Maps tool to bake all of the different meshes onto the duplicate as a single model. The steps would be:
1) Duplicate the mesh
2) Use UV layout to make sure that the duplicated mesh has no overlapping UVs (or only has them where appropriate, like mirrored pieces).
3) Use the Transfer Maps... tool to project the original mesh onto the new one, using the "Use topology" option to ensure that the projection is clean.
The end result should be that the new model has the same geometry and appearance as the original, but with all of its textures combined onto a single sheet attached to a single material.
The limitation of this method is that some kinds of mesh (particularly meshes that self-intersect) may not project properly, leaving you to manually touch up the atlassed texture. As with any atlassing solution you will probably see some softening in the textures, since the atlas texture is not a pixel-for-pixel copy but rather a projection, and thus a resampling.
You may find it more flexible to reprocess the character in Unity with a script or AssetPostprocessor. Unity has a native texture atlassing function, documented here. Unity comes with a script for combining static meshes, but for skinned meshes you'd need to implement your own; there's an example on the Unity wiki that probably does what you want: http://wiki.unity3d.com/index.php?title=SkinnedMeshCombiner (caveat: we do something similar to this at work, but I can't share it; I have not used the one in this link). FWIW, Unity's native atlassing works only in rectangles, so it's not as memory-efficient as something you could write yourself.
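Whichever tool does the packing, the UV correction itself is mechanical: once a part's texture lands in a sub-rectangle of the atlas, every UV of that part is scaled and offset into that rectangle. A toy sketch for the simplest case, a uniform n-by-n grid (real packers such as Unity's Texture2D.PackTextures return arbitrary rectangles instead):

```python
def remap_uv_to_grid(uv, cell, grid_size):
    """Map a UV from a part's own [0,1]^2 space into its cell of an
    n-by-n grid atlas. cell is (column, row) counted from bottom-left."""
    u, v = uv
    cx, cy = cell
    n = grid_size
    return ((cx + u) / n, (cy + v) / n)
```

Applying this to every UV of every part, with each part assigned its own cell, is exactly the "correct their UV maps" step from the question; the atlas itself is just the part textures blitted into the matching cells.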