How to grab the 2D views/textures from a 3D Object in Unity

I am working on a Projection Mapping Project and I am prototyping in Unity 3D. I have a cube like object with a 3D terrain and characters in it.
To recreate the 3D perspective and feel, I am using two projectors which will project onto a real-world object that is exactly like the Unity object. In order to do this I need to extract 2D views from the shape in Unity.
Is there an easy way to achieve this?

Interesting project. It sounds like you would need multiple displays, one for each projector, each using a separate virtual camera in Unity, as described in the Unity multi-display documentation.
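A minimal sketch of that setup, assuming a standalone build with two connected displays and two cameras already in the scene (the component and field names here are illustrative):

```csharp
using UnityEngine;

// Minimal sketch: one camera per projector, each rendering to its own display.
// Assumes the two cameras are assigned in the Inspector.
public class ProjectorDisplays : MonoBehaviour
{
    public Camera projectorCameraA;
    public Camera projectorCameraB;

    void Start()
    {
        // Display.displays[0] is the primary display and is always active.
        // Additional displays must be activated explicitly (standalone builds only).
        for (int i = 1; i < Display.displays.Length; i++)
        {
            Display.displays[i].Activate();
        }

        // Route each camera to its own display (0-based index).
        projectorCameraA.targetDisplay = 0;
        projectorCameraB.targetDisplay = 1;
    }
}
```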
Not sure if I understood your concept correctly from the description above. If the spectator should be able to walk around the cube, onto which the rendered virtual scene should be projected, it would also be necessary to track a spectator's head/eyes to realize a convincing 3D effect. The virtual scene would need to be rendered from the matching point of view in virtual space (works for only one spectator). Otherwise the perspective would only be "right" from one single point in real space.
The effect would also only be convincing with stereo view, either by using shutter glasses or something similar. Shadows are another problem when projecting onto the cube from outside the scene. With only two projectors, you would also need to correct the perspective distortion when projecting onto multiple sides of the cube at the same time.
As an inspiration: There's also this fantastic experiment by Johnny Chung Lee demonstrating a head tracking technique using the Wii Remote, that might be useful in a projection mapping project like yours.
(In order to really solve this problem, it might be best to use AR glasses instead of conventional projectors, which have the projector built in and use special projection surfaces that allow for multiple spectators at the same time (like CastAR). But I have no idea if these devices are already on the market... However, I see the appeal of a simple projection mapping without using special equipment. In that case it might be possible to move away from a realistic 3D scene and use more experimental/abstract graphics projected onto the cube...)

Related

Large scene in Unity, far bigger than one Terrain?

Imagine a large free-roam game in Unity.
The yellow area indicates roughly the largest size you can make a typical Terrain in Unity.
Art dept. will completely build, meter by meter, the entire scene.
Please note, this has absolutely no connection to repeating scenery (as in runners) or procedural scenery (as in say some race games).
Really, what is the correct and good way to do this in Unity?
use say 50 or so terrains, each perhaps 100m x 100m ?
can you even have or use that many terrains?
or what?
For anyone googling here: the correct solution is indeed Terrain stitching, that's it.
In practice you must use one of the tools available to do this (e.g. TerrainFormer), or your team will have to write a terrain stitcher from scratch.
Yes, you just use "many terrains". The best approach to the exact problem posed is in fact to just use lots of Terrains.
It seems to be perfectly viable in Unity to have many (dozens) of Terrain units, basically "sitting next to each other".
In practice, you'll need TerrainFormer
https://assetstore.unity.com/packages/tools/terrain/terrain-former-20052
or one of the similar tools.
(Or, I suppose, write your own tool from scratch to stitch terrains, let you manipulate them all at once, join the edge heights perfectly, and so on.)
It's not realistically possible to just perfectly sit many Terrains together by hand, matching all the edges. So you're going to need a "stitcher" package for putting many Terrain squares together.
So, this huge area has about 12 Terrains.
So that's the answer: you can indeed have "many, many" Terrains in a Unity project, and you do indeed essentially just sit them next to each other. In practice it's not achievable unless you use one of the editor tools such as TerrainFormer.
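If your team does end up writing its own stitcher, the core of it is just copying edge heights between adjacent TerrainData objects and registering the Terrains as neighbours. A minimal sketch, assuming two Terrains with the same heightmap resolution placed side by side along the X axis (component and field names are illustrative):

```csharp
using UnityEngine;

// Minimal sketch of "stitching" two adjacent Terrains by hand.
// Assumes both Terrains share the same heightmap resolution and sit
// side by side along the X axis; names are illustrative.
public class SimpleTerrainStitcher : MonoBehaviour
{
    public Terrain leftTerrain;
    public Terrain rightTerrain;

    void Start()
    {
        TerrainData left = leftTerrain.terrainData;
        TerrainData right = rightTerrain.terrainData;
        int res = left.heightmapResolution;

        // Copy the left terrain's last column of heights onto the
        // right terrain's first column so the seam matches exactly.
        float[,] edge = left.GetHeights(res - 1, 0, 1, res);
        right.SetHeights(0, 0, edge);

        // Register the terrains as neighbours so LOD levels line up
        // across the seam (arguments: left, top, right, bottom).
        leftTerrain.SetNeighbors(null, null, rightTerrain, null);
        rightTerrain.SetNeighbors(leftTerrain, null, null, null);
    }
}
```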
The proper way to do this would have been procedural mesh generation with MeshFilter and the Mesh API, but you mentioned that this is not at all random or generative, so that option is eliminated.
It's simply a very long, thin, hand-made environment.
The best way in this case would be Modular Level Design. You need to create modular assets. With these you can build a long road from pieces that tile. A good modelling artist should be able to create and texture modular assets with 3D modelling packages like Blender, Maya or 3ds Max.
All the programmer has to do is make each asset a prefab, then use the Instantiate function to instantiate them to create any length of environment. You can also assemble a static environment in the scene from the Editor. Almost anything can be made into a modular asset, especially buildings and roads.
After assembling them in Unity, you can do a static batch on all the instantiated modular parts with Unity's StaticBatchingUtility.Combine to improve the performance of the game, since they are not being moved.
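A minimal sketch of that instantiate-then-batch workflow (the prefab reference, segment count and segment length are illustrative):

```csharp
using UnityEngine;

// Minimal sketch: tile a modular road prefab along Z, then statically
// batch the pieces. Assign the prefab and lengths to match your assets.
public class RoadBuilder : MonoBehaviour
{
    public GameObject roadSegmentPrefab;
    public int segmentCount = 50;
    public float segmentLength = 10f;

    void Start()
    {
        GameObject root = new GameObject("RoadRoot");

        for (int i = 0; i < segmentCount; i++)
        {
            Vector3 pos = new Vector3(0f, 0f, i * segmentLength);
            GameObject segment = Instantiate(roadSegmentPrefab, pos, Quaternion.identity);
            segment.transform.SetParent(root.transform);
        }

        // Combine everything under the root into static batches, since the
        // road pieces never move at runtime.
        StaticBatchingUtility.Combine(root);
    }
}
```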
Below is an example of a modular road asset that can be used to create almost any amount of road:
You already answered your question
in this case would it be better to not bother with Unity's otherwise excellent Terrain, and the modellers would just outright build the long course/scenery? (Obviously you'd have to chunk it so it all occludes fine)
I think it's the way to go. If performance is an issue, try putting each chunk in a different scene and then have a master scene load them asynchronously and additively. And of course you want to unload each scene as it becomes invisible to the camera.
I would personally go with your own solution, which is letting Unity's Occlusion Culling system take care of hiding and showing chunks. I only go with the separate-scenes approach if I'm not getting enough performance this way.
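For the separate-scenes route, here is a minimal sketch of additive async loading from a master scene (scene names, the distance thresholds and the streaming logic are illustrative; each chunk scene would need to be added to Build Settings):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Minimal sketch of streaming one chunk scene in and out additively
// based on player distance. Names and thresholds are illustrative.
public class ChunkStreamer : MonoBehaviour
{
    public string chunkSceneName = "Chunk_01";
    public Transform player;
    public Vector3 chunkCenter;
    public float loadDistance = 200f;

    bool loaded;

    void Update()
    {
        float distance = Vector3.Distance(player.position, chunkCenter);

        if (!loaded && distance < loadDistance)
        {
            // Load the chunk additively without tearing down the master scene.
            SceneManager.LoadSceneAsync(chunkSceneName, LoadSceneMode.Additive);
            loaded = true;
        }
        else if (loaded && distance > loadDistance * 1.5f)
        {
            // Unload it again once the player has moved well away.
            SceneManager.UnloadSceneAsync(chunkSceneName);
            loaded = false;
        }
    }
}
```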
I recently had the same problem. We built a tile-based infinite runner with road crossings. The camera was positioned behind and above the car, looking down on the street and the player car, so the setup is quite comparable.
We used Curvy from the Asset Store to create paths for moving the player and also for creating the geometry of the streets and the surroundings along the streets.
https://assetstore.unity.com/packages/tools/level-design/curvy-splines-7038
It is also easy to spawn tiles with Curvy paths and combine them at runtime, so you can separate long distances into smaller segments and spawn them randomly.
We also used QuickBrush from ProCore to quickly paint environment detail onto the geometry, like trees, bushes or stones. I think the ProCore tools are now integrated into Unity 2018.
This worked quite well.

3D AR Markers with Project Tango

I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough, the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 pro and are trying to figure out if Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, by letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location; however, the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images like Vuforia.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. This definitely sounds worth testing.
Option B) Tango
Tango doesn't natively support markers other than ARTags and QR codes.
It also doesn't support the area-learnt scene moving (much). If your 3D-printed objects stayed stationary, you could scan an ADF and should have good-quality tracking. If all the objects stay still you should have a little, but not too much, drift.
However, if you are moving those 3D-printed objects, it will definitely throw that tracking off. So moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with ARMarkers using Tango's ARMarker detection (unsure - is that what you tried already?). If that approach doesn't work, I think your only Tango option is to add more features/lighting etc. to the space to make the tracking more solid.
Overall, natural feature tracking by Vuforia (or marker tracking for robustness) sounds more suited to what I think your project is doing, as users will mostly be looking at the ARTag/NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.

Tango Predefined Objects

I'm somewhat familiar with Tango and Unity. I have worked through the examples and can get them to work correctly. I have seen some people doing an AR-type example where they have their custom objects in an area to interact with; another example would be directions where you follow a line to a destination.
The one thing I cannot figure out is how to precisely place a 3D object into a scene. How are people getting that data to place it within Unity in the correct location? I have an area set up and the AR demo seems promising, but I'm not placing objects with the click of a finger. What I am looking to do is: when they walk by, my 3D object will already be there and they can interact with it. Any ideas? I feel like I've been searching everywhere with little luck finding an answer to this question.
In my project, I have a specific space the user will always be in - so I place things in the (single room) scene when I compile.
I create an ADF using the provided apps, and then my app has a mode where it does the 3D reconstruction and saves off the mesh.
I then load the mesh into my Unity scene (I have to rotate it by 180° around the Y axis because of how I am saving the .obj files).
You now have a guide letting you place objects exactly where you want them, and a nice environment to build up your scene.
I disable the mesh before I build. When Tango localises, your Unity content matches up with the Tango world space.
If you want to place objects programmatically, you can place them in scripts using Instantiate.
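A minimal sketch of that, assuming the scene is already localised so Unity coordinates line up with the real room (the prefab and the position values are illustrative):

```csharp
using UnityEngine;

// Minimal sketch: spawn a prefab at a known position in the localised
// Tango/Unity world space. Prefab and coordinates are illustrative,
// e.g. read off from where the object sits relative to the scanned mesh.
public class ExhibitSpawner : MonoBehaviour
{
    public GameObject exhibitPrefab;

    void Start()
    {
        Vector3 position = new Vector3(1.2f, 0f, 3.4f);
        Instantiate(exhibitPrefab, position, Quaternion.identity);
    }
}
```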
I also sometimes have my app place markers with a touch, like in the examples, and record the positions to a file, which I then use to place objects specifically... But having a good mesh loaded into your scene is really the nicest way I've found.

Is it possible to combine textures in Maya into some sort of atlas?

I'm starting to use Maya to do some bone animations of a 2D character, which is composed of several different parts (legs, body, head, weapon, etc.). I have each part in a separate .PNG image file. Right now I have a polygon for each part, with its own material and texture:
I was wondering if there's a way to automatically combine the textures into an atlas, make all the polygons use the same material with the atlas, and correct their UV maps so they still point to the right part of the atlas. Right now, I can do it manually in reverse: I can make the atlas outside Maya with other tools, then use the atlas on a material and manually correct the UV maps of each polygon. But it's a very long process and if I need to change a texture, I have to do it all over again. So I was wondering if there's a way to automate it.
The reason why I'm trying to do this is to save draw calls in Unity. From what I understand, Unity can batch objects as long as they share the same material. So instead of having a draw call for each polygon in the character, I'd like to have a single draw call for the whole character. I'm pretty new to Maya, so any help would be greatly appreciated!
If you want to do the atlassing in Maya, you can do it by duplicating your mesh and using the Transfer Maps tool to bake all of the different meshes onto the duplicate as a single model. The steps would be:
1) Duplicate the mesh
2) Use UV layout to make sure that the duplicated mesh has no overlapping UVs (or only has them where appropriate, like mirrored pieces).
3) Use the Transfer Maps... tool to project the original mesh onto the new one, using the "Use topology" option to ensure that the projection is clean.
The end result should be that the new model has the same geometry and appearance as the original, but with all of its textures combined onto a single sheet attached to a single material.
The limitation of this method is that some kinds of mesh (particularly meshes that self-intersect) may not project properly, leaving you to manually touch up the atlassed texture. As with any atlassing solution, you will probably see some softening in the textures, since the atlas texture is not a pixel-for-pixel copy but rather a projection, and thus a resampling.
You may find it more flexible to reprocess the character in Unity with a script or AssetPostprocessor. Unity has a native texture atlassing function, documented here. Unity comes with a script for combining static meshes, but for skinned meshes you'd need to implement your own; there's an example on the Unity wiki that probably does what you want: http://wiki.unity3d.com/index.php?title=SkinnedMeshCombiner (Caveat: we do something similar to this at work, but I can't share it; I have not used the one in this link.) FWIW, Unity's native atlassing works only in rectangles, so it's not as memory-efficient as something you could do for yourself.
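The native atlassing function referred to is presumably Texture2D.PackTextures; a minimal sketch of using it (assumes the source textures are marked readable in their import settings; names are illustrative):

```csharp
using UnityEngine;

// Minimal sketch of Unity's built-in atlassing: pack several readable
// textures into one atlas and get back the UV rect of each source.
public static class AtlasBuilder
{
    public static Rect[] Build(Texture2D[] partTextures, out Texture2D atlas)
    {
        atlas = new Texture2D(2048, 2048);
        // Returns one normalized Rect per input texture; those rects are
        // what you would use to remap each part's UVs onto the atlas.
        return atlas.PackTextures(partTextures, 2, 2048);
    }
}
```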

How do I create a floor for a game?

I'm attempting to build a Lunar Lander style game on the iPhone. I've got Cocos2D and I'm going to use Box2D. I'm wondering what the best way is to build the floor for the game. I need to be able to create both the visual aspect of the floor and the data for the physics engine.
Oh, did I mention I'm terrible at graphics editing?
I haven't used Box2D before (but I have used other 2D physics engines), so I can give you a general answer but not a Box2D-specific answer. You can easily just use a single static (stationary) box if you want a flat plane as the floor. If you want a more complicated lunar surface (lots of craters, the Sea of Tranquility, whatever), you can construct it by creating a variety of different physics objects - boxes will almost always do the trick. You just want to make sure that all your boxes are static. If you do that, they won't move at all (which is what you want, of course) and they can overlap without any problems (to simulate a single surface).
Making an image to match your collision data is also easy. Effectively what you need to do is just draw a single image that more or less matches where you placed boxes. Leave any spots that don't have boxes transparent in your image. Then draw it at the bottom of the screen. No problem.
The method I ended up going with (you can see from my other questions) is to dynamically create the floor at runtime and then draw it to the screen.