I am going to build an FPS video game. While developing it, a question came to mind: every video game developer spends a great deal of time and effort making their game's environment more realistic and life-like. So my question is:
Can we use HD or 4K real-world videos as our game's environment? (Like what we see on Google Street View, but with higher quality.)
If we can, how would we program the game engine to do it?
Thank you very much..!
The simple answer to this is NO.
Of course, you can extract textures from the video by capturing frames from it, but that's it. Once you have captured a texture, you still need a way to make a 3D model/mesh you can apply it to.
Now, many companies have been working on video-to-3D-model converters. That technology exists, but it is aimed more at film production. Even with it, the 3D models generated from a video are not accurate, and they are not meant to be used in a game: they end up with so many polygons that they will easily choke your game engine.
Also, doing this in real time is another story. You would need to continuously read a frame from the video, extract a texture from it, generate a mesh for that frame, and clean up/reduce/reconstruct the mesh so that your game engine won't crash or drop frames. You then have to generate UVs for the mesh so that the extracted image can be applied to it.
Finally, each one of these steps is CPU intensive. Doing them all in series, in real time, will likely make your game unplayable. I have also made this sound easier than it is. What you can do with the video is use it as a reference to model your 3D environment in a 3D application. That's it.
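For what it's worth, the frame-grabbing step is the only part that is straightforward. A minimal Unity sketch (assuming a VideoPlayer component and a target mesh already set up in the scene) might look like the following; note that it only gives you a flat image, not geometry:

    using UnityEngine;
    using UnityEngine.Video;

    // Minimal sketch: grab decoded frames from a video and use them as a texture.
    // Assumes a VideoPlayer and a MeshRenderer are assigned in the Inspector.
    public class VideoFrameTexture : MonoBehaviour
    {
        public VideoPlayer videoPlayer;   // source video
        public MeshRenderer target;       // mesh that receives the frame as a texture

        void Start()
        {
            videoPlayer.renderMode = VideoRenderMode.APIOnly; // keep frames accessible via .texture
            videoPlayer.sendFrameReadyEvents = true;
            videoPlayer.frameReady += OnFrameReady;
            videoPlayer.Play();
        }

        void OnFrameReady(VideoPlayer source, long frameIdx)
        {
            // The decoded frame is available as a Texture; apply it to the mesh material.
            // This is only an image -- the mesh itself still has to come from somewhere else.
            target.material.mainTexture = source.texture;
        }
    }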
I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene; this is not something that can be changed, as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough, the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 pro and are trying to figure out if Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location; however, the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images the way Vuforia does.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. This definitely sounds worth testing.
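If it helps, the usual pattern (assuming the classic Vuforia Unity SDK, with Extended Tracking enabled on the target) is to handle the trackable state change yourself and treat EXTENDED_TRACKED the same as TRACKED, so the content is not hidden the moment the printed marker leaves the view. A rough sketch, not tested against your exact SDK version:

    using UnityEngine;
    using Vuforia;

    // Rough sketch: keep the AR content visible while the target is extended-tracked,
    // instead of hiding it as soon as the printed marker is lost.
    public class KeepContentWhileExtendedTracked : MonoBehaviour, ITrackableEventHandler
    {
        private TrackableBehaviour trackable;

        void Start()
        {
            trackable = GetComponent<TrackableBehaviour>();
            trackable.RegisterTrackableEventHandler(this);
        }

        public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                            TrackableBehaviour.Status newStatus)
        {
            bool visible = newStatus == TrackableBehaviour.Status.DETECTED ||
                           newStatus == TrackableBehaviour.Status.TRACKED ||
                           newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

            // Show or hide the renderers under the target instead of destroying them.
            foreach (var r in GetComponentsInChildren<Renderer>(true))
                r.enabled = visible;
        }
    }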
Option B) Tango
Tango doesn't natively support markers other than AR Tags and QR codes.
It also doesn't cope well with the area-learnt scene moving (much). If your 3D-printed objects stay stationary, you could scan an ADF and should get good-quality tracking. As long as everything stays still, you should see a little drift, but not too much.
However, if you move those 3D-printed objects around, it will definitely throw the tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with AR markers using Tango's AR marker detection. (Unsure: is that what you tried already?) If that approach doesn't work, I think your only Tango option is to add more features, lighting, etc. to the space to make the tracking more solid.
Overall, natural feature tracking with Vuforia (or marker tracking for robustness) sounds better suited to what I think your project is doing, as users will mostly be looking at the AR tag/NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.
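On the drift-correction idea from your question: whichever tracker you use, once the marker is reacquired you can compare where the virtual scene currently assumes the marker to be with where it was actually observed, and shift the whole content root by the difference. A generic Unity sketch (all names are made up; the observed pose would come from Vuforia or from Tango's marker detection):

    using UnityEngine;

    // Hypothetical sketch: re-align the virtual content when a marker is reacquired.
    public class MarkerDriftCorrection : MonoBehaviour
    {
        public Transform contentRoot;     // root of the virtual scene
        public Transform expectedMarker;  // marker pose as currently assumed by the scene

        public void OnMarkerReacquired(Vector3 observedPosition, Quaternion observedRotation)
        {
            // Rigid transform that moves the expected marker pose onto the observed one.
            Quaternion delta = observedRotation * Quaternion.Inverse(expectedMarker.rotation);

            contentRoot.rotation = delta * contentRoot.rotation;
            contentRoot.position = observedPosition +
                                   delta * (contentRoot.position - expectedMarker.position);
        }
    }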
All the Tango apps and demos I have seen so far have one major limitation: 3D objects are always "on top" of the real-world camera image. They are placed correctly in 3D space, but a real object in front of a virtual object will not occlude it!
Question:
Is it possible to mask 3D objects, or parts of them, in real time with real-world objects in front of them?
In theory, the 3D data delivered by the Tango sensors should be sufficient to do this. But I wonder if anyone has done it before, or if there might be performance limitations that make this impossible? Thanks for your advice!
One approach is to use the 3D Reconstruction library (search "Unity How-to Guide: Meshing with Color") to pre-scan the environment, and then use this model to provide depth data when rendering the AR scene. Here's a video of an AR game that appears to use this technique. It's not perfect for sure, but it does sorta work.
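The trick, once you have the pre-scanned mesh, is to render it invisibly while still writing it into the depth buffer, so that virtual objects behind real surfaces fail the depth test and get hidden. A rough Unity sketch, assuming you have the reconstructed mesh in the scene and a depth-only shader (ColorMask 0, ZWrite On); the shader name here is made up:

    using UnityEngine;

    // Rough sketch: give a pre-scanned environment mesh a depth-only material so it
    // occludes virtual objects that sit behind real-world surfaces.
    // "Custom/DepthOnly" is a hypothetical shader that writes depth but no color.
    public class EnvironmentOccluder : MonoBehaviour
    {
        public MeshRenderer scannedEnvironment; // mesh produced by the 3D Reconstruction pre-scan

        void Start()
        {
            var depthOnly = new Material(Shader.Find("Custom/DepthOnly"));

            // Draw the occluder before regular geometry so its depth values are already
            // in the buffer when the virtual objects are rendered.
            depthOnly.renderQueue = 1900; // default opaque geometry renders at 2000
            scannedEnvironment.material = depthOnly;
        }
    }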
This question has been asked before.
I've been using Unity3D for a while now and I've also had experience coding 2D games using LibGdx.
In the past, I used to get my sprites off the net or make my own. However, that wasn't really the best way to do things, since I'm more of a programmer and would sometimes need very specific assets, so I've started to learn Blender and I'm actually enjoying it at the moment.
What I want to know is: how much of an overhead is there if you're using 3D models for a 2D game, especially if you want to port it to mobile?
The rendering overhead is significant: a basic sprite has 6 vertices (2 tris making a quad), while a 3D model can have hundreds of thousands of vertices.
On the other hand, 3D has an advantage for animation: 2D animations are made of sprite frames, so your texture count and size can grow quickly, while in 3D an animation is essentially a small data file, so it is fairly light.
Physics is simpler in 2D, since you can do surface collision, while 3D requires volume collision, and checking an extra dimension is obviously more expensive.
There are probably other considerations, but those are the first that come to mind.
Now, the choice of 3D over 2D should simply be based on what you are trying to achieve. Side-scrolling games like Angry Birds do not need 3D. Games like Taichi Panda are better with 3D despite playing like a 2D game (only x and z camera movement, I think).
An FPS game should only be done in 3D, or it will look like Duke Nukem.
I am experimenting with overlaying augmented reality objects over a pass-through image from the rear camera in Unity.
Has anyone experimented with overlaying objects with accurate tracking? I've tweaked the movement scale to get somewhat decent results but rotation is still not accurate and drift is a big issue.
I've had good luck with the augmented reality sample that ships with the latest Tango. In my experience it does work the way you speculated: if you add items to the Unity scene, they are synced to the motion detected by the device.
I believe the tracking and syncing functions have improved since you originally asked this question; I've noticed an improvement since I got my Tango devkit a month or so ago. There was an update a week or so later, with an immediate improvement.
I have found that some scenes track better than others; it seems to help to have additional scenery for it to track. In my workspace, a fairly cluttered apartment, it tracks well, but in the neighboring identical apartment unit, which is currently vacant and empty, it does not track as well. That could also be a product of the blinds hanging in my unit that are not hanging in the vacant unit, filtering out additional infrared.
I'm experimenting with placing 3D objects over the real time input from the Tango color camera.
One problem here is that the hardware color camera 'points' in a (strange) direction. So far I haven't been able to get the direction vector from the API. Your virtual camera for rendering the scene needs this rotation to render 3D objects properly.
There are augmented reality examples in Tango's Unity plugin:
https://developers.google.com/tango/apis/unity/unity-simple-ar
They solve this problem with a matrix that rotates the 3D camera.
It can be found in the Unity script "TangoARPoseController" (C#), which, when attached to a Unity camera, rotates it so that it looks at the scene in the right direction. The matrix is obtained in the method "SetCameraExtrinsics" of that script.
Unfortunately, when I apply the matrix to my Unity scene it does not produce a perfect overlay (actually, it's quite bad). But I have other sources of position input which may be the problem here.
However, so far I'm not sure whether the matrix used in the examples is good enough for accurate AR overlays. Maybe it is only suitable for demonstration purposes, but it should be a good starting point for further investigation.
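In case it helps others digging into this, the underlying idea of that matrix is just composing the device pose with a fixed device-to-color-camera offset (the extrinsics). A generic Unity sketch of that composition, with placeholder fields where the Tango service would normally supply the extrinsic values:

    using UnityEngine;

    // Hypothetical sketch: make the Unity camera look where the hardware color camera
    // looks by composing the device pose with a device-to-camera extrinsic offset.
    // The extrinsic values would come from the Tango service (as in SetCameraExtrinsics);
    // here they are just public placeholders.
    public class ColorCameraPose : MonoBehaviour
    {
        public Vector3 deviceToCameraPosition;
        public Quaternion deviceToCameraRotation = Quaternion.identity;

        // Call with the latest device pose in Unity world space.
        public void ApplyDevicePose(Vector3 devicePosition, Quaternion deviceRotation)
        {
            // world_T_camera = world_T_device * device_T_camera
            transform.rotation = deviceRotation * deviceToCameraRotation;
            transform.position = devicePosition + deviceRotation * deviceToCameraPosition;
        }
    }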
Are we talking about displaying the 'webcam' in the background, as opposed to a skybox?
Take a look at my GhostHunter repo. It includes a shader and a script for displaying the rear-facing camera 'behind' the gameplay objects (like a skybox). It should be usable with Tango, and it is better than the 'display on a mesh' technique I've seen others use.
https://github.com/NVentimiglia/Augmented-Reality-Ghost-Hunter
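For context (this is not the GhostHunter code), one way to get a "camera feed behind everything" effect in Unity is to blit a WebCamTexture into the camera target just before opaque geometry renders, so the feed behaves like a skybox and the 3D objects draw on top of it. A rough sketch:

    using UnityEngine;
    using UnityEngine.Rendering;

    // Rough sketch: draw the device camera feed as a full-screen background.
    [RequireComponent(typeof(Camera))]
    public class CameraFeedBackground : MonoBehaviour
    {
        private WebCamTexture feed;

        void Start()
        {
            feed = new WebCamTexture();
            feed.Play();

            var buffer = new CommandBuffer { name = "Camera feed background" };
            buffer.Blit(feed, BuiltinRenderTextureType.CameraTarget);

            // Runs after the camera clears, before opaque geometry is drawn.
            GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeForwardOpaque, buffer);
        }
    }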
I have a 3D CAD file of a set of products. I want to create a viewer so that the user can freely rotate the object in 3D.
How would I best go about this?
1) I had thought about exporting a series of images every 30 degrees around the object to cover a full 360 degrees, but that would be around 360 images per product. Then I would write the code to handle the matrix that would be required for rotating the object. Seems very excessive, but doable.
2) OpenGL - I have never done any 3d animation using this, though.
We are using LightWave 3D, if that helps.
I'd recommend going with the 3-D rendering route, even though it might require more upfront work than the multiple sliced images approach. It will provide much greater flexibility over the long run, and I think you'll be able to generate a more pleasing experience in the end (small application binary size, smoother rotation, etc.). Also, once you have the display code done, you'll be able to pull in arbitrary models to add on to the ones you started with, and make tweaks to those models more easily.
This question points out a number of ways that you might be able to import LightWave models into formats usable by an OpenGL ES application. It looks like you'll probably need to pass through Blender or another intermediary to accomplish this.
Once you have the model in a form that you can work with, you can build off of several open source 3-D rendering applications for the iPhone / iPad, such as my Molecules application. My application is built for displaying 3-D molecular structures, but people have modified it to support rendering other models for their own needs, so I know that's possible. I go into detail on how this application works in the video for the OpenGL ES session of my class on iTunes U.
OpenGL ES may seem intimidating at first, but it only took me three weeks of nights-and-weekends development to build the initial version of Molecules, and I had no real OpenGL experience before starting that project. There are many great resources out there now, so it's easier than ever to get started.