Technology to use for creating a mobile app which can detect and track the position of random objects in real time? - unity3d

I would like to know which technology I can use to create an Android application that can detect a random object, such as a hoarding board, and, once it is detected, play a video over that object. If the user moves the camera, the video must stay anchored to that object's position.
This video of a big burger shows an example of what I want to achieve: https://www.youtube.com/watch?v=lhXW8_7CaHM . However, I want to play the video on any type of hoarding the camera detects, not only on a specific trained object.
I have studied technologies such as TensorFlow and Vuforia, which can be used to make AR applications, but I'm not sure whether they can detect objects in real time and track their positions.

Related

How to put markers in the real world with Mapbox and Unity when making an augmented reality app

I'm making an augmented reality app with Unity and Mapbox for both iOS and Android. I have data sets that I am using to make markers in the real world when someone uses the app. I collected JSON files, converted them to GeoJSON files, and then made a custom map in Mapbox Studio with these four GeoJSON files. Basically, I want the markers from the datasets I collected to show up in the real world. I am not sure how to get these markers to show up in the real world and not with building prefabs. Here is an example of my custom map made in Mapbox; each color shows a different category of markers. There are four categories.
Here is an example of what I am referring to.
In this image skeletons can show up in the real world.
Here is an example of what I am not referring to.
In this image, droids are placed on a map, but it is not the real world. It is like Pokemon Go, where the map is generated from your location but you don't actually see the real world while you are playing.
I already have my Unity project set up and this is the final step, but I am just having issues getting it to show up in the real world. So far, tutorials only show how to get it to work like something such as Pokemon Go.
You will have one scene with a stationary Camera. Your code will monitor the Mapbox data in Update(), constantly passing the current GPS position and receiving your list of markers/points of interest. You can simply spawn skeletons at random points in a sphere around the Camera's transform position (see https://docs.unity3d.com/ScriptReference/Random-insideUnitSphere.html) once you detect that the user's GPS position is within a certain distance of the center of your point of interest. Keep track of that list, destroy the skeletons once they leave the area, and have some way of making sure you only spawn them once for that area.
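For illustration, here is a minimal sketch of that spawn/clean-up logic. The class name, the `OnPoiDistanceUpdated` callback, and the radii are my own illustrative names; the Mapbox query that would feed it each frame is left out:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch only: your Mapbox/GPS code is expected to call OnPoiDistanceUpdated()
// from Update() with the id of, and distance to, the nearest point of interest.
public class PoiSkeletonSpawner : MonoBehaviour
{
    public GameObject skeletonPrefab;   // set in the Inspector
    public float triggerRadius = 30f;   // metres before skeletons appear
    public float spawnRadius = 5f;      // sphere around the camera to spawn in
    public int skeletonsPerPoi = 5;

    private readonly HashSet<string> spawnedPois = new HashSet<string>();
    private readonly List<GameObject> activeSkeletons = new List<GameObject>();

    public void OnPoiDistanceUpdated(string poiId, float distanceMeters)
    {
        if (distanceMeters < triggerRadius && !spawnedPois.Contains(poiId))
        {
            // Only spawn once per point of interest.
            spawnedPois.Add(poiId);
            for (int i = 0; i < skeletonsPerPoi; i++)
            {
                // Random point in a sphere around the camera (Random.insideUnitSphere).
                Vector3 pos = Camera.main.transform.position
                              + Random.insideUnitSphere * spawnRadius;
                activeSkeletons.Add(Instantiate(skeletonPrefab, pos, Quaternion.identity));
            }
        }
        else if (distanceMeters >= triggerRadius && activeSkeletons.Count > 0)
        {
            // The user has left the area: destroy the skeletons.
            foreach (var skeleton in activeSkeletons) Destroy(skeleton);
            activeSkeletons.Clear();
        }
    }
}
```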
Your skeletons should have a NavMeshAgent, and you should generate a NavMesh onto the ARFoundation plane for them to walk on. In this case the plane is probably created dynamically, so you may need the dynamic NavMesh components from https://github.com/Unity-Technologies/NavMeshComponents. If you tell the NavMeshAgent to go to a specific point, it will walk to the closest reachable point - so even though you get a random 3D position in the sphere, the skeleton will move or spawn onto the nearest point on the NavMesh, and there is no need to figure out how to convert it to the 2D plane space.
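As a sketch of that last point (component and field names are my own; it assumes the NavMesh already exists on the detected plane), `NavMesh.SamplePosition` projects the random 3D point onto the mesh and the agent walks there:

```csharp
using UnityEngine;
using UnityEngine.AI;

// Sketch: give a freshly spawned skeleton (with a NavMeshAgent) somewhere to go.
public class SkeletonWanderer : MonoBehaviour
{
    public float wanderRadius = 5f;

    private NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        PickNewDestination();
    }

    void Update()
    {
        // When the current destination is (almost) reached, pick another one.
        if (!agent.pathPending && agent.remainingDistance <= agent.stoppingDistance)
            PickNewDestination();
    }

    void PickNewDestination()
    {
        // Random 3D point near the skeleton...
        Vector3 randomPoint = transform.position + Random.insideUnitSphere * wanderRadius;

        // ...projected onto the nearest point of the NavMesh, so there is no
        // manual 3D-to-plane conversion to do.
        if (NavMesh.SamplePosition(randomPoint, out NavMeshHit hit, wanderRadius, NavMesh.AllAreas))
            agent.SetDestination(hit.position);
    }
}
```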
Your AR view, both the tracking of the camera position/angle and the generation of a plane representing the ground, will be something generated by ARFoundation and it is simple to add the basic functionality. They have a prefab that already includes the camera and generates the plane for you. You can get ARFoundation via the Unity Package Manager. It will work with many different types of devices.
You should start with a cheap Android phone or tablet, even if you own an iPhone, because it's easier to load the APK and debug/develop your app via Android build.
This is a simplification. Singletons, ScriptableObjects, Object Pooling, and plenty of other Unity paradigms would help you here, but as another user pointed out, you may want to spend time learning Unity, ARFoundation, and Mapbox, and ask more specific programming questions when you are ready.

How do I track the Unity position of physical objects the player is interacting with using Hololens2 hand tracking data?

Basically I am working on a mixed reality experience using the Hololens2 and Unity, where the player has several physical objects they need to interact with, as well as virtual objects. One of the physical objects is a gun controller that has an IMU to detect acceleration and orientation. My main challenge is this: how do I get the physical object's position in Unity in order to accurately fire virtual projectiles at a virtual enemy?
My current idea is to have the player position the physical weapon inside a virtual bounding box at the start of the game. I can then track the position of the virtual box through collision with the player's hands when they pick up the physical controller. Does OnCollisionEnter, or a similar method, work with the player's hands? (see attached image)
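To make the idea concrete, here is a minimal sketch of one way to detect a hand entering the virtual box. It assumes MRTK 2.x's `HandJointUtils` joint query rather than physics collisions; the class name, the `boxCollider` field, and the choice of the palm joint are just illustrative:

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.Utilities;
using UnityEngine;

// Sketch: attach to the virtual bounding box. Instead of relying on physics
// collisions, poll the articulated-hand joints provided by MRTK each frame
// and check whether either palm is inside the box's bounds.
public class HandGrabDetector : MonoBehaviour
{
    public Collider boxCollider;   // the virtual bounding box collider (set in Inspector)

    void Update()
    {
        CheckHand(Handedness.Left);
        CheckHand(Handedness.Right);
    }

    void CheckHand(Handedness hand)
    {
        if (HandJointUtils.TryGetJointPose(TrackedHandJoint.Palm, hand, out MixedRealityPose pose)
            && boxCollider.bounds.Contains(pose.Position))
        {
            // The player's hand is inside the box: assume they picked up the gun,
            // and start tracking from the hand pose (or the IMU data) from here on.
            Debug.Log($"{hand} hand entered the controller bounding box");
        }
    }
}
```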
I am also looking into the use of spatial awareness / image recognition / pose estimation to accomplish this task, as well as researching the use of a tracking base station to determine object position (similar to the HTC Vive / Oculus Rift).
Any suggestions, resources, and assistance is greatly appreciated here. Thank you!
EDIT UPDATE 11/30/2020:
Hernando commented below suggesting QR codes; assume for this project that we are not allowed to use QR codes and that we want orientation data that is as precise as possible. Thanks Hernando!
For locating the object, a QR code would definitely be the recommendation, since it can be found quickly with the HL2 device. I have seen the QR approach used in multiple venues for VR location-based (LBE) experiences like the one being described here; the QR code simply sits on top of the device.
Otherwise, if the controller in question supports Bluetooth, you could possibly pair the device, and if the device has location information, it could possibly transmit where it is. Based on everything above, this would be a custom solution and highly dependent on the controller's abilities if QR codes are out of the equation. I have seen some controller solutions that start the user experience with something like touching the floor to get an initial reference point, or that always have the user pick up the gun from a specific location in the real world, as some location-based experiences do before starting.
Good luck with the project; this is just my advice from working with VR systems.
Is the controller allowed to have multiple QR codes pasted on it? If so, we recommend you use QR code tracking to assist in locating your controller. If you prefer to use image recognition, object detection, or other technologies, you will need an Azure service or a third-party library; for more information, please see the Computer Vision documentation.
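As a rough sketch of the QR route (assuming the Microsoft.MixedReality.QR package; resolving the detected code into a Unity pose additionally needs the Mixed Reality OpenXR plugin's spatial-graph APIs, which are only hinted at in a comment):

```csharp
using Microsoft.MixedReality.QR;
using UnityEngine;

// Sketch: watch for QR codes pasted on the controller. Requires the
// Microsoft.MixedReality.QR package and the webcam capability on HoloLens 2.
public class ControllerQRWatcher : MonoBehaviour
{
    private QRCodeWatcher watcher;

    async void Start()
    {
        var access = await QRCodeWatcher.RequestAccessAsync();
        if (access != QRCodeWatcherAccessStatus.Allowed) return;

        watcher = new QRCodeWatcher();
        watcher.Added += (sender, args) =>
        {
            // args.Code.SpatialGraphNodeId can be resolved to a Unity pose via the
            // Mixed Reality OpenXR plugin's SpatialGraphNode APIs; that pose would be
            // the controller's position/orientation in the scene.
            Debug.Log($"QR code found: {args.Code.Data}");
        };
        watcher.Start();
    }

    void OnDestroy() => watcher?.Stop();
}
```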

Can Vuforia track spatial location when using targetless device tracking?

I am trying to wrap my head around Vuforia's capabilities. I want to make an app which lets me place a 3D object into a camera view and have that 3D object stick to the world. I've been learning how to use Vuforia in Unity3D, and Vuforia seems to be slightly capable of this, but is severely limited by its craving for "Targets". It doesn't seem to be able to do much if I don't give it some sort of target.
One workaround I've found is to set the ARCamera's World Center Mode to DEVICE_TRACKING. This seems to let me place a 3D object into the world (in Unity) and have this object overlaid onto the camera feed, almost making it seem like it's anchored to the real world. This doesn't work perfectly though: it tracks properly when I angle the device up/down/left/right (rotation), but it does not seem to track the device's translational motion; that is, when I move the device forward/back/left/right, the overlaid object doesn't get closer/farther, nor does it rotate as I move around it.
Is it possible to get this sort of tracking out of Vuforia, or am I better off switching to something like Google Tango?
The difficulty with setting World Center Mode to CAMERA in Vuforia is that 3D objects apparently rotate around the camera based on its accelerometer/gyroscope changes. This doesn't allow objects to be anchored to the environment; instead, they follow the camera.
Kudan is a good markerless tracking option.

Can we use Video as a Game Environment?

I am going to build an FPS video game. While developing it, a question came to mind. Every video game developer spends a great deal of time and effort making their game's environment more realistic and lifelike. So my question is:
Can we use HD or 4K real videos as our game's environment? (As seen on Google Street View, but with higher quality.)
If we can, how do we program the game engine to do it?
Thank you very much!
The simple answer to this is NO.
Of course, you can extract a texture from the video by capturing frames from it, but that's it. Once you have the texture, you still need a way to make a 3D model/mesh you can apply the texture to.
Now, there have been many companies working on video-to-3D-model converters. That technology exists, but it is aimed more at film work. Even then, the 3D models generated from a video are not accurate, and they are not meant to be used in a game, because they end up with so many polygons that they will easily choke your game engine.
Also, doing this in real time is another story. You would need to continuously read a frame from the video, extract a texture from it, generate a mesh with the HQ texture, and clean up/reduce/reconstruct the mesh so that your game engine won't crash or drop frames. You then have to generate UVs for the mesh so that the extracted image can be applied to it.
Finally, each one of these steps is CPU intensive. Doing them all in series, in real time, will likely make your game unplayable. I have also made this sound easier than it is. What you can do with the video is use it as a reference for modeling your 3D environment in a 3D application. That's it.
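To be concrete about the frame-grabbing step above, here is a small Unity sketch (class name is my own) that uses `VideoPlayer` in `APIOnly` mode to pull each decoded frame as a texture and push it onto a material; everything after that (building and cleaning a usable mesh) is the hard part and is not shown:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch: extract frames from a video clip as textures and apply them to
// a material. This is only the "capture a texture" step described above.
// Assumes a VideoClip is assigned to the VideoPlayer in the Inspector.
[RequireComponent(typeof(VideoPlayer), typeof(Renderer))]
public class VideoFrameGrabber : MonoBehaviour
{
    void Start()
    {
        var player = GetComponent<VideoPlayer>();
        var targetMaterial = GetComponent<Renderer>().material;

        player.renderMode = VideoRenderMode.APIOnly; // we read frames ourselves
        player.sendFrameReadyEvents = true;

        player.frameReady += (source, frameIndex) =>
        {
            // source.texture holds the decoded frame; reuse it as the surface texture.
            targetMaterial.mainTexture = source.texture;
        };

        player.Play();
    }
}
```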

How do I process iPhone camera data and overlay a 3-D object on it?

I am trying to develop an Augmented Reality iPhone application in which I will place a 3D object in front of a live camera feed.
I need to zoom the object in and out as the user moves backward/forward, and rotate the 3D model as the user walks around it.
Is there a way to do this on the iPhone?
The open source VRToolkit application by Benjamin Loulier does just this. It overlays a 3-D model onscreen in response to coded tags, rotating and scaling it as the tag moves within the area viewed by the iPhone camera.
It leverages the ARToolkitPlus library to do the marker identification and processing.
However, be aware that this library is GPL-licensed, so you will need to release the source code of any application you build on this under the GPL.