Treasure hunt in Augmented Reality - iPhone

I'm looking for an augmented reality browser/toolkit/API that supports the following:
Adding fixed 3d models such as a treasure chest.
Possibly image recognition of this treasure chest, so the iPhone knows when you're looking at it.
Specifying altitude on a 3d model so it can be positioned on the ground or on the second floor of an apartment building, for example.
It must have support for "migrating" to a standalone app that can be published on the App Store.
The ability to customize the camera overlay with my own buttons, HUDs, text and other UIViews.
Support for both iPhone and Android.
I have tried Wikitude, which doesn't support 3d models on iPhone.
I have tried Junaio, which doesn't support creating a standalone app from their browser.
I have tried the Layar Player SDK, and asked on their community forum whether I can customize the interface with my own buttons etc.
I have tried the ARToolKit on GitHub.
None of the libraries I've tried supports all of these requirements.
Am I looking for too much here?
Is there something I've missed using Layar, Wikitude and Junaio?

Specify altitude on a 3d model so it can be positioned on the ground or on the second floor of an apartment building, for example.
Can you break this down? Do you want the phone to recognize that it's on the second floor, at a particular location within the building? Altitude is surprisingly tricky in general, and indoor positioning is very approximate. In the absence of indoor GPS repeaters or some other indoor positioning mechanism (Bluetooth beacons, WiFi triangulation, etc.), which would probably require a lot of additional effort, this may be infeasible in general, not just with a particular AR library.
I think the Junaio libraries cover the other bases - CV recognition of a (prepared) object, stand-alone application packaging, customizable UI, and iPhone and Android support.


Implementing Android AR with a UVC / external camera (Unity)

I'm working on an AR project in Unity and I want to use an external camera instead of my Android device's built-in one. I saw that Vuforia has such a feature, but it claims that by using it, Ground Plane detection won't work at all and Model Targets performance takes a hit.
I also saw that EasyAR has a CustomCamera feature, and that there's the Camera2 library in ARCore.
My question is: what's the best way to approach this? Has anyone had experience using an external camera, and with which AR solution (ARFoundation / Vuforia / EasyAR...)?
Second question: what should I look for when buying such a UVC camera? Any examples of one?
I'd also like to hear about experiences with these AR solutions regardless of the external camera question.
Thanks in advance!
Unfortunately, this is unlikely to work with an external camera.
A key part of AR is having a precise calibration of the camera's optics. Without it, the world can't be analyzed accurately enough to draw new objects into it or apply other AR effects.
A UVC webcam doesn't come with any such calibration information, so it would have to be calibrated somehow and the calibration data handed to Unity's AR engine. I don't know whether Unity supports that in some form.
Note that not all internal cameras on Android devices are calibrated well enough for AR either; the ARCore team certifies devices that have sufficient calibration in place.
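For reference, ARFoundation exposes the internal camera's factory calibration at runtime. Here is a minimal sketch that logs those intrinsics, i.e. exactly the data a generic UVC webcam doesn't provide; it assumes an ARFoundation scene with an ARCameraManager component, and the IntrinsicsLogger name is hypothetical:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Logs the calibration data ARFoundation reports for the device's
// internal camera. A UVC webcam has no equivalent of this, which is
// why plugging one in breaks pose estimation.
public class IntrinsicsLogger : MonoBehaviour
{
    [SerializeField] private ARCameraManager cameraManager;

    void Update()
    {
        if (cameraManager.TryGetIntrinsics(out XRCameraIntrinsics intrinsics))
        {
            // Focal length and principal point are in pixels.
            Debug.Log($"focal: {intrinsics.focalLength}, " +
                      $"principal point: {intrinsics.principalPoint}, " +
                      $"resolution: {intrinsics.resolution}");
        }
    }
}
```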

How do I track the Unity position of physical objects the player is interacting with using HoloLens 2 hand tracking data?

Basically I am working on a mixed reality experience using the HoloLens 2 and Unity, where the player has several physical objects they need to interact with, as well as virtual objects. One of the physical objects is a gun controller that has an IMU to detect acceleration and orientation. My main challenge is this: how do I get the physical object's position in Unity, in order to accurately fire virtual projectiles at a virtual enemy?
My current idea is to have the player position the physical weapon inside a virtual bounding box at the start of the game. I can then track the position of the virtual box through collision with the player's hands when they pick up the physical controller. Does OnCollisionEnter, or a similar method, work with the player's hands? (see attached image)
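For illustration, a minimal sketch of that bounding-box idea, assuming MRTK 2 with articulated hand tracking enabled (the PickupZone and WeaponPickedUp names are hypothetical). OnCollisionEnter only fires against the hands if colliders are added to them, e.g. via MRTK's optional hand physics service, so polling the hand joints directly is often simpler:

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.Utilities;
using UnityEngine;

// Attach to the virtual bounding box (with a BoxCollider). Polls the
// articulated-hand joints each frame and fires once a palm enters the
// box, which is when the player is presumably grabbing the weapon.
public class PickupZone : MonoBehaviour
{
    private BoxCollider zone;
    public bool WeaponPickedUp { get; private set; }

    void Awake() => zone = GetComponent<BoxCollider>();

    void Update()
    {
        if (WeaponPickedUp) return;

        foreach (var hand in new[] { Handedness.Left, Handedness.Right })
        {
            if (HandJointUtils.TryGetJointPose(TrackedHandJoint.Palm, hand,
                                               out MixedRealityPose palm)
                && zone.bounds.Contains(palm.Position))
            {
                // Anchor the weapon's Unity pose to the box from here on.
                WeaponPickedUp = true;
                Debug.Log($"{hand} hand entered the pickup zone");
            }
        }
    }
}
```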
I am also looking into the use of spatial awareness / image recognition / pose estimation to accomplish this task, as well as researching the use of a tracking base station to determine object position (similar to HTC Vive / Oculus Rift).
Any suggestions, resources, and assistance is greatly appreciated here. Thank you!
EDIT UPDATE 11/30/2020:
Hernando commented below suggesting QR codes; assume for this project that we are not allowed to use QR codes, and that we want orientation data that is as precise as possible. Thanks Hernando!
For locating the object, a QR code would definitely be the recommendation for quick detection with the HL2 device. I have seen the QR approach in multiple venues for VR location-based (LBE) experiences like the one described here; the QR code simply sits on top of the device.
Otherwise, if the controller in question supports Bluetooth, you may be able to pair it with the device, and if the controller has location information it could transmit its position. Based on all of the above, if QR codes are out of the equation this would be a custom solution, highly dependent on the controller's capabilities. I have seen some controller solutions start the user experience by having the player touch the floor to get an initial reference point, or by always picking the gun up from a specific real-world location, as some location-based experiences do.
Good luck with the project; this is just my advice from working with VR systems.
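A minimal sketch of that fixed pickup-spot idea, assuming the gun's IMU streams orientation and the position only needs to be seeded once (GunTracker, pickupSpot, and OnImuSample are hypothetical names):

```csharp
using UnityEngine;

// Seeds the physical gun's Unity position from a known real-world
// pickup spot, then relies on the IMU for orientation. With no
// positional tracking the position goes stale as the gun moves, so
// re-run Calibrate() whenever the gun returns to the spot.
public class GunTracker : MonoBehaviour
{
    [SerializeField] private Transform pickupSpot; // pre-placed in the scene

    private Vector3 gunPosition;

    // Call when the player confirms the gun is resting at the spot.
    public void Calibrate() => gunPosition = pickupSpot.position;

    // Call with each new IMU orientation sample.
    public void OnImuSample(Quaternion imuRotation)
    {
        transform.SetPositionAndRotation(gunPosition, imuRotation);
    }
}
```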
Is the controller allowed to have multiple QR codes pasted on it? If so, we recommend using QR code tracking to assist in locating your controller. If you prefer to use image recognition, object detection, or other technologies, that requires an Azure service or some third-party library; for more information, please see the Computer Vision documentation.

3D AR Markers with Project Tango

I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough, the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 pro and are trying to figure out if Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location, but the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images the way Vuforia does.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. That definitely sounds worth testing.
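For context, the scene usually disappears on marker loss because the sample trackable handler disables the content's renderers. With Extended Tracking enabled, the device keeps estimating its own pose after the marker leaves view, so a minimal tweak is to keep the content visible. A rough sketch; the OnTrackingChanged hookup is hypothetical, wire it to whatever found/lost events your Vuforia version raises:

```csharp
using UnityEngine;

// Keeps AR content visible when the marker is lost, instead of the
// default behaviour of hiding it. Only sensible with Extended
// Tracking on, since that is what keeps the pose estimate valid.
public class StayVisibleOnLoss : MonoBehaviour
{
    [SerializeField] private Transform content;
    [SerializeField] private bool extendedTrackingEnabled = true;

    // Wire this to your trackable's found/lost events.
    public void OnTrackingChanged(bool isTracked)
    {
        // Default sample behaviour: visibility follows isTracked.
        bool visible = isTracked || extendedTrackingEnabled;
        foreach (var r in content.GetComponentsInChildren<Renderer>(true))
            r.enabled = visible;
    }
}
```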
Option B) Tango
Tango doesn't natively support markers other than AR Tags and QR codes.
It also doesn't support the area-learnt scene moving (much). If your 3D-printed objects stay stationary, you could scan an ADF and should get good-quality tracking; if all the objects stay still, you should see a little drift, but not too much.
However, if you move those 3D-printed objects, it will definitely throw the tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with AR markers using Tango's AR Marker detection (unsure: is that what you already tried?). If that approach doesn't work, I think your only Tango option is to add more features/lighting etc. to the space to make the tracking more solid.
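Whichever detection API you end up with, the localization step itself is just pose composition: detect the marker once, record the exhibit's offset from it, and let motion tracking carry the camera afterwards. A generic Unity sketch with hypothetical inputs (markerPoseInWorld from a one-off detection, exhibitOffset measured in the marker's local frame):

```csharp
using UnityEngine;

// One-shot localization: once a marker detection has recovered the
// marker's world pose, place the exhibit content relative to it and
// let the device's motion tracking keep everything registered.
public static class MarkerLocalization
{
    public static Pose LocalizeExhibit(Pose markerPoseInWorld, Pose exhibitOffset)
    {
        // Compose the marker's world pose with the exhibit's offset.
        Vector3 position = markerPoseInWorld.position +
                           markerPoseInWorld.rotation * exhibitOffset.position;
        Quaternion rotation = markerPoseInWorld.rotation * exhibitOffset.rotation;
        return new Pose(position, rotation);
    }
}
```

Drift correction on marker reacquisition is then just re-running this with the fresh marker pose and snapping (or lerping) the content to the result.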
Overall, natural feature tracking with Vuforia (or marker tracking, for robustness) sounds better suited to what I think your project is doing, as users will mostly be looking at the AR tag / NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.

iPhone app 3d engine or not

I am developing a simple iPhone app, which:
retrieves data from the server
presents the data
In order to present the data better I want to add nice 3d dynamic objects, for example:
a car with spinning wheels next to a car-sales bar chart.
a power plant with smoke coming out of the chimney next to CO2 emission numbers.
The questions are:
How do I work with the designer on this, what output should he provide for me (format)?
How do I put it in my application, should I involve some 3d engine/framework?
The team behind cocos2d has just announced cocos3d, and this seems really promising.
The first public beta can be downloaded here:
http://www.cocos2d-iphone.org/archives/1274
You can use cocos2d for the iPhone and fake the 3d with the art. So you'd have a car that is drawn to look 3d, but you're only using 2d to display it. The effects you want to achieve don't require full 3d models.
You may also have a look at this one I discovered recently:
http://nineveh.gl/
It's pretty new, but well documented and with video demos.

Starting Game dev on iPhone/iPad - learning path?

I'm beginning in iPhone/iPad game dev and I'm trying to map out a learning path.
The basic features I would like to learn (after basic iPhone SDK programming) are:
using a board-like interface where I can move pawns with my fingers
detecting where a pawn was moved, which triggers events in the game
The board will consist of 6 tiles that may be arranged randomly when starting the game: can I use an SDK component with a delegate and data source to determine where the pawn was left and on which tile?
I need to use dice (what kind of library could I use?)
...
Do you have any ideas about where to start? ;-)
Many thanks,
Tib.
Jens Alfke has provided the GeekGameBoard framework for building Mac or iPhone board and card games. He talks about it here. I'd highly recommend that as a starting point for an inexperienced developer looking to create a board game.
Additionally, you might look at the answers to the similar question "iPhone board game: OpenGL ES or CoreGraphics?". As I recommend there, read up on Core Animation (what GeekGameBoard uses for layout and animation) for providing your layered graphics and animation, rather than jumping straight into the more complex OpenGL ES.