How to get the ARPointCloud count from the depth camera on the front of the iPhone X - arkit

When I use the code
session.currentFrame?.rawFeaturePoints?.points.count
to get point cloud data from the depth camera on the front of the iPhone X, it turns out to be nil.
Does anyone know what I am missing?
I don't know how to do it. Please advise.

The ARPointCloud is a set of points representing intermediate results of the scene analysis ARKit uses to perform world tracking. Since the front camera of the iPhone X only supports face tracking, and not world tracking, that is probably why you are getting nil.
Have you tried the back camera?
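If it helps, here is a minimal sketch of how those feature points are normally read from a rear-camera world-tracking session (assuming an ARSCNView-based setup; the class and outlet names are just illustrative):

    import ARKit
    import UIKit

    class WorldTrackingViewController: UIViewController, ARSessionDelegate {
        @IBOutlet var sceneView: ARSCNView!   // assumed outlet; any ARSession is read the same way

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            // rawFeaturePoints is only populated by world tracking, which uses the rear camera
            let configuration = ARWorldTrackingConfiguration()
            sceneView.session.delegate = self
            sceneView.session.run(configuration)
        }

        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            // Still nil for the first few frames, until ARKit has analysed enough of the scene
            if let count = frame.rawFeaturePoints?.points.count {
                print("Feature points: \(count)")
            }
        }
    }

With a face-tracking configuration (the front camera) the same property stays nil, which matches what you are seeing.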

Feature point detection requires an ARWorldTrackingConfiguration session,
as quoted from the Apple documentation (https://developer.apple.com/documentation/arkit/arframe/2887449-rawfeaturepoints).
You will not be able to get feature points from the front camera.
If you still want to use the front camera to get depth data, use capturedDepthData instead:
https://developer.apple.com/documentation/arkit/arframe/2928208-captureddepthdata
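For reference, a rough sketch of reading that depth data from a face-tracking session on the TrueDepth camera (again, the class and outlet names are just illustrative):

    import ARKit
    import CoreVideo
    import UIKit

    class FaceDepthViewController: UIViewController, ARSessionDelegate {
        @IBOutlet var sceneView: ARSCNView!   // assumed outlet

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            // Face tracking is only supported on devices with a TrueDepth camera
            guard ARFaceTrackingConfiguration.isSupported else { return }
            sceneView.session.delegate = self
            sceneView.session.run(ARFaceTrackingConfiguration())
        }

        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            // capturedDepthData is AVDepthData from the front TrueDepth camera;
            // it updates at a lower rate than the video frames, so it can be nil on some frames
            if let depthData = frame.capturedDepthData {
                let map = depthData.depthDataMap
                print("Depth map: \(CVPixelBufferGetWidth(map)) x \(CVPixelBufferGetHeight(map))")
            }
        }
    }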
I hope it helps.

Related

How do I track the Unity position of physical objects the player is interacting with using Hololens2 hand tracking data?

Basically, I am working on a mixed reality experience using the Hololens2 and Unity, where the player has several physical objects they need to interact with, as well as virtual objects. One of the physical objects is a gun controller that has an IMU to detect acceleration and orientation. My main challenge is this: how do I get the physical object's position in Unity, in order to accurately fire virtual projectiles at a virtual enemy?
My current idea is to have the player position the physical weapon inside a virtual bounding box at the start of the game. I can then track the position of the virtual box through collision with the player's hands when they pick up the physical controller. Does OnCollisionEnter, or a similar method, work with the player's hands? (see attached image)
I am also looking into the use of spatial awareness / image recognition / pose estimation to accomplish this task, as well as researching the use of a tracking base station to determine object position (similar to HTC Vive / Oculus Rift ).
Any suggestions, resources, and assistance is greatly appreciated here. Thank you!
EDIT UPDATE 11/30/2020:
Hernando commented below suggesting QR codes; assume for this project that we are not allowed to use QR codes, and that we want orientation data that is as precise as possible. Thanks Hernando!
For locating the object, a QR code would definitely be the recommendation for quick detection with the HL2 device. I have seen the QR approach used in multiple venues for VR location-based entertainment (LBE) experiences like the one described here; the QR code simply sits on top of the device.
Otherwise, if the controller in question supports Bluetooth, you could possibly pair it with the device, and if the controller has location information it could transmit its position. Based on everything above, this would be a custom solution and highly dependent on the controller's capabilities if QR codes are out of the equation. I have seen some controller solutions start the user experience by doing something like touching the floor to get an initial reference point, or alternatively always picking the gun up from a specific location in the real world, as some location-based experiences do before starting.
Good luck with the project; this is just my advice from working with VR systems.
Is the controller allowed to have multiple QR codes attached? If so, we recommend you use QR code tracking to assist in locating your controller. If you prefer to use image recognition, object detection, or other technologies, you will need an Azure service or a third-party library; for more information, please see the Computer Vision documentation.

AR floor recognition and interactions

I'm new to AR development. I want to create an AR demo app, but I am facing some problems.
Could anyone help me solve the problems below:
1. Is it possible to recognize the floor if I want to place a big 3D object (around 3 meters x 1.5 meters)?
2. How can I tap the screen to place only one object on the floor? After that, can I disable or enable plane detection (with buttons) while the 3D object I added still appears, so that I can interact with it?
3. After adding a 3D object, how can I interact with it, e.g. rotation or scaling?
Could you share tutorials or other links for solving these problems?
Thank you very much.
You're in luck: there is a video that shows how to do almost everything exactly the way you want. If you want to read about the components that enable this, I'll also give you the link to the official Vuforia documentation, which goes over each component and how it works.
Video link: https://www.youtube.com/watch?v=0O6VxnNRFyg
Vuforia link: https://library.vuforia.com/content/vuforia-library/en/features/overview.html

How to fix objects in an indoor environment with ARCore?

I need to insert some virtual objects into an indoor environment, but I need the position of these objects to be fixed. I have already tried using markers with Vuforia, but it is complicated and takes time to recognize them. I'm thinking of using Google's ARCore. Does anyone know if this is possible and, if so, how to do it?
I'm using Unity to do this. Can someone help me?
ARCore places the camera relative to the detected plane, so you will need a plane at some point so the application can locate the camera in the scene.
HelloAR shows how this works; you can test it in the Unity editor and see how the camera moves around the points and the detected plane.
One solution to your problem may be ARCore's image detection combined with plane detection: you place an image on the floor, and once the image is detected your objects stay in place while you move around. You will still need a detected plane, not just image detection, because otherwise you will lose the objects once the camera loses sight of the image.

How to get the distance from the Daydream controller to a pointed game object in Unity?

Following the google-vr sample, I managed to add a camera and controller to my scene.
The next thing I need is to get the distance between my controller and any pointed game object in the scene.
After searching for a while, I cannot find any tutorial or information on how to get the distance.
So, is there any up-to-date working tutorial on how to do this? (Many tutorials on the internet are outdated since Google updates its API so frequently.)
Or is it actually a simple task, i.e. can I get the value from GvrPointerInputModule.Pointer / GvrLaserPointer / some other GVR class?
Thanks in advance~
You need to do raycasts from the controller and measure the distance between the hit location and the origin of the raycast. I think Unity raycasts can return this distance built in.
Just as I suspected, GvrLaserPointer is the answer.
If its CurrentRaycastResult.gameObject is not null, then the laser is intersecting with something. Then, we can get the intersection point from CurrentRaycastResult.worldPosition.
Using this point, we can easily calculate the distance.
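For completeness, the calculation itself is just the Euclidean distance between the pointer's origin (the controller/laser position) and that world-space hit point; in generic notation (not tied to any particular GVR field):

    d = \lVert p_\text{hit} - p_\text{origin} \rVert = \sqrt{(x_h - x_o)^2 + (y_h - y_o)^2 + (z_h - z_o)^2}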
Note: Just in case anyone is failing with this method, like I did before, check your raycasting setup. Make sure that the Raycaster Event Mask in GvrPointerPhysicsRaycaster only includes the desired layers. And if you have any canvas in screen space, check its Blocking Mask in the Graphic Raycaster. It is set to Everything by default, and your pointer may keep intersecting with the canvas, resulting in a "weird" intersection point. This was the cause of my problem; to fix it, I selected Nothing for the Blocking Mask, and voila.

Can Vuforia track spatial location when using targetless device tracking?

I am trying to wrap my head around Vuforia's capabilities. I want to make an app which lets me place a 3D object into a camera view and have that 3D object stick to the world. I've been learning how to use Vuforia in Unity3D, and Vuforia seems to be slightly capable of this, but is severely limited by its craving for "Targets". It doesn't seem to be able to do much if I don't give it some sort of target.
One workaround I've found is to set the ARCamera's World Center Mode to DEVICE_TRACKING. This seems to let me place a 3D object into the world (in Unity) and have this object overlay onto the camera feed, almost making it seem like it's anchored to the real world. This doesn't work perfectly, though: it tracks properly when I angle the device up/down/left/right (rotation), but it does not seem to track the device's translational motion; that is, when I move the device forward/back/left/right, the overlaid object doesn't get closer/farther, nor does it rotate as I move around it.
Is it possible to get this sort of tracking out of Vuforia, or am I better off switching to something like Google Tango?
The difficulty with setting World Center Mode to CAMERA in Vuforia is that 3D objects apparently rotate around the camera based on its accelerometer/gyroscope changes. This doesn't allow objects to be anchored to the environment; instead, they follow the camera.
Kudan is a good markerless tracking option.