Can ARKit and ARCore use a Beacon as an Anchor?

We’re trying to anchor a model using an absolute coordinate system.
I’m using a UWB beacon system to know where devices are, and I’m now trying to tell ARKit (or ARCore) where the origin anchor is, using a beacon’s location as that origin.
Is there an anchor type that will accept a real-world beacon as its source?

iBeacon vs UWB
iBeacon is a low-precision technology based on the Received Signal Strength Indicator (RSSI). With iBeacon, a device's position can only be estimated to an accuracy of 2.5 to 3.5 meters (in other words, monstrously inaccurate positioning).
A more interesting approach is the "collaboration" of the NearbyInteraction and ARKit frameworks, based on the U1 (Ultra Wideband) chip. This trio will give you incomparably more precise positioning.
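As far as I know there is no built-in anchor type that takes a beacon directly as its source, but on iOS 16+ the two frameworks can be wired together. Below is a minimal sketch, assuming a U1-equipped peer whose NIDiscoveryToken has already been exchanged out of band (e.g. over your own networking layer); the class name BeaconAnchorPlacer and the token handling are illustrative, not an official API pattern.

```swift
import ARKit
import NearbyInteraction

// Sketch: fuse NearbyInteraction (UWB ranging) with ARKit so a nearby
// U1-equipped peer can be turned into an ARAnchor. Requires iOS 16+.
final class BeaconAnchorPlacer: NSObject, NISessionDelegate {
    let arSession: ARSession
    let niSession = NISession()

    init(arSession: ARSession, peerToken: NIDiscoveryToken) {
        self.arSession = arSession
        super.init()
        niSession.delegate = self

        // Camera assistance lets NearbyInteraction refine UWB ranging with
        // ARKit's visual tracking and report a world-space transform.
        let config = NINearbyPeerConfiguration(peerToken: peerToken)
        config.isCameraAssistanceEnabled = true
        niSession.setARSession(arSession)   // share the existing ARSession
        niSession.run(config)
    }

    func session(_ session: NISession, didUpdate nearbyObjects: [NINearbyObject]) {
        guard let object = nearbyObjects.first,
              // worldTransform(for:) only returns a value once camera assistance has converged
              let transform = session.worldTransform(for: object) else { return }
        // Place an anchor at the UWB peer's position in ARKit world space.
        arSession.add(anchor: ARAnchor(name: "uwbBeacon", transform: transform))
    }
}
```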

Related

Does plane detection increase the accuracy of Augmented Image Tracking?

I have an application for visualizing a scan of a room, and I am using an Augmented Image to align my points to the real world. I am not using plane detection in my application, so it is optional for me.
However, I have some questions regarding tracking accuracy, because the accuracy of my alignment currently depends solely on how accurately I can detect the center position and corners of the augmented image.
Does using plane detection increase the accuracy of detecting an image's position in an Augmented Image application?
Does it also affect the accuracy of tracking and ARCore's environmental understanding? Users can move around the room to inspect the scan, and when I tested my application with and without plane detection, my alignment appeared to change over time with plane detection enabled, because ARCore's environmental understanding causes a shift in the anchors. This does not happen as much without plane detection.
Thanks in advance for any help!

iPhone 4 / iOS 5: is there a physics engine to convert CMDeviceMotion events into displacement?

I'm running a CMDeviceMotion processing queue on iPhone 4, which gives me user-induced acceleration, along with the rotation rates. I can filter this data myself.
What I'm trying to understand is how to convert these discrete samples of acceleration, device attitude, and rotation rate into a three-dimensional displacement. This is possible with classical mechanics for straight lines, but I'm thinking of more advanced calculations, for example curves. This could be handled with GPS, but I'm looking for much better resolution, let's say within 10 feet. GPS under a clear sky has an average accuracy of about 30 feet.
Is there some sort of physics engine or physics processor that can take a set of device motion or acceleration/turn-rate events and give me a distance of how far the phone is from its original location?
I know that there are various pedometer and bike GPS trackers for the iPhone. Are they based on GPS, or do they actually do the acceleration integration I'm describing?
Unfortunately, the acceleration integration you are describing won't work on its own: integrating noisy accelerometer data twice makes the position estimate drift away within seconds.
However, you may improve the accuracy by fusing it with the GPS signal and/or making domain-specific assumptions.
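To make the drift problem concrete, here is roughly what the naive double integration looks like with Core Motion (a sketch only, assuming a playground-style context; the 100 Hz update rate and the frame conventions are illustrative, and depending on the rotation-matrix convention you may need its transpose):

```swift
import CoreMotion

// Naive dead reckoning: rotate userAcceleration into the reference frame and
// integrate twice. This drifts badly within seconds and is shown only to
// illustrate the approach being discussed.
let motionManager = CMMotionManager()
var velocity = SIMD3<Double>(repeating: 0)
var position = SIMD3<Double>(repeating: 0)
let dt = 1.0 / 100.0
let g = 9.81

motionManager.deviceMotionUpdateInterval = dt
motionManager.startDeviceMotionUpdates(using: .xArbitraryZVertical, to: .main) { motion, _ in
    guard let m = motion else { return }
    // userAcceleration is in g's, expressed in the device frame; rotate it into
    // the reference frame using the attitude rotation matrix
    // (use the transpose if the axes come out mirrored for your setup).
    let r = m.attitude.rotationMatrix
    let a = m.userAcceleration
    let worldAccel = SIMD3<Double>(
        (r.m11 * a.x + r.m12 * a.y + r.m13 * a.z) * g,
        (r.m21 * a.x + r.m22 * a.y + r.m23 * a.z) * g,
        (r.m31 * a.x + r.m32 * a.y + r.m33 * a.z) * g
    )
    velocity += worldAccel * dt   // first integration: acceleration -> velocity
    position += velocity * dt     // second integration: velocity -> displacement
    // `position` accumulates error quadratically; fuse with GPS to bound the drift.
}
```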

iOS: Can Core Motion be used to detect larger movements over distance?

I have a GPS app, and I would like to detect whether the user is standing still and not moving. Using Core Location works for this, but it is sometimes inaccurate because successive location updates jump around and give the illusion of speed and motion.
So, I am wondering if, in addition to that, I can also use Core Motion. Is it a good idea to use it to detect motion such as someone walking, running, or driving, and to know when they are no longer doing that motion? Or is Core Motion only for small movements such as tilting the device or lifting it to your ear?
I wanted to tell others who visit this question what I've learned and what I think about this approach.
I have been doing some research of my own to find out whether this is possible and, more importantly, even if it is, what the battery consumption and accuracy of the detected location change would be. For Android, this question was asked quite some time ago, and the answer provides links to a Google Tech Talk. At 23:20, the speaker talks about how difficult it is to achieve this and the accuracy you can expect in the results.
Even though I have come to realize that the battery consumption from sensors on the iPhone is a little lower than on most Android phones, I still think this is a costly affair in terms of accuracy and battery consumption.
You can use the GPS together with the sensor readings to distinguish between walking, running, etc., if you combine the tilt-angle frequency change with the GPS speed information (you need to do some work to get some of this info, of course, but that's the way to do it).
You are talking about four different measurements from four different sensors (technically more than four, but still):
Latitude & longitude - from Core Location, which uses a mix of GPS and cell-tower triangulation.
Accelerometer - measures acceleration (including gravity), from which you can derive the device's tilt in 3D space.
Gyroscope - measures the rotation rate of the device around its own axes.
Magnetometer - tells you which direction the device is pointing with respect to north, south, east, and west.
Of all these, I think only latitude & longitude are of use to you. Basically, what you do is relax the sensitivity (i.e. the update rate from the sensor) a bit. With some tweaking you should be able to tell with good accuracy whether a person is standing still or moving, roughly as in the sketch below.
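A rough sketch of that Core Location approach (the distanceFilter and speed threshold values are purely illustrative):

```swift
import CoreLocation

// Illustrative-only sketch: relax the update rate with distanceFilter and treat
// low reported speeds as "standing still". Authorization requests are omitted.
final class StationaryDetector: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyNearestTenMeters
        manager.distanceFilter = 10   // metres; only report meaningful movement
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let loc = locations.last, loc.horizontalAccuracy >= 0 else { return }
        // speed is in m/s and is -1 when unknown; 0.5 m/s is an arbitrary threshold
        let isMoving = loc.speed > 0.5
        print(isMoving ? "moving" : "probably standing still")
    }
}
```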

iPhone 4 Gyroscope/GPS versus Accelerometer/GPS/Compass

I am about to use the iPhone 4's gyroscope/GPS in a game to detect rotation and translation. As far as I know, the gyroscope can be used to detect rotations around all 3 axes.
But rotations, at least in the horizontal plane, can be detected with the compass; tilts can be detected with the accelerometer; and positions with the GPS.
Can a combination of compass/accelerometer/GPS create the same level of detection of gyroscope/GPS? (I am thinking of allowing this combination for people without iPhone 4).
Will this work perfectly?
The precision of the gyroscope and accelerometer sensors is much greater than the precision of the compass and GPS. The compass and GPS are for finding out where the device is on the globe, and the gyroscope and accelerometer are good for finding out where the device has moved in the last few milliseconds.
Therefore it depends upon what you're trying to control with the device's movement. Trying to simulate a gyroscope input to control a 3D simulation (like the Jenga game Jobs showed in the keynote that introduced the iPhone 4) will not work perfectly with just the compass/accelerometer/GPS. Figuring out if the device is pointed at the grocery store on the west side of the street instead of the furniture store on the east side of the street in an augmented reality game will work perfectly with just the compass/accelerometer/GPS.
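If it helps, here is a hedged sketch of that fallback idea (in Swift, for brevity): use the gyroscope's rotation rate when the hardware has one, and fall back to coarse compass-heading changes otherwise. The polling interval and print statements are illustrative only.

```swift
import Foundation
import CoreMotion
import CoreLocation

// Precise rotation rate from the gyro when available, otherwise coarse
// horizontal rotation from compass heading changes.
let motionManager = CMMotionManager()
let locationManager = CLLocationManager()

if motionManager.isDeviceMotionAvailable && motionManager.isGyroAvailable {
    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let rate = motion?.rotationRate else { return }
        // rad/s around each axis -- smooth enough to drive a 3D simulation
        print("gyro rotation rate:", rate.x, rate.y, rate.z)
    }
} else if CLLocationManager.headingAvailable() {
    locationManager.startUpdatingHeading()
    // Poll the latest heading; only yaw (rotation about gravity) is available,
    // it updates slowly, and it is nowhere near gyro precision.
    _ = Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { _ in
        if let heading = locationManager.heading {
            print("compass heading:", heading.trueHeading)
        }
    }
}
```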

Transform device orientation to world frame in objective c

I'd like to transform the yaw, pitch and roll of the iPhone from the body frame to the world frame, i.e. azimuth, pitch and roll. On Android this is easily done with the
SensorManager.remapCoordinateSystem() and SensorManager.getOrientation() methods, as detailed here: http://blog.mysticlakesoftware.com/2009/07/sensor-accelerometer-magnetics.html
Are similar methods available for the iPhone, or can someone point me in the right direction on how to do this transformation?
Thanks
The accelerometer is good enough to get the gravity direction vector in the device coordinate system, at least while the device is held still.
The next step toward full device orientation is to use CLLocationManager to get the true north vector in the device coordinate system.
With the normalized true-north vector and the gravity vector, you can easily get all the other directions using dot and cross products, roughly as in the sketch below.
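For example, the cross-product construction looks roughly like this (pure vector math with simd; how you obtain the gravity and raw north vectors in device coordinates, e.g. from CMDeviceMotion and the magnetometer, is left out):

```swift
import simd

// Given a gravity vector and a (true or magnetic) north vector, both expressed
// in device coordinates, build an orthonormal world basis in device coordinates.
func worldAxes(gravity: simd_double3, northRaw: simd_double3)
    -> (down: simd_double3, east: simd_double3, north: simd_double3) {
    let down  = simd_normalize(gravity)                    // gravity points toward the ground
    let east  = simd_normalize(simd_cross(down, northRaw)) // perpendicular to both
    let north = simd_cross(east, down)                     // north projected onto the horizontal plane
    return (down, east, north)
}

// The three axes, stacked as rows, form the rotation matrix
// from the device frame to this world (east/north/down-style) frame.
```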
The accelerometer (UIAccelerometer) will give you a vector pointing straight down, measured at the device's accelerometer chip. If you can assume that the device is being held fairly steady (i.e., that you're not reading acceleration from actual movement), then you can use simple trig (acos(), asin()) to determine the device's orientation.
If you're worried that the device might be moving, you can wait for several accelerometer readings in a row that are nearly the same. You can also filter out any vector whose length differs from 1.0 by more than some TOLERANCE you define.
In more general terms, the device has no way of knowing its orientation other than by "feeling" gravity, which is done via the accelerometer. The challenge you'll face is that the accelerometer feels all acceleration, of which gravity is only one possible source.
If you're targeting a device with a gyroscope (iPhone 4 at the time of writing), the Core Motion framework's CMMotionManager can supply you with CMDeviceMotion updates. The framework does a good job of processing the raw sensor data and separating gravity and userAcceleration for you. You're interested in the gravity vector, which defines pitch and roll with a little trig. To add yaw (device rotation around the gravity vector), you'll also need to use the Core Location framework's CLLocationManager to get compass heading updates.
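A sketch of that last approach in Swift; the exact angle formulas depend on which orientation you define as "zero", so treat them as illustrative:

```swift
import Foundation
import CoreMotion
import CoreLocation

// Pitch and roll from CMDeviceMotion's separated gravity vector, yaw from the compass.
let motionManager = CMMotionManager()
let locationManager = CLLocationManager()

locationManager.startUpdatingHeading()
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let g = motion?.gravity else { return }
    // gravity is a unit vector in the device frame; a little trig gives pitch and roll
    let pitch = atan2(-g.y, sqrt(g.x * g.x + g.z * g.z))   // tilt forward/back
    let roll  = atan2(g.x, -g.z)                            // tilt left/right (convention-dependent)
    // yaw (rotation around gravity) comes from the compass
    let yaw = locationManager.heading?.trueHeading ?? 0     // degrees from true north
    print(pitch, roll, yaw)
}
```

Alternatively, on devices with a magnetometer you can start device motion with the .xTrueNorthZVertical reference frame, in which case CMAttitude's yaw is already reported relative to true north.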