According to Sensiya (http://www.sensiya.com/), their SDK can detect activities like walking, running, sitting, driving, etc.
I guess acceleration data can be used by a classifier to detect running and walking.
But sitting and driving look quite similar, so what other techniques do they use to distinguish driving from sitting? Does anyone have any insight?
Many thanks
For full disclosure, I work at Sensiya. Many algorithms that recognize the device user's activity rely mainly on analysis of accelerometer data, as you mentioned, but if you want to fine-tune and expand the types of activities you track, I suggest also using the device's other sensors like proximity, magnetic field, etc., or just use our tools ;)
For the specific driving and still recognition technique:
Differentiating the still and driving states is a tough task. A simple solution would be to recognize that the device is in a still state while its GPS location keeps changing, although this solution is not efficient in terms of battery life. Our driving recognition tries to save battery during this kind of recognition, and we managed to find a slight difference between a device's perfectly still state and its driving still state in terms of the real-time data you can collect from the device.
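To make that simple (battery-hungry) approach concrete, here is a minimal Swift sketch of the idea, not Sensiya's actual algorithm: treat the device as "still" when the accelerometer barely deviates from 1 g, and guess "driving" when it is still but the GPS reports significant speed. The class name and thresholds are assumptions for illustration.

import CoreMotion
import CoreLocation

final class DrivingVsSittingEstimator: NSObject, CLLocationManagerDelegate {
    private let motion = CMMotionManager()
    private let location = CLLocationManager()
    private var stillness = 0.0   // smoothed deviation of |acceleration| from 1 g

    func start() {
        motion.accelerometerUpdateInterval = 0.1
        motion.startAccelerometerUpdates(to: .main) { data, _ in
            guard let a = data?.acceleration else { return }
            let deviation = abs(sqrt(a.x * a.x + a.y * a.y + a.z * a.z) - 1.0)
            self.stillness = 0.9 * self.stillness + 0.1 * deviation   // crude low-pass filter
        }
        location.delegate = self
        location.requestWhenInUseAuthorization()   // needs the usual Info.plist usage string
        location.desiredAccuracy = kCLLocationAccuracyHundredMeters
        location.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let last = locations.last else { return }
        let deviceLooksStill = stillness < 0.02   // threshold is a guess, tune empirically
        let movingFast = last.speed > 5           // m/s; speed is -1 when unknown
        if deviceLooksStill && movingFast {
            print("probably driving")
        } else if deviceLooksStill {
            print("probably sitting still")
        }
    }
}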
This is good material to start with:
dialnet.unirioja.es/descarga/articulo/3954593.pdf
What is the best way, if any, to use Apple's new ARKit with multiple users/devices?
It seems that each device gets its own scene understanding individually. My best guess so far is to use the raw feature point positions and try to match them across devices to glue together the different points of view, since ARKit doesn't offer any absolute frame of reference.
=== Edit 1: Things I've tried ===
1) Feature points
I've played around with the exposed raw feature points and I'm now convinced that in their current state they are a dead end:
they are not raw feature points: they only expose positions, but none of the attributes typically found in tracked feature points
their instantiation doesn't carry over from frame to frame, nor are the positions exactly the same
the reported feature points often change a lot even when the camera input is barely changing, with many of them appearing or disappearing
So overall I think it's unreasonable to try to use them in any meaningful way: if you can't get good point matching within one device, there's little hope across several.
An alternative would be to implement my own feature point detection and matching, but that would be more like replacing ARKit than leveraging it.
2) QR code
As @Rickster suggested, I've also tried identifying an easily recognizable object like a QR code and deriving the change of reference frame from that fixed point (see this question). It's a bit difficult and required some OpenCV to estimate the camera pose. But more importantly, it's very limiting.
As some newer answers have added, multiuser AR is a headline feature of ARKit 2 (aka ARKit on iOS 12). The WWDC18 talk on ARKit 2 has a nice overview, and Apple has two developer sample code projects to help you get started: a basic example that just gets 2+ devices into a shared experience, and SwiftShot, a real multiplayer game built for AR.
The major points:
ARWorldMap wraps up everything ARKit knows about the local environment into a serializable object, so you can save it for later or send it to another device. In the latter case, "relocalizing" to a world map saved by another device in the same local environment gives both devices the same frame of reference (world coordinate system).
Use the networking technology of your choice to send the ARWorldMap between devices: AirDrop, cloud shares, carrier pigeon, etc all work, but Apple's Multipeer Connectivity framework is one good, easy, and secure option, so it's what Apple uses in their example projects.
All of this gives you only the basis for creating a shared experience — multiple copies of your app on multiple devices all using a world coordinate system that lines up with the same real-world environment. That's all you need to get multiple users experiencing the same static AR content, but if you want them to interact in AR, you'll need to use your favorite networking technology some more.
Apple's basic multiuser AR demo shows encoding an ARAnchor and sending it to peers, so that one user can tap to place a 3D model in the world and all others can see it. The SwiftShot game example builds a whole networking protocol so that all users get the same gameplay actions (like firing slingshots at each other) and synchronized physics results (like blocks falling down after being struck). Both use Multipeer Connectivity; a rough sketch of both flows follows below.
(BTW, the second and third points above are where you get the "2 to 6" figure from @andy's answer — there's no limit on the ARKit side, because ARKit has no idea how many people may have received the world map you saved. However, Multipeer Connectivity has an 8-peer limit. And whatever game / app / experience you build on top of this may have latency / performance scaling issues as you add more peers, but that depends on your technology and design.)
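Here is a rough Swift sketch of those flows (assumed names like mcSession; Apple's actual sample projects differ in the details): share a world map over Multipeer Connectivity so peers can relocalize to it, and share a placed anchor so everyone sees the same content.

import ARKit
import MultipeerConnectivity

// Sender: grab the current world map, archive it, and send it to connected peers.
func shareWorldMap(from session: ARSession, over mcSession: MCSession) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else {
            print("Can't get world map: \(error?.localizedDescription ?? "unknown")")
            return
        }
        do {
            // ARWorldMap supports NSSecureCoding, so it can be archived directly.
            let data = try NSKeyedArchiver.archivedData(withRootObject: map, requiringSecureCoding: true)
            try mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
        } catch {
            print("Failed to share world map: \(error)")
        }
    }
}

// Receiver: relocalize to the received map so both devices share a coordinate system.
func startSession(with worldMapData: Data, in session: ARSession) {
    guard let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: worldMapData)
    else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}

// Anchor sharing: archive a tapped anchor and send it; peers add it to their own session.
func place(anchorNamed name: String, at transform: simd_float4x4,
           in session: ARSession, over mcSession: MCSession) throws {
    let anchor = ARAnchor(name: name, transform: transform)
    session.add(anchor: anchor)
    let data = try NSKeyedArchiver.archivedData(withRootObject: anchor, requiringSecureCoding: true)
    try mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
}

func addReceivedAnchor(_ data: Data, to session: ARSession) {
    if let anchor = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARAnchor.self, from: data) {
        session.add(anchor: anchor)   // render content for this anchor in the session delegate
    }
}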
Original answer below for historical interest...
This seems to be an area of active research in the iOS developer community — I met several teams trying to figure it out at WWDC last week, and nobody had even begun to crack it yet. So I'm not sure there's a "best way" yet, if even a feasible way at all.
Feature points are positioned relative to the session, and aren't individually identified, so I'd imagine correlating them between multiple users would be tricky.
The session alignment mode gravityAndHeading might prove helpful: that fixes all the directions to a (presumed/estimated to be) absolute reference frame, but positions are still relative to where the device was when the session started. If you could find a way to relate that position to something absolute — a lat/long, or an iBeacon maybe — and do so reliably, with enough precision... Well, then you'd not only have a reference frame that could be shared by multiple users, you'd also have the main ingredients for location based AR. (You know, like a floating virtual arrow that says turn right there to get to Gate A113 at the airport, or whatever.)
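For reference, setting that alignment is a one-liner; a tiny sketch, assuming sceneView is your ARSCNView:

import ARKit

let configuration = ARWorldTrackingConfiguration()
configuration.worldAlignment = .gravityAndHeading   // y is up, -z points toward true north
sceneView.session.run(configuration)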
Another avenue I've heard discussed is image analysis. If you could place some real markers — easily machine recognizable things like QR codes — in view of multiple users, you could maybe use some form of object recognition or tracking (a ML model, perhaps?) to precisely identify the markers' positions and orientations relative to each user, and work back from there to calculate a shared frame of reference. Dunno how feasible that might be. (But if you go that route, or similar, note that ARKit exposes a pixel buffer for each captured camera frame.)
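If you go the image-analysis route, the per-frame pixel buffer comes from the ARSessionDelegate callback, roughly like this (the marker detection itself is left out):

import ARKit

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let pixelBuffer: CVPixelBuffer = frame.capturedImage   // feed this to Vision / OpenCV / a ML model
    // frame.camera.intrinsics and frame.camera.transform are also available, which you'd
    // need to turn a detected marker pose into a shared frame of reference.
    _ = pixelBuffer
}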
Good luck!
Now, after the release of ARKit 2.0 at WWDC 2018, it's possible to make games for 2 to 6 users.
For this, you need to use ARWorldMap class. By saving world maps and using them to start new sessions, your iOS application can now add new Augmented Reality capabilities: multiuser and persistent AR experiences.
AR Multiuser experiences. Now you may create a shared frame of reference by sending archived ARWorldMap objects to a nearby iPhone or iPad. With several devices simultaneously tracking the same world map, you may build an experience where all users (up to 6) can share and see the same virtual 3D content (use Pixar's USDZ file format for 3D in Xcode 10+ and iOS 12+).
session.getCurrentWorldMap { worldMap, error in
    guard let worldMap = worldMap else {
        showAlert(error)   // e.g. surface the error to the user
        return
    }
    // Use the map inside the completion handler, where it is actually in scope:
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration)
}
AR Persistent experiences. If you save a world map and your iOS application then becomes inactive, you can easily restore it on the next launch of the app in the same physical environment. You can use ARAnchors from the resumed world map to place the same virtual 3D content (in USDZ or DAE format) at the same positions as in the previously saved session.
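A minimal persistence sketch along those lines (the file name and error handling are placeholders): archive the map to a file on save, and unarchive it into initialWorldMap on the next launch.

import ARKit

let mapURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("worldMap")

func save(_ worldMap: ARWorldMap) throws {
    let data = try NSKeyedArchiver.archivedData(withRootObject: worldMap, requiringSecureCoding: true)
    try data.write(to: mapURL, options: .atomic)
}

func restoreSession(into session: ARSession) throws {
    let data = try Data(contentsOf: mapURL)
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}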
These are not bulletproof answers, more like workarounds, but maybe you'll find them helpful.
All assume the players are in the same place.
DIY: ARKit sets up its world coordinate system quickly after the AR session has been started. So if you can have all players, one after another, put and align their devices at the same physical location and let them start the session there, there you go. Imagine the inside edges of an L-shaped square ruler fixed to whatever is available. Or any flat surface with a hole: hold the phone against the surface looking through the hole with the camera, then (re)init the session.
Medium: Spare the player from aligning the phone manually; instead detect a real-world marker with image analysis, just like @Rickster described.
Involved: Train a Core ML model to recognize iPhones and iPads and their camera location, like it's done with human faces and eyes. Aggregate the data on a server, then turn off ML to save power. Note: make sure your model is cover-proof. :)
I'm in the process of updating my game controller framework (https://github.com/robreuss/VirtualGameController) to support a shared controller capability, so all devices would receive input from the control elements on the screens of all devices. The purpose of this enhancement is to support ARKit-based multiplayer functionality. I'm assuming developers will use the first approach mentioned by diviaki, where the general positioning of the virtual space is defined by starting the session on each device from a common point in physical space, a shared reference, and specifically I have in mind being on opposite sides of a table. All the devices would launch the game at the same time and utilize a common coordinate space relative to physical size, and using the inputs from all the controllers, the game would remain theoretically in sync on all devices. Still testing. The obvious potential problem is latency or disruption in the network and the sync falls apart, and it would be difficult to recover except by restarting the game. The approach and framework may work for some types of games fairly well - for example, straightforward arcade-style games, but certainly not for many others - for example, any game with significant randomness that cannot be coordinated across devices.
This is a hugely difficult problem - the most prominent startup that is working on it is 6D.ai.
"Multiplayer AR" is the same problem as persistent SLAM, where you need to position yourself in a map that you may not have built yourself. It is the problem that most self driving car companies are actively working on.
Presume I have two or more iPhones connected to the same server.
Using the sensors built in the iPhone and any possible calculations based on their information, is there any way to tell which direction one phone is from another?
They would be in the same room, so GPS, with its fluctuations, would not work very well here.
I've tried to model two points on a graph using only their compass readings, but I do not think this will work alone. I could be wrong though.
You could set up a calibration phase in your program where you start each phone in an exact position, and then use the 6-axis motion data to continually calculate the current position (in all 6 axes). But the longer you run that calculation, the further from the true position you will drift, and eventually (given enough time) one phone could think it's in Canada and the other in Mexico.
So it could work for short-term spurts if you do a calibration every time you want to start.
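A toy Swift sketch of that dead-reckoning idea (an illustration under stated assumptions, not a production approach): double-integrate the gravity-removed acceleration into a position estimate. For simplicity it ignores device rotation (a real version would rotate userAcceleration into a fixed frame using the reported attitude), and the drift it accumulates is exactly why frequent recalibration is needed.

import CoreMotion

final class DeadReckoner {
    private let motion = CMMotionManager()
    private var velocity = SIMD3<Double>(0, 0, 0)
    private(set) var position = SIMD3<Double>(0, 0, 0)   // metres from the calibration point

    func start() {
        let dt = 0.01
        motion.deviceMotionUpdateInterval = dt
        motion.startDeviceMotionUpdates(to: .main) { deviceMotion, _ in
            guard let dm = deviceMotion else { return }
            // userAcceleration is gravity-removed, in units of g, in device coordinates;
            // this sketch assumes the device keeps a fixed orientation.
            let a = SIMD3(dm.userAcceleration.x, dm.userAcceleration.y, dm.userAcceleration.z) * 9.81
            self.velocity += a * dt               // integrate acceleration into velocity
            self.position += self.velocity * dt   // integrate velocity into position (drifts quickly)
        }
    }
}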
There is also the possibility of Bluetooth localization, but that would require at least 3 phones and sharing positional data between them. Or you could do Wi-Fi localization, but that has the same requirements as Bluetooth.
Long story short: if you want inch-level localization, it's not going to happen. Yard-level localization is possible, but not very usable.
As you already mentioned, GPS does not work very well when used inside buildings. Thus, it is not possible to get the direction, as you don't have two reliable positions.
Indoor localisation should be much easier with iOS 7 and location beacons... but this does not help much right now.
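For completeness, here is roughly what that iOS 7-era beacon ranging looks like (the UUID is a placeholder, and these APIs were later superseded in iOS 13):

import CoreLocation

final class BeaconRanger: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private let region = CLBeaconRegion(
        proximityUUID: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!,   // placeholder UUID
        identifier: "room-beacons")

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startRangingBeacons(in: region)
    }

    func locationManager(_ manager: CLLocationManager, didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        for beacon in beacons {
            // proximity is .immediate / .near / .far, accuracy is a rough estimate in metres
            print(beacon.proximity.rawValue, beacon.accuracy)
        }
    }
}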
I'll start with the question.
Is the BTLE RSSI a good way to indicate two devices' proximity to each other, or not? Does it only work with small devices like fobs, etc.?
The Issue:
I am currently looking at making an app that will use BTLE and allow connections based on proximity. In this regard it is much like the demo app that Apple showed in the Advanced Core Bluetooth keynote (when two devices are almost touching, they connect).
As I understand it, the proximity is determined from the RSSI value when the central discovers the peripheral. When I try this with two iPads, however, the signal seems too strong for this, and it is also too inconsistent to make an accurate stab at the proximity, as it doesn't show much correlation with the devices' actual distance.
I have tried the Apple sample code and that is similar in that the devices don't have to be close at all for the information to pass from one to another.
If only there was a way to reduce the signal strength of the peripheral device's advertisement...
Thanks in advance for any help.
Matthew Griffin's experience matches mine. However, when we can measure for a fair period of time, two things have helped us calibrate this better.
We did have to wrap a simple (Kalman) filter around the antenna orientation and the IMU to get a rough running estimate though, and this is not very CPU- or battery-light.
Using the IMU you get a fair idea of the distance/direction of travel, and if this is over a short period of time, we assume the other 'side' is stationary. This helps a lot to get a value for the 'current' orientation and to calibrate the current environment noise.
Likewise - do the same for rotations/position changes.
We've found that in general a re-orientation of the device is a better way to get direction, and that distance is only reliable for some 30 to 600 seconds after a 'move' calibration, and only if the device has not been rotated too much. And in practice one needs some 4-5 'other' devices, ideally not too mobile, to keep oneself dynamically calibrated.
However, the converse is quite reliable, i.e. we know when not to measure. The net result is that one can fairly well ascertain things like 'at the keyboard' and 'relocated'/moved away through a specific door/opening or direction. Likewise, measuring a field by randomly dancing through the room while changing orientation a lot does work well, once the receiver antenna lobes get somewhat worked out after a stationary period.
You are entirely correct that RSSI jumps wildly and randomly. You should retrieve your RSSI values every two seconds (any faster and you get a bunch of errors). Throw out RSSI values that spike by roughly 40 decibels or more from the recent readings, and use an aggregate of the past values before you declare an approximate range to the user.
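A small sketch of that filtering strategy (the window size and spike threshold are assumptions to tune): poll RSSI on a timer, reject wild spikes, and report a moving average.

import CoreBluetooth

final class RSSISmoother {
    private var samples: [Double] = []

    // Feed in a new reading; returns the current smoothed value (or nil until we have data).
    func add(_ rssi: Double) -> Double? {
        if let avg = average, abs(rssi - avg) > 40 { return avg }   // drop ~40 dB spikes
        samples.append(rssi)
        if samples.count > 10 { samples.removeFirst() }             // keep a short window
        return average
    }

    var average: Double? {
        samples.isEmpty ? nil : samples.reduce(0, +) / Double(samples.count)
    }
}

// Usage from the CBPeripheralDelegate callback, after calling peripheral.readRSSI()
// on a ~2 second timer (updateProximityUI is a hypothetical method of your own):
// func peripheral(_ peripheral: CBPeripheral, didReadRSSI RSSI: NSNumber, error: Error?) {
//     if let smoothed = smoother.add(RSSI.doubleValue) { updateProximityUI(with: smoothed) }
// }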
As for your following statement, you are in luck.
If only there was a way to reduce the signal strength of the peripheral device's advertisement...
The service you are looking for is called the TX Power Service. Implementing this service on your peripheral allows you to decrease the transmit power of the device, so you can throttle down the range at which the advertisement data is visible. Unfortunately, we do not have access to this service on an iOS device. But if you are writing your own firmware for a BLE peripheral, this is the service you want.
I've spent the last week dealing strictly with RSSI, trying to use Wifi and Bluetooth LE sensors for location triangulation and for distance conversion.
Unfortunately, what I have found is that RSSI is just too finicky and unreliable to use consistently for determining distance. In theory, RSSI and distance behave according to the inverse-square law (double the distance and the RSSI drops by a fixed number of decibels), but in practice RSSI is affected by uncontrollable factors such as weather (dry air lets RF travel better) and obstacles (any metal objects or humans in the path from one sensor to another cause attenuation, and metal objects positioned close to one of the sensors cause gain in the signal strength).
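For reference, the textbook relationship is the log-distance path-loss model; the sketch below turns an RSSI reading into a rough distance, where the 1 m reference power and the path-loss exponent are calibration values you would have to measure yourself (the defaults are placeholders).

import Foundation

// rssi = txPowerAt1m - 10 * n * log10(distance)  =>  distance = 10^((txPowerAt1m - rssi) / (10 * n))
func estimatedDistance(rssi: Double, txPowerAt1m: Double = -59, pathLossExponent n: Double = 2.0) -> Double {
    pow(10, (txPowerAt1m - rssi) / (10 * n))
}

// Example: estimatedDistance(rssi: -71) ≈ 4 m in free space, but obstacles and antenna
// orientation easily throw this off by a factor of two or more.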
There are ways to try to compensate for this. This paper is one of the best I read on how to get accurate results, but the bottom line is that it is an unreliable method unless you just want a general idea of where the device is.
If I understand correctly, you are trying to implement functionality similar to what was seen in the WWDC demo and what apps like Bump implement. For that, RSSI will be adequate. Test for appropriate threshold values (e.g. > -30) and you will be fine.
Quick question: how accurate is the GPS on the iPhone 4? I ask because I'm working on an enterprise project for a company, and part 2 of it will deal with iDevice development where I have to determine the position of the user. I'd like to know if the GPS is accurate enough to sense the user moving within rooms, because the user will have to "tag" sections of the room as they move about it.
Thanks in advance!
P.S. Presuming that it won't make much of a difference, but the users will actually end up using iPads, not iPhones, and more than likely the iPad 2 will be out by the time the entire project is completed. I don't know if the iPad 2 will have a better GPS receiver or not, but at minimum I should use the iPad/iPhone 4 GPS receiver...
Most buildings will not allow reception of an accurate set of GPS signals (if they can be received at all) indoors. The roof/ceiling/floors above are just too thick. Even a lot of trees overhanging a building will degrade the signal from the GPS satellites.
You might have a chance if all the rooms have very large unobstructed windows with no overhangs, and it's the right time of day for several satellites to be in view out that window.
Outdoors, in the clear, the iPhone 4 GPS seems to be very accurate. Sometimes I can walk around my parked car, and see the blue dot in the Maps app follow me in a circle.
I have done some work with a large location data set. My result set is based on cars driving outside and will therefore be, on average, more accurate than those taken inside (based on line of sight to satellites).
For the 650,704 location updates I used in my tests, I found the average accuracy radius was 246 m (91 m if you remove >1 km outliers). 85.1% of updates had an accuracy of less than 100 m. So given that your updates will not be as accurate as these, I don't imagine you will have much success tracking indoor location changes.
For a further description of my results.
It is very difficult, and most of the time impossible, to obtain a GPS signal inside a building. The radio waves used by GPS are too weak to penetrate the structure itself.
A simpler and probably cheaper solution would be to give people tags or cards and install some sort of transceiver in each room.
It seems the original question was "how accurate is the GPS on an iPhone 4", which hasn't exactly been answered yet.
I've done lots of testing with the accuracy of the GPS chips in iPhone 4, iPhone 4s, and iPhone 5, and the most accurate reading allowed seems to be ~5 meters, or ~16 feet when you're outside with clear line of sight to the sky. I'm guessing this is a software limitation imposed by Apple to conserve battery.
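If you want to check this yourself, the reported accuracy is exposed directly by Core Location; here is a minimal sketch (using modern API names, so slightly newer than the iPhone 4 era):

import CoreLocation

final class AccuracyLogger: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBest
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        if let last = locations.last {
            // horizontalAccuracy is the radius (in metres) of the confidence circle around the fix
            print("±\(last.horizontalAccuracy) m at \(last.coordinate)")
        }
    }
}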
As I am developing for iPhone, I've just bought an iPhone 4 to test my application, which needs to measure the coordinates of my location. I don't have any Internet (3G, GPRS or whatever...) on my iPhone, and the problem is:
1) Without Internet I get 1744 m horizontal accuracy, and that's very bad. (I've also tested the accuracy in other applications, and it is always as bad or worse.)
2) With Wi-Fi Internet I get 80 m horizontal accuracy.
Is that normal? What can I do to improve my coordinates-measurement accuracy?
Thanks in advance for any help.
From my experience you need cellular data reception (3G or Edge) to get an accurate location on any iPhone. With that and a clear view of the sky you should be able to get within a few meters of your actual location.
Yes, this is normal. To improve accuracy, you can move somewhere with a clear view of the sky.
First thing I'd advise is make sure you have a clear view of the sky to get a good satellite signal.
I think that it is becoming somewhat "accepted" that the iPhone's GPS accuracy is somewhat lacking (in comparison to other handhelds)... I had to search through my history, but I remembered reading about this very issue on Hacker News - http://news.ycombinator.com/item?id=1526664.
If you don't want to follow the thread, here is the article directly - http://rnr.davidlokshin.com/post/825290568 .
I learned recently that the phone downloads assistance data (assisted GPS) in order to make quick sense of the satellite signals it picks up. So without an internet connection, getting a GPS fix is very slow and poor, or possibly completely unavailable.
I'd bet your poor accuracy with WiFi is because you were indoors... That's my guess anyway.