In my iPhone app I'd like to monitor whether the user enters certain geographical regions at a given time and act accordingly.
I've now seen that with the new iOS 4 it is possible to register regions of interest (CLRegion) with a CLLocationManager, so it could do some of that work for me, but... I'd also need to change the set of regions dynamically (so that only some regions are signalled to the user at any given time), and it seems that regions can only be added.
Before noticing this change in iOS 4 I was planning to use an R-tree to index all the regions (as rectangles in 2D space) and query it on demand, adding/removing nodes myself.
Here are my questions:
- Does anyone know whether CLLocationManager uses something similar to an R-tree?
- Is it extremely efficient? (I could register all my regions up front and then apply a filter, such as checking against an NSSet of the scenes available at that moment.)
- At the very least I'd like to be able to delete all the monitored regions from Core Location. Is that feasible? How?
What's wrong with -[CLLocationManager stopMonitoringForRegion:]?
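For a full reset, a minimal sketch (assuming an already-configured manager called locationManager) is to loop over monitoredRegions and stop each one:

// Stop monitoring every region this app has registered.
// monitoredRegions is the set of CLRegion objects currently monitored for the app.
NSSet *regions = [locationManager.monitoredRegions copy];
for (CLRegion *region in regions) {
    [locationManager stopMonitoringForRegion:region];
}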
I have built a shop floor where material flow is handled by transporters (AGVs/AMRs) with free-space navigation. I am looking for a way to observe traffic at certain spots (e.g. work stations, storage areas), or even on the whole shop floor, so I can compare different scenarios of the material flow and of the work stations' supply strategy with a view on the traffic. I tried the Density Map, but since it observes the whole layout, which is quite big, the values become too low for the scale quite quickly, so it isn't performing the way I want it to. Is there a way to set up something like an "area density map" so I can observe just a defined rectangle, or is there another feature that could help me here?
Happy about all ideas! :-)
You can use normal Rectangular Node elements and trigger code "on enter" to count drive-throughs or similar.
Just make sure to set the capacity to infinity so normal traffic flow is not interrupted :)
I am working on a drone-based video surveillance project and need to implement object tracking as part of it. I have tried conventional approaches, but they seem to fail because the environment is not static.
This is an example of what I would want to achieve, but it uses background subtraction, which is impossible with a non-static camera.
I have also tried feature-based tracking using SURF features, but it fails for smaller objects and is prone to false positives.
What would be the best way to achieve the objective in this scenario?
Edit: An object can be anything within a defined region of interest. The object will usually be a person or a vehicle. The idea is that the user draws a bounding box that defines the region of interest, and the drone then has to start tracking whatever is within that region.
Tracking local features (like SURF) won't work in your case. Training a classifier (like boosting with Haar features) won't work either. Let me explain why.
The object to track will be contained in a bounding box. Inside this bounding box there could be any object, not necessarily a person, a car, or whatever else you used to train your classifier.
Also, near the object inside the bounding box there will be background clutter that changes as soon as your target moves, even if the appearance of the object itself doesn't change.
Moreover, the appearance of your object changes (e.g. a person turns or drops their jacket, a vehicle catches a reflection of the sun, etc.), or the object gets (partially or totally) occluded for a while. So tracking local features is very likely to lose the tracked object very soon.
So the first problem is that you potentially have to deal with many different objects, possibly unknown a priori, and you cannot train a classifier for each of them.
The second problem is that you must follow an object whose appearance may change, so you need to update your model.
The third problem is that you need some logic that tells you that you lost the tracked object, and you need to detect it again in the scene.
So what to do? Well, you need a good long term tracker.
One of the best (to my knowledge) is Tracking-Learning-Detection (TLD) by Kalal et al. You can see a lot of example videos on the dedicated page, and you can see that it works pretty well with moving cameras, objects that change appearance, etc.
Luckily for us, OpenCV 3.0.0 has an implementation for TLD, and you can find a sample code here (there is also a Matlab + C implementation in the aforementioned site).
The main drawback is that this method can be slow. Test whether that's an issue for you; if so, you can downsample the video stream, upgrade your hardware, or switch to a faster tracking method, depending on your requirements and needs.
Good luck!
The simplest thing to try is frame differencing instead of background subtraction. Subtract the previous frame from the current frame, threshold the difference image to make it binary, and then use some morphology to clean up the noise. With this approach you typically only get the edges of the objects, but often that is enough for tracking.
You can also try to augment this approach with vision.PointTracker, which implements the KLT (Kanade-Lucas-Tomasi) point tracking algorithm.
Alternatively, you can try using dense optical flow. See opticalFlowLK, opticalFlowHS, and opticalFlowLKDoG.
I am doing research on touch screens and I could not find a good source, except for the image below, that explains how multi-touch IR systems work. Single-touch IR systems are pretty simple: on two sides of the panel, say the left and top, are the IR transmitters, and on the right and bottom are the receivers. If a user touches somewhere in the middle, the IR path is interrupted and the ray does not reach the receiving end, so the processor can pick up the coordinates. But this approach will not work for multi-touch systems because of the ghost-point problem.
Below is an image of PQ Labs' multi-touch IR system at work, but as no explanation is given I am not able to understand how it works. Any help will be greatly appreciated.
I believe they have a special algorithm to discard the points caused by the crossing of emitter light. But this algorithm will not work every time, so sometimes, if you put your fingers very close to each other, a ghost point may still show up.
My guess:
1. The sensors are analog (there must be an analog-to-digital converter to read each opto-transistor, i.e. each IR receiver).
2. LEDa and LEDb are not on at the same time.
3. The opto-transistors run in a linear range (not in saturation) when no object is present.
One object:
4. When one object is placed on the surface, less light reaches some of the opto-transistors. This shows up as a reading that is lower than the reading when no object is present.
The reading of the opto-transistor array (an array reflecting the reading from each opto-transistor) provides information about:
4.1. how many opto-transistors are completely shaded (off), and
4.2. which opto-transistors are affected.
Please note: a reading from one LED is not sufficient to know the object's position.
To get the object's location we need two readings (one from LEDa and one from LEDb). We can then calculate the object's position and size, since we know the geometry of the screen (see the sketch after this list).
Two objects:
Now each array may have "holes" in the shaded area (there will be two groups). These holes indicate that there is an additional object.
If the objects are close to each other the holes may not be visible. However, there are many LEDs, so there will be multiple arrays (one per LED), and given the geometry these holes will be seen by some of the LEDs.
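To make the "two readings" step concrete, here is a rough sketch of the geometry only, not PQ Labs' actual algorithm: treat each LED and the centre of the group of receivers it sees as shaded as defining a shadow ray, then intersect the two rays. The coordinates and the IntersectLines helper below are invented purely for illustration.

// Estimate a touch position as the intersection of two shadow rays,
// one cast from LEDa and one from LEDb towards their shaded receivers.
#import <Foundation/Foundation.h>
#include <math.h>

typedef struct { double x, y; } Point2D;

// Intersect the infinite lines through (a1, a2) and (b1, b2); returns NO if they are parallel.
static BOOL IntersectLines(Point2D a1, Point2D a2, Point2D b1, Point2D b2, Point2D *out) {
    double d1x = a2.x - a1.x, d1y = a2.y - a1.y;
    double d2x = b2.x - b1.x, d2y = b2.y - b1.y;
    double denom = d1x * d2y - d1y * d2x;
    if (fabs(denom) < 1e-9) return NO;            // parallel rays give no unique fix
    double t = ((b1.x - a1.x) * d2y - (b1.y - a1.y) * d2x) / denom;
    out->x = a1.x + t * d1x;
    out->y = a1.y + t * d1y;
    return YES;
}

int main(void) {
    // Hypothetical screen geometry in millimetres: LEDa on the left edge, LEDb on the top edge.
    Point2D ledA = {0.0, 50.0},  shadedA = {200.0, 80.0};   // centre of receivers shaded by LEDa
    Point2D ledB = {120.0, 0.0}, shadedB = {130.0, 150.0};  // centre of receivers shaded by LEDb
    Point2D touch;
    if (IntersectLines(ledA, shadedA, ledB, shadedB, &touch)) {
        NSLog(@"Estimated touch at (%.1f, %.1f)", touch.x, touch.y);
    }
    return 0;
}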
For more information, please see US patent US7932899.
Charles Bibas
I'm working on a location-based app that makes use of the CLLocationManager region monitoring.
I'm using a single CLLocationManager and single delegate (which are set up in the main app delegate at startup), and I'm noticing that I often get a burst of multiple calls to my delegate (on locationManager:didExitRegion:) when exiting a monitored region -- usually two calls, but sometimes more. Has anyone else experienced this, or have any ideas what can be going wrong?
I'm instantiating the CLLocationManager as follows, in a class that is instantiated in the app delegate:
_locationManager = [[CLLocationManager alloc] init];
_locationManager.desiredAccuracy = kCLLocationAccuracyHundredMeters;
_locationManager.delegate = self;
I'm setting up region monitoring like this:
// The region instance has a radius of 300 meters
[_locationManager startMonitoringForRegion:region desiredAccuracy:1000];
As I understand from the documentation, providing the desired accuracy of 1000 means that locationManager:didExitRegion: should only be called once we're 1000 meters outside of the region.
One additional point: as far as I've seen, I only get a burst of multiple notifications if I'm in the car (and therefore travelling quite quickly). It doesn't seem to happen if I'm on a bike or on foot. Any pointers as to what I'm doing wrong (or whether this is an issue that others have already encountered) are appreciated.
tl;dr: It's really quite simple - you're getting as much information as Apple can give you about the fact that you're crossing cell tower boundaries - meaning, not really good data at all.
Now the real story:
Generally, there are only a couple of ways that CoreLocation can determine one's position:
By GPS. This is highly accurate (tens of meters), but takes lots of power to use.
By WiFi. This is generally accurate, though wifi base stations can and do change, which makes it fuzzy. This method cross-references the wifi stations in the area vs some known accurate locations - so it knows when it sees "FooWifiStation" that it's associated with a particular area, measured by precise instruments, or perhaps even other phones with GPS turned on (which is controversial, we may never know if Apple uses this method)
By cell tower locations. These don't move, so it knows you're within a big fuzzy dot of coverage when you're associated with a tower. This is the least accurate, but least power-consuming, because your phone is already doing the work to stay in contact.
You can even see this if you go into the maps application "cold": you start out immediately with a big fuzzy blue dot, (at least 1km where I am), then it shrinks as it gets a wifi location fix, then it more or less disappears when GPS gets its fix. It's doing [cell tower] => [wifi] => [gps] accuracies in real time.
From my experience, the "significant location change" clause means "We'll let you know whenever you move in a big way, as long as we don't have to do any more work or expend any more power to get you that data." This means, de facto, that the very best you can rely on is transitions between cell towers. Apple deliberately left this vague, because if another app uses something that has better resolution - say, you open Maps.app and it gets a GPS fix - it is possible that you will suddenly get a great fix on your location, but you can't depend on that always being the case. You've asked for a "weak reference" to location.
Now think about what happens when you are wandering about in your car: your cell phone has to manage that transition. It gets handed off, talks to multiple towers at once, that sort of thing, to manage a seamless transition which must be viable while you are having a conversation. Just that is a pretty neat feat for any little box to do. Necessarily, this will cause you to see a bit of bounce in location updates, because, to the phone, you're vibrating between cell towers at this time as it negotiates the transition.
So what the radius really means is that you're interested in data of roughly that accuracy - but the API will not guarantee that you get updates within that radius. You can think of it as Apple saying, "We're going to bucket your accuracy into one of 3 groups - but we didn't give you an enumeration, because we want to reserve the right to develop better methods of fixing your location without you having to change your code". Of course, in reality, any real app will change if they change their method of getting location, because they are annoyingly vague about this location stuff.
In other words, you will get a location fix at the cell tower with some totally made up guess as to how good that accuracy is; when you move to the next tower, you will instantly jump to its location, with a similarly fuzzy fix - does that make sense? CoreLocation is telling you you're at the cell tower with accuracy of however far the cell tower's signal reaches; as you move to another tower, you will probably get the bounciness of the handoff.
So when it comes to it, to really do a good job you have to assume that the data is one of "great", "ok", or "bad" and use other methods - like Kalman filters - if you really need a better guess at where the user is. As a zeroth-order approximation, I would just debounce the callbacks based on the time of the update, which is given, and assume that the user isn't really leaping kilometers back and forth within a few seconds, but rather travelling in the direction of the first new update.
I think you'd be better off using a desiredAccuracy that is smaller than the 300 m radius, i.e. kCLLocationAccuracyHundredMeters. Have you tried that?
There is no documentation about the underlying logic, but I'd assume that desiredAccuracy can be regarded as the minimum distance you'd have to travel for the movement to count as a "border crossing".
Region monitoring is based on "significant location events", not GPS – otherwise the battery wouldn't last half a day.
If you use such a high desiredAccuracy, the system might get more than one significant location event (these seem to be generated about every 500 m, also depending on how many wifi networks you have in the area).
In that case, the system might compare the current location resulting from the significant change with the distance to all sides of your region.
If you're just outside 1000 m from the opposite side of your region, it might notify again, and only stop notifying once you're outside 1000 m from every side of your region.
The point of the accuracy parameter is rather to avoid spurious border crossings when you are too close to the border, so you don't see inside-outside-inside-outside etc. while traveling just outside the border...
In my apps I tried many, many different combinations of radius and accuracy and settled on 500 m/100 m and 100 m/10 m – those work very well in my real-world within-city scenario.
(see also the excellent article series http://longweekendmobile.com/2010/07/22/iphone-background-gps-accurate-to-500-meters-not-enough-for-foot-traffic/)
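For example, a minimal sketch of one of those combinations (a 500 m region with kCLLocationAccuracyHundredMeters), using the same iOS 4-era API as in the question; the coordinate and identifier are placeholders:

// Monitor a 500 m region and ask for roughly 100 m accuracy on the boundary crossing.
CLLocationCoordinate2D center = CLLocationCoordinate2DMake(47.3769, 8.5417); // placeholder coordinate
CLRegion *region = [[CLRegion alloc] initCircularRegionWithCenter:center
                                                            radius:500.0
                                                        identifier:@"example-region"];
[_locationManager startMonitoringForRegion:region
                           desiredAccuracy:kCLLocationAccuracyHundredMeters];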
I'll bet this is your issue: locationManager:didExitRegion: gets called for EVERY region that every location manager in your application has registered. You need to compare the identifier string of the region passed in as a parameter with the identifier of the region you currently want your app to do something with.
When you call [_locationManager startMonitoringForRegion:region desiredAccuracy:1000]; you're creating a region to be monitored, if it doesn't already exist. This is NOT the same as adding an observer to NSNotificationCenter: the system blindly delivers the region notification to EVERY CLLocationManager, which then sends the message to its delegate.
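A minimal sketch of that identifier check; _activeRegionIdentifiers is a made-up NSSet holding the identifiers this controller currently cares about:

// Ignore exit events for regions we are not currently interested in.
- (void)locationManager:(CLLocationManager *)manager didExitRegion:(CLRegion *)region {
    if (![_activeRegionIdentifiers containsObject:region.identifier]) {
        return; // some other part of the app registered this region
    }
    // Handle the exit for one of "our" regions here.
}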
First, the documentation does not indicate that desiredAccuracy alone determines how far out of a region you need to be before the didExitRegion event is generated. If you want to fine-tune how often events are fired, you also need to use the distanceFilter, which determines how far you must move horizontally before an event is fired.
If use of the distanceFilter does not work, then I would recommend the following:
If you are using NSNotificationCenter, then ensure that you have removed other classes from notification, via [[NSNotificationCenter defaultCenter] removeObserver:self]. You can optionally specify a name and object to this call.
1000 m is a large accuracy value, which has a high chance of overlapping with many regions in the vicinity. I would try a lower number for this parameter to see whether that decreases the number of exit notifications you are receiving. The only thing suggesting this may not be the solution is that you did not say you are also receiving bursts of didEnterRegion.
Lastly, I would check the identifier of the CLRegion passed in the didExitRegion event and set up an NSDictionary of regions yourself. Add a region to the dictionary on didEnterRegion and remove it on didExitRegion. Then, on didExitRegion, all you have to do to confirm you are interested in the region is check that it is already in the dictionary. I would guess that CLRegion is already equipped with isEqual: and hash so it can be stored in a collection.
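A sketch of that bookkeeping, assuming a mutable dictionary _enteredRegions keyed by the region identifier (both names invented for this example):

// Record a region when we enter it, and only honour an exit we have seen an enter for.
- (void)locationManager:(CLLocationManager *)manager didEnterRegion:(CLRegion *)region {
    [_enteredRegions setObject:region forKey:region.identifier];
}

- (void)locationManager:(CLLocationManager *)manager didExitRegion:(CLRegion *)region {
    if ([_enteredRegions objectForKey:region.identifier] == nil) {
        return; // no matching enter recorded, so ignore this exit
    }
    [_enteredRegions removeObjectForKey:region.identifier];
    // Handle the exit once here.
}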
Good luck!
I found that in order to save battery you have to use significant-change location monitoring (startMonitoringSignificantLocationChanges).
My solution is to filter out multiple alerts coming within 60 seconds for the same region:
- (BOOL)checkForMultipleNotifications:(GeoFenceModel *)fence
{
    // Look up the last alert we recorded for this region (keyed by its id).
    GeoFenceModel *tempFence = [fenceAlertsTimeStamps objectForKey:fence.geoFenceId];
    NSTimeInterval now = [[NSDate date] timeIntervalSince1970];

    if (tempFence == nil) {
        // First alert for this region: remember its timestamp and let it through.
        fence.lastAlertDate = [NSNumber numberWithDouble:now];
        [fenceAlertsTimeStamps setObject:fence forKey:fence.geoFenceId];
        NSLog(@"checkForMultipleNotifications: no notifications found for region %@, adding to dictionary with timestamp %.1f", fence.geoFenceId, fence.lastAlertDate.doubleValue);
    }
    else if ((now - tempFence.lastAlertDate.doubleValue) <= 60.0) {
        // Another alert for the same region within 60 seconds: skip it.
        NSLog(@"checkForMultipleNotifications: multiple region break notifications within 60 seconds, skipping this alert");
        return NO;
    }
    else {
        // More than 60 seconds since the last alert: record the new timestamp and allow it.
        tempFence.lastAlertDate = [NSNumber numberWithDouble:now];
    }
    return YES;
}
I'd like to use reliable locations, even on an old iPhone. However, many readings (particularly from cell towers) are too inaccurate, I think.
When I plot my position + accuracy radius (or look at google maps app), I notice the center of the estimated circle is generally close to my physical location. I'm guessing that if I cut the "accuracy" number in half, I'll still be in the circle 99% of the time.
I believe this is a probabilistic game - the location manager is trying to provide an estimate that's correct 99.99999% of the time, so they give a deliberately wide margin. Any thoughts/info?
The CoreLocation framework gives you the radius of the circle for every CLLocation you get, via the horizontalAccuracy/verticalAccuracy properties. You can set a desiredAccuracy property on the CLLocationManager using one of these constants:
kCLLocationAccuracyNearestTenMeters, kCLLocationAccuracyHundredMeters, kCLLocationAccuracyKilometer, kCLLocationAccuracyThreeKilometers
So you get notifications once you are inside your desired range. That said, when you use the CLLocationManager the first event is given to you ASAP, and then the subsequent events are the ones that satisfy your conditions.
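As a minimal sketch of acting on the reported radius yourself (the 100 m cut-off is an arbitrary example, not a recommended value):

// Decide whether a fix is tight enough to trust, based on its reported radius.
static BOOL LocationIsUsable(CLLocation *location) {
    if (location.horizontalAccuracy < 0) {
        return NO;   // a negative horizontalAccuracy means the fix is invalid
    }
    return location.horizontalAccuracy <= 100.0;   // metres; tune to your app's needs
}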
When you're using CoreLocation, you're getting back "answers" that get better and better. I've noticed that the "best" answer is almost always accurate to within 100 m, so theoretically you could probably cut down on the "buffer" you're normally given. The only way to really know, though, and this is what I would do, is to test, test, test. Find iPhones and iPods from all generations and see what kind of accuracy and results you're getting. In a lot of ways it depends on the type of app you're making, but if you want to deliver sensitive or important information based on where the user is, you should really wait for the framework to give you a nearly exact location.