I am developing an augmented reality app using the Wikitude SDK. I am quite new to this SDK. I have implemented everything as per the SDK documentation guidelines, but I am not able to plot the POI points in the AR-Browser view. For generating the POI points I used the following code.
- (NSMutableArray *)generatePois:(NSUInteger)numberOfPois
{
    NSLog(@"%s", __PRETTY_FUNCTION__);
    NSMutableArray *poiArray = [NSMutableArray arrayWithCapacity:numberOfPois];
    for (NSUInteger i = 0; i < numberOfPois; ++i) {
        objGlobalDatas = [arrayItems objectAtIndex:i];
        WTPoi *poi = [[WTPoi alloc] init];
        NSString *strId = objGlobalDatas.arId;
        NSInteger poiId = [strId integerValue];
        poi.id = poiId;
        poi.name = objGlobalDatas.name;
        NSLog(@"poi name = %@", objGlobalDatas.name);
        poi.detailedDescription = objGlobalDatas.poi_description;
        // poi.type = i % 3; // set the type from 0->2->0->2...
        poi.type = 1;
        NSString *strLat = objGlobalDatas.latitude;
        NSString *strLng = objGlobalDatas.longitude;
        double lat = [strLat doubleValue];
        double lng = [strLng doubleValue];
        poi.latitude = lat;   // + WT_RANDOM(-0.01, 0.01); // set the latitude around your current location
        poi.longitude = lng;  // + WT_RANDOM(-0.01, 0.01);
        poi.altitude = YOUR_CURRENT_ALTITUDE + WT_RANDOM(0, 200); // altitude offset
        poi.poiimg = objGlobalDatas.poi_indicator_image;
        poi.weburl = objGlobalDatas.website;
        poi.distance = objGlobalDatas.distance;
        [poiArray addObject:poi];
        [poi release];
    }
    NSLog(@"check = %@", poiArray);
    return poiArray;
}
I call this function after fetching the POI data by parsing a JSON feed. In poiArray I get three objects around my current latitude and longitude, but they are not shown in the AR-Browser view.
Is this all of your code related to the Wikitude SDK API? If so, you're missing the [architectView callJavaScript:@"*jsonString*"] call. This call loads the places, defined in a JSON string, into the AR view. Have a closer look at the SimpleARBrowser sample; it demonstrates all the steps necessary to load your places in AR.
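As a rough sketch of that hand-over (not from the SDK documentation; it assumes ARC and assumes your ARchitect world defines a JavaScript function named loadPoisFromJsonData, as the SimpleARBrowser sample does), you could serialize the generated POIs and pass them like this:
- (void)injectPoisIntoArchitectWorld:(NSArray *)pois
{
    NSMutableArray *jsonPois = [NSMutableArray arrayWithCapacity:pois.count];
    for (WTPoi *poi in pois) {
        [jsonPois addObject:@{ @"id" : @(poi.id),
                               @"name" : poi.name ?: @"",
                               @"description" : poi.detailedDescription ?: @"",
                               @"latitude" : @(poi.latitude),
                               @"longitude" : @(poi.longitude),
                               @"altitude" : @(poi.altitude) }];
    }
    NSData *jsonData = [NSJSONSerialization dataWithJSONObject:jsonPois options:0 error:nil];
    NSString *jsonString = [[NSString alloc] initWithData:jsonData encoding:NSUTF8StringEncoding];
    // architectView is your WTArchitectView; loadPoisFromJsonData is the function defined in the sample's JavaScript world
    [self.architectView callJavaScript:[NSString stringWithFormat:@"World.loadPoisFromJsonData(%@);", jsonString]];
}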
I'm facing a problem. I'm developing a hybrid application using PhoneGap, and the customer wants the possibility to open the native map (Apple Maps for iOS and Google Maps for Android) with a marker and then go back to the application when finished. On Android it's OK, the back button is already implemented; however, my problem is on iOS. I already developed a little plugin to open the native map and place a marker with the following code:
CDVPluginResult *pluginResult = nil;
NSNumber *latN = [command.arguments objectAtIndex:0];
NSNumber *lonN = [command.arguments objectAtIndex:1];
double lat = [latN doubleValue];
double lon = [lonN doubleValue];

CLLocationCoordinate2D myCoordinate = CLLocationCoordinate2DMake(lat, lon);
MKPlacemark *myPlacemark = [[[MKPlacemark alloc] initWithCoordinate:myCoordinate addressDictionary:nil] autorelease];
MKMapItem *mapItem = [[[MKMapItem alloc] initWithPlacemark:myPlacemark] autorelease];
mapItem.name = @"My car";
[mapItem openInMapsWithLaunchOptions:nil];

if (latN != nil && lonN != nil) {
    pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_OK messageAsString:@"ok"];
} else {
    pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_ERROR];
}
[self.commandDelegate sendPluginResult:pluginResult callbackId:command.callbackId];
Is there a way to perform this on iOS?
Thanks in advance,
Med.
I'm coming back with the solution I used in my case. After a lot of research I found that the only way to do it was to use the MapKit framework.
So I simply used the Cordova MapKit plugin (https://github.com/phonegap/phonegap-plugins/tree/master/iOS/MapKit) with some modifications, because that version is not necessarily up to date.
I set up a view with the map view and a toolbar with a back button on top, so the user can open the native map and then get back to the application easily (see the sketch below).
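A minimal sketch of that kind of controller, assuming ARC; the names showCarLocationWithCoordinate: and dismissMap are illustrative and not part of the plugin:
- (void)showCarLocationWithCoordinate:(CLLocationCoordinate2D)coordinate
{
    UIViewController *mapController = [[UIViewController alloc] init];

    MKMapView *mapView = [[MKMapView alloc] initWithFrame:mapController.view.bounds];
    MKPointAnnotation *annotation = [[MKPointAnnotation alloc] init];
    annotation.coordinate = coordinate;
    annotation.title = @"My car";
    [mapView addAnnotation:annotation];
    [mapView setRegion:MKCoordinateRegionMakeWithDistance(coordinate, 500, 500) animated:NO];
    [mapController.view addSubview:mapView];

    // Toolbar with a Back button so the user can return to the web view
    UIToolbar *toolbar = [[UIToolbar alloc] initWithFrame:CGRectMake(0, 0, mapController.view.bounds.size.width, 44)];
    toolbar.items = @[[[UIBarButtonItem alloc] initWithTitle:@"Back"
                                                       style:UIBarButtonItemStyleBordered
                                                      target:self
                                                      action:@selector(dismissMap)]];
    [mapController.view addSubview:toolbar];

    // self.viewController is the Cordova view controller provided by CDVPlugin
    [self.viewController presentViewController:mapController animated:YES completion:nil];
}

- (void)dismissMap
{
    [self.viewController dismissViewControllerAnimated:YES completion:nil];
}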
Med.
I want to create a project that reads the user's gesture (accelerometer-based) and recognises it. I searched a lot, but everything I found was too old. I have no problems with classification or recognition; I will use the $1 recogniser or an HMM. I just want to know how to read the user's gesture using the accelerometer.
Is the accelerometer data (x, y, z values) enough, or should I use other data with it, such as attitude data (roll, pitch, yaw), gyro data or magnetometer data? I don't really understand any of them, so an explanation of what these sensors measure would be useful.
Thanks in advance!
Finally I did it. I used the userAcceleration data, which is the acceleration the user imparts to the device, excluding gravity. I found that a lot of people use the raw acceleration data and do a lot of math to remove gravity from it; that work is already done for you in userAcceleration.
And I used the $1 recognizer, which is a 2D recognizer (i.e. point(5, 10), no Z). Here's a link for the $1 recognizer; there's a C++ version of it in the downloads section.
Here are the steps of my code...
Read the userAcceleration data at a frequency of 50 Hz.
Apply a low-pass filter to it.
Take a point into consideration only if its x or y value is greater than 0.05, to reduce noise. (Note: the next step depends on your code and on the recognizer you use.)
Save the x and y points into an array.
Create a 2D path from this array.
Send this path to the recognizer to either train it or recognize it.
Here's my code...
@implementation MainViewController {
    double previousLowPassFilteredAccelerationX;
    double previousLowPassFilteredAccelerationY;
    double previousLowPassFilteredAccelerationZ;
    CGPoint position;
    int numOfTrainedGestures;
    GeometricRecognizer recognizer;
}

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    previousLowPassFilteredAccelerationX = previousLowPassFilteredAccelerationY = previousLowPassFilteredAccelerationZ = 0.0;
    recognizer = GeometricRecognizer();
    // Note: I let the user train his own gestures, so I start up each time with 0 gestures
    numOfTrainedGestures = 0;
}

#define kLowPassFilteringFactor 0.1
#define MOVEMENT_HZ 50
#define NOISE_REDUCTION 0.05

- (IBAction)StartAccelerometer
{
    // SharedMotionManager is a custom singleton accessor; CMMotionManager has no such class method out of the box
    CMMotionManager *motionManager = [CMMotionManager SharedMotionManager];
    if ([motionManager isDeviceMotionAvailable])
    {
        [motionManager setDeviceMotionUpdateInterval:1.0 / MOVEMENT_HZ];
        [motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue currentQueue]
                                           withHandler:^(CMDeviceMotion *motion, NSError *error)
        {
            CMAcceleration lowpassFilterAcceleration, userAcceleration = motion.userAcceleration;
            lowpassFilterAcceleration.x = (userAcceleration.x * kLowPassFilteringFactor) + (previousLowPassFilteredAccelerationX * (1.0 - kLowPassFilteringFactor));
            lowpassFilterAcceleration.y = (userAcceleration.y * kLowPassFilteringFactor) + (previousLowPassFilteredAccelerationY * (1.0 - kLowPassFilteringFactor));
            lowpassFilterAcceleration.z = (userAcceleration.z * kLowPassFilteringFactor) + (previousLowPassFilteredAccelerationZ * (1.0 - kLowPassFilteringFactor));

            if (lowpassFilterAcceleration.x > NOISE_REDUCTION || lowpassFilterAcceleration.y > NOISE_REDUCTION)
                [self.points addObject:[NSString stringWithFormat:@"%.2f,%.2f", lowpassFilterAcceleration.x, lowpassFilterAcceleration.y]];

            previousLowPassFilteredAccelerationX = lowpassFilterAcceleration.x;
            previousLowPassFilteredAccelerationY = lowpassFilterAcceleration.y;
            previousLowPassFilteredAccelerationZ = lowpassFilterAcceleration.z;

            // Just showing the current values to the user
            self.XLabel.text = [NSString stringWithFormat:@"X : %.2f", lowpassFilterAcceleration.x];
            self.YLabel.text = [NSString stringWithFormat:@"Y : %.2f", lowpassFilterAcceleration.y];
            self.ZLabel.text = [NSString stringWithFormat:@"Z : %.2f", lowpassFilterAcceleration.z];
        }];
    }
    else NSLog(@"DeviceMotion is not available");
}

- (IBAction)StopAccelerometer
{
    [[CMMotionManager SharedMotionManager] stopDeviceMotionUpdates];
    // Show all the recorded points to the user
    self.pointsTextView.text = [NSString stringWithFormat:@"%lu\n\n%@", (unsigned long)self.points.count, [self.points componentsJoinedByString:@"\n"]];

    // There must be at least 2 trained gestures, because recognition returns the closest one by distance
    if (numOfTrainedGestures > 1) {
        Path2D path = [self createPathFromPoints]; // A method to create a 2D path from the points array (see below)
        if (path.size()) {
            RecognitionResult recognitionResult = recognizer.recognize(path);
            self.recognitionLabel.text = [NSString stringWithFormat:@"%s detected with probability %.2f !",
                                          recognitionResult.name.c_str(), recognitionResult.score];
        } else self.recognitionLabel.text = @"Not enough points for gesture !";
    }
    else self.recognitionLabel.text = @"Not enough templates !";
    [self releaseAllVariables];
}
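The createPathFromPoints helper isn't shown above; here is a possible sketch of it, assuming the C++ $1 recognizer's Path2D is a vector of Point2D objects (check the header of the version you downloaded, as the type names may differ slightly):
- (Path2D)createPathFromPoints
{
    Path2D path;
    for (NSString *pointString in self.points) {
        // Each stored point has the form "x,y"
        NSArray *components = [pointString componentsSeparatedByString:@","];
        if (components.count == 2) {
            path.push_back(Point2D([components[0] doubleValue], [components[1] doubleValue]));
        }
    }
    return path;
}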
I'm using deviceMotion to get userAcceleration (x, y, z). My aim is to create a text file in which, on each iteration, my application writes the three components as one row.
I'm using the MotionGraphs code sample.
Is it possible to write to the file directly, or is it necessary to create an array first?
If an array, should it be an NSMutableArray, and should it hold NSNumber objects?
I've been looking into this question and I'm lost. :-(
I'm not an Objective-C expert, but I remember Pascal code where I opened a file and then wrote to it on each iteration; as far as I can tell, that style of programming has changed.
For the moment I'm not taking different filters or a discrimination window into account (for those, I've implemented Freescale's procedure). I'm just looking to save the accelerometer data obtained from deviceMotion's userAcceleration.
float minX = 1.0f;
float minY = 1.0f;
float minZ = 1.0f;
NSMutableArray *container;

- (void)startUpdatesWithSliderValue:(int)sliderValue
{
    NSTimeInterval delta = 0.005;
    NSTimeInterval updateInterval = deviceMotionMin + delta * sliderValue;
    CMMotionManager *mManager = [(APLAppDelegate *)[[UIApplication sharedApplication] delegate] sharedManager];
    APLDeviceMotionGraphViewController * __weak weakSelf = self;

    container = [[NSMutableArray alloc] init];

    // As in the MotionGraphs sample, the values arrive through the device-motion update handler
    [mManager setDeviceMotionUpdateInterval:updateInterval];
    [mManager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue] withHandler:^(CMDeviceMotion *deviceMotion, NSError *error) {
        [container addObject:[NSNumber numberWithFloat:deviceMotion.userAcceleration.x]];
        [container addObject:[NSNumber numberWithFloat:deviceMotion.userAcceleration.y]];
        [container addObject:[NSNumber numberWithFloat:deviceMotion.userAcceleration.z]];
    }];
}
// Finally I have to dump the data to a text file; this is the part I don't know how to do correctly.
1. Create NSMutableArray *container = [[NSMutableArray alloc] init]; to be your container.
2. Within the accelerometer delegate method for detected motion, be sure to set a minimum for each of the three axes, e.g. float min_X = 1.0f; float min_Y = 1.0f; float min_Z = 1.0f;
- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {
}
3. Use simple filter logic as below (keep in mind the acceleration is capped at about ±2.3g, so both positive and negative thresholds need consideration):
if ((acceleration.x > min_X || acceleration.x < -min_X) &&
    (acceleration.y > min_Y || acceleration.y < -min_Y) &&
    (acceleration.z > min_Z || acceleration.z < -min_Z)) {
    [container addObject:[NSNumber numberWithFloat:acceleration.x]];
    [container addObject:[NSNumber numberWithFloat:acceleration.y]];
    [container addObject:[NSNumber numberWithFloat:acceleration.z]];
}
4. The array will then be full of NSNumbers in groups of three (x, y, z).
5. The filter is needed; otherwise the accelerometer picks up small vibrations even when the device is just sitting on a table.
WARNING: The array will fill up fast, so set the sample rate to an acceptable range based on how long you want to record data.
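For the remaining step of dumping the data, here is a minimal sketch (writeContainerToFile is an illustrative name, not from the answer above) that joins the collected NSNumbers into "x,y,z" rows and writes them to a file in the Documents directory:
- (void)writeContainerToFile
{
    NSMutableString *output = [NSMutableString string];
    // The container holds NSNumbers in groups of three: x, y, z
    for (NSUInteger i = 0; i + 2 < container.count; i += 3) {
        [output appendFormat:@"%f,%f,%f\n",
            [[container objectAtIndex:i] floatValue],
            [[container objectAtIndex:i + 1] floatValue],
            [[container objectAtIndex:i + 2] floatValue]];
    }

    NSString *documentsDirectory = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    NSString *filePath = [documentsDirectory stringByAppendingPathComponent:@"acceleration.txt"];

    NSError *error = nil;
    if (![output writeToFile:filePath atomically:YES encoding:NSUTF8StringEncoding error:&error]) {
        NSLog(@"Could not write file: %@", error);
    }
}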
I am displaying the current location for iPad in a map view. For this I am using the following code:
if ([CLLocationManager locationServicesEnabled])
{
    lm = [[CLLocationManager alloc] init];
    lm.delegate = self;
    lm.desiredAccuracy = kCLLocationAccuracyBest;
    lm.distanceFilter = 100.0f;
    [lm startUpdatingLocation];
}
and I am calculating the latitude and longitude and passing them into the URL as
NSString *urlString = [NSString stringWithFormat:@"http://maps.google.com/maps?z=15&daddr=%@@%@,%@&saddr=%@@%@,%@", n, lat_New, lng_New, currentLocation, currentLatt, currentLong];
I am not getting the correct directions between the starting point and the ending point on the iPad map view.
Is there any alternative for showing the correct starting and ending points in the iPad map view?
Does the App Store reject apps for such problems? Please suggest a good alternative for this.
There are a couple of things. First, the simulator cannot give you real GPS locations. Secondly, the GPS coordinates returned the first time are usually not accurate, so try receiving location updates by implementing the delegate.
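A minimal sketch of that delegate approach (the thresholds are illustrative): discard stale or coarse fixes before building the directions URL.
- (void)locationManager:(CLLocationManager *)manager didUpdateLocations:(NSArray *)locations
{
    CLLocation *location = [locations lastObject];

    // Ignore cached fixes older than 15 seconds and fixes with poor horizontal accuracy
    if (fabs([location.timestamp timeIntervalSinceNow]) > 15.0) return;
    if (location.horizontalAccuracy < 0 || location.horizontalAccuracy > 100.0) return;

    CLLocationDegrees currentLatt = location.coordinate.latitude;
    CLLocationDegrees currentLong = location.coordinate.longitude;
    // ...build the maps URL with currentLatt / currentLong here...

    [manager stopUpdatingLocation];
}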
I've got quite a lot of pins to put on my map, so I think it would be a nice idea to cluster those annotations. I'm not really sure how to achieve this on iPhone; I was able to work something out with Google Maps and some JavaScript examples, but the iPhone uses MKMapView and I have no idea how to cluster annotations there.
Any ideas, or frameworks that you know of and are good? Thanks.
You don't necessarily need to use a 3rd-party framework, because since iOS 4.2 MKMapView has a method called - (NSSet *)annotationsInMapRect:(MKMapRect)mapRect which you can use to do your clustering.
Check out the WWDC11 session video 'Visualizing Information Geographically with MapKit'. About halfway through it explains how to do it. But I'll summarize the concept for you:
Use two maps (the second map is never added to the view hierarchy)
The second map contains all annotations (again, it's never drawn)
Divide the map area into a grid of squares
Use the -annotationsInMapRect: method to get annotation data from the invisible map
The visible map builds its annotations from this data from the invisible map
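A condensed sketch of that idea (a fuller implementation appears in a later answer below; hiddenMapView here stands for the second, never-displayed map):
MKMapRect visibleRect = self.mapView.visibleMapRect;
double bucketSize = visibleRect.size.width / 8.0; // tune this to the size of your annotation views

for (double x = MKMapRectGetMinX(visibleRect); x < MKMapRectGetMaxX(visibleRect); x += bucketSize) {
    for (double y = MKMapRectGetMinY(visibleRect); y < MKMapRectGetMaxY(visibleRect); y += bucketSize) {
        MKMapRect bucket = MKMapRectMake(x, y, bucketSize, bucketSize);
        NSSet *annotationsInBucket = [self.hiddenMapView annotationsInMapRect:bucket];
        if (annotationsInBucket.count > 0) {
            // show one representative annotation per grid square on the visible map
            [self.mapView addAnnotation:[annotationsInBucket anyObject]];
        }
    }
}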
Fortunately, you don't need 3rd-party frameworks anymore: iOS 11 has native clustering support.
You need to implement the mapView:clusterAnnotationForMemberAnnotations: delegate method.
Get more details in the Apple example: https://developer.apple.com/sample-code/wwdc/2017/MapKit-Sample.zip
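A minimal Objective-C sketch of the iOS 11 approach (the reuse identifiers are illustrative): tag your annotation views with a clusteringIdentifier and, if you want a custom cluster, return one from the delegate callback.
- (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id<MKAnnotation>)annotation
{
    if ([annotation isKindOfClass:[MKClusterAnnotation class]]) {
        return nil; // let MapKit supply a default cluster view
    }
    MKMarkerAnnotationView *view = (MKMarkerAnnotationView *)
        [mapView dequeueReusableAnnotationViewWithIdentifier:@"pin"];
    if (!view) {
        view = [[MKMarkerAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:@"pin"];
    }
    view.clusteringIdentifier = @"pinCluster"; // annotations sharing this identifier get clustered together
    return view;
}

- (MKClusterAnnotation *)mapView:(MKMapView *)mapView
    clusterAnnotationForMemberAnnotations:(NSArray<id<MKAnnotation>> *)memberAnnotations
{
    return [[MKClusterAnnotation alloc] initWithMemberAnnotations:memberAnnotations];
}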
Since this is a very common problem and I needed a solution, I wrote a custom subclass of MKMapView which supports clustering, and then made it available open source. You can get it here: https://github.com/yinkou/OCMapView.
It manages the clustering of the annotations, and you can handle their views yourself.
You don't have to do anything but copy the OCMapView folder to your project, create an MKMapView in your nib and set its class to OCMapView. (Or create and delegate it in code like a regular MKMapView, as sketched below.)
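For the in-code route, a minimal sketch using only MKMapView-inherited API (myAnnotations is a placeholder for your own annotation array; see the OCMapView README for its clustering-specific options):
#import "OCMapView.h"

OCMapView *mapView = [[OCMapView alloc] initWithFrame:self.view.bounds];
mapView.delegate = self;
[mapView addAnnotations:myAnnotations]; // OCMapView takes care of clustering these
[self.view addSubview:mapView];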
By using the Apple demo code it's easy to implement the clustering concept in your own code. Reference link
Simply, we can use the following code for the clustering.
Steps to implement clustering
Step 1: The important thing is that for clustering we use two map views (allAnnotationsMapView and mapView); one of them (allAnnotationsMapView) is used only as a reference store and is never displayed.
@property (nonatomic, strong) MKMapView *allAnnotationsMapView;
@property (nonatomic, strong) IBOutlet MKMapView *mapView;
In viewDidLoad
_allAnnotationsMapView = [[MKMapView alloc] initWithFrame:CGRectZero];
Step 2: Add all annotations to _allAnnotationsMapView. In the code below, _photos is the annotations array.
[_allAnnotationsMapView addAnnotations:_photos];
[self updateVisibleAnnotations];
Step 3: Add the methods below for clustering; here PhotoAnnotation is the custom annotation class.
MKMapViewDelegate methods
- (void)mapView:(MKMapView *)aMapView regionDidChangeAnimated:(BOOL)animated {
    [self updateVisibleAnnotations];
}

- (void)mapView:(MKMapView *)aMapView didAddAnnotationViews:(NSArray *)views {
    for (MKAnnotationView *annotationView in views) {
        if (![annotationView.annotation isKindOfClass:[PhotoAnnotation class]]) {
            continue;
        }
        PhotoAnnotation *annotation = (PhotoAnnotation *)annotationView.annotation;
        if (annotation.clusterAnnotation != nil) {
            // animate the annotation from its old container's coordinate to its actual coordinate
            CLLocationCoordinate2D actualCoordinate = annotation.coordinate;
            CLLocationCoordinate2D containerCoordinate = annotation.clusterAnnotation.coordinate;
            // since it's displayed on the map, it is no longer contained by another annotation
            // (we couldn't reset this in -updateVisibleAnnotations because we needed the reference to it here
            // to get the containerCoordinate)
            annotation.clusterAnnotation = nil;
            annotation.coordinate = containerCoordinate;
            [UIView animateWithDuration:0.3 animations:^{
                annotation.coordinate = actualCoordinate;
            }];
        }
    }
}
Cluster-handling methods
- (id<MKAnnotation>)annotationInGrid:(MKMapRect)gridMapRect usingAnnotations:(NSSet *)annotations {
    // first, see if one of the annotations we were already showing is in this mapRect
    NSSet *visibleAnnotationsInBucket = [self.mapView annotationsInMapRect:gridMapRect];
    NSSet *annotationsForGridSet = [annotations objectsPassingTest:^BOOL(id obj, BOOL *stop) {
        BOOL returnValue = ([visibleAnnotationsInBucket containsObject:obj]);
        if (returnValue)
        {
            *stop = YES;
        }
        return returnValue;
    }];

    if (annotationsForGridSet.count != 0) {
        return [annotationsForGridSet anyObject];
    }

    // otherwise, sort the annotations based on their distance from the center of the grid square,
    // then choose the one closest to the center to show
    MKMapPoint centerMapPoint = MKMapPointMake(MKMapRectGetMidX(gridMapRect), MKMapRectGetMidY(gridMapRect));
    NSArray *sortedAnnotations = [[annotations allObjects] sortedArrayUsingComparator:^(id obj1, id obj2) {
        MKMapPoint mapPoint1 = MKMapPointForCoordinate(((id<MKAnnotation>)obj1).coordinate);
        MKMapPoint mapPoint2 = MKMapPointForCoordinate(((id<MKAnnotation>)obj2).coordinate);

        CLLocationDistance distance1 = MKMetersBetweenMapPoints(mapPoint1, centerMapPoint);
        CLLocationDistance distance2 = MKMetersBetweenMapPoints(mapPoint2, centerMapPoint);

        if (distance1 < distance2) {
            return NSOrderedAscending;
        } else if (distance1 > distance2) {
            return NSOrderedDescending;
        }
        return NSOrderedSame;
    }];

    PhotoAnnotation *photoAnn = sortedAnnotations[0];
    NSLog(@"lat long %f %f", photoAnn.coordinate.latitude, photoAnn.coordinate.longitude);
    return sortedAnnotations[0];
}
- (void)updateVisibleAnnotations {
    // This value controls the number of off-screen annotations that are displayed.
    // A bigger number means more annotations, less chance of seeing annotation views pop in, but decreased performance.
    // A smaller number means fewer annotations, more chance of seeing annotation views pop in, but better performance.
    static float marginFactor = 2.0;

    // Adjust this roughly based on the dimensions of your annotation views.
    // Bigger numbers more aggressively coalesce annotations (fewer annotations displayed but better performance).
    // Numbers too small result in overlapping annotation views and too many annotations on screen.
    static float bucketSize = 60.0;

    // find all the annotations in the visible area + a wide margin to avoid popping annotation views in and out while panning the map.
    MKMapRect visibleMapRect = [self.mapView visibleMapRect];
    MKMapRect adjustedVisibleMapRect = MKMapRectInset(visibleMapRect, -marginFactor * visibleMapRect.size.width, -marginFactor * visibleMapRect.size.height);

    // determine how wide each bucket will be, as a MKMapRect square
    CLLocationCoordinate2D leftCoordinate = [self.mapView convertPoint:CGPointZero toCoordinateFromView:self.view];
    CLLocationCoordinate2D rightCoordinate = [self.mapView convertPoint:CGPointMake(bucketSize, 0) toCoordinateFromView:self.view];
    double gridSize = MKMapPointForCoordinate(rightCoordinate).x - MKMapPointForCoordinate(leftCoordinate).x;
    MKMapRect gridMapRect = MKMapRectMake(0, 0, gridSize, gridSize);

    // condense annotations, with a padding of two squares, around the visibleMapRect
    double startX = floor(MKMapRectGetMinX(adjustedVisibleMapRect) / gridSize) * gridSize;
    double startY = floor(MKMapRectGetMinY(adjustedVisibleMapRect) / gridSize) * gridSize;
    double endX = floor(MKMapRectGetMaxX(adjustedVisibleMapRect) / gridSize) * gridSize;
    double endY = floor(MKMapRectGetMaxY(adjustedVisibleMapRect) / gridSize) * gridSize;

    // for each square in our grid, pick one annotation to show
    gridMapRect.origin.y = startY;
    while (MKMapRectGetMinY(gridMapRect) <= endY) {
        gridMapRect.origin.x = startX;
        while (MKMapRectGetMinX(gridMapRect) <= endX) {
            NSSet *allAnnotationsInBucket = [self.allAnnotationsMapView annotationsInMapRect:gridMapRect];
            NSSet *visibleAnnotationsInBucket = [self.mapView annotationsInMapRect:gridMapRect];

            // we only care about PhotoAnnotations
            NSMutableSet *filteredAnnotationsInBucket = [[allAnnotationsInBucket objectsPassingTest:^BOOL(id obj, BOOL *stop) {
                return ([obj isKindOfClass:[PhotoAnnotation class]]);
            }] mutableCopy];

            if (filteredAnnotationsInBucket.count > 0) {
                PhotoAnnotation *annotationForGrid = (PhotoAnnotation *)[self annotationInGrid:gridMapRect usingAnnotations:filteredAnnotationsInBucket];
                [filteredAnnotationsInBucket removeObject:annotationForGrid];

                // give the annotationForGrid a reference to all the annotations it will represent
                annotationForGrid.containedAnnotations = [filteredAnnotationsInBucket allObjects];
                [self.mapView addAnnotation:annotationForGrid];

                for (PhotoAnnotation *annotation in filteredAnnotationsInBucket) {
                    // give all the other annotations a reference to the one which is representing them
                    annotation.clusterAnnotation = annotationForGrid;
                    annotation.containedAnnotations = nil;

                    // remove annotations which we've decided to cluster
                    if ([visibleAnnotationsInBucket containsObject:annotation]) {
                        CLLocationCoordinate2D actualCoordinate = annotation.coordinate;
                        [UIView animateWithDuration:0.3 animations:^{
                            annotation.coordinate = annotation.clusterAnnotation.coordinate;
                        } completion:^(BOOL finished) {
                            annotation.coordinate = actualCoordinate;
                            [self.mapView removeAnnotation:annotation];
                        }];
                    }
                }
            }
            gridMapRect.origin.x += gridSize;
        }
        gridMapRect.origin.y += gridSize;
    }
}
By following the above steps we can achieve clustering on a map view; it is not necessary to use any third-party code or framework. Please check the Apple sample code here. Please let me know if you have any doubts about this.
Have you looked at ADClusterMapView? https://github.com/applidium/ADClusterMapView
It does precisely this.
I just wanted to cluster pins and show their count. The following control,
https://www.cocoacontrols.com/controls/qtree-objc, fits my expectations.
I recently forked off of ADClusterMapView mentioned in another answer and resolved many, if not all, of the issues associated with the project. It's a kd-tree algorithm and animates the clustering.
It's available open source here https://github.com/ashare80/TSClusterMapView
Try this framework (XMapView.framework); it now supports iOS 8.
This framework doesn't require you to change your current project structure and can be used directly with your MKMapView. There is a zip file with an example that clusters 200 pins at once. After testing it on an iPod I found it very smooth.
http://www.xuliu.info/xMapView.html
This library supports:
clustering different categories
clustering all categories
setting your own cluster radius, and so on
hiding or showing certain categories
individually handling and controlling each pin on the map
There is a pretty cool and well-maintained library for both Objective-C and Swift here: https://github.com/bigfish24/ABFRealmMapView
It does clustering really well and also handles large numbers of points thanks to its integration with Realm.