Google Maps snap to road for walking - Swift

I have an iOS app that uses the Google Maps SDK. I track a user as they walk or run, then use those coordinates to plot their course along the road using snap to roads.
I have found, however, that snap to roads uses the direction of the road to plot the course, so when the route reaches a roundabout the plotted course follows the roundabout around, even if the user was actually on the inside of the block.
Is there a way to use snap to roads (or directions) that allows for walking tracks and follows the shortest distance?
Thanks

Looking at the documentation, snap to roads is not meant for walking:
The snapToRoads method takes up to 100 GPS points collected along a route, and returns a similar set of data, with the points snapped to the most likely roads the vehicle was traveling along.
But what you can do is compare the points: if a returned point is some distance away from the original point, you know that point should be inside the block, and you can keep the original point instead of the snapped one.
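For illustration, here is a minimal Swift sketch of that comparison, assuming you kept the raw points you sent and that the Roads API returned one snapped point per input point; the 10 m threshold is an arbitrary assumption you would tune for your data.

```swift
import CoreLocation

/// Flags snapped points that drifted too far from the raw GPS fix, which
/// suggests the user cut inside the block rather than following the road.
/// `threshold` is an arbitrary tolerance in meters; tune it for your data.
func suspectPointIndices(original: [CLLocationCoordinate2D],
                         snapped: [CLLocationCoordinate2D],
                         threshold: CLLocationDistance = 10) -> [Int] {
    var indices: [Int] = []
    for (i, (raw, road)) in zip(original, snapped).enumerated() {
        let a = CLLocation(latitude: raw.latitude, longitude: raw.longitude)
        let b = CLLocation(latitude: road.latitude, longitude: road.longitude)
        if a.distance(from: b) > threshold {
            indices.append(i)   // keep the raw point here instead of the snapped one
        }
    }
    return indices
}
```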

Related

Understanding GazeProvider GazeDirection Vector3

I am working on taking two HoloLens 2 users' gaze data, and comparing them to verify they are tracking the same hologram's trajectory. I have access to the GazeProvider data, no issues there. However, the GazeProvider.GazeDirection data throws me. For instance, I've referenced the API at:
GazeDirection API Data
But I don't really understand what the Vector3 it returns means. Are the X, Y, Z values relative motion? If not, can I use Vector3.angle to compute relative motion vectors between two points?
The vector returned by the GazeDirection property uses three coordinate parameters to point in the direction the user's eyes are looking. The origin is located between the user's eyes. The Vector3.angle method you mentioned can help you compute the angle between the two eye gaze directions.
I have just started to dig into gaze from a different scenario, but one suggestion I would make is that you also take a look at the gaze origin api.
Each user occupies a different location in space and is gazing into the world in a "gaze direction" from their location in space which would be their "gaze origin".
Basically you need to reconcile the different spatial coordinate systems.
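For illustration, the angle comparison itself is just the arc cosine of the dot product of the two normalized direction vectors. A minimal sketch (in Swift with simd rather than Unity C#; the sample vectors are made up, and in practice you would first transform both gazes into a shared coordinate system as described above):

```swift
import Foundation
import simd

/// Angle in degrees between two gaze direction vectors: the arc cosine of
/// the dot product of the normalized directions (the same math as Unity's
/// Vector3 angle helper).
func angleBetween(_ a: SIMD3<Float>, _ b: SIMD3<Float>) -> Float {
    let cosine = simd_dot(simd_normalize(a), simd_normalize(b))
    // Clamp to guard against floating-point drift just outside [-1, 1].
    return acos(max(-1, min(1, cosine))) * 180 / .pi
}

// Hypothetical gaze directions from two users, already transformed into a
// shared coordinate system; a small angle suggests they are looking the
// same way.
let userA = SIMD3<Float>(0.00, -0.10, 1.0)
let userB = SIMD3<Float>(0.05, -0.10, 1.0)
print(angleBetween(userA, userB))
```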

Finding the next intersection on the current street by using OpenStreetMap

I'm new to OSM and would like to know whether my approach for finding the next intersection ahead is feasible offline.
The goal is to get the coordinates (latitude/longitude) of the next intersection on the street I'm currently driving on. For that I have my current position (lat/lon coordinates) and heading (relative to north) at my disposal.
My current approach is to first use my coordinates to get the name of the street/way/trace I am driving on; then use that name to find the next intersections on that street (in both directions); and then use the heading to decide which direction is the one I should pay attention to.
Once I have the intersection, I would get its coordinates and continue with the program.
My questions are: is it possible to do all of that offline, i.e. with a .osm file (or similar)?
And, do you know a better approach for getting the coordinates of the next intersection ahead?
Thanks a lot in advance!
PS. I was able to get the name of the street by using Nominatim, and to get all the intersections of a street by using Overpass Turbo, but these solutions require an internet connection; or is there a way of using them offline?
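A raw .osm file is just XML, so one offline approach is to scan it once and collect every node referenced by two or more highway ways; those shared nodes are intersection candidates. A rough Swift sketch of that scan (the class name and highway-tag filter are assumptions; you would still need the coordinates from the <node> elements and your heading to pick the next intersection ahead):

```swift
import Foundation

/// Scans a raw .osm XML extract and counts how many <way> elements tagged
/// highway=* reference each node. Nodes referenced by two or more highway
/// ways are candidate intersections.
final class IntersectionFinder: NSObject, XMLParserDelegate {
    private var currentWayNodeIDs: [String] = []
    private var currentWayIsHighway = false
    private var nodeUseCount: [String: Int] = [:]

    /// Node IDs shared by more than one highway way.
    var intersectionNodeIDs: [String] {
        nodeUseCount.filter { $0.value > 1 }.map(\.key)
    }

    func parser(_ parser: XMLParser, didStartElement name: String,
                namespaceURI: String?, qualifiedName: String?,
                attributes: [String: String]) {
        switch name {
        case "way":
            currentWayNodeIDs = []
            currentWayIsHighway = false
        case "nd":                       // a node reference inside a way
            if let ref = attributes["ref"] { currentWayNodeIDs.append(ref) }
        case "tag" where attributes["k"] == "highway":
            currentWayIsHighway = true
        default:
            break
        }
    }

    func parser(_ parser: XMLParser, didEndElement name: String,
                namespaceURI: String?, qualifiedName: String?) {
        guard name == "way", currentWayIsHighway else { return }
        for id in currentWayNodeIDs {
            nodeUseCount[id, default: 0] += 1
        }
    }
}

// Usage (osmFileURL is a local .osm extract):
// let finder = IntersectionFinder()
// let parser = XMLParser(contentsOf: osmFileURL)!
// parser.delegate = finder
// parser.parse()
```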

Optimizing the way to find the most nearby objects

I'm using Core Location with iOS 6 for this.
Scenario:
I have the spatial coordinates of a sample of points. I save all those coordinates using Core Data.
As I move around with my iPhone, I need to detect whether I am within, say, 500 m of any point in that sample.
Right now, I am looping through those points and calculating their distances from my current location. This happens frequently, since the user's current location keeps changing.
But this will not scale well if I have 100 points, 1,000 points, etc.
Question:
How can I optimize this? Any hints?
Idea:
Rasterize (grid) your objects and assign each object to a grid cell (clustering).
While moving, detect the grid cells intersecting your current position/radius.
Get the most nearby object(s) inside those cells.
That way you only need to calculate the distances to the grid cells, and then the distances to the objects inside the cells near your position (see the sketch below).
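A minimal Swift sketch of that grid idea, assuming a fixed cell size in degrees roughly matching the 500 m query radius (longitude degrees shrink with latitude, so this is an approximation):

```swift
import Foundation
import CoreLocation

/// Spatial grid index: each point is bucketed into a cell roughly the size of
/// the query radius, so a lookup only scans the 3x3 block of cells around the
/// user instead of every stored point.
struct GridIndex {
    /// Cell edge in degrees; 0.005 degrees of latitude is roughly 500 m.
    let cellSize: Double
    private var cells: [String: [CLLocationCoordinate2D]] = [:]

    init(cellSize: Double = 0.005) {
        self.cellSize = cellSize
    }

    private func key(row: Int, col: Int) -> String { "\(row):\(col)" }

    private func cell(for c: CLLocationCoordinate2D) -> (row: Int, col: Int) {
        (Int(floor(c.latitude / cellSize)), Int(floor(c.longitude / cellSize)))
    }

    mutating func insert(_ c: CLLocationCoordinate2D) {
        let (row, col) = cell(for: c)
        cells[key(row: row, col: col), default: []].append(c)
    }

    /// All stored points within `radius` meters of `position`, checking only
    /// the cells adjacent to the user's cell.
    func points(near position: CLLocationCoordinate2D,
                radius: CLLocationDistance) -> [CLLocationCoordinate2D] {
        let here = CLLocation(latitude: position.latitude, longitude: position.longitude)
        let (row, col) = cell(for: position)
        var result: [CLLocationCoordinate2D] = []
        for dr in -1...1 {
            for dc in -1...1 {
                for candidate in cells[key(row: row + dr, col: col + dc)] ?? [] {
                    let loc = CLLocation(latitude: candidate.latitude,
                                         longitude: candidate.longitude)
                    if here.distance(from: loc) <= radius {
                        result.append(candidate)
                    }
                }
            }
        }
        return result
    }
}
```

A lookup then only touches the points in the nine nearby cells rather than the whole sample; widen the scan window (or re-bucket) if your query radius can exceed the cell size.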

Calculate nearest point of KML polygon for iPhone app

I have a series of nature reserves that need to be plotted, as polygon overlays, on a map using the coordinates contained within KML data. I’ve found a tutorial on the Apple website for displaying KML overlays on map instances.
The problem is that the reserves vary in size greatly - from a small pond right up to several hundred kilometers in size. As a result I can’t use the coordinates of the center point to find the nearest reserves. Instead I need to calculate the nearest point of the reserves polygon to find the nearest one. With the data in KML - how would I go about trying to achieve this?
I've only managed to find one other person ask this and no one had replied :(
Well, there are a couple of different solutions depending on your needs. The higher the accuracy required, the more work required. I like Phil's meanRadius parameter idea. That would give you a rough idea of which polygon is closest and would be pretty easy to calculate. This idea works best if the polygons are "circlish". If the polygons are very irregular in shape, this idea loses its accuracy.
From a math standpoint, here is what you want to do. Loop through all points of all polygons, calculate the distance from each point to your current coordinate, and keep track of which one is closest. There is one final wrinkle: imagine two points forming a very long line segment, with you located one meter from the segment's midpoint. The distance to the two endpoints is large even though you are, in fact, very close to the polygon. So you also need to calculate the distance from your coordinate to every line segment, which you can do in a variety of ways outlined here:
http://www.worsleyschool.net/science/files/linepoint/distance.html
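In code, the point-to-segment distance is a projection plus a clamp. A planar Swift sketch (for short distances you can treat lat/lon as planar after scaling longitude by cos(latitude); that pre-scaling step is an assumption of this sketch):

```swift
import Foundation

/// Shortest distance from point `p` to the segment `a`-`b`, in the same
/// (planar) units as the inputs. Projects `p` onto the segment's line and
/// clamps the projection parameter to [0, 1] so it stays on the segment.
func distanceToSegment(p: (x: Double, y: Double),
                       a: (x: Double, y: Double),
                       b: (x: Double, y: Double)) -> Double {
    let (dx, dy) = (b.x - a.x, b.y - a.y)
    let lengthSquared = dx * dx + dy * dy
    // Degenerate segment: a and b coincide.
    guard lengthSquared > 0 else { return hypot(p.x - a.x, p.y - a.y) }
    var t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / lengthSquared
    t = max(0, min(1, t))   // clamp the projection onto the segment
    return hypot(p.x - (a.x + t * dx), p.y - (a.y + t * dy))
}
```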
Finally, you need to ask yourself: am I inside any of the polygons? If you're 10 meters away from a point on a polygon but are, in fact, inside the polygon, you obviously need to account for that. The best way to do that is with a ray casting algorithm:
http://en.wikipedia.org/wiki/Point_in_polygon#Ray_casting_algorithm
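A compact Swift sketch of the ray casting test, using the same planar coordinates as the segment-distance sketch above:

```swift
/// Ray casting point-in-polygon test: cast a horizontal ray from `point` and
/// count how many polygon edges it crosses; an odd count means inside.
func contains(point: (x: Double, y: Double),
              polygon: [(x: Double, y: Double)]) -> Bool {
    var inside = false
    var j = polygon.count - 1
    for i in 0..<polygon.count {
        let (pi, pj) = (polygon[i], polygon[j])
        // Does the ray from `point` cross the edge (pi, pj)?
        if (pi.y > point.y) != (pj.y > point.y),
           point.x < (pj.x - pi.x) * (point.y - pi.y) / (pj.y - pi.y) + pi.x {
            inside.toggle()
        }
        j = i
    }
    return inside
}
```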

Translate GPS coordinates to location on PDF Map

I'd like to know (from a high-level view) what would be required to take a PDF floor plan of a building and determine exactly where you are on that floor plan using GPS coordinates. In addition to location, the user would be presented with "turn by turn" directions to another point on the map, navigating down hallways, between cubicles, etc.
Use case: an iPhone app that determined a user's location and guided them to a conference room or person's office in the building.
I realize that this is by no means trivial, but any help is appreciated. Thanks!
It's an interesting problem. When you're using Core Location, you're not necessarily using GPS. Using WiFi and cell tower triangulation, you can get pretty good location results. So from Core Location you get a latitude and longitude fix. (You might also get altitude info, since GPS data is 3-dimensional. You also will get an accuracy value.)
So you have lat and lon. You need to map these coordinates to the PDF plan's coordinates. Assuming that the plan is aligned with the latitude and longitude lines, and that you have a lat-long fix for one of the points on the plan, you need to calculate the x-axis scale and y-axis scale. Then it's some calculations to map the lat-long to x-y coordinates on the PDF plan.
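For illustration, a Swift sketch of that mapping under the answer's assumptions (plan aligned to north, one known reference point, uniform scale; the type and property names here are made up):

```swift
import CoreGraphics
import CoreLocation

/// Maps a lat/long fix to x/y points on a north-aligned floor plan, given one
/// reference point whose location is known in both coordinate systems and the
/// plan's scale in PDF points per meter. All three values are things you
/// would measure for your own building.
struct PlanProjector {
    let referenceCoordinate: CLLocationCoordinate2D  // known point, in lat/long
    let referencePlanPoint: CGPoint                  // the same point, in PDF coords
    let pointsPerMeter: CGFloat                      // plan scale

    func planPoint(for coordinate: CLLocationCoordinate2D) -> CGPoint {
        // Meters per degree: ~111,320 for latitude; longitude shrinks by cos(lat).
        let metersPerDegreeLat = 111_320.0
        let metersPerDegreeLon = metersPerDegreeLat * cos(referenceCoordinate.latitude * .pi / 180)
        let dxMeters = (coordinate.longitude - referenceCoordinate.longitude) * metersPerDegreeLon
        let dyMeters = (coordinate.latitude - referenceCoordinate.latitude) * metersPerDegreeLat
        // PDF y grows downward while north grows upward, hence the minus sign.
        return CGPoint(x: referencePlanPoint.x + CGFloat(dxMeters) * pointsPerMeter,
                       y: referencePlanPoint.y - CGFloat(dyMeters) * pointsPerMeter)
    }
}
```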
GPS may not be accurate enough for this purpose, especially indoors. Assuming errors on the order of 10 meters, you'll have difficulty determining which floor the user is on.
Here's a neat (?) idea that might work: can you post some "You are here" placards at various locations around the building? You could label each one with a unique, machine-readable location code (maybe a QR code or something similar), then take an image using the camera, have your app read that image and interpret the location code, and use that instead of GPS to determine the start location.
GPS inside? That's your first -- and biggest -- hurdle.
Next hurdle is knowing the GPS coordinates of at least three points on that PDF to define the plane of your map in the real world. (The PDF will need to be to scale, of course.)
So that gives you where you are on the PDF. Now you'll need to figure out some way to determine where you can walk (or where you can't) to get directions.