Google turn-by-turn directions API (iPhone)

I can't find an API from Google that provides turn-by-turn directions. I just want to confirm whether this type of API is even public. If not, what are my alternatives on iOS?
Thanks!

Have you tried Googling "Google Maps Directions API"? It is easy to find via Google or on the Google Maps API homepage.
http://code.google.com/apis/maps/documentation/directions/
"Each element in the steps array defines a single step of the
calculated directions. A step is the most atomic unit of a direction's
route, containing a single step describing a specific, single
instruction on the journey. E.g. "Turn left at W. 4th St." The step
not only describes the instruction but also contains distance and
duration information relating to how this step relates to the
following step. For example, a step denoted as "Merge onto I-80 West"
may contain a duration of "37 miles" and "40 minutes," indicating that
the next step is 37 miles/40 minutes from this step."
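To make the steps array concrete, here is a minimal Python sketch of fetching a route and printing each instruction. The endpoint and field names follow the Directions API documentation; YOUR_API_KEY and the origin/destination are placeholders.

```python
import requests

# Minimal sketch: fetch driving directions and print each turn-by-turn step.
# YOUR_API_KEY is a placeholder; the JSON shape follows the Directions API docs.
resp = requests.get(
    "https://maps.googleapis.com/maps/api/directions/json",
    params={
        "origin": "Chicago, IL",
        "destination": "Los Angeles, CA",
        "key": "YOUR_API_KEY",
    },
)
data = resp.json()

# Each route has legs, and each leg holds the steps array quoted above.
for step in data["routes"][0]["legs"][0]["steps"]:
    # html_instructions carries the instruction, e.g. "Turn left at W. 4th St."
    print(
        step["html_instructions"],
        "-",
        step["distance"]["text"],
        "/",
        step["duration"]["text"],
    )
```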

When I implemented this on Android, I passed a URI-formatted string containing an address to the OS, which launched the built-in turn-by-turn navigation (a third-party API, as I recall, but one that shipped with the OS). Now that I'm developing for iOS, I would like to find a similar solution on iPhone. As far as I can tell so far, there is no comparable API for iOS (yet). I actually hope that I'm wrong here.
Hope this helps.

Related

Create custom Google Smart Home Action

I have a Google Nest Hub Max and I want to increase its capabilities for a custom need:
"Hey Google, add xyz to my work planning"
Then I want to make an HTTP call to my private server
The private server returns a text
The text is displayed in the Google Nest Hub Max screen + speak-out.
How can that be achieved?
Originally I thought this would not be difficult. I imagined a NodeJS, Java, Python, or whatever framework where Google hands me the xyz text, I do my thing, and I return a simple text. And obviously, Google would handle the intent matching and only call my custom code when users say the precise phrase.
I've tried to search for how to do this online, but there is a lot of documentation everywhere. This post summarizes the situation quite well, but I've never found a tutorial or hello-world example of such a thing.
Does anyone know how to do it?
For steps 2 and 3, I don't necessarily need to use a private server, if I can achieve what the private server does inside the Smart Home Action code itself, mostly with some basic Python code.
First - you're on the right track! There are a few assumptions and terminology issues in your question that we need to clear up first, but your idea is fundamentally sound:
Google uses the term "Smart Home Actions" to describe controlling IoT/smart home devices such as lights, appliances, outlets, etc. Making something that you control through the Assistant, including Smart Speakers and Smart Hubs, means building a Conversational Action.
Most Conversational Actions need to be invoked by name. So you would start your action with something like "Talk to Work Planning" or "Ask Work Planning to add XYZ". There are a limited, but growing, number of built-in intents (BIIs) to cover other verticals - but don't count on them right now.
All Actions are public. They all share an invocation name namespace and anyone can access them. You can add Account Linking or other ways to ensure a limited audience, and there are ways to have more private alpha and beta testing, but there are issues with both. (Consider this an opportunity!)
You're correct that Google will help you with parsing the Intent and getting the parameter values (the XYZ in your example) and then handing this over to your server. However, the server must be at a publicly accessible address with an HTTPS endpoint. (Google refers to this as a webhook.)
There are a number of resources available, via Google, StackOverflow, and elsewhere:
On StackOverflow, look for the actions-on-google tag. Frequently, conversational actions are built with either dialogflow-es or, more recently, actions-builder, each of which has its own tag. (And when you post your own questions, don't forget to provide code, errors, screenshots, and as much other information as you can to help us help you overcome the issues.)
Google's documentation about how to design and build conversational actions.
Google also has codelabs and sample code illustrating how to build conversational actions. The codelabs include the "hello world" examples you are probably looking for.
Most sample code uses JavaScript with node.js, since Google provides a library for it. If you want to use Python, you'll need to handle the JSON format that the Assistant will send to your webhook and that it expects back in response (a minimal webhook sketch follows this list).
There are articles and videos written about it. For example, this series of blog posts discussing designing and developing actions outlines the steps and shows the code. And this YouTube playlist takes you through the process step-by-step (and there are other videos covering other details if you want more).
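To give a flavor of the Python route mentioned above: there is no official Google library for Python, so you parse the webhook JSON yourself. The sketch below assumes a Dialogflow-backed action and uses the Dialogflow v2 webhook request/response field names; the intent name add_to_planning and the item parameter are made up for this example.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical webhook for a Dialogflow-backed action. The field names follow
# the Dialogflow v2 webhook format; the intent name "add_to_planning" and the
# "item" parameter are assumptions for this example, not real Google names.
@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    intent = body["queryResult"]["intent"]["displayName"]
    params = body["queryResult"]["parameters"]

    if intent == "add_to_planning":
        item = params.get("item", "")
        # This is where you would call your private server (steps 2 and 3).
        reply = f"Added {item} to your work planning."
    else:
        reply = "Sorry, I didn't get that."

    # fulfillmentText is both displayed and spoken on a Smart Display.
    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=8080)
```

Remember that the webhook must be reachable at a public HTTPS address, as noted above.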

How to detect more than one intent with IBM Watson Assistant?

Can the IBM Watson Conversation / Assistant service detect more than one intention in a single sentence?
Example of input:
play music and turn on the light
Intent 1 is #Turn_on
Intent 2 is #Play
==> the answer must cover both intents simultaneously: music played and light turned on
If so, how can I do that?
Yes, Watson Assistant returns all detected intents with their associated confidence. See here for the API definition. The response returned by Watson Assistant contains an array of intents recognized in the user input, sorted in descending order of confidence.
The documentation has an example of how to deal with multiple intents and their confidence. Also be aware of the alternate_intents setting, which allows even more intents with lower confidence to be returned.
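For illustration, here is a minimal sketch of requesting alternate intents with the ibm-watson Python SDK (the service has been renamed over the years; the API key, service URL, and workspace_id are placeholders):

```python
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholders: API key, service URL, and workspace_id are yours to fill in.
authenticator = IAMAuthenticator("YOUR_API_KEY")
assistant = AssistantV1(version="2019-02-28", authenticator=authenticator)
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

response = assistant.message(
    workspace_id="YOUR_WORKSPACE_ID",
    input={"text": "play music and turn on the light"},
    alternate_intents=True,  # return more intents, not just the top one
).get_result()

# Intents arrive sorted in descending order of confidence.
for intent in response["intents"]:
    print(intent["intent"], intent["confidence"])
```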
While #data_henrik is correct about how to get the other intents, that doesn't mean the second intent is actually related.
Take the following example graph, where we map the intents versus the confidence that comes back:
Here you can clearly see that there are two intents in the person's question.
Now look at this one:
You can clearly see that there is only one intent.
So how do you solve this? There are a couple of ways.
You can check whether the first and second intents fall within a certain percentage of each other. This is the easiest to detect, but trickier to code when selecting two different intents. It can get messy, and you will sometimes get false positives.
At the application layer you can run K-Means on the intent results. K-Means lets you group intents into buckets, so you create two buckets (K=2), and if there is more than one intent in the first bucket, you have a compound question (see the sketch after this list). I wrote about this, with a sample, on my site.
There is a new feature you can play with in Beta called "Disambiguation". It lets you flag intent nodes with a clarifying question to ask. Then, if two matching intents are found, the assistant asks "Did you mean ...?" and the user can select.
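Here is a minimal sketch of the K-Means idea from option 2, using scikit-learn. The confidence values and the K=2 bucket count follow the description above; the structure is illustrative, not the exact code from my site.

```python
import numpy as np
from sklearn.cluster import KMeans

def compound_intents(intents, k=2):
    """Cluster intent confidences into k buckets; if the top bucket holds
    more than one intent, treat the utterance as a compound question.
    `intents` is the list Watson Assistant returns, sorted by confidence,
    e.g. [{"intent": "Play", "confidence": 0.93}, ...]. Needs >= k intents."""
    conf = np.array([[i["confidence"]] for i in intents])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(conf)
    top_bucket = labels[0]  # index 0 is the highest-confidence intent
    return [i["intent"] for i, lbl in zip(intents, labels) if lbl == top_bucket]

# Example: both strong intents land in the high-confidence bucket.
sample = [
    {"intent": "Play", "confidence": 0.93},
    {"intent": "Turn_on", "confidence": 0.91},
    {"intent": "Weather", "confidence": 0.12},
]
print(compound_intents(sample))  # -> ['Play', 'Turn_on']
```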
Is this disambiguation feature available in non-production environments, in Beta?

Google Fit Rest Api Step Counts inconsistent and different from Fit App

This seems to be a common enough problem that there are a lot of entries when one googles for help but nothing has helped me yet.
I am finding that the results provided by the REST API for estimated_steps are wildly different from those that appear in the device app.
I am running a fetch task for users via cron job on a PHP/Laravel app.
I'm using the estimated_steps approach from https://developers.google.com/fit/scenarios/read-daily-step-total to retrieve the step count.
Some days the data is correct. Some days it's wildly different. For instance, on one given day, the REST API gives a step count of 5,661 while the app shows 11,108. Then there are six or seven days when the stream is correct.
Has anyone faced this sort of behavior? I've tested for timezone differences and logged and analyzed the response JSON to see if I'm making some obvious mistake, but no.
You may check the "How do I get the same step count as the Google Fit app?" documentation. Note that even when using the right data source, your step count may still be different from that of the Google Fit app.
This could be due to one of the following reasons:
On Wear, the Fit MicroApp when connected will display step counts queried on the phone and transferred over via the Wearable APIs. Other MicroApps accessing local-only data will only get watch steps. We are working on making this easier for developers.
Sometimes the step calculation code for the Google Fit app is updated with bug fixes before we are able to release the fixes to developers (which requires a Google Play Services release). We are also working on making it possible for developers to access fixes at the same time.
The Fit app uses a specific data source for steps and adds some functionality (described in the documentation) on top of the default merged steps stream.
You can access the "estimated" steps stream as shown here:
derived:com.google.step_count.delta:com.google.android.gms:estimated_steps
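For reference, here is a minimal Python sketch of the aggregate request from that scenario, pinned to the estimated_steps data source. ACCESS_TOKEN is a placeholder OAuth2 token; note the comment about timezone alignment, which is a common cause of the kind of mismatch described in the question.

```python
import time
import requests

# ACCESS_TOKEN is a placeholder OAuth2 token with fitness.activity.read scope.
now_ms = int(time.time() * 1000)
body = {
    "aggregateBy": [{
        "dataTypeName": "com.google.step_count.delta",
        "dataSourceId": (
            "derived:com.google.step_count.delta:"
            "com.google.android.gms:estimated_steps"
        ),
    }],
    # One bucket per day. Align start/end with the USER's timezone midnight,
    # or daily totals will drift from what the Fit app shows.
    "bucketByTime": {"durationMillis": 86400000},
    "startTimeMillis": now_ms - 7 * 86400000,
    "endTimeMillis": now_ms,
}
resp = requests.post(
    "https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate",
    headers={"Authorization": "Bearer ACCESS_TOKEN"},
    json=body,
)
for bucket in resp.json().get("bucket", []):
    for dataset in bucket["dataset"]:
        steps = sum(p["value"][0]["intVal"] for p in dataset.get("point", []))
        print(bucket["startTimeMillis"], steps)
```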
Hope this helps!

Google Places API

We are developing an application that utilizes several of the supported place types in the Google Places API. But we have noticed that the supported restaurant type is different from the restaurant info you would find at the bottom of a Google page on a smartphone. For example, the restaurant info given on a smartphone provides restaurant details and location; the one listed in the Google Places API does not show the same info. Please tell us how we can obtain the codes for the same info that's provided on the Google page from a smartphone. Is it free or part of a premium deal with Google? Thanks, and I hope this is clear.
Arthur
The Google Places database is different from Google Maps.
I'd imagine that at some point, Google would unite the two.
You could match by lat/long and name. This would probably work at least half the time. The other half, the name and lat/long from the two databases may be a little different (a rough matching sketch follows).
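Here is a rough Python sketch of that fuzzy match; the distance and name-similarity thresholds are arbitrary starting points to tune against your own data, not validated values.

```python
import math
from difflib import SequenceMatcher

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/long points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def same_place(a, b, max_dist_m=150, min_name_sim=0.8):
    """Heuristic match: same place if the names are similar enough and the
    coordinates are close enough. Thresholds are guesses; tune on your data."""
    dist = haversine_m(a["lat"], a["lng"], b["lat"], b["lng"])
    sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return dist <= max_dist_m and sim >= min_name_sim

# Records for the same restaurant from the two databases:
places_rec = {"name": "Joe's Diner", "lat": 37.7764, "lng": -122.4375}
maps_rec = {"name": "Joes Diner", "lat": 37.7766, "lng": -122.4373}
print(same_place(places_rec, maps_rec))  # True
```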

Searching MKMapview using JSON tends to return no results, or an incorrect one

I have an iPhone app (iOS 5) that uses a UISearchBar to search an MKMapView. We used JSON queries, and used the fantastic answers from this question as reference (our code is very similar). The process itself works fine now, but we tend to get no results back from Google when we query it, or just get a really far away, incorrect one. Most times, we can even search for "McDonald's" or "Subway" and it won't return any results. In general, it rarely gives a good result back unless we're very specific and include the city and state and everything.
Is there another, better way to go about this? Has something been updated since that answer that we should now take into account? The problem doesn't seem to be that the code isn't working, but rather that Google just doesn't handle queries well the way we send them. This seems to be a fairly common use of MKMapView, so I figured there should be an easier, better-working solution.
Any help would be much appreciated.
Here is a very useful list of the parameters that the Google Maps API supports:
http://querystring.org/google-maps-query-string-parameters/
You have a couple of options:
1) Get the user's location from the app and pass it into the search query with the sll parameter, e.g.
this search doesn't include a location:
https://maps.google.com/?q=starbucks
but this one does (I've used San Francisco in this example):
https://maps.google.com/?q=starbucks&sll=37.776414,-122.437477
Then you'll get results for the user's actual location. You'll also need to do something sensible if the user does not permit the app to access their location (in that case you may want to disable search). A sketch of building such a URL follows below.
2) If your app is for a specific place, then you can just add that place at the end of your search string. For example, my Domesday app is only for England, so I include ",England" at the end of all my search requests, and that works nicely for me.
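For option 1, here is a small Python sketch of building such a URL with an optional sll bias (the parameter names follow the list linked above):

```python
from urllib.parse import urlencode

def maps_search_url(query, lat=None, lng=None):
    """Build a Google Maps search URL, biasing results toward the user's
    location via the sll parameter when coordinates are available."""
    params = {"q": query}
    if lat is not None and lng is not None:
        params["sll"] = f"{lat},{lng}"
    return "https://maps.google.com/?" + urlencode(params, safe=",")

# Without a location bias:
print(maps_search_url("starbucks"))
# Biased to San Francisco, as in the example above:
print(maps_search_url("starbucks", 37.776414, -122.437477))
```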