I am trying to build a custom Google Action.
When my device changes state (for example, the user turns the light on), I update the Home Graph (the graph database) to report that a change happened.
The Report State call works correctly (I get an HTTP 200 status code). When I ask the Google Assistant for the device state, it gives me the correct information.
But the UI in the Google Home app is not changing.
I read somewhere that this is the way it works. I'd just like a confirmation of that, as it will confuse a lot of users...
This is the current behavior in the user interface. If you're receiving a 200 status code, you're doing it correctly.
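For reference, the kind of Report State call that produces that 200 looks roughly like this (a minimal sketch assuming the googleapis Node.js client and a service account with the HomeGraph scope; the agentUserId and device id are placeholders):

```js
// Hedged sketch of a Report State call with the googleapis Node.js client.
// The agentUserId and the device id ("light-1") are placeholders.
const { google } = require('googleapis');

const auth = new google.auth.GoogleAuth({
  scopes: ['https://www.googleapis.com/auth/homegraph'],
});
const homegraph = google.homegraph({ version: 'v1', auth });

async function reportLightOn() {
  // A 200 here only means the Home Graph accepted the new state;
  // the Home app UI refreshes on its own schedule.
  await homegraph.devices.reportStateAndNotification({
    requestBody: {
      requestId: 'some-unique-id',   // any unique string per report
      agentUserId: 'user-123',       // the user id you returned at SYNC time
      payload: {
        devices: {
          states: {
            'light-1': { on: true, online: true },
          },
        },
      },
    },
  });
}
```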
I am new to the whole creation process of Google Actions.
I created an Action with the new Actions Console.
It has a fulfillment endpoint pointing to my server (for example: www.mypage.com/api).
For testing purposes it has no authentication, so it returns a public API response.
Said API generates a simple JSON response based on the event handler that is sent to it.
In short: the Action's onEnter event sends a handler call to the API, the API queries an SQL database and sends the response back to my Google Action, which then "speaks" the SQL result.
The results with the Actions Console Simulator are:
Testing with "Smart Display" device: everything works.
Testing with "Speaker (e.g. Google Home)": everything works.
Testing with "Phone": the main intent is invoked and the text is shown, but it does not get spoken.
So I tested the command "Hey Google, talk to the unicorn app" directly on my smartphone.
Here **everything works fine**, as said smartphone uses the same e-mail address as my Actions Console account: it recognizes the main invocation command, and when I ask by voice for the data, the data is received from my server and spoken.
So something is wrong with the "Phone" device in the simulator. Other users have confirmed to me that they often have trouble with the simulator functioning correctly.
I then deployed my Action.
It was reviewed and approved.
A few seconds later I received an automated e-mail saying that there were too many errors with my app and asking me to check its health status. I did so, and in the Health tab I can see that it has an error, but it does not show me what the error is.
THEN I CONTACTED AN ACTIONS-ON-GOOGLE EXPERT
They helped me a lot in the right direction, but they could not dig deeper into the problem, as my connection (the endpoint API) is outside of their servers.
So I ended up with their tip to check the Google Cloud Logging console.
As said Logging console is also new to me, I learned how to query my results, but:
How can I query for the so-called "is_health_check" flag?
I am asking this because the Google expert recommended that I search for said flag, but I do not know how to query it.
Sorry for this ultra-long entry, but I am trying to be as transparent to you as possible, as I have been trying this out for several days now.
Thanks in advance for your time, ppl!
So the error is simple once you know how Google handles external webhooks. Thanks to the help of two Actions-on-Google experts, I was informed that Google pings your external webhook from time to time.
As soon as such a ping returns an error, the Action is deactivated in the Assistant until a later ping reports that everything is fine again.
My problem was that after deploying the Action, and while it was under review, I continued to work on the code on my server. While I was coding, the Google server pinged my webhook and received an error code.
My fault, but at least I learned about Google pinging your Action!
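As for the "is_health_check" question itself: in the Google Cloud Logs Explorer, a plain text search for is_health_check (just the flag name typed into the query box) is usually the quickest way to surface the logged requests that carry it. On the webhook side you can also flag the pings yourself. A rough sketch, assuming an Express-based fulfillment endpoint (the route path and response body are placeholders):

```js
// Hedged sketch: mark Google's health-check pings in my own logs and make
// sure they always get an HTTP 200, even while the handler is being reworked.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/api', (req, res) => {
  // The ping carries an "is_health_check" argument somewhere in the request;
  // its exact location differs between integrations, so just scan the body.
  const isHealthCheck = JSON.stringify(req.body || {}).includes('is_health_check');
  // If these logs end up in Cloud Logging (e.g. the server runs on Cloud Run
  // or App Engine), you can filter on jsonPayload.is_health_check there;
  // otherwise grep your own server logs for the flag.
  console.log(JSON.stringify({ is_health_check: isHealthCheck, path: req.path }));

  try {
    // ... normal fulfillment logic building the conversation response ...
    res.status(200).json({ /* your usual response payload */ });
  } catch (err) {
    // Never let a half-finished deployment turn the ping into an error.
    console.error(err);
    res.status(200).json({});
  }
});

app.listen(3000);
```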
I developed an Actions on Google app which sends a rich response. Everything works fine in the Actions on Google simulator. Now I want to test it on my Google Home Mini, but my rich responses are not spoken by the Mini. I would like to ask if it is possible to send my rich response to the Google Home app? The Home Mini would say something like "Ok, I found these hotels, look at the Home app", and the rich responses would be shown there?
You can't send users to the Home app, but you can direct them to the Assistant available through their phone. The process is roughly:
1) At some point in the conversation (decide what is best for you, but when you have results that require a display is usually good, or when the user says something like "Show me" or "Send this to my phone"), determine if they are on a device with a screen or not. You do this by using the app.getSurfaceCapabilities() method or by looking at the JSON in the originalRequest.data.surface.capabilities property. If they're using a screen, you're all set. But if not...
2) Make sure they have a screen they can use. You'll do this by checking the results from app.getAvailableSurfaces() or looking at the JSON in the (not fully documented) originalRequest.data.availableSurfaces array. If they don't have a screen, you'll need to figure out your best course of action. But if they do have a screen surface (such as their phone, currently) available...
3) You can request to transfer them to the new surface using the app.askForNewSurface() method, passing a message explaining why you want to make the switch, a message that will appear as a notification on the device, and which surface you need (the screen).
4) If the user approves, they'll get the notification on their mobile device (using that device's normal notification system). When they select the notification, the Assistant will open up and will send your Action an event called actions_intent_NEW_SURFACE. You'll need to create an Intent that handles this event and forwards it to your webhook.
5) Your webhook should confirm that it is on a useful surface, and then proceed with the conversation and send the results.
You can see more about handling different surfaces at https://developers.google.com/actions/assistant/surface-capabilities
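Put together, that flow might look roughly like this with the legacy actions-on-google (v1) Node.js client that the methods above come from (intent names, messages, and the hotel text are placeholders, not your actual code):

```js
// Hedged sketch of the surface-transfer flow with the legacy v1 DialogflowApp.
const { DialogflowApp } = require('actions-on-google');

function showHotels(app) {
  if (app.hasSurfaceCapability(app.SurfaceCapabilities.SCREEN_OUTPUT)) {
    // Step 1: already on a screen device, just send the rich response.
    app.ask(app.buildRichResponse()
      .addSimpleResponse('Here are the hotels I found.')
      .addSuggestions(['More hotels', 'Book the first one']));
  } else if (app.hasAvailableSurfaceCapabilities(app.SurfaceCapabilities.SCREEN_OUTPUT)) {
    // Steps 2-3: voice-only here, but a screen surface (the phone) is available.
    app.askForNewSurface(
      'I found some hotels I can show you.',   // spoken reason for the switch
      'Your hotel results',                    // notification title on the phone
      [app.SurfaceCapabilities.SCREEN_OUTPUT]);
  } else {
    // No screen anywhere: fall back to a purely spoken answer.
    app.ask('I found three hotels near you. The closest is the Grand Example.');
  }
}

// Steps 4-5: handler for the intent mapped to the actions_intent_NEW_SURFACE event.
function newSurface(app) {
  if (app.isNewSurface()) {
    app.ask(app.buildRichResponse()
      .addSimpleResponse('Here are the hotels, now on your phone.'));
  } else {
    app.tell('Okay, I will keep the results here.');
  }
}

// Placeholder webhook wiring; intent names are made up.
exports.hotelWebhook = (request, response) => {
  const app = new DialogflowApp({ request, response });
  const actionMap = new Map();
  actionMap.set('find.hotels', showHotels);
  actionMap.set('new.surface', newSurface);
  app.handleRequest(actionMap);
};
```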
Rich responses can appear on screen-only or audio and screen experiences.
They can contain the following components:
One or two simple responses (chat bubbles)
An optional basic card
Optional suggestion chips
An optional link-out chip
An option interface (list or carousel)
So you need to make sure that the simple (text) response contains all the details for voice-only cases (e.g. Google Home/Mini/Max).
However, if your users are using the Assistant on a device with a screen, you can offer them a better experience with rich responses (e.g. suggestion chips, links, etc.).
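A small hedged sketch of how those components can be combined in one response, again with the legacy v1 RichResponse builder (the hotel name, text, and URL are made-up placeholders):

```js
// Hedged sketch: simple response, basic card, suggestion chips and a
// link-out chip combined in one rich response (legacy v1 client).
function showHotelCard(app) {
  app.ask(app.buildRichResponse()
    .addSimpleResponse('Here is the closest hotel.')        // chat bubble (also spoken)
    .addBasicCard(app.buildBasicCard('Rooms from 80 euros per night.')
      .setTitle('Hotel Example'))                           // optional basic card
    .addSuggestions(['Next hotel', 'Book it'])              // suggestion chips
    .addSuggestionLink('the hotel website', 'https://example.com/hotel')); // link-out chip
}
```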
I used the Location Manager API to find out the latitude and longitude of the current place.
But when I call the location manager's update method, I get an alert from the application, which I have attached to this message.
I want to remove this alert message and have it default to YES (Allow).
Is it possible or not?
Thank You
It's not possible to do that. It is done to safeguard the user's security. If you've been following the Path address book issue, it was raised because the Path app didn't ask users before using their address book. So Apple has implemented the permission prompt by default in Core Location and Push Notifications and, by the looks of it, in the address book as well, and you will have to stick to it. After all, you don't want to run into privacy issues. :)
It is not possible. This dialog will be shown once, when the app starts.
You cannot, and even if you could, your app probably wouldn't be accepted by Apple.
In my application, I'm using location information to capture the user's location.
I have the following questions:
What alert messages do we have to provide in the application to show the user that we are going to use their location?
Can we fetch the location info in the background once the user accepts?
If the user allows fetching the location information, let's say the first time the application launches, do we need to provide the alerts for consecutive fetches (on subsequent app launches)?
Thank you.
None, iOS will do this automatically.
This is the normal procedure.
iOS will bug the user as it is programmed to do.
If you assign the purpose property to your location manager instance, it will include that in the popup that asks the user for permission. See http://developer.apple.com/library/ios/documentation/CoreLocation/Reference/CLLocationManager_Class/CLLocationManager/CLLocationManager.html#//apple_ref/doc/uid/TP40007125-CH3-SW30
Yes for the other two.
I'm not an iPhone programmer, but I'm an iPhone user. Every time an application has asked for access to my location information, once I say yes it never asks again and uses it regularly.
This leads me to believe that the answer to your questions is:
1) The alert message is probably provided automatically the first time your application requests location information. I'm pretty sure Apple isn't relying on developers policing themselves.
2) Yes, I believe so.
3) Yes, I believe so.
I'm working on a football application. The application connects to a web service and gets the required data via SOAP requests whenever a tab is opened. One of the tabs shows the live matches of the current day. When the live tab is opened, it refreshes the view with a timer and shows the status updates (goal scored, half time, or full time).
What I need to do is get the status updates while the app is closed. The user will select at most 2 competitions in the app's settings; the status updates about these 2 competitions then need to be alerted. Can I use the push notification service to send SOAP requests and make alerts according to the response? Or does it only allow getting a response? Or is there any other way I can do it?
Thanks in advance.
I'm not entirely clear what you are asking. The part where you write:
> can I use the push notification service to send SOAP requests and make alerts according to the response? Or does it only allow getting a response?
isn't really clear to me. What response are you talking about?
In any case... push notification is what it says: it pushes a notification to the iPhone.
It does not:
activate your application in the background
allow for any action of your application without the user opening said application first
allow any kind of data to be gathered from the phone
If you want the user's phone to talk to your server, the user will need to open your application - if that's what you're asking.
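For what it's worth, the usual pattern for score alerts is that your server does the SOAP polling and, when a match status changes, pushes a short notification to the device; the app itself only fetches fresh data once the user opens it. A hedged sketch of the payload such a server might hand to APNs (the alert text is made up, and actually delivering it via the APNs provider API or a library is outside this snippet):

```js
// Hedged sketch: a standard APNs payload (aps.alert, aps.sound, aps.badge)
// carrying a score update that was detected server-side.
const goalPayload = {
  aps: {
    alert: 'GOAL! FC Example 1 - 0 Example United',
    sound: 'default',
    badge: 1,
  },
};

console.log(JSON.stringify(goalPayload));
```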