What is the class field in BadgeNotification in HMS Push Kit?

I will send push notifications to Huawei devices using the API from a server. The API documentation says that it is mandatory to give a class name in the badge structure.
I do not know the class name in the APK, because I send notifications via the API.
What does this class name do?
Can I use any class name, or do I have to give the correct class from the APK?

Using HMS Core Push Kit, you can have the app badge number changed automatically by setting the badge field on the server. The class field is mandatory in this scenario, and its value must be the full path of your app's launcher class. For example, if your app's package name is com.huawei.push and its launcher class is MainActivity, the value should be com.huawei.push.MainActivity.
So in your case, you can get this value from your client development colleagues.
Here is an example for your reference:
{
  "validate_only": false,
  "message": {
    "notification": {
      "title": "message title",
      "body": "message body"
    },
    "android": {
      "notification": {
        "click_action": {
          "type": 2,
          "url": "https://developer.huawei.com/consumer/en/hms"
        },
        "badge": {
          "add_num": 1,
          "class": "com.huawei.push.MainActivity",
          "set_num": 10
        }
      },
      "ttl": "1000"
    },
    "token": [
      "pushtoken1"
    ]
  }
}
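Note that the example above carries both add_num and set_num; as the names suggest, add_num increments the current badge number while set_num sets it to an absolute value, so in practice you would usually send only one of the two. A minimal badge block that simply increments the badge by one might look like this (reusing the illustrative launcher class from above):
"badge": {
  "add_num": 1,
  "class": "com.huawei.push.MainActivity"
}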
For more details, you can refer to this guide: Push Kit-Badging

Related

Publishing Actions on google without Dialogflow

I am building a Google Assistant action without Dialogflow, using our own NLG/NLU instead. I am using the gactions CLI to upload my action file. Everything works fine, and I am able to test it in the simulator as well as on my phone.
However, I am unable to submit it for Alpha/Beta testing: the buttons are disabled, and it says I do not have any actions (since I am not using Dialogflow).
How can I submit the action for Alpha/Beta testing?
It looks like you haven't defined any actions in the action package (the file you upload using the gactions tool). Unfortunately, "action" has several meanings here: one is the app itself, another is the things it can do via intents.
Try adding the main intent to your action package:
{
  "actions": [
    {
      "description": "Default Welcome Intent",
      "name": "MAIN",
      "fulfillment": {
        "conversationName": "HelloWorld"
      },
      "intent": {
        "name": "actions.intent.MAIN",
        "trigger": {
          "queryPatterns": [
            "Open hello world"
          ]
        }
      }
    }
  ]
}
The conversationName field must be the same as the one you define under conversations in the same file.
"conversations": {
"HelloWorld": {
"name": "HelloWorld",
"url": "https://myapp.example.com",
"fulfillmentApiVersion": 2,
}
}
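Putting the two snippets together, the relevant parts of a complete action package would look roughly like this (a minimal sketch; HelloWorld and the fulfillment URL are placeholders):
{
  "actions": [
    {
      "description": "Default Welcome Intent",
      "name": "MAIN",
      "fulfillment": {
        "conversationName": "HelloWorld"
      },
      "intent": {
        "name": "actions.intent.MAIN",
        "trigger": {
          "queryPatterns": [
            "Open hello world"
          ]
        }
      }
    }
  ],
  "conversations": {
    "HelloWorld": {
      "name": "HelloWorld",
      "url": "https://myapp.example.com",
      "fulfillmentApiVersion": 2
    }
  }
}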

Continue a conversation after Browsing carousel in DialogFlow/Actions

So I have been experimenting with the different response types for Dialogflow through Actions: Actions Responses and Webhook/Fulfillment.
So far, I have been able to generate proper responses for types like List, Basic Card, and Suggestion Chips. What I need now is a list-based response that lets the user open a link in a browser when touched and does not generate a chat bubble. The Browsing Carousel fits these criteria: Browsing Carousel.
I have successfully created and simulated the output with 2 sample items. The issue is when the user wants to continue the conversation. As per the Guidance section in the help above, the browsing carousel:
By default, the mic remains closed after a browse carousel is sent. If you want to continue the conversation afterwards, we strongly recommend adding suggestion chips below the carousel.
From this, what I understood is that the user has to invoke the app again by saying "Ok Google, talk to [app]". This doesn't seem very user-friendly, as the user expects to return to the conversation she was having with the agent after she has looked through the links from the carousel. Please note, I have simulated the flow using the Google Actions Simulator on the Console Actions page.
As soon as I invoke the intent with the Browsing Carousel, it is shown to me with the sample Items. But when I enter/say the next command to continue the conversation, the agent simply returns with:
We're sorry, but something went wrong. Please try again.
And the REQUEST/RESPONSE window as well as the ERRORS/DEBUG windows are empty. I have logged calls to the webhook, and no call is received.
The question: is there a way to give the user the ability to browse an informative link from a response list (not a Basic Card) and return to the conversation without ending it?
Here is the response for Browsing Carousel from RESPONSE window in Actions > Simulator (note I've removed non-relevant parts):
{
  "conversationToken": "[token info]",
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "You have the following 2 options:",
                "displayText": "You have the following 2 options:"
              }
            },
            {
              "carouselBrowse": {
                "items": [
                  {
                    "title": "Test 1",
                    "description": "Desc 2",
                    "image": {
                      "url": "[some url]"
                    },
                    "openUrlAction": {
                      "url": "[some url]"
                    }
                  },
                  {
                    "title": "Test 2",
                    "description": "Desc 2",
                    "image": {
                      "url": "[some url]"
                    },
                    "openUrlAction": {
                      "url": "[some url]"
                    }
                  }
                ]
              }
            }
          ],
          "suggestions": [
            {
              "title": "Continue"
            },
            {
              "title": "End"
            }
          ]
        }
      }
    }
  ],
  "responseMetadata": {
    "status": {
      "message": "Success (200)"
    },
    "queryMatchInfo": {
      "queryMatched": true
    }
  }
}
For all those who are using the simulator to test the Browsing Carousel and seeing that it stops responding after the output: please use a device to test it instead. When a device renders the carousel response, it doesn't kill the conversation; it only turns off the mic, which is the intended behavior. You can introduce suggestion chips to help the user continue the conversation.
Update: also make sure each webhook intent call returns a proper response with a Simple Response. As long as the response has the required textual and audio information correctly set up, the simulator will not fail.
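For reference, a minimal sketch of what such a webhook response could look like in the Dialogflow v2 payload format, with a simple response first, then the carousel, then suggestion chips so the user can continue (the titles and URLs are placeholders):
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "You have the following 2 options:"
            }
          },
          {
            "carouselBrowse": {
              "items": [
                {
                  "title": "Option 1",
                  "description": "First option",
                  "openUrlAction": {
                    "url": "https://example.com/option-1"
                  }
                },
                {
                  "title": "Option 2",
                  "description": "Second option",
                  "openUrlAction": {
                    "url": "https://example.com/option-2"
                  }
                }
              ]
            }
          }
        ],
        "suggestions": [
          {
            "title": "Continue"
          },
          {
            "title": "End"
          }
        ]
      }
    }
  }
}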

Google Home dialogFlow V2 API mediaResponse not working

I decided to upgrade my Google Assistant action to use the Dialogflow V2 API, and my webhook returns an object like this:
{
  "fulfillmentText": "Testing",
  "fulfillmentMessages": [
    {
      "text": {
        "text": [
          "fulfillmentMessages text attribute"
        ]
      }
    }
  ],
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "largeImage": {
                    "url": "https://.../640x480.jpg"
                  },
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            },
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "ssml": "simpleResponse: ssml",
              "displayText": "simpleResponse displayText"
            }
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}
But I get an error message saying my action is not available. Is mediaResponse supported by V2? Should I format my object differently? Also, when I remove the mediaResponse object, it works just fine and the Assistant speaks the simpleResponse part.
This action was re-created in mid-March 2018, and I read about the May deadline, which is why I decided to upgrade to V2. Do you think I should go back to V1? I know I would have to delete and re-create it, but that is fine. This is a link to the JSON object I see in the debug tab. Thanks once again.
I set "API V2" in my action dialogFlow console, this is a screenshot of that setting
Here is an screenshoot of my action's integration -> Google Assistant
Thanks Allen, Yes I do have "expectUserResponse": false, I added the suggestion object you recommended but, unfortunately nothing changed, I am still getting this error
Simulator debug tag details
First of all, this is not a problem with Dialogflow V2. You also seem to be confusing the sunset of Actions on Google V1 with the release of Dialogflow V2; they are two different creatures completely. If your project were using AoG V1, there would be a setting on the Actions integration screen, and there isn't.
It is fine if you want to move to Dialogflow V2, but it isn't required. Media definitely works under Dialogflow V2.
The array of items must include a simpleResponse item first, before any of the other items in the RichResponse. (You also shouldn't include both ssml and textToSpeech; just one of them.) You also don't need the fulfillmentText and fulfillmentMessages components, since those are provided by the richResponse.
You also need to include suggestion chips unless you have set expectUserResponse to false. Somewhere in the simulator debug there is probably a block that says:
{
  "name": "MalformedResponse",
  "debugInfo": "expected_inputs[0].input_prompt.rich_initial_prompt: Suggestions must be provided if media_response is used..",
  "subDebugEntryList": []
}
So something more like this should work:
{
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "displayText": "simpleResponse displayText"
            }
          },
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "largeImage": {
                    "url": "https://.../640x480.jpg"
                  },
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            }
          }
        ],
        "suggestions": [
          {
            "title": "This"
          },
          {
            "title": "That"
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}
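Alternatively, since you mention you have "expectUserResponse": false, suggestion chips should not be needed in that case; a sketch of that variant (reusing the same placeholder media object) might look like:
{
  "payload": {
    "google": {
      "expectUserResponse": false,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "displayText": "simpleResponse displayText"
            }
          },
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            }
          }
        ]
      }
    }
  }
}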

Error "Empty speech response"

I tried to connect Dialogflow and Actions on Google, so I created some intents, connected the services, added explicit and implicit invocations, etc., but when I try the bot in the simulator (https://console.actions.google.com/project/[projectId]/simulator/) it always gives me the error:
"Failed to parse Dialogflow response into AppResponse, exception thrown with message: Empty speech response"
even though the inputType was "KEYBOARD".
What I tried so far:
I did set "Response from this tab will be sent to the Google Assistant integration" in Dialog Flow (do you have to set it for every single intent?), but I don't see any extra setting for speech.
I disabled the second language, first I had also intents in German
I also turned off the Fullfillment Webhook (implemented in API v1 and then also v2) with no change
I only found this user with the same problem https://productforums.google.com/forum/#!topic/dialogflow/xYjKlz31yW0;context-place=topicsearchin/dialogflow/Empty$20speech$20response but no resolution.
the fulfillment checkbox is checked at the intents
The bot works fine when I use it through "Try it now" on the very right in Dialog Flow or in the Web Demo https://bot.dialogflow.com/994dda8b-4849-4a8a-ab24-c0cd03b5f420
Unfortunately the docs don't say anything about this error. Any ideas?
Here a screenshot of the error on the Actions integration:
This is the full debug output:
{
  "agentToAssistantDebug": {
    "agentToAssistantJson": {
      "message": "Failed to parse Dialogflow response into AppResponse, exception thrown with message: Empty speech response",
      "apiResponse": {
        "id": "c12e1389-e887-49d4-b399-a332188ca946",
        "timestamp": "2018-01-27T03:55:30.931Z",
        "lang": "en-us",
        "result": {},
        "status": {
          "code": 200,
          "errorType": "success"
        },
        "sessionId": "1517025330705"
      }
    }
  },
  "assistantToAgentDebug": {
    "assistantToAgentJson": {
      "user": {
        "userId": "USER_ID",
        "locale": "en-US",
        "lastSeen": "2018-01-27T03:55:03Z"
      },
      "conversation": {
        "conversationId": "1517025330705",
        "type": "NEW"
      },
      "inputs": [
        {
          "intent": "actions.intent.MAIN",
          "rawInputs": [
            {
              "inputType": "KEYBOARD",
              "query": "Talk to Mica, the Hipster Cat Bot"
            }
          ]
        }
      ],
      "surface": {
        "capabilities": [
          {
            "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
          },
          {
            "name": "actions.capability.WEB_BROWSER"
          },
          {
            "name": "actions.capability.AUDIO_OUTPUT"
          },
          {
            "name": "actions.capability.SCREEN_OUTPUT"
          }
        ]
      },
      "isInSandbox": true,
      "availableSurfaces": [
        {
          "capabilities": [
            {
              "name": "actions.capability.AUDIO_OUTPUT"
            },
            {
              "name": "actions.capability.SCREEN_OUTPUT"
            }
          ]
        }
      ]
    },
    "curlCommand": "curl -v 'https://api.api.ai/api/integrations/google?token=TOKEN' -H 'Content-Type: application/json;charset=UTF-8' -H 'Google-Actions-API-Version: 2' -H 'Authorization: AUTH_TOKEN' -A 'Mozilla/5.0 (compatible; Google-Cloud-Functions/2.1; +http://www.google.com/bot.html)' -X POST -d '{\"user\":{\"userId\":\"USER_ID\",\"locale\":\"en-US\",\"lastSeen\":\"2018-01-27T03:55:03Z\"},\"conversation\":{\"conversationId\":\"1517025330705\",\"type\":\"NEW\"},\"inputs\":[{\"intent\":\"actions.intent.MAIN\",\"rawInputs\":[{\"inputType\":\"KEYBOARD\",\"query\":\"Talk to Mica, the Hipster Cat Bot\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.MEDIA_RESPONSE_AUDIO\"},{\"name\":\"actions.capability.WEB_BROWSER\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"}]}]}'"
  },
  "sharedDebugInfo": [
    {
      "name": "ResponseValidation",
      "subDebugEntry": [
        {
          "debugInfo": "API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: \": Cannot find field.\".",
          "name": "UnparseableJsonResponse"
        }
      ]
    }
  ]
}
Also "debugInfo" sounds like an internal problem:
"API Version 2: Failed to parse JSON response string with
'INVALID_ARGUMENT' error: \": Cannot find field.\"."
Here a screenshot of the welcome intent:
P.S. It took me ages to figure out what
"Query pattern is missing for custom intent"
means, so I document it here: in Dialogflow - Intent - "User says", you have to DOUBLE CLICK on a word in the text input field when you want to set it as a query parameter, which seems to be required for Actions on Google.
This happened to me. If this happens for an Intent you just added in the Dialogflow console and you are using Webhook fulfillment for the action, check the intent's fulfillment settings and ensure that the Webhook fulfillment slider is on. Evidently new intents don't automatically get webhook fulfillment: you have to opt each one in piecemeal (or at least, that was my experience).
I experienced this situation too.
My problem was that I used a SimpleResponse in my fulfillment index.js without importing it. The solution for me was to import SimpleResponse in index.js like this:
const {dialogflow, SimpleResponse} = require('actions-on-google');
So always check that you aren't using any dependencies without including them in your js file.
This is probably not the most common cause of the problem, but it can happen.
I got this error when running through the codelabs tutorial (https://codelabs.developers.google.com/codelabs/actions-1/index.html#4) because I hadn't given my intent the same name that the webhook script references.
I came across this error when trying to develop my own webhook. I first verified that my code was being called by looking at the Nginx log, after which I knew the problem was in my JSON output, because I had based my output on outdated examples.
The (up-to-date) documentation for both V1 and V2 of the API can be found here:
https://dialogflow.com/docs/fulfillment/how-it-works
This example response for v2 of the dialogflow webhook API helped me to resolve this error:
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "this is a simple response"
            }
          }
        ]
      }
    }
  }
}
Source: https://github.com/dialogflow/fulfillment-webhook-json/blob/master/responses/v2/ActionsOnGoogle/RichResponses/SimpleResponse.json
You can find more examples in the official github repository linked above.
Another possibility: if you have a text response (even an empty one) defined on the intent, you need to click the trash can next to the response to clear it out so that the webhook response is used.
The Actions on Google support team helped me fix this problem:
I needed to add a text Default Response to the intent used for explicit invocation.

Actions on Google Smart Home Skill: Room Hint / Home Graph

The documentation at https://developers.google.com/actions/smarthome/create-app#actiondevicessync mentions that the roomHint field in the JSON response to the SYNC request can be used to have Google automatically assign devices to the correct rooms.
However, no matter what I return in that field, the user still has to manually assign every device to a room; I cannot get Google to recognize the correct room from the roomHint field.
Here's an example response:
{
  "requestId": "500166151965294748",
  "payload": {
    "devices": [
      {
        "id": "9",
        "type": "action.devices.types.LIGHT",
        "traits": [
          "action.devices.traits.OnOff"
        ],
        "name": {
          "name": "Light"
        },
        "willReportState": false,
        "roomHint": "Attic"
      }
    ]
  }
}
Right now, a value supplied in roomHint is not used by the Home Graph to determine which room a device is in.