What is the best way or service on the IBM Cloud Platform to monitor and log Watson services?
I would be interested in extracting information like the response time for each request.
Thanks in advance
You can retrieve the Watson Assistant chat logs by making a REST API call. The logs contain the request and response timestamps for each input. You would probably also have to record Watson API call timings in your application to account for network latency when measuring overall application response time.
curl -u "apikey:{apikey}" "https://gateway.watsonplatform.net/assistant/api/v1/workspaces/{workspace_id}/logs?version=2017-09-13"
{
"logs": [
{
"request": {
"input": {
"text": "Good morning"
}
},
"response": {
"intents": [
{
"intent": "hello",
"confidence": 1
}
],
...
"workspace_id": "{workspace_id}",
"request_timestamp": "2017-09-13T16:39:56.284Z",
"response_timestamp": "2017-09-13T16:39:58.828Z",
...
}
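As a rough illustration, here is a minimal Python sketch that pages over the same logs endpoint and computes the per-request response time from the two timestamps; the apikey and workspace_id values are placeholders, and the field names follow the sample payload above.

```python
from datetime import datetime
import requests

APIKEY = "{apikey}"               # placeholder
WORKSPACE_ID = "{workspace_id}"   # placeholder
URL = f"https://gateway.watsonplatform.net/assistant/api/v1/workspaces/{WORKSPACE_ID}/logs"

resp = requests.get(URL, params={"version": "2017-09-13"}, auth=("apikey", APIKEY))
resp.raise_for_status()

for log in resp.json().get("logs", []):
    # Timestamps look like "2017-09-13T16:39:56.284Z"
    req_ts = datetime.strptime(log["request_timestamp"], "%Y-%m-%dT%H:%M:%S.%fZ")
    res_ts = datetime.strptime(log["response_timestamp"], "%Y-%m-%dT%H:%M:%S.%fZ")
    elapsed = (res_ts - req_ts).total_seconds()
    text = log["request"]["input"].get("text", "")
    print(f"{elapsed:.3f}s  {text!r}")
```

This only measures the Watson-side response time; as noted above, network latency has to be measured in the application itself.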
I've read a few posts about the limit of 600 requests per 600 seconds that the Facebook Graph API sets on requests.
This question is about getting some clarification on the issue I'm facing.
I'm making quite simple requests to the FB Graph:
So, from my home I run:
curl https://graph.facebook.com/v2.0/?id=https://www.example.com/article/the-name-of-the-article/
(Having the trailing slash is not trivial.)
which gives me these results:
{
"share": {
"comment_count": 0,
"share_count": 605
},
"og_object": {
"id": "XXXXX6ZZ70301002",
"description": "text",
"title": "title",
"type": "article",
"updated_time": "2019-03-09T00:15:06+0000"
},
"id": "https://www.example.com/article/the-name-of-the-article"
}
I took the URL from the JS code on the website.
Likewise, running the Scrapy crawler on the same URL, still from the home network, gives me the same as above:
{
"share": {
"comment_count": 0,
"share_count": 605
},
"og_object": {
"id": "XXXXX6ZZ70301002",
"description": "text",
"title": "title",
"type": "article",
"updated_time": "2019-03-09T00:15:06+0000"
},
"id": "https://www.example.com/article/the-name-of-the-article"
}
This is more than fine for now, and the JS-code-scraping system seems to be working. The results contain all the information from the JS calls to the FB Graph.
On the server side, however, the crawler runs as expected, but taking a closer look at the results, the information coming from the JS code execution is not there.
I've checked the whole code against another URL that also fires JS actions to provide HTML content, and there the code actually works fine.
Then, repeating the simple:
curl https://graph.facebook.com/v2.0/?id=https://www.example.com/article/the-name-of-the-article
this time from the server IP, it replies:
{
"error": {
"message": "(#4) Application request limit reached",
"type": "OAuthException",
"is_transient": true,
"code": 4,
"fbtrace_id": "ErXXXXZZrOn"
}
}
Regarding IP blocks, the code never came close to delivering 600 requests; it actually sent fewer than 10 requests to the Graph API.
Obviously, the information coming from JS requests to the FB Graph API from the server side is missing.
I tried different servers from different providers to check whether there was an IP filter on cloud providers, but that does not seem to be the case, as the results are the same on every server.
What is going on here?
Why do the JS requests not get valid response data when they are fired from server IP addresses? (The curl command also gives the OAuthException: Application request limit reached error.)
Thanks for any clue.
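For reference, a minimal Python sketch of the same token-less Graph call made with curl above; it only shows how the code-4 rate-limit error can be told apart from a normal share-count payload and retried later, since the error is flagged is_transient. The article URL and the retry delay here are placeholders.

```python
import time
import requests

GRAPH_URL = "https://graph.facebook.com/v2.0/"
ARTICLE_URL = "https://www.example.com/article/the-name-of-the-article/"  # placeholder

def fetch_share_counts(url, retries=3, delay=60):
    """Call the Graph API for a URL object, retrying on the transient code-4 limit error."""
    for attempt in range(retries):
        payload = requests.get(GRAPH_URL, params={"id": url}).json()
        error = payload.get("error")
        if error and error.get("code") == 4:
            # "(#4) Application request limit reached" - marked is_transient, so wait and retry
            time.sleep(delay)
            continue
        return payload
    raise RuntimeError("Graph API request limit still reached after retries")

print(fetch_share_counts(ARTICLE_URL).get("share", {}).get("share_count"))
```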
Following the instructions on the page Working with users, groups, and items—ArcGIS REST API: Users, groups, and content | ArcGIS for Developers and the Add Item documentation, I was able to build a POST request in Postman to add a new item for the user.
After getting the token, when I try the POST request to add the web map, I get this error:
{"error":{"code":403,"messageCode":"GWM_0003","message":"You do not have permissions to access this resource or perform this operation.","details":[]}}
This is the JSON that contains some simple Web Map data:
{
"operationalLayers": [],
"baseMap": {
"baseMapLayers": [
{
"id": "defaultBasemap",
"layerType": "ArcGISTiledMapServiceLayer",
"url": "https://services.arcgisonline.com/ArcGIS/rest/services/World_Topo_Map/MapServer",
"visibility": true,
"opacity": 1,
"title": "Topographic"
}
],
"title": "Topographic"
},
"spatialReference": {
"wkid": 102100,
"latestWkid": 3857
},
"authoringApp": "WebMapViewer",
"authoringAppVersion": "5.4",
"version": "2.11"
}
I was using the wrong access token.
I was using the access token I had for the app I was testing instead of the user's access token, which I had to get with OAuth2.
I'm leaving this here for future newbies.
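For future reference, a minimal Python sketch of the same Add Item call, assuming ArcGIS Online's standard addItem endpoint and a user access token obtained via OAuth2; the portal URL, username, token, and title are placeholders.

```python
import json
import requests

ORG_URL = "https://www.arcgis.com"   # or your organisation's portal URL
USERNAME = "your_username"           # placeholder
USER_TOKEN = "user_access_token"     # the OAuth2 *user* token, not the app token

# The Web Map definition shown above.
web_map = {
    "operationalLayers": [],
    "baseMap": {
        "baseMapLayers": [{
            "id": "defaultBasemap",
            "layerType": "ArcGISTiledMapServiceLayer",
            "url": "https://services.arcgisonline.com/ArcGIS/rest/services/World_Topo_Map/MapServer",
            "visibility": True,
            "opacity": 1,
            "title": "Topographic",
        }],
        "title": "Topographic",
    },
    "spatialReference": {"wkid": 102100, "latestWkid": 3857},
    "authoringApp": "WebMapViewer",
    "authoringAppVersion": "5.4",
    "version": "2.11",
}

resp = requests.post(
    f"{ORG_URL}/sharing/rest/content/users/{USERNAME}/addItem",
    data={
        "f": "json",
        "token": USER_TOKEN,
        "type": "Web Map",
        "title": "My Web Map",
        "text": json.dumps(web_map),  # the web map definition goes in the 'text' parameter
    },
)
print(resp.json())  # expect {"success": true, "id": "..."} rather than the 403 error
```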
We want to create a screen on multiple clients that shows the "5 best-selling products", "5 recently added products", and "5 products with great offers". All of these would be shown as carousels.
We want to create RESTful APIs for these. We have created the following APIs:
/api/bestsellingproduct/
/api/recentlyaddedproduct/
/api/greatofferproduct/
Currently, every client (desktop, mobile, Android, iOS) has hard-coded these URIs. I am worried that if we change these URLs tomorrow it would be cumbersome, and REST also suggests that "A REST client enters a REST application through a simple fixed URL." (Ref: https://en.wikipedia.org/wiki/HATEOAS)
Can someone suggest how I can ensure that all clients enter the application through a simple fixed URL in this case?
In HATEOAS, URIs are discoverable (and not documented) so that they can be changed. That is, unless they are the very entry points into your system (Cool URIs, the only ones that may be hard-coded by clients), and you shouldn't have too many of those if you want the ability to evolve the rest of your system's URI structure in the future. This is in fact one of the most useful features of REST.
For the remaining non-Cool URIs, they can be changed over time, and your API documentation should spell out the fact that they should be discovered at runtime through hypermedia traversal.
Looking at the Richardson Maturity Model (level 3), this is where links come into play. For example, from the top level, say /api/version(/1), you would discover there's a link to the groups. Here's how this could look in a tool like HAL Browser:
Root:
{
"_links": {
"self": {
"href": "/api/root"
},
"api:bestsellingproduct": {
"href": "http://apiname:port/api/bestsellingproduct"
},
"api:recentlyaddedproduct": {
"href": "http://apiname:port/api/recentlyaddedproduct"
},
"api:greatofferproduct": {
"href": "http://apiname:port/api/greatofferproduct")
}
}
}
The advantage here is that the client only needs to know the relationship (link) name (besides, obviously, the resource structure/properties), while the server is mostly free to alter the relationship (and resource) URL.
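As a sketch of what this buys the client, here is how a Python client might enter through the single fixed root URL and follow the link by its relation name instead of hard-coding /api/bestsellingproduct; the host and relation names are simply the ones from the HAL example above.

```python
import requests

ROOT_URL = "http://apiname:port/api/root"  # the only URL the client hard-codes

# 1. Enter through the fixed entry point and read the hypermedia links.
root = requests.get(ROOT_URL).json()
links = root["_links"]

# 2. Follow the relation by name; the server is free to move the URL later.
best_selling_url = links["api:bestsellingproduct"]["href"]
best_selling = requests.get(best_selling_url).json()

print(best_selling)
```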
You could even embed them so they are returned in the same root API call:
{
"_embedded": {
"bestsellingproduct": [
{
"id": "1",
"name": "prod test"
},
{
"id": "2",
"name": "prod test 2"
}
],
"recentlyaddedproduct": [
{
"id": "3",
"name": "prod test 3"
},
{
"id": "5",
"name": "prod test 5"
}
]
}
}
I'm having difficulty testing my Alexa skill using the Service Simulator. If I set the appId, the skill doesn't work. Here is the relevant code:
'use strict';
const Alexa = require('alexa-sdk');
var APP_ID = "amzn1.ask.skill.[my skill ID]";
exports.handler = function(event, context, callback) {
var alexa = Alexa.handler(event, context);
alexa.appId = APP_ID;
alexa.registerHandlers(handlers);
alexa.execute();
};
When I run this code in the service simulator, I get the response "The remote endpoint could not be called, or the response it returned was invalid." and error messages in the CloudWatch logs:
The applicationIds don't match: applicationId and amzn1.ask.skill.[my skill id]
"errorMessage": "Invalid ApplicationId: amzn1.ask.skill.[my skill id]"
If I comment out setting the appId
//alexa.appId = APP_ID
the simulator appears to return a valid response, but I see this warning in the logs:
"Warning: Application ID is not set."
Here is the Lambda Request sent by the simulator:
{
"session": {
"sessionId": "SessionId.bb263d3e-2018-4aab-a0df-f945b3a25bf9",
"application": {
"applicationId": "amzn1.ask.skill.[my skill ID]"
},
"attributes": {},
"user": {
"userId": "amzn1.ask.account.[accountID]"
},
"new": true
},
"request": {
"type": "LaunchRequest",
"requestId": "EdwRequestId.d8b56c7f-63ea-48e8-8816-9b7c036d5816",
"locale": "en-US",
"timestamp": "2017-07-12T12:06:11Z"
},
"version": "1.0"
}
Some online examples suggest that the property should be APP_ID rather than appId:
alexa.APP_ID = APP_ID;
but this doesn't appear to be correct. According to the alexa-sdk source code (and from trying it anyway), the property needs to be appId, as I implemented it.
It looks like the problem is more related to the JSON Lambda request created by the Amazon simulator. To be clear, this is the simulator on the Amazon Alexa developer portal, not the test-event interface for the AWS Lambda function.
The odd thing is, if I cut and paste the Lambda request from the Amazon simulator and run it from the AWS test interface, it works fine.
I also had this problem over the last two days. I believe it's a problem on their end. I saw this on the Amazon forum:
Amazon changed something over the weekend which affects the JSON request received by Lambda from the simulator and breaks verification. Here are two threads regarding this, which include workarounds to allow it to work:
https://forums.developer.amazon.com/questions/78391/application-id-verification-issue-with-nodejs-and.html
https://forums.developer.amazon.com/questions/78393/my-alexa-skill-is-not-returning-a-lambda-response.html
So far there has been no update from Amazon, or even an acknowledgment of the issue.
--- GadgetChannel
I am new to api.ai. I want to send data to a web server, receive a response, and then give it to the users. From the documentation I have read, I understood that I have to use a webhook, but I am not sure how api.ai will send and receive the data.
Can the webhook be developed in any language?
The webhook is a web service that you implement in any language and on any platform, with an HTTP (it must be HTTPS for Google Home) and JSON interface, that fulfills (in their lingo) a user intent.
API.AI matches a user utterance to an intent (which in turn suggests entity values and a response) and passes these in the call to your web service. You do whatever processing you need, your domain logic, and then return a speech response for the user and, optionally, some API.AI contexts.
You can read more about it (and about slot filling fulfillment which is a little different) here.
You can visualize a webhook as a block into which the request data comes in JSON format, somewhat like this:
{
"id": "7aef9329-4a32-4d59-b661-8bf380a0f35b",
"timestamp": "2017-06-07T05:36:12.641Z",
"lang": "en",
"result": {
"source": "agent",
"resolvedQuery": "hi",
"action": "order.pizza",
"actionIncomplete": true,
"parameters": {
"address": "",
"crust": "",
"sauce": "",
"size": "",
"time": "",
"topping": "",
"type": ""
}
}
}
and another JSON document is returned in response, according to the prescribed settings.
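As a rough sketch of that round trip, assuming the API.AI v1 request/response format shown above and using Flask as an arbitrary choice of framework, the webhook just reads that JSON and returns a JSON reply of its own:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json(force=True)

    # Pull the matched action and parameters out of the request shown above.
    result = req.get("result", {})
    action = result.get("action")       # e.g. "order.pizza"
    params = result.get("parameters", {})

    # Your domain logic goes here; this just echoes what was understood.
    speech = f"Got action {action} with size '{params.get('size', '')}'"

    # API.AI v1 style response: 'speech' is spoken, 'displayText' is shown.
    return jsonify({
        "speech": speech,
        "displayText": speech,
        "source": "example-webhook",
    })

if __name__ == "__main__":
    # In production this must be served over HTTPS (required for Google Home).
    app.run(port=5000)
```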