Difference in word confidence in IBM Watson Speech to text - ibm-cloud

I am using the Node SDK to call the IBM Watson speech-to-text service. After sending an audio sample and receiving a response, the confidence values look inconsistent:
{
"results": [
{
"word_alternatives": [
{
"start_time": 3.31,
"alternatives": [
{
"confidence": 0.7563,
"word": "you"
},
{
"confidence": 0.0254,
"word": "look"
},
{
"confidence": 0.0142,
"word": "Lou"
},
{
"confidence": 0.0118,
"word": "we"
}
],
"end_time": 3.43
},
...
and
...
],
"alternatives": [
{
"word_confidence": [
[
"you",
0.36485132893469713
],
...
and I am requesting recognition with this config:
var params = {
  audio: fs.createReadStream(req.file.path),
  content_type: 'audio/wav',
  interim_results: false,            // only return final results
  word_confidence: true,             // per-word confidence in "alternatives"
  timestamps: true,                  // start/end time for each word
  max_alternatives: 3,               // up to 3 transcript alternatives
  continuous: true,                  // don't stop at the first pause
  word_alternatives_threshold: 0.01, // report word alternatives above this confidence
  smart_formatting: true             // format dates, numbers, etc.
};
Notice how the confidence for the word "you" differs between the two places. Is one of these numbers measuring something different? What is going on here?

John, the confidence values in "word_alternatives" are derived from confusion networks and are computed at the word level, while the confidence values in the list of "alternatives" are computed over lattices, at the sentence level.
Confusion networks are derived from lattices but represent the hypothesis space differently, which explains why confidence values from one or the other can differ.
In this case the sentence contains only one word, which is why the difference is so visible.
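To see both numbers side by side, here is a minimal Node sketch (assuming a response object shaped like the sample above) that prints the word-level and sentence-level confidences next to each other:
// Minimal sketch, assuming `response` is shaped like the sample above.
response.results.forEach(function (result) {
  // Word-level confidences, derived from confusion networks:
  result.word_alternatives.forEach(function (wa) {
    var best = wa.alternatives[0]; // highest-confidence hypothesis in the sample
    console.log('word-level: "' + best.word + '" -> ' + best.confidence);
  });
  // Sentence-level word confidences, computed over lattices:
  result.alternatives[0].word_confidence.forEach(function (pair) {
    console.log('sentence-level: "' + pair[0] + '" -> ' + pair[1]);
  });
});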

Related

How to measure per user bandwidth usage on google cloud storage?

We want to charge users based on the amount of traffic their data generates; specifically, the downstream bandwidth their data consumes.
I have exported the Google Cloud Storage access logs. From the logs, I can count the number of times a file is accessed (filesize * count gives the bandwidth usage).
The problem is that this doesn't work well with cached content: my calculated value is much higher than the actual usage.
I went with this method assuming our traffic would be new and wouldn't hit the cache, so the difference wouldn't matter. In reality, though, it turns out to be a real problem.
This is a common use case, and I think there should be a better way to solve it with Google Cloud Storage.
{
"insertId": "-tohip8e1vmvw",
"logName": "projects/bucket/logs/cloudaudit.googleapis.com%2Fdata_access",
"protoPayload": {
"#type": "type.googleapis.com/google.cloud.audit.AuditLog",
"authenticationInfo": {
"principalEmail": "firebase-storage#system.gserviceaccount.com"
},
"authorizationInfo": [
{
"granted": true,
"permission": "storage.objects.get",
"resource": "projects/_/bucket/bucket.appspot.com/objects/users/2y7aPImLYeTsCt6X0dwNMlW9K5h1/somefile",
"resourceAttributes": {}
},
{
"granted": true,
"permission": "storage.objects.getIamPolicy",
"resource": "projects/_/bucket/bucket.appspot.com/objects/users/2y7aPImLYeTsCt6X0dwNMlW9K5h1/somefile",
"resourceAttributes": {}
}
],
"methodName": "storage.objects.get",
"requestMetadata": {
"destinationAttributes": {},
"requestAttributes": {
"auth": {},
"time": "2019-07-02T11:58:36.068Z"
}
},
"resourceLocation": {
"currentLocations": [
"eu"
]
},
"resourceName": "projects/_/bucket/bucket.appspot.com/objects/users/2y7aPImLYeTsCt6X0dwNMlW9K5h1/somefile",
"serviceName": "storage.googleapis.com",
"status": {}
},
"receiveTimestamp": "2019-07-02T11:58:36.412798307Z",
"resource": {
"labels": {
"bucket_name": "bucket.appspot.com",
"location": "eu",
"project_id": "project-id"
},
"type": "gcs_bucket"
},
"severity": "INFO",
"timestamp": "2019-07-02T11:58:36.062Z"
}
Above is one entry of the logs.
We are using a single bucket for now, but could also use multiple if it helps.
One possibility is to have a separate bucket for each user and get each bucket's bandwidth usage through the time series API.
The endpoint for this purpose is:
https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/list
The following parameters retrieve the bytes sent over one hour (any alignment period above 60s can be specified); summing the points gives the total bytes sent from the bucket:
{
"dataSets": [
{
"timeSeriesFilter": {
"filter": "metric.type=\"storage.googleapis.com/network/sent_bytes_count\" resource.type=\"gcs_bucket\" resource.label.\"project_id\"=\"<<<< project id here >>>>\" resource.label.\"bucket_name\"=\"<<<< bucket name here >>>>\"",
"perSeriesAligner": "ALIGN_SUM",
"crossSeriesReducer": "REDUCE_SUM",
"secondaryCrossSeriesReducer": "REDUCE_SUM",
"minAlignmentPeriod": "3600s",
"groupByFields": [
"resource.label.\"bucket_name\""
],
"unitOverride": "By"
},
"targetAxis": "Y1",
"plotType": "LINE",
"legendTemplate": "${resource.labels.bucket_name}"
}
],
"options": {
"mode": "COLOR"
},
"constantLines": [],
"timeshiftDuration": "0s",
"y1Axis": {
"label": "y1Axis",
"scale": "LINEAR"
}
}
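If you go the per-bucket route, the same query can be issued programmatically. Below is a minimal Node sketch, assuming the @google-cloud/monitoring client library; projectId and the time range are placeholders to fill in:
const monitoring = require('@google-cloud/monitoring');
const client = new monitoring.MetricServiceClient();

// Sum sent_bytes_count per bucket over hourly windows.
async function sentBytesPerBucket(projectId, startSeconds, endSeconds) {
  const [series] = await client.listTimeSeries({
    name: client.projectPath(projectId),
    filter: 'metric.type="storage.googleapis.com/network/sent_bytes_count" resource.type="gcs_bucket"',
    interval: {
      startTime: { seconds: startSeconds },
      endTime: { seconds: endSeconds }
    },
    aggregation: {
      alignmentPeriod: { seconds: 3600 },  // one-hour windows, as in the chart config
      perSeriesAligner: 'ALIGN_SUM',
      crossSeriesReducer: 'REDUCE_SUM',
      groupByFields: ['resource.label."bucket_name"']
    }
  });
  for (const s of series) {
    const bytes = s.points.reduce((sum, p) => sum + Number(p.value.int64Value), 0);
    console.log(s.resource.labels.bucket_name, bytes, 'bytes sent');
  }
}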

Cannot create new layer (featuretype) in GeoServer using REST API

So I have just spent two working days trying to figure this out. We are automating a rendering process for maps. All the data is stored in an SQL database, and my job is to write a "wrapper" so we can implement this in our in-house framework. I have managed all but one of the needed requests.
That request is POST featuretype, since this is the way to create a layer that can later be rendered.
I have all the requests saved in Postman for pre-testing on example data provided by GeoServer itself. I can't get a response with status code 201 and always get a 500 Internal Server Error. This status is described as a possible syntax error in the request body, but I actually just copied and pasted the example and used GeoServer-provided data.
This is the request: http://127.0.0.1:8080/geoserver/rest/workspaces/tiger/datastores/nyc/featuretypes
and its body:
{
"name": "poi",
"nativeName": "poi",
"namespace": {
"name": "tiger",
"href": "http://localhost:8080/geoserver/rest/namespaces/tiger.json"
},
"title": "Manhattan (NY) points of interest",
"abstract": "Points of interest in New York, New York (on Manhattan). One of the attributes contains the name of a file with a picture of the point of interest.",
"keywords": {
"string": [
"poi",
"Manhattan",
"DS_poi",
"points_of_interest",
"sampleKeyword\\#language=ab\\;",
"area of effect\\#language=bg\\;\\#vocabulary=technical\\;",
"Привет\\#language=ru\\;\\#vocabulary=friendly\\;"
]
},
"metadataLinks": {
"metadataLink": [
{
"type": "text/plain",
"metadataType": "FGDC",
"content": "www.google.com"
}
]
},
"dataLinks": {
"org.geoserver.catalog.impl.DataLinkInfoImpl": [
{
"type": "text/plain",
"content": "http://www.google.com"
}
]
},
"nativeCRS": "GEOGCS[\"WGS 84\", \n DATUM[\"World Geodetic System 1984\", \n SPHEROID[\"WGS 84\", 6378137.0, 298.257223563, AUTHORITY[\"EPSG\",\"7030\"]], \n AUTHORITY[\"EPSG\",\"6326\"]], \n PRIMEM[\"Greenwich\", 0.0, AUTHORITY[\"EPSG\",\"8901\"]], \n UNIT[\"degree\", 0.017453292519943295], \n AXIS[\"Geodetic longitude\", EAST], \n AXIS[\"Geodetic latitude\", NORTH], \n AUTHORITY[\"EPSG\",\"4326\"]]",
"srs": "EPSG:4326",
"nativeBoundingBox": {
"minx": -74.0118315772888,
"maxx": -74.00153046439813,
"miny": 40.70754683896324,
"maxy": 40.719885123828675,
"crs": "EPSG:4326"
},
"latLonBoundingBox": {
"minx": -74.0118315772888,
"maxx": -74.00857344353275,
"miny": 40.70754683896324,
"maxy": 40.711945649065406,
"crs": "EPSG:4326"
},
"projectionPolicy": "REPROJECT_TO_DECLARED",
"enabled": true,
"metadata": {
"entry": [
{
"#key": "kml.regionateStrategy",
"$": "external-sorting"
},
{
"#key": "kml.regionateFeatureLimit",
"$": "15"
},
{
"#key": "cacheAgeMax",
"$": "3000"
},
{
"#key": "cachingEnabled",
"$": "true"
},
{
"#key": "kml.regionateAttribute",
"$": "NAME"
},
{
"#key": "indexingEnabled",
"$": "false"
},
{
"#key": "dirName",
"$": "DS_poi_poi"
}
]
},
"store": {
"#class": "dataStore",
"name": "tiger:nyc",
"href": "http://localhost:8080/geoserver/rest/workspaces/tiger/datastores/nyc.json"
},
"cqlFilter": "INCLUDE",
"maxFeatures": 100,
"numDecimals": 6,
"responseSRS": {
"string": [
4326
]
},
"overridingServiceSRS": true,
"skipNumberMatched": true,
"circularArcPresent": true,
"linearizationTolerance": 10,
"attributes": {
"attribute": [
{
"name": "the_geom",
"minOccurs": 0,
"maxOccurs": 1,
"nillable": true,
"binding": "com.vividsolutions.jts.geom.Point"
},
{},
{},
{}
]
}
}
So it is the example case, and I can't get any useful response from the server. I get code 500 with the body "name" (the first item in the JSON). Similarly, I get the same code with the body "FeatureTypeInfo" when trying with an XML body (the first tag).
I have already tried the request against a fresh instance of GeoServer in Docker (with the port changed), still without success.
I checked that the datastore and workspace are available, and that the layer "poi" doesn't exist yet.
Here are also some logs of the request (similar for the XML body):
2018-08-03 07:35:02,198 ERROR [geoserver.rest] -
com.thoughtworks.xstream.mapper.CannotResolveClassException: name at
com.thoughtworks.xstream.mapper.DefaultMapper.realClass(DefaultMapper.java:79)
at .....
Does anyone know the solution to this and has gotten it working? I am using GeoServer 2.13.1.
So I was still looking for the answer, and using this post (https://gis.stackexchange.com/questions/12970/create-a-layer-in-geoserver-using-rest) I got to the right content to POST a featureType and hence create a layer in GeoServer.
The REST API documentation is off here.
Using the above link, I found out that when using JSON, a wrapper object is missing from the documented body. For the API to work, we need to send:
{
"featureType": {
"name": "...",
"nativeName": "...",
...
}
}
So the body doesn't start with the "name" attribute; everything is wrapped in a "featureType" object.
I didn't try it with XML, but I guess it would be similar.
Hope this helps someone out there struggling like I did.
Blaz is correct here: you need an outer "featureType" object and then an inner object with your config. So:
{
"featureType": {
"name": "layer",
"nativeName": "poi",
"your config": "stuff"
}
}
I find, though, that a POST request returns very little if any response, and it's not obvious whether the layer creation worked. But you can call http://IP:8080/geoserver/rest/layers.json to check whether your new layer is there.
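To illustrate the whole round trip, here is a minimal Node sketch (assuming Node 18+ for the built-in fetch and the default admin:geoserver credentials; the body is trimmed to the essentials):
// Minimal sketch: POST a featureType wrapped in "featureType", then
// list the layers to verify. Assumes default admin:geoserver credentials.
const base = 'http://127.0.0.1:8080/geoserver/rest';
const auth = 'Basic ' + Buffer.from('admin:geoserver').toString('base64');

async function createFeatureType() {
  // Note the outer "featureType" wrapper around the config.
  const res = await fetch(base + '/workspaces/tiger/datastores/nyc/featuretypes', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'Authorization': auth },
    body: JSON.stringify({
      featureType: { name: 'poi', nativeName: 'poi', srs: 'EPSG:4326' }
    })
  });
  console.log(res.status); // expect 201 Created

  // POST returns little or no body, so verify by listing the layers:
  const layers = await fetch(base + '/layers.json', {
    headers: { 'Authorization': auth }
  });
  console.log(JSON.stringify(await layers.json(), null, 2));
}

createFeatureType().catch(console.error);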
It cost me a lot of time to create FeatureTypes using the REST API. Using JSON like this really works:
{
"featureType": {
"name": "layer",
"nativeName": "poi",
"otherProperties...": "values..."
}
}
And use the JSON below to create a workspace:
{
"workspace": {
"name": "test_workspace"
}
}
The REST API documentation is out of date now. That's disappointing. Does anyone know how to get the latest REST API documentation?

Identify new words as entities in rasa nlu

I have been using Rasa NLU to classify intents and entities for my chatbot. Everything works as expected (with extensive training), but with entities it seems to predict the value based on the exact position and length of the word. This is fine for a scenario where the entities are limited, but when the bot needs to identify a word it hasn't been trained on yet (for example, a new name with a different length), it fails to detect it. Is there a way I can make Rasa identify entities based on the relative position of the word, or better yet, supply a domain-specific list of words for the entity to match against (like the phrase list in LUIS)?
{"q":"i want to buy a Casio SX56"}
{
"project": "default",
"entities": [
{
"extractor": "ner_crf",
"confidence": 0.7043648832678735,
"end": 26,
"value": "Casio SX56",
"entity": "watch",
"start": 16
}
],
"intent": {
"confidence": 0.8835646513829762,
"name": "buy_watch"
},
"text": "i want to buy a Casio SX56",
"model": "model_20180522-165141",
"intent_ranking": [
{
"confidence": 0.8835646513829762,
"name": "buy_watch"
},
{
"confidence": 0.07072182459497935,
"name": "greet"
}
]
}
But if Casio SX56 is replaced with Citizen M1, the entity is not extracted at all:
{"q":"i want to buy a Citizen M1"}
{
"project": "default",
"intent": {
"confidence": 0.8710909096729019,
"name": "buy_watch"
},
"text": "i want to buy a Citizen M1",
"model": "model_20180522-165141",
"intent_ranking": [
{
"confidence": 0.8710909096729019,
"name": "buy_watch"
},
{
"confidence": 0.07355588750895545,
"name": "greet"
}
]
}
Thank you!
Make sure you actually added training examples for each entity value before training rasa_nlu.
For successful entity extraction, we need at least two or more contextual training examples per entity value.
If it's not extracting properly, add an example like this to the rasa_nlu training data (JSON training format):
"text": "i want to buy a Citizen M1",
"model": "model_20180522-165141",
"intent_ranking": [
{
"confidence": 0.8710909096729019,
"name": "buy_watch"
},
{
"confidence": 0.07355588750895545,
"name": "greet"
}
]
Entity extraction with phrase matching does work in rasa_nlu; try it with the spacy_sklearn backend pipeline.
The feature I was looking for is a phrase matcher, which would allow me to add a list of possible entities to the training model. This way, if any new name pops up, we can simply add it to the phrase list and the model will be able to identify it in all possible utterances. This is still in development and should be added to master soon: https://github.com/RasaHQ/rasa_nlu/pull/822
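For reference, the lookup tables that rasa_nlu later shipped address exactly this use case: you attach a list of known entity values to the training data. A minimal sketch of the JSON training format (double-check the schema for your rasa_nlu version):
{
"rasa_nlu_data": {
"lookup_tables": [
{
"name": "watch",
"elements": ["Casio SX56", "Citizen M1"]
}
]
}
}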

IBM Watson. How to pass context from node to node?

I"m trying to string together multiple IBM Watson requests:
Request #1: Play music.
Watson responds with the following:
{
"intents": [
{
"intent": "turn_on",
"confidence": 0.9498470783233643
}
],
"entities": [
{
"entity": "appliance",
"location": [
5,
10
],
"value": "radio",
"confidence": 1
}
],
"input": {
"text": "play music"
},
"output": {
"text": [
"What kind of music would you like to hear?"
],
"nodes_visited": [
"node_1_1510258504338",
"node_2_1510258615227"
],
"log_messages": []
},
"context": {
"conversation_id": "79e93cac-12bb-40fa-ab69-88f56d0845e4",
"system": {
"dialog_stack": [
{
"dialog_node": "node_2_1510258615227"
}
],
"dialog_turn_counter": 1,
"dialog_request_counter": 1,
"_node_output_map": {
"node_2_1510258615227": [
0
]
}
}
}
}
Request #2: The patron would type rock.
My problem is that I'm getting an error message that states the following:
"log_messages": [
"No dialog node matched for the input at a root level. (and there is 1 more warning in the log)"
]
I'm pretty sure I have to pass a context into the second request, but I'm not sure what I need to include. Right now I'm only passing in the conversation_id. Is there something specific from the above response that I need to pass in? For example, I'm passing this:
{
"input": {
"text": "rock"
},
"context": {
"conversation_id": "79e93cac-12bb-40fa-ab69-88f56d0845e4"
}
}
You send back your whole context object. In this case it would be:
{
"input": {
"text": "rock"
},
"context": {
"conversation_id": "79e93cac-12bb-40fa-ab69-88f56d0845e4",
"system": {
"dialog_stack": [
{
"dialog_node": "node_2_1510258615227"
}
],
"dialog_turn_counter": 1,
"dialog_request_counter": 1,
"_node_output_map": {
"node_2_1510258615227": [
0
]
}
}
}
}
But there are SDKs that will make this easier for you.
https://github.com/watson-developer-cloud
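For example, with the Node SDK the round trip looks roughly like this (a minimal sketch, assuming the watson-developer-cloud package; credentials and workspace_id are placeholders):
// Minimal sketch, assuming the watson-developer-cloud Node SDK's ConversationV1.
var ConversationV1 = require('watson-developer-cloud/conversation/v1');

var conversation = new ConversationV1({
  username: 'YOUR_USERNAME',
  password: 'YOUR_PASSWORD',
  version_date: '2017-05-26'
});

function send(text, context, callback) {
  conversation.message({
    workspace_id: 'YOUR_WORKSPACE_ID',
    input: { text: text },
    context: context // echo back the entire context each turn
  }, callback);
}

// Turn 1: no context yet.
send('play music', {}, function (err, response1) {
  if (err) { throw err; }
  // Turn 2: pass the whole context object from the previous response.
  send('rock', response1.context, function (err, response2) {
    if (err) { throw err; }
    console.log(response2.output.text);
  });
});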
Is the node that actions the type of music people select a child of your 'turn_on' node [node_2_1510258615227]?
If so, as Simon demonstrates above, you also need to pass back the complete context packet as part of the API call. This tells Watson Conversation where in the dialog flow you were last. The conversation system is stateless, i.e. it does not store any state information about individual conversations, so by default it does not know where it is within a conversation. That is why you need to return the context element of the previous response: it lets Watson know where you were in the conversation flow.
Your error above says that Watson looked down the list of dialog nodes defined at your root level and could not find a matching condition, because your matching condition was within a child node.

Does the OData protocol provide a way to transform an array of objects to an array of raw values?

Is there a way to specify in an OData query that, instead of certain name/value pairs being returned, a raw array should be returned? For example, if I have an OData query that results in the following:
{
"#odata.context": "http://blah.org/MyService/$metadata#People",
"value": [
{
"Name": "Joe Smith",
"Age": 55,
"Employers": [
{
"Name": "Acme",
"StartDate": "1/1/1990"
},
{
"Name": "Enron",
"StartDate": "1/1/1995"
},
{
"Name": "Amazon",
"StartDate": "1/1/1999"
}
]
},
{
"Name": "Jane Doe",
"Age": 30,
"Employers": [
{
"Name": "Joe's Crab Shack",
"StartDate": "1/1/2007"
},
{
"Name": "TGI Fridays",
"StartDate": "1/1/2010"
}
]
}
]
}
Is there anything I can add to the query to instead get back:
{
"#odata.context": "http://blah.org/MyService/$metadata#People",
"value": [
{
"Name": "Joe Smith",
"Age": 55,
"Employers": [
[ "Acme", "1/1/1990" ],
[ "Enron", "1/1/1995" ],
[ "Amazon", "1/1/1999" ]
]
},
{
"Name": "Jane Doe",
"Age": 30,
"Employers": [
[ "Joe's Crab Shack", "1/1/2007" ],
[ "TGI Fridays", "1/1/2010" ]
]
}
]
}
While I could obviously do the transformation client-side, in my use case the field names are very large compared to the data, and I would rather not transmit all those names over the wire nor spend the CPU cycles on the client doing the transformation. Before I come up with my own custom parameter to indicate the desired format, I wanted to check whether there is already a standardized way to do so.
OData provides several options to control the amount of data and metadata included in the response.
In OData v4, you can add odata.metadata=minimal to the Accept header parameters. This is the default behaviour, but even with it the field names are still included in the response, and for good reason.
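For illustration, the preference goes in the Accept header like this (odata.metadata=none strips the control metadata as well, but property names always remain):
GET http://blah.org/MyService/People HTTP/1.1
Accept: application/json;odata.metadata=minimal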
I can see why you would want to send only the values without the field names, but keep in mind that this changes the semantic meaning of the response structure and makes it less intuitive to deal with as a JSON record on the client side.
So to answer your question: no, there is no standardized way to do this.
There are other options to minimize the response size:
You can use the $value option to get the raw value of a single property. Check this example:
services.odata.org/OData/OData.svc/Categories(1)/Products(1)/Supplier/Address/City/$value
You can also use the $select option to cherry-pick only the fields you need, by selecting a subset of properties to include in the response.
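For example, assuming Employers can be addressed in $select/$expand on this (hypothetical) service, a request like the following returns only the named properties, though still as name/value pairs:
GET http://blah.org/MyService/People?$select=Name,Age&$expand=Employers($select=Name,StartDate)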