Bluemix AlchemyLanguage API TextGetRankedNamedEntities text limit? - ibm-cloud

The AlchemyAPI entities call (TextGetRankedNamedEntities) seems to have a text limit of around 7,500 characters. Is this a documented limitation or a defect?

I just sent a text document with more than 40,000 characters successfully, without any issues. I've posted the API notes, the curl command, and the response I got below.
CURL Command:
curl -X POST \
-d "apikey=$API_KEY" \
-d "outputMode=json" \
--data-urlencode text@testing.txt \
"https://gateway-a.watsonplatform.net/calls/text/TextGetRankedNamedEntities"
Response:
{
  "status": "OK",
  "usage": "By accessing AlchemyAPI or using information generated by AlchemyAPI, you are agreeing to be bound by the AlchemyAPI Terms of Use: http://www.alchemyapi.com/company/terms.html",
  "url": "",
  "language": "english",
  "entities": [
    {
      "type": "Company",
      "relevance": "0.833922",
      "count": "31",
      "text": "TextGetRankedNamedEntities"
    },
    {
      "type": "Quantity",
      "relevance": "0.833922",
      "count": "31",
      "text": "50 kilobytes"
    }
  ]
}
API Notes:
Calls to TextGetRankedNamedEntities should be made using HTTP POST.
HTTP POST calls should include the Content-Type header: application/x-www-form-urlencoded
Posted text documents can be a maximum of 50 kilobytes. Larger documents will result in a "content-exceeds-size-limit" error response.
Language detection is performed on the retrieved document before attempting named entity extraction. A minimum of 15 characters of text must exist within the requested HTTP document to perform language detection.
Documents containing less than 15 characters of text are assumed to be English-language content.
Disambiguation of detected entities is enabled by default. Disambiguation information will be included for each entity that is successfully resolved.
Entity extraction is currently supported for all languages listed on the language support page. Submissions in unsupported languages will be rejected and an error response returned.
Enabling entity-level sentiment analysis results in one additional transaction utilized against your daily API limit. Entity-level sentiment analysis is currently provided for both English and German-language content.
Disambiguation and quotations extraction are currently available for English-language content only. Support for other languages is in development.
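For reference, the entity-level sentiment note above maps to one extra form field on the same call. This is a minimal sketch reusing the apikey and testing.txt setup from the command above; the sentiment=1 flag name is taken from the AlchemyAPI entity docs and should be treated as an assumption:
# Same call as above, with entity-level sentiment enabled (costs one extra transaction)
curl -X POST \
-d "apikey=$API_KEY" \
-d "outputMode=json" \
-d "sentiment=1" \
--data-urlencode text@testing.txt \
"https://gateway-a.watsonplatform.net/calls/text/TextGetRankedNamedEntities"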

Related

Resolving 400 error when creating new work item on DevOps API 4.1

I have created a new custom field in my DevOps work item type, and I can see the new field via the API using _apis/wit/fields/Custom.fieldname. However, when I POST a new work item using the API I get a 400 Bad Request.
I'm using version 4.1 of the DevOps API, and my array of operations contains a mixture of quoted values and this numeric entry.
Can anyone provide me with an example JSON array that should be valid, please?
A 400 Bad Request usually means that your request body either contains invalid keys or has invalid syntax.
I built a demo following Create Work Item to test whether a Decimal field type causes any problems:
POST https://dev.azure.com/{organization}/{project}/_apis/wit/workitems/${type}?api-version=4.1
Request body:
[
  {
    "op": "add",
    "path": "/fields/System.Title",
    "from": null,
    "value": "Sample123"
  },
  {
    "op": "add",
    "path": "/fields/Custom.MyField",
    "value": 0.5
  }
]
This works well on my side, so the 400 problem is most likely caused by some other part of your request body.
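For completeness, here is a rough sketch of the same request sent with curl; the organization/project placeholders, the Task work item type, the patch.json file holding the array above, and the $PAT personal access token are all assumptions:
# Create the work item from the JSON patch document above (saved as patch.json)
curl -X POST \
-u ":$PAT" \
-H "Content-Type: application/json-patch+json" \
-d @patch.json \
"https://dev.azure.com/{organization}/{project}/_apis/wit/workitems/\$Task?api-version=4.1"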

Azure remaining credit value from API

Whenever I log on to the Azure web portal, I get a notification saying $xx.xx remaining credit.
I would like to understand how I can retrieve this value using the Azure API, az cli, or PowerShell.
I have tried the following, as advised by support, but unfortunately without luck:
accesstoken=$(curl -s --header "accept: application/json" --request POST \
"https://login.windows.net/$TennantID/oauth2/token" \
--data-urlencode "resource=https://management.core.windows.net/" \
--data-urlencode "client_id=$ClientID" \
--data-urlencode "grant_type=client_credentials" \
--data-urlencode "client_secret=$ClientSecret" | jq -r '.access_token')
subscriptionURI="https://management.azure.com/subscriptions/$SubscriptionID?api-version=2016-09-01"
curl -s --header "authorization: Bearer $accesstoken" --request GET $subscriptionURI | jq .
The spendingLimit field in the response just says "On":
{
"authorizationSource": "RoleBased",
"subscriptionPolicies": {
"spendingLimit": "On",
redacted (too long)
When you see "$xx credit remaining", it means your account has spending limit. There are many offerings which use spending limit mode. For example, you are given Azure Pass, or Microsoft Azure for Students Starter. Here is the list for reference (https://azure.microsoft.com/en-us/support/legal/offer-details/).
In fact, Azure provides some REST APIs to let you work with Billing and Consumption but the API is limited to spending limit offerings. Advanced billing and consumption REST API are only fully supported for CSP (Cloud Service Provider) partner and Enterprise Agreement customers. That's said, with your current subscription, you cannot retrieve the balance summary (e.g https://learn.microsoft.com/en-us/rest/api/billing/enterprise/billing-enterprise-api-balance-summary).
However, there is a workaround to get usage details then perform SUM. To do so, first you need to retrieve billing period name
https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Billing/billingPeriods?api-version=2017-04-24-preview
Here is the sample response of my billing periods:
"value": [
{
"id": "/subscriptions/2dd8cb59-ed12-4755-a2bc-356c212fbafc/providers/Microsoft.Billing/billingPeriods/201805-1",
"type": "Microsoft.Billing/billingPeriods",
"name": "201805-1",
"properties": {
"billingPeriodStartDate": "2018-02-06",
"billingPeriodEndDate": "2018-03-05"
}
},
{
"id": "/subscriptions/2dd8cb59-ed12-4755-a2bc-356c212fbafc/providers/Microsoft.Billing/billingPeriods/201804-1",
"type": "Microsoft.Billing/billingPeriods",
"name": "201804-1",
"properties": {
"billingPeriodStartDate": "2018-01-06",
"billingPeriodEndDate": "2018-02-05"
}
},
...then get the usage details with the billing period name:
https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Billing/billingPeriods/201805-1/providers/Microsoft.Consumption/usageDetails?api-version=2017-04-24-preview
{
  "id": "subscriptions/2dd8cb59-ed12-4755-a2bc-356c212fbafc/providers/Microsoft.Billing/billingPeriods/201805-1/providers/Microsoft.Consumption/usageDetails/cdd74390-374b-53cf-2260-fc8ef10a2be6",
  "name": "cdd74390-374b-53cf-2260-fc8ef10a2be6",
  "type": "Microsoft.Consumption/usageDetails",
  "tags": null,
  "properties": {
    "billingPeriodId": "subscriptions/2dd8cb59-ed12-4755-a2bc-356c212fbafc/providers/Microsoft.Billing/billingPeriods/201805-1",
    "usageStart": "2018-02-06T00:00:00Z",
    "usageEnd": "2018-02-07T00:00:00Z",
    "instanceId": "/subscriptions/2dd8cb59-ed12-4755-a2bc-356c212fbafc/resourceGroups/securitydata/providers/Microsoft.Storage/storageAccounts/2d1993southeastasia",
    "instanceName": "2d1993southeastasia",
    "meterId": "c1635534-1c1d-4fc4-b090-88fc2672ef87",
    "usageQuantity": 0.002976,
    "pretaxCost": 7.1424E-05,
    "currency": "USD",
    "isEstimated": false,
    "subscriptionGuid": "2dd8cb59-ed12-4755-a2bc-356c212fbafc",
    "meterDetails": null
  }
},
This returns a list of each resource's usage and cost in JSON format, and you then need to sum all of the pretaxCost values. Doing that in PowerShell is a bit involved because you need to initialize objects, deserialize the JSON, and do the math yourself; it is technically possible, but it helps to have some C# experience.
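As a rough illustration, since jq is already in use above, the sum can also be computed directly from the shell. This is a minimal sketch that assumes the same $accesstoken and $SubscriptionID variables as before and a single page of results (follow the nextLink property if the response is paged):
# Sum pretaxCost over one billing period (single page assumed)
usageURI="https://management.azure.com/subscriptions/$SubscriptionID/providers/Microsoft.Billing/billingPeriods/201805-1/providers/Microsoft.Consumption/usageDetails?api-version=2017-04-24-preview"
curl -s --header "authorization: Bearer $accesstoken" --request GET "$usageURI" | jq '[.value[].properties.pretaxCost] | add'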

Get Reactions using the Github api

GitHub issues have supported "reactions" for quite a while now (as described here: https://github.com/blog/2119-add-reactions-to-pull-requests-issues-and-comments).
I would like to retrieve that information using the GitHub API, but there doesn't seem to be anything like that when getting an issue, e.g.
api.github.com/repos/twbs/bootstrap/issues/19575
The reactions do not appear in that response, and I did not find another API call that could retrieve them. How can I get those "reactions"?
This is now possible, although it is still in a preview state (meaning you have to pass a custom Accept header in the request). Check out the GitHub API documentation page.
Example
$ curl -H 'Accept: application/vnd.github.squirrel-girl-preview' https://api.github.com/repos/twbs/bootstrap/issues/19575/reactions
[
  {
    "id": 257024,
    "user_id": 947110,
    "content": "+1"
  },
  ...
  {
    "id": 888868,
    "user_id": 1889800,
    "content": "+1"
  }
]
The endpoint looks like this:
GET /repos/:owner/:repo/issues/:number/reactions
You can even pass a content parameter (querystring) indicating what kind of reaction you want to retrieve.
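For instance, a minimal sketch that filters for a single reaction type (heart is one of the documented content values; adjust as needed):
$ curl -H 'Accept: application/vnd.github.squirrel-girl-preview' "https://api.github.com/repos/twbs/bootstrap/issues/19575/reactions?content=heart"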

RESTfully create or update a resource that references two independent resources

If I wanted to create (POST) a new resource linking two independent resources, what is the most proper way, with respect to HATEOAS and REST principles, to structure the entity of the request?
Any references in RFCs, W3C documents, Fielding's thesis, etc., about the proper way for a client to request that two independent resources be linked together would be most valuable. Or, if what I'm interested in is simply outside the scope of REST and HATEOAS, an explanation of why would also be great.
Hopefully my question above is clear. If not, here's a scenario and some background to ground the question.
Let's say I have two independent resources, /customer and /item, and a third resource, /order, intended to link the two.
If I'm representing these resources to the client in a HATEOAS-like way (say, with JSON-LD), a customer might (minimally) look like:
{
  "@id": "http://api.example.com/customer/1"
}
and similarly an item like:
{
  "@id": "http://api.example.com/item/1"
}
I'm more concerned about what scheme the entity of the POST request should have, rather than the URL I'm addressing the request to. Assuming I'm addressing the request to /order, would POSTing the following run afoul of HATEOAS and REST principles in any way?
{
  "customer": { "@id": "http://api.example.com/customer/1" },
  "item": { "@id": "http://api.example.com/item/1" }
}
To me, this seems intuitively OK. However, I can't find much or any discussion of the right way to link two independent resources with a POST. I discovered the LINK and UNLINK HTTP methods, but these seem inappropriate for a public API.
In REST the client does not build URIs, so this is wrong unless those resource identifiers, or at least a URI template for them, came from the service. It is okay to use the id numbers instead of the URIs, as long as you describe this in the response that contains the POST link.
An example from the hydra documentation:
{
  "@context": "http://www.w3.org/ns/hydra/context.jsonld",
  "@id": "http://api.example.com/doc/#comments",
  "@type": "Link",
  "title": "Comments",
  "description": "A link to comments with an operation to create a new comment.",
  "supportedOperation": [
    {
      "@type": "CreateResourceOperation",
      "title": "Creates a new comment",
      "method": "POST",
      "expects": "http://api.example.com/doc/#Comment",
      "returns": "http://api.example.com/doc/#Comment",
      "possibleStatus": [
        ... Statuses that should be expected and handled properly ...
      ]
    }
  ]
}
The "http://api.example.com/doc/#Comment" contains the property descriptions.
{
  "@context": "http://www.w3.org/ns/hydra/context.jsonld",
  "@id": "http://api.example.com/doc/#Comment",
  "@type": "Class",
  "title": "The name of the class",
  "description": "A short description of the class.",
  "supportedProperty": [
    ... Properties known to be supported by the class ...
    {
      "@type": "SupportedProperty",
      "property": "#property", // The property
      "required": true, // Is the property required in a request to be valid?
      "readable": false, // Can the client retrieve the property's value?
      "writeable": true // Can the client change the property's value?
    }
  ]
}
A supported property can have an rdfs:range, which describes the value constraints. As far as I can tell this had not yet been added to the Hydra vocabulary (as of 2015-10-22), but I don't have time to follow the project; I think you can still use rdfs:range instead of waiting for a Hydra range.
So in your case you could add an item property with a range of http://api.example.com/doc/#Item, and so on. I assume you could also add links to the alternatives, something like http://api.example.com/items/, so you could generate a select input box. Be aware that this technology is not stable yet.
So you can send a simple JSON POST body such as {item: {id: 1}, customer: {id: 1}}, which you generate based on the POST link. The RDF is for the client, not for the server: the server already knows the data structure it requires, so it does not need RDF. You don't need a dictionary to understand yourself...
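To make that concrete, a minimal sketch of such a request could look like the following; the /order endpoint and the customer/item field names are assumptions here and would come from the service's POST link in practice:
curl -X POST "http://api.example.com/order" \
-H "Content-Type: application/json" \
-d '{"customer": {"id": 1}, "item": {"id": 1}}'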

Marketo "Import Lead" fails with error 610 Requested resource not found

I'm trying to batch update a bunch of existing records through Marketo's REST API. According to the documentation, the Import Lead function seems to be ideal for this.
In short, I'm getting the error "610 Resource Not Found" upon using the curl sample from the documentation. Here are some steps I've taken.
Fetching the auth_token is not a problem:
$ curl "https://<identity_path>/identity/oauth/token?
grant_type=client_credentials&client_id=<my_client_id>
&client_secret=<my_client_secret>"
Proving the token is valid, fetching a single lead isn't a problem either:
# Fetch the record - outputs just fine
$ curl "https://<rest_path>/rest/v1/lead/1.json?access_token=<access_token>"
# output:
{
  "requestId": "ab9d#12345abc45",
  "result": [
    {
      "id": 1,
      "updatedAt": "2014-09-18T13:00:00+0000",
      "lastName": "Potter",
      "email": "harry@hogwartz.co.uk",
      "createdAt": "2014-09-18T12:00:00+0000",
      "firstName": "Harry"
    }
  ],
  "success": true
}
Now here's the pain, when I try to upload a CSV file using the Import Lead function. Like so:
# "Import Lead" function
$ curl -i -F format=csv -F file=@test.csv -F access_token=<access_token> \
"https://<rest_path>/rest/bulk/v1/leads.json"
# results in the following error
{
  "requestId": "f2b6#14888a7385a",
  "success": false,
  "errors": [
    {
      "code": "610",
      "message": "Requested resource not found"
    }
  ]
}
The error codes documentation only states Requested resource not found, nothing else. So my question is: what is causing the 610 error code - and how can I fix it?
Further steps I've tried, with no success:
Placing the access_token as a URL parameter (i.e. appending '?access_token=xxx' to the URL), with no effect.
Stripping the CSV (yes, it's comma separated) down to a bare minimum (e.g. only the fields 'id' and 'lastName')
Looked at the question Marketo API and Python, Post request failing
Verified that the CSV doesn't have some funky line endings
I have no idea whether there are specific requirements for the CSV file, like column order, though...
Any tips or suggestions?
Error code 610 can represent something akin to a '404' for URLs under the REST endpoint, i.e. your rest_path. I'm guessing this is why you are getting it: Marketo's docs show REST paths as starting with '/rest', yet the REST endpoint itself already ends with '/rest', so if you follow their directions literally you get a URL like xxxx.mktorest.com/rest/rest/v1/lead/..., i.e. with '/rest' twice. This is not correct. Your URL must contain only one '/rest'.
I went through the same trouble; I just want to share some points that helped resolve my problem.
Bulk API endpoints are not prefixed with ‘/rest’ like other endpoints.
Bulk Import uses the same permissions model as the Marketo REST API and does not require any additional special permissions in order to use, though specific permissions are required for each set of endpoints.
As @Ethan Herdrick suggested, the endpoints in the documentation are sometimes prefixed with an extra /rest, so make sure to remove that.
If you're a beginner and need step-by-step instructions to set up permissions for Marketo REST API: Quick Start Guide for Marketo REST API
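Putting those points together, the corrected version of the bulk import call from the question would look roughly like this; <host> is an assumption standing in for your Marketo instance host (e.g. xxxx.mktorest.com) with no /rest suffix appended:
# Bulk import without the '/rest' prefix (per the notes above)
$ curl -i -F format=csv -F file=@test.csv -F access_token=<access_token> \
"https://<host>/bulk/v1/leads.json"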