We are working on a rewrite of our API documentation and would like to add custom content to our generated JSDoc output. This custom content could, for instance, be runnable snippets of code, which is a feature we currently have in our manually written docs (based on Jekyll injecting dynamic content). Is there a way of achieving this directly today, using the templating system of JSDoc and some custom code, or alternatively indirectly by post-processing the generated HTML and looking for some kind of markers? Other JSDoc generators that could handle this are also of interest.
Say we could have something like this:
/**
 * @param {String} foo
 * @runnableExample foo-example
 */
which would generate custom output for the @runnableExample tag, inserting the contents of a file called ./examples/foo-example.js. This is just a suggestion for a good user experience, not how it necessarily needs to work.
The JSDoc CLI has a useful -X flag which you can use to inspect (and process) the metadata from your annotations:
/**
* @return {number}
*/
function answer() {
return 42;
}
$ jsdoc -X tmp/example.js
[
{
"comment": "/**\n * #return {number}\n */",
"meta": {
"range": [
28,
62
],
"filename": "example.js",
"lineno": 4,
"columnno": 0,
"path": "/workspace/dev/tmp",
"code": {
"id": "astnode100000002",
"name": "answer",
"type": "FunctionDeclaration",
"paramnames": []
}
},
"returns": [
{
"type": {
"names": [
"number"
]
}
}
],
"name": "answer",
"longname": "answer",
"kind": "function",
"scope": "global",
"params": []
},
{
"kind": "package",
"longname": "package:undefined",
"files": [
"/workspace/dev/tmp/example.js"
]
}
]
But what about custom tags? The -X flag will process them too:
/**
* @answer 42
*/
function answer() {
return 42;
}
[
{
"comment": "/**\n * #answer 42\n */",
"meta": {
"range": [
22,
56
],
"filename": "example.js",
"lineno": 4,
"columnno": 0,
"path": "/workspace/dev/tmp",
"code": {
"id": "astnode100000002",
"name": "answer",
"type": "FunctionDeclaration",
"paramnames": []
}
},
"tags": [
{
"originalTitle": "answer",
"title": "answer",
"text": "42",
"value": "42"
}
],
"name": "answer",
"longname": "answer",
"kind": "function",
"scope": "global",
"params": []
},
{
"kind": "package",
"longname": "package:undefined",
"files": [
"/workspace/dev/tmp/example.js"
]
}
]
As you can see, the output is pure JSON, which makes it very easy to process with either JavaScript or any decent templating system. I abandoned JSDoc templates a long time ago; I use jsdoc -X and process the annotations myself.
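For example, here is a minimal, untested sketch of how you could post-process the -X output with Node, assuming the hypothetical @runnableExample tag and ./examples folder from the question (the script name is a placeholder too). Run it as: jsdoc -X yourfile.js | node process-docs.js
const fs = require('fs');

// Read the JSON doclets produced by jsdoc -X from stdin.
let input = '';
process.stdin.on('data', chunk => (input += chunk));
process.stdin.on('end', () => {
  const doclets = JSON.parse(input);
  for (const doclet of doclets) {
    for (const tag of doclet.tags || []) {
      // Custom tags end up in the "tags" array; originalTitle keeps the original casing.
      if (tag.originalTitle === 'runnableExample') {
        const code = fs.readFileSync(`./examples/${tag.value}.js`, 'utf8');
        // Emit whatever markup your runnable-snippet widget expects.
        console.log(`<div class="runnable-example" data-symbol="${doclet.longname}">`);
        console.log(`<pre><code>${escapeHtml(code)}</code></pre>`);
        console.log('</div>');
      }
    }
  }
});

function escapeHtml(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}
You could then merge the emitted snippets into your existing Jekyll pipeline, or feed them to whatever templating system you prefer.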
Recently I've been playing with JQ and EJS and fed the result to Slate to generate documentation websites: https://customcommander.github.io/project-blueprint/#project-blueprint-fake-api — This is the documentation website for a fake API.
What you want to achieve is definitely doable. Hopefully this is enough to get you started. If you want to know more: https://github.com/customcommander/project-blueprint
I am creating a language extension in VS Code for myself. Because it will be associated with different file types, I plan to make separate tmLanguage files for specific rules. According to this, I should be able to extend the scopeName to achieve that.
So I created something like this in my ./package.json:
{
"name": "tst",
"displayName": "Test Language",
"description": "A test for language extension",
"version": "0.0.1",
"engines": {
"vscode": "^1.34.0"
},
"contributes": {
"languages": [{
"id": "tst",
"aliases": ["Test", "tst"],
"extensions": [".tst",".type1",".type2"],
"configuration": "./language-configuration.json"
}],
"grammars": [{
"language": "tst",
"scopeName": "source.tst",
"path": "./syntaxes/tst.tmLanguage.json"
},
{
"scopeName": "source.tst.type1",
"path": "./syntaxes/type1.tmLanguage.json"
},
{
"scopeName": "source.tst.type2",
"path": "./syntaxes/type2.tmLanguage.json"
}]
}
}
Then I created the base rules in ./syntaxes/tst.tmLanguage.json, and both .type1 and .type2 files get this grammar applied.
{
"name": "Test",
"patterns": [
{
"match": "test",
"name": "constant.character"
}
],
"scopeName": "source.tst"
}
Afterwards I also made ./syntaxes/type1.tmLanguage.json look something like this:
{
"name": "type1",
"patterns": [
{
"match": "type1",
"name": "constant.language"
}
],
"scopeName": "source.tst.type1"
}
But none of the rules for .type1 files work. I expected both grammars to be active, so that test and type1 would both be recognized.
I checked the pre-installed cpp language extension in VS Code. It also uses scopeName values source.c and source.c.platform. I guess that is for a similar purpose?
Did I overlook something?
Thanks for the help.
If you want to use these scopes from different tmLanguage files in the main grammar, you have to explicitly include them:
{
"name": "Test",
"patterns": [
{
"match": "test",
"name": "constant.character"
},
{
"include": "source.tst.type1"
},
{
"include": "source.tst.type2"
}
],
"scopeName": "source.tst"
}
Regarding the built-in cpp extension and platform.tmLanguage.json - as far as I can tell, it's not actively being used by the c and cpp grammars. There's this comment in cpp/build/update-grammars.js:
// `source.c.platform` which is still included by other grammars
So that sounds more like a backwards-compatibility measure in case any third-party grammars still use it.
I am trying to render an Alexa Presentation Language document while Alexa is speaking. I tried with a Pager with several pages and the AutoPage command. The problem I am trying to solve is that the document is rendered when Alexa starts speaking, but the command only starts when the speech has finished, and I would like to see the three pages moving while she speaks.
I am using the RenderDocument and ExecuteCommands directives and the speak directive of responseBuilder.
The Document Template:
PagerDoc ->
{
"type": "APL",
"version": "1.0",
"theme": "dark",
"import": [],
"resources": [],
"styles": {},
"layouts": {},
"mainTemplate": {
"parameters": [
"datasource"
],
"item": [{
"type": "Container",
"items": [
{
"type": "Sequence",
"id": "pagerComponentId",
"scrollDirection": "vertical",
"numbered": true,
"width": "100vw",
"height": "100vh",
"alignItems": "center",
"justifyContent": "center",
"direction": "column",
"items": [
{
"type": "Image",
"source": "${datasource.app.properties.images.robot1}",
"position": "relative",
"width": "100vw",
"height": "100vh"
},
{
"type": "Image",
"source": "${datasource.app.properties.images.robot2}",
"position": "relative",
"width": "100vw",
"height": "100vh"
}
]
}
]
}
]
}
}
And the Directives:
var response = handlerInput.responseBuilder;
return response
.addDirective({
type : 'Alexa.Presentation.APL.RenderDocument',
token: 'pagerToken',
document : pagerDoc,
datasources : {
"app": {
"properties": {
"images": {
"robot1": "https://xxx/robot1.png",
"robot2": "https://xxx/robot2.png"
}
}
}
}
})
.addDirective({
type: 'Alexa.Presentation.APL.ExecuteCommands',
token: 'pagerToken',
commands: [
{
"type": "Parallel",
"commands": [
{
"type": "Scroll",
"componentId": "pagerComponentId",
"distance": 1
}
]
}
]
})
.speak(speechOutput)
.reprompt(repromptOutput)
.getResponse();
Could somebody tell me what I should do, and whether this is possible with Alexa?
Thanks a lot in advance and best regards,
Fernando
It's not possible yet. You will have to wait for the release of APL 1.1 (coming soon).
APL 1.1 will add onMount to the APL document, which should allow commands to be executed as soon as a document is loaded (e.g. before Alexa speaks).
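I have not been able to try it yet, but a rough, untested sketch of such a document (assuming the onMount syntax lands as announced, and reusing the componentId and datasource from the question) could look something like this:
{
  "type": "APL",
  "version": "1.1",
  "onMount": [
    {
      "type": "AutoPage",
      "componentId": "pagerComponentId",
      "duration": 3000
    }
  ],
  "mainTemplate": {
    "parameters": [
      "datasource"
    ],
    "items": [
      {
        "type": "Pager",
        "id": "pagerComponentId",
        "width": "100vw",
        "height": "100vh",
        "items": [
          {
            "type": "Image",
            "source": "${datasource.app.properties.images.robot1}",
            "width": "100vw",
            "height": "100vh"
          },
          {
            "type": "Image",
            "source": "${datasource.app.properties.images.robot2}",
            "width": "100vw",
            "height": "100vh"
          }
        ]
      }
    ]
  }
}
With onMount in the document you would no longer need the separate ExecuteCommands directive; the paging should start as soon as the document is rendered, alongside the speech.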
When calling the Magento 2 REST API to get the schema for working with products using:
..rest/all/schema?services=catalogProductRepositoryV1
The response back includes:
...
"paths": {
"/V1/products": {
"post": {
"tags": [
"catalogProductRepositoryV1"
],
"description": "Create product",
"operationId": "catalogProductRepositoryV1SavePost",
"parameters": [
{
"name": "$body",
"in": "body",
"schema": {
"required": [
"product"
],
"properties": {
"product": {
"$ref": "#/definitions/catalog-data-product-interface"
},
"saveOptions": {
"type": "boolean"
}
},
"type": "object"
}
}
],...
How do you go about getting the definition/schema for the "product" object that it expects during a POST call, i.e. the schema behind the following reference:
"properties": {
"product": {
"$ref": "#/definitions/catalog-data-product-interface"
},
Looks like it's only possible using the Swagger GUI. Essentially, replicate these steps, changing the search term to whatever you're after:
Go to: http://devdocs.magento.com/swagger/index.html
Ctrl + f >> search for whatever you're after. In the case of the above: catalogProductRepositoryV1
Expand the API interface by clicking it
Choose your REST Method
Under "Parameters" there will be a Model/Model Schema showing you what payload it expects.
Welcome to Swagger! It's great when you know how to use it!
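Alternatively, since the $ref points at #/definitions/catalog-data-product-interface inside the same Swagger document, you should be able to pull the definition straight out of the schema response. An untested Node sketch (your-magento-host is a placeholder, and Node 18+ is assumed for the global fetch):
// Fetch the full schema and resolve the $ref locally.
const url = 'https://your-magento-host/rest/all/schema?services=catalogProductRepositoryV1';

(async () => {
  const schema = await (await fetch(url)).json();
  // "#/definitions/catalog-data-product-interface" -> schema.definitions[...]
  const product = schema.definitions['catalog-data-product-interface'];
  console.log(JSON.stringify(product, null, 2));
})();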
I am trying to use Mapbox to calculate the duration between two locations, however the examples here are incomplete (at least for my limited experience). I would like to connect to this API using server-side Java, but I can't even get a basic example working in JavaScript, Python, or simply in the address bar of my browser.
I can get an example working in my browser using this url and substituting in my API key:
https://api.mapbox.com/geocoding/v5/mapbox.places/Chester.json?country=us&access_token=pk.my-token-value
However I can't get a similar example working with the distance API. The best I can manage is something like this:
https://api.mapbox.com/distances/v1/driving/[13.41894,52.50055],[14.10293,52.50055]&access_token=pk.my-token-value.
But I have no idea how to format my coordinates as I can't find a single example.
Has anyone been able to get this working? Ideally in Java, but client-side JavaScript or a valid URL would be a great start.
I should also add that I can't get the JavaScript or Python examples working, as they rely on external libraries that aren't referenced anywhere in the documentation!
Thanks in advance.
Looks like you can provide a list of two or more semicolon-separated coordinates:
https://api.mapbox.com/optimized-trips/v1/mapbox/driving/13.41894,52.50055;14.10293,52.50055?access_token=pk.your_token
returns:
{
"code":"Ok",
"waypoints":[
{"distance":9.0352511932471,"name":"","location":[13.418991,52.500625],"waypoint_index":0,"trips_index":0},
{"distance":91.0575241567836,"name":"Berliner Chaussee","location":[14.103096,52.499738],"waypoint_index":1,"trips_index":0}
],
"trips":
[
{"geometry":"}_m_Iu{{pA}ZuQ}Iad#}cAk^jr#etOdE_iGtTqoAxBkoGnOkiCjR_s#wJ_v#b#}aN|aBogRyVucBiEw_C~r#_eB`Fc`NtP_bAshBorHa#}dCkOe~AmPmrGlPlrGjOd~A`#|dCrhBnrHuP~aAaFb`N_s#~dBhEv_CxVtcBkbBbeRo#l`NzJ`z#mRpr#qOpjCwBpnGoT~lAeEdkGsr#jtOtp#dQ~UjTtDfZf]jS",
"legs":[
{"summary":"","weight":4198.3,"duration":3426.4,"steps":[],"distance":49487},
{"summary":"","weight":7577.8,"duration":3501.3,"steps":[],"distance":49479.7}
],
"weight_name":"routability",
"weight":11776.1,
"duration":6927.700000000001,
"distance":98966.7}
]
}
I don't know if the situation has improved since, but I just had to work through this myself and can give you this example:
curl "https://api.mapbox.com/directions-matrix/v1/mapbox/driving/9.51416,54.47004;13.5096410,50.0716190;6.8614070,48.5206360;14.1304840,51.0856060?sources=0&access_token={token}"`
This will return the following JSON with driving durations. I made use of the sources parameter, which is my search lng/lat; all other points are places in my database.
{
"code": "Ok",
"durations": [
[0.0, 29407.9, 34504.7, 24163.5]
],
"destinations": [{
"distance": 131.946157371,
"name": "",
"location": [9.514914, 54.468939]
}, {
"distance": 34.636975593,
"name": "",
"location": [13.509868, 50.071344]
}, {
"distance": 295.206928933,
"name": "",
"location": [6.863566, 48.52287]
}, {
"distance": 1186.975749670,
"name": "",
"location": [14.115694, 51.080408]
}],
"sources": [{
"distance": 131.946157371,
"name": "",
"location": [9.514914, 54.468939]
}]
}
Adding the annotations=distance parameter to the URL will return the distances instead of the durations, if you need that.
{
"code": "Ok",
"distances": [
[0.0, 738127.3, 902547.6, 616060.8] // distances in meters
],
"destinations": [{ // destinations including the source
"distance": 131.946157371, // result 0
"name": "",
"location": [9.514914, 54.468939]
}, {
"distance": 34.636975593, // result 1
"name": "",
"location": [13.509868, 50.071344]
}, {
"distance": 295.206928933, // result 2
"name": "",
"location": [6.863566, 48.52287]
}, {
"distance": 1186.975749670, // result 3
"name": "",
"location": [14.115694, 51.080408]
}],
"sources": [{ // source where we start from
"distance": 131.946157371,
"name": "",
"location": [9.514914, 54.468939]
}]
}
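If you need this from code rather than curl, here is a minimal, untested Node sketch of the same Matrix request (the coordinates are the ones from the original question and pk.your_token is a placeholder):
// Directions Matrix request: lng,lat pairs separated by semicolons.
const coords = '13.41894,52.50055;14.10293,52.50055';
const url = 'https://api.mapbox.com/directions-matrix/v1/mapbox/driving/' + coords +
            '?sources=0&access_token=pk.your_token';

(async () => {
  const res = await fetch(url); // Node 18+ has fetch built in
  const body = await res.json();
  if (body.code !== 'Ok') {
    throw new Error('Matrix request failed: ' + body.code);
  }
  // durations[0] holds travel times in seconds from source 0 to each destination.
  console.log(body.durations[0]);
})();
The same approach should translate directly to Java with any HTTP client, since it is just a GET request returning JSON.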
I am using the Swagger tool to document my Jersey-based REST API. (The SwaggerUI build I am using was downloaded in June 2014; I don't know whether this issue has been fixed in later versions, but I have made a lot of customizations to its code, so I don't have the option of downloading the latest version without investing a lot of time customizing it again.)
So far, all my transfer objects have had properties one level deep (no embedded POJOs). But now that I have added some REST paths that return more complex objects (two levels deep), I have found that SwaggerUI does not expand the JSON model schema when there are embedded objects.
Here is the important part of the swagger doc:
...
{
"path": "/user/combo",
"operations": [{
"method": "POST",
"summary": "Inserts a combo (user, address)",
"notes": "Will insert a new user and a address definition in a single step",
"type": "UserAndAddressWithIdSwaggerDto",
"nickname": "insertCombo",
"consumes": ["application/json"],
"parameters": [{
"name": "body",
"description": "New user and address combo",
"required": true,
"type": "UserAndAddressWithIdSwaggerDto",
"paramType": "body",
"allowMultiple": false
}],
"responseMessages": [{
"code": 200,
"message": "OK",
"responseModel": "UserAndAddressWithIdSwaggerDto"
}]
}]
}
...
"models": {
"UserAndAddressWithIdSwaggerDto": {
"id": "UserAndAddressWithIdSwaggerDto",
"description": "",
"required": ["user",
"address"],
"properties": {
"user": {
"$ref": "UserDto",
"description": "User"
},
"address": {
"$ref": "AddressDto",
"description": "Address"
}
}
},
"UserDto": {
"id": "UserDto",
"properties": {
"userId": {
"type": "integer",
"format": "int64"
},
"name": {
"type": "string"
},...
},
"AddressDto": {
"id": "AddressDto",
"properties": {
"addressId": {
"type": "integer",
"format": "int64"
},
"street": {
"type": "string"
},...
}
}
...
The embedded objects are User and Address; their models are being created correctly, as shown in the JSON response.
But when opening the SwaggerUI I can only see:
{
"user": "UserDto",
"address": "AddressDto"
}
But I should see something like:
{
"user": {
"userId": "integer",
"name": "string",...
},
"address": {
"addressId": "integer",
"street": "string",...
}
}
Something may be wrong in the code that expands the internal properties; the JavaScript console doesn't show any errors, so I assume this is a bug.
I found the solution: there is a line of code that needs to be modified to make it work properly.
In the swagger.js file there is a getSampleValue function with a conditional checking for undefined:
SwaggerModelProperty.prototype.getSampleValue = function(modelsToIgnore) {
var result;
if ((this.refModel != null) && (modelsToIgnore[this.refModel.name] === 'undefined'))
...
I updated the equality check to (removing quotes):
modelsToIgnore[this.refModel.name] === undefined
After that, SwaggerUI is able to show the embedded models.
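For reference, the conditional in getSampleValue then reads like this after the change:
if ((this.refModel != null) && (modelsToIgnore[this.refModel.name] === undefined))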