Is there a way to stop Autorest.Powershell from flattening response objects?

I have a response object in my swagger.json file that includes a nested object as one of its fields. When I use Autorest.Powershell to generate a client for this API, it flattens the nested object. So when the service returns the following response:
{
  "code": 200,
  "status": "OK",
  "data": {
    "FileName": "gameserver.zip",
    "AssetUploadUrl": "https://example.com"
  }
}
my Autorest.Powershell client returns a flattened object like this:
{
  "code": 200,
  "status": "OK",
  "dataFileName": "gameserver.zip",
  "dataAssetUploadUrl": "https://example.com"
}
Is there some sort of configuration setting I can use to disable this behavior?
Here are the relevant portions of my swagger.json file, if it helps:
"definitions": {
"GetAssetUploadUrlResponse": {
"type": "object",
"properties": {
"AssetUploadUrl": {
"description": "The asset's upload URL.",
"type": "string"
},
"FileName": {
"description": "The asset's file name to get the upload URL for.",
"type": "string"
}
},
"example": {
"FileName": "gameserver.zip",
"AssetUploadUrl": "https://example.com"
}
}
},
"responses": {
"GetAssetUploadUrlResponse": {
"description": "",
"schema": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"description": "The Http status code. If X-ReportErrorAsSuccess header is set to true, this will report the actual http error code."
},
"status": {
"type": "string",
"description": "The Http status code as a string."
},
"data": {
"$ref": "#/definitions/GetAssetUploadUrlResponse"
}
},
"example": {
"code": 200,
"status": "OK",
"data": {
"FileName": "gameserver.zip",
"AssetUploadUrl": "https://example.com"
}
}
}
}
}

There are several ways, none of which is really straightforward (as, I'm starting to believe, is the case with most things AutoRest-related; sorry, couldn't resist :-P ).
There are three semi-official ways. Semi-official here means they are based on public AutoRest mechanisms but are not themselves documented. Being semi-official, they might only work with certain versions of the AutoRest components, so here are the versions I used
(from autorest --info):
@autorest/core (3.0.6369)
@autorest/modelerfour (4.15.414)
@autorest/powershell (3.0.421)
Finally, here are the relevant parts of AutoRest's code base: the inline-properties plug-in and the configuration directive definition.
inlining-threshold setting
This setting controls the maximum number of properties an inner object can have for it to be considered eligible for inlining. You can set it either on the command line or in the "literate config" .md file.
```yaml
inlining-threshold: 0
```
In theory, setting this to 0 should prevent any inner object's properties from being inlined. However, the plug-in has a hard-coded exception: if the inner object sits in a property that is itself named properties, the threshold is ignored and the object is still flattened:
```yaml
definitions:
  SomeSchema:
    type: "object"
    properties:
      detail_info:   # <-- threshold honored
        $ref: "#/definitions/InfoSchema"
      properties:    # <-- always flattened because of its special name
        $ref: "#/definitions/OtherSchema"
```
no-inline directive
The PowerShell AutoRest plug-in also defines a custom directive that specifies that certain schemas should never be inlined. Using "literate config", it looks like this:
```yaml
directive:
  - no-inline:
      - OtherSchema
      - ThirdSchema
```
The advantage of this approach is that the no-inline directive overrides the "always inline a property named properties" exception mentioned above, so it can be used to work around that case.
The drawback is that every schema name has to be listed explicitly. (The directive seems like it should also accept a regular-expression name pattern, but I could not get no-inline: ".*" to work.)
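Applied to the spec in the question, a minimal literate-config sketch would be the following (assuming the generated model keeps the definition name GetAssetUploadUrlResponse; check the generated models if the name differs):
```yaml
directive:
  - no-inline:
      - GetAssetUploadUrlResponse
```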
Low-level transform
This approach disables inlining unconditionally, in all cases; however, it is coupled to the specific internal code model used by AutoRest. (In principle, the model should be stable, at least within a major version.) It also relies on the PowerShell plug-in using a specific, non-contractual property to flag schemas that are excluded from inlining.
```yaml
directive:
  - from: code-model-v4-no-tags
    where: $.schemas.objects.*
    transform: |
      $.language.default['skip-inline'] = true;
```

Related

Azdo custom task extension definition of string input with a regular expression doesn't work

I have an Azure DevOps custom task implemented in TypeScript, with a task.json containing a string input that is supposed to accept a semantic version:
{
  "name": "version",
  "type": "string",
  "required": true,
  "label": "Version",
  "defaultValue": "",
  "helpMarkDown": "",
  "pattern": "^(0|[1-9]\\d*)\\.(0|[1-9]\\d*)\\.(0|[1-9]\\d*)(?:-((?:0|[1-9]\\d*|\\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\\.(?:0|[1-9]\\d*|\\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\\+([0-9a-zA-Z-]+(?:\\.[0-9a-zA-Z-]+)*))?$"
},
Even though the regex for the version is defined (and the regex itself is correct, taken from the official Semantic Versioning docs), the user can still enter any string they want, with no restriction and no error message shown.
How do I make the input show an error message when the user enters a value that does not match the regular expression?
You need to use validation.expression and validation.message, as in this example:
https://github.com/microsoft/azure-pipelines-tasks/blob/b0e99b6d8c7d1b8eba65d9ec08c118832a5635e3/Tasks/KubernetesManifestV0/task.json#L90
"validation": {
"expression": "isMatch(value, '(^(([0-9]|[1-9][0-9]|100)(\\.\\d*)?)$)','Multiline')",
"message": "Enter valid percentage value i.e between 0 to 100."
}
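Applied to the version input from the question, a sketch reusing the question's own semver regex would look roughly like this (added inside the input definition; the message text is illustrative):
"validation": {
  "expression": "isMatch(value, '^(0|[1-9]\\d*)\\.(0|[1-9]\\d*)\\.(0|[1-9]\\d*)(?:-((?:0|[1-9]\\d*|\\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\\.(?:0|[1-9]\\d*|\\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\\+([0-9a-zA-Z-]+(?:\\.[0-9a-zA-Z-]+)*))?$', 'Multiline')",
  "message": "Enter a valid semantic version, e.g. 1.2.3."
}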
See also:
https://github.com/Microsoft/azure-pipelines-tasks/blob/master/docs/taskinputvalidation.md

Problem when the entity value attribute contains special characters

I have tried to insert into OCB (Orion Context Broker) an entity with an encoded password attribute:
{
  "id": "prueba-tipo-string2",
  "type": "StringParser",
  "dateObserved": {
    "type": "DateTime",
    "value": "2020-08-13T08:56:56.00Z"
  },
  "password": {
    "type": "text",
    "value": "U2FsdGVkX10bFP8Rj7xLAQDFwMBphXpK/+leH3mlpQ="
  }
}
OCB always responds with the following error:
"found a forbidden character in the value of an attribute"
In Postman:
{
  "error": "BadRequest",
  "description": "Invalid characters in attribute value"
}
Orion restricts the use of some characters for security reasons (script injection attacks in some circumstances); see this piece of documentation. In particular, the = in your password attribute value is forbidden.
You can avoid this, for instance, by encoding the password in Base64 or applying URL encoding before storing it in Orion.
Another alternative is to use TextUnrestricted as the attribute type. This special attribute type does not check whether the attribute value contains forbidden characters. However, it could have security implications, so use it at your own risk!
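For example, a sketch of the password attribute from the question with only the type changed to TextUnrestricted:
"password": {
  "type": "TextUnrestricted",
  "value": "U2FsdGVkX10bFP8Rj7xLAQDFwMBphXpK/+leH3mlpQ="
}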

Implemented a Resource Type: How does Concourse use the output of the check, in, and out scripts?

Reading the Concourse documentation about Implementing a Resource Type, with regard to what the check, in, and out scripts must emit, it is not clear why this output is needed or how Concourse uses it. My questions are:
1) How does Concourse use the output of the check script, the in script, and the out script?
2) And why is it required that the in and out scripts emit the version? What happens if you don't?
For context, here are the relevant parts of the documentation:
1) For the check script:
...[it] must print the array of new versions, in chronological order,
to stdout, including the requested version if it's still valid.
For example:
[
  { "ref": "61cbef" },
  { "ref": "d74e01" },
  { "ref": "7154fe" }
]
2) For the in script:
The script must emit the fetched version, and may emit metadata as a list of key-value pairs. This data is intended for public consumption and will make it upstream, intended to be shown on the build's page.
For example:
{
  "version": { "ref": "61cebf" },
  "metadata": [
    { "name": "commit", "value": "61cebf" },
    { "name": "author", "value": "Hulk Hogan" }
  ]
}
3) Similar to the in script, the out script:
The script must emit the resulting version of the resource. For
example, the git resource emits the sha of the commit that it just
pushed.
For example:
{
  "version": { "ref": "61cebf" },
  "metadata": [
    { "name": "commit", "value": "61cebf" },
    { "name": "author", "value": "Mick Foley" }
  ]
}
Concourse uses the check result to determine whether a new version of the resource is available. Depending on your pipeline definition, the presence of a new version will trigger a job. The in script is then used to fetch that specific version, using the parameters provided by the pipeline, while the out script takes care of writing (pushing) new versions.
Since your in is going to use the information provided by check, you may want to use a similar structure, but you are not obliged to. It is useful to emit the same version information from check/in/out so that you can log it and understand which version each resource in your pipeline belongs to.
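As a rough illustration of how these scripts fit into a pipeline, here is a sketch (resource and job names are made up; the resource type's check/in/out emit JSON like the examples above):
```yaml
resources:
  - name: my-repo
    type: git
    source:
      uri: https://example.com/my/repo.git

jobs:
  - name: build
    plan:
      - get: my-repo       # runs the resource's `in` script for the version discovered by `check`
        trigger: true      # a new version reported by `check` triggers this job
      - put: my-repo       # runs the resource's `out` script; the version it emits is recorded
        params:
          repository: my-repo
```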

Create Entities and training phrases for values in functions for google action

I have created a trivia game using the Actions SDK; it takes user input and then compares it to a value in my DB to see if it's correct.
At the moment I am just passing a raw input variable through my conversation. This means it regularly fails when the system mishears the user, since the exact string that was picked up is rarely == to the value in the DB.
Specifically, I would like it to pick up only numbers and, for example, realise that it must extract '10' from a speech input of 'my answer is 10'.
{
  "actions": [
    {
      "description": "Default Welcome Intent",
      "name": "MAIN",
      "fulfillment": {
        "conversationName": "welcome"
      },
      "intent": {
        "name": "actions.intent.MAIN"
      }
    },
    {
      "description": "response",
      "name": "Raw input",
      "fulfillment": {
        "conversationName": "rawInput"
      },
      "intent": {
        "name": "raw.input",
        "parameters": [{
          "name": "number",
          "type": "org.schema.type.Number"
        }],
        "trigger": {
          "queryPatterns": [
            "$org.schema.type.Number:number is the answer",
            "$org.schema.type.Number:number",
            "My answer is $org.schema.type.Number:number"
          ]
        }
      }
    }
  ],
  "conversations": {
    "welcome": {
      "name": "welcome",
      "url": "https://us-central1-triviagame",
      "fulfillmentApiVersion": 2
    },
    "rawInput": {
      "name": "rawInput",
      "url": "https://us-central1-triviagame",
      "fulfillmentApiVersion": 2
    }
  }
}
app.intent('actions.intent.MAIN', (conv) => {
  conv.data.answers = answersArr;
  conv.data.questions = questionsArr;
  conv.data.counter = answersArr.length;
  var thisQuestion = conv.data.questions;
  conv.ask((conv.data.answers)[0]);
});

app.intent('raw.input', (conv, input) => {
  if (input == ((conv.data.answers)[0])) {
    conv.ask(nextQuestion());
  }
});

app.intent('actions.intent.TEXT', (conv, input) => {
  // verifying if input and db value are equal
  // at the moment input is equal to 'my number is 10' (for example) instead of '10'
  // therefore the string verification never works
  conv.ask(nextQuestion());
});
In a previous project I used the Dialogflow UI, where I used the number system entity (@sys.number) as a parameter along with some training phrases so it understands different speech patterns.
The input parameter I am passing through my conv is only a raw string; I'd like it to be filtered using some sort of entity schema.
How do I create the same effect of training phrases/entities using the JSON file?
You can't do this using just the Actions SDK. You need a Natural Language Processing system (such as Dialogflow) to handle this as well. The Actions SDK, by itself, will do speech-to-text and will use the actions.json configuration to help shape how to interpret the text. But it will only return the entire text from the user; it will not try to determine how it might match an Intent, nor what parameters may exist in it.
To do that, you need an NLP/NLU system. You don't need to use Dialogflow, but you will need something that does the parsing. Trying to do it with simple pattern matching or regular expressions will lead to nightmares; find a good system to do it.
If you want to stick to things you can edit yourself, Dialogflow does allow you to download its configuration files (they're just JSON), edit them, and update or replace the configuration through the UI or an API.

API Mapping Templates with Serverless

When using the http event with the serverless-framework, multiple response status codes are created by default.
In case of error, a Lambda returns the error message stringified in the errorMessage property, so you need a mapping template such as
$input.path('$.errorMessage')
for any status code you want to use. For example:
"response": {
"statusCodes": {
"200": {
"pattern": ""
},
"500": {
"pattern": ".*\"success\":false.*",
"template": "$input.path('$.errorMessage')"
}
}
}
But the serverless-framework does not create such a template by default, which renders the default status codes useless. And if I create a mapping template myself, the default response status codes are overwritten by my custom ones.
What is the correct way of mapping with the default status codes created by serverless-framework 1.27.3?
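For reference, defining the mapping myself would look roughly like this in serverless.yml (a sketch, assuming the lambda integration; the function name, handler, and path are illustrative):
```yaml
functions:
  hello:
    handler: handler.hello        # illustrative handler
    events:
      - http:
          path: hello
          method: get
          integration: lambda     # custom request/response mapping only applies to the lambda integration
          response:
            statusCodes:
              200:
                pattern: ''
              500:
                pattern: '.*"success":false.*'
                template: $input.path('$.errorMessage')
```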