Azure Pipelines: Bulk approval of deployments to environments - azure-devops

Is there any way to approve runs via the CLI or the API (or anything else)? I'm looking for a way to bulk approve multiple runs from different pipelines, but this is not available in the UI.
Let's say I have 100 pipelines that have a deployment job to a production environment. I would like to approve all runs that are awaiting approval.
Currently, I cannot find something like it in the docs of the Azure DevOps REST API or the CLI.
The feature docs:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/approvals
The following question is related, but I'm looking for any way of solving this, not just via the API:
Approve a yaml pipeline deployment in Azure DevOps using REST api

I was just searching for an answer to this myself, specifically regarding how to get the approval id that you need. In fact, there is an undocumented API to approve an approval check.
As Merlin explained, it is the following:
https://dev.azure.com/{org}/{project}/_apis/pipelines/approvals/{approvalId}
The body has to look like this
[{
    "approvalId": "{approvalId}",
    "status": {approvalStatus},
    "comment": ""
}]
where {approvalStatus} tells the API whether you approved or not. You probably have to experiment, but I used 4 as the status. I guess there are only two possibilities, one for "approved" and one for "denied".
The question now is how you get the approval id. I found out: you get it by using the timeline API of the build. The build API documentation says you get the build run by the following:
https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}?api-version=5.1
The build timeline URL is returned in the response of the build run, but it also follows a pattern, which is:
https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}/Timeline?api-version=5.1
Besides being a flat array containing a parent/child relationship across stage, phase, job and task records, you can find within it something like the following:
{
    "records": [
        {
            "previousAttempts": [],
            "id": "95f5837e-769d-5a92-9ecb-0e7edb3ac322",
            "parentId": "9e7965a8-d99d-5b8f-b47b-3ee7c58a5b1c",
            "type": "Checkpoint",
            "name": "Checkpoint",
            "startTime": "2020-08-14T13:44:03.05Z",
            "finishTime": null,
            "currentOperation": null,
            "percentComplete": null,
            "state": "inProgress",
            "result": null,
            "resultCode": null,
            "changeId": 73,
            "lastModified": "0001-01-01T00:00:00",
            "workerName": null,
            "details": null,
            "errorCount": 0,
            "warningCount": 0,
            "url": null,
            "log": null,
            "task": null,
            "attempt": 1,
            "identifier": "Checkpoint"
        },
        {
            "previousAttempts": [],
            "id": "9e7965a8-d99d-5b8f-b47b-3ee7c58a5b1c",
            "parentId": null,
            "type": "Stage",
            "name": "Power Platform Test (orgf92be262)",
            "startTime": null,
            "finishTime": null,
            "currentOperation": null,
            "percentComplete": null,
            "state": "pending",
            "result": null,
            "resultCode": null,
            "changeId": 1,
            "lastModified": "0001-01-01T00:00:00",
            "workerName": null,
            "order": 2,
            "details": null,
            "errorCount": 0,
            "warningCount": 0,
            "url": null,
            "log": null,
            "task": null,
            "attempt": 1,
            "identifier": "Import_Test"
        },
        {
            "previousAttempts": [],
            "id": "e54149c5-b5a7-4b82-8468-56ad493224b5",
            "parentId": "95f5837e-769d-5a92-9ecb-0e7edb3ac322",
            "type": "Checkpoint.Approval",
            "name": "Checkpoint.Approval",
            "startTime": "2020-08-14T13:44:03.02Z",
            "finishTime": null,
            "currentOperation": null,
            "percentComplete": null,
            "state": "inProgress",
            "result": null,
            "resultCode": null,
            "changeId": 72,
            "lastModified": "0001-01-01T00:00:00",
            "workerName": null,
            "details": null,
            "errorCount": 0,
            "warningCount": 0,
            "url": null,
            "log": null,
            "task": null,
            "attempt": 1,
            "identifier": "e54149c5-b5a7-4b82-8468-56ad493224b5"
        }
    ],
    "lastChangedBy": "00000002-0000-8888-8000-000000000000",
    "lastChangedOn": "2020-08-14T13:44:03.057Z",
    "id": "86fb4204-9c5e-4e72-bdb1-eefe230480ec",
    "changeId": 73,
    "url": "https://dev.azure.com/***"
}
In the response above you can see a record called "Checkpoint.Approval". The id of that record IS the approval id you need to approve everything. If you want to know which stage the approval belongs to, you can follow the parentId references up until the parentId property is null.
That record is the stage.
With this you can successfully get the approval id and use it to approve via the endpoint mentioned above.
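As a rough illustration of how these two calls fit together, here is a minimal Python sketch; the organization, project, build id and personal access token are placeholders, and it assumes the record layout shown in the timeline response above:
import requests

org, project, build_id = "my-org", "my-project", 1234   # hypothetical values
pat = "your_personal_access_token"
auth = ("", pat)                                         # basic auth with an empty user name and a PAT

# Fetch the build timeline and collect the ids of all Checkpoint.Approval records.
timeline_url = f"https://dev.azure.com/{org}/{project}/_apis/build/builds/{build_id}/Timeline?api-version=5.1"
records = requests.get(timeline_url, auth=auth).json()["records"]
approval_ids = [r["id"] for r in records if r["type"] == "Checkpoint.Approval"]

# To see which stage an approval belongs to, walk parentId upwards until it is null.
by_id = {r["id"]: r for r in records}
for approval_id in approval_ids:
    record = by_id[approval_id]
    while record.get("parentId"):
        record = by_id[record["parentId"]]
    print(approval_id, "->", record["name"])             # record is now the Stage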

jessehouwing's guess is correct. Multi-stage pipelines are still in preview, and the corresponding SDK/API/extension hasn't been expanded and made available to the public yet.
You may wonder about not using the API at all. I have checked the corresponding code in our backend: all operations on multi-stage approvals contain one required parameter, approvalId. As you probably know, this value is unique, and each approval maps to a different approvalId value. This means that no matter which method you try, the approvalId is the big obstacle. To my knowledge there is currently no API, SDK, third-party tool or extension that can retrieve this value directly.
In addition, the release process logic of multi-stage YAML is not the same as that of a release defined in the UI. So none of the public APIs that work with UI-defined releases are suitable for multi-stage releases.
We have one undisclosed API that can get the approval message of a multi-stage run:
https://dev.azure.com/{org}/{project}/_apis/pipelines/approvals/{approvalId}
You can try listing approvals without specifying an approvalId: https://dev.azure.com/{org}/{project}/_apis/pipelines/approvals. The response message is: "Query for approvals failed. A minimum of one query parameter is required.\r\nParameter name: queryParameters". This means you must tell the system which specific approval you mean (the big obstacle I mentioned previously).
In fact, the reason approvalId is a required part comes from our backend code structure. I'd suggest you raise a suggestion for developing an API/SDK for multi-stage pipelines here.

I can confirm that Sebastian's answer worked for me, even in Azure DevOps 2020 on-prem.
After retrieving the approvalId via either of the methods above (I was specifically using a service hook for my integration), I used the following API PATCH call:
https://dev.azure.com/{organization}/{project}/_apis/pipelines/approvals/?api-version=6.0-preview
and in the body:
[
    {
        "approvalId": "{approvalId}",
        "status": {status integer}, (4 - approved; 8 - rejected)
        "comment": ""
    }
]
The call is made with the application/json Content-Type. In some situations the endpoint did not like the surrounding [] brackets, so you may need to work around that; only then will the call work.
I was even able to integrate this call into my custom connector in MS Power Automate.
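For completeness, the same PATCH can be issued from a short Python sketch; the organization, project, approval id and token are placeholders, and the 4/8 status integers are taken from this answer rather than from official documentation:
import requests

org, project = "my-org", "my-project"                        # hypothetical values
pat = "your_personal_access_token"
approval_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"         # taken from the timeline or a service hook

url = f"https://dev.azure.com/{org}/{project}/_apis/pipelines/approvals/?api-version=6.0-preview"
body = [{"approvalId": approval_id, "status": 4, "comment": ""}]   # 4 = approved, 8 = rejected (per this answer)
response = requests.patch(url, json=body, auth=("", pat))
print(response.status_code, response.text)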

I added support for bulk pipeline approvals to the latest version of the AzurePipelinesPS PowerShell module.
Code snippet without using AzurePipelinesPS sessions
$instance = 'https://dev.azure.com'
$collection = 'your_project'
$project = 'your_project'
$apiVersion = '5.1-preview'
$securePat = 'your_personal_access_token' | ConvertTo-SecureString -Force -AsPlainText
Get-APPipelinePendingApprovalList -Instance $instance -Collection $collection -Project $project -PersonalAccessToken $securePat -ApiVersion $apiVersion |
    Out-GridView -Passthru |
    % { Update-APPipelineApproval -Instance $instance -Collection $collection -Project $project -PersonalAccessToken $securePat -ApiVersion $apiVersion -ApprovalId $PSitem.approvalId -status 'approved' }
Code snippet with AzurePipelinesPS sessions
$session = 'your_session'
Get-APPipelinePendingApprovalList $session | Out-GridView -Passthru | % { Update-APPipelineApproval $session -ApprovalId $PSitem.approvalId -status 'approved'}
See the AzurePipelinesPS project page for details on secure session handling.
Function Definitions used in the code above
Get-APPipelinePendingApprovalList
Loops through pipeline build runs with the status of 'notStarted' or 'inProgress' in a project. This build lookup supports filters like pipeline definition ids or a source branch name.
For each build it then looks up the timeline record where the approval id, the stage name and the stage identifier are found.
Optionally, with the ExpandApproval switch, it can expand each approval with its details.
The object returned by this function contains the following properties (the values have been mocked):
pipelineDefinitionName : MyPipeline
pipelineDefinitionId : 100
pipelineRunId : 2001
pipelineUrl : https://dev.azure.com/your_project/_build/results?
sourceBranch : refs/heads/master
stageName : Prod Deployment
stageIdentifier : Prod_Deployment
approvalId : xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx
Out-GridView
Displays data in a Grid View where the results can be filtered, ordered and selected.
%
The percent sign is shorthand for Foreach-Object
Update-APPipelineApproval
Updates the status of an approval to approved or rejected.
Credit
Thanks to Sebastian Schütze for cracking the timeline part!

The az pipelines extension doesn't support approvals yet, I suppose because multi-stage pipelines are still in preview and the old release hub will eventually be replaced by them.
But there is a REST API you can use to list and update approvals. These can be called from PowerShell with relative ease.
Or use the vsteam powershell module and Get-VSTeamApproval and Set-VSTeamApproval.

Related

MongoDB Replica Set - The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid

This concerns the Azure deployment template for a MongoDB replica set defined here: mongodb-replica-set-centos.
When I run the recommended deployment commands to deploy the replica set, namely
az group create --name <resource-group-name> --location <resource-group-location> # Use this command when you need to create a new resource group for your deployment.
az deployment group create --resource-group <my-resource-group> --template-uri https://raw.githubusercontent.com/migr8/AzureDeploymentTemplates/main/mongo/mongodb-replica-set-centos/azuredeploy.json
where the resource group is already set up, I receive the following error:
{
"status": "Failed",
"error": {
"code": "DeploymentFailed",
"message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.",
"details": [
{
"code": "Conflict",
"message": "{\r\n \"status\": \"Failed\",\r\n \"error\": {\r\n \"code\": \"ResourceDeploymentFailure\",\r\n \"message\": \"The resource operation completed with terminal provisioning state 'Failed'.\",\r\n \"details\": [\r\n {\r\n \"code\": \"DeploymentFailed\",\r\n \"message\": \"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.\",\r\n \"details\": [\r\n {\r\n \"code\": \"BadRequest\",\r\n \"message\": \"{\\r\\n \\\"error\\\": {\\r\\n \\\"code\\\": \\\"InvalidParameter\\\",\\r\\n \\\"message\\\": \\\"The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid.\\\",\\r\\n \\\"target\\\": \\\"linuxConfiguration.ssh.publicKeys.keyData\\\"\\r\\n }\\r\\n}\"\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n}"
},
{
"code": "Conflict",
"message": "{\r\n \"status\": \"Failed\",\r\n \"error\": {\r\n \"code\": \"ResourceDeploymentFailure\",\r\n \"message\": \"The resource operation completed with terminal provisioning state 'Failed'.\",\r\n \"details\": [\r\n {\r\n \"code\": \"DeploymentFailed\",\r\n \"message\": \"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.\",\r\n \"details\": [\r\n {\r\n \"code\": \"BadRequest\",\r\n \"message\": \"{\\r\\n \\\"error\\\": {\\r\\n \\\"code\\\": \\\"InvalidParameter\\\",\\r\\n \\\"message\\\": \\\"The value of parameter linuxConfiguration.ssh.publicKeys.keyData is invalid.\\\",\\r\\n \\\"target\\\": \\\"linuxConfiguration.ssh.publicKeys.keyData\\\"\\r\\n }\\r\\n}\"\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n}"
}
]
}
}
The problem field, which appears in both primary-resources.json and secondary-resources.json, seems to be:
"variables": {
"subnetRef": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('subnet').vnet, parameters('subnet').name)]",
"securityGroupName": "[concat(parameters('namespace'), parameters('vmbasename'), 'nsg')]",
"linuxConfiguration": {
"disablePasswordAuthentication": true,
"ssh": {
"publicKeys": [
{
"path": "[concat('/home/', parameters('adminUsername'), '/.ssh/authorized_keys')]",
"keyData": "[parameters('adminPasswordOrKey')]"
}
]
}
}
},
and is associated with the variable adminPasswordOrKey. I have tried changing this to both standard passwords and SSH keys of varying bit depth, with no luck...
How can I fix this?
Repro steps
Run az group create --name <resource-group-name> --location <resource-group-location> where resource group exists.
Run az deployment group create --resource-group <my-resource-group> --template-uri https://raw.githubusercontent.com/migr8/AzureDeploymentTemplates/main/mongo/mongodb-replica-set-centos/azuredeploy.json and step through the prompts
Enter the relevant information.
Further Investigation
I have just seen this answer (https://stackoverflow.com/a/60860498/626442), which specifically says:
Note: Please note that the only allowed path is /home//.ssh/authorized_keys due to a limitation of Azure.
I have changed the value of the path; no joy, same error. :'[
You forgot to pass parameters in az deployment group create .... --parameters azuredeploy.parameters.json. You can download azuredeploy.parameters.json and change values as needed. See https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/template-tutorial-use-parameter-file?tabs=azure-cli#deploy-template for details.
Specifically, the error in the question complains about the adminUsername parameter being empty. Bear in mind this user name is also used in the home directory path, so limit yourself to lowercase ASCII a-z, numbers and underscores. No spaces, no special characters, no UTF.
Not related to the error, but be aware these necromancers use Mongo 3.2, which was buried 4 years ago: https://www.mongodb.com/support-policy/lifecycles. Considering they open it wide to the internet, you may have way more problems if you actually deploy it.
UPDATE
An example of the parameters I used:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"adminUsername": {
"value": "yellow"
},
"mongoAdminUsername": {
"value": "phrase"
},
"mongoAdminPassword": {
"value": "settle#SING"
},
"secondaryNodeCount": {
"value": 2
},
"sizeOfDataDiskInGB": {
"value": 2
},
"dnsNamePrefix": {
"value": "written"
},
"centOsVersion": {
"value": "7.7"
},
"primaryNodeVmSize": {
"value": "Standard_D1_v2"
},
"secondaryNodeVmSize": {
"value": "Standard_D1_v2"
},
"zabbixServerIPAddress": {
"value": "Null"
},
"adminPasswordOrKey": {
"value": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDdNRTU0XF3xazhhDmwXXWGG7wp4AaQC1r89K7sZFRXp9VSUtydV59DHr67mV5/0DWI5Co1yWK713QJ00BPlBIHNMNLuoBBq8IkOx8fBZF1g9YFm5Zy4ay+CF4WgDITAsyxhKvUWL6jwG5M3XIdVYm49K+EFOCWSSaNtCk8tHhi3v6/5HFkwc2r0UL/WWWbbt5AmpJ8QOCDk/x+XcgCjP9vE5jYYGsFz9F6V1FdOpjVfDwi13Ibivj/w2wOZh2lQGskC+qDjd2upK13+RfWYHY3rr+ulNRPckHRhOqmZ2vlUapO4T0X9mM6ugSh1FprLP5nHdVCUls2yw4BAcSoM9NMiyafE56Xkp9h3bTAfx5Ufpe5mjwQp+j15np1pVpwDaEgk7ZeaPoZPhbalpvZGyg9KiKfs9+KUYHfGklIOHKJ3RUoPE286rg1U4LGswil5RARRSf86kBBHXaIPxy1X0N6QryeWhk0aM6LWEdl7mVbQksa7ilANnsaVMl7FSdY/Cc="
}
}
}
DANGER: this will deploy a publicly accessible MongoDB replica set with publicly accessible credentials, so please delete the resources as soon as you are happy with your testing/debugging.
This is what the deployment looks like on the portal (screenshot not included here).

Pod identity on AKS cluster creation

Right now, it seems impossible to assign user-assigned pod identities via ARM templates (or Terraform) at cluster creation time. I have already tried a lot of things, and updates work great after inserting the identity manually with:
az aks pod-identity add --cluster-name my-aks-cn --resource-group myrg --namespace myns --name example-pod-identity --identity-resource-id /subscriptions/......
But I want to have this done in one go, with the deployment, so I need to insert the pod identities into the cluster automatically. I also tried to run the command using DeploymentScripts, but deployment scripts are not able to use the preview AKS extension.
My config looks like this:
{
    "type": "Microsoft.ContainerService/managedClusters",
    "apiVersion": "2021-02-01",
    "name": "[variables('cluster_name')]",
    "location": "[variables('location')]",
    "dependsOn": [
        "[resourceId('Microsoft.Network/virtualNetworks', variables('vnet_name'))]"
    ],
    "properties": {
        ....
        "podIdentityProfile": {
            "allowNetworkPluginKubenet": null,
            "enabled": true,
            "userAssignedIdentities": [
                {
                    "identity": {
                        "clientId": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', 'managed-indentity'), '2018-11-30').clientId]",
                        "objectId": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', 'managed-indentity'), '2018-11-30').principalId]",
                        "resourceId": "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', 'managed-indentity')]"
                    },
                    "name": "managed-indentity",
                    "namespace": "myns"
                }
            ],
            "userAssignedIdentityExceptions": null
        },
        ....
    },
    "identity": {
        "type": "SystemAssigned"
    }
},
I'm always getting the same issue:
"statusMessage": "{\"error\":{\"code\":\"InvalidTemplateDeployment\",\"message\":\"The template deployment 'deployment_test' is not valid according to the validation procedure. The tracking id is '.....'. See inner errors for details.\",\"details\":[{\"code\":\"PodIdentityAddonUserAssignedIdentitiesNotAllowedInCreation\",\"message\":\"Provisioning of resource(s) for container service cluster-12344 in resource group myrc failed. Message: {\\n \\\"code\\\": \\\"PodIdentityAddonUserAssignedIdentitiesNotAllowedInCreation\\\",\\n \\\"message\\\": \\\"PodIdentity addon does not support assigning pod identities on creation.\\\"\\n }. Details: \"}]}}",
The Product team has shared the answer here: https://github.com/Azure/aad-pod-identity/issues/1123
which says:
This is a known limitation in the existing configuration. We will fix
this in the V2 implementation.
For others who are facing the same issue, please refer to the GitHub issue above.

How to create a schedule in PagerDuty using the REST API and Python

I went through the documentation of PagerDuty but am still not able to understand what parameters to send in the request body, and I am also having trouble understanding how to make the API request. If anyone can share sample code for creating a PagerDuty schedule, that would help me a lot.
Below is sample code to create schedules in PagerDuty.
Each list can have multiple items (to add more users/layers).
import requests

url = "https://api.pagerduty.com/schedules?overflow=false"

payload = {
    "schedule": {
        "schedule_layers": [
            {
                "start": "<dateTime>",  # Start time of layer | "start": "2021-01-01T00:00:00+05:30",
                "users": [
                    {
                        "user": {
                            "id": "<string>",  # ID of user to add in layer
                            "summary": "<string>",
                            "type": "<string>",  # "type": "user"
                            "self": "<url>",
                            "html_url": "<url>"
                        }
                    }
                ],
                "rotation_virtual_start": "<dateTime>",  # Start of layer | "rotation_virtual_start": "2021-01-01T00:00:00+05:30",
                "rotation_turn_length_seconds": "<integer>",  # Layer rotation, for multiple user switching | "rotation_turn_length_seconds": <seconds>,
                "id": "<string>",  # Auto-generated. Only needed if you want to update an existing schedule layer
                "end": "<dateTime>",  # End time of layer | "end": "2021-01-01T00:00:00+05:30",
                "restrictions": [
                    {
                        "type": "<string>",  # To restrict shift to certain timings, weekly, daily, etc. | "type": "daily_restriction",
                        "duration_seconds": "<integer>",  # Duration of layer | "duration_seconds": "300"
                        "start_time_of_day": "<partial-time>",  # Start time of layer | "start_time_of_day": "00:00:00",
                        "start_day_of_week": "<integer>"
                    }
                ],
                "name": "<string>",  # Name to give layer
            }
        ],
        "time_zone": "<activesupport-time-zone>",  # Timezone to set for layer and its timings | "time_zone": "Asia/Kolkata",
        "type": "schedule",
        "name": "<string>",  # Name to give schedule
        "description": "<string>",  # Description to give schedule
        "id": "<string>",  # Auto-generated. Only needed if you want to update an existing schedule
    }
}

headers = {
    'Authorization': 'Token token=<Your token here>',
    'Accept': 'application/vnd.pagerduty+json;version=2',
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, json=payload)
print(response.text)
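For reference, a filled-in payload might look roughly like the sketch below, reusing the url and headers from the snippet above; the user id and timestamps are hypothetical placeholders and the field set simply mirrors the template:
payload = {
    "schedule": {
        "name": "Primary On-Call",
        "type": "schedule",
        "time_zone": "Asia/Kolkata",
        "schedule_layers": [
            {
                "name": "Layer 1",
                "start": "2021-01-01T00:00:00+05:30",
                "rotation_virtual_start": "2021-01-01T00:00:00+05:30",
                "rotation_turn_length_seconds": 86400,            # rotate users daily
                "users": [
                    {"user": {"id": "PXXXXXX", "type": "user"}}   # hypothetical user id
                ]
            }
        ]
    }
}
response = requests.post(url, headers=headers, json=payload)
print(response.text)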
The best way to do this is to get the Postman collection for PagerDuty and edit the request to your liking. Once you get a successful response, convert it into code using the built-in feature of Postman.
Using the PagerDuty API for scheduling is not easy. Creating a new schedule is okay-ish, but if you decide to update a schedule, it is definitely not trivial. You will probably run into a bunch of limitations: the number of restrictions per layer, having to reuse current layers, etc.
As an option, you can use the Python library pdscheduling: https://github.com/skrypka/pdscheduling

PropertyParams when deploying VM from OVF

I am using the VMWare vCenter REST API to deploy new Virtual Machines from OVF library items. Part of the API allows for additional_parameters, but I am unable to get it to function properly. Specifically, I would like to set the PropertyParams for custom OVF template properties.
When deploying VM from OVF, I am using the following REST API:
POST https://{server}/rest/com/vmware/vcenter/ovf/library-item/id:{ovf_library_item_id}?~action=deploy
I have tried many structures and either end up with the POST succeeding but the parameters being completely ignored, or with a 500 Internal Server Error with a message about failing to convert the properties structure:
Could not convert field 'properties' of structure 'com.vmware.vcenter.ovf.property_params'
The payload that seems correct from the documentation (but fails with the error above):
deployment_spec : {
    /* ... */
    additional_parameters : [
        {
            type : 'PropertyParams',
            properties : [
                {
                    id : 'my_property_name',
                    value : 'foo',
                }
            ]
        }
    ]
}
Given an OVF that contains the following:
<ProductSection>
    <Info>Information about the installed software</Info>
    <Product>MyProduct</Product>
    <Vendor>MyCompany</Vendor>
    <Version>1.0</Version>
    <Category>Config</Category>
    <Property ovf:userConfigurable="true" ovf:type="string" ovf:key="my_property_name" ovf:value="">
        <Label>My Property</Label>
        <Description>A custom property</Description>
    </Property>
</ProductSection>
This also fails for other property types such as boolean.
Note that I have posted on the vCenter forums as well.
I had the same issue; I managed to solve it by browsing the vAPI structure /com/vmware/vapi/metadata/metamodel/structure/id:<idstructure>
Here are my findings:
First, get your properties structure by using the filter API:
https://{{vc}}/rest/com/vmware/vcenter/ovf/library-item/id:300401a5-4561-4c3d-ac67-67bc7a1a6
Then, to deploy, use the class com.vmware.vcenter.ovf.property_params. It will be clearer with the example:
{
    "deployment_spec": {
        "accept_all_EULA": true,
        "name": "clientok",
        "default_datastore_id": "datastore-10",
        "additional_parameters": [
            {
                "#class": "com.vmware.vcenter.ovf.property_params",
                "properties": [
                    {
                        "instance_id": "",
                        "class_id": "",
                        "description": "The gateway IP for this virtual appliance.",
                        "id": "gateway",
                        "label": "Default Gateway Address",
                        "category": "LAN",
                        "type": "ip",
                        "value": "10.1.2.1",
                        "ui_optional": true
                    }
                ],
                "type": "PropertyParams"
            }
        ]
    }
}
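To show how such a payload might be sent end to end, here is a minimal Python sketch; the vCenter host, credentials, resource pool id and certificate handling are placeholders, and in particular the "target" block is not shown in the answer above and is only included as an assumption about where the VM should be placed:
import requests

vcenter = "vcenter.example.com"                      # hypothetical vCenter host
item_id = "300401a5-4561-4c3d-ac67-67bc7a1a6"        # the library item id used above

# Open an API session (verify=False only for lab setups with self-signed certificates).
session = requests.post(f"https://{vcenter}/rest/com/vmware/cis/session",
                        auth=("administrator@vsphere.local", "password"), verify=False)
token = session.json()["value"]

body = {
    "target": {"resource_pool_id": "resgroup-9"},    # assumption: deployment target
    "deployment_spec": {
        "accept_all_EULA": True,
        "name": "clientok",
        "default_datastore_id": "datastore-10",
        "additional_parameters": [
            {
                "#class": "com.vmware.vcenter.ovf.property_params",
                "type": "PropertyParams",
                "properties": [
                    {"instance_id": "", "class_id": "", "id": "gateway",
                     "label": "Default Gateway Address", "category": "LAN",
                     "description": "The gateway IP for this virtual appliance.",
                     "type": "ip", "value": "10.1.2.1", "ui_optional": True}
                ]
            }
        ]
    }
}

response = requests.post(
    f"https://{vcenter}/rest/com/vmware/vcenter/ovf/library-item/id:{item_id}?~action=deploy",
    headers={"vmware-api-session-id": token}, json=body, verify=False)
print(response.status_code, response.text)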

Asterisk REST ARI snoop (cURL)

I try to:
curl -v -u j123:j321 -X POST "http://localhost:8088/ari/channels/1421226074.4874/snoop?spy=SIP/695"
In response I receive:
"message": "Invalid direction specified for spy"
I have tried:
SIP/695, SIP:695, SIP#695, localhost#695, channel, channelName
None of them work.
The call comes into the queue from sip-416 to queue_1 and is distributed to 694. I need to connect 695 to wiretap channel 1421226074.4874.
I only need to listen and not to whisper.
Help me, please.
The error message is telling you what the problem is:
"message": "Invalid direction specified for spy"
The spy parameter is a direction for spying, not the channel to spy on (see reference documentation here). You've already specified the channel to snoop on in the URI path - you need to specify the direction of the media in the spy parameter.
As an aside, apparently the auto-generated wiki isn't displaying enum values, which is unfortunate. We'll have to fix that.
For reference, here's the parameter in the Swagger JSON:
"name": "spy",
"description": "Direction of audio to spy on",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string",
"defaultValue": "none",
"allowableValues": {
"valueType": "LIST",
"values": [
"none",
"both",
"out",
"in"
]
}
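Putting this together, the request should pass one of those enum values as the spy direction ('both' to hear both sides of the conversation). Below is a minimal Python sketch of the corrected call, using the same credentials and channel id as in the question; note that the snoop operation hands the new snoop channel to an ARI (Stasis) application, so the app value here is a placeholder for your own application, and bridging that snoop channel to extension 695 is a separate step:
import requests

# Snoop on channel 1421226074.4874, listening to both directions of audio, without whispering.
response = requests.post(
    "http://localhost:8088/ari/channels/1421226074.4874/snoop",
    params={"spy": "both", "whisper": "none", "app": "my_ari_app"},   # 'my_ari_app' is a placeholder
    auth=("j123", "j321"),
)
print(response.status_code, response.text)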