I am trying to use the REST API PUT call to
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/{serviceName}/apis/{apiName}/schemas/{schemaId}?api-version=2021-01-01-preview
because the equivalent PowerShell cmdlet doesn't work as expected for adding schema definitions. The problem is that even the REST API call can only add one definition at a time. If my schema has more than one definition, each subsequent PUT call overwrites the previously written definition, so in the end only one definition remains. I tried adding an If-Match header to the second and subsequent calls too, but in vain.
I also tried adding multiple definitions under "schemas" as a JSON array. That does create multiple definitions in one go, but the definition names come out as 0, 1, 2, 3, etc. instead of the actual names given in the input JSON body.
Multiple Definition Sample below -
{
  "properties": {
    "contentType": "application/vnd.oai.openapi.components+json",
    "document": {
      "components": {
        "schemas": [
          {
            "Definition1": {
              "type": "object",
              "properties": {
                "String1": {
                  "type": "string"
                }
              }
            },
            "Definition2": {
              "type": "object",
              "properties": {
                "String2": {
                  "type": "integer"
                }
              }
            }
          }
        ]
      }
    }
  }
}
Does the PUT call allow adding multiple definitions at once, and if so, how?
I found the issue: it was in the JSON body being PUT in the REST API request.
The JSON for multiple definitions has to look like this -
{
  "properties": {
    "contentType": "application/vnd.oai.openapi.components+json",
    "document": {
      "components": {
        "schemas": {
          "Definition1": {
            "type": "object",
            "properties": {
              "String1": {
                "type": "string"
              }
            }
          },
          "Definition2": {
            "type": "object",
            "properties": {
              "String2": {
                "type": "integer"
              }
            }
          }
        }
      }
    }
  }
}
The definitions given under "schemas" must not be wrapped in []: "schemas" is a JSON object keyed by definition name, not an array. Structure the body as above and the definition names are preserved.
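For reference, here is a minimal Python sketch of the corrected call using the requests library. The {placeholders} in the URL and the bearer token are assumptions you would substitute with your own values:

import requests

# Hypothetical placeholders -- substitute your subscription, resource group,
# APIM service, API, and schema identifiers.
url = (
    "https://management.azure.com/subscriptions/{subscriptionId}"
    "/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement"
    "/service/{serviceName}/apis/{apiName}/schemas/{schemaId}"
    "?api-version=2021-01-01-preview"
)

body = {
    "properties": {
        "contentType": "application/vnd.oai.openapi.components+json",
        "document": {
            "components": {
                # "schemas" is an object keyed by definition name, not an array
                "schemas": {
                    "Definition1": {
                        "type": "object",
                        "properties": {"String1": {"type": "string"}},
                    },
                    "Definition2": {
                        "type": "object",
                        "properties": {"String2": {"type": "integer"}},
                    },
                }
            }
        }
    }
}

# A valid Azure AD access token is assumed here.
response = requests.put(
    url,
    json=body,
    headers={"Authorization": "Bearer <access-token>"},
)
response.raise_for_status()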
I am porting code from Ruby to Python for CloudFormation stack creation projects. Below is a template for which stack creation keeps failing with 'Parameter values specified for a template which does not require them.'
This really doesn't tell me anything.
I have checked the JSON against the schemas and it was all OK, and checked it against the stack created by the original code and it matches. Can someone see an issue here, or at least point me in the right direction?
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "EcsStack-5ad0c44afbf508d0b5a158df0da307fca33f5f63",
  "Outputs": {
    "marc1EcsCluster": {
      "Value": {
        "Ref": "marc1EcsCluster"
      }
    },
    "marc1EcsClusterArn": {
      "Value": {
        "Fn::GetAtt": [
          "marc1EcsCluster",
          "Arn"
        ]
      }
    }
  },
  "Parameter": {
    "Vpc": {
      "Description": "VPC ID",
      "Type": "String"
    }
  },
  "Resources": {
    "CloudFormationDummyResource": {
      "Metadata": {
        "Comment": "Resource to update stack even if there are no changes",
        "GitCommitHash": "5ad0c44afbf508d0b5a158df0da307fca33f5f63"
      },
      "Type": "AWS::CloudFormation::WaitConditionHandle"
    },
    "marc1EcsCluster": {
      "Type": "AWS::ECS::Cluster"
    }
  },
  "Transform": "AWS::Serverless-2016-10-31"
}
The top-level section must be named Parameters (plural), not Parameter. Because the section is misnamed, CloudFormation sees a template with no Parameters section, so passing a value for Vpc triggers exactly that error.
As more general advice, the CloudFormation Linter will catch these errors with messages like:
E1001: Top level item Parameter isn't valid
template.json:19
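Once the section is renamed to Parameters, the Python side could look something like this minimal boto3 sketch. The stack name, region, and VPC ID are hypothetical; CAPABILITY_AUTO_EXPAND is needed because the template declares a Transform:

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")  # hypothetical region

with open("template.json") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="EcsStack",  # hypothetical stack name
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "Vpc", "ParameterValue": "vpc-12345678"}  # hypothetical VPC ID
    ],
    Capabilities=["CAPABILITY_AUTO_EXPAND"],  # required because the template uses a Transform
)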
I just finished the Pluralsight course and completed the tutorial in the official project documentation without problems, but nevertheless, using the CLI I could not use the functions get_acc_ast_tx, get_acc_tx. I checked that the peer keys and the configuration files correspond to the genesis file, where admin@test is allowed to use these functions, and I get:
[2019-12-08 04:55:57.883070400] [E] [CLI/ResponseHandler/Query]: Query is stateless invalid.
The genesis file I use is the initial one from the git repository:
{
  "blockV1": {
    "payload": {
      "transactions": [{
        "payload": {
          "reducedPayload": {
            "commands": [{
              "addPeer": {
                "peer": {
                  "address": "127.0.0.1:10001",
                  "peerKey": "bddd58404d1315e0eb27902c5d7c8eb0602c16238f005773df406bc191308929"
                }
              }
            }, {
              "createRole": {
                "roleName": "admin",
                "permissions": ["can_add_peer", "can_add_signatory", "can_create_account", "can_create_domain", "can_get_all_acc_ast", "can_get_all_acc_ast_txs", "can_get_all_acc_detail", "can_get_all_acc_txs", "can_get_all_accounts", "can_get_all_signatories", "can_get_all_txs", "can_get_blocks", "can_get_roles", "can_read_assets", "can_remove_signatory", "can_set_quorum"]
              }
            }, {
              "createRole": {
                "roleName": "user",
                "permissions": ["can_add_signatory", "can_get_my_acc_ast", "can_get_my_acc_ast_txs", "can_get_my_acc_detail", "can_get_my_acc_txs", "can_get_my_account", "can_get_my_signatories", "can_get_my_txs", "can_grant_can_add_my_signatory", "can_grant_can_remove_my_signatory", "can_grant_can_set_my_account_detail", "can_grant_can_set_my_quorum", "can_grant_can_transfer_my_assets", "can_receive", "can_remove_signatory", "can_set_quorum", "can_transfer"]
              }
            }, {
              "createRole": {
                "roleName": "money_creator",
                "permissions": ["can_add_asset_qty", "can_create_asset", "can_receive", "can_transfer"]
              }
            }, {
              "createDomain": {
                "domainId": "test",
                "defaultRole": "user"
              }
            }, {
              "createAsset": {
                "assetName": "coin",
                "domainId": "test",
                "precision": 2
              }
            }, {
              "createAccount": {
                "accountName": "admin",
                "domainId": "test",
                "publicKey": "313a07e6384776ed95447710d15e59148473ccfc052a681317a72a69f2a49910"
              }
            }, {
              "createAccount": {
                "accountName": "test",
                "domainId": "test",
                "publicKey": "716fe505f69f18511a1b083915aa9ff73ef36e6688199f3959750db38b8f4bfc"
              }
            }, {
              "appendRole": {
                "accountId": "admin@test",
                "roleName": "admin"
              }
            }, {
              "appendRole": {
                "accountId": "admin@test",
                "roleName": "money_creator"
              }
            }],
            "quorum": 1
          }
        }
      }],
      "txNumber": 1,
      "height": "1",
      "prevBlockHash": "0000000000000000000000000000000000000000000000000000000000000000"
    }
  }
}
I use the Hyperledger Iroha Docker image, on macOS Catalina.
I followed the tutorial according to this manual: https://iroha.readthedocs.io/en/latest/build/index.html
Thank you very much for the help.
Unfortunately, the CLI is rather outdated – we are working on a new solution for it, but meanwhile it is better to use one of the available SDKs – for Java, Python, JS, or iOS (if you prefer mobile development).
All of them contain examples, so it should not be too tricky to use them. If you do encounter any issues, please contact us using one of the chats here.
This is due to the outdated CLI. A newer version that will replace it is in development, but it is not ready yet.
The exact problem is that pagination metadata was added to these queries in Iroha, but the CLI was not updated to set it properly. The protobuf transport allows the CLI to send a query without fields that were added later, but Iroha refuses to handle such a query.
You can use one of client libraries that are always kept up to date: https://iroha.readthedocs.io/en/latest/develop/libraries.html.
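For example, with the Python SDK (the iroha package from PyPI), the account-transactions query can be issued with the pagination field set, which is exactly what the CLI fails to do. A minimal sketch, assuming a local peer on the default Torii port and the admin@test account from the genesis block above:

from iroha import Iroha, IrohaCrypto, IrohaGrpc

# Hypothetical connection details -- adjust to your deployment.
net = IrohaGrpc("127.0.0.1:50051")
iroha = Iroha("admin@test")
admin_private_key = "<admin@test private key hex>"

# GetAccountTransactions requires pagination metadata (page_size),
# which the outdated CLI never sets.
query = iroha.query(
    "GetAccountTransactions",
    account_id="admin@test",
    page_size=10,
)
IrohaCrypto.sign_query(query, admin_private_key)

response = net.send_query(query)
print(response)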
I want to insert a document using a REST call to the Firestore createDocument method. One of the fields is a timestamp that should be set on the server. With the Android SDK it's as simple as annotating a Date field with @ServerTimestamp and keeping it null. How do I do that in REST?
{
  "fields": {
    "timezoneId": {
      "stringValue": "Europe/London"
    },
    "city": {
      "stringValue": "London"
    },
    "timestamp": {
      "timestampValue": "???"
    }
  }
}
I tried null, 0, an empty string, and the word "timestamp"; everything fails with an error requiring the standard RFC3339 format (e.g. 2018-01-31T13:50:30.325631Z). Is there any placeholder value I can use, or some other way to obtain that timestamp?
The Android SDK doesn't execute a createDocument request when creating a document. Instead, it uses a write (documents:commit) request to issue an update and a transform at the same time. If you want to use only createDocument, then the answer is no: there is no placeholder value.
Your payload for the commit request would look something like this:
{
  "writes": [
    {
      "update": {
        "name": "projects/{projectId}/databases/{databaseId}/documents/{document_path}",
        "fields": {
          "timezoneId": {
            "stringValue": "Europe/London"
          },
          "city": {
            "stringValue": "London"
          }
        }
      },
      // ensure the document doesn't already exist
      "currentDocument": {
        "exists": false
      }
    },
    {
      "transform": {
        "document": "projects/{projectId}/databases/{databaseId}/documents/{document_path}",
        "fieldTransforms": [
          {
            "fieldPath": "timestamp",
            "setToServerValue": "REQUEST_TIME"
          }
        ]
      }
    }
  ]
}
The only downside to adding documents this way is that you need to generate the document ID yourself (the SDKs generate it for you). I hope this helps.
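As an illustration, here is a minimal Python sketch of that commit call against the Firestore REST API. The project ID, collection name, and bearer token are hypothetical placeholders, and the document ID is generated client-side as noted above:

import secrets
import requests

project_id = "my-project"            # hypothetical project ID
database_id = "(default)"
collection = "cities"                # hypothetical collection
document_id = secrets.token_hex(10)  # generate the document ID yourself, as the SDKs do
doc_path = (
    f"projects/{project_id}/databases/{database_id}"
    f"/documents/{collection}/{document_id}"
)

payload = {
    "writes": [
        {
            "update": {
                "name": doc_path,
                "fields": {
                    "timezoneId": {"stringValue": "Europe/London"},
                    "city": {"stringValue": "London"},
                },
            },
            # ensure the document doesn't already exist
            "currentDocument": {"exists": False},
        },
        {
            "transform": {
                "document": doc_path,
                "fieldTransforms": [
                    {"fieldPath": "timestamp", "setToServerValue": "REQUEST_TIME"}
                ],
            }
        },
    ]
}

url = (
    f"https://firestore.googleapis.com/v1/projects/{project_id}"
    f"/databases/{database_id}/documents:commit"
)
resp = requests.post(url, json=payload, headers={"Authorization": "Bearer <token>"})
resp.raise_for_status()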
I know the official reference is here.
I am having trouble understanding whether I can have a managed_schema structured like this:
schema = {};
schema.propertyA = {};
schema.propertyA.property1 = "1";
schema.propertyA.property2 = "2";
schema.propertyB = "B";
And if so, what would the uploaded policy configuration file look like?
You can nest properties in a managed_schema just fine with the "object" type. For your example, the schema would look like this:
{
  "type": "object",
  "properties": {
    "propertyA": {
      "type": "object",
      "properties": {
        "property1": {
          "type": "string"
        },
        "property2": {
          "type": "string"
        }
      }
    },
    "propertyB": {
      "type": "string"
    }
  }
}
And with this policy schema, the uploaded config file would have this format:
{
  "propertyA": {
    "Value": {
      "property1": "1",
      "property2": "2"
    }
  },
  "propertyB": {
    "Value": "B"
  }
}
I've found this page useful when configuring and testing Chrome apps with the managed storage API.