Test-AzureRmResourceGroupDeployment doesn't validate nested resource - powershell

I'm looking to incorporate Test-AzureRmResourceGroupDeployment into a build pipeline so that I know before deployment whether the template / parameters have any major problems.
However, I'm finding that if I use nested deployments it provides no validation of the nested deployment whatsoever: I can have a bad templateLink -> uri, even with incorrect variables in the URI, and it still validates as successful.
I have tried with a local template, a template URI, and with/without a parameters hashtable and a parameters file, just in case.
I assume that underneath, the AzureRM PowerShell module is using the Resource Manager API, which doesn't hint at what validate actually does with nested templates: https://learn.microsoft.com/en-us/rest/api/resources/deployments/validate
Have I missed anything? Any suggestions on how to validate the entire template? Do I need to parse the nested templates, somehow reconstruct the parameters from JSON, and do the sub-deployments by hand (which would be a shame)?
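For context, this is roughly how the cmdlet is being invoked in the pipeline; the resource group name, file paths and failure handling below are illustrative placeholders, not the exact script:

$result = Test-AzureRmResourceGroupDeployment `
    -ResourceGroupName "my-rg" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json" `
    -Verbose

# The cmdlet returns a collection of validation errors; an empty result means "valid"
if ($result) {
    $result | ForEach-Object { Write-Error $_.Message }
    exit 1
}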

Reading a forum post from a Microsoft employee on the Resource Manager team (a private forum, so unfortunately I cannot provide a link), it appears Test-AzureRmResourceGroupDeployment does "template expansion", which, as 4c74356b41 has also kindly pointed out, means the nested template validation should surely work...
Further experimentation has led to finding a limitation in the validation; see below for an example. If a variable is missing entirely in a nested deployment, it doesn't appear to be picked up as a validation warning in the parent template, and it also appears to interfere with the template expansion, causing the nested template to be ignored as well.
If "parameters": { "missing": "[variables('PURPOSEFULLY_MISSING')]" } is removed, then the template validates as normal, and so does the nested template.
Snippet of the overall template for just the nested resources:
"resources": [
{
"name": "[variables('deploymentName')]",
"type": "Microsoft.Resources/deployments",
"apiVersion": "2018-05-01",
"properties": {
"mode": "Incremental",
"templateLink": {
"uri": "[variables('deploymentUri')]",
"contentVersion": "1.0.0.0"
},
"parameters": { "missing" : "[variables('PURPOSEFULLY_MISSING')]" }
}
}
],

That is not true; it will validate the nested deployment even if you gate it with condition: false, so you are doing something wrong. We would need to look at the template and how you are calling the cmdlet to understand what's going on.
As to the validation: there is no real way to validate that the deployment works (Test-AzureRmResourceGroupDeployment is just garbage, extremely low value). The only way to validate it is to deploy it.
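If you do go down that route, a rough sketch of the "just deploy it" approach is to deploy into a throwaway resource group and tear it down afterwards; the names below are hypothetical, and note this does create real resources:

New-AzureRmResourceGroup -Name "validation-scratch-rg" -Location "westeurope" -Force
try {
    # A real deployment exercises the nested templates, unlike the validate call
    New-AzureRmResourceGroupDeployment `
        -ResourceGroupName "validation-scratch-rg" `
        -TemplateFile ".\azuredeploy.json" `
        -TemplateParameterFile ".\azuredeploy.parameters.json" `
        -Mode Incremental `
        -ErrorAction Stop
}
finally {
    Remove-AzureRmResourceGroup -Name "validation-scratch-rg" -Force
}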

Related

How to know the structure (body) of an Azure REST API POST request?

I am new to the Azure REST API and I don't know how to get the correct body template of a policy.
For example, I used:
GET https://dev.azure.com/organization/project/_apis/policy/types?api-version=7.0
and the response contains the types of policies I can use, but how do I know how to construct the request body? Like this one:
{
    "isEnabled": true,
    "isBlocking": false,
    "type": {
        "id": "fa4e907d-c16b-4a4c-9dfa-4906e5d171dd"
    },
    "settings": {
        "minimumApproverCount": 4,
        "creatorVoteCounts": false,
        "scope": [
            {
                "repositoryId": "a957e751-90e5-4857-949d-518cf5763394",
                "refName": "refs/heads/master",
                "matchKind": "exact"
            }
        ]
    }
}
Where should I find those request body templates? :(
Resources: https://learn.microsoft.com/en-us/rest/api/azure/devops/policy/configurations/create?view=azure-devops-rest-5.1&tabs=HTTP
Usually, when you can list or get the repo policy correctly, you can use the configuration part of the returned result as the request body when creating the policy with the POST method.
REST API to list the branch policies:
GET https://dev.azure.com/{organization}/{project}/_apis/policy/configurations?api-version=5.1
With optional parameters:
GET https://dev.azure.com/{organization}/{project}/_apis/policy/configurations?scope={scope}&policyType={policyType}&api-version=5.1
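As a hedged PowerShell sketch of that list-then-create flow (the organization, project and PAT values are placeholders, and the body simply reuses the shape returned by the list call):

$org     = "myorg"                    # placeholder organization
$project = "myproject"                # placeholder project
$pat     = "<personal-access-token>"
$headers = @{
    Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
}

# List the existing branch policy configurations and inspect one to learn the body shape
$list = Invoke-RestMethod -Method Get -Headers $headers `
    -Uri "https://dev.azure.com/$org/$project/_apis/policy/configurations?api-version=5.1"
$list.value[0] | ConvertTo-Json -Depth 10

# Re-use that shape (isEnabled, isBlocking, type.id, settings) as the body for a POST
$body = $list.value[0] | Select-Object isEnabled, isBlocking, type, settings | ConvertTo-Json -Depth 10
Invoke-RestMethod -Method Post -Headers $headers -ContentType "application/json" -Body $body `
    -Uri "https://dev.azure.com/$org/$project/_apis/policy/configurations?api-version=5.1"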
You can also check the templates below for different configurations, from the Policy template examples.
Examples
Approval count policy
Build policy
Example policy
Git case enforcement policy
Git maximum blob size policy
Merge strategy policy
Work item policy
If you still don't know how to compose the request body, you could also share your scenario.
I finally made it work. It was very hard, and I don't understand why Microsoft has such bad documentation... I had to figure it out by sending random requests and looking at the elements to see what the names are... so bad, so much time spent...

FHIR R4 - Track the user who created a resource

I'm using FHIR R4 with the HAPI FHIR API.
I want to know how to mark ServiceRequest resources with information about the user who created them.
I've read the FHIR documentation and found the relevantHistory field, where I can put a Provenance reference.
All good, but HAPI FHIR can't query that field, so I can't get all ServiceRequests created by me or another user.
I've also tried using a custom extension named tracking, where I've put the tracking user info.
I don't want to use the requester field because it is filled with a different meaning, per the guidelines supplied by the customer.
EDIT after Mirjam Baltus's answer
Hi, your point of view is interesting, but I've found another solution, as follows, and I'd like to discuss it with you (if you want).
I've added a SearchParameter resource attached to ServiceRequest to allow searching on the relevantHistory field.
This is the JSON resource:
{
    "resourceType": "SearchParameter",
    "id": "6589",
    "meta": {
        "versionId": "7",
        "lastUpdated": "2021-02-25T11:25:25.549+00:00",
        "source": "#1btUOFbG0D3dMdwI"
    },
    "title": "Storia",
    "status": "active",
    "code": "relevantHistory",
    "base": [
        "ServiceRequest"
    ],
    "type": "reference",
    "expression": "ServiceRequest.relevantHistory",
    "xpathUsage": "normal",
    "target": [
        "Provenance"
    ],
    "modifier": [
        "missing"
    ],
    "chain": [
        "reference"
    ]
}
So I've written a query on ServiceRequest filtered by the relevantHistory field (linked to Provenance).
I've adopted this strategy because I only need to know the creator of the ServiceRequest, so in this way I've factored the information into the Provenance resource: in the target field I've put the Practitioner / Organization who created the ServiceRequest, and in the agent component I've replicated this information with the ENTERER value from the AgentRole and AgentType enums.
In this way, I have one Provenance for multiple ServiceRequests, whereas if I follow your approach I'll have a dedicated Provenance for each ServiceRequest.
Do you think I've gone down the wrong path, or is this a possible solution?
The relevantHistory is not the right field to use, since that will only list older Provenance resources that hold relevant information. The description specifically says it does not hold the Provenance resource associated with the current version of the ServiceRequest (see http://hl7.org/fhir/servicerequest-definitions.html#ServiceRequest.relevantHistory).
I think Provenance can still help you. You would not search on a field in ServiceRequest, but find ServiceRequests that have a Provenance where you/user are the actor:
GET [base]/ServiceRequest?_has:Provenance:target:actor=[user_reference]
Or approach it the other way around, by looking for Provenance resources from the user, and including ServiceRequests that are the target of the Provenance.
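A hedged PowerShell sketch of both searches against a HAPI FHIR server; the base URL and user reference are placeholders, and the queries assume the standard R4 Provenance "agent" search parameter, so check which parameters your server actually supports:

$fhirBase = "http://localhost:8080/fhir"   # placeholder HAPI FHIR R4 base URL
$user     = "Practitioner/123"             # placeholder user reference

# Reverse chaining: ServiceRequests that are the target of a Provenance recorded for the user
$serviceRequests = Invoke-RestMethod -Method Get `
    -Uri "$fhirBase/ServiceRequest?_has:Provenance:target:agent=$user"

# Or start from Provenance and pull the targeted ServiceRequests in with _include
$provenance = Invoke-RestMethod -Method Get `
    -Uri "$fhirBase/Provenance?agent=$user&_include=Provenance:target"

$serviceRequests.entry.resource | Select-Object id, status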
Added after edit of original post:
As I mention in my comment, I think the way you are trying to use the relevantHistory field and one Provenance for multiple ServiceRequests is not according to how that field and resource type are supposed to be used.
If you are able to create a custom search parameter, why not use an extension on the ServiceRequest to indicate who created it, and then make that extension searchable?
If you want more discussion about this, please ask on https://chat.fhir.org, where more people from the FHIR community will be able to chime in.

Passing 'settable at queue time' build pipeline variables through REST api [duplicate]

I would like to start an Azure Pipelines build through the REST API. There is an API for queuing builds, but I couldn't find a way to define variables.
The accepted answer does not really answer the question when you need to set a value at queue time.
The solution is actually pretty simple: you just have to add a parameters field to the JSON payload. The content should be a JSON string (not directly an object) containing the parameters.
Example:
{
    "parameters": "{\"ReleaseNumber\": \"1.0.50\", \"AnotherParameter\": \"a value\"}",
    "definition": {
        "id": 2
    }
}
EDIT : This feature is now properly documented as an optional stringified dictionary. See https://www.visualstudio.com/fr-fr/docs/integrate/api/build/builds#queue-a-build
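As a rough illustration of that call from PowerShell (the organization, project, definition id and api-version are placeholders; note how parameters is itself a JSON string):

$org     = "myorg"; $project = "myproject"   # placeholders
$pat     = "<personal-access-token>"
$headers = @{
    Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
}

$body = @{
    definition = @{ id = 2 }
    # "parameters" must be a stringified dictionary, hence the nested ConvertTo-Json
    parameters = (@{ ReleaseNumber = "1.0.50"; AnotherParameter = "a value" } | ConvertTo-Json -Compress)
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post -Headers $headers -ContentType "application/json" -Body $body `
    -Uri "https://dev.azure.com/$org/$project/_apis/build/builds?api-version=5.1"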
Variables are included in definitions; you can update your build definition to set the variables via the build-definition API first and then queue the build, as sketched after the snippet below.
The following is the variables section returned by the build-definition API:
"variables": {
"system.debug": {
"value": "false",
"allowOverride": true
},
"BuildConfiguration": {
"value": "release",
"allowOverride": true
},
"BuildPlatform": {
"value": "any cpu",
"allowOverride": true
}
},
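A rough sketch of that get-then-update flow, reusing the hypothetical $org, $project and $headers from the previous sketch and a placeholder definition id of 2:

$defUri = "https://dev.azure.com/$org/$project/_apis/build/definitions/2?api-version=5.1"

# Fetch the full definition, change a variable value, and PUT the definition back
$definition = Invoke-RestMethod -Method Get -Headers $headers -Uri $defUri
$definition.variables.BuildConfiguration.value = "debug"
Invoke-RestMethod -Method Put -Headers $headers -ContentType "application/json" `
    -Body ($definition | ConvertTo-Json -Depth 100) -Uri $defUri

# Then queue the build as usual
Invoke-RestMethod -Method Post -Headers $headers -ContentType "application/json" `
    -Body (@{ definition = @{ id = 2 } } | ConvertTo-Json) `
    -Uri "https://dev.azure.com/$org/$project/_apis/build/builds?api-version=5.1"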
For anyone having problems with this (I did): there is a difference in the APIs used since the accepted answer (which for me didn't work at all). Following Cyprien Autexier's advice, I took a look under the hood (Firefox Dev Tools) and noticed the portal does not use the Builds API anymore. It uses the Pipelines one (https://learn.microsoft.com/en-us/rest/api/azure/devops/pipelines/runs/run-pipeline?view=azure-devops-rest-6.1). With this, it worked flawlessly.
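A hedged sketch of that Runs call, reusing the hypothetical $org, $project and $headers from the sketches above; the pipeline id, branch and variable names are placeholders, and the variables property is an assumption based on the run-pipeline request body:

$body = @{
    resources = @{ repositories = @{ self = @{ refName = "refs/heads/main" } } }
    variables = @{ ReleaseNumber = @{ value = "1.0.50" } }   # must be settable at queue time
} | ConvertTo-Json -Depth 10

Invoke-RestMethod -Method Post -Headers $headers -ContentType "application/json" -Body $body `
    -Uri "https://dev.azure.com/$org/$project/_apis/pipelines/5/runs?api-version=6.1-preview.1"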
For anyone looking at this, I was able to make it work with 'templateParameters', which allows you to send an object instead of a string in version 7.1.
Method: POST
URL: https://dev.azure.com/{organization}/{project}/_apis/build/builds?api-version=7.1-preview.7
Body: JSON example:
{
    "sourceBranch": "Development",
    "definition": {
        "id": 5
    },
    "templateParameters": {
        "PARAMETER1": "value1",
        "parameter2": "valuex"
    }
}
Docs: https://learn.microsoft.com/en-us/rest/api/azure/devops/build/builds/queue?view=azure-devops-rest-7.1
It seems this works with 5.1. All you need to do is define the variables you pass in as parameters within the pipeline variables and ensure the "Settable at queue time" checkbox is checked. If you have the same variable in any variable library, make sure you remove those references, as library variables appear to override those set via the API.
Note: I use Azure DevOps Server 2019.
API: https://learn.microsoft.com/en-us/rest/api/azure/devops/build/builds/queue?view=azure-devops-rest-5.1
Navigating to set the variables: edit the YAML pipeline --> click the 3 dots near the "Run" button --> Variables --> Variables tab.
Hope it helps someone

Published property in data factory dataset json

I have noticed that upon saving a dataset definition in Data Factory via the Azure portal,
"published": false
appears in the definition. I have seen datasets work fine with published: false, but I have also seen some seemingly only start working with published: true; however, that might have been a coincidence.
I've been unable to find any documentation for this property.
{
    "name": "DataLakeDummyXmlInput",
    "properties": {
        "published": false,
        "type": "AzureDataLakeStore",
This property is currently classified as "legacy" as described by a Microsoft employee here:
...this is a legacy element in our object model.
The link also mentions the possibility of "lighting it up as a future feature", which translates to: it may come into use in the future. For now, don't worry about it.

Breeze failing silently while parsing metadata

I'm trying to get going on Breeze but encountering the worst kind of error, which is none at all. It appears the metadata I am producing is not being accepted by Breeze. I know there are currently some issues with the metadata, such as 'foreignKeyNamesOnServer' having incorrect values in it, and a bunch of others. The metadata I am producing can be viewed here (too large to include inline):
http://pastebin.com/ycP4jXxn
var serviceName = 'http://www.dockyard.com:8080/rest';
var entityManager = new breeze.EntityManager({ serviceName: serviceName });
var entityQuery = new breeze.EntityQuery();
var query = breeze.EntityQuery.from("application");
entityManager.executeQuery(query)
    .then(function (data) {
        console.log(data);
    }, function (error) {
        console.log(error);
    });
So the behaviour I am seeing is: no JavaScript errors related to metadata parsing, and the metadata request returns 200 OK. The hit to /rest/application returns 200 OK with the following data.
[{"#id":1,"id":1,"name":"dsad","deploymentStrategies":null,"versions":null,"groups":null},{"#id":2,"id":2,"name":"sss","deploymentStrategies":null,"versions":null,"groups":null},{"#id":3,"id":3,"name":"fdsfs","deploymentStrategies":null,"versions":null,"groups":null},{"#id":4,"id":4,"name":"fdsa","deploymentStrategies":null,"versions":null,"groups":null},{"#id":5,"id":5,"name":"dasda","deploymentStrategies":null,"versions":null,"groups":null}]
The promise is calling the error callback with: cannot execute _executeQueryCore until metadataStore is populated.
The contents of the metadata store:
{"namingConvention":{"name":"camelCase"},"localQueryComparisonOptions":{"name":"caseInsensitiveSQL","isCaseSensitive":false,"usesSql92CompliantStringComparison":true},"dataServices":[{"serviceName":"http://www.dockyard.com:8080/rest/","hasServerMetadata":true,"jsonResultsAdapter":"webApi_default","useJsonp":false}],"_resourceEntityTypeMap":{"platform":"Platform:#com.psidox.dockyard.controller.model.dockyard","application":"Application:#com.psidox.dockyard.controller.model.application","host":"Host:#com.psidox.dockyard.controller.model.host","groupdeploymentstrategy":"GroupDeploymentStrategy:#com.psidox.dockyard.controller.model.application","dockyard":"Dockyard:#com.psidox.dockyard.controller.model.dockyard","configurationentry":"ConfigurationEntry:#com.psidox.dockyard.controller.model","hoststrategy":"HostStrategy:#com.psidox.dockyard.controller.model.application","dockerimage":"DockerImage:#com.psidox.dockyard.controller.model.docker","version":"Version:#com.psidox.dockyard.controller.model.application","docker":"Docker:#com.psidox.dockyard.controller.model.docker","hostproviderconfig":"HostProviderConfig:#com.psidox.dockyard.controller.model.host","hostprovider":"HostProvider:#com.psidox.dockyard.controller.model.host","metadataimpl":"MetadataImpl:#com.psidox.dockyard.controller.model","deployment":"Deployment:#com.psidox.dockyard.controller.model.dockyard","hosttype":"HostType:#com.psidox.dockyard.controller.model.host","group":"Group:#com.psidox.dockyard.controller.model.application","groupimplementation":"GroupImplementation:#com.psidox.dockyard.controller.model.application","deploymentstrategy":"DeploymentStrategy:#com.psidox.dockyard.controller.model.dockyard","groupdeployment":"GroupDeployment:#com.psidox.dockyard.controller.model.application","metadata":"Metadata:#com.psidox.dockyard.controller.model"},"_structuralTypeMap":{},"_shortNameMap":{},"_ctorRegistry":{},"_incompleteTypeMap":{},"_incompleteComplexTypeMap":{},"_id":0,"_deferredTypes":{}}"
I am pretty sure this error is related to the MetadataStore not being populated correctly from my metadata. I'm just wondering why Breeze is not throwing any kind of error when it encounters invalid metadata.
Edit:
After debugging the metadata parsing call, it appears that the Breeze Metadata Schema documentation is out of date. At a quick glance, this is what appears to have changed:
Key name "structuralTypeMap" has changed to "structuralTypes".
"structuralTypeMap" use to be a object with the key as the EntityTypeName and value was the Entity definition. Now it appears that "structuralTypes" is an array with the Entity definitions.
Suggestions also there should possibly be an exception thrown if the metadata doesn't contain any structuralTypes? Currently it is failing silently which isn't very helpful for debugging.
I fear you've jumped into the deep end of the pool before learning to swim. I admire your bravery, but I'm not surprised that you're struggling to stay afloat. You're not following any of the easy paths we've set out for you. I assume that is because none of those paths suits your situation.
On the bright side, you've reinforced my sense that we soon must make it easier for developers who get their data from a custom REST service.
Problem #1
The query results do not identify the EntityType, and you didn't mention that you wrote a custom JsonResultsAdapter to cope with that. Your question and your MetadataStore contents below suggest that you are using the out-of-the-box Web API adapter, which wouldn't know what to do with the JSON query results.
Here is one item in the JSON payload from your query, reformatted for readability
{
    "#id": 1,
    "id": 1,
    "name": "dsad",
    "deploymentStrategies": null,
    "versions": null,
    "groups": null
}
There's nothing in there to indicate which EntityType this data belongs to. Just looking at it, I have no idea what type this is. Breeze won't know either.
You'll need to learn about the "JsonResultsAdapter" which is how Breeze interprets JSON data arriving from the server and maps it into instances of EntityTypes.
The Ruby Sample has a custom JsonResultsAdapter. It depends upon the fact that the server is explicit about the type of each object it returns; see how every Rails view adds a $type node (for example, the sessions:index view). This is the approach to take if you can control what the server sends the client.
The Edmunds Sample has a custom JsonResultsAdapter that has to infer the type by examining the characteristics of the JSON data. It's kind of a forensic exercise that you only want to indulge if you have to do so.
Problem #2
The MetadataStore you serialized is empty of all type information. Here it is reformatted for legibility
{
    "namingConvention": {
        "name": "camelCase"
    },
    "localQueryComparisonOptions": {
        "name": "caseInsensitiveSQL",
        "isCaseSensitive": false,
        "usesSql92CompliantStringComparison": true
    },
    "dataServices": [
        {
            "serviceName": "http:\/\/www.dockyard.com:8080\/rest\/",
            "hasServerMetadata": true,
            "jsonResultsAdapter": "webApi_default",
            "useJsonp": false
        }
    ],
    "_resourceEntityTypeMap": {
        "platform": "Platform:#com.psidox.dockyard.controller.model.dockyard",
        "application": "Application:#com.psidox.dockyard.controller.model.application",
        ... a bunch more ...
    },
    "_structuralTypeMap": {
    },
    "_shortNameMap": {
    },
    ... more emptiness ...
}
I'm not really surprised, having discovered problem #3.
Problem #3
Your raw metadata doesn't match a format that Breeze understands. It looks like you cobbled it together by hand. It sure doesn't look like anything I recognize. It doesn't match the CSDL format from Entity Framework. It doesn't match the "Breeze Metadata Format" that you'd see when you exported a MetadataStore.
It's in trouble almost immediately. Here is how you start the definition of your first type:
"structuralTypeMap": {
"Group:#com.psidox.dockyard.controller.model.application": {
"shortName": "Group",
"namespace": "com.psidox.dockyard.controller.model.application",
Here is how it should begin:
"structuralTypes": [
{
"shortName": "Group",
"namespace": "com.psidox.dockyard.controller.model.application",
I accept your point that the Breeze Metadata Schema Documentation is incorrect. We should fix that.
I'm sympathetic with your argument that Breeze should have thrown an exception. I can see why it didn't throw. It simply ignored all the nodes that it didn't understand. A lot of parsers do that, not that that is a sufficient excuse.
In this case, it ignored the "structuralTypeMap" node and everything it had to say about types. When the parser was done, it had learned nothing at all about the types. Breeze can't know how many types you'll specify but it could act suspicious if there are none.
I confess I personally never thought to use this metadata schema description as my guide. That would be just about the hardest possible way to write metadata.
I suggest that you look at the documentation topic "Metadata by hand".
In sum
Please examine a simple example first. Maybe Edmunds. Maybe Ruby.
Learn to write metadata by hand; it's not hard.
Learn about the JsonResultsAdapter
We do hope soon to offer specific guidance for the developer who has a "vanilla" REST data service.