I am trying to define dataSourceBindings in the JSON of a task group I intend to reuse in release definitions. My task group basically deploys a web app with some custom actions. For the parameters I expose on the group, I want to populate the picklists with proper values from the AzureRM endpoint, such as the available subscriptions, resource groups, web apps, and slots.
To achieve this, I created a first template from the VSTS UI, which I then exported to edit in JSON. Based on multiple posts and examples available in the VSTS tasks repo, I defined my dataSourceBindings as:
"dataSourceBindings": [
{
"target": "ResourceGroupName",
"endpointId": "$(ConnectedServiceName)",
"dataSourceName": "AzureResourceGroups"
},
{
"target": "WebAppName",
"endpointId": "$(ConnectedServiceName)",
"dataSourceName": "AzureRMWebAppNamesByType",
"parameters": {
"WebAppKind": "app"
}
},
{
"target": "SlotName",
"endpointId": "$(ConnectedServiceName)",
"dataSourceName": "AzureRMWebAppSlotsId",
"parameters": {
"WebAppName": "$(WebAppName)",
"ResourceGroupName": "$(ResourceGroupName)"
},
"resultTemplate": "{\"Value\":\"{{{ #extractResource slots}}}\",\"DisplayValue\":\"{{{ #extractResource slots}}}\"}"
}
]
I was able to import my task group in VSTS. But once added to a release definition the picklists were still empty.
Then I tried to export the previously imported group and noticed the dataSourceBindings field was empty.
Is importing a task group with dataSourceBindings supported?
If yes, what could possibly go wrong with mine?
We need a way to access the data contained within the Retrospectives Azure DevOps extension in an automated way (either a REST API or some SDK). Currently there is an option to export CSV, but the process is manual and limited to each retrospective. Any ideas/thoughts?
You can try the following steps:
Run the API to get the information of project teams in a project.
Request URL
POST https://dev.azure.com/{organization_Name}/_apis/Contribution/HierarchyQuery?api-version=5.0-preview.1
Request Body
{
  "contributionIds": ["ms.vss-admin-web.org-admin-groups-data-provider"],
  "dataProviderContext": {
    "properties": {
      "teamsFlag": true,
      "sourcePage": {
        "url": "https://dev.azure.com/{organization_Name}/{project_Name}/_settings/teams",
        "routeId": "ms.vss-admin-web.project-admin-hub-route",
        "routeValues": {
          "project": "{project_Name}",
          "adminPivot": "teams",
          "controller": "ContributedPage",
          "action": "Execute",
          "serviceHost": "{organization_Id} ({organization_Name})"
        }
      }
    }
  }
}
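For example, here is a minimal PowerShell sketch of this call, assuming a personal access token (PAT); the organization name, project name, organization id, and PAT are placeholders you must supply:

$org     = "{organization_Name}"
$project = "{project_Name}"
$orgId   = "{organization_Id}"
$pat     = "{personal_access_token}"

# PAT authentication: Basic auth with an empty user name.
$headers = @{
  Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
}

# Same request body as above, built as a hashtable.
$body = @{
  contributionIds     = @("ms.vss-admin-web.org-admin-groups-data-provider")
  dataProviderContext = @{
    properties = @{
      teamsFlag  = $true
      sourcePage = @{
        url         = "https://dev.azure.com/$org/$project/_settings/teams"
        routeId     = "ms.vss-admin-web.project-admin-hub-route"
        routeValues = @{
          project     = $project
          adminPivot  = "teams"
          controller  = "ContributedPage"
          action      = "Execute"
          serviceHost = "$orgId ($org)"
        }
      }
    }
  }
} | ConvertTo-Json -Depth 10

$teams = Invoke-RestMethod -Uri "https://dev.azure.com/$org/_apis/Contribution/HierarchyQuery?api-version=5.0-preview.1" -Method Post -Headers $headers -Body $body -ContentType "application/json"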
Run the API to list the retrospectives for a specified project team in the project.
GET https://extmgmt.dev.azure.com/{organization_Name}/_apis/ExtensionManagement/InstalledExtensions/ms-devlabs/team-retrospectives/Data/Scopes/Default/Current/Collections/{projectTeam_identityId}/Documents?api-version=3.1-preview.1
Run the API to get more details about a specified retrospective.
GET https://extmgmt.dev.azure.com/{organization_Name}/_apis/ExtensionManagement/InstalledExtensions/ms-devlabs/team-retrospectives/Data/Scopes/Default/Current/Collections/{retrospective_Id}?api-version=3.1-preview.1
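Continuing the sketch above (reusing $org and $headers; the team identity id and retrospective id are placeholders taken from the previous responses):

$base = "https://extmgmt.dev.azure.com/$org/_apis/ExtensionManagement/InstalledExtensions/ms-devlabs/team-retrospectives/Data/Scopes/Default/Current/Collections"

# Step 2: list the retrospectives for a project team.
$teamId = "{projectTeam_identityId}"
$retros = Invoke-RestMethod -Uri "$base/$teamId/Documents?api-version=3.1-preview.1" -Headers $headers

# Step 3: get more details about a specific retrospective.
$retroId = "{retrospective_Id}"
$detail = Invoke-RestMethod -Uri "$base/$retroId?api-version=3.1-preview.1" -Headers $headers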
However, there is no available interface (API or CLI) to export the CSV content.
I am trying to deploy a data factory using an ARM template. It is easy to use the exported template to create a deployment pipeline.
However, as the data factory needs to access an on-premises database server, I need a self-hosted integration runtime. The problem is: how can I include the runtime in the ARM template?
The template looks like this, and we can see that it tries to reference the runtime:
{
  "name": "[concat(parameters('factoryName'), '/OnPremisesSqlServer')]",
  "type": "Microsoft.DataFactory/factories/linkedServices",
  "apiVersion": "2018-06-01",
  "properties": {
    "annotations": [],
    "type": "SqlServer",
    "typeProperties": {
      "connectionString": "[parameters('OnPremisesSqlServer_connectionString')]"
    },
    "connectVia": {
      "referenceName": "OnPremisesSqlServer",
      "type": "IntegrationRuntimeReference"
    }
  },
  "dependsOn": [
    "[concat(variables('factoryId'), '/integrationRuntimes/OnPremisesSqlServer')]"
  ]
},
{
  "name": "[concat(parameters('factoryName'), '/OnPremisesSqlServer')]",
  "type": "Microsoft.DataFactory/factories/integrationRuntimes",
  "apiVersion": "2018-06-01",
  "properties": {
    "type": "SelfHosted",
    "typeProperties": {}
  },
  "dependsOn": []
}
Running this template gives me this error:
\"connectVia\": {\r\n \"referenceName\": \"OnPremisesSqlServer\",\r\n \"type\": \"IntegrationRuntimeReference\"\r\n }\r\n }\r\n} and error is: Failed to encrypted linked service credentials on self-hosted IR 'OnPremisesSqlServer', reason is: NotFound, error message is: No online instance..
The problem is that I need to type a key into the integration runtime's UI so it can be registered in Azure, but I can only get that key from my data factory instance's UI. So the ARM template deployment above will always fail at least once. I am wondering if there is a way to create the runtime independently?
It seems that you already know how to create a self-hosted IR in the ADF ARM template:
{
  "name": "[concat(parameters('dataFactoryName'), '/integrationRuntime1')]",
  "type": "Microsoft.DataFactory/factories/integrationRuntimes",
  "apiVersion": "2018-06-01",
  "properties": {
    "additionalProperties": {},
    "description": "jaygongIR1",
    "type": "SelfHosted"
  }
}
Your only concern is that the Windows IR tool needs to be configured with an authentication key before it can access the ADF self-hosted IR node, so the node will show an Unavailable status right after it is created. This flow makes sense, I think: the authentication key has to be created first, and only then can you use it to configure the on-premises tool. You can't implement everything in one step, because these actions happen on both the Azure side and the on-premises side.
Based on the self-hosted IR tool documentation, the Register step can't be implemented with PowerShell code. So the steps that can be automated in the flow are creating the IR and getting the auth key, not registering it in the tool.
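Getting the auth key, for instance, can be scripted once the IR resource exists. A minimal sketch using the Az.DataFactory PowerShell module; the resource group and factory names are placeholders:

# Fetch the authentication keys for the self-hosted IR created by the ARM deployment.
$keys = Get-AzDataFactoryV2IntegrationRuntimeKey `
  -ResourceGroupName "{resourceGroup_Name}" `
  -DataFactoryName "{factory_Name}" `
  -Name "OnPremisesSqlServer"

# Pass $keys.AuthKey1 to the on-premises IR tool when registering the node.
$keys.AuthKey1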
I created an Azure DevOps webhook for the build.complete event, and I'd like to get information about the artifacts that were published from the build, but the resource.drop information is always null:
"resource": {
"uri": "vstfs:///Build/Build/10301",
"id": 10301,
"buildNumber": "cxcxzccxczxcxzcx_master_2018-12-17.3",
"url": "https://xxxxxxxxxxxxxxxxxx/_apis/build/Builds/10301",
"startTime": "2018-12-17T15:19:29.3762035Z",
"finishTime": "2018-12-17T15:22:01.4107196Z",
"reason": "batchedCI",
"status": "succeeded",
"drop": {},
"log": {},
I'm using the Publish Build Artifacts v1 task to publish the artifact. The artifact publish location is Azure Pipelines/TFS. Is there something I should add to the build definition to get the information about the artifacts in the webhook payload?
Thanks!
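In the meantime, the artifact information can be looked up with a follow-up call to the Build Artifacts REST API, using the build id from the payload. An untested sketch; the organization, project, and PAT are placeholders:

# Look up the published artifacts for the build from the webhook payload (resource.id).
$org = "{organization}"; $project = "{project}"; $pat = "{personal_access_token}"
$buildId = 10301
$headers = @{
  Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
}
$artifacts = Invoke-RestMethod -Uri "https://dev.azure.com/$org/$project/_apis/build/builds/$buildId/artifacts?api-version=5.0" -Headers $headers
$artifacts.value | ForEach-Object { $_.name }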
I am using the Microsoft Graph Explorer to add application roles to a SAML SSO enabled application in Azure AD. I copied out the existing appRoles stanza using a GET and edited it to include two new roles.
https://graph.microsoft.com/beta/servicePrincipals/<objectID>
<objectID> in this case is the objectID of my application.
However, when I run a PATCH call to update the servicePrincipal data, it throws a very generic error (One or more properties contains invalid values).
I have validated the JSON and am unable to determine what is causing the error.
My JSON is as follows:
{
  "appRoles": [
    {
      "allowedMemberTypes": ["User"],
      "description": "msiam_access",
      "displayName": "msiam_access",
      "id": "b9632174-c057-4f7e-951b-be3adc52bfe6",
      "isEnabled": true,
      "origin": "Application",
      "value": null
    },
    {
      "allowedMemberTypes": ["User"],
      "description": "Administrator",
      "displayName": "Administrator",
      "id": "b45591dd-c1f4-404e-9554-18fea972c3e4",
      "isEnabled": true,
      "origin": "ServicePrincipal",
      "value": "SAML_Admin"
    },
    {
      "allowedMemberTypes": ["User"],
      "description": "ReadOnlyUsers",
      "displayName": "ReadOnlyUsers",
      "id": "e3c19ea4-e86a-4897-9bb5-3d2d115fed80",
      "isEnabled": true,
      "origin": "ServicePrincipal",
      "value": "SAML_RO"
    }
  ]
}
I also used a GUID generator to generate the GUIDs. If and when they are not unique I get an error to that effect. So, I am ruling that out for now.
You need to update the application, not the service principal.
Custom permissions are defined on the Application object, and are only reflected in the Service principal.
So you'll need to do a PATCH on:
https://graph.microsoft.com/beta/applications/<objectID>
Where objectID is the object id for the Application object (note this is different from the service principal's object id).
You may then have to re-create the service principal.
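For reference, the same PATCH can be sent outside Graph Explorer. A minimal PowerShell sketch, assuming you already have an access token with permission to update applications; the object id, token, and payload file are placeholders:

# PATCH the appRoles on the Application object (not the service principal).
$appObjectId = "{application_objectID}"
$token = "{access_token}"
$json = Get-Content .\appRoles.json -Raw   # the appRoles payload from the question

Invoke-RestMethod -Uri "https://graph.microsoft.com/beta/applications/$appObjectId" `
  -Method Patch `
  -Headers @{ Authorization = "Bearer $token" } `
  -Body $json -ContentType "application/json"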
Thanks to @juunas for helpful feedback.
The only solution that worked for me was to edit the Enterprise Application manifest directly with the new roles. I used a GUID creator web application to create the GUIDs and everything is working as expected.
Update: I found that executing a script on the Octopus Server is now available in version 3.3. I haven't updated my Octopus yet, but I will take it that it works as designed. I'm still wondering if there is a better way to do this without octo.exe.
The task I'm trying to accomplish is: after each successful production deployment, automatically schedule a DR deployment to happen in the next 24 hours.
My desired approach is to have Octopus do it.
I added a new Octopus step at the end of the deployment that only runs upon success of the previous step. In the newly created step, I attempted to use octo deploy-release --deployAt (documentation can be found here), roughly like the sketch below.
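A sketch of the command shape; the server URL, API key, and names are placeholders:

octo.exe deploy-release `
  --server "https://{octopus_server}" `
  --apiKey "API-XXXXXXXX" `
  --project "{project_Name}" `
  --deployto "DR" `
  --version "latest" `
  --deployAt "2016-04-21T02:00:00+00:00"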
My challenge is that a script step requires me to pick a target role, which means it will be executed on a Tentacle. Also, the presence of octo.exe is required.
I tried to create my own Octopus step template, but a deployment target role is still required in my customized step:
{
  "Id": "ActionTemplates-2",
  "Name": "Octopus - Schedule Deployment",
  "Description": "Schedule a future octopus deployment",
  "ActionType": "Octopus.Script",
  "Version": 3,
  "Properties": {
    "Octopus.Action.Script.Syntax": "PowerShell",
    "Octopus.Action.Script.ScriptBody": "--hide--"
  },
  "SensitiveProperties": {},
  "Parameters": [
    {
      "Name": "OctoPath",
      "Label": "Path for Octo.exe",
      "HelpText": "Location for octo.exe",
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    },
    {
      "Name": "projName",
      "Label": "Project Name",
      "HelpText": "The name of the project that should be deployed",
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    },
    {
      "Name": "days",
      "Label": "Days",
      "HelpText": "The number of days in the future this deployment should happen",
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    },
    {
      "Name": "hours",
      "Label": "Hours",
      "HelpText": "The number of hours in the future this deployment should happen",
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    },
    {
      "Name": "env",
      "Label": "Environment to deploy",
      "HelpText": "The environment in which the next deployment should happen",
      "DefaultValue": null,
      "DisplaySettings": {
        "Octopus.ControlType": "SingleLineText"
      }
    }
  ],
  "$Meta": {
    "ExportedAt": "2016-04-20T13:58:54.263Z",
    "OctopusVersion": "3.2.0",
    "Type": "ActionTemplate"
  }
}
Is there a way to alter the template to get rid of the role selection and have the Octopus Server execute it directly, as it does for an Azure script step?
Is there any other way to have the Octopus Server automatically schedule the deployment without external help? I guess this goes back to the first problem: I may still need Octopus to run something on the server side.
Note: we kick off production deployments manually, thus I don't have another tool waiting for the response of the deployment. I think it is possible to have a process regularly poll the last deployment, do some analysis, and then schedule a new deployment accordingly, but this is not as clean as having Octopus do it directly. Injecting octo.exe onto a random production machine is not desired at all.
You could create a new WebAPI project in C#, pull in the Octopus.Client NuGet package, and write code that accepts HTTP requests and deals with the scheduling logic.
Host that project on the same server as the Octopus Server itself. It should be a 20-30 minute job to set the website up in IIS.
In your deployment process, add a step that creates the HTTP request, and done. You could go even one step further and have the site/service listen for every successful deployment and make decisions based on that, so that other projects don't have to add extra steps to their Octopus deployment process.
As you said, polling is also a viable option.
Alternatively, if you're on Octopus Deploy 3.0 or later, it already exposes a REST API. I am not sure if it's powerful enough to let you create a scheduled deployment, but you could explore that: https://github.com/OctopusDeploy/OctopusDeploy-Api/wiki/Releases
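If the REST API route works, a rough sketch of what queueing a future deployment could look like (untested; the server URL, API key, and IDs are placeholders, and QueueTime is the property that schedules the deployment):

# Queue a deployment 24 hours from now via the Octopus REST API.
$octopus = "https://{octopus_server}"
$headers = @{ "X-Octopus-ApiKey" = "API-XXXXXXXX" }

$body = @{
  ReleaseId     = "Releases-123"
  EnvironmentId = "Environments-45"
  QueueTime     = (Get-Date).AddHours(24).ToString("o")
} | ConvertTo-Json

Invoke-RestMethod -Uri "$octopus/api/deployments" -Method Post -Headers $headers -Body $body -ContentType "application/json"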
I agree that floating octo.exe around on production servers is a bad idea. It might get out of sync, and your production servers shouldn't have to deal with this.