Terraform Azure DevOps Provider - azure-devops

We are trying to automate Azure DevOps functions using Terraform. We can already create projects and repos with Terraform, but we need to create multiple projects, each with its own set of repos.
I have my terraform.tfvars file as given below
Proj1_Repos = ["Repo1","Repo2","Repo3"]
Proj2_Repos = ["Repo4","Repo5","Repo7"]
Project_Name = ["Proj1","Proj2"]
How can I write my Terraform configuration file so that it creates Proj1_Repos in Proj1 and Proj2_Repos in Proj2?

I think you will have an easier time restructuring the variables to look something like:
"Projects" = {
"Proj1" = {
"repos" = ["Repo1","Repo2","Repo3"]
},
"Proj2" = {
"repos" = ["Repo4","Repo5","Repo6"]
}
}
This way you can more cleanly iterate over your declarations using the for_each meta-argument on your DevOps repo resources, as sketched below.
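For illustration, here is a minimal sketch of what that iteration could look like with the azuredevops provider's azuredevops_project and azuredevops_git_repository resources (the resource labels and key format are my assumptions, not from the question):

variable "Projects" {
  type = map(object({
    repos = list(string)
  }))
}

resource "azuredevops_project" "this" {
  for_each = var.Projects
  name     = each.key
}

resource "azuredevops_git_repository" "this" {
  # Flatten the project/repo pairs into a single map keyed "Project/Repo"
  # so every repo gets a stable, unique for_each key.
  for_each = merge([
    for project, cfg in var.Projects : {
      for repo in cfg.repos : "${project}/${repo}" => {
        project = project
        repo    = repo
      }
    }
  ]...)

  project_id = azuredevops_project.this[each.value.project].id
  name       = each.value.repo

  initialization {
    init_type = "Clean"
  }
}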
Alternatively, if restructuring the input variables isn't an option, you can use a locals block to construct an association map from your existing variables. Something like this:
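locals {
  # Build the same shape of association map from the original flat
  # variables (names taken from the question's terraform.tfvars).
  Projects = {
    Proj1 = { repos = var.Proj1_Repos }
    Proj2 = { repos = var.Proj2_Repos }
  }
}

The resulting local.Projects can then drive the same for_each pattern shown above, with var.Projects replaced by local.Projects.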
If you are looking for a way for one variable's value to reference another variable by name, you will not be able to do so without constructing a custom data object from the keys and values of your variables. That route can get pretty wonky and is not recommended.

Related

How can I pass a PowerShell variable to Terraform

I need to pass a PowerShell/DevOps variable to Terraform. Is there a way of doing this? In the example below, I want the PR number to be used as a variable in Terraform.
testvalue-$(System.PullRequest.PullRequestNumber)
As far as I know, there is no way to define a variable directly from the output of a shell command, but you can take a look at the external data source: the idea is that you define a bash script (or any program) and use its output as parameters for other resources.
Example
data "external" "PullRequest" {
program = [
"${path.module}/scriptWhichReturnsPullRequestName.sh",
]
result {
}
}
...
resource ... {
value = data.external.PullRequest.property
}
I put my variables in a variables.tf file and trigger the Terraform run from a PowerShell script. Prior to that execution I just replace certain strings in variables.tf.
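A lighter-weight variant of the same idea (not from the original answer, but using only standard Terraform CLI behavior) is to declare an input variable and pass the pipeline value on the command line instead of rewriting variables.tf:

variable "pr_number" {
  type = string
}

# Invoked from the pipeline step, e.g.:
#   terraform apply -var="pr_number=testvalue-$(System.PullRequest.PullRequestNumber)"
# Azure DevOps expands $(System.PullRequest.PullRequestNumber) before the command runs.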

Best practice for using variables to configure and create a new GitHub repository instance in Terraform instead of updating-in-place

I am trying to set up a standard GitHub repository template for my organization that uses Terraform to spin up new repos with the configured settings.
Every time I try to update the configuration file to create a new instance of the repository with a new name, it instead tries to update-in-place the repo that was already created from that file.
My question is what is the best practice for making my configuration file reusable with input variables like repo name? Should I make a module or is there some way of reusing that file otherwise?
Thanks for the help.
Terraform is a desired-state-configuration system, which means that your configuration should represent the full set of objects that should exist rather than an instruction to create a single object.
Therefore the typical way to add a new repository is to add a new resource block declaring that new repository, and leave the existing ones unchanged. Terraform will then see that there's a new resource not currently tracked in the state and will propose to create it.
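For example (the repository name here is purely illustrative), adding a third repository alongside two existing ones is just one more block:

resource "github_repository" "example_3" {
  name        = "example-3"
  description = "Third example repository"
  private     = true
}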
If your repositories are configured in some systematic way that you can describe using a mechanical rule rather than manual configuration then you can potentially use the for_each meta-argument to declare multiple resource instances from the same resource block, using Terraform language expressions to describe the systematic rule.
For example, you could create a local value with a higher-level data structure that describes what should be different between your repositories and then use that data structure with for_each on a single resource block:
locals {
  repositories = tomap({
    example_1 = {
      description = "First example repository"
    }
    example_2 = {
      description = "Second example repository"
    }
  })
}

resource "github_repository" "all" {
  for_each = local.repositories

  name        = each.key
  description = each.value.description
  private     = true
}
For simplicity in this example I've only made the name and description variable between the instances, but you can add whatever extra attributes you need for each of the elements of local.repositories and then access them via each.value inside the resource block.
The private argument above illustrates how this approach can avoid the need to re-state argument values that will be the same for each declared repository, and have your local.repositories data structure focus only on the minimum attributes needed to describe the variations you need for your local policies around GitHub repositories.
A resource block with for_each set appears as a map of objects when used in expressions elsewhere, using the same keys as in the map given in for_each. Therefore if you need to access the repository ids, or any other attribute of the systematically-declared objects, you can write Terraform expressions that work with maps. For example, if you want to output all of the repository ids as a map of strings:
output "repository_ids" {
value = tomap({
for k, r in github_repository.all : k => r.repo_id
})
}
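Individual instances can likewise be addressed by key in other expressions, for example github_repository.all["example_1"].repo_id.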

Passing Generated Value to Downstream Job

I'm struggling to figure out a way to populate a parameter for a downstream freestyle project based on a value generated during my pipeline run.
A simplified example would probably best serve to illustrate the issue:
//other stuff...
stage('Environment Creation') {
  steps {
    dir(path: "${MY_DIR}") {
      powershell '''
        # Note: @ for splatting, not #, which would comment out the arguments
        $generatedProps = New-Instance @instancePropsSplat
        $generatedProps | Export-Clixml -Depth 99 -Path .\\props.xml
      '''
      stash includes: 'props.xml', name: 'props'
    }
  }
}
//...later
stage('Cleanup') {
  steps {
    unstash 'props'
    // either pass props.xml
    build job: 'EnvironmentCleanup', wait: false, parameters: [file: ???]
    // or I could read the XML and then pass it as a string
    powershell '''
      $props = Import-Clixml -Path props.xml
      # how to get this out of this powershell script???
    '''
  }
}
I create some resources in one stage, and then in a subsequent stage I want to kick off a job using those resources as a parameter. I can modify the downstream job however I want, but I am struggling to figure out how to do this.
Things I've tried:
File parameter (just unstashing and passing it through): these apparently do not work with pipelines
Potential paths:
EnvInject: may not be safe to use, and apparently doesn't work with pipelines either
Defining a "global" variable (as suggested in another answer), but I'm not sure how the powershell step changes that
So, what's the best way of accomplishing this? I have a value that is generated in one stage of a pipeline, and I then need to pass that value (or a file containing that value) as a parameter to a downstream job.
So here's the approach I've found that works for me.
In my stage that depends on the file created in a previous stage, I am doing the following:
stage ("Environment Cleanup') {
unstash props
archiveArtifacts "props.xml"
build job: 'EnvironmentCleanup', parameters: [string(name: "PROJECT", value: "${JOB_NAME}")], wait: false
}
Then in my dependent freestyle job, I use Copy Artifact to copy props.xml from the triggering build (located via the job name passed in through the PROJECT parameter), and then execute my PowerShell to deserialize the XML into an object and read the properties I need.
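For reference, the downstream deserialization could look something like this (the property name is a placeholder, not from the original answer):

$props = Import-Clixml -Path .\props.xml
# Read whatever was serialized upstream; SomeProperty is a hypothetical name.
Write-Host $props.SomeProperty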
The last, and most confusing, piece I was missing: in the options of my triggering pipeline project, I needed to grant copy permission to the downstream job:
options {
  copyArtifactPermission('EnvironmentCleanup') // or just '*' if you want
}
This now works like a charm and I will be able to use it across my pipelines that follow this same workflow.

Custom CloudFormation Resources in Terraform

I'm trying out Terraform, and am in the process of translating one of my more interesting CloudFormation stacks to TF. Included as a key part of the stack is the following declaration that specifies a custom resource for the template - a Lambda that queries a list of AMIs and selects the latest one for the context, based on the description as a filter.
LatestAMI:
  Type: Custom::LatestAMI
  Properties:
    ServiceToken: arn:aws:lambda:us-east-1:XXXXXXX:function:GetLatestAMI
    Description: ubuntu-16.04
I've looked around the Terraform docs, but I can't seem to find out how I can specify this resource. Is there a Terraform analog for custom resources in CloudFormation?
The CloudFormation code you posted calls a Lambda function to get the latest AMI ID (filtering on Description: ubuntu-16.04). There is a simpler way to do this in Terraform: you need the aws_ami data source.
https://www.terraform.io/docs/providers/aws/d/ami.html
Use this data source to get the ID of a registered AMI for use in other resources.
data "aws_ami" "latest_ami" {
most_recent = true
executable_users = ["all"]
filter {
name = "owner-alias"
values = ["amazon"]
}
filter {
name = "name"
values = ["*ubuntu-16.04*"]
}
}
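You can then reference the looked-up ID wherever the CloudFormation template consumed the custom resource's return value, for example (the instance details here are illustrative only):

resource "aws_instance" "example" {
  ami           = data.aws_ami.latest_ami.id
  instance_type = "t2.micro"
}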

How do tf.exe and tfpt.exe establish connections to a TFS instance based on execution directory?

I'm doing some work at the command line using the TFS object model, and I want to reproduce the workspace-detection behavior seen in tf.exe and tfpt.exe without introducing artifacts from my own particular implementation. Currently, my scripts require more information than tf.exe needs--a significant number of my parameters exist simply to instantiate the connection.
Specifically, I have to require users to explicitly pass in the server Uri (tfsUriString) and the collection name (tfsCollectionName), but this seems needless and annoying since tf.exe is able to do it.
Uri tfsUri = new Uri(tfsUriString);
TfsConfigurationServer configurationServer =
    TfsConfigurationServerFactory.GetConfigurationServer(tfsUri);

ReadOnlyCollection<CatalogNode> collectionNodes = configurationServer.CatalogNode.QueryChildren(
    new[] { CatalogResourceTypes.ProjectCollection }, false, CatalogQueryOptions.None);

CatalogNode collectionNode = collectionNodes
    .Where(node => node.Resource.DisplayName == tfsCollectionName)
    .SingleOrDefault();

Guid collectionId = new Guid(collectionNode.Resource.Properties["InstanceId"]);
TfsTeamProjectCollection teamProjectCollection = configurationServer.GetTeamProjectCollection(collectionId);
var vcServer = teamProjectCollection.GetService<VersionControlServer>();
What classes and methods can be used to perform this detection in the same way that tf.exe does?
Team Foundation Server clients use a workspace cache that contains all the user's workspaces for each Team Project Collection that they've used from the current machine. tf.exe uses this cache to determine the TfsTeamProjectCollection and Workspace to use for the paths given on the command line.
You can get the cached WorkspaceInfo for a particular local path by using:
Workstation.Current.GetLocalWorkspaceInfo(localPath)
Or you can get the entire workspace cache by calling:
Workstation.Current.GetAllLocalWorkspaceInfo()
The WorkspaceInfo.ServerUri property contains the server URI you can use to create a TfsTeamProjectCollection, as sketched below.
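Putting it together, a minimal sketch of tf.exe-style detection (assuming localPath points inside a mapped workspace):

WorkspaceInfo info = Workstation.Current.GetLocalWorkspaceInfo(localPath);
if (info != null)
{
    // The cache already knows which collection owns this path's workspace,
    // so the user no longer needs to supply a server URI or collection name.
    TfsTeamProjectCollection collection =
        TfsTeamProjectCollectionFactory.GetTeamProjectCollection(info.ServerUri);
    Workspace workspace = info.GetWorkspace(collection);
    var vcServer = collection.GetService<VersionControlServer>();
}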