We created a Terraform template that we will deploy many dozens of times in different workspaces in the future. Each workspace will have a separate configuration file.
Now we would like to automate this procedure and are thinking about keeping the configuration files in a Git repo.
Does anyone have a best practice for storing the configuration files in a Git repo and triggering a CI/CD workflow (Azure DevOps)?
In general, we would only like to apply changes for workspaces whose configuration has changed.
The terraform plan and apply commands have an option for passing in the tfvars file you want to use. So something like this:
terraform apply --var-file=workspace.tfvars
So in the pipeline you would grab your Terraform template artifacts and your config files. I would then set a TF_WORKSPACE variable to force your workspace, and I would also make your tfvars file names match the workspace names so you can reuse the variable in your apply command. This keeps your workspace and configuration file in sync.
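As a rough illustration, the apply step could look something like the sketch below. This is a minimal sketch, not a tested pipeline: the variable name and the use of -auto-approve are assumptions about how you wire it up.

# Hedged sketch of the apply step (PowerShell), assuming the pipeline passes in the
# target workspace name and the tfvars files are named after the workspaces.
$workspaceName = 'dev'                      # hypothetical; would come from a pipeline variable
$env:TF_WORKSPACE = $workspaceName          # forces Terraform into that workspace

terraform init
terraform apply --var-file="$workspaceName.tfvars" -auto-approve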
Running the pipeline only when those files have changed would require a path trigger that fires on changes to those paths.
I don't see any harm in running Terraform every time, regardless of whether changes have occurred. The worst possible outcome would be that someone made a change that isn't in Terraform and it gets undone.
I'm setting up a secondary region for my Synapse workspace. Is there a way I can export all the triggers from one workspace to another?
You have 3 options, as far as I can see:
Set up Git and DevOps integration between your 2 workspaces and then set up a release pipeline to copy everything from one workspace to the other. Here is a link to the documentation. This is the best way if you have a lot to copy and/or want a way to copy between environments on a regular basis.
Build a PowerShell script to read the triggers from one workspace and then create them in the second workspace. Try Get-AzSynapseTrigger to read them from the source workspace and Set-AzSynapseTrigger to create them in the new environment (see the sketch after this list).
If you have few triggers, simply copying them by hand is the simplest, though programmatically disappointing, solution.
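For option 2, a minimal sketch might look like the following. The workspace names are placeholders, and the exact JSON shape that Set-AzSynapseTrigger expects in its -DefinitionFile should be verified against the Az.Synapse documentation before relying on this.

# Hedged sketch: copy trigger definitions between two hypothetical workspaces
# by exporting each trigger to a JSON definition file and re-creating it.
$source = 'synapse-primary'
$target = 'synapse-secondary'

foreach ($trigger in Get-AzSynapseTrigger -WorkspaceName $source) {
    # Serialize the trigger's properties to a temporary definition file.
    $definitionPath = Join-Path $env:TEMP "$($trigger.Name).json"
    $trigger.Properties | ConvertTo-Json -Depth 20 | Set-Content -Path $definitionPath

    # Re-create the trigger in the target workspace from that definition.
    Set-AzSynapseTrigger -WorkspaceName $target -Name $trigger.Name -DefinitionFile $definitionPath
}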
As background, I am in the process of upgrading a few projects from Jenkins- and GitLab CI-based CI to Tekton. In those projects, it is common to see a Jenkinsfile or .gitlab-ci.yml defining the pipeline to run for the project. Those files are then used by the corresponding tool at build time whenever a triggering event occurs (such as a merge, commit, etc.). Those files change over time to accommodate whatever the repository needs to perform its build and are committed to the repo like any other kind of work. This had the desirable property of knowing exactly what the build pipeline looked like at any point in the commit history, which aids build reproducibility if handled carefully and correctly.
The corresponding approach with Tekton appears to suggest that you store the CRD yaml files under a /tekton folder. However, most of the documentation and examples I've seen for Tekton focus on what appears to be a manual process of pushing your CRDs out with kubectl. Once the CRDs have been installed, the EventTrigger is capable of using the defined resources whenever necessary, but what happens when I commit an update to the pipeline.yaml? Is the expectation that a developer manually pushes the updated CRDs with kubectl or is there a way for the EventTrigger to automatically use the ./tekton/pipeline.yaml that is stored in the git repo that sourced the event?
I have 4 Terraform directories for 4 branches of a repo (dev, qa, uat, prod). I was able to interpolate every variable I needed by adding "branch" as an environment variable in Terraform Cloud and using it across my resources. However, the workspace name itself is the problem: trying to add an interpolated variable to it throws an error, since workspace names can't contain those. And since my branches auto-PR to each other (code is instantly promoted from dev all the way up to prod), this causes conflicts, because all the tf files now have different hardcoded workspace names. Yes, I could just ignore the Terraform file when I promote my branches, but the idea is that editing dev.tf affects all the others. Is there any way around this issue?
I think you need to move away from having multiple branches.
Branches track changes on the same codebase. Workspaces, on the other hand, are useful when you have a single backend configuration but multiple instances of your infrastructure. An example: multiple developers using a single AWS account to create their own sandbox applications from the same IaC.
As you have multiple environments (dev, qa, uat, prod), I would suggest using the same codebase with different configuration files.
Something like:
common.tfvars
dev.tfvars
qa.tfvars
uat.tfvars
prod.tfvars
When you want to change something, edit only dev.tfvars, test your changes, and then add those config changes to the remaining files, as in the sketch below.
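A minimal sketch of a per-environment run, assuming state is kept separate per environment (for example one workspace or backend configuration each); the file names follow the list above.

# Plan and apply dev using the shared variables plus the dev-specific overrides.
terraform plan -var-file=common.tfvars -var-file=dev.tfvars -out=dev.tfplan
terraform apply dev.tfplan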
Also, consider tagging your prod infrastructure code and applying changes to prod only from Git tags.
In the product that I work on, there are many configuration tables. I need to find a way to track configuration changes (hopefully with some kind of version/changeset number), deploy the configuration changes to other environments using the changeset number, and, if needed, roll back a particular configuration based on the changeset number.
I am wondering how I can do that.
One solution that I think could work is to write a script to export all the configurations from the config tables into JSON files. I could then check those files in to TFS or GitHub to maintain versioning, and write another script to load the configuration files into any environment.
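As a rough illustration of the export side, a sketch like the one below could work, assuming SQL Server and the SqlServer PowerShell module; the server, database, and table names are placeholders.

# Hedged sketch: export each configuration table to a JSON file that can be checked in.
Import-Module SqlServer

$server   = 'sql-prod-01'                               # placeholder
$database = 'ProductDb'                                 # placeholder
$tables   = @('dbo.FeatureConfig', 'dbo.PricingConfig') # hypothetical config tables

foreach ($table in $tables) {
    $rows = Invoke-Sqlcmd -ServerInstance $server -Database $database -Query "SELECT * FROM $table"
    # Strip the DataRow bookkeeping properties and write one JSON file per table.
    $rows |
        Select-Object * -ExcludeProperty ItemArray, Table, RowError, RowState, HasErrors |
        ConvertTo-Json -Depth 5 |
        Set-Content -Path "$($table.Replace('.', '_')).json"
}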
What is the easiest way to apply the changes from a specific changeset on one TFS instance to another instance?
What I want is to get some sort of patch file from instance A that I can apply to instance B. Since there are two different instances, a traditional branch/merge approach cannot be used. And as far as I know, TFS has poor support for patch files in the traditional Unix sense.
Do I really need to inspect a changeset on instance A and manually zip the relevant files which I can then extract into the source tree of instance B?
Turns out that the "patch" route was a dead end due to lack of support in TFS. The solution we ended up with was to perform a nightly job which basically does the following:
Get all code from remote repo with a read-only user.
Overwrite all content of a separate branch in our repo with the content from the other repo.
Perform a merge from that separate branch to the main branch whenever we want to bring their changes into our main branch.
This answer explains how to create a patch file using the tf diff command. However, there is no built-in way to apply that patch file to another instance or branch. I have not seen any third-party tools to do so either.
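For reference, creating such a patch with tf diff looks roughly like the line below; the server path, changeset numbers, and output file are placeholders, and the exact flags should be checked against the tf.exe documentation for your TFS version.

# Produce a unified diff of what changed between changesets 1233 and 1234 (hypothetical numbers).
tf diff '$/Project/Main' /recursive /version:C1233~C1234 /format:unified > changeset-1234.patch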
I wrote a blog post about a similar issue where I used the tf.exe command and 7-Zip to create a TFS patch file that could then be applied on another TFS server or workspace. I posted the PowerShell scripts on GitHub; they can be used to zip up any pending changes in one workspace and then apply them to a different server. They would have to be modified to use a changeset instead of pending changes, but that shouldn't be too difficult to accomplish.