How to prevent the Pulumi CLI from shuffling configuration values in the stack settings file

I am trying to organize the configuration values in the stack settings file (Pulumi.dev.yaml) sequentially from top to bottom, i.e. first the Resource Group, then the Storage Account, then the Virtual Network, AKS and so on, like this:
secretsprovider: xxx
encryptedkey: xxx
config:
  azure-native:location: japaneast
  #
  # Resource Group
  #
  ns:MainResourceGroupArgs:
    ResourceGroupName: xxx
    Tags:
      TestTag: xxx
  #
  # Storage Account
  #
  ns:MainStorageAccountArgs:
    AccountKind: StorageV2
    AccountName: xxxsa
    AccountSku: Standard_LRS
    Tags:
      TestTag: xxx
  #
  # Spoke VNet
  #
  ns:SpokeVirtualNetworkArgs:
    AddressPrefixes:
      - 10.10.0.0/18
    Subnets:
      # ... ... ...
  #
  # Hub VNet
  #
  # ... ... ...
  #
  # AKS
  #
  # ... ... ...
But every time a Pulumi command is executed (e.g. pulumi preview -s dev or pulumi up -s dev), the following happens:
the configuration values are shuffled: for example, before running the command the Resource Group was at the top, but afterwards it is at the bottom. This is very annoying when we have a huge number of configuration values
the YAML comments are removed
How can I solve this issue?
I want to keep the YAML comments in the stack settings file and prevent the Pulumi CLI from reordering the configuration values.
Info: Pulumi CLI v3.17.1

With Pulumi 3.24.1 and PULUMI_EXPERIMENTAL=true, you might check whether the content of Pulumi.dev.yaml is still reshuffled after:
pulumi preview -s dev --save-plan plan.json
Because after that, you can do:
pulumi up --plan-file plan.json
See:
Announcing the public preview of Update Plans
Before today, there was no guarantee that the pulumi up operation would do only what was previewed; if the program, or your infrastructure, changes between the preview and the update, the update might make additional changes to bring your infrastructure back in line with what’s defined in your program.
We’ve heard from many of you that you need a strong guarantee about exactly which changes an update will make to your infrastructure, especially in critical and production environments.
Today (Feb. 9th, 2022), I’m excited to announce the public preview of Update Plans, a new Pulumi feature which guarantees that operations shown in pulumi preview will run on pulumi up.
Update Plans also help catch any unexpected changes that might happen between when you preview a change and when you apply that change.
Update Plans work by saving the results of a pulumi preview to a plan file, which enables you to restrict subsequent pulumi up operations to only the actions saved in the plan file.
This helps you ensure that what you saw in the pulumi preview is what will actually happen when you run pulumi up.
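Putting that together, the workflow suggested above looks roughly like this (the flag names are taken from the answer and the linked announcement and may differ in newer CLI versions):

export PULUMI_EXPERIMENTAL=true
pulumi preview -s dev --save-plan plan.json   # record exactly which operations the update would perform
pulumi up -s dev --plan-file plan.json        # constrain the update to the operations saved in plan.json

Whether Pulumi.dev.yaml is still rewritten (and its comments dropped) by these commands is the thing to verify, since the plan file only constrains which operations pulumi up performs.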

Related

Move variable groups to the code repository and reference them from YAML pipelines

We are looking for a way to move the non-secret variables from the variable groups into the code repositories.
We would like to be able to:
track the changes to all the settings in the code repository
version the values of the variables together with the source code and the pipeline code
Problem:
We have over 100 variable groups defined which are referenced by over 100 YAML pipelines.
They are injected at different pipeline/stage/job levels depending on the environment/component/stage they operate on.
Example problems:
a variable can be renamed or removed, and the pipeline targeting the PROD environment still references it while the pipeline deploying to DEV does not
a particular pipeline run used the variable values as of some date in the past; it would be good to know with which set of settings it was deployed back then
Possible solutions:
It should be possible to use simple YAML variable template files to mimic the variable groups and just include those templates in the main YAML files, using this approach: Variable reuse.
# File: variable-group-component.yml
variables:
  myComponentVariable: 'SomeVal'

# File: variable-group-environment.yml
variables:
  myEnvVariable: 'DEV'

# File: azure-pipelines.yml
variables:
- template: variable-group-component.yml  # Template reference
- template: variable-group-environment.yml  # Template reference

# some stages/jobs/steps:
In theory, it should be easy to transform the variable groups into YAML template files and reference those from the pipelines instead of referencing the variable groups.
# Current reference we use
variables:
- group: "Current classical variable group"
However, even without implementing this approach, we hit the following limit in our pipelines: "No more than 100 separate YAML files may be included (directly or indirectly)"
YAML templates limits
Given the requirement to keep the variable groups logically granular and separated, rather than stored in one big YAML file (so as not to hit another limit on the number of variables in an agent job), we cannot go this way.
The second approach would be to add a simple script (PowerShell?) which consumes a key/value metadata file with variable (variableName/variableValue) records and, in a job step, emits a command such as
##vso[task.setvariable variable=one]secondValue
for each record. But this could only be done at the job level, as a first step, and it feels like re-engineering the variable group mechanism provided natively by Azure DevOps.
We are also not sure that this approach would work everywhere the variables are currently used in the YAML pipelines; in some places they are passed as arguments to tasks, etc.
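A minimal sketch of such a step, assuming a hypothetical variables/dev.env file in the repository with one variableName=variableValue pair per line (the file name and format here are made up):

steps:
- bash: |
    # Read each key/value pair from the repo file and expose it as a pipeline
    # variable via the task.setvariable logging command.
    while IFS='=' read -r name value; do
      echo "##vso[task.setvariable variable=${name}]${value}"
    done < variables/dev.env
  displayName: Load variables from repo file

Variables set this way are only available to the steps that run after it in the same job, which is exactly the drawback described above.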
Move all the variables into Key Vault secrets? We abandoned this option early on, since a key vault is a place to store sensitive data, not settings that can be visible to anyone. Moreover, storing them as secrets causes the pipeline logs to print *** instead of the real configuration values, obscuring the pipeline run logs.
Questions:
Q1. Do you have any other propositions/alternatives for how variable versioning/change tracking could be achieved in Azure DevOps YAML pipelines?
Q2. Do you see any problems with the second possible solution, or do you have better ideas?
You can consider this as an alternative:
Store your non-secret variables in a JSON file in a repository
Create a pipeline to push the variables to App Configuration (instead of a vault); see the sketch at the end of this answer
Then, if you need these settings in your app, reference App Configuration from the app instead of running a replacement task in Azure DevOps. If you need the settings directly in the pipelines, pull them from App Configuration.
Drawbacks:
the same as the one you mentioned for the PowerShell case: you need to do it at the job level
What you get:
changes tracked in the repo
values tracked in App Configuration, with all the benefits of App Configuration
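For the push step, a rough sketch using the Azure CLI, assuming a store named my-app-config and a settings.json file in the repository (both names are placeholders):

# Import all key/value pairs from the repo file into Azure App Configuration;
# --yes skips the interactive confirmation so it can run in a pipeline job.
az appconfig kv import \
  --name my-app-config \
  --source file \
  --path settings.json \
  --format json \
  --yes

The repository stays the source of truth, while App Configuration adds its own change history on top of it.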

How do I prevent resources from being destroyed during the Azure DevOps Terraform Plan Step?

I have inherited some responsibilities, and by that I mean managing Terraform in Azure DevOps release pipeline deployments.
I am using the Terraform Task with the following steps:
init
validate
plan
apply
But in the plan output I can see a number of resources that will be destroyed which I don't want removed:
azurerm_key_vault_secret.kv_secret_az_backup_storage_account_name will be destroyed
I was looking for a way to disable any resource destruction during the creation of the tfstate file, but there doesn't appear to be a way in Azure DevOps. So I suppose my best option would be to amend the underlying main.tf script, but I don't know how.
This is one of the resources being removed. I have renamed it to keep anonymity. Can anyone suggest a solution to my dilemma?
resource "azurerm_key_vault_secret" "kv_secret_az_storage_account_name" {
name = "storage-account-name"
value = azurerm_storage_account.storage_account.name
key_vault_id = azurerm_key_vault.keyvault.id
depends_on = [azurerm_storage_account.storage_account]
}
The plan phase doesn't destroy your resources or create new ones. It informs you about what will happen when you run apply.
So
azurerm_key_vault_secret.kv_secret_az_backup_storage_account_name will be destroyed
just says that this key vault secret will be destroyed if you run apply.
But since Terraform is trying to remove it, it means that Terraform keeps information about this resource in its state. So it was created by Terraform, and if you now want to put it out of scope, i.e. you no longer want it managed by your Terraform code, you can use the state rm command, as shown in the example after the quote below.
Items removed from the Terraform state are not physically destroyed. Items removed from the Terraform state are only no longer managed by Terraform. For example, if you remove an AWS instance from the state, the AWS instance will continue running, but terraform plan will no longer see that instance.
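For the secret shown in the plan output above, that would be:

# Stop tracking the secret in Terraform state without deleting it in Azure.
terraform state rm azurerm_key_vault_secret.kv_secret_az_backup_storage_account_name

The secret itself stays in the key vault; Terraform simply stops managing it, so subsequent plans no longer propose destroying it.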

How to protect resources in a specific Pulumi stack from being deleted

I use Pulumi to bring up my infrastructure in GCP. Pulumi has a stacks feature that lets you build multiple replicas of the same Pulumi code.
So I have dev/stage/prod stacks that correspond to each of the environments we have.
I want to know if there is a way to protect the production stack so that no one can delete any resources in it.
I am aware of the protect flag, but that would apply to all the stacks, which I don't want.
There are a couple of options to achieve this:
Option 1
One option would be to restrict access to the Pulumi state file such that only a privileged user or entity (e.g. a continuous delivery pipeline) is able to read and write the prod state and therefore able to perform operations that might destroy resources. The Pulumi Console backend supports this with stack permissions at a granular level and access can be restricted with the other state backends via the IAM capabilities of the specific provider (e.g. AWS IAM).
Option 2
Another option (that could be used in conjunction with the first) would be to programmatically set the protect flag based on the stack name. Below is an example in Python, but the same concept works in all languages:
import pulumi
from pulumi_aws import s3

# only set `protect=True` for "prod" stacks
prod_protected = False
if "prod" == pulumi.get_stack():
    prod_protected = True

bucket = s3.Bucket("my-bucket",
    opts=pulumi.ResourceOptions(
        protect=prod_protected,  # use `prod_protected` flag
    ),
)
You would be required to set protect=... on each resource in your stack to protect all resources in the prod stack. The Pulumi SDK provides a way to set this on all resources at once with a stack transformation. There's an example of doing a stack transformation to set tags on resources here.
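As a sketch of that idea (not taken from the linked example), a stack transformation that protects every resource only in the prod stack could look like this in Python:

import pulumi

def protect_everything(args: pulumi.ResourceTransformationArgs):
    # Merge protect=True into whatever resource options are already set.
    return pulumi.ResourceTransformationResult(
        props=args.props,
        opts=pulumi.ResourceOptions.merge(args.opts, pulumi.ResourceOptions(protect=True)),
    )

# Register the transformation once, early in the program, so it applies
# to every resource created afterwards.
if pulumi.get_stack() == "prod":
    pulumi.runtime.register_stack_transformation(protect_everything)

Because pulumi.get_stack() returns the current stack name, the dev and stage stacks are left unprotected.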

Timeout configuration for CloudFormation

I am running CloudFormation updates to ECS, triggered by CodePipeline. I would like to abort the CloudFormation deployment and roll back to the previous version after a timeout.
What is the best way to accomplish this? I saw something about WaitConditions but I'm not sure that is the right mechanism.
I also found that you can configure a TimeoutInMinutes on nested stacks (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-stack.html#cfn-cloudformation-stack-timeoutinminutes), but it sounds like you cannot apply a similar property at the top level of the stack or to an arbitrary resource?
Is there another way to abort the CodePipeline -> CloudFormation -> ECS deployment after a few minutes if it doesn't succeed?
This is a general gripe with the CodePipeline ECS deploy action (ECS, not ECS B/G): if you push a bad image, you have to wait an hour for the timeout to occur before you can retry the pipeline.
At the moment, CodePipeline doesn't support rollbacks. You can detect a failed pipeline using CloudWatch Events [1] and take some action. The action will probably be a roll-forward to a good version.
[1] Detect and React to Changes in Pipeline State with Amazon CloudWatch Events - https://docs.aws.amazon.com/codepipeline/latest/userguide/detect-state-changes-cloudwatch-events.html
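For example, a CloudWatch Events/EventBridge rule that matches failed pipeline executions can be created like this (a sketch; the rule name is arbitrary, and you still need to attach a target such as an SNS topic or Lambda that performs the roll-forward):

aws events put-rule \
  --name codepipeline-execution-failed \
  --event-pattern '{
    "source": ["aws.codepipeline"],
    "detail-type": ["CodePipeline Pipeline Execution State Change"],
    "detail": { "state": ["FAILED"] }
  }'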
We don't use CodePipeline, we're using Sceptre. But I guess my workaround could still work.
My workaround for this problem is to run a script in the background before triggering a deployment:
./deployment-breaker.sh &
And the script:
#!/bin/bash
sleep 600
# Query the current stack status (the jq filter is left as a placeholder).
deploymentStatus=$(aws cloudformation describe-stacks --stack-name STACK_NAME | jq XXX)
if [[ $deploymentStatus == YOUR_TERMINATE_CONDITION ]]; then
    aws cloudformation cancel-update-stack --stack-name STACK_NAME
fi

In Terraform, is there a way to refresh the state of a resource using TF files without using CLI commands?

I have a requirement to refresh the state of a resource "ibm_is_image" using TF files, without using CLI commands.
I know that we can import the state of a resource using "terraform import", but I need to do the same using IaC in TF files.
How can I achieve this?
Example:
In workspace1, I create a resource "f5_custom_image" which later gets deleted from the command line. In workspace2, the same code in the TF file assumes that "f5_custom_image" already exists, and it fails to read the custom image resource. So my code has to refresh the Terraform state of this resource on every execution of "terraform apply":
resource "ibm_is_image" "f5_custom_image" {
depends_on = ["data.ibm_is_images.custom_images"]
href = "${local.image_url}"
name = "${var.vnf_vpc_image_name}"
operating_system = "centos-7-amd64"
timeouts {
create = "30m"
delete = "10m"
}
}
In Terraform's model, an object is fully managed by a single Terraform configuration and nothing else. Having an object be managed by multiple configurations or having an object be created by Terraform but then deleted later outside of Terraform is not a supported workflow.
Terraform is intended for managing long-lived architecture that you will gradually update over time. It is not designed to manage build artifacts like machine images that tend to be created, used, and then destroyed.
The usual architecture for this sort of use-case is to consider the creation of the image as a "build" step, carried out using some other software outside of Terraform, and then we use Terraform only for the "deploy" step, at which point the long-lived infrastructure is either created or updated to use the new image.
That leads to a build and deploy pipeline with a series of steps like this:
1. Use separate image build software to construct the image, and record the id somewhere from which it can be retrieved using a data source in Terraform.
2. Run terraform apply to update the long-lived infrastructure to make use of the new image. The Terraform configuration should include a data block to read the image id from wherever it was recorded in the previous step (see the sketch after these steps).
3. If desired, destroy the image using software outside of Terraform once Terraform has completed.
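Applied to the example from the question, step 2 would mean replacing the managed ibm_is_image resource with a data source lookup; a sketch, assuming the image built in step 1 is published under the name in var.vnf_vpc_image_name:

# Look up the image that was built and published outside of Terraform in step 1,
# instead of declaring it as a managed resource.
data "ibm_is_image" "f5_custom_image" {
  name = var.vnf_vpc_image_name
}

# Long-lived resources then reference data.ibm_is_image.f5_custom_image.id
# rather than ibm_is_image.f5_custom_image.id.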
When implementing a pipeline like this, it's optional but common to also consider a "rollback" process to use in case the new image is faulty:
1. Reset the recorded image id that Terraform is reading from back to the id that was stored prior to the new build step.
2. Run terraform apply to update the long-lived infrastructure back to using the old image.
Of course, supporting that would require retaining the previous image long enough to prove that the new image is functioning correctly, so the normal build and deploy pipeline would need to retain at least one historical image per run to roll back to. With that said, if you have a means to quickly recreate a prior image during rollback, then this special workflow isn't strictly needed: you can instead implement rollback by "rolling forward" to an image constructed with the prior configuration.
An example software package commonly used to prepare images for use with Terraform on other cloud vendors is HashiCorp Packer, but sadly it looks like it does not have IBM Cloud support and so you may need to look for some similar software that does support IBM Cloud, or write something yourself using the IBM Cloud SDK.