How to protect resources in a specific Pulumi stack from being deleted

I use Pulumi to bring up my infrastructure in GCP. Pulumi has a stacks feature that lets you build multiple instances of the same Pulumi code.
So I have dev/stage/prod stacks that correspond to each of the environments we have.
I want to know if there is a way that I can protect the production stack so that no one can delete any resources in it.
I am aware of the protect flag, but that would apply to all the stacks, which I don't want.

There are a couple of options to achieve this:
Option 1
One option would be to restrict access to the Pulumi state file such that only a privileged user or entity (e.g. a continuous delivery pipeline) is able to read and write the prod state, and is therefore the only one able to perform operations that might destroy resources. The Pulumi Console backend supports this with granular stack permissions, and with the other state backends access can be restricted via the IAM capabilities of the specific provider (e.g. AWS IAM).
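For example, if you keep state in the self-managed S3 backend, a bucket policy along the following lines could ensure that only a CI role can modify the prod stack's state. This is a rough sketch in Python: the bucket name, account id, role name, and the ".pulumi/stacks/prod*" key pattern are assumptions you would need to adapt to your own backend layout (the same idea applies to GCS bucket IAM if you use the GCP backend).

import json

import boto3

# Rough sketch only: bucket name, account id, role name and the state key
# pattern are placeholders - verify how your backend actually lays out state.
prod_state_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "OnlyCiMayWriteProdState",
            "Effect": "Deny",
            "NotPrincipal": {"AWS": "arn:aws:iam::123456789012:role/ci-deployer"},
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::my-pulumi-state/.pulumi/stacks/prod*",
        }
    ],
}

# Attach the policy to the state bucket (the policy itself could of course
# also be managed with Pulumi or Terraform instead of boto3).
boto3.client("s3").put_bucket_policy(
    Bucket="my-pulumi-state",
    Policy=json.dumps(prod_state_policy),
)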
Option 2
Another option (that could be used in conjunction with the first) would be to programmatically set the protect flag based on the stack name. Below is an example in Python, but the same concept works in all languages:
import pulumi
from pulumi_aws import s3

# only set `protect=True` for "prod" stacks
prod_protected = False
if "prod" == pulumi.get_stack():
    prod_protected = True

bucket = s3.Bucket("my-bucket",
    opts=pulumi.ResourceOptions(
        protect=prod_protected,  # use `prod_protected` flag
    ),
)
To protect all resources in the prod stack, you would have to set protect=... on each resource individually. The Pulumi SDK also provides a way to set this on all resources at once with a stack transformation, as sketched below. There's an example of doing a stack transformation to set tags on resources here.
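Here is a minimal sketch of such a stack transformation in Python; it applies to whatever resources your program declares, and only the "prod" stack name is assumed:

import pulumi

def protect_in_prod(args: pulumi.ResourceTransformationArgs):
    # Merge `protect=True` into every resource's options when running the
    # "prod" stack; leave other stacks untouched by returning None.
    if pulumi.get_stack() == "prod":
        return pulumi.ResourceTransformationResult(
            props=args.props,
            opts=pulumi.ResourceOptions.merge(
                args.opts, pulumi.ResourceOptions(protect=True)
            ),
        )
    return None

# Register the transformation before declaring any resources.
pulumi.runtime.register_stack_transformation(protect_in_prod)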

Related

Policy for Cloudformation stack creation

I'm putting together a role/policy for running cloudformation/sam to limit access as much as I can. Is there a general set of policy actions that should be used to run create-stack?
This is for a CodeBuild project which I'm using to create infrastructure from a CloudFormation template at runtime of my application.
At the moment I've got a policy which allows full access, because it needs to create the infrastructure within the stack.
But there is only a subset of actions which CloudFormation can actually perform, and it doesn't need full access. For example, CloudFormation can't put items into a DynamoDB table.
So this led me to think that maybe there's a basic role/policy that is limited to only the actions which CloudFormation is able to perform.
If you have to assign a role to a service (such as CodePipeline or CodeBuild) to deploy a stack, you not only need to assign the necessary CloudFormation permissions (such as cloudformation:CreateStack or cloudformation:ExecuteChangeSet), but also the permissions necessary to deploy the resources in the CloudFormation stack itself.
When you deploy a stack manually, CloudFormation uses your user permissions to verify access to the services you are deploying/updating. When you initiate the action from another AWS service, the same thing happens, but with the permissions of the service role (unless you specifically assign a role to the CloudFormation stack itself; see the documentation).
Keep in mind, if you're constructing such a role, that CloudFormation might need more permissions than you think: extra read permissions, permissions to add tags to resources, permissions to delete and/or update resources when you're deleting or updating the stack, etc.
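To illustrate, a deployment role for a hypothetical stack containing one Lambda function and one DynamoDB table might combine both kinds of permissions roughly like this. It's a sketch only: the account id and stack name are placeholders, and the second statement depends entirely on what your template actually creates.

import json

# Hedged sketch of a deployment-role policy; tighten the resource ARNs and
# action lists to match your template before using anything like this.
deploy_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Permissions the service role needs to drive CloudFormation itself.
            "Effect": "Allow",
            "Action": [
                "cloudformation:CreateStack",
                "cloudformation:UpdateStack",
                "cloudformation:DeleteStack",
                "cloudformation:CreateChangeSet",
                "cloudformation:ExecuteChangeSet",
                "cloudformation:DescribeStacks",
                "cloudformation:DescribeStackEvents",
            ],
            "Resource": "arn:aws:cloudformation:*:123456789012:stack/my-app-*/*",
        },
        {
            # Permissions CloudFormation uses on your behalf to create, update
            # and delete the resources declared in the template.
            "Effect": "Allow",
            "Action": [
                "lambda:CreateFunction",
                "lambda:UpdateFunctionCode",
                "lambda:DeleteFunction",
                "lambda:TagResource",
                "dynamodb:CreateTable",
                "dynamodb:DescribeTable",
                "dynamodb:DeleteTable",
                "dynamodb:TagResource",
            ],
            "Resource": "*",
        },
    ],
}

print(json.dumps(deploy_role_policy, indent=2))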

In AWS cloudformation, what is the difference between a custom resource and a resource provider?

As per my understanding:
A custom resource is just an AWS Lambda function that runs whenever the stack is provisioned or updated or deleted.
A resource provider is plain old code where one writes hooks for all the Stack operations (update, create, delete, etc).
I can't see why anyone would use the former over the latter. Resource providers seem easier to write and test.
One historical reason is that custom resources were the only option until recently: resource providers were only announced on 18 Nov 2019 (see the CloudFormation release history and the Resource Provider announcement).

In Terraform, is there a way to refresh the state of a resource using TF files without using CLI commands?

I have a requirement to refresh the state of a resource "ibm_is_image" using TF files, without using CLI commands.
I know that we can import the state of a resource using "terraform import", but I need to do the same using IaC in TF files.
How can I achieve this?
Example:
In workspace1, I create a resource "f5_custom_image" which later gets deleted from the command line. In workspace2, the same code in the TF file assumes that "f5_custom_image" already exists, and it fails to read the custom image resource. So my code has to refresh the Terraform state of this resource for every execution of "terraform apply":
resource "ibm_is_image" "f5_custom_image" {
depends_on = ["data.ibm_is_images.custom_images"]
href = "${local.image_url}"
name = "${var.vnf_vpc_image_name}"
operating_system = "centos-7-amd64"
timeouts {
create = "30m"
delete = "10m"
}
}
In Terraform's model, an object is fully managed by a single Terraform configuration and nothing else. Having an object be managed by multiple configurations or having an object be created by Terraform but then deleted later outside of Terraform is not a supported workflow.
Terraform is intended for managing long-lived architecture that you will gradually update over time. It is not designed to manage build artifacts like machine images that tend to be created, used, and then destroyed.
The usual architecture for this sort of use-case is to consider the creation of the image as a "build" step, carried out using some other software outside of Terraform, and then we use Terraform only for the "deploy" step, at which point the long-lived infrastructure is either created or updated to use the new image.
That leads to a build and deploy pipeline with a series of steps like this:
Use separate image build software to construct the image, and record the id somewhere from which it can be retrieved using a data source in Terraform (see the sketch after these steps).
Run terraform apply to update the long-lived infrastructure to make use of the new image. The Terraform configuration should include a data block to read the image id from wherever it was recorded in the previous step.
If desired, destroy the image using software outside of Terraform once Terraform has completed.
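As a loose illustration of that hand-off, the build step could end with a small wrapper script that records the new image id in a place the Terraform configuration reads on the next apply. Everything here is hypothetical (the variable name, the tfvars file, passing the id as a command-line argument); it only shows the contract between the two steps:

import json
import sys

# Hedged sketch: the image id is produced by whatever image-build tooling you
# use (a custom script using the IBM Cloud SDK/CLI, Packer on another cloud,
# etc.) and is passed in here as a command-line argument.
image_id = sys.argv[1]

# Record the id as an auto-loaded tfvars file. The Terraform configuration can
# then declare `variable "vnf_vpc_image_id"` and use it (or feed it into a data
# lookup) instead of trying to manage the image's lifecycle itself.
with open("image.auto.tfvars.json", "w") as f:
    json.dump({"vnf_vpc_image_id": image_id}, f)

The build pipeline would run something like "python record_image_id.py <new-image-id>" after the image build and before "terraform apply".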
When implementing a pipeline like this, it's optional but common to also consider a "rollback" process to use in case the new image is faulty:
Reset the recorded image id that Terraform is reading from back to the id that was stored prior to the new build step.
Run terraform apply to update the long-lived infrastructure back to using the old image.
Of course, supporting that would require retaining the previous image long enough to prove that the new image is functioning correctly, so the normal build and deploy pipeline would need to retain at least one historical image per run to roll back to. With that said, if you have a means to quickly recreate a prior image during rollback, then this special workflow isn't strictly needed: you can instead implement rollback by "rolling forward" to an image constructed with the prior configuration.
An example software package commonly used to prepare images for use with Terraform on other cloud vendors is HashiCorp Packer, but sadly it looks like it does not have IBM Cloud support and so you may need to look for some similar software that does support IBM Cloud, or write something yourself using the IBM Cloud SDK.

Fine-grained ACLs in Pulumi cloud

It seems that by default a lambda function created by Pulumi has the AWSLambdaFullAccess policy. This type of access is too wide and I'd like to replace it with fine-grained ACLs.
For instance, assuming I am creating a cloud.Table in my index.js file, I would like to specify that the lambda endpoint I am creating (in the same file) only has read access to that specific table.
Is there a way to do it without coding the IAM policy myself?
The @pulumi/cloud library currently runs all compute (lambdas and containerized services) with a single uniform set of IAM policies on AWS.
You can set the policies to use by running:
pulumi config set cloud-aws:computeIAMRolePolicyARNs "arn:aws:iam::aws:policy/AWSLambdaFullAccess,arn:aws:iam::aws:policy/AmazonEC2ContainerServiceFullAccess"
The values above are the defaults. See https://github.com/pulumi/pulumi-cloud/blob/master/aws/config/index.ts#L52-L56.
There are plans to support more fine-grained control over permissions, and to compute permissions directly from the resources being used in @pulumi/cloud - see e.g. https://github.com/pulumi/pulumi-cloud/issues/145 and https://github.com/pulumi/pulumi-cloud/issues/168.
Lower-level libraries (like @pulumi/aws and @pulumi/aws-serverless) provide complete control over the Role and/or Policies applied to Function objects.
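For example, with the lower-level pulumi_aws SDK in Python (shown only as a sketch of that approach, with illustrative names; the same idea applies to @pulumi/aws in JavaScript/TypeScript), you can attach a role whose policy grants read-only access to a single DynamoDB table instead of AWSLambdaFullAccess:

import json

import pulumi
import pulumi_aws as aws

# Illustrative sketch: a table, a role that can only read that table, and a
# function that uses the role. Names, runtime and code path are placeholders.
table = aws.dynamodb.Table(
    "my-table",
    attributes=[aws.dynamodb.TableAttributeArgs(name="id", type="S")],
    hash_key="id",
    billing_mode="PAY_PER_REQUEST",
)

role = aws.iam.Role(
    "fn-role",
    assume_role_policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }),
)

# Read-only access to exactly one table, instead of AWSLambdaFullAccess.
aws.iam.RolePolicy(
    "fn-table-read-only",
    role=role.id,
    policy=table.arn.apply(lambda arn: json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": arn,
        }],
    })),
)

# Basic execution role so the function can still write CloudWatch logs.
aws.iam.RolePolicyAttachment(
    "fn-logs",
    role=role.name,
    policy_arn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)

fn = aws.lambda_.Function(
    "my-fn",
    runtime="python3.9",
    handler="handler.main",
    role=role.arn,
    code=pulumi.FileArchive("./app"),  # hypothetical path to the handler code
)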

How to implement the "One Binary" principle with Docker

The One Binary principle explained here:
http://programmer.97things.oreilly.com/wiki/index.php/One_Binary states that one should...
"Build a single binary that you can identify and promote through all the stages in the release pipeline. Hold environment-specific details in the environment. This could mean, for example, keeping them in the component container, in a known file, or in the path."
I see many dev-ops engineers arguably violate this principle by creating one docker image per environment (i.e., my-app-qa, my-app-prod and so on). I know that Docker favours immutable infrastructure, which implies not changing an image after deployment, and therefore not uploading or downloading configuration post-deployment. Is there a trade-off between immutable infrastructure and the one binary principle, or can they complement each other? When it comes to separating configuration from code, what is the best practice in a Docker world? Which one of the following approaches should one take...
1) Creating a base binary image and then having a configuration Dockerfile that augments this image by adding environment-specific configuration (i.e. my-app -> my-app-prod)
2) Deploying a binary-only docker image to the container and passing in the configuration through environment variables and so on at deploy time.
3) Uploading the configuration after deploying the Docker file to a container
4) Downloading configuration from a configuration management server from the running docker image inside the container.
5) Keeping the configuration in the host environment and making it available to the running Docker instance through a bind mount.
Is there another better approach not mentioned above?
How can one enforce the one binary principle using immutable infrastructure? Can it be done, or is there a trade-off? What is the best practice?
I have about 2 years of experience deploying Docker containers now, so I'm going to talk about what I've done and/or know to work.
So, let me first begin by saying that containers should definitely be immutable (I even mark mine as read-only).
Main approaches:
use configuration files, by setting a static entrypoint and overriding the configuration file location via the container startup command - that's less flexible, since one would have to commit the change and redeploy in order to enable it; not fit for passwords, secure tokens, etc.
use configuration files, by overriding their location with an environment variable - again, this depends on having the configuration files prepped in advance; not fit for passwords, secure tokens, etc.
use environment variables (see the sketch after this list) - this might need a change in the deployment code, but it shortens the time to get a config change live, since it doesn't need to go through the application build phase (in most cases); deploying such a change can be pretty easy. Here's an example - if deploying a containerised application to Marathon, changing an environment variable could potentially just start a new container from the last used container image (potentially even on the same host), which means that this could be done in mere seconds; not fit for passwords, secure tokens, etc., and especially so in Docker
store the configuration in a k/v store like Consul, make the application aware of that, and let it be dynamically reconfigurable. A great approach for launching features simultaneously - possibly even across multiple services; if implemented with a solution such as HashiCorp Vault, which provides secure storage for sensitive information, you could even have ephemeral secrets (an example would be the PostgreSQL secret backend for Vault - https://www.vaultproject.io/docs/secrets/postgresql/index.html)
have an application or script create the configuration files before starting the main application - store the configuration in a k/v store like Consul and use something like consul-template to populate the app config; a bit more secure, since you're not carrying everything over through the whole pipeline as code
have an application or script populate the environment variables before starting the main application - an example of that would be envconsul; not fit for sensitive information - someone with access to the Docker API (either through the TCP or UNIX socket) would be able to read those
I've even had a situation in which we were populating variables into the AWS instance user_data and injecting them into the container on startup (with a script that modifies the container's JSON config on startup)
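To make the environment-variable approach concrete, here is a minimal Python sketch of the pattern (the variable names and defaults are made up): the image stays identical across environments, and only the values injected at deploy time differ.

import os

# Read everything environment-specific from the environment at startup;
# the container image itself carries no per-environment configuration.
DATABASE_URL = os.environ["DATABASE_URL"]  # required - fail fast if missing
FEATURE_X_ENABLED = os.environ.get("FEATURE_X_ENABLED", "false").lower() == "true"
LOG_LEVEL = os.environ.get("LOG_LEVEL", "info")  # optional, with a default

def main() -> None:
    print(f"starting with log level {LOG_LEVEL}, feature X enabled: {FEATURE_X_ENABLED}")
    # ... connect to DATABASE_URL, start the application, etc.

if __name__ == "__main__":
    main()

The same image can then be promoted unchanged from qa to prod, with only the -e / --env-file values (or the orchestrator's environment definition) differing per environment.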
The main things that I'd take into consideration:
what are the variables that I'm exposing, and when and where am I getting their values from (could be the CD software, or something else) - for example you could publish the AWS RDS endpoint and credentials to the instance's user_data, potentially even to EC2 tags with some IAM instance profile magic
how many variables do we have to manage and how often do we change some of them - if we have a handful, we could probably just go with environment variables, or use environment variables for the most commonly changed ones and variables stored in a file for those that we change less often
how fast do we want to see them changed - if it's a file, it typically takes more time to deploy it to production; if we're using environment variables, we can usually deploy those changes much faster
how do we protect some of them - where do we inject them and how - for example Ansible Vault, HashiCorp Vault, keeping them in a separate repo, etc.
how do we deploy - that could be a JSON config file sent to a deployment framework endpoint, Ansible, etc.
what's the environment that we're working with - is it realistic to have something like Consul as a config data store (Consul has 2 different kinds of agents - client and server)
I tend to prefer the most complex case of having them stored in a central place (k/v store, database) and having them changed dynamically, because I've encountered the following cases:
slow deployment pipelines - which makes it really slow to change a config file and have it deployed
having too many environment variables - this could really grow out of hand
having to turn on a feature flag across the whole fleet (consisting of tens of services) at once
an environment in which there is a real drive to increase security by better handling sensitive config data
I've probably missed something, but I guess that should be enough to get you thinking about what would be best for your environment.
How I've done it in the past is to incorporate tokenization into the packaging process after a build is executed. These tokens can be managed in an orchestration layer that sits on top to manage your platform tools. For a given token, there is a matching regex or XPath expression, and that token is linked to one or many config files, depending on the relationship that is chosen. Then, when the build is deployed to a container, a platform service (i.e. config management) replaces these tokens with the correct value for its environment. These values would most likely be pulled from a vault.