Terraform: update resources only when Vault secret data has changed (Kubernetes)

This should be fairly easy, or I might be doing something wrong, but after a while digging into it I couldn't find a solution.
I have a Terraform configuration that contains a Kubernetes Secret resource whose data comes from Vault. The resource configuration looks like this:
resource "kubernetes_secret" "external-api-token" {
metadata {
name = "external-api-token"
namespace = local.platform_namespace
annotations = {
"vault.security.banzaicloud.io/vault-addr" = var.vault_addr
"vault.security.banzaicloud.io/vault-path" = "kubernetes/${var.name}"
"vault.security.banzaicloud.io/vault-role" = "reader"
}
}
data = {
"EXTERNAL_API_TOKEN" = "vault:secret/gcp/${var.env}/micro-service#EXTERNAL_API_TOKEN"
}
}
Everything is working fine so far, but every time I run terraform plan or terraform apply, it marks that resource as changed and updates it, even when I didn't touch the resource or anything related to it. E.g.:
... (other actions to be applied, unrelated to the offending resource) ...

  # kubernetes_secret.external-api-token will be updated in-place
  ~ resource "kubernetes_secret" "external-api-token" {
      ~ data = (sensitive value)
        id   = "platform/external-api-token"
        type = "Opaque"

        metadata {
            annotations      = {
                "vault.security.banzaicloud.io/vault-addr" = "https://vault.infra.megacorp.io:8200"
                "vault.security.banzaicloud.io/vault-path" = "kubernetes/gke-pipe-stg-2"
                "vault.security.banzaicloud.io/vault-role" = "reader"
            }
            generation       = 0
            labels           = {}
            name             = "external-api-token"
            namespace        = "platform"
            resource_version = "160541784"
            self_link        = "/api/v1/namespaces/platform/secrets/external-api-token"
            uid              = "40e93d16-e8ef-47f5-92ac-6d859dfee123"
        }
    }

Plan: 3 to add, 1 to change, 0 to destroy.
It says that the data for this resource has changed, yet the data in Vault remains the same; nothing has been modified there. This update now happens every single time.
I thought about using the ignore_changes lifecycle feature, but I assume that would make Terraform ignore any changes made to the Vault secret as well, which I don't want either. I would like the resource to be updated only when the secret in Vault has changed.
Is there a way to do this? What am I missing or doing wrong?

You need to add the Terraform lifecycle ignore_changes meta-argument to your code. For data holding API token values, but for some reason also for annotations, Terraform seems to assume the data changes every time a plan, apply, or even destroy is run. I had a similar issue with Azure Key Vault.
Here is the code with the lifecycle ignore_changes meta-argument included:
resource "kubernetes_secret" "external-api-token" {
metadata {
name = "external-api-token"
namespace = local.platform_namespace
annotations = {
"vault.security.banzaicloud.io/vault-addr" = var.vault_addr
"vault.security.banzaicloud.io/vault-path" = "kubernetes/${var.name}"
"vault.security.banzaicloud.io/vault-role" = "reader"
}
}
data = {
"EXTERNAL_API_TOKEN" = "vault:secret/gcp/${var.env}/micro-service#EXTERNAL_API_TOKEN"
}
lifecycle {
ignore_changes = [
# Ignore changes to data, and annotations e.g. because a management agent
# updates these based on some ruleset managed elsewhere.
data,annotations,
]
}
}
Link to the lifecycle meta-argument documentation:
https://www.terraform.io/language/meta-arguments/lifecycle
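If you still want Terraform to manage the annotations and only silence the perpetual diff on the injected token values, a narrower variant of the same lifecycle block (a sketch, ignoring data alone) may be enough:

  lifecycle {
    ignore_changes = [
      # Only the webhook-injected secret data is ignored; annotation changes still apply.
      data,
    ]
  }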

Related

Terraform Kubernetes Secrets not applying due to Namespace

I am learning Terraform and trying to translate Kubernetes infrastructure over to Terraform.
I have a Terraform script which creates a given namespace and then creates secrets from local files. Most of the secrets do not get created properly because the namespace is not created fast enough.
Is there a correct method to create the namespace and wait for confirmation of it before continuing within the Terraform script? Such as depends_on, etc.?
My current approach:
resource "kubernetes_namespace" "namespace" {
metadata {
name = "specialNamespace"
}
}
resource "kubernetes_secret" "api-env" {
metadata {
name = var.k8s_name_api_env
namespace = "specialNamespace"
}
data = {
".api" = file("${path.cwd},${var.local_dir_path_api_env_file}")
}
}
resource "kubernetes_secret" "password-env" {
metadata {
name = var.k8s_name_password_env
namespace = "specialNamespace"
}
data = {
".password" = file("${path.cwd},${var.local_dir_path_password_env_file}")
}
}
resource "kubernetes_secret" "tls-crt-env" {
metadata {
name = var.k8s_name_tls_crt_env
namespace = "specialNamespace"
}
data = {
"server.crt" = file("${path.cwd},${var.local_dir_path_tls_crt_env_file}")
}
}
resource "kubernetes_secret" "tls-key-env" {
metadata {
name = var.k8s_name_tls_key_env
namespace = "specialNamespace"
}
data = {
"server.key" = file("${path.cwd},${var.local_dir_path_tls_key_env_file}")
}
}
Since there is a way to get the name property of the metadata from the kubernetes_namespace resource, I would advise going with that: referencing it also creates an implicit dependency, so Terraform creates the namespace before the secrets. For example, for the kubernetes_secret resource:
resource "kubernetes_secret" "api-env" {
metadata {
name = var.k8s_name_api_env
namespace = kubernetes_namespace.namespace.metadata[0].name
}
data = {
".api" = file("${path.cwd},${var.local_dir_path_api_env_file}")
}
}
Also, note that most of the resources have a _v1 version as well (e.g., namespace [1], secret [2], etc.), so I would strongly suggest going with those, as in the sketch after the links below.
[1] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace_v1
[2] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret_v1
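For illustration, a minimal sketch combining both suggestions, the _v1 resources and the implicit dependency via the metadata reference (resource and variable names taken from the question):

resource "kubernetes_namespace_v1" "namespace" {
  metadata {
    name = "special-namespace"
  }
}

resource "kubernetes_secret_v1" "api-env" {
  metadata {
    name = var.k8s_name_api_env
    # Referencing the namespace resource makes Terraform create it first.
    namespace = kubernetes_namespace_v1.namespace.metadata[0].name
  }
  data = {
    ".api" = file("${path.cwd}/${var.local_dir_path_api_env_file}")
  }
}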
Such as depends_on, etc.?
Exactly. Here, you should use depends_on:
resource "kubernetes_secret" "api-env" {
depends_on = [resource.kubernetes_namespace.namespace]
...
}
...

How to set global tags in Pulumi Azure Native

In a stack-specific settings file (i.e., Pulumi.dev.yaml), if location is set (i.e., azure-native:location), then the resource group location is set automatically, and the location for resources is derived from the resource group location. Now I am trying to apply a common tag to all resources, i.e., CreatedBy: Pulumi. Is there any way to set common/global tags, similar to azure-native:location, in the settings file (Pulumi.dev.yaml)?
Expected: both location and tags will be set from Pulumi.dev.yaml:

config:
  azure-native:location: japaneast
  azure-native:tags:
    CreatedBy: Pulumi
var mainRgArgs = config.RequireObject<JsonElement>(KEY_RESOURCE_GROUP_ARGS);
var mainRgName = mainRgArgs.GetProperty(RESOURCE_GROUP_NAME).GetString()!;

var mainRg = new ResourceGroup(RESOURCE_GROUP_MAIN, new ResourceGroupArgs
{
    ResourceGroupName = mainRgName
    //Location =
    //Tags =
});
It isn't possible to set the tags automatically, because tags aren't a required API property.
The reason location is a provider argument is because every resource requires a location when it's created. That isn't true for tags.
However, it is possible to automatically add tags to resources that are taggable (which isn't all resources) using a transformation.
Transformations allow you to inject properties into every resource, regardless of whether you've set that value on your resource explicitly. You will, however, have to maintain a list of taggable resources, because not every Azure resource is taggable.
A function which will register tags on resources will look something like this:
export function registerAutoTags(autoTags: Record<string, string>): void {
pulumi.runtime.registerStackTransformation((args) => {
if (isTaggable(args.type)) {
args.props["tags"] = { ...args.props["tags"], ...autoTags };
return { props: args.props, opts: args.opts };
}
return undefined;
});
}
and then you can use those tags by calling the function:
registerAutoTags({
  "user:Project": pulumi.getProject(),
  "user:Stack": pulumi.getStack(),
  "user:Cost Center": config.require("costCenter"),
});
There's more information on this (albeit for AWS, not Azure) here. You can find a list of Azure resources that support tags here.

Terraform data.google_container_cluster.cluster.endpoint is null

I would like to use Terraform to manage the configuration of a service on a GKE cluster that is defined in a separate Terraform script.
I created the configuration using kubernetes_secret, something like the below:
resource "kubernetes_secret" "service_secret" {
metadata {
name = "my-secret"
namespace = "my-namespace"
}
data = {
username = "admin"
password = "P4ssw0rd"
}
}
And I also put in this Google client configuration to configure the Kubernetes provider:
data "google_client_config" "current" {
}
data "google_container_cluster" "cluster" {
name = "my-container"
location = "asia-southeast1"
zone = "asia-southeast1-a"
}
provider "kubernetes" {
host = "https://${data.google_container_cluster.cluster.endpoint}"
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate)
}
When I apply the Terraform configuration, it shows the error message below:
data.google_container_cluster.cluster.endpoint is null
Do I miss some steps here?
I just had the same/similar issue when trying to initialize the Kubernetes provider from a google_container_cluster data source; terraform show just displayed all null values for the data source attributes. The fix for me was to specify the project in the data source, e.g.:
data "google_container_cluster" "cluster" {
name = "my-container"
location = "asia-southeast1"
zone = "asia-southeast1-a"
project = "my-project"
}
https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/container_cluster#project
project - (Optional) The project in which the resource belongs. If it is not provided, the provider project is used.
In my case the google provider was pointing to a different project than the one containing the cluster I wanted to get info about.
In addition, you should be able to remove the zone attribute from that block: location should refer to the zone if it is a zonal cluster, or the region if it is a regional cluster, as in the sketch below.
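Putting both points together, the data source would look something like this (a sketch; the project ID is a placeholder):

data "google_container_cluster" "cluster" {
  name     = "my-container"
  # The zone for a zonal cluster, or the region for a regional cluster.
  location = "asia-southeast1-a"
  project  = "my-project"
}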
I had the same problem when I wanted to create the cluster from scratch, so no values were populated yet.
I solved it with the depends_on argument in the google_container_cluster data block:
data "google_container_cluster" "my_cluster" {
name = var.project_name
depends_on = [
google_container_node_pool.e2-small
]
}
In my case, the data source was waiting on the nodes to be deployed; the Kubernetes provider starts working after the nodes are provisioned and the values are populated.
My problem: the name wasn't right, so the data source returned nothing.
I tried:
data "google_container_cluster" "my_cluster" {
name = "gke_my-project-name_us-central1_my-cluster-name"
location = "us-central1"
}
But that name is wrong; it needs to be just my-cluster-name. And most importantly, the data source didn't say that this cluster didn't exist --sad_beeps--
Solution (in my case): double-check that the cluster exists:
data "google_container_cluster" "my_cluster" {
name = "my-cluster-name"
location = "us-central1"
}
Hope it will save time for someone else.

Allow user to update/delete certain policies(Hashicorp Vault)

Description
I am using HashiCorp's Vault, version 1.7.0, free version.
I would like to allow a certain range of policies that a user can assign to or remove from a group. That way, they can add or remove user entities in the group from the UI.
What I have done
Below is the overall policy file, written in blocks.
{
  capabilities = ["list"]
}

# To show the identity endpoint from the UI
path "/identity/*" {
  capabilities = ["list"]
}

# Policies that I would like the user to have the ability to
# assign to the group.
path "/sys/policies/acl/it_team_leader" {
  capabilities = ["read", "update", "list"]
}

path "sys/policies/acl/it_user" {
  capabilities = ["read", "update", "list"]
}

path "sys/policies/acl/ui_settings" {
  capabilities = ["read", "update", "list"]
}

path "sys/policies/acl/personal_storage" {
  capabilities = ["read", "update", "list"]
}

# Group ID that the user has full access to
path "/identity/group/id/2c97485a-754f-657a-5a8b-62b08a3ce8cb" {
  capabilities = ["sudo", "read", "update", "create", "list"]
}
What is the issue
Let's assume that I have a super-privileged policy that provides access to the whole secrets engine.
From the UI, I am able to assign that super-privileged policy to the group, basically allowing a restricted user to assign this super policy to the whole group.
When I extended the policy with:

path "sys/policies/acl/super-privileged" {
  capabilities = ["deny"]
}

it just restricts the policy from being read in the UI.
Appending the group path with allowed_parameters, such as:

path "/identity/group/id/2c97485a-754f-657a-5a8b-62b08a3ce8cb" {
  capabilities = ["sudo", "read", "update", "create", "list"]
  allowed_parameters = {
    "policies" = ["it_user", "it_team_leader", etc]
  }
}

I receive a permission denied error (403).
Appending with denied_parameters:

path "/identity/group/id/2c97485a-754f-657a-5a8b-62b08a3ce8cb" {
  capabilities = ["sudo", "read", "update", "create", "list"]
  denied_parameters = {
    "policies" = ["super-policy"]
  }
}

is not functioning, and I am still allowed to assign the super policy.
I also tried wildcards, with the same result.
Is it even possible to restrict one policy, or a range of policies, that can be assigned from the Vault UI?
Thanks in advance if you made it this far.
Found the solution: to restrict a user to updating certain policies, the allowed_parameters field should encapsulate the policy names in a nested list, plus an asterisk key with an empty list.
Note: the order of the policies assigned from the UI must comply with the order written in the .hcl file.
path "/identity/group/id/2c97485a-754f-657a-5a8b-62b08a3ce8cb"
{
capabilities = ["sudo","read","update","create","list"]
allowed_parameters = {
"policies" = [["policy1","policy2","policy3"]]
"*" = []
}
}

How can I attach multiple pre-existing AWS managed policies to a role?

I want to associate existing policies in AWS with a role; I am using the Terraform tool.
I want to associate these policies, which were previously set up with the AWS CloudFormation tool:
AWSCodeCommitFullAccess
AWSCodeBuildAdminAccess
AWSCodeDeployFullAccess
AWSCodePipelineFullAccess
AWSElasticBeanstalkFullAccess
I tried with the attachment:
data "aws_iam_policy" "attach-policy" {
arn = ["arn:aws:iam::aws:policy/AWSCodeCommitFullAccess", "arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess", "arn:aws:iam::aws:policy/AWSCodeDeployFullAccess", "arn:aws:iam::aws:policy/AWSCodePipelineFullAccess"]
}
resource "aws_iam_role_policy_attachment" "tc-role-policy-attach" {
role = "${aws_iam_role.toolchain-role.name}"
policy_arn = "${data.aws_iam_policy.attach-policy.arn}"
}
You are going in the right direction with the Terraform resource aws_iam_role_policy_attachment, but it needs some adjustment.
AWS managed policies' ARNs already exist in the system, so you can reference them directly without a data source. For example, if you need to attach the first managed policy to an IAM role:
resource "aws_iam_role_policy_attachment" "test-policy-AWSCodeCommitFullAccess" {
policy_arn = "arn:aws:iam::aws:policy/AWSCodeCommitFullAccess"
role = "${aws_iam_role.toolchain-role.name}"
}
You can add the other managed policies one by one.
If you want to attach them all together, you can try the code below:
variable "managed_policies" {
default = ["arn:aws:iam::aws:policy/AWSCodeCommitFullAccess",
"arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess",
"arn:aws:iam::aws:policy/AWSCodeDeployFullAccess",
"arn:aws:iam::aws:policy/AWSCodePipelineFullAccess",
]
}
resource "aws_iam_role_policy_attachment" "tc-role-policy-attach" {
count = "${length(var.managed_policies)}"
policy_arn = "${element(var.managed_policies, count.index)}"
role = "${aws_iam_role.toolchain-role.name}"
}