az cli/bicep targeted/single module deploys? - azure-devops

Is there an az/bicep equivalent to terraform apply -target=module.my_app.module.something?
Given a root bicep file:
module app '../../../projects/my/application/app.bicep' = {
  name: 'app'
}

module test '../../../projects/my/application/test.bicep' = {
  name: 'test'
}

module sample '../../../projects/my/application/sample.bicep' = {
  name: 'sample'
  params: {
    p1: 'p1'
  }
}
Can I provision just the sample module somehow?
I could do something like: az deployment sub create --template-file ../../../projects/my/application/sample.bicep -l germanywestcentral
But this is not really the same thing, because it bypasses the params that the root module (which provides environment separation) passes down to the actual module.

The command you have:
az deployment sub create --template-file ../../../projects/my/application/sample.bicep -l germanywestcentral will work just fine; you just pass the parameters you would normally pass to root.bicep that are needed by that module (e.g. p1).
If you have params that are created or manipulated in root.bicep, you'd have to decide how to marshal those values manually.
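For example, a minimal sketch of deploying just that one file while supplying the parameter it needs on the command line (the value for p1 is illustrative):

az deployment sub create \
  --template-file ../../../projects/my/application/sample.bicep \
  -l germanywestcentral \
  --parameters p1='p1'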

Related

mock outputs in Terragrunt dependency

I want to use Terragrunt to deploy this example: https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/main/examples/complete-kubernetes-addons/main.tf
So far, I was able to create the VPC/EKS resource without a problem, I separated each module into a different module directory, and everything worked as expected.
When I tried to do the same for the kubernetes-addons module, I hit an issue with the data source trying to call the cluster and failing, since the cluster hadn't been created at that point.
Here's my terragrunt.hcl which I'm trying to execute for this specific module:
...
terraform {
  source = "git::git@github.com:aws-ia/terraform-aws-eks-blueprints.git//modules/kubernetes-addons?ref=v4.6.1"
}

locals {
  # Extract needed variables for reuse
  cluster_version = "${include.envcommon.locals.cluster_version}"
  name            = "${include.envcommon.locals.name}"
}

dependency "eks" {
  config_path = "../eks"
  mock_outputs = {
    eks_cluster_endpoint = "https://000000000000.gr7.eu-west-3.eks.amazonaws.com"
    eks_oidc_provider    = "something"
    eks_cluster_id       = "something"
  }
}

inputs = {
  eks_cluster_id       = dependency.eks.outputs.cluster_id
  eks_cluster_endpoint = dependency.eks.outputs.eks_cluster_endpoint
  eks_oidc_provider    = dependency.eks.outputs.eks_oidc_provider
  eks_cluster_version  = local.cluster_version
  ...
}
The error that I'm getting here:
INFO[0035]
Error: error reading EKS Cluster (something): couldn't find resource

  with data.aws_eks_cluster.eks_cluster,
  on data.tf line 7, in data "aws_eks_cluster" "eks_cluster":
   7: data "aws_eks_cluster" "eks_cluster" {
The kubernetes-addons module deploys add-ons into an existing Kubernetes cluster. If you don't have a cluster running (and apparently you don't, since you're mocking the cluster_id output), the aws_eks_cluster data source can't find the cluster and you get this error.
You need to create the K8s cluster first, before you can start deploying the addons.
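If the goal is only to let validate/plan succeed before the cluster exists, one option is Terragrunt's documented mock_outputs_allowed_terraform_commands setting, so the mocks are never used for an apply; a sketch based on the dependency block above:

dependency "eks" {
  config_path = "../eks"

  # Mocks are only substituted for validate/plan; apply still requires real outputs from ../eks.
  mock_outputs_allowed_terraform_commands = ["validate", "plan"]
  mock_outputs = {
    eks_cluster_endpoint = "https://000000000000.gr7.eu-west-3.eks.amazonaws.com"
    eks_oidc_provider    = "something"
    eks_cluster_id       = "something"
  }
}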

Azure bicep use key vault from different resource group

I have an Azure Key Vault (KV) that holds shared secrets and a cert that need to be pulled into different deployments.
E.g. DEV, TEST, UAT, and Production all have their own key vaults BUT need access to the shared KV for the wildcard SSL cert.
I've tried a number of approaches, but each has errors. I'm doing something similar for the KV within the deployment resource group without issues.
Is it possible to have this and then use it as a module? Something like this...
sharedKV.bicep
var kvResourceGroup = 'project-shared-rg'
var subscriptionId = subscription().subscriptionId
var name = 'project-shared-kv'

resource project_shared_kv 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
  name: name
  scope: resourceGroup(subscriptionId, kvResourceGroup)
}
And then uses like:
template.bicep
module shared_kv './sharedKeyVault/template.bicep' = {
  name: 'sharedKeyVault'
}

resource add_secret 'Microsoft.KeyVault/vaults/secrets@2021-06-01-preview' = {
  name: '${shared_kv.name}/mySecretKey'
  properties: {
    contentType: 'string'
    value: 'secretValue'
    attributes: {
      enabled: true
    }
  }
}
If you need to target a different resourceGroup (and/or sub) than the rest of the deployment, the module's scope property needs to target that RG/sub. e.g.
module shared_kv './sharedKeyVault/template.bicep' = {
  scope: resourceGroup(kvSubscription, kvResourceGroupName)
  name: 'sharedKeyVault'
  params: {
    subId: kvSubscription
    rg: kvResourceGroupName
    ...
  }
}
Ideally, the sub/rg for the KV would be passed in to the module rather than hardcoded (which you probably knew, but just in case...)
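To make that concrete, a minimal sketch of what ./sharedKeyVault/template.bicep could look like; the parameter name sharedKvName, the secret name, and the value are illustrative, and the subId/rg params from the snippet above are omitted here because the module's scope already targets the shared resource group:

// sharedKeyVault/template.bicep (hypothetical sketch)
param sharedKvName string = 'project-shared-kv'

// The module is deployed with scope: resourceGroup(kvSubscription, kvResourceGroupName),
// so this existing reference resolves inside the shared vault's resource group.
resource shared_kv 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
  name: sharedKvName
}

// Example: add a secret to the shared vault
resource add_secret 'Microsoft.KeyVault/vaults/secrets@2021-06-01-preview' = {
  parent: shared_kv
  name: 'mySecretKey'
  properties: {
    contentType: 'string'
    value: 'secretValue'
  }
}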

Retrieve a secret from keyvault in Bicep and use as input for Synapse Workspace creation

I want to do the following by using bicep:
Create a keyvault
Create a keyvault secret
Use this secret as the input for the creation of a Synapse Workspace (admin password)
I am using modules for creating all of the resources.
module keyVault 'modules/keyVault.bicep' = {
  scope: resourceGroup
  name: 'keyVault'
  params: {
    keyVaultName: keyVaultName
    location: location
    tenantID: subscription().tenantId
  }
}

module keyVaultSecret 'modules/keyVaultSecret.bicep' = {
  scope: resourceGroup
  name: 'keyVaultSecretSynapseSQLAdminPassword'
  params: {
    secretName: 'synapseSQLAdministratorLoginPassword'
    secretValue: synapseSqlAdministratorLoginPassword
    keyVaultName: keyVaultName
    keyVaultSecretName: '${keyVault.name}/synapseSQLAdministratorLoginPassword'
  }
}

module synapse 'modules/synapseWs.bicep' = {
  scope: resourceGroup
  name: 'synapse'
  params: {
    synapseWSName: synapseWSName
    synapseWSLocation: location
    defaultAccountUrl: storageAccount.outputs.accURL
    synapseSqlAdministratorLogin: synapseSqlAdministratorLogin
    synapseSqlAdministratorLoginPassword: keyVault.getSecret('keyVaultSecretSynapseSQLAdminPassword')
    managedResourceGroupName: '${environmentName}-cargo-${applicationName}-synapsemanaged-rg'
    sqlPoolName: sqlPoolName
    synapsePrivateLinkHubName: synapsePrivateLinkHubName
    synapsePrivateLinkHubLocation: location
  }
}
The getSecret function used in the line
synapseSqlAdministratorLoginPassword: keyVault.getSecret('keyVaultSecretSynapseSQLAdminPassword')
gives the error: "The type "module" does not contain function "getSecret"."
Apparently this function can only be used in resources. How could I do this in a different way?
Thanks
You have to reference the key vault as an existing resource in the Bicep template; you can't call that function on a module.
Create the key vault with the module.
Reference the existing key vault (the one you just created).
Use the getSecret function on the existing key vault reference.
https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/resource-declaration?tabs=azure-powershell#reference-existing-resources
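A minimal sketch of that pattern, assuming the synapseSqlAdministratorLoginPassword parameter in synapseWs.bicep is decorated with @secure() (getSecret can only feed secure module parameters); everything else mirrors the modules above:

// Reference the vault created by the keyVault module as an existing resource
resource kv 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
  scope: resourceGroup
  name: keyVaultName
}

module synapse 'modules/synapseWs.bicep' = {
  scope: resourceGroup
  name: 'synapse'
  params: {
    // getSecret is called on the resource reference with the secret's name
    synapseSqlAdministratorLoginPassword: kv.getSecret('synapseSQLAdministratorLoginPassword')
    // ...other params as above
  }
  dependsOn: [
    keyVaultSecret
  ]
}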

terraform plan recreates resources on every run with terraform cloud backend

I am running into an issue where terraform plan recreates resources that don't need to be recreated every run. This is an issue because some of the steps depend on those resources being available, and since they are recreated with each run, the script fails to complete.
My setup is Github Actions, Linode LKE, Terraform Cloud.
My main.tf file looks like this:
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "=1.16.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "=2.1.0"
    }
  }
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "MY-ORG-HERE"
    workspaces {
      name = "MY-WORKSPACE-HERE"
    }
  }
}

provider "linode" {
}

provider "helm" {
  debug = true
  kubernetes {
    config_path = "${local_file.kubeconfig.filename}"
  }
}

resource "linode_lke_cluster" "lke_cluster" {
  label       = "MY-LABEL-HERE"
  k8s_version = "1.21"
  region      = "us-central"
  pool {
    type  = "g6-standard-2"
    count = 3
  }
}
and my outputs.tf file
resource "local_file" "kubeconfig" {
depends_on = [linode_lke_cluster.lke_cluster]
filename = "kube-config"
# filename = "${path.cwd}/kubeconfig"
content = base64decode(linode_lke_cluster.lke_cluster.kubeconfig)
}
resource "helm_release" "ingress-nginx" {
# depends_on = [local_file.kubeconfig]
depends_on = [linode_lke_cluster.lke_cluster, local_file.kubeconfig]
name = "ingress"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
}
resource "null_resource" "custom" {
depends_on = [helm_release.ingress-nginx]
# change trigger to run every time
triggers = {
build_number = "${timestamp()}"
}
# download kubectl
provisioner "local-exec" {
command = "curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl"
}
# apply changes
provisioner "local-exec" {
command = "./kubectl apply -f ./k8s/ --kubeconfig ${local_file.kubeconfig.filename}"
}
}
In GitHub Actions, I'm running these steps:
jobs:
  init-terraform:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./terraform
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          ref: 'privatebeta-kubes'
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          cli_config_credentials_token: ${{ secrets.TERRAFORM_API_TOKEN }}
      - name: Terraform Init
        run: terraform init
      - name: Terraform Format Check
        run: terraform fmt -check -v
      - name: List terraform state
        run: terraform state list
      - name: Terraform Plan
        run: terraform plan
        id: plan
        env:
          LINODE_TOKEN: ${{ secrets.LINODE_TOKEN }}
When I look at the results of terraform state list I can see my resources:
Run terraform state list
terraform state list
shell: /usr/bin/bash -e {0}
env:
TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin state list
helm_release.ingress-nginx
linode_lke_cluster.lke_cluster
local_file.kubeconfig
null_resource.custom
But my terraform plan fails and the issue seems to stem from the fact that those resources try to get recreated.
Run terraform plan
terraform plan
shell: /usr/bin/bash -e {0}
env:
TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
LINODE_TOKEN: ***
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin plan
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...
Waiting for the plan to start...
Terraform v1.0.2
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
linode_lke_cluster.lke_cluster: Refreshing state... [id=31946]
local_file.kubeconfig: Refreshing state... [id=fbb5520298c7c824a8069397ef179e1bc971adde]
helm_release.ingress-nginx: Refreshing state... [id=ingress]
╷
│ Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory
│
│ with helm_release.ingress-nginx,
│ on outputs.tf line 8, in resource "helm_release" "ingress-nginx":
│ 8: resource "helm_release" "ingress-nginx" {
Is there a way to tell terraform it doesn't need to recreate those resources?
Regarding the actual error shown, Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory, which references your outputs file, I found this issue that could help with your specific error: https://github.com/hashicorp/terraform-provider-helm/issues/418
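A common workaround for that error with the remote backend is to stop pointing the helm provider at a file on disk (the remote run never sees your local kube-config) and instead derive credentials from the cluster resource. A sketch, assuming the usual layout of the kubeconfig returned by linode_lke_cluster; the attribute paths are assumptions to check against your own kubeconfig:

locals {
  # Decode the base64 kubeconfig exposed by the LKE cluster resource
  lke_kubeconfig = yamldecode(base64decode(linode_lke_cluster.lke_cluster.kubeconfig))
}

provider "helm" {
  kubernetes {
    host                   = local.lke_kubeconfig.clusters[0].cluster.server
    token                  = local.lke_kubeconfig.users[0].user.token
    cluster_ca_certificate = base64decode(local.lke_kubeconfig.clusters[0].cluster["certificate-authority-data"])
  }
}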
One other thing looks strange to me: why does your outputs.tf declare resources rather than outputs? Shouldn't your outputs.tf look like this?
output "local_file_kubeconfig" {
  value = "reference.to.resource"
}
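To make the placeholder concrete, a hypothetical output exposing the generated kubeconfig path (the output name is an assumption):

output "kubeconfig_path" {
  value = local_file.kubeconfig.filename
}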
Also I see your state file / backend config looks like it's properly configured.
I recommend logging into your terraform cloud account to verify that the workspace is indeed there, as expected. It's the state file that tells terraform not to re-create the resources it manages.
If the resources are already there and terraform is trying to re-create them, that could indicate that those resources were created prior to using terraform or possibly within another terraform cloud workspace or plan.
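If that is what happened, one way to adopt an already-existing resource instead of letting terraform re-create it is terraform import; a sketch using the cluster ID that appears in your refresh output (verify the resource address and ID, and that your Terraform Cloud workspace allows CLI-driven import, before running it):

terraform import linode_lke_cluster.lke_cluster 31946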
Did you end up renaming your backend workspace at any point with this plan? I'm referring to your main.tf file, this part where it says MY-WORKSPACE-HERE:
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "=1.16.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "=2.1.0"
    }
  }
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "MY-ORG-HERE"
    workspaces {
      name = "MY-WORKSPACE-HERE"
    }
  }
}
Unfortunately I am not a Kubernetes expert, so possibly more help is needed there.

terraform-kubernetes-provider how to create secret from file?

I'm using the terraform kubernetes-provider and I'd like to translate something like this kubectl command into TF:
kubectl create secret generic my-secret --from-file mysecret.json
It seems, however, that the secret resource's data field expects only a TF map.
I've tried something like
data "template_file" "my-secret" {
template = "${file("${path.module}/my-secret.json")}"
}
resource "kubernetes_secret" "sgw-config" {
metadata {
name = "my-secret"
}
type = "Opaque"
data = "{data.template_file.my-secret.template}"
}
But it complains that this is not a map. So, I can do something like this:
data = {
  "my-secret.json" = "${data.template_file.my-secret.template}"
}
But this will write the secret with a top-level field named my-secret.json and when I volume mount it, it won't work with other resources.
What is the trick here?
As long as the file is UTF-8 encoded, you can use something like this:
resource "kubernetes_secret" "some-secret" {
metadata {
name = "some-secret"
namespace = kubernetes_namespace.some-ns.metadata.0.name
labels = {
"sensitive" = "true"
"app" = "my-app"
}
}
data = {
"file.txt" = file("${path.cwd}/your/relative/path/to/file.txt")
}
}
If the file is a binary one, you will get an error like:
Call to function "file" failed: contents of
/your/relative/path/to/file.txt are not valid UTF-8; use the
filebase64 function to obtain the Base64 encoded contents or the other
file functions (e.g. filemd5, filesha256) to obtain file hashing
results instead.
I tried encoding the file in base64 but then the problem is that the resulting text will be re-encoded in base64 by the provider. So I guess there is no solution for binary files at the moment...
I'll edit with what I find next for binaries.
This might be a bit off-topic, but I've been facing a similar problem, except that the file might not be present, in which case terraform plan/apply fails.
To be exact: I needed to duplicate a secret from one namespace to another one.
I achieved that by using the hashicorp/external provider.
The steps are pretty simple:
Load the data by calling an external program
Refer to that data in the kubernetes_secret resource
The program should accept (and process) JSON on stdin and produce valid JSON on stdout in response to the parameters passed in via the stdin JSON.
Example shell script:
#!/bin/bash
set -e
/bin/echo -n '{ "token": "'
kubectl get -n consul secrets/hashicorp-consul-bootstrap-acl-token --template={{.data.token}}
/bin/echo -n '"}'
Terraform source:
data "external" "token" {
program = ["sh", "${path.module}/consul-token.sh"]
}
resource "kubernetes_secret" "consul-token" {
depends_on = [data.external.token]
metadata {
name = "consul-token"
namespace = "app"
}
data = {
token = base64decode(data.external.token.result.token)
}
}
and requirements:
terraform {
  required_providers {
    external = {
      source  = "hashicorp/external"
      version = ">= 2.0.0"
    }
  }
}
Just use
https://www.terraform.io/docs/providers/kubernetes/r/config_map.html#binary_data
resource "kubernetes_config_map" "example" {
metadata {
name = "my-config"
}
binary_data = {
"my_payload.bin" = "${filebase64("${path.module}/my_payload.bin")}"
}
}
I believe you can use the binary_data attribute in the secret now.
e.g.
binary_data = {
  "my_payload.bin" = "${filebase64("${path.module}/my_payload.bin")}"
}
reference:
https://github.com/hashicorp/terraform-provider-kubernetes/pull/1228
https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret#binary_data
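Put together, a minimal sketch of a secret carrying a binary payload (resource and file names are illustrative):

resource "kubernetes_secret" "binary-example" {
  metadata {
    name = "my-binary-secret"
  }
  type = "Opaque"

  # binary_data expects base64-encoded values; filebase64 does the encoding
  binary_data = {
    "my_payload.bin" = filebase64("${path.module}/my_payload.bin")
  }
}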
Basically you need to provide a map, like this:
resource "kubernetes_secret" "sgw-config" {
metadata {
name = "my-secret"
}
type = "Opaque"
data {
"key1" = "value1"
"key2" = "value2"
}
}
You can refer to your internal variables using:
resource "kubernetes_secret" "sgw-config" {
metadata {
name = "my-secret"
}
type = "Opaque"
data {
"USERNAME" = "${var.some_variable}"
"PASSWORD" = "${random_string.root_password.result}"
}
}
It seems if you run the command kubectl create secret generic my-secret --from-file mysecret.json
and then
$ kubectl get secrets my-secret -o yaml
apiVersion: v1
data:
  my-secret.json: ewogICA.....
kind: Secret
metadata:
  creationTimestamp: "2019-03-25T18:20:43Z"
  name: my-secret
  namespace: default
  resourceVersion: "67026"
  selfLink: /api/v1/namespaces/default/secrets/my-secret
  uid: b397a29c-4f2a-11e9-9806-000c290425d0
type: Opaque
it stores the data the same way, with the filename as the single key. When I mount this in a volume/volumeMount, it works as expected. I was afraid it wouldn't, but when the secret is created with the --from-file argument, this is exactly how it is stored.
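For completeness, a sketch of mounting such a secret so it appears as a file named my-secret.json inside the container (pod name, image, and mount path are illustrative; the secret reference assumes the kubernetes_secret.sgw-config resource from earlier):

resource "kubernetes_pod" "secret_consumer" {
  metadata {
    name = "secret-consumer"
  }
  spec {
    container {
      name  = "app"
      image = "nginx:1.21"

      # The secret's key becomes the file /etc/config/my-secret.json
      volume_mount {
        name       = "secret-vol"
        mount_path = "/etc/config"
        read_only  = true
      }
    }
    volume {
      name = "secret-vol"
      secret {
        secret_name = kubernetes_secret.sgw-config.metadata.0.name
      }
    }
  }
}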