I'm able to install Grafana using the stable/grafana chart with Terraform and the Helm provider. I'm trying to configure Grafana with a new grafana.ini file, which should be possible using a set block; however, it doesn't appear to pick up the configuration at all.
I've also tried using the Helm release resource's values key to merge in the same config in YAML format (with a top-level grafana.ini key), also with no success.
What I'm trying to achieve is a file containing my config, in INI or YAML format, passed to the grafana Helm chart so I can configure Grafana correctly using Terraform (ultimately I need to configure OAuth providers via the config).
Relevant config snippets below.
Chart https://github.com/helm/charts/tree/master/stable/grafana
Terraform v0.12.3
provider.helm v0.10.2
provider.kubernetes v1.8.0
grafana.ini
[security]
admin_user = username
main.tf (excerpt)
resource "helm_release" "grafana" {
chart = "stable/grafana"
name = "grafana"
set {
name = "grafana.ini"
value = file("grafana.ini")
}
}
I eventually found the correct way of merging the values key - it turns out (no surprise) I had the format of grafana.ini wrong when converting to YAML. Here's the working config:
config.yaml
grafana.ini:
  default:
    instance_name: my-server
  auth.basic:
    enabled: true
main.tf
resource "helm_release" "grafana" {
chart = "stable/grafana"
name = "grafana"
values = [file("config.yaml")]
}
Related
I have a Terraform controller for Flux running with a GitHub provider; however, it seems to be picking up the wrong Terraform state, so it keeps trying to recreate the resources again and again (and fails because they already exist).
This is how it is configured:
apiVersion: infra.contrib.fluxcd.io/v1alpha1
kind: Terraform
metadata:
  name: saas-github
  namespace: flux-system
spec:
  interval: 2h
  approvePlan: "auto"
  workspace: "prod"
  backendConfig:
    customConfiguration: |
      backend "s3" {
        bucket         = "my-bucket"
        key            = "my-key"
        region         = "eu-west-1"
        dynamodb_table = "state-lock"
        role_arn       = "arn:aws:iam::11111:role/my-role"
        encrypt        = true
      }
  path: ./terraform/saas/github
  runnerPodTemplate:
    metadata:
      annotations:
        iam.amazonaws.com/role: pod-role
  sourceRef:
    kind: GitRepository
    name: infrastructure
    namespace: flux-system
Running terraform init locally with a state.config file that has a similar/same configuration works fine and detects the current state properly:
bucket = "my-bucket"
key = "infrastructure-github"
region = "eu-west-1"
dynamodb_table = "state-lock"
role_arn = "arn:aws:iam::111111:role/my-role"
encrypt = true
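For reference, the local run is just an init with that partial backend config file passed in:
terraform init -backend-config=state.config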
Reading the documentation I also saw a configPath that could be used, so I tried to point it to the state file, but then I got this error:
Failed to initialize kubernetes configuration: error loading config file couldn't get version/kind; json parse error
Which is weird; it looks like it tries to load a Kubernetes configuration, not a Terraform one, or at least it expects a JSON file, which is not the case for my state configuration.
I'm running Terraform 1.3.1 both locally and on the tf-runner pod.
On the runner pod I can see the generated_backend_config.tf, it contains the same configuration, and .terraform/terraform.tfstate also points to the bucket.
The only suspicious thing in the logs that I could find is this:
- Finding latest version of hashicorp/github...
- Finding integrations/github versions matching "~> 4.0"...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/github v5.9.1...
- Installed hashicorp/github v5.9.1 (signed by HashiCorp)
- Installing integrations/github v4.31.0...
- Installed integrations/github v4.31.0 (signed by a HashiCorp partner, key ID 38027F80D7FD5FB2)
- Installing hashicorp/aws v4.41.0...
- Installed hashicorp/aws v4.41.0 (signed by HashiCorp)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Warning: Additional provider information from registry
The remote registry returned warnings for
registry.terraform.io/hashicorp/github:
- For users on Terraform 0.13 or greater, this provider has moved to
integrations/github. Please update your source in required_providers.
It seems that it installs two GitHub providers, one from hashicorp and one from integrations. I have changed Terraform/provider versions during development, and I have removed any reference to the hashicorp one, but this warning still happens.
However, it also happens locally, where it reads the correct state, so I don't think it is related.
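For reference, a declaration that pins the GitHub provider to the integrations namespace looks roughly like this (a sketch; the version constraint is illustrative and matches what the log shows being resolved):
terraform {
  required_providers {
    github = {
      source  = "integrations/github"
      version = "~> 4.0"
    }
  }
}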
I'm installing the Airflow Helm chart and I want to use Vault as the secrets backend, so Airflow gets the database connection URI from Vault secrets. Has anyone succeeded in configuring it this way? Please help. I couldn't find how to pass the Vault namespace to Airflow and didn't understand the documentation.
This can be accomplished through a customized airflow.cfg configuration file. According to the documentation, for the HashiCorp Vault secrets backend you need to add the following to your config:
[secrets]
backend = airflow.providers.hashicorp.secrets.vault.VaultBackend
backend_kwargs = {
"connections_path": "connections",
"url": "http://127.0.0.1:8200",
"mount_point": "airflow"
}
You can read how to set these parameters here in the docs.
Since you are installing using Helm charts, you need to be able to inject a custom configuration into the Airflow installation. This is provided by the Airflow Helm chart's values.yaml file, which you can see here.
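For example, with the official apache-airflow chart the same [secrets] settings can be injected through the chart's config mapping in your values override. A hedged sketch, assuming that chart and the Vault address from above:
# values-override.yaml (sketch)
config:
  secrets:
    backend: airflow.providers.hashicorp.secrets.vault.VaultBackend
    backend_kwargs: '{"connections_path": "connections", "url": "http://127.0.0.1:8200", "mount_point": "airflow"}'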
I hope this is helpful.
I have a StatefulSet created using the Terraform Helm provider. I need to update the value of an attribute (serviceName) in the StatefulSet, but I keep getting the following error:
Error: failed to replace object: StatefulSet.apps "value" is invalid: spec: Forbidden:
updates to statefulset spec for fields other than 'replicas', 'template', and
'updateStrategy' are forbidden
The error is pretty descriptive and I understand that the serviceName property can't be changed, but then how do I update this property? I am totally fine with having downtime and also letting Helm delete all the old pods and create new ones.
I have tried setting the force_update and recreate_pods properties to true on my helm_release with no luck. Manually deleting the old Helm release is not an option for me.
I maintain the Kustomization provider and, unlike the Helm integration in Terraform, it tracks each individual Kubernetes resource in the Terraform state. Therefore, it will show changes to the actual Kubernetes resources in the plan. And, most importantly for your issue here, it will also generate destroy-and-recreate plans for cases where you have to change immutable fields.
It's a bit of a migration effort, but you can make it easier by using the helm template command to write the YAML to disk and then pointing the Kustomization provider at that YAML.
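Rendering the chart to disk can be done with something like the following (release and chart names are placeholders; the output path matches the module example below):
helm template my-release ./my-chart -f values.yaml > path/to/helm/template/output.yaml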
As part of Kubestack, the Terraform framework for AKS, EKS and GKE that I maintain, I also provide a convenience module.
You could use it like this, to have it apply the output of helm template:
module "example_stateful_set" {
source = "kbst.xyz/catalog/custom-manifests/kustomization"
version = "0.1.0"
configuration_base_key = "default"
configuration = {
default = {
resources = [
"${path.root}/path/to/helm/template/output.yaml"
]
}
}
}
Finally, you will have to import the existing Kubernetes resources into the Terraform state, so that the provider can start managing the existing resources.
Goal: Deploy a Helm chart with Terraform, targeting an Azure Kubernetes cluster. The chart has to be pulled from Azure Container Registry.
Steps followed to push the helm chart to ACR:
helm registry login command:
echo $spPassword | helm registry login <acrname>.azurecr.io --username <serviceprincipal appid> --password-stdin
Save the chart locally.
From within the directory that has the Chart.yaml, values.yaml and other dirs:
helm chart save . <acrname>.azurecr.io/<chartname>:<chartversion>
Push the chart to Azure Container Registry:
helm chart push <acrname>.azurecr.io/<chartname>:<chartversion>
I was able to pull the chart from the registry, export it locally again, and install it manually without problems.
Following that, I proceeded with the Terraform-based deployment approach. Below is the code snippet used:
provider "azurerm" {
version = "~>2.0"
features {}
}
data "azurerm_kubernetes_cluster" "credentials" {
name = "k8stest"
resource_group_name = "azure-k8stest"
}
provider "helm" {
kubernetes {
host = data.azurerm_kubernetes_cluster.credentials.kube_config.0.host
client_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.client_certificate)
client_key = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.client_key)
cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.credentials.kube_config.0.cluster_ca_certificate)
}
}
resource "helm_release" "test-tf" {
name = "test-twinkle"
repository = "https://<myacrname>.azurecr.io/helm/v1/repo"
repository_username = <serviceprincipal appid>
repository_password = <serviceprincipal password>
chart = <chart name with which it was pushed to ACR>
version = <chart version with which it was pushed to ACR>
namespace = "test-dev"
create_namespace = "true"
}
Error:
Chart version not found in repository.
I've had the same issue and the documentation isn't helpful at all currently. I did find another answer on Stack Overflow which somewhat sorts the issue out, in a hacky sort of way, by using a null_resource. I think OCI support for ACR might not be supported properly yet, so you have to choose between using the deprecated repository API or Helm 3's OCI support, which is currently experimental for ACR. I have not found another way to add the repo URL :(
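For completeness, the null_resource workaround looks roughly like this. A hedged sketch, not a drop-in solution: it assumes Helm 3 with HELM_EXPERIMENTAL_OCI=1, the placeholder names used earlier, and a hypothetical chart_version variable used only to trigger re-runs:
resource "null_resource" "helm_oci_deploy" {
  # Hypothetical trigger so the provisioner re-runs when the chart version changes.
  triggers = {
    chart_version = var.chart_version
  }

  provisioner "local-exec" {
    command = <<-EOT
      export HELM_EXPERIMENTAL_OCI=1
      echo $SP_PASSWORD | helm registry login <acrname>.azurecr.io --username <serviceprincipal appid> --password-stdin
      helm chart pull <acrname>.azurecr.io/<chartname>:<chartversion>
      helm chart export <acrname>.azurecr.io/<chartname>:<chartversion> --destination ./chart-export
      helm upgrade --install test-twinkle ./chart-export/<chartname> --namespace test-dev --create-namespace
    EOT
  }
}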
We are using the Prometheus Operator chart.
Currently, I'm creating my own values.yaml in which I override the default values from the chart, like:
helm install po -f values.yaml stable/prometheus-operator -n po
There are Grafana properties which I need to modify, as the operator comes with Grafana properties:
https://github.com/helm/charts/blob/master/stable/prometheus-operator/values.yaml#L486
However, I want to modify properties that are not in the values.yaml of the prometheus-operator chart and are found here:
https://github.com/helm/charts/blob/master/stable/grafana/values.yaml#L422 (the grafana chart is referenced as a dependency of the prometheus-operator chart)
My question is: assuming I want to modify the client_id, what is the recommended way to do it?
https://github.com/helm/charts/blob/master/stable/grafana/values.yaml#L431
You can override the values of dependent charts by using the name of the dependency (which for Grafana in the prometheus-operator chart can be found here) as another key within your values.yaml.
In this case, it is just grafana, so to override it in your values.yaml, do it like this:
# ... config of the original prometheus chart
# overwrite grafana's yaml by using the dependency name
grafana:
  grafana.ini:
    auth.github:
      client_id: 'what you need to put here'
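Then apply the combined file the same way as before (a sketch, reusing the release name and namespace from the question):
helm upgrade po stable/prometheus-operator -f values.yaml -n po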