I have a helm release file test-helm-release.yaml as given below.
apiVersion: helm.toolkit.gitops.io/v2beta1
kind: HelmRelease
metadata:
  name: "test"
  namespace: "test-system"
spec:
  chart:
    spec:
      chart: "test-environment"
      version: "0.1.10"
  values:
    key1: "value1"
    key1: "value1"
    key1: "value1"
    key1: "value1"
  gitRepository:
    url: https://github.com/test-eng/test.git
  helmRepositories:
    - name: testplatform
      url: https://test-platform/charts
While creating the Helm release I can pass the values from the above file to the new release using the commands below:
chart=$(yq '.spec.chart.spec.chart' test-helm-release.yaml)
version=$(yq '.spec.chart.spec.version' test-helm-release.yaml)
yq '.spec.values' test-helm-release.yaml | helm upgrade --install --values - --version "$version" --namespace test-system --create-namespace test-platform "helm-repo/$chart"
The above works perfectly and I'm able to pass the values to the Helm release using the "yq" command. How can I do the same with the Terraform "helm_release" resource and the GitHub repository file data source given below?
data "github_repository_file" "test-platform" {
repository = "test-platform"
branch = "test"
file = "internal/default/test-helm-release.yaml"
}
resource "helm_release" "test-platform" {
name = "test-platform"
repository = "https://test-platform/charts"
chart = "test-environment"
namespace = "test-system"
create_namespace = true
timeout = 800
lifecycle {
ignore_changes = all
}
}
Note
I cannot use "set" because I want to fetch the values from test-helm-release.yaml dynamically. Any idea how I could fetch .spec.values alone using the templatefile function, or a different way?
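A hedged sketch of one way this could work, assuming the GitHub provider's github_repository_file data source exposes the raw file through its content attribute (the local name release_manifest is just illustrative): decode the YAML with yamldecode, then hand .spec.values back to the provider with yamlencode, which mirrors the yq pipeline above.
# data source from the question, unchanged
data "github_repository_file" "test-platform" {
  repository = "test-platform"
  branch     = "test"
  file       = "internal/default/test-helm-release.yaml"
}

locals {
  # parsed copy of test-helm-release.yaml fetched from GitHub
  release_manifest = yamldecode(data.github_repository_file.test-platform.content)
}

resource "helm_release" "test-platform" {
  name             = "test-platform"
  repository       = "https://test-platform/charts"
  chart            = local.release_manifest.spec.chart.spec.chart
  version          = local.release_manifest.spec.chart.spec.version
  namespace        = "test-system"
  create_namespace = true
  timeout          = 800

  # equivalent of: yq '.spec.values' test-helm-release.yaml | helm upgrade --values -
  values = [yamlencode(local.release_manifest.spec.values)]
}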
Related
I'm installing a helm chart which creates a CRD and then I want to instantiate the CRD defined in the helm chart. What's the correct way to declare a dependency between them so that terraform doesn't try to create the CRD until after the helm chart has finished installing?
new helm.Release(this, "doppler-kubernetes-operator-helm-chart", {
  chart: "doppler-kubernetes-operator",
  name: "doppler",
  repository: "https://helm.doppler.com",
  version: "1.2.0"
})
const dopplerOperatingSystemNamespace = "doppler-operator-system";

// create a secret referenced by the CRD
const dopplerApiServerProjectServiceTokenSecret = new kubernetes.Secret(this, "doppler-api-server-project-service-token", {
  metadata: {
    name: "doppler-api-server-project-service-token",
    namespace: dopplerOperatingSystemNamespace
  },
  data: {
    "serviceToken": "<some secret>"
  }
})
// Create the CRD <------------- how do I get this to depend on the helm chart?
new kubernetes.Manifest(this, "doppler-kubernetes-operator", {
  manifest: {
    apiVersion: "secrets.doppler.com/v1alpha1",
    kind: "DopplerSecret",
    metadata: {
      name: "doppler-secret-default",
      namespace: dopplerOperatingSystemNamespace,
    },
    spec: {
      tokenSecret: {
        name: dopplerApiServerProjectServiceTokenSecret.metadata.name
      },
      managedSecret: {
        name: "doppler-api-server-managed-secret",
        namespace: "default"
      }
    }
  }
})
In this case I would like to only attempt creating doppler-kubernetes-operator after the helm chart has finished installing.
Turns out I'm an idiot. I was looking for dependsOn (which I use with the AWS classes) and IntelliJ wasn't autocompleting it for the kubernetes Manifest, but I guess my cursor was in the wrong position...
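For anyone landing here, a minimal sketch of what that looks like, assuming the release is assigned to a variable (dopplerChart is an illustrative name, not from the original code):
const dopplerChart = new helm.Release(this, "doppler-kubernetes-operator-helm-chart", {
  chart: "doppler-kubernetes-operator",
  name: "doppler",
  repository: "https://helm.doppler.com",
  version: "1.2.0"
});

new kubernetes.Manifest(this, "doppler-kubernetes-operator", {
  // wait for the chart (which ships the CRD) before creating the custom resource
  dependsOn: [dopplerChart],
  manifest: {
    // ... same manifest as above ...
  }
});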
Hello, I am trying to insert a Kubernetes ConfigMap into the cert-manager Helm chart. The Helm chart is configured with a values.yaml.
The needed ConfigMap is already defined with the corresponding data inside the same namespace as my Helm Chart.
resource "helm_release" "certmanager" {
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
namespace = kubernetes_namespace.certmanager.metadata.0.name
version = local.helm_cert_manager_version
values = [
file("./config/cert-manager-values.yml")
]
}
# !! ConfigMap is defined with Terraform !! #
resource "kubernetes_config_map" "example" {
  metadata {
    name      = "test-config"
    namespace = kubernetes_namespace.certmanager.metadata.0.name
  }

  data = {
    "test_ca" = data.google_secret_manager_secret_version.test_crt.secret_data
  }
}
The data of the ConfigMap should be mounted to the path /etc/ssl/certs inside my Helm Chart.
I think down below is the right spot to mount the data?
...
volumes: []
volumeMounts: []
...
Do you have any idea how to mount that ConfigMap over /etc/ssl/certs within the cert-manager Chart?
Based on your question, there could be two things you could do:
Pre-populate the ./config/cert-manager-values.yml file with the values you want.
Use the templatefile [1] built-in function and pass the values dynamically.
In the first case, the changes to the file would probably have to be as follows:
...
volumes:
  - name: config-map-volume
    configMap:
      name: test-config
volumeMounts:
  - name: config-map-volume
    mountPath: /etc/ssl/certs
...
Make sure the indentation is correct since this is YAML. In the second case, you could do something like this in the helm_release resource:
resource "helm_release" "certmanager" {
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
namespace = kubernetes_namespace.certmanager.metadata.0.name
version = local.helm_cert_manager_version
values = [templatefile("./config/cert-manager-values.yml", {
config_map_name = kubernetes_config_map.example.metadata[0].name
volume_mount_path = "/etc/ssl/certs"
})]
}
In this case, you would also have to use template variables as placeholders inside of the cert-manager-values.yml file:
...
volumes:
  - name: config-map-volume
    configMap:
      name: ${config_map_name}
volumeMounts:
  - name: config-map-volume
    mountPath: ${volume_mount_path}
...
Note that the first option might not work as expected due to Terraform's parallelism, which tries to create as many resources as possible at the same time. If the ConfigMap is not created before the chart is applied, the release might fail.
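If you stay with the first option, one way to avoid that race is an explicit depends_on on the release. A sketch, reusing the resource names from above:
resource "helm_release" "certmanager" {
  name       = "cert-manager"
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"
  namespace  = kubernetes_namespace.certmanager.metadata.0.name
  version    = local.helm_cert_manager_version

  values = [
    file("./config/cert-manager-values.yml")
  ]

  # make sure the ConfigMap exists before the chart that mounts it is installed
  depends_on = [kubernetes_config_map.example]
}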
[1] https://www.terraform.io/language/functions/templatefile
I am trying to deploy the superset Helm chart with a customized image. There's no option to specify imagePullSecrets for the chart. I'm using k8s on DigitalOcean. I linked the repository and tested it using a basic deploy, and it "just works": the pods get the correct value for imagePullSecrets, and pulling just works.
However, when trying to install the Helm chart, the used imagePullSecret mysteriously gets a registry- prefix (there's already a -registry suffix, so it becomes registry-xxx-registry when it should just be xxx-registry). The values on the default service account are correct.
To illustrate, default service accounts for both namespaces:
$ kubectl get sa default -n test -o yaml
apiVersion: v1
imagePullSecrets:
- name: xxx-registry
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-04-14T14:26:41Z"
  name: default
  namespace: test
  resourceVersion: "13125"
  uid: xxx-xxx
secrets:
- name: default-token-9ggrm
$ kubectl get sa default -n superset -o yaml
apiVersion: v1
imagePullSecrets:
- name: xxx-registry
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-04-14T14:19:47Z"
  name: default
  namespace: superset
  resourceVersion: "12079"
  uid: xxx-xxx
secrets:
- name: default-token-wkdhv
LGTM, but after trying to install the helm chart (which fails because of registry auth), I can see that the wrong secret is set on the pods:
$ kubectl get -n superset pods -o json | jq '.items[] | {name: .spec.containers[0].name, sa: .spec.serviceAccount, secret: .spec.imagePullSecrets}'
{
  "name": "superset",
  "sa": "default",
  "secret": [
    {
      "name": "registry-xxx-registry"
    }
  ]
}
{
  "name": "xxx-superset-postgresql",
  "sa": "default",
  "secret": [
    {
      "name": "xxx-registry"
    }
  ]
}
{
  "name": "redis",
  "sa": "xxx-superset-redis",
  "secret": null
}
{
  "name": "superset",
  "sa": "default",
  "secret": [
    {
      "name": "registry-xxx-registry"
    }
  ]
}
{
  "name": "superset-init-db",
  "sa": "default",
  "secret": [
    {
      "name": "registry-xxx-registry"
    }
  ]
}
In the test namespace the secret name is correct. What's extra interesting is that postgres DOES have the correct secret name, and that comes in as a Helm dependency. So it seems like there's an issue in the superset Helm chart causing this, but no imagePullSecrets value is being set anywhere in the templates, and as you can see above, the pods are using the default service account.
I have already tried destroying and recreating the whole cluster, but the problem recurs.
I have tried version 0.5.10 (latest) of the Helm chart and version 0.3.5, both result in the same issue.
https://github.com/apache/superset/tree/dafc841e223c0f01092a2e116888a3304142e1b8/helm/superset
https://github.com/apache/superset/tree/1.3/helm/superset
I am trying to build ConfigMap data directly from values.yaml in Helm.
My values.yaml:
myconfiguration: |-
key1: >
{ "Project" : "This is config1 test"
}
key2 : >
{
"Project" : "This is config2 test"
}
And the configMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: poc-secrets-configmap-{{ .Release.Namespace }}
data:
{{.Values.myconfiguration | indent 1}}
But the data is empty when checked on the pod
Name:         poc-secrets-configmap-xxx
Namespace:    xxx
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: poc-secret-xxx
              meta.helm.sh/release-namespace: xxx

Data
====
Events:  <none>
Can anyone suggest what I am missing?
You are missing indentation in your values.yaml file; check YAML Multiline.
myconfiguration: |-
  key1: >
    { "Project" : "This is config1 test"
    }
  key2 : >
    {
    "Project" : "This is config2 test"
    }
Also, the suggested syntax for YAML files is to use 2 spaces for indentation, so you may want to change your configmap to {{.Values.myconfiguration | indent 2}}
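For illustration, with the values.yaml indented as above and indent 2 in the template, the rendered ConfigMap should come out roughly like this (release namespace shown as xxx):
apiVersion: v1
kind: ConfigMap
metadata:
  name: poc-secrets-configmap-xxx
data:
  key1: >
    { "Project" : "This is config1 test"
    }
  key2 : >
    {
    "Project" : "This is config2 test"
    }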
How to patch "db.password" in the following cm with kustomize?
ConfigMap:
apiVersion: v1
data:
  dbp.conf: |-
    {
      "db_properties": {
        "db.driver": "com.mysql.jdbc.Driver",
        "db.password": "123456",
        "db.user": "root"
      }
    }
kind: ConfigMap
metadata:
  labels: {}
  name: dbcm
You can create a new file with the updated values and use kubectl replace along with kubectl create:
kubectl create configmap NAME --from-file file.name -o yaml --dry-run | kubectl replace -f -
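Applied to this ConfigMap, that might look like the following, assuming the updated JSON is saved locally in a file named dbp.conf so the key name stays dbp.conf:
# re-render the ConfigMap from the updated file and swap it in place
kubectl create configmap dbcm --from-file=dbp.conf -o yaml --dry-run=client | kubectl replace -f -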
Create a placeholder in your file and replace it with the real data while applying kustomize.
Your code will be like this:
#!/bin/bash
sed -i "s/PLACE_HOLDER/123456/g" db_config.yaml
kustomize build . > kustomizeconfig.yaml
kubectl apply -f kustomizeconfig.yaml -n foo
And the db_config file will be:
apiVersion: v1
data:
  dbp.conf: |-
    {
      "db_properties": {
        "db.driver": "com.mysql.jdbc.Driver",
        "db.password": "PLACE_HOLDER",
        "db.user": "root"
      }
    }
kind: ConfigMap
metadata:
  labels: {}
  name: dbcm
NB: This should run in the pipeline, where the config file is cloned from the repo, so the real file won't be updated.