Hello, I am trying to mount a Kubernetes ConfigMap inside the cert-manager Helm chart. The Helm chart is configured with a values.yaml file.
The ConfigMap I need is already defined, with the corresponding data, in the same namespace as my Helm chart.
resource "helm_release" "certmanager" {
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
namespace = kubernetes_namespace.certmanager.metadata.0.name
version = local.helm_cert_manager_version
values = [
file("./config/cert-manager-values.yml")
]
}
# !! ConfigMap is defined with Terraform !! #
resource "kubernetes_config_map" "example" {
  metadata {
    name      = "test-config"
    namespace = kubernetes_namespace.certmanager.metadata.0.name
  }

  data = {
    "test_ca" = data.google_secret_manager_secret_version.test_crt.secret_data
  }
}
The data of the ConfigMap should be mounted at the path /etc/ssl/certs inside my Helm chart.
I think the section below is the right spot to mount the data?
...
volumes: []
volumeMounts: []
...
Do you have any idea how to mount that ConfigMap at /etc/ssl/certs within the cert-manager chart?
Based on your question, there are two things you could do:
Pre-populate the ./config/cert-manager-values.yml file with the values you want.
Use the templatefile [1] built-in function and pass the values dynamically.
In the first case, the changes to the file would probably have to be as follows:
...
volumes:
  - name: config-map-volume
    configMap:
      name: test-config
volumeMounts:
  - name: config-map-volume
    mountPath: /etc/ssl/certs
...
Make sure the indentation is correct since this is YAML. In the second case, you could do something like this in the helm_release resource:
resource "helm_release" "certmanager" {
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
namespace = kubernetes_namespace.certmanager.metadata.0.name
version = local.helm_cert_manager_version
values = [templatefile("./config/cert-manager-values.yml", {
config_map_name = kubernetes_config_map.example.metadata[0].name
volume_mount_path = "/etc/ssl/certs"
})]
}
In this case, you would also have to use template variables as placeholders inside of the cert-manager-values.yml file:
...
volumes:
  - name: config-map-volume
    configMap:
      name: ${config_map_name}
volumeMounts:
  - name: config-map-volume
    mountPath: ${volume_mount_path}
...
Note that the first option might not work as expected because Terraform creates resources in parallel wherever it can: if the ConfigMap is not created before the chart is applied, the release might fail. The second option avoids this, since referencing kubernetes_config_map.example creates an implicit dependency on the ConfigMap.
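If you stick with the first option, one way to avoid that race is an explicit depends_on on the release. Here is a minimal sketch based on the resources above (not part of the original answer):
resource "helm_release" "certmanager" {
  name       = "cert-manager"
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"
  namespace  = kubernetes_namespace.certmanager.metadata.0.name
  version    = local.helm_cert_manager_version

  values = [
    file("./config/cert-manager-values.yml")
  ]

  # Do not install the chart before the ConfigMap exists.
  depends_on = [kubernetes_config_map.example]
}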
[1] https://www.terraform.io/language/functions/templatefile
Related
I have a helm release file test-helm-release.yaml as given below.
apiVersion: helm.toolkit.gitops.io/v2beta1
kind: HelmRelease
metadata:
  name: "test"
  namespace: "test-system"
spec:
  chart:
    spec:
      chart: "test-environment"
      version: "0.1.10"
  values:
    key1: "value1"
    key2: "value2"
    key3: "value3"
    key4: "value4"
    gitRepository:
      url: https://github.com/test-eng/test.git
    helmRepositories:
      - name: testplatform
        url: https://test-platform/charts
While creating the Helm release, I can pass the values from the above HelmRelease file to the new release using the commands below:
chart=$(yq '.spec.chart.spec.chart' test-helm-release.yaml)
version=$(yq '.spec.chart.spec.version' test-helm-release.yaml)
yq '.spec.values' test-helm-release.yaml | helm upgrade --install --values - --version "$version" --namespace test-system --create-namespace test-platform "helm-repo/$chart"
The above commands work perfectly and I'm able to pass the values to the Helm release using yq. How can I do the same with the Terraform helm_release resource and the github_repository_file data source given below?
data "github_repository_file" "test-platform" {
repository = "test-platform"
branch = "test"
file = "internal/default/test-helm-release.yaml"
}
resource "helm_release" "test-platform" {
name = "test-platform"
repository = "https://test-platform/charts"
chart = "test-environment"
namespace = "test-system"
create_namespace = true
timeout = 800
lifecycle {
ignore_changes = all
}
}
Note
I cannot use "set" because I want to fetch the values from test-helm-release.yaml dynamically. Any idea how I could fetch .spec.values alone, using the templatefile function or a different way?
I'm installing a Helm chart which creates a CRD, and then I want to create an instance of that CRD. What's the correct way to declare a dependency between them so that Terraform doesn't try to create the custom resource until after the Helm chart has finished installing?
new helm.Release(this, "doppler-kubernetes-operator-helm-chart", {
  chart: "doppler-kubernetes-operator",
  name: "doppler",
  repository: "https://helm.doppler.com",
  version: "1.2.0"
})

const dopplerOperatingSystemNamespace = "doppler-operator-system";

// create a secret referenced by the CRD
const dopplerApiServerProjectServiceTokenSecret = new kubernetes.Secret(this, "doppler-api-server-project-service-token", {
  metadata: {
    name: "doppler-api-server-project-service-token",
    namespace: dopplerOperatingSystemNamespace
  },
  data: {
    "serviceToken": "<some secret>"
  }
})

// Create the CRD <------------- how do I get this to depend on the helm chart?
new kubernetes.Manifest(this, "doppler-kubernetes-operator", {
  manifest: {
    apiVersion: "secrets.doppler.com/v1alpha1",
    kind: "DopplerSecret",
    metadata: {
      name: "doppler-secret-default",
      namespace: dopplerOperatingSystemNamespace,
    },
    spec: {
      tokenSecret: {
        name: dopplerApiServerProjectServiceTokenSecret.metadata.name
      },
      managedSecret: {
        name: "doppler-api-server-managed-secret",
        namespace: "default"
      }
    }
  }
})
In this case I would like to only attempt creating doppler-kubernetes-operator after the helm chart has finished installing.
Turns out I'm an idiot. I was looking for dependsOn (which I use with the AWS classes) and IntelliJ wasn't autocompleting it for the kubernetes Manifest, but I guess my cursor was in the wrong position...
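For completeness, a minimal sketch of what that looks like, assuming the same constructs as in the question (the variable name is mine; dependsOn is accepted in every CDKTF resource's configuration):
const dopplerOperatorChart = new helm.Release(this, "doppler-kubernetes-operator-helm-chart", {
  chart: "doppler-kubernetes-operator",
  name: "doppler",
  repository: "https://helm.doppler.com",
  version: "1.2.0"
});

new kubernetes.Manifest(this, "doppler-kubernetes-operator", {
  // Wait for the chart (and the CRD it installs) before applying this manifest.
  dependsOn: [dopplerOperatorChart],
  manifest: {
    apiVersion: "secrets.doppler.com/v1alpha1",
    kind: "DopplerSecret",
    // ...rest of the manifest exactly as in the question...
  }
});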
How to patch "db.password" in the following ConfigMap with kustomize?
configmap:
apiVersion: v1
data:
  dbp.conf: |-
    {
      "db_properties": {
        "db.driver": "com.mysql.jdbc.Driver",
        "db.password": "123456",
        "db.user": "root"
      }
    }
kind: ConfigMap
metadata:
  labels: {}
  name: dbcm
You can create a new file with the updated values and use kubectl replace together with kubectl create:
kubectl create configmap NAME --from-file file.name -o yaml --dry-run | kubectl replace -f -
Alternatively, create a placeholder in your file and replace it with the real data while applying kustomize. Your script would look like this:
#!/bin/bash
sed -i "s/PLACE-HOLDER/123456/g" db_config.yaml
kustomize build . > kustomizeconfig.yaml
kubectl apply -f kustomizeconfig.yaml -n foo
And the db_config.yaml file will be:
apiVersion: v1
data:
  dbp.conf: |-
    {
      "db_properties": {
        "db.driver": "com.mysql.jdbc.Driver",
        "db.password": "PLACE-HOLDER",
        "db.user": "root"
      }
    }
kind: ConfigMap
metadata:
  labels: {}
  name: dbcm
NB: This should run in the pipeline, where the config file is cloned from the repo, so the real file in the repository won't be modified.
I am trying to integrate Kubewatch into a Kubernetes cluster. The cluster was built using Terraform's Kubernetes provider. How do I convert the data section of this ConfigMap YAML file to Terraform?
YAML
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubewatch
data:
  .kubewatch.yaml: |
    namespace: "default"
    handler:
      slack:
        token: xoxb-OUR-BOT-TOKEN
        channel: kubernetes-events
    resource:
      deployment: true
      replicationcontroller: false
      replicaset: false
      daemonset: false
      services: true
      pod: true
      secret: true
      configmap: false
While I haven't done very complex config maps, this should get you pretty close. The data argument takes a map of strings, so the nested YAML document goes in as a single (heredoc) string value:
resource "kubernetes_config_map" "example" {
metadata {
name = "kubewatch"
}
data {
namespace = "default"
handler {
slack {
token = "xoxb-OUR-BOT-TOKEN"
channel = "kubernetes-events"
}
}
resource {
deployment = true
replicationcontroller = false
replicaset = false
daemonset = false
services = true
pod = true
secret = true
configmap = false
}
api_host = "myhost:443"
db_host = "dbhost:5432"
}
}
I am trying to write a Kubernetes CRD validation schema. I have an array (vc) of structures, and one of the fields in those structures is required (the name field).
I tried looking through various examples, but it doesn't generate an error when name is missing. Any suggestions on what is wrong?
vc:
  type: array
  items:
    type: object
    properties:
      name:
        type: string
      address:
        type: string
    required:
      - name
If you are on v1.8, you will need to enable the CustomResourceValidation feature gate to use the validation feature. This can be done by using the following flag on kube-apiserver:
--feature-gates=CustomResourceValidation=true
Here is an example of it working (I tested this on v1.12, but this should work on earlier versions as well):
The CRD:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.stable.example.com
spec:
  group: stable.example.com
  versions:
    - name: v1
      served: true
      storage: true
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            vc:
              type: array
              items:
                type: object
                properties:
                  name:
                    type: string
                  address:
                    type: string
                required:
                  - name
The custom resource:
apiVersion: "stable.example.com/v1"
kind: Foo
metadata:
name: new-foo
spec:
vc:
- address: "bar"
Create the CRD.
kubectl create -f crd.yaml
customresourcedefinition.apiextensions.k8s.io/foos.stable.example.com created
Get the CRD and check if the validation field exists in the output. If it doesn't, you probably don't have the feature gate turned on.
kubectl get crd foos.stable.example.com -oyaml
Try to create the custom resource. This should fail with:
kubectl create -f cr-validation.yaml
The Foo "new-foo" is invalid: []: Invalid value: map[string]interface {}{"metadata":map[string]interface {}{"creationTimestamp":"2018-11-18T19:45:23Z", "generation":1, "uid":"7d7f8f0b-eb6a-11e8-b861-54e1ad9de0be", "name":"new-foo", "namespace":"default"}, "spec":map[string]interface {}{"vc":[]interface {}{map[string]interface {}{"address":"bar"}}}, "apiVersion":"stable.example.com/v1", "kind":"Foo"}: validation failure list:
spec.vc.name in body is required