How to patch configmap in json file with kustomize? - kubernetes

How to patch "db.password" in the following cm with kustomize?
ConfigMap:
apiVersion: v1
data:
  dbp.conf: |-
    {
      "db_properties": {
        "db.driver": "com.mysql.jdbc.Driver",
        "db.password": "123456",
        "db.user": "root"
      }
    }
kind: ConfigMap
metadata:
  labels: {}
  name: dbcm

You can create a new file with the updated values and combine kubectl create with kubectl replace:
kubectl create configmap NAME --from-file=file.name -o yaml --dry-run=client | kubectl replace -f -
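For the ConfigMap from the question, that would look roughly like this (assuming the JSON with the new password has been saved to a local dbp.conf file):
kubectl create configmap dbcm --from-file=dbp.conf -o yaml --dry-run=client | kubectl replace -f -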

Alternatively, create a placeholder in your file and replace it with the real value while applying kustomize.
Your code will look like this:
#!/bin/bash
sed -i "s/PLACE-HOLDER/123456/g" db_config.yaml
kustomize build . > kustomizeconfig.yaml
kubectl apply -f kustomizeconfig.yaml -n foo
And db_config.yaml will be:
apiVersion: v1
data:
  dbp.conf: |-
    {
      "db_properties": {
        "db.driver": "com.mysql.jdbc.Driver",
        "db.password": "PLACE-HOLDER",
        "db.user": "root"
      }
    }
kind: ConfigMap
metadata:
  labels: {}
  name: dbcm
NB: This should run in a pipeline, where the config file is cloned fresh from the repo, so the real file in the repository is never modified.
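For completeness, a kustomize-native sketch (my addition, not part of the original answer): since dbp.conf is a single opaque string, kustomize cannot reach the db.password field inside it; a strategic-merge patch can only replace the whole key. Assuming kustomize v4+ and the file names above, the kustomization.yaml could look like this:
# kustomization.yaml -- replaces the entire dbp.conf key of the dbcm ConfigMap
resources:
- db_config.yaml
patches:
- patch: |-
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: dbcm
    data:
      dbp.conf: |-
        {
          "db_properties": {
            "db.driver": "com.mysql.jdbc.Driver",
            "db.password": "123456",
            "db.user": "root"
          }
        }
This avoids mutating db_config.yaml with sed, at the cost of repeating the JSON block in the patch.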

Related

Pass values from Helmrelease to Terraform

I have a helm release file test-helm-release.yaml as given below.
apiVersion: helm.toolkit.gitops.io/v2beta1
kind: HelmRelease
metadata:
  name: "test"
  namespace: "test-system"
spec:
  chart:
    spec:
      chart: "test-environment"
      version: "0.1.10"
  values:
    key1: "value1"
    key2: "value2"
    key3: "value3"
    key4: "value4"
  gitRepository:
    url: https://github.com/test-eng/test.git
  helmRepositories:
  - name: testplatform
    url: https://test-platform/charts
When creating the Helm release, I can pass the values from the HelmRelease above to the new release using the command below:
chart=$(yq '.spec.chart.spec.chart' test-helm-release.yaml)
version=$(yq '.spec.chart.spec.version' test-helm-release.yaml)
yq '.spec.values' test-helm-release.yaml | helm upgrade --install --values - --version "$version" --namespace test-system --create-namespace test-platform "helm-repo/$chart"
The above code works perfectly and I'm able to pass the values to the Helm release using yq. How can I do the same with the Terraform helm_release resource and the github_repository_file data source given below?
data "github_repository_file" "test-platform" {
repository = "test-platform"
branch = "test"
file = "internal/default/test-helm-release.yaml"
}
resource "helm_release" "test-platform" {
name = "test-platform"
repository = "https://test-platform/charts"
chart = "test-environment"
namespace = "test-system"
create_namespace = true
timeout = 800
lifecycle {
ignore_changes = all
}
}
Note: I cannot use "set" because I want to fetch the values from test-helm-release.yaml dynamically. Any idea how I could fetch .spec.values alone using the templatefile function or a different way?

Pod not able to read ConfigMap despite Role and RoleBinding being in place

I would like to permit a Kubernetes pod in namespace my-namespace to access configmap/config in the same namespace. For this purpose I have defined the following Role and RoleBinding:
apiVersion: v1
kind: List
items:
- kind: Role
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: config
    namespace: my-namespace
  rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["config"]
    verbs: ["get"]
- kind: RoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: config
    namespace: my-namespace
  subjects:
  - kind: ServiceAccount
    name: default
    namespace: my-namespace
  roleRef:
    kind: Role
    name: config
    apiGroup: rbac.authorization.k8s.io
Yet still, the pod runs into the following error:
configmaps \"config\" is forbidden: User \"system:serviceaccount:my-namespace:default\"
cannot get resource \"configmaps\" in API group \"\" in the namespace \"my-namespace\"
What am I missing? I guess it must be a simple thing, which a second pair of eyes may spot immediately.
UPDATE: Here is a relevant fragment of my client code, which uses client-go:
cfg, err := rest.InClusterConfig()
if err != nil {
    logger.Fatalf("cannot obtain Kubernetes config: %v", err)
}
k8sClient, err := k8s.NewForConfig(cfg)
if err != nil {
    logger.Fatalf("cannot create Clientset")
}
configMapClient := k8sClient.CoreV1().ConfigMaps(Namespace)
configMap, err := configMapClient.Get(ctx, "config", metav1.GetOptions{})
if err != nil {
    logger.Fatalf("cannot obtain configmap: %v", err) // error occurs here
}
I don't see anything in particular wrong with your Role or
Rolebinding, and in fact when I deploy them into my environment they
seem to work as intended. You haven't provided a complete reproducer in your question, so here's how I'm testing things out:
I started by creating a namespace my-namespace
I have the following in kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
commonLabels:
  app: rbactest
resources:
- rbac.yaml
- deployment.yaml
generatorOptions:
  disableNameSuffixHash: true
configMapGenerator:
- name: config
  literals:
  - foo=bar
  - this=that
In rbac.yaml I have the Role and RoleBinding from your question (without modification).
In deployment.yaml I have:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cli
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: cli
        image: quay.io/openshift/origin-cli
        command:
        - sleep
        - inf
With this in place, I deploy everything by running:
kubectl apply -k .
And then once the Pod is up and running, this works:
$ kubectl exec -n my-namespace deploy/cli -- kubectl get cm config
NAME     DATA   AGE
config   2      3m50s
Attempts to access other ConfigMaps will not work, as expected:
$ kubectl exec deploy/cli -- kubectl get cm foo
Error from server (Forbidden): configmaps "foo" is forbidden: User "system:serviceaccount:my-namespace:default" cannot get resource "configmaps" in API group "" in the namespace "my-namespace"
command terminated with exit code 1
If you're seeing different behavior, it would be interesting to figure out where your process differs from what I've done.
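One quick way to compare (my suggestion, not part of the original exchange) is to ask the API server directly whether the service account holds the permission:
kubectl auth can-i get configmaps/config \
  --as=system:serviceaccount:my-namespace:default -n my-namespace
If that prints "no", the Role or RoleBinding isn't what you think it is; if it prints "yes", the problem lies elsewhere, for example the pod running under a different service account or namespace.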
Your Go code looks fine also; I'm able to run this in the "cli" container:
package main

import (
    "context"
    "fmt"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    config, err := rest.InClusterConfig()
    if err != nil {
        panic(err.Error())
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err.Error())
    }
    namespace := "my-namespace"
    configMapClient := clientset.CoreV1().ConfigMaps(namespace)
    configMap, err := configMapClient.Get(context.TODO(), "config", metav1.GetOptions{})
    if err != nil {
        log.Fatalf("cannot obtain configmap: %v", err)
    }
    fmt.Printf("%+v\n", configMap)
}
If I compile the above, kubectl cp it into the container and run it, I get as output:
&ConfigMap{ObjectMeta:{config my-namespace 2ef6f031-7870-41f1-b091-49ab360b98da 2926 0 2022-10-15 03:22:34 +0000 UTC <nil> <nil> map[app:rbactest] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","data":{"foo":"bar","this":"that"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app":"rbactest"},"name":"config","namespace":"my-namespace"}}
] [] [] [{kubectl-client-side-apply Update v1 2022-10-15 03:22:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:foo":{},"f:this":{}},"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:app":{}}}} }]},Data:map[string]string{foo: bar,this: that,},BinaryData:map[string][]byte{},Immutable:nil,}
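For reference, the compile-and-copy step looks roughly like this (a sketch; the binary name and paths are my own choices, and GOARCH must match the node architecture):
POD=$(kubectl get pod -n my-namespace -l app=rbactest -o jsonpath='{.items[0].metadata.name}')
GOOS=linux GOARCH=amd64 go build -o cmtest .
kubectl cp cmtest "my-namespace/$POD:/tmp/cmtest"
kubectl exec -n my-namespace "$POD" -- /tmp/cmtest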

imagePullSecrets mysteriously get `registry-` prefix

I am trying to deploy the superset Helm chart with a customized image. There's no option to specify imagePullSecrets for the chart. Using k8s on DigitalOcean. I linked the repository, and tested it using a basic deploy, and it "just works". That is to say, the pods get the correct value for imagePullSecret, and pulling just works.
However, when trying to install the Helm chart, the used imagePullSecret mysteriously gets a registry- prefix (there's already a -registry suffix, so it becomes registry-xxx-registry when it should just be xxx-registry). The values on the default service account are correct.
To illustrate, default service accounts for both namespaces:
$ kubectl get sa default -n test -o yaml
apiVersion: v1
imagePullSecrets:
- name: xxx-registry
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-04-14T14:26:41Z"
  name: default
  namespace: test
  resourceVersion: "13125"
  uid: xxx-xxx
secrets:
- name: default-token-9ggrm
$ kubectl get sa default -n superset -o yaml
apiVersion: v1
imagePullSecrets:
- name: xxx-registry
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-04-14T14:19:47Z"
  name: default
  namespace: superset
  resourceVersion: "12079"
  uid: xxx-xxx
secrets:
- name: default-token-wkdhv
LGTM, but after trying to install the helm chart (which fails because of registry auth), I can see that the wrong secret is set on the pods:
$ kubectl get -n superset pods -o json | jq '.items[] | {name: .spec.containers[0].name, sa: .spec.serviceAccount, secret: .spec.imagePullSecrets}'
{
  "name": "superset",
  "sa": "default",
  "secret": [
    {
      "name": "registry-xxx-registry"
    }
  ]
}
{
  "name": "xxx-superset-postgresql",
  "sa": "default",
  "secret": [
    {
      "name": "xxx-registry"
    }
  ]
}
{
  "name": "redis",
  "sa": "xxx-superset-redis",
  "secret": null
}
{
  "name": "superset",
  "sa": "default",
  "secret": [
    {
      "name": "registry-xxx-registry"
    }
  ]
}
{
  "name": "superset-init-db",
  "sa": "default",
  "secret": [
    {
      "name": "registry-xxx-registry"
    }
  ]
}
In the test namespace the secret name is simply correct. What's extra interesting is that postgres DOES have the correct secret name, and that comes in via a Helm dependency. So it seems like an issue in the superset Helm chart is causing this, yet no imagePullSecrets values are set anywhere in its templates. And as you can see above, the pods are using the default service account.
I have already tried destroying and recreating the whole cluster, but the problem recurs.
I have tried version 0.5.10 (latest) of the Helm chart and version 0.3.5, both result in the same issue.
https://github.com/apache/superset/tree/dafc841e223c0f01092a2e116888a3304142e1b8/helm/superset
https://github.com/apache/superset/tree/1.3/helm/superset
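One way to hunt down where the prefix gets injected (a debugging sketch, my suggestion rather than part of the question): render the chart locally with the same values and search the output for the secret name, which reveals exactly which manifest sets it:
helm template superset ./helm/superset -f my-values.yaml | grep -B5 'registry-'
Here my-values.yaml stands for whatever values file you pass at install time.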

Set ConfigMap values directly from values.yaml in helm

I am trying to build ConfigMap data directly from values.yaml in helm
My values.yaml:
myconfiguration: |-
  key1: >
  { "Project" : "This is config1 test"
  }
  key2 : >
  {
  "Project" : "This is config2 test"
  }
And the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: poc-secrets-configmap-{{ .Release.Namespace }}
data:
{{ .Values.myconfiguration | indent 1 }}
But the data is empty when checked on the pod
Name:         poc-secrets-configmap-xxx
Namespace:    xxx
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: poc-secret-xxx
              meta.helm.sh/release-namespace: xxx

Data
====
Events:  <none>
Can anyone suggest a fix?
You are missing indentation in your values.yaml file; check YAML Multiline:
myconfiguration: |-
  key1: >
    { "Project" : "This is config1 test"
    }
  key2: >
    {
      "Project" : "This is config2 test"
    }
Also, the suggested syntax for YAML files is to use 2 spaces for indentation, so you may want to change your configmap to {{.Values.myconfiguration | indent 2}}
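With both fixes applied, the rendered ConfigMap should come out roughly like this (a sketch of the expected output, not taken from the thread):
apiVersion: v1
kind: ConfigMap
metadata:
  name: poc-secrets-configmap-xxx
data:
  key1: >
    { "Project" : "This is config1 test"
    }
  key2: >
    {
      "Project" : "This is config2 test"
    }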

Helm: howto use ".Files.Get" to import a json into a config map

I'm trying to import a JSON file into a ConfigMap, but the map doesn't contain the file.
My ConfigMap template:
apiVersion: v1
kind: ConfigMap
metadata:
  name: serilog-configmap
data:
  serilog.json: |-
{{ .Files.Get "serilog.json" | indent 4 }}
serilog.json is in the root path of the project; there is a sub-directory containing the chart and the templates (from helm create).
I also tried "../../serilog.json" and the full path as the filename, but it always ends with the same result when I run helm install --debug --dry-run:
---
# Source: hellowebapi/templates/serilogConfigMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: serilog-configmap
data:
  serilog.json: |-
---
I would expect:
---
# Source: hellowebapi/templates/serilogConfigMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: serilog-configmap
data:
  serilog.json: |-
    {
      "Serilog": {
        "Using": [
          "Serilog.Sinks.ColoredConsole"
        ],
        ...
---
Can anybody tell me where I'm making my mistake?
Try this:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: serilog-configmap
data:
  serilog.json: |-
    {{- $.Files.Get "configurations/serilog.json" | nindent 6 -}}
with a relative path for the json file (hellowebapi/configurations/serilog.json)
It will produce:
---
# Source: serilog/templates/test.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: serilog-configmap
data:
  serilog.json: |-
      {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
          "annotations": {
Your json file should be in your chart directory.
See Accessing Files Inside Templates
λ ls
Chart.yaml  charts/  serilog.json  templates/  values.yaml
λ helm template .
---
# Source: templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: serilog-configmap
data:
  serilog.json: |-
    {
      "Serilog": {
        "Using": [
          "Serilog.Sinks.ColoredConsole"
        ]
      }
    }