I have the following secrets.yaml in the templates directory of my Helm chart:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  USER_NAME: YWRtaW4=
  PASSWORD: MWYyZDFlMmU2N2Rm
I need to create the same Secret in several namespaces, for example test1, test2, test3, and test4. How do I specify the namespaces so that the same Secret is created in each of them?
You can set the namespace in the metadata section, like this:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: test1
type: Opaque
data:
  USER_NAME: YWRtaW4=
  PASSWORD: MWYyZDFlMmU2N2Rm
You can also use a loop in your Helm template to create one Secret definition per namespace.
Update:
# values.yaml
namespaces:
  - test1
  - test2
# templates/secrets.tpl
{{- range .Values.namespaces }}
---
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: {{ . | quote }}
type: Opaque
data:
  USER_NAME: YWRtaW4=
  PASSWORD: MWYyZDFlMmU2N2Rm
{{- end }}
### output ###
---
# Source: base/templates/secrets.tpl
---
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: "test1"
type: Opaque
data:
  USER_NAME: YWRtaW4=
  PASSWORD: MWYyZDFlMmU2N2Rm
---
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: "test2"
type: Opaque
data:
  USER_NAME: YWRtaW4=
  PASSWORD: MWYyZDFlMmU2N2Rm
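As a usage sketch (the release name and the extra namespaces here are assumptions), the namespace list can also be overridden at install time without editing values.yaml:
# Render locally first to check the generated Secrets
helm template . --set 'namespaces={test1,test2,test3,test4}'
# Then install; the release name "base" matches the Source path above
helm install base . --set 'namespaces={test1,test2,test3,test4}'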
My ExternalSecret resource references a HashiCorp Vault key-value secret that stores a Google service account (JSON).
The ExternalSecret will create a Secret of type kubernetes.io/dockerconfigjson.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: gcr-external
  namespace: vault-dev
spec:
  refreshInterval:
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: gcr
    creationPolicy: Owner
    template:
      type: kubernetes.io/dockerconfigjson
      data:
        .dockerconfigjson: '{"auths": {"eu.gcr.io": {"username": "_json_key", "password": {{ .data }} }}}'
  data:
    - secretKey: data
      remoteRef:
        key: gcp/sa
However, the .dockerconfigjson string is not picking up the data secretKey as it is referenced now with "password": {{ .data }}.
What's the correct way to reference it?
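One plausible fix, offered as a sketch rather than a confirmed answer: the external-secrets template engine exposes sprig functions, so piping the value through toJson (an assumption about the intended fix, not part of the original post) would escape the service-account JSON into a valid JSON string value:
  target:
    name: gcr
    creationPolicy: Owner
    template:
      type: kubernetes.io/dockerconfigjson
      data:
        # toJson escapes the quotes and newlines in the fetched service-account JSON
        .dockerconfigjson: '{"auths": {"eu.gcr.io": {"username": "_json_key", "password": {{ .data | toJson }} }}}'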
I want to patch (overwrite) a list in a Kubernetes manifest with Kustomize.
I am using the patchesStrategicMerge method.
When I patch parameters that are not in a list, patching works as expected: only the parameters addressed in patch.yaml are replaced, and the rest is untouched.
When I patch a list, the whole list is replaced.
How can I replace only specific items in the list and leave the rest of the items untouched?
I found these two resources:
https://github.com/kubernetes-sigs/kustomize/issues/581
https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md
but wasn't able to derive the desired solution from them.
Example code:
orig-file.yaml
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: alertmanager-slack-config
  namespace: system-namespace
spec:
  test: test
  other: other-stuff
  receivers:
    - name: default
      slackConfigs:
        - name: slack
          username: test-user
          channel: "#alerts"
          sendResolved: true
          apiURL:
            name: slack-webhook-url
            key: address
patch.yaml:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: alertmanager-slack-config
  namespace: system-namespace
spec:
  test: brase-yourself
  receivers:
    - name: default
      slackConfigs:
        - name: slack
          username: Karl
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - orig-file.yaml
patchesStrategicMerge:
  - patch.yaml
What I get:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: alertmanager-slack-config
  namespace: system-namespace
spec:
  other: other-stuff
  receivers:
    - name: default
      slackConfigs:
        - name: slack
          username: Karl
  test: brase-yourself
What I want:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: alertmanager-slack-config
  namespace: system-namespace
spec:
  other: other-stuff
  receivers:
    - name: default
      slackConfigs:
        - name: slack
          username: Karl
          channel: "#alerts"
          sendResolved: true
          apiURL:
            name: slack-webhook-url
            key: address
  test: brase-yourself
What you can do is use a JSON patch instead of patchesStrategicMerge. Strategic merge cannot merge lists inside custom resources, because CRDs carry no merge-key metadata, so the whole list gets replaced; a JSON patch addresses individual list items by index instead. In your case:
cat <<EOF >./kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - orig-file.yaml
patches:
  - path: patch.yaml
    target:
      group: monitoring.coreos.com
      version: v1alpha1
      kind: AlertmanagerConfig
      name: alertmanager-slack-config
EOF
patch:
cat <<EOF >./patch.yaml
- op: replace
  path: /spec/receivers/0/slackConfigs/0/username
  value: Karl
EOF
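To verify the result (run from the directory holding the files created above):
kubectl kustomize .
# or, with the standalone binary
kustomize build .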
I'm trying to import a JSON file into a ConfigMap, but the rendered map doesn't contain the file.
My ConfigMap template:
apiVersion: v1
kind: ConfigMap
metadata:
  name: serilog-configmap
data:
  serilog.json: |-
{{ .Files.Get "serilog.json" | indent 4 }}
serilog.json is in the root path of the project; the chart and its templates (from helm create) live in a subdirectory.
I also tried "../../serilog.json" and the full path as the filename, but it always ends with the same result when I run helm install --debug --dry-run:
---
# Source: hellowebapi/templates/serilogConfigMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: serilog-configmap
data:
  serilog.json: |-
---
I would expect:
---
# Source: hellowebapi/templates/serilogConfigMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: serilog-configmap
data:
  serilog.json: |-
    {
      "Serilog": {
        "Using": [
          "Serilog.Sinks.ColoredConsole"
        ],
    ...
---
Can anybody tell me where my mistake is?
Try this:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: serilog-configmap
data:
  serilog.json: |-
    {{- $.Files.Get "configurations/serilog.json" | nindent 6 -}}
with a path for the JSON file relative to the chart directory (hellowebapi/configurations/serilog.json). nindent prepends a newline and indents every line, which keeps the block scalar valid.
It will produce:
---
# Source: serilog/templates/test.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: serilog-configmap
data:
  serilog.json: |-
    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {
        "annotations": {
Your JSON file should be in your chart directory.
See Accessing Files Inside Templates
λ ls
Chart.yaml charts/ serilog.json templates/ values.yaml
λ helm template .
---
# Source: templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: serilog-configmap
data:
  serilog.json: |-
    {
      "Serilog": {
        "Using": [
          "Serilog.Sinks.ColoredConsole"
        ]
      }
    }
Below is my Python script to update a secret so I can deploy to Kubernetes using kubectl. It works fine. But I want to create a Kubernetes cron job that runs a Docker container to update the secret from within the cluster. How do I do that? The AWS token lasts only 12 hours, so I have to regenerate it from within the cluster so that image pulls keep working if a pod crashes, etc.
Is there an internal API I have access to within Kubernetes?
cmd = """aws ecr get-login --no-include-email --region us-east-1 > aws_token.txt"""
run_bash(cmd)
f = open('aws_token.txt').readlines()
TOKEN = f[0].split(' ')[5]
SECRET_NAME = "%s-ecr-registry" % (self.region)
cmd = """kubectl delete secret --ignore-not-found %s -n %s""" % (SECRET_NAME,namespace)
print (cmd)
run_bash(cmd)
cmd = """kubectl create secret docker-registry %s --docker-server=https://%s.dkr.ecr.%s.amazonaws.com --docker-username=AWS --docker-password="%s" --docker-email="david.montgomery#gmail.com" -n %s """ % (SECRET_NAME,self.aws_account_id,self.region,TOKEN,namespace)
print (cmd)
run_bash(cmd)
cmd = "kubectl describe secrets/%s-ecr-registry -n %s" % (self.region,namespace)
print (cmd)
run_bash(cmd)
cmd = "kubectl get secret %s-ecr-registry -o yaml -n %s" % (self.region,namespace)
print (cmd)
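As an aside: newer AWS CLI releases replace aws ecr get-login with get-login-password, which avoids parsing the login command. A minimal shell equivalent of the steps above (the variable names are placeholders):
TOKEN=$(aws ecr get-login-password --region us-east-1)
kubectl delete secret --ignore-not-found "${SECRET_NAME}" -n "${NAMESPACE}"
kubectl create secret docker-registry "${SECRET_NAME}" \
  --docker-server="https://${AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com" \
  --docker-username=AWS \
  --docker-password="${TOKEN}" \
  -n "${NAMESPACE}"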
As it happens, I literally just got done doing this.
Below is everything you need to set up a CronJob that rolls your AWS Docker login token and re-logs in to ECR every 6 hours. Just replace the {{ variables }} with your own actual values.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: {{ namespace }}
  name: ecr-cred-helper
rules:
  - apiGroups: [""]
    resources:
      - secrets
      - serviceaccounts
      - serviceaccounts/token
    verbs:
      - 'delete'
      - 'create'
      - 'patch'
      - 'get'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: ecr-cred-helper
  namespace: {{ namespace }}
subjects:
  - kind: ServiceAccount
    name: sa-ecr-cred-helper
    namespace: {{ namespace }}
roleRef:
  kind: Role
  name: ecr-cred-helper
  apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-ecr-cred-helper
  namespace: {{ namespace }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  annotations:
  name: ecr-cred-helper
  namespace: {{ namespace }}
spec:
  concurrencyPolicy: Allow
  failedJobsHistoryLimit: 1
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          serviceAccountName: sa-ecr-cred-helper
          containers:
            - command:
                - /bin/sh
                - -c
                - |-
                  TOKEN=`aws ecr get-login --region ${REGION} --registry-ids ${ACCOUNT} | cut -d' ' -f6`
                  echo "ENV variables setup done."
                  kubectl delete secret -n {{ namespace }} --ignore-not-found $SECRET_NAME
                  kubectl create secret -n {{ namespace }} docker-registry $SECRET_NAME \
                    --docker-server=https://{{ ECR_REPOSITORY_URL }} \
                    --docker-username=AWS \
                    --docker-password="${TOKEN}" \
                    --docker-email="${EMAIL}"
                  echo "Secret created by name. $SECRET_NAME"
                  kubectl patch serviceaccount default -p '{"imagePullSecrets":[{"name":"'$SECRET_NAME'"}]}' -n {{ namespace }}
                  echo "All done."
              env:
                - name: AWS_DEFAULT_REGION
                  value: eu-west-1
                - name: AWS_SECRET_ACCESS_KEY
                  value: '{{ AWS_SECRET_ACCESS_KEY }}'
                - name: AWS_ACCESS_KEY_ID
                  value: '{{ AWS_ACCESS_KEY_ID }}'
                - name: ACCOUNT
                  value: '{{ AWS_ACCOUNT_ID }}'
                - name: SECRET_NAME
                  value: '{{ imagePullSecret }}'
                - name: REGION
                  value: 'eu-west-1'
                - name: EMAIL
                  value: '{{ ANY_EMAIL }}'
              image: odaniait/aws-kubectl:latest
              imagePullPolicy: IfNotPresent
              name: ecr-cred-helper
              resources: {}
              securityContext:
                capabilities: {}
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
          dnsPolicy: Default
          hostNetwork: true
          restartPolicy: Never
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
  schedule: 0 */6 * * *
  successfulJobsHistoryLimit: 3
  suspend: false
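A deployment note (the file name is an assumption): save the manifest as ecr-cred-helper.yaml, fill in the {{ variables }}, and apply it. On current clusters, bump the beta API versions first: rbac.authorization.k8s.io/v1beta1 was removed in Kubernetes 1.22 and batch/v1beta1 CronJob in 1.25, so use rbac.authorization.k8s.io/v1 and batch/v1 instead.
kubectl apply -f ecr-cred-helper.yaml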
I'm adding my solution for copying secrets between namespaces using a CronJob, because this was the Stack Overflow answer I was given when searching for secret copying with a CronJob.
In the source namespace, you need to define a Role, RoleBinding, and ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-user-user-secret-service-account
  namespace: source-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: demo-user-role
  namespace: source-namespace
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    # Secrets you want to have access to in your namespace
    resourceNames: ["demo-user"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: demo-user-cron-role-binding
  namespace: source-namespace
subjects:
  - kind: ServiceAccount
    name: demo-user-user-secret-service-account
    namespace: source-namespace
roleRef:
  kind: Role
  name: demo-user-role
  apiGroup: ""
and the CronJob definition will look like this:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: demo-user-user-secret-copy-cronjob
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 5
  successfulJobsHistoryLimit: 3
  startingDeadlineSeconds: 10
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: demo-user-user-secret-copy-cronjob
              image: bitnami/kubectl:1.25.4-debian-11-r6
              imagePullPolicy: IfNotPresent
              command:
                - "/bin/bash"
                - "-c"
                - "kubectl -n source-namespace get secret demo-user -o json | \
                   jq 'del(.metadata.creationTimestamp, .metadata.uid, .metadata.resourceVersion, .metadata.ownerReferences, .metadata.namespace)' > /tmp/demo-user-secret.json && \
                   kubectl apply --namespace target-namespace -f /tmp/demo-user-secret.json"
              # securityContext placed at container level; privileged,
              # allowPrivilegeEscalation and capabilities are container fields
              securityContext:
                privileged: false
                allowPrivilegeEscalation: true
                readOnlyRootFilesystem: true
                runAsNonRoot: true
                capabilities:
                  drop: ["all"]
          restartPolicy: Never
          serviceAccountName: demo-user-user-secret-service-account
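To trigger the copy once without waiting for the schedule (a quick check; the job name is arbitrary, and the first command must run in the CronJob's namespace):
kubectl create job demo-user-copy-test --from=cronjob/demo-user-user-secret-copy-cronjob
kubectl -n target-namespace get secret demo-user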
In the target namespace you also need a Role and RoleBinding so that the CronJob in the source namespace can copy over the secrets:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: target-namespace
  name: demo-user-role
rules:
  - apiGroups: [""]
    resources:
      - secrets
    verbs:
      - 'list'
      - 'delete'
      - 'create'
      - 'patch'
      - 'get'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: demo-user-role-binding
  namespace: target-namespace
subjects:
  - kind: ServiceAccount
    name: demo-user-user-secret-service-account
    namespace: source-namespace
roleRef:
  kind: Role
  name: demo-user-role
  apiGroup: ""
In your target-namespace Deployment you can then read the secrets as regular files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 1
  ...
    spec:
      containers:
        - name: my-app
          image: [image-name]
          volumeMounts:
            - name: your-secret
              mountPath: /opt/your-secret
              readOnly: true
      volumes:
        - name: your-secret
          secret:
            secretName: demo-user
            items:
              - key: ca.crt
                path: ca.crt
              - key: user.crt
                path: user.crt
              - key: user.key
                path: user.key
              - key: user.p12
                path: user.p12
              - key: user.password
                path: user.password
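To confirm the files landed in the container (a usage sketch, reusing the names above):
kubectl -n target-namespace exec deploy/my-deployment -- ls /opt/your-secret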
At present I am creating a ConfigMap from the file config.json by executing:
kubectl create configmap jksconfig --from-file=config.json
I want the ConfigMap to be created as part of the deployment instead, and tried this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: jksconfig
data:
  config.json: |-
{{ .Files.Get "config.json" | indent 4 }}
But it doesn't seem to work. What should go into configmap.yaml so that the same ConfigMap is created?
---UPDATE---
When I do a helm install dry run:
# Source: mychartv2/templates/jks-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: jksconfig
data:
  config.json: |
Note: I am using Minikube as my Kubernetes cluster.
Your config.json file should be inside your mychart/ directory, not inside mychart/templates.
Chart Template Guide
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  config.json: |-
{{ .Files.Get "config.json" | indent 4 }}
config.json
{
  "val": "key"
}
helm install --dry-run --debug mychart
[debug] Created tunnel using local port: '52091'
[debug] SERVER: "127.0.0.1:52091"
...
NAME: dining-saola
REVISION: 1
RELEASED: Fri Nov 23 15:06:17 2018
CHART: mychart-0.1.0
USER-SUPPLIED VALUES:
{}
...
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dining-saola-configmap
data:
  config.json: |-
    {
      "val": "key"
    }
EDIT:
But I want the values in the config.json file to be taken from values.yaml. Is that possible?
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  config.json: |-
    {
    {{- range $key, $val := .Values.json }}
    {{ $key | quote | indent 6 }}: {{ $val | quote }}
    {{- end }}
    }
values.yaml
json:
  key1: val1
  key2: val2
  key3: val3
helm install --dry-run --debug mychart
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mangy-hare-configmap
data:
  config.json: |-
    {
      "key1": "val1"
      "key2": "val2"
      "key3": "val3"
    }
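Note that the rendered config.json above is not strictly valid JSON: the range loop emits no commas between keys. One alternative sketch (assuming Helm's built-in toPrettyJson, which serializes the whole map and handles commas and quoting):
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  config.json: |-
{{ .Values.json | toPrettyJson | indent 4 }}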
Here is an example of a ConfigMap that is attached to a Deployment:
ConfigMap:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: jksconfig
data:
  config.json: |-
{{ .Files.Get "config.json" | indent 4 }}
Deployment:
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: jksapp
  labels:
    app: jksapp
spec:
  selector:
    matchLabels:
      app: jksapp
  template:
    metadata:
      labels:
        app: jksapp
    spec:
      containers:
        - name: jksapp
          image: jksapp:1.0.0
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: config # The name (key) value must match the pod volumes name (key) value
              mountPath: /path/to/config.json
      volumes:
        - name: config
          configMap:
            name: jksconfig
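One caveat with the mount above: without a subPath, /path/to/config.json becomes a directory containing config.json. If the intent is a single file at exactly that path, a subPath mount (an addition, not part of the original answer) does that:
          volumeMounts:
            - name: config
              mountPath: /path/to/config.json
              subPath: config.json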
Solution 01:
Insert your config.json file content into a template,
then use this template in your data for config.json,
then run the helm install command.
Finally:
{{define "config"}}
{
"a": "A",
"b": {
"b1": 1
}
}
{{end}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: jksconfig
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    app: "my-app"
    heritage: "{{ .Release.Service }}"
    release: "{{ .Release.Name }}"
data:
  config.json: {{ (include "config" .) | trim | quote }}