ExternalSecret configuration for Google Container Registry - kubernetes

My ExternalSecret resource references a HashiCorp Vault key-value secret that stores a Google service account (JSON).
The ExternalSecret will create a Secret of type kubernetes.io/dockerconfigjson.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: gcr-external
  namespace: vault-dev
spec:
  refreshInterval:
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: gcr
    creationPolicy: Owner
    template:
      type: kubernetes.io/dockerconfigjson
      data:
        .dockerconfigjson: '{"auths": {"eu.gcr.io": {"username": "_json_key", "password": {{ .data }} }}}'
  data:
    - secretKey: data
      remoteRef:
        key: gcp/sa
However, the .dockerconfigjson string is not picking up the data secretKey when it is referenced as "password": {{ .data }}.
What's the correct way to reference it?
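One commonly suggested form (a sketch only; it assumes the Vault key gcp/sa returns the full service-account JSON in the property mapped to the data secretKey, and that the toJson template function is available in your ESO version) pipes the value through toJson so the service-account JSON is escaped into a single quoted string:

  target:
    name: gcr
    creationPolicy: Owner
    template:
      type: kubernetes.io/dockerconfigjson
      data:
        # Assumption: toJson quotes and escapes the service-account JSON so it can
        # be embedded as the "password" value of the docker config.
        .dockerconfigjson: '{"auths": {"eu.gcr.io": {"username": "_json_key", "password": {{ .data | toJson }} }}}'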

Related

Helm Charts create secrets in different namespace

I have the following secrets.yaml in the templates directory of my Helm chart:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  USER_NAME: YWRtaW4=
  PASSWORD: MWYyZDFlMmU2N2Rm
I need to create the same Secret in different namespaces, for example test1, test2, test3, and test4. How can I specify the namespaces so the same Secret is created in each of them?
You can set the namespace in the metadata section, like this:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: test1
type: Opaque
data:
  USER_NAME: YWRtaW4=
  PASSWORD: MWYyZDFlMmU2N2Rm
You can use a range loop in Helm to render one Secret definition per namespace.
Update.
# values.yaml
namespaces:
  - test1
  - test2

# templates/secrets.tpl
{{- range .Values.namespaces }}
---
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: {{ . | quote }}
type: Opaque
data:
  USER_NAME: YWRtaW4=
  PASSWORD: MWYyZDFlMmU2N2Rm
{{- end }}
### output ###
---
# Source: base/templates/secrets.tpl
---
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: "test1"
type: Opaque
data:
  USER_NAME: YWRtaW4=
  PASSWORD: MWYyZDFlMmU2N2Rm
---
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: "test2"
type: Opaque
data:
  USER_NAME: YWRtaW4=
  PASSWORD: MWYyZDFlMmU2N2Rm
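To check the rendered manifests locally before installing, you can run helm template against the chart (a minimal sketch; the chart directory name base is taken from the # Source: line above and may differ in your setup):

# render the chart and confirm one Secret per namespace is produced
helm template ./base

Note that the target namespaces must already exist before the release is installed, since the chart only creates the Secrets.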

Is there any way to use configMaps in K8s with nested values to be used as environment variable in the pod?

I have the following cm.yml for a ConfigMap with nested, JSON-like data.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-cm
data:
  spring: |-
    rabbitmq: |-
      host: "sample.com"
    datasource: |-
      url: "jdbc:postgresql:sampleDb"
I have to set the environment variables spring-rabbitmq-host=sample.com and spring-datasource-url=jdbc:postgresql:sampleDb in the following Pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-sample
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: spring-rabbitmq-host
          valueFrom:
            configMapKeyRef:
              name: sample-cm
              key: <what should i specify here?>
        - name: spring-datasource-url
          valueFrom:
            configMapKeyRef:
              name: sample-cm
              key: <what should i specify here?>
Unfortunately, it won't be possible to pass values from the ConfigMap you created as separate environment variables, because the nested block is read as a single string.
You can check it using kubectl describe cm sample-cm
Name:         sample-cm
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","data":{"spring":"rabbitmq: |-\n host: \"sample.com\"\ndatasource: |-\n url: \"jdbc:postgresql:sampleDb\""},"kind":"Con...

Data
====
spring:
----
rabbitmq: |-
 host: "sample.com"
datasource: |-
 url: "jdbc:postgresql:sampleDb"

Events:  <none>
A ConfigMap needs flat key-value pairs, so you have to modify it to represent the values separately.
The simplest approach would be:
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-cm
data:
  host: "sample.com"
  url: "jdbc:postgresql:sampleDb"
so the values will look like this:
kubectl describe cm sample-cm

Name:         sample-cm
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","data":{"host":"sample.com","url":"jdbc:postgresql:sampleDb"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"s...

Data
====
host:
----
sample.com
url:
----
jdbc:postgresql:sampleDb

Events:  <none>
and pass it to a pod:
apiVersion: v1
kind: Pod
metadata:
  name: pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: spring-rabbitmq-host
          valueFrom:
            configMapKeyRef:
              name: sample-cm
              key: host
        - name: spring-datasource-url
          valueFrom:
            configMapKeyRef:
              name: sample-cm
              key: url
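If you do not strictly need the spring-...-prefixed variable names, an alternative (a sketch, not part of the original answer) is to import every key of the flattened ConfigMap at once with envFrom; the optional prefix field prepends a string to each resulting variable name:

apiVersion: v1
kind: Pod
metadata:
  name: pod-envfrom
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        # every key of sample-cm becomes an env var, e.g. SPRING_host and SPRING_url
        - prefix: SPRING_
          configMapRef:
            name: sample-cm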

Kubernetes ConfigMap to write Node details to file

How can I use ConfigMap to write cluster node information to a JSON file?
The command below gives me the node information:
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}'
How can I use Configmap to write the above output to a text file?
You can save the output of the command to a file, then use that file (or the data inside it) to create a ConfigMap. After creating the ConfigMap you can mount it as a file in your Deployment/Pod.
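For instance, the node hostnames from the question could be captured and turned into a ConfigMap like this (a sketch; the file and ConfigMap names are placeholders):

# save the command output to a file
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="Hostname")].address}' > node-info.txt
# create a ConfigMap whose single key is the file name
kubectl create configmap node-info --from-file=node-info.txt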
Then mount it in a Deployment, for example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: appname
  name: appname
  namespace: development
spec:
  selector:
    matchLabels:
      app: appname
      tier: sometier
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: appname
        tier: sometier
    spec:
      containers:
        - env:
            - name: NODE_ENV
              value: development
            - name: PORT
              value: "3000"
            - name: SOME_VAR
              value: xxx
          image: someimage
          imagePullPolicy: Always
          name: appname
          volumeMounts:
            - name: your-volume-name
              mountPath: "your/path/to/store/the/file"
              readOnly: true
      volumes:
        - name: your-volume-name
          configMap:
            name: your-configmap-name
            items:
              - key: your-filename-inside-pod
                path: your-filename-inside-pod
I added the following configuration to the Deployment:
volumeMounts:
  - name: your-volume-name
    mountPath: "your/path/to/store/the/file"
    readOnly: true
volumes:
  - name: your-volume-name
    configMap:
      name: your-configmap-name
      items:
        - key: your-filename-inside-pod
          path: your-filename-inside-pod
To create the ConfigMap from a file:
kubectl create configmap your-configmap-name --from-file=your-file-path
Or just create the ConfigMap with the output of your command:
apiVersion: v1
kind: ConfigMap
metadata:
  name: your-configmap-name
  namespace: your-namespace
data:
  your-filename-inside-pod: |
    output of command
First, save the output of the kubectl get nodes command into a JSON file:
$ exampleCommand > node-info.json
Then create a proper ConfigMap.
Here is an example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  node-info.json: |
    {
      "array": [
        1,
        2
      ],
      "boolean": true,
      "number": 123,
      "object": {
        "a": "egg",
        "b": "egg1"
      },
      "string": "Welcome"
    }
Then remember to add the following lines under the spec section of the Pod configuration file:
env:
  - name: NODE_CONFIG_JSON
    valueFrom:
      configMapKeyRef:
        name: example-config
        key: node-info.json
You can also use a PodPreset.
A PodPreset is an object that lets you inject information (e.g. environment variables) into Pods at creation time.
Look at the example below:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: your-pod
  env:
    - name: DB_PORT
      value: "6379"
  envFrom:
    - configMapRef:
        name: etcd-env-config
but remember that you also have to add the following:
env:
  - name: NODE_CONFIG_JSON
    valueFrom:
      configMapKeyRef:
        name: example-config
        key: node-info.json
section to your Pod definition, matching your PodPreset and ConfigMap configuration.
You can find more information here: PodPreset, PodPreset configuration.

Kubernetes CronJob and updating a secret

Below is my Python script to update a secret so I can deploy to Kubernetes using kubectl. It works fine, but I want to create a Kubernetes CronJob that runs a Docker container to update the secret from within the cluster. How do I do that? The AWS token lasts only 12 hours, so I have to regenerate it from within the cluster so images can still be pulled if a pod crashes, etc.
Is there an internal API I have access to within Kubernetes?
cmd = """aws ecr get-login --no-include-email --region us-east-1 > aws_token.txt"""
run_bash(cmd)
f = open('aws_token.txt').readlines()
TOKEN = f[0].split(' ')[5]
SECRET_NAME = "%s-ecr-registry" % (self.region)
cmd = """kubectl delete secret --ignore-not-found %s -n %s""" % (SECRET_NAME,namespace)
print (cmd)
run_bash(cmd)
cmd = """kubectl create secret docker-registry %s --docker-server=https://%s.dkr.ecr.%s.amazonaws.com --docker-username=AWS --docker-password="%s" --docker-email="david.montgomery#gmail.com" -n %s """ % (SECRET_NAME,self.aws_account_id,self.region,TOKEN,namespace)
print (cmd)
run_bash(cmd)
cmd = "kubectl describe secrets/%s-ecr-registry -n %s" % (self.region,namespace)
print (cmd)
run_bash(cmd)
cmd = "kubectl get secret %s-ecr-registry -o yaml -n %s" % (self.region,namespace)
print (cmd)
As it happens I literally just got done doing this.
Below is everything you need to set up a CronJob that rolls your AWS Docker login token and re-logs in to ECR every 6 hours. Just replace the {{ variables }} with your own actual values.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: {{ namespace }}
  name: ecr-cred-helper
rules:
  - apiGroups: [""]
    resources:
      - secrets
      - serviceaccounts
      - serviceaccounts/token
    verbs:
      - 'delete'
      - 'create'
      - 'patch'
      - 'get'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: ecr-cred-helper
  namespace: {{ namespace }}
subjects:
  - kind: ServiceAccount
    name: sa-ecr-cred-helper
    namespace: {{ namespace }}
roleRef:
  kind: Role
  name: ecr-cred-helper
  apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-ecr-cred-helper
  namespace: {{ namespace }}
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  annotations:
  name: ecr-cred-helper
  namespace: {{ namespace }}
spec:
  concurrencyPolicy: Allow
  failedJobsHistoryLimit: 1
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          serviceAccountName: sa-ecr-cred-helper
          containers:
            - command:
                - /bin/sh
                - -c
                - |-
                  TOKEN=`aws ecr get-login --region ${REGION} --registry-ids ${ACCOUNT} | cut -d' ' -f6`
                  echo "ENV variables setup done."
                  kubectl delete secret -n {{ namespace }} --ignore-not-found $SECRET_NAME
                  kubectl create secret -n {{ namespace }} docker-registry $SECRET_NAME \
                    --docker-server=https://{{ ECR_REPOSITORY_URL }} \
                    --docker-username=AWS \
                    --docker-password="${TOKEN}" \
                    --docker-email="${EMAIL}"
                  echo "Secret created by name. $SECRET_NAME"
                  kubectl patch serviceaccount default -p '{"imagePullSecrets":[{"name":"'$SECRET_NAME'"}]}' -n {{ namespace }}
                  echo "All done."
              env:
                - name: AWS_DEFAULT_REGION
                  value: eu-west-1
                - name: AWS_SECRET_ACCESS_KEY
                  value: '{{ AWS_SECRET_ACCESS_KEY }}'
                - name: AWS_ACCESS_KEY_ID
                  value: '{{ AWS_ACCESS_KEY_ID }}'
                - name: ACCOUNT
                  value: '{{ AWS_ACCOUNT_ID }}'
                - name: SECRET_NAME
                  value: '{{ imagePullSecret }}'
                - name: REGION
                  value: 'eu-west-1'
                - name: EMAIL
                  value: '{{ ANY_EMAIL }}'
              image: odaniait/aws-kubectl:latest
              imagePullPolicy: IfNotPresent
              name: ecr-cred-helper
              resources: {}
              securityContext:
                capabilities: {}
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
          dnsPolicy: Default
          hostNetwork: true
          restartPolicy: Never
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
  schedule: 0 */6 * * *
  successfulJobsHistoryLimit: 3
  suspend: false
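To deploy and verify the job (a minimal sketch; it assumes the manifests above are saved as ecr-cred-helper.yaml and that {{ namespace }} has been substituted):

kubectl apply -f ecr-cred-helper.yaml
kubectl get cronjob ecr-cred-helper -n <namespace>
# trigger a one-off run instead of waiting for the schedule
kubectl create job --from=cronjob/ecr-cred-helper ecr-cred-helper-manual-1 -n <namespace>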
I'm adding my solution for copying secrets between namespaces with a CronJob, because this was the Stack Overflow answer I was given when searching for secret copying using a CronJob.
In the source namespace, you need to define a Role, a RoleBinding and a ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-user-user-secret-service-account
  namespace: source-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: demo-user-role
  namespace: source-namespace
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    # Secrets you want to have access to in your namespace
    resourceNames: ["demo-user"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: demo-user-cron-role-binding
  namespace: source-namespace
subjects:
  - kind: ServiceAccount
    name: demo-user-user-secret-service-account
    namespace: source-namespace
roleRef:
  kind: Role
  name: demo-user-role
  apiGroup: ""
and the CronJob definition will look like this:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: demo-user-user-secret-copy-cronjob
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 5
  successfulJobsHistoryLimit: 3
  startingDeadlineSeconds: 10
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: demo-user-user-secret-copy-cronjob
              image: bitnami/kubectl:1.25.4-debian-11-r6
              imagePullPolicy: IfNotPresent
              command:
                - "/bin/bash"
                - "-c"
                - "kubectl -n source-namespace get secret demo-user -o json | \
                  jq 'del(.metadata.creationTimestamp, .metadata.uid, .metadata.resourceVersion, .metadata.ownerReferences, .metadata.namespace)' > /tmp/demo-user-secret.json && \
                  kubectl apply --namespace target-namespace -f /tmp/demo-user-secret.json"
              securityContext:
                privileged: false
                allowPrivilegeEscalation: true
                readOnlyRootFilesystem: true
                runAsNonRoot: true
                capabilities:
                  drop: [ "all" ]
          restartPolicy: Never
          serviceAccountName: demo-user-user-secret-service-account
In the target namespace you also need a Role and a RoleBinding so that the CronJob in the source namespace can copy the secret over.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: target-namespace
  name: demo-user-role
rules:
  - apiGroups: [""]
    resources:
      - secrets
    verbs:
      - 'list'
      - 'delete'
      - 'create'
      - 'patch'
      - 'get'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: demo-user-role-binding
  namespace: target-namespace
subjects:
  - kind: ServiceAccount
    name: demo-user-user-secret-service-account
    namespace: source-namespace
roleRef:
  kind: Role
  name: demo-user-role
  apiGroup: ""
In your target namespace deployment you can read in the secrets as regular files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 1
  ...
    spec:
      containers:
        - name: my-app
          image: [image-name]
          volumeMounts:
            - name: your-secret
              mountPath: /opt/your-secret
              readOnly: true
      volumes:
        - name: your-secret
          secret:
            secretName: demo-user
            items:
              - key: ca.crt
                path: ca.crt
              - key: user.crt
                path: user.crt
              - key: user.key
                path: user.key
              - key: user.p12
                path: user.p12
              - key: user.password
                path: user.password
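Once the CronJob has run, you can verify the copy from the command line (a sketch using the names from the example above):

# the copied secret should now exist in the target namespace
kubectl get secret demo-user -n target-namespace
# and the mounted files should be visible inside the deployment's pods
kubectl exec deploy/my-deployment -n target-namespace -- ls /opt/your-secret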

Kubernetes - How to define ConfigMap built using a file in a yaml?

At present I am creating a configmap from the file config.json by executing:
kubectl create configmap jksconfig --from-file=config.json
I want the ConfigMap to be created as part of the deployment, so I tried this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: jksconfig
data:
  config.json: |-
{{ .Files.Get "config.json" | indent 4 }}
But it doesn't seem to work. What should go into configmap.yaml so that the same ConfigMap is created?
---UPDATE---
When I do a helm install dry run:
# Source: mychartv2/templates/jks-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: jksconfig
data:
  config.json: |
Note: I am using minikube as my kubernetes cluster
Your config.json file should be inside your mychart/ directory, not inside mychart/templates. See the Chart Template Guide.
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  config.json: |-
{{ .Files.Get "config.json" | indent 4}}
config.json
{
  "val": "key"
}
helm install --dry-run --debug mychart
[debug] Created tunnel using local port: '52091'
[debug] SERVER: "127.0.0.1:52091"
...
NAME:   dining-saola
REVISION: 1
RELEASED: Fri Nov 23 15:06:17 2018
CHART: mychart-0.1.0
USER-SUPPLIED VALUES:
{}
...
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dining-saola-configmap
data:
  config.json: |-
    {
      "val": "key"
    }
EDIT:
But I want the values in the config.json file to be taken from values.yaml. Is that possible?
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  config.json: |-
    {
    {{- range $key, $val := .Values.json }}
    {{ $key | quote | indent 6}}: {{ $val | quote }}
    {{- end}}
    }
values.yaml
json:
  key1: val1
  key2: val2
  key3: val3
helm install --dry-run --debug mychart
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mangy-hare-configmap
data:
  config.json: |-
    {
      "key1": "val1"
      "key2": "val2"
      "key3": "val3"
    }
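Note that the rendered config.json above is not strictly valid JSON, since the range loop emits no commas between the keys. One way around this (a sketch, not part of the original answer) is to let Helm serialize the whole values block with the toPrettyJson function instead of building the braces by hand:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  config.json: |-
{{ .Values.json | toPrettyJson | indent 4 }}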
Here is an example of a ConfigMap that is attached to a Deployment:
ConfigMap:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: jksconfig
data:
  config.json: |-
{{ .Files.Get "config.json" | indent 4 }}
Deployment:
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: jksapp
  labels:
    app: jksapp
spec:
  selector:
    matchLabels:
      app: jksapp
  template:
    metadata:
      labels:
        app: jksapp
    spec:
      containers:
        - name: jksapp
          image: jksapp:1.0.0
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: config   # The name(key) value must match pod volumes name(key) value
              mountPath: /path/to/config.json
      volumes:
        - name: config
          configMap:
            name: jksconfig
Soln 01:
Insert your config.json file content into a named template,
then use that template for the config.json key in your data,
then run the helm install command.
Finally:
{{define "config"}}
{
  "a": "A",
  "b": {
    "b1": 1
  }
}
{{end}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: jksconfig
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    app: "my-app"
    heritage: "{{ .Release.Service }}"
    release: "{{ .Release.Name }}"
data:
  config.json: {{ (include "config" .) | trim | quote }}