I created a helm chart which has secrets.yaml as:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: appdbpassword
stringData:
  password: password#1
My pod is:
apiVersion: v1
kind: Pod
metadata:
  name: expense-pod-sample-1
spec:
  containers:
    - name: expense-container-sample-1
      image: exm:1
      command: [ "/bin/sh", "-c", "--" ]
      args: [ "while true; do sleep 30; done;" ]
      envFrom:
        - secretRef:
            name: appdbpassword
Whenever I run the kubectl get secrets command, I get the following secrets:
NAME                                      TYPE                 DATA   AGE
appdbpassword                             Opaque               1      41m
sh.helm.release.v1.myhelm-1572515128.v1   helm.sh/release.v1   1      41m
Why am I getting that extra secret? Am I missing something here?
Helm v2 used ConfigMaps by default to store release information. The ConfigMaps were created in the same namespace as Tiller (generally kube-system).
In Helm v3, Tiller was removed, and the information about each release version had to go somewhere:
In Helm 3, release information about a particular release is now
stored in the same namespace as the release itself.
Furthermore, Helm v3 uses Secrets as the default storage driver instead of ConfigMaps, so it is expected that you see one of these Helm secrets in every namespace that holds a release.
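If you want to tell them apart from your own secrets, the release secrets carry Helm-owned labels, so (to the best of my knowledge) you can list them with a label selector, e.g.:
kubectl get secrets --all-namespaces -l owner=helm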
There is an option to helm upgrade to limit the number of old release secrets that are kept:
--history-max int   limit the maximum number of revisions saved per release. Use 0 for no limit (default 10)
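For example, to keep only the three most recent revision secrets (the release name is taken from the output above, the chart path is a placeholder):
helm upgrade myhelm-1572515128 ./mychart --history-max 3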
This is because there is no Tiller anymore in Helm 3. Hence, release information is now stored in the same namespace as the release itself, as a Secret, which is what Helm now uses as the default storage driver.
Related
I have a pod with the following specs
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      imagePullPolicy: IfNotPresent
      name: busybox
      env:
        - name: WATCH_NAMESPACE
          valueFrom:
            configMapKeyRef:
              name: watch-namespace-config
              key: WATCH_NAMESPACE
  restartPolicy: Always
I also created a ConfigMap
kubectl create configmap watch-namespace-config \
--from-literal=WATCH_NAMESPACE=dev
The pod looks for values in the watch-namespace-config configmap.
When I manually change the ConfigMap values, I want the pod to restart automatically to reflect this change. Is that possible in any way?
This is currently a feature in progress: https://github.com/kubernetes/kubernetes/issues/22368
For now, use Reloader - https://github.com/stakater/Reloader
It watches for changes in ConfigMaps and/or Secrets, then performs a rolling upgrade on the relevant DeploymentConfig, Deployment, DaemonSet, StatefulSet and Rollout.
How to use it - https://github.com/stakater/Reloader#how-to-use-reloader
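For illustration, all Reloader needs on your side is an annotation on the workload. A minimal sketch (the Deployment name is hypothetical, the annotation key is Reloader's documented one):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    reloader.stakater.com/auto: "true"
...
This tells Reloader to restart the Deployment whenever any ConfigMap or Secret it references changes.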
As you correctly mentioned, once you update a ConfigMap or Secret, the Deployment/Pod/StatefulSet is not updated.
An optional solution for this scenario is to use Kustomization.
Kustomization generates a unique name with a content hash every time you update the ConfigMap/Secret, for example configmap-xxxxxx.
If you then run:
kubectl kustomize . | kubectl apply -f -
kubectl will apply the changes with the new ConfigMap values.
Working Example(s) using Kustomization:
https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/08-Kustomization
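A minimal sketch of such a kustomization.yaml (file names are assumptions; this works best when the workload is a Deployment, since a bare Pod's env is immutable and would have to be re-created):
# kustomization.yaml
resources:
  - deployment.yaml
configMapGenerator:
  - name: watch-namespace-config
    literals:
      - WATCH_NAMESPACE=dev
Kustomize appends a content hash to the generated ConfigMap name and rewrites references to it in the workload spec, so changing a literal changes the pod template and triggers a rollout on apply.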
I am trying to map a Kubernetes secret value to an environment variable. My secret is as shown below:
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
type: opaque
data:
  tls.crt: {{ required "A valid value is required for tls.crt" .Values.tlscrt }}
I mapped the key to an environment variable in the deployment YAML:
env:
  - name: TEST_VALUE
    valueFrom:
      secretKeyRef:
        name: test-secret
        key: tls.crt
The value gets mapped when I do helm install. However, when I do helm upgrade, the changed value is not reflected in the environment variable; it still has the old value. Can anyone please help here?
Changes to Secret or ConfigMap data are not reflected in existing pods. You have to delete and recreate the pod in order to see the changes. There are ways to automate the process (see this Q/A for example: Helm chart restart pods when configmap changes) and they all have one thing in common: you need to modify something in the pod definition to trigger a restart. It does not happen when you update a linked Secret or ConfigMap, because the link itself remains the same.
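The pattern from the linked Q/A boils down to putting a checksum of the rendered secret into a pod template annotation, so any change in the secret also changes the pod spec and triggers a rollout. A minimal sketch, assuming the secret is rendered from templates/secret.yaml in your chart:
spec:
  template:
    metadata:
      annotations:
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}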
I have a simple Helm chart that consists of a Deployment and a ConfigMap. The ConfigMap looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.APP_NAMESPACE }}-config
data:
  LOGGED_OUT_MSG: "{{ .Values.LOGGED_OUT_MSG }}"
The ConfigMap is referenced via envFrom in the Pod template:
...
envFrom:
  - configMapRef:
      name: {{ .Values.APP_NAMESPACE }}-config
For one of my non-production environments I have the file override.yaml:
# override.yaml
LOGGED_OUT_MSG: "You are logged out (DEV)"
I then do a Helm upgrade like this:
$ helm upgrade -f override.yaml mychart .
What I assumed would happen was that if I made a change to override.yaml and ran the above helm upgrade command, Helm would notice that the value of LOGGED_OUT_MSG had changed and do a rolling restart of my Pods. However, that does not happen. Instead, I have to manually delete the Pods so that the change comes through.
Is there a way to run helm upgrade so that changes in override.yaml trigger Helm to do a rolling restart of the Pods?
There is no way to do it by default AFAIK.
You are looking for reloader by stakater.
"Reloader can watch changes in ConfigMap and Secret and do rolling upgrades on Pods with their associated DeploymentConfigs, Deployments, Daemonsets and Statefulsets."
This will require installing the tool in your cluster and adding an annotation to your deployment, as in the sketch below.
https://github.com/stakater/Reloader
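A minimal sketch of such an annotation, scoped to the ConfigMap from this chart (the annotation key is Reloader's documented per-resource one; the Deployment skeleton is assumed):
kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: "{{ .Values.APP_NAMESPACE }}-config"
With that in place, changing LOGGED_OUT_MSG via helm upgrade updates the ConfigMap, and Reloader then rolls the Deployment.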
I would like to avoid keeping secrets in Git as a best practice, and store them in AWS SSM instead.
Is there any way to get the value from AWS Systems Manager and use it to create a Kubernetes Secret?
I managed to create the secret by fetching the value from AWS Parameter Store using the following script:
cat <<EOF | ./kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: kiali
  namespace: istio-system
type: Opaque
data:
  passphrase: $(echo -n "`aws ssm get-parameter --name /dev/${env_name}/kubernetes/kiali_password --with-decryption --region=eu-west-2 --output text --query Parameter.Value`" | base64 -w0)
  username: $(echo -n "admin" | base64 -w0)
EOF
For sure, the Twelve-Factor App methodology requires externalizing configuration outside the codebase.
For your question, there is an attempt to integrate AWS Secrets Manager to be used as the single source of truth for Secrets.
You just need to deploy the controller:
helm repo add secret-inject https://aws-samples.github.io/aws-secret-sidecar-injector/
helm repo update
helm install secret-inject secret-inject/secret-inject
Then annotate your deployment template with 2 annotations:
template:
  metadata:
    annotations:
      secrets.k8s.aws/sidecarInjectorWebhook: enabled
      secrets.k8s.aws/secret-arn: arn:aws:secretsmanager:us-east-1:123456789012:secret:database-password-hlRvvF
Other steps are explained here.
But I think I have highlighted the most important steps, which clarify the approach.
You can use GoDaddy external-secrets. Installing it creates a controller, and the controller will sync the AWS secrets at specific intervals. After creating the secrets in AWS SSM and installing GoDaddy external-secrets, you have to create an ExternalSecret resource as follows:
apiVersion: 'kubernetes-client.io/v1'
kind: ExternalSecret
metadata:
  name: cats-and-dogs
secretDescriptor:
  backendType: secretsManager
  data:
    - key: cats-and-dogs/mysql-password
      name: password
This will create a Kubernetes Secret for you. That secret can be exposed to your service as an environment variable or through a volume mount.
Use Kubernetes External Secrets. The solution below uses Secrets Manager (not SSM) but serves the purpose.
Deploy using Helm
$ `helm repo add external-secrets https://external-secrets.github.io/kubernetes-external-secrets/`
$ `helm install kubernetes-external-secrets external-secrets/kubernetes-external-secrets`
Create a new secret with the required parameters in AWS Secrets Manager.
For example, create a secret named "dev/db-cred" with the values below:
{"username":"user01","password":"pwd#123"}
Secret.YAML:
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: my-kube-secret
  namespace: my-namespace
spec:
  backendType: secretsManager
  region: us-east-1
  dataFrom:
    - dev/db-cred
Refer to it in the Helm values file as below:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: my-kube-secret
      key: password
I have two applications running in K8. APP A has write access to a data store and APP B has read access.
APP A needs to be able to change APP B's running deployment.
Currently we do this manually: we kick off a process in APP A which adds a new DB in the data store (say, db bob). Then we run:
kubectl edit deploy A
And change an environment variable to bob. This starts a rolling restart of all the pods of APP B. We would like to automate this process.
Is there any way to get APP A to change the deployment config of APP B in k8?
Firstly answering your main question:
Is there any way to get a service to change the deployment config of another service in k8?
From my understanding you are calling them Service A and B because of their purpose in real life, but to facilitate understanding I suggested an edit to call them APP A and APP B, because:
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).
So if in your question you meant:
"Is there anyway to get APP A to change the deployment config of APP B in k8?"
Then yes, you can give a pod admin privileges to manage other components of the cluster and use the kubectl set env command to change/add envs.
In order to achieve this, you will need:
A Service Account with needed permissions in the namespace.
NOTE: In my example below, since I don't know if you are working with multiple namespaces, I'm using a ClusterRoleBinding that grants cluster-admin to a specific ServiceAccount. If you use only 1 namespace for these apps, consider a Role and RoleBinding instead.
A ClusterRoleBinding binding the permissions of the service account to a role of the Cluster.
The kubectl client inside the APP A pod (added manually or by modifying the Docker image)
Steps to Reproduce:
Create a deployment to apply the cluster-admin privileges. I'm naming it manager-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: manager-deploy
  labels:
    app: manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: manager
  template:
    metadata:
      labels:
        app: manager
    spec:
      serviceAccountName: k8s-role
      containers:
        - name: manager
          image: gcr.io/google-samples/node-hello:1.0
Create a deployment with an environment var, mocking your APP B. I'm naming it deploy-env.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: env-deploy
  labels:
    app: env-replace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: env-replace
  template:
    metadata:
      labels:
        app: env-replace
    spec:
      serviceAccountName: k8s-role
      containers:
        - name: env-replace
          image: gcr.io/google-samples/node-hello:1.0
          env:
            - name: DATASTORE_NAME
              value: "john"
Create a ServiceAccount and a ClusterRoleBinding with cluster-admin privileges. I'm naming it service-account-for-pod.yaml (notice it's referenced in manager-deploy.yaml):
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-role
subjects:
  - kind: ServiceAccount
    name: k8s-role
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-role
Apply service-account-for-pod.yaml, deploy-env.yaml and manager-deploy.yaml, then list the current environment variables from the deploy-env pod:
$ kubectl apply -f manager-deploy.yaml
deployment.apps/manager-deploy created
$ kubectl apply -f deploy-env.yaml
deployment.apps/env-deploy created
$ kubectl apply -f service-account-for-pod.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-role created
serviceaccount/k8s-role created
$ kubectl exec -it env-deploy-fbd95bb94-hcq75 -- printenv
DATASTORE_NAME=john
Shell into the manager pod, download the kubectl binary and run kubectl set env deployment/<deployment_name> VAR_NAME=VALUE:
$ kubectl exec -it manager-deploy-747c9d5bc8-p684s -- /bin/bash
root@manager-deploy-747c9d5bc8-p684s:/# curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# chmod +x ./kubectl
root@manager-deploy-747c9d5bc8-p684s:/# mv ./kubectl /usr/local/bin/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# kubectl set env deployment/env-deploy DATASTORE_NAME=bob
Verify the env var value on the pod (notice that the pod is recreated when the deployment is modified):
$ kubectl exec -it env-deploy-7f565ffc4-t46zc -- printenv
DATASTORE_NAME=bob
Let me know in the comments if you have any doubt on how to apply this solution to your environment.
You could give service A access to your cluster (install kubectl and allow traffic from the NAT of service A to your cluster master) and execute your commands via cron jobs, Jenkins, ssh or similar. You can also use kubectl patch (see the sketch below), or get the current config of the second deployment with kubectl get deployment <name> -o yaml --export > deployment.yaml, edit it with some regex/awk/sed and then apply it, although the --export flag is getting deprecated, so you might as well have service A download the Git repo and apply the new config like that.
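A sketch of the patch approach, run from wherever service A has kubectl access (the deployment, container index and env index below come from the example above and would need adjusting to your resources):
kubectl patch deployment env-deploy --type=json \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/env/0/value", "value": "bob"}]'
Because the patch modifies the pod template, the Deployment rolls its pods just like kubectl edit or kubectl set env would.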
Thank you all for the answers (upvoted as they were both correct). I am just putting my own answer to document exactly what solved it for me.
In my case I just needed to make use of the patch URL available in K8s. That plus this example worked.
All I needed to do was create a service account to restrict who can patch where, restrict that account to Service A, and use the Java client in Service A to update the deployment of Service B. After that the pods would roll, and done.