Kubernetes Secret override failing for pod env variables

I deployed a k8s Secret.
I deployed a pod that uses the above Secret.
Then I wanted to change the secret data, so I updated my k8s Secret.
And I restarted my pod, so that it would read the new data.
I did things in the sequence mentioned above.
But still, my service inside the pod is reading the old secret data.
Now, I have two doubts:
Can someone please explain why this is happening?
Do Secrets override my env variables (if the names are the same), and when exactly do they override them? Is it after the service comes up?
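For context, the pod consumes the Secret as environment variables roughly like this (a minimal sketch; my-app, my-secret, DB_PASSWORD and the image are placeholders, not the actual names):
apiVersion: v1
kind: Pod
metadata:
  name: my-app                          # placeholder name
spec:
  containers:
    - name: my-app
      image: my-registry/my-app:latest  # placeholder image
      env:
        - name: DB_PASSWORD             # injected once, when the container starts
          valueFrom:
            secretKeyRef:
              name: my-secret           # the Secret that was later updated
              key: password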

Related

Set a Kubernetes Secret with data from an external Key Vault

I'm using ArgoCD (a GitOps tool) to deploy my Helm project in a Kubernetes cluster. My cluster is configured to connect to my external key vault, so once my application is deployed to my cluster, it can fetch an access token and grab any secret it needs in the vault. My project also has a use case in which it spins up a new pod within my cluster to run some batch process, and this pod uses an imagePullSecret to connect to my private registry that contains an image for this pod.
If this imagePullSecret is present in my cluster and in the same namespace as my project, this process works. However, I'm trying to keep my project IaC as cluster-agnostic as possible. Basically, should I spin up a new Kubernetes cluster, I can deploy my project to this cluster without any manual changes needed, and the project has everything it needs to run.
What I want to do is add a yaml file in my template folder that defines a secret in the cluster (which will be my imagePullSecret) so that I know that secret will always be there. Since this secret is project dependent, assume that the IaC code to deploy the cluster will not do this task. Of course, to be secure, I don't want to have the actual secret data in the file itself.
As my cluster is already seamlessly connected to an external Key Vault, I'm wondering if there's a way to fetch this information from the vault at the time of deployment. So the secret yaml would look like:
kind: Secret
apiVersion: v1
metadata:
  name: superSecretData
  namespace: projectNameSpace
  # Maybe some connection annotation needed here?
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <Data from Key Vault>
I saw there was some small project for this, but I wanted to know if there is a more... standard way to do it. When searching online for a solution, there are many ways to set secrets in a Pod, but I want to actually create a Kubernetes Secret in my namespace. My GitOps tool may be able to do this and may be a solution to this problem, but if I can avoid this middleman situation (KV -> ArgoCD -> k8s secret -> project instead of just KV -> k8s secret -> project), I would like to go that route.

K8s RBAC needed when no API calls?

My pod is running with the default service account. My pod uses Secrets through mounted files and ConfigMaps, but this is defined in YAML and the pod does not contain kubectl or a similar component.
Is there a point in using RBAC for anything if I don't call the API? The best practices state "Enable or configure RBAC rules that restrict reading data in Secrets (including via indirect means)."
Only things that call the Kubernetes API, like the kubectl command and the various Kubernetes SDK libraries, use RBAC. For your basic application, you as the user need permission to create Deployments, create Secrets, etc., but if you have cluster-administrator permissions you don't need any special setup.
You could imagine an orchestrator application that wanted to farm out work by creating Kubernetes Jobs. In this case the orchestrator itself would need an RBAC setup; typically its Helm chart or other deployment YAML would contain a Role (to create Jobs), a ServiceAccount, and a RoleBinding, and set its own Deployment to run using that ServiceAccount. This isn't the "normal" case of a straightforward HTTP-based application (Deployment/Service/Ingress) with a backing database (StatefulSet/Service).
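For illustration, such an orchestrator's RBAC setup might look roughly like this (a minimal sketch; the orchestrator name and the batch namespace are made up):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: orchestrator                    # made-up name
  namespace: batch                      # made-up namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-creator
  namespace: batch
rules:
  - apiGroups: ["batch"]                # Jobs live in the batch API group
    resources: ["jobs"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: orchestrator-job-creator
  namespace: batch
subjects:
  - kind: ServiceAccount
    name: orchestrator
    namespace: batch
roleRef:
  kind: Role
  name: job-creator
  apiGroup: rbac.authorization.k8s.io
The orchestrator's Deployment would then set serviceAccountName: orchestrator in its pod spec so its API calls run with exactly those permissions.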
... restrict reading data in Secrets ...
If you can kubectl get secret -o yaml then the Secret values are effectively there to read; they are base64 encoded but not encrypted at all. It's good practice to limit the ability to do this. That having been said, you can also create a Pod that mounts the Secret and make the main container command dump the Secret value out to somewhere readable, so even then Secrets aren't that secret. It's still a good practice, but not required per se, particularly in an evaluation or test cluster.
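As a sketch of that last point (every name here is hypothetical), a throwaway Pod like this prints a mounted Secret's contents to its logs, where anyone who can read pod logs can see it:
apiVersion: v1
kind: Pod
metadata:
  name: secret-dumper                   # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: dump
      image: busybox
      # List and print every key projected from the Secret, then exit
      command: ["sh", "-c", "ls /secrets && cat /secrets/*"]
      volumeMounts:
        - name: the-secret
          mountPath: /secrets
  volumes:
    - name: the-secret
      secret:
        secretName: mysecret            # any Secret you are allowed to mount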

How to restart Kubernetes pod when a secret is updated in Hashicorp Vault?

We have successfully implemented Vault with Kubernetes, and applications running in K8s are getting their environment variables from HashiCorp Vault. Everything is great! But I want to take a step forward and restart the pod whenever a change is made to the secret in Vault; as of now, we have to restart the pod manually to reset environment variables whenever we make changes to the Vault secret. How can this be achieved? I have heard about confd but am not sure how it can be implemented!
Use Reloader: https://github.com/stakater/Reloader. We found it quite useful in our cluster. It does a rolling update, so you can change config with zero downtime too. Also, if you made some error in a ConfigMap you can easily do a rollback.
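As a sketch (the deployment and secret names are placeholders), Reloader is driven by annotations on the workload, for example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                                        # placeholder name
  annotations:
    # Reloader rolls this Deployment whenever the named Secret changes;
    # reloader.stakater.com/auto: "true" would instead watch everything the pods reference
    secret.reloader.stakater.com/reload: "my-vault-secret"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:latest            # placeholder image
          envFrom:
            - secretRef:
                name: my-vault-secret                 # the Secret being watched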
A couple ideas, depending on how much effort you want to put into it:
Just restart the pod every so often. A hacky way to do this is with a liveness probe, like this answer (a sketch follows below). The drawback is that you can't use the liveness probe as a real health check without additional scripting.
Create an operator that polls Vault for changes and instructs Kubernetes to restart the pod when a change is detected. Not sure if Vault has an events API that you could use for that.
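A sketch of the liveness-probe trick from the first idea, in a variant that assumes the secret is also mounted as a volume (the names and the my-app command are made up): the container records a checksum of the mounted secret at startup, and the probe fails once the projected files change, so the kubelet restarts the container and it picks up the new values.
apiVersion: v1
kind: Pod
metadata:
  name: my-app                              # made-up name
spec:
  containers:
    - name: my-app
      image: my-registry/my-app:latest      # placeholder image
      # Record a checksum of the mounted secret at startup, then start the real process
      command: ["sh", "-c", "md5sum /etc/secret/* > /tmp/secret.sum && exec my-app"]
      livenessProbe:
        exec:
          # Fails (and triggers a restart) once the mounted files no longer match the recorded checksum
          command: ["md5sum", "-c", "/tmp/secret.sum"]
        periodSeconds: 60
      volumeMounts:
        - name: app-secret
          mountPath: /etc/secret
  volumes:
    - name: app-secret
      secret:
        secretName: my-vault-secret         # placeholder secret name
The drawback from above still applies: this probe no longer tells you anything about the application's real health.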
https://www.vaultproject.io/docs/agent/template#renewals-and-updating-secrets
If a secret or token isn't renewable or leased, Vault Agent will fetch the secret every 5 minutes. This is not configurable. Non-renewable secrets include (but are not limited to) KV Version 2.

Do I need Namespace, Secret, ServiceAccount, and ConfigMap in my ingress.yaml?

I am practicing k8s on Katacoda. Currently I am working on ingress.yaml. This chapter has extra kinds of resources coming into the yaml file. They are Namespace, Secret, ServiceAccount, and ConfigMap.
For Secret, I can read another chapter to understand it later.
Questions:
Do I need to use Namespace, ServiceAccount, and ConfigMap in my ingress.yaml?
Suppose I use Caddy to do HTTPS. The Secret from the example is hardcoded. How can I achieve automatic renewal after a certain period?
Do I need to use Namespace, ServiceAccount, and ConfigMap in my ingress.yaml?
No, it's not required. They are different Kubernetes resources and can be created independently. But you can place several resource definitions in one YAML file just for convenience (see the example below).
Alternatively you can create separate YAML file for each resource and place them all in the same directory. After that one of the following commands can be used to create resources in bulk:
kubectl create -f project/k8s/development
kubectl create -f project/k8s/development --recursive
Namespace is just a placeholder for Kubernetes resources. It should be created before any other resources use it.
ServiceAccount is used as a security context to restrict permissions for specific automation operations.
ConfigMap is used as a node-independent source of configuration/files/environment for pods.
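For example, all of these can live in one YAML file as separate documents (the names below are made up):
apiVersion: v1
kind: Namespace
metadata:
  name: development                 # made-up namespace
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: development
data:
  LOG_LEVEL: info                   # example configuration value
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app
  namespace: development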
Suppose I use Caddy to do HTTPS. The Secret from the example is hardcoded. How can I achieve automatic renewal after a certain period?
Not quite a clear question, but I believe you can use cert-manager for that.
cert-manager is a quite popular solution for managing certificates in a Kubernetes cluster.
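A rough sketch of what that looks like (it assumes a ClusterIssuer named letsencrypt already exists; the names are made up): cert-manager issues the certificate into a Secret and renews it automatically before it expires, so nothing stays hardcoded.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls
  namespace: default
spec:
  secretName: example-tls           # cert-manager writes the cert/key into this Secret
  dnsNames:
    - example.com                   # made-up hostname
  issuerRef:
    name: letsencrypt               # assumes this ClusterIssuer exists
    kind: ClusterIssuer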

Update kubernetes secrets doesn't update running container env vars

Currently, when updating a Kubernetes secrets file, in order to apply the changes I need to run kubectl apply -f my-secrets.yaml. If there was a running container, it would still be using the old secrets. In order to apply the new secrets to the running container, I currently run the command kubectl replace -f my-pod.yaml.
I was wondering if this is the best way to update a running container secret, or am I missing something.
Thanks.
For k8s versions v1.15 and later: kubectl rollout restart deployment $deploymentname: this will restart pods incrementally without causing downtime.
The secret docs for users say this:
Mounted Secrets are updated automatically
When a secret being already consumed in a volume is updated, projected keys are eventually updated as well. The update time depends on the kubelet syncing period.
Mounted secrets are updated. The question is when. The fact that the content of a secret is updated does not mean that your application automatically consumes it. It is the job of your application to watch for file changes in this scenario and act accordingly. Having this in mind, you currently need to do a little bit more work. One way I have in mind right now would be to run a scheduled job in Kubernetes which talks to the Kubernetes API to initiate a new rollout of your deployment. That way you could theoretically achieve what you want in order to renew your secrets. It is somewhat inelegant, but this is the only way I have in mind at the moment. I still need to check more on the Kubernetes concepts myself, so please bear with me.
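A rough sketch of such a scheduled job (all names are made up, and it assumes a ServiceAccount deploy-restarter already exists with RBAC permission to patch the Deployment):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart-my-app                          # made-up name
spec:
  schedule: "0 3 * * *"                         # e.g. once a day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: deploy-restarter  # needs RBAC to patch Deployments
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl            # any image that contains kubectl
              command: ["kubectl", "rollout", "restart", "deployment/my-app"]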
Assuming we have a running pod mypod [with the secret mysecret mounted in the pod spec]:
We can delete the existing secret:
kubectl delete secret mysecret
then recreate the same secret from the updated file(s):
kubectl create secret generic mysecret --from-file=<updated file/s>
then do
kubectl apply -f ./mypod.yaml
Check the secrets inside mypod; they will be updated.
In case anyone (like me) wants to force a rolling update of pods which are using those secrets: from this issue, the trick is to update an env variable inside the container; then k8s will automatically do a rolling update of the pods.
kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"mycontainer","env":[{"name":"RESTART_","value":"'$(date +%s)'"}]}]}}}}'
By design, Kubernetes won't push Secret updates into the environment variables of running Pods. If you want a Pod to pick up the updated Secret value, you have to destroy and recreate the Pod. You can read more about it here.