I currently have Flux and the Helm operator installed in my cluster via their Helm charts. The Flux deployment monitors a git repo where I have a .flux.yaml, to which I pass a folder context via the Flux deployment's --git-path flag. This is used to run kustomize to patch in which values files I want to use for the deployment. Some of these environments have files that are encrypted via sops.
I have configured Flux with sops enabled. sops/helm-secrets uses an AWS KMS key, so locally I assume a role that has been granted access to encrypt/decrypt with the specified KMS ARN. The issue I am running into is getting these secrets decrypted prior to the Helm deploy; I currently end up with the encrypted values in the final Kubernetes resource. I can't seem to find any additional documentation about configuring AWS access/secret keys for sops on the Flux side, nor anything on the Helm operator side to potentially do it via helm-secrets. Any tips would be greatly appreciated!
It turns out there was no issue with decrypting the secret. The Flux pod runs sops using the node role (which I had granted access to decrypt with the necessary KMS key) and was successfully decrypting secrets. I tested this by exec'ing into the pod and running sops -d on the file containing my secrets.
The issue ended up being that I wasn't actually passing the decrypted file to my HelmRelease. I ended up accomplishing this by using the following .flux.yaml:
version: 1
patchUpdated:
  generators:
    - command: sops -d --output secrets.yaml secrets.enc.yaml && kustomize build .
    - command: rm secrets.yaml
  patchFile: ../base/flux-patch.yaml
I originally had my secrets file formatted like a Helm values file, but I updated it so that it patches the values section of the base HelmRelease file with the decrypted values. This results in all the decrypted values being consumed by the HelmRelease. The second command removes the decrypted secrets.yaml file so that it doesn't end up getting committed back to the repo.
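Roughly, the decrypted secrets.yaml is shaped like a patch on the base HelmRelease rather than a plain values file; something along these lines (the names and keys below are just placeholders, not the ones from my setup):
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: my-app            # must match the HelmRelease in the base
  namespace: my-namespace
spec:
  values:
    apiKey: some-decrypted-value       # decrypted secret values merged into the release values
    dbPassword: some-decrypted-value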
Keep in mind that this results in the HelmRelease in the cluster containing all of your secrets, so you need to manage access to HelmRelease objects accordingly.
I need to create and deploy, on an existing Kubernetes cluster, an application based on a Docker image which is hosted in a private Harbor repo on a remote server.
I could use this if the repo was public:
kubectl create deployment <deployment_name> --image=<full_path_to_remote_repo>:<tag>
Since the repo is private, the username, password, etc. are required for the image to be pulled. How do I modify the above command to embed that information?
Thanks in advance.
P.S.
I'm looking for a way that doesn't involve creating a secret using kubectl create secret and then creating a yaml defining the deployment.
The goal is to have kubectl pull the image using the supplied creds and deploy it on the cluster without any other steps. Could this be achieved with a single (above) command?
Edit:
Creating and using a secret is acceptable if there is a way to specify the secret as an option to the kubectl command rather than specifying it in a YAML file (I'm really trying to avoid YAML). Is there a way of doing that?
There are no flags to pass an imagePullSecret to kubectl create deployment, unfortunately.
If you're coming from the world of Docker Compose or Swarm, one-line deployments are fairly common. But even those tools rely on underlying configuration and .yml files, like docker-compose.yml.
For Kubernetes, there is official documentation on pulling images from private registries, and there is even special handling for Docker registries. Check out the article on creating Docker config secrets too.
According to the docs, you must define a secret this way to make it available to your cluster. Because Kubernetes is built for resiliency and scalability, any machine in your cluster may have to pull your private image, and therefore each machine needs access to your secret. That's why it's treated as its own entity, with its own manifest and YAML file.
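For reference, the documented approach looks roughly like this (registry URL, secret name and credentials below are placeholders):
# Create a docker-registry secret:
kubectl create secret docker-registry harbor-creds \
  --docker-server=harbor.example.com \
  --docker-username=<username> \
  --docker-password=<password>

# Reference it from the deployment's pod spec:
#   spec:
#     imagePullSecrets:
#       - name: harbor-creds

# Or attach it to the namespace's default service account so that new pods
# pick it up without editing each deployment:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "harbor-creds"}]}'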
We are trying to create an environment using Crossplane and Argo CD. Crossplane provisions the database and saves the credentials to a secret on the management cluster. We then copy the credentials from the management cluster into a secret on our destination cluster.
Now we need to pass the credentials from secret A to secret B, which the application knows about. The issue is that Argo CD does not run helm install but helm template, so the lookup function doesn't work. We thought about using Vault as a middleman, but we are not sure how to load values from the secret into Vault.
Anyway, if you have encountered such an issue or have some sort of a solution, we'd be very happy to hear it.
Thank you
You need to commit the (encrypted) secrets somewhere for Argo CD to pick them up. That is the whole point of GitOps.
Alternatively, you can try using parameter overrides (https://argo-cd.readthedocs.io/en/stable/user-guide/parameters/), but this is considered a temporary workaround.
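For completeness, such an override is set from the CLI roughly like this (the app name and parameter are placeholders; see the linked page for the exact usage):
argocd app set my-app -p some.key=some-value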
I'm using Helm 3.4.2 to upgrade my charts in my AKS cluster, and I noticed that every time I deploy something new, it creates a new secret called sh.helm.v... This is the first time I'm using Helm.
I was reading the docs and found that in version 3.x Helm uses secrets as the default storage driver. Cool, but every time I deploy it creates a new secret, and I'm not sure it's best to keep them all in my cluster.
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret that just lives there,
or
Can I remove the older ones? Like, deploy v5 now and erase v1, v2, v3, keeping v4 and v5 for some reason. If it's OK to do that, does anyone have a clue how to do it? Using bash or kubectl?
Thanks a lot!
So yes, there are a few major changes in Helm 3 compared to Helm 2.
Secrets are now used as the default storage driver
In Helm 3, Secrets are now used as the default storage driver. Helm 2 used ConfigMaps by default to store release information. In Helm 2.7.0, a new storage backend that uses Secrets for storing release information was implemented, and it is now the default starting in Helm 3.
Also
Release Names are now scoped to the Namespace
In Helm 3, information about a particular release is now stored in the same namespace as the release itself. With this greater alignment to native cluster namespaces, the helm list command no longer lists all releases by default. Instead, it will list only the releases in the namespace of your current kubernetes context (i.e. the namespace shown when you run kubectl config view --minify). It also means you must supply the --all-namespaces flag to helm list to get behaviour similar to Helm 2.
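In practice:
# Helm 3 default: only releases in the current context's namespace
helm list

# Helm 2-like behaviour: releases across all namespaces
helm list --all-namespaces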
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret that just lives there, or can I remove the older ones?
I don't think it's good practice to remove anything manually; if it is not strictly necessary, better not to touch them. However, you can delete unused ones if you are sure you will not need the old revisions in the future.
# To check all secrets that were created by Helm:
kubectl get secret -l "owner=helm" --all-namespaces

# To delete a revision, simply remove the corresponding secret:
kubectl delete secret -n <namespace> <secret-name>
Btw (just FYI), taking into account the fact that Helm 3 is scoped to namespaces, you can simply delete a deployment by deleting its corresponding namespace.
And one last remark that may be useful: you can pass --history-max to helm upgrade to limit the maximum number of revisions saved per release. Use 0 for no limit (the default is 10).
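For example (release and chart names are placeholders):
# Keep at most 5 release secrets per release; older ones are pruned automatically
helm upgrade my-release ./my-chart --history-max 5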
I want to populate ConfigMaps from data inside Vault in Kubernetes. I have just completed the setup of Vault with the Kubernetes (service account) and userpass auth methods.
Can someone suggest an easy way to integrate variables for an application? What do I add in the YAML file? If I can populate a ConfigMap, then I can easily use it in the YAML.
Also, how will changes be picked up if a variable changes in Vault?
You can try using Vault CRD: when you create a custom resource of type Vault, it will create a secret using the data from Vault.
You can use Vault CRD as Xavier Adaickalam mentioned.
Regarding the subject of variable changes, you have two ways of exposing variables inside Pods: volumes and environment variables. Volumes are updated automatically when the secrets are modified. Unfortunately, environment variables do not receive updates even if you modify your secrets; you have to restart your container if the values change.
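To illustrate the difference, a minimal sketch (pod, image and secret names are placeholders) that exposes the same secret both ways:
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: nginx
      env:
        - name: DB_PASSWORD            # env var: a snapshot, needs a container restart to refresh
          valueFrom:
            secretKeyRef:
              name: my-vault-secret
              key: password
      volumeMounts:
        - name: creds                  # volume: refreshed automatically when the secret changes
          mountPath: /etc/creds
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: my-vault-secret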
I am using Helm charts to create and deploy applications into my K8s cluster.
One of my pods requires a config file with an SDK key to start and function properly. This SDK key is considered a secret and is stored in AWS Secrets Manager. I don't include the secret data in my Docker image. I want to be able to mount this config file at runtime. A ConfigMap seems like a good option in this case, except that I have not been able to figure out how to obtain the SDK key from Secrets Manager during the chart installation. Part of my ConfigMap looks like this:
data:
  app.conf: |
    [sdkkey] # I want to be able to retrieve sdk from aws secrets manager
I was looking at ways to write shell commands to use the AWS CLI to get secrets, but have not seen a way to execute shell commands from within a ConfigMap.
Any ideas or alternative solutions?
Cheers
K
tl;dr; You can't execute a ConfigMap, it is just a static manifest. Use an init container instead.
A ConfigMap is a static manifest that can be read from the Kubernetes API or injected into a container at runtime as files or environment variables. There is no way to execute a ConfigMap.
Additionally, ConfigMaps should not be used for secret data; Kubernetes has a specific resource, called a Secret, for that. A Secret can be used in similar ways to a ConfigMap, including being mounted as a volume or exposed as environment variables within the container.
Given your description, it sounds like your best option would be to use an init container to retrieve the credentials and write them to a shared emptyDir volume that is also mounted into the application container that will use them.
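A rough sketch of that pattern, assuming the pod has IAM access to Secrets Manager (for example via an IAM role for the service account or the node role); the names, secret id and paths below are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  volumes:
    - name: config
      emptyDir: {}
  initContainers:
    - name: fetch-sdk-key
      image: amazon/aws-cli            # any image with the AWS CLI and a shell will do
      command:
        - sh
        - -c
        - >
          aws secretsmanager get-secret-value --secret-id my-sdk-key
          --query SecretString --output text > /config/app.conf
      volumeMounts:
        - name: config
          mountPath: /config
  containers:
    - name: app
      image: my-app-image              # placeholder for the application image
      volumeMounts:
        - name: config                 # the app reads /etc/app/app.conf written by the init container
          mountPath: /etc/app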