Use a Kubernetes Secret with GKEPodOperator in Airflow

I am trying to use a GOOGLE_APPLICATION_CREDENTIALS secret with GKEPodOperator.
Basically I want to:
1. Upload the secret to GKE
2. Mount (?) the secret to a container
3. Use the secret when running the container.
Until now I have added the key.json file to my image at build time, and I know this is not the correct way to do it.
I found this question: How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes
The difference is that they are not using GKEPodOperator.
What I have done:
1. Created the secret using:
kubectl create secret generic mysupersecret --from-file=service_account_key=key.json
I see there are volumes and volume_mounts parameters, but I don't understand how to use them.
Can anyone give me a helping hand on this? Maybe I am about to do something stupid.

To use a Secret with your workloads, you can specify environment variables that reference the Secret's values, or mount a volume containing the Secret.
Please follow this link on using Secrets to set volumes and volume_mounts.
This link refers to the general Google documentation on Authenticating to Cloud Platform with Service Accounts in order to use a GOOGLE_APPLICATION_CREDENTIALS secret, and this link describes how to use the KubernetesPodOperator to launch Kubernetes pods.
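For illustration, a bare Pod spec that mounts the Secret from the question as a file and points GOOGLE_APPLICATION_CREDENTIALS at it might look like the sketch below. The mount path and the pod/volume names are assumptions; mysupersecret and service_account_key come from the kubectl create secret command above:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                # illustrative name
spec:
  containers:
    - name: app
      image: image-path
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/service_account_key
      volumeMounts:
        - name: google-credentials
          mountPath: /var/secrets/google
          readOnly: true
  volumes:
    - name: google-credentials
      secret:
        secretName: mysupersecret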

This is similar to passing secrets to the KubernetesPodOperator.
Check details here.
Here is a quick sample:
from airflow.contrib.kubernetes import secret
from airflow.contrib.operators.gcp_container_operator import GKEPodOperator

# Expose values from a Kubernetes Secret as environment variables in the pod.
# The Secret name and keys below are illustrative.
influx_username = secret.Secret(
    deploy_type='env',
    deploy_target='INFLUX_USERNAME',
    secret='influxdb-credentials',
    key='username',
)
influx_pass = secret.Secret(
    deploy_type='env',
    deploy_target='INFLUX_PASSWORD',
    secret='influxdb-credentials',
    key='password',
)

operator = GKEPodOperator(
    task_id='task-id',
    project_id='prj-id',
    location='location',
    cluster_name='cluster-name',
    name='pod-name',
    namespace='default',
    image='image-path',
    secrets=[influx_username, influx_pass],
)
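For the GOOGLE_APPLICATION_CREDENTIALS case from the question, a minimal sketch (assuming Airflow 1.10's contrib imports; the mount path /var/secrets/google is an arbitrary choice) would mount the mysupersecret Secret as a file and point the environment variable at it:
from airflow.contrib.kubernetes import secret
from airflow.contrib.operators.gcp_container_operator import GKEPodOperator

# Mount the key uploaded with
#   kubectl create secret generic mysupersecret --from-file=service_account_key=key.json
# as a file inside the pod instead of baking key.json into the image.
service_account_key = secret.Secret(
    deploy_type='volume',
    deploy_target='/var/secrets/google',  # directory the file is mounted into
    secret='mysupersecret',
    key='service_account_key',            # file name under the mount directory
)

operator = GKEPodOperator(
    task_id='task-id',
    project_id='prj-id',
    location='location',
    cluster_name='cluster-name',
    name='pod-name',
    namespace='default',
    image='image-path',
    secrets=[service_account_key],
    # Google client libraries read the key path from this variable.
    env_vars={'GOOGLE_APPLICATION_CREDENTIALS': '/var/secrets/google/service_account_key'},
)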

Related

Populate Kubernetes ConfigMap from HashiCorp Vault

I want to populate ConfigMaps from data stored in Vault in Kubernetes. I have just completed the Vault setup with the Kubernetes (service account) and userpass auth methods.
Can someone suggest an easy way to integrate these variables into an application? What do I add to the YAML file? If I can populate a ConfigMap, I can easily use it from the YAML.
Also, how are changes picked up if a variable changes in Vault?
You can try using Vault CRD: when you create a custom resource of that type, it creates a Kubernetes Secret from the data stored in Vault.
You can use Vault CRD as Xavier Adaickalam mentioned.
Regarding the subject of variable changes, there are two ways of exposing values inside Pods: volumes and environment variables. Volumes are updated automatically when the underlying Secret is modified. Unfortunately, environment variables do not receive updates even if you modify the Secret; you have to restart the container for new values to be picked up.
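For example, the same (illustrative) Secret consumed both ways: the files under /etc/app-secrets are refreshed by the kubelet when the Secret changes, while the environment variable keeps the value it had when the container started:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      image: my-app:latest            # illustrative image
      env:
        - name: API_TOKEN             # snapshot taken at container start
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: api-token
      volumeMounts:
        - name: secrets
          mountPath: /etc/app-secrets # files here track updates to the Secret
  volumes:
    - name: secrets
      secret:
        secretName: app-secrets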

Automatically use secret when pulling from private registry

Is it possible to globally (or at least per namespace), configure kubernetes to always use an image pull secret when connecting to a private repo?
There are two use cases:
when a user specifies a container in our private registry in a deployment
when a user points a Helm chart at our private repo (and so we have no control over the image pull secret tag).
I know it is possible to do this on a per-service-account basis, but without writing a controller to add it to every new service account it would get a bit messy.
Is there a way to set this globally, so that if Kubernetes tries to pull from registry X it uses secret Y?
Thanks
As far as I know, the default serviceAccount is usually the one responsible for pulling the images.
To easily add imagePullSecrets to a serviceAccount you can use the patch command:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "mySecret"}]}'
It's possible to use kubectl patch in a script that inserts imagePullSecrets on serviceAccounts across all namespaces.
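A minimal sketch of such a script (it assumes the pull secret, here called mySecret as in the patch above, already exists in every namespace):
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  # Patch the default service account in each namespace to use the pull secret.
  kubectl patch serviceaccount default -n "$ns" \
    -p '{"imagePullSecrets": [{"name": "mySecret"}]}'
done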
If it's too complicated to manage multiple namespaces, you can have a look at kubernetes-replicator, which syncs resources between namespaces.
Solution 2:
This section of the docs explains how you can configure a private registry on a per-node basis:
Here are the recommended steps to configure your nodes to use a private registry. In this example, run these on your desktop/laptop:
Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json.
View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
Get a list of your nodes, for example:
If you want the names:
nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')
If you want to get the IPs:
nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')
Copy your local .docker/config.json to one of the search paths listed above, for example:
for n in $nodes; do scp ~/.docker/config.json root@$n:/var/lib/kubelet/config.json; done
Solution 3:
A (very dirty!) way I discovered to avoid setting an imagePullSecret on a per-deployment / per-serviceAccount basis is to:
1. Set imagePullPolicy: IfNotPresent
2. Pull the image on each node
2.1. manually, using docker pull myrepo/image:tag, or
2.2. using a script or a tool like docker-puller to automate the process.
Well, I think I don't need to explain how ugly that is.
PS: If it helps, I found an issue on kubernetes/kops about adding a global configuration for a private registry.
Two simple questions: where are you running your k8s cluster, and where is your registry located?
Here are a few approaches to your issue:
https://kubernetes.io/docs/concepts/containers/images/#using-a-private-registry

Passing the Google Cloud service account file to Traefik

As per https://docs.traefik.io/configuration/acme/
I've created a secret like so:
kubectl --namespace=gitlab-managed-apps create secret generic traefik-credentials \
--from-literal=GCE_PROJECT=<id> \
--from-file=GCE_SERVICE_ACCOUNT_FILE=key.json
And passed it to the Helm chart using: --set acme.dnsProvider.$name=traefik-credentials
However I am still getting the following error:
{"level":"error","msg":"Unable to obtain ACME certificate for domains \"traefik.my.domain.com\" detected thanks to rule \"Host:traefik.my.domain.com\" : cannot get ACME client googlecloud: Service Account file missing","time":"2019-01-14T21:44:17Z"}
I don't know why/if Traefik uses the GCE_SERVICE_ACCOUNT_FILE variable. All Google tooling and third-party integrations use the GOOGLE_APPLICATION_CREDENTIALS environment variable for that purpose (and all Google API clients automatically pick up this variable), so it looks like Traefik may have made a poor choice in calling it something else.
I recommend you look at the Pod spec of the Traefik pod (the volumes and volumeMounts fields) to see if the Secret is mounted into the pod correctly.
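For example, you can inspect those fields directly (the pod name is a placeholder; look it up with kubectl get pods first):
kubectl -n gitlab-managed-apps get pod <traefik-pod-name> -o jsonpath='{.spec.volumes}'
kubectl -n gitlab-managed-apps get pod <traefik-pod-name> -o jsonpath='{.spec.containers[0].volumeMounts}'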
If you follow this tutorial https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform you can learn how to mount IAM service account keys into any Pod. Maybe you can combine this with the Helm chart itself and figure out what you need to change to make this work.

How to execute shell commands from within a Kubernetes ConfigMap?

I am using Helm charts to create and deploy applications into my K8s cluster.
One of my pods requires a config file with an SDK key to start and function properly. This SDK key is considered a secret and is stored in AWS Secrets Manager. I don't include the secret data in my Docker image, and I want to be able to mount this config file at runtime. A ConfigMap seems like a good option in this case, except that I have not been able to figure out how to obtain the SDK key from Secrets Manager during the chart installation. Part of my ConfigMap looks like this:
data:
  app.conf: |
    [sdkkey] # I want to be able to retrieve sdk from aws secrets manager
I was looking at ways to write shell commands to use AWS CLI to get secrets, but have not seen a way to execute shell commands from within a ConfigMap.
Any ideas or alternative solutions?
Cheers
K
tl;dr: You can't execute a ConfigMap; it is just a static manifest. Use an init container instead.
A ConfigMap is a static manifest that can be read from the Kubernetes API or injected into a container at runtime as files or environment variables. There is no way to execute a ConfigMap.
Additionally, ConfigMaps should not be used for secret data; Kubernetes has a specific resource, called a Secret, for that. A Secret can be used in similar ways to a ConfigMap, including being mounted as a volume or exposed as environment variables within the container.
Given your description it sounds like your best option would be to use an init container to retrieve the credentials and write them to a shared emptyDir Volume mounted into the container with the application that will use the credentials.
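A minimal sketch of that pattern, assuming the node or pod has IAM permissions for Secrets Manager and using a hypothetical secret name my-sdk-key (images, names and paths are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sdk-key
spec:
  initContainers:
    - name: fetch-sdk-key
      image: amazon/aws-cli:latest
      command:
        - sh
        - -c
        - >
          aws secretsmanager get-secret-value
          --secret-id my-sdk-key
          --query SecretString --output text
          > /config/app.conf
      volumeMounts:
        - name: config
          mountPath: /config
  containers:
    - name: app
      image: my-app:latest
      volumeMounts:
        - name: config
          mountPath: /etc/app     # the application reads /etc/app/app.conf at start-up
  volumes:
    - name: config
      emptyDir: {}
Because the file lives on an emptyDir volume shared between the init container and the application container, it never ends up in the image or in a ConfigMap.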

Kubernetes secrets encryption

I have pods deployed to a Kubernetes cluster (hosted with Google Kubernetes Engine). Those pods use some secrets, which are plain text files. I added the secrets to the YAML file and deployed the deployment. The application is working fine.
Now, let's say that someone compromises my code and somehow gets access to all the files on the container. In that case, the attacker can find the secrets directory and print all the secrets stored there, in plain text.
Question:
Why is it more secure to use Kubernetes Secrets instead of just plain text?
There are different levels of security, and as @Vishal Biyani says in the comments, it sounds like you're looking for the level of security you'd get from a project like Sealed Secrets.
As you say, out of the box Secrets don't give you encryption at the container level. But they do give you controls on access through kubectl and the Kubernetes APIs. For example, you could use role-based access control so that specific users can see that a secret exists without seeing (through the k8s APIs) what its value is.
You can also create the secret using a command instead of putting it in the YAML file, for example:
kubectl create secret generic cloudsql-user-credentials --from-literal=username=[your user] --from-literal=password=[your pass]
You can read it back with:
kubectl get secret cloudsql-user-credentials -o yaml
I also use the secret at two levels. The first is the Kubernetes level:
env:
  - name: SECRETS_USER
    valueFrom:
      secretKeyRef:
        name: cloudsql-user-credentials
        key: username
SECRETS_USER is an environment variable, and I use its value with Jasypt:
spring:
  datasource:
    password: ENC(${SECRETS_USER})
On application start-up you pass the parameter -Djasypt.encryptor.password=encryptKeyCode.
The encrypted value is generated with the Jasypt CLI, for example:
java -cp ~/.m2/repository/org/jasypt/jasypt/1.9.2/jasypt-1.9.2.jar org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI input="encryptKeyCode" password=[pass user] algorithm=PBEWithMD5AndDES