Vault with Airflow on Kubernetes

I installed Vault as a separate service with the Kubernetes auth method enabled. I set up Minikube, installed Airflow on it, and enabled the apache-airflow Kubernetes provider as well.
Once Airflow is running on Kubernetes, I want to connect it to Vault using the Airflow secrets backend, with backend kwargs that use the Kubernetes auth type.
How can Airflow obtain the Vault token returned by the Kubernetes auth login?
Please suggest a way or an example to achieve this.
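For context, a minimal sketch of what that secrets backend configuration could look like, assuming the apache-airflow-providers-hashicorp package is installed and a Vault Kubernetes auth role called airflow (a placeholder name) is bound to Airflow's service account. With the Kubernetes auth type, the backend is expected to log in with the pod's service account JWT and handle the Vault token internally, so it should not need to be fetched by hand:

    # Hedged sketch: point Airflow at Vault via environment variables.
    # The Vault URL, role name, and paths below are placeholder values.
    export AIRFLOW__SECRETS__BACKEND=airflow.providers.hashicorp.secrets.vault.VaultBackend
    export AIRFLOW__SECRETS__BACKEND_KWARGS='{
      "url": "http://vault.vault.svc.cluster.local:8200",
      "auth_type": "kubernetes",
      "kubernetes_role": "airflow",
      "kubernetes_jwt_path": "/var/run/secrets/kubernetes.io/serviceaccount/token",
      "connections_path": "connections",
      "variables_path": "variables",
      "mount_point": "secret"
    }'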

Related

InfluxDB on Kubernetes with TLS ingress

I'm setting up InfluxDB 2 on a Kubernetes cluster using Helm. I have enabled ingress and it works fine on port 80, but when I enable TLS and set "secretName" to an existing TLS secret on Kubernetes, it times out on port 443. Am I right in assuming that "secretName" in the Helm chart refers to a Kubernetes cluster secret, or is it a secret within InfluxDB itself? I can't find any useful documentation about this.
It is a reference to a new Kubernetes secret that will be created for the TLS certificate. It does not have to reference an existing secret. If you run kubectl get secrets after a successful apply, you will see a secret named something like <cert-name>-afr5d
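If you would rather point "secretName" at a secret you create yourself from an existing certificate and key, a minimal sketch (the secret name, namespace, and file names are placeholders) could look like:

    # Hedged sketch: create a TLS secret from an existing cert/key pair.
    # influxdb-tls, influxdb, tls.crt and tls.key are placeholder names.
    kubectl create secret tls influxdb-tls \
      --cert=tls.crt --key=tls.key \
      -n influxdb

    # Confirm the secret the ingress will reference actually exists.
    kubectl get secrets -n influxdb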

Kubernetes secret programmatically update

Is there a way to programmatically update a Kubernetes secret from a pod, that is, without using kubectl?
I have a secret mounted on a pod and also exposed via an environment variable. I would like to modify it from my service, but it looks like it's read-only by default.
You can use the Kubernetes REST API with the pod's service account token as credentials (found at /var/run/secrets/kubernetes.io/serviceaccount/token inside the pod); you just need to allow the service account to edit secrets in the namespace via a Role.
See Secret in the API docs.
The API server is reachable from inside the cluster at https://kubernetes.default
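A minimal sketch of such a call from inside the pod, assuming a secret named my-secret with a key my-key in the namespace my-namespace (all placeholder names):

    # Hedged sketch: patch one key of a secret via the REST API from inside a pod.
    TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
    CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    NEW_VALUE=$(echo -n "new-value" | base64)   # secret data must be base64-encoded

    curl --cacert "$CACERT" \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/merge-patch+json" \
      -X PATCH \
      "https://kubernetes.default/api/v1/namespaces/my-namespace/secrets/my-secret" \
      -d "{\"data\": {\"my-key\": \"$NEW_VALUE\"}}"

Note that a secret already injected into a running pod as an environment variable will not pick up the new value; environment variables are only set at container start.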

Kubernetes, deploy from within a pod

We have an AWS EKS Kubernetes cluster with two-factor authentication for all kubectl commands.
Is there a way to deploy an app into this cluster from a pod running inside the cluster?
Can I deploy using Helm charts, or by specifying a service account instead of a kubeconfig file?
Can I specify a service account (the one assigned to the pod running kubectl) for all kubectl actions?
All of this is meant to bypass two-factor authentication for continuous deployment via Jenkins, by deploying a Jenkins agent into the cluster and using it for deployments. Thanks.
You can use a supported Kubernetes client library, kubectl, or plain curl to call the REST API exposed by the Kubernetes API server from within a pod.
You can use Helm as well, as long as you install it in the pod.
When you call the Kubernetes API from within a pod, the pod's service account is used by default. The service account mounted in the pod needs an associated Role and RoleBinding to be able to call the Kubernetes API.
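A minimal sketch of the RBAC side, assuming a jenkins-agent service account in a ci namespace (placeholder names) that should be allowed to manage Deployments and related resources:

    # Hedged sketch: RBAC for a pod that deploys applications from inside the cluster.
    # ci, jenkins-agent and deployer are placeholder names.
    kubectl create serviceaccount jenkins-agent -n ci

    kubectl create role deployer -n ci \
      --verb=get,list,watch,create,update,patch \
      --resource=deployments,services,configmaps

    kubectl create rolebinding deployer-binding -n ci \
      --role=deployer \
      --serviceaccount=ci:jenkins-agent

Pods that run with serviceAccountName: jenkins-agent can then use kubectl, a client library, or Helm against those resources in the ci namespace without a kubeconfig file.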

How to configure local kubectl to connect to a Kubernetes EKS cluster

I am very new to Kubernetes.
I have created a cluster on AWS EKS.
Now I want to configure kubectl on a local Ubuntu server so that I can connect to the AWS EKS cluster.
I need to understand the process, if it is possible at all.
The AWS CLI is used to create the Kubernetes config (normally ~/.kube/config).
See the details with:
aws eks update-kubeconfig help
You can follow this guide. You need to do the following steps:
1. Install kubectl
2. Install aws-iam-authenticator
3. Create a kubeconfig for Amazon EKS
4. Manage users or IAM roles for your cluster
Also take a look at configuring kubeconfig using the AWS CLI here.
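For reference, a minimal sketch of the AWS CLI route, assuming the cluster is called my-cluster in us-east-1 (placeholder values) and your AWS credentials are already configured locally:

    # Hedged sketch: write/update ~/.kube/config for an EKS cluster.
    aws eks update-kubeconfig --name my-cluster --region us-east-1

    # Verify that kubectl can reach the cluster.
    kubectl get nodes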

k8s cluster running locally on Minikube should have AWS credentials to access resources on AWS

During development I'm running a k8s cluster on my dev machine inside Minikube, with several services.
The services need to access AWS resources, like an S3 bucket, so the pods should somehow get AWS credentials.
What are the options for authenticating the pods with an AWS user?
Should I pass aws_access_key_id and aws_secret_access_key in the Docker environment?
How would it work in production (inside k8s on EKS)? Is the node's role passed into the pods?
A good way to authenticate locally is to create a Kubernetes Secret containing the AWS credentials. You can then reference the secret in the environment variables of your service's Deployment, e.g.:

    - name: AWS_ACCESS_KEY
      valueFrom:
        secretKeyRef:
          name: my-aws-secret
          key: access-key
In EKS, all pods can access the node's IAM role by default. This is of course not ideal for production, since you most likely want a more restricted set of permissions for a specific pod. You can check out kube2iam as a project that lets you restrict the AWS capabilities of a single pod.
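For the local setup, a minimal sketch of creating that secret (my-aws-secret and its keys are placeholder names that must match the secretKeyRef above):

    # Hedged sketch: store local AWS credentials in a Kubernetes secret.
    kubectl create secret generic my-aws-secret \
      --from-literal=access-key=AKIA_EXAMPLE_KEY_ID \
      --from-literal=secret-key=EXAMPLE_SECRET_ACCESS_KEY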