How to access a HashiCorp Vault secret in Kubernetes

Hi, I have added a secret to my HashiCorp Vault at the path below:
cep-kv/dev/sqlpassword
I am trying to access the secret in my manifest as below:
spec:
  serviceAccountName: default
  containers: # List
  - name: cep-container
    image: myimage:latest
    env:
    - name: AppSettings__Key
      value: vault:cep-kv/dev/sqlpassword#sqlpassword
This is throwing the error below:
failed to inject secrets from vault: failed to read secret from path: cep-kv/dev/sqlpassword: Error making API request.\n\nURL: GET https://vaultnet/v1/cep-kv/dev/sqlpassword?version=-1\nCode: 403. Errors:\n\n* 1 error occurred:\n\t* permission denied\n\n" app=vault-env
Is the path I am trying to access correct?
vault:cep-kv/dev/sqlpassword#sqlpassword
I tried the path below too:
value: vault:cep-kv/dev/sqlpassword
This says the secret is not found at the respective path. Can someone help me get the secret from HashiCorp Vault? Any help would be appreciated. Thanks

As you are getting a 403 permission denied error, you need to configure Kubernetes authentication in Vault. You can configure it with the following steps:
Enable the Kubernetes auth method:
vault auth enable kubernetes
Configure the Kubernetes authentication method to use the location of the Kubernetes API:
vault write auth/kubernetes/config \
    kubernetes_host=https://192.168.99.100:<your TCP port, or blank for 443>
Create a named role:
vault write auth/kubernetes/role/demo \
    bound_service_account_names=myapp \
    bound_service_account_namespaces=default \
    policies=default \
    ttl=1h
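Note that bound_service_account_names must match the service account your pod actually runs as. The manifest in the question uses serviceAccountName: default, so a minimal sketch of a matching role (the names cep-role and cep-policy are only illustrative, and your vault-env injector will also need to be configured to use this role) might look like this:
vault write auth/kubernetes/role/cep-role \
    bound_service_account_names=default \
    bound_service_account_namespaces=default \
    policies=cep-policy \
    ttl=1h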
Write out the "myapp" policy that enables the "read" capability for secrets at the path:
vault policy write myapp - <<EOF
path "yourpath" {
  capabilities = ["read"]
}
EOF
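Applied to the path from the question, a sketch of the policy could look like the following. It assumes cep-kv might be a KV version 2 mount (those are read through an extra data/ segment), so both path forms are covered; keep whichever matches your mount and drop the other:
# hypothetical policy name; trim the paths to match your actual KV mount version
vault policy write cep-policy - <<EOF
path "cep-kv/data/dev/sqlpassword" {
  capabilities = ["read"]
}
path "cep-kv/dev/sqlpassword" {
  capabilities = ["read"]
}
EOF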
For more information, follow the Kubernetes auth Configuration docs; here is a blog explaining the usage of secrets in Kubernetes.

Related

aws-iam-authenticator & EKS

I've deployed a test EKS cluster with the appropriate configMap, and users that are SSO'd in can access the clusters by exporting session creds (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, etc.) and having the aws-iam-authenticator client installed in their terminal. The problem comes in when users attempt to use an AWS SSO profile stored in ~/.aws/config with the aws-iam-authenticator. The error that's received when running any kubectl command is the following:
$ kubectl get all
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I've tested this on my local machine (AWS CLI v2) and I haven't had any success. I've exported an AWS profile found in the ~/.aws/config file via export AWS_PROFILE=User1, and running aws sts get-caller-identity correctly shows the profile being exported. I've switched between multiple named profiles and each one gets the correct identity and permissions; however, when running any kubectl command I get the above error. I've also tried symlinking config -> credentials but no luck. The only way it works is if I export the access key, secret key, and session token as environment variables.
I suppose I can live with having to paste in the dynamic creds that come from AWS SSO, but my need to solve problems won't let me give up :(. I was following the thread found in this GitHub issue but no luck. The kube config file that I have set up is spec'd to AWS's documentation.
I suspect there may be something off with the aws-iam-authenticator server deployment, but nothing shows in the pod logs. Here's a snippet from the tool's GitHub page, which I think I followed correctly, but I did skip step 3 for reasons that I forgot:
The Kubernetes API integrates with AWS IAM Authenticator for
Kubernetes using a token authentication webhook. When you run
aws-iam-authenticator server, it will generate a webhook configuration
file and save it onto the host filesystem. You'll need to add a single
additional flag to your API server configuration:
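For reference, the "single additional flag" the quoted docs mean is kube-apiserver's --authentication-token-webhook-config-file. This only applies to a self-managed control plane (on EKS the control plane is managed and already wired up); a hedged way to check whether it was ever applied, assuming a static-pod API server manifest at the conventional path:
# check whether the webhook flag is present in the API server manifest (path is an assumption)
grep -n "authentication-token-webhook-config-file" /etc/kubernetes/manifests/kube-apiserver.yaml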
Kube Config File
apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - "token"
      - "-i"
      - "eks-cluster-name"
      - "-r"
      - "EKS-ADMIN-ROLE:ARN:::::::"
      env:
      - name: AWS_PROFILE
        value: "USER"
The AWS CLI v2 now supports AWS SSO, so I decided to update my kube config file to leverage the aws command instead of aws-iam-authenticator. Authentication via SSO is now a breeze! It looks like AWS wanted to get away from having to ship an additional binary to authenticate into EKS clusters, which is fine by me! Hope this helps.
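For anyone looking for the concrete change, a minimal sketch, assuming the SSO profile User1 and the cluster name eks-cluster-name from above: log in via SSO and let the AWS CLI v2 rewrite the kubeconfig user entry so it calls aws eks get-token instead of aws-iam-authenticator.
# log in through AWS SSO and regenerate the kubeconfig entry for the cluster
aws sso login --profile User1
aws eks update-kubeconfig --name eks-cluster-name --profile User1
# sanity check
kubectl get nodes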

Pulling Images from GCR into GKE

Today is my first day playing with GCR and GKE. So apologies if my question sounds childish.
So I have created a new registry in GCR. It is private. Using this documentation, I got hold of my Access Token using the command
gcloud auth print-access-token
#<MY-ACCESS_TOKEN>
I know that my username is oauth2accesstoken
On my local laptop when I try
docker login https://eu.gcr.io/v2
Username: oauth2accesstoken
Password: <MY-ACCESS_TOKEN>
I get:
Login Successful
So now it's time to create a docker-registry secret in Kubernetes.
I ran the below command:
kubectl create secret docker-registry eu-gcr-io-registry --docker-server='https://eu.gcr.io/v2' --docker-username='oauth2accesstoken' --docker-password='<MY-ACCESS_TOKEN>' --docker-email='<MY_EMAIL>'
And then my Pod definition looks like:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: eu.gcr.io/<my-gcp-project>/<repo>/<my-app>:latest
    ports:
    - containerPort: 8090
  imagePullSecrets:
  - name: eu-gcr-io-registry
But when I spin up the pod, I get the ERROR:
Warning Failed 4m (x4 over 6m) kubelet, node-3 Failed to pull image "eu.gcr.io/<my-gcp-project>/<repo>/<my-app>:latest": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I verified my secret by checking the YAML file and doing a base64 --decode on the .dockerconfigjson, and it is correct.
So what have I missed here?
If your GKE cluster and GCR registry are in the same project: you don't need to configure authentication. GKE clusters are authorized to pull from private GCR registries in the same project with no config. (Very likely this is your case!)
If your GKE cluster and GCR registry are in different GCP projects: follow these instructions to give the service account of your GKE cluster access to read private images in your GCR registry: https://cloud.google.com/container-registry/docs/access-control#granting_users_and_other_projects_access_to_a_registry
In a nutshell, this can be done by:
gsutil iam ch serviceAccount:[PROJECT_NUMBER]-compute@developer.gserviceaccount.com:objectViewer gs://[BUCKET_NAME]
where [BUCKET_NAME] is the GCS bucket storing your GCR images (like artifacts.[PROJECT-ID].appspot.com) and [PROJECT_NUMBER] is the numeric project number of the GCP project hosting your GKE cluster.
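As a worked example (the project and bucket names here are hypothetical), you can look up the project number with gcloud and plug it into the command above; note that for eu.gcr.io images the backing bucket is eu.artifacts.[PROJECT-ID].appspot.com:
# numeric project number of the project hosting the GKE cluster
gcloud projects describe my-gke-project --format='value(projectNumber)'
# suppose it prints 123456789012; grant read access on the registry bucket
gsutil iam ch serviceAccount:123456789012-compute@developer.gserviceaccount.com:objectViewer gs://eu.artifacts.my-registry-project.appspot.com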

How do I present a custom GCP service account to kubernetes workloads?

I'm configuring a highly available kubernetes cluster using GKE and terraform. Multiple teams will be running multiple deployments on the cluster and I anticipate most deployments will be in a custom namespace, mainly for isolation reasons.
One of our open questions is how to manage GCP service accounts on the cluster.
I can create the cluster with a custom GCP service account and adjust the permissions so it can pull images from GCR, log to Stackdriver, etc. I think this custom service account will be used by the GKE nodes instead of the default Compute Engine service account. Please correct me if I'm wrong on this front!
Each deployment needs to access a different set of GCP resources (Cloud Storage, Datastore, Cloud SQL, etc.) and I'd like each deployment to have its own GCP service account so we can control permissions. I'd also like running pods to have no access to the GCP service account that's available to the node running the pods.
Is that possible?
I've considered some options, but I'm not confident on the feasibility or desirability:
A GCP service account for a deployment could be added to the cluster as a Kubernetes secret, deployments could mount it as a file, and set GOOGLE_APPLICATION_CREDENTIALS to point to it
Maybe access to the metadata API for the instance can be denied to pods, or can the service account returned by the metadata API be changed?
Maybe there's a GKE (or kubernetes) native way to control the service account presented to pods?
You are on the right track. GCP service accounts can be used in GKE to give pods permissions to GCP resources.
Create an account:
gcloud iam service-accounts create ${SERVICE_ACCOUNT_NAME}
Add IAM permissions to the service account:
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member="serviceAccount:${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="roles/${ROLE_ID}"
Generate a JSON key file for the service account:
gcloud iam service-accounts keys create \
    --iam-account "${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
    service-account.json
Create a secret with that JSON:
kubectl create secret generic echo --from-file service-account.json
Create a deployment for your application using that secret:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      name: echo
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: "gcr.io/hightowerlabs/echo"
        env:
        - name: "GOOGLE_APPLICATION_CREDENTIALS"
          value: "/var/run/secret/cloud.google.com/service-account.json"
        - name: "PROJECT_ID"
          valueFrom:
            configMapKeyRef:
              name: echo
              key: project-id
        - name: "TOPIC"
          value: "echo"
        volumeMounts:
        - name: "service-account"
          mountPath: "/var/run/secret/cloud.google.com"
      volumes:
      - name: "service-account"
        secret:
          secretName: "echo"
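The deployment above also reads PROJECT_ID from a ConfigMap named echo, which the steps don't create; a minimal sketch (the project ID value is a placeholder):
kubectl create configmap echo --from-literal=project-id=${PROJECT_ID}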
If you want to use different permissions for separate deployments, you need to create several GCP service accounts with different permissions, generate JSON keys for them, and assign them to the deployments according to your plans. Pods will have access according to the mounted service accounts, not to the service account assigned to the node.
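For example, a second deployment could get its own, more narrowly scoped account; a sketch with illustrative names and a read-only Cloud Storage role:
# separate service account for another deployment, limited to reading Cloud Storage objects
gcloud iam service-accounts create team-b-app
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member="serviceAccount:team-b-app@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"
gcloud iam service-accounts keys create team-b-app.json \
    --iam-account "team-b-app@${PROJECT_ID}.iam.gserviceaccount.com"
kubectl create secret generic team-b-app --from-file=service-account.json=team-b-app.json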
For more information, you can look through the links:
Authenticating to Cloud Platform with Service Accounts
Google Cloud Service Accounts with Google Container Engine (GKE) - Tutorial

kubernetes: Authentication to ui with default config file fails

I have successfully set up a kubernetes cluster on AWS using kops and the following commands:
$ kops create cluster --name=<my_cluster_name> --state=s3://<my-state-bucket> --zones=eu-west-1a --node-count=2 --node-size=t2.micro --master-size=t2.small --dns-zone=<my-cluster-dns>
$ kops update cluster <my-cluster-name> --yes
When accessing the dashboard, I am prompted to either enter a token or
Please select the kubeconfig file that you have created to configure access to the cluster.
When creating the cluster, a ~/.kube/config was created with the following form:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <some_key_or_token_here>
    server: https://api.<my_cluster_url>
  name: <my_cluster_name>
contexts:
- context:
    cluster: <my_cluster_name>
    user: <my_cluster_name>
  name: <my_cluster_name>
current-context: <my_cluster_name>
kind: Config
preferences: {}
users:
- name: <my_cluster_name>
  user:
    as-user-extra: {}
    client-certificate-data: <some_key_or_certificate>
    client-key-data: <some_key_or_certificate>
    password: <password>
    username: admin
- name: <my-cluster-url>-basic-auth
  user:
    as-user-extra: {}
    password: <password>
    username: admin
Why, when pointing the Kubernetes UI to the above file, do I get
Authentication failed. Please try again.
I tried the same and had the same problem. It turns out that kops sets up certificate-based authentication, and certificate-based authentication can't be used with the web UI. Instead, I tried using token-based authentication. Next question: where do you find the token?
kubectl describe secret
This will show you the default token for the cluster. I assume this is very bad security practice but if you're using the UI to improve your learning and understanding then it will get you moving in the right direction.
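A slightly more targeted variant (a sketch that assumes the default service-account token lives in the kube-system namespace; adjust the namespace and secret name to your setup):
# list the secrets and describe the default token so you can copy its token: field into the UI
kubectl -n kube-system get secrets
kubectl -n kube-system describe secret $(kubectl -n kube-system get secrets | grep default-token | awk '{print $1}')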
This Dashboard wiki page is about authentication. That's where I discovered how to do it.
In order to enable basic auth in the Dashboard, the --authentication-mode=basic flag has to be provided. By default it is set to --authentication-mode=token.
To get the token or to understand more about access control, please refer here.
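If you do want basic auth instead, a hedged sketch of where that flag goes, assuming the standard kubernetes-dashboard deployment in kube-system and that its container already defines an args list:
# append --authentication-mode=basic to the dashboard container's args
kubectl -n kube-system patch deployment kubernetes-dashboard --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--authentication-mode=basic"}]'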

Kubernetes: how to enable API Server Bearer Token Auth?

I've been trying to enable token auth for HTTP REST API Server access from a remote client.
I installed my CoreOS/K8S cluster controller using this script: https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/controller-install.sh
My cluster works fine. This is a TLS installation so I need to configure any kubectl clients with the client certs to access the cluster.
I then tried to enable token auth via running:
echo `dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null`
This gives me a token. I then added the token to a token file on my controller containing the token and a default user:
$> cat /etc/kubernetes/token
3XQ8W6IAourkXOLH2yfpbGFXftbH0vn,default,default
I then modified the /etc/kubernetes/manifests/kube-apiserver.yaml to add in:
- --token-auth-file=/etc/kubernetes/token
to the startup param list.
I then reboot (not sure of the best way to restart the API Server by itself??).
At this point, kubectl from a remote server quits working (won't connect). I then look at docker ps on the controller and see the API server. I run docker logs container_id and get no output. If I look at other docker containers, I see output like:
E0327 20:05:46.657679 1 reflector.go:188]
pkg/proxy/config/api.go:33: Failed to list *api.Endpoints:
Get http://127.0.0.1:8080/api/v1/endpoints?resourceVersion=0:
dial tcp 127.0.0.1:8080: getsockopt: connection refused
So it appears that my api-server.yaml config is preventing the API Server from starting properly...
Any suggestions on the proper way to configure API Server for bearer token REST auth?
It is possible to have both TLS configuration and Bearer Token Auth configured, right?
Thanks!
I think your kube-apiserver dies because it can't find /etc/kubernetes/token. That's because on your deployment the apiserver is a static pod, and therefore it runs in a container, which in turn means it has a different root filesystem than that of the host.
Look into /etc/kubernetes/manifests/kube-apiserver.yaml and add a volume and a volumeMount like this (I have omitted the lines that do not need changing and don't help in locating the correct section):
kind: Pod
metadata:
  name: kube-apiserver
spec:
  containers:
  - name: kube-apiserver
    command:
    - ...
    - --token-auth-file=/etc/kubernetes/token
    volumeMounts:
    - mountPath: /etc/kubernetes/token
      name: token-kubernetes
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/token
    name: token-kubernetes
One more note: the file you quoted as token should not end in . (dot) - maybe that was only a copy-paste mistake but check it anyway. The format is documented under static token file:
token,user,uid,"group1,group2,group3"
If your problem persists, execute the command below and post the output:
journalctl -u kubelet | grep kube-apiserver
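Once the apiserver is back up, a quick way to confirm from a remote client that TLS and bearer token auth work together (they can coexist); the server address and CA path are placeholders, and the token is the one from your token file:
# call the secured API directly with the bearer token
TOKEN=3XQ8W6IAourkXOLH2yfpbGFXftbH0vn
curl --cacert /path/to/ca.pem -H "Authorization: Bearer ${TOKEN}" https://<controller-ip>/api/v1/namespaces
# or point kubectl at the same token
kubectl --server=https://<controller-ip> --certificate-authority=/path/to/ca.pem --token=${TOKEN} get nodes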