Vault sidecar injector permission denied only for vault enterprise - kubernetes

I am trying out Vault Enterprise, but the sidecar gets permission denied when I point it at Vault Enterprise, while the same setup works fine against a local Vault server.
Here is the repository that contains a working example against the local Vault: vault-sidecar-injector-app
Vault config
export VAULT_ADDR="https://vault-cluster.vault.c1c633fa-91ef-4e86-b025-4f31b3f14730.aws.hashicorp.cloud:8200"
export VAULT_NAMESPACE="admin"
#install agent
helm upgrade --install vault hashicorp/vault --set "injector.externalVaultAddr=$VAULT_ADDR"
vault auth enable kubernetes
# get certs & host
VAULT_HELM_SECRET_NAME=$(kubectl get secrets --output=json | jq -r '.items[].metadata | select(.name|startswith("vault-token-")).name')
TOKEN_REVIEW_JWT=$(kubectl get secret $VAULT_HELM_SECRET_NAME --output='go-template={{ .data.token }}' | base64 --decode)
KUBE_CA_CERT=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.certificate-authority-data}' | base64 --decode)
KUBE_HOST=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.server}')
# set Kubernetes config
vault write auth/kubernetes/config \
token_reviewer_jwt="$TOKEN_REVIEW_JWT" \
kubernetes_host="$KUBE_HOST" \
kubernetes_ca_cert="$KUBE_CA_CERT" \
issuer="https://kubernetes.default.svc.cluster.local" \
disable_iss_validation="true" \
disable_local_ca_jwt="true"
vault auth enable approle
# create admin policy
vault policy write admin admin-policy.hcl
vault write auth/approle/role/admin policies="admin"
vault read auth/approle/role/admin/role-id
# generate secret
vault write -f auth/approle/role/admin/secret-id
#Enable KV
vault secrets enable -version=2 kv
I can see the role and policy
Admin policy
Here is the admin policy for the enterprise
path "*" {
capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
Deploy Script for helm
Here is the deploy script; I also tried the hcp-root root policy, but no luck.
RELEASE_NAME=demo-managed
NAMESPACE=default
ENVIRONMENT=develop
export role_id="f9782a53-823e-2c08-81ae-abc"
export secret_id="1de3b8c5-18c7-60e3-24ca-abc"
export VAULT_ADDR="https://vault-cluster.vault.c1c633fa-91ef-4e86-b025-4f31b3f14730.aws.hashicorp.cloud:8200"
export VAULT_TOKEN=$(vault write -field="token" auth/approle/login role_id="${role_id}" secret_id="${secret_id}")
vault write auth/kubernetes/role/${NAMESPACE}-${RELEASE_NAME} bound_service_account_names=${RELEASE_NAME} bound_service_account_namespaces=${NAMESPACE} policies=hcp-root ttl=1h
helm upgrade --install $RELEASE_NAME ../helm-chart --set environment=$ENVIRONMENT --set nameOverride=$RELEASE_NAME
I also tried with the root token:
RELEASE_NAME=demo-managed
NAMESPACE=default
ENVIRONMENT=develop
vault write auth/kubernetes/role/${NAMESPACE}-${RELEASE_NAME} bound_service_account_names=${RELEASE_NAME} bound_service_account_namespaces=${NAMESPACE} policies=hcp-root ttl=1h
helm upgrade --install $RELEASE_NAME ../helm-chart --set environment=$ENVIRONMENT --set nameOverride=$RELEASE_NAME
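Since the sidecar later logs in under the Vault Enterprise admin namespace (see the annotation below), it can help to confirm the role is readable there too. A hedged sanity check, with the role name taken from the deploy script above:
# read the role back from the enterprise "admin" namespace; a namespace mismatch is a common cause of 403 on login
VAULT_NAMESPACE=admin vault read auth/kubernetes/role/default-demo-managed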
Sidecar config
With the namespace annotation (as I understand it, the Vault Enterprise namespace is required):
vault.hashicorp.com/namespace - configures the Vault Enterprise namespace to be used when requesting secrets from Vault.
https://www.vaultproject.io/docs/platform/k8s/injector/annotations
vault.hashicorp.com/namespace : "admin"
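For reference, the injector only kicks in when the full annotation set is present on the pod template. A sketch of what that amounts to, using kubectl patch with deployment and role names assumed from the deploy script (in practice these annotations live in the chart's pod template):
# illustrative only: put the injector annotations on the pod template of the release's deployment
kubectl patch deployment demo-managed --type merge -p '{
  "spec": {"template": {"metadata": {"annotations": {
    "vault.hashicorp.com/agent-inject": "true",
    "vault.hashicorp.com/role": "default-demo-managed",
    "vault.hashicorp.com/namespace": "admin"
  }}}}}'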
Error
| Error making API request.
|
| URL: PUT https://vault-cluster.vault.c1c633fa-91ef-4e86-b025-4f31b3f14730.aws.hashicorp.cloud:8200/v1/admin/auth/kubernetes/login
| Code: 403. Errors:
|
| * permission denied
Without the namespace annotation I get the error below:
| URL: PUT https://vault-cluster.vault.c1c633fa-91ef-4e86-b025-4f31b3f14730.aws.hashicorp.cloud:8200/v1/auth/kubernetes/login
| Code: 400. Errors:
|
| * missing client token
Even enabling debug logs with vault.hashicorp.com/log-level : "debug" does not help me with this error. Any help or suggestions would be appreciated.
I also tried
https://support.hashicorp.com/hc/en-us/articles/4404389946387-Kubernetes-auth-method-Permission-Denied-error
so it seems like I am missing something very specific to Vault Enterprise.

I was finally able to resolve this weird issue with Vault; posting it as an answer since it might help someone else.
The only thing I had missed was the flow between the Vault server, the sidecar, and Kubernetes.
Kubernetes must be reachable from Vault Enterprise for token review API calls: when the sidecar sends a login request to Vault, the Vault Enterprise server performs a token review API call back against the Kubernetes API.
Use the /config endpoint to configure Vault to talk to Kubernetes. Use kubectl cluster-info to validate the Kubernetes host address and TCP port.
https://www.vaultproject.io/docs/auth/kubernetes
| Error making API request.
|
| URL: PUT https://vault-cluster.vault.c1c633fa-91ef-4e86-b025-4f31b3f14730.aws.hashicorp.cloud:8200/v1/admin/auth/kubernetes/login
| Code: 403. Errors:
|
| * permission denied
backoff=2.99s
This error does not look like a connectivity issue, but it also appears when Vault is not able to communicate with the Kubernetes cluster.
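One way to see this is to issue the same TokenReview call that Vault makes, from a machine outside the cluster. A sketch, reusing TOKEN_REVIEW_JWT and KUBE_HOST from the config script:
# reproduce the TokenReview request Vault performs during login; a timeout or connection error here
# means Vault Enterprise cannot validate the sidecar's JWT either
curl -sk -X POST "$KUBE_HOST/apis/authentication.k8s.io/v1/tokenreviews" \
  -H "Authorization: Bearer $TOKEN_REVIEW_JWT" \
  -H "Content-Type: application/json" \
  -d "{\"apiVersion\":\"authentication.k8s.io/v1\",\"kind\":\"TokenReview\",\"spec\":{\"token\":\"$TOKEN_REVIEW_JWT\"}}"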
Kube Host
vault write auth/kubernetes/config \
token_reviewer_jwt="$TOKEN_REVIEW_JWT" \
kubernetes_host="$KUBE_HOST" \
kubernetes_ca_cert="$KUBE_CA_CERT" \
issuer="https://kubernetes.default.svc.cluster.local"
KUBE_HOST must be reachable from Vault Enterprise for the TokenReview process.
So for Vault to communicate with our cluster, we need a few changes.
minikube start --apiserver-ips=14.55.145.30 --vm-driver=none
Now update the vault-config.sh file:
KUBE_HOST=$(kubectl config view --raw --minify --flatten --output='jsonpath={.clusters[].cluster.server}')
change this to
KUBE_HOST=""https://14.55.145.30:8443/"
No manual steps are needed: for the first-time configuration, run
./vault-config.sh
and for the rest of the deployment in your CI/CD you can use
./vault.sh
Each release is only able to access its own secrets.
Further details can be found in start-minikube-in-ec2.
TL;DR:
Note: the Kubernetes cluster must be reachable from Vault Enterprise for authentication, so Vault Enterprise will not be able to communicate with a purely local minikube cluster. It is better to test this out on EC2.

When you set up the Kubernetes auth method in Vault, you used
vault write auth/kubernetes/config \
token_reviewer_jwt="$TOKEN_REVIEW_JWT" \
kubernetes_host="$KUBE_HOST" \
kubernetes_ca_cert="$KUBE_CA_CERT" \
issuer="https://kubernetes.default.svc.cluster.local" \
disable_iss_validation="true" \
disable_local_ca_jwt="true"
You have specified
issuer="https://kubernetes.default.svc.cluster.local" \
That is an in-cluster address pointing at your local K8s server, and I am not sure whether Vault will be able to access this endpoint or not.
Did you try sending an auth request to this endpoint directly, by passing the token and CA cert yourself?
Is your Vault able to connect to and authenticate against your local cluster over the internet?
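For reference, a hedged sketch of such a manual login against the Kubernetes auth endpoint, where SA_JWT is assumed to be the token of the bound service account (demo-managed in the default namespace) and the role name comes from the deploy script:
# call the login endpoint directly, in the enterprise "admin" namespace
curl -sk --request POST \
  --header "X-Vault-Namespace: admin" \
  --data "{\"jwt\": \"${SA_JWT}\", \"role\": \"default-demo-managed\"}" \
  "${VAULT_ADDR}/v1/auth/kubernetes/login"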
I have also faced this weird issue with Vault: when I configured it both ways, via the CLI and the UI, it worked for me, while configuring it only one way kept giving a permission denied error.

Related

How to access kubeconfig file inside containers

I have a container where I used a bitnami/kubectl image.
Now I want to run a few kubectl commands inside that container.
How can the kubectl container be made aware of my kubeconfig file?
I know that I can mount the local kubeconfig file into containers and use it.
But is there any other way possible to access kubeconfig without using it as a volume mount?
I went through the documentation of RBAC in Kubernetes.
Is configuring a Role and RoleBinding alone enough to run kubectl apply and kubectl delete commands successfully, even without mounting a kubeconfig file?
It would be really helpful if someone helps me with this.
Thanks in advance!
Now I want to run a few kubectl commands inside that container.
Why do you need it inside the container?
kubectl is your CLI to "communicate" with the cluster: the commands are passed to the kube-apiserver, parsed, and usually run through admission controllers before being executed.
It is not clear why you need to run kubectl commands inside the container, since kubectl uses your kubeconfig file for the communication (it reads the certificate path or the embedded certificate data) and is then able to connect to your cluster.
How to query the K8S API from your container?
The appropriate solution is to run an API query inside your container.
Every pod internally stores its ServiceAccount token, which allows you to query the API.
You can use the following script, which I use to query the API:
https://github.com/nirgeier/KubernetesLabs/blob/master/Labs/21-KubeAPI/api_query.sh
#!/bin/sh
#################################
## Access the internal K8S API ##
#################################
# Point to the internal API server hostname
API_SERVER_URL=https://kubernetes.default.svc
# Path to ServiceAccount token
# The service account is mapped by the K8S API server in the pods
SERVICE_ACCOUNT_FOLDER=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace if required
# NAMESPACE=$(cat ${SERVICE_ACCOUNT_FOLDER}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICE_ACCOUNT_FOLDER}/token)
# Reference the internal certificate authority (CA)
CACERT=${SERVICE_ACCOUNT_FOLDER}/ca.crt
# Explore the API with TOKEN and the Certificate
curl -X GET \
--cacert ${CACERT} \
--header "Authorization: Bearer ${TOKEN}" \
${API_SERVER_URL}/api
You can use kubectl without your kubeconfig file. Your pod is launched with a service account, and all kubectl commands will be executed with that service account's privileges. So you have to use RBAC to grant the necessary access rights to that service account first, as sketched below.
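A minimal sketch, assuming the pod runs as a service account named kubectl-sa in the default namespace (both names are illustrative):
# create the service account and grant it namespaced rights on the resources kubectl will touch
kubectl create serviceaccount kubectl-sa
kubectl create role pod-deployer \
  --verb=get,list,watch,create,update,patch,delete \
  --resource=pods,deployments
kubectl create rolebinding kubectl-sa-pod-deployer \
  --role=pod-deployer \
  --serviceaccount=default:kubectl-sa
Then set spec.serviceAccountName: kubectl-sa in the pod spec; kubectl inside the pod picks up the mounted token automatically.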

How to configure for Istio Multi-Primary on different networks

I tried to configure Istio multicluster on different networks using https://istio.io/latest/docs/setup/install/multicluster/ on AWS EC2 instances.
I created 2 clusters and tried to verify the communication between the clusters, but it is failing.
In the Istio docs, I found the statement "The API Server in each cluster must be accessible to the other clusters in the mesh" under https://istio.io/latest/docs/setup/install/multicluster/before-you-begin/. How can I make the API server accessible to the other cluster?
The same guide you used, https://istio.io/latest/docs/setup/install/multicluster/multi-primary_multi-network/, contains the steps to provide access to the API server.
Install a remote secret in cluster2 that provides access to cluster1’s API server.
$ istioctl x create-remote-secret \
--context="${CTX_CLUSTER1}" \
--name=cluster1 | \
kubectl apply -f - --context="${CTX_CLUSTER2}"
Install a remote secret in cluster1 that provides access to cluster2’s API server.
$ istioctl x create-remote-secret \
--context="${CTX_CLUSTER2}" \
--name=cluster2 | \
kubectl apply -f - --context="${CTX_CLUSTER1}"
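To verify the secrets landed (a hedged check; istio-system is assumed as the install namespace, and istio/multiCluster is the label recent releases put on remote secrets):
$ kubectl get secret -n istio-system -l istio/multiCluster=true --context="${CTX_CLUSTER2}"
$ kubectl get secret -n istio-system -l istio/multiCluster=true --context="${CTX_CLUSTER1}"
The kubeconfig embedded in each secret points at the other cluster's API server address; on EC2 that address must be reachable across instances (security groups open, and a public or VPC-routable endpoint).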

Cannot install Helm chart when accessing GKE cluster directly

I've set up a basic GKE cluster using Autopilot settings. I am able to install Helm charts on it using kubectl with proper kubeconfig pointing to the GKE cluster.
I'd like to do the same without the kubeconfig, by providing the cluster details with relevant parameters.
To do that I'm running a Docker container using the alpine/helm image and passing the parametrised command, which looks like this:
docker run --rm -v $(pwd):/chart alpine/helm install <my_chart_name> /chart --kube-apiserver <cluster_endpoint> --kube-ca-file /chart/<cluster_certificate_file> --kube-as-user <my_gke_cluster_username> --kube-token <token>
unfortunately it returns :
Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://<cluster_endpoint>/version": dial tcp <cluster_endpoint>:80: i/o timeout
Is this even doable with GKE?
One challenge will be that GKE leverages a plugin (currently built in to kubectl itself but soon the standalone gke-gcloud-auth-plugin) to obtain an access token for the default gcloud user.
This token expires hourly.
If you can, it would be better to mount the kubeconfig (${HOME}/.kube/config) file into the container; it should (!) then authenticate as if it were kubectl, which will not only use the access token correctly but also renew it as appropriate.
https://github.com/alpine-docker/helm
docker run \
--interactive --tty --rm \
--volume=${PWD}/.kube:/root/.kube \
--volume=${PWD}/.helm:/root/.helm \
--volume=${PWD}/.config/helm:/root/.config/helm \
--volume=${PWD}/.cache/helm:/root/.cache/helm \
alpine/helm ...
NOTE: It appears there are several other local paths (.helm, .config and .cache) that may be required too.
Problem solved! A more experienced colleague has found the solution.
I should have used the full address including the protocol specification ("https://..."). That however still kept returning the "Kubernetes cluster unreachable" error, just with "unknown" details instead.
I had been using an incorrect username. Instead of the one from the kubeconfig file, a new service account should be created and its name used in the form system:serviceaccount:<namespace>:<service_account>. However, that did not alter the error either.
The service account lacked a proper role; the following command did the job: kubectl create rolebinding <rolebinding_name> --clusterrole=cluster-admin --serviceaccount=<namespace>:<service_account>. Of course, cluster-admin might not be a role we want to give away freely.
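Putting those fixes together, a hedged sketch of the resulting flow (all names are placeholders, and kubectl create token needs kubectl 1.24+; older clusters read the token from the service account's secret instead):
# create a service account for helm and bind it (cluster-admin only to mirror the fix above)
kubectl create serviceaccount helm-deployer
kubectl create rolebinding helm-deployer-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:helm-deployer
TOKEN=$(kubectl create token helm-deployer)
# the bearer token already identifies the service account, so --kube-as-user is not strictly needed
docker run --rm -v $(pwd):/chart alpine/helm install <my_chart_name> /chart \
  --kube-apiserver https://<cluster_endpoint> \
  --kube-ca-file /chart/<cluster_certificate_file> \
  --kube-token "$TOKEN"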

DO Kubernetes Cluster + GCP Container Registry

I have a Kubernetes cluster in DigitalOcean, and I want to pull images from a private repository in GCP.
I tried to create a secret that allows me to pull the images, following this article: https://blog.container-solutions.com/using-google-container-registry-with-kubernetes
Basically, these are the steps
In the GCP account, create a service account key, with a JSON credential
Execute
kubectl create secret docker-registry gcr-json-key \
--docker-server=gcr.io \
--docker-username=_json_key \
--docker-password="$(cat ~/json-key-file.json)" \
--docker-email=any@valid.email
In the deployment yaml reference the secret
imagePullSecrets:
- name: gcr-json-key
I don't understand why I am getting a 403, whether there are some restrictions on using the registry outside Google Cloud, or whether I missed some configuration.
Failed to pull image "gcr.io/myapp/backendnodeapi:latest": rpc error: code = Unknown desc = failed to pull and unpack image "gcr.io/myapp/backendnodeapi:latest": failed to resolve reference "gcr.io/myapp/backendnodeapi:latest": unexpected status code [manifests latest]: 403 Forbidden
Verify that you have enabled the Container Registry API, installed the Cloud SDK, and that the service account you are using for authentication has permissions to access Container Registry.
Docker requires privileged access to interact with registries. On Linux or Windows, add the user that you use to run Docker commands to the Docker security group.
This documentation has details on prerequisites for container registry.
Note:
Ensure that the version of kubectl is the latest version.
I tried replicating this by following the document you provided and it worked at my end, so ensure that all the prerequisites are met.
That JSON string is not a password.
The documentation suggests either activating the service account:
gcloud auth activate-service-account [USERNAME]@[PROJECT-ID].iam.gserviceaccount.com --key-file=~/service-account.json
Or add the configuration to $HOME/.docker/config.json
And then run docker-credential-gcr configure-docker.
Kubernetes seems to demand a service-account token secret, and this requires the annotation kubernetes.io/service-account.name.
Also see Configure Service Accounts for Pods.
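As a quick sanity check outside the cluster, the same JSON key can be used to log Docker in to gcr.io directly (assuming the key file from step 1); a 403 on the pull then points at missing read permissions (e.g. roles/storage.objectViewer) for the service account:
cat ~/json-key-file.json | docker login -u _json_key --password-stdin https://gcr.io
docker pull gcr.io/myapp/backendnodeapi:latest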

Not able to browse Kubernetes dashboard for clusters created in Azure

I am trying to access the Kubernetes dashboard (Azure AKS) by using the command below, but I get the error as attached.
az aks browse --resource-group rg-name --name aks-cluster-name --listen-port 8851
Please read the AKS documentation on how to authenticate to the dashboard from the link. It also explains how to enable the add-on for newer versions of k8s.
Pasting here for reference
Use a kubeconfig
For both Azure AD enabled and non-Azure AD enabled clusters, a kubeconfig can be passed in. Ensure access tokens are valid; if your tokens are expired you can refresh them via kubectl.
Set the admin kubeconfig with az aks get-credentials -a --resource-group <RG_NAME> --name <CLUSTER_NAME>
Select Kubeconfig and click Choose kubeconfig file to open file selector
Select your kubeconfig file (defaults to $HOME/.kube/config)
Click Sign In
Use a token
For non-Azure AD enabled cluster, run kubectl config view and copy the token associated with the user account of your cluster.
Paste into the token option at sign in.
Click Sign In
For Azure AD enabled clusters, retrieve your AAD token with the following command. Validate you've replaced the resource group and cluster name in the command.
kubectl config view -o jsonpath='{.users[?(@.name == "clusterUser_<RESOURCE GROUP>_<AKS_NAME>")].user.auth-provider.config.access-token}'
Try to run this
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}').
You will get values for several keys such as Name, Labels, ..., and token. The important one is the token related to your user; copy that token and paste it into the sign-in dialog.
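To avoid copying the token out of the describe output by hand, a hedged one-liner (assumes a reasonably recent kubectl with go-template support for base64decode):
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') \
  --output='go-template={{.data.token | base64decode}}'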