Vault Injector with externalVaultAddr permission denied - kubernetes

I'm trying to connect the Vault injector from my AWS Kubernetes cluster to an external Vault server and log in, using injector.externalVaultAddr="http://my-aws-instance-ip:8200"
Installation:
helm install vault \
--set='injector.externalVaultAddr=http://my-aws-instance-ip:8200' \
/path/to/my/vault-helm
After following the necessary steps here: https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar/ and customising this:
vault write auth/kubernetes/config \
token_reviewer_jwt="$(cat kube_token_from_my_k8s_cluster)" \
kubernetes_host="$KUBERNETES_HOST:443" \
kubernetes_ca_cert=@downloaded_ca_crt_from_my_k8s_cluster.crt
Now, after adding the deployment app.yaml and adding the Vault annotations to read secret/helloworld, my pod doesn't start and the vault-agent-init container shows this error:
NAME READY STATUS RESTARTS AGE
app-aaaaaaa-bbbb 1/1 Running 0 34m
app-aaaaaaa-cccc 0/2 Init:0/1 0 34m
vault-agent-injector-xxxxxx-zzzzz 1/1 Running 0 35m
$ kubectl logs -f app-aaaaaaa-cccc -c vault-agent-init
...
URL: PUT http://my-aws-instance-ip:8200/v1/auth/kubernetes/login
Code: 403. Errors:
* permission denied" backoff=2.769289902
I have also tried doing it manually from my local machine:
$ export VAULT_ADDR="http://my-aws-instance-ip:8200"
$ curl --request POST \
--data "{\"role\": \"myapp\", \"jwt\": \"$(cat kube_token_from_my_k8s_cluster)\"}" \
$VAULT_ADDR/v1/auth/kubernetes/login
# RESPONSE OUTPUT
{"errors":["permission denied"]}

Try curling the Kubernetes API server with the values you used to set up the Kubernetes auth method in Vault.
vault write auth/kubernetes/config \
token_reviewer_jwt="$TOKEN_REVIEW_JWT" \
kubernetes_host="$KUBE_HOST" \
kubernetes_ca_cert="$KUBE_CA_CERT"
curl --cacert <ca-cert-file> -H "Authorization: Bearer $TOKEN_REVIEW_JWT" $KUBE_HOST
If that doesn't work, Vault won't be able to communicate with the cluster to verify tokens.
I had this problem when trying to integrate Vault with a Rancher-managed cluster: $KUBE_HOST was pointing to the Rancher proxy, so I had to change it to the cluster's IP and extract the token and CA cert from the service account I was using.
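For reference, a minimal sketch of how those values might be extracted, assuming a service account named vault-auth in the default namespace acts as the token reviewer (the name is just an example; adjust it to the service account you actually use, and note that on Kubernetes 1.24+ the token secret may have to be created manually):
# token and CA cert of the reviewer service account (assumed name: vault-auth)
SA_SECRET=$(kubectl get serviceaccount vault-auth -o jsonpath='{.secrets[0].name}')
TOKEN_REVIEW_JWT=$(kubectl get secret "$SA_SECRET" -o jsonpath='{.data.token}' | base64 --decode)
kubectl get secret "$SA_SECRET" -o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.crt
# API server address from the current kubeconfig context
KUBE_HOST=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
vault write auth/kubernetes/config \
  token_reviewer_jwt="$TOKEN_REVIEW_JWT" \
  kubernetes_host="$KUBE_HOST" \
  kubernetes_ca_cert=@ca.crt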

Related

Kubernetes: How to get other pods' name from within a pod?

I'd like to find out the names of the other pods running in the same single-host cluster. All pods are single-application containers. I have pod A (written in Java) that acts as a TCP/IP server, and a few other pods (written in C++) connect to that server.
In pod A, I can get the IP addresses of the clients (the other pods). How do I get their pod names? I can't run kubectl commands because pod A has no kubectl installed.
Thanks,
You can directly call kube-apiserver with cURL.
First, you need a ServiceAccount bound to a ClusterRole so that it can send requests to the apiserver.
kubectl create clusterrole listpods --verb=get,list,watch --resource=pods
kubectl create clusterrolebinding listpods --clusterrole=listpods --serviceaccount=default:default
Get a shell inside a container
kubectl exec -it deploy/YOUR_DEPLOYMENT -- sh
Define the necessary parameters for your cURL call by running the commands below inside the container:
APISERVER=https://kubernetes.default.svc
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
TOKEN=$(cat ${SERVICEACCOUNT}/token)
CACERT=${SERVICEACCOUNT}/ca.crt
Send the pod list request to the apiserver:
curl -s -k -H "Authorization: Bearer $TOKEN" -H 'Accept: application/json' $APISERVER/api/v1/pods | jq "[ .items[] | .metadata.name ]"
Done! It will return a JSON list of pod names, the equivalent of kubectl get pods.
For more examples, you can check the OpenShift REST API reference. Also, if you are planning to do this programmatically, I advise you to check out the official kubernetes-clients.
Credits for the jq improvement to @moonkotte
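Since the original question is about mapping a client IP back to a pod name, the same call can be filtered on status.podIP with jq. This is just a sketch; 10.1.2.3 stands in for the client IP that pod A observed:
# print the name of the pod whose pod IP matches the client address
curl -s -k -H "Authorization: Bearer $TOKEN" -H 'Accept: application/json' \
  $APISERVER/api/v1/pods | jq -r '.items[] | select(.status.podIP=="10.1.2.3") | .metadata.name'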

Access kubernetes API remotely

I have a k8s cluster running on an Amazon EC2 instance, and I want to configure CI with GitLab. To do that, GitLab asks me for the Kubernetes API URL.
I ran kubectl cluster-info to get the requested information and I can see 3 rows:
Kubernetes master https://10.10.1.253:6443
coredns https://10.10.1.253:6443
kubernetes-dashboard https://10.10.1.253:6443
I suppose I need the Kubernetes master URL, but it is a private IP. How can I expose the API correctly?
Any ideas?
For better security, keep the IPs of the Kubernetes master nodes private and use a LoadBalancer provided by AWS to expose the Kubernetes API server. You could also configure TLS termination at the LoadBalancer.
Use kubectl config view to get the server address; it will look like server: https://172.26.2.101:6443.
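For example, this prints only the server address from the current kubeconfig context (assuming kubectl on your machine is already configured against this cluster):
# show just the API server URL of the current context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'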
First, you need to add the public IP of the master node (or of the load balancer, if any) as a subject alternative name on the apiserver certificate. You can do this as follows.
Remove the current apiserver certificates:
sudo rm /etc/kubernetes/pki/apiserver.*
Generate new certificates:
sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans=<public_ip>
Then, you have to extract your admin key, admin cert, and the CA cert from the .kube/config file:
client-key-data:
echo -n "LS0...Cg==" | base64 -d > admin.key
client-certificate-data:
echo -n "LS0...Cg==" | base64 -d > admin.crt
certificate-authority-data:
echo -n "LS0...Cg==" | base64 -d > ca.crt
Now you can query your API through curl; the example below requests pod info:
curl https://<public_ip>:6443/api/v1/pods \
--key admin.key \
--cert admin.crt \
--cacert ca.crt
And of course, make sure you have allowed the required ports (6443 in this example).
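If you prefer not to copy the base64 strings by hand, roughly the same extraction can be scripted with kubectl. This sketch assumes the admin credentials are the first user/cluster entry in your kubeconfig; --raw is needed so the certificate data is not redacted:
# decode the admin key, admin cert and cluster CA from the kubeconfig
kubectl config view --raw -o jsonpath='{.users[0].user.client-key-data}' | base64 -d > admin.key
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d > admin.crt
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d > ca.crt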

How to access kubernetes cluster using masterurl

I am trying to connect to a Kubernetes cluster using the master URL. However, I encounter an error when attempting the following call:
Command: config, ConfigErr := clientcmd.BuildConfigFromFlags("https://192.168.99.100:8443", "")
Error: Get "https://192.168.99.100:8443/api/v1/namespaces": x509: certificate signed by unknown authority
Has anyone else encountered this and/or know how to solve this error?
Get the kube-apiserver endpoint by describing the service
kubectl describe svc kubernetes
This will list the endpoint for your APIServer like this:
Endpoints: 172.17.0.6:6443
Get the token to access the APIServer like this:
TOKEN=$(kubectl get secret $(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode )
Query the APIServer with the retrieved token:
curl -v https://172.17.0.6:6443/api/v1/nodes --insecure --header "Authorization: Bearer $TOKEN"
config, ConfigErr = clientcmd.BuildConfigFromFlags(masterurl, "")
config.BearerToken = token
config.Insecure = true
Use this code to make it work; it worked for me. Note that setting Insecure skips TLS certificate verification, which is what the x509 error was complaining about.

"unable to retrieve the complete list of server APIs: tap.linkerd.io/v1alpha1" error using Linkerd on private cluster in GKE

Why does the following error occur when I install Linkerd 2.x on a private cluster in GKE?
Error: could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: tap.linkerd.io/v1alpha1: the server is currently unable to handle the request
The default firewall rules of a private cluster on GKE only permit traffic on ports 443 and 10250. This allows communication to the kube-apiserver and kubelet, respectively.
Linkerd uses ports 8443 and 8089 for communication between the control plane and the proxies deployed to the data plane.
The tap component uses port 8089 to handle requests to its apiserver.
The proxy injector and service profile validator components, both of which are types of admission controllers, use port 8443 to handle requests.
The Linkerd 2 docs include instructions for configuring your firewall on a GKE private cluster: https://linkerd.io/2/reference/cluster-configuration/
They are included below:
Get the cluster name:
CLUSTER_NAME=your-cluster-name
gcloud config set compute/zone your-zone-or-region
Get the cluster MASTER_IPV4_CIDR:
MASTER_IPV4_CIDR=$(gcloud container clusters describe $CLUSTER_NAME \
| grep "masterIpv4CidrBlock: " \
| awk '{print $2}')
Get the cluster NETWORK:
NETWORK=$(gcloud container clusters describe $CLUSTER_NAME \
| grep "^network: " \
| awk '{print $2}')
Get the cluster auto-generated NETWORK_TARGET_TAG:
NETWORK_TARGET_TAG=$(gcloud compute firewall-rules list \
--filter network=$NETWORK --format json \
| jq ".[] | select(.name | contains(\"$CLUSTER_NAME\"))" \
| jq -r '.targetTags[0]' | head -1)
Verify the values:
echo $MASTER_IPV4_CIDR $NETWORK $NETWORK_TARGET_TAG
# example output
10.0.0.0/28 foo-network gke-foo-cluster-c1ecba83-node
Create the firewall rules for proxy-injector and tap:
gcloud compute firewall-rules create gke-to-linkerd-control-plane \
--network "$NETWORK" \
--allow "tcp:8443,tcp:8089" \
--source-ranges "$MASTER_IPV4_CIDR" \
--target-tags "$NETWORK_TARGET_TAG" \
--priority 1000 \
--description "Allow traffic on ports 8843, 8089 for linkerd control-plane components"
Finally, verify that the firewall is created:
gcloud compute firewall-rules describe gke-to-linkerd-control-plane
In my case, it was related to linkerd/linkerd2#3497: the Linkerd service had some internal problems and couldn't respond to the API service requests. It was fixed by restarting its pods.
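A sketch of one way to do that restart, assuming the control plane runs in the default linkerd namespace:
# restart all linkerd control-plane deployments and watch them come back
kubectl -n linkerd rollout restart deploy
kubectl -n linkerd get pods -w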
Solution:
The steps I followed are:
kubectl get apiservices : if the linkerd apiservice is down with the error CrashLoopBackOff, follow step 2; otherwise, just try to restart the linkerd apiservice using kubectl delete apiservice/"service_name". For me it was v1alpha1.tap.linkerd.io.
kubectl get pods -n kube-system : here I found out that pods like metrics-server, linkerd, and kubernetes-dashboard were down because the main CoreDNS pod was down.
For me it was:
NAME READY STATUS RESTARTS AGE
pod/coredns-85577b65b-zj2x2 0/1 CrashLoopBackOff 7 13m
Use kubectl describe pod/"pod_name" to check the error in the CoreDNS pod. If it is down because of /etc/coredns/Corefile:10 - Error during parsing: Unknown directive proxy, then you need to use forward instead of proxy in the ConfigMap that holds the CoreDNS config, because CoreDNS 1.5.x, used by the image, no longer supports the proxy keyword.
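A rough sketch of that CoreDNS fix, assuming the standard coredns ConfigMap in kube-system and an upstream of /etc/resolv.conf (your Corefile may differ):
kubectl -n kube-system edit configmap coredns
# in the Corefile, change a line like
#   proxy . /etc/resolv.conf
# to
#   forward . /etc/resolv.conf
kubectl -n kube-system rollout restart deployment coredns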
This was a linkerd issue for me. To diagnose any linkerd-related issues, you can use the linkerd CLI and run linkerd check; this should show you whether there is an issue with linkerd and link to instructions to fix it.
For me, the issue was that the linkerd root certs had expired. In my case, linkerd was experimental in a dev cluster, so I removed it. However, if you need to update your certificates, you can follow the instructions at the following link.
https://linkerd.io/2.11/tasks/replacing_expired_certificates/
Thanks to https://stackoverflow.com/a/59644120/1212371 I was put on the right path.

Kubernetes Engine API delete pod

I need to delete a Pod on my GCP Kubernetes cluster. In the Kubernetes Engine API documentation I can find only REST APIs for projects.locations.clusters.nodePools, but nothing for Pods.
The GKE API is used to manage the cluster itself on an infrastructure level. To manage Kubernetes resources, you'd have to use the Kubernetes API. There are clients for various languages, but of course you can also directly call the API.
Deleting a Pod from within another or the same Pod:
PODNAME=ubuntu-xxxxxxxxxx-xxxx
curl https://kubernetes/api/v1/namespaces/default/pods/$PODNAME \
-X DELETE -k \
-H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
From outside, you'd have to use the public Kubernetes API server URL and a valid token. Here's how you get those using kubectl:
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
TOKEN=$(kubectl get secret $(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode )
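With those two values, the same DELETE can then be issued from outside the cluster, for example (a sketch; use --cacert with the cluster CA instead of -k if you have it available):
# delete the pod from outside the cluster using the public API server URL
curl "$APISERVER/api/v1/namespaces/default/pods/$PODNAME" \
  -X DELETE -k \
  -H "Authorization: Bearer $TOKEN"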
Here's more official information on accessing the Kubernetes API server.