kubectl --token=$TOKEN doesn't run with the permissions of the token - kubernetes

When I run kubectl with the --token flag and specify a token, it still uses the administrator credentials from the kubeconfig file.
This is what I did:
NAMESPACE="default"
SERVICE_ACCOUNT_NAME="sa1"
kubectl create sa $SERVICE_ACCOUNT_NAME
kubectl create clusterrolebinding list-pod-clusterrolebinding \
--clusterrole=list-pod-clusterrole \
--serviceaccount="$NAMESPACE":"$SERVICE_ACCOUNT_NAME"
kubectl create clusterrole list-pod-clusterrole \
--verb=list \
--resource=pods
TOKEN=`kubectl get secrets $(kubectl get sa $SERVICE_ACCOUNT_NAME -o json | jq -r '.secrets[].name') -o json | jq -r '.data.token' | base64 -d`
# Expected this to fail, but it doesn't because it uses the admin credentials
kubectl get secrets --token $TOKEN
The token only has permission to list pods, so I expected kubectl get secrets --token $TOKEN to fail, but it doesn't because it still uses the administrator's context.
I didn't create a new context. I know kubectl has the ability to use a bearer token and I want to understand how to do it.
I also tried kubectl get secrets --insecure-skip-tls-verify --server https://<master_ip>:6443 --token $TOKEN and it also didn't return a Forbidden result.
If you test it you can use katacoda:
https://www.katacoda.com/courses/kubernetes/playground
EDIT:
I tried to create context with this:
NAMESPACE="default"
SERVICE_ACCOUNT_NAME="sa1"
CONTEXT_NAME="sa1-context"
USER_NAME="sa1-username"
CLUSTER_NAME="kubernetes"
kubectl create sa "$SERVICE_ACCOUNT_NAME" -n "$NAMESPACE"
SECRET_NAME=`kubectl get serviceaccounts $SERVICE_ACCOUNT_NAME -n $NAMESPACE -o json | jq -r '.secrets[].name'`
TOKEN=`kubectl get secrets $SECRET_NAME -n $NAMESPACE -o json | jq -r '.data | .token' | base64 -d`
# Create user with the JWT token of the service account
echo "[*] Setting credentials for user: $USER_NAME"
kubectl config set-credentials $USER_NAME --token=$TOKEN
# Make sure the cluster name is correct!!!
echo "[*] Setting context: $CONTEXT_NAME"
kubectl config set-context $CONTEXT_NAME \
--cluster=$CLUSTER_NAME \
--namespace=$NAMESPACE \
--user=$USER_NAME
But when I tried kubectl get secrets --context $CONTEXT_NAME it still succeeded, even though it was supposed to fail because the service account doesn't have permission for that.
Edit 2:
Options to run it correctly against the kubectl API:
kubectl get pods --token `cat /home/natan/token` -s https://<ip>:8443 --certificate-authority /root/.minikube/ca.crt --all-namespaces
Or without TLS verification:
kubectl get pods --token `cat /home/natan/token` -s https://<ip>:8443 --insecure-skip-tls-verify --all-namespaces
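The same pattern works on any cluster; the only cluster-specific pieces are the server address and the CA file. As a hedged sketch, on a kubeadm-provisioned cluster (like the playground above) the CA usually sits at /etc/kubernetes/pki/ca.crt:
# same idea as Edit 2, with the kubeadm default CA location instead of minikube's
kubectl get pods --all-namespaces \
  --server https://<master_ip>:6443 \
  --certificate-authority /etc/kubernetes/pki/ca.crt \
  --token "$TOKEN"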

This is tricky: if you are using a client certificate to authenticate to the Kubernetes API server, overriding the token with kubectl is not going to work, because certificate authentication happens early in the process, during the TLS handshake. Even if you provide a token to kubectl, it will be ignored. That is why you are still able to get secrets: the client certificate has permission to get secrets and the token is ignored.
So if you want kubectl to use the token, the kubeconfig file should not contain a client certificate; then you can override the token with the --token flag in kubectl. See the discussion in the question on how to create a kubeconfig file for a service account token.
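A minimal sketch of such a certificate-free kubeconfig, kept in a separate file so the admin config stays untouched (the file name sa1.kubeconfig and the CA path are illustrative; reuse the $TOKEN extracted above):
# standalone kubeconfig containing only the cluster CA and the SA token, no client certificate
KUBECONFIG=sa1.kubeconfig kubectl config set-cluster kubernetes \
  --server=https://<master_ip>:6443 \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --embed-certs=true
KUBECONFIG=sa1.kubeconfig kubectl config set-credentials sa1 --token="$TOKEN"
KUBECONFIG=sa1.kubeconfig kubectl config set-context sa1-context --cluster=kubernetes --user=sa1
KUBECONFIG=sa1.kubeconfig kubectl config use-context sa1-context
# with no client certificate in play, RBAC now applies to the token
KUBECONFIG=sa1.kubeconfig kubectl get pods      # allowed by list-pod-clusterrole
KUBECONFIG=sa1.kubeconfig kubectl get secrets   # should now be Forbidden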
You can also view the bearer token being sent by kubectl with the following command:
kubectl get pods --v=10 2>&1 | grep -i bearer

The following command will help you verify what a service account is authorized to do.
kubectl auth can-i <verb> <resources> --as=system:serviceaccount:<namespace>:<service account name>
kubectl auth can-i get pods --as=system:serviceaccount:default:default
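Applied to the service account from the question, this should report yes for listing pods and no for reading secrets (assuming the clusterrole and binding above were created):
kubectl auth can-i list pods --as=system:serviceaccount:default:sa1     # yes
kubectl auth can-i get secrets --as=system:serviceaccount:default:sa1   # no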

Related

How to access kubernetes cluster using masterurl

I am trying to connect to a Kubernetes cluster using the master URL. However, I encounter an error when attempting the following command.
Command: config, ConfigErr := clientcmd.BuildConfigFromFlags("https://192.168.99.100:8443", "")
Error: Get "https://192.168.99.100:8443/api/v1/namespaces": x509: certificate signed by unknown authority
Has anyone else encountered this and/or know how to solve this error?
Get the kube-apiserver endpoint by describing the service
kubectl describe svc kubernetes
This will list the endpoint for your APIServer like this:
Endpoints: 172.17.0.6:6443
Get the token to access the APIServer like this:
TOKEN=$(kubectl get secret $(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode )
Query the APIServer with the retrieved token:
curl -v -k https://172.17.0.6:6443/api/v1/nodes --header "Authorization: Bearer $TOKEN"
config, ConfigErr := clientcmd.BuildConfigFromFlags(masterurl, "")
config.BearerToken = token // authenticate with the service account token
config.Insecure = true     // skip TLS verification, like curl -k
Use this code to make it work; it worked for me.
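If you would rather not skip TLS verification at all, the same request can be made against the cluster CA; a sketch, assuming it runs inside a pod where the token and CA are mounted at the default service account paths:
# inside a pod, the SA token and cluster CA are mounted here by default
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat "$SA_DIR/token")
curl --cacert "$SA_DIR/ca.crt" \
  --header "Authorization: Bearer $TOKEN" \
  https://172.17.0.6:6443/api/v1/nodes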

Configure Client commands from command line

In IBM Cloud Private EE, I need to go to the Web UI User > Configure client, copy the kubectl config commands and then run these 5 commands on my client machine.
I deployed the IBM Cloud private EE on 5 VMs and have access to the master node. I am wondering if there is a way to capture these kubectl config commands directly from the docker containers without having a need to go to the Web UI.
For example: I did not want to download the kubectl client from Google (as I just want to use the same kubectl version that is in the ICP containers), so I used the following command to get it from the container itself.
docker run --rm -v $(pwd):/data -e LICENSE=accept \
ibmcom/icp-inception:2.1.0.1-ee \
cp -r /usr/local/bin/kubectl /data
Then, I copied this to all VM guests so that I could access kubectl from any guest.
chmod +x kubectl
for host in $(awk '/192.168.142/ {print $3}' /etc/hosts)
do
scp kubectl $host:/bin
done
where 192.168.142 is the subnet of my VM guests.
But I could not figure out how to get the Configure Client commands without having to go to the Web UI. I need this to automate the client kubectl configuration so that my environment is ready for kubectl commands through simple scripts.
You should use Vagrant to automate those steps.
For instance, IBM/deploy-ibm-cloud-private/Vagrantfile has this section:
install_kubectl = <<SCRIPT
echo "Pulling #{image_repo}/kubernetes:v#{k8s_version}..."
sudo docker run -e LICENSE=#{license} --net=host -v /usr/local/bin:/data #{image_repo}/kubernetes:v#{k8s_version} cp /kubectl /data &> /dev/null
kubectl config set-credentials icpadmin --username=admin --password=admin &> /dev/null
kubectl config set-cluster icp --server=http://127.0.0.1:8888 --insecure-skip-tls-verify=true &> /dev/null
kubectl config set-context icp --cluster=icp --user=admin --namespace=default &> /dev/null
kubectl config use-context icp &> /dev/null
SCRIPT
See more at "Kubernetes, IBM Cloud Private, and Vagrant, oh my!", from Tim Pouyer.
#VonC provided useful tips. This is how the service account token can be obtained.
Get the token from a running container - Tip from this link.
RUNNINGCONTAINER=$(docker ps | grep k8s_cloudiam-apikeys_auth | awk '{print $1}')
TOKEN=$(docker exec -t $RUNNINGCONTAINER cat /var/run/secrets/kubernetes.io/serviceaccount/token)
I already knew the name of the IBM Cloud Private cluster, the master node and the default user name. The only missing link was the token. Please note that the script used by Tim uses a password; the only difference is that I wanted to use a token instead of the password.
So use these scripts:
kubectl config set-cluster ${CLUSTERNAME}.icp --server=https://$MASTERNODE:8001 --insecure-skip-tls-verify=true
kubectl config set-context ${CLUSTERNAME}.icp-context --cluster=${CLUSTERNAME}.icp
kubectl config set-credentials admin --token=$TOKEN
kubectl config set-context ${CLUSTERNAME}.icp-context --user=$DEFAULTUSERNAME --namespace=default
kubectl config use-context ${CLUSTERNAME}.icp-context
# get token
icp_auth_token=`curl -s -k -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" \
-d "grant_type=password&username=${myuser}&password=${mypass}&scope=openid" \
https://${icp_server}:8443/idprovider/v1/auth/identitytoken --insecure | \
sed 's/{//g;s/}//g;s/\"//g' | \
awk -F ':' '{print $7}'`
# setup context
kubectl config set-cluster ${icp_server} --server=https://${icp_server}:8001 --insecure-skip-tls-verify=true
kubectl config set-credentials ${icp_server}-user --token=${icp_auth_token}
kubectl config set-context ${icp_server}-context --cluster=${icp_server} --user=${icp_server}-user
kubectl config use-context ${icp_server}-context
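Once the context is active, a quick read-only call confirms the token works (any list request will do):
# sanity check against the new context; should list nodes if the token is valid
kubectl --context=${icp_server}-context get nodes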

After adding a service account, it obtains all permissions by default

Upon creating a service account, it seems to be getting access to all resources by default (as if it gets a copy of all my permissions). This is on GKE.
Are Service Accounts supposed to have default access to resources (upon SA creation), or am I missing something?
As per bitnami guide, service account by default will not have access to any resource until it is assigned Roles/ClusterRoles via respective bindings.
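(For reference, granting such access explicitly would look roughly like the following; a sketch using the eugene-test account from the script below, with illustrative role names.)
# illustrative: after this, the SA can only list pods in the default namespace
kubectl create role pod-lister --verb=list --resource=pods -n default
kubectl create rolebinding eugene-test-pod-lister \
  --role=pod-lister \
  --serviceaccount=default:eugene-test \
  -n default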
This is a simple bash script I'm running to depict the issue I'm seeing.
original_context=ehealth-dev
kubectl create sa eugene-test --context $original_context
sa_secret=$(kubectl get sa eugene-test --context $original_context -o json | jq -r .secrets[].name)
kubectl get secret --context $original_context $sa_secret -o json | jq -r '.data["ca.crt"]' | base64 -D > /tmp/my_ca.crt
user_token=$(kubectl get secret --context $original_context $sa_secret -o json | jq -r '.data["token"]' | base64 -D)
original_cluster_name=my_long_cluster_name
endpoint=`kubectl config view -o jsonpath="{.clusters[?(@.name == \"$original_cluster_name\")].cluster.server}"`
kubectl config set-credentials my_user --token=$user_token
kubectl config set-cluster my_cluster \
--embed-certs=true \
--server=$endpoint \
--certificate-authority=/tmp/my_ca.crt
kubectl config set-context my_context \
--cluster=my_cluster \
--user=my_user \
--namespace=default
kubectl config use-context my_context
kubectl get pods -n my_namespace # ------ it works! :-(
kubectl delete sa eugene-test --context $original_context
kubectl config delete-cluster my_cluster
Early versions of GKE enabled static authorization that gave all service accounts full API permissions. That is no longer the default as of 1.8.
Clusters on versions prior to 1.8 can disable this permissive behavior with the --no-enable-legacy-authorization flag to gcloud.
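On an affected cluster that would look roughly like this (the cluster name is illustrative, and the same flag exists on clusters create):
# turn off the legacy permissive authorization so only RBAC grants apply
gcloud container clusters update my-cluster --no-enable-legacy-authorization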

Running dashboard inside play-with-kubernetes

I'm trying to start a dashboard inside play-with-kubernetes
Commands I'm running:
start admin node
kubeadm init --apiserver-advertise-address $(hostname -i)
start network
kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
allow pods to be scheduled on the master
kubectl taint nodes --all node-role.kubernetes.io/master-
Wait until DNS is up
kubectl get pods --all-namespaces
join node (copy from admin startup, not from here)
kubeadm join --token 43d52c.d72308004d523ac4 10.0.21.3:6443
download and run dashboard
curl -L -s https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml | sed 's/targetPort: 8443/targetPort: 8443\n type: NodePort/' | \
kubectl apply -f -
Unfortunately the dashboard is not available.
What should I do to correctly deploy it inside play-with-kubernetes?
You need Heapster for the dashboard to work, so execute these as well:
kubectl apply -f https://github.com/kubernetes/heapster/raw/master/deploy/kube-config/rbac/heapster-rbac.yaml
kubectl apply -f https://github.com/kubernetes/heapster/raw/master/deploy/kube-config/influxdb/heapster.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
Also, unless you want to fiddle with authentication, you need to grant the dashboard admin privileges with something like this:
kubectl create clusterrolebinding insecure-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
Eventually a port link will appear (30xxx), but you will need to change the URL scheme from http to https, and convince your browser that you don't care about the insecure certificate.
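If the port link does not show up on its own, the assigned NodePort can be read straight from the service (a sketch, assuming the recommended manifest created it as kubernetes-dashboard in kube-system):
# prints the NodePort assigned to the dashboard service, e.g. 30xxx
kubectl -n kube-system get svc kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'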
You should have a working dashboard now. Piece of cake ;)

can Kubectl remember me?

I have implemented basic authentication on my Kubernetes api-server, and now I am trying to configure my ~/.kube/config file so that I can simply run kubectl get pods
kubectl config set-cluster digitalocean \
--server=https://SERVER:6443 \
--insecure-skip-tls-verify=true \
--api-version="v1"
kubectl config set-context digitalocean --cluster=digitalocean --user=admin
kubectl config set-credentials admin --password="PASSWORD"
kubectl config use-context digitalocean
But now it asks for credentials twice, like:
dev#desktop: ~/code/go/src/bitbucket.org/cescoferraro
$ kubectl get pods
Please enter Username: admin
enter Password: PASSWORD
Please enter Username: admin
Please enter Password: PASSWORD
NAME READY STATUS RESTARTS AGE
or I need to pass the flags like
kubectl get pods --username=admin --password=PASSWORD
is this the default behavior? I want my config to know me. What can I do?
Can you provide the output of kubectl config view? I think the problem might be that you need to do something like
kubectl config set-credentials cluster-admin --username=admin --password=PASSWORD
instead of
kubectl config set-credentials admin --password="PASSWORD".
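In addition, make sure the context points at the credentials entry you populated; a sketch with the names from the question and the answer above:
# the --user here must match the name given to set-credentials
kubectl config set-credentials cluster-admin --username=admin --password="PASSWORD"
kubectl config set-context digitalocean --cluster=digitalocean --user=cluster-admin
kubectl config use-context digitalocean
kubectl get pods   # should no longer prompt for username/password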