kubectl proxy incorrectly uses kube-system default user - kubernetes

I have a Kubernetes cluster (v1.8.4) set up on Google Kubernetes Engine. I have properly set up and configured kubectl on my local machine, and I can use this command to successfully create and delete assets on Kubernetes.
However, when I run kubectl proxy to view the admin dashboard, I get errors indicating the connection is made with some kube-system default user that doesn't exist:
configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps at the cluster scope: Unknown user "system:serviceaccount:kube-system:default"
To verify, I have run
gcloud container clusters get-credentials <cluster> --zone us-central1-f --project <project>
and my context is set correctly:
kubectl config get-contexts
CURRENT   NAME                                    CLUSTER                                 AUTHINFO                                NAMESPACE
*         gke_<project>_us-central1-f_<cluster>   gke_<project>_us-central1-f_<cluster>   gke_<project>_us-central1-f_<cluster>
          minikube                                minikube                                minikube
Shouldn't kubectl proxy use my google domain credentials when I access the dashboard?
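kubectl proxy does forward your own credentials, but the dashboard pod behind it talks to the apiserver as the service account its pod runs under (here system:serviceaccount:kube-system:default), not as your Google identity. Assuming RBAC is in effect on the cluster, one permissive workaround is to bind that service account to the built-in view ClusterRole; this is only a sketch, and the binding name is illustrative:

```yaml
# Hedged sketch: give the dashboard's service account cluster-wide
# read access. The metadata.name is a made-up, illustrative value.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-system-default-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view          # built-in read-only ClusterRole
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
```

Note this widens what anyone reaching the dashboard can see, so scope it down for anything beyond a test cluster.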

Related

kubectl get nodes from pod (NetworkPolicy)

I'm trying to run kubectl from Python inside a pod to get the cluster's nodes.
How should I set up a NetworkPolicy for this pod?
I tried allowing traffic from my namespace to the kube-system namespace, but it did not work.
Thanks.
As per Accessing the API from a Pod:
The recommended way to locate the apiserver within the pod is with the
kubernetes.default.svc DNS name, which resolves to a Service IP which
in turn will be routed to an apiserver.
The recommended way to authenticate to the apiserver is with a service
account credential. By default, a pod is associated with a service account, and a credential (token) for that service account is placed into the filesystem tree of each container in that pod, at /var/run/secrets/kubernetes.io/serviceaccount/token.
All you need is a service account with enough privileges; then use the apiserver DNS name as stated above. Example:
# Export the token value
export TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Use kubectl to talk to the API server from inside the pod.
# (Instead of skipping TLS verification, you can trust the in-pod CA:
#  --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt)
kubectl --insecure-skip-tls-verify=true \
  --server="https://kubernetes.default.svc:443" \
  --token="${TOKEN}" \
  get nodes
A NetworkPolicy may be restrictive and prevent this type of call; by default, however, the above should work.
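If egress policies are in play, the pod also needs to be allowed to reach the apiserver (and DNS, so it can resolve kubernetes.default.svc). A minimal sketch, with an illustrative namespace and labels:

```yaml
# Hedged sketch: allow the labelled pod egress on 443/TCP (apiserver)
# and 53/UDP (DNS). Namespace, name and labels are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-apiserver-egress
  namespace: my-namespace
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: TCP
      port: 443
    - protocol: UDP
      port: 53
```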

Kubernetes understanding output of - kubectl auth can-i

I'm trying to understand why on one cluster an operation is permitted, but on the other I'm getting the following error:
Exception encountered setting up namespace watch from Kubernetes API v1 endpoint https://10.100.0.1:443/api: namespaces is forbidden: User \"system:serviceaccount:kube-system:default\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope ({\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"namespaces is forbidden: User \\\"system:serviceaccount:kube-system:default\\\" cannot list resource \\\"namespaces\\\" in API group \\\"\\\" at the cluster scope\",\"reason\":\"Forbidden\",\"details\":{\"kind\":\"namespaces\"},\"code\":403}\n)"
I'm managing two Kubernetes clusters -
clusterA booted with Kops version v1.14.8
clusterB booted on AWS EKS version v1.14.9-eks-f459c0
So I've tried using the kubectl auth command to figure this out, and I do see that on one cluster I'm allowed while on the other I'm not, as you can see:
kubectl config use-context clusterA
Switched to context "clusterA".
kubectl auth can-i list pods --as=system:serviceaccount:kube-system:default -n kube-system
yes
kubectl config use-context clusterB
Switched to context "clusterB".
kubectl auth can-i list pods --as=system:serviceaccount:kube-system:default -n kube-system
no
Is there a way to understand what these yes/no decisions are based on?
Thanks for helping out!
The yes/no decision is based on whether there is a ClusterRole, together with a ClusterRoleBinding or RoleBinding, that permits the default service account in the kube-system namespace to perform the verb list on the resource namespaces.
The trick in the case of the namespaces resource is that there needs to be a ClusterRole rather than a Role, because namespaces are cluster-scoped resources.
You can check which ClusterRoles, Roles, ClusterRoleBindings, and RoleBindings exist in a Kubernetes cluster using the commands below:
kubectl get clusterrole,clusterrolebinding
kubectl get role,rolebinding -n namespacename
For more details, refer to the Kubernetes documentation on RBAC.
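For instance, the following pair (a sketch with illustrative names) is the kind of object that makes the can-i check above answer yes:

```yaml
# Hedged sketch: a cluster-scoped role allowing 'list' on namespaces,
# bound to kube-system's default service account. Names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-lister
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: namespace-lister-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: namespace-lister
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
```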

kubernetes service account permissions

When I create a service account in my docker-desktop Kubernetes environment on Windows 10 using
kubectl create serviceaccount test -n test-namespace
and then run the following command, it returns 'yes':
kubectl auth can-i create pods --all-namespaces --token <token from test service account>
but if I run the same on a cloud-managed Kubernetes cluster, it returns 'no'.
What is the difference between the setups? I'm trying to limit control on a local cluster.
Found the solution; this also applies to Windows: https://github.com/docker/for-mac/issues/3694#issuecomment-619474504
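For context, the linked issue describes Docker Desktop shipping a very permissive default binding, roughly of the shape below (reconstructed from the issue, so treat the exact name and subject as assumptions); removing or tightening it is what restores normal RBAC limits locally:

```yaml
# Approximate shape of Docker Desktop's permissive default binding
# (details reconstructed from the linked issue, not verified here).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: docker-for-desktop-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: Group
  name: system:serviceaccounts
  apiGroup: rbac.authorization.k8s.io
```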

Access Kubernetes API with kubectl failed after enabling RBAC

I'm trying to enable RBAC on my cluster, so I added the following lines to kube-apiserver.yml:
- --authorization-mode=RBAC
- --runtime-config=rbac.authorization.k8s.io/v1beta1
- --authorization-rbac-super-user=admin
and then ran systemctl restart kubelet.
The apiserver starts successfully, but I'm not able to run kubectl commands and I get this error:
kubectl get po
Error from server (Forbidden): pods is forbidden: User "kubectl" cannot list pods in the namespace "default"
Where am I going wrong? Or should I create some roles for the kubectl user? If so, how is that possible?
Error from server (Forbidden): pods is forbidden: User "kubectl" cannot list pods in the namespace "default"
You are using the user kubectl to access the cluster via the kubectl utility, but you set --authorization-rbac-super-user=admin, which means your super-user is admin.
To fix the issue, launch kube-apiserver with the super-user "kubectl" instead of "admin": just update the value of the option to --authorization-rbac-super-user=kubectl.
Old question, but for Google searchers: you can use the insecure port (note that insecure serving has been removed in recent Kubernetes releases, so this only applies to older clusters):
If your API server runs with the insecure port enabled (--insecure-port), you can also make API calls via that port, which does not enforce authentication or authorization.
Source: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping
So add --insecure-port=8080 to your kube-apiserver options and then restart it.
Then run:
kubectl create clusterrolebinding kubectl-cluster-admin-binding --clusterrole=cluster-admin --user=kubectl
Then turn the insecure-port off.

Login to GKE via service account with token

I am trying to access my Kubernetes cluster on Google Cloud with a service account, but I am not able to make this work. I have a running system with some pods and an ingress. I want to be able to update the images of deployments.
I would like to use something like this (remotely):
kubectl config set-cluster cluster --server="<IP>" --insecure-skip-tls-verify=true
kubectl config set-credentials foo --token="<TOKEN>"
kubectl config set-context my-context --cluster=cluster --user=foo --namespace=default
kubectl config use-context my-context
kubectl set image deployment/my-deployment boo=eu.gcr.io/project-123456/image:v1
So I created the service account and then get the secret token:
kubectl create serviceaccount foo
kubectl get secret foo-token-gqvgn -o yaml
But, when I try to update the image in any deployment, I receive:
error: You must be logged in to the server (Unauthorized)
As the IP address for the API, I use the address shown in the GKE console as the cluster endpoint IP.
Any suggestions? Thanks.
I have tried to recreate your problem.
Steps I have followed
kubectl create serviceaccount foo
kubectl get secret foo-token-* -o yaml
Then, I tried to do what you have done.
As the token, I used the base64-decoded value of the token field from the secret.
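The token field in the secret's YAML is base64-encoded, so it must be decoded before being passed to kubectl --token. A small round-trip sketch of just the decode step, with a placeholder value:

```shell
# The secret stores the token base64-encoded; decode before use.
SAMPLE_TOKEN="my-sa-token"                          # placeholder, not a real token
ENCODED=$(printf '%s' "$SAMPLE_TOKEN" | base64)     # as it appears in the secret
DECODED=$(printf '%s' "$ENCODED" | base64 --decode) # what kubectl --token needs
echo "$DECODED"
```

(On a real cluster, `kubectl get secret foo-token-* -o jsonpath='{.data.token}' | base64 --decode` achieves the same in one step.)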
Then I tried this:
$ kubectl get pods
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:foo" cannot list pods in the namespace "default": Unknown user "system:serviceaccount:default:foo"
This gave me an error, as expected, because I need to grant permissions to this ServiceAccount.
How can I grant permissions to this ServiceAccount? By creating a ClusterRole and ClusterRoleBinding with the necessary permissions.
Read more in the Kubernetes documentation on role-based access control.
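If broad cluster-wide rights are more than needed, a namespaced Role covering just kubectl set image is closer to least privilege. A sketch with illustrative names (the exact verb set required is an assumption worth checking with kubectl auth can-i):

```yaml
# Hedged sketch: allow the 'foo' service account to get and patch
# deployments in 'default' only. Names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-image-updater
  namespace: default
rules:
- apiGroups: ["apps", "extensions"]
  resources: ["deployments"]
  verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-image-updater-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-image-updater
subjects:
- kind: ServiceAccount
  name: foo
  namespace: default
```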
Alternatively, I can do another thing:
$ kubectl config set-credentials foo --username="admin" --password="$PASSWORD"
This will grant you admin authorization. You need to provide the cluster credentials:
Username: admin
Password: -----
You will find this info in GKE -> Kubernetes Engine -> {cluster} -> Show credentials