Has anyone run into the need to call kubectl commands from a worker node?
What I'm trying to do in this example is access the master from the node and retrieve some information.
bafontainha ~ kubectl get pods --server https://localhost:6443 --insecure-skip-tls-verify=true
Please enter Username: service-account
Please enter Password: error: the server doesn't have a resource type "pods"
kubectl needs either a bearer token passed inline or a kubeconfig file with a token or a certificate to authenticate the client to the Kubernetes API server.
Passing --insecure-skip-tls-verify=true only tells kubectl not to verify the API server endpoint's authenticity; it does not mean you can call the Kubernetes API without a valid bearer token or client certificate, which is what actually proves the identity of the client calling the API server.
I think the easiest option would be to just copy a working kubeconfig file to ~/.kube/config on the node VM and run kubectl get pods.
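A minimal sketch, assuming a kubeadm-style cluster where the admin kubeconfig lives at /etc/kubernetes/admin.conf on the control-plane node (the host name and path are placeholders, adjust them to your setup):
# copy a working kubeconfig from the control-plane node to the worker node
mkdir -p ~/.kube
scp <control-plane-host>:/etc/kubernetes/admin.conf ~/.kube/config
kubectl get pods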
Related
I have a pod that exposes a port (Server). Other pods (Clients) can communicate with it.
The server can find remote IP and port on a socket (when the client connects to it). I am looking for a way to get the client's pod name (from its IP and port).
I saw a bunch of questions/answers about getting pod names via kubectl. However, I am not sure whether I can do kubectl from within a cluster itself.
I am trying to figure out what is available for something running on the cluster. It's ok if it requires some special privileges. It's more complicated if it requires authentication.
List all the Pods with the List Pods API operation and parse the JSON response for the podIP field (e.g. with jq or some other JSON parsing tool) to find the JSON object of the Pod that has your desired IP address. Then, extract the metadata.name field from this JSON object to get the name of the Pod.
You can do this by either directly using the Kubernetes API (e.g. with curl) or with kubectl (e.g. kubectl get pods -o json | jq ...). In any case, you must include in this request the ServiceAccount token of the ServiceAccount used by the Pod from which you are issuing the request (if you use the Kubernetes API directly, as a Bearer token in the Authorization header, and if you use kubectl with the --token command-line flag).
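As an illustration, a hedged sketch run from inside the requesting Pod, assuming the default in-cluster token path, the default namespace, and a placeholder Pod IP of 10.42.0.17:
# read the ServiceAccount token mounted into the Pod
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# list Pods in the namespace and print the name of the Pod with the given IP
curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  "https://kubernetes.default.svc/api/v1/namespaces/default/pods" \
  | jq -r '.items[] | select(.status.podIP == "10.42.0.17") | .metadata.name'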
Regarding authorisation, you need a Role allowing the list verb on the pods resource and a RoleBinding that binds this Role to the ServiceAccount that your Pod is using (by default, Pods use a ServiceAccount named default in their namespace, but you can specify a custom ServiceAccount with the serviceAccountName field of the Pod).
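A minimal sketch with kubectl, assuming the default namespace and the default ServiceAccount (adjust the names to your setup):
# allow listing Pods in the default namespace
kubectl create role pod-lister --verb=list --resource=pods -n default
# bind the Role to the Pod's ServiceAccount
kubectl create rolebinding pod-lister-binding --role=pod-lister --serviceaccount=default:default -n default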
Whatever can be done via kubectl can also be done via direct requests to the API server. All you need is a proper ServiceAccount set up for your Pod. Once you have it, you can use plenty of client libraries dedicated to communicating with the Kubernetes API server.
I'm trying to run kubectl from Python inside a Pod in order to get the list of nodes.
How should I set up a NetworkPolicy for this Pod?
I tried to allow traffic from my namespace to the kube-system namespace, but it did not work.
Thanks.
As per Accessing the API from a Pod:
The recommended way to locate the apiserver within the pod is with the
kubernetes.default.svc DNS name, which resolves to a Service IP which
in turn will be routed to an apiserver.
The recommended way to authenticate to the apiserver is with a service
account credential. By default, a pod is associated with a service account, and a credential (token) for that service account is placed into the filesystem tree of each container in that pod, at /var/run/secrets/kubernetes.io/serviceaccount/token.
All you need is a service account with enough privileges; then use the API server DNS name as stated above. Example:
# Export the token value
export TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Use kubectl to talk internally with the API server
kubectl --insecure-skip-tls-verify=true \
--server="https://kubernetes.default.svc:443" \
--token="${TOKEN}" \
get nodes
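As a hedged alternative, you can verify TLS with the CA bundle that is mounted into every Pod next to the token, instead of skipping verification:
kubectl --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  --server="https://kubernetes.default.svc:443" \
  --token="${TOKEN}" \
  get nodes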
A NetworkPolicy may be restrictive and prevent this type of call; by default, however, the above should work.
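If a policy does get in the way, the following is a hedged sketch of an egress rule; the Pod label and the CIDR are assumptions and must be replaced with your Pod's labels and your cluster's API server endpoint address:
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-apiserver-egress
spec:
  podSelector:
    matchLabels:
      app: my-app            # assumed label of the Pod calling the API server
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.96.0.1/32   # placeholder: your API server / kubernetes Service address
      ports:
        - protocol: TCP
          port: 443
        - protocol: TCP
          port: 6443
EOF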
Using minikube, when running the following command:
kubectl -v=11 --kubeconfig /dev/null --insecure-skip-tls-verify -s http://localhost:8001 --token "invalid" -n namespace get pods
I get a response when I don't expect one, and I don't know how the request was authorized. Moreover, if I use a valid token with specific rights, those rights are not applied.
The question "kubectl --token=$TOKEN doesn't run with the permissions of the token" doesn't answer my question, because I explicitly used /dev/null as the config file.
Any idea?
I will try to summarize the answer I provided in the comments.
The question was: why does running kubectl -s http://localhost:8001 --kubeconfig /dev/null --token <invalid_token> (where :8001 is a port opened by kubectl proxy) respond as if I were authorized, when it shouldn't, because I set every authorization option to null or an invalid value?
The answer is that kubectl proxy opens a port and handles all authorization for you, so you don't have to. To access the Kubernetes REST API you then only need curl localhost:8001/...; no tokens or certificates are needed.
Because you are already authorized through kubectl proxy, pointing kubectl at localhost:8001 means it does not need to authorize again, and you won't need any token to access the cluster.
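For example, once the proxy is running locally, plain curl reaches the API with no credentials at all:
# start the proxy; it authenticates to the cluster using your kubeconfig
kubectl proxy --port=8001 &
# the proxied endpoint needs no token or certificate
curl -s http://localhost:8001/api/v1/namespaces/default/pods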
As an alternative, you can check what happens when you run the same command but, instead of going through kubectl proxy, connect to the Kubernetes API port directly.
You mentioned that you are using minikube, so by default that would be port 8443:
$ kubectl --kubeconfig /dev/null -s https://$(minikube ip):8443 --token "invalid" --insecure-skip-tls-verify get pods
error: You must be logged in to the server (Unauthorized)
As you can see, it now works as expected.
I'm trying to enable RBAC on my cluster and I added the following lines to kube-apiserver.yml:
- --authorization-mode=RBAC
- --runtime-config=rbac.authorization.k8s.io/v1beta1
- --authorization-rbac-super-user=admin
and then ran systemctl restart kubelet;
the apiserver starts successfully, but I'm not able to run kubectl commands and I get this error:
kubectl get po
Error from server (Forbidden): pods is forbidden: User "kubectl" cannot list pods in the namespace "default"
Where am I going wrong? Or should I create some roles for the kubectl user? If so, how is that possible?
Error from server (Forbidden): pods is forbidden: User "kubectl" cannot list pods in the namespace "default"
You are using the user kubectl to access the cluster with the kubectl utility, but you set --authorization-rbac-super-user=admin, which means your superuser is admin.
To fix the issue, launch kube-apiserver with superuser "kubectl" instead of "admin."
Just update the value of the option: --authorization-rbac-super-user=kubectl.
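The relevant lines in kube-apiserver.yml would then look like this (a sketch of your snippet with only the superuser value changed):
- --authorization-mode=RBAC
- --runtime-config=rbac.authorization.k8s.io/v1beta1
- --authorization-rbac-super-user=kubectl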
Old question, but for Google searchers: you can use the insecure port.
If your API server runs with the insecure port enabled (--insecure-port), you can also make API calls via that port, which does not enforce authentication or authorization.
Source: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#privilege-escalation-prevention-and-bootstrapping
So add --insecure-port=8080 to your kube-apiserver options and then restart it.
Then run:
kubectl create clusterrolebinding kubectl-cluster-admin-binding --clusterrole=cluster-admin --user=kubectl
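Since RBAC would otherwise deny this request, point kubectl at the insecure port for this bootstrap step, for example (assuming it listens on localhost:8080):
kubectl --server=http://localhost:8080 create clusterrolebinding kubectl-cluster-admin-binding \
  --clusterrole=cluster-admin --user=kubectl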
Then turn the insecure-port off.
I am trying to access my Kubernetes cluster on Google Cloud with a service account, but I am not able to make this work. I have a running system with some pods and an ingress. I want to be able to update the images of deployments.
I would like to use something like this (remotely):
kubectl config set-cluster cluster --server="<IP>" --insecure-skip-tls-verify=true
kubectl config set-credentials foo --token="<TOKEN>"
kubectl config set-context my-context --cluster=cluster --user=foo --namespace=default
kubectl config use-context my-context
kubectl set image deployment/my-deployment boo=eu.gcr.io/project-123456/image:v1
So I created the service account and then got the secret token:
kubectl create serviceaccount foo
kubectl get secret foo-token-gqvgn -o yaml
But, when I try to update the image in any deployment, I receive:
error: You must be logged in to the server (Unauthorized)
As the API IP address, I used the address shown in the GKE administration console as the cluster endpoint IP.
Any suggestions? Thanks.
I have tried to recreate your problem.
Steps I have followed:
kubectl create serviceaccount foo
kubectl get secret foo-token-* -o yaml
Then I tried to do what you have done.
As the token, I used the base64-decoded value of the token field from that Secret.
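For example, the token can be extracted and decoded in one step (assuming GNU base64):
TOKEN=$(kubectl get secret foo-token-gqvgn -o jsonpath='{.data.token}' | base64 --decode)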
Then I tried this:
$ kubectl get pods
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:foo" cannot list pods in the namespace "default": Unknown user "system:serviceaccount:default:foo"
This gave an error, as expected, because I need to grant permissions to this ServiceAccount.
How can I grant permissions to this ServiceAccount? By creating a ClusterRole and a ClusterRoleBinding with the necessary permissions.
Read the role-based access control documentation to learn more.
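A minimal sketch, assuming the images are updated via Deployments and the ServiceAccount is foo in the default namespace (the names are placeholders):
# allow reading and updating Deployments cluster-wide
kubectl create clusterrole deployment-editor --verb=get,list,patch,update --resource=deployments
# bind the ClusterRole to the foo ServiceAccount
kubectl create clusterrolebinding foo-deployment-editor --clusterrole=deployment-editor --serviceaccount=default:foo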
Alternatively, you can do another thing:
$ kubectl config set-credentials foo --username="admin" --password="$PASSWORD"
This will grant you admin authorization.
You need to provide the cluster credentials.
Username: admin
Password: -----
You will find this info in GKE -> Kubernetes Engine -> {cluster} -> Show credentials.