I just set up a Kubernetes cluster based on this link https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#multi-platform
I checked with kubectl get nodes and the master node is Ready, but when I access https://k8s-master-ip:6443/
it shows the error: User "system:anonymous" cannot get path "/".
What is the trick I am missing?
You are probably seeing something like this:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
This is good, as not everyone should be able to access the cluster. If you want to see the services, run kubectl proxy; this proxies authenticated requests from your local machine to the API server.
C:\dev1> kubectl proxy
Starting to serve on 127.0.0.1:8001
And when you hit 127.0.0.1:8001 you should see the list of API paths.
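With the proxy running, you can also query specific API paths from another terminal; the namespace and resource in this example are only illustrative:
# List the services in the default namespace through the local proxy
curl http://127.0.0.1:8001/api/v1/namespaces/default/services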
The latest Kubernetes deployment tools enable RBAC on the cluster by default. Jenkins is relegated to the catch-all user system:anonymous when it accesses https://192.168.70.94:6443/api/v1/.... This user has almost no privileges on the kube-apiserver.
The bottom line is, Jenkins needs to authenticate with the kube-apiserver, either with a bearer token or a client cert that's signed by the k8s cluster's CA key.
Method 1. This is preferred if Jenkins is hosted in the k8s cluster:
Create a ServiceAccount in k8s for the plugin
Create an RBAC profile (i.e. Role/RoleBinding or ClusterRole/ClusterRoleBinding) that's tied to the ServiceAccount
Configure the plugin to use the ServiceAccount's token when accessing the URL https://192.168.70.94:6443/api/v1/... (see the sketch after this list)
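A minimal sketch of those three steps, assuming the plugin runs in the default namespace and read access to pods is enough (the names and permissions are illustrative, not from the plugin's docs):
# ServiceAccount for the plugin
kubectl create serviceaccount jenkins
# Read-only ClusterRole for pods, bound to the ServiceAccount
kubectl create clusterrole jenkins-read --verb=get,list,watch --resource=pods
kubectl create clusterrolebinding jenkins-read --clusterrole=jenkins-read --serviceaccount=default:jenkins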
Method 2. If Jenkins is hosted outside the k8s cluster, the steps above can still be used. The alternative is to:
Create a client cert that's signed by the k8s cluster's CA. You have to find where the CA key is kept and use it to generate the client cert
Create an RBAC profile (i.e. Role/RoleBinding or ClusterRole/ClusterRoleBinding) that's tied to the client cert
Configure the plugin to use the client cert when accessing the URL https://192.168.70.94:6443/api/v1/... (a cert-generation sketch follows this list)
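A sketch of generating such a client cert with openssl, assuming a kubeadm cluster where the CA pair is kept at /etc/kubernetes/pki/ (the CN becomes the username and the O the group; both values here are illustrative):
# Key and CSR for a hypothetical "jenkins" user in group "ci"
openssl genrsa -out jenkins.key 2048
openssl req -new -key jenkins.key -subj "/CN=jenkins/O=ci" -out jenkins.csr
# Sign the CSR with the cluster CA (path is the kubeadm default; adjust for your distro)
openssl x509 -req -in jenkins.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out jenkins.crt -days 365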
Both methods work in any situation. I believe Method 1 will be simpler for you because you don't have to mess around with the CA key.
By default, the user system:anonymous has almost no permissions, which blocks anonymous access to the cluster.
Executing the following command binds the cluster-admin ClusterRole to system:anonymous, which will give you the required access. Be aware that this grants full admin rights to unauthenticated users, so only do it on a cluster where that is acceptable.
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
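After creating the binding, the same anonymous request that returned 403 before should now succeed (-k skips TLS verification for brevity):
curl -k https://k8s-master-ip:6443/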
Related
I have a best-practice question:
I have a Django application running somewhere (non-k8s) where my end-user accounts are tracked. Separately, I have a k8s cluster that has a service for every user. When a new user signs up in Django, a new service should be created in the cluster.
What is the best practice for doing this? Two options I see are:
Have a long-lived service in the cluster, something like user-pod-creator, which exposes an API to the Django side, allowing it to ask for a pod to be created.
Give the Django app permissions to use the cluster's API directly; have it create (and delete) pods as it wishes.
Intuitively I prefer the first because of the separation of concerns it creates and for security reasons. But the second would give the Django app a lot of flexibility: it could not only create and delete pods, but also gain more visibility into the cluster if need be via direct API calls, instead of me having to expose new API endpoints in user-pod-creator or some other service.
Option 2 is a valid approach and can be solved with a service account.
Create a ServiceAccount for your Django app:
kubectl create serviceaccount django
This ServiceAccount points to a Secret, and this Secret contains a token.
Find out the Secret associated with the ServiceAccount:
kubectl get serviceaccount django -o yaml
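On clusters where token Secrets are auto-created for ServiceAccounts, the output lists the Secret name (with a generated suffix) under the secrets field, e.g.:
secrets:
- name: django-token-d2tz4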
Get the token from the Secret and base64-decode it (the data fields of a Secret are base64-encoded):
kubectl get secret django-token-d2tz4 -o jsonpath='{.data.token}' | base64 --decode
Now you can use this token as an HTTP bearer token in the Kubernetes API requests from your Django app outside the cluster.
That is, include the token in the HTTP Authorization header of the Kubernetes API requests like this:
Authorization: Bearer <TOKEN>
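For example, with curl (the API server address and the pod listing are illustrative; -k skips TLS verification for brevity):
curl -k -H "Authorization: Bearer <TOKEN>" https://<k8s-api-server>:6443/api/v1/namespaces/default/pods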
In this way, your request passes the authentication stage in the API server. However, the service account has no permissions yet (authorisation).
You can assign the required permissions to your service account with Roles and RoleBindings:
kubectl create role django --verb <...> --resource <...>
kubectl create rolebinding django --role django --serviceaccount default:django
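For example, to let the app manage Pods in the default namespace (the verbs and resources here are illustrative; grant only what the app actually needs):
kubectl create role django --verb=get,list,create,delete --resource=pods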
Regarding security: grant only the minimum permissions needed to the service account (principle of least privilege), and if you think someone stole the service account token, you can just delete the service account with kubectl delete serviceaccount django to invalidate the token.
See also here for an example of the presented approach. Especially:
Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API.
I am creating a Kubernetes cluster with kubeadm, and I have done this literally maybe 100 times, yet now I am getting permission issues from the very beginning.
The context:
So, I first tried with k8s 1.15.1, and I got the following error when I tried installing the pod network (a bunch of them; one for each object):
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
Name: "calico-kube-controllers", Namespace: "kube-system"
Object: &{map["apiVersion":"v1" "kind":"ServiceAccount" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "name":"calico-kube-controllers" "namespace":"kube-system"]]}
from server for: "https://docs.projectcalico.org/v3.8/manifests/calico.yaml": serviceaccounts "calico-kube-controllers" is forbidden: User "system:node:master" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system": can only create tokens for individual service accounts
I noticed the user is different (why is my master node set as the user in the config file?):
system:node:master
And this user has no permissions for almost anything:
root@master:~# kubectl auth can-i create deploy
no
I wanted to configure the user, but I hadn't kept the kubeadm token, and when I try to get the token I get permission errors too.
root@master:~# kubeadm token list
failed to list bootstrap tokens: secrets is forbidden: User "system:node:master" cannot list resource "secrets" in API group "" in the namespace "kube-system": No Object name found
So, I kept trying; the same scenario for 1.14.4. No permissions for anything.
Then I tried the last version I had already used before, which is 1.14.3, and it worked as expected. The user is kubernetes-admin and has permissions for everything:
root@master:~$ kubectl auth can-i create clusterrolebinding
yes
I wanted to check the release notes, but there is not much information, or I don't know how to interpret it. Does anyone have any information about what the changes are, or what I am doing wrong?
I am currently configuring Heketi Server (deployed on K8S cluster ClusterA) to interact with my GlusterFS cluster that is deployed as a DaemonSet on another K8S cluster, ClusterB.
One of the configurations required by Heketi to connect to the GlusterFS K8S cluster is:
"kubeexec": {
"host" :"https://<URL-OF-CLUSTER-WITH-GLUSTERFS>:6443",
"cert" : "<CERTIFICATE-OF-CLUSTER-WITH-GLUSTERFS>",
"insecure": false,
"user": "WHERE_DO_I_GET_THIS_FROM",
"password": "<WHERE_DO_I_GET_THIS_FROM>",
"namespace": "default",
"backup_lvm_metadata": false
},
As you can see, it requires a user and password. I have no idea where to get those from.
One thing that comes to mind is creating a service account on ClusterB and using its token to authenticate, but Heketi does not seem to accept that as an authentication mechanism.
The cert is something that I got from /usr/local/share/ca-certificates/kube-ca.crt but I have no idea where to get the user/password from. Any idea what could be done?
If I do a kubectl config view I only see certificates for the admin user of my cluster.
That could only mean one thing: basic HTTP auth.
You can specify a username/password in a file when you start the kube-apiserver with the --basic-auth-file=SOMEFILE option.
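The file is a static password CSV where each line is password,user,uid, with an optional group column; the path and values below are illustrative. Note that --basic-auth-file has been deprecated and removed in newer Kubernetes versions:
# /etc/kubernetes/basic-auth.csv (hypothetical path)
mypassword,heketi-user,1001
# passed to the API server at startup
kube-apiserver ... --basic-auth-file=/etc/kubernetes/basic-auth.csv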
I am getting the following error while accessing the app deployed on Azure Kubernetes Service:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
I have followed all steps as given here https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-prepare-app
I know that this is something to do with authentication and RBAC, but I don't know what exactly is wrong and where I should make changes.
Just follow the steps in the link you posted and you will finish it successfully. The purpose of each step is:
Create the image and make sure it works without any error.
Create an Azure Container Registry and push the image into the registry.
Create a Service Principal so that AKS can pull the image from the registry.
Change the yaml file to pull the image from the Azure Registry, then create pods on the AKS nodes.
You just need these four steps to run the application on AKS. Then get the IP address through the command kubectl get service azure-vote-front --watch as in step 4. If you cannot access the application, check your steps carefully again.
Also, you can check the status of all the pods through the command kubectl describe pods, or of a single pod with kubectl describe pod podName.
Update
I tested with the image you provided, and it runs. From the service information you can see which port you should use to browse.
I've got a username and password, how do I authenticate kubectl with them?
Which command do I run?
I've read through https://kubernetes.io/docs/reference/access-authn-authz/authorization/ and https://kubernetes.io/docs/reference/access-authn-authz/authentication/ but cannot find any relevant information there for this case.
kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_config_set-credentials/
The above does not seem to work:
kubectl get pods
Error from server (Forbidden): pods is forbidden: User "client" cannot list pods in the namespace "default": Unknown user "client"
Kubernetes provides a number of different authentication mechanisms. Providing a username and password directly to the cluster (as opposed to using an OIDC provider) would indicate that you're using Basic authentication, which hasn't been the default option for a number of releases.
The syntax you've listed appears right, assuming that the cluster supports basic authentication.
The error you're seeing is similar to the one here which may suggest that the cluster you're using doesn't currently support the authentication method you're using.
Additional information about what Kubernetes distribution and version you're using would make it easier to provide a better answer, as there is a lot of variety in how k8s handles authentication.
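Assuming your cluster does support basic auth, also note that set-credentials alone is not enough; the credentials have to be attached to the context kubectl uses (the cluster and context names below are illustrative):
kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
kubectl config set-context my-context --cluster=<your-cluster-name> --user=cluster-admin
kubectl config use-context my-context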
You should have a group set for the authenticating user.
Example:
password1,user1,userid1,system:masters
password2,user2,userid2
Reference:
"Use a credential with the system:masters group, which is bound to the cluster-admin super-user role by the default bindings."
https://kubernetes.io/docs/reference/access-authn-authz/rbac/