I have my Kubernetes cluster running in GKE, and I want to run an application outside the cluster and talk to the Kubernetes API.
Using the password retrieved from running:
gcloud container clusters get-credentials cluster-2 --log-http
I am able to access the API with Basic authentication.
But I want to create multiple Kubernetes service accounts, configure them with the required authorization, and use each one where appropriate.
So, I created service accounts and obtained the tokens using:
kubectl create serviceaccount sauser1
kubectl get serviceaccounts sauser1 -o yaml
kubectl get secret sauser1-token-<random-string-as-retrieved-from-previous-command> -o yaml
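(Note: the token field in that secret is base64-encoded and has to be decoded before it is used as a Bearer token; a sketch using the same placeholder secret name:)
kubectl get secret sauser1-token-<random-string-as-retrieved-from-previous-command> \
  -o jsonpath='{.data.token}' | base64 --decode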
If I try to access the Kubernetes API with the obtained token using Bearer authentication, I get a 401 HTTP error. I thought that some permissions might have to be set for the service account, so, based on the documentation here, I created the YAML file below:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: sauser1
  namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
and tried to apply it using the below command:
kubectl apply -f default-sa-rolebinding.yaml
I got the following error:
clusterrolebinding "pod-reader" created
Error from server (Forbidden): error when creating "default-sa-rolebinding.yaml":
clusterroles.rbac.authorization.k8s.io "pod-reader" is forbidden: attempt to grant extra privileges:
[PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["get"]}
PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]}
PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]}]
user=&{xyz@gmail.com [system:authenticated] map[authenticator:[GKE]]}
ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]}
PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}]
ruleResolutionErrors=[]
I don't know how to proceed from here. Is my approach correct, or am I missing something?
UPDATE: As per the post referred to by @JanosLenart in the comments, I modified the kubectl command and the above error is no longer observed. But accessing the API still gives a 401 error. The curl command that I am using is:
curl -k -1 -H "Authorization: Bearer <token>" https://<ip-address>/api/v1/namespaces/default/pods -v
@Janos pointed out the potential problem; however, I think you will need an actual Cloud IAM Service Account as well, because you said:
I want to run an application outside the cluster [...]
If you're authenticating to GKE from outside, I believe you can only use the Google IAM identities. (I might be wrong, if so, please let me know.)
In this case, what you need to do:
Create an IAM service account and download a JSON key file for it.
Set GOOGLE_APPLICATION_CREDENTIALS to that file.
Either:
use RBAC, as in your question, to give permissions to the email address of the IAM service account, or
use IAM roles to give the IAM service account access to the GKE API (e.g. the Container Developer role is usually sufficient).
Run kubectl against the cluster (make sure you have a .kube/config file with the cluster's IP and CA certificate beforehand) with the environment variable above, and it should work.
YMMV
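For completeness, a rough sketch of those steps with gcloud (the service-account name, project ID, and file paths below are placeholders, not values from the question):
# create the IAM service account and download a JSON key file for it
gcloud iam service-accounts create gke-api-client --display-name "gke-api-client"
gcloud iam service-accounts keys create key.json \
  --iam-account gke-api-client@<project-id>.iam.gserviceaccount.com

# point client libraries / kubectl's GCP auth plugin at the key
export GOOGLE_APPLICATION_CREDENTIALS=$PWD/key.json

# the IAM-role option from above: grant a GKE role instead of (or as well as) RBAC
gcloud projects add-iam-policy-binding <project-id> \
  --member serviceAccount:gke-api-client@<project-id>.iam.gserviceaccount.com \
  --role roles/container.developer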
I managed to get it to work without using an actual Cloud IAM service account.
First, I decided to use a shell inside GKE's k8s cluster by running:
kubectl run curl-random --image=radial/busyboxplus:curl -i --tty --rm
Second, I made sure I decoded my token by copying it and then running it through:
pbpaste | base64 -D
Third, I created the rolebinding for the serviceaccount, NOT the username.
kubectl create clusterrolebinding shaoserverless-cluster-admin-binding --clusterrole=cluster-admin --serviceaccount=default:shaoserverless
The third step was particularly tricky, but I got the inspiration from the error message, which used to be:
Unknown user \"system:serviceaccount:default:shaoserverless\"",
Lastly, this works:
curl -k -1 -H "Authorization: Bearer <token>" https://<ip-address>/api/v1/namespaces/default/pods -v
Related
Is it possible to gain k8s cluster access with a serviceaccount token?
My script does not have access to a kubeconfig file; however, it does have access to the service account token at /var/run/secrets/kubernetes.io/serviceaccount/token.
Here are the steps I tried, but it is not working.
kubectl config set-credentials sa-user --token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl config set-context sa-context --user=sa-user
but when the script ran "kubectl get rolebindings", I got the following error:
Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:test:default" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "test"
Is it possible to gain k8s cluster access with a serviceaccount token?
Certainly, that's the point of a ServiceAccount token. The question you appear to be asking is "why does my default ServiceAccount not have all the privileges I want", which is a different problem. One will benefit from reading the fine manual on the topic.
If you want the default SA in the test NS to have privileges to read things in its NS, you must create a Role scoped to that NS and then declare the relationship explicitly. SAs do not automatically have those privileges:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: test
  name: test-default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: whatever-role-you-want
subjects:
- kind: ServiceAccount
  name: default
  namespace: test
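For completeness, the Role that roleRef points at must also exist; a minimal sketch of one that would let the default SA list rolebindings in that namespace (the rules here are illustrative, not part of the original answer):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: test
  name: whatever-role-you-want
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["rolebindings"]
  verbs: ["get", "list"]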
but when the script ran "kubectl get pods" I get the following error: Error from server (Forbidden): rolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:test:default" cannot list resource "rolebindings" in API group "rbac.authorization.k8s.io" in the namespace "test"
Presumably you mean you can kubectl get rolebindings, because I would not expect running kubectl get pods to emit that error
Yes, it is possible. For instance, if you log in to the K8s dashboard via a token, it uses the same mechanism.
Follow these steps:
Create a service account
$ kubectl -n <your-namespace-optional> create serviceaccount <service-account-name>
A role binding grants the permissions defined in a role to a user or set of users. You can use a predefined role or you can create your own. Check this link for more info. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-example
$ kubectl create clusterrolebinding <binding-name> --clusterrole=cluster-admin --serviceaccount=<namespace>:<service-account-name>
Get the token name
$ TOKENNAME=`kubectl -n <namespace> get serviceaccount/<service-account-name> -o jsonpath='{.secrets[0].name}'`
Finally, get the token and set the credentials
$ kubectl -n <namespace> get secret $TOKENNAME -o jsonpath='{.data.token}'| base64 --decode
$ kubectl config set-credentials <service-account-name> --token=<output from previous command>
$ kubectl config set-context --current --user=<service-account-name>
If you follow these steps carefully your problem will be solved.
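To double-check before wiring this into your script, a small verification sketch (assuming the same service-account name and namespace as above):
# with the new user/context active, confirm the permissions you expect
kubectl auth can-i get pods
kubectl auth can-i list rolebindings
# or, from an admin context, impersonate the service account explicitly
kubectl auth can-i get pods --as=system:serviceaccount:<namespace>:<service-account-name>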
I have full admin access to a GKE cluster, but I want to be able to create a kubernetes context with just read-only privileges. This way I can prevent myself from accidentally messing with the cluster. However, I still want to be able to switch into a mode with full admin access temporarily when I need to make changes (I would probably use cloud shell for this to fully distinguish the two).
I haven't found much documentation about this - it seems I can set up roles based on my email but not have two roles for one user.
Is there any way to do this? Or any other way to prevent fat-finger deleting prod?
There are a few ways to do this with GKE. A context in your KUBECONFIG consists of a cluster and a user. Since you want to be pointing at the same cluster, it's the user that needs to change. Permissions for what actions users can perform on various resources can be controlled in a couple ways, namely via Cloud IAM policies or via Kubernetes RBAC. The former applies project-wide, so unless you want to create a subject that has read-only access to all clusters in your project, rather than a specific cluster, it's preferable to use the more narrowly-scoped Kubernetes RBAC.
The following types of subjects can authenticate with a GKE cluster and have Kubernetes RBAC policies applied to them (see here):
a registered (human) GCP user
a Kubernetes ServiceAccount
a GCloud IAM service account
a member of a G Suite Google Group
Since you're not going to register another human to accomplish this read-only access pattern and G Suite Google Groups are probably overkill, your options are a Kubernetes ServiceAccount or a GCloud IAM service account. For this answer, we'll go with the latter.
Here are the steps:
Create a GCloud IAM service account in the same project as your Kubernetes cluster.
Create a local gcloud configuration to avoid cluttering your default one. Just as you want to create a new KUBECONFIG context rather than modifying the user of your current context, this does the equivalent thing but for gcloud itself rather than kubectl. Run the command gcloud config configurations create <configuration-name>.
Associate this configuration with your GCloud IAM service account: gcloud auth activate-service-account <service_account_email> --key-file=</path/to/service/key.json>.
Add a context and user to your KUBECONFIG file so that you can authenticate to your GKE cluster as this GCloud IAM service account as follows:
contexts:
- ...
- ...
- name: <cluster-name>-read-only
  context:
    cluster: <cluster-name>
    user: <service-account-name>
users:
- ...
- ...
- name: <service-account-name>
  user:
    auth-provider:
      name: gcp
      config:
        cmd-args: config config-helper --format=json --configuration=<configuration-name>
        cmd-path: </path/to/gcloud/cli>
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
Add a ClusterRoleBinding so that this subject has read-only access to the cluster:
$ cat <<EOF | kubectl apply -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: <any-name>
subjects:
- kind: User
  name: <service-account-email>
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
EOF
Try it out:
$ kubectl config use-context <cluster-name>-read-only
$ kubectl get all --all-namespaces
# see all the pods and stuff
$ kubectl create namespace foo
Error from server (Forbidden): namespaces is forbidden: User "<service-account-email>" cannot create resource "namespaces" in API group "" at the cluster scope: Required "container.namespaces.create" permission.
$ kubectl config use-context <original-context>
$ kubectl get all --all-namespaces
# see all the pods and stuff
$ kubectl create namespace foo
namespace/foo created
Is it possible to enable k8s basic auth in AWS EKS?
I need it to make Jenkins Kubernetes plugin work when Jenkins is deployed outside k8s.
You can use service account tokens.
Read more about it here: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens
You can use service account tokens (as Bearer Tokens).
Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API. To manually create a service account, simply use the kubectl create serviceaccount (NAME) command. This creates a service account in the current namespace and an associated secret.
kubectl create serviceaccount jenkins
serviceaccount "jenkins" created
Check an associated secret:
kubectl get serviceaccounts jenkins -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  # ...
secrets:
- name: jenkins-token-1yvwg
The created secret holds the public CA of the API server and a signed JSON Web Token (JWT).
kubectl get secret jenkins-token-1yvwg -o yaml
apiVersion: v1
data:
  ca.crt: (APISERVER'S CA BASE64 ENCODED)
  namespace: ZGVmYXVsdA==
  token: (BEARER TOKEN BASE64 ENCODED)
kind: Secret
metadata:
  # ...
type: kubernetes.io/service-account-token
The signed JWT can be used as a bearer token to authenticate as the given service account.
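Putting that together, a sketch of extracting the token and calling the API from outside the cluster (the API server address is a placeholder; the secret name is the one shown above):
# pull the token out of the secret, base64-decode it, and use it as a Bearer token
TOKEN=$(kubectl get secret jenkins-token-1yvwg -o jsonpath='{.data.token}' | base64 --decode)
curl -k -H "Authorization: Bearer $TOKEN" https://<api-server-address>/api/v1/namespaces/default/pods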
If you need more control, install nginx-ingress and then tell it to enforce HTTP Basic authentication.
https://kubernetes.github.io/ingress-nginx/examples/auth/basic/
https://aws.amazon.com/blogs/opensource/network-load-balancer-nginx-ingress-controller-eks/
https://www.rfc-editor.org/rfc/rfc7617
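A rough sketch of what that looks like, following the nginx-ingress basic-auth example linked above (the host, service name, and port are placeholders):
# create an htpasswd file and store it in a secret named "basic-auth"
htpasswd -c auth jenkins-admin
kubectl create secret generic basic-auth --from-file=auth

# reference the secret from an Ingress handled by the nginx ingress controller
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  ingressClassName: nginx
  rules:
  - host: jenkins.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jenkins
            port:
              number: 8080
EOF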
After creating a new GKE cluster, creating a cluster role failed with the following error:
Error from server (Forbidden): error when creating "./role.yaml":
clusterroles.rbac.authorization.k8s.io "secret-reader" is forbidden:
attempt to grant extra privileges: [PolicyRule{Resources:["secrets"],
APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["secrets"],
APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["secrets"],
APIGroups:[""], Verbs:["list"]}] user=&{XXX#gmail.com
[system:authenticated] map[authenticator:[GKE]]} ownerrules= .
[PolicyRule{Resources:["selfsubjectaccessreviews"
"selfsubjectrulesreviews"], APIGroups:["authorization.k8s.io"], Verbs:
["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis"
"/apis/*" "/healthz" "/swagger-2.0.0.pb-v1" "/swagger.json"
"/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}]
ruleResolutionErrors=[]
My account has the following permissions in IAM:
Kubernetes Engine Admin
Kubernetes Engine Cluster Admin
Owner
This is my role.yaml (from the Kubernetes docs):
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
According to the RBAC docs of GCloud, I need to
create a RoleBinding that gives your Google identity a cluster-admin role before attempting to create additional Role or ClusterRole permissions.
So I tried this:
export GCP_USER=$(gcloud config get-value account | head -n 1)
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin --user=$GCP_USER
which succeeded, but I still get the same error when creating the cluster role.
Any ideas what I might be doing wrong?
According to Google Container Engine docs you must first create a RoleBinding that grants you all of the permissions included in the role you want to create.
Get current google identity
$ gcloud info | grep Account
Account: [myname@example.org]
Grant cluster-admin to your current identity
$ kubectl create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=myname@example.org
clusterrolebinding "myname-cluster-admin-binding" created
Now you can create your ClusterRole without any problem.
I found the answer in the CoreOS FAQ / Troubleshooting section; check it out for more information.
@S.Heutmaker's comment led me to the solution.
For me, the solution was to create the cluster-admin-binding with the correct casing on the email address. Check the casing in the error message or in the Google Cloud console IAM page.
$ kubectl create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=MyName@example.org
That's the correct solution. Is the GCP_USER you obtained the same as the XXX@gmail.com username in the role-creation error message?
If you got the casing right, try both googlemail domain variants (i.e. @gmail.com and @googlemail.com). For me, gcloud info | grep Account returned <name>@googlemail.com, but I had to create a clusterrolebinding with <name>@gmail.com for the command to work.
Even after granting cluster roles to the user, I get: Error from server (Forbidden): User "system:anonymous" cannot list nodes at the cluster scope. (get nodes)
I have the following set for the user in the ~/.kube/config file:
- context:
    cluster: kubernetes
    user: user@gmail.com
  name: user@kubernetes
and the following added to admin.yaml to create the cluster role and cluster role binding:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-role
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: admin-binding
subjects:
- kind: User
  name: nsp@gmail.com
roleRef:
  kind: ClusterRole
  name: admin-role
When I try the command, I still get the error:
kubectl --username=user@gmail.com get nodes
Error from server (Forbidden): User "system:anonymous" cannot list nodes at the cluster scope. (get nodes)
Can someone please suggest how to proceed?
Your problem is not with your ClusterRoleBindings but rather with user authentication. Kubernetes tells you that it identified you as system:anonymous (which is similar to *NIX's nobody) and not nsp@example.com (to which you applied your binding).
In your specific case the reason for that is that the username flag uses HTTP Basic authentication and needs the password flag to actually do anything. But even if you did supply the password, you'd still need to actually tell the API server to accept that specific user.
Have a look at this part of the Kubernetes documentation which deals with different methods of authentication. For the username and password authentication to work, you'd want to look at the Static Password File section, but I would actually recommend you go with X509 Client Certs since they are more secure and are operationally much simpler (no secrets on the Server, no state to replicate between API servers).
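As a sketch of the client-certificate route (assuming you can sign with the cluster's CA, here at the usual kubeadm location; the file names and CN/O values are illustrative):
# create a key and CSR for the user, then sign it with the cluster CA
openssl genrsa -out nsp.key 2048
openssl req -new -key nsp.key -out nsp.csr -subj "/CN=nsp@gmail.com/O=dev-team"
openssl x509 -req -in nsp.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \
  -CAcreateserial -out nsp.crt -days 365

# register the credentials and a context that uses them, then test
kubectl config set-credentials nsp@gmail.com --client-certificate=nsp.crt --client-key=nsp.key
kubectl config set-context nsp@kubernetes --cluster=kubernetes --user=nsp@gmail.com
kubectl --context=nsp@kubernetes get nodes
The CN of the certificate becomes the username the API server sees, so it should match the name used in your ClusterRoleBinding.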
In my case, I was receiving a similar error due to RBAC.
Error
root@k8master:~# kubectl cluster-info dump --insecure-skip-tls-verify=true
Error from server (Forbidden): nodes is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Solution:
As a solution, I did the following to reconfigure my user to access the cluster:
cd $HOME
sudo whoami
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
echo "export KUBECONFIG=$HOME/admin.conf" | tee -a ~/.bashrc
After doing the above, when I checked the cluster info, I got the result:
root@k8master:~# kubectl cluster-info
Kubernetes master is running at https://192.168.10.15:6443
KubeDNS is running at https://192.168.10.15:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy