I am following this Helm + secure guide:
https://www.cockroachlabs.com/docs/stable/orchestrate-cockroachdb-with-kubernetes.html#helm
I deployed the cluster with this command: $ helm install my-release --values my-values.yaml cockroachdb/cockroachdb --namespace=thesis-crdb
This is how it looks: $ helm list --namespace=thesis-crdb
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-release thesis-crdb 1 2021-01-31 17:38:52.8102378 +0100 CET deployed cockroachdb-5.0.4 20.2.4
Here is how it looks using: $ kubectl get all --namespace=thesis-crdb
NAME READY STATUS RESTARTS AGE
pod/my-release-cockroachdb-0 1/1 Running 0 7m35s
pod/my-release-cockroachdb-1 1/1 Running 0 7m35s
pod/my-release-cockroachdb-2 1/1 Running 0 7m35s
pod/my-release-cockroachdb-init-fhzdn 0/1 Completed 0 7m35s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-release-cockroachdb ClusterIP None <none> 26257/TCP,8080/TCP 7m35s
service/my-release-cockroachdb-public ClusterIP 10.xx.xx.x <none> 26257/TCP,8080/TCP 7m35s
NAME READY AGE
statefulset.apps/my-release-cockroachdb 3/3 7m35s
NAME COMPLETIONS DURATION AGE
job.batch/my-release-cockroachdb-init 1/1 43s 7m36s
In the my-values.yaml file I only changed tls from false to true:
tls:
  enabled: true
So far so good, but from here on the guide isn't really working for me anymore. I try, as they say, to get the CSR: kubectl get csr --namespace=thesis-crdb
No resources found
Ok, perhaps it's not needed. I carry on to deploy the secure client.
I download the file: https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml
And changed serviceAccountName: cockroachdb to serviceAccountName: my-release-cockroachdb.
I try to deploy it with $ kubectl create -f client-secure.yaml --namespace=thesis-crdb but it throws this error:
Error from server (Forbidden): error when creating "client-secure.yaml": pods "cockroachdb-client-secure" is forbidden: error looking up service account thesis-crdb/my-release-cockroachdb: serviceaccount "my-release-cockroachdb" not found
Anyone got an idea how to solve this? I'm fairly sure it's something with the namespace that is messing it up.
I have tried to put the namespace in the metadata section:
metadata:
  namespace: thesis-crdb
And then tried to deploy it with kubectl create -f client-secure.yaml, but to no avail:
Error from server (Forbidden): error when creating "client-secure.yaml": pods "cockroachdb-client-secure" is forbidden: error looking up service account thesis-crdb/my-release-cockroachdb: serviceaccount "my-release-cockroachdb" not found
You mention in the question that you changed serviceAccountName in the YAML:
And changed serviceAccountName: cockroachdb to serviceAccountName: my-release-cockroachdb.
So the root cause of your issue is a ServiceAccount misconfiguration.
Background
In your cluster there is a resource type called a ServiceAccount.
When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default).
For a ServiceAccount you also need to configure RBAC, which grants it permission to create resources.
Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization.
RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing you to dynamically configure policies through the Kubernetes API.
If you don't have proper RBAC permissions you will not be able to create resources.
In Kubernetes you can find Role and ClusterRole. A Role sets permissions within a particular namespace, while a ClusterRole sets permissions across the whole cluster.
Besides that, you also need to bind roles to subjects using RoleBinding and ClusterRoleBinding.
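As an illustration, here is a minimal, hypothetical Role/RoleBinding pair (all names here are placeholders, not taken from the chart) that grants a ServiceAccount read access to pods in a single namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader          # placeholder name
  namespace: example-ns     # placeholder namespace
rules:
- apiGroups: [""]           # "" = the core API group (pods live here)
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: example-ns
subjects:
- kind: ServiceAccount
  name: example-sa          # placeholder ServiceAccount
  namespace: example-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader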
In addition, if you use a Cloud environment, you also need special rights in the project. Your guide provides instructions for this here.
Root cause
I've checked the cockroachdb chart and it creates a ServiceAccount, Role, ClusterRole, RoleBinding and ClusterRoleBinding for cockroachdb and prometheus. There is no configuration for my-release-cockroachdb.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cockroachdb
...
  verbs:
  - create
  - get
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cockroachdb
  labels:
    app: cockroachdb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cockroachdb
...
In client-secure.yaml you changed serviceAccountName to my-release-cockroachdb, and Kubernetes cannot find that ServiceAccount because it was created neither by the cluster administrator nor by the cockroachdb chart.
To list ServiceAccounts in the default namespace you can use $ kubectl get ServiceAccount; to check all ServiceAccounts in the cluster, add -A to the command: $ kubectl get ServiceAccount -A.
Solution
Option 1 is to use an existing ServiceAccount with proper permissions, such as the SA created by the cockroachdb chart, which is named cockroachdb, not my-release-cockroachdb.
Option 2 is to create a ServiceAccount, Role/ClusterRole and RoleBinding/ClusterRoleBinding for my-release-cockroachdb (see the sketch below).
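A minimal sketch of Option 2 follows. The exact resources and verbs are assumptions here; copy them from the Role/ClusterRole the chart creates for cockroachdb, since that is what the client pod actually needs:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-release-cockroachdb
  namespace: thesis-crdb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-release-cockroachdb
  namespace: thesis-crdb
rules:
- apiGroups: [""]
  resources: ["secrets"]     # assumption: mirror the chart's own rules
  verbs: ["create", "get", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-release-cockroachdb
  namespace: thesis-crdb
subjects:
- kind: ServiceAccount
  name: my-release-cockroachdb
  namespace: thesis-crdb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: my-release-cockroachdb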
Related
I am using Kubernetes version 1.24. I created a secret for my service account manually, but when I run kubectl get serviceaccounts, it shows that the service account has no secrets. Why?
If you create the secret manually, you have to add it to the service account manually as well.
You can edit the existing service account with kubectl edit sa <name of sa>, or else create the YAML and reapply the changes to configure it.
However, on versions before 1.24, creating a ServiceAccount auto-generates the secret token:
bash-4.2$ kubectl get sa
NAME SECRETS AGE
default 1 11d
bash-4.2$ kubectl create sa test
serviceaccount/test created
bash-4.2$ kubectl get secret
NAME TYPE DATA AGE
default-token-dvgd8 kubernetes.io/service-account-token 3 11d
test-token-k6dpd kubernetes.io/service-account-token 3 7s
bash-4.2$ kubectl get sa
NAME SECRETS AGE
default 1 11d
test 1 59s
bash-4.2$
Update
If you are on K8s version 1.24, the ServiceAccount no longer creates the secret automatically; you have to create it manually.
Example:
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: token-secret
  annotations:
    kubernetes.io/service-account.name: "<SA name>"
If you just want to create a token, you can use: kubectl create token <Name>
Read more about it: https://medium.com/@harsh.manvar111/k8s-v1-24-is-unable-to-create-a-serviceaccount-secret-798f8454e6e7
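For example (the ServiceAccount name test is just a placeholder; the token output is truncated):
$ kubectl create token test
eyJhbGciOiJSUzI1NiIsImtpZCI6...
This prints a short-lived token for the ServiceAccount without creating a Secret object.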
When creating a secret manually, it needs to be added to the ServiceAccount manually; you can use kubectl edit for this, as sketched below.
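As a sketch, after kubectl edit sa the manifest would reference the manually created Secret like this (names reused from the examples above):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test
secrets:
- name: token-secret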
When running (on GCP):
$ helm upgrade \
--values ./values.yaml \
--install \
--namespace "weaviate" \
"weaviate" \
weaviate.tgz
It returns;
UPGRADE FAILED
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
Error: UPGRADE FAILED: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
UPDATE: based on the solution
$ vim rbac-config.yaml
Add to the file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
Run:
$ kubectl create -f rbac-config.yaml
$ helm init --service-account tiller --upgrade
Note: based on Helm v2.
tl;dr: Setup Helm with the appropriate authorization settings for your cluster, see https://v2.helm.sh/docs/using_helm/#role-based-access-control
Long Answer
Your experience is not specific to the Weaviate Helm chart; rather, it looks like Helm is not set up according to the cluster authorization settings. Other Helm commands should fail with the same or a similar error.
The following error
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
means that the default service account in the kube-system namespace is lacking permissions. I assume you have installed Helm/Tiller in the kube-system namespace, as this is the default if no other arguments are specified on helm init. Since you haven't created a specific Service Account for Tiller to use, it defaults to the default service account.
Since you are mentioning that you are running on GCP, I assume this means you are using GKE. GKE by default has RBAC Authorization enabled. In an RBAC setting no one has any rights by default, all rights need to be explicitly granted.
The helm docs list several options on how to make Helm/Tiller work in an RBAC-enabled setting. If the cluster has the sole purpose of running Weaviate you can choose the simplest option: Service Account with cluster-admin role. The process described there essentially creates a dedicated service account for Tiller, and adds the required ClusterRoleBinding to the existing cluster-admin ClusterRole. Note that this effectively makes Helm/Tiller an admin of the entire cluster.
If you are running a multi-tenant cluster and/or want to limit Tiller's permissions to a specific namespace, you need to choose one of the alternatives; a namespace-scoped sketch follows below.
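A rough sketch of that namespace-scoped alternative, assuming Helm v2 and a placeholder namespace tiller-world (the Helm v2 docs linked above have the authoritative version):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: tiller-world
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager
  namespace: tiller-world
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]          # broad within the namespace; narrow as needed
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: tiller-world
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: tiller-world
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller-manager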
I am interacting with a GKE cluster and trying to understand what are my permissions
➢ kubectl get roles --all-namespaces
NAMESPACE NAME AGE
istio-system istio-ingressgateway-sds 38d
kube-public system:controller:bootstrap-signer 38d
kube-system cloud-provider 38d
kube-system extension-apiserver-authentication-reader 38d
kube-system gce:cloud-provider 38d
kube-system sealed-secrets-key-admin 38d
kube-system system::leader-locking-kube-controller-manager 38d
kube-system system::leader-locking-kube-scheduler 38d
kube-system system:controller:bootstrap-signer 38d
kube-system system:controller:cloud-provider 38d
kube-system system:controller:token-cleaner 38d
kube-system system:fluentd-gcp-scaler 38d
kube-system system:pod-nanny 38d
However I do not see any role associated with me.
How am I interacting with the k8s cluster?
How can I see whoami and what are my permissions?
The command and output you are sharing refer to Kubernetes RBAC authorization (not exclusive to GKE). You can find the definition of each role HERE
If you want to be specific to GKE you can use both Cloud Identity and Access Management and Kubernetes RBAC to control access to your GKE cluster.
Cloud IAM is not specific to Kubernetes; it provides identity management for multiple Google Cloud Platform products, and operates primarily at the level of the GCP project.
Kubernetes RBAC is a core component of Kubernetes and allows you to create and grant roles (sets of permissions) for any object or type of object within the cluster. You can find more information on how RBAC integrates with GKE HERE
You don't see any roles associated with you because you are querying the roles across all namespaces and most likely you haven't defined a single one.
You are interacting with your cluster from Cloud Shell. Before connecting to your cluster you must have run the following command:
gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE --project PROJECT_ID
You authenticate to the cluster using the same user you authenticate with to log in to GCP. More information on authentication for kubectl HERE
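Two standard kubectl commands may also help with the "whoami" and "what are my permissions" parts of the question (no GKE-specific tooling assumed):
# Show the user kubectl is currently authenticating as
$ kubectl config view --minify -o jsonpath='{.users[0].name}'
# List what the current user is allowed to do in the current namespace
$ kubectl auth can-i --list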
You can get role bindings and cluster roles based on namespace or resource, as seen in my example commands.
kubectl get rolebinding pod-reader-binding -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"pod-reader-binding","namespace":"default"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"pod-reader"},"subjects":[{"kind":"User","name":"example@foo.com"},{"kind":"ServiceAccount","name":"johndoe"},{"kind":"User","name":"test-account@example.google.com.iam.gserviceaccount.com"},{"kind":"Group","name":"accounting-group@example.com"}]}
  creationTimestamp: xxxx-xx-xxxx:xx:xxZ
  name: pod-reader-binding
  namespace: default
  resourceVersion: "1502640"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/default/rolebindings/pod-reader-binding
  uid: de1775dc-cd85-11e9-a07d-42010aa800c2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: example@foo.com
In the above example my user [example@foo.com] is bound through the RBAC API group [rbac.authorization.k8s.io], so his actions on the pod will be limited by the permissions granted through RBAC. For example, if you want to give this user read access, you need to specify the following line in the referenced Role's YAML:
verbs: ["get", "watch", "list"]
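For completeness, here is a sketch of the pod-reader Role that the RoleBinding above references (its contents are assumed, as the original output does not show it):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]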
Finally, there are many predefined GKE roles that grant different permissions to GCP users or service accounts. You can find each role and its permissions HERE
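For example, granting a user one of those predefined roles happens at the project level with gcloud (the project and user here are placeholders; roles/container.viewer grants read-only GKE access):
$ gcloud projects add-iam-policy-binding my-project \
    --member="user:example@foo.com" \
    --role="roles/container.viewer"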
I have a Kubernetes v1.9.3 (no OpenShift) cluster I'd like to manage with ManageIQ (gaprindashvili-3 running as a Docker container).
I prepared the k8s cluster to interact with ManageIQ following these instructions. Notice that I performed the steps listed in the last section only (Prepare cluster for use with ManageIQ), as the previous ones were for setting up a k8s cluster and I already had a running one.
I successfully added the k8s container provider to ManageIQ, but the dashboard reports nothing: 0 nodes, 0 pods, 0 services, etc..., while I do have nodes, services and running pods on the cluster. I looked at the content of /var/log/evm.log of ManageIQ and found this error:
[----] E, [2018-06-21T10:06:40.397410 #13333:6bc9e80] ERROR -- : [KubeException]: events is forbidden: User "system:serviceaccount:management-infra:management-admin" cannot list events at the cluster scope: clusterrole.rbac.authorization.k8s.io "cluster-reader" not found Method:[block in method_missing]
So the ClusterRole cluster-reader was not defined in the cluster. I double checked with kubectl get clusterrole cluster-reader and it confirmed that cluster-reader was missing.
As a solution, I tried to create cluster-reader manually. I could not find any reference of it in the k8s doc, while it is mentioned in the OpenShift docs. So I looked at how cluster-reader was defined in OpenShift v3.9. Its definition changes across different OpenShift versions, I picked 3.9 as it is based on k8s v1.9 which is the one I'm using. So here's what I found in the OpenShift 3.9 doc:
Name: cluster-reader
Labels: <none>
Annotations: authorization.openshift.io/system-only=true
rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
[*] [] [get]
apiservices.apiregistration.k8s.io [] [] [get list watch]
apiservices.apiregistration.k8s.io/status [] [] [get list watch]
appliedclusterresourcequotas [] [] [get list watch]
I wrote the following yaml definition to create an equivalent ClusterRole in my cluster:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-reader
rules:
- apiGroups: ["apiregistration.k8s.io"]
  resources: ["apiservices", "apiservices/status"]
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["*"]
  verbs: ["get"]
I didn't include appliedclusterresourcequotas among the monitored resources because it's my understanding that it is an OpenShift-only resource (but I may be mistaken).
I deleted the old k8s container provider on ManageIQ and created a new one after having created cluster-reader, but nothing changed, the dashboard still displays nothing (0 nodes, 0 pods, etc...). I looked at the content of /var/log/evm.log in ManageIQ and this time these errors were reported:
[----] E, [2018-06-22T11:15:39.041903 #2942:7e5e1e0] ERROR -- : MIQ(ManageIQ::Providers::Kubernetes::ContainerManager::EventCatcher::Runner#start_event_monitor) EMS [kubernetes-01] as [] Event Monitor Thread aborted because [events is forbidden: User "system:serviceaccount:management-infra:management-admin" cannot list events at the cluster scope]
[----] E, [2018-06-22T11:15:39.042455 #2942:7e5e1e0] ERROR -- : [KubeException]: events is forbidden: User "system:serviceaccount:management-infra:management-admin" cannot list events at the cluster scope Method:[block in method_missing]
So what am I doing wrong? How can I fix this problem?
If it can be of any use, here you can find the whole .yaml file I'm using to set up the k8s cluster to interact with ManageIQ (all the required namespaces, service accounts, cluster role bindings are present as well).
For the ClusterRole to take effect it must be bound to the group management-infra or user management-admin.
Example of creating group binding:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-cluster-state
subjects:
- kind: Group
  name: management-infra
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-reader
  apiGroup: rbac.authorization.k8s.io
After applying this file the changes take effect immediately. There is no need to restart the cluster.
See more information here.
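One way to verify the binding took effect, using kubectl impersonation (the subject is taken from the error message above; this should print yes once the bound role allows listing events):
$ kubectl auth can-i list events --all-namespaces \
    --as=system:serviceaccount:management-infra:management-admin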
Desired Outcome:
I want to set up a CSV file with user IDs and passwords and access the Kubernetes Dashboard as a full admin, preferably from anywhere with a browser. I am just learning Kubernetes and want to experiment with cluster management, deployments, etc. This is just for learning and is not a production setup. I am using Kubernetes version 1.9.2 and created a 3-machine cluster (master and 2 workers).
Background/What I've done so far:
I read the Dashboard README and I created an admin-user and admin-role-binding with the files shown below. I can then use the kubectl describe secret command to get the admin user's token. I run kubectl proxy on the cluster master and authenticate to the Dashboard with that token using a browser running on the cluster master. All of this works.
admin-user.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
admin-role-binding.yaml:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
I can login to the dashboard as admin IF:
I run kubectl proxy
I access the dashboard with a browser where I ran command (1)
I use the "token" option to login and paste the admin user's token which I get using the kubectl describe secret command.
What I'd like to do:
Set up a CSV file with userids/passwords
Login as admin with userid/password
Be able to login from anywhere
To that end, I created a CSV file, e.g. /home/chris/myusers.txt:
mypasswd,admin,42
I did not know what value to use for id so I just punted with 42.
I then edited the file:
/etc/kubernetes/manifests/kube-apiserver.yaml and added this line:
--basic-auth-file=/home/chris/myusers.txt
and then restarted kubelet:
sudo systemctl restart kubelet
However, when I did that, my cluster stopped working and I couldn't access the Dashboard, so I reverted back to using the admin user's token.
My questions are:
Is it possible to do what I'm trying to do here?
What id values do I use in the user CSV file? What groups would I specify?
What other changes do I need to make to get all of this to work? If I modify the apiserver manifest to use a file with userids/passwords, does that mess up the rest of the configuration for my cluster?
You can try the following; it is working for me. Taking reference from here.
volumeMounts:
- mountPath: /etc/kubernetes/auth.csv
  name: kubernetes-dashboard
  readOnly: true
volumes:
- hostPath:
    path: /etc/kubernetes/auth.csv
  name: kubernetes-dashboard
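For reference, the static password file that --basic-auth-file expected has the format password,user,uid, with an optional fourth column listing groups (note this flag was deprecated and removed in Kubernetes 1.19). To make the user a full admin you could, for example, place it in the system:masters group, which Kubernetes binds to cluster-admin by default:
# password,user,uid,"group1,group2,..."
mypasswd,admin,42,"system:masters"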
See also: How to config simple login/pass authentication for kubernetes desktop UI