Not able to log in to Kubernetes Dashboard with token

I am new to Kubernetes. I have created the control node and wanted to add a service user to log in to the dashboard.
root@bm-mbi-01:~# cat admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
root@bm-mbi-01:~# cat admin-user-clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
root@bm-mbi-01:~# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-kd8c8
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 226e0ea4-9d2e-480e-8b1d-709b9860e561
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjVZOS02T3M2T3AwNUZhQXA3NDdJZENXZlpIU2F6UUtNdEdJNmd3MFg0WEEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWtkOGM4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMjZlMGVhNC05ZDJlLTQ4MGUtOGIxZC03MDliOTg2MGU1NjEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.OfRZlszXRt5AKxCumqSicPOkIK6g-fqPzitH_DjqskFxz6SzwYoDeFIPqyQ8O_6SFFgU6b-lgwiRmZtoj3dTKxr04PDl_t37KD7QTmBtX33vrW_sgq2EFbRkaiRxyTvFPjQDmo04iiyOQmlfzj67MIbgYYmem3NaTqgqx-j-SEi-CKTwVM4JyGa3GrTN7xeRfsFNSq1YOV6Yx1keyiD-gVEZiDxkBCJcdCJOM6p6q1s3cXgH1KWIDYkGXIHFX1f0tvu4xlr_-jgpSVehaAU98WN9DtgXL16ny1ckgKL1mPpBezrjVrf4k1lOSsXHWuE1cnlG9SnUIhbZ9k11HQJNtw
root@bm-mbi-01:~#
I used this token to log in to the dashboard, but after clicking Login there was no response.
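As a sanity check, the bearer token is just a JWT, so its claims can be inspected offline before blaming the dashboard. A sketch with a shortened, hypothetical token that has the same claim layout as the one above (the real token's payload is much longer):

```shell
# Hypothetical token: header.payload.signature, base64url-encoded segments
token='eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.sig'

# The claims are the second dot-separated segment; map base64url to base64
payload=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')

# Re-pad to a multiple of 4 so base64 -d accepts it
pad=$(( (4 - ${#payload} % 4) % 4 ))
if [ "$pad" -gt 0 ]; then
  payload="$payload$(printf '=%.0s' $(seq 1 "$pad"))"
fi

claims=$(printf '%s' "$payload" | base64 -d)
echo "$claims"   # {"sub":"system:serviceaccount:kube-system:admin-user"}
```

If the `sub` claim names the expected service account, the token itself is fine and the problem is in how the dashboard is being reached.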

The URL was browsable via the node IP, but clicking the login button did nothing.
I finally solved it with SSH local port forwarding.
The Kubernetes proxy was started with:
root@bm-mbi-01:~# kubectl proxy --address=10.20.200.75 --accept-hosts='.*' &
Then I opened an SSH tunnel from my local PC to the bm-mbi-01 server:
s.c@MB-SC ~ ssh -L 8001:localhost:8001 bmadmin@bm-mbi-01
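For reference, the working sequence above boils down to the following (host names and the 10.20.200.75 address taken from the question; this is only a sketch since it needs a reachable cluster):

```shell
# On the control node: start the API-server proxy (quotes keep the shell
# from glob-expanding the accept-hosts regex)
kubectl proxy --address=10.20.200.75 --accept-hosts='.*' &

# On the local PC: forward local port 8001 to port 8001 on the node
ssh -L 8001:localhost:8001 bmadmin@bm-mbi-01

# Then open the Dashboard through the tunnel:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```

The tunnel matters because the dashboard proxy URL only behaves correctly when accessed as localhost on the machine running kubectl proxy.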

Related

Why does a newly created ServiceAccount have 0 secrets?

I have Kubernetes version 1.24.3, and I created a new service account named "deployer", but when I checked it, it showed that it doesn't have any secrets.
This is how I created the service account:
kubectl apply -f - << EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployer
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployer-role
rules:
- apiGroups: ["", "extensions", "apps"]
  resources:
  - deployments
  verbs: ["list", "get", "describe", "apply", "delete", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: deployer-crb
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployer-role
subjects:
- kind: ServiceAccount
  name: deployer
  namespace: default
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: token-secret
  annotations:
    kubernetes.io/service-account.name: deployer
EOF
When I checked it, it shows that it doesn't have secrets:
cyber@manager1:~$ kubectl get sa deployer
NAME       SECRETS   AGE
deployer   0         4m32s
cyber@manager1:~$ kubectl get sa deployer -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"deployer","namespace":"default"}}
  creationTimestamp: "2022-10-13T08:36:54Z"
  name: deployer
  namespace: default
  resourceVersion: "2129964"
  uid: cd2bf19f-92b2-4830-8b5a-879914a18af5
And this is the secret that should be associated to the above service account:
cyber@manager1:~$ kubectl get secrets token-secret -o yaml
apiVersion: v1
data:
  ca.crt: <REDACTED>
  namespace: ZGVmYXVsdA==
  token: <REDACTED>
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{"kubernetes.io/service-account.name":"deployer"},"name":"token-secret","namespace":"default"},"type":"kubernetes.io/service-account-token"}
    kubernetes.io/service-account.name: deployer
    kubernetes.io/service-account.uid: cd2bf19f-92b2-4830-8b5a-879914a18af5
  creationTimestamp: "2022-10-13T08:36:54Z"
  name: token-secret
  namespace: default
  resourceVersion: "2129968"
  uid: d960c933-5e7b-4750-865d-e843f52f1b48
type: kubernetes.io/service-account-token
What can be the reason?
Update:
The answer helped, but for the record it doesn't matter: the token works even though the service account shows 0 secrets:
kubectl get pods --token `cat ./token` -s https://192.168.49.2:8443 --certificate-authority /home/cyber/.minikube/ca.crt --all-namespaces
Other Details:
I am working on Kubernetes version 1.24:
cyber@manager1:~$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.0", GitCommit:"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2", GitTreeState:"clean", BuildDate:"2022-08-23T17:44:59Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:23:26Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
You can delete it by running:
kubectl delete clusterroles deployer-role
kubectl delete clusterrolebindings deployer-crb
kubectl delete sa deployer
kubectl delete secrets token-secret
References for the Kubernetes 1.24 changes:
- Change log 1.24
- Creating a secret through the documentation
Based on the change log, the auto-generation of tokens is no longer available for every service account.
The LegacyServiceAccountTokenNoAutoGeneration feature gate is beta, and enabled by default. When enabled, Secret API objects containing service account tokens are no longer auto-generated for every ServiceAccount. Use the TokenRequest API to acquire service account tokens, or if a non-expiring token is required, create a Secret API object for the token controller to populate with a service account token by following this guide.
The TokenRequest API (token-request-v1) stops the auto-generation of legacy tokens because they are less secure. As a workaround, you can request a service account token directly:
kubectl create token SERVICE_ACCOUNT_NAME
kubectl create token deployer
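Since the requested token is short-lived by default, the lifetime can be set explicitly; a sketch (needs cluster access, so it is not runnable standalone):

```shell
# Request a token for the deployer service account in the default namespace;
# --duration caps the token's lifetime (subject to the apiserver's limits)
kubectl create token deployer --namespace default --duration=1h
```

For a non-expiring token, the manually created `kubernetes.io/service-account-token` Secret shown in the question remains the supported path.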
Shouldn't the roleRef reference the ClusterRole named deployer-role? I would try to replace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sdr
with
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployer-role

How to get a token from a service account?

I'm new to Kubernetes. I need to get the token from a service account that I created. I used the kubectl get secrets command and got "No resources found in default namespace." in return. Then I used the kubectl describe serviceaccount deploy-bot-account command to check my service account. It returned the following.
Name: deploy-bot-account
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: <none>
Events: <none>
How can I fix this issue?
When a service account is created (on clusters before v1.24, as discussed in the previous question), Kubernetes automatically creates a secret and maps it to the service account. The secret contains the ca.crt, token and namespace that are required for authN against the API server.
Refer to the following commands:
# kubectl create serviceaccount sa1
# kubectl get serviceaccount sa1 -oyaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa1
  namespace: default
secrets:
- name: sa1-token-l2hgs
You can retrieve the token from the secret mapped to the service account as shown below
# kubectl get secret sa1-token-l2hgs -oyaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EUXlNakV4TVRVeE1Wb1hEVE13TURReU1ERXhNVFV4TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT2lCCk5RTVFPU0Rvdm5IcHQ2MjhkMDZsZ1FJRmpWbGhBb3Q2Uk1TdFFFQ3c3bFdLRnNPUkY4aU1JUDkrdjlJeHFBUEkKNWMrTXkvamNuRWJzMTlUaWEz-NnA0L0pBT25wNm1aSVgrUG1tYU9hS3gzcm13bFZDZHNVQURsdWJHdENhWVNpMQpGMmpBUXRCMkZrTUN2amRqNUdnNnhCTXMrcXU2eDNLQmhKNzl3MEFxNzZFVTBoTkcvS2pCOEd5aVk4b3ZKNStzCmI2LzcwYU53TE54TVU3UjZhV1d2OVJhUmdXYlVPY2RxcWk4WnZtcTZzWGZFTEZqSUZ5SS9GeHd6SWVBalNwRjEKc0xsM1dHVXZONkxhNThUdFhrNVFhVmZKc1JDUGF0ZjZVRzRwRVJDQlBZdUx-lMzl4bW1LVk95TEg5ditsZkVjVApVcng5Qk9LYmQ4VUZrbXdpVSs4Q0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKMkhUMVFvbkswWnFJa0kwUUJDcUJUblRoT0cKeE56ZURSalVSMEpRZTFLT2N1eStZMWhwTVpYOTFIT3NjYTk0RlNiMkhOZy9MVGkwdnB1bWFGT2d1SE9ncndPOQpIVXZVRFZPTDlFazF5SElLUzBCRHdrWDR5WElMajZCOHB1Wm1FTkZlQ0cyQ1I5anpBVzY5ei9CalVYclFGVSt3ClE2OE9YSEUybzFJK3VoNzBiNzhvclRaaC9hVUhybVAycXllakM2dUREMEt1QzlZcGRjNmVna2U3SkdXazJKb3oKYm5OV0NHWklEUjF1VFBiRksxalN5dTlVT1MyZ1dzQ1BQZS8vZ2JqUURmUmpyTjJldmt2RWpBQWF0OEpsd1FDeApnc3ZlTEtCaTRDZzlPZDJEdWphVmxtR2YwUVpXR1FmMFZGaEFlMzIxWE5hajJNL2lhUXhzT3FwZzJ2Zz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaW-FJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSmtaV1poZFd4MElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkluTmhNUzEwYjJ0bGJpMXNNbWhuY3lJc0ltdDFZbVZ5Ym1WMFpYTXVhVzh2YzJWeWRtbGpaV0ZqWTI5MWJuUXZjMlZ5ZG1salpTMWhZMk52ZFc1MExtNWhiV1VpT2lKellURWlMQ0pyZFdKbGNtNWxkR1Z6TG1sdkwzTmxjblpwWTJWaFkyTnZkVzUwTDNObGNuWnBZMlV0WVdOamIzVnVkQzUxYVdRaU9pSXhaRFUyWW1Vd09DMDRORGt4TFRFeFpXRXRPV0ppWWkwd01qUXlZV014TVRBd01UVWlMQ0p6ZFdJaU9pSnplWE4wWlcwNmMyVnlkbWxqWldGalkyOT-FiblE2WkdWbVlYVnNkRHB6WVRFaWZRLmFtdGFORHZUNE9DUlJjZVNpTUE0WjhxaExIeTVOMUlfSG12cTBPWDdvV3RVNzdEWl9wMnVTVm13Wnlqdm1DVFB0T01acUhKZ29BX0puYUphWmlIU3IyaGh3Y2pTN2VPX3dhMF8tamk0ZXFfa0wxVzVNMDVFSG1YZFlTNzdib-DAtZ29jTldxT2RORVhpX1VBRWZLR0RwMU1LeFpFdlBjamRkdDRGWVlBSmJ5LWRqdXNhRjhfTkJEclhJVUNnTzNLUUlMeHZtZjZPY2VDeXYwR3l4ajR4SWRPRTRSSzZabzlzSW5qY0lWTmRvVm85Y3o5UzlvaGExNXdrMWl2VDgwRnBqU3dnUUQ0OTFqdEljdFppUkJBQzIxZkhYMU5scENaQTdIb3Zvck5Yem9maGpmUG03V0xRUUYyQjc4ZkktUEhqMHM2RnNpMmI0NUpzZzFJTTdXWU50UQ==
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: sa1
    kubernetes.io/service-account.uid: 1d56be08-8491-11ea-9bbb-0242ac110015
  name: sa1-token-l2hgs
  namespace: default
type: kubernetes.io/service-account-token
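The values under `data:` are base64-encoded, so recovering the plain token is ordinary shell work. A sketch (the kubectl line assumes cluster access; the decode itself does not):

```shell
# Decode a single field locally; for example the namespace field above:
ns=$(printf '%s' 'ZGVmYXVsdA==' | base64 -d)
echo "$ns"   # default

# With cluster access, the token can be pulled and decoded in one go:
# kubectl get secret sa1-token-l2hgs -o jsonpath='{.data.token}' | base64 -d
```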

Creating a namespace is forbidden for the default user while trying to install Meshery on EKS

I executed the below command:
kubectl create namespace meshery
And I get an error like below:
Error from server (Forbidden): namespaces is forbidden: User "system:serviceaccount:default:default" cannot create resource "namespaces" in API group "" at the cluster scope
The Steps I executed before that were as follows:
[ec2-user@ip-10-0-0-43 ~]$ kubectl create serviceaccount meshery
Error from server (AlreadyExists): serviceaccounts "meshery" already exists
[ec2-user@ip-10-0-0-43 ~]$ kubectl create clusterrolebinding meshery-binding --clusterrole=cluster-admin \
> --serviceaccount=default:meshery
error: failed to create clusterrolebinding: clusterrolebindings.rbac.authorization.k8s.io "meshery-binding" already exists
[ec2-user@ip-10-0-0-43 ~]$ kubectl get secrets
NAME                               TYPE                                  DATA   AGE
bookinfo-details-token-tm654       kubernetes.io/service-account-token   3      40h
bookinfo-productpage-token-lr9zq   kubernetes.io/service-account-token   3      40h
bookinfo-ratings-token-2gc5h       kubernetes.io/service-account-token   3      40h
bookinfo-reviews-token-8k76p       kubernetes.io/service-account-token   3      40h
default-token-zwx6k                kubernetes.io/service-account-token   3      3d
meshery-token-x94qk                kubernetes.io/service-account-token   3      3d
[ec2-user@ip-10-0-0-43 ~]$ kubectl describe secret default-token-zwx6k
Name: default-token-zwx6k
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: default
kubernetes.io/service-account.uid: 33a3496d-db4c-4fb3-b634-204560210f90
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 7 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlJ4RV82SFR1Q3ltQVp2dHZBMEpNd2RkaTVqM2hQOHB3SURIZDRoVW9lRGcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tend4NmsiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjMzYTM0OTZkLWRiNGMtNGZiMy1iNjM0LTIwNDU2MDIxMGY5MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.TdvS4w0i0ky4dWoqrCL4PrggkpbdxlwqAhPpVQuItqCIPThB_IbCbve6KCMKSePNhO6Kw_TV9TiCiZMSzoqc0T_4PnrAcj48IafKi8_JbcNACeoR7KbSNnYigL8Ou1uQFmcM2Wu2FVjaaCg1tVUC4T0oCPH9MQLnyXIbs7lZk6Ip0Cu0qm-86XyyRSdg5m6qc9FkJqZJfiu65EOmNZhhDbx452PmZ4Ag73WcJKCTDMfZBDq5FiQM4eZtpgTjFec0980JpoBqQppVYOyjSh5sjKqkJNo-BcRDiVcAJRM23gDF5Xu4OABvWX3-cgpwb0cdZ0Xx-RK3xomzSu2Qstn5pw
[ec2-user@ip-10-0-0-43 ~]$ kubectl config set-credentials meshery --token=eyJhbGciOiJSUzI1NiIsImtpZCI6IlJ4RV82SFR1Q3ltQVp2dHZBMEpNd2RkaTVqM2hQOHB3SURIZDRoVW9lRGcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tend4NmsiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjMzYTM0OTZkLWRiNGMtNGZiMy1iNjM0LTIwNDU2MDIxMGY5MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.TdvS4w0i0ky4dWoqrCL4PrggkpbdxlwqAhPpVQuItqCIPThB_IbCbve6KCMKSePNhO6Kw_TV9TiCiZMSzoqc0T_4PnrAcj48IafKi8_JbcNACeoR7KbSNnYigL8Ou1uQFmcM2Wu2FVjaaCg1tVUC4T0oCPH9MQLnyXIbs7lZk6Ip0Cu0qm-86XyyRSdg5m6qc9FkJqZJfiu65EOmNZhhDbx452PmZ4Ag73WcJKCTDMfZBDq5FiQM4eZtpgTjFec0980JpoBqQppVYOyjSh5sjKqkJNo-BcRDiVcAJRM23gDF5Xu4OABvWX3-cgpwb0cdZ0Xx-RK3xomzSu2Qstn5pw
User "meshery" set.
[ec2-user@ip-10-0-0-43 ~]$ kubectl config set-context --current --user=meshery
Context "arn:aws:eks:us-east-1:632078958246:cluster/icluster1" modified.
[ec2-user@ip-10-0-0-43 ~]$ kubectl config view --minify --flatten > config_aws_eks.yaml
[ec2-user@ip-10-0-0-43 ~]$ cat config_aws_eks.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01URXhOVEF6TXpNeU0xb1hEVE13TVRFeE16QXpNek15TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTXl0ClorVkI5am13WnNtL1l0TEpVRTB1aWhsWXdiangwcWdpQnNuSE8yKzhOZCtDSTZBS2prM2FsOTFTbWtGZGRsWngKSmY2eXBRTXBsTzVTeko4akpWQWVSMFVoa01vVFJnUDZFR2JRQWZvaFFleG1uMGlneWhnYjV1bGY0WFpTWUN4cwpSMVNJVXpXUVNDcFNEdmkxSDNaSFFVdUFQVkE3cytPYlZZMXIvU3FPVE1nYlVxZFduRXBSbkRzbzNpbXNRemljCk80Q1NtMU9jQXl3eUNqWjNLcHRzUWY1eFpoUytwT1FPbmJ2cWRXRVBCNHBFK29zNGxtZFNOKy81ZU5xaFh4SysKYW5hT2NyMGo5anZVa2VObmc0Z3JsQXFCb3FXL0FCdHhudmlXYmYvd2hYRm5NTUluSDV3dFN6aUhpd0E3TlVPVwpzYTJ2Yk8xcHNQOGlIQkQ2SHYwQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJdWJ4dzJhb2RPWFZCU3ZPdGtuYUJOYkxYL1QKdk5lNWhvcWVjT3pPTFdlcWs4RGdqbldtUUpXT3VFQjBBNEtRRys2WTcvem1qTWJTRG9mbGNqd1pNRWp0VHFSagpDSnNXVnV6b0dtVWVoN29PaUgyTkFtSHVzcFc2N0oxdVZnWFYrVG55Nld5YnhuTjdIQWhTcHlIWUZ2MERySVhxCkZJVWdxOTJBWFJVMklPTld3Wk1HOUY0QndqUEt0QU0ranU3RUNFWXo0SGhHSjlNa01ZcXUvMFlOa3FoNU84cjEKSGpvWDZyRTNqWlE0d0NvMkkrZ1UrRE1RRmsyVGdvb1NMWXBUT2pnaHBGWXlRRTNUMlZBaGdYZnVsMXVZZitmNQoycTl1cTEybWE4RmpFUmRteXp3cy9Jb3UxTElZR1AxdVlmSkc5aDkxZDZpbTRwS1hUTkhGRERJU1NIaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://BE1866C372B4FCB9E011E90A2BA78F79.gr7.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:632078958246:cluster/icluster1
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:632078958246:cluster/icluster1
    user: meshery
  name: arn:aws:eks:us-east-1:632078958246:cluster/icluster1
current-context: arn:aws:eks:us-east-1:632078958246:cluster/icluster1
kind: Config
preferences: {}
users:
- name: meshery
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlJ4RV82SFR1Q3ltQVp2dHZBMEpNd2RkaTVqM2hQOHB3SURIZDRoVW9lRGcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tend4NmsiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjMzYTM0OTZkLWRiNGMtNGZiMy1iNjM0LTIwNDU2MDIxMGY5MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.TdvS4w0i0ky4dWoqrCL4PrggkpbdxlwqAhPpVQuItqCIPThB_IbCbve6KCMKSePNhO6Kw_TV9TiCiZMSzoqc0T_4PnrAcj48IafKi8_JbcNACeoR7KbSNnYigL8Ou1uQFmcM2Wu2FVjaaCg1tVUC4T0oCPH9MQLnyXIbs7lZk6Ip0Cu0qm-86XyyRSdg5m6qc9FkJqZJfiu65EOmNZhhDbx452PmZ4Ag73WcJKCTDMfZBDq5FiQM4eZtpgTjFec0980JpoBqQppVYOyjSh5sjKqkJNo-BcRDiVcAJRM23gDF5Xu4OABvWX3-cgpwb0cdZ0Xx-RK3xomzSu2Qstn5pw
Goal to accomplish:
Install and Configure Meshery for EKS Cluster.
Reference links:
https://meshery.layer5.io/docs/installation/platforms/eks
https://github.com/layer5io/meshery/blob/master/docs/pages/installation/kubernetes.md
EDIT: I set the kube context as per your advice, but I am still not getting there:
[ec2-user@ip-10-0-0-43 ~]$ kubectl get users
Please enter Username:
[ec2-user@ip-10-0-0-43 ~]$ kubectl get ns
Please enter Username:
I have followed your steps and the instructions that you provided and I managed to reproduce your issue:
➜ ~ kubectl create namespace meshery
Error from server (Forbidden): namespaces is forbidden: User "system:serviceaccount:default:meshery" cannot create resource "namespaces" in API group "" at the cluster scope
Switching the context back does allow me to create that desired namespace which leads to conclusion that the meshery roles are not properly setup:
➜ ~ kubectl config set-context --current --user=minikube
Context "minikube" modified.
➜ ~ kubectl create namespace meshery
namespace/meshery created
After looking carefully into the issue, I found that the ClusterRole name referenced in the ClusterRoleBinding is incorrect: the service account is embedded inside the ClusterRole name:
➜ ~ kubectl get clusterrolebinding meshery-binding -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  managedFields:
  - apiVersion: rbac.authorization.k8s.io/v1
    manager: kubectl-create
    operation: Update
  name: meshery-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin--serviceaccount=default:meshery
This means the command from the documentation is written incorrectly, since there should be a space between cluster-admin and --serviceaccount=default:meshery.
kubectl create clusterrolebinding meshery-binding --clusterrole=cluster-admin\--serviceaccount=default:meshery
Once I have corrected the space:
kubectl create clusterrolebinding meshery-binding --clusterrole=cluster-admin --serviceaccount=default:meshery
You can see that the ClusterRoleBinding looks correct now:
➜ ~ kubectl get clusterrolebinding meshery-binding -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  managedFields:
  - apiVersion: rbac.authorization.k8s.io/v1
    manager: kubectl-create
    operation: Update
  name: meshery-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: meshery
  namespace: default
Now switching context to meshery works as expected:
➜ ~ kubectl config set-context --current --user=meshery
Context "minikube" modified.
➜ ~ kubectl create namespace meshery
namespace/meshery created
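The effect of that missing space can be reproduced without a cluster: the backslash merely escapes the following dash, so the shell hands kubectl a single glued word rather than a separate --serviceaccount flag. A quick local check:

```shell
# Print each argument the shell would pass, one per line
printargs() { for a in "$@"; do echo "arg: $a"; done; }

# Missing space: the backslash escapes the dash, yielding ONE argument
printargs cluster-admin\--serviceaccount=default:meshery
# arg: cluster-admin--serviceaccount=default:meshery

# With the space: two arguments, as intended
printargs cluster-admin --serviceaccount=default:meshery
# arg: cluster-admin
# arg: --serviceaccount=default:meshery
```

That glued word becomes the ClusterRole name, which is exactly what showed up in the broken ClusterRoleBinding above.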

K8s Dashboard not logging in (k8s version 1.11)

I built a K8s (1.11) cluster using the kubeadm tool, with one master and one node.
I applied the dashboard UI there:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Created service account (followed this link: https://github.com/kubernetes/dashboard/wiki/Creating-sample-user)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
and
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
Start kube proxy: kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
And access dashboard from remote host using this URL: http://<k8s master node IP>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
It asks for a token to log in. I got the token using this command: kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
After copying the token and applying it in the browser, it does not log in, and it does not show an authentication error either. Not sure what is wrong here. Is my token wrong, or is my kube proxy command wrong?
I recreated all the steps in accordance to what you've posted.
Turns out the issue is in the <k8s master node IP>, you should use localhost in this case. So to access the proper dashboard, you have to use:
http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
When you start kubectl proxy - you create a tunnel to your apiserver on the master node. By default, Dashboard is starting with ServiceType: ClusterIP. The Port on the master node in this mode is not open, and that is the reason you can't reach it on the 'master node IP'. If you would like to use master node IP, you have to change the ServiceType to NodePort.
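To check whether the Dashboard is reachable through the proxy at all, before involving a browser, a quick probe on the machine running kubectl proxy can help (a sketch; it assumes the proxy is running on the default port 8001):

```shell
# An HTTP 200 means the proxy can reach the Dashboard service; a connection
# error or non-2xx code points at the proxy/service, not at the login step.
curl -s -o /dev/null -w '%{http_code}\n' \
  http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```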
You have to delete the old service and update the config by changing service type to NodePort as in the example below (note that ClusterIP is not there because it is assumed by default).
Create a new YAML file named newservice.yaml:
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Delete the old service
kubectl delete service kubernetes-dashboard -n kube-system
Apply the new service
kubectl apply -f newservice.yaml
Run describe service
kubectl describe svc kubernetes-dashboard -n kube-system | grep "NodePort"
and you can use that port with the IP address of the master node
Type: NodePort
NodePort: <unset> 30518/TCP
http://<k8s master node IP>:30518/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Note that the port number is generated randomly and yours will be probably different.

Kubernetes dashboard is not deploying

I am trying to install kubernete-dashboard on my cluster.
I am running the below command:-
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Error:-
Error from server (BadRequest): error when creating "https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml": RoleBinding in version "v1" cannot be handled as a RoleBinding: no kind "RoleBinding" is registered for version "rbac.authorization.k8s.io/v1"
Any suggestion ?
You can try to create a service account in your cluster and an administrator user.
Use this file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
Create user
Create sample user (if using RBAC - on by default on new installs with kops / kubeadm):
kubectl create -f sample-user.yaml
Get login token:
kubectl -n kube-system get secret | grep admin-user
kubectl -n kube-system describe secret admin-user-token-<id displayed by previous command>
Login to dashboard
Run kubectl proxy
Go to http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
Login with user and pass
kubectl config view
Login: admin
Password: the password listed in ~/.kube/config (open the file in an editor and look for "password: ...")
Or choose the token option and enter the login token from the previous step.
Login with minikube
minikube dashboard --url