Kubernetes apiVersion for a Resource kind

How can I get the apiVersion for a resource kind using kubectl?
Example:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1 (the complete string)
I tried kubectl api-resources and kubectl api-versions, but could not find a complete mapping. Is there any way to combine the output of those commands to get the complete string for each resource kind? Or maybe there is some other command.

I'm not aware of a kubectl command that can provide a mapping between all resources in a cluster - built-in and custom, used or not - and their API Groups and group versions (apiVersion: $GROUP_NAME/$VERSION).
If using curl and jq is an option, the following one-liner will provide such mapping:
for v in `curl -ks https://<k8s-master>:<port>/apis | jq -r .groups[].versions[].groupVersion`; do for r in `curl -ks "https://<k8s-master>:<port>/apis/${v}" | jq -r '.resources[]?.kind' | sort -u`; do echo ${r} - ${v}; done ; done
A few explanations:
replace <k8s-master>:<port> with the name/IP and port of the Kubernetes API server (master).
break it into multiple lines for better readability - see the reformatted version after this list.
-k in the curl command is to trust any cert.
-s in the curl command is for silent output, i.e. no progress output.
-r in the jq command is for raw output, i.e. strings are printed without double quotes.
change echo ${r} - ${v} to format the mapping as desired.
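The same loop broken into multiple lines:
for v in $(curl -ks https://<k8s-master>:<port>/apis | jq -r '.groups[].versions[].groupVersion'); do
  for r in $(curl -ks "https://<k8s-master>:<port>/apis/${v}" | jq -r '.resources[]?.kind' | sort -u); do
    echo "${r} - ${v}"
  done
done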
Notice that the above doesn't handle api/v1. It is a legacy API Group and its resources are now also under named groups - see https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-groups
Partial output for a Kubernetes cluster:
APIService - apiregistration.k8s.io/v1
APIService - apiregistration.k8s.io/v1beta1
DaemonSet - extensions/v1beta1
Deployment - extensions/v1beta1
DeploymentRollback - extensions/v1beta1
...
Role - rbac.authorization.k8s.io/v1
RoleBinding - rbac.authorization.k8s.io/v1
ClusterRole - rbac.authorization.k8s.io/v1beta1
ClusterRoleBinding - rbac.authorization.k8s.io/v1beta1
Role - rbac.authorization.k8s.io/v1beta1
RoleBinding - rbac.authorization.k8s.io/v1beta1
...

You can try
kubectl get roles --all-namespaces -o jsonpath='{.items[*].apiVersion}'
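If you also want to see the namespace and name alongside the apiVersion of each object, a jsonpath range works too (a sketch):
kubectl get roles --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name} {.apiVersion}{"\n"}{end}'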

Related

Kubernetes: cannot see network policies created with calico

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
By using kubectl get networkpolicy, I can see only the policies created by networking.k8s.io/v1 and not those created by projectcalico.org/v3. Any suggestions on how to see the latter?
kubectl get <kind> does not display all the resources in the cluster; in your case, you cannot see the CRD-based resources that way.
You can find your object with kubectl get crds
Then kubectl get <crd name> -A
In your case it would be something like the following (kubectl get takes a resource name, not a group/version string; the exact name comes from the kubectl get crds output):
# Get all the resources of the desired CRD type
kubectl get networkpolicies.crd.projectcalico.org -A
# Now grab the desired name and do whatever you want with it
kubectl describe <crd name>/<resource name> -n <namespace>
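To discover the exact resource names first, either of these works (the group names are assumptions based on Calico's CRD naming):
kubectl get crds | grep projectcalico
kubectl api-resources --api-group=crd.projectcalico.org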

patch a configmap from file or with json

I want to edit the aws-auth configmap during a Vagrant deployment to give my Vagrant user access to the EKS cluster. I need to add a snippet into the existing aws-auth configmap. How do I do this programmatically?
If you do a kubectl edit -n kube-system configmap/aws-auth you get
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::123:role/nodegroup-abc123
      username: system:node:{{EC2PrivateDNSName}}
kind: ConfigMap
metadata:
  creationTimestamp: "2019-05-30T03:00:18Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "19055217"
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: 0000-0000-0000
I need to enter this bit in there somehow.
mapUsers: |
  - userarn: arn:aws:iam::123:user/sergeant-poopie-pants
    username: sergeant-poopie-pants
    groups:
      - system:masters
I've tried to do a cat <<EOF > {file} ... EOF and then patch from the file, but that option doesn't exist in patch, only in the create context.
I also found this: How to patch a ConfigMap in Kubernetes
but it didn't seem to work, or perhaps I didn't really understand the proposed solutions.
There are a few ways to automate things. The direct way would be kubectl get configmap -o yaml ... > cm.yml && patch ... < cm.yml > cm2.yml && kubectl apply -f cm2.yml, or something like that. You might want to use a script that parses and modifies the YAML data rather than a literal patch, to make it less brittle. You could also do something like EDITOR="myeditscript" kubectl edit configmap ..., but that's more clever than I would want to be.
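A concrete sketch of that first flow (the file name is mine):
kubectl get configmap aws-auth -n kube-system -o yaml > cm.yml
# edit cm.yml here, ideally with a YAML-aware tool or script rather than a literal patch
kubectl apply -f cm.yml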
First, note that mapRoles and mapUsers are actually treated as strings, even though they contain structured data (YAML).
While this problem is solvable by jsonpatch, it is much easier using jq and kubectl apply like this:
kubectl get cm aws-auth -n kube-system -o json \
  | jq --arg add "`cat add.yaml`" '.data.mapUsers = $add' \
  | kubectl apply -f -
Where add.yaml is something like this (notice the lack of extra indentation):
- userarn: arn:aws:iam::123:user/sergeant-poopie-pants
  username: sergeant-poopie-pants
  groups:
    - system:masters
See also https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html for more information.
Here is a kubectl patch one-liner for patching the aws-auth configmap:
kubectl patch configmap -n kube-system aws-auth -p '{"data":{"mapUsers":"[{\"userarn\": \"arn:aws:iam::0000000000000:user/john\", \"username\": \"john\", \"groups\": [\"system:masters\"]}]"}}'
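If the inline escaping gets unwieldy, newer kubectl releases (1.21+, if I remember correctly) can also read the patch from a file via --patch-file:
cat <<'EOF' > aws-auth-patch.json
{"data":{"mapUsers":"[{\"userarn\": \"arn:aws:iam::0000000000000:user/john\", \"username\": \"john\", \"groups\": [\"system:masters\"]}]"}}
EOF
kubectl patch configmap -n kube-system aws-auth --patch-file aws-auth-patch.json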

How to edit configmap in kubernetes and override the values from a different yaml file?

I want to edit the configmap and replace the values, but it should be done using a different YAML file in which I'll specify the overriding values.
I was trying kubectl edit cm -f replace.yaml but this didn't work, so I want to know what structure the new file should have.
apiVersion: v1
kind: ConfigMap
metadata:
  name: int-change-change-management-service-configurations
data:
  should_retain_native_dn: "False"
  NADC_IP: "10.11.12.13"
  NADC_USER: "omc"
  NADC_PASSWORD: "hello"
  NADC_PORT: "991"
  plan_compare_wait_time: "1"
  plan_prefix: ""
  ingress_ip: "http://10.12.13.14"
Now let us assume NADC_IP should be changed. How should the YAML file be structured, and which command should be used to apply it?
The override should take place only during helm test, for example when I run
helm test <release-name>
kubectl replace -f replace.yaml
If you have a configmap in place like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  should_retain_native_dn: "False"
  NADC_IP: "10.11.12.13"
and you want to change the value of NADC_IP create a manifest file like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  should_retain_native_dn: "False"
  NADC_IP: "12.34.56.78" # the new IP
and run kubectl replace -f replace.yaml
To update a variable in a configmap you need to take two steps:
First, update the value of the variable:
kubectl create configmap <name_of_configmap> --from-literal=<var_name>=<new_value> -o yaml --dry-run | kubectl replace -f -
(On newer kubectl the flag is spelled --dry-run=client. Also note that this builds the configmap from only the literals you pass, so include every key you want to keep.)
So in your case it will look like this:
kubectl create configmap int-change-change-management-service-configurations --from-literal=NADC_IP=<new_value> -o yaml --dry-run | kubectl replace -f -
Second step, restart the pod:
kubectl delete pod <pod_name>
The app will use the new value from now on. Let me know if it works for you.
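If the pod is managed by a Deployment, a gentler alternative to deleting it by hand (available since kubectl 1.15) is to trigger a rolling restart:
kubectl rollout restart deployment/<deployment_name>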
You can try this; it will give you the YAML format:
kubectl get configmap int-change-change-management-service-configurations -o yaml
You can copy the content, replace it inside a new YAML file, and apply the changes.
EDIT 1:
If you want to edit over the terminal, you can run
kubectl edit configmap {configmap name}
It will use the vim editor, and you can replace values from the terminal using edit commands.
EDIT 2:
To dump the configmap to a file:
kubectl get cm {configmap name} -o=yaml --export > filename.yaml
(Note that --export is deprecated and was removed in kubectl 1.18.)

How to sign in kubernetes dashboard?

I just upgraded kubeadm and kubelet to v1.8.0 and installed the dashboard following the official document.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
After that, I started the dashboard by running
$ kubectl proxy --address="192.168.0.101" -p 8001 --accept-hosts='^*$'
Then, fortunately, I was able to access the dashboard through http://192.168.0.101:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
I was redirected to a login page which I had never seen before.
It looks like there are two ways of authentication.
I tried to upload the /etc/kubernetes/admin.conf as the kubeconfig but got failed. Then I tried to use the token I got from kubeadm token list to sign in but failed again.
The question is how I can sign in to the dashboard. It looks like they added a lot more security mechanisms than before. Thanks.
As of release 1.7 Dashboard supports user authentication based on:
Authorization: Bearer <token> header passed in every request to Dashboard. Supported from release 1.6. Has the highest priority. If present, login view will not be shown.
Bearer Token that can be used on Dashboard login view.
Username/password that can be used on Dashboard login view.
Kubeconfig file that can be used on Dashboard login view.
— Dashboard on Github
Token
Here the token can be a Static Token, a Service Account Token, or an OpenID Connect Token from Kubernetes Authenticating, but not a kubeadm Bootstrap Token.
With kubectl, we can get a service account token (e.g. for the deployment controller) created in Kubernetes by default.
$ kubectl -n kube-system get secret
# All secrets with type 'kubernetes.io/service-account-token' will allow to log in.
# Note that they have different privileges.
NAME                                 TYPE                                  DATA   AGE
deployment-controller-token-frsqj   kubernetes.io/service-account-token   3      22h
$ kubectl -n kube-system describe secret deployment-controller-token-frsqj
Name:         deployment-controller-token-frsqj
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=deployment-controller
              kubernetes.io/service-account.uid=64735958-ae9f-11e7-90d5-02420ac00002
Type:         kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZXBsb3ltZW50LWNvbnRyb2xsZXItdG9rZW4tZnJzcWoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVwbG95bWVudC1jb250cm9sbGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjQ3MzU5NTgtYWU5Zi0xMWU3LTkwZDUtMDI0MjBhYzAwMDAyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRlcGxveW1lbnQtY29udHJvbGxlciJ9.OqFc4CE1Kh6T3BTCR4XxDZR8gaF1MvH4M3ZHZeCGfO-sw-D0gp826vGPHr_0M66SkGaOmlsVHmP7zmTi-SJ3NCdVO5viHaVUwPJ62hx88_JPmSfD0KJJh6G5QokKfiO0WlGN7L1GgiZj18zgXVYaJShlBSz5qGRuGf0s1jy9KOBt9slAN5xQ9_b88amym2GIXoFyBsqymt5H-iMQaGP35tbRpewKKtly9LzIdrO23bDiZ1voc5QZeAZIWrizzjPY5HPM1qOqacaY9DcGc7akh98eBJG_4vZqH2gKy76fMf0yInFTeNKr45_6fWt8gRM77DQmPwb3hbrjWXe1VvXX_g
Kubeconfig
The dashboard needs the user in the kubeconfig file to have either username & password or token, but admin.conf only has client-certificate. You can edit the config file to add the token that was extracted using the method above.
$ kubectl config set-credentials cluster-admin --token=bearer_token
Alternative (Not recommended for Production)
Here are two ways to bypass the authentication, but use with caution.
Deploy dashboard with HTTP
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
Dashboard can be loaded at http://localhost:8001/ui with kubectl proxy.
Granting admin privileges to Dashboard's Service Account
$ cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF
Afterwards you can use the Skip option on the login page to access the Dashboard.
If you are using dashboard version v1.10.1 or later, you must also add --enable-skip-login to the deployment's command line arguments. You can do so by adding it to the args in kubectl edit deployment/kubernetes-dashboard --namespace=kube-system.
Example:
containers:
- args:
  - --auto-generate-certificates
  - --enable-skip-login # <-- add this line
  image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
TL;DR
To get the token in a single one-liner:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
This assumes that your ~/.kube/config is present and valid. And also that kubectl config get-contexts indicates that you are using the correct context (cluster and namespace) for the dashboard you are logging into.
Explanation
I derived this answer from what I learned from #silverfox's answer. That is a very informative write-up. Unfortunately, it falls short of telling you how to actually put the information into practice. Maybe I've been doing DevOps too long, but I think in shell; it's much more difficult for me to learn or teach in English.
Here is that one-liner with line breaks and indents for readability:
kubectl -n kube-system describe secret $(
  kubectl -n kube-system get secret | \
  awk '/^deployment-controller-token-/{print $1}'
) | \
awk '$1=="token:"{print $2}'
There are 4 distinct commands and they get called in this order:
Line 2 - This is the first command from #silverfox's Token section.
Line 3 - Print only the first field of the line beginning with deployment-controller-token- (which is the secret name)
Line 1 - This is the second command from #silverfox's Token section.
Line 5 - Print only the second field of the line whose first field is "token:"
If you don't want to grant admin permission to the dashboard service account, you can create a cluster admin service account.
$ kubectl create serviceaccount cluster-admin-dashboard-sa
$ kubectl create clusterrolebinding cluster-admin-dashboard-sa \
--clusterrole=cluster-admin \
--serviceaccount=default:cluster-admin-dashboard-sa
And then, you can use the token of just created cluster admin service account.
$ kubectl get secret | grep cluster-admin-dashboard-sa
cluster-admin-dashboard-sa-token-6xm8l kubernetes.io/service-account-token 3 18m
$ kubectl describe secret cluster-admin-dashboard-sa-token-6xm8l
I quoted it from giantswarm guide - https://docs.giantswarm.io/guides/install-kubernetes-dashboard/
Combining two answers, 49992698 and 47761914:
# Create service account
kubectl create serviceaccount -n kube-system cluster-admin-dashboard-sa
# Bind ClusterAdmin role to the service account
kubectl create clusterrolebinding -n kube-system cluster-admin-dashboard-sa \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:cluster-admin-dashboard-sa
# Parse the token
TOKEN=$(kubectl describe secret -n kube-system $(kubectl get secret -n kube-system | awk '/^cluster-admin-dashboard-sa-token-/{print $1}') | awk '$1=="token:"{print $2}')
You need to follow these steps before using token authentication:
Create a Cluster Admin service account
kubectl create serviceaccount dashboard -n default
Add the cluster binding rules to your dashboard account
kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard
Get the secret token with this command
kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
Choose token authentication on the Kubernetes dashboard login page.
Now you will be able to log in.
A self-explanatory, simple one-liner to extract the token for Kubernetes dashboard login:
kubectl describe secret -n kube-system | grep deployment -A 12
Copy the token and paste it into the Kubernetes dashboard under the token sign-in option, and you are good to use the Kubernetes dashboard.
All the previous answers are good to me. But a straightforward answer on my side would come from https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md. Just use kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}'). You will see many values for some keys (Name, Namespace, Labels, ..., token). The most important is the token that corresponds to your name. Copy that token and paste it into the token box. Hope this helps.
You can get the token:
kubectl describe secret -n kube-system | grep deployment -A 12
Take the token value, which is something like
token: eyJhbGciOiJSUzI1NiIsI...
Use port-forward to the kubernetes-dashboard service:
kubectl port-forward -n kubernetes-dashboard service/kubernetes-dashboard 8080:443 --address='0.0.0.0'
Access the site using:
https://<IP-of-Master-node>:8080/
Provide the token when asked.
Note the https in the URL. I tested the site on Firefox, because with recent updates Google Chrome has become strict about not allowing traffic from unknown SSL certificates.
Also note that port 8080 should be open on the master node's VM.
However, if you are on Kubernetes 1.24 or later, creating a service account no longer generates a token automatically; instead, you should use the following command.
kubectl -n kubernetes-dashboard create token admin-user
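The generated token is short-lived by default. For testing, kubectl create token also accepts a --duration flag (the API server may cap the actual lifetime), e.g.:
kubectl -n kubernetes-dashboard create token admin-user --duration=24h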
This is finally what works now (2023).
Create two files: create-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
and create-cluster-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
then run
kubectl apply -f create-service-account.yaml
kubectl apply -f create-cluster-role-binding.yaml
kubectl -n kubernetes-dashboard create token admin-user
For the latest updates, please check
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
Skip login has been disabled by default due to security issues: https://github.com/kubernetes/dashboard/issues/2672
To get it back, add this arg to your dashboard YAML:
- --enable-skip-login
Download
https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
and add type: NodePort to the Service. Then run this command:
kubectl apply -f kubernetes-dashboard.yaml
Find the exposed port with the command:
kubectl get services -n kube-system
You should be able to get the dashboard at http://hostname:exposedport/ with no authentication.
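For reference, the edited Service section would look roughly like this (a sketch; the port numbers are quoted from memory of the HTTP variant of the manifest):
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort # <-- added
  ports:
    - port: 80
      targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard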
An alternative way to obtain the kubernetes-dashboard token:
kubectl -n kubernetes-dashboard get secret -o=jsonpath='{.items[?(@.metadata.annotations.kubernetes\.io/service-account\.name=="kubernetes-dashboard")].data.token}' | base64 --decode
Explanation:
Get all the secrets in the kubernetes-dashboard namespace.
Look at the items array, and match for: metadata -> annotations -> kubernetes.io/service-account.name == kubernetes-dashboard.
Print data -> token.
Decode the content. (If you perform kubectl describe secret, the token is already decoded.)
For versions 1.26.0/1.26.1 in 2023:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
kubectl create serviceaccount admin-user -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin -n kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user
kubectl -n kubernetes-dashboard create token admin-user
The newest guide: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

RBAC - Limit access for one service account

I want to limit the permissions of the following service account, which I created as follows:
kubectl create serviceaccount alice --namespace default
secret=$(kubectl get sa alice -o json | jq -r .secrets[].name)
kubectl get secret $secret -o json | jq -r '.data["ca.crt"]' | base64 -d > ca.crt
user_token=$(kubectl get secret $secret -o json | jq -r '.data["token"]' | base64 -d)
c=`kubectl config current-context`
name=`kubectl config get-contexts $c | awk '{print $3}' | tail -n 1`
endpoint=`kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}"`
kubectl config set-cluster cluster-staging \
--embed-certs=true \
--server=$endpoint \
--certificate-authority=./ca.crt
kubectl config set-credentials alice-staging --token=$user_token
kubectl config set-context alice-staging \
--cluster=cluster-staging \
--user=alice-staging \
--namespace=default
kubectl config get-contexts
#kubectl config use-context alice-staging
This has permission to see everything with:
kubectl --context=alice-staging get pods --all-namespaces
I try to limit it with the following, but it still has all the permissions:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: no-access
rules:
- apiGroups: [""]
  resources: [""]
  verbs: [""]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: no-access-role
subjects:
- kind: ServiceAccount
  name: alice
  namespace: default
roleRef:
  kind: ClusterRole
  name: no-access
  apiGroup: rbac.authorization.k8s.io
The idea is to limit access to a namespace so I can distribute tokens to users, but I cannot get it to work... I think it may be due to inherited permissions, but I cannot disable those for a single service account.
Using: GKE, container-vm
THX!
Note that service accounts are not meant for users, but for processes running inside pods (https://kubernetes.io/docs/admin/service-accounts-admin/).
In Create user in Kubernetes for kubectl you can find how to create a user account for your K8s cluster.
Moreover, I advise you to check whether RBAC is actually enabled in your cluster, which could explain why a user can do more operations than expected. Note also that RBAC is purely additive: binding a role with empty rules does not take any permissions away, so the service account keeps whatever access it is granted elsewhere (on GKE, legacy ABAC may also be enabled alongside RBAC).
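For illustration, restricting by granting only what is needed looks like this - a minimal sketch (names are mine) that gives the alice service account read-only access to pods in the default namespace:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: alice-pod-reader
  namespace: default
subjects:
- kind: ServiceAccount
  name: alice
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io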