Annotation has apiVersion v1beta1 - kubernetes

I am planning to upgrade my Kubernetes clusters from 1.21 to 1.22. While going through the release notes I noticed that ClusterRole, RoleBinding and ClusterRoleBinding should use rbac.authorization.k8s.io/v1, as rbac.authorization.k8s.io/v1beta1 is deprecated.
Here is the output for one of my resources, rolebinding/test-rw. The apiVersion says rbac.authorization.k8s.io/v1, but the annotation says rbac.authorization.k8s.io/v1beta1. Why does the annotation have the v1beta1 version? Is it because the resource was initially deployed with the v1beta1 version and later updated to v1?
$ kubectl get RoleBinding/test-rw -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1beta1","kind":"RoleBinding","metadata":{"annotations":{},"name":"test-rw","namespace":"default"},"roleRef":{"apiGroup":"","kind":"ClusterRole","name":"admin"},"subjects":[{"apiGroup":"","kind":"Group","name":"test-rw"}]}
  creationTimestamp: "2017-08-18T11:40:22Z"
  name: test-rw
  namespace: default
  resourceVersion: "214"
  uid: f8a89do8-885f-11e9-8dd8-12afbb11be0c

You can use kubectl api-versions to check the available API versions, or kubectl explain pod to check the version of a resource.
The annotation stores the last applied configuration, but what you are seeing in apiVersion is the API server's preferred API version.
kubectl is the client, and it will show the API server's preferred version, or whichever version you explicitly request, for example:
kubectl get RoleBinding.v1beta1 test-rw -o yaml
So the API version you used when creating the RoleBinding does not determine what you see when fetching it with kubectl.
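For example, you can check which rbac API versions your cluster still serves (the output below is illustrative; the exact list depends on your cluster version):
$ kubectl api-versions | grep rbac
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1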

Service account secret is not listed. How to fix it?

I used kubectl create serviceaccount sa1 to create a service account. Then I used the kubectl get serviceaccount sa1 -o yaml command to get the service account info, but it returns the following:
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-05-16T08:03:50Z"
  name: sa1
  namespace: default
  resourceVersion: "19651"
  uid: fdddacba-be9d-4e77-a849-95ca243781cc
I need to get the
secrets:
- name: <secret>
part, but it doesn't return secrets. How do I fix it?
In Kubernetes 1.24, ServiceAccount token secrets are no longer automatically generated. See "Urgent Upgrade Notes" in the 1.24 changelog file:
The LegacyServiceAccountTokenNoAutoGeneration feature gate is beta, and enabled by default. When enabled, Secret API objects containing service account tokens are no longer auto-generated for every ServiceAccount. Use the TokenRequest API to acquire service account tokens, or if a non-expiring token is required, create a Secret API object for the token controller to populate with a service account token by following this guide. (#108309, #zshihang)
This means, in Kubernetes 1.24, you need to manually create the Secret; the token key in the data field will be automatically set for you.
apiVersion: v1
kind: Secret
metadata:
  name: sa1-token
  annotations:
    kubernetes.io/service-account.name: sa1
type: kubernetes.io/service-account-token
Since you're manually creating the Secret, you know its name and don't need to look it up in the ServiceAccount object.
This approach should work fine in earlier versions of Kubernetes too.
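For completeness, once the token controller has populated the Secret you can read the token out of it; on 1.24+ you can also request a short-lived token via the TokenRequest API with kubectl create token (sa1-token is the Secret name from the manifest above):
$ kubectl get secret sa1-token -o jsonpath='{.data.token}' | base64 --decode
$ kubectl create token sa1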

What's the difference between "kubectl auth reconcile" and "kubectl apply" for working with RBAC?

I have some average yaml file defining some average role resource; all the yaml should reflect my resource's desired state.
To get the new average role into the cluster I usually run kubectl apply -f my-new-role.yaml,
but now I see this (recommended!?) alternative: kubectl auth reconcile -f my-new-role.yaml.
OK, there may be RBAC relationships, i.e. Bindings, but shouldn't an apply do the same thing?
Is there ever a case where one would update (cluster) roles but not want their related (cluster) bindings updated?
The kubectl auth reconcile command-line utility was added in Kubernetes v1.8.
Properly applying RBAC permissions is a complex task because you need to compute logical covers operations between rule sets.
As you can see in the CHANGELOG-1.8.md:
Added RBAC reconcile commands with kubectl auth reconcile -f FILE. When passed a file which contains RBAC roles, rolebindings, clusterroles, or clusterrolebindings, this command computes covers and adds the missing rules. The logic required to properly apply RBAC permissions is more complicated than a JSON merge because you have to compute logical covers operations between rule sets. This means that we cannot use kubectl apply to update RBAC roles without risking breaking old clients, such as controllers.
The kubectl auth reconcile command will ignore any resources that are not Role, RoleBinding, ClusterRole, and ClusterRoleBinding objects, so you can safely run reconcile on the full set of manifests (see: Use 'kubectl auth reconcile' before 'kubectl apply')
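For example, the -f flag also accepts a directory, so you could run reconcile over a whole manifest tree (manifests/ is a hypothetical directory name):
$ kubectl auth reconcile -f manifests/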
I've created an example to demonstrate how useful the kubectl auth reconcile command is.
I have a simple secret-reader RoleBinding and I want to change a binding's roleRef (I want to change the Role that this binding refers to):
NOTE: A binding to a different role is a fundamentally different binding (see: A binding to a different role is a fundamentally different binding).
# BEFORE
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-reader
subjects:
- kind: ServiceAccount
  name: service-account-1
  namespace: default
# AFTER
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-creator
subjects:
- kind: ServiceAccount
  name: service-account-1
  namespace: default
As we know, roleRef is immutable, so it is not possible to update this secret-admin RoleBinding using kubectl apply:
$ kubectl apply -f secret-admin.yml
The RoleBinding "secret-admin" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"Role", Name:"secret-creator"}: cannot change roleRef
Instead, we can use kubectl auth reconcile. If a RoleBinding is updated to a new roleRef, the kubectl auth reconcile command handles deleting and recreating the related objects for us:
$ kubectl auth reconcile -f secret-admin.yml
rolebinding.rbac.authorization.k8s.io/secret-admin reconciled
reconciliation required recreate
Additionally, you can use the --remove-extra-permissions and --remove-extra-subjects options.
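A sketch with the example file from above, asking reconcile to also prune permissions and subjects that are not present in the manifest:
$ kubectl auth reconcile -f secret-admin.yml --remove-extra-permissions --remove-extra-subjects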
Finally, we can check if everything has been successfully updated:
$ kubectl describe rolebinding secret-admin
Name:         secret-admin
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  Role
  Name:  secret-creator
Subjects:
  Kind            Name               Namespace
  ----            ----               ---------
  ServiceAccount  service-account-1  default

rbac.authorization.k8s.io/v1beta1 is deprecated warning

root@vsr-kub005:~# kubectl apply -f gitlab-admin-service-account.yaml
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+;
What will happen if I apply v1 instead of v1beta1?
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+;
The warning is self-explanatory. In the rbac.authorization.k8s.io/v1beta1 API, the ClusterRoleBinding object is deprecated and you won't be able to use it in k8s version v1.22+.
The v1 API is the stable version, and you should always use the stable version if possible.
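For this particular file, only the apiVersion line needs to change. A minimal sketch, assuming gitlab-admin-service-account.yaml contains the commonly used binding of a gitlab service account to cluster-admin (the metadata names here are assumptions based on the file name):
apiVersion: rbac.authorization.k8s.io/v1  # was rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: gitlab-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: gitlab-admin     # assumed ServiceAccount name
  namespace: kube-system # assumed namespace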

How to get kubernetes applications to change deploy configs

I have two applications running in K8s. APP A has write access to a data store and APP B has read access.
APP A needs to be able to change APP B's running deployment.
Currently we do this manually by kicking off a process in APP A which adds a new DB to the data store (say, db bob). Then we run:
kubectl edit deploy A
and change an environment variable to bob. This starts a rolling restart of all the pods of APP B. We would like to automate this process.
Is there any way to get APP A to change the deployment config of APP B in K8s?
Firstly, answering your main question:
"Is there any way to get a service to change the deployment config of another service in k8?"
From my understanding you are calling them Service A and B for their purpose in real life, but to facilitate understanding I suggested an edit to call them APP A and APP B, because:
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).
So if in your question you meant:
"Is there any way to get APP A to change the deployment config of APP B in k8?"
Then yes: you can give a pod admin privileges to manage other components of the cluster and use the kubectl set env command to change/add envs.
In order to achieve this, you will need:
A ServiceAccount with the needed permissions in the namespace.
NOTE: In my example below, since I don't know whether you are working with multiple namespaces, I'm using a ClusterRole, granting cluster-admin to a specific service account. If you use only one namespace for these apps, consider a Role instead.
A ClusterRoleBinding binding the permissions of the service account to a role of the cluster.
The kubectl client inside the pod (added manually or by modifying the Docker image) on APP A.
Steps to Reproduce:
Create a deployment to apply the cluster-admin privileges; I'm naming it manager-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: manager-deploy
  labels:
    app: manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: manager
  template:
    metadata:
      labels:
        app: manager
    spec:
      serviceAccountName: k8s-role
      containers:
      - name: manager
        image: gcr.io/google-samples/node-hello:1.0
Create a deployment with an environment variable, mocking your APP B. I'm naming it deploy-env.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: env-deploy
  labels:
    app: env-replace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: env-replace
  template:
    metadata:
      labels:
        app: env-replace
    spec:
      serviceAccountName: k8s-role
      containers:
      - name: env-replace
        image: gcr.io/google-samples/node-hello:1.0
        env:
        - name: DATASTORE_NAME
          value: "john"
Create a ServiceAccount and a ClusterRoleBinding with cluster-admin privileges. I'm naming it service-account-for-pod.yaml (notice it's referenced by serviceAccountName in manager-deploy.yaml):
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-role
subjects:
- kind: ServiceAccount
  name: k8s-role
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-role
Apply service-account-for-pod.yaml, deploy-env.yaml and manager-deploy.yaml, and list the current environment variables from the deploy-env pod:
$ kubectl apply -f manager-deploy.yaml
deployment.apps/manager-deploy created
$ kubectl apply -f deploy-env.yaml
deployment.apps/env-deploy created
$ kubectl apply -f service-account-for-pod.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-role created
serviceaccount/k8s-role created
$ kubectl exec -it env-deploy-fbd95bb94-hcq75 -- printenv
DATASTORE_NAME=john
Shell into the manager pod, download the kubectl binary and run kubectl set env deployment/deployment_name VAR_NAME=VALUE:
$ kubectl exec -it manager-deploy-747c9d5bc8-p684s -- /bin/bash
root@manager-deploy-747c9d5bc8-p684s:/# curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# chmod +x ./kubectl
root@manager-deploy-747c9d5bc8-p684s:/# mv ./kubectl /usr/local/bin/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# kubectl set env deployment/env-deploy DATASTORE_NAME=bob
Verify the env var value on the pod (notice that the pod is recreated when the deployment is modified):
$ kubectl exec -it env-deploy-7f565ffc4-t46zc -- printenv
DATASTORE_NAME=bob
Let me know in the comments if you have any doubts about how to apply this solution to your environment.
You could give APP A access to your cluster (install kubectl and allow traffic from the NAT of APP A to your cluster master) and execute your commands via some cron jobs, Jenkins, ssh or the like. You can also use kubectl patch, or get the current config of the second deployment with kubectl get deployment <name> -o yaml --export > deployment.yaml, edit it with some regex/awk/sed and then apply it again; although, since the --export method is being deprecated, you might as well have APP A download the Git repo and apply the new config that way.
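A sketch of the kubectl patch route, reusing the env-deploy/env-replace names from the other answer (a strategic merge patch on the container's env triggers the same rolling restart):
$ kubectl patch deployment env-deploy -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"env-replace","env":[{"name":"DATASTORE_NAME","value":"bob"}]}]}}}}'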
Thank you for the answers, all (upvoted, as they were both correct). I am just adding my own answer to document exactly what solved it for me.
In my case I just needed to make use of the patch URL available in K8s. That plus this example worked.
All I needed to do was create a service account to restrict who can patch where, restrict that account to Service A, and use the Java client in Service A to update the chart of Service B. After that the pods would roll and it was done.
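For reference, the patch URL mentioned here can also be exercised directly from inside a pod with curl instead of the Java client; a minimal sketch, assuming the standard in-cluster service account token paths and the env-deploy deployment from the other answer:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  -X PATCH \
  https://kubernetes.default.svc/apis/apps/v1/namespaces/default/deployments/env-deploy \
  -d '{"spec":{"template":{"spec":{"containers":[{"name":"env-replace","env":[{"name":"DATASTORE_NAME","value":"bob"}]}]}}}}'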

Kubernetes RBAC permissions - unknown 'clusterrole' flag when attempting to grant permissions?

I am using the Mirantis kubeadm-dind-cluster repository (https://github.com/Mirantis/kubeadm-dind-cluster) as my Kubernetes install; I came across this error when attempting to run a container:
panic: customresourcedefinitions.apiextensions.k8s.io is forbidden: User "system:serviceaccount:default:default" cannot create customresourcedefinitions.apiextensions.k8s.io at the cluster scope
So I attempted to add cluster-admin permissions to my account:
kubectl create clusterrolebinding serviceaccounts-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts
And get the following error:
Error: unknown flag: --clusterrole
Why is this? How do I fix this or get around it? I'm not sure how to convert the command into a YAML file for "kubectl create -f", but it seems like that might be the way to go.
All three nodes are on version 1.8.6.
What version of kubectl are you using? Be sure you are using a version that includes the kubectl create clusterrolebinding command.
If your version of kubectl does not support that command, you can try creating it directly via a yaml file (though I'm not sure whether 1.5.x kubectl was happy submitting versions of API objects it didn't know about):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: serviceaccounts-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
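Saved as, say, serviceaccounts-cluster-admin.yaml (a hypothetical file name), it can then be submitted and verified with commands that older kubectl clients do support:
$ kubectl create -f serviceaccounts-cluster-admin.yaml
$ kubectl get clusterrolebinding serviceaccounts-cluster-admin -o yaml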