How to update the api-versions list in Kubernetes

I am trying to use the "autoscaling/v2beta2" apiVersion in my configuration, following this tutorial. I am on Google Kubernetes Engine (GKE).
However I get this error:
error: unable to recognize "backend-hpa.yaml": no matches for kind "HorizontalPodAutoscaler" in version "autoscaling/v2beta2"
When I list the available api-versions:
$ kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
certmanager.k8s.io/v1alpha1
cloud.google.com/v1beta1
coordination.k8s.io/v1beta1
custom.metrics.k8s.io/v1beta1
extensions/v1beta1
external.metrics.k8s.io/v1beta1
internal.autoscaling.k8s.io/v1alpha1
metrics.k8s.io/v1beta1
networking.gke.io/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scalingpolicy.kope.io/v1alpha1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
So indeed I am missing autoscaling/v2beta2.
Then I check my kubernetes version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.6", GitCommit:"abdda3f9fefa29172298a2e42f5102e777a8ec25", GitTreeState:"clean", BuildDate:"2019-05-08T13:53:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.6-gke.13", GitCommit:"fcbc1d20b6bca1936c0317743055ac75aef608ce", GitTreeState:"clean", BuildDate:"2019-06-19T20:50:07Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}
So it looks like I am running 1.13.6, and autoscaling/v2beta2 has supposedly been available since 1.12.
So why is it not available for me?

Unfortunately the HPA autoscaling API v2beta2 is not available on GKE yet. It can be freely used with kubeadm and Minikube.
There is already an open issue on the Google Issue Tracker: https://issuetracker.google.com/135624588
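In the meantime, the same autoscaler can usually be expressed with autoscaling/v2beta1, which does appear in your api-versions output. A minimal sketch for a CPU-based HPA (the target Deployment name "backend" is assumed, not taken from the original post):

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend                    # assumed Deployment name
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80   # v2beta1 field; v2beta2 uses target.averageUtilization instead

Features that exist only in v2beta2 (such as scaling behavior policies) have no v2beta1 equivalent, so this only helps for metric-based configurations.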

Related

Unable to deploy emissary-ingress in local kubernetes cluster. Fails with `error validating data: ValidationError(CustomResourceDefinition.spec)`

I'm trying to install emissary-ingress using the instructions here.
It started failing with the error no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta". I searched and found an answer on Stack Overflow which said to update apiextensions.k8s.io/v1beta1 to apiextensions.k8s.io/v1, which I did.
It also asked me to use admissionregistration.k8s.io/v1, which my kubectl already uses.
When I ran the kubectl apply -f filename.yml command, the earlier error was gone and a new one appeared: error: error validating data: ValidationError(CustomResourceDefinition.spec): unknown field "validation" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec;
What should I do next?
My kubectl version - Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:16:05Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:32:41Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
minikube version - minikube version: v1.23.2
commit: 0a0ad764652082477c00d51d2475284b5d39ceed
EDIT:
The custom resource definition yml file: here
The rbac yml file: here
The validation field was removed in apiextensions.k8s.io/v1.
According to the official Kubernetes documentation, you should use schema in place of validation.
Here is a sample using schema instead of validation:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true
    storage: true
    schema: # <--- schema instead of validation
      # openAPIV3Schema is the schema for validating custom objects.
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
                pattern: '^(\d+|\*)(/\d+)?(\s+(\d+|\*)(/\d+)?){4}$'
              image:
                type: string
              replicas:
                type: integer
                minimum: 1
                maximum: 10
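As a quick check after restructuring the manifest, a server-side dry run lets the API server validate the new layout without creating anything (the --dry-run=server flag is available in the kubectl versions shown above); this is an extra suggestion, not part of the documented sample:

kubectl apply --dry-run=server -f filename.yml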

Two Kubernetes clusters act differently for RBAC

I have created an application that needs to list, create, update and delete various Kubernetes resources, and I created a ClusterRole for it as shown below. Everything works fine on my local cluster running on MicroK8s, but when I deployed the application to a bare-metal cluster with the same Kubernetes version, I got errors saying I don't have the proper access.
How is this possible (both should act the same), and is there a way to find these errors in advance?
My ClusterRole:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: {{ .Release.Namespace }}-cluster-manager-role
rules:
- apiGroups: ["", "apps", "core", "autoscaling"] # --> I was getting an error that I could not create an HPA, but after adding "autoscaling" to the apiGroups I now can
  resources: ["*", "namespaces"]
  verbs: ["get", "watch", "list", "patch", "create", "delete", "update"]
# ================
# Current ClusterRole on MicroK8s (which allows me to do all of these things)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: "2021-05-31T12:05:58Z"
  name: default-cluster-manager-role
  resourceVersion: "937643"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/default-cluster-manager-role
  uid: 16fb63d6-1261-48a9-bc7f-5c8fffb72c9d
rules:
- apiGroups:
  - ""
  - apps
  - core
  resources:
  - '*'
  - namespaces
  verbs:
  - get
  - watch
  - list
  - patch
  - create
  - delete
  - update
Kubernetes version:
# Microk8s
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.15", GitCommit:"2adc8d7091e89b6e3ca8d048140618ec89b39369", GitTreeState:"clean", BuildDate:"2020-09-02T11:31:21Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
# Bare-metal
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.15", GitCommit:"2adc8d7091e89b6e3ca8d048140618ec89b39369", GitTreeState:"clean", BuildDate:"2020-09-02T11:31:21Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Some of the errors that I got:
time="2021-06-22T08:45:31Z" level=error msg="Getting list of PVCs for namespace wws-test failed." func=src/k8s.CreateClusterRole file="/src/k8s/k8s.go:1304"
time="2021-06-22T08:45:31Z" level=error msg="clusterroles.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:wws:wws-cluster-manager-sa\" cannot create resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope" func=src/k8s.CreateClusterRole file="/src/k8s/k8s.go:1305"
time="2021-06-22T08:45:31Z" level=error msg="Getting list of PVCs for namespace wws-test failed." func=src/k8s.CreateClusterRoleBinding file="/src/k8s/k8s.go:1232"
time="2021-06-22T08:45:31Z" level=error msg="clusterrolebindings.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:wws:wws-cluster-manager-sa\" cannot create resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope" func=src/k8s.CreateClusterRoleBinding file="/src/k8s/k8s.go:1233"
time="2021-06-22T08:45:32Z" level=error msg="Getting list of PVCs for namespace wws-test failed." func=src/k8s.CreateRole file="/src/k8s/k8s.go:1448"
time="2021-06-22T08:45:32Z" level=error msg="roles.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:wws:wws-cluster-manager-sa\" cannot create resource \"roles\" in API group \"rbac.authorization.k8s.io\" in the namespace \"wws-test\"" func=src/k8s.CreateRole file="/src/k8s/k8s.go:1449"
You should have a look at the ClusterRoleBindings (kubectl get clusterrolebinding -o wide) that are bound to the ServiceAccount system:serviceaccount:wws:wws-cluster-manager-sa.
I guess that on MicroK8s your user can do anything on your local cluster.
But the real cluster does not allow you to create new ClusterRoles/ClusterRoleBindings with the default user.
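To the "find these errors in advance" part of the question: kubectl auth can-i lets you test a ServiceAccount's permissions before the application runs. A short sketch, using the ServiceAccount and namespace that appear in the error messages:

# Can the ServiceAccount create ClusterRoles at cluster scope?
kubectl auth can-i create clusterroles \
  --as=system:serviceaccount:wws:wws-cluster-manager-sa
# List everything it is allowed to do in the wws-test namespace
kubectl auth can-i --list \
  --as=system:serviceaccount:wws:wws-cluster-manager-sa -n wws-test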
I don't know why this happened, but I solved the issue by using * for all three fields: apiGroups, resources and verbs:
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
I am aware that this is not a clean and perfect solution, especially if you want more control over which resources or verbs the role should have access to, but since no one knows why this happened (I even posted it as an issue on the Kubernetes GitHub repo) and I don't have time to dig deeper, I am accepting my own answer.
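If you later want something narrower than full wildcards: the forbidden errors above all mention the rbac.authorization.k8s.io API group, which the original apiGroups list did not include. A sketch of a tighter rule with the same verbs (note that RBAC will still refuse to let a ServiceAccount create roles granting permissions it does not itself hold, unless it also has the escalate verb):

rules:
- apiGroups: ["", "apps", "autoscaling", "rbac.authorization.k8s.io"]
  resources: ["*"]
  verbs: ["get", "watch", "list", "patch", "create", "delete", "update"]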

kube-apiserver not coming up after adding --admission-control-config-file flag

root@ubuntu151:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.10", GitCommit:"62876fc6d93e891aa7fbe19771e6a6c03773b0f7", GitTreeState:"clean", BuildDate:"2020-10-15T01:43:56Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
My admission webhooks require authentication, so I'm restarting the apiserver and specifying the location of the admission control configuration file via the --admission-control-config-file flag, as follows:
root@ubuntu151:~# vi /etc/kubernetes/manifests/kube-apiserver.yaml
...
spec:
  containers:
  - command:
    - --admission-control-config-file=/var/lib/kubernetes/kube-AdmissionConfiguration.yaml
...
root@ubuntu151:~# vi /var/lib/kubernetes/kube-AdmissionConfiguration.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ValidatingAdmissionWebhook
  configuration:
    apiVersion: apiserver.config.k8s.io/v1
    kind: WebhookAdmissionConfiguration
    kubeConfigFile: /var/lib/kubernetes/kube-config.yaml
- name: MutatingAdmissionWebhook
  configuration:
    apiVersion: apiserver.config.k8s.io/v1
    kind: WebhookAdmissionConfiguration
    kubeConfigFile: /var/lib/kubernetes/kube-config.yaml
kubeConfigFile: /var/lib/kubernetes/kube-config.yaml is the file I copied from ~/.kube/config.
Now my kube-apiserver is not coming up.
Please help!
Thanks in advance!
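One thing worth checking, since it is not shown in the manifest excerpt above: kube-apiserver runs as a static pod, so it can only read /var/lib/kubernetes/kube-AdmissionConfiguration.yaml (and the kubeconfig it references) if that host path is mounted into the container. A minimal sketch of the extra volume the pod spec would need, assuming a kubeadm-style static pod (the volume name is illustrative):

spec:
  containers:
  - command:
    - kube-apiserver
    - --admission-control-config-file=/var/lib/kubernetes/kube-AdmissionConfiguration.yaml
    # ... existing flags ...
    volumeMounts:
    - name: admission-config           # illustrative name
      mountPath: /var/lib/kubernetes
      readOnly: true
  volumes:
  - name: admission-config
    hostPath:
      path: /var/lib/kubernetes
      type: DirectoryOrCreate

If the apiserver still does not come up, the container logs on the node (for example via docker logs or crictl logs) usually show the exact error from parsing the admission configuration.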

How does priorityClass work?

I am trying to use priorityClass.
I create two pods: the first has system-node-critical priority and the second cluster-node-critical priority.
Both pods need to run on a node labeled with nodeName: k8s-minion1, but that node has only 2 CPUs while each pod requests 1.5 CPU.
I therefore expect the second pod to run and the first to be in Pending status. Instead, the first pod always runs no matter which priority class I assign to the second pod.
I even tried to label the node after applying my manifests, but that does not change anything.
Here are my manifests:
apiVersion: v1
kind: Pod
metadata:
  name: firstpod
spec:
  containers:
  - name: container
    image: nginx
    resources:
      requests:
        cpu: 1.5
  nodeSelector:
    nodeName: k8s-minion1
  priorityClassName: cluster-node-critical
---
apiVersion: v1
kind: Pod
metadata:
  name: secondpod
spec:
  containers:
  - name: container
    image: nginx
    resources:
      requests:
        cpu: 1.5
  priorityClassName: system-node-critical
  nodeSelector:
    nodeName: k8s-minion1
It is worth noting that I get the error "unknown object : priorityclass" when I run kubectl get priorityclass, and when I export my running pod as YAML with kubectl get pod secondpod -o yaml, I can't find any priorityClassName field.
Here are my version details:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0+coreos.0", GitCommit:"6bb2e725fc2876cd94b3900fc57a1c98ca87a08b", GitTreeState:"clean", BuildDate:"2018-04-02T16:49:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Any ideas why this is not working?
Thanks in advance,
Abdelghani
PriorityClasses first appeared in Kubernetes 1.8 as an alpha feature.
They graduated to beta in 1.11.
You are using 1.10, which means this feature is still alpha there.
Alpha features are not enabled by default, so you would need to enable it explicitly.
Unfortunately, Kubernetes 1.10 is no longer supported, so I'd suggest upgrading to at least 1.14, where the priorityClass feature became stable.
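For reference, once you are on a version where the feature is available, the flow is to define a PriorityClass and reference it from the pod spec. A minimal sketch using the GA scheduling.k8s.io/v1 API (the class name and value are illustrative, not taken from the question):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority              # illustrative name
value: 1000000
globalDefault: false
description: "Pods that may preempt lower-priority pods when the node is full."
---
apiVersion: v1
kind: Pod
metadata:
  name: secondpod
spec:
  priorityClassName: high-priority # must match an existing PriorityClass
  containers:
  - name: container
    image: nginx
    resources:
      requests:
        cpu: 1.5

Once the feature is active, kubectl get priorityclass should also list the built-in system-cluster-critical and system-node-critical classes.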

{ambassador ingress} Not able to use canary and add_request_headers in the same Mapping

I want to pass a few custom headers to a canary service. When I add both Mappings to the template, Ambassador disregards the weight, adds the header to 100% of the traffic, and routes it all to the canary service.
Below is my Ambassador service config:
getambassador.io/config: |
  ---
  apiVersion: ambassador/v1
  kind: Mapping
  name: flag_off_mapping
  prefix: /web-app/
  service: web-service-flag
  weight: 99
  ---
  apiVersion: ambassador/v1
  kind: Mapping
  name: flag_on_mapping
  prefix: /web-app/
  add_request_headers:
    x-halfbakedfeature: enabled
  service: web-service-flag
  weight: 1
I expect 99% of the traffic to hit the service without any additional headers and 1% of the traffic to hit the service with the x-halfbakedfeature: enabled header added to the request object.
Ambassador: 0.50.3
Kubernetes environment [AWS L7 ELB]
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-04T04:48:03Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:36:14Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Apologies for cross-posting on GitHub and SO.
Please take a look here:
As a workaround, could you consider:
"Making another service pointing to the same canary instances, with an Ambassador
annotation containing the same prefix and the required headers."
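A rough sketch of that workaround (the second Service name and the pod label are illustrative, not from the original post): a separate Service selects the same canary pods and carries its own Mapping with the header, while the original Mapping keeps the traffic weight:

apiVersion: v1
kind: Service
metadata:
  name: web-service-flag-canary        # hypothetical second Service
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: flag_on_mapping
      prefix: /web-app/
      add_request_headers:
        x-halfbakedfeature: enabled
      service: web-service-flag-canary
      weight: 1
spec:
  selector:
    app: web-app-canary                # assumed label on the canary pods
  ports:
  - port: 80
    targetPort: 8080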