How to enable autoscaling/v2beta2 api-versions in minikube - kubernetes

I don't find autoscaling/v2beta2 or v2beta1 when I run the command $ kubectl api-versions, but I need it for memory autoscaling. What should I do?
To enable autoscaling/v2beta2

Most likely you're using the latest Minikube with Kubernetes 1.26, where the autoscaling/v2beta2 API is no longer served:
The autoscaling/v2beta2 API version of HorizontalPodAutoscaler is no
longer served as of v1.26.
Read more: https://kubernetes.io/docs/reference/using-api/deprecation-guide/#horizontalpodautoscaler-v126
So the solution is either to change the API version to autoscaling/v2 in your manifests or to use an older version of Minikube/Kubernetes.
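For example, a memory-based HPA written against the autoscaling/v2 API might look like the following (a minimal sketch; the name my-app and the 70% target are placeholders):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                # hypothetical Deployment to scale
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 70  # scale out when average memory utilization exceeds 70%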

Related

Annotation has apiVersion v1beta1

I am planning to upgrade Kubernetes clusters from 1.21 to 1.22. I was going through the release notes and noticed that ClusterRole, RoleBinding and ClusterRoleBinding should use rbac.authorization.k8s.io/v1, as rbac.authorization.k8s.io/v1beta1 is being deprecated.
Here is the output for one of my resources, rolebinding/test-rw. The apiVersion says rbac.authorization.k8s.io/v1, but in the annotations it says rbac.authorization.k8s.io/v1beta1. Why does the annotation have the v1beta1 version? Is it because the resource was initially deployed with the v1beta1 version and later updated to the v1 version?
$ kubectl get RoleBinding/test-rw -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1beta1","kind":"RoleBinding","metadata":{"annotations":{},"name":"test-rw","namespace":"default"},"roleRef":{"apiGroup":"","kind":"ClusterRole","name":"admin"},"subjects":[{"apiGroup":"","kind":"Group","name":"test-rw"}]}
  creationTimestamp: "2017-08-18T11:40:22Z"
  name: test-rw
  namespace: default
  resourceVersion: "214"
  uid: f8a89do8-885f-11e9-8dd8-12afbb11be0c
You can use kubectl api-versions to check the available API versions, or kubectl explain pod to check the version of a resource.
The annotation holds the last-applied configuration, but what you are seeing in the apiVersion field is the API server's preferred API version. kubectl is the client, and it will show either the server's preferred version or whatever version you explicitly request, for example:
kubectl get RoleBinding.v1beta1 test-rw -o yaml
So the API version used when the RoleBinding was created does not affect what you get back from kubectl.
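For example (assuming a 1.22+ cluster; the exact output depends on your server version), you can check which rbac versions the server still serves and request a specific one explicitly:

# list the served rbac API versions
kubectl api-versions | grep rbac.authorization.k8s.io
# request the object in an explicit version (resource.version.group syntax);
# on 1.22+ only the v1 form works, since v1beta1 is no longer served
kubectl get rolebindings.v1.rbac.authorization.k8s.io test-rw -o yaml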

Is it mandatory to upgrade CRDs deprecated apiVersions?

I have a few external CRDs with old apiVersion applied in the cluster, and operators based on those CRDs deployed.
As said in the official docs about Kubernetes API and feature removals in 1.22:
You can use the v1 API to retrieve or update existing objects, even if they were created using an older API version. If you defined any custom resources in your cluster, those are still served after you upgrade.
Based on the quote, does it mean I could leave those apiextensions.k8s.io/v1beta1 CRDs in the cluster? Will controllers/operators continue to work normally?
The custom resources will still be served after you upgrade.
Suppose we define a resource called mykind:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: mykinds.grp.example.com
spec:
  group: grp.example.com
  names:
    plural: mykinds
    singular: mykind
    kind: Mykind
  scope: Namespaced
  versions:
  - name: v1beta1
    served: true
    storage: true
Then, on any cluster where this has been applied I can always define a mykind resource:
apiVersion: grp.example.com/v1beta1
kind: Mykind
metadata:
  name: mykind-instance
And this resource will still be served normally after upgrade even if the CRD for mykind was created under v1beta1.
However, anything in the controller/operator code that references the v1beta1 CRD API won't work. This could be applying the CRD itself (if your controller has permissions to do that), for example. That's something to watch out for if your operator is managed by the Operator Lifecycle Manager. But watching for changes in the CRs would be unaffected by the upgrade.
So if your controller/operator isn't watching CustomResourceDefinitions, then technically you can leave these CRDs on the cluster and your operator will work as normal. But you won't be able to uninstall and reinstall it should you need to.
Another thing to explore is whether and how this might affect your ability to bump API versions later, though.
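As a rough sketch of what bumping it later would look like, the same CRD rewritten for apiextensions.k8s.io/v1 (which additionally requires a structural schema) might be:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mykinds.grp.example.com
spec:
  group: grp.example.com
  scope: Namespaced
  names:
    plural: mykinds
    singular: mykind
    kind: Mykind
  versions:
  - name: v1beta1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        # deliberately permissive schema, just to satisfy the v1 structural-schema requirement
        x-kubernetes-preserve-unknown-fields: true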

How to find the correct api version in Kubernetes?

I have a question about the usage of apiVersion in Kubernetes.
For example, I am trying to deploy Traefik 2.2.1 into my Kubernetes cluster. I have a Traefik Middleware definition like this:
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: https-redirect
spec:
  redirectScheme:
    scheme: https
    permanent: true
    port: 443
When I try to deploy my objects with
$ kubectl apply -f middleware.yaml
I got the following error message:
unable to recognize "middleware.yaml": no matches for kind "Middleware" in version "traefik.containo.us/v1alpha1"
The same object works fine with Traefik version 2.2.0 but not with version 2.2.1.
In the Traefik documentation there is no example other than the ones using the version "traefik.containo.us/v1alpha1".
I don't think that my deployment issue is specific to Traefik; it is a general problem with conflicting versions. Is there any way I can figure out which apiVersions are supported in my cluster environment?
There are so many outdated examples posted around using deprecated apiVersions that I wonder if there is some kind of official apiVersion directory for Kubernetes. Or maybe there is some kubectl command I can use to ask for apiVersions?
Most probably the CRDs for Traefik v2 are not installed. You can use the command below, which lists the API versions that are available on the Kubernetes cluster.
kubectl api-versions | grep traefik
traefik.containo.us/v1alpha1
Use the command below to check the CRDs installed on the Kubernetes cluster.
kubectl get crds
NAME                                   CREATED AT
ingressroutes.traefik.containo.us      2020-05-09T13:58:09Z
ingressroutetcps.traefik.containo.us   2020-05-09T13:58:09Z
ingressrouteudps.traefik.containo.us   2020-05-09T13:58:09Z
middlewares.traefik.containo.us        2020-05-09T13:58:09Z
tlsoptions.traefik.containo.us         2020-05-09T13:58:09Z
tlsstores.traefik.containo.us          2020-05-09T13:58:09Z
traefikservices.traefik.containo.us    2020-05-09T13:58:09Z
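You can also ask the cluster which group/version serves a particular kind with kubectl api-resources (the exact columns vary between kubectl versions; roughly):

kubectl api-resources --api-group=traefik.containo.us
NAME              SHORTNAMES   APIGROUP              NAMESPACED   KIND
ingressroutes                  traefik.containo.us   true         IngressRoute
middlewares                    traefik.containo.us   true         Middleware
...

If this list is empty, the Traefik v2 CRDs are not installed in the cluster.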
Check traefik v1 vs v2 here
I found that if I just run kubectl apply again after a few moments, it then works.

Difference between API versions v2beta1 and v2beta2 in Horizontal Pod Autoscaler?

The Kubernetes Horizontal Pod Autoscaler walkthrough in https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/ explains that we can perform autoscaling on custom metrics. What I didn't understand is when to use the two API versions: v2beta1 and v2beta2. If anybody can explain, I would really appreciate it.
Thanks in advance.
The first version, autoscaling/v2beta1, doesn't allow you to scale your pods based on custom metrics; it only allows you to scale your application based on its CPU and memory utilization.
The second version, autoscaling/v2beta2, allows users to autoscale based on custom metrics. It allows autoscaling based on metrics coming from outside of Kubernetes: a new External metric source was added in this API.
metrics:
- type: Resource
  resource:
    name: cpu
    target:
      type: Utilization
      averageUtilization: 50
It will identify a specific metric to autoscale on based on a metric name and a label selector. Those metrics can come from anywhere, such as a Stackdriver or Prometheus monitoring application, and you can scale your application based on, say, a Prometheus query.
It is always better to use the v2beta2 API because it can scale on CPU and memory as well as on custom metrics, while the v2beta1 API can scale only on internal metrics.
The snippet above shows how you can specify a target CPU utilisation in the v2beta2 API.
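For comparison, an External metric in v2beta2 might look like this (queue_messages_ready and its label are hypothetical; they would come from whatever adapter serves the external metrics API in your cluster):

metrics:
- type: External
  external:
    metric:
      name: queue_messages_ready    # hypothetical metric from an external metrics adapter
      selector:
        matchLabels:
          queue: worker_tasks       # hypothetical label selector
    target:
      type: AverageValue
      averageValue: "30"            # aim for roughly 30 messages per pod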
UPDATE: v2beta1 is deprecated in 1.19 and you should use v2beta2 going forward.
Also, v2beta2 added the new API field spec.behavior in 1.18, which allows you to define how fast or slow pods are scaled up and down.
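A minimal sketch of that field (the window and policy values here are only illustrative):

behavior:
  scaleDown:
    stabilizationWindowSeconds: 300   # wait 5 minutes before acting on a scale-down
    policies:
    - type: Pods
      value: 1                        # remove at most one pod
      periodSeconds: 60               # per minute
  scaleUp:
    policies:
    - type: Percent
      value: 100                      # at most double the replica count
      periodSeconds: 15               # every 15 seconds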
Originally, both versions were functionally identical but had different APIs.
autoscaling/v2beta2 was released in Kubernetes version 1.12 and the release notes state:
We released autoscaling/v2beta2, which cleans up and unifies the API
The "cleans up and unifies the API" is referring to that fact that v2beta2 consistently uses the MetricIdentifier and MetricTarget objects:
spec:
  metrics:
  - external:
      metric: MetricIdentifier
      target: MetricTarget
    object:
      describedObject: CrossVersionObjectReference
      metric: MetricIdentifier
      target: MetricTarget
    pods:
      metric: MetricIdentifier
      target: MetricTarget
    resource:
      name: string
      target: MetricTarget
    type: string
In v2beta1, those fields have pretty different specs, making it (in my opinion) more difficult to figure out how to use them.
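To illustrate, here is the same Pods metric written against both versions (a sketch; packets-per-second stands in for whatever custom metric your metrics adapter exposes):

# autoscaling/v2beta1
metrics:
- type: Pods
  pods:
    metricName: packets-per-second
    targetAverageValue: 1k

# autoscaling/v2beta2
metrics:
- type: Pods
  pods:
    metric:
      name: packets-per-second
    target:
      type: AverageValue
      averageValue: 1k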
How to check differences between HPA versions in general?
I would like to add an answer which I think will also be useful for other version differences in the future.
1. Run kubectl api-versions and check which versions your cluster supports (see the example after this list).
2. Go to the K8S API site and compare the autoscaling versions: MetricSpec v2beta2 autoscaling vs MetricSpec v2beta1 autoscaling.
(*) Just make sure that you're looking at the correct K8S version in the URL:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#metricspec-v2beta1-autoscaling
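For example, on a cluster that still serves all three autoscaling versions, step 1 would show something like:

kubectl api-versions | grep autoscaling
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2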
In case you need to drive the horizontal pod autoscaler with a custom external metric, and only v2beta1 is available to you (I think this is still true of GKE), we do this routinely in GKE. You need:
1. A Stackdriver monitoring metric, possibly one you create yourself,
2. If the metric isn't derived from sampling Stackdriver logs, a way to publish data to the Stackdriver monitoring metric, such as a cronjob that runs no more than once per minute (we use a little Python script and Google's Python library for monitoring_v3), and
3. A custom metrics adapter to expose Stackdriver monitoring to the HPA (e.g., in Google, gcr.io/google-containers/custom-metrics-stackdriver-adapter:v0.10.0). There's a tutorial on how to deploy this adapter here. You'll need to ensure that you grant the required RBAC permissions to the service account running the adapter, as shown here. You may or may not want to grant the principal that deploys the configuration the cluster-admin role as described in the tutorial; we use Helm 2 with Tiller and are careful to grant Tiller least privilege to deploy.
Configure your HPA this way:
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  ...
spec:
  scaleTargetRef:
    kind: e.g., StatefulSet
    name: name-of-pod-to-scale
    apiVersion: e.g., apps/v1
  minReplicas: 1
  maxReplicas: ...
  metrics:
  - type: External
    external:
      metricName: "custom.googleapis.com|your_metric_name"
      metricSelector:
        matchLabels:
          resource.type: "generic_task"
          resource.labels.job: ...
          resource.labels.namespace: ...
          resource.labels.project_id: ...
          resource.labels.task_id: ...
      targetValue: e.g., 0.7 (i.e., if you publish a metric that measures the ratio between demand and current capacity)
If you ask kubectl for your HPA object, you won't see autoscaling/v2beta1 settings, but this works well:
kubectl get --raw /apis/autoscaling/v2beta1/namespaces/your-namespace/horizontalpodautoscalers/your-autoscaler | jq
So far, we've only exercised this on GKE. It's clearly Stackdriver-specific. To the extent that Stackdriver can be deployed on other public managed k8s platforms, it might actually be portable. Or you might end up with a different way to publish a custom metric for each platform, using a different metrics publishing library in your cronjob, and a different custom metrics adapter. We know that one exists for Azure, for example.

Kubernetes 1.8.10 kube-apiserver priorityclasses error

New cluster 1.8.10 spun up with kops.
In K8S 1.8 there is a new feature Pod Priority and Preemption.
More information: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#how-to-use-priority-and-preemption
kube-apiserver is logging errors
I0321 16:27:50.922589 7 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (140.067µs) 404 [[kube-apiserver/v1.8.10 (linux/amd64) kubernetes/044cd26] 127.0.0.1:47500]
I0321 16:27:51.257756 7 wrap.go:42] GET /apis/scheduling.k8s.io/v1alpha1/priorityclasses?resourceVersion=0: (168.391µs) 404 [[kube-apiserver/v1.8.10 (linux/amd64) kubernetes/044cd26] 127.0.0.1:47500]
E0321 16:27:51.258176 7 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:73: Failed to list *scheduling.PriorityClass: the server could not find the requested resource (get priorityclasses.scheduling.k8s.io)
I don't quite understand why. No one should be accessing it, as it's not even enabled yet (it's alpha).
No pod is using priorityClassName.
Running explain:
kubectl explain priorityclass
error: API version: scheduling.k8s.io/v1alpha1 is not supported by the server. Use one of: [apiregistration.k8s.io/v1beta1 extensions/v1beta1 apps/v1beta1 apps/v1beta2 authentication.k8s.io/v1 authentication.k8s.io/v1beta1 authorization.k8s.io/v1 authorization.k8s.io/v1beta1 autoscaling/v1 autoscaling/v2beta1 batch/v1 batch/v1beta1 certificates.k8s.io/v1beta1 networking.k8s.io/v1 policy/v1beta1 rbac.authorization.k8s.io/v1 rbac.authorization.k8s.io/v1beta1 storage.k8s.io/v1 storage.k8s.io/v1beta1 apiextensions.k8s.io/v1beta1 v1]
Is this normal or kops specific?
I think it is related to this kops option in its config (kops get --name $NAME -oyaml):
kubeAPIServer:
  runtimeConfig:
    admissionregistration.k8s.io/v1alpha1: "true"
Anyway, all components work through the API server, and it is not surprising that, depending on the configuration, it sometimes tries to call features that are disabled. At the very least it has to check which APIs are supported.
So I think you don't need to worry about it; it is a configuration-related message. Or just enable that feature, which will make the warning messages go away.
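If you do want to enable it, a minimal sketch (assuming the same kops runtimeConfig mechanism shown above; verify the exact keys against your kops version) would be:

# kops edit cluster $NAME, then under spec add:
  kubeAPIServer:
    runtimeConfig:
      scheduling.k8s.io/v1alpha1: "true"
# On a self-managed apiserver this corresponds to the flag:
#   --runtime-config=scheduling.k8s.io/v1alpha1=true
# Pod priority is alpha in 1.8, so the PodPriority=true feature gate is also needed
# before any priorityClassName actually takes effect.

Then roll the change out with kops update cluster $NAME --yes followed by kops rolling-update cluster $NAME --yes.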