I have created a Kubernetes autoscaler, but I need to change its parameters. How do I update it?
I've tried the following, but it fails:
kubectl autoscale -f docker/production/web-controller.yaml --min=2 --max=6
Error from server: horizontalpodautoscalers.extensions "web" already exists
You can always interactively edit the resources in your cluster. For your autoscaler called web, you can edit it via:
kubectl edit hpa web
If you're looking for a more programmatic way to update your horizontal pod autoscaler, you're better off describing the autoscaler in a YAML file as well. For example, here's a simple ReplicationController paired with a HorizontalPodAutoscaler:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
  namespace: default
spec:
  maxReplicas: 3
  minReplicas: 2
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: nginx
With those contents in a file called nginx.yaml, updating the autoscaler could be done via kubectl apply -f nginx.yaml.
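For example (a minimal sketch; the new limit is arbitrary), to raise the ceiling you would edit the file and re-apply:
# after changing maxReplicas: 3 to maxReplicas: 5 in nginx.yaml
kubectl apply -f nginx.yaml
kubectl get hpa nginx   # MAXPODS should now show 5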
You can use the kubectl patch command as well. First, inspect the autoscaler's current configuration and status:
kubectl get hpa <autoscaler-name-here> -o json
An example output:
{
  "apiVersion": "autoscaling/v1",
  "kind": "HorizontalPodAutoscaler",
  "metadata": {
    ...
    "name": "your-auto-scaler",
    "namespace": "your-namespace",
    ...
  },
  "spec": {
    "maxReplicas": 50,
    "minReplicas": 2,
    "scaleTargetRef": {
      "apiVersion": "extensions/v1beta1",
      "kind": "Deployment",
      "name": "your-deployment"
    },
    "targetCPUUtilizationPercentage": 40
  },
  "status": {
    "currentReplicas": 1,
    "desiredReplicas": 2,
    "lastScaleTime": "2017-12-13T16:23:41Z"
  }
}
If you want to update the minimum number of replicas:
kubectl -n your-namespace patch hpa your-auto-scaler --patch '{"spec":{"minReplicas":1}}'
The same logic applies to the other parameters in the autoscaler spec: change minReplicas to maxReplicas if you want to update the maximum number of allowed replicas.
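For example, to raise the maximum (a sketch reusing the hypothetical names from the output above):
kubectl -n your-namespace patch hpa your-auto-scaler --patch '{"spec":{"maxReplicas":60}}'
# the CPU target can be patched the same way:
kubectl -n your-namespace patch hpa your-auto-scaler --patch '{"spec":{"targetCPUUtilizationPercentage":50}}'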
First delete the autoscaler and then re-create it:
kubectl delete hpa web
kubectl autoscale -f docker/production/web-controller.yaml --min=2 --max=6
I tried multiple options, but the one that works best is:
First delete the existing HPA:
kubectl delete hpa web
Then recreate it:
kubectl autoscale -f docker/production/web-controller.yaml --min=2 --max=6
First of all, for various reasons I'm using an unsupported and obsolete version of Kubernetes (1.12), and I can't upgrade.
I'm trying to configure the scheduler to avoid running pods on some nodes by changing the node score when the scheduler tries to find the best available node. I would like to do this at the scheduler level rather than by using nodeAffinity at the Deployment, ReplicaSet, Pod, etc. level, so that all pods are affected by the change.
After reading the k8s docs here: https://kubernetes.io/docs/reference/scheduling/config/#scheduling-plugins and checking that some options were already present in 1.12, I'm trying to use the NodePreferAvoidPods plugin.
In the documentation the plugin specifies:
Scores nodes according to the node annotation scheduler.alpha.kubernetes.io/preferAvoidPods
Which, if I understand correctly, should do the job.
So, I've updated the static manifest kube-scheduler.yaml to use the following config:
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
profiles:
- plugins:
    score:
      enabled:
      - name: NodePreferAvoidPods
        weight: 100
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf
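For completeness, the static pod manifest points the scheduler at this file via the --config flag, roughly like this (paths are illustrative, and the config file also has to be mounted into the pod):
# excerpt from /etc/kubernetes/manifests/kube-scheduler.yaml
spec:
  containers:
  - name: kube-scheduler
    command:
    - kube-scheduler
    - --config=/etc/kubernetes/scheduler-config.yaml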
But adding the annotation scheduler.alpha.kubernetes.io/preferAvoidPods to the node doesn't seem to work.
For testing, I made a basic nginx deployment with replicas equal to the number of worker nodes (4).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
Then I checked where the pods were created with kubectl get pods -o wide.
So, I believe some value is required for this annotation to work.
I've tried setting the annotation to "true" and "1", but Kubernetes refuses my change, and I can't figure out what the valid values for this annotation are; I can't find any documentation about it.
I've checked the Git release for 1.12; this plugin was already present (at least there are some lines of code), and I don't think its behavior or settings have changed much since.
Thanks.
From the Kubernetes source code, here is a valid value for this annotation:
{
  "preferAvoidPods": [
    {
      "podSignature": {
        "podController": {
          "apiVersion": "v1",
          "kind": "ReplicationController",
          "name": "foo",
          "uid": "abcdef123456",
          "controller": true
        }
      },
      "reason": "some reason",
      "message": "some message"
    }
  ]
}
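Assuming the scheduler accepts that JSON as the annotation value, it would be attached to a node roughly like this (a sketch; the node name and the payload fields are illustrative):
kubectl annotate node NODE_NAME \
  'scheduler.alpha.kubernetes.io/preferAvoidPods={"preferAvoidPods":[{"podSignature":{"podController":{"apiVersion":"v1","kind":"ReplicationController","name":"foo","uid":"abcdef123456","controller":true}},"reason":"some reason","message":"some message"}]}'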
But there are no details on how to predict the uid, and no answer was given when someone else asked about it on GitHub years ago: https://github.com/kubernetes/kubernetes/issues/41630
For my initial question, which was to avoid scheduling pods on a node, I found another method: using the well-known taint node.kubernetes.io/unschedulable with the value PreferNoSchedule.
Tainting a node with the command below does the job, and this taint seems to persist across cordon/uncordon (a cordon sets it to NoSchedule and an uncordon sets it back to PreferNoSchedule).
kubectl taint node NODE_NAME node.kubernetes.io/unschedulable=:PreferNoSchedule
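If you need to remove it again, the usual trailing-dash syntax applies:
kubectl taint node NODE_NAME node.kubernetes.io/unschedulable:PreferNoSchedule-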
I have managed to install Prometheus and its adapter, and I want to use one of the pod metrics for autoscaling:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep "pods/http_request"
"name": "pods/http_request_duration_milliseconds_sum",
"name": "pods/http_request",
"name": "pods/http_request_duration_milliseconds",
"name": "pods/http_request_duration_milliseconds_count",
"name": "pods/http_request_in_flight",
Checking the API, I want to use pods/http_request, so I added it to my HPA configuration:
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: app
  namespace: app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 4
  maxReplicas: 8
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_request
      target:
        type: AverageValue
        averageValue: 200
After applying the YAML and checking the HPA status, the metric shows up as <unknown>:
$ k apply -f app-hpa.yaml
$ k get hpa
NAME REFERENCE TARGETS
app Deployment/app 306214400/2000Mi, <unknown>/200 + 1 more...
But when using other pod metrics, such as pods/memory_usage_bytes, the value is detected properly.
Is there a way to check the proper values for this metric? And how do I correctly add it to my HPA configuration?
Reference https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/manage_cluster/hpa.html
First, deploy the metrics server; it should be up and running:
$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
After a few seconds the metrics server is deployed. Check the HPA; it should now resolve:
$ kubectl get deployment -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
.
.
kube-system metrics-server 1/1 1 1 34s
$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
ha-xxxx-deployment Deployment/xxxx-deployment 1%/5% 1 10 1 6h46m
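To see the raw value the HPA reads for a pod metric, you can also query the custom metrics API directly (a sketch, assuming the pods live in the app namespace as in the question):
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/app/pods/*/http_request" | jq .
Comparing the value field in that output with the averageValue in the HPA spec shows whether the target is expressed in the right unit.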
I've created a deployment which exposes a custom metric through an endpoint and an APIService that registers this custom metric, so I can use it in an HPA to autoscale the deployment. To achieve this, I've followed this tutorial.
It worked well while using an apiregistration.k8s.io/v1beta1 APIService. The metric was exposed correctly and the HPA could read it and scale accordingly. I've tried to update the APIService to version apiregistration.k8s.io/v1 (as v1beta1 is deprecated and removed in Kubernetes v1.22), but then the HPA couldn't pick up the metric anymore, with this message:
Message
-------
unable to get metric threatmessages: Service on test services-metrics-service/unable to fetch
metrics from custom metrics API: the server is currently unable to handle the request
(get services.custom.metrics.k8s.io services-metrics-service)
If I manually request the metric, though, it does exist:
kubectl get --raw /apis/custom.metrics.k8s.io/v1/namespaces/test/services/services-metrics-service/threatmessages |jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1",
  "metadata": {
    "selfLink": "custom.metrics.k8s.io/v1"
  },
  "items": [
    {
      "metricName": "threatmessages",
      "timestamp": "2021-02-09T14:43:39.321Z",
      "value": "0",
      "describedObject": {
        "kind": "Service",
        "namespace": "test",
        "name": "services-metrics-service",
        "apiVersion": "/v1"
      }
    }
  ]
}
Here are my APIService and HPA resources:
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1.custom.metrics.k8s.io
spec:
  insecureSkipTLSVerify: true
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 1000
  versionPriority: 5
  service:
    name: services-metrics-service
    namespace: test
    port: 443
  version: v1
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: services-parallel-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: services-parallel-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      describedObject:
        kind: Service
        name: services-metrics-service
      metric:
        name: threatmessages
      target:
        type: AverageValue
        averageValue: 4k
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 30
      policies:
      - type: Pods
        value: 1
        periodSeconds: 30
What am I doing wrong? Or are these 2 versions just not compatible for some reason?
According to the APIService documentation for 1.22, you can find this information:
Migrate manifests and API clients to use the apiregistration.k8s.io/v1 API version, available since v1.10.
All existing persisted objects are accessible via the new API
No notable changes
First of all, v1 is available for the version you are using (v1.19).
Secondly, and more importantly: "All existing persisted objects are accessible via the new API" and there are "No notable changes". This means that objects created using v1beta1 don't need to be updated or modified; they will be available and working even though they were created with v1beta1. After upgrading to v1.22 you should have no issues: the same objects will simply be accessible (and, I would think, accessed by the HPA) as if they had been created using v1. What is more, they may already be accessible as v1 in version 1.19, as I will explain next, so you can check now whether everything is fine.
I have run some quick tests on a v1.19 GKE cluster and found that if a manifest containing apiVersion: apiregistration.k8s.io/v1beta1 is applied (in fact, using exactly what the OP provided in the question, except with v1beta1 as the apiVersion):
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1.custom.metrics.k8s.io
spec:
  insecureSkipTLSVerify: true
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 1000
  versionPriority: 5
  service:
    name: services-metrics-service
    namespace: test
    port: 443
  version: v1
and then a get command is used to retrieve what was created ($ kubectl get apiservices/v1.custom.metrics.k8s.io --output=json), an object marked with apiVersion v1, not v1beta1, is returned ("apiVersion": "apiregistration.k8s.io/v1"). If apiregistration.k8s.io/v1 is instead used during creation, the same object is obtained. And if either of them is deleted, the other one is gone: it is the same object behind the scenes, yet it is marked as v1.
As a result of all the above, you should simply revert to what you were doing when you deployed using v1beta1, given that it worked and will keep working according to the APIService 1.22 documentation. Once the v1beta1 version is deployed, you can also run $ kubectl get apiservices/v1.custom.metrics.k8s.io --output=json (or $ kubectl get APIService --output=json, or $ kubectl get APIService.apiregistration.k8s.io --output=json) to confirm that the object is already marked as v1 behind the scenes despite having been created with v1beta1, as is happening in my case.
Creating objects with v1 is not necessary if they were already created with v1beta1.
kubectl get pod pod_name -n namespace_name -o json shows:
"labels": {
"aadpodidbinding": "sa-customerxyz-uat-msi",
"app": "cloudsitemanager",
"customer": "customerxyz",
"istio.io/rev": "default",
"pod-template-hash": "b87d9fcbf",
"security.istio.io/tlsMode": "istio",
"service.istio.io/canonical-name": "cloudsitemanager",
"service.istio.io/canonical-revision": "latest"
}
I am deploying with the following YAML manifest snippet:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsitemanager
  labels:
    app: cloudsitemanager
    customer: customerxyz
    version: 0.1.0-beta.201
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudsitemanager
      customer: customerxyz
  template:
    metadata:
      labels:
        app: cloudsitemanager
        customer: customerxyz
        version: 0.1.0-beta.201
        aadpodidbinding: sa-customerxyz-uat-msi
I expect to see 4 custom labels on the running pod: app, customer, version, and aadpodidbinding. However, I only see 3 of them; the version label does not show up.
I had the same issue running Istio + Kiali: Kiali was not showing the version. I tried adding the version label under the Deployment's spec, but it didn't work. After adding the version label to the pod's spec (.spec.template.metadata.labels), it was applied to newly created pods, and Kiali now shows the version number instead of "latest".
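In short, the label has to be present in the pod template, not just on the Deployment object itself; a minimal sketch of the relevant part:
spec:
  template:
    metadata:
      labels:
        app: cloudsitemanager
        version: 0.1.0-beta.201   # the label Kiali reads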
What is the command to delete a replication controller and its pods?
I am taking a course to learn k8s on pluralsight. I am trying to delete the pods that I have just created using Replication controller. Following is my YAML:
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-rc
spec:
  replicas: 2
  selector:
    app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-ctr
        image: nigelpoulton/pluralsight-docker-ci:latest
        ports:
        - containerPort: 8080
If I do kubectl get pods, the following is how it looks on my Mac:
I have tried the following two commands to delete the pods created in the Minikube cluster on my Mac, but they are not working:
kubectl delete pods hello-world
kubectl delete pods hello-rc
Could someone help me understand what I am missing?
You can delete the pods by deleting the ReplicationController that created them:
kubectl delete rc hello-rc
Also, because the pods are merely managed by the ReplicationController, you can delete only the ReplicationController and leave the pods running:
kubectl delete rc hello-rc --cascade=false
This means the pods are no longer managed. You can create a new ReplicationController with the proper label selector and manage them again. (Note: on newer kubectl versions, --cascade=false is deprecated in favor of --cascade=orphan.)
Also, instead of ReplicationControllers, you can use ReplicaSets. They behave in a similar way, but they have more expressive pod selectors; for example, a ReplicationController can't match pods with two different values of a label at the same time, whereas a ReplicaSet's set-based selector can.
The command below is enough:
kubectl delete rc hello-rc
One more thing: ReplicationController is deprecated; ReplicaSets are preferred.
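For reference, the same workload written as a ReplicaSet would look roughly like this (a sketch; the name hello-rs is made up, and note the apps/v1 apiVersion and the matchLabels selector):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hello-rs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-ctr
        image: nigelpoulton/pluralsight-docker-ci:latest
        ports:
        - containerPort: 8080
It can then be deleted the same way: kubectl delete rs hello-rs.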