Pulumi: how to correctly use HorizontalPodAutoscaler - kubernetes

I'm trying to set up HPA v1 for my pods using Pulumi:
new HorizontalPodAutoscaler(
  `${prefixedHPAName}`,
  {
    apiVersion: 'autoscaling/v1',
    kind: 'HorizontalPodAutoscaler',
    metadata: {
      name: 'worker',
      clusterName: 'redacted',
      namespace: namespaceName
    },
    spec: {
      maxReplicas: 3,
      minReplicas: 1,
      scaleTargetRef: {
        apiVersion: 'apps/v1',
        kind: 'Deployment',
        name: 'worker'
      },
      targetCPUUtilizationPercentage: 50,
    }
  }
);
but I keep getting the following error
kubernetes:autoscaling/v1:HorizontalPodAutoscaler (tushar-routex-routex-hpa):
error: configured Kubernetes cluster is unreachable: unable to load Kubernetes client configuration from kubeconfig file: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Are there any examples on how to set HPA for my pods via pulumi?

The problem is not related to the HPA or to Pulumi; it's a common problem with kubectl/Kubernetes clients, where the client cannot find the kubeconfig file needed to connect to the Kubernetes API.
Check whether you have a config file at ~/.kube/config, then try exporting the path before running Pulumi:
export KUBECONFIG=~/.kube/config
If you don't have the file, check your cloud provider's documentation on how to create it.
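Alternatively, you can make the kubeconfig explicit in the Pulumi program itself by creating a kubernetes Provider and passing it to the HPA as a resource option. A minimal sketch, assuming the kubeconfig lives at ~/.kube/config and the target Deployment is worker in the default namespace:
import * as fs from "fs";
import * as k8s from "@pulumi/kubernetes";

// Point the Kubernetes provider at an explicit kubeconfig instead of relying on the environment.
const provider = new k8s.Provider("k8s-provider", {
    kubeconfig: fs.readFileSync(`${process.env.HOME}/.kube/config`, "utf-8"),
});

// The same HPA as in the question, bound to the explicit provider.
const hpa = new k8s.autoscaling.v1.HorizontalPodAutoscaler("worker-hpa", {
    metadata: { name: "worker", namespace: "default" },
    spec: {
        minReplicas: 1,
        maxReplicas: 3,
        scaleTargetRef: {
            apiVersion: "apps/v1",
            kind: "Deployment",
            name: "worker",
        },
        targetCPUUtilizationPercentage: 50,
    },
}, { provider });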

Related

FailedGetPodsMetric: for HPA autoscaling

I am trying to autoscale using custom metrics, with the metric type "http_request". The following command shows the correct output:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/" | jq
Below is my hpa.yaml file:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 1
but my scaling is failing due to
the HPA was unable to compute the replica count:
unable to get metric http_requests: unable to fetch metrics from custom metrics API: an error on the server
("Internal Server Error: \"/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%!A(MISSING)/http_requests?labelSelector=app%!D(MISSING)podinfo\": the server could not find the requested resource")
has prevented the request from succeeding (get pods.custom.metrics.k8s.io *)
Please help me out with this :)
It seems you are missing pods in your cluster that match the provided deployment specification. Can you check whether your podinfo deployment is running and has healthy pods in it?
The command works because you're only checking the availability of the metrics endpoint. It simply means the endpoint is live and ready to provide metrics; it doesn't guarantee that you will actually receive metrics if there are no matching resources.
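As a follow-up check (assuming the default namespace, as in the error message), you can query the metric for the pods directly and see whether any items actually come back:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq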

How to set requests per second limit on GKE and Kong Ingress?

I have a cluster on GKE and I want to set a limit for incoming requests, but I cannot find a way to do it using Kong Ingress Controller. I can't find any documentation or info about this specific topic.
Following the steps in this article, I achieved the desired result by adding the rate-limiting plugin to my Kong ingress. To do so, first update or create your Ingress definition and add the annotations shown below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func
  namespace: default
  annotations:
    kubernetes.io/ingress.class: kong # <-- THIS
    plugins.konghq.com: http-ratelimit # <-- THIS
spec:
  ...
Then, to actually set the rate limit, use the following definition and apply it to your Kubernetes cluster:
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: http-ratelimit
  namespace: default
config:
  policy: local
  second: 1
plugin: rate-limiting
This will create a restriction of 1 request per second on your ingress. If you need something different, just adjust the config section; check the plugin's documentation for all possible options.
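To verify the limit is actually applied (the hostname below is a placeholder, and the exact header names vary between Kong versions), send a request to your ingress and look for the rate-limiting headers Kong adds to the response:
curl -sI http://<your-ingress-host>/ | grep -i ratelimit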

Max pods per node

Dear members of stackoverflow,
Is it possible to configure the maximum number of pods per node in the YAML configuration file of a Kubernetes deployment? For example, something like:
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: cdn-akamai-pipe
spec:
  template:
    metadata:
      labels:
        app: cdn-akamai-pipe
        max-pods-per-node: 10
Thanks
This is a kubelet setting that can be set using the --max-pods flag (https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#kubelet), so there is no way to set it in the deployment YAML. If you are using a managed service, this can generally be set during cluster creation.
The maximum number of pods should be set on the kubelet, not in the deployment YAML:
--max-pods int32
Number of Pods that can run on this Kubelet. (default 110)
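For example, on GKE the limit is set when you create the cluster or node pool rather than in the Deployment (my-cluster is a placeholder name, and GKE may require a VPC-native cluster for this flag):
gcloud container clusters create my-cluster --max-pods-per-node=10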

Kubernetes HPA fails to detect a successfully published custom metric from Stackdriver

I'm trying to scale a Kubernetes Deployment using a HorizontalPodAutoscaler, which listens to a custom metric through Stackdriver.
I have a GKE cluster with the Stackdriver adapter enabled.
I'm able to publish the custom metric type to Stackdriver, and it is displayed as expected in Stackdriver's Metrics Explorer.
This is how I have defined my HPA:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: custom.googleapis.com|worker_pod_metrics|baz
      targetValue: 400
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-app-group-1-1
After successfully creating example-hpa, executing kubectl get hpa example-hpa, always shows TARGETS as <unknown>, and never detects the value from custom metrics.
NAME          REFERENCE                       TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
example-hpa   Deployment/test-app-group-1-1   <unknown>/400   1         10        1          18m
I'm using a Java client which runs locally to publish my custom metrics.
I have set the appropriate resource labels as mentioned here (hard-coded, so that it can run without problems in a local environment). I have followed this document to create the Java client.
private static MonitoredResource prepareMonitoredResourceDescriptor() {
    Map<String, String> resourceLabels = new HashMap<>();
    resourceLabels.put("project_id", "<<<my-project-id>>>");
    resourceLabels.put("pod_id", "<my pod UID>");
    resourceLabels.put("container_name", "");
    resourceLabels.put("zone", "asia-southeast1-b");
    resourceLabels.put("cluster_name", "my-cluster");
    resourceLabels.put("namespace_id", "mynamespace");
    resourceLabels.put("instance_id", "");
    return MonitoredResource.newBuilder()
            .setType("gke_container")
            .putAllLabels(resourceLabels)
            .build();
}
What am I doing wrong in the above-mentioned steps please? Thank you in advance for any answers provided!
EDIT [RESOLVED]:
I think I had some misconfigurations, since kubectl describe hpa [NAME] --v=9 showed me some 403 status codes, and I was also using type: External instead of type: Pods (thanks MWZ for your answer pointing out this mistake).
I managed to fix it by creating a new project, a new service account, and a new GKE cluster (basically everything from the beginning again). Then I changed my yaml file as follows, exactly as this document explains.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: test-app-group-1-1
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: test-app-group-1-1
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods       # Earlier this was type: External
    pods:            # Earlier this was external:
      metricName: baz    # metricName: custom.googleapis.com|worker_pod_metrics|baz
      targetAverageValue: 20
I'm now exporting the metric as custom.googleapis.com/baz, and NOT as custom.googleapis.com/worker_pod_metrics/baz. Also, I'm now explicitly specifying the namespace for my HPA in the YAML.
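As a sanity check (assuming the default namespace and the metric name baz), the custom metrics API should now return the metric for the pods:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/baz" | jq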
Since you can see your custom metric in the Stackdriver GUI, I'm guessing the metrics are exported correctly. Based on Autoscaling Deployments with Custom Metrics, I believe you defined the metric used by the HPA to scale the deployment incorrectly.
Please try using this YAML:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: baz
      targetAverageValue: 400
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-app-group-1-1
Please keep in mind that:
The HPA uses the metrics to compute an average and compare it to the target average value. In the application-to-Stackdriver export example, a Deployment contains Pods that export the metric. The following manifest file describes a HorizontalPodAutoscaler object that scales a Deployment based on the target average value for the metric.
Troubleshooting steps described on the page above can also be useful.
Side note
Since the above HPA uses the beta API autoscaling/v2beta1, I got an error when running kubectl describe hpa [DEPLOYMENT_NAME]. I ran kubectl describe hpa [DEPLOYMENT_NAME] --v=9 instead and got the response as JSON.
It is good practice to add some unique labels to target your metrics. Right now, based on the metric labels in your Java client, only pod_id looks unique, and it can't be used due to its ephemeral nature.
So, I would suggest introducing a unique identifier at the deployment/metrics level.
resourceLabels.put("<identifier>", "<could-be-deployment-name>");
After this, you can try modifying your HPA with something similar to following:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: custom.googleapis.com|worker_pod_metrics|baz
      metricSelector:
        matchLabels:
          # define labels to target
          metric.labels.identifier: <deployment-name>
      # scale +1 whenever it crosses multiples of the mentioned value
      targetAverageValue: "400"
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-app-group-1-1
Apart from this, your setup has no issues and should work smoothly.
Helper command to see what metrics are exposed to the HPA:
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/custom.googleapis.com|worker_pod_metrics|baz" | jq

Why would /var/run/secrets/kubernetes.io/serviceaccount/token be an empty file in a Pod?

I'm using a vanilla minikube environment.
I'm not specifying any service account-related instructions in my bare-bones simple Pod .yaml file.
Inside a deployed Pod, /var/run/secrets/kubernetes.io/serviceaccount/token is empty. What are the possible causes for this?
As mentioned in the docs
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
So double-check your Pod file, and check your ServiceAccount configuration with kubectl describe serviceaccount build-robot to see whether you are disabling the automount.
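To confirm what actually ends up in the pod (my-pod is a placeholder name), you can also read the mounted token directly:
kubectl exec my-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/token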
I was having this issue with minikube v1.13.1 on Ubuntu 18.04, using the 'none' driver to run Kubernetes v1.19.2 on Docker 19.03.6.
I was seeing that the serviceaccount token secret was correctly populated in Kubernetes, and that it was mounted as a volume for each pod, but that the directory (both in the pod and on the node itself) was empty.
I found that the problem was caused by disabling the 'storage-provisioner' addon, and resolved it by re-enabling it.
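If you hit the same issue, re-enabling the addon is a one-liner:
minikube addons enable storage-provisioner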