How to get the list of metrics available for HPA? - kubernetes

I have a GCP cluster running a GKE application, and I want to scale the application using HPA.
HPA can read metrics from the following APIs:
metrics.k8s.io (resource metrics)
custom.metrics.k8s.io (custom metrics)
external.metrics.k8s.io (external metrics)
How can I check which metrics are available? How can I try these APIs on my own? Is that possible at all?
P.S.
Based on the suggested answer, I executed this command:
kubectl get --raw https://MY-KUBE-APISERVER-IP:6443/apis/metrics.k8s.io/v1beta1/namespaces/default/pods
Response is:
{
"items": [
{
"metadata": {
"name": "prometheus-adapter-69fcdd56bc-2plh7",
"namespace": "default",
"selfLink": "/\r\napis/metrics.k8s.io/v1beta1/namespaces/default/pods/prometheus-adapter-69fcdd56bc-2plh7",
"creationTimestamp": "2020-02-05T10:56:02Z"
},
"timestamp": "2020-02-05T10:55:22Z",
"window": "30s",
"containers": [
{
"name": "prometheus-adapter",
"usage": {
"cpu": "15\r\n31939n",
"memory": "10408Ki"
}
}
]
},
{
"metadata": {
"name": "stackdriver-exporter-76fdbc9d8f-c285l",
"namespace": "default",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/stackdriver-exporter-76fdbc9d8f-c285l",
"creationTimestamp": "2020-0\r\n2-05T10:56:02Z"
},
"timestamp": "2020-02-05T10:55:22Z",
"window": "30s",
"containers": [
{
"name": "stackdriver-exporter",
"usage": {
"cpu": "79340n",
"memory": "2000Ki"
}
}
]
}
],
"kind": "PodMetricsList",
"apiVersion": "metrics.k8s.io/v1beta1",
"metadata": {
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods"
}
}
$ kubectl top pods
NAME                                    CPU(cores)   MEMORY(bytes)
prometheus-adapter-69fcdd56bc-2plh7     2m           10Mi
stackdriver-exporter-76fdbc9d8f-c285l   1m           1Mi
But I still don't see all the metrics available for HPA.

The Metrics Server exposes metrics via the following APIs:
/nodes - all node metrics; type []NodeMetrics
/nodes/{node} - metrics for a specified node; type NodeMetrics
/namespaces/{namespace}/pods - all pod metrics within a namespace, with support for all-namespaces; type []PodMetrics
/namespaces/{namespace}/pods/{pod} - metrics for a specified pod; type PodMetrics
You can view the available metrics as in the examples below:
$ kubectl get --raw https://KUBE-APISERVER-IP:6443/apis/metrics.k8s.io/v1beta1
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "metrics.k8s.io/v1beta1",
"resources": [
{
"name": "nodes",
"singularName": "",
"namespaced": false,
"kind": "NodeMetrics",
"verbs": [
"get",
"list"
]
},
{
"name": "pods",
"singularName": "",
"namespaced": true,
"kind": "PodMetrics",
"verbs": [
"get",
"list"
]
}
]
}
$ kubectl get --raw https://KUBE-APISERVER-IP:6443/apis/metrics.k8s.io/v1beta1/namespaces/default/pods
{
"kind": "PodMetricsList",
"apiVersion": "metrics.k8s.io/v1beta1",
"metadata": {
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods"
},
"items": []
}
$ kubectl get --raw https://KUBE-APISERVER-IP:6443/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods
{
"kind": "PodMetricsList",
"apiVersion": "metrics.k8s.io/v1beta1",
"metadata": {
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"
},
"items": [
{
"metadata": {
"name": "coredns-bcccf59f-jfl6x",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/coredns-bcccf59f-jfl6x",
"creationTimestamp": "2021-02-17T20:31:29Z"
},
"timestamp": "2021-02-17T20:30:27Z",
"window": "30s",
"containers": [
{
"name": "coredns",
"usage": {
"cpu": "1891053n",
"memory": "8036Ki"
}
}
]
},
{
"metadata": {
"name": "coredns-bcccf59f-vmfvv",
"namespace": "kube-system",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/coredns-bcccf59f-vmfvv",
"creationTimestamp": "2021-02-17T20:31:29Z"
},
"timestamp": "2021-02-17T20:30:25Z",
"window": "30s",
"containers": [
{
"name": "coredns",
"usage": {
"cpu": "1869226n",
"memory": "8096Ki"
}
}
]
}
]
}
You can also use the command kubectl top pods, which internally calls the above API.
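Similarly, kubectl top nodes reads node-level metrics from the /nodes endpoint of the same API.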
Custom Metrics
These are provided by adapters developed by vendors, and which metrics are available depends on the adapter. Once you know the metric name, you can use the API to access it.
You can view the available metrics, and get their names, as below:
kubectl get --raw https://KUBE-APISERVER-IP:6443/apis/custom.metrics.k8s.io/v1beta1
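Once you know a metric name, you can query it directly. A minimal sketch, assuming a hypothetical pod metric named http_requests_per_second exposed by an adapter in the default namespace:
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests_per_second"
The response is a MetricValueList with one entry per matching pod.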
External Metrics
These are provided by adapters developed by vendors, and which metrics are available depends on the adapter. Once you know the metric name, you can use the API to access it.
You can view the available metrics, and get their names, as below:
kubectl get --raw https://KUBE-APISERVER-IP:6443/apis/external.metrics.k8s.io/v1beta1
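As with custom metrics, a known external metric can be queried by name. A sketch, assuming the Stackdriver Pub/Sub backlog metric (the Stackdriver adapter replaces slashes in the metric type with pipes):
$ kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/pubsub.googleapis.com|subscription|num_undelivered_messages"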
Edit:
You already have the Prometheus adapter, but if the metric is not exposed as a custom metric consumable by HPA, then you need to configure the adapter to expose the required metrics. Refer to this guide for that.
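For completeness, a minimal HPA sketch consuming such a custom pod metric (the metric name, target value, and Deployment name are illustrative assumptions, not taken from your cluster):
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp                        # assumed Deployment name
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second # assumed custom metric name
      target:
        type: AverageValue
        averageValue: "100"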

On GKE, the situation is a bit different.
By default, Kubernetes provides some built-in resource metrics (CPU and memory). If you want to use HPA based on these metrics, you will not have any issues.
In GCP terms:
Custom Metrics are used when you want to scale on metrics exported by a Kubernetes workload, or on a metric attached to a Kubernetes object such as a Pod or Node.
External Metrics - metrics sent to Workspaces with a metric type beginning with external.googleapis.com are known as external metrics. They are typically exported by open-source projects and third-party providers. More details can be found here.
Stackdriver Monitoring treats external metrics the same as custom metrics, with one exception. For external metrics, a resource_type of global is invalid and results in the metric data being discarded.
As GKE is integrated with Stackdriver:
Google Kubernetes Engine (GKE) includes native integration with Stackdriver Monitoring and Stackdriver Logging. When you create a GKE cluster, Stackdriver Kubernetes Engine Monitoring is enabled by default and provides a monitoring dashboard specifically tailored for Kubernetes.
With Stackdriver Kubernetes Engine Monitoring, you can control whether or not Stackdriver Logging collects application logs. You also have the option to disable the Stackdriver Monitoring and Stackdriver Logging integration altogether.
Check Available Metrics
As you are using a cloud environment (GKE), you can find all the default available metrics by curling localhost on the proper port: SSH to one of the nodes and then query the kubelet's read-only port with $ curl localhost:10255/metrics.
A second way is to check the available metrics documentation.
IMPORTANT
You can see the available metrics; however, to use them in HPA you need to deploy an adapter such as the Stackdriver adapter or the Prometheus adapter. In a default (version 1.13.11) GKE cluster, metrics-server, heapster (deprecated in newer versions) and prometheus-to-sd-XXX are already deployed. If you want to use Stackdriver, much of the configuration is already applied, but if you want to use Prometheus you will need to adjust the Prometheus operators, adapters, and deployments. Details can be found here.
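As a sketch of the Stackdriver route, deploying the custom metrics adapter looks like this (the manifest URL comes from the GKE custom-metrics tutorial and may have moved since):
$ kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter.yaml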
In the GKE docs you can find tutorials for using HPA with Custom Metrics or with External Metrics. You can also read about GKE monitoring with Prometheus and Stackdriver, depending on your needs.
As GKE is integrated with Stackdriver, you can also read the article about Enabling Monitoring.

Related

How to scale a Kubernetes deployment which uses a Persistent Volume Claim to 2 Pods?

I have a Kubernetes deployment (Apache Flume, to be exact) which needs to store persistent data. It has a PVC set up and bound to a path, which works without problems.
When I simply increase the scale of the deployment through the Kubernetes dashboard, it gives me an error saying multiple pods are trying to attach the same persistent volume. My deployment description is something like this (I tried to remove the irrelevant parts):
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "myapp-deployment",
"labels": {
"app": "myapp",
"name": "myapp-master"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "myapp",
"name": "myapp-master"
}
},
"template": {
"spec": {
"volumes": [
{
"name": "myapp-data",
"persistentVolumeClaim": {
"claimName": "myapp-pvc"
}
}
],
"containers": [
{
"name": "myapp",
"resources": {},
"volumeMounts": [
{
"name": "ingestor-data",
"mountPath": "/data"
}
]
}
]
}
},...
Each pod should get its own persistent space (but with the same path name), so one doesn't mess with the others'. I tried to add a new volume to the volumes array above, and a volume mount to the volumeMounts array, but it didn't work (I guess that meant "bind two volumes to a single container").
What should I change to have 2 pods with separate persistent volumes? What should I change to have N pods and N PVCs, so I can freely scale the deployment up and down?
Note: I saw a similar question here which explains that N pods cannot be done using deployments. Is it possible to do what I want with only 2 pods?
You should use a StatefulSet for that. It is meant for pods with persistent data that should survive a pod restart. Replicas have a defined order and are named accordingly (my-app-0, my-app-1, ...). They are stopped and restarted in this order and will mount the same volume after a restart/update.
With a StatefulSet you can use volumeClaimTemplates to dynamically create a new PersistentVolume along with each new pod. So every time a pod is created, a volume gets provisioned by your storage class.
From docs:
The volumeClaimTemplates will provide stable storage using PersistentVolumes provisioned by a PersistentVolume Provisioner
volumeClaimTemplates:
- metadata:
    name: www
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: "my-storage-class"
    resources:
      requests:
        storage: 1Gi
See docs for more details:
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components
Each pod should get its own persistent space (but with same pathname), so one doesn't mess with the others'.
For this reason, use a StatefulSet instead. Most things will work the same way, except that each Pod will get its own unique Persistent Volume.
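A minimal StatefulSet sketch for this case (names, image and storage size are illustrative, adapted from the question, not the exact manifest):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp            # a headless Service with this name is assumed
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp-image      # placeholder image
        volumeMounts:
        - name: myapp-data
          mountPath: /data      # same path in every pod
  volumeClaimTemplates:
  - metadata:
      name: myapp-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
Each replica then gets its own PVC (myapp-data-myapp-0, myapp-data-myapp-1, ...) mounted at the same /data path.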

How to access the application deployed in minikube k8s cluster

I have installed minikube, deployed the hello-minikube application and opened the port. Basically, I followed the getting-started tutorial at https://kubernetes.io/docs/setup/learning-environment/minikube/#quickstart.
The problem starts when I want to open the URL where the deployed application is running, obtained by running minikube service hello-minikube --url.
I get http://172.17.0.7:31198, and that URI cannot be opened, since that IP does not exist locally. Changing it to http://localhost:31198 does not work either (so adding an entry to the hosts file won't work, I guess).
The application is running; I can query the cluster and obtain its info through http://127.0.0.1:50501/api/v1/namespaces/default/services/hello-minikube:
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "hello-minikube",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/services/hello-minikube",
"uid": "56845ce6-bbba-45e5-a1b6-d094949438cf",
"resourceVersion": "1578",
"creationTimestamp": "2020-03-10T10:33:41Z",
"labels": {
"app": "hello-minikube"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 8080,
"targetPort": 8080,
"nodePort": 31198
}
],
"selector": {
"app": "hello-minikube"
},
"clusterIP": "10.108.152.177",
"type": "NodePort",
"sessionAffinity": "None",
"externalTrafficPolicy": "Cluster"
},
"status": {
"loadBalancer": {
}
}
}
λ kubectl get services
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
hello-minikube   NodePort    10.108.152.177   <none>        8080:31198/TCP   4h34m
kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP          4h42m
How can I access the application deployed in the minikube k8s cluster on localhost? Also, minikube is running as a Docker container on the machine, with the following ports exposed: 32770:2376, 32769:8443, 32771:22.
Found the solution in another thread - port forwarding:
kubectl port-forward svc/hello-minikube 31191:8080
The first port is the one you will use on your machine (in the browser), and 8080 is the port defined when running the service.
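With the port-forward running, the application should then be reachable at http://localhost:31191 in the browser.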

Pod security context and NFS mounting

According to this post:
https://netapp.io/2018/06/15/highly-secure-kubernetes-persistent-volumes/
You can't use/mount an NFS share in a pod if the pod does not have a privileged security context.
I am running a pod with an external NFS share mounted, but I have not specified any security context other than uid/gid, and read/write is working fine.
How can I check whether my pod is a normal one or privileged?
You can check this using kubectl get pod yourpod -o json, under .spec.containers[].securityContext, or in the metadata annotations.
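As a shortcut, a jsonpath query can pull out just that field (yourpod is a placeholder name):
$ kubectl get pod yourpod -o jsonpath='{.spec.containers[*].securityContext}'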
As an example, I created 2 nginx pods:
nginx (with privileged: true)
"metadata": {
"annotations": {
"cni.projectcalico.org/podIP": "10.48.2.3/32",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"nginx\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}],\"securityContext\":{\"privileged\":true}}]}}\n",
"securityContext": {
"privileged": true
and
nginx-nonprivileged
"metadata": {
"annotations": {
"cni.projectcalico.org/podIP": "10.48.2.4/32",
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"nginx\"},\"name\":\"nginx-nonprivileged\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"nginx\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}\n",

No response when "externalTrafficPolicy" is set to "Local"

I cannot reach the following Kubernetes service when externalTrafficPolicy: Local is set. I access it directly through the NodePort but always get a timeout.
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "echo",
"namespace": "default",
"selfLink": "/api/v1/namespaces/default/services/echo",
"uid": "c1b66aca-cc53-11e8-9062-d43d7ee2fdff",
"resourceVersion": "5190074",
"creationTimestamp": "2018-10-10T06:14:33Z",
"labels": {
"k8s-app": "echo"
}
},
"spec": {
"ports": [
{
"name": "tcp-8080-8080-74xhz",
"protocol": "TCP",
"port": 8080,
"targetPort": 3333,
"nodePort": 30275
}
],
"selector": {
"k8s-app": "echo"
},
"clusterIP": "10.101.223.0",
"type": "NodePort",
"sessionAffinity": "None",
"externalTrafficPolicy": "Local"
},
"status": {
"loadBalancer": {}
}
}
I know that for this to work, pods of the service need to be running on the node I am querying, because traffic is not routed to other nodes. I checked this.
I'm not sure where you are connecting from, what command you are using to test connectivity, or what your environment is like, but this is most likely due to this known issue where the node ports are not reachable with externalTrafficPolicy set to Local if kube-proxy cannot find the IP address of the node it is running on.
This link sheds more light on the problem. Apparently --hostname-override on kube-proxy is not working as of K8s 1.10; you have to specify the HostnameOverride option in the kube-proxy ConfigMap. There's also a fix described here that should make it upstream at some point after this writing.
As Jagrut said, the link shared in Rico's answer no longer contains the desired section with the patch, so I'll share a different thread where stacksonstacks' answer worked for me: here. The solution consists of editing kube-proxy.yaml to include the HOST_IP argument.
In my case, the fix was to send the request to the cluster/public IP of the node that owns the deployment/pod:
spec:
  clusterIP: 10.9x.x.x          <-- request this ip
  clusterIPs:
  - 10.9x.x.x
  externalTrafficPolicy: Local
or
NAME          STATUS   ...   EXTERNAL-IP
master-node   Ready          3.2x.x.x
worker-node   Ready          13.1x.x.x     <-- request this ip
Additionally, to always request the same node's IP, use a nodeSelector in the Deployment, as sketched below.
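A minimal sketch of such a nodeSelector (the hostname label value is an assumption; check yours with kubectl get nodes --show-labels):
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker-node   # pin pods to this node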

daemonset doesn't create any pods

I'm trying to get going with Kubernetes DaemonSets and not having any luck at all. I've searched for a solution to no avail. I'm hoping someone here can help out.
First, I've seen this ticket. Restarting the controller manager doesn't appear to help. As you can see here, the other kube processes have all been started after the apiserver and the api server has '--runtime-config=extensions/v1beta1=true' set.
kube 31398 1 0 08:54 ? 00:00:37 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://dock-admin:2379 --address=0.0.0.0 --allow-privileged=false --portal_net=10.254.0.0/16 --admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota --runtime-config=extensions/v1beta1=true
kube 12976 1 0 09:49 ? 00:00:28 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://127.0.0.1:8080 --cloud-provider=
kube 29489 1 0 11:34 ? 00:00:00 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://127.0.0.1:8080
However, api-versions only shows v1:
$ kubectl api-versions
Available Server Api Versions: v1
Kubernetes version is 1.2:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"86327329213fed4af2661c5ae1e92f9956b24f55", GitTreeState:"clean"}
The DaemonSet has been created, but appears to have no pods scheduled (status.desiredNumberScheduled).
$ kubectl get ds -o json
{
"kind": "List",
"apiVersion": "v1",
"metadata": {},
"items": [
{
"kind": "DaemonSet",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "ds-test",
"namespace": "dvlp",
"selfLink": "/apis/extensions/v1beta1/namespaces/dvlp/daemonsets/ds-test",
"uid": "2d948b18-fa7b-11e5-8a55-00163e245587",
"resourceVersion": "2657499",
"generation": 1,
"creationTimestamp": "2016-04-04T15:37:45Z",
"labels": {
"app": "ds-test"
}
},
"spec": {
"selector": {
"app": "ds-test"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "ds-test"
}
},
"spec": {
"containers": [
{
"name": "ds-test",
"image": "foo.vt.edu:1102/dbaa-app:v0.10-dvlp",
"ports": [
{
"containerPort": 8080,
"protocol": "TCP"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {}
}
}
},
"status": {
"currentNumberScheduled": 0,
"numberMisscheduled": 0,
"desiredNumberScheduled": 0
}
}
]
}
Here is my YAML file to create the DaemonSet:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: ds-test
spec:
  selector:
    app: ds-test
  template:
    metadata:
      labels:
        app: ds-test
    spec:
      containers:
      - name: ds-test
        image: foo.vt.edu:1102/dbaa-app:v0.10-dvlp
        ports:
        - containerPort: 8080
Using that file to create the DaemonSet appears to work (I get 'daemonset "ds-test" created'), but no pods are created:
$ kubectl get pods -o json
{
"kind": "List",
"apiVersion": "v1",
"metadata": {},
"items": []
}
(I would have posted this as a comment, if I had enough reputation)
I am confused by your output.
kubectl api-versions should print out extensions/v1beta1 if it is enabled on the server. Since it does not, it looks like extensions/v1beta1 is not enabled.
But kubectl get ds should fail if extensions/v1beta1 is not enabled, so I cannot figure out whether extensions/v1beta1 is enabled on your server or not.
Can you try GET masterIP/apis and see if extensions is listed there?
You can also go to masterIP/apis/extensions/v1beta1 and see if daemonsets is listed there.
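For example, assuming the insecure API port visible in your process listing (--master=http://127.0.0.1:8080):
$ curl http://127.0.0.1:8080/apis
$ curl http://127.0.0.1:8080/apis/extensions/v1beta1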
Also, I see kubectl version says 1.2, but then kubectl api-versions should not print the string Available Server Api Versions (that string was removed in 1.1: https://github.com/kubernetes/kubernetes/pull/15796).
I had this issue in my cluster (k8s version: 1.9.7):
DaemonSets are handled by the DaemonSet controller, not by the scheduler, so I restarted the controller manager and the problem was solved.
But I think this is an issue in Kubernetes; some related info:
Bug 1469037 - Sometime daemonset DESIRED=0 even this matched node
v1.7.4 - Daemonset DESIRED 0 (for node-exporter) #51785
I was facing a similar issue, then tried searching for the daemonset in the kube-system namespace, as mentioned here: https://github.com/kubernetes/kubernetes/issues/61342
There I actually did get proper output.
In any case where the current state of the pods does not equal the desired state (whether they were created by a DaemonSet, ReplicaSet, Deployment, etc.), I would first check the kubelet on the affected node:
$ sudo systemctl status kubelet
Or:
$ sudo journalctl -u kubelet
In many cases pods weren't created in my cluster because of errors like:
Couldn't parse as pod (Object 'Kind' is missing in 'null')
which might occur after editing a resource's YAML in an editor like vim.
Try:
$ kubectl taint nodes --all node-role.kubernetes.io/master:NoSchedule-
By default, the master node is tainted so that it does not accept pods; removing the taint allows pods to be scheduled there.