I can list all the available custom metrics, but I don't know how to query an individual value. For example, I have tried:
curl http://localhost:8001/apis/custom.metrics.k8s.io/v1beta1/ | jq .
{
"kind": "APIResourceList",
"apiVersion": "v1",
"groupVersion": "custom.metrics.k8s.io/v1beta1",
"resources": [
{
"name": "deployments.apps/aws_sqs_approximate_number_of_messages_visible_average",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
}
]
}
But if I try this:
curl http://localhost:8001/apis/custom.metrics.k8s.io/v1beta1/deployments.apps/aws_sqs_approximate_number_of_messages_visible_average | jq .
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "the server could not find the requested resource",
"reason": "NotFound",
"details": {
},
"code": 404
}
I get a 404. I've seen this issue, which shows how to get a namespaced metric, but mine does not appear to have a namespace. Is there a definition of how to use this API?
Just like Resource Metrics, Custom Metrics are bound to Kubernetes objects.
What you're missing in your URL is the resource you want the metric to relate to: for example, the Pod the custom metric describes, but the same is true for Deployments.
Try adjusting this URL to your needs:
kubectl get --raw \
'/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/podinfo-67c9fd95d-fqk4g/http_requests_per_second' \
| jq .
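In your case the object is a Deployment rather than a Pod. A sketch of the adjusted URL (my-deployment and default are placeholder names, and this assumes the adapter serves the metric under a namespace even though your listing reports namespaced: false):
kubectl get --raw \
'/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/deployments.apps/my-deployment/aws_sqs_approximate_number_of_messages_visible_average' \
| jq .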
Here are the slides for the talk we gave at FOSDEM 2019 on the Prometheus Adapter: https://speakerdeck.com/metalmatze/kubernetes-metrics-api?slide=26
I'll update this answer once the video is available, too.
Since I'm using DirectXMan12/k8s-prometheus-adapter, there are a few things to know:
I think it can only work with namespaced metrics.
If a query does not return a metric for a particular time period in Prometheus, k8s-prometheus-adapter will report it as non-existent.
This is my actual problem.
Using the custom metrics API is very simple:
kubectl proxy to open a proxy to your Kubernetes API
curl http://localhost:8001/apis/custom.metrics.k8s.io/v1beta1/ to list all custom metrics available.
For example you may see:
{
"name": "deployments.extensions/kube_deployment_status_replicas_available",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
}
We know it is namespaced from namespaced: true, and the name field tells us that beneath the namespace we select the object via a Deployment.
So we would build our query like so:
curl http://localhost:8001/apis/custom.metrics.k8s.io/v1beta1/namespaces/$NAMESPACE/deployments.extensions/$DEPLOYMENT/kube_deployment_status_replicas_available
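A successful query should come back as a MetricValueList, roughly shaped like this (the values below are invented for illustration):
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/deployments.extensions/my-deployment/kube_deployment_status_replicas_available"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Deployment",
        "namespace": "default",
        "name": "my-deployment",
        "apiVersion": "extensions/v1beta1"
      },
      "metricName": "kube_deployment_status_replicas_available",
      "timestamp": "2019-01-01T00:00:00Z",
      "value": "3"
    }
  ]
}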
At least I think that's how it should work, although if you run the same query without the deployments.extensions/$DEPLOYMENT section, it appears to show the value for the namespace:
curl http://localhost:8001/apis/custom.metrics.k8s.io/v1beta1/namespaces/$NAMESPACE/kube_deployment_status_replicas_available
Perhaps this is due to how the query executes in Prometheus.
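For what it's worth, the custom metrics API spec also defines a dedicated metrics segment for values that describe a namespace itself; whether a given adapter implements it may vary, so treat this as a path to try rather than a guarantee:
curl http://localhost:8001/apis/custom.metrics.k8s.io/v1beta1/namespaces/$NAMESPACE/metrics/kube_deployment_status_replicas_available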
I am trying to terminate the namespace argo in Kubernetes. In the past, I have successfully followed the directions found here: Kubernetes Namespaces stuck in Terminating status.
This time, however, I am getting the following error message. What does it mean, and how can I work around it?
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "namespaces \"argo\" is forbidden: User \"system:anonymous\" cannot update resource \"namespaces/finalize\" in API group \"\" in the namespace \"argo\"",
"reason": "Forbidden",
"details": {
"name": "argo",
"kind": "namespaces"
},
"code": 403
}
You need to use an authenticated user that has permissions for the namespaces/finalize subresource (or, more often, for *).
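One common workaround is to let kubectl supply the credentials instead of hitting the API anonymously. A sketch, assuming jq is installed and your kubeconfig user is allowed to update namespaces/finalize:
kubectl get namespace argo -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/argo/finalize" -f -
This clears the finalizers through the finalize subresource, which is exactly what the anonymous request above was forbidden from doing.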
I keep getting this when trying to reach the web page served by an httpd pod. What permissions am I missing?
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "pods \"pod-httpd\" is forbidden: User \"system:anonymous\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\"",
"reason": "Forbidden",
"details": {
"name": "pod-httpd",
"kind": "pods"
},
"code": 403
}
The error is clear: User "system:anonymous" means Kubernetes recognises you as an anonymous user, and that is why it returns Forbidden for the resources you want.
So, when you do curl https://<ip>:<port>/<endpoint>, you are using TLS for the communication. In this kind of communication you need to give curl the certificate of your CA (the certificate authority that signed your certificate), plus your own certificate and key, as below, because in TLS the server and client need to verify each other:
curl https://<ip>:<port>/<endpoint> --key <your_key> --cert <your_cert> --cacert <ca_cert>
N.B.: "you" here means the client.
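If you have a service account token rather than a client certificate, a bearer token works as well. A sketch, where $TOKEN is assumed to be a valid token whose service account has the necessary RBAC permissions:
curl https://<ip>:<port>/<endpoint> \
  --cacert <ca_cert> \
  --header "Authorization: Bearer $TOKEN"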
I have an instance of a Kubernetes Custom Resource that I want to patch via the Kubernetes API using a JSON patch.
This is my PATCH request:
PATCH /apis/example.com/v1alpha1/namespaces/default/mycrd/test HTTP/1.1
Accept: application/json
Content-Type: application/json-patch+json
[other headers omitted for brevity...]
[
{"op": "replace", "path": "/status/foo", "value": "bar"}
]
I'm fairly certain that my request body is a valid JSON patch, and I've previously updated core (non-CRD) API resources successfully using similar API calls. The CRD has an openAPIV3Schema defined that explicitly allows .status.foo to exist and to be of type string.
The request above is declined by the Kubernetes API server with the following response:
HTTP/1.1 422 Unprocessable Entity
Content-Type: application/json
[other headers omitted for brevity...]
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "the server rejected our request due to an error in our request",
"reason": "Invalid",
"details": {},
"code": 422
}
According to the CRD documentation, CRDs should support PATCH requests with the application/json-patch+json content type. But for some reason the request appears to be invalid, without Kubernetes bothering to tell me why. The API server pod did not have any relevant messages in its log stream, either.
The same error also occurs when using kubectl patch on the command line:
$ kubectl patch mycrd.example.com test --type=json -p '[{"op": "replace", "path": "/status/foo", "value": "bar"}]'
The "" is invalid
What are possible reasons for this error to occur? What options do I have for further debugging?
Found the (or at least, a partial) answer while still typing the question...
The Kubernetes API server will not recursively create nested objects for a JSON patch input. This behaviour is consistent with the JSON Patch specification in RFC 6902, section A.12:
A.12. Adding to a Nonexistent Target
An example target JSON document:
{ "foo": "bar" }
A JSON Patch document:
[
{ "op": "add", "path": "/baz/bat", "value": "qux" }
]
This JSON Patch document, applied to the target JSON document above,
would result in an error (therefore, it would not be applied),
because the "add" operation's target location that references neither
the root of the document, nor a member of an existing object, nor a
member of an existing array.
This is why the original request fails when the custom resource does not have a .status property to begin with. The following two subsequent calls (the second one being the original one) will complete successfully:
$ kubectl patch mycrd.example.com test --type=json \
-p '[{"op": "replace", "path": "/status", "value": {}}]'
mycrd.example.com/test patched
$ kubectl patch mycrd.example.com test --type=json \
-p '[{"op": "replace", "path": "/status/foo", "value": "bar"}]'
mycrd.example.com/test patched
Obviously, replacing the entire .status property with {} is not a good idea if that property already contains data that you want to keep.
A suitable alternative to a JSON patch in this scenario is a JSON Merge Patch:
PATCH /apis/example.com/v1alpha1/namespaces/default/mycrd/test HTTP/1.1
Accept: application/json
Content-Type: application/merge-patch+json
[other headers omitted for brevity...]
{
"status": {
"foo": "bar"
}
}
Or, alternatively, using kubectl:
$ kubectl patch mycrd.example.com test --type=merge \
-p '{"status": {"foo": "bar"}}'
mycrd.example.com/test patched
I'm trying to set up, via script, a Kubernetes cluster on GCE, which has always worked in the past, but I created a new project on GCE and I suddenly get all these permission errors:
Example:
Error from server (Forbidden): serviceaccounts is forbidden: User "client" cannot list serviceaccounts in the namespace "default": Unknown user "client"
Also, when I run kubectl proxy and open http://localhost:8001/, I get:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"client\" cannot get path \"/\": Unknown user \"client\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
Could somebody please point me in the right direction? Thanks!
Duplicate of what does Unknown user "client" mean?:
Found out there is some issue with gcloud config. This command solved it:
gcloud config unset container/use_client_certificate
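If the error persists after unsetting that flag, it may also help to refresh the kubeconfig entry for the cluster (cluster name and zone below are placeholders):
gcloud container clusters get-credentials <cluster-name> --zone <zone>
This rewrites the kubectl credentials using your gcloud identity instead of the legacy client certificate.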
So I have a service as follows:
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "monitoring-grafana",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/services/monitoring-grafana",
"uid": "be0f72b2-c482-11e5-a22c-fa163ebc1085",
"resourceVersion": "143360",
"creationTimestamp": "2016-01-26T23:15:51Z",
"labels": {
"kubernetes.io/cluster-service": "true",
"kubernetes.io/name": "monitoring-grafana"
}
},
"spec": {
"ports": [
{
"protocol": "TCP",
"port": 80,
"targetPort": 3000,
"nodePort": 0
}
],
"selector": {
"name": "influxGrafana"
},
"clusterIP": "192.168.182.76",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {}
}
However, whenever I try to access it through the proxy API, it always fails with this response:
http://10.32.10.44:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
Error: 'dial tcp 192.168.182.132:3000: getsockopt: no route to host'
Trying to reach: 'http://192.168.182.132:3000/'
It happens on all of my services also, not just the one posted.
What could be going wrong? Is something not installed?
Looking at the error you posted, it seems the traffic cannot be routed from your master to the Docker subnet of your node. The easiest way to validate this is to open a shell on your master and make a request against podIP:daemonPort:
curl -I http://192.168.182.132:3000
Each node in your cluster should be able to communicate with every other node, and every Docker subnet should be routable. For most deployments you will need to set up an extra network fabric to make this happen, such as flannel or Weave.
Take a look at Getting started from Scratch >> Network
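A quick way to check whether the master even has a route toward that Docker subnet (the address comes from the error above):
# run on the master; no output means the subnet is not routed,
# so you need a network fabric (flannel, Weave, ...) or static routes
ip route show | grep 192.168.182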
Something else looks off: the cluster IP used by your service (192.168.182.76) and the pod IP of the endpoint (192.168.182.132) seem to be in the same subnet. However, you need three different subnets:
one for the hosts
one for the Docker bridges (--bip flag of Docker)
one for the service (--service-cluster-ip-range= of the API server)
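For illustration, a non-overlapping layout could look like the sketch below; the concrete ranges are made up and must be adapted to your environment:
# hosts:          10.240.0.0/24
# Docker bridges: 172.17.0.0/16, handed out per node, e.g.:
dockerd --bip=172.17.42.1/24
# services:
kube-apiserver --service-cluster-ip-range=10.0.0.0/16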
In my case I didn't realize that I had an active firewall that was simply preventing access to the ports needed by Kubernetes. A quick and crude solution is to run systemctl stop firewalld on the master and all minion nodes; of course, you can instead just open the needed ports.
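If you'd rather keep firewalld running, opening the ports looks roughly like this; the exact port list is an assumption based on a typical kubeadm/flannel setup and will differ per deployment:
# on the master
firewall-cmd --permanent --add-port=6443/tcp    # API server
firewall-cmd --permanent --add-port=10250/tcp   # kubelet
firewall-cmd --permanent --add-port=8472/udp    # flannel VXLAN
firewall-cmd --reload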