Forbidden to access Kubernetes API Server - kubernetes

I have defined a ClusterRole for Prometheus:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
  labels:
    k8s-app: prometheus
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - endpoints
  - services
  - nodes
  - pods
  verbs:
  - get
  - watch
  - list
- nonResourceURLs:
  - /metrics
  - /api/*
  verbs:
  - get
Prometheus is able to access the API server's /metrics route:
https://10.0.1.104:443/metrics
https://10.0.2.112:443/metrics
But I get "server returned HTTP status 403 Forbidden" on
https://kubernetes.default.svc:443/api/v1/nodes/ip-10-0-0-219.eu-west-1.compute.internal/proxy/metrics
and
https://kubernetes.default.svc:443/api/v1/nodes/ip-10-0-0-219.eu-west-1.compute.internal/proxy/metrics/cadvisor
I thought I had this covered by
- nonResourceURLs:
  - /api/*
What am I missing?

I tried this myself and yes, nodes/proxy is missing (it works for me after adding it).
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - endpoints
  - services
  - nodes
  - nodes/proxy   <===
  - pods
# From my K8s master
$ curl -k -H 'Authorization: Bearer <redacted>' https://localhost:6443/api/v1/nodes/ip-x-x-x-x.us-west-2.compute.internal/proxy/stats/summary
{
  "node": {
    "nodeName": "ip-x-x-x-x.us-west-2.compute.internal",
    "systemContainers": [
      {
        "name": "kubelet",
        "startTime": "2018-10-19T21:02:19Z",
        "cpu": {
          "time": "2018-11-09T23:51:15Z",
          "usageNanoCores": 30779949,
          "usageCoreNanoSeconds": 59446529195638
        },
        ....
Removing it:
$ curl -k -H 'Authorization: Bearer <redacted>' https://localhost:6443/api/v1/nodes/ip-x-x-x-x.us-west-2.compute.internal/proxy/stats/summary
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "nodes \"ip-x-x-x-x.us-west-2.compute.internal\" is forbidden: User \"system:serviceaccount:default:prometheus-k8s\" cannot get resource \"nodes/proxy\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "name": "ip-x-x-x-x.us-west-2.compute.internal",
    "kind": "nodes"
  },
  "code": 403
}

For those two endpoints, the rules may be missing the nodes/metrics and nodes/proxy (sub)resources, and possibly the proxy verb.
If it is acceptable from a security standpoint, it is much easier to assign the cluster-reader role to the Prometheus service account.
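For reference, a minimal sketch of a ClusterRole that covers both of those node proxy endpoints (assuming the rbac.authorization.k8s.io/v1 API and the same binding to the Prometheus service account as above):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - endpoints
  - services
  - nodes
  - nodes/metrics   # commonly granted as well, checked by kubelets when scraped directly
  - nodes/proxy     # what the API server checks for /api/v1/nodes/<name>/proxy/... (see the 403 above)
  - pods
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]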

Related

Setting environment variables in kubernetes manifest using "kubectl set env"

I am trying to update a helm-deployed deployment so that it uses a secret stored as a k8s secret resource. This must be set as the STORAGE_PASSWORD environment variable in my pod.
In my case, the secret is in secrets/redis and the data item is redis-password:
$ kubectl get secret/redis -oyaml
apiVersion: v1
data:
  redis-password: XXXXXXXXXXXXXXXX=
kind: Secret
metadata:
  name: redis
type: Opaque
I have tried:
$ kubectl set env --from secret/redis deployment/gateway --keys=redis-password
Warning: key redis-password transferred to REDIS_PASSWORD
deployment.apps/gateway env updated
When I look in my updated deployment manifest, I see the variable has been added, but (as the warning suggested) it has been named REDIS_PASSWORD:
- name: REDIS_PASSWORD
  valueFrom:
    secretKeyRef:
      key: redis-password
      name: redis
I have also tried kubectl patch with a replace operation, but I can't get the syntax correct to have the secret inserted.
How do I change the name of the environment variable to STORAGE_PASSWORD?
Given a deployment that looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:            # apps/v1 requires a selector that matches the pod template labels
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - image: alpinelinux/darkhttpd
        name: darkhttpd
        args:
        - --port
        - "9991"
        ports:
        - name: http
          protocol: TCP
          containerPort: 9991
        env:
        - name: EXAMPLE_VAR
          value: example value
The syntax for patching in your secret would look like:
kubectl patch deploy/example --patch='
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "darkhttpd",
            "env": [
              {
                "name": "STORAGE_PASSWORD",
                "valueFrom": {
                  "secretKeyRef": {
                    "name": "redis",
                    "key": "redis-password"
                  }
                }
              }
            ]
          }
        ]
      }
    }
  }
}
'
Or using a JSONPatch style patch:
kubectl patch --type json deploy/example --patch='
[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/env/-",
    "value": {
      "name": "STORAGE_PASSWORD",
      "valueFrom": {
        "secretKeyRef": {
          "name": "redis",
          "key": "redis-password"
        }
      }
    }
  }
]
'
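One caveat with the JSONPatch variant: an add op whose path ends in /env/- appends to an existing env array, so it fails if the container has no env yet. In that case you can add the whole array instead (an illustrative sketch against the same hypothetical deployment):
kubectl patch --type json deploy/example --patch='
[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/env",
    "value": [
      {
        "name": "STORAGE_PASSWORD",
        "valueFrom": {
          "secretKeyRef": {
            "name": "redis",
            "key": "redis-password"
          }
        }
      }
    ]
  }
]
'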
Neither one is especially pretty because you're adding a complex nested structure to an existing complex nested structure.
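If the inline quoting gets unwieldy, one option (assuming a reasonably recent kubectl that supports --patch-file) is to keep the strategic merge patch in a file, here hypothetically named env-patch.yaml:
# env-patch.yaml
spec:
  template:
    spec:
      containers:
      - name: darkhttpd
        env:
        - name: STORAGE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: redis
              key: redis-password
kubectl patch deploy/example --patch-file env-patch.yaml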
You may also update the resource with kubectl edit:
kubectl edit deployment gateway
then edit the YAML:
# - name: REDIS_PASSWORD
- name: STORAGE_PASSWORD
  valueFrom:
    secretKeyRef:
      key: redis-password
      name: redis
FYI: https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#kubectl-edit

the server could not find the metric nginx_vts_server_requests_per_second for pods

I installed kube-prometheus-0.9.0 and want to deploy a sample application on which to test Prometheus-metrics-based autoscaling, using the following resource manifest file (hpa-prome-demo.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-prom-demo
spec:
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      containers:
      - name: nginx-demo
        image: cnych/nginx-vts:v1.0
        resources:
          limits:
            cpu: 50m
          requests:
            cpu: 50m
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-prom-demo
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "80"
    prometheus.io/path: "/status/format/prometheus"
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: nginx-server
  type: NodePort
For testing purposes, I used a NodePort Service and, luckily, I can get an HTTP response after applying the deployment. Then I installed the
Prometheus Adapter via its Helm chart, creating a new hpa-prome-adapter-values.yaml file to override the default values, as follows.
rules:
  default: false
  custom:
  - seriesQuery: 'nginx_vts_server_requests_total'
    resources:
      overrides:
        kubernetes_namespace:
          resource: namespace
        kubernetes_pod_name:
          resource: pod
    name:
      matches: "^(.*)_total"
      as: "${1}_per_second"
    metricsQuery: (sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))
prometheus:
  url: http://prometheus-k8s.monitoring.svc
  port: 9090
I added a custom rule and specified the address of Prometheus, then installed the Prometheus Adapter with the following command.
$ helm install prometheus-adapter prometheus-community/prometheus-adapter -n monitoring -f hpa-prome-adapter-values.yaml
NAME: prometheus-adapter
LAST DEPLOYED: Fri Jan 28 09:16:06 2022
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
prometheus-adapter has been deployed.
In a few minutes you should be able to list metrics using the following command(s):
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
Finally, the adapter was installed successfully, and I can get an HTTP response, as follows.
$ kubectl get po -nmonitoring |grep adapter
prometheus-adapter-665dc5f76c-k2lnl 1/1 Running 0 133m
$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
But it was supposed to look like this:
$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "pods/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
Why can't I get the pods/nginx_vts_server_requests_per_second metric? As a result, the query below also fails:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second" | jq .
Error from server (NotFound): the server could not find the metric nginx_vts_server_requests_per_second for pods
Could anybody please help? Many thanks.
ENV:
Helm installed all Prometheus charts from prometheus-community https://prometheus-community.github.io/helm-chart
K8s cluster enabled by Docker for Mac
Solution:
I met the same problem. From the Prometheus UI, I found the metric had a namespace label but no pod label, as below:
nginx_vts_server_requests_total{code="1xx", host="*", instance="10.1.0.19:80", job="kubernetes-service-endpoints", namespace="default", node="docker-desktop", service="hpa-prom-demo"}
I thought Prometheus might not be adding pod as a label, so I checked the Prometheus config and found:
- action: replace
  source_labels:
  - __meta_kubernetes_pod_node_name
  target_label: node
Then I searched
https://prometheus.io/docs/prometheus/latest/configuration/configuration/ and added the following below every __meta_kubernetes_pod_node_name block I found (i.e. 2 places):
- action: replace
  source_labels:
  - __meta_kubernetes_pod_name
  target_label: pod
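For context, a sketch of where this sits inside a kubernetes_sd scrape job; the job name and service-discovery role here are assumptions based on the job label shown above:
- job_name: kubernetes-service-endpoints
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  - action: replace
    source_labels:
    - __meta_kubernetes_pod_node_name
    target_label: node
  - action: replace
    source_labels:
    - __meta_kubernetes_pod_name
    target_label: pod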
After a while the ConfigMap reloaded, and both the UI and the API could find the pod label:
$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "pods/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}
It is worth knowing that using the kube-prometheus repository, you can also install components such as Prometheus Adapter for Kubernetes Metrics APIs, so there is no need to install it separately with Helm.
I will use your hpa-prome-demo.yaml manifest file to demonstrate how to monitor nginx_vts_server_requests_total metrics.
First of all, we need to install Prometheus and Prometheus Adapter with appropriate configuration as described step by step below.
Clone the kube-prometheus repository and refer to the Kubernetes compatibility matrix in order to choose a compatible branch:
$ git clone https://github.com/prometheus-operator/kube-prometheus.git
$ cd kube-prometheus
$ git checkout release-0.9
Install the jb, jsonnet and gojsontoyaml tools:
$ go install -a github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest
$ go install github.com/google/go-jsonnet/cmd/jsonnet@latest
$ go install github.com/brancz/gojsontoyaml@latest
Uncomment the (import 'kube-prometheus/addons/custom-metrics.libsonnet') + line from the example.jsonnet file:
$ cat example.jsonnet
local kp =
  (import 'kube-prometheus/main.libsonnet') +
  // Uncomment the following imports to enable its patches
  // (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
  // (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
  // (import 'kube-prometheus/addons/node-ports.libsonnet') +
  // (import 'kube-prometheus/addons/static-etcd.libsonnet') +
  (import 'kube-prometheus/addons/custom-metrics.libsonnet') +   <--- This line
  // (import 'kube-prometheus/addons/external-metrics.libsonnet') +
  ...
Add the following rule to the ./jsonnet/kube-prometheus/addons/custom-metrics.libsonnet file in the rules+ section:
{
  seriesQuery: "nginx_vts_server_requests_total",
  resources: {
    overrides: {
      namespace: { resource: 'namespace' },
      pod: { resource: 'pod' },
    },
  },
  name: { "matches": "^(.*)_total", "as": "${1}_per_second" },
  metricsQuery: "(sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))",
},
After this update, the ./jsonnet/kube-prometheus/addons/custom-metrics.libsonnet file should look like this:
NOTE: This is not the entire file, just an important part of it.
$ cat custom-metrics.libsonnet
// Custom metrics API allows the HPA v2 to scale based on arbitrary metrics.
// For more details on usage visit https://github.com/DirectXMan12/k8s-prometheus-adapter#quick-links
{
  values+:: {
    prometheusAdapter+: {
      namespace: $.values.common.namespace,
      // Rules for custom-metrics
      config+:: {
        rules+: [
          {
            seriesQuery: "nginx_vts_server_requests_total",
            resources: {
              overrides: {
                namespace: { resource: 'namespace' },
                pod: { resource: 'pod' },
              },
            },
            name: { "matches": "^(.*)_total", "as": "${1}_per_second" },
            metricsQuery: "(sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))",
          },
...
Use the jsonnet-bundler update functionality to update the kube-prometheus dependency:
$ jb update
Compile the manifests:
$ ./build.sh example.jsonnet
Now simply use kubectl to install Prometheus and other components as per your configuration:
$ kubectl apply --server-side -f manifests/setup
$ kubectl apply -f manifests/
After configuring Prometheus, we can deploy a sample hpa-prom-demo Deployment:
NOTE: I've deleted the annotations because I'm going to use a ServiceMonitor to describe the set of targets to be monitored by Prometheus.
$ cat hpa-prome-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-prom-demo
spec:
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      containers:
      - name: nginx-demo
        image: cnych/nginx-vts:v1.0
        resources:
          limits:
            cpu: 50m
          requests:
            cpu: 50m
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-prom-demo
  labels:
    app: nginx-server
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: nginx-server
  type: LoadBalancer
Next, create a ServiceMonitor that describes how to monitor our NGINX:
$ cat servicemonitor.yaml
kind: ServiceMonitor
apiVersion: monitoring.coreos.com/v1
metadata:
  name: hpa-prom-demo
  labels:
    app: nginx-server
spec:
  selector:
    matchLabels:
      app: nginx-server
  endpoints:
  - interval: 15s
    path: "/status/format/prometheus"
    port: http
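Optionally, before querying the custom metrics API, you can check that Prometheus itself sees the new series. A quick sketch using port-forward; the prometheus-k8s service name follows the kube-prometheus defaults already used above:
$ kubectl -n monitoring port-forward svc/prometheus-k8s 9090 &
$ curl -s 'http://localhost:9090/api/v1/query?query=nginx_vts_server_requests_total' | jq '.data.result | length'
A non-zero count means the target is being scraped and the adapter rule has something to work with.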
After waiting some time, let's check the hpa-prom-demo logs to make sure that it is scraped correctly:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hpa-prom-demo-bbb6c65bb-49jsh 1/1 Running 0 35m
$ kubectl logs -f hpa-prom-demo-bbb6c65bb-49jsh
...
10.4.0.9 - - [04/Feb/2022:09:29:17 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3771 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:29:32 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3771 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:29:47 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:30:02 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.0.9 - - [04/Feb/2022:09:30:17 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
10.4.2.12 - - [04/Feb/2022:09:30:23 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
...
Finally, we can check if our metrics work as expected:
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/" | jq . | grep -A 7 "nginx_vts_server_requests_per_second"
"name": "pods/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": true,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
--
"name": "namespaces/nginx_vts_server_requests_per_second",
"singularName": "",
"namespaced": false,
"kind": "MetricValueList",
"verbs": [
"get"
]
},
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second" | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/nginx_vts_server_requests_per_second"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "hpa-prom-demo-bbb6c65bb-49jsh",
        "apiVersion": "/v1"
      },
      "metricName": "nginx_vts_server_requests_per_second",
      "timestamp": "2022-02-04T09:32:59Z",
      "value": "533m",
      "selector": null
    }
  ]
}
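With the pod metric in place, a sketch of an HPA that scales the demo Deployment on it could look like the following (autoscaling/v2 is assumed here; use autoscaling/v2beta2 on older clusters, and treat the target value and replica bounds as illustrative):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-prom-demo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-prom-demo
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: nginx_vts_server_requests_per_second
      target:
        type: AverageValue
        averageValue: "10"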

I have an RBAC problem, but everything I test seems ok?

This is a continuation of the problem described here (How do I fix a role-based problem when my role appears to have the correct permissions?)
I have done much more testing and still do not understand the error
Error from server (Forbidden): pods is forbidden: User "dma" cannot list resource "pods" in API group "" at the cluster scope
UPDATE: Here is another hint from the API server
watch chan error: etcdserver: mvcc: required revision has been compacted
I found this thread, but I am running a current Kubernetes version:
How fix this error "watch chan error: etcdserver: mvcc: required revision has been compacted"?
My user exists
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
dma 77m kubernetes.io/kube-apiserver-client kubernetes-admin <none> Approved,Issued
The clusterrole exists
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"kubelet-runtime"},"rules":[{"apiGroups":["","extensions","apps","argoproj.io","workflows.argoproj.io","events.argoproj.io","coordination.k8s.io"],"resources":["*"],"verbs":["*"]},{"apiGroups":["batch"],"resources":["jobs","cronjobs"],"verbs":["*"]}]}
  creationTimestamp: "2021-12-16T00:24:56Z"
  name: kubelet-runtime
  resourceVersion: "296716"
  uid: a4697d6e-c786-4ec9-bf3e-88e3dbfdb6d9
rules:
- apiGroups:
  - ""
  - extensions
  - apps
  - argoproj.io
  - workflows.argoproj.io
  - events.argoproj.io
  - coordination.k8s.io
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - jobs
  - cronjobs
  verbs:
  - '*'
The sandbox namespace exists
NAME STATUS AGE
sandbox Active 6d6h
My user is bound to the kubelet-runtime ClusterRole, including for the namespace "sandbox":
{
  "apiVersion": "rbac.authorization.k8s.io/v1",
  "kind": "ClusterRoleBinding",
  "metadata": {
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"rbac.authorization.k8s.io/v1\",\"kind\":\"ClusterRoleBinding\",\"metadata\":{\"annotations\":{},\"name\":\"dma-kubelet-binding\"},\"roleRef\":{\"apiGroup\":\"rbac.authorization.k8s.io\",\"kind\":\"ClusterRole\",\"name\":\"kubelet-runtime\"},\"subjects\":[{\"kind\":\"ServiceAccount\",\"name\":\"dma\",\"namespace\":\"argo\"},{\"kind\":\"ServiceAccount\",\"name\":\"dma\",\"namespace\":\"argo-events\"},{\"kind\":\"ServiceAccount\",\"name\":\"dma\",\"namespace\":\"sandbox\"}]}\n"
    },
    "creationTimestamp": "2021-12-16T00:25:42Z",
    "name": "dma-kubelet-binding",
    "resourceVersion": "371397",
    "uid": "a2fb6d5b-8dba-4320-af74-71caac7bdc39"
  },
  "roleRef": {
    "apiGroup": "rbac.authorization.k8s.io",
    "kind": "ClusterRole",
    "name": "kubelet-runtime"
  },
  "subjects": [
    {
      "kind": "ServiceAccount",
      "name": "dma",
      "namespace": "argo"
    },
    {
      "kind": "ServiceAccount",
      "name": "dma",
      "namespace": "argo-events"
    },
    {
      "kind": "ServiceAccount",
      "name": "dma",
      "namespace": "sandbox"
    }
  ]
}
My user has the correct permissions
{
  "apiVersion": "rbac.authorization.k8s.io/v1",
  "kind": "Role",
  "metadata": {
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"rbac.authorization.k8s.io/v1\",\"kind\":\"Role\",\"metadata\":{\"annotations\":{},\"name\":\"dma\",\"namespace\":\"sandbox\"},\"rules\":[{\"apiGroups\":[\"\",\"apps\",\"autoscaling\",\"batch\",\"extensions\",\"policy\",\"rbac.authorization.k8s.io\",\"argoproj.io\",\"workflows.argoproj.io\"],\"resources\":[\"pods\",\"configmaps\",\"deployments\",\"events\",\"pods\",\"persistentvolumes\",\"persistentvolumeclaims\",\"services\",\"workflows\"],\"verbs\":[\"get\",\"list\",\"watch\",\"create\",\"update\",\"patch\",\"delete\"]}]}\n"
    },
    "creationTimestamp": "2021-12-21T19:41:38Z",
    "name": "dma",
    "namespace": "sandbox",
    "resourceVersion": "1058387",
    "uid": "94191881-895d-4457-9764-5db9b54cdb3f"
  },
  "rules": [
    {
      "apiGroups": [
        "",
        "apps",
        "autoscaling",
        "batch",
        "extensions",
        "policy",
        "rbac.authorization.k8s.io",
        "argoproj.io",
        "workflows.argoproj.io"
      ],
      "resources": [
        "pods",
        "configmaps",
        "deployments",
        "events",
        "pods",
        "persistentvolumes",
        "persistentvolumeclaims",
        "services",
        "workflows"
      ],
      "verbs": [
        "get",
        "list",
        "watch",
        "create",
        "update",
        "patch",
        "delete"
      ]
    }
  ]
}
My user is configured correctly on all nodes
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://206.81.25.186:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: dma
  name: dma@kubernetes
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: dma
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
Based on this website, I have been searching for a watch event.
I think I have rebuilt everything above the control plane, but the problem persists.
The next step would be to rebuild the entire cluster, but it would be so much more satisfying to find the actual problem.
Please help.
FIX:
So the policy for the sandbox namespace was wrong. I fixed that and the problem is gone!
I think I finally understand RBAC (policies and all). Thank you very much to the members of the Kubernetes Slack channel. These policies have passed the first set of tests for a development environment ("sandbox") for Argo workflows. Still testing.
policies.yaml file:
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dev
  namespace: sandbox
rules:
- apiGroups:
  - "*"
  attributeRestrictions: null
  resources: ["*"]
  verbs:
  - get
  - watch
  - list
- apiGroups: ["argoproj.io", "workflows.argoproj.io", "events.argoproj.io"]
  attributeRestrictions: null
  resources:
  - pods
  - configmaps
  - deployments
  - events
  - pods
  - persistentvolumes
  - persistentvolumeclaims
  - services
  - workflows
  - eventbus
  - eventsource
  - sensor
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dma-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dev
subjects:
- kind: User
  name: dma
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dma-admin
subjects:
- kind: User
  name: dma
  namespace: sandbox
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
  namespace: sandbox
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: access
...
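A convenient way to confirm the bindings behave as intended is kubectl auth can-i with impersonation. An illustrative check (user and namespace names as above; "yes" is the expected output once the policies are applied):
$ kubectl auth can-i list pods --as dma -n sandbox
yes
$ kubectl auth can-i list pods --as dma --all-namespaces
yes
The second check only passes because of the dma-admin ClusterRoleBinding to cluster-admin; a namespaced Role alone never grants access "at the cluster scope", which is what the original error complained about.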

How to expose kubernetes metric server api to curl from inside the pod?

I am using the metrics server to get the usage of my Kubernetes cluster. But in order to use it from outside the host, I need to use "kubectl proxy". I don't want to do that, as it is not intended to run in the background; I want it to run as a service continuously.
How can I achieve this?
Expected output:
curl clusterip:8001/apis/metrics.k8s.io/v1beta1/nodes
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "manhattan-master",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/manhattan-master",
        "creationTimestamp": "2019-11-15T04:26:47Z"
      },
      "timestamp": "2019-11-15T04:26:33Z",
      "window": "30s",
      "usage": {
        "cpu": "222998424n",
        "memory": "3580660Ki"
      }
    }
  ]
}
I tried using a LoadBalancer service:
metrics-server-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: main-port
  externalTrafficPolicy: Local
  type: LoadBalancer
kubectl describe service metrics-server -n kube-system
[root@manhattan-master 1.8+]# kubectl describe service metrics-server -n kube-system
Name: metrics-server
Namespace: kube-system
Labels: kubernetes.io/cluster-service=true
kubernetes.io/name=Metrics-server
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"kubernetes.io/cluster-service":"true","kubernetes.io/name":"Me...
Selector: k8s-app=metrics-server
Type: LoadBalancer
IP: 10.110.223.216
Port: <unset> 443/TCP
TargetPort: main-port/TCP
NodePort: <unset> 31043/TCP
Endpoints: 10.32.0.7:4443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 32208
Events: <none>
This is possible by creating a new service to expose the Metrics Server. Your Metrics Server Service should look like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/name: Metrics-server-ext
  name: metrics-server-ext
  namespace: kube-system
  selfLink: /api/v1/namespaces/kube-system/services/metrics-server
spec:
  ports:
  - port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
  sessionAffinity: None
  type: LoadBalancer
If you try to access this service you will face some authorization problems, so you need to grant the necessary permissions.
After creating the service you will need to create a Cluster Role Binding so our service can have access to the data:
$ kubectl create clusterrolebinding node-admin-default-svc --clusterrole=cluster-admin --serviceaccount=default:default
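Binding cluster-admin to the default service account is very broad. If you only need read access to the metrics API, a narrower ClusterRole should be enough; a sketch (the metrics-reader names are hypothetical, the resources are those exposed by the metrics.k8s.io group):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metrics-reader
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["nodes", "pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-reader-default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metrics-reader
subjects:
- kind: ServiceAccount
  name: default
  namespace: default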
Before running the curl command, we need to get the token so we can pass it in the request:
$ TOKEN=$(kubectl get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='default')].data.token}"|base64 --decode)
Get your service external IP:
kubectl get svc/metrics-server-ext -n kube-system -o jsonpath='{..ip}'
Your curl command should pass the Token key to get Authorization:
curl -k https://34.89.228.98/apis/metrics.k8s.io/v1beta1/nodes --header "Authorization: Bearer $TOKEN" --insecure
Sample output:
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "gke-lab-default-pool-993de7d7-ntmc",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/gke-lab-default-pool-993de7d7-ntmc",
        "creationTimestamp": "2019-11-19T10:26:52Z"
      },
      "timestamp": "2019-11-19T10:26:17Z",
      "window": "30s",
      "usage": {
        "cpu": "52046272n",
        "memory": "686768Ki"
      }
    },
    {
      "metadata": {
        "name": "gke-lab-default-pool-993de7d7-tkj9",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/gke-lab-default-pool-993de7d7-tkj9",
        "creationTimestamp": "2019-11-19T10:26:52Z"
      },
      "timestamp": "2019-11-19T10:26:21Z",
      "window": "30s",
      "usage": {
        "cpu": "52320505n",
        "memory": "687252Ki"
      }
    },
    {
      "metadata": {
        "name": "gke-lab-default-pool-993de7d7-v7m3",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/gke-lab-default-pool-993de7d7-v7m3",
        "creationTimestamp": "2019-11-19T10:26:52Z"
      },
      "timestamp": "2019-11-19T10:26:17Z",
      "window": "30s",
      "usage": {
        "cpu": "45602403n",
        "memory": "609968Ki"
      }
    }
  ]
}
EDIT:
You can also optionally access it from your pods, since you created a ClusterRoleBinding giving the default service account the cluster-admin role.
As an example, create a pod from an image that includes the curl command:
$ kubectl run bb-$RANDOM --rm -i --image=ellerbrock/alpine-bash-curl-ssl --restart=Never --tty -- /bin/bash
Then exec into your pod and run:
$ curl -k -X GET https://kubernetes.default/apis/metrics.k8s.io/v1beta1/nodes --header "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" --insecure
Here we are passing the same TOKEN mentioned before in a completely different way.

Prometheus Adapter empty custom metric items

I'm attempting to auto-scale a Kubernetes deployment with an HPA using Prometheus custom metrics with the Prometheus Adapter. These custom metrics are published to Prometheus via another deployment in another namespace, which every minute queries a REST API for a particular metric and then publishes its value to Prometheus. From there the adapter should be able to query Prometheus for said metric, with some additional labels as query criteria, and publish that metric with a new name. From there the HPA should be able to pick up this metric and scale based on its value.
Here are the labels for my deployment, which the adapter supposedly bases its matching on:
Labels: app.kubernetes.io/instance=event-subscription-dev-dev
app.kubernetes.io/managed-by=Tiller-dev
app.kubernetes.io/name=event-subscription-dev
deployment-name=event-subscription-webhook-worker-dev
helm.sh/chart=event-subscription-0.1.0-dev
Here are the Prometheus Adapter Helm chart values/adapter rules:
logLevel: 1
metricsRelistInterval: 5s
prometheus:
  url: 'http://<prometheus-url>'
rules:
  custom:
  - seriesQuery: '{__name__="event_subscription_current_message_lag"}'
    name:
      matches: "(.*)"
      as: '${1}_webhooks'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    metricsQuery: 'sum(event_subscription_current_message_lag{queue="webhooks", container_name!="POD"})'
  - seriesQuery: '{__name__="event_subscription_current_message_lag"}'
    name:
      matches: "(.*)"
      as: '${1}_webhook_retries'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    metricsQuery: 'sum(event_subscription_current_message_lag{queue="webhook_retries", container_name!="POD"})'
And here is the metrics piece of my HPA spec:
metrics:
- type: Pods
  pods:
    metric:
      name: event_subscription_current_message_lag_webhooks
    target:
      type: AverageValue
      averageValue: 10
- type: Resource
  resource:
    name: cpu
    target:
      type: Utilization
      averageUtilization: 85
The problem I'm having here is not with the adapter querying for the metric and then publishing a new metric, but rather that the new metric has no value associated with it as the original metric does.
For example if I run kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 I do see my event_subscription_current_message_lag_webhooks and event_subscription_current_message_lag_webhook_retries metrics, but they don't have any value like the original event_subscription_current_message_lag metric does.
Here's output from kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/event-subscription/pods/*/event_subscription_current_message_lag"
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/event-subscription/pods/%2A/event_subscription_current_message_lag"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "event-subscription",
        "name": "activemq-message-lag-retrieval-7bfc46b948-jr8kp",
        "apiVersion": "/v1"
      },
      "metricName": "event_subscription_current_message_lag",
      "timestamp": "2019-11-08T22:09:53Z",
      "value": "1"
    }
  ]
}
And here's the output for event_subscription_current_message_lag_webhooks and event_subscription_current_message_lag_webhook_retries:
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/event-subscription/pods/%2A/event_subscription_current_message_lag_webhooks"
  },
  "items": []
}
...
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/event-subscription/pods/%2A/event_subscription_current_message_lag_webhook_retries"
  },
  "items": []
}
I'm confused as to how the adapter is able to, seemingly, find my original metric, query for it, and publish the new metric, but without the value I would expect, which in this case is 1.
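A hedged observation, based on the working rules earlier in this thread: these metricsQuery expressions sum without a by (<<.GroupBy>>) clause and without <<.LabelMatchers>>, so the resulting series carry no pod or namespace labels for the adapter to map back onto pods, which would leave items empty. A sketch of the first rule with grouping and label matchers restored (the exact placement of the queue matcher alongside <<.LabelMatchers>> is an assumption):
- seriesQuery: '{__name__="event_subscription_current_message_lag"}'
  name:
    matches: "(.*)"
    as: '${1}_webhooks'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  metricsQuery: 'sum(event_subscription_current_message_lag{queue="webhooks",<<.LabelMatchers>>}) by (<<.GroupBy>>)'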