I have a Kubernetes namespace with limited privileges that exclude creating ClusterRoles and ClusterRoleBindings.
I want to monitor the resource consumption and pod-related metrics on the namespace level.
E.g., pod health and status, new pod creation, pod restarts, etc.
I can expose application-level custom metrics by serving a /metrics endpoint and adding the annotation prometheus.io/scrape: 'true'.
But is there a way to get resource consumption and pod-related metrics at the namespace level without a ClusterRole and ClusterRoleBinding?
It is possible to get metrics for namespace-scoped objects from kube-state-metrics.
Pull the helm chart for kube-state-metrics:
https://bitnami.com/stack/kube-state-metrics/helm
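For example, assuming Helm 3 and the Bitnami repository added under the name bitnami, the chart can be pulled and unpacked like this:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm pull bitnami/kube-state-metrics --untar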
Edit the values.yaml file and make the following changes:
rbac:
  create: false
  useClusterRole: false
collectors:
  - configmaps
  - cronjobs
  - daemonsets
  - deployments
  - endpoints
  - horizontalpodautoscalers
  - ingresses
  - jobs
  - limitranges
  - networkpolicies
  - poddisruptionbudgets
  - pods
  - replicasets
  - resourcequotas
  - services
  - statefulsets
namespace: <current-namespace>
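With the edited values.yaml, a possible install command (assuming Helm 3 and a release name of kube-state-metrics; replace <current-namespace> with your namespace) is:
helm install kube-state-metrics bitnami/kube-state-metrics -f values.yaml --namespace <current-namespace>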
In the prometheus ConfigMap, add a job with the following configurations:
- job_name: 'kube-state-metrics'
  scrape_interval: 1s
  scrape_timeout: 500ms
  static_configs:
    - targets: ['{{ .Values.kube_state_metrics.service.name }}:8080']
Create a role binding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-state-metrics
  namespace: <current-namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: kube-state-metrics
    namespace: <current-namespace>
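As a quick sanity check (assuming the chart created a service account named kube-state-metrics, matching the RoleBinding above), you can verify the namespace-scoped permissions with:
kubectl auth can-i list pods --as=system:serviceaccount:<current-namespace>:kube-state-metrics -n <current-namespace>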
I have installed Prometheus into my Kubernetes v1.17 KOPS cluster following kube-prometheus, ensuring the --authentication-token-webhook=true and --authorization-mode=Webhook prerequisites are set and the kube-prometheus/kube-prometheus-kops.libsonnet configuration is applied.
I have then installed Postgres from https://github.com/helm/charts/tree/master/stable/postgresql using the supplied values-production.yaml with the following set:
metrics:
  enabled: true
  # resources: {}
  service:
    type: ClusterIP
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9187"
    loadBalancerIP:
  serviceMonitor:
    enabled: true
    namespace: monitoring
    interval: 30s
    scrapeTimeout: 10s
Both services are up and working, but prometheus doesn't discover any metrics from Postgres.
The logs on the metrics container on my postgres pods have no errors, and neither do any of the pods in the monitoring namespace.
What additional steps are required to have the Postgres metrics exporter reach Prometheus?
Try updating the ClusterRole for Prometheus. By default, it does not have permission to list pods, services, and endpoints in namespaces other than monitoring.
In my system the original ClusterRole was:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-k8s
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/metrics
    verbs:
      - get
  - nonResourceURLs:
      - /metrics
    verbs:
      - get
I've changed it to:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-k8s
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/metrics
      - services
      - endpoints
      - pods
    verbs:
      - get
      - list
      - watch
  - nonResourceURLs:
      - /metrics
    verbs:
      - get
After those changes, Postgres metrics will be available for Prometheus.
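One way to confirm the change took effect, assuming Prometheus runs with the prometheus-k8s service account in the monitoring namespace (as in kube-prometheus), is to check a namespace other than monitoring:
kubectl auth can-i list endpoints --as=system:serviceaccount:monitoring:prometheus-k8s -n default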
I want a service account that can create a Deployment, so I am creating a ServiceAccount, then a Role, and then a RoleBinding. The YAML files are below:
ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: testsa
  namespace: default
Role:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testrole
  namespace: default
rules:
  - apiGroups:
      - ""
      - batch
      - apps
    resources:
      - jobs
      - pods
      - deployments
      - deployments/scale
      - replicasets
    verbs:
      - create
      - delete
      - get
      - list
      - patch
      - update
      - watch
      - scale
RoleBinding:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testrolebinding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: testsa
    namespace: default
roleRef:
  kind: Role
  name: testrole
  apiGroup: rbac.authorization.k8s.io
But after applying these files, when I run the following command to check whether the service account can create a Deployment, the answer is no.
kubectl auth can-i --as=system:serviceaccount:default:testsa create deployment
The exact answer is:
no - no RBAC policy matched
It works fine when I do checks for Pods.
What am I doing wrong?
My kubernetes versions are as follows:
kubectl version --short
Client Version: v1.16.1
Server Version: v1.12.10-gke.17
Since you're using a 1.12 cluster, you should also include the extensions API group in the Role for the deployments resource; on that server version the bare deployments resource can still resolve to the extensions group, which your Role does not cover.
The extensions group for these resources was deprecated in favor of the apps group and removed in Kubernetes 1.16: https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/
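A sketch of the adjusted rule, keeping your resources and verbs and only adding the extensions group:
rules:
  - apiGroups:
      - ""
      - batch
      - apps
      - extensions   # added: needed for deployments/replicasets on a 1.12 API server
    resources:
      - jobs
      - pods
      - deployments
      - deployments/scale
      - replicasets
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch", "scale"]
After re-applying the Role, the same kubectl auth can-i check should return yes.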
I'm trying to restrict all pods except a few from running in privileged mode.
So I created two pod security policies:
one that allows running privileged containers and one that restricts them.
[root@master01 vagrant]# kubectl get psp
NAME         PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
privileged   true           RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
restricted   false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
I created a ClusterRole that can use the "restricted" pod security policy and bound that role to all the service accounts in the cluster:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-restricted
rules:
  - apiGroups: ['policy']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames:
      - restricted
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-restricted
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-restricted
subjects:
  - kind: Group
    name: system:serviceaccounts
    apiGroup: rbac.authorization.k8s.io
Now I am deploying a pod in privileged mode, and it gets deployed. The created pod's annotation indicates that the "privileged" PSP was validated during pod creation. Why is that? Shouldn't the "restricted" PSP have been the one validated?
apiVersion: v1
kind: Pod
metadata:
  name: psp-test-pod
  namespace: default
spec:
  serviceAccountName: default
  containers:
    - name: pause
      image: k8s.gcr.io/pause
      securityContext:
        privileged: true
[root@master01 vagrant]# kubectl create -f pod.yaml
pod/psp-test-pod created
[root@master01 vagrant]# kubectl get pod psp-test-pod -o yaml | grep kubernetes.io/psp
    kubernetes.io/psp: privileged
Kubernetes version: v1.14.5
Am I missing something here? Any help is appreciated.
Posting the answer to my own question; I hope it helps someone.
All my PodSecurityPolicy configurations were correct. The issue was that I deployed the pod on its own, not via a controller such as a Deployment, ReplicaSet, or DaemonSet.
Most Kubernetes pods are not created directly by users. Instead, they are typically created indirectly as part of a Deployment, ReplicaSet or other templated controller via the controller manager.
When a pod is deployed on its own, it is created by kubectl (with my user's credentials), not by the controller manager.
In Kubernetes there is a superuser role named "cluster-admin". In my case, kubectl runs as the superuser role "cluster-admin", and that role has access to all pod security policies, because associating a pod security policy with a role only requires the 'use' verb on the 'podsecuritypolicies' resource in the 'policy' API group.
In the cluster-admin role, resources '*' includes 'podsecuritypolicies' and verbs '*' includes 'use', so cluster-admin is allowed to use every policy, including "privileged" (a quick way to verify this is shown after the ClusterRole output below).
[root@master01 vagrant]# kubectl get clusterrole cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
rules:
  - apiGroups:
      - '*'
    resources:
      - '*'
    verbs:
      - '*'
  - nonResourceURLs:
      - '*'
    verbs:
      - '*'
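A quick way to see this difference with kubectl auth can-i (system:serviceaccount:default:default is used here only as an example of an ordinary subject bound to the restricted policy alone):
# as the cluster-admin user running kubectl: both policies are usable
kubectl auth can-i use podsecuritypolicy/privileged
kubectl auth can-i use podsecuritypolicy/restricted
# as a plain service account: only the restricted policy is usable
kubectl auth can-i use podsecuritypolicy/privileged --as=system:serviceaccount:default:default
kubectl auth can-i use podsecuritypolicy/restricted --as=system:serviceaccount:default:default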
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: psp-test-pod
  namespace: default
spec:
  serviceAccountName: default
  containers:
    - name: pause
      image: k8s.gcr.io/pause
      securityContext:
        privileged: true
I deployed the above pod.yaml using the command kubectl create -f pod.yaml
Since I had created two pod security policies, one restricted and one privileged, the cluster-admin role has access to both policies. So the pod above launches fine with kubectl, because the cluster-admin role can use the privileged policy (privileged: false also works, because the role has access to the restricted policy as well). This situation happens only when a pod is created directly by kubectl rather than by a controller, or when a pod's service account is bound to the "cluster-admin" role.
In the case of a pod created by a Deployment, ReplicaSet, etc., kubectl first hands control to the controller manager, and the controller then creates the pod after validating the permissions (service account, pod security policies).
In the Deployment below, the pod tries to run in privileged mode. In my case, this deployment fails because I already set the "restricted" policy as the default policy for all the service accounts in the cluster, so no controller-managed pod is able to run in privileged mode. If a pod needs to run in privileged mode, allow that pod's service account to use the "privileged" policy (a RoleBinding sketch for this follows the Deployment manifest).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pause-deploy-privileged
  namespace: kube-system
  labels:
    app: pause
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pause
  template:
    metadata:
      labels:
        app: pause
    spec:
      serviceAccountName: default
      containers:
        - name: pause
          image: k8s.gcr.io/pause
          securityContext:
            privileged: true
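A sketch only, assuming a ClusterRole named psp-privileged that grants use of the privileged policy (mirroring the psp-restricted role above); binding it to the Deployment's service account in its namespace could look like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-privileged        # assumed name
rules:
  - apiGroups: ['policy']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames:
      - privileged
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-privileged-pause  # assumed name
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-privileged
subjects:
  - kind: ServiceAccount
    name: default             # the service account used by the Deployment above
    namespace: kube-system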
I have created a service account with a cluster role. Is it possible to deploy pods across different namespaces with this service account through the APIs?
Below is the template used to create the role and the binding:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: api-access
rules:
  - apiGroups:
      - ""
      - apps
      - autoscaling
      - batch
      - extensions
      - policy
      - rbac.authorization.k8s.io
    resources:
      - componentstatuses
      - configmaps
      - daemonsets
      - deployments
      - events
      - endpoints
      - horizontalpodautoscalers
      - ingress
      - jobs
      - limitranges
      - namespaces
      - nodes
      - pods
      - persistentvolumes
      - persistentvolumeclaims
      - resourcequotas
      - replicasets
      - replicationcontrollers
      - serviceaccounts
      - services
    verbs: ["*"]
  - nonResourceURLs: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: api-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: api-access
subjects:
  - kind: ServiceAccount
    name: api-service-account
    namespace: default
Kubernetes ServiceAccounts are namespaced objects, but permissions granted to them through a ClusterRoleBinding apply cluster-wide, so the answer to "can I use this service account across namespaces?" is yes.
For the second part: I don't know exactly what you mean by "APIs", but if it is the kube-apiserver then yes, you can use the service account with kubectl; just make sure you are executing as the service account. You can use impersonation for this, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation
If you mean that you built a new API for deployments or use an external deployer, then you should run it with this service account as described here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
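As an illustration of calling the API server directly with this service account's token from inside a pod that runs as api-service-account (other-namespace and deployment.json are placeholders):
# the token and CA bundle are mounted at the standard in-cluster paths
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST \
  https://kubernetes.default.svc/apis/apps/v1/namespaces/other-namespace/deployments \
  -d @deployment.json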
Yes, your service account will be able to create and act on resources in any namespace, because you've granted it these permissions at the cluster scope using a ClusterRoleBinding.
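For example, you could confirm this with an impersonated permission check (other-namespace is just a placeholder):
kubectl auth can-i create deployments --as=system:serviceaccount:default:api-service-account -n other-namespace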
I have 3 namespaces. When I deployed Prometheus on Kubernetes, I see the errors below in the logs; it is unable to monitor all the namespaces.
Error:
\"system:serviceaccount:development:default\" cannot list endpoints at the cluster scope"
level=error ts=2018-06-28T21:22:07.390161824Z caller=main.go:216 component=k8s_client_runtime err="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:268: Failed to list *v1.Endpoints: endpoints is forbidden: User \"system:serviceaccount:devops:default\" cannot list endpoints at the cluster scope"
You'd better use a dedicated service account to access Kubernetes and give that service account the privileges Prometheus needs, like the following:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - services
      - endpoints
      - pods
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources:
      - configmaps
    verbs: ["get"]
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: kube-system
This presumes that you deploy Prometheus in the kube-system namespace. You also need to reference the service account, e.g. serviceAccountName: prometheus, in your Prometheus deployment file.
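For example, a fragment of the pod template in the Prometheus Deployment might look like this (the image name is only illustrative):
spec:
  template:
    spec:
      serviceAccountName: prometheus
      containers:
        - name: prometheus
          image: prom/prometheus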