eks:cloud-controller-manager cluster role in EKS/K8s

I noticed that a new cluster role, "eks:cloud-controller-manager", appeared in our EKS cluster. We never created it. I tried to find the origin/creation of this cluster role but was not able to find it.
Any idea what the "eks:cloud-controller-manager" cluster role does in an EKS cluster?
$ kubectl get clusterrole eks:cloud-controller-manager -o yaml
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"eks:cloud-controller-manager"},"rules":[{"apiGroups":[""],"resources":["events"],"verbs":["create","patch","update"]},{"apiGroups":[""],"resources":["nodes"],"verbs":["*"]},{"apiGroups":[""],"resources":["nodes/status"],"verbs":["patch"]},{"apiGroups":[""],"resources":["services"],"verbs":["list","patch","update","watch"]},{"apiGroups":[""],"resources":["services/status"],"verbs":["list","patch","update","watch"]},{"apiGroups":[""],"resources":["serviceaccounts"],"verbs":["create","get"]},{"apiGroups":[""],"resources":["persistentvolumes"],"verbs":["get","list","update","watch"]},{"apiGroups":[""],"resources":["endpoints"],"verbs":["create","get","list","watch","update"]},{"apiGroups":["coordination.k8s.io"],"resources":["leases"],"verbs":["create","get","list","watch","update"]},{"apiGroups":[""],"resources":["serviceaccounts/token"],"verbs":["create"]}]}
  creationTimestamp: "2022-08-02T00:25:52Z"
  name: eks:cloud-controller-manager
  resourceVersion: "762242250"
  uid: 34e568bb-20b5-4c33-8a7b-fcd081ae0a28
rules:
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - serviceaccounts/token
  verbs:
  - create
I tried to find this object in our GitOps repo but could not find it.

This role is created by AWS when you provision the cluster. It is used by the AWS cloud-controller-manager to integrate AWS services (e.g. CLB/NLB, EBS) with Kubernetes. You will also find other roles like eks:fargate-manager that integrate with Fargate.
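If you want to confirm who the role is granted to, look at the matching ClusterRoleBinding. I can't verify the exact subject on your cluster, but on EKS clusters I've seen it typically looks something like the sketch below (the binding name and the User subject are assumptions and may differ by EKS version):
# Sketch only: the AWS-managed binding that grants the role to the controller identity.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks:cloud-controller-manager   # assumed name; check with kubectl
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: eks:cloud-controller-manager
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: eks:cloud-controller-manager   # assumed subject used by the EKS control plane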

Related

k8s - give permission for all resources

I'm trying to create a job to list all resources because my connection is terrible. Is there any way to give a pod permission to run the command below?
Here is the ClusterRole that I am trying:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: istio-system
  name: workaround
rules:
- apiGroups: [""]
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups: ['*']
  resources:
  - '*'
  verbs:
  - '*'
The command is:
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -n ibm-rancher
If you are just looking to give your workload an admin role, you can use the prebuilt cluster-admin cluster role, which should be available on every k8s cluster.
See the docs for more details - https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
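A minimal sketch of that binding, assuming your job's pod runs as a service account named workaround-sa in istio-system (both names are placeholders, substitute your own):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: workaround-admin            # placeholder name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin               # prebuilt admin role
subjects:
- kind: ServiceAccount
  name: workaround-sa               # placeholder: the service account your job runs as
  namespace: istio-system           # placeholder: the namespace the job runs in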

Cannot connect to Kubernetes Dashboard as non-admin user with kubectl proxy

I want to allow non-admin users to use the Kubernetes Dashboard to view the K8s objects in their namespaces. As cluster-admin, I have no issues connecting to the Kubernetes Dashboard using kubectl proxy. When I first attempted to access it with an application service account with read-only access to its entire namespace, I received the error below:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "services \"https:kubernetes-dashboard:\" is forbidden: User \"system:serviceaccount:ops-jenkins-lab:k8-dashboard-ops-jenkins-lab\" cannot get resource \"services/proxy\" in API group \"\" in the namespace \"kubernetes-dashboard\"",
  "reason": "Forbidden",
  "details": {
    "name": "https:kubernetes-dashboard:",
    "kind": "services"
  },
  "code": 403
}
I added additional RBAC roles to allow the application service account access to services and services/proxy in the kubernetes-dashboard namespace. Now I get the following error:
Forbidden (403): Http failure response for http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/api/v1/login: 403 Forbidden
If I create an ingress for the dashboard, I can connect without issue to the Kubernetes Dashboard using the same application service account and can view all the Kubernetes objects within the namespace (once I switch from default to the correct namespace). I'd actually prefer to use the ingress, but for some reason once I connect to the Kubernetes Dashboard via a browser it hijacks the ingress for all my other applications. No matter which ingress I try to connect to, it automatically redirects me to the Kubernetes Dashboard. I have to clear all browser data to connect to other applications.
RBAC clusterrole and rolebinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
  name: k8-dashboard
rules:
- apiGroups:
  - extensions
  - apps
  resources:
  - '*'
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - jobs
  - cronjobs
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - '*'
  resources:
  - persistentvolumeclaims
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - '*'
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - '*'
  resources:
  - events
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - '*'
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - '*'
  verbs:
  - patch
- apiGroups:
  - apps
  resources:
  - deployments/scale
  verbs:
  - update
- apiGroups:
  - ""
  resources:
  - pods/attach
  - pods/exec
  - pods/log
  - pods/status
  - pods/delete
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - delete
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - create
  - get
  - delete
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
  labels:
    subjectName: k8-dashboard-sa
  name: k8-dashboard-ops-jenkins-lab
  namespace: ops-jenkins-lab
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: k8-dashboard
subjects:
- kind: ServiceAccount
  name: k8-dashboard-ops-jenkins-lab
  namespace: ops-jenkins-lab
So this leaves me needing to connect to the Kubernetes Dashboard using kubectl proxy. I'm certain there's additional RBAC required when using kubectl proxy as a non-admin user; however, I have yet to figure it out. Any suggestions?
Your ClusterRole is associated with a RoleBinding, and in the documentation you can read:
A RoleBinding can also reference a ClusterRole to grant the
permissions defined in that ClusterRole to resources inside the
RoleBinding’s namespace
This means that even though you are using a ClusterRole, the permissions are limited to one namespace, which is ops-jenkins-lab in your case.
And as long as the dashboard you are trying to access is in the kubernetes-dashboard namespace, you won't be able to reach it, because your RoleBinding is in the wrong namespace.
To allow the k8-dashboard-ops-jenkins-lab serviceAccount to access resources in a different namespace, you should either create a ClusterRoleBinding (ClusterRoleBindings are not namespaced) or (the better option) a RoleBinding in the namespace you want to access (in your case that would be the kubernetes-dashboard namespace).
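A sketch of such a RoleBinding, reusing your existing k8-dashboard ClusterRole and service account names from the question (adjust if yours differ):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k8-dashboard-ops-jenkins-lab
  namespace: kubernetes-dashboard     # bind in the namespace you need to access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: k8-dashboard
subjects:
- kind: ServiceAccount
  name: k8-dashboard-ops-jenkins-lab
  namespace: ops-jenkins-lab          # the service account stays in its own namespace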
Let me know if something needs more clarification.

How do you get prometheus metrics from postgresql?

I have installed prometheus into my Kubernetes v1.17 KOPS cluster following kube-prometheus, ensuring the --authentication-token-webhook=true and --authorization-mode=Webhook prerequisites are set and the kube-prometheus/kube-prometheus-kops.libsonnet configuration is specified.
I have then installed Postgres using https://github.com/helm/charts/tree/master/stable/postgresql using the supplied values-production.yaml with the following set:
metrics:
  enabled: true
  # resources: {}
  service:
    type: ClusterIP
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9187"
    loadBalancerIP:
  serviceMonitor:
    enabled: true
    namespace: monitoring
    interval: 30s
    scrapeTimeout: 10s
Both services are up and working, but prometheus doesn't discover any metrics from Postgres.
The logs on the metrics container on my postgres pods have no errors, and neither do any of the pods in the monitoring namespace.
What additional steps are required to have the Postgres metrics exporter reach Prometheus?
Try updating the ClusterRole for Prometheus. By default, it doesn't have permission to retrieve the list of pods, services, and endpoints from namespaces other than monitoring.
In my system the original ClusterRole was:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-k8s
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get
I've changed it to:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-k8s
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch
- nonResourceURLs:
  - /metrics
  verbs:
  - get
After those changes, Postgres metrics will be available for Prometheus.

how to edit a kubernetes cluster role definition

I have created a cluster role yaml (rbac.yaml) like this:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups: [""]
  resources: ["services","endpoints","secrets"]
  verbs: ["get","list","watch"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get","list","watch"]
Now I want to add a new apiGroups entry to the ClusterRole.
How do I edit the ClusterRole and refresh it? I searched the kubernetes docs but found nothing about how to edit it. What should I do to update the yaml?
You just need to modify the yaml and apply it again. The Kubernetes API server will take care of persisting it to etcd, and it should take effect almost immediately.
You can also edit it directly via kubectl edit clusterrole clusterrolename, but I don't recommend that because you lose the previous history. You should really be version controlling your yamls and applying the changes with kubectl apply.
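For example, to add another API group you could extend rbac.yaml along these lines and run kubectl apply -f rbac.yaml again (the networking.k8s.io rule is only an illustration; substitute whatever group you actually need):
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups: [""]
  resources: ["services","endpoints","secrets"]
  verbs: ["get","list","watch"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get","list","watch"]
- apiGroups: ["networking.k8s.io"]      # newly added group (example only)
  resources: ["ingresses"]
  verbs: ["get","list","watch"]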

Kubernetes service account with cluster role

I have created a service account with a cluster role. Is it possible to deploy pods across different namespaces with this service account through the APIs?
Below is the template used to create the role and the binding:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: api-access
rules:
- apiGroups:
  - ""
  - apps
  - autoscaling
  - batch
  - extensions
  - policy
  - rbac.authorization.k8s.io
  resources:
  - componentstatuses
  - configmaps
  - daemonsets
  - deployments
  - events
  - endpoints
  - horizontalpodautoscalers
  - ingress
  - jobs
  - limitranges
  - namespaces
  - nodes
  - pods
  - persistentvolumes
  - persistentvolumeclaims
  - resourcequotas
  - replicasets
  - replicationcontrollers
  - serviceaccounts
  - services
  verbs: ["*"]
- nonResourceURLs: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: api-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: api-access
subjects:
- kind: ServiceAccount
  name: api-service-account
  namespace: default
A Kubernetes ServiceAccount lives in a namespace, but the permissions you grant it with a ClusterRoleBinding are not namespace-scoped, so the answer to "can I use this service account across namespaces?" is yes.
For the second part: I don't know what you mean by APIs, but if it is the kubernetes-apiserver then yes, you can use the service account with kubectl; just make sure you are executing as the service account. You can use impersonation for this, see the reference: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation
If you mean you built a new API for deployments or are using an external deployer, then you should deploy it with this service account as described here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
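As a sketch, a deployer pod running as that service account could look like the manifest below (the pod name, image, and command are placeholders); because of the ClusterRoleBinding above, API calls made from this pod can create resources in any namespace:
apiVersion: v1
kind: Pod
metadata:
  name: deployer                      # placeholder name
  namespace: default                  # namespace where api-service-account lives
spec:
  serviceAccountName: api-service-account
  containers:
  - name: deployer
    image: bitnami/kubectl:latest     # placeholder image that ships kubectl
    command: ["sleep", "infinity"]    # placeholder; your deployment logic goes here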
Yes, your service account will be able to create and act on resources in any namespace because you've granted it these permissions at the cluster scope using a ClusterRoleBinding.