kube-metrics-adapter installation at namespace level

I have a use case for my REST API. I'm deploying my REST service to Kubernetes, and to autoscale my pods I want to build a Horizontal Pod Autoscaler based on custom metrics like requests per second. For that purpose I have to deploy a custom metrics adapter like this one: https://github.com/banzaicloud/banzai-charts/tree/master/kube-metrics-adapter.
The problem I'm facing is that I don't have access to create any ClusterRole or APIService. I just want to know whether this kube-metrics-adapter can be deployed at the namespace level, to which I do have access, so that it can provide custom metrics to the HPA. Or is it necessary to have this adapter at the cluster level?

Looking at the chart, the RBAC templates are hard-coded to use ClusterRoles, so you can't install this chart if Tiller doesn't have cluster-level access. You could try modifying the chart to be namespace-scoped, but you'll still need cluster-level permissions to register the adapter with the custom metrics API.
Could you ask a cluster admin to put the ClusterRoles in for you?
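For reference, the reason this can't be done purely at the namespace level: registering the adapter with the aggregation layer requires an APIService object, and APIService is itself a cluster-scoped resource, so no namespaced Role can grant the right to create it. A minimal sketch of the registration the chart performs (the Service name and namespace here are illustrative):

apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  service:
    name: kube-metrics-adapter   # the adapter's Service
    namespace: my-namespace      # wherever the adapter is deployed
  group: custom.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100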

Related

custom rule for NetworkPolicy

Could you please help me understand how to configure a NetworkPolicy so that only a predefined user role has access to a specific pod (or service)?
I have started with Kubernetes and read "Kubernetes in Action", but didn't find any description of how to do this. In general, this request is an authorization task, and the only solution (I suppose) is to apply some kind of CustomResourceDefinition and create my own controller to manage the behaviour of a custom NetworkPolicy. Am I on the right track, or is there a more appropriate solution?
My microservices are currently equipped with authorization at the application level, but I need to move this task to the cluster level. One reason is that I could then manage users' access without changing the source code of the microservices.
I would be very thankful for an example or some clarification.
With NetworkPolicy you can only manage incoming and outgoing traffic to/from pods. For authorization, you can leverage a service mesh, which provides many more capabilities without changing your source code. The best-known one is Istio (https://istio.io/docs/tasks/security/authorization/authz-http/); you can look into others as well.
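As a sketch of what mesh-level authorization can look like, here is an Istio AuthorizationPolicy that only admits JWTs carrying an admin role claim (this uses Istio's newer security API; the app label, namespace and claim name are illustrative, not from the question):

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-admin-role
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-service              # the pods to protect
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]   # any authenticated JWT
    when:
    - key: request.auth.claims[role]
      values: ["admin"]            # only tokens carrying role=admin

Once an ALLOW policy selects a workload, requests matching no rule are denied, which gives you the "only a predefined role" behaviour without touching the microservice code.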
You could use RBAC to control your cluster access permissions.
This link shows how you could use RBAC to restrict a namespace to a specific user.
It works well if you need your pods to have limited access to other pods or resources. You could, for example, create a ServiceAccount with defined permissions and reference it in your Deployment; see this link and the sketch below.
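A minimal sketch of the namespaced side of this (the user, namespace and resource list are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: my-namespace
subjects:
- kind: User                       # could equally be a ServiceAccount
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Keep in mind that RBAC governs access to the Kubernetes API, not pod-to-pod application traffic, so it complements rather than replaces the service-mesh approach above.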
References:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
https://kubernetes.io/docs/reference/access-authn-authz/authorization/

How to set up Istio or Linkerd with namespace-level permissions (without cluster administration permission)?

We are using a K8s cluster but we don't have cluster-level permissions, so we can only create Roles and ServiceAccounts in our namespaces, and we need to install a service mesh solution (Istio or Linkerd) only in our namespaces.
Our operations team will agree to apply CRDs on the cluster for us, so that part is taken care of, but we can't request cluster-admin permissions to set up the service mesh solutions ourselves.
We think it should be possible if we change all the ClusterRoles and ClusterRoleBindings in the Helm charts to Roles and RoleBindings.
So the question is: how can we set up a service mesh using Istio or Linkerd without having admin permission on the K8s cluster?
Linkerd cannot function without certain ClusterRoles, ClusterRoleBindings, etc. However, it does provide a two-stage install mode where one phase corresponds to "cluster admin permissions needed" (i.e. hand this part to your ops team) and the other to "cluster admin permissions NOT needed" (do this part yourself).
The set of cluster-admin permissions needed is scoped down to be as small as possible, and it can be inspected (the linkerd install config command simply outputs it to stdout).
See https://linkerd.io/2/tasks/install/#multi-stage-install for details.
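In practice the two stages are just two CLI invocations (commands as documented for the Linkerd 2.x multi-stage install):

# Stage 1: run by someone with cluster-admin (CRDs, ClusterRoles, etc.)
linkerd install config | kubectl apply -f -

# Stage 2: run with namespace-level permissions only (the control plane itself)
linkerd install control-plane | kubectl apply -f -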
For context, we originally tried to have a mode that required no cluster-level privileges, but it became clear we were going against the grain with how K8s operates, and we ended up abandoning that approach in favor of making the control plane cluster-wide but multi-tenant.

Best practices when trying to implement custom Kubernetes monitoring system

I have two Kubernetes clusters representing dev and staging environments.
Separately, I am also deploying a custom DevOps dashboard which will be used to monitor these two clusters. On this dashboard I will need to show information such as:
RAM/HD Space/CPU usage of each deployed Pod in each environment
Pod health (e.g. whether it has had too many container restarts)
Pod uptime
All these stats have to be at the cluster level and, preferably, also per namespace. That is, if I query for a particular namespace, I should get all the resource usage of that namespace.
So the webservice layer of my dashboard will send a service request to the master node of my respective cluster in order to fetch this information.
Another thing I need is to implement real time notifications in my DevOps dashboard. Every time a container fails, I need to catch that event and notify relevant personnel.
I have been reading around, and two things that pop up a lot are Prometheus and Metrics Server. Do I need both, or will one do? I set up Prometheus on a local cluster, but I can't find any endpoints it exposes that could be called by my dashboard service. I'm also trying to set up Prometheus Alertmanager, but so far it hasn't worked as expected; I'm trying to fix it now. I just wanted to check whether these technologies have the capabilities to meet my requirements.
Thanks!
I don't know why you are considering your own custom monitoring system; the Prometheus Operator provides all the functionality you mentioned.
You will end up with just your own Grafana dashboard showing all the required information.
If you need custom notifications, you can set them up in Alertmanager by creating the appropriate prometheusrules.monitoring.coreos.com objects; you can find a lot of preconfigured PrometheusRules in kubernetes-mixin.
Using labels and namespaces in Alertmanager, you can set up a route that notifies the person responsible for a given deployment, as in the sketch below.
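A hedged sketch of both pieces (the metric comes from kube-state-metrics; the names, threshold and release label are illustrative and must match your Prometheus Operator's ruleSelector):

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pod-restart-alerts
  namespace: monitoring
  labels:
    release: prometheus            # must match the Prometheus ruleSelector
spec:
  groups:
  - name: pod.rules
    rules:
    - alert: PodRestartingTooOften
      expr: increase(kube_pod_container_status_restarts_total[1h]) > 5
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: 'Pod {{ $labels.namespace }}/{{ $labels.pod }} restarts too often'

And the corresponding Alertmanager route keyed on the namespace label:

route:
  receiver: default
  routes:
  - match:
      namespace: team-a            # alerts originating from team-a workloads
    receiver: team-a-oncall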
As for "Do I need both or will one do?": yes, you need both. Prometheus collects and aggregates metrics, while Metrics Server exposes metrics from your cluster nodes for Prometheus to scrape.
If you have problems with Prometheus, Alertmanager and so on, consider using a Helm chart as the entry point.
Prometheus + Grafana are a pretty standard setup.
Installing kube-prometheus or prometheus-operator via Helm will give you Grafana, Alertmanager, node-exporter and kube-state-metrics by default, all set up for Kubernetes metrics.
Configure Alertmanager to do something with the alerts. SMTP is usually the first thing set up, but I would recommend some sort of event manager if this is a service people need to rely on.
Although a Grafana dashboard isn't part of your requirements, this will inform how you can connect to Prometheus as a data source. There is documentation on adding a Prometheus data source to Grafana.
There are a number of prebuilt dashboards available to add to Grafana, including some that visualise Alertmanager.
Your external service won't be querying the metrics sources directly; it will be querying the collected data stored in Prometheus inside your cluster. To access the API externally you will need to set up an external path to the Prometheus service. This can be configured via an ingress controller in the Helm deployment:
prometheus.ingress.enabled: true
You can do the same for the Alertmanager API and Grafana if needed:
alertmanager.ingress.enabled: true
grafana.ingress.enabled: true
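Put together, an install with those values might look like this (Helm 2 syntax; the chart name stable/prometheus-operator is an assumption, and value layout varies between chart versions):

helm install --name monitoring stable/prometheus-operator \
  --set prometheus.ingress.enabled=true \
  --set alertmanager.ingress.enabled=true \
  --set grafana.ingress.enabled=true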
You could use Grafana outside the cluster as your dashboard via the same Prometheus ingress if it proves useful.
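Your dashboard's web service can then query the Prometheus HTTP API through that ingress, e.g. for per-namespace memory usage (the hostname is illustrative):

curl -s 'https://prometheus.example.com/api/v1/query' \
  --data-urlencode 'query=sum(container_memory_working_set_bytes) by (namespace)'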

Is there a way to automate creation of dashboards on Grafana as and when new namespaces are added in a Kubernetes cluster?

I have been checking a few references online for this particular use case, but no solution was a complete fit. I have a use case where I am considering creating a multi-tenant Kubernetes cluster with each tenant occupying a namespace. Since I would like to add each new tenant to my cluster dynamically, I would like the monitoring of that namespace to appear in Grafana with a predefined set of rules auto-configured.
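One pattern worth sketching (an assumption on my part, not something from the thread): if Grafana is deployed via its Helm chart with the dashboard sidecar enabled, dashboards can be provisioned just by creating labelled ConfigMaps, so whatever automation creates each tenant namespace can also stamp out a dashboard ConfigMap. The names and label key below are illustrative chart defaults:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tenant-a-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"         # the label the Grafana sidecar watches for
data:
  tenant-a.json: |
    {
      "title": "tenant-a overview",
      "panels": []
    }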

Intercluster RBAC with service-account

Our infrastructure currently has two Kubernetes clusters, with one cluster (cluster-1) creating pods in another cluster (cluster-2). Since we are on Kubernetes 1.7.x, we are able to make this work.
However, with 1.8 Kubernetes added support for RBAC, as a result of which we cannot create pods in the new cluster anymore.
We already added ServiceAccounts and made sure that the RoleBindings are properly set up. But the main issue is that the service account identity is not propagated outside of the cluster (and rightly so). The user that cluster-2 sees on the incoming request is called 'client', so when we added a RoleBinding with 'client' as a User, everything worked.
This is most certainly not the correct solution, as now any cluster that talks to the Kubernetes API server can create a pod.
Is there support for RBAC that works across clusters? Or is there a way to propagate the service account identity through to the cluster we want to create the pods in?
P.S.: Our Kubernetes clusters are currently on GKE, but we would like this to work on any Kubernetes engine.
Your cluster-1 SA uses a kubeconfig (for cluster-2) which resolves to the user "client". The only way to solve this is to generate a kubeconfig (for cluster-2) with an identity (cert or token) associated with your cluster-1 SA. There are lots of ways to do that: https://kubernetes.io/docs/admin/authentication/
The simplest is to create an identical SA in cluster-2 and use its token in the kubeconfig in cluster-1, granting RBAC permissions only to that SA.
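A sketch of that approach (it relies on the token Secret that Kubernetes auto-creates for each ServiceAccount; the namespace and names are illustrative):

# In cluster-2: create the SA and grant it only what it needs
kubectl create serviceaccount cluster-1-deployer -n target-ns
kubectl create rolebinding cluster-1-deployer \
  --clusterrole=edit \
  --serviceaccount=target-ns:cluster-1-deployer \
  -n target-ns

# Extract the SA's token...
SECRET=$(kubectl get sa cluster-1-deployer -n target-ns \
  -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "$SECRET" -n target-ns \
  -o jsonpath='{.data.token}' | base64 --decode)

# ...and wire it into the kubeconfig that cluster-1 uses to reach cluster-2
kubectl config set-credentials cluster-1-deployer --token="$TOKEN"
kubectl config set-context cluster-2-context --user=cluster-1-deployer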