Prometheus Server with Dynamic Rules - Kubernetes

I am currently working on a project that uses the Prometheus Server (Not the Prometheus Operator).
We're looking to introduce a way of modifying the PrometheusRules without having to redeploy it.
I'm completely new to containers and Kubernetes, and a little over my head, so I'm hoping someone could let me know if I'm wasting my time trying to make this work.
What I have thought of doing so far is:
1. Store the PrometheusRules in a ConfigMap.
2. Apply the ConfigMap of rules to the Prometheus Server configuration.
3. Create a sidecar to the Prometheus Server that can modify this ConfigMap.
4. Have the sidecar expose an API so users get CRUD functionality for the rules.
5. When a rule is successfully modified, have the sidecar trigger the reload endpoint on the Prometheus Server, which causes it to reload its configuration file without restarting the container.
Thanks

Your initial use case seems valid, though I would say there are better ways of achieving this.
For points 1 and 2, I would suggest using the Prometheus Helm chart for ease of use and better config management and deployment. This keeps the Prometheus configuration, including the rule files, as one single unit rather than maintaining the rule files separately.
For points 3 and 4: making direct, untracked changes to the live configuration does not seem safe or secure. Using the Helm chart mentioned above, I would suggest making the changes before deploying to the cluster (use a VCS like Git to track them).
Best case scenario: also set up CI/CD pipelines so changes are deployed automatically as soon as they are merged.
Then use the reload API, as mentioned, to load the newly released config.
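To make points 1, 2 and 5 concrete, here is a rough sketch of what the ConfigMap and the relevant part of the Prometheus Deployment could look like. The names (prometheus-rules, alert-rules.yml) and the example alert are placeholders, not your actual setup:

```yaml
# Example rules ConfigMap (names and rule contents are illustrative).
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-rules
data:
  alert-rules.yml: |
    groups:
      - name: example
        rules:
          - alert: HighErrorRate
            expr: rate(http_requests_total{status="500"}[5m]) > 0.1
            for: 5m
---
# Relevant fragment of the Prometheus Deployment: mount the rules ConfigMap and
# enable the lifecycle endpoint so a POST to /-/reload picks up rule changes.
# prometheus.yml (not shown) would need: rule_files: ["/etc/prometheus/rules/*.yml"]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - --config.file=/etc/prometheus/prometheus.yml
            - --web.enable-lifecycle
          volumeMounts:
            - name: rules
              mountPath: /etc/prometheus/rules
      volumes:
        - name: rules
          configMap:
            name: prometheus-rules
```

With this in place, the sidecar from points 3-5 would update the ConfigMap through the Kubernetes API and then POST to http://localhost:9090/-/reload. Keep in mind that ConfigMap volume updates take a little while (typically up to a minute) to propagate into the pod before the reload sees them.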
Explore more about Helm

Related

Deploy ELK stack with bootstrap lifecycle policies (ILM) on Kubernetes

I've deployed the ELK stack to a Kubernetes cluster.
I'm also able to create the lifecycle policies in the Kibana UI. My issue is that every time I want to bring ELK to another cluster, or simply need to clear the Persistent Volume and re-deploy, all of my data is gone. The logs don't matter - indeed, I want the logs to be empty in the new environment; I just want my policies to be available in the new env.
My idea is that every time I deploy the ELK stack, I will bootstrap it with the policies somehow.
First approach: call the APIs to create the policies, index templates, ... after the deployments are ready. This would need to check some readinessProbe of Elasticsearch/Kibana and call the right commands => the peasant's way.
Second approach: find some way to extract the saved policies from the Elasticsearch "database", then copy them to the new Elasticsearch deployment.
Does anyone have experience with this?
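A rough sketch of the first approach as a one-shot Kubernetes Job (the Service name, policy name and payload below are placeholders, and authentication is omitted):

```yaml
# Hypothetical bootstrap Job: wait for Elasticsearch, then create the ILM policy.
apiVersion: batch/v1
kind: Job
metadata:
  name: bootstrap-ilm-policies
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: bootstrap
          image: curlimages/curl            # any image with a shell and curl would do
          command: ["/bin/sh", "-c"]
          args:
            - |
              # Poor man's readiness check: wait until the cluster answers.
              until curl -sf http://elasticsearch:9200/_cluster/health; do sleep 5; done
              # Create (or overwrite) the lifecycle policy; add auth if security is enabled.
              curl -sf -X PUT http://elasticsearch:9200/_ilm/policy/logs-policy \
                -H 'Content-Type: application/json' \
                -d '{"policy":{"phases":{"delete":{"min_age":"30d","actions":{"delete":{}}}}}}'
```

The same Job could also PUT index templates. The second approach amounts to exporting the same objects with GET _ilm/policy and re-applying them on the new cluster, so a Job like this covers both.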

Multiple apps in single K8S deployment

I'm exploring K8S possibilities and I wonder whether there is any way to create deployments for two or more apps as a single deployment so that it is transactional - when something goes wrong after deployment, all apps are rolled back. I also want to mention that I'm not talking about a pod with multiple containers, because additional sidecar containers are rather intended for cross-cutting concerns like monitoring, authentication (like Kerberos) and others, and it is not recommended to put different apps in a single pod. With this in mind, is it possible to have a single deployment that can produce 2+ kinds of pods?
Is it possible to have a single deployment that can produce 2+ kinds of pods?
No. A Deployment creates only one kind of Pod. You can update a Deployment's contents, and it will incrementally replace existing Pods with new ones that match the updated Pod spec.
Nothing stops you from creating multiple Deployments, one for each kind of Pod, and that's probably the approach you're looking for here.
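For illustration, two separate Deployments that you would apply together (all names and images here are placeholders); each one produces its own kind of Pod:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: example/frontend:1.0   # placeholder image
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: example/backend:1.0    # placeholder image
```

Both can live in one file (or one directory applied with a single kubectl apply -f), but Kubernetes still treats them as two independent Deployments with independent rollouts.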
... when something goes wrong after deployment, all apps are rolled back.
Core Kubernetes doesn't have this capability on its own; indeed, it has somewhat limited capacity to tell that something has gone wrong, other than a container failing its health checks or exiting.
Of the various tools in #SYN's answer I at least have some experience with Helm. It's not quite "transactional" in the sense you might take from a DBMS, but it does have the ability to manage a collection of related resources (a "release" of a "chart") and it has the ability to roll back an entire version of a release across multiple Deployments if required. See the helm rollback command.
Helm
As pointed out in comments, one way to go about this would be to use something like Helm.
Helm is a client-side tool (as of v3; previous versions also involved "Tiller", a controller running in your Kubernetes cluster - it's deprecated, so let's forget about that one).
Helm uses "Charts" (more or less: templates, with default values you can override).
Kustomize
Another solution, similar to Helm, is Kustomize. It works from plain manifest files (not templates) while making it simple to override / customize your objects before applying them to your Kubernetes cluster.
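A minimal kustomization.yaml, to give an idea (the referenced manifests and patch file are hypothetical):

```yaml
# kustomization.yaml -- references plain manifests and layers changes on top of them
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
namespace: my-app                     # example: force every object into this namespace
commonLabels:
  environment: staging                # example: add a label to every object
patches:
  - path: increase-replicas.yaml      # hypothetical strategic-merge patch
```

Build and apply it with kubectl apply -k <dir>, or kustomize build <dir> | kubectl apply -f -.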
ArgoCD
While Kustomize and Helm are both standalone clients, we could also mention solutions such as ArgoCD.
The ArgoCD controller would run inside your Kubernetes cluster, allowing you to create "Application" objects.
Those Applications are processed by ArgoCD, driving deployment of your workloads (common sources for those applications would involve Helm Charts, Git repositories, ...).
The advantage of ArgoCD is that its controller may (depending on your configuration) be responsible for upgrading your applications over time (e.g. if your source is a Git repository on branch XXX and someone pushes changes into that branch, ArgoCD would apply those pretty much right away).
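A hypothetical Application object, to give an idea of what ArgoCD processes (repository URL, path and branch are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deployments.git   # placeholder repository
    targetRevision: main                                   # branch to track
    path: apps/my-app                                      # Helm chart, Kustomize dir or plain manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:          # apply changes pushed to the branch without manual syncing
      prune: true
      selfHeal: true
```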
Operators
Note that most of those solutions are pretty much unaware of how your application is actually running. Say you upgrade a deployment driven by Helm, Kustomize or ArgoCD and end up with some database pods stuck in CrashLoopBackOff: your application pods would get updated nevertheless; there's no automatic rollback to a previous working configuration.
Which brings us to another way to ship applications to Kubernetes: operators.
Operators are aware of the state of your workloads and may be able to fix common errors (depending on how they were coded, ... there's no magic).
An operator is an application (it can be written in Go, Java, Python, Ansible playbooks, ... or whatever comes with a library for talking to the Kubernetes cluster API).
An operator is constantly connected to your Kubernetes cluster API. You would usually find some CustomResourceDefinitions specific to your operator, allowing you to describe the deployment of some component in your cluster (e.g. the Elasticsearch operator introduces an "Elasticsearch" object kind, as well as a "Kibana" one).
The operator watches for instances of the objects it manages (e.g. Elasticsearch), eventually creating Deployments/StatefulSets/Services, ...
If someone deletes an object that was created by your operator, it would/should be re-created by that operator in a timely manner (mileage may vary, depending on which operator we're talking about ...).
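As an illustration, the Elasticsearch operator (ECK) accepts objects along these lines and then creates the StatefulSets, Services, ... itself (version and sizing here are only examples):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.12.0            # illustrative version
  nodeSets:
    - name: default
      count: 3               # the operator reconciles this into a 3-node StatefulSet
      config:
        node.store.allow_mmap: false
```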
A perfect example of operators would be something like OpenShift 4 (OKD4): a Kubernetes cluster that comes with tens of operators (SDN, DNS, machine configurations, ingress controller, Kubernetes API server, etcd database, ...). The whole cluster is an assembly of operators: when upgrading your cluster, each of those manages the upgrade of its corresponding services in an orchestrated way, one after the other, ... and if anything fails, you're usually still left with enough replicas running to troubleshoot the issue.
Depending on what you're looking for, each option has its advantages and drawbacks. Now, if you're looking for a "single deployment that can produce 2+ kinds of pods", then ArgoCD or some home-grown operator would qualify.

Kong reboot in DB-less mode

I'm playing around with Kong in DB-less mode in a Docker container, trying to figure out if we can use it as a gateway for the company I work for. I currently mount a local folder into my Docker container and pass the path to the kong.yaml file to Kong when it starts. When I need to update the configuration, I do a POST to the /config endpoint.
All good so far.
However, my concern is: how am I supposed to handle a Kong restart? The configuration I have will be generated by a separate microservice from a PostgreSQL database.
Kong will be running as an Ingress controller in our Kubernetes cluster. One thing I could do is expose an endpoint that generates a kong.yml config file based on my data in PostgreSQL, which Kong could hit on startup. I think I can make it part of its start command.
Anyway, this seems like a bit of a hack. I was wondering: are there any best practices around this? I'm sure other people have faced this problem before :-)
Thanks!
Answer
Configuring Kong on Kubernetes is done through Kubernetes native resources (e.g. Ingress) and Kong Custom Resources (e.g. KongConsumer, KongPlugin, KongIngress).
The Kong Ingress Controller will make all necessary changes based on changes to those resources through the Kubernetes API Server.
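For example, a KongPlugin attached to an Ingress via annotation; the controller translates both into Kong's in-memory (DB-less) configuration, so you never POST to /config yourself. The plugin choice, service and path names are just examples:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-example
plugin: rate-limiting
config:
  minute: 5
  policy: local
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-api
  annotations:
    konghq.com/plugins: rate-limit-example   # attach the plugin defined above
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: my-api        # placeholder backend Service
                port:
                  number: 80
```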
Additional Info
I highly recommend going through these guides. They are comprehensive and highly educational.
Make sure to keep an eye on the logs coming out of the Kong Ingress Controller pod because this will tell you whether it has successfully reconciled changes based on those resources or not.
Also feel free to take a look at this project where we manage Kong CRs through an on-cluster REST API Microservice.

Kubernetes: How to manage multiple separate deployments of the same app

We're migrating our app's deployments from VMs to Kubernetes, and as my knowledge of Kubernetes is very limited, I'm lost as to how I could set up deployments for multiple clients.
Right now we have a separate VM for each client, but how do I separate the clients in Kubernetes in a way that is cost- and resource-efficient and easy to manage?
I managed to create dev and staging environments using namespaces and this is working great.
To update dev and staging deployment I just use kubectl apply -f <file> --namespace staging.
Now I need to deploy the app to production for several clients (50+). They should be completely separate from each other (using separate environment variables and secrets) while the code stays the same, and I don't know the best way to achieve that.
Could you please give me a hint as to the right way to do this in Kubernetes?
You can use Kustomize. It provides a purely declarative approach to configuration customization and can manage an arbitrary number of distinctly customized Kubernetes configurations.
https://github.com/kubernetes-sigs/kustomize/tree/master/examples
Use one namespace (or a set of namespaces) per customer.
Kustomize has a very good overlay/patch system to handle the generic configuration and the per-client adaptations (see the sketch below).
Use NetworkPolicy to isolate the network between clients.
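A sketch of that layout, with one overlay per client (directory, namespace and variable names are hypothetical):

```yaml
# base/kustomization.yaml -- the shared application manifests, identical for every client
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/client-a/kustomization.yaml -- one small overlay per client (50+ of these)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: client-a
resources:
  - ../../base
configMapGenerator:
  - name: app-env
    literals:
      - FEATURE_X=enabled          # client-specific environment variables
secretGenerator:
  - name: app-secrets
    envs:
      - secrets.env                # client-specific secrets, kept out of the base
```

Deploying client A is then kubectl apply -k overlays/client-a, and onboarding a new client is just adding a new overlay directory.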

How can I distribute loads to Kubernetes Pods?

I have work defined in a file/config with the following format,
config1,resource9
config3,resource21
config5,resource10
How can I spin up individual pods based on the configuration? If I add one more line to the configuration, Kubernetes needs to spin up one more pod and send that configuration line to the new pod.
How do I store the configuration in Kubernetes and spin up pods based on it?
Take a look at Kubernetes Operators. The pattern adds a Kubernetes management layer to an application: basically, you run a Kubernetes-native app (the operator) that connects to the Kubernetes API and takes care of deployment management for you.
If you are familiar with Helm, then a quick way to get started is with the Helm-based operator example. This example will create a new Nginx deployment for each Custom Resource you create; the Custom Resource contains all the Helm values Nginx requires for a deployment.
As a first step you could customise the example so that all you need to do is manage the single Custom Resource to deploy or update the app.
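Conceptually, each line of your config becomes one instance of a custom resource that the operator watches and turns into a pod. Something along these lines, where the WorkItem kind and its fields are entirely hypothetical:

```yaml
# One custom resource per line of the config file ("config1,resource9").
apiVersion: example.com/v1alpha1     # hypothetical API group/version
kind: WorkItem
metadata:
  name: config1
spec:
  config: config1
  resource: resource9
```

Adding a new line to your config then means creating one more WorkItem, and the operator reacts by spinning up one more pod with that line's values.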
If you want to take it further, you may run into some Helm limitations pretty quickly; for advanced use cases you can use the Go operator-sdk directly.
There are a number of operator projects to browse at https://operatorhub.io/.