I've been exploring possible solutions for exporting certain metrics from Prometheus to Postgres for analytical purposes.
I came across the prometheus-postgres-adapter, but unfortunately it stores the metrics in its own Postgres, i.e. a StatefulSet in k8s, and doesn't support an external Postgres like AWS RDS. There's an open issue for this: https://github.com/timescale/prometheus-postgresql-adapter/issues/10
Are there any alternatives, or should we write our own adapter?
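For context, whichever adapter you end up with (including a custom one) plugs into Prometheus through the standard `remote_write` configuration, so the Prometheus side looks the same either way. A minimal sketch — the adapter service name and port here are hypothetical placeholders, not a real deployment:

```yaml
# prometheus.yml — remote_write section (adapter endpoint is a placeholder)
remote_write:
  - url: "http://prometheus-postgres-adapter:9201/write"
    # Ship only the metrics you actually want to analyze, not everything.
    write_relabel_configs:
      - source_labels: [__name__]
        regex: "http_requests_total|job_duration_seconds.*"
        action: keep
```

A custom adapter then only needs to implement the remote-write receive endpoint (snappy-compressed protobuf over HTTP) and insert the samples into RDS.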
Related
I need to deploy Grafana in a Kubernetes cluster in a way that keeps multiple persistent volumes in sync - similar to what they did here.
Does anybody know how I can use the master/slave architecture so that only one pod writes while the others read? How would I keep them in sync? Do I need additional scripts to do that? Can I use Grafana's built-in sqlite3 database, or do I have to set up a different one (MySQL, Postgres)?
There's really not a ton of documentation out there about how to deploy StatefulSet applications other than MySQL or MongoDB.
Any guidance, experience, or even so much as a simple suggestion would be a huge help. Thanks!
StatefulSets are not what you think and have nothing to do with replication. They just handle the very basics of provisioning storage for each replica.
The way you do this is, as you said, by pointing Grafana at a "real" database rather than local SQLite.
Once you do that, you use a Deployment because Grafana itself is then stateless like any other webapp.
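To make that concrete, here's a sketch of what the Deployment could look like, using Grafana's `GF_<SECTION>_<KEY>` environment-variable overrides for the `[database]` section. The hostname, Secret name, and keys are placeholders you'd swap for your own:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 3               # safe to scale out once state lives in Postgres
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: grafana/grafana:10.4.0
          env:
            - name: GF_DATABASE_TYPE
              value: postgres
            - name: GF_DATABASE_HOST
              value: "my-db.example.com:5432"   # placeholder external DB
            - name: GF_DATABASE_NAME
              value: grafana
            - name: GF_DATABASE_USER
              valueFrom:
                secretKeyRef:
                  name: grafana-db              # placeholder Secret
                  key: user
            - name: GF_DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: grafana-db
                  key: password
```

With dashboards, users, and sessions in Postgres, every replica serves both reads and writes, so no master/slave split or sync scripts are needed.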
I have Grafana running inside a Kubernetes cluster, and I want to push logs from outside of Kubernetes (apps not running in K8s, databases, etc.) into the cluster so I can view them in Grafana. What's the best way of doing this?
So Grafana is a GUI for reporting on data stored in other databases. It sounds like you are capturing metrics from the cluster, and this data is stored in another database; if you are running Prometheus, that is the database Grafana queries for time-series data. Depending on data volume, you may also end up running a long-term storage system like Thanos in the future to keep that data over time.
Back to logging... Similarly, to use Grafana for logs you'll need to implement some kind of logging database. The most popular is the formerly open-source ELK (Elasticsearch, Logstash, Kibana) stack. You can now use OpenSearch, an open-source fork of Elasticsearch and Kibana. Most K8s distributions come with Fluentd, which replaces Logstash for shipping data, and you can also install Fluentd or Fluent Bit on any host to send data to this stack. You'll find that Grafana is not the best for log analysis, so most people use Kibana (OpenSearch Dashboards). You can use Grafana as well; it's just painful IMO.
Another option, if you don't want to run ELK, is Grafana Loki, another open-source database for logging. It's a lot simpler, but also more limited in how you can query the logs because of the way it indexes (labels rather than full text). It works nicely with Grafana, but once again it is not a full-text indexing technology, so it will be a bit limited.
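For the "push logs from outside the cluster" part, the Loki route would mean running Promtail on each external host, tailing local files and pushing to Loki over HTTP. A minimal sketch — the Loki URL, label values, and log path are placeholders:

```yaml
# promtail-config.yml on the external host (Loki endpoint is a placeholder)
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml    # remembers how far each file has been read

clients:
  - url: http://loki.example.com:3100/loki/api/v1/push

scrape_configs:
  - job_name: external-apps
    static_configs:
      - targets: [localhost]
        labels:
          job: external-apps
          host: db-server-01        # placeholder label for this host
          __path__: /var/log/myapp/*.log
```

You'd expose Loki outside the cluster (Ingress or LoadBalancer) so external hosts can reach that push endpoint; the ELK route works the same way with Fluentd/Fluent Bit shipping to Elasticsearch/OpenSearch instead.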
Hope this is helpful, let me know if you have questions!
I'm using GKE (Google Kubernetes Engine) on Google Cloud, and I have a Postgres container.
I want to configure Postgres to send its logs to Stackdriver in JSON format.
I couldn't find documentation for this, and I'm a Postgres newbie. How can I do this?
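One thing worth knowing: GKE nodes run a logging agent that automatically forwards container stdout/stderr to Cloud Logging (formerly Stackdriver), and lines that are themselves valid JSON get stored as structured `jsonPayload` entries. Postgres writes plain text to stderr by default (PostgreSQL 15+ adds a `jsonlog` destination, but that goes to files via the logging collector, not stderr). So a simple starting point is just keeping Postgres logs on stderr and letting GKE pick them up; a sketch of the relevant container spec:

```yaml
# Fragment of a Postgres pod/Deployment spec on GKE.
# Logs stay on stderr, where the GKE logging agent collects them.
containers:
  - name: postgres
    image: postgres:16
    args:
      - "-c"
      - "log_destination=stderr"
      - "-c"
      - "logging_collector=off"    # don't divert logs into files
      - "-c"
      - "log_line_prefix=%m [%p] %q%u@%d "   # timestamp, pid, user@db
```

To get true per-field JSON you'd need something extra, e.g. a sidecar that tails PostgreSQL 15's `jsonlog` output file and echoes it to stdout.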
I used this tutorial to install WordPress on Kubernetes.
https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
It is working as expected, but I would prefer to use Amazon RDS instead of the MySQL pods. I am not sure what changes are required.
In the WordPress Deployment you just need to update the host and credentials to point at your Amazon DB; you don't need to deploy any of the MySQL resources from the tutorial.
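Concretely, only the env vars on the `wordpress` container change. A sketch — the RDS endpoint below is a placeholder, and the Secret name assumes you reuse the tutorial's `mysql-pass` Secret (check the name in your copy):

```yaml
# Fragment of the wordpress Deployment from the tutorial:
# point the env vars at RDS instead of the in-cluster MySQL Service.
containers:
  - name: wordpress
    image: wordpress:6.4-apache
    env:
      - name: WORDPRESS_DB_HOST
        value: "mydb.abc123xyz.us-east-1.rds.amazonaws.com:3306"  # placeholder RDS endpoint
      - name: WORDPRESS_DB_USER
        value: wordpress
      - name: WORDPRESS_DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mysql-pass    # Secret name from the tutorial; adjust if yours differs
            key: password
```

Make sure the RDS security group allows inbound MySQL (port 3306) from your cluster's nodes, then delete the `wordpress-mysql` Deployment, Service, and PVC.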
I want to have a MongoDB deployment as a service, following the database-per-service microservice architecture model.
Right now I am using Helm charts to deploy MongoDB, defining a persistent volume and persistent volume claims.
But I want to deploy MongoDB in an HA setup, storing the data on EBS or similar.
When I checked online for a solution, everything suggested doing it with Portworx. But is there a way to do it without Portworx?
Any help appreciated.
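Portworx isn't required for this: on AWS, a StorageClass backed by the EBS CSI driver plus `volumeClaimTemplates` in a StatefulSet gives each MongoDB replica its own EBS volume, while HA comes from MongoDB's own replica set, not from the storage layer. A sketch, assuming the EBS CSI driver is installed in the cluster (names and sizes are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com      # AWS EBS CSI driver
parameters:
  type: gp3
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo              # requires a matching headless Service
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:7
          args: ["--replSet", "rs0"]   # MongoDB replica set provides the HA
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:                # one EBS-backed PVC per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: ebs-gp3
        resources:
          requests:
            storage: 20Gi
```

After the pods are up you'd still run `rs.initiate()` once (or use a chart/operator that automates it) to form the replica set across the three pods.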