Grafana integration with Spring Boot and PostgreSQL

I am trying to use Grafana for visualization. I have a Spring Boot application which is integrated with PostgreSQL. I want to fetch the data from Postgres and show it in Grafana.
So far, I have found a Maven dependency for a Grafana client, which is shown below:
<!-- https://mvnrepository.com/artifact/com.appnexus.grafana-client/grafana-api-java-client -->
<dependency>
    <groupId>com.appnexus.grafana-client</groupId>
    <artifactId>grafana-api-java-client</artifactId>
    <version>1.0.5</version>
</dependency>
Can anyone please point me to tutorials or examples?

If you just want to visualize the PostgreSQL database metrics in Grafana, then you can use this plugin.
However, if you want to view Spring Boot related metrics (HTTP response status, performance, JVM stats, connection pool stats, etc.), then a more popular way is to set up Prometheus as the time-series datastore and metrics scraper, from which Grafana can generate the visualizations. For detailed information on setting up each component you can refer to this or this article.
There are even ready-made Grafana dashboards for Spring Boot, like this one. Of course, this too uses Prometheus.
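On the Spring Boot side, the usual setup for the Prometheus route is to add spring-boot-starter-actuator plus micrometer-registry-prometheus and expose the /actuator/prometheus endpoint; application code can then record its own metrics through the auto-configured MeterRegistry. A minimal sketch (the class and metric names below are only illustrative, not from the question):

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Service;

// With spring-boot-starter-actuator and micrometer-registry-prometheus on the
// classpath, and management.endpoints.web.exposure.include=prometheus set,
// this counter is published at /actuator/prometheus alongside the JVM, HTTP
// and connection pool metrics that Micrometer records out of the box.
@Service
public class OrderMetrics {

    private final Counter ordersCreated;

    public OrderMetrics(MeterRegistry registry) {
        this.ordersCreated = Counter.builder("orders.created")
                .description("Number of orders created")
                .register(registry);
    }

    public void onOrderCreated() {
        ordersCreated.increment();
    }
}

Prometheus then scrapes that endpoint, and Grafana uses Prometheus as its data source.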

Related

Automatically pulling REST API data to visualize it in Apache Superset

I work in a large enterprise and have a project to build some custom automated dashboards for our IT department; the small amount of data only needs to be fetched from REST API endpoints. The process needs to be fully automated, and there is not enough time to build a custom API wrapper. For this I was going to use Apache Airflow + Apache Superset. I have been googling for a couple of days for an open-source solution that is simpler than Apache Airflow for moving data from the REST API endpoints to visualize it in Superset. Please share your experience: what would you choose instead of Apache Airflow?
I chose to go with the following solution:
Apache Airflow + PostgreSQL + Grafana (instead of Superset, because in Grafana you can actually create a drill-down option using a workaround)

Quarkus: MicroProfile metrics and Graphite

Is there a way to send MicroProfile metrics to Graphite directly? The only Quarkus guide I've found is this: https://quarkus.io/guides/microprofile-metrics. I'm looking for something similar to what I'm using with Spring Boot: https://micrometer.io/docs/registry/graphite. Is there the same thing in the Quarkus context (with native image support)?
The Micrometer extension is recommended for metrics, and it has support for Graphite via a Quarkiverse extension:
<dependency>
    <groupId>io.quarkiverse.micrometer.registry</groupId>
    <artifactId>quarkus-micrometer-registry-graphite</artifactId>
</dependency>
This is the Quarkus issue that people can follow and give feedback on: https://github.com/quarkusio/quarkus/issues/10525
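Once the registry extension is in place, application code uses the ordinary Micrometer API; the Graphite host and port are set through the extension's configuration properties (check the Quarkiverse docs for the exact keys). A rough sketch, with an illustrative bean and metric name:

import io.micrometer.core.instrument.MeterRegistry;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

// The injected MeterRegistry is backed by the Graphite registry when
// quarkus-micrometer-registry-graphite is on the classpath. Older Quarkus
// versions use javax.enterprise.context / javax.inject instead of jakarta.*.
@ApplicationScoped
public class CheckoutMetrics {

    private final MeterRegistry registry;

    @Inject
    public CheckoutMetrics(MeterRegistry registry) {
        this.registry = registry;
    }

    public void onCheckout() {
        registry.counter("checkout.completed").increment();
    }
}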

Helm: Datadog Agent with JDBC driver

I would like to use the Datadog Oracle integration via the Datadog Helm chart. The Oracle integration docs state: "To use the Oracle integration, either install the Oracle Instant Client libraries, or download the Oracle JDBC Driver."
I do not want to use a custom image to package the JDBC driver; I want to use a standard image such as tag:7-jmx. Other options that come to mind (e.g. an EFS volume with the driver inside) also seem to be overkill.
The best option to me seems to be an init container that downloads the JDBC driver, but the Datadog Helm chart does not support custom init containers for the agents.
What's the best way to do this, i.e. to get a Datadog Agent with a JDBC driver via Helm?
Answer from Datadog Support to this:
Thanks again for reaching out to Datadog!
From looking further into this, there does not seem to be a way we can package the JDBC driver with the Datadog Agent. I understand that this is not desirable as you would prefer to use a standard image but I believe the best way to have these bundled together would be to have a custom image for your deployment.
Apologies for any inconveniences that this may cause.

How to check the SLA of a web service

We deployed our Spring Boot application in GKE (Google Kubernetes Engine) and we are currently using Cloud Endpoints to secure our web services. We have developed 11 web services which will be consumed by external clients. Is there any way I can check the SLO (times, performance) of a web service in Cloud Endpoints or in Stackdriver?
You might want to check:
Spring Sleuth
Jaeger operator
Jaeger implements the OpenTracing standard and can help you understand the values, and Sleuth is a tool to integrate tracing with Spring. There are several options; you might also want to consider OpenCensus.
First you need to expose metrics from your applications. Spring Sleuth is a great choice if you're using Spring Boot.
Then you need to collect the metrics and visualize them. Google provides a tool for that called Stackdriver Trace. It can also do metric-based alerts. You can find a sample setup for your use case here.
There are other performance monitoring services such as Dynatrace or Datadog.
If you want a self-hosted solution, you can use Zipkin which is inspired by an internal Google system called Dapper.
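Most of what Sleuth gives you is automatic: once spring-cloud-starter-sleuth (and the Zipkin or Stackdriver exporter) is on the classpath, incoming and outgoing HTTP calls are traced without code changes. If you also want to time a specific piece of business logic, you can open a custom span through the Brave Tracer that Sleuth auto-configures. A rough sketch with illustrative names:

import brave.Span;
import brave.Tracer;
import org.springframework.stereotype.Service;

@Service
public class InvoiceService {

    private final Tracer tracer;

    public InvoiceService(Tracer tracer) {
        this.tracer = tracer;
    }

    public void generateInvoice(String orderId) {
        // Custom span around a section worth timing; it is reported to the
        // configured tracing backend (Zipkin, Stackdriver Trace, ...) together
        // with the spans Sleuth creates for the surrounding HTTP request.
        Span span = tracer.nextSpan().name("generate-invoice").start();
        try (Tracer.SpanInScope ws = tracer.withSpanInScope(span)) {
            span.tag("order.id", orderId);
            // ... actual invoice generation ...
        } catch (RuntimeException e) {
            span.error(e);
            throw e;
        } finally {
            span.finish();
        }
    }
}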
Have you looked at the Google Cloud Console UI? Its "Endpoints" tab should show all the services your project is running.

How to set up Apache Sling to use a relational DB

I am on Sling 11, which uses Jackrabbit Oak as its content repository. I was wondering how to set up Sling to store the JCR repository on an RDBMS (DB2, to be specific).
I found this link on Jackrabbit persistence, but it looks like it does not apply to Oak, and the Oak documentation is mostly about MongoDB.
I also found an implementation of a Cassandra resource provider, although that seems designed to access specific paths mapped to Cassandra without using Oak.
Thanks,
Answering here, but credit goes to the Sling users mailing list:
Package the DB driver in an OSGi bundle
Download Sling's starter project
In boot.txt add a new running mode (in my case oak_db2)
[settings]
sling.run.mode.install.options=oak_tar,oak_mongo,oak_db2
Download Sling's datasource project and compile it.
In oak.txt configure the running mode (this will load the bundles for you in Felix):
[artifacts startLevel=15 runModes=oak_db2]
com.h2database/h2-mvstore/1.4.196
com.ibm.db2/jcc4/11.1
org.apache.sling/org.apache.sling.datasource/1.0.3-SNAPSHOT
And set up the services that will manage persistence:
[configurations runModes=oak_db2]
  org.apache.jackrabbit.oak.plugins.document.DocumentNodeStoreService
    documentStoreType="RDB"
  org.apache.sling.datasource.DataSourceFactory
    url="jdbc:db2://10.1.2.3:50000/sling"
    driverClassName="com.ibm.db2.jcc.DB2Driver"
    username="****"
    password="****"
    datasource.name="oak"
Create a database named 'sling'.
Run with java -jar -Dsling.run.modes=oak_db2 sling-starter.jar
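If the DataSourceFactory configuration does not come up cleanly, a quick way to rule out connectivity or credential problems is a plain JDBC check against the same DB2 URL (the values below mirror the example configuration above and are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

// Standalone sanity check for the DB2 URL, driver and credentials used in the
// DataSourceFactory configuration. Run it with the DB2 jcc driver jar on the
// classpath.
public class Db2ConnectionCheck {

    public static void main(String[] args) throws Exception {
        String url = "jdbc:db2://10.1.2.3:50000/sling"; // same URL as in oak.txt
        try (Connection conn = DriverManager.getConnection(url, "****", "****")) {
            System.out.println("Connected to: "
                    + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}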