How to give a Prisma query a name and use that name in logs - prisma

I know that I can add middleware to log query data: https://www.prisma.io/docs/concepts/components/prisma-client/middleware/logging-middleware
But does Prisma have special syntax to add a name to a query, so that the name can be used in the middleware?
For example, I have 3 queries to get users, but they are different; I want to give each of them a specific name and log those names in the logging middleware.

Prisma has recently released support for metrics. Prisma metrics give you detailed insight into how Prisma Client interacts with your database, and you can use this insight to help diagnose performance issues in your application.
You can add global labels to your metrics to help you group and separate them. Each instance of Prisma Client adds these labels to the metrics that it generates. For example, you can group your metrics by infrastructure region, or by server, with labels like { server: 'us_server1', app_version: 'one' }.
Global labels work with JSON and Prometheus-formatted metrics.
Here's an example:
// Fetch JSON-formatted metrics; $metrics.json() returns a promise, so await it
const metrics = await prisma.$metrics.json({
  globalLabels: { server: 'us_server1', app_version: 'one' },
})
console.log(metrics)
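If the goal is specifically to name individual queries in the logging middleware mentioned in the question, note that the middleware callback already receives each query's model, action, and arguments, which can serve as an automatically derived name in the logs. A minimal sketch of that approach, assuming the $use middleware API from the docs linked in the question:
// Log a label for every query built from its model and action,
// e.g. "User.findMany", together with the query duration.
prisma.$use(async (params, next) => {
  const name = `${params.model}.${params.action}`
  const start = Date.now()
  const result = await next(params)
  console.log(`query ${name} took ${Date.now() - start}ms`)
  return result
})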

Related

I can't find a CloudWatch metric in the Grafana UI query editor/builder

I'm trying to create a Grafana dashboard that will reflect my AWS RDS cluster metrics.
For simplicity I've chosen CloudWatch as a datasource. It works well for showing the 'direct' metrics from the RDS cluster.
The problem is that we've switched to RDS Proxy due to the high number of connections we are required to support.
Now I'm adjusting my dashboard to reflect a few metrics that are lacking; the most important is the number of actual connections, which in the AWS CloudWatch console is represented by this query:
SELECT AVG(DatabaseConnections)
FROM SCHEMA("AWS/RDS", ProxyName,Target,TargetGroup)
WHERE Target = 'db:my-db-1'
AND ProxyName = 'my-db-rds-proxy'
AND TargetGroup = 'default'
The problem is that I can't find it anywhere in the Grafana CloudWatch query editor.
The only metric with "connections" is the standard DatabaseConnections which represents the 'direct' connections to the RDS cluster and not the connections to the RDS Proxy.
Any ideas?
That UI editor is generated from a hardcoded list of metrics, which may not contain all metrics and dimensions (especially if they have been added recently), so in that case the UI doesn't offer them in the select box.
But that is not a problem, because that select box is not a standard select box. It is an input where you can write your own metric and dimension name. Just click there, type what you need, and hit Enter to add it (the same applies to dimensions).
Pro tip: don't use the UI query builder (that's for beginners); switch to Code mode and write your queries directly (the UI builder builds that query under the hood anyway).
It would be nice if you created a Grafana PR adding the metrics and dimensions that are missing from the UI builder to metrics.go.
So for whoever ends up here: you should use ClientConnections and use ProxyName as the dimension (which I didn't set initially).
I was using an old Grafana version (7.3.5) which didn't have it built in.
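Putting those comments together, the query from the question adapted to the proxy metric would look something like this (a sketch; ClientConnections is the RDS Proxy client-connection metric, reported in the AWS/RDS namespace with the ProxyName dimension):
SELECT AVG(ClientConnections)
FROM SCHEMA("AWS/RDS", ProxyName)
WHERE ProxyName = 'my-db-rds-proxy'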

Spring Data MongoDB federation attempts -- how can I get interface methods to use a custom-configured MongoTemplate?

In my application, I need to be able to connect to any number of MongoDB hosts, and any number of databases in any of those hosts, to support at least this basic level of query federation. This is specified by configuration, so, for any given installation of our app, I cannot know ahead of time how many collections I will need to access. I based my attempt on configuration that I saw in this Baeldung article, with some modifications to suit my requirements. My configuration looks something like this YAML:
datasources:
  - name: source1
    uri: mongodb://user1:pass1@127.0.0.1:27017
    fq_collection: db1.coll1
  - name: source2
    uri: mongodb://user1:pass1@192.168.0.100:27017
    fq_collection: db2.coll2
And, depending on the installation, there could be any number of datasources entries. So, in my @Configuration class, I can iterate through these entries, which are injected via configuration properties. But I want to create a MongoTemplate for each of them, since I cannot rely on the default MongoTemplate. The solution I have attempted is to create a repository interface, and then a custom impl that accepts the configured MongoTemplate. I use this code to create each repository instance with its template:
public MongoRepository<Item, String> mongoCustomRepositories(MongoTemplate template) {
    // Custom fragment implementation that receives the per-datasource template
    MyCustomMongoRepository customImpl = new MyCustomMongoRepositoryImpl(template);
    // Repository factory bound to the same template, so derived query methods use it too
    MongoRepositoryFactory repositoryFactory = new MongoRepositoryFactory(template);
    // Build the repository proxy, backed by the custom fragment
    return repositoryFactory.getRepository(MyMongoRepository.class, customImpl);
}
I call it from a @Bean method that returns the list of all of these repositories created from the config entries, and I can then inject the repositories into service classes.
UPDATE/EDIT: OK, I set MongoDB profiling to 2 in order to log the queries. It turns out that the queries are in fact being sent to MongoDB, but the problem is that the collection names are not being set for the model. I can't believe that I forgot about this, but I did, so it was using the lower camel case model class name as the collection name, which guarantees that no documents will be retrieved. I have default collection names, but the specific collection names are set in the configuration, as the example YAML shows. I have a couple of ideas, but if anyone has a suggestion about how to set these dynamically, that would help a lot.
EDIT 2: I did a bunch of work and I have it almost working. You can see my work in my repo. However, in doing this, I uncovered a bug in spring-data-mongodb, and I filed an issue.

What is the role of Logstash Shipper and Logstash Indexer in ELK stack?

I have been studying the ELK stack online for my new project.
Most of the tech blogs are about how to set ELK up, but I need more background information to begin with.
What is Logstash? And further, what are the Logstash Shipper and Indexer?
What is Elasticsearch's role?
Any leads will be appreciated, even if not a proper answer.
I will try to explain the ELK stack to you with an example.
Applications on any machine in our cluster generate logs which all have the same format (timestamp | loglevel | message) and write those logs to some file.
Filebeat (a log shipper from the ELK ecosystem) tracks that file, gathers any updates to it periodically, and forwards them to Logstash over the network. Unlike Logstash, Filebeat is a lightweight application that uses very few resources, so I don't mind running it on every machine in the cluster. It notices when Logstash is down and holds off transferring data until Logstash is running again (no logs are lost).
Logstash receives messages from all log shippers over the network and applies filters to the messages. In our case it splits up each entry into timestamp, loglevel and message. These become separate fields that can later be searched easily. Any message that does not conform to that format gets a field marking it as an invalid log format. These messages with fields are then forwarded to Elasticsearch at a speed that Elasticsearch can handle.
Elasticsearch stores all messages and indexes (prepares for quick search) all the fields in the messages. It is our database.
We then use Kibana (also from the ELK stack) as a GUI for accessing the logs. In Kibana I can do something like: show me all logs from between 3 and 5 pm today with loglevel error whose message contains MyClass. Kibana will ask Elasticsearch for the results and display them.
I don't know if this helps, but... whatever. Let's take a really stupid example: I want to do statistics about squirrels in my neighborhood. Every squirrel has a name and we know what they look like. Each neighbor makes a log entry whenever they see a squirrel eating a nut.
ElasticSearch is a document database that structures data in so-called indices. It is able to save pieces (shards) of those indices redundantly on multiple servers and gives you great search functionality, so you can access huge amounts of data very quickly.
Here we might have finished events that look like this:
{
  "_index": "squirrels-2018",
  "_id": "zr7zejfhs7fzfud",
  "_version": 1,
  "_source": {
    "squirrel": "Bethany",
    "neighbor": "A",
    "@timestamp": "2018-10-26T15:22:35.613Z",
    "meal": "hazelnut"
  }
}
Logstash is the data collector and transformer. It's able to accept data from many different sources (files, databases, transport protocols, ...) with its input plugins. After using one of those input plugins, all the data is stored in an Event object that can be manipulated with filters (add data, remove data, load additional data from other sources). When the data has the desired format, it can be distributed to many different outputs.
If neighbor A provides a MySQL database with the columns 'squirrel', 'time' and 'ate', but neighbor B likes to write CSVs with the columns 'name', 'nut' and 'when', we can use Logstash to accept both inputs. Then we rename the fields and parse the different datetime formats those neighbors might be using. If one of them likes to call Bethany 'Beth' we can change the data here to make it consistent. Eventually we send the result to ElasticSearch (and maybe other outputs as well).
Kibana is a visualization tool. It allows you to get an overview of your index structures and server status, and to create diagrams for your ElasticSearch data.
Here we can build fun diagrams like 'Squirrel Sightings Per Minute' or 'Fattest Squirrel (based on nut intake)'.

Link with context from Grafana to Kibana (retain time frame and lucene query)

I have Grafana set up with an Elasticsearch datasource and I am graphing 404 HTTP status codes from my webserver.
I want to implement a drill down link to the Kibana associated with my Elasticsearch instance. The required URL is of this form:
https://my.elasticsearch.com/_plugin/kibana/#/discover?_g=(refreshInterval:(display:Off,section:0,value:0),time:(from:now-12h,mode:quick,to:now))&_a=(columns:!(_source),filters:!(),index:'cwl-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'status:404')),sort:!('@timestamp',desc))
For the from: and to: fields, I want to use the current "from" and "to" values that Grafana is using. And for the query: field, I want to use the value from the "Lucene query" of the associated metric.
Does Grafana expose some context object from which I can pull these values, and thus generate the necessary URL?
Or is there some other way?
It's now possible, starting with Grafana 7.1.2: the built-in ${__from} and ${__to} variables (with the :date format option) carry the current time range, and a dashboard variable can be formatted for Lucene with a suffix like :lucene.
A complete working example:
https://kibana/app/kibana#/discover/my-search?_g=(time:(from:'${__from:date}',to:'${__to:date}'))&_a=(query:(language:lucene,query:'host:${host:lucene}'))
https://github.com/grafana/grafana/issues/25396

Grafana - Graph with metrics on demand

I am using Grafana for my application, where I have metrics being exposed by my data source on demand, and I want to monitor such on-demand metrics in Grafana in a user-friendly graph. For example, until an exception has been hit by my application, the data source does NOT expose a metric named 'Exception'. However, I want to create the graph beforehand, where I should be able to specify the metric 'Exception', and the graph should plot it whenever my data source exposes the 'Exception' metric.
When I try to create a graph in Grafana using the web GUI, I'm unable to see these 'on-demand metrics', since they've not yet been exposed by my data source. However, I should be able to configure the graph so that these metrics are shown once they are exposed. If I go ahead and type the non-exposed metric name into the metrics field, I get the error "Timeseries data request error".
Does Grafana provide a method to do this? If so, what am I missing?
It depends on what data source you are using (Graphite, InfluxDB, OpenTSDB?).
For Graphite you can enter raw query mode (pen button) to specify whatever query you want; it does not need to exist yet. The same is true for InfluxDB: you find the raw query mode in the hamburger dropdown menu to the right of each query.
You can also use wildcards in a Graphite query (or a regex in InfluxDB) to create generic graphs that will add series to the graph as they come in.