InfluxDB handler issue for Sensu Core 1.9

We have a Sensu setup for a large infrastructure. We are running Sensu Core 1.9 and have not updated to Sensu Go 6.8. We are trying to set up an InfluxDB handler for our InfluxDB 2.4 database; we cannot migrate our setup to Sensu Go immediately, and therefore cannot use the latest InfluxDB handler for Sensu Go. Can someone suggest an approach or point me in the right direction?
For InfluxDB v1.7 we use an InfluxDB handler that sends the data over UDP to port 8090 (a udp-type handler) together with a line-protocol mutator, with the matching listener configured in influxdb.conf, and we want the same kind of setup for InfluxDB v2.
We tried configuring the handler with environment variables for the token, bucket name, etc., but it is not working.
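Since InfluxDB 2.x no longer ships the 1.x UDP listener, one option (sketched below, not an official handler) is to keep the line-protocol mutator and swap the udp handler for a pipe handler that posts to the v2 HTTP write API. The INFLUXDB_* environment variable names and defaults below are placeholders, not part of any existing handler.

#!/usr/bin/env python
# Minimal sketch of a Sensu 1.x pipe handler for InfluxDB 2.x.
# Assumes the existing line-protocol mutator stays in place, so this script
# receives InfluxDB line protocol on stdin.
import os
import sys
import requests

INFLUX_URL = os.environ.get("INFLUXDB_URL", "http://localhost:8086")
INFLUX_ORG = os.environ["INFLUXDB_ORG"]
INFLUX_BUCKET = os.environ["INFLUXDB_BUCKET"]
INFLUX_TOKEN = os.environ["INFLUXDB_TOKEN"]

def main():
    line_protocol = sys.stdin.read()  # mutated event data in line protocol
    if not line_protocol.strip():
        return
    resp = requests.post(
        INFLUX_URL + "/api/v2/write",
        params={"org": INFLUX_ORG, "bucket": INFLUX_BUCKET, "precision": "s"},
        headers={"Authorization": "Token " + INFLUX_TOKEN},
        data=line_protocol.encode("utf-8"),
        timeout=10,
    )
    resp.raise_for_status()  # InfluxDB returns 204 No Content on success

if __name__ == "__main__":
    main()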

Related

InfluxDB Error: default retention policy not set for database in grafana after influx update from 1 to 2

I have updated my Influx database and also mapped the databases. But now I get the following problem in Grafana:
InfluxDB Error: default retention policy not set for database
InfluxDB Error: not executed
What could be the reason? I get the values via Flux without any problems; however, I would like to continue using InfluxQL.
In order to continue using InfluxQL you will need to set up the database/retention policy (DBRP) mapping for your new 2.x buckets, so that InfluxQL can treat them like 1.x databases. Have you done this already?
Docs to refer to:
https://docs.influxdata.com/influxdb/cloud/query-data/influxql/dbrp/#create-dbrp-mappings
Example:
influx v1 dbrp create --default --bucket-id 520047e21111111 --db telegraf --rp default
I think you may need to change default to autogen (the last parameter). I used default as it seems to be what Grafana 9 uses (not confirmed). You can see this in your error message:
InfluxDB Error: default retention policy not set for database
Of course, you need to create such a mapping for each bucket you have.
You may also find this example of connecting Grafana 9.1 -> InfluxDB 2.4 useful.
See Configure InfluxDB authentication: https://docs.influxdata.com/influxdb/v2.1/tools/grafana/?t=InfluxQL
In this format you need to pass the Authorization header, with a space in it:
Token y0uR5uP3rSecr3tT0k3n
You can generate a token in the InfluxDB web GUI (it will be long and, I think, Base64 encoded).
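As a quick way to verify the token format and the DBRP mapping outside of Grafana, you can hit the InfluxDB 2.x InfluxQL compatibility endpoint directly; the host, database name and token below are placeholders:

import requests

# Query the InfluxQL compatibility endpoint; this only works once a DBRP
# mapping (see above) exists for the bucket.
resp = requests.get(
    "http://localhost:8086/query",
    params={"db": "telegraf", "q": "SHOW MEASUREMENTS"},
    headers={"Authorization": "Token y0uR5uP3rSecr3tT0k3n"},  # note the space
)
print(resp.status_code, resp.json())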

Why does my Tarantool Cartridge retrieve data from router instance sometimes?

I wonder why my Tarantool Cartridge cluster is not working as it should.
I have a Cartridge cluster running on Kubernetes; the Cartridge image is generated with the Cartridge CLI (cartridge pack), and no changes were made to the generated files.
Kubernetes cluster is deployed via helm with the following values:
https://gist.github.com/AlexanderBich/eebcf67786c36580b99373508f734f10
Issue:
When I make requests from the pure PHP Tarantool client, for example a SELECT SQL request, it sometimes retrieves the data from the storage instances, but sometimes it unexpectedly responds with data from the router instance instead.
The same goes for INSERT: after I created the same schema in both the storage and router instances and made 4 requests, 2 rows ended up in storage and 2 in the router.
That's weird, and from reading the documentation I'm sure it's not the intended behaviour. I'm struggling to find the source of this behaviour and hope for your help.
SQL in Tarantool doesn't work in cluster mode, e.g. with tarantool-cartridge.
P.S. That was the response to my question from the Tarantool community in the Tarantool Telegram chat.

Save locust data to influxdb

I'm new to Locust, InfluxDB and Grafana and wanted to integrate Locust with Grafana. For that I have to use a time-series DB, which is InfluxDB, and I want to store the Locust data in InfluxDB. I have done some research online but couldn't find guidance on how to do this.
Do I have to write a script for it, or is it just a matter of running a few commands? My Grafana, Locust and InfluxDB are running fine in my local environment with the help of Docker containers.
In your Locust scripts you need to create two functions:
a) For Success
b) For Failure
Then assign these functions to the Locust events request_success and request_failure.
Using an InfluxDB client you can write the JSON points to InfluxDB.
Please refer to the following link:
https://www.qamilestone.com/post/real-time-monitoring-using-locust-with-influxdb-grafana
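A rough sketch of that approach is below. The event hook names and signatures differ between Locust versions (newer releases use a single events.request hook), and the host and database names are placeholders:

from datetime import datetime

from influxdb import InfluxDBClient  # InfluxDB 1.x client library
from locust import HttpUser, task, events

client = InfluxDBClient(host="localhost", port=8086, database="locust")

def write_point(measurement, request_type, name, response_time, extra):
    # Build a JSON point and write it to InfluxDB.
    point = {
        "measurement": measurement,
        "time": datetime.utcnow().isoformat() + "Z",
        "tags": {"request_type": request_type, "name": name},
        "fields": dict(response_time=response_time, **extra),
    }
    client.write_points([point])

@events.request_success.add_listener
def on_success(request_type, name, response_time, response_length, **kwargs):
    write_point("request_success", request_type, name, response_time,
                {"response_length": response_length})

@events.request_failure.add_listener
def on_failure(request_type, name, response_time, response_length, exception, **kwargs):
    write_point("request_failure", request_type, name, response_time,
                {"exception": str(exception)})

class QuickUser(HttpUser):
    @task
    def index(self):
        self.client.get("/")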

How to set up StatsD (along with Grafana & Graphite) as backend for Kamon?

I want to track Akka actor metrics, and for that I am using Kamon, a JVM monitoring tool, which requires a backend service to post its stats data to. For this purpose I've decided to use the open-source StatsD in combination with Grafana & Graphite. Here is the Grafana image which I ran in Docker (with the Docker tool, since I am on a Mac); everything is working fine and I am able to see the Grafana UI, but it is showing some random data in the graphs, maybe these are example graphs. Now I am struggling with how to configure it with my own datasource. If anybody here has had the same experience in the past, can you help me? Any kind of help would be appreciated.
The random graphs you are seeing come from the default Grafana test datasource.
You first need to configure a new datasource that points at the Graphite metrics. The important thing to realise here is that, from Grafana's point of view, the Graphite datasource URL is within the same Docker container, i.e. localhost.
If you set up a new datasource with the following properties:
Name: graphite
Default: checked
Type: Graphite
URL: http://localhost:8000
Access: proxy
You should then have a datasource that points to the Graphite metric data within the Docker container.
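If you prefer to script it, the same datasource can also be created through Grafana's HTTP API rather than the UI; the Grafana URL and credentials below assume the defaults mentioned in the note that follows:

import requests

# Create the Graphite datasource via Grafana's HTTP API; the values mirror
# the list above, and admin/admin is Grafana's default login.
payload = {
    "name": "graphite",
    "type": "graphite",
    "url": "http://localhost:8000",
    "access": "proxy",
    "isDefault": True,
}
resp = requests.post(
    "http://localhost:3000/api/datasources",
    json=payload,
    auth=("admin", "admin"),
)
print(resp.status_code, resp.json())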
Note - the default username/password for the Grafana UI is admin/admin.
Hope this helps.

How can we add a document using the SolrCloud server?

While adding a document using the SolrCloud server I am getting a ZooKeeper connection exception.
How to resolve this?
Are you sure you're connected to ZooKeeper?
If you run Solr with embedded ZooKeeper, the ZK port is the Solr port + 1000, for example 8983 + 1000 => 9983.
If you are sure that you use the correct connection details, then you can always increase the ZK connection timeout on your CloudSolrServer object with setZkConnectTimeout(int), if you are using the Solrj client.
This will probably fix the problem, but you should investigate why you are getting ZK timeouts.
In newer Solr versions the default timeout was increased from 1800 ms to 3000 ms, so if you are using an old version of Solr, change it.
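The answer above refers to the Solrj (Java) client. For comparison, here is a minimal, unverified sketch with the Python pysolr client, which can also talk to SolrCloud through ZooKeeper; the hosts and collection name are placeholders, and the explicit ZooKeeper timeout plays a role analogous to setZkConnectTimeout in Solrj:

import pysolr

# Connect to SolrCloud via ZooKeeper and index a single document.
zk = pysolr.ZooKeeper("zk1:2181,zk2:2181,zk3:2181", timeout=30)
solr = pysolr.SolrCloud(zk, "my_collection")
solr.add([{"id": "doc-1", "title": "hello solrcloud"}], commit=True)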