Karate Gatling Grafana integration - is it possible to write raw measurement data into InfluxDB?

I am a newbie to the Karate Gatling framework and am trying to replace our current solution (Newman-based) with KarateDSL + InfluxDB + Grafana.
What I've noticed is that the measurement data written to InfluxDB is already aggregated and contains
max, min, mean, percentile50, percentile75, etc. values.
This makes it impossible to display the actual graph in Grafana, as the values no longer make sense. Is it possible to write the raw measurement data to InfluxDB (or any other DB) so that a proper graph can be displayed?
Thanks in advance.
I was trying to find a setting that would allow writing the raw measurements to the DB, but could not find one.
I would expect it to be possible to build such a graphical representation of the performance test results.
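For context, the aggregation happens on the Gatling side: results usually reach InfluxDB through Gatling's Graphite data writer, which flushes aggregated snapshots on a fixed period. Below is a sketch of the relevant gatling.conf section (key names may vary slightly between Gatling versions, so verify against your own gatling.conf); note that writePeriod only controls how often the aggregates are flushed - lowering it does not make the data raw:

```hocon
gatling {
  data {
    writers = [console, file, graphite]   # enable the Graphite writer
    graphite {
      host = "localhost"                  # InfluxDB's Graphite listener
      port = 2003
      rootPathPrefix = "gatling"
      writePeriod = 1                     # seconds between aggregated flushes
    }
  }
}
```

To get truly raw per-request data you would need a different path, e.g. post-processing Gatling's simulation.log file, since the Graphite protocol path only ever carries the aggregated values.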


Is there any way to download the entire Google Analytics 4 data for a certain period?

I was reading some API documents, including https://developers.google.com/analytics/devguides/reporting/data/v1/basics. However, this API only allows downloading certain dimensions and metrics.
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest,
)
client = BetaAnalyticsDataClient()
request = RunReportRequest(
    property=f"properties/{property_id}",
    dimensions=[Dimension(name="country")],
    metrics=[Metric(name="activeUsers")],
    date_ranges=[DateRange(start_date="2020-09-01", end_date="2020-09-15")],
)
response = client.run_report(request)
Is there any way to download the entire data as JSON or something?
BigQuery Export allows you to download all your raw events. Using the Data API, you could create individual reports with, say, 5 dimensions & metrics each; then you could download your data in slices through, say, 10 of those reports.
BigQuery and the Data API have different schemas. For example, BigQuery gives the event timestamp, while the most precise time granularity the Data API gives is the hour. So your decision between the Data API and BigQuery may depend on which dimensions & metrics you need.
What dimensions & metrics are most important to you?
"Download the entire data" is a vague statement. First, you need to decide what kind of data you need: aggregated data or raw data?
Aggregated data, e.g. daily/hourly activeUsers, events, sessions, etc., can be extracted using the Data API, as you have already tried. Each API request accepts a fixed number of dimensions and metrics, so based on your business requirements you should decide on the dimension-metric combinations and extract the data using the API.
Raw events can be extracted from BigQuery if you have linked your GA4 property to it. In the BigQuery export table schema, each row has a clientId, a timestamp, and other event-level details which are not available with the Data API. Again, based on your business requirements, you can write queries and extract the data.
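As a hedged sketch of that BigQuery route: the helper below builds a query over one daily GA4 export table. The project and dataset names are placeholders for your own BigQuery setup; event_timestamp, event_name, and user_pseudo_id are columns from the GA4 export schema. The resulting string can be run with whichever BigQuery client you prefer.

```python
def ga4_events_query(project: str, dataset: str, date_suffix: str) -> str:
    """Build a query over one daily GA4 export table (events_YYYYMMDD).

    project/dataset are placeholders for your BigQuery project and the
    dataset created by the GA4 -> BigQuery link.
    """
    table = f"`{project}.{dataset}.events_{date_suffix}`"
    return (
        "SELECT event_timestamp, event_name, user_pseudo_id\n"
        f"FROM {table}\n"
        "ORDER BY event_timestamp"
    )

# Produces the SQL for the export table of 2020-09-01:
print(ga4_events_query("my-project", "analytics_123456", "20200901"))
```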

How to delete a single data point in Grafana?

I'm getting data from two solar inverters via InfluxDB & Grafana. It works fine, but sometimes, due to an unknown issue, one value of some parameters of one of the inverters is extremely high - way too high to make sense. Is there a way to delete a single data point so that it's no longer shown in the graph?
Example image: efficiency of both solar inverters can't be higher than 100%
("Wirkungsgrad" is German for efficiency)
Grafana doesn't usually manage the data in InfluxDB (very likely your Grafana has read-only access to InfluxDB; otherwise it wouldn't be very secure). So the question of whether Grafana can delete an InfluxDB data point isn't quite the right one.
You can:
1.) Execute a DELETE InfluxDB query for that particular data point (as a user with the proper InfluxDB permissions - I guess only InfluxDB admins can do that)
2.) On the Grafana level, you can configure Y-Max: 100 for your Y axis, so values above 100 will be outside the visible graph area
3.) You can even filter it out in the Grafana InfluxDB query, e.g. WHERE value <= 100; in that case the high values won't even be returned by InfluxDB
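For options 1 and 3, the InfluxQL looks roughly like the sketch below; the measurement name "inverter", the field name, and the timestamp are placeholders for your own schema:

```sql
-- Option 1: delete the single bad point (needs write/admin permissions)
DELETE FROM "inverter" WHERE time = '2021-06-01T12:00:00Z'

-- Option 3: filter in the Grafana panel query so the spike is never returned
SELECT "Wirkungsgrad" FROM "inverter" WHERE "Wirkungsgrad" <= 100
```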

How to get all time-series InfluxDB entries with one Python query?

I have a question about using Python together with InfluxDB. I've got multiple Raspberry Pis collecting time-series data from sensors (like temperature, humidity, ...) and saving it to my InfluxDB.
Now I want to use another Pi to access that InfluxDB data and do some calculations, like the similarity of those time series. Because the number of queries can differ from time to time, I want to dynamically ask for a list of all entries and then query that data.
I did that really helpful tutorial over here: https://www.influxdata.com/blog/getting-started-python-influxdb/
There it's stated to use
client.get_list_database()
to get a list containing all databases, which returns in my case:
[{'name': 'db1'}, {'name': 'db2'}, {'name': 'sensordata'}]
My target now is to "go deeper" into the sensordata database and get a list of all the time series contained in that database, for example RP1-Temperature1, RP2-Brightness1, and so on. So, to make things clear, my magic query would take the database name and return a Python dictionary containing the names and values of the time series.
Thanks in advance!
The Python client allows you to send InfluxQL queries to the database.
The command
SHOW SERIES
will yield all series contained within a database.
What to do with the result is up to you, and I think you should be good on your own from here.
Actually, reading the InfluxDB Python client documentation would have answered most of your question.
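As a minimal sketch of the result handling: InfluxDB 1.x returns SHOW SERIES rows under a single "key" column, and with the influxdb Python client you would obtain the same rows via client.query("SHOW SERIES").get_points(). The helper below only assumes the documented JSON shape of the 1.x HTTP API response:

```python
def series_names(show_series_response: dict) -> list:
    """Extract series keys from an InfluxDB 1.x SHOW SERIES response."""
    names = []
    for result in show_series_response.get("results", []):
        for series in result.get("series", []):
            # each row of a SHOW SERIES result is a one-element list: [key]
            names.extend(row[0] for row in series.get("values", []))
    return names

# Example payload in the shape the InfluxDB 1.x HTTP API returns:
resp = {"results": [{"series": [{"columns": ["key"],
        "values": [["RP1-Temperature1"], ["RP2-Brightness1"]]}]}]}
print(series_names(resp))  # ['RP1-Temperature1', 'RP2-Brightness1']
```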

Apache Drill - Aggregation SUM query gives exponential result

When we do an aggregated SUM query in Drill against Mongo storage, the output is in exponential (scientific) notation.
Is there anywhere we can configure Drill so that we get output without exponential notation?
We don't want exponential results.
Thanks in advance.
Drill provides two tools that display query results (and so would format numbers): the Drill web UI and the Sqlline command line tool. Are you using one of these? These tools are often used for experimental queries. I'm not aware of any way to customize the display in that UI.
That said, if you use the ODBC or JDBC driver, then numbers are stored in binary format and so any formatting would be done by the tool using the drivers to run queries. The xDBC drivers are more for production use as they handle large result sets better than the UI tools.
Now, a third possibility is that something in the Mongo plugin converts numbers to strings (VARCHAR). In that case, you might be able to cast the string back to a number.
If you can provide a bit more detail on the tool you are using, perhaps we can provide a bit more focused answer.
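If the issue turns out to be display formatting (or a VARCHAR coming back from the Mongo plugin), one hedged option is to control the type and format in the query itself. The table and column names below are placeholders, and on older Drill versions DECIMAL support must be enabled via the planner.enable_decimal_data_type option:

```sql
-- Format the aggregate as a plain decimal string (Java DecimalFormat pattern)
SELECT TO_CHAR(SUM(amount), '#,##0.00') AS total
FROM mongo.mydb.sales;

-- Or cast to an explicit DECIMAL instead of DOUBLE
-- (cast the column first if the plugin returns it as VARCHAR)
SELECT CAST(SUM(CAST(amount AS DOUBLE)) AS DECIMAL(18,2)) AS total
FROM mongo.mydb.sales;
```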

Graphite / Graphiti-esque tool with millisecond precision, optional aggregation

I need a timeseries datastore and visualization platform where I can dump experiment data into hierarchical namespaces and then go back later for analysis. Saving graph templates, linking to graphs, and other features for going from analysis to presentation would be very useful. Initially I was really excited to read about Graphite and Graphiti, because they appear to fit the bill. However, the events I'm tracking are milliseconds apart, and I need to keep millisecond precision without aggregation or averaging. It looks like the only way to make Graphite play nice is to aggregate up from statsd to metrics per second, which will obscure the events I'm interested in. Optional aggregation would be fine in some cases, but not always.
Cube takes events with millisecond timestamps, but Cubism appears to be a rich library and not a full-fledged platform like Graphite. It also appears to be heavily real-time oriented. If I can't find a good stack to meet my needs I'll probably use Cube to store my data, but visualizing it with batch scripts that generate piles and piles of matplotlib graphs is not fun.
Am I misinformed, or is there another framework out there which will give me decent analysis/interactivity with an arbitrary time granularity?
Cubism.js is just a front-end for Graphite (and other back-ends, like Cube), so I think it would fit your needs.
You would need to set up a Graphite system to store your metrics (rather than Cube) with the appropriate level of detail (e.g. millisecond), and then use Cubism's Graphite context to display it with the same step value.
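One caveat to verify before committing to this: Graphite's precision is configured per metric pattern in storage-schemas.conf, and the Whisper storage format's finest native step is one second, so true millisecond resolution may require a different back-end (such as Cube). A sketch, with the pattern name as a placeholder:

```
# storage-schemas.conf - highest-precision archive first
[experiments]
pattern = ^experiments\.
retentions = 1s:24h, 1m:30d
```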