Grafana transformations: calculate percentage in the table panel - grafana

I am using Grafana v7.3.6 on Ubuntu, with OpenTSDB v2.4 as my datasource. I basically have a time series with different versions as tags, and I want to create a table with each version and its percentage of the total value.
Example:
time, metric, value
... result{version = 1}, 10
... result{version = 2}, 5
... result{version = 1}, 5
... result{version = 3}, 2
... result{version = 1}, 2
... result{version = 3}, 5
... result{version = 2}, 5
... result{version = 1}, 3
... result{version = 2}, 0
... result{version = 3}, 3
Using the "Series to rows" transformation, I was able to get the following:
metric, value
result{version = 1}, 20
result{version = 2}, 10
result{version = 3}, 10
What I would like is the following:
metric, value
result{version = 1}, 50%
result{version = 2}, 25%
result{version = 3}, 25%
How can I achieve this?
Any pointers/suggestions would be really appreciated. Thank you.

You need to replace the query you're using with the following:
your-query/scalar(sum(your-query))
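As a sanity check, the percent-of-total arithmetic that query expresses can be sketched in Python (the aggregated per-version values below are the ones from the question; the label strings are just illustrative):

```python
# Percent of total per version, mirroring the desired table in the question.
totals = {"version=1": 20, "version=2": 10, "version=3": 10}

grand_total = sum(totals.values())  # 40
percentages = {k: 100 * v / grand_total for k, v in totals.items()}

for metric, pct in percentages.items():
    print(f"{metric}: {pct:.0f}%")  # version=1: 50%, version=2: 25%, version=3: 25%
```

Dividing each series by the scalar sum of all series is exactly this computation, done on the datasource side so the table panel only has to render the result.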

query returning empty dataframe when called through a module [closed]

I have the following setup.
A module "ExampleModule", installed via an egg file, which contains the following code:
def fetch_data(sql, sql_query_text):
    data = sql(sql_query_text).toPandas()
    print(data)  # this gives me an empty DataFrame with 0 rows and 28 columns
In my Jupyter notebook, which has a PySpark kernel running, I have the following code:
from pyspark.sql import SQLContext
sqlContext = SQLContext(spark)
sql = sqlContext.sql
from ExampleModule import *
sql_text = "<THE SELECT QUERY>"
fetch_data(sql, sql_text)
This gives me an empty DataFrame. However, if I define a local function "fetch_data_local", it runs fine and gives me 43k rows as expected. Example local function:
def fetch_data_local(sql, sql_text):
    data = sql(sql_text).toPandas()
    print(data.size)

fetch_data_local(sql, sql_text)
The above function works fine and gives me 43k rows.
I tried it using the Databricks Community Edition, and it works for me:
spark.sparkContext.addPyFile("dbfs:/FileStore/shared_uploads/********@gmail.com/CustomModule.py")
from CustomModule import *
df = [{"Category": 'A', "date": '01/01/2022', "Indictor": 1},
{"Category": 'A', "date": '02/01/2022', "Indictor": 0},
{"Category": 'A', "date": '03/01/2022', "Indictor": 1},
{"Category": 'A', "date": '04/01/2022', "Indictor": 1},
{"Category": 'A', "date": '05/01/2022', "Indictor": 1},
{"Category": 'B', "date": '01/01/2022', "Indictor": 0},
{"Category": 'B', "date": '02/01/2022', "Indictor": 1},
{"Category": 'B', "date": '03/01/2022', "Indictor": 1},
{"Category": 'B', "date": '04/01/2022', "Indictor": 0},
{"Category": 'B', "date": '05/01/2022', "Indictor": 0},
{"Category": 'B', "date": '06/01/2022', "Indictor": 1}]
df = spark.createDataFrame(df)
df.write.mode("overwrite").saveAsTable("sample")
from pyspark.sql import SQLContext
sqlContext = SQLContext(spark)
sql = sqlContext.sql
sql_text = "select * from sample"
fetch_data(sql, sql_text)
Output
df:pyspark.sql.dataframe.DataFrame = [Category: string, Indictor: long ... 1 more field]
/databricks/spark/python/pyspark/sql/context.py:117: FutureWarning: Deprecated in 3.0.0. Use SparkSession.builder.getOrCreate() instead.
warnings.warn(
Category Indictor date
0 A 1 03/01/2022
1 A 1 04/01/2022
2 B 1 02/01/2022
3 B 1 03/01/2022
4 B 0 05/01/2022
5 B 1 06/01/2022
6 A 1 01/01/2022
7 A 0 02/01/2022
8 A 1 05/01/2022
9 B 0 01/01/2022
10 B 0 04/01/2022
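One way to narrow this down: argument passing itself is not the culprit. A Spark-free sketch (fake `sql` callable and a synthetic module, hypothetical names throughout) shows that a function imported from a module receives exactly the same callable as a locally defined one, which points the investigation at whether the installed module ends up bound to a different or stale session rather than at how the function is called:

```python
import types

# Stand-in for sqlContext.sql -- returns a plain dict instead of a DataFrame.
def sql(query):
    return {"rows": [1, 2, 3], "query": query}

# Build a throwaway module the same way an egg-installed one would behave.
module = types.ModuleType("ExampleModule")
exec("def fetch_data(sql, q):\n    return sql(q)", module.__dict__)

# Local equivalent, mirroring fetch_data_local in the question.
def fetch_data_local(sql, q):
    return sql(q)

# Both paths receive the identical callable and produce identical results.
assert module.fetch_data(sql, "select 1") == fetch_data_local(sql, "select 1")
print(module.fetch_data(sql, "select 1")["rows"])  # [1, 2, 3]
```

Since Python passes the same object reference either way, an empty result only from the egg-installed module usually means the module is resolving tables against a different Spark context or catalog than the notebook's session.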

Is there a way to check the previous value in an iteration? - dart

In the code below, the variable modelMap returns a map and I am iterating through it:
var modelMap = parsedJson.values;
modelMap.forEach((element) {
  modelName = element["ModelName"];
  modelList.add(modelName);
});
Here I have to check whether the previous iteration's element has a particular value and do my operations based on that.
For example, if the previous iteration's brandName equals my current iteration's brandName, then I will add the current model to the list; if they are not equal, I have to skip it. Is it possible to do that?
You can just use a variable to save the previous element and compare against that. Here is an example where you don't take an element if it has the same brandName as the previous one:
List result = [];
final list = [
  {'brandName': 1},
  {'brandName': 2},
  {'brandName': 2},
  {'brandName': 3},
  {'brandName': 3},
  {'brandName': 3},
  {'brandName': 5},
  {'brandName': 3},
  {'brandName': 1},
  {'brandName': 1},
  {'brandName': 6}
];
dynamic previous;
list.forEach((element) {
  if (previous?['brandName'] != element['brandName']) {
    result.add(element);
  }
  previous = element;
});
print(result);
//[{brandName: 1}, {brandName: 2}, {brandName: 3}, {brandName: 5}, {brandName: 3}, {brandName: 1}, {brandName: 6}]
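For comparison, the same remember-the-previous pattern can be sketched in Python, using the bare brand values from the Dart example:

```python
# Drop consecutive duplicates by remembering the previous element,
# mirroring the Dart forEach above.
items = [1, 2, 2, 3, 3, 3, 5, 3, 1, 1, 6]

result = []
previous = None  # nothing seen yet, so the first element is always kept
for element in items:
    if element != previous:  # only keep it when the value changes
        result.append(element)
    previous = element

print(result)  # [1, 2, 3, 5, 3, 1, 6]
```

The key point in both versions is that `previous` is updated on every iteration, whether or not the element was kept, so the comparison is always against the immediately preceding element.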

Esqueleto `selectDistinct` not working

selectDistinct does not seem to be working for me; it's probably a simple error.
The query:
info <- runDB $
  E.selectDistinct $
  E.from $ \(tp `E.InnerJoin` rnd `E.InnerJoin` h) -> do
    E.on (rnd E.^. RoundId E.==. h E.^. HoleRound)
    E.on (tp E.^. TpartTournament E.==. rnd E.^. RoundTourn)
    E.where_ (tp E.^. TpartTournament E.==. E.val tId)
    E.orderBy [E.asc (tp E.^. TpartId)]
    return (tp, rnd, h)
I'm quite sure that this represents the following SQL query, which works:
SELECT DISTINCT tpart.id, round.name, hole.hole_num, hole.score
from tpart
inner join round on round.tourn = tpart.tournament
inner join hole on hole.round = round.id
where tpart.tournament = 1;
To view the results, I have a test handler that just prints the result table. Notice that for tpart 1, round 1, there are multiple hole 1 and hole 2 rows. In PostgreSQL, SELECT DISTINCT removes these duplicates.
TpartId, RoundName, holeNum, HoleScore
Key {unKey = PersistInt64 1}, round 1, 1, 6
Key {unKey = PersistInt64 1}, round 1, 2, 4
Key {unKey = PersistInt64 1}, round 1, 1, 6
Key {unKey = PersistInt64 1}, round 1, 2, 4
Key {unKey = PersistInt64 1}, round 1, 1, 6
Key {unKey = PersistInt64 1}, round 1, 2, 4
Key {unKey = PersistInt64 1}, round 2, 1, 3
Key {unKey = PersistInt64 1}, round 2, 2, 5
Key {unKey = PersistInt64 1}, round 2, 1, 3
Key {unKey = PersistInt64 1}, round 2, 2, 5
Key {unKey = PersistInt64 1}, round 2, 1, 3
Key {unKey = PersistInt64 1}, round 2, 2, 5
Key {unKey = PersistInt64 3}, round 1, 1, 6
Key {unKey = PersistInt64 3}, round 1, 2, 4
Key {unKey = PersistInt64 3}, round 1, 1, 6
Key {unKey = PersistInt64 3}, round 1, 2, 4
Key {unKey = PersistInt64 3}, round 1, 1, 6
Key {unKey = PersistInt64 3}, round 1, 2, 4
Key {unKey = PersistInt64 3}, round 2, 1, 3
Key {unKey = PersistInt64 3}, round 2, 2, 5
Key {unKey = PersistInt64 3}, round 2, 1, 3
Key {unKey = PersistInt64 3}, round 2, 2, 5
Key {unKey = PersistInt64 3}, round 2, 1, 3
Key {unKey = PersistInt64 3}, round 2, 2, 5
Sorry for the illegibility. Any help would be appreciated!
The error was that, for a given Hole, the hole's round AND the hole's participant must both equal their respective entities. Also, the inner join was redundant in that situation.

Increment matrix structure in MongoDb

I would like to have a matrix structure (an NxN integer matrix), and I want to increment values in it. What is the right approach to model a matrix in MongoDB and to increment its values?
Let's say we have:
1 2 3
4 5 6
7 8 9
You can store the matrix as an embedded array in MongoDB in different ways:
1. Represent the matrix as a one-dimensional array and store it like this:
{
  _id: "1",
  matrix: [1,2,3,4,5,6,7,8,9],
  width: 3, // or store just one size in the NxN case
  height: 3
}
Then, to increment the third element of the matrix, you need the following update:
db.matrix.update({_id: 1}, { $inc : { "matrix.2" : 1 } })
This approach is very lightweight, because you store as little data as possible, but you will always need to calculate the position of the element to update, and you will need to write additional code to deserialize the matrix in your driver.
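The position calculation this first approach requires can be sketched as follows (assuming row-major order, i.e. the matrix is flattened row by row as in the example document):

```python
# Row-major index into the flat array: element (x, y) of a matrix of the
# given width lives at y * width + x.
def flat_index(x, y, width):
    return y * width + x

# For the 3x3 example above: the third element of the first row is "matrix.2"...
print(f"matrix.{flat_index(2, 0, 3)}")  # matrix.2
# ...and the middle element (x=1, y=1) is "matrix.4".
print(f"matrix.{flat_index(1, 1, 3)}")  # matrix.4
```

The resulting index is what you splice into the `$inc` field path (`"matrix.<index>"`), so this helper is the "additional code" the driver side needs.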
2. Store the matrix in the following way:
{
  _id: "1",
  matrix: [
    {xy: "0-0", v: 1},
    {xy: "1-0", v: 2},
    {xy: "2-0", v: 3},
    {xy: "0-1", v: 4},
    ...
  ]
}
Then, to increment the third element of the first row of the matrix, you need the following update:
db.matrix.update({_id: 1, "matrix.xy": "2-0" }, { $inc : { "matrix.$.v" : 1 } })
This approach is simpler on the driver side, but you need to store more information in the database.
Choose whichever you like more.
You could use the matrix indexes as field names:
{
  _id: "1",
  matrix: {
    "0": {"0": 0, "1": 0, "2": 0},
    "1": {"0": 0, "1": 0, "2": 0},
    "2": {"0": 0, "1": 0, "2": 0},
    "3": {"0": 0, "1": 0, "2": 0},
    ...
  }
}
One advantage of this approach is that you don't have to initialize the matrix, as $inc will create the fields and assign them the value you want to increment by.
You will also be able to update multiple fields at a time, or create the document if it doesn't exist by using upsert=true.
Lastly, the update notation is quite clean and easy:
db.matrix.update({_id: 1}, { $inc : { "matrix.0.0" : 1, "matrix.0.1" : 2, ... } })

MapThread, Manipulate, Filter in Mathematica

I hope to be able to name that question properly soon.
Please consider:
list1 = Tuples[Range[1, 5], 2];
list2 = Tuples[Range[3, 7], 2];
I use the mechanism below to display all filtered eye fixations during a display.
Manipulate[
 Row[
  MapThread[
   Function[{list},
    Graphics[
     Point[{#[[1]], #[[2]]}] & /@ Select[list, (#[[1]] > f1 && #[[2]] > f2) &],
     Frame -> True, PlotRange -> {{0, 10}, {0, 10}}]],
   {{list1, list2}}]],
 {f1, 0, 10}, {f2, 0, 10}]
Now, I would like to display each fixation (point) one at a time, cumulatively.
That is :
Given
list1 = {{1, 1}, {1, 2}, {1, 3}, {1, 4}, {1, 5}, {2, 1}, {2, 2}, {2, 3}, {2, 4},
{2, 5}, {3, 1}, {3, 2}, {3, 3}, {3, 4}, {3, 5}, {4, 1}, {4, 2}, {4, 3},
{4, 4}, {4, 5}, {5, 1}, {5, 2}, {5, 3}, {5, 4}, {5, 5}}
Use a slider to display points 1 to 25 here; after filtering, it should instead run from 1 to Length@filteredData.
The slider that controls the fixation number still has a fixed boundary (25), whereas its boundary should equal the length of the filtered list.
But there are two filtered lists due to MapThread,
and I cannot extend the MapThread to the Manipulate control, can I?
Manipulate[
 Row[
  MapThread[
   Function[{list},
    Graphics[
     Point[{#[[1]], #[[2]]}] & /@
       Select[list, (#[[1]] > f1 && #[[2]] > f2) &][[1 ;; dd, All]],
     Frame -> True, PlotRange -> {{0, 10}, {0, 10}}]],
   {{list1, list2}}]],
 {f1, 0, 10}, {f2, 0, 10}, {dd, 0, 25}]
Perhaps something like:
(Beware of code efficiency)
list1 = Tuples[Range[1, 5], 2];
list2 = Tuples[Range[3, 7], 2];
f = (Select[#, (#[[1]] > f1 && #[[2]] > f2) &] &);
Manipulate[
 Row@Graphics[Point@#, Frame -> True, PlotRange -> {{0, 10}, {0, 10}}] & /@
   Map[Map[f, {#}][[All, 1 ;; Min[dd, Length @@ Map[f, {#}]], All]] &,
    {list1, list2}],
 {f1, 0, 10}, {f2, 0, 10}, {dd, 0, 25, 1}]
Try it with {dd, 0, 25, 1}. This both allows it to parse correctly (closing brace) and keeps it real, so to speak, by preventing dd from being real valued.
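For reference, the filter-then-clamp logic this answer implements (take the first dd points of the filtered list, with dd clamped via Min to the filtered length) can be sketched in Python; `points` here is the 5x5 tuple grid that list1 expands to:

```python
# list1 = Tuples[Range[1, 5], 2] as a Python list of (x, y) pairs.
points = [(x, y) for x in range(1, 6) for y in range(1, 6)]

def visible(points, f1, f2, dd):
    # Filter first, exactly like Select[..., #1 > f1 && #2 > f2 &] ...
    kept = [(x, y) for (x, y) in points if x > f1 and y > f2]
    # ... then clamp the slider value to the filtered length, like
    # Min[dd, Length@kept], so a fixed slider bound of 25 stays safe.
    return kept[:min(dd, len(kept))]

print(len(visible(points, 2, 2, 25)))  # 9: all points with x > 2 and y > 2
print(len(visible(points, 2, 2, 4)))   # 4: slider shows only the first four
```

Clamping inside the body is what lets the Manipulate keep a fixed slider range even though the filtered list's length changes with f1 and f2.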