InfluxDB Flux query how to get distinct values from query - grafana

I have written the following Flux query in Grafana and I get two results for value. I would like to filter those down to distinct values by the scenario key.
I would expect to end up with "main_flow" and "persons_end_user". How can I achieve this? I have tried distinct() and unique(), but neither seems to work.

Related

multiple aggregations on same column using agg in pyspark

I am not able to get multiple metrics using agg as below.
table.select("date_time")\
.withColumn("date",to_timestamp("date_time"))\
.agg({'date_time':'max', 'date_time':'min'}).show()
I see that the second aggregation overwrites the first.
Can someone help me get multiple aggregations on the same column?
I can't replicate your setup to verify, but instead of using a dict for your aggregations, try passing the aggregation functions directly (using the functions module avoids shadowing Python's built-in min and max):
import pyspark.sql.functions as F

table.select("date_time")\
    .withColumn("date", F.to_timestamp("date_time"))\
    .agg(F.min("date_time"), F.max("date_time")).show()
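The root cause is visible in plain Python, independent of Spark: a dict literal cannot hold two entries under the same key, so the second 'date_time' aggregation silently replaces the first before agg ever sees it. A minimal demonstration:

```python
# A Python dict keeps only the last value for a duplicated key, which is
# why {'date_time': 'max', 'date_time': 'min'} ends up requesting only
# the min aggregation.
spec = {'date_time': 'max', 'date_time': 'min'}
print(spec)  # → {'date_time': 'min'}
```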

What does the distinct on clause mean in Cloud Datastore and how does it affect reads?

This is what the cloud datastore doc says but I'm having a hard time understanding what exactly this means:
A projection query that does not use the distinct on clause is a small operation and counts as only a single entity read for the query itself.
Grouping
Projection queries can use the distinct on clause to ensure that only the first result for each distinct combination of values for the specified properties will be returned. This will return only the first result for entities which have the same values for the properties that are being projected.
Let's say I have a table for questions and I only want to get the question text sorted by the created date. Would this count as a single read with the rest as small operations?
If your goal is just to project the date and text fields, you can create a composite index on those two fields. When you query, this is a small operation, with all the results counting as a single read. You are not trying to de-duplicate (so no distinct on) in this case, so it remains a small operation with a single read.
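To make the grouping behavior quoted from the docs concrete, here is a plain-Python sketch of what distinct on does to a projection result set: it keeps only the first row for each distinct combination of the named properties. The rows below are invented sample data for illustration:

```python
# Simulate "distinct on (text)" over projected (text, created) rows:
# keep only the first row for each distinct value of the 'text' property.
rows = [
    ("What is GQL?", "2023-01-01"),
    ("What is GQL?", "2023-02-01"),   # same text as the first row, dropped
    ("How do indexes work?", "2023-01-15"),
]

seen = set()
distinct_on_text = []
for text, created in rows:
    if text not in seen:
        seen.add(text)
        distinct_on_text.append((text, created))

print(distinct_on_text)
```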

How to Place a variable in pyspark groupby agg query

Hi, I have a query in which I want to place variable data into the groupBy aggregation.
I tried this, but it is not working:
dd2=(dd1.groupBy("hours").agg({'%s':'%s'})%(columnname1,input1))
Here columnname1 contains 'total' and input1 contains the kind of aggregation required, such as mean or stddev.
I want this query to be dynamic.
Try this,
dd2=(dd1.groupBy("hours").agg({'{}'.format(columnname1):'{}'.format(input1)}))
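The .format calls work, but they are unnecessary: since the variables are already strings, they can be used directly as the dict key and value. A small sketch of the dict being built (the column and aggregation names are the ones from the question):

```python
columnname1 = "total"
input1 = "mean"

# agg() accepts a {column: aggregation} dict; string variables can be
# used directly as the key and value, with no formatting needed.
agg_spec = {columnname1: input1}
print(agg_spec)  # → {'total': 'mean'}
# dd1.groupBy("hours").agg(agg_spec) would then apply mean to 'total'.
```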

Get total number of matches along with result in algolia

I am looking for something like FOUND_ROWS() in a MySQL SELECT query in Algolia results, as I need to keep track of how many total results to expect. Is there some way to get this in Algolia?
The proper way to obtain the number of results is to access the nbHits value which is available in the JSON response of every search call.
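For example, reading nbHits out of a search response body. The response shown here is a trimmed, invented sample following the general shape of Algolia's documented search response:

```python
import json

# Trimmed sample of an Algolia search response body (invented data).
response_body = json.loads("""
{
  "hits": [{"objectID": "1"}, {"objectID": "2"}],
  "nbHits": 42,
  "page": 0,
  "hitsPerPage": 20
}
""")

# nbHits is the total number of matching records, not just the page size.
total_matches = response_body["nbHits"]
print(total_matches)  # → 42
```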

Use distinct and skip in a query

I tried running this:
db.col.find().skip(5).distinct("field1")
But it throws an error.
How can I use them together?
I can use aggregation, but the results are different:
db.col.aggregate([{$group:{_id:'$field1'}}, {$skip:3},{$sort:{"field1":1}}])
What I want is the links in sorted order, i.e. numbers should come first, then capital letters, then lowercase letters.
The distinct method must be run on a collection, not on a cursor, and it returns an array. See
http://docs.mongodb.org/manual/reference/method/db.collection.distinct/
So you can't use skip after distinct.
Maybe you should use this query instead:
db.col.aggregate([{$group:{_id:'$field1'}}, {$skip:3},{$sort:{"_id":1}}]) because the field field1 will no longer exist in the results after the $group stage.
Also, I think you should sort first and then skip, because in your query you skip 3 unsorted results and only then sort the remainder.
(If you provide more information about the structure of your documents and the output you want, I will correct the answer accordingly.)
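The ordering you describe (numbers first, then capital letters, then lowercase) is just ASCII/lexicographic order, which an ascending sort on string values gives you. A plain-Python sketch of the group → sort → skip pipeline semantics, with invented sample values:

```python
values = ["b", "A", "9", "a", "B", "1", "9"]  # field1 values, with a duplicate

# $group on field1 de-duplicates; an ascending sort orders strings
# lexicographically (digits < capitals < lowercase in ASCII); $skip:3
# then drops the first three results.
grouped = set(values)
sorted_ids = sorted(grouped)   # ['1', '9', 'A', 'B', 'a', 'b']
after_skip = sorted_ids[3:]    # ['B', 'a', 'b']
print(after_skip)
```

Note that skipping before sorting, as in the original pipeline, would drop three results in unspecified order, which is why the sort should come first.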