Difference in WHERE clause in Grafana query (to remove outliers)

I'm looking at cumulative energy data, so the graphs look like this:
That isn't helpful at all, since I want to look at changes and patterns. So I'm querying the difference using the non_negative_difference() function in the Grafana query editor.
However, the crazy outliers just overshadow everything and make it impossible to see anything meaningful. Like this:
My idea was to filter out the outliers directly using a WHERE clause. However, I can't seem to reference the non_negative_difference() result in the WHERE clause in Grafana. Any pointers or ideas on how to do that correctly?
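For what it's worth, a common workaround, sketched here under assumptions (an InfluxDB data source; the measurement name "energy", field "value", and the threshold 1000 are placeholders): InfluxQL won't accept a function result in a WHERE clause, but wrapping the difference in a subquery exposes it as an ordinary field that the outer query can filter on.

    -- Inner query computes the per-interval difference; the outer
    -- query filters the outliers on the derived field.
    SELECT "diff" FROM (
        SELECT non_negative_difference(mean("value")) AS "diff"
        FROM "energy"
        WHERE $timeFilter
        GROUP BY time($__interval)
    ) WHERE "diff" < 1000

$timeFilter and $__interval are the template variables Grafana injects into InfluxDB queries.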

Related

QlikSense: How to group data by row and avoid duplicates

I'm accustomed to building out reporting in SSRS and PowerBI, but am now using QlikSense, so if what I'm trying to do flies in the face of what QlikSense is meant to do or is able to do, my apologies. But what I'd like to see is a hierarchical flow that doesn't produce duplicate values in every row. Perhaps this will help:
Is this something that can be done?

Access: filter query results from a user-friendly form before loading

I am working on a database in Access to record process conditions and laboratory analysis on different types of samples.
The main query of interest contains, for each sample, all the results of the analysis.
The built-in filtering functions Access offers by clicking the little arrow on the table columns work great and are very intuitive for users. However, the database is growing pretty fast and it is starting to take quite a long time to load the results of the query. Therefore I would like to filter the data BEFORE loading all the results.
Is there a way to do this that is as user-friendly and intuitive as the post-load filter option?
I have tried to create a form to do the filtering, but I can't figure out how to allow the user to choose multiple values from a combo box. All the specific options like "begins with", "contains", etc., or "bigger than" and "between" are useful, but not strictly necessary.
Thanks for any help.
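A note on the combo box problem, as a sketch rather than a confirmed answer (all names below are placeholders): a plain Access combo box only returns one value, but a list box with its Multi Select property enabled allows several selections. VBA can then assemble the choices into an IN () condition and open the results form pre-filtered, so only matching rows are loaded:

    ' A minimal sketch: assumes a multi-select list box lstSampleType and a
    ' results form frmResults bound to the main query; names are hypothetical.
    Private Sub cmdApplyFilter_Click()
        Dim varItem As Variant
        Dim criteria As String
        ' Collect every selected value into a comma-separated list
        For Each varItem In Me.lstSampleType.ItemsSelected
            criteria = criteria & "'" & Me.lstSampleType.ItemData(varItem) & "',"
        Next varItem
        If Len(criteria) > 0 Then
            criteria = Left(criteria, Len(criteria) - 1) ' drop trailing comma
            ' The WhereCondition argument filters BEFORE the form loads
            DoCmd.OpenForm "frmResults", , , "SampleType IN (" & criteria & ")"
        Else
            DoCmd.OpenForm "frmResults"
        End If
    End Sub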

Can a Sphinx query be tested from the command line without an index?

If I just want to test that my query will work against a term inside a sentence, e.g. "and am selling an Iphone 3gs", is it possible to use the command line to test this? This way I don't need to keep adding to and rotating an index, but can simply tweak my query and the data I plan on storing. Mainly I am trying to tweak various query operators like SENTENCE and PROXIMITY against wordforms/stopwords/ignore_char, and would like to be able to work fast and test different query structures against test words/patterns.
In theory you can use BuildExcerpts to run an arbitrary query against a block of text (make sure to use query_mode=true).
http://sphinxsearch.com/docs/current.html#api-func-buildexcerpts
But even then I'm not totally sure it will completely honour the query; not sure SENTENCE etc. will truly work.
... but if you want to play with ignore_char etc., you are going to be modifying the index config file anyway. So surely just quickly running indexer to rebuild the index and see the results is not that difficult.
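For reference, a minimal sketch of the BuildExcerpts route using the bundled Python API (the index name "myindex" and the query are placeholders; searchd must be running, and the index must exist because BuildExcerpts borrows its tokenization settings such as charset_table and wordforms, even though the text itself is passed in directly):

    # Test an arbitrary query against an arbitrary block of text.
    import sphinxapi

    cl = sphinxapi.SphinxClient()
    cl.SetServer('localhost', 9312)

    docs = ['and am selling an Iphone 3gs']
    query = '"selling iphone"~3'  # tweak operators here between runs

    # query_mode=True makes BuildExcerpts honour the query syntax
    # instead of treating the words as a plain bag of keywords.
    res = cl.BuildExcerpts(docs, 'myindex', query, {'query_mode': True})
    if not res:
        print('error:', cl.GetLastError())
    else:
        for snippet in res:
            print(snippet)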

Implement interval analysis on top of PostgreSQL

I have a couple of million entries in a table with start and end timestamps. I want to implement an analysis tool which determines unique entries for a specific interval, let's say between yesterday and two months before yesterday.
Depending on the interval, the queries take between a couple of seconds and 30 minutes. How would I implement an analysis tool for a web front-end that allows querying this data fairly quickly, similar to Google Analytics?
I was thinking of moving the data into Redis and doing something clever with intervals and sorted sets, but I was wondering if there's something in PostgreSQL that would allow executing aggregated queries and reusing old results, so that, for instance, after querying the first couple of days it does not start from scratch again when looking at a different interval.
If not, what should I do? Export the data to something like Apache Spark or DynamoDB, do the analysis there, and fill Redis for quicker retrieval?
Either will do.
Aggregation is a basic task they all can do, and your data is small enough to fit into main memory, so you don't even need a database (though the aggregation functions of a database may still be better implemented than anything you'd rewrite, and SQL is quite convenient to use).
Just do it. Give it a try.
P.S.: Make sure to enable data indexing and choose the right data types. Maybe check the query plans, too.
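To make the indexing point concrete, a sketch under assumptions (a table entries(id, start_ts, end_ts) with timestamptz columns; all names and the start date are placeholders): a GiST index over the timestamp range lets PostgreSQL answer "active during this interval" queries without scanning the whole table, and a materialized view can cache daily rollups so repeated queries don't start from scratch.

    -- Index the interval itself so overlap queries can use it
    CREATE INDEX entries_period_gist
        ON entries USING gist (tstzrange(start_ts, end_ts));

    -- Entries active at any point between two months ago and yesterday
    SELECT count(*)
    FROM entries
    WHERE tstzrange(start_ts, end_ts)
       && tstzrange(now() - interval '2 months', now() - interval '1 day');

    -- Optional: precompute per-day counts (refresh e.g. nightly) so the
    -- front-end sums small buckets instead of rescanning raw rows
    CREATE MATERIALIZED VIEW daily_active AS
    SELECT d::date AS day, count(*) AS active
    FROM generate_series(timestamptz '2015-01-01', now(), interval '1 day') AS d
    JOIN entries
      ON tstzrange(start_ts, end_ts) && tstzrange(d, d + interval '1 day')
    GROUP BY d;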

Can we use loop functions in Tableau?

Can we use loop functions (for, while, do-while) in Tableau calculated fields? If we can, how can we use these functions in calculated fields, and how can we initialise the variables declared in these functions?
No, we can't. There are some hacks to do calculations like that, using PREVIOUS_VALUE and other table calculations, but there are no loop functions in Tableau.
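For example, a minimal sketch of that hack ([Value] is a placeholder field): a table calculation that carries state from row to row the way a loop variable would, computed along the dimension you would otherwise iterate over.

    // Running total: each row adds its aggregated value to the result
    // carried over from the previous row (equivalent to RUNNING_SUM)
    PREVIOUS_VALUE(0) + SUM([Value])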
Why? Because Tableau isn't meant to be a data processing tool, but rather a data visualization tool. Don't get me wrong, Tableau's engine is very good at processing data, but only for "query-like" operations.
So why don't you post exactly what you are trying to achieve, and we can think about whether it can be accomplished with Tableau or whether you need some pre-processing of your data.