AWS CloudWatch Logs Insights - get an aggregate from another aggregate - aws-cloudwatch-log-insights

I'm using stats to get a per-second count. How can I aggregate these per-second results over a longer period, e.g. the max of that per-second value over a 1-hour period?
filter (@message like /Http Method:PUT/ or @message like /Http Method:POST/)
| stats count() as TPS by bin(1s)
This gives TPS at every second. I want to use the above TPS and get something like this:
filter (@message like /Http Method:PUT/ or @message like /Http Method:POST/)
| stats count() as TPS by bin(1s)
| stats max(TPS) as PeakTPS by bin(1h)
Any idea how can I do this?
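Whether Logs Insights accepts a second piped stats command depends on the query-language version available in your account; the underlying two-level aggregation itself is straightforward. As a language-agnostic illustration (not a Logs Insights feature claim), here is the same "count per second, then max per hour" computed in Python over synthetic epoch timestamps:

```python
from collections import Counter

def peak_tps(timestamps, window=3600):
    """Two-level aggregation: per-second counts (TPS by bin(1s)),
    then the max of those counts per window (PeakTPS by bin(1h))."""
    per_second = Counter(int(t) for t in timestamps)  # TPS by bin(1s)
    peaks = {}
    for sec, tps in per_second.items():
        bucket = sec - sec % window  # bin(1h) for the default window
        peaks[bucket] = max(peaks.get(bucket, 0), tps)
    return peaks

# three events in second 0, one in second 1, two in second 3600
print(peak_tps([0.1, 0.2, 0.9, 1.5, 3600.0, 3600.5]))  # → {0: 3, 3600: 2}
```

The first pass collapses raw events into one row per second; the second pass collapses those rows into one row per hour, which is exactly what the chained stats query asks for.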

Related

Getting the total number of records in a single knexjs query when using the limit() method

I use knexjs and PostgreSQL. Is it possible in knexjs to get the total number of records from the same query in which the limit is used?
For example:
knex.select().from('project').limit(50)
Is it possible to somehow get the total number of records in the same query if there are more than 50?
The question arises because my real query is much more complex, with many subqueries and conditions, and I would not like to run it twice: once to get the data and once more (using the .count() method) to get the total number of records.
I'm not familiar with your query builder (knexjs?), but you should be able to add the window version of the count() function to your select list. In plain SQL it looks something like this, where ... represents your current select list:
select ..., count(*) over() total_rows
from project
limit 5;
This works because the window count() function counts all rows the query selects, after filtering but before the LIMIT clause is applied. Note: this adds a column to the result set with the same value in every row.
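A minimal, runnable sketch of this pattern, using Python's sqlite3 as a stand-in for PostgreSQL (the table and column names are made up for the demo; SQLite 3.25+ is needed for window functions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE project (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO project (name) VALUES (?)",
                 [(f"p{i}",) for i in range(120)])

# count(*) OVER () reports the pre-LIMIT row count on every returned row
rows = conn.execute(
    "SELECT id, name, count(*) OVER () AS total_rows FROM project LIMIT 50"
).fetchall()
print(len(rows), rows[0][2])  # → 50 120
```

Only 50 rows come back, but each carries total_rows = 120, so one round trip yields both the page and the overall count. In knexjs you could add the same expression with a raw select fragment.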

kdb/q: use function in a select from partitioned table

I'm trying to get max drawdown from a partitioned table across multiple dates. The query works fine when run with a date constrained to a specific day. E.g.
select {max neg x-maxs x} pnl from trades where date=last date
Over multiple dates the query gets map-reduced, so the above no longer works. I can make the query run over multiple dates by adding another aggregation:
select max {max neg x-maxs x} pnl from trades
but that gives the maximum of the daily drawdowns, not the max drawdown over the continuous sequence of trades.
I wonder if there's a way to make it work with a single select without chaining selects like
select {max neg x-maxs x} pnl from select pnl from trades
I've got a rather big query pulling various metrics on the trades, of which max drawdown is just one. Using a chained select means breaking the big query into two queries, one map-reduced and one not, and then joining them back, which makes the query ugly.
Thanks!
A select query runs on each date in the partitioned database, applies the function to each date's values, and finally aggregates the results, depending on the call (user-defined functions behave differently from plain q functions).
So I don't think you can combine that into one query. But there are ways to make your query more general and reusable for different scenarios.
For example, convert your query to functional form and use variables for the column name and the user function. Wrap this in one function that accepts a column name and a user function; you can then call it with different (column; function) pairs. Something like:
runF:{[col;usrfunc] functional_query_uses_col_usrfunc }
All this depends on your use case. Also check memory usage, as you'll be taking a lot of data into memory.
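For reference, the q lambda {max neg x-maxs x} from the question is the classic running-peak drawdown. A direct Python translation (maxs becomes a running maximum via accumulate) makes the logic easy to check outside kdb:

```python
from itertools import accumulate

def max_drawdown(pnl):
    """Python equivalent of the q lambda {max neg x-maxs x}:
    the largest drop from a running peak of the pnl series."""
    running_max = list(accumulate(pnl, max))          # maxs x
    return max(m - x for x, m in zip(pnl, running_max))  # max neg x-maxs x

print(max_drawdown([1, 3, 2, 5, 1]))  # → 4 (peak of 5 down to 1)
```

Run over each date separately this gives the daily drawdowns; only when applied to the full continuous series does it give the answer the question is after, which is why the map-reduced per-date aggregation changes the result.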

SQL Server 2008 R2: How do I average the total scores summed?

How do I compute the average in this code?
Sum(qa.greeting1+qa.greeting2+qa.greeting3+qa.greeting5+qa.greeting6) as AverageTotalCareScore
Thank you!
It sounds like you already have a query for a result set (100 rows?), so I would suggest using a CTE to find the average of that result set.
WITH CTE AS(
<your query here>
)
SELECT
AVG(column_to_be_averaged)
FROM CTE
Unless I misunderstood something, this should work out.
To calculate the average you can simply do this:
select (qa.greeting1+qa.greeting2+qa.greeting3+qa.greeting5+qa.greeting6)/5
i.e., the sum of all the items divided by the number of items.
You can also use the AVG() function in SQL Server like this:
select AVG(column1)

See length (count) of query results in workbench

I just started using MySQL Workbench (6.1). The default limit for queries is 1,000 and that's fine; I want to keep it.
But the action output message will therefore always say "1000 rows returned".
Is there a setting to see the number of records that would have been returned had there been no limit, for sanity-checking query results?
I know this is late by a few years, but I think you're asking for a way to see the total row count at the bottom of the results pane, like in SQL Server, where the messages pane tells you how many rows were returned. I was looking for exactly the same thing, and it seems there is no way to get it. If your table has a numeric ID in insertion order, you can order by ID desc and look at the biggest number there; that is what I've decided to do.
The result is not always "1000 rows returned"; if there are fewer records than that you will get the actual count. If you want to know the total number of rows in a table, do a select count(*) from table. Alternatively, you can switch off the automatic limit and have all records returned by MySQL Workbench, but that can be time- and memory-consuming for large tables.
I think removing the row limit will help. By default, MySQL Workbench limits the result set to 1,000 rows, but you can always disable the limit. See https://superuser.com/questions/240291/how-to-remove-1000-row-limit-in-mysql-workbench-queries for how to do that.
You can run a second query to check:
select count(*) from (your original query) as t;
This returns the total number of rows in the actual result.
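A quick demonstration of the wrap-in-count pattern, using Python's sqlite3 as a stand-in for MySQL (table and filter are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(2500)])

# the "original query" that Workbench would cap at 1,000 rows
original = "SELECT x FROM t WHERE x % 2 = 0"

# wrapping it in count(*) reports the true size without the limit
total, = conn.execute(f"SELECT count(*) FROM ({original}) AS sub").fetchone()
print(total)  # → 1250
```

Because the count runs over the unlimited inner query, it is unaffected by whatever row cap the client applies to the displayed result set.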
You can use the SQL count function. It returns the count of the total number of rows a query returns.
A sample query:
select count(*) from tableName where field1 = value1
In Workbench, set the dropdown menu at the top to "Don't Limit", then run the query. The total count of the query results will then be displayed in the message column of the output pane below.

How can I get the last 10 records

I have a db with about 60,000 records in it, and I need to obtain the last 10 records entered. Can this be done in Postgres? I was thinking of setting the offset to 50,990 and the limit to 10, or something similar, but I'm not sure that would be efficient.
Something like the following perhaps:
SELECT * FROM your_table
ORDER BY your_timestamp DESC
LIMIT 10
If you want the result sorted ascending by the timestamp, you can wrap this in another query and sort again. You might want to check the execution plan, but this shouldn't be too inefficient.
ORDER BY id DESC LIMIT 10
If id is indexed, this will be very efficient. Could naturally also be a timestamp.
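Both answers, including the wrap-and-re-sort step, can be sketched end to end with Python's sqlite3 standing in for Postgres (table and ids invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO records VALUES (?)", [(i,) for i in range(1, 61)])

# inner query grabs the last 10 by id; outer query restores ascending order
rows = conn.execute("""
    SELECT id FROM (
        SELECT id FROM records ORDER BY id DESC LIMIT 10
    ) AS last_ten
    ORDER BY id
""").fetchall()
print([r[0] for r in rows])  # → [51, 52, 53, 54, 55, 56, 57, 58, 59, 60]
```

With an index on id (here the primary key), the inner query walks the index backwards and stops after 10 rows, so it stays fast regardless of table size, unlike the OFFSET idea, which must skip past every preceding row.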