How do I calculate the average in this code?
Sum(qa.greeting1+qa.greeting2+qa.greeting3+qa.greeting5+qa.greeting6) as AverageTotalCareScore
Thank you!
It sounds like you already have a query that produces a result set (100 rows?), so I would suggest using something like a CTE to find the average over that result set.
WITH CTE AS (
    <your query here>
)
SELECT
    AVG(column_to_be_averaged)
FROM CTE
Unless I misunderstood something, this should work out.
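For example, a minimal sketch assuming your existing query produces the per-row total from the expression in the question (the qa table name is only assumed from the alias):
WITH CTE AS (
    -- <your query here>; for illustration, something along these lines:
    SELECT (qa.greeting1 + qa.greeting2 + qa.greeting3 + qa.greeting5 + qa.greeting6) AS TotalCareScore
    FROM qa
)
SELECT AVG(TotalCareScore) AS AverageTotalCareScore
FROM CTE;
This averages the per-row totals over every row in the result set.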
To calculate the average you can simply do it like this:
select (qa.greeting1+qa.greeting2+qa.greeting3+qa.greeting5+qa.greeting6)/5
i.e., the sum of all the items divided by the number of items.
You can also use the AVG() function in SQL Server like this:
select AVG(column1)
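One caveat, assuming the greeting columns are integer-typed: dividing by 5 performs integer division in SQL Server and truncates the result, so divide by 5.0 (or cast) to keep the decimals. A small sketch, with the qa table name assumed from the alias in the question:
select (qa.greeting1 + qa.greeting2 + qa.greeting3 + qa.greeting5 + qa.greeting6) / 5.0 as AverageTotalCareScore
from qa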
Maybe I'm overlooking something, but none of the answers I found solve my problem. I'm trying to get the sum and average from a column, but everything I see is getting sum and average from a row.
Here is the query I'm using:
SELECT product_name,unit_cost,units,total,SUM(total),AVG(total)
FROM products
GROUP BY product_name,unit_cost,total
And this is what I get: SUM(total) and AVG(total) come back with the exact same amounts as total itself. What I need is to add all values in the unit_cost column and return the SUM and AVG of all its values. What am I missing? What did I not understand? Thank you for taking the time to answer!
Using AVG and SUM as window functions, with no grouping, will do the job.
select product_name, unit_cost, units, total,
       SUM(total) over all_rows as sum_of_all_rows,
       AVG(total) over all_rows as avg_of_all_rows
from products
window all_rows as ();
The groups in your query each contain just one row if total has a distinct value per row. This seems to be the case in your example. You can check this with a COUNT aggregate (the value will be 1).
Removing total (and probably unit_cost) from your SELECT and GROUP BY clauses should help.
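For example, a sketch of the reduced grouping, assuming you want one summary row per product_name:
SELECT product_name,
       SUM(total) AS sum_total,
       AVG(total) AS avg_total,
       COUNT(*)   AS rows_in_group   -- if this is 1 everywhere, each group is a single row
FROM products
GROUP BY product_name;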
Hi everyone. This is my first post on Stack Overflow, so sorry if it is clumsy in any way.
I work in Python and make PostgreSQL requests to a Google BigQuery database. The data structure looks like this:
[sample of data: a table with at least time, pair, and price columns]
where time is represented in nanoseconds and is not regularly spaced (it is captured in real time).
What I want to do is to select, say, the mean price over a minute, for each minute in a time range that I would like to give as a parameter.
This time range is currently a list of timestamps that I build externally, and I make sure they are separated by one minute each:
[1606170420000000000, 1606170360000000000, 1606170300000000000, 1606170240000000000, 1606170180000000000, ...]
My question is: how can I extract this list of mean prices given that list of time intervals?
Ideally I'd expect something like
SELECT AVG(price) OVER( PARTITION BY (time BETWEEN time_intervals[i] AND time_intervals[i+1] for i in range(len(time_intervals))) )
FROM table_name
but I know that doesn't make sense...
My temporary solution is to aggregate many SELECT ... UNION DISTINCT clauses, one for each minute interval. But as you can imagine, this is not very efficient... (I need up to 60*24 = 1440 samples)
There may very well already be an answer to this question, but since I'm not even sure how to formulate it, I haven't found anything yet. Any link and/or tip would be of great help.
Many thanks in advance.
First of all, your sample data appears to be at nanosecond resolution, and you are looking for averages at minute (sixty-second) resolution.
Please try this:
select div(time, 60000000000) as minute,   -- 60,000,000,000 ns = one minute
       pair,
       avg(price) as avg_price
from your_table
group by minute, pair
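If you would rather see the start of each minute as a readable timestamp instead of a bucket index, one option (assuming the nanosecond time column above and BigQuery's TIMESTAMP_MICROS function) is:
select timestamp_micros(div(time, 60000000000) * 60000000) as minute_start,   -- bucket start as a timestamp
       pair,
       avg(price) as avg_price
from your_table
group by minute_start, pair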
If you want to control the intervals as you said in your comment, then please try something like this (I do not have access to BigQuery):
with time_ivals as (
    -- pair each supplied timestamp with the next one to form [tick, next_tick) intervals
    select tick,
           lead(tick) over (order by tick) as next_tick
    from unnest(
        [1606170420000000000, 1606170360000000000,
         1606170300000000000, 1606170240000000000,
         1606170180000000000, ...]) as tick
)
select t.tick, y.pair, avg(y.price) as avg_price
from time_ivals t
join your_table y
  on y.time >= t.tick
 and y.time < t.next_tick
group by t.tick, y.pair;
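Since the list is built in Python, it may be cleaner to pass it as a query parameter instead of inlining the literals. A sketch assuming BigQuery's named array parameters, where @ticks is a hypothetical ARRAY<INT64> parameter supplied by the client:
with time_ivals as (
    select tick,
           lead(tick) over (order by tick) as next_tick
    from unnest(@ticks) as tick
)
select t.tick, y.pair, avg(y.price) as avg_price
from time_ivals t
join your_table y
  on y.time >= t.tick and y.time < t.next_tick
group by t.tick, y.pair;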
I need to group a table by the sum of a NUMC-column, which unfortunately seems not to be possible with ABAP / OpenSQL.
My code looks like this:
SELECT z~anln1
FROM zzanla AS z
INTO TABLE gt_
GROUP BY z~anln1 z~anln2
HAVING SUM( z~percent ) <> 100 " percent unfortunately is a NUMC -> summing up not possible
What would be the best / easiest practices here as I cannot alter the table itself?
Unfortunately the NUMC type is described as numeric text, so in the end it lands in the database as VARCHAR, and that is why functions like SUM or AVG cannot be used on it.
It all depends on how big your table is. If it is rather small, you could read the group fields and the values to be summed into an internal table, then sum them using the COLLECT statement and finally remove the rows for which the sum equals 100%.
One solution is to define the field in the table using a more appropriate type.
NUMC is often used for key fields - like document numbers, which there would never be a reason to add together.
I didn't find a smooth solution.
What I did was copy everything into an internal table and loop over it, converting the NUMC values to DEC values. Grouping and summing up worked at that point.
At the end, I converted the DEC values back to NUMC values.
It's been a while. I came back to this post because someone voted up my original answer. I was thinking about editing my old answer, but I decided to post a new one. When this question was asked in 2017 there were some restrictions, but now it can be done by using the CAST function in new OpenSQL.
SELECT z~anln1
  FROM zzanla AS z
  GROUP BY z~anln1, z~anln2
  HAVING SUM( CAST( z~percent AS INT4 ) ) <> 100
  INTO TABLE @gt_.
I just started using MySQL Workbench (6.1). The default limit for queries is 1,000, and that's fine; I want to keep that.
But the action output message will therefore always say "1000 rows returned".
Is there a setting to see the number of records that would be returned by the query had there been no limit? For sanity-checking query results?
I know this is late by a few years, but I think you're asking for a way to see the total row count in the bottom of the results pane, like in SQL Server. In SQL Server, you would also look in the messages pane, and it would say how many rows were returned. I was actually looking for exactly what you were asking for as well, and it seems there is no way to find that. If you have an ID in your table that is just numeric and in numeric order, you could order by ID descending and look at the biggest number there. That is what I've decided to do.
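For example (table and column names are hypothetical):
select id from your_table order by id desc;   -- the top row in the grid shows the highest id
This only approximates the row count when the ids are sequential with no gaps or deletions.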
The result is not always "1000 rows returned". If there are fewer records than that, you will get the actual count. If you want to know the total number of rows in a table, do a select count(*) from that table. Alternatively, you can switch off the automatic limit and have all records returned by MySQL Workbench, but that can be time- and memory-consuming for large tables.
I think removing the row limit will help. By default, MySQL workbench will limit the result set to 1000 rows but you can always disable the limit. Check out https://superuser.com/questions/240291/how-to-remove-1000-row-limit-in-mysql-workbench-queries on how to do that.
You can run a second query to check that:
select count(*) from (your original query) as t;
This will return the total number of rows in the actual result.
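For example, if the original query were something along these lines (table and filter are hypothetical):
select count(*) from (select * from orders where status = 'shipped') as t;
The outer query returns a single row, so Workbench's 1,000-row display limit does not affect the reported count.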
You can use the SQL COUNT function. It returns the total number of rows a query returns.
A sample query:
select count(*) from tableName where field1 = value1
In Workbench, in the dropdown menu at the top, set the limit to "Don't Limit". Then run the query to extract data from the table. Under the output pane below, the total count of the query results will be displayed in the message column.
I have a db with +- 60,000 records in it, and I need to obtain the last 10 records entered. Can this be done via Postgres? I was thinking of maybe setting the offset to 59,990 and the limit to 10 or something similar, but I'm not sure if this would be efficient?
Something like the following perhaps:
SELECT * FROM your_table
ORDER BY your_timestamp DESC
LIMIT 10
If you want the result sorted ascending by the timestamp, you can wrap this in another query and sort again. You might want to take a look at the execution plan, but this shouldn't be too inefficient.
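A sketch of that wrapping, reusing the placeholder names from the query above:
SELECT *
FROM (
    SELECT * FROM your_table
    ORDER BY your_timestamp DESC
    LIMIT 10
) AS last_ten
ORDER BY your_timestamp ASC;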
ORDER BY id DESC LIMIT 10
If id is indexed, this will be very efficient. Could naturally also be a timestamp.