Most databases have a built-in function for calculating the median, but I don't see anything for median in Amazon Redshift.
You could calculate the median using a combination of the NTH_VALUE() and COUNT() analytic functions, but that seems janky. I would be very surprised if an analytics DB didn't have a built-in method for computing the median, so I'm assuming I'm missing something.
http://docs.aws.amazon.com/redshift/latest/dg/r_Examples_of_NTH_WF.html
http://docs.aws.amazon.com/redshift/latest/dg/c_Window_functions.html
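For reference, a rough sketch of the NTH_VALUE()/COUNT() approach, using the temp table from the answers below (this assumes Redshift accepts a column expression as the offset argument of NTH_VALUE; the exact restrictions may differ):
-- pick the middle value(s) by position: for an odd count both offsets point at
-- the same row, for an even count they are the two middle rows, which get averaged
WITH n AS (SELECT COUNT(*) AS cnt FROM temp)
SELECT MIN((lo + hi) / 2.0) AS median
FROM
(SELECT NTH_VALUE(num, (cnt + 1) / 2)
            OVER (ORDER BY num ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS lo,
        NTH_VALUE(num, cnt / 2 + 1)
            OVER (ORDER BY num ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS hi
 FROM temp, n);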
As of 2014-10-17, Redshift supports the MEDIAN window function:
# select min(median) from (select median(num) over () from temp);
min
-----
4.0
Try the NTILE function.
You would divide your data into 2 ranked groups and pick the minimum value from the first group. That's because in datasets with an odd number of values, the first ntile will have 1 more value than the second. This approximation should work very well for large datasets.
create table temp (num smallint);
insert into temp values (1),(5),(10),(2),(4);
select num, ntile(2) over(order by num desc) from temp ;
num | ntile
-----+-------
10 | 1
5 | 1
4 | 1
2 | 2
1 | 2
select min(num) as median from (select num, ntile(2) over(order by num desc) from temp) where ntile = 1;
median
--------
4
I had difficulty with this also, but got some help from Amazon. Since the 2014-06-30 version of Redshift, you can do this with the PERCENTILE_CONT or PERCENTILE_DISC window functions.
They're slightly weird to use, as they will tack the median (or whatever percentile you choose) onto every row. You put that in a subquery and then take the MIN (or whatever) of the median column.
# select count(num), min(median) as median
from
(select num, percentile_cont (0.5) within group (order by num) over () as median from temp);
count | median
-------+--------
5 | 4.0
(The reason it's complicated is that window functions can also do their own mini-group-by and ordering to give you the median of many groups all at once, and other tricks.)
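For instance, a minimal sketch of the per-group behaviour, assuming a hypothetical table grouped_temp with columns (grp, num):
-- PARTITION BY computes one median per grp; the outer GROUP BY collapses
-- the repeated per-row medians back to a single row per group
SELECT grp, MIN(median) AS median
FROM
(SELECT grp,
        percentile_cont(0.5) WITHIN GROUP (ORDER BY num) OVER (PARTITION BY grp) AS median
 FROM grouped_temp)
GROUP BY grp;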
In the case of an even number of values, CONT(inuous) will interpolate between the two middle values, where DISC(rete) will pick one of them.
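For example, if a sixth value such as 9 were added to temp (so the sorted values are 1, 2, 4, 5, 9, 10), a sketch of the difference:
-- PERCENTILE_CONT(0.5) interpolates between the middle values 4 and 5 -> 4.5
-- PERCENTILE_DISC(0.5) returns an actual value from the set -> 4
SELECT MIN(median_cont) AS median_cont, MIN(median_disc) AS median_disc
FROM
(SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY num) OVER () AS median_cont,
        percentile_disc(0.5) WITHIN GROUP (ORDER BY num) OVER () AS median_disc
 FROM temp);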
I typically use the NTILE function to split the data into two groups if I’m looking for an answer that’s close enough. However, if I want the exact median (e.g. the midpoint of an even set of rows), I use a technique suggested on the AWS Redshift Discussion Forum.
This technique numbers the rows in both ascending and descending order. If there is an odd number of rows, it returns the average of the middle row (that is, the row where row_num_asc = row_num_desc), which is simply the middle row itself.
CREATE TABLE temp (num SMALLINT);
INSERT INTO temp VALUES (1),(5),(10),(2),(4);
SELECT
AVG(num) AS median
FROM
(SELECT
num,
SUM(1) OVER (ORDER BY num ASC) AS row_num_asc,
SUM(1) OVER (ORDER BY num DESC) AS row_num_desc
FROM
temp) AS ordered
WHERE
row_num_asc IN (row_num_desc, row_num_desc - 1, row_num_desc + 1);
median
--------
4
If there is an even number of rows, it returns the average of the two middle rows.
INSERT INTO temp VALUES (9);
SELECT
AVG(num) AS median
FROM
(SELECT
num,
SUM(1) OVER (ORDER BY num ASC) AS row_num_asc,
SUM(1) OVER (ORDER BY num DESC) AS row_num_desc
FROM
temp) AS ordered
WHERE
row_num_asc IN (row_num_desc, row_num_desc - 1, row_num_desc + 1);
median
--------
4.5
Related
I have a table with two columns: parties and their respective spendings (party and per_capita). I have to return a table with the party column and, for each party, the average of the spendings of all the other parties (i.e. the party in that row is excluded from the calculation). So, suppose I have x, y and z. I want something like:
X | Average of spendings of (Y,Z)
Y | Average of spendings of (X,Z)
Z | Average of spendings of (X,Y)
I tried the following, resulting in NULL for the spending columns:
SELECT pcp.party, avg(pcp.per_capita) OVER (PARTITION BY pcp.party ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING EXCLUDE CURRENT ROW) AS average
FROM per_capita_party pcp
ORDER BY average DESC;
You can use LATERAL for this. For example:
select distinct symbol,v
from myTable t1, lateral (
select avg(amount)
from myTable t2 where t1.symbol != t2.symbol) t(v);
Here is a DBFiddle demo.
A scalar subquery will do the job and reads close to natural language.
SELECT pcp.party,
(select avg(per_capita) from per_capita_party where party <> pcp.party) average
FROM per_capita_party pcp
ORDER BY average DESC;
You can also use a window function with the EXCLUDE GROUP frame clause.
Based on Cetin's DBFiddle.
new dbfiddle
SELECT
symbol,
avg(amount) OVER (ORDER BY symbol GROUPS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING EXCLUDE GROUP)
FROM
mytable;
or
SELECT
symbol,
avg(amount) OVER (ORDER BY symbol DESC GROUPS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING EXCLUDE GROUP)
FROM
mytable;
I'm new to PostgreSQL and I am trying to calculate a rate in a table like this:
class  phase
a      sold
b      stock
c      idle
d      sold
I want to calculate the count of 'sold' phases divided by the total count, like this:
2/4 = 50%
I was trying:
with t as ( select count(class) as total_sold from table where phase='sold')
select total_sold / count(*) from t
group by total_sold
but the result is wrong. How can I do this?
Use AVG() aggregate function:
SELECT 100 * AVG((phase = 'sold')::int) AS avg_sold
FROM tablename;
The boolean expression phase = 'sold' is converted to an integer 1 for true or 0 for false and the average of these values is the ratio that you want.
See the demo.
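For the sample rows the cast yields 1, 0, 0, 1, so the average is 0.5, i.e. 50%. If the cast feels too terse, an equivalent formulation (a sketch, assuming PostgreSQL 9.4+ for the FILTER clause) would be:
-- count the 'sold' rows explicitly and divide by the total row count;
-- the 100.0 forces numeric division instead of integer division
SELECT 100.0 * COUNT(*) FILTER (WHERE phase = 'sold') / COUNT(*) AS pct_sold
FROM tablename;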
I have a table T as follows with 1 billion records. Currently, this table has no primary key or indexes.
create table T(
day_c date,
str_c varchar2(20),
comm_c varchar2(20),
src_c varchar2(20)
);
some sample data:
insert into T
select to_date('20171011','yyyymmdd') day_c,'st1' str_c,'c1' comm_c,'s1' src_c from dual
union
select to_date('20171012','yyyymmdd'),'st1','c1','s1' from dual
union
select to_date('20171013','yyyymmdd'),'st1','c1','s1' from dual
union
select to_date('20171014','yyyymmdd'),'st1','c1','s2' from dual
union
select to_date('20171015','yyyymmdd'),'st1','c1','s2' from dual
union
select to_date('20171016','yyyymmdd'),'st1','c1','s2' from dual
union
select to_date('20171017','yyyymmdd'),'st1','c1','s1' from dual
union
select to_date('20171018','yyyymmdd'),'st1','c1','s1' from dual
union
select to_date('20171019','yyyymmdd'),'st1','c1','s1' from dual
union
select to_date('20171020','yyyymmdd'),'st1','c1','s1' from dual;
The expected result is to generate the date ranges for the changes in column src_c.
I have the following code snippet, which provides the desired result. However, it is slow, as the cost of running LAG and LEAD on the table is quite high.
WITH EndsMarked AS (
SELECT
day_c,str_c,comm_c,src_c,
CASE WHEN src_c= LAG(src_c,1) OVER (ORDER BY day_c)
THEN 0 ELSE 1 END AS IS_START,
CASE WHEN src_c= LEAD(src_c,1) OVER (ORDER BY day_c)
THEN 0 ELSE 1 END AS IS_END
FROM T
), GroupsNumbered AS (
SELECT
day_c,str_c,comm_c,
src_c,
IS_START,
IS_END,
COUNT(CASE WHEN IS_START = 1 THEN 1 END)
OVER (ORDER BY day_c) AS GroupNum
FROM EndsMarked
WHERE IS_START=1 OR IS_END=1
)
SELECT
str_c,comm_c,src_c,
MIN(day_c) AS GROUP_START,
MAX(day_c) AS GROUP_END
FROM GroupsNumbered
GROUP BY str_c,comm_c, src_c,GroupNum
ORDER BY groupnum;
Output :
STR_C COMM_C SRC_C GROUP_START GROUP_END
st1 c1 s1 11-OCT-17 13-OCT-17
st1 c1 s2 14-OCT-17 16-OCT-17
st1 c1 s1 17-OCT-17 20-OCT-17
Any suggestions to speed this up?
Oracle database: 12c
SGA memory: 20 GB
Total CPUs: 22
Explain plan:
Do you order by day_c only, or do you need to partition by str_c and comm_c first? It seems you do - in which case I am not sure your query is correct, and Sentinel's solution will need to be adjusted accordingly.
Then:
For some reason (which escapes me), it appears that the match_recognize clause (available only since Oracle 12.1) is faster than analytic functions, even when the work done seems to be the same.
In your problem, (1) you must read 1 billion rows from disk, which can't be done faster than the hardware allows (do you REALLY need to do this on all 1 billion rows, or could you archive a large portion of your table, perhaps after performing this identification of GROUP_START and GROUP_END?), and (2) you must order the data by day_c no matter what method you use, and that is time consuming.
With that said, the tabibitosan method (see Sentinel's answer) will be faster than the start-of-group method (which is close to, but simpler than what you currently have).
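For reference, a sketch of that start-of-group method (not your exact query; it only needs LAG plus a running SUM of the start flags):
select str_c, comm_c, src_c,
       min(day_c) as group_start,
       max(day_c) as group_end
from (
  select flagged.*,
         sum(is_start) over (order by day_c) as grp   -- running count of group starts = group number
  from (
    select t.*,
           case when src_c = lag(src_c) over (order by day_c)
                then 0 else 1 end as is_start          -- 1 marks the first row of a new group
    from t
  ) flagged
)
group by str_c, comm_c, src_c, grp
order by group_start;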
The match_recognize solution, which will probably be faster than any solution based on analytic functions, looks like this:
select str_c, comm_c, src_c, group_start, group_end
from t
match_recognize(
partition by str_c, comm_c
order by day_c
measures x.src_c as src_c,
first(day_c) as group_start,
last(day_c) as group_end
pattern ( x y* )
define y as src_c = x.src_c
)
-- Add ORDER BY clause here, if needed
;
Here is a quick explanation of how this works; for developers who are not familiar with match_recognize, I provided links to a few good tutorials in a Comment below this Answer.
The match_recognize clause partitions the input rows by str_c and comm_c and orders them by day_c. So far this is exactly the same work that analytic functions do.
Then in the PATTERN and DEFINE clauses I declare and define two "classes" of rows, which will be flagged as X and Y, respectively. X is any row (there are no restrictions on it in the DEFINE clause). However, Y is restricted: it must have the same src_c as the last X row preceding it.
So, in each partition, reading from the earliest row to the latest, I am looking for any number of matches, where a match consists of an arbitrary row (marked X) followed by as many Y rows as possible, where Y means "same src_c as the first row in this match". So this will identify sequences of rows where the src_c did not change.
For each match that is found, the clause will output the src_c value from the X row (which is the same, really, for all the rows in that match), and the first and the last value in the day_c column for that match. That is what we need to put in the SELECT clause of the overall query.
You can eliminate one CTE by using the Tabibito-san (Traveler) method:
with Groups as (
select t.*
, row_number() over (order by day_c)
- row_number() over (partition by str_c
, comm_c
, src_c
order by day_c) GroupNum
from t
)
select str_c
, comm_c
, src_c
, min(day_c) GROUP_START
, max(day_c) GROUP_END
from Groups
group by str_c
, comm_c
, src_c
, GroupNum
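If, as suggested above, the groups should also be partitioned by str_c and comm_c (an assumption - the sample data contains only one such combination), a sketch of the adjusted query:
with Groups as (
  select t.*
       , row_number() over (partition by str_c, comm_c
                            order by day_c)
       - row_number() over (partition by str_c, comm_c, src_c
                            order by day_c) GroupNum
    from t
)
select str_c
     , comm_c
     , src_c
     , min(day_c) GROUP_START
     , max(day_c) GROUP_END
  from Groups
 group by str_c
        , comm_c
        , src_c
        , GroupNum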
I have a table (only one row) in my PostgreSQL 9.5 db with two columns i.e., count (bigint) and array (text).
count array
6 "112,19.3,142,142,19.3,172,172,20.3,202,202,20.3,232,232,19.3,262,262,19.3,292"
The array represents six (thus count = 6) sets of values, i.e. lower_limit, value and upper_limit. Now, I need to conditionally modify my array: when the upper and lower limits coincide, select the first lower limit and the last upper limit, and return the most common value (which is 19.3) among the limits. My desired output would be like:
count array
1 112, 19.3, 292
Could anyone help me to have some pointers towards my desired output?
I must admit - I don't understand how you get count = 1, but below is an example of how you can build an array with the first, last and most common values. Mind that if there are several equally common values it will unpredictably pick one of them.
t=#
with a(r) as (values(array[112,19.3,142,142,19.3,172,172,20.3,202,202,20.3,232,232,19.3,262,262,19.3,292]))
, p as (select * from a,unnest(a.r) with ordinality)
, t as (
select count(1) over (partition by unnest) c -- how many times each value occurs
, unnest u -- the value itself
, r[1] a -- first element of the array
, r[array_length(r,1)] e -- last element of the array
from p
order by c desc -- keep the most common value (ties picked arbitrarily)
limit 1
)
select array[a,u,e]
from t
;
array
----------------
{112,19.3,292}
(1 row)
Suppose I have a table with quantity column.
CREATE TABLE transfers (
user_id integer,
quantity integer,
created timestamp default now()
);
I'd like to iteratively go thru a partition using window functions, but access the output rows, not the input table rows.
To access the input table rows I could do something like this:
SELECT LAG(quantity, 1, 0)
OVER (PARTITION BY user_id ORDER BY created)
FROM transfers;
I need to access the previous output row to calculate the next output row. How can I access the lag row in the output? Something like:
CREATE VIEW balance AS
SELECT LAG(balance.total, 1, 0) + quantity AS total
OVER (PARTITION BY user_id ORDER BY created)
FROM transfers;
Edit
This is a minimal example to support the question of how to access the previous output row within a window partition. I don't actually want a sum.
It seems you're attempting to calculate a running sum. Luckily, that's just what the SUM() window function does:
WITH transfers AS(
SELECT i, random()-0.3 AS quantity FROM generate_series(1,100) as i
)
SELECT i, quantity, sum(quantity) OVER (ORDER BY i) from transfers;
I guess, looking at the question, that the only thing you need is to calculate a cumulative sum.
To calculate a cumulative sum, use this query:
SELECT *,
SUM( CASE WHEN quantity IS NULL THEN 0 ELSE quantity END)
OVER ( PARTITION BY user_id ORDER BY created
ROWS BETWEEN unbounded preceding AND current row
) As cumulative_sum
FROM transfers
ORDER BY user_id, created
;
But if you want more complex calculations, especially ones containing conditions (decisions) that depend on a result from the previous row, then you need a recursive approach.
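A minimal sketch of that recursive approach, reusing the transfers table above with a made-up rule (the running total is never allowed to drop below zero - something a plain SUM() window cannot express), so each row genuinely depends on the previous output row:
WITH RECURSIVE ordered AS (
    -- number each user's transfers in time order
    SELECT user_id, quantity, created,
           ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY created) AS rn
    FROM transfers
), running AS (
    -- anchor: the first transfer per user starts the running total
    SELECT user_id, quantity, created, rn,
           GREATEST(quantity, 0) AS total
    FROM ordered
    WHERE rn = 1
    UNION ALL
    -- recursive step: each row sees the previous output row's total
    SELECT o.user_id, o.quantity, o.created, o.rn,
           GREATEST(r.total + o.quantity, 0) AS total
    FROM running r
    JOIN ordered o ON o.user_id = r.user_id AND o.rn = r.rn + 1
)
SELECT user_id, created, quantity, total
FROM running
ORDER BY user_id, created;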