Related
I am querying a database using T-SQL in SSMS. I have a dataset that contains two unique IDs, A and B. For each of these IDs I want to sum columns OAC and Adj cumulatively until the next non-zero value is reached in column OAC. In other words, the non-zero values in column OAC remain unchanged, while the rows between them keep adding the subsequent Adj values, using each non-zero OAC value as a stop-and-restart point.
The table can be created using the script below:
drop table #T
CREATE TABLE #T (
    Id varchar(10),
    PeriodNum int,
    row_num int,
    OAC money,
    Adj money
) ON [PRIMARY]
GO
insert into #T
values
('A','201606','1','5','0'),
('A','201905','2','0','-2'),
('A','201906','3','100','0'),
('A','202008','4','0','-6'),
('A','202009','5','0','-8'),
('A','202106','6','0','-11'),
('A','202109','7','23','0'),
('B','201606','1','3','0'),
('B','201905','2','0','25'),
('B','201906','3','60','0'),
('B','202008','4','0','12'),
('B','202009','5','0','-5'),
('B','202106','6','0','6'),
('B','202109','7','6','0')
I tried the following code to calculate the desired result
select * , sum( iif(t.OAC<> 0,(t.OAC + t.Adj),0)) over (partition by t.ID order by t.row_num asc) as Calc
from #T t
This did not work; the result I need in the Calc column is shown below.
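Reconstructed by applying the rule above to the sample data (the original result table is not included here), the expected Calc column would be:
Id   row_num   OAC   Adj   Calc
A    1         5     0     5
A    2         0     -2    3
A    3         100   0     100
A    4         0     -6    94
A    5         0     -8    86
A    6         0     -11   75
A    7         23    0     23
B    1         3     0     3
B    2         0     25    28
B    3         60    0     60
B    4         0     12    72
B    5         0     -5    67
B    6         0     6     73
B    7         6     0     6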
Courtesy of https://blog.jooq.org/10-sql-tricks-that-you-didnt-think-were-possible/, one vendor has an elegant built-in way to do this. Of course, that vendor is not Microsoft, so we're stuck with less finessed options.
One way to go about this is to first use a conditional aggregate on a flag to create "groupings" (call that column [sumstep]), and then partition on it to get separate running totals:
;with cte as
(
    -- a running count of the non-zero OAC rows numbers the groups
    select *,
           sum(case when OAC = 0 then 0 else 1 end)
               over (partition by Id order by row_num) as sumstep
    from #T
)
-- the running total now restarts at every non-zero OAC row
select *,
       sum(OAC + Adj) over (partition by Id, sumstep order by row_num) as Calc
from cte
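For ID A, for example, sumstep evaluates to 1, 1, 2, 2, 2, 2, 3, so each non-zero OAC row opens a new partition and the running total restarts there, matching the expected Calc values above.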
I have a table T, as follows, with 1 billion records. Currently this table has no primary key or indexes.
create table T(
day_c date,
str_c varchar2(20),
comm_c varchar2(20),
src_c varchar2(20)
);
Some sample data:
insert into T
select to_date('20171011','yyyymmdd') day_c,'st1' str_c,'c1' comm_c,'s1' src_c from dual
union
select to_date('20171012','yyyymmdd'),'st1','c1','s1' from dual
union
select to_date('20171013','yyyymmdd'),'st1','c1','s1' from dual
union
select to_date('20171014','yyyymmdd'),'st1','c1','s2' from dual
union
select to_date('20171015','yyyymmdd'),'st1','c1','s2' from dual
union
select to_date('20171016','yyyymmdd'),'st1','c1','s2' from dual
union
select to_date('20171017','yyyymmdd'),'st1','c1','s1' from dual
union
select to_date('20171018','yyyymmdd'),'st1','c1','s1' from dual
union
select to_date('20171019','yyyymmdd'),'st1','c1','s1' from dual
union
select to_date('20171020','yyyymmdd'),'st1','c1','s1' from dual;
The expected result is to generate the date ranges for the changes in column src_c.
I have the following code snippet, which produces the desired result. However, it is slow, as the cost of running LAG and LEAD over this table is quite high.
WITH EndsMarked AS (
SELECT
day_c,str_c,comm_c,src_c,
CASE WHEN src_c= LAG(src_c,1) OVER (ORDER BY day_c)
THEN 0 ELSE 1 END AS IS_START,
CASE WHEN src_c= LEAD(src_c,1) OVER (ORDER BY day_c)
THEN 0 ELSE 1 END AS IS_END
FROM T
), GroupsNumbered AS (
SELECT
day_c,str_c,comm_c,
src_c,
IS_START,
IS_END,
COUNT(CASE WHEN IS_START = 1 THEN 1 END)
OVER (ORDER BY day_c) AS GroupNum
FROM EndsMarked
WHERE IS_START=1 OR IS_END=1
)
SELECT
str_c,comm_c,src_c,
MIN(day_c) AS GROUP_START,
MAX(day_c) AS GROUP_END
FROM GroupsNumbered
GROUP BY str_c,comm_c, src_c,GroupNum
ORDER BY groupnum;
Output :
STR_C COMM_C SRC_C GROUP_START GROUP_END
st1 c1 s1 11-OCT-17 13-OCT-17
st1 c1 s2 14-OCT-17 16-OCT-17
st1 c1 s1 17-OCT-17 20-OCT-17
Any suggestions to speed this up?
Oracle database: 12c
SGA memory: 20 GB
Total CPUs: 22
Explain plan:
Do you order by day_c only, or do you need to partition by str_c and comm_c first? It seems you do, in which case I am not sure your query is correct, and Sentinel's solution will need to be adjusted accordingly.
Then:
For some reason (which escapes me), it appears that the match_recognize clause (available only since Oracle 12.1) is faster than analytic functions, even when the work done seems to be the same.
In your problem: (1) you must read 1 billion rows from disk, which can't be done faster than the hardware allows (do you REALLY need to do this on all 1 billion rows, or could you archive a large portion of the table, perhaps after performing this identification of GROUP_START and GROUP_END?); and (2) you must order the data by day_c no matter what method you use, and that is time consuming.
With that said, the Tabibitosan method (see Sentinel's answer) will be faster than the start-of-group method (which is close to, but simpler than, what you currently have).
The match_recognize solution, which will probably be faster than any solution based on analytic functions, looks like this:
select str_c, comm_c, src_c, group_start, group_end
from t
match_recognize(
partition by str_c, comm_c
order by day_c
measures x.src_c as src_c,
first(day_c) as group_start,
last(day_c) as group_end
pattern ( x y* )
define y as src_c = x.src_c
)
-- Add ORDER BY clause here, if needed
;
Here is a quick explanation of how this works; for developers who are not familiar with match_recognize, I provided links to a few good tutorials in a Comment below this Answer.
The match_recognize clause partitions the input rows by str_c and comm_c and orders them by day_c. So far this is exactly the same work that analytic functions do.
Then in the PATTERN and DEFINE clauses I declare and define two "classes" of rows, which will be flagged as X and Y, respectively. X is any row (there are no restrictions on it in the DEFINE clause). However, Y is restricted: it must have the same src_c as the last X row preceding it.
So, in each partition, reading from the earliest row to the latest, I am looking for any number of matches, where a match consists of an arbitrary row (marked X) followed by as many Y rows as possible, and Y means "same src_c as the first row in this match". This identifies sequences of rows where src_c did not change.
For each match that is found, the clause will output the src_c value from the X row (which is the same, really, for all the rows in that match), and the first and the last value in the day_c column for that match. That is what we need to put in the SELECT clause of the overall query.
You can eliminate one CTE by using the Tabibito-san (Traveler) method:
with Groups as (
  select t.*
         -- the difference between the global and the per-src_c row numbers
         -- is constant within each unbroken streak of the same src_c
       , row_number() over (order by day_c)
       - row_number() over (partition by str_c
                                       , comm_c
                                       , src_c
                            order by day_c) GroupNum
  from t
)
select str_c
, comm_c
, src_c
, min(day_c) GROUP_START
, max(day_c) GROUP_END
from Groups
group by str_c
, comm_c
, src_c
, GroupNum
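If, as noted above, the grouping should happen per str_c and comm_c rather than globally, one possible adjustment (a sketch, partitioning the first row_number as well) would be:
with Groups as (
  select t.*
         -- both row numbers now restart per (str_c, comm_c), so streaks
         -- of the same src_c are detected within each such partition
       , row_number() over (partition by str_c, comm_c
                            order by day_c)
       - row_number() over (partition by str_c, comm_c, src_c
                            order by day_c) GroupNum
  from t
)
select str_c
     , comm_c
     , src_c
     , min(day_c) GROUP_START
     , max(day_c) GROUP_END
from Groups
group by str_c
       , comm_c
       , src_c
       , GroupNum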
I have a table named raw_data with the following data
As you can see, ids 1 and 2 share the same values in the fields desa, kecamatan, and kabupaten, as do ids 3, 4, and 5.
So basically I want to select all rows whose values are not the same as those of the previous row. The expected result would be:
I know it's easy to do this in a programming language such as PHP, but I need it in PostgreSQL. Is this doable? Thanks in advance.
Assuming a higher id denotes a later row: if a row with the same values in all three columns can reappear non-adjacently, and you don't want to filter it out (since it doesn't match its immediately previous row when ordered by id or created_date), then you can make use of the analytic lag() function:
select *
from (
select
t.*,
case
when desa = lag(desa) over (order by id)
and kecamatan = lag(kecamatan) over (order by id)
and kabupaten = lag(kabupaten) over (order by id)
then 0 else 1
end flag
from your_table t
) t where flag = 1;
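One caveat: plain = comparisons never match NULLs, so rows with NULL in any of these columns would always be flagged as changed. If NULLs can occur, a NULL-safe sketch using PostgreSQL's IS NOT DISTINCT FROM could look like this:
select *
from (
  select
    t.*,
    case
      -- IS NOT DISTINCT FROM treats two NULLs as equal,
      -- unlike the plain = comparison above
      when desa is not distinct from lag(desa) over (order by id)
       and kecamatan is not distinct from lag(kecamatan) over (order by id)
       and kabupaten is not distinct from lag(kabupaten) over (order by id)
      then 0 else 1
    end flag
  from your_table t
) t where flag = 1;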
Using the example from this post: https://blogs.oracle.com/datawarehousing/entry/managing_overflows_in_listagg
The following statement:
SELECT
deptno,
LISTAGG(ename, ';') WITHIN GROUP (ORDER BY empno) AS namelist
FROM emp
GROUP BY deptno;
will generate the following output:
DEPTNO NAMELIST
---------- ----------------------------------------
10 CLARK;KING;MILLER
20 SMITH;JONES;SCOTT;ADAMS;FORD
30 ALLEN;WARD;MARTIN;BLAKE;TURNER;JAMES
Let's assume that the above statement fails because a row exceeds the limit of characters that can be returned for each row by our LISTAGG function, and that this limit is 15 characters. On Amazon Redshift the actual limit is 65535.
We would want the following to be returned in this case:
DEPTNO NAMELIST
---------- ----------------------------------------
10 CLARK;KING
10 MILLER
20 SMITH;JONES
20 SCOTT;ADAMS
20 FORD
30 ALLEN;WARD
30 MARTIN;BLAKE
30 TURNER;JAMES
What would be the best way to recreate this result in Amazon Redshift to avoid any data loss and taking speed into consideration?
It's possible to achieve this with two subqueries:
First:
SELECT id, field,
       sum(length(field) + 1) over
         (partition by id order by RANDOM() rows unbounded preceding) as total_length_now
FROM my_schema.my_table
Initially we want to calculate, for each id, how many characters accumulate row by row; a window function computes this incrementally. In the ORDER BY you can use any unique field that you have. If you don't have one, you can simply use RANDOM() or a hash function, but the ordering field must be unique; if it is not, the function will not work as we want.
The '+1' added to each length accounts for the semicolon we will use as the delimiter in the listagg function.
Second:
SELECT id, field, total_length_now / 65535 as sub_id
FROM (sub_query_1)
Now we derive a sub_id from the length we calculated before. Each time total_length_now crosses a multiple of the limit (here 65535), the integer division yields a new sub_id.
Last step:
SELECT id, sub_id, listagg(field, ';') as namelist
FROM (sub_query_2)
GROUP BY id, sub_id
ORDER BY id, sub_id
Now we can simply call the listagg function grouping by id and sub_id. Note that a bucket can slightly exceed the limit when a row's cumulative length straddles a boundary (in the size-10 example below, CLARK;MILLER is 12 characters), so in practice use a divisor comfortably below the hard maximum.
Complete query
SELECT id, sub_id, listagg(field, ';') as namelist
FROM (
    SELECT id, field, total_length_now / 65535 as sub_id
    FROM (SELECT id,
                 field,
                 sum(length(field) + 1) over
                   (partition by id order by field rows unbounded preceding) as total_length_now
          FROM support.test) t1
) t2
GROUP BY id, sub_id
ORDER BY id, sub_id
Example with your data (with size limit = 10)
First and second query output:
id, field, total_length_now, sub_id
10,KING,5,0
10,CLARK,11,1
10,MILLER,18,1
20,ADAMS,6,0
20,SMITH,12,1
20,JONES,18,1
20,FORD,23,2
20,SCOTT,29,2
30,JAMES,6,0
30,BLAKE,12,1
30,WARD,17,1
30,MARTIN,24,2
30,TURNER,31,3
30,ALLEN,37,3
Final query output:
id,sub_id,namelist
10,0,KING
10,1,CLARK;MILLER
20,0,ADAMS
20,1,SMITH;JONES
20,2,FORD;SCOTT
30,0,JAMES
30,1,BLAKE;WARD
30,2,MARTIN
30,3,TURNER;ALLEN
It is possible to create a partial list, and then the rest of the values as separate rows, in one go; but if the number of rows is unconstrained you really need a loop statement to convert those remaining rows into a list, and then the rows remaining after that, and so on.
So this is really a task for Apache Spark (or any other map-reduce technology).
I am trying to run an export on a system that only allows T-SQL. I know enough PHP to write a foreach loop, but I don't know enough T-SQL to generate multiple rows for a given quantity.
I need the result to include "1 of 4"-style data for each item.
Given a table like:
orderid, product, quantity
1000,ball,3
1001,bike,4
1002,hat,2
How do I get a SELECT query result like:
orderid, item_num, total_items, product
1000,1,3,ball
1000,2,3,ball
1000,3,3,ball
1001,1,4,bike
1001,2,4,bike
1001,3,4,bike
1001,4,4,bike
1002,1,2,hat
1002,2,2,hat
You can do this with the aid of an auxiliary numbers table.
;WITH T(orderid, product, quantity) AS
(
select 1000,'ball',3 union all
select 1001,'bike',4 union all
select 1002,'hat',2
)
SELECT orderid, number as item_num, quantity as total_items, product
FROM T
JOIN master..spt_values on number > 0 and number <= quantity
where type='P'
NB: The code above uses the master..spt_values table just for demo purposes; I suggest you create your own tally table using one of the techniques here.
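For instance, one common technique to materialize such a tally table (a sketch; the name dbo.Tally and the 10,000-row size are arbitrary choices here):
-- builds a 10,000-row numbers table by cross joining system catalogs
SELECT TOP (10000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS number
INTO   dbo.Tally
FROM   sys.all_objects a
CROSS JOIN sys.all_objects b;
You would then join dbo.Tally in place of master..spt_values, with no need for the where type='P' filter.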
If you are on SQL Server 2005 or later version, then you can try a recursive CTE instead of a tally table.
;WITH CTE AS
(
SELECT orderid, 1 item_num, product, quantity
FROM YourTable
UNION ALL
SELECT orderid, item_num+1, product, quantity
FROM CTE
WHERE item_num < quantity
)
SELECT *
FROM CTE
OPTION (MAXRECURSION 0)
I'm not on a computer with a database engine where I can test this, so let me know how it goes.
Well, IF you know the maximum quantity for any order (and it's not too big, say 4), you can:
Create a helper table called Nums containing one integer column, n, with rows containing 1, 2, 3, 4 (a sketch follows after the query below).
Run:
SELECT *
FROM Your_table
JOIN Nums ON Nums.n <= Your_table.quantity
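A minimal sketch of that helper table, assuming a maximum quantity of 4 as above:
-- one row per possible item number, 1 through the maximum quantity
CREATE TABLE Nums (n int PRIMARY KEY);
INSERT INTO Nums (n) VALUES (1), (2), (3), (4);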