We want to group our members' enrollments into "continuous enrollments," allowing for a gap of up to 45 days. I know how to use LEAD to determine if an enrollment should be grouped with the next, but I don't know how to group them. Would it be more appropriate to add 45 to the term date and subtract 45 from the effective date, then check for overlapping date periods? My goal is to have a SQL view that returns results similar to the final query below. Thank you for your help.
SELECT '101' AS MemID, '2021-01-01' AS EffDate, '2021-01-31' AS TermDate INTO #T1 UNION
SELECT '101', '2021-02-01', '2021-02-28' UNION
SELECT '101', '2021-03-01', '2021-03-31' UNION
SELECT '101', '2021-06-01', '2021-06-30' UNION
SELECT '999', '2021-01-01', '2021-01-15' UNION
SELECT '999', '2021-09-01', '2021-09-28' UNION
SELECT '999', '2021-10-01', '2021-10-31'
SELECT *
, LEAD(EffDate) OVER (PARTITION BY MemID ORDER BY EffDate) AS LeadEffDate
, DATEDIFF(DAY, TermDate, (LEAD(EffDate) OVER (PARTITION BY MemID ORDER BY EffDate))) AS DaysToNextEnrollment
, CASE WHEN (DATEDIFF(DAY, TermDate, (LEAD(EffDate) OVER (PARTITION BY MemID ORDER BY EffDate)))) <= 45 THEN 1 ELSE 0 END AS CombineWithNextRecord
FROM #T1
-- result objective
SELECT 101 AS MemID, '2021-01-01' AS EffDate, '2021-03-31' AS TermDate UNION
SELECT 101, '2021-06-01', '2021-06-30' UNION
SELECT 999, '2021-01-01', '2021-01-15' UNION
SELECT 999, '2021-09-01', '2021-10-31'
I think you are really close. Your question is very similar to
TSQL - creating from-to date table while ignoring in-between steps with conditions, with the only difference being the logic for what counts as the same group.
My basic approach is to use the LAG() function to get the previous values of MemID and TermDate, combine those with your 45-day rule to define a group, and finally take the first and last values of each group.
Here is my response to that question, modified to your situation.
SELECT
a4.MemID
, CONVERT (DATE, a4.First_EffDate) AS [EffDate]
, CONVERT (DATE, a4.TermDate) AS [TermDate]
FROM (
SELECT
a3.MemID
, a3.EffDate
, a3.TermDate
, a3.MemID_group
, FIRST_VALUE (a3.EffDate) OVER (PARTITION BY a3.MemID_group ORDER BY a3.EffDate) AS [First_EffDate]
, ROW_NUMBER () OVER (PARTITION BY a3.MemID_group ORDER BY a3.EffDate DESC) AS [Row_number]
FROM (
SELECT
a2.MemID
, a2.EffDate
, a2.TermDate
, a2.Previous_MemID
, a2.Previous_TermDate
, a2.New_group
, SUM (a2.New_group) OVER (ORDER BY a2.MemID, a2.EffDate) AS [MemID_group]
FROM (
SELECT
a1.MemID
, a1.EffDate
, a1.TermDate
, a1.Previous_MemID
, a1.Previous_TermDate
---------------------------------------------------------------------------------
-- new group if the MemID is different from the previous row OR
-- if the MemID is the same as the previous row AND it has been more than 45 days
-- between the TermDate of the previous row and the EffDate of the current row
,
IIF((a1.MemID <> a1.Previous_MemID)
OR (
a1.MemID = a1.Previous_MemID
AND DATEDIFF (DAY, a1.Previous_TermDate, a1.EffDate) > 45
)
, 1
, 0) AS [New_group]
---------------------------------------------------------------------------------
FROM (
SELECT
MemID
, EffDate
, TermDate
, LAG (MemID) OVER (ORDER BY MemID) AS [Previous_MemID]
, LAG (TermDate) OVER (PARTITION BY MemID ORDER BY EffDate) AS [Previous_TermDate]
FROM #T1
) a1
) a2
) a3
) a4
WHERE a4.[Row_number] = 1;
Here is the dbfiddle.
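As an aside, the same gaps-and-islands idea can be written a bit more compactly. This is only a sketch of my own (not taken from the linked answer), using your #T1 sample data and the same 45-day rule:
SELECT MemID
    , MIN(EffDate)  AS EffDate
    , MAX(TermDate) AS TermDate
FROM (
    SELECT MemID, EffDate, TermDate
        -- running total of the "new group" flags becomes a group id per member
        , SUM(NewGroup) OVER (PARTITION BY MemID ORDER BY EffDate
                              ROWS UNBOUNDED PRECEDING) AS GroupID
    FROM (
        SELECT MemID, EffDate, TermDate
            -- 1 = start of a new continuous enrollment
            -- (first row per member, or a gap of more than 45 days)
            , CASE WHEN DATEDIFF(DAY,
                       LAG(TermDate) OVER (PARTITION BY MemID ORDER BY EffDate),
                       EffDate) <= 45 THEN 0 ELSE 1 END AS NewGroup
        FROM #T1
    ) flagged
) grouped
GROUP BY MemID, GroupID;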
Let's say I have a table with 3 columns with the following values:
ticketid indexid type
--- --- ---
100 191 0
100 192 2
100 193 4
200 194 0
300 195 1
300 196 0
My desired output:
ticketid indexid type
--- --- ---
200 194 0
I only want those rows which have:
1) only 1 row (based on ticketid) in the table
2) type in (0, 2, 4)
This is the query I have which is not working:
select distinct ticketid from tab1 where type not in (1,3) group by ticketid having count(indexid) = 1
When I run the above query I am still getting tickets which have more than one row. How can I fix this?
You can do the grouping in a subquery. Then in the main query pull all of the columns.
select ticketid, indexid, type
from tab1
where type in (0,2,4)
and ticketid in (
select ticketid from tab1 group by ticketid having count(*) = 1
)
Assuming that ticketid+indexid is unique, you can do this:
select ticketid
from tab1
where type not in (1,3)
group by ticketid
having count(distinct indexid) = 1
(which is just your original query reformatted, with the distinct keyword moved to the correct place)
I've edited this answer because I misread the original requirements.
DECLARE @tab1 TABLE (TicketID INT, IndexID INT, Type INT)
INSERT
INTO @tab1 (TicketID, IndexID, Type)
VALUES (100,191,0)
,(100,192,2)
,(100,193,4)
,(200,194,0)
,(300,195,1)
,(300,196,0)
SELECT T.TicketID
,T.IndexID
,T.Type
FROM (
SELECT TicketID
,COUNT(IndexID) AS CountOfIndex
,CASE WHEN Type IN (0,2,4) THEN 1 ELSE 0 END AS ValidType
FROM @tab1
GROUP
BY TicketID
,CASE WHEN Type IN (0,2,4) THEN 1 ELSE 0 END
) DATA1
JOIN @tab1 T
ON T.TicketID = DATA1.TicketID
WHERE DATA1.CountOfIndex = 1
GROUP BY T.TicketID
,T.IndexID
,T.Type
HAVING MIN(DATA1.ValidType) = 1
This provides the following results:
TicketID IndexID Type
200 194 0
This query uses a derived table to first count the IndexID values for each TicketID, while also determining whether the Type column value is valid for inclusion in the final output.
The outer query then looks for the tickets that have a CountOfIndex of 1 and a minimum ValidType of 1 (eliminating any records where the TicketID was associated with an invalid Type value).
It may not be the cleanest solution, but I believe that this code highlights the thinking behind classifying wanted and unwanted data, and how to filter the unwanted data out.
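For comparison, here is a more compact single-pass sketch of the same idea. It is my own variant, and it relies on the fact that when a ticket has exactly one row, MIN() of each column simply returns that row's values:
SELECT TicketID
    , MIN(IndexID) AS IndexID
    , MIN(Type)    AS Type
FROM @tab1
GROUP BY TicketID
-- keep only tickets with a single row whose type is in the valid list
HAVING COUNT(*) = 1
   AND MIN(CASE WHEN Type IN (0, 2, 4) THEN 1 ELSE 0 END) = 1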
You need to group the data and do a count on each group. For example:
create table tab1
(
ticketid int,
indexid int,
type int
)
insert into tab1
values
( 100, 191, 0),
(100, 192, 2),
( 100, 193 , 4),
( 200 , 194 , 0),
( 300 , 195 , 1),
(300 , 196 , 0)
select *
from tab1
select ticketid
from tab1
--exclude tickets that contain the invalid types
where ticketID NOT IN (
--get tickets that contain an invalid type
select ticketID
from tab1
where type NOT IN (0,2,4)
)
group by ticketid
having count(1) = 1
I'm trying to create a query which will give me a row number for all the returned records. I can do that for all records present in the database. The problem is, I need to somehow retrieve a row number for a query with a WHERE clause (WHERE posts.status = 'published').
My original query looks like that:
SELECT
posts.*,
row_number() over (ORDER BY posts.score DESC) as position
FROM posts
However, adding a WHERE clause inside over() throws a syntax error:
SELECT
posts.*,
row_number() over (
WHERE posts.status = 'published'
ORDER BY posts.score DESC
) as position
FROM posts
SELECT posts.*, row_number() over (ORDER BY posts.score DESC) as position
FROM posts
WHERE posts.status = 'published'
Not quite sure what you are after. Maybe show an example of expected output. Here is an example of an approach:
create table posts(id int, score int, status text);
insert into posts values(1, 1, 'x');
insert into posts values(2, 2, 'published');
insert into posts values(3, 3, 'x');
insert into posts values(4, 4, 'x');
SELECT x.id, x.score, x.status
,CASE WHEN x.status = 'published' THEN null ELSE x.position END
FROM (SELECT posts.*,
row_number() OVER (ORDER BY posts.score DESC)
-SUM(CASE WHEN status = 'published' THEN 1 ELSE 0 END)
OVER (ORDER BY posts.score DESC) as position
FROM posts
) x
Result:
4 4 x 1
3 3 x 2
2 2 published
1 1 x 3
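If that matches what you want, a slightly simpler variant of the same idea (my own sketch, not from the question) is to partition the numbering by the published flag and blank out the published rows:
SELECT p.id, p.score, p.status
    -- number only the rows that are not published; published rows get NULL
    , CASE WHEN p.status = 'published' THEN null
           ELSE row_number() OVER (PARTITION BY (p.status = 'published')
                                   ORDER BY p.score DESC)
      END AS position
FROM posts p
ORDER BY p.score DESC;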
I have a table that looks like this:
x y
1 2
2 null
3 null
1 null
11 null
I want to fill the null values by applying a rolling
function y_{i+1}=y_{i}+x_{i+1}, with SQL as simple as possible (in place),
so the expected result is:
x y
1 2
2 4
3 7
1 8
11 19
This is implemented in PostgreSQL. I might encapsulate it in a window function, but implementing a custom function always seems complex.
WITH RECURSIVE t AS (
select x, y, 1 as rank from my_table where y is not null
UNION ALL
SELECT A.x, A.x + t.y AS y, t.rank + 1 AS rank FROM t
inner join
(select row_number() over () rank, x, y from my_table ) A
on t.rank+1 = A.rank
)
SELECT x,y FROM t;
You can iterate over rows using a recursive CTE. But in order to do so, you need a way to jump from row to row. Here's an example using an ID column:
; with recursive cte as
(
select id
, y
from Table1
where id = 1
union all
select cur.id
, prev.y + cur.x
from Table1 cur
join cte prev
on cur.id = prev.id + 1
)
select *
from cte
;
You can see the query at SQL Fiddle. If you don't have an ID column, but you do have another way to order the rows, you can use row_number() to get an ID:
; with recursive sorted as
(
-- Specify your ordering here. This example sorts by the dt column.
select row_number() over (order by dt) as id
, *
from Table1
)
, cte as
(
select id
, y
from sorted
where id = 1
union all
select cur.id
, prev.y + cur.x
from sorted cur
join cte prev
on cur.id = prev.id + 1
)
select *
from cte
;
Here's the SQL Fiddle link.
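Since the rule y_{i+1}=y_{i}+x_{i+1} means every later y is just the first y plus a running sum of the following x values, you can also avoid the recursion entirely with a window SUM. This is only a sketch under the assumption that the single non-null y sits on the first row (as in your sample) and that an id column orders the rows:
SELECT id
    , x
    -- first y + running sum of x, minus the first x (which is not added to y_1)
    , FIRST_VALUE(y) OVER (ORDER BY id)
      + SUM(x) OVER (ORDER BY id)
      - FIRST_VALUE(x) OVER (ORDER BY id) AS y
FROM Table1
ORDER BY id;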
A brief business scenario: the table below is for a goods receipt. The first few rows are the expected lines, each with a PurchaseOrder (PO). We then receive each expected line physically, and at that point the quantities may differ, for business reasons such as damaged or short quantities. So we maintain a status for each received line (e.g. OK, DAMAGE), and we also have to calculate the short quantity based on the total expected quantity of each item and the total of the received lines.
if object_id('DEV..Temp','U') is not null
drop table Temp
CREATE TABLE Temp
(
ID INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
Item VARCHAR(32),
PO VARCHAR(32) NULL,
ExpectedQty INT NULL,
ReceivedQty INT NULL,
[STATUS] VARCHAR(32) NULL,
BoxName VARCHAR(32) NULL
)
Note that the first few rows, the ones with PO data, are the expected lines;
the remaining rows are the received lines.
INSERT INTO TEMP (Item,PO,ExpectedQty,ReceivedQty,[STATUS],BoxName)
SELECT 'ITEM01','PO-01','30',NULL,NULL,NULL UNION ALL
SELECT 'ITEM01','PO-02','20',NULL,NULL,NULL UNION ALL
SELECT 'ITEM02','PO-01','40',NULL,NULL,NULL UNION ALL
SELECT 'ITEM03','PO-01','50',NULL,NULL,NULL UNION ALL
SELECT 'ITEM03','PO-02','30',NULL,NULL,NULL UNION ALL
SELECT 'ITEM03','PO-03','20',NULL,NULL,NULL UNION ALL
SELECT 'ITEM04','PO-01','30',NULL,NULL,NULL UNION ALL
SELECT 'ITEM01',NULL,NULL,'20','OK','box01' UNION ALL
SELECT 'ITEM01',NULL,NULL,'25','OK','box02' UNION ALL
SELECT 'ITEM01',NULL,NULL,'5','DAMAGE','box03' UNION ALL
SELECT 'ITEM02',NULL,NULL,'38','OK','box04' UNION ALL
SELECT 'ITEM02',NULL,NULL,'2','DAMAGE','box05' UNION ALL
SELECT 'ITEM03',NULL,NULL,'30','OK','box06' UNION ALL
SELECT 'ITEM03',NULL,NULL,'30','OK','box07' UNION ALL
SELECT 'ITEM03',NULL,NULL,'30','OK','box08' UNION ALL
SELECT 'ITEM03',NULL,NULL,'10','DAMAGE','box09' UNION ALL
SELECT 'ITEM04',NULL,NULL,'25','OK','box10'
The table below is my expected result based on the data above.
I need to show the data in the following way.
I would appreciate it if you could give me an appropriate query for it.
Note: the first row is blank because it is actually my table header. :)
SELECT '' as 'ITEM', '' as 'PO#', '' as 'ExpectedQty',
'' as 'ReceivedQty','' as 'DamageQty' ,'' as 'ShortQty' UNION ALL
SELECT 'ITEM01','PO-01','30','30','0' ,'0' UNION ALL
SELECT 'ITEM01','PO-02','20','15','5' ,'0' UNION ALL
SELECT 'ITEM02','PO-01','40','38','2' ,'0' UNION ALL
SELECT 'ITEM03','PO-01','50','50','0' ,'0' UNION ALL
SELECT 'ITEM03','PO-02','30','30','0' ,'0' UNION ALL
SELECT 'ITEM03','PO-03','20','10','10','0' UNION ALL
SELECT 'ITEM04','PO-01','30','25','0' ,'5'
Note: we never receive more than the expected quantity.
The solution should be based on SQL Server 2000.
You should reconsider how you store this data. Separate Expected and Received+Damaged in different tables (you have many unused (null) cells). This way any query should become more readable.
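For example, a minimal sketch of that split might look like this (the table and column names here are my own assumptions, not your existing schema):
CREATE TABLE ExpectedLine
(
    Item        VARCHAR(32) NOT NULL,
    PO          VARCHAR(32) NOT NULL,
    ExpectedQty INT         NOT NULL,
    PRIMARY KEY (Item, PO)
)
CREATE TABLE ReceivedLine
(
    Item        VARCHAR(32) NOT NULL,
    BoxName     VARCHAR(32) NOT NULL PRIMARY KEY,
    ReceivedQty INT         NOT NULL,
    [STATUS]    VARCHAR(32) NOT NULL  -- e.g. 'OK' or 'DAMAGE'
)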
I think what you try to do can be achieved more easily with a stored procedure.
Anyway, try this query:
SELECT Item, PO, ExpectedQty,
CASE WHEN [rec-consumed] > 0 THEN ExpectedQty
ELSE CASE WHEN [rec-consumed] + ExpectedQty > 0
THEN [rec-consumed] + ExpectedQty
ELSE 0
END
END ReceivedQty,
CASE WHEN [rec-consumed] < 0
THEN CASE WHEN DamageQty >= -1*[rec-consumed]
THEN -1*[rec-consumed]
ELSE DamageQty
END
ELSE 0
END DamageQty,
CASE WHEN [rec_damage-consumed] < 0
THEN DamageQty - [rec-consumed]
ELSE 0
END ShortQty
FROM (
select t1.Item,
t1.PO,
t1.ExpectedQty,
st.sum_ReceivedQty_OK
- (sum(COALESCE(t2.ExpectedQty,0))
+t1.ExpectedQty)
[rec-consumed],
st.sum_ReceivedQty_OK + st.sum_ReceivedQty_DAMAGE
- (sum(COALESCE(t2.ExpectedQty,0))
+t1.ExpectedQty)
[rec_damage-consumed],
st.sum_ReceivedQty_DAMAGE DamageQty
from Temp t1
left join Temp t2 on t1.Item = t2.Item
and t1.PO > t2.PO
and t2.PO is not null
join (select Item
, sum(CASE WHEN status = 'OK' THEN ReceivedQty ELSE 0 END)
sum_ReceivedQty_OK
, sum(CASE WHEN status != 'OK' THEN ReceivedQty ELSE 0 END)
sum_ReceivedQty_DAMAGE
from Temp where PO is null
group by Item) st on t1.Item = st.Item
where t1.PO is not null
group by t1.Item, t1.PO, t1.ExpectedQty,
st.sum_ReceivedQty_OK,
st.sum_ReceivedQty_DAMAGE
) a
order by Item, PO