Merge row columns from a query in SQL Server

I have a simple query which returns the following rows:
Current rows:
Empl  ECode  DCode  LCode     Earn  Dedn  Liab
====  =====  =====  ========  ====  ====  ====
123   PerHr  Null   Null        13     0     0
123   Null   Union  Null         0    10     0
123   Null   Per    Null         0    20     0
123   Null   Null   MyHealth     0     0     5
123   Null   Null   401          0     0    10
123   Null   Null   Train        0     0    15
123   Null   Null   CAFTA        0     0    20
However, I needed to see the above rows as follows:
Empl  ECode  DCode  LCode     Earn  Dedn  Liab
====  =====  =====  ========  ====  ====  ====
123   PerHr  Union  MyHealth    13    10     5
123   Null   Per    401          0    20    10
123   Null   Null   Train        0     0    15
123   Null   Null   CAFTA        0     0    20
It's more like merging the succeeding rows into the preceding rows wherever Nulls are encountered for ECode, DCode and LCode; what I really want is to roll everything up into the preceding rows.
In Oracle we had the LAST_VALUE function which we could use for this kind of thing, but here I simply cannot figure out what to do.
In the example above, ECode's amount column is Earn, DCode's is Dedn, and LCode's is Liab; notice that whenever ECode, DCode or LCode is not null, there is a corresponding value in the Earn, Dedn or Liab column.
By the way, we are using SQL Server 2008 R2 at work.
Hoping for your advice, thanks.

This is basically the same technique as Tango_Guy uses, but without the temporary tables and with the sort made explicit. Because the number of rows per Empl in each code column is <= the number of rows already in place, I didn't need a dummy table on the left of the joins; I just filtered the base data to rows where there was a match amongst the three codes. Also, following your discussion, Earn and ECode move together; in fact a non-zero Earn in a row without an ECode is effectively lost (a good case for a constraint: a non-zero Earn is not allowed when ECode is NULL):
http://sqlfiddle.com/#!3/7bd04/3
CREATE TABLE data(ID INT IDENTITY NOT NULL,
Empl VARCHAR(3),
ECode VARCHAR(8),
DCode VARCHAR(8),
LCode VARCHAR(8),
Earn INT NOT NULL,
Dedn INT NOT NULL,
Liab INT NOT NULL ) ;
INSERT INTO data (Empl, ECode, DCode, LCode, Earn, Dedn, Liab)
VALUES ('123', 'PerHr', NULL, NULL, 13, 0, 0),
('123', NULL, 'Union', NULL, 0, 10, 0),
('123', NULL, 'Per', NULL, 0, 20, 0),
('123', NULL, NULL, 'MyHealth', 0, 0, 5),
('123', NULL, NULL, '401', 0, 0, 10),
('123', NULL, NULL, 'Train', 0, 0, 15),
('123', NULL, NULL, 'CAFTA', 0, 0, 20);
WITH basedata AS (
SELECT *, ROW_NUMBER () OVER(ORDER BY ID) AS OrigSort, ROW_NUMBER () OVER(PARTITION BY Empl ORDER BY ID) AS EmplSort
FROM data
),
E AS (
SELECT Empl, ECode, Earn, ROW_NUMBER () OVER(PARTITION BY Empl ORDER BY OrigSort) AS EmplSort
FROM basedata
WHERE ECode IS NOT NULL
),
D AS (
SELECT Empl, DCode, Dedn, ROW_NUMBER () OVER(PARTITION BY Empl ORDER BY OrigSort) AS EmplSort
FROM basedata
WHERE DCode IS NOT NULL
),
L AS (
SELECT Empl, LCode, Liab, ROW_NUMBER () OVER(PARTITION BY Empl ORDER BY OrigSort) AS EmplSort
FROM basedata
WHERE LCode IS NOT NULL
)
SELECT basedata.Empl, E.ECode, D.DCode, L.LCode, E.Earn, D.Dedn, L.Liab
FROM basedata
LEFT JOIN E
ON E.Empl = basedata.Empl AND E.EmplSort = basedata.EmplSort
LEFT JOIN D
ON D.Empl = basedata.Empl AND D.EmplSort = basedata.EmplSort
LEFT JOIN L
ON L.Empl = basedata.Empl AND L.EmplSort = basedata.EmplSort
WHERE E.ECode IS NOT NULL OR D.DCode IS NOT NULL OR L.LCode IS NOT NULL
ORDER BY basedata.Empl, basedata.EmplSort
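As an aside, the constraint suggested above could be sketched roughly like this (a sketch against the sample data table above; the constraint name is made up, and similar checks could be added for Dedn/DCode and Liab/LCode):
-- disallow a non-zero Earn on a row whose ECode is NULL
ALTER TABLE data ADD CONSTRAINT CK_data_Earn_requires_ECode
CHECK (NOT (ECode IS NULL AND Earn <> 0));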

Not sure if it is what you need, but have you tried COALESCE?
SELECT Name, Class, Color, ProductNumber,
COALESCE(Class, Color, ProductNumber) AS FirstNotNull
FROM Production.Product ;
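Applied to the sample data table created in the answer above, that would look something like this (a sketch; it only collapses the three code columns into one value per row, so it may not give the full rollup you're after):
SELECT Empl, ECode, DCode, LCode,
       COALESCE(ECode, DCode, LCode) AS FirstNotNullCode
FROM data;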

I have a solution, but it is very kludgy. If anyone has something better, that would be great.
However, an algorithm:
1) Get rownumbers for each distinct list of values in the columns
2) Join all columns based on rownumber
Example:
select Distinct ECode into #Ecode from source_table order by rowid;
select Distinct DCode into #Dcode from source_table order by rowid;
select Distinct LCode into #Lcode from source_table order by rowid;
select Distinct Earn into #Earn from source_table order by rowid;
select Distinct Dedn into #Dedn from source_table order by rowid;
select Distinct Liab into #Liab from source_table order by rowid;
select b.ECode, c.DCode, d.LCode, e.Earn, f.Dedn, g.Liab
from source_table a -- Note: a source for row numbers that will be >= the below
left outer join #Ecode b on a.rowid = b.rowid
left outer join #DCode c on a.rowid = c.rowid
left outer join #LCode d on a.rowid = d.rowid
left outer join #Earn e on a.rowid = e.rowid
left outer join #Dedn f on a.rowid = f.rowid
left outer join #Liab g on a.rowid = g.rowid
where
b.ecode is not null or
c.dcode is not null or
d.lcode is not null or
e.earn is not null or
f.dedn is not null or
g.liab is not null;
I didn't include Empl, since I don't know what role you want it to play. If this is all true for a given Empl, then you could just add it, join on it, and carry it through.
I don't like this solution at all, so hopefully someone else will come up with something more elegant.
Best,
David

Related

Postgres very hard dynamic select statement with COALESCE

Given tables and data like this:
CREATE TABLE solicitations
(
id SERIAL PRIMARY KEY,
name text
);
CREATE TABLE donations
(
id SERIAL PRIMARY KEY,
solicitation_id integer REFERENCES solicitations, -- can be null
created_at timestamp without time zone NOT NULL DEFAULT (now() at time zone 'utc'),
amount bigint NOT NULL DEFAULT 0
);
INSERT INTO solicitations (name) VALUES
('solicitation1'), ('solicitation2');
INSERT INTO donations (created_at, solicitation_id, amount) VALUES
('2018-06-26', null, 10), ('2018-06-26', 1, 20), ('2018-06-26', 2, 30),
('2018-06-27', null, 10), ('2018-06-27', 1, 20),
('2018-06-28', null, 10), ('2018-06-28', 1, 20), ('2018-06-28', 2, 30);
How can I make the solicitation ids dynamic in the following select statement, using only Postgres?
SELECT
"created_at"
-- make dynamic this begins
, COALESCE("no_solicitation", 0) AS "no_solicitation"
, COALESCE("1", 0) AS "1"
, COALESCE("2", 0) AS "2"
-- make dynamic this ends
FROM crosstab(
$source_sql$
SELECT
created_at::date as row_id
, COALESCE(solicitation_id::text, 'no_solicitation') as category
, SUM(amount) as value
FROM donations
GROUP BY row_id, category
ORDER BY row_id, category
$source_sql$
, $category_sql$
-- parametrize with ids from here begins
SELECT unnest('{no_solicitation}'::text[] || ARRAY(SELECT DISTINCT id::text FROM solicitations ORDER BY id))
-- parametrize with ids from here ends
$category_sql$
) AS ct (
"created_at" date
-- make dynamic this begins
, "no_solicitation" bigint
, "1" bigint
, "2" bigint
-- make dynamic this ends
)
The select should return data like this
created_at no_solicitation 1 2
____________________________________
2018-06-26 10 20 30
2018-06-27 10 20 0
2018-06-28 10 20 30
The solicitation ids that should parametrize the select are the same as in
SELECT unnest('{no_solicitation}'::text[] || ARRAY(SELECT DISTINCT id::text FROM solicitations ORDER BY id))
One can fiddle the code here
I decided to use json, which is much simpler than crosstab
WITH
all_solicitation_ids AS (
SELECT
unnest('{no_solicitation}'::text[] ||
ARRAY(SELECT DISTINCT id::text FROM solicitations ORDER BY id))
AS col
)
, all_days AS (
SELECT
-- TODO: compute days ad hoc, from min created_at day of donations to max created_at day of donations
generate_series('2018-06-26', '2018-06-28', '1 day'::interval)::date
AS col
)
, all_days_and_all_solicitation_ids AS (
SELECT
all_days.col AS created_at
, all_solicitation_ids.col AS solicitation_id
FROM all_days, all_solicitation_ids
ORDER BY all_days.col, all_solicitation_ids.col
)
, donations_ AS (
SELECT
created_at::date as created_at
, COALESCE(solicitation_id::text, 'no_solicitation') as solicitation_id
, SUM(amount) as amount
FROM donations
GROUP BY created_at, solicitation_id
ORDER BY created_at, solicitation_id
)
, donations__ AS (
SELECT
all_days_and_all_solicitation_ids.created_at
, all_days_and_all_solicitation_ids.solicitation_id
, COALESCE(donations_.amount, 0) AS amount
FROM all_days_and_all_solicitation_ids
LEFT JOIN donations_
ON all_days_and_all_solicitation_ids.created_at = donations_.created_at
AND all_days_and_all_solicitation_ids.solicitation_id = donations_.solicitation_id
)
SELECT
jsonb_object_agg(solicitation_id, amount) ||
jsonb_object_agg('date', created_at)
AS data
FROM donations__
GROUP BY created_at
which results in:
data
______________________________________________________________
{"1": 20, "2": 30, "date": "2018-06-28", "no_solicitation": 10}
{"1": 20, "2": 30, "date": "2018-06-26", "no_solicitation": 10}
{"1": 20, "2": 0, "date": "2018-06-27", "no_solicitation": 10}
Though it's not quite what I requested: it returns only a single data column instead of date, no_solicitation, 1, 2, .... To get separate columns I would need to use json_to_record, but I don't know how to produce its column-definition argument dynamically.
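For reference, the static (non-dynamic) form would look roughly like the sketch below, using one of the jsonb rows above; the explicit column list in the AS clause is exactly the part that cannot be parametrized without dynamic SQL:
SELECT rec.*
FROM jsonb_to_record(
       '{"1": 20, "2": 30, "date": "2018-06-28", "no_solicitation": 10}'::jsonb
     ) AS rec("date" date, "no_solicitation" bigint, "1" bigint, "2" bigint);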

Parse Text File, Group rows based on Text string marker

I am importing a large text file that consists of several 'reports'. Each report consists of several rows of data. The only way I know a new report has started is that the line starts with "XX"; all rows following belong to that master "XX" row. I am trying to add a grouping ID so that I can work with the data and parse it into the database.
CREATE TABLE RawData(
ID int IDENTITY(1,1) NOT NULL
,Grp1 int NULL
,Grp2 int NULL
,Rowdata varchar(max) NULL
)
INSERT INTO RawData(Rowdata) VALUES ('XX Monday')
INSERT INTO RawData(Rowdata) VALUES ('Tues day')
INSERT INTO RawData(Rowdata) VALUES ('We d ne s day')
INSERT INTO RawData(Rowdata) VALUES ('Thurs day')
INSERT INTO RawData(Rowdata) VALUES ('F r i d day')
INSERT INTO RawData(Rowdata) VALUES ('XX January')
INSERT INTO RawData(Rowdata) VALUES ('Feb r u a')
INSERT INTO RawData(Rowdata) VALUES ('XX Sun d a y')
INSERT INTO RawData(Rowdata) VALUES ('Sat ur day')
I need to write a script that will update the Grp1 field based on where the "XX" line is at. When I am finished I'd like the table to look like this:
ID Grp1 Grp2 RowData
1 1 1 XX Monday
2 1 2 Tues day
3 1 3 We d ne s day
4 1 4 Thurs day
5 1 5 F r i d day
6 2 1 XX January
7 2 2 Feb r u a
8 3 1 XX Sun d a y
9 3 2 Sat ur day
I know that for the Grp2 field I can use DENSE_RANK. The issue I am having is how to fill in all the values for Grp1. I can do an update where I see the 'XX', but that does not fill in the values below it.
Thank you for any advice/help.
This should do the trick
-- sample data
DECLARE @RawData TABLE
(
ID int IDENTITY(1,1) NOT NULL
,Grp1 int NULL
,Grp2 int NULL
,Rowdata varchar(max) NULL
);
INSERT INTO @RawData(Rowdata)
VALUES ('XX Monday'),('Tues day'),('We d ne s day'),('Thurs day'),('F r i d day'),
('XX January'),('Feb r u a'),('XX Sun d a y'),('Sat ur day');
-- solution
WITH rr AS
(
SELECT ID, thisVal = ROW_NUMBER() OVER (ORDER BY ID)
FROM @RawData
WHERE RowData LIKE 'XX %'
),
makeGrp1 AS
(
SELECT
ID,
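-- Grp1 = ordinal of the most recent 'XX' marker row at or before this ID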
Grp1 = (SELECT MAX(thisVal) FROM rr WHERE r.id >= rr.id),
RowData
FROM @RawData r
)
SELECT
ID,
Grp1,
Grp2 = ROW_NUMBER() OVER (PARTITION BY Grp1 ORDER BY ID),
RowData
FROM makeGrp1;
UPDATE: below is the code to update your @RawData table; I just re-read the requirement. I'm leaving the original solution in place as it will help you better understand how my update works:
-- sample data
DECLARE @RawData TABLE
(
ID int IDENTITY(1,1) NOT NULL
,Grp1 int NULL
,Grp2 int NULL
,Rowdata varchar(max) NULL
);
INSERT INTO @RawData(Rowdata)
VALUES ('XX Monday'),('Tues day'),('We d ne s day'),('Thurs day'),('F r i d day'),
('XX January'),('Feb r u a'),('XX Sun d a y'),('Sat ur day');
-- Solution to update the #RawData Table
WITH rr AS
(
SELECT ID, thisVal = ROW_NUMBER() OVER (ORDER BY ID)
FROM @RawData
WHERE RowData LIKE 'XX %'
),
makeGroups AS
(
SELECT
ID,
Grp1 = (SELECT MAX(thisVal) FROM rr WHERE r.id >= rr.id),
Grp2 = ROW_NUMBER()
OVER (PARTITION BY (SELECT MAX(thisVal) FROM rr WHERE r.id >= rr.id) ORDER BY ID)
FROM @RawData r
)
UPDATE @RawData
SET Grp1 = mg.Grp1, Grp2 = mg.Grp2
FROM makeGroups mg
JOIN @RawData rd ON mg.ID = rd.ID;
;with cte0 as (
Select *,Flag = case when RowData like 'XX%' then 1 else 0 end
From RawData )
Update RawData
Set Grp1 = B.Grp1
,Grp2 = B.Grp2
From RawData U
Join (
Select ID
,Grp1 = Sum(Flag) over (Order by ID)
,Grp2 = Row_Number() over (Partition By (Select Sum(Flag) From cte0 Where ID<=a.ID) Order by ID)
From cte0 A
) B on U.ID=B.ID
Select * from RawData
After the update, RawData matches the expected output shown in the question.

Running Totals with debit credit and previous row SQL Server 2012

I am having problems recalculating running totals.
I have a situation where we have duplicate transactions; these must be deleted, and the initial and closing balances must be recalculated based on the amount, taking into account whether IsDebit is set.
My attempt was nested cursors (parent-child), where the parent selects all the distinct BookingNo values and the child does the calculation. It looks very messy and it didn't work, so I haven't posted it because I didn't want to confuse things.
I know that in SQL Server 2012 you can use SUM ... OVER (PARTITION BY ...), but I cannot figure out how to make it handle the deleted rows etc.
Below is what I did so far
--Create Table for testing
IF object_id(N'TestTransaction', 'U') IS NOT NULL DROP TABLE TestTransaction
GO
CREATE TABLE [TestTransaction]
(
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[BookingNo] [bigint] NOT NULL,
[IsDebit] [bit] NOT NULL,
[Amount] [decimal](18, 2) NOT NULL,
[InitialBalance] [decimal](18, 2) NOT NULL,
[ClosingBalance] [decimal](18, 2) NOT NULL
) ON [PRIMARY]
GO
INSERT [TestTransaction] ([BookingNo], [IsDebit], [Amount], [InitialBalance], [ClosingBalance])
SELECT 200, 0, 100, 2000,2100 UNION ALL
SELECT 200, 0, 100, 2100,2200 UNION ALL
SELECT 200, 1, 150, 2150,2000 UNION ALL
SELECT 200, 0, 300, 2000,2300 UNION ALL
SELECT 200, 0, 400, 2300,2700 UNION ALL
SELECT 200, 0, 250, 2700,2950 UNION ALL
SELECT 200, 0, 250, 2950,3200
--- end of setup
IF OBJECT_ID('tempdb..#tmpTransToDelete') IS NOT NULL DROP TABLE #tmpTransToDelete
GO
CREATE TABLE #tmpTransToDelete
( BookingNo bigint,
Isdebit bit,
amount decimal(18,2),
InitialBalance decimal(18,2),
ClosingBalance decimal(18,2)
)
DECLARE @RunningInitialBalance decimal(18,2),@RunningClosingBalance decimal(18,2)
INSERT #tmpTransToDelete(BookingNo,Isdebit,amount,InitialBalance,ClosingBalance)
SELECT BookingNo,Isdebit,amount,InitialBalance,ClosingBalance
FROM TestTransaction
WHERE ID IN (1,6)
--Delete all duplicate transaction (just to prove the point)
DELETE TestTransaction WHERE ID IN (1,6)
-- now taking into account the deleted rows recalculate the lot and update the table.
Any help? Suggestions?
edited
Results should be
Id BookingNo IsDebit Amount InitialBalance ClosingBalance
2 200 0 100.00 2000.00 2000.00
3 200 1 150.00 2000.00 2150.00
4 200 0 300.00 2150.00 2450.00
5 200 0 400.00 2450.00 2850.00
7 200 0 250.00 2600.00 2850.00
The RunningTotal approach in my previous response would work if there were transactional data accounting for the initial balance. Since that evidently isn't the case, I would say you can't delete any rows without also applying the relative difference to all subsequent rows as part of the same transaction. Moreover, I'm convinced your initial sample data is wrong, which only adds to the confusion; it seems to me it should be as follows:
SELECT 200, 0, 100, 2000,2100 UNION ALL
SELECT 200, 0, 100, 2100,2200 UNION ALL
SELECT 200, 1, 150, 2200,2050 UNION ALL
SELECT 200, 0, 300, 2050,2350 UNION ALL
SELECT 200, 0, 400, 2350,2750 UNION ALL
SELECT 200, 0, 250, 2750,3000 UNION ALL
SELECT 200, 0, 250, 3000,3250
With that rectified, here's how I'd write the delete-and-update transaction:
BEGIN TRAN
DECLARE @tbd TABLE (
Id bigint
,BookingNo bigint
,Amount decimal(18,2)
);
DELETE FROM TestTransaction
OUTPUT deleted.Id
, deleted.BookingNo
, deleted.Amount * IIF(deleted.IsDebit = 0, 1, -1) AS Amount
INTO @tbd
WHERE ID IN (1,6);
WITH adj
AS (
SELECT tt.BookingNo, tt.Id, SUM(tbd.amount) AS Amount
FROM TestTransaction tt
JOIN @tbd tbd ON tt.BookingNo = tbd.BookingNo AND tbd.id <= tt.id
GROUP BY tt.BookingNo, tt.Id
)
UPDATE tt
SET InitialBalance -= adj.Amount
,ClosingBalance -= adj.Amount
FROM TestTransaction tt
JOIN adj ON tt.BookingNo = adj.BookingNo AND tt.Id = adj.Id;
COMMIT TRAN
Which yields a final result of:
Id BookingNo IsDebit Amount InitialBalance ClosingBalance
2 200 0 100.00 2000.00 2100.00
3 200 1 150.00 2100.00 1950.00
4 200 0 300.00 1950.00 2250.00
5 200 0 400.00 2250.00 2650.00
7 200 0 250.00 2650.00 2900.00
Here's an example of a running total using your data:
SELECT BookingNo
, Amount
, IsDebit
, SUM(Amount * IIF(IsDebit = 0, 1, -1)) OVER (PARTITION BY BookingNo ORDER BY Id ROWS UNBOUNDED PRECEDING) AS RunningTotal
FROM TestTransaction

SQL Running Subtraction

A brief outline of the business scenario: the table below is for a goods receipt. The first few lines are the expected lines, each with a PurchaseOrder (PO). We then receive each expected line physically, and at that point the quantities may differ, for example because goods are damaged or short-shipped. So we maintain a status for each received line (e.g. OK, DAMAGE), and we have to calculate the short quantity from the total expected quantity of each item versus the total of its received lines.
if object_id('DEV..Temp','U') is not null
drop table Temp
CREATE TABLE Temp
(
ID INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
Item VARCHAR(32),
PO VARCHAR(32) NULL,
ExpectedQty INT NULL,
ReceivedQty INT NULL,
[STATUS] VARCHAR(32) NULL,
BoxName VARCHAR(32) NULL
)
The first few lines (the ones with PO data) are the expected lines, and the remaining lines are the received lines:
INSERT INTO TEMP (Item,PO,ExpectedQty,ReceivedQty,[STATUS],BoxName)
SELECT 'ITEM01','PO-01','30',NULL,NULL,NULL UNION ALL
SELECT 'ITEM01','PO-02','20',NULL,NULL,NULL UNION ALL
SELECT 'ITEM02','PO-01','40',NULL,NULL,NULL UNION ALL
SELECT 'ITEM03','PO-01','50',NULL,NULL,NULL UNION ALL
SELECT 'ITEM03','PO-02','30',NULL,NULL,NULL UNION ALL
SELECT 'ITEM03','PO-03','20',NULL,NULL,NULL UNION ALL
SELECT 'ITEM04','PO-01','30',NULL,NULL,NULL UNION ALL
SELECT 'ITEM01',NULL,NULL,'20','OK','box01' UNION ALL
SELECT 'ITEM01',NULL,NULL,'25','OK','box02' UNION ALL
SELECT 'ITEM01',NULL,NULL,'5','DAMAGE','box03' UNION ALL
SELECT 'ITEM02',NULL,NULL,'38','OK','box04' UNION ALL
SELECT 'ITEM02',NULL,NULL,'2','DAMAGE','box05' UNION ALL
SELECT 'ITEM03',NULL,NULL,'30','OK','box06' UNION ALL
SELECT 'ITEM03',NULL,NULL,'30','OK','box07' UNION ALL
SELECT 'ITEM03',NULL,NULL,'30','OK','box08' UNION ALL
SELECT 'ITEM03',NULL,NULL,'10','DAMAGE','box09' UNION ALL
SELECT 'ITEM04',NULL,NULL,'25','OK','box10'
The table below is my expected result based on the above data; this is how I need to present it, so I would appreciate an appropriate query for it.
Note: the first row is blank and is actually my table header. :)
SELECT '' as 'ITEM', '' as 'PO#', '' as 'ExpectedQty',
'' as 'ReceivedQty','' as 'DamageQty' ,'' as 'ShortQty' UNION ALL
SELECT 'ITEM01','PO-01','30','30','0' ,'0' UNION ALL
SELECT 'ITEM01','PO-02','20','15','5' ,'0' UNION ALL
SELECT 'ITEM02','PO-01','40','38','2' ,'0' UNION ALL
SELECT 'ITEM03','PO-01','50','50','0' ,'0' UNION ALL
SELECT 'ITEM03','PO-02','30','30','0' ,'0' UNION ALL
SELECT 'ITEM03','PO-03','20','10','10','0' UNION ALL
SELECT 'ITEM04','PO-01','30','25','0' ,'5'
Note: we never receive more than expected.
The solution should work on SQL Server 2000.
You should reconsider how you store this data. Separate Expected and Received+Damaged in different tables (you have many unused (null) cells). This way any query should become more readable.
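For example, a split along those lines might look roughly like this (a sketch only; the table and column layout is illustrative, not from the original post):
CREATE TABLE ExpectedLine
(
    Item VARCHAR(32),
    PO VARCHAR(32),
    ExpectedQty INT
)
CREATE TABLE ReceivedLine
(
    Item VARCHAR(32),
    ReceivedQty INT,
    [STATUS] VARCHAR(32),   -- 'OK' or 'DAMAGE'
    BoxName VARCHAR(32)
)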
I think what you are trying to do can be achieved more easily with a stored procedure.
Anyway, try this query:
SELECT Item, PO, ExpectedQty,
CASE WHEN [rec-consumed] > 0 THEN ExpectedQty
ELSE CASE WHEN [rec-consumed] + ExpectedQty > 0
THEN [rec-consumed] + ExpectedQty
ELSE 0
END
END ReceivedQty,
CASE WHEN [rec-consumed] < 0
THEN CASE WHEN DamageQty >= -1*[rec-consumed]
THEN -1*[rec-consumed]
ELSE DamageQty
END
ELSE 0
END DamageQty,
CASE WHEN [rec_damage-consumed] < 0
THEN DamageQty - [rec-consumed]
ELSE 0
END ShortQty
FROM (
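-- #tt is assumed to hold the sample rows loaded from the Temp table above
-- [rec-consumed] = total OK-received qty minus the expected qty consumed by this PO and all earlier POs of the same item
-- [rec_damage-consumed] = the same, but counting OK + DAMAGE receipts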
select t1.Item,
t1.PO,
t1.ExpectedQty,
st.sum_ReceivedQty_OK
- (sum(COALESCE(t2.ExpectedQty,0))
+t1.ExpectedQty)
[rec-consumed],
st.sum_ReceivedQty_OK + st.sum_ReceivedQty_DAMAGE
- (sum(COALESCE(t2.ExpectedQty,0))
+t1.ExpectedQty)
[rec_damage-consumed],
st.sum_ReceivedQty_DAMAGE DamageQty
from #tt t1
left join #tt t2 on t1.Item = t2.Item
and t1.PO > t2.PO
and t2.PO is not null
join (select Item
, sum(CASE WHEN status = 'OK' THEN ReceivedQty ELSE 0 END)
sum_ReceivedQty_OK
, sum(CASE WHEN status != 'OK' THEN ReceivedQty ELSE 0 END)
sum_ReceivedQty_DAMAGE
from #tt where PO is null
group by Item) st on t1.Item = st.Item
where t1.PO is not null
group by t1.Item, t1.PO, t1.ExpectedQty,
st.sum_ReceivedQty_OK,
st.sum_ReceivedQty_DAMAGE
) a
order by Item, PO

Dealing with periods and dates without using cursors

I would like to solve this issue without using cursors (FETCH).
Here comes the problem...
1st Table/quantity
------------------
periodid periodstart periodend quantity
1 2010/10/01 2010/10/15 5
2nd Table/sold items
-----------------------
periodid periodstart periodend solditems
14343 2010/10/05 2010/10/06 2
Now I would like to get the following view or just query result:
Table/stock
-----------------------
periodstart periodend itemsinstock
2010/10/01 2010/10/04 5
2010/10/05 2010/10/06 3
2010/10/07 2010/10/15 5
It seems impossible to solve this problem without using cursors, or without using single dates instead of periods.
I would appreciate any help.
Thanks
DECLARE @t1 TABLE (periodid INT,periodstart DATE,periodend DATE,quantity INT)
DECLARE @t2 TABLE (periodid INT,periodstart DATE,periodend DATE,solditems INT)
INSERT INTO @t1 VALUES(1,'2010-10-01T00:00:00.000','2010-10-15T00:00:00.000',5)
INSERT INTO @t2 VALUES(14343,'2010-10-05T00:00:00.000','2010-10-06T00:00:00.000',2)
DECLARE @D1 DATE
SELECT @D1 = MIN(P) FROM (SELECT MIN(periodstart) P FROM @t1
UNION ALL
SELECT MIN(periodstart) FROM @t2) D
DECLARE @D2 DATE
SELECT @D2 = MAX(P) FROM (SELECT MAX(periodend) P FROM @t1
UNION ALL
SELECT MAX(periodend) FROM @t2) D
;WITH
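-- L0..L4 repeatedly cross-join a two-row CTE, yielding 2^16 rows; Nums numbers them
-- and Dates turns the numbers into one row per day between @D1 and @D2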
L0 AS (SELECT 1 AS c UNION ALL SELECT 1),
L1 AS (SELECT 1 AS c FROM L0 A CROSS JOIN L0 B),
L2 AS (SELECT 1 AS c FROM L1 A CROSS JOIN L1 B),
L3 AS (SELECT 1 AS c FROM L2 A CROSS JOIN L2 B),
L4 AS (SELECT 1 AS c FROM L3 A CROSS JOIN L3 B),
Nums AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) AS i FROM L4),
Dates AS(SELECT DATEADD(DAY,i-1,@D1) AS D FROM Nums where i <= 1+DATEDIFF(DAY,@D1,@D2)) ,
Stock As (
SELECT D ,t1.quantity - ISNULL(t2.solditems,0) AS itemsinstock
FROM Dates
LEFT OUTER JOIN @t1 t1 ON t1.periodend >= D and t1.periodstart <= D
LEFT OUTER JOIN @t2 t2 ON t2.periodend >= D and t2.periodstart <= D ),
NStock As (
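-- gaps-and-islands: the difference between the two ROW_NUMBER()s is constant within
-- each run of consecutive days that share the same itemsinstock value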
select D,itemsinstock, ROW_NUMBER() over (order by D) - ROW_NUMBER() over (partition by itemsinstock order by D) AS G
from Stock)
SELECT MIN(D) AS periodstart, MAX(D) AS periodend, itemsinstock
FROM NStock
GROUP BY G, itemsinstock
ORDER BY periodstart
Hopefully a little easier to read than Martin's. I used different tables and sample data, hopefully extrapolating the right info:
CREATE TABLE [dbo].[Quantity](
[PeriodStart] [date] NOT NULL,
[PeriodEnd] [date] NOT NULL,
[Quantity] [int] NOT NULL
) ON [PRIMARY]
CREATE TABLE [dbo].[SoldItems](
[PeriodStart] [date] NOT NULL,
[PeriodEnd] [date] NOT NULL,
[SoldItems] [int] NOT NULL
) ON [PRIMARY]
INSERT INTO Quantity (PeriodStart,PeriodEnd,Quantity)
SELECT '20100101','20100115',5
INSERT INTO SoldItems (PeriodStart,PeriodEnd,SoldItems)
SELECT '20100105','20100107',2 union all
SELECT '20100106','20100108',1
The actual query is now:
;WITH Dates as (
select PeriodStart as DateVal from SoldItems union select PeriodEnd from SoldItems union select PeriodStart from Quantity union select PeriodEnd from Quantity
), Periods as (
select d1.DateVal as StartDate, d2.DateVal as EndDate
from Dates d1 inner join Dates d2 on d1.DateVal < d2.DateVal left join Dates d3 on d1.DateVal < d3.DateVal and d3.DateVal < d2.DateVal where d3.DateVal is null
), QuantitiesSold as (
select StartDate,EndDate,COALESCE(SUM(si.SoldItems),0) as Quantity
from Periods p left join SoldItems si on p.StartDate < si.PeriodEnd and si.PeriodStart < p.EndDate
group by StartDate,EndDate
)
select StartDate,EndDate,q.Quantity - qs.Quantity
from QuantitiesSold qs inner join Quantity q on qs.StartDate < q.PeriodEnd and q.PeriodStart < qs.EndDate
And the result is:
StartDate EndDate (No column name)
2010-01-01 2010-01-05 5
2010-01-05 2010-01-06 3
2010-01-06 2010-01-07 2
2010-01-07 2010-01-08 4
2010-01-08 2010-01-15 5
Explanation: I'm using three Common Table Expressions. The first (Dates) is gathering all of the dates that we're talking about, from the two tables involved. The second (Periods) selects consecutive values from the Dates CTE. And the third (QuantitiesSold) then finds items in the SoldItems table that overlap these periods, and adds their totals together. All that remains in the outer select is to subtract these quantities from the total quantity stored in the Quantity Table
John, what you could do is a WHILE loop. Declare and initialise 2 variables before your loop, one being the start date and the other being end date. Your loop would then look like this:
DECLARE @StartEnd DATETIME, @EndDate DATETIME
SET @StartEnd = '20101001'  -- for example, the overall period start from the data above
SET @EndDate = '20101015'   -- for example, the overall period end from the data above
WHILE(@StartEnd <= @EndDate)
BEGIN
--processing goes here
SET @StartEnd = @StartEnd + 1
END
You would need to store your period definitions in another table, so you could retrieve those and output rows when required to a temporary table.
Let me know if you need any more detailed examples, or if I've got the wrong end of the stick!
Damien,
I am trying to fully understand your solution and test it on a larger set of data, but I receive the following errors for your code:
Msg 102, Level 15, State 1, Line 20
Incorrect syntax near 'Dates'.
Msg 102, Level 15, State 1, Line 22
Incorrect syntax near ','.
Msg 102, Level 15, State 1, Line 25
Incorrect syntax near ','.
Damien,
Based on your solution I also wanted to get a neat display for StockItems without overlapping dates. How about this solution?
CREATE TABLE [dbo].[SoldItems](
[PeriodStart] [datetime] NOT NULL,
[PeriodEnd] [datetime] NOT NULL,
[SoldItems] [int] NOT NULL
) ON [PRIMARY]
INSERT INTO SoldItems (PeriodStart,PeriodEnd,SoldItems)
SELECT '20100105','20100106',2 union all
SELECT '20100105','20100108',3 union all
SELECT '20100115','20100116',1 union all
SELECT '20100101','20100120',10
;WITH Dates as (
select PeriodStart as DateVal from SoldItems
union
select PeriodEnd from SoldItems
union
select PeriodStart from Quantity
union
select PeriodEnd from Quantity
), Periods as (
select d1.DateVal as StartDate, d2.DateVal as EndDate
from Dates d1
inner join Dates d2 on d1.DateVal < d2.DateVal
left join Dates d3 on d1.DateVal < d3.DateVal and
d3.DateVal < d2.DateVal where d3.DateVal is null
), QuantitiesSold as (
select StartDate,EndDate,SUM(si.SoldItems) as Quantity
from Periods p left join SoldItems si on p.StartDate < si.PeriodEnd and si.PeriodStart < p.EndDate
group by StartDate,EndDate
)
select StartDate,EndDate, qs.Quantity
from QuantitiesSold qs
where qs.quantity is not null