Postgres SQL Grouping By column value inside of Case statement - postgresql

t1:
AccountName  Date        Amount
A1           2022-06-30  2
A2           2022-06-30  1
A3           2022-06-30
A1           2022-07-31  4
A2           2022-07-31  5
A3           2022-07-31
I want to transform this table so that the "Amount" column is filled in for all rows with account name 'A3'. For each month group, the 'A3' "Amount" value should equal (the 'A1' "Amount" value + the 'A2' "Amount" value), so the expected result table is:
AccountName  Date        Amount
A1           2022-06-30  2
A2           2022-06-30  1
A3           2022-06-30  3
A1           2022-07-31  4
A2           2022-07-31  5
A3           2022-07-31  9
The only way I can think of to solve this is using multiple CTEs to separate each 'Date' value, a CASE statement with multiple subselects to get the values, and then a UNION at the end:
with d1 as (
    select *
    from t1
    WHERE Date = '2022-06-30'),
c1 as (
    SELECT
        "AccountName",
        "Date",
        CASE WHEN "AccountName" = 'A3'
             THEN (SELECT "Amount" FROM t1 WHERE "AccountName" = 'A1') +
                  (SELECT "Amount" FROM t1 WHERE "AccountName" = 'A2')
             ELSE "Amount" END AS "Amount"
    FROM d1),
d2 as (
    select *
    from t1
    WHERE Date = '2022-07-31'),
c2 as (
    SELECT
        "AccountName",
        "Date",
        CASE WHEN "AccountName" = 'A3'
             THEN (SELECT "Amount" FROM t1 WHERE "AccountName" = 'A1') +
                  (SELECT "Amount" FROM t1 WHERE "AccountName" = 'A2')
             ELSE "Amount" END AS "Amount"
    FROM d2)
SELECT * FROM c1
UNION
SELECT * FROM c2
Is there a better way of doing this? I have multiple row calculations based on other row values, and on top of that multiple distinct 'Date' values (24), for each of which I would have to create a separate CTE. This would result in an extremely long SQL script. Is there maybe a way to group by every 'Date' value in the date column to avoid making multiple CTEs for each 'Date' value? Additionally, is there a better way to construct the sums for the 'Amount' values of all 'A3' rows, rather than using multiple selects inside each CASE WHEN? Thanks!

You can use a window function for this - no need to hardcode the dates:
SELECT
    "AccountName",
    "Date",
    (CASE WHEN "AccountName" = 'A3'
          THEN SUM("Amount") FILTER (WHERE "AccountName" IN ('A1', 'A2')) OVER (PARTITION BY "Date")
          ELSE "Amount"
     END) AS "Amount"
FROM t1
An equivalent query using subqueries would be
SELECT
    "AccountName",
    "Date",
    (CASE WHEN "AccountName" = 'A3'
          THEN (
              SELECT SUM("Amount")
              FROM t1
              WHERE "Date" = t1out."Date"
                AND "AccountName" IN ('A1', 'A2')
          )
          ELSE "Amount"
     END) AS "Amount"
FROM t1 t1out
or, assuming that the amounts of A1 and A2 are never NULL:
SELECT
    "AccountName",
    "Date",
    (CASE WHEN "AccountName" = 'A3'
          THEN (
              SELECT "Amount"
              FROM t1
              WHERE "Date" = t1out."Date"
                AND "AccountName" = 'A1'
          ) + (
              SELECT "Amount"
              FROM t1
              WHERE "Date" = t1out."Date"
                AND "AccountName" = 'A2'
          )
          ELSE "Amount"
     END) AS "Amount"
FROM t1 t1out
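As a quick sanity check, the window-function idea can be run against the question's sample data with Python's sqlite3 (SQLite also implements window functions); the portable SUM(CASE ...) OVER form stands in for Postgres's FILTER clause and is equivalent here.

```python
import sqlite3

# In-memory copy of the question's table t1.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (AccountName TEXT, Date TEXT, Amount INTEGER);
INSERT INTO t1 VALUES
  ('A1', '2022-06-30', 2), ('A2', '2022-06-30', 1), ('A3', '2022-06-30', NULL),
  ('A1', '2022-07-31', 4), ('A2', '2022-07-31', 5), ('A3', '2022-07-31', NULL);
""")

# SUM(CASE ...) OVER (PARTITION BY Date) is the portable spelling of
# SUM(...) FILTER (WHERE ...) OVER (PARTITION BY "Date") from the answer.
rows = conn.execute("""
    SELECT AccountName, Date,
           CASE WHEN AccountName = 'A3'
                THEN SUM(CASE WHEN AccountName IN ('A1', 'A2') THEN Amount END)
                     OVER (PARTITION BY Date)
                ELSE Amount
           END AS Amount
    FROM t1
    ORDER BY Date, AccountName
""").fetchall()
print(rows)  # the A3 rows come back as 3 and 9
```

No hardcoded dates anywhere: the PARTITION BY handles every month group in one pass.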

This is how I would do it, I think. The Amount of rows without one becomes the sum of amounts in the same calendar month. I combined month and year into a string; it's probably more efficient to compare the year and month values separately, but I like how this looks in joins.
update t1 t1update
set amount = (
    select sum(amount) from t1
    where
        extract(year from t1update.date) || '-' || extract(month from t1update.date) =
        extract(year from t1.date) || '-' || extract(month from t1.date)
)
where t1update.amount is null
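The gist of this correlated UPDATE can be sketched with Python's sqlite3, using strftime('%Y-%m', ...) in place of the extract()-and-concatenate month key; the table mirrors the question's t1.

```python
import sqlite3

# In-memory copy of the question's table t1, Amount missing for A3.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t1 (accountname TEXT, date TEXT, amount INTEGER);
INSERT INTO t1 VALUES
  ('A1', '2022-06-30', 2), ('A2', '2022-06-30', 1), ('A3', '2022-06-30', NULL),
  ('A1', '2022-07-31', 4), ('A2', '2022-07-31', 5), ('A3', '2022-07-31', NULL);
""")

# Correlated UPDATE: each NULL amount becomes the month's sum of the
# other rows (SUM ignores the NULL rows being filled in).
conn.execute("""
    UPDATE t1
    SET amount = (
        SELECT SUM(amount) FROM t1 AS src
        WHERE strftime('%Y-%m', src.date) = strftime('%Y-%m', t1.date)
    )
    WHERE amount IS NULL
""")
rows = conn.execute(
    "SELECT * FROM t1 WHERE accountname = 'A3' ORDER BY date"
).fetchall()
```

Note this fills every NULL row with the month total, which matches the question only because A3 is the sole account with missing amounts.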

Related

How to repeat some data points in query results?

I am trying to get the max date by account from 3 different tables and view those dates side by side. I created a separate query for each table, merged the results with UNION ALL, and then wrapped all that in a PIVOT.
The first 2 sections in the link/pic below show what I have been able to accomplish and the 3rd section is what I would like to do.
Query results by step
How can I get the results from 2 of the tables to repeat? Is that possible?
--define var_ent_type = 'ACOM'
--define var_ent_id = '52766'
--define var_dict_id = 113
SELECT
*
FROM
(
SELECT
E.ENTITY_TYPE,
E.ENTITY_ID,
'PERF_SUMMARY' as "TableName",
PS.DICTIONARY_ID,
to_char(MAX(PS.END_EFFECTIVE_DATE), 'YYYY-MM-DD') as "MaxDate"
FROM
RULESDBO.ENTITY E
INNER JOIN PERFORMDBO.PERF_SUMMARY PS ON (PS.ENTITY_ID = E.ENTITY_ID)
WHERE
1=1
-- AND E.ENTITY_TYPE = '&var_ent_type'
-- AND E.ENTITY_ID = '&var_ent_id'
AND PS.DICTIONARY_ID >= 100
AND (E.ACTIVE_STATUS <> 'N' )--and E.TERMINATION_DATE is null )
GROUP BY
E.ENTITY_TYPE,
E.ENTITY_ID,
'PERF_SUMMARY',
PS.DICTIONARY_ID
union all
SELECT
E.ENTITY_TYPE,
E.ENTITY_ID,
'POSITION' as "TableName",
0 as DICTIONARY_ID,
to_char(MAX(H.EFFECTIVE_DATE), 'YYYY-MM-DD') as "MaxDate"
FROM
RULESDBO.ENTITY E
INNER JOIN HOLDINGDBO.POSITION H ON (H.ENTITY_ID = E.ENTITY_ID)
WHERE
1=1
-- AND E.ENTITY_TYPE = '&var_ent_type'
-- AND E.ENTITY_ID = '&var_ent_id'
AND (E.ACTIVE_STATUS <> 'N' )--and E.TERMINATION_DATE is null )
GROUP BY
E.ENTITY_TYPE,
E.ENTITY_ID,
'POSITION',
1
union all
SELECT
E.ENTITY_TYPE,
E.ENTITY_ID,
'CASH_ACTIVITY' as "TableName",
0 as DICTIONARY_ID,
to_char(MAX(C.EFFECTIVE_DATE), 'YYYY-MM-DD') as "MaxDate"
FROM
RULESDBO.ENTITY E
INNER JOIN CASHDBO.CASH_ACTIVITY C ON (C.ENTITY_ID = E.ENTITY_ID)
WHERE
1=1
-- AND E.ENTITY_TYPE = '&var_ent_type'
-- AND E.ENTITY_ID = '&var_ent_id'
AND (E.ACTIVE_STATUS <> 'N' )--and E.TERMINATION_DATE is null )
GROUP BY
E.ENTITY_TYPE,
E.ENTITY_ID,
'CASH_ACTIVITY',
1
--ORDER BY
-- 2,3, 4
)
PIVOT
(
MAX("MaxDate")
FOR "TableName"
IN ('CASH_ACTIVITY', 'PERF_SUMMARY','POSITION')
)
Everything is possible. You only need a window function to make the value repeat across rows w/o data.
--Assuming current query is QC
With QC as (
...
)
select code, account, grouping,
--cash,
first_value(cash) over (partition by code, account order by grouping asc rows unbounded preceding) as cash_repeat,
perf,
--pos,
first_value(pos) over (partition by code, account order by grouping asc rows unbounded preceding) as pos_repeat
from QC
;
See first_value() help here: https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/FIRST_VALUE.html#GUID-D454EC3F-370C-4C64-9B11-33FCB10D95EC

Pivot / Crosstab PostgreSQL ERROR: invalid return type

Hello I have created a view, but I want to pivot it with dynamic years.
Output before pivoting:
Expected output:
My query :
SELECT *
FROM crosstab(
' select b.jenisiuran,
date_part(''year''::text, a.insertdate) AS tahun,
sum(b.jumlah_amt) AS jumlah
FROM blm_dpembayaraniuran a
JOIN blm_dpembayaraniuranline b ON a.blm_dpembayaraniuran_key::text = b.blm_dpembayaraniuran_key::text
GROUP BY date_part(''year''::text, a.insertdate), b.jenisiuran'
) AS (TRANSAKSI TEXT, "2019" NUMERIC, "2020" NUMERIC, "2021" numeric);
and I'm getting an error like this:
ERROR: invalid return type
Detail: SQL rowid datatype does not match return rowid datatype.
Thanks for helping me
I find using filtered aggregation easier to work with than crosstab()
select b.jenisiuran as transaksi,
sum(b.jumlah_amt) filter (where extract(year from a.insertdate) = 2019) as "2019",
sum(b.jumlah_amt) filter (where extract(year from a.insertdate) = 2020) as "2020",
sum(b.jumlah_amt) filter (where extract(year from a.insertdate) = 2021) as "2021",
sum(b.jumlah_amt) as total
FROM blm_dpembayaraniuran a
JOIN blm_dpembayaraniuranline b ON a.blm_dpembayaraniuran_key::text = b.blm_dpembayaraniuran_key::text
WHERE a.insertdate >= date '2019-01-01'
AND a.insertdate < date '2022-01-01'
GROUP BY b.jenisiuran;
Adding a range condition on insertdate should improve performance, as the grouping only needs to be done for the rows in the desired range, not all rows in both tables.
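A hedged sketch of the filtered-aggregation pivot using Python's sqlite3; the table and values are made up to resemble the question's schema, and SUM(CASE ...) is the portable stand-in for Postgres's FILTER clause.

```python
import sqlite3

# Hypothetical data resembling the question's joined view:
# one row per (category, year) with an amount.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE iuran (jenisiuran TEXT, tahun INTEGER, jumlah_amt NUMERIC);
INSERT INTO iuran VALUES
  ('IURAN_A', 2019, 100), ('IURAN_A', 2020, 150), ('IURAN_A', 2021, 200),
  ('IURAN_B', 2019, 50),  ('IURAN_B', 2021, 75);
""")

# One output column per year via conditional aggregation -- no crosstab()
# and no row-type mismatch to worry about.
rows = conn.execute("""
    SELECT jenisiuran AS transaksi,
           SUM(CASE WHEN tahun = 2019 THEN jumlah_amt END) AS "2019",
           SUM(CASE WHEN tahun = 2020 THEN jumlah_amt END) AS "2020",
           SUM(CASE WHEN tahun = 2021 THEN jumlah_amt END) AS "2021",
           SUM(jumlah_amt) AS total
    FROM iuran
    GROUP BY jenisiuran
    ORDER BY jenisiuran
""").fetchall()
```

A category with no rows for a year yields NULL in that column, just as crosstab() would.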

How to add a condition in a select clause, based on the result value of another select clause

I need a Postgres query to get "A value", "A value date", "B value" and "B value date"
The B value and date should be the one which is between 95 to 100 days of "A value date"
I have the query to get "A value" and "A value date", but I don't know how to get the B value and date by using the result (A value)
select u.id,
(select activity
from Sol_pro
where user_id = u.id
and uni_par = 'weight_m'
order by created_at asc
limit 1) as A_value
from users u;
the B_value and B_date come from the same sol_pro table, 95-100 days after the A_value date (if more rows fall between 95 and 100 days, I need only the most recent one). Expected output:
id = 112233, A_Value = "weight = 210.25", A_value_date = 12-12-2020, B_value = "weight = 220.25", B_value_date = 12-12-2020
Well, without a table definition I developed it from the output columns and your original query. Further, I had to make up stuff for the data, but the following should be close enough for you to see the technique involved. It is actually a simple join operation, just that it is a self-join on the table sol_pro (i.e. it is joined to itself). Notice the comments indicated by --<<<
select distinct on (a.id)
a.id
, a.user_id --<<< assumption needed
, a.activity "A_value"
, a.created_at::date "A_date"
, b.activity "B_value"
, b.created_at::date
from sol_pro a
join sol_pro b
on ( b.user_id = a.user_id --<<< assumption
and b.uni_par = a.uni_par --<<< assumption
)
where a.id = 112233 --<<< given Orig query
and a.uni_par = 'weight_m' --<<< given Orig query, but not needed if id is PK
and b.created_at between a.created_at + interval '95 days' --<<< date range inclusive of 95-100
and a.created_at + interval '100 days'
order by a.id, b.created_at desc;
See here for example run. The example contains a column you will not have "belayer_note". This is just a note-to-self I sometimes use for initial testing.
Suppose that you have tables users and measures:
# create table users (id integer);
# create table measures (user_id integer, value decimal, created_at date);
They are filled with test data:
INSERT INTO users VALUES (1), (2), (3);
INSERT INTO measures VALUES (1, 100, '9/10/2020'), (1, 103, '9/15/2020'), (1, 104, '10/2/2020');
INSERT INTO measures VALUES (2, 200, '9/11/2020'), (2, 207, '9/21/2020'), (2, 204, '10/1/2020');
INSERT INTO measures VALUES (3, 300, '9/12/2020'), (3, 301, '10/1/2020'), (3, 318, '10/12/2020');
Query:
WITH M AS (
SELECT
A.user_id,
A.value AS A_value, A.created_at AS A_date,
B.value AS B_value, B.created_at AS B_date
FROM measures A
LEFT JOIN measures B ON
A.user_id = B.user_id AND
B.created_at >= A.created_at + INTERVAL '5 days' AND
B.created_at <= A.created_at + INTERVAL '10 days'
ORDER BY
user_id, A_date, B_date
)
SELECT DISTINCT ON (user_id) * FROM M;
will select for each user:
the first available measurement (A)
the next measurement (B) which is made between 5-10 days from (A).
Result:
user_id | a_value | a_date | b_value | b_date
---------+---------+------------+---------+------------
1 | 100 | 2020-09-10 | 103 | 2020-09-15
2 | 200 | 2020-09-11 | 207 | 2020-09-21
3 | 300 | 2020-09-12 | [NULL] | [NULL]
(3 rows)
P.S. You must sort table rows carefully with ORDER BY when using the DISTINCT ON () clause, because PostgreSQL will keep only the first record of each set and discard the others.
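Outside PostgreSQL, DISTINCT ON has no direct equivalent, but the same "first row per group" effect can be had with ROW_NUMBER(); a sketch in Python's sqlite3 reproducing the result above with the same test data:

```python
import sqlite3

# Same test data as the answer's measures table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE measures (user_id INTEGER, value NUMERIC, created_at TEXT);
INSERT INTO measures VALUES
  (1, 100, '2020-09-10'), (1, 103, '2020-09-15'), (1, 104, '2020-10-02'),
  (2, 200, '2020-09-11'), (2, 207, '2020-09-21'), (2, 204, '2020-10-01'),
  (3, 300, '2020-09-12'), (3, 301, '2020-10-01'), (3, 318, '2020-10-12');
""")

# ROW_NUMBER() emulates DISTINCT ON (user_id): after sorting the A/B
# pairs, keep only the first pair per user.
rows = conn.execute("""
WITH m AS (
  SELECT a.user_id,
         a.value AS a_value, a.created_at AS a_date,
         b.value AS b_value, b.created_at AS b_date,
         ROW_NUMBER() OVER (PARTITION BY a.user_id
                            ORDER BY a.created_at, b.created_at) AS rn
  FROM measures a
  LEFT JOIN measures b
    ON a.user_id = b.user_id
   AND b.created_at >= date(a.created_at, '+5 days')
   AND b.created_at <= date(a.created_at, '+10 days')
)
SELECT user_id, a_value, a_date, b_value, b_date
FROM m WHERE rn = 1
ORDER BY user_id
""").fetchall()
```

The output matches the answer's result table, including the NULL B measurement for user 3.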

Write all days of period for each single account from a single data table

I have a data table with daily transactions on several bank accounts. I would like to calculate the sum of the transactions on each bank account for each day within a certain time period. For days where there was no transaction during that period, I want to see a NULL value.
I am using two tables: one with the transaction data and one calendar table.
I was able to get the desired result for a single account with the code shown below (ZWID is the ID of the bank account)
WITH sum_transactions as
(
SELECT csd.ZWID, csd.ValueDate, sum_total = sum(csd.amount)
FROM myDataBase.CashData as csd
WHERE csd.ValueDate > '20190131' and csd.ValueDate <= '20190208'
AND csd.ZWID IN (1592)
GROUP BY csd.ZWID, csd.ValueDate
)
SELECT st.zwid, cal.Calendar_Date, st.sum_total
FROM treasury.dbo.calendar as cal
LEFT JOIN sum_transactions as st on st.ValueDate = cal.Calendar_Date
WHERE cal.Calendar_Date > '20190131' and cal.Calendar_Date<= '20190208'
ORDER BY 1, 2
I get the following (desired) output:
zwid Calendar_Date sum_total
1592 2019-02-01 606174,09
NULL 2019-02-02 NULL
NULL 2019-02-03 NULL
1592 2019-02-04 -600000
NULL 2019-02-05 NULL
NULL 2019-02-06 NULL
NULL 2019-02-07 NULL
NULL 2019-02-08 NULL
i.e. there were two days with transaction(s) on that specific bank account in the period.
Now, when I add a second account (ID 1593) (to the IN statement), I would hope to get a second set of 8 new rows (for 01 Feb to 08 Feb) with either a sum or a NULL value (a total of 16 rows for both accounts).
However, I now get a result table that shows no rows with NULL values for the first account anymore (apart from the two days where both accounts show no transactions).
zwid Calendar_date sum_total
NULL 2019-02-02 NULL
NULL 2019-02-03 NULL
1592 2019-02-04 -600000
1592 2019-02-01 606174,09
1593 2019-02-01 -847958,75
1593 2019-02-04 303105,26
1593 2019-02-05 -285312,64
1593 2019-02-06 502762,95
1593 2019-02-07 405372,02
1593 2019-02-08 326213,87
Obviously, I do not succeed in having the query write all dates for each account separately.
How do I need to change my query for it to run through one bank account, write all days of the period (value or NULL) and only then move on to the next account?
Update: I am looking at a large number of bank accounts. The number of accounts will change over time
I think this might be what you need; try it out and let me know. Basically I had to use a CROSS APPLY to build the full list of IDs/dates you were looking for, and then I used the rest of your code to get your desired results:
DROP TABLE IF EXISTS #Test;
DROP TABLE IF EXISTS #FullCalendar;
CREATE TABLE #Test
(
ZWID INT ,
ValueDate DATE ,
Amount MONEY
);
INSERT INTO #Test ( ZWID ,
ValueDate ,
Amount )
VALUES ( 1, '20190101', 100.00 ) ,
( 1, '20190101', 75.00 ) ,
( 1, '20190108', 75.00 ) ,
( 1, '20190110', 50.00 ) ,
( 2, '20190101', 25.00 ) ,
( 2, '20190102', 35.00 ) ,
( 2, '20190103', 50.00 ) ,
( 2, '20190103', 125.00 ) ,
( 3, '20190102', 150.00 ) ,
( 3, '20190109', 100.00 ) ,
( 3, '20190110', 75.00 ) ,
( 3, '20190110', 75.00 );
SELECT dd.Date, t.ZWID
INTO #FullCalendar
FROM dbo.DateDimension AS dd
CROSS APPLY #Test AS t
WHERE dd.Date >= '20190101' AND dd.Date < '20190111'
GROUP BY dd.Date ,
t.ZWID
--SELECT * FROM #FullCalendar ORDER BY ZWID, Date
;WITH sum_trans AS (
SELECT
t.ZWID, t.ValueDate, sum_total = SUM(t.Amount)
FROM #Test AS t
GROUP BY t.ZWID ,
t.ValueDate )
SELECT fc.Date, fc.ZWID, st.sum_total
FROM #FullCalendar AS fc
LEFT OUTER JOIN sum_trans AS st ON st.ZWID = fc.ZWID AND fc.Date = st.ValueDate
ORDER BY fc.ZWID,fc.Date;
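The key move above is cross-joining every calendar date with every account before the outer join, so each (account, day) pair gets a row. The same shape can be sketched in Python's sqlite3, with a recursive CTE standing in for the DateDimension/calendar table and made-up amounts:

```python
import sqlite3

# Made-up transactions for two accounts over an 8-day window.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tx (zwid INTEGER, valuedate TEXT, amount NUMERIC);
INSERT INTO tx VALUES
  (1592, '2019-02-01', 606174.09), (1592, '2019-02-04', -600000),
  (1593, '2019-02-01', -847958.75), (1593, '2019-02-04', 303105.26);
""")

# Recursive CTE stands in for the calendar table; the CROSS JOIN of
# (every date) x (every account) guarantees one row per pair, and the
# LEFT JOIN leaves NULL where there was no transaction.
rows = conn.execute("""
WITH RECURSIVE cal(d) AS (
  SELECT '2019-02-01'
  UNION ALL
  SELECT date(d, '+1 day') FROM cal WHERE d < '2019-02-08'
),
accounts AS (SELECT DISTINCT zwid FROM tx),
sums AS (
  SELECT zwid, valuedate, SUM(amount) AS total
  FROM tx GROUP BY zwid, valuedate
)
SELECT a.zwid, c.d, s.total
FROM accounts a
CROSS JOIN cal c
LEFT JOIN sums s ON s.zwid = a.zwid AND s.valuedate = c.d
ORDER BY a.zwid, c.d
""").fetchall()
```

Two accounts times eight days gives the full 16-row grid the question asks for.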
Leaving my old answer here as well.
I was able to get your desired result by using 2 CTEs and a UNION ALL:
WITH sum_transactions as
(
SELECT csd.ZWID, csd.ValueDate, sum_total = sum(csd.amount)
FROM myDataBase.CashData as csd
WHERE csd.ValueDate > '20190131' and csd.ValueDate <= '20190208'
AND csd.ZWID IN (1592)
GROUP BY csd.ZWID, csd.ValueDate
) ,
sum_transactions2 as
(
SELECT csd.ZWID, csd.ValueDate, sum_total = sum(csd.amount)
FROM myDataBase.CashData as csd
WHERE csd.ValueDate > '20190131' and csd.ValueDate <= '20190208'
AND csd.ZWID IN (1593)
GROUP BY csd.ZWID, csd.ValueDate
)
SELECT st.zwid, cal.Calendar_Date, st.sum_total
FROM treasury.dbo.calendar as cal
LEFT JOIN sum_transactions as st on st.ValueDate = cal.Calendar_Date
WHERE cal.Calendar_Date > '20190131' and cal.Calendar_Date<= '20190208'
UNION ALL
SELECT st.zwid, cal.Calendar_Date, st.sum_total
FROM treasury.dbo.calendar as cal
LEFT JOIN sum_transactions2 as st on st.ValueDate = cal.Calendar_Date
WHERE cal.Calendar_Date > '20190131' and cal.Calendar_Date<= '20190208'
ORDER BY 1, 2

Dealing with periods and dates without using cursors

I would like to solve this issue avoiding to use cursors (FETCH).
Here comes the problem...
1st Table/quantity
------------------
periodid periodstart periodend quantity
1 2010/10/01 2010/10/15 5
2nd Table/sold items
-----------------------
periodid periodstart periodend solditems
14343 2010/10/05 2010/10/06 2
Now I would like to get the following view or just query result
Table/stock
-----------------------
periodstart periodend itemsinstock
2010/10/01 2010/10/04 5
2010/10/05 2010/10/06 3
2010/10/07 2010/10/15 5
It seems impossible to solve this problem without using cursors, or without using single dates instead of periods.
I would appreciate any help.
Thanks
DECLARE @t1 TABLE (periodid INT, periodstart DATE, periodend DATE, quantity INT)
DECLARE @t2 TABLE (periodid INT, periodstart DATE, periodend DATE, solditems INT)
INSERT INTO @t1 VALUES(1,'2010-10-01T00:00:00.000','2010-10-15T00:00:00.000',5)
INSERT INTO @t2 VALUES(14343,'2010-10-05T00:00:00.000','2010-10-06T00:00:00.000',2)
DECLARE @D1 DATE
SELECT @D1 = MIN(P) FROM (SELECT MIN(periodstart) P FROM @t1
UNION ALL
SELECT MIN(periodstart) FROM @t2) D
DECLARE @D2 DATE
SELECT @D2 = MAX(P) FROM (SELECT MAX(periodend) P FROM @t1
UNION ALL
SELECT MAX(periodend) FROM @t2) D
;WITH
L0 AS (SELECT 1 AS c UNION ALL SELECT 1),
L1 AS (SELECT 1 AS c FROM L0 A CROSS JOIN L0 B),
L2 AS (SELECT 1 AS c FROM L1 A CROSS JOIN L1 B),
L3 AS (SELECT 1 AS c FROM L2 A CROSS JOIN L2 B),
L4 AS (SELECT 1 AS c FROM L3 A CROSS JOIN L3 B),
Nums AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) AS i FROM L4),
Dates AS (SELECT DATEADD(DAY,i-1,@D1) AS D FROM Nums WHERE i <= 1+DATEDIFF(DAY,@D1,@D2)),
Stock AS (
SELECT D, t1.quantity - ISNULL(t2.solditems,0) AS itemsinstock
FROM Dates
LEFT OUTER JOIN @t1 t1 ON t1.periodend >= D AND t1.periodstart <= D
LEFT OUTER JOIN @t2 t2 ON t2.periodend >= D AND t2.periodstart <= D),
NStock AS (
SELECT D, itemsinstock, ROW_NUMBER() OVER (ORDER BY D) - ROW_NUMBER() OVER (PARTITION BY itemsinstock ORDER BY D) AS G
FROM Stock)
SELECT MIN(D) AS periodstart, MAX(D) AS periodend, itemsinstock
FROM NStock
GROUP BY G, itemsinstock
ORDER BY periodstart
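The NStock step is the classic gaps-and-islands trick: subtracting a per-value ROW_NUMBER() from a global one yields a constant within each run of consecutive days sharing the same stock level, which GROUP BY then collapses into periods. A minimal illustration of just that step in Python's sqlite3, with made-up daily stock values:

```python
import sqlite3

# One row per day with that day's stock level (made-up data).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stock (d TEXT, itemsinstock INTEGER);
INSERT INTO stock VALUES
  ('2010-10-01', 5), ('2010-10-02', 5), ('2010-10-03', 5), ('2010-10-04', 5),
  ('2010-10-05', 3), ('2010-10-06', 3),
  ('2010-10-07', 5), ('2010-10-08', 5);
""")

# Gaps-and-islands: the difference of the two row numbers is constant
# within each run of consecutive days that share the same stock level.
rows = conn.execute("""
WITH g AS (
  SELECT d, itemsinstock,
         ROW_NUMBER() OVER (ORDER BY d)
       - ROW_NUMBER() OVER (PARTITION BY itemsinstock ORDER BY d) AS grp
  FROM stock
)
SELECT MIN(d) AS periodstart, MAX(d) AS periodend, itemsinstock
FROM g
GROUP BY grp, itemsinstock
ORDER BY periodstart
""").fetchall()
```

The eight daily rows collapse into three periods, with the two runs of stock level 5 kept apart because their row-number differences differ.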
Hopefully a little easier to read than Martin's. I used different tables and sample data, hopefully extrapolating the right info:
CREATE TABLE [dbo].[Quantity](
[PeriodStart] [date] NOT NULL,
[PeriodEnd] [date] NOT NULL,
[Quantity] [int] NOT NULL
) ON [PRIMARY]
CREATE TABLE [dbo].[SoldItems](
[PeriodStart] [date] NOT NULL,
[PeriodEnd] [date] NOT NULL,
[SoldItems] [int] NOT NULL
) ON [PRIMARY]
INSERT INTO Quantity (PeriodStart,PeriodEnd,Quantity)
SELECT '20100101','20100115',5
INSERT INTO SoldItems (PeriodStart,PeriodEnd,SoldItems)
SELECT '20100105','20100107',2 union all
SELECT '20100106','20100108',1
The actual query is now:
;WITH Dates as (
select PeriodStart as DateVal from SoldItems union select PeriodEnd from SoldItems union select PeriodStart from Quantity union select PeriodEnd from Quantity
), Periods as (
select d1.DateVal as StartDate, d2.DateVal as EndDate
from Dates d1 inner join Dates d2 on d1.DateVal < d2.DateVal left join Dates d3 on d1.DateVal < d3.DateVal and d3.DateVal < d2.DateVal where d3.DateVal is null
), QuantitiesSold as (
select StartDate,EndDate,COALESCE(SUM(si.SoldItems),0) as Quantity
from Periods p left join SoldItems si on p.StartDate < si.PeriodEnd and si.PeriodStart < p.EndDate
group by StartDate,EndDate
)
select StartDate,EndDate,q.Quantity - qs.Quantity
from QuantitiesSold qs inner join Quantity q on qs.StartDate < q.PeriodEnd and q.PeriodStart < qs.EndDate
And the result is:
StartDate EndDate (No column name)
2010-01-01 2010-01-05 5
2010-01-05 2010-01-06 3
2010-01-06 2010-01-07 2
2010-01-07 2010-01-08 4
2010-01-08 2010-01-15 5
Explanation: I'm using three Common Table Expressions. The first (Dates) is gathering all of the dates that we're talking about, from the two tables involved. The second (Periods) selects consecutive values from the Dates CTE. And the third (QuantitiesSold) then finds items in the SoldItems table that overlap these periods, and adds their totals together. All that remains in the outer select is to subtract these quantities from the total quantity stored in the Quantity Table
John, what you could do is a WHILE loop. Declare and initialise two variables before your loop, one being the start date and the other being the end date. Your loop would then look like this:
WHILE (@StartEnd <= @EndDate)
BEGIN
--processing goes here
SET @StartEnd = DATEADD(DAY, 1, @StartEnd)
END
You would need to store your period definitions in another table, so you could retrieve those and output rows when required to a temporary table.
Let me know if you need any more detailed examples, or if I've got the wrong end of the stick!
Damien,
I am trying to fully understand your solution and test it on a large scale of data, but I receive the following errors for your code:
Msg 102, Level 15, State 1, Line 20
Incorrect syntax near 'Dates'.
Msg 102, Level 15, State 1, Line 22
Incorrect syntax near ','.
Msg 102, Level 15, State 1, Line 25
Incorrect syntax near ','.
Damien,
Based on your solution, I also wanted to get a neat display of StockItems without overlapping dates. How about this solution?
CREATE TABLE [dbo].[SoldItems](
[PeriodStart] [datetime] NOT NULL,
[PeriodEnd] [datetime] NOT NULL,
[SoldItems] [int] NOT NULL
) ON [PRIMARY]
INSERT INTO SoldItems (PeriodStart,PeriodEnd,SoldItems)
SELECT '20100105','20100106',2 union all
SELECT '20100105','20100108',3 union all
SELECT '20100115','20100116',1 union all
SELECT '20100101','20100120',10
;WITH Dates as (
select PeriodStart as DateVal from SoldItems
union
select PeriodEnd from SoldItems
union
select PeriodStart from Quantity
union
select PeriodEnd from Quantity
), Periods as (
select d1.DateVal as StartDate, d2.DateVal as EndDate
from Dates d1
inner join Dates d2 on d1.DateVal < d2.DateVal
left join Dates d3 on d1.DateVal < d3.DateVal and
d3.DateVal < d2.DateVal where d3.DateVal is null
), QuantitiesSold as (
select StartDate,EndDate,SUM(si.SoldItems) as Quantity
from Periods p left join SoldItems si on p.StartDate < si.PeriodEnd and si.PeriodStart < p.EndDate
group by StartDate,EndDate
)
select StartDate,EndDate, qs.Quantity
from QuantitiesSold qs
where qs.quantity is not null