I have a temp table that I've populated with a running total, using SQL Server windowing functions. The data in my temp table is in the following format:
|Day | Sku Nbr | CMQTY |
|----|---------|-------|
| 1 | f45 | 0 |
| 2 | f45 | 2 |
| 3 | f45 | 0 |
| 4 | f45 | 7 |
| 5 | f45 | 0 |
| 6 | f45 | 0 |
| 7 | f45 | 0 |
| 8 | f45 | 13 |
| 9 | f45 | 15 |
| 10 | f45 | 21 |
I would like to manipulate the data so that it displays like this:
|Day | Sku Nbr | CMQTY |
|----|---------|-------|
| 1 | f45 | 0 |
| 2 | f45 | 2 |
| 3 | f45 | 2 |
| 4 | f45 | 7 |
| 5 | f45 | 7 |
| 6 | f45 | 7 |
| 7 | f45 | 7 |
| 8 | f45 | 13 |
| 9 | f45 | 15 |
| 10 | f45 | 21 |
I've tried using a LAG function, but there are issues when I have multiple days in a row with a 0 CMQTY. I've also tried CASE WHEN logic, but am failing.
You can build a group number with a windowed SUM and then take a running total within each group, as below:
;with cte as (
    -- sm increments on every non-zero row, so each zero row falls into
    -- the group started by the most recent non-zero row
    select *, sm = sum(case when cmqty > 0 then 1 else 0 end) over (order by [day])
    from #yoursum
)
select [day], sku_nbr, sum(cmqty) over (partition by sm order by [day]) as cmqty
from cte
Your table structure:
create table #yoursum ([day] int, sku_nbr varchar(10), CMQTY int)
insert into #yoursum
([Day] , Sku_Nbr , CMQTY ) values
( 1 ,'f45', 0 )
,( 2 ,'f45', 2 )
,( 3 ,'f45', 0 )
,( 4 ,'f45', 7 )
,( 5 ,'f45', 0 )
,( 6 ,'f45', 0 )
,( 7 ,'f45', 0 )
,( 8 ,'f45', 13 )
,( 9 ,'f45', 15 )
,( 10 ,'f45', 21 )
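As an aside, if you are on SQL Server 2022 or later, the same fill can be expressed with IGNORE NULLS; a minimal sketch, assuming a 0 means "no value" (the partition by sku_nbr is my addition for multi-SKU data):

-- treat 0 as a gap, then carry the last non-NULL value forward
select [day], sku_nbr,
       coalesce(last_value(nullif(CMQTY, 0)) ignore nulls
                over (partition by sku_nbr
                      order by [day]
                      rows between unbounded preceding and current row), 0) as CMQTY
from #yoursum;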
For fun, another approach. First, some sample data:
IF OBJECT_ID('tempdb..#t1') IS NOT NULL DROP TABLE #t1;
CREATE TABLE #t1
(
[day] int NOT NULL,
[Sku Nbr] varchar(5) NOT NULL,
CMQTY int NOT NULL,
CONSTRAINT pk_t1 PRIMARY KEY CLUSTERED([day] ASC)
);
INSERT #t1 VALUES
(1 , 'f45', 0),
(2 , 'f45', 2),
(3 , 'f45', 0),
(4 , 'f45', 7),
(5 , 'f45', 0),
(6 , 'f45', 0),
(7 , 'f45', 0),
(8 , 'f45', 13),
(9 , 'f45', 15),
(10, 'f45', 21);
And the solution:
DECLARE @RunningTotal int = 0;

UPDATE #t1
SET @RunningTotal = CMQTY = IIF(CMQTY = 0, @RunningTotal, CMQTY)
FROM #t1 WITH (TABLOCKX)
OPTION (MAXDOP 1);
Results:
day Sku Nbr CMQTY
---- ------- ------
1 f45 0
2 f45 2
3 f45 2
4 f45 7
5 f45 7
6 f45 7
7 f45 7
8 f45 13
9 f45 15
10 f45 21
This approach is referred to as the "local updatable variable" or "Quirky Update" technique. It relies on undocumented behavior, which is why the TABLOCKX and MAXDOP 1 hints are there. You can read more about it here: http://www.sqlservercentral.com/articles/T-SQL/68467/
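If you would rather avoid the undocumented behavior altogether, the same fill can be written as a fully documented UPDATE driven by the windowed running total from the first answer; a sketch against #t1:

-- grp increments on every non-zero row, so a zero day inherits the
-- group (and therefore the running sum) of the last non-zero day
with filled as (
    select [day],
           sum(CMQTY) over (partition by grp order by [day]) as new_qty
    from (
        select [day], CMQTY,
               sum(case when CMQTY > 0 then 1 else 0 end) over (order by [day]) as grp
        from #t1
    ) g
)
update t
set CMQTY = f.new_qty
from #t1 t
join filled f on f.[day] = t.[day];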
Related
I'm struggling, and knowing the terminology to search for is likely my problem, as I can't imagine this is an edge case.
dbfiddle available
I have a table in Postgres 9.4:
CREATE TABLE test (
  id       serial PRIMARY KEY,
  cust_id  INTEGER,
  category VARCHAR,
  key      INTEGER,
  value    INTEGER
);
INSERT INTO test (cust_id, category, key, value)
VALUES
(1, 'alpha', 0,300),(1, 'bravo', 0,150),(1, 'alpha', 1,300),
(1, 'bravo', 1,200),(1, 'alpha', 2,300),(1, 'bravo', 2,250),
(2, 'alpha', 0,301),(2, 'bravo', 0,151),(2, 'alpha', 1,301),
(2, 'bravo', 1,201),(2, 'alpha', 2,301),(2, 'bravo', 2,251),
(3, 'alpha', 0,302),(3, 'bravo', 0,152),(3, 'alpha', 1,302),
(3, 'bravo', 1,202),(3, 'alpha', 2,302),(3, 'bravo', 2,252);
id | cust_id | category | key | value
----+---------+----------+-----+-------
1 | 1 | alpha | 0 | 300
2 | 1 | bravo | 0 | 150
3 | 1 | alpha | 1 | 300
4 | 1 | bravo | 1 | 200
5 | 1 | alpha | 2 | 300
6 | 1 | bravo | 2 | 250
7 | 2 | alpha | 0 | 301
8 | 2 | bravo | 0 | 151
9 | 2 | alpha | 1 | 301
10 | 2 | bravo | 1 | 201
11 | 2 | alpha | 2 | 301
12 | 2 | bravo | 2 | 251
13 | 3 | alpha | 0 | 302
14 | 3 | bravo | 0 | 152
15 | 3 | alpha | 1 | 302
16 | 3 | bravo | 1 | 202
17 | 3 | alpha | 2 | 302
18 | 3 | bravo | 2 | 252
(18 rows)
I'd like to query the results to look like the following:
cust_id | category | 0 | 1 | 2
---------+----------+-----+-----+-----
1 | alpha | 300 | 300 | 300
1 | bravo | 150 | 200 | 250
2 | alpha | 301 | 301 | 301
2 | bravo | 151 | 201 | 251
3 | alpha | 302 | 302 | 302
3 | bravo | 152 | 202 | 252
(6 rows)
I've tried:
SELECT
*
FROM
crosstab(
'SELECT cust_id,category,key,value FROM test ORDER BY cust_id,category,key',
$$values ('0'::INT),
('1'::INT),
('2'::INT) $$
) AS ct (
"cust_id" INT, "category" TEXT, "0" INT,
"1" INT, "2" INT
);
which nets me this (missing the bravo category rows, and using the bravo values for columns 0, 1 and 2):
 cust_id | category |  0  |  1  |  2
---------+----------+-----+-----+-----
       1 | alpha    | 150 | 200 | 250
       2 | alpha    | 151 | 201 | 251
       3 | alpha    | 152 | 202 | 252
(3 rows)
I get closer with the following by removing the cust_id field and limiting to a single id:
SELECT
*
FROM
crosstab(
'SELECT category,key,value FROM test WHERE cust_id = 1 ORDER BY category,key',
$$values ('0'::INT),
('1'::INT),
('2'::INT) $$
) AS ct (
"category" TEXT, "0" INT,
"1" INT, "2" INT
);
but this only gives the result for a single cust_id, and I need it for all customers:
 category |  0  |  1  |  2
----------+-----+-----+-----
 alpha    | 300 | 300 | 300
 bravo    | 150 | 200 | 250
(2 rows)
https://dbfiddle.uk/?rdbms=postgres_9.5&fiddle=2c75cf9a1b18bb980ddd72953235d54e
Here is one way, with conditional aggregation:
select cust_id , category
, max(case when key = 0 then value end) "0"
, max(case when key = 1 then value end) "1"
, max(case when key = 2 then value end) "2"
from test
group by cust_id , category
order by cust_id , category
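If you want to stay with crosstab, the documented way to handle a multi-column row identifier is a surrogate row_name plus "extra" columns; a sketch, assuming the tablefunc extension is installed:

SELECT cust_id, category, "0", "1", "2"
FROM crosstab(
    -- 1st source column  = row name (must be unique per output row),
    -- middle columns     = "extra" columns carried through unchanged,
    -- last two columns   = category and value
    $$SELECT cust_id::text || '_' || category AS row_name,
             cust_id, category, key, value
      FROM test
      ORDER BY 1$$
  , $$VALUES (0), (1), (2)$$
) AS ct (row_name text, cust_id int, category text, "0" int, "1" int, "2" int);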
I have a table coltures_report with fields application_id, coltures_id and row_step. I would like to get the maximum value of row_step grouped by the combination of application_id and coltures_id. However, row_step values are not unique, and I would like to get all the rows with the maximum value.
id | application_id| coltures_id | row_step |
----+---------------+-------------+----------+
1 | 1169 | 4 | 5 |
2 | 1169 | 5 | 5 |
3 | 1169 | 2 | 0 |
4 | 1124 | 1 | 5 |
5 | 1124 | 1 | 4 |
6 | 1156 | 1 | 5 |
7 | 1156 | 2 | 5 |
8 | 1156 | 3 | 5 |
Expected result is
id | application_id| coltures_id | row_step |
----+---------------+-------------+----------+
1 | 1169 | 4 | 5 |
2 | 1169 | 5 | 5 |
3 | 1124 | 1 | 5 |
4 | 1156 | 1 | 5 |
5 | 1156 | 2 | 5 |
6 | 1156 | 3 | 5 |
With NOT EXISTS:
select cr.* from coltures_report cr
where not exists (
select 1 from coltures_report
where application_id = cr.application_id and row_step > cr.row_step
)
Or with the rank() window function:
select cr.id, cr.application_id, cr.coltures_id, cr.row_step
from (
select *,
rank() over (partition by application_id order by row_step desc) rn
from coltures_report
) cr
where cr.rn = 1
Or with a correlated subquery:
select cr.* from coltures_report cr
where cr.row_step = (select max(row_step) from coltures_report where application_id = cr.application_id)
See the demo.
Results:
 id | application_id | coltures_id | row_step
----+----------------+-------------+----------
  1 |           1169 |           4 |        5
  2 |           1169 |           5 |        5
  4 |           1124 |           1 |        5
  6 |           1156 |           1 |        5
  7 |           1156 |           2 |        5
  8 |           1156 |           3 |        5
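A fourth, equivalent variant compares each row against the per-application maximum directly; ties are kept because every row equal to the max qualifies:

select id, application_id, coltures_id, row_step
from (
    select *,
           max(row_step) over (partition by application_id) as max_step
    from coltures_report
) t
where row_step = max_step;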
TIL about tablefunc and crosstab. At first I wanted to "group data by columns" but that doesn't really mean anything.
My product sales look like this:
product_id | units | date
-----------------------------------
10 | 1 | 1-1-2018
10 | 2 | 2-2-2018
11 | 3 | 1-1-2018
11 | 10 | 1-2-2018
12 | 1 | 2-1-2018
13 | 10 | 1-1-2018
13 | 10 | 2-2-2018
I would like to produce a table of products with months as columns:
product_id | 01-01-2018 | 02-01-2018 | etc.
-----------------------------------
10 | 1 | 2
11 | 13 | 0
12 | 0 | 1
13 | 20 | 0
First I would group by month, then invert and group by product, but I cannot figure out how to do this.
After enabling the tablefunc extension (CREATE EXTENSION tablefunc;),
SELECT product_id, coalesce("2018-1-1", 0) as "2018-1-1"
, coalesce("2018-2-1", 0) as "2018-2-1"
FROM crosstab(
$$SELECT product_id, date_trunc('month', date)::date as month, sum(units) as units
FROM test
GROUP BY product_id, month
ORDER BY 1$$
, $$VALUES ('2018-1-1'::date), ('2018-2-1')$$
) AS ct (product_id int, "2018-1-1" int, "2018-2-1" int);
yields
| product_id | 2018-1-1 | 2018-2-1 |
|------------+----------+----------|
| 10 | 1 | 2 |
| 11 | 13 | 0 |
| 12 | 0 | 1 |
| 13 | 10 | 10 |
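If you would rather not depend on tablefunc, plain conditional aggregation with FILTER (Postgres 9.4+) produces the same pivot; a sketch against the same test table, with the month columns hard-coded as in the crosstab version:

-- one filtered aggregate per wanted month column
SELECT product_id,
       COALESCE(SUM(units) FILTER (WHERE date_trunc('month', date) = DATE '2018-01-01'), 0) AS "2018-1-1",
       COALESCE(SUM(units) FILTER (WHERE date_trunc('month', date) = DATE '2018-02-01'), 0) AS "2018-2-1"
FROM test
GROUP BY product_id
ORDER BY product_id;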
My table is:
id sub_id datetime resource
---|-----|------------|-------
1 | 10 | 04/03/2009 | 399
2 | 11 | 04/03/2009 | 244
3 | 10 | 04/03/2009 | 555
4 | 10 | 03/03/2009 | 300
5 | 11 | 03/03/2009 | 200
6 | 11 | 03/03/2009 | 500
7 | 11 | 24/12/2008 | 600
8 | 13 | 01/01/2009 | 750
9 | 10 | 01/01/2009 | 760
10 | 13 | 01/01/2009 | 570
11 | 11 | 01/01/2009 | 870
12 | 13 | 01/01/2009 | 670
13 | 13 | 01/01/2009 | 703
14 | 13 | 01/01/2009 | 705
I need to select only 2 rows for each sub_id. The result would be:
id sub_id datetime resource
---|-----|------------|-------
1 | 10 | 04/03/2009 | 399
3 | 10 | 04/03/2009 | 555
5 | 11 | 03/03/2009 | 200
6 | 11 | 03/03/2009 | 500
8 | 13 | 01/01/2009 | 750
10 | 13 | 01/01/2009 | 570
How can I achieve this result in Postgres?
Use the window function row_number():
select id, sub_id, datetime, resource
from (
    select *, row_number() over (partition by sub_id order by id) as rn
    from my_table
) s
where rn < 3;
Pay attention to the ordering column (I use id to match your sample):
t=# with data as (select *,count(1) over (partition by sub_id order by id) from t)
select id,sub_id,datetime,resource from data where count <3;
id | sub_id | datetime | resource
----+--------+------------+----------
1 | 10 | 2009-03-04 | 399
3 | 10 | 2009-03-04 | 555
2 | 11 | 2009-03-04 | 244
5 | 11 | 2009-03-03 | 200
8 | 13 | 2009-01-01 | 750
10 | 13 | 2009-01-01 | 570
(6 rows)
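Another idiomatic Postgres option for top-N-per-group is a LATERAL join (9.3+); a sketch against the same table, using the my_table placeholder from the first answer:

-- for each distinct sub_id, fetch its first two rows by id
select t.*
from (select distinct sub_id from my_table) s
cross join lateral (
    select *
    from my_table m
    where m.sub_id = s.sub_id
    order by m.id
    limit 2
) t
order by t.sub_id, t.id;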
A customer can be in multiple positions over their lifetime and can only have one active position (marked by start date and end date). A position is part of a cost centre.
If a customer had 18 positions over their lifetime, I have to check whether any of those positions, in consecutive order, were part of the same cost centre. If they were, then I have to use the start date from the earliest position of that run (same cost centre). I have written something like this:
By using 2 row_number() calculations over slightly different partitions, it is possible to get a calculation (rn3) that allows us to group each consecutive run of positions in the same cost centre: within a run the two row numbers advance in lockstep, so their difference is constant, and the difference changes as soon as the cost centre changes. You already have one such row_number from when you set up the temp table. I included rn1 and rn2 so you can investigate how it works.
SQL Fiddle
MS SQL Server 2014 Schema Setup:
CREATE TABLE TempTbl
([ConsecutivePositions] int, [CustomerID] int, [PositionID] int, [CustomerPositionId] int, [StartDate] datetime, [EndDate] varchar(23), [CostCentreID] int)
;
INSERT INTO TempTbl
([ConsecutivePositions], [CustomerID], [PositionID], [CustomerPositionId], [StartDate], [EndDate], [CostCentreID])
VALUES
(1, 2734, 195, 31860, '2013-10-17 16:08:53', '2015-03-06 11:51:09.440', 5),
(2, 2734, 29, 39405, '2015-03-06 11:51:09', '2016-01-27 13:10:19.720', 3),
(3, 2734, 271, 23599, '2012-04-05 16:21:41', '2012-12-04 11:32:47.433', 13),
(4, 2734, 107, 26479, '2012-12-04 11:32:47', '2013-03-19 09:07:13.633', 14),
(5, 2734, 297, 28497, '2013-03-19 09:07:13', '2013-10-17 16:08:53.120', 14),
(6, 2734, 154, 2723, '2007-11-27 00:00:00', '2009-07-10 15:44:16.640', 3),
(7, 2734, 145, 19436, '2011-03-15 00:00:00', '2011-10-18 15:42:36.877', 906),
(8, 2734, 146, 17453, '2010-09-12 00:00:00', '2010-11-11 15:58:25.043', 13),
(9, 2734, 8, 18180, '2010-11-11 00:00:00', '2011-03-15 17:57:48.027', 13),
(10, 2734, 8, 21606, '2011-10-18 15:42:36', '2011-11-11 16:42:54.787', 13),
(11, 2734, 8, 21982, '2011-11-14 11:18:24', '2012-04-05 16:21:41.230', 13),
(12, 2734, 264, 21958, '2011-11-11 16:42:54', '2011-11-14 11:18:24.057', 906),
(13, 2734, 5, 12785, '2009-07-10 00:00:00', '2009-07-29 09:30:52.430', 3),
(14, 2734, 5, 12999, '2009-07-29 00:00:00', '2010-03-04 13:00:30.223', 3),
(15, 2734, 149, 15165, '2010-03-04 00:00:00', '2010-08-16 12:13:30.703', 3),
(16, 2734, 8, 17044, '2010-08-16 00:00:00', '2010-09-12 16:29:01.203', 13),
(17, 2734, 891, 45453, '2016-01-27 13:10:19', NULL, 906)
;
Query 1:
with cte as (
select
*
, row_number() over(partition by CustomerID order by StartDate) rn1
, row_number() over(partition by CustomerID, CostCentreID order by StartDate) rn2
, row_number() over(partition by CustomerID order by StartDate)
- row_number() over(partition by CustomerID, CostCentreID order by StartDate) rn3
from temptbl
)
select
CustomerID
, CostCentreID
, rn3
, count(*) c
, min(StartDate) StartDate
, max(EndDate) EndDate
from cte
group by
CustomerID, CostCentreID, rn3
order by
CustomerID, StartDate
Results:
| CustomerID | CostCentreID | rn3 | c | StartDate | EndDate |
|------------|--------------|-----|---|----------------------|-------------------------|
| 2734 | 3 | 0 | 4 | 2007-11-27T00:00:00Z | 2010-08-16 12:13:30.703 |
| 2734 | 13 | 4 | 3 | 2010-08-16T00:00:00Z | 2011-03-15 17:57:48.027 |
| 2734 | 906 | 7 | 1 | 2011-03-15T00:00:00Z | 2011-10-18 15:42:36.877 |
| 2734 | 13 | 5 | 1 | 2011-10-18T15:42:36Z | 2011-11-11 16:42:54.787 |
| 2734 | 906 | 8 | 1 | 2011-11-11T16:42:54Z | 2011-11-14 11:18:24.057 |
| 2734 | 13 | 6 | 2 | 2011-11-14T11:18:24Z | 2012-12-04 11:32:47.433 |
| 2734 | 14 | 12 | 2 | 2012-12-04T11:32:47Z | 2013-10-17 16:08:53.120 |
| 2734 | 5 | 14 | 1 | 2013-10-17T16:08:53Z | 2015-03-06 11:51:09.440 |
| 2734 | 3 | 11 | 1 | 2015-03-06T11:51:09Z | 2016-01-27 13:10:19.720 |
| 2734 | 906 | 14 | 1 | 2016-01-27T13:10:19Z | (null) |
----
Instead of a GROUP BY query, use more window functions, and you can get all the details of the temp table as well as the wanted cost-centre-related dates.
with cte as (
select
*
, row_number() over(partition by CustomerID order by StartDate) rn1
, row_number() over(partition by CustomerID, CostCentreID order by StartDate) rn2
, row_number() over(partition by CustomerID order by StartDate)
- row_number() over(partition by CustomerID, CostCentreID order by StartDate) rn3
from temptbl
)
, cte2 as (
select
*
, min(StartDate) over(partition by CustomerID, CostCentreID, rn3) MinStartDate
, max(EndDate) over(partition by CustomerID, CostCentreID, rn3) MaxEndDate
from cte
)
select
*
from cte2
;
Results:
| ConsecutivePositions | CustomerID | PositionID | CustomerPositionId | StartDate | EndDate | CostCentreID | rn1 | rn2 | rn3 | MinStartDate | MaxEndDate |
|----------------------|------------|------------|--------------------|----------------------|-------------------------|--------------|-----|-----|-----|----------------------|-------------------------|
| 6 | 2734 | 154 | 2723 | 2007-11-27T00:00:00Z | 2009-07-10 15:44:16.640 | 3 | 1 | 1 | 0 | 2007-11-27T00:00:00Z | 2010-08-16 12:13:30.703 |
| 13 | 2734 | 5 | 12785 | 2009-07-10T00:00:00Z | 2009-07-29 09:30:52.430 | 3 | 2 | 2 | 0 | 2007-11-27T00:00:00Z | 2010-08-16 12:13:30.703 |
| 14 | 2734 | 5 | 12999 | 2009-07-29T00:00:00Z | 2010-03-04 13:00:30.223 | 3 | 3 | 3 | 0 | 2007-11-27T00:00:00Z | 2010-08-16 12:13:30.703 |
| 15 | 2734 | 149 | 15165 | 2010-03-04T00:00:00Z | 2010-08-16 12:13:30.703 | 3 | 4 | 4 | 0 | 2007-11-27T00:00:00Z | 2010-08-16 12:13:30.703 |
| 16 | 2734 | 8 | 17044 | 2010-08-16T00:00:00Z | 2010-09-12 16:29:01.203 | 13 | 5 | 1 | 4 | 2010-08-16T00:00:00Z | 2011-03-15 17:57:48.027 |
| 8 | 2734 | 146 | 17453 | 2010-09-12T00:00:00Z | 2010-11-11 15:58:25.043 | 13 | 6 | 2 | 4 | 2010-08-16T00:00:00Z | 2011-03-15 17:57:48.027 |
| 9 | 2734 | 8 | 18180 | 2010-11-11T00:00:00Z | 2011-03-15 17:57:48.027 | 13 | 7 | 3 | 4 | 2010-08-16T00:00:00Z | 2011-03-15 17:57:48.027 |
| 10 | 2734 | 8 | 21606 | 2011-10-18T15:42:36Z | 2011-11-11 16:42:54.787 | 13 | 9 | 4 | 5 | 2011-10-18T15:42:36Z | 2011-11-11 16:42:54.787 |
| 11 | 2734 | 8 | 21982 | 2011-11-14T11:18:24Z | 2012-04-05 16:21:41.230 | 13 | 11 | 5 | 6 | 2011-11-14T11:18:24Z | 2012-12-04 11:32:47.433 |
| 3 | 2734 | 271 | 23599 | 2012-04-05T16:21:41Z | 2012-12-04 11:32:47.433 | 13 | 12 | 6 | 6 | 2011-11-14T11:18:24Z | 2012-12-04 11:32:47.433 |
| 7 | 2734 | 145 | 19436 | 2011-03-15T00:00:00Z | 2011-10-18 15:42:36.877 | 906 | 8 | 1 | 7 | 2011-03-15T00:00:00Z | 2011-10-18 15:42:36.877 |
| 12 | 2734 | 264 | 21958 | 2011-11-11T16:42:54Z | 2011-11-14 11:18:24.057 | 906 | 10 | 2 | 8 | 2011-11-11T16:42:54Z | 2011-11-14 11:18:24.057 |
| 2 | 2734 | 29 | 39405 | 2015-03-06T11:51:09Z | 2016-01-27 13:10:19.720 | 3 | 16 | 5 | 11 | 2015-03-06T11:51:09Z | 2016-01-27 13:10:19.720 |
| 4 | 2734 | 107 | 26479 | 2012-12-04T11:32:47Z | 2013-03-19 09:07:13.633 | 14 | 13 | 1 | 12 | 2012-12-04T11:32:47Z | 2013-10-17 16:08:53.120 |
| 5 | 2734 | 297 | 28497 | 2013-03-19T09:07:13Z | 2013-10-17 16:08:53.120 | 14 | 14 | 2 | 12 | 2012-12-04T11:32:47Z | 2013-10-17 16:08:53.120 |
| 1 | 2734 | 195 | 31860 | 2013-10-17T16:08:53Z | 2015-03-06 11:51:09.440 | 5 | 15 | 1 | 14 | 2013-10-17T16:08:53Z | 2015-03-06 11:51:09.440 |
| 17 | 2734 | 891 | 45453 | 2016-01-27T13:10:19Z | (null) | 906 | 17 | 3 | 14 | 2016-01-27T13:10:19Z | (null) |
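If the end goal is just the start date of the current cost-centre run for each customer, the second query can be narrowed down; a sketch, assuming a NULL EndDate marks the active position:

-- replace the final "select * from cte2" in the query above with:
select CustomerID, CostCentreID, MinStartDate as EffectiveStartDate
from cte2
where EndDate is null;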