Show month as a row - postgresql

I have this table, which I call the transaction table:
id  periode_month  total_amount
U1  1              1000
U1  2              1200
U1  3              1000
U1  4              1000
U2  2              1250
I'm trying to achieve this:
id  month 1  month 2  month 3  month 4  month 5  ...  month 12
U1  1000     1200     1000     1000     0        0
U2  0        1250     0        0        0        0
Here is what I have done so far:
SELECT *
FROM crosstab(
'select client_id, periode_month, total_amount
from sucor_transactions
order by 1,2')
AS ct(userid VARCHAR, periode_month int, total_amount numeric);
My query above returns this error: return and sql tuple descriptions are incompatible.
Then I searched again and found a different query:
SELECT *
FROM crosstab (
$$SELECT client_id, periode_month,"total_amount"
FROM sucor_transactions
ORDER BY 1,2$$
) AS t (
class int
-- "value" double precision -- column does not exist in result!
);
But it returns the same error: return and sql tuple descriptions are incompatible. How can I solve my problem? Thanks in advance.

In crosstab, you need to use ORDER BY, and the month column names must be given in double quotes in the output column list.
To produce the full list of months I used generate_series.
CREATE EXTENSION IF NOT EXISTS tablefunc;
SELECT * FROM crosstab(
  'SELECT id, periode_month, total_amount
   FROM crospost
   ORDER BY 1, 2',
  'SELECT g FROM generate_series(1, 12) g')
AS ct(id varchar, "month 1" int, "month 2" int, "month 3" int,
      "month 4" int, "month 5" int, "month 6" int,
      "month 7" int, "month 8" int, "month 9" int,
      "month 10" int, "month 11" int, "month 12" int);

Related

Get Data Week Wise in SQL Server

I have a table with columns ProductId, DateofPurchase, Quantity.
I want a report showing which week each purchase belongs to.
For example, if I give March as the month, I can get the quantity for March.
But I want the output below when I give a date as the parameter.
Here the quantity available for March, on 23/03/2018, is 100:
Material Code  Week1  Week2  Week3  Week4
12475          -      -      -      100
The logic is: days 1-7 are the first week, 8-15 the second week, 16-23 the third week, and 24-30 the fourth week.
@Sasi, this can get you started. You will need to use a CTE to build a template table that describes what happens yearly. Then, joining your table to it with an inner join, you can link it up and do a PIVOT to group the weeks.
Let me know if you need any tweaking.
DECLARE @StartDate DATE = '20180101'
DECLARE @EndDate   DATE = '20180901'
DECLARE @Dates TABLE(
    Workdate DATE PRIMARY KEY
)
DECLARE @tbl TABLE(ProductId INT, DateofPurchase DATE, Quantity INT);
INSERT INTO @tbl
SELECT 12475, '20180623', 100
;WITH Dates AS(
    SELECT Workdate = @StartDate,
           WorkMonth = DATENAME(MONTH, @StartDate),
           WorkYear  = YEAR(@StartDate),
           WorkWeek  = DATENAME(wk, @StartDate)
    UNION ALL
    SELECT CurrDate  = DATEADD(WEEK, 1, Workdate),
           WorkMonth = DATENAME(MONTH, DATEADD(WEEK, 1, Workdate)),
           YEAR(DATEADD(WEEK, 1, Workdate)),
           DATENAME(wk, DATEADD(WEEK, 1, Workdate))
    FROM Dates D
    WHERE Workdate < @EndDate
)
SELECT *
FROM
(
    SELECT
        sal.ProductId,
        GroupWeek = 'Week' +
            CASE
                WHEN WorkWeek BETWEEN 1  AND 7  THEN '1'
                WHEN WorkWeek BETWEEN 8  AND 15 THEN '2'
                WHEN WorkWeek BETWEEN 16 AND 23 THEN '3'
                WHEN WorkWeek BETWEEN 24 AND 30 THEN '4'
                WHEN WorkWeek BETWEEN 31 AND 37 THEN '5'
                WHEN WorkWeek BETWEEN 38 AND 42 THEN '6'
            END,
        Quantity
    FROM Dates D
    JOIN @tbl sal
      ON sal.DateofPurchase BETWEEN D.Workdate AND DATEADD(DAY, 6, Workdate)
) T
PIVOT
(
    SUM(Quantity) FOR GroupWeek IN (Week1, Week2, Week3, Week4, Week5, Week6, Week7, Week8,
        Week9, Week10, Week11, Week12, Week13, Week14, Week15, Week16, Week17, Week18, Week19,
        Week20, Week21, Week22, Week23, Week24, Week25, Week26, Week27, Week28, Week29, Week30,
        Week31, Week32, Week33, Week34, Week35, Week36, Week37, Week38, Week39, Week40, Week41,
        Week42, Week43, Week44, Week45, Week46, Week47, Week48, Week49, Week50, Week51, Week52
        /* add as many as you need */)
) p
--ORDER BY 1
OPTION (MAXRECURSION 0)
Sample data:
DECLARE @Products TABLE(Id INT PRIMARY KEY,
                        ProductName NVARCHAR(50))
DECLARE @Orders TABLE(ProductId INT,
                      DateofPurchase DATETIME,
                      Quantity BIGINT)
INSERT INTO @Products(Id, ProductName)
VALUES (1, N'Product1'),
       (2, N'Product2')
INSERT INTO @Orders(ProductId, DateofPurchase, Quantity)
VALUES (1, '2018-01-01', 130),
       (1, '2018-01-09', 140),
       (1, '2018-01-16', 150),
       (1, '2018-01-24', 160),
       (2, '2018-01-01', 30),
       (2, '2018-01-09', 40),
       (2, '2018-01-16', 50),
       (2, '2018-01-24', 60)
Query:
SELECT P.Id,
       P.ProductName,
       Orders.MonthName,
       Orders.Week1,
       Orders.Week2,
       Orders.Week3,
       Orders.Week4
FROM @Products AS P
INNER JOIN (SELECT O.ProductId,
                   SUM(CASE WHEN DATEPART(DAY, O.DateofPurchase) BETWEEN 1  AND 7  THEN O.Quantity ELSE 0 END) AS Week1,
                   SUM(CASE WHEN DATEPART(DAY, O.DateofPurchase) BETWEEN 8  AND 15 THEN O.Quantity ELSE 0 END) AS Week2,
                   SUM(CASE WHEN DATEPART(DAY, O.DateofPurchase) BETWEEN 16 AND 23 THEN O.Quantity ELSE 0 END) AS Week3,
                   SUM(CASE WHEN DATEPART(DAY, O.DateofPurchase) >= 24 THEN O.Quantity ELSE 0 END) AS Week4,
                   DATENAME(MONTH, O.DateofPurchase) AS MonthName
            FROM @Orders AS O
            GROUP BY O.ProductId, DATENAME(MONTH, O.DateofPurchase)) AS Orders ON P.Id = Orders.ProductId
Result:
---------------------------------------------------------------
| Id | ProductName | MonthName | Week1 | Week2 | Week3 | Week4 |
---------------------------------------------------------------
| 1  | Product1    | January   | 130   | 140   | 150   | 160   |
| 2  | Product2    | January   | 30    | 40    | 50    | 60    |
---------------------------------------------------------------
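For comparison, the same per-week bucketing can be written in PostgreSQL (the engine used in the main question) with conditional aggregation via FILTER. This is only a sketch; the table and column names products, orders, productname, dateofpurchase and quantity are hypothetical stand-ins mirroring the T-SQL sample above:
-- products/orders and their columns are assumed names, not from the original answer
SELECT p.id,
       p.productname,
       to_char(o.dateofpurchase, 'FMMonth') AS monthname,
       COALESCE(SUM(o.quantity) FILTER (WHERE EXTRACT(DAY FROM o.dateofpurchase) BETWEEN 1  AND 7),  0) AS week1,
       COALESCE(SUM(o.quantity) FILTER (WHERE EXTRACT(DAY FROM o.dateofpurchase) BETWEEN 8  AND 15), 0) AS week2,
       COALESCE(SUM(o.quantity) FILTER (WHERE EXTRACT(DAY FROM o.dateofpurchase) BETWEEN 16 AND 23), 0) AS week3,
       COALESCE(SUM(o.quantity) FILTER (WHERE EXTRACT(DAY FROM o.dateofpurchase) >= 24),             0) AS week4
FROM products p
JOIN orders o ON o.productid = p.id
GROUP BY p.id, p.productname, to_char(o.dateofpurchase, 'FMMonth');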

Postgres - convert null result column to zero in crosstab query

I have a crosstab query that is working fine. It is based on customers and their total transactions, split by months of the year.
The only issue is that when there is no data for a column (i.e. no activity for a month), I get a null value which I would like to convert to a zero.
I have tried coalesce on the 'amount' field, but that does not work.
If anyone has any pointers to help I would be very grateful.
The query is:
select *
from crosstab(
$ct$
SELECT sa.id,
company.name,
to_char(sat.transaction_date, 'YYYY-MM') AS my,
COALESCE(sat.amount,0) AS amnt
FROM sales_account_transactions sat
JOIN sales_account sa ON sa.id = sat.sales_account
JOIN company ON sa.company = company.id
WHERE sat.financial_company = 1
AND sat.transaction_date BETWEEN '2018-01-01' AND '2018-03-31'
AND sat.reversed_by = 0
AND sat.original_id = 0
GROUP BY sa.id, company.name, my, amnt
ORDER BY company.name, to_char(sat.transaction_date, 'YYYY-MM');
$ct$,
$$VALUES
('2018-01'), ('2018-02'), ('2018-03')
$$
)
as ct(id int, name text,
"Jan 2018" int, "Feb 2018" int, "Mar 2018" int);
Instead of select *, use select coalesce("Jan 2018", 0) as "Jan 2018", ...
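A sketch of the full query with that change, reusing the crosstab from the question:
SELECT id,
       name,
       COALESCE("Jan 2018", 0) AS "Jan 2018",
       COALESCE("Feb 2018", 0) AS "Feb 2018",
       COALESCE("Mar 2018", 0) AS "Mar 2018"
FROM crosstab(
  $ct$
  SELECT sa.id,
         company.name,
         to_char(sat.transaction_date, 'YYYY-MM') AS my,
         COALESCE(sat.amount, 0) AS amnt
  FROM sales_account_transactions sat
  JOIN sales_account sa ON sa.id = sat.sales_account
  JOIN company ON sa.company = company.id
  WHERE sat.financial_company = 1
    AND sat.transaction_date BETWEEN '2018-01-01' AND '2018-03-31'
    AND sat.reversed_by = 0
    AND sat.original_id = 0
  GROUP BY sa.id, company.name, my, amnt
  ORDER BY company.name, to_char(sat.transaction_date, 'YYYY-MM')
  $ct$,
  $$VALUES ('2018-01'), ('2018-02'), ('2018-03')$$
) AS ct(id int, name text, "Jan 2018" int, "Feb 2018" int, "Mar 2018" int);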

Postgres - bind results of equal type by year - long to wide data

Please excuse my not very proper way of asking this, as I am new to Postgres...
Having the following two tables:
CREATE TABLE pub (
id int
, time timestamp
);
id  time
1   2010-02-10 01:00:00
2   2011-02-10 01:00:00
3   2012-02-10 01:00:00
And
CREATE TABLE val (
id int
, type text
, val int
);
id  type  val
1   A     1
1   B     2
1   C     3
2   A     4
2   B     5
3   D     6
I would like to get the following output (for id <= 2):
type  2010  2011
A     1     4
B     2     5
C     3     NULL
So type is the superset of all types present in table val.
NULL means that there is no value for label C.
Ideally the column headings are the years of the time column; alternatively the id itself...
There are at least two ways to do this.
If your table does not have many categories, you can use a CTE:
WITH x AS (
SELECT type,
sum(val) FILTER (WHERE date_part('year', time) = 2010) AS "2010",
sum(val) FILTER (WHERE date_part('year', time) = 2011) AS "2011"
FROM pub AS p JOIN val AS v ON (v.id = p.id)
GROUP BY type
)
SELECT * FROM x
WHERE "2010" is NOT NULL OR "2011" IS NOT NULL
ORDER BY type
;
But if you have many or dynamic categories, you must use crosstab:
CREATE EXTENSION tablefunc;
SELECT * FROM crosstab(
$$
SELECT type,
date_part('year', time)::text as time,
sum(val) AS val
FROM pub AS p JOIN val AS v ON (v.id = p.id)
GROUP BY type, 2
ORDER BY 1, 2
$$,
$$VALUES ('2010'::text), ('2011'), ('2012') $$
) AS ct (type text, "2010" int, "2011" int, "2012" int);

Find date sequence in PostgreSQL

I'm trying to find the maximum sequence of consecutive days per customer in my data. I want to know the longest run of consecutive days that a specific customer had. If someone entered my app on 25/08/16, 26/08/16, 27/08/16, 01/09/16 and 02/09/16, the max sequence will be 3 days (25, 26, 27).
In the end (the output) I want to get two fields: custid | MaxDaySequence
I have the following fields in my data table: custid | orderdate (timestamp)
For example:
custid orderdate
1 25/08/2007
1 03/10/2007
1 13/10/2007
1 15/01/2008
1 16/03/2008
1 09/04/2008
2 18/09/2006
2 08/08/2007
2 28/11/2007
2 04/03/2008
3 27/11/2006
3 15/04/2007
3 13/05/2007
3 19/06/2007
3 22/09/2007
3 25/09/2007
3 28/01/2008
I'm using PostgreSQL 2014.
Thanks
Trying:
select custid, max(num_days) as longest
from (
select custid,rn, count (*) as num_days
from (
select custid, date(orderdate),
cast (row_number() over (partition by custid order by date(orderdate)) as varchar(5)) as rn
from table_
) x group by custid, CURRENT_DATE - INTERVAL rn|| ' day'
) y group by custid
Try:
SELECT custid, max( abc ) as max_sequence_of_days
FROM (
SELECT custid, yy, count(*) abc
FROM (
SELECT * ,
SUM( xx ) OVER (partition by custid order by orderdate ) yy
FROM (
select * ,
CASE WHEN
orderdate - lag( orderdate ) over (partition by custid order by orderdate )
<= 1
THEN 0 ELSE 1 END xx
from mytable
) x
) z
GROUP BY custid, yy
) q
GROUP BY custid
Demo: http://sqlfiddle.com/#!15/00422/11
===== EDIT =====
Got "operator does not exist: interval <= integer"
This means that the orderdate column is of type timestamp, not date.
In this case you need to use the condition <= interval '1' day instead of <= 1.
Please see this link to learn more about date arithmetic in PostgreSQL: https://www.postgresql.org/docs/9.0/static/functions-datetime.html
Please see this demo:
http://sqlfiddle.com/#!15/7c2200/2
SELECT custid, max( abc ) as max_sequence_of_days
FROM (
SELECT custid, yy, count(*) abc
FROM (
SELECT * ,
SUM( xx ) OVER (partition by custid order by orderdate ) yy
FROM (
select * ,
CASE WHEN
orderdate - lag( orderdate ) over (partition by custid order by orderdate )
<= interval '1' day
THEN 0 ELSE 1 END xx
from mytable
) x
) z
GROUP BY custid, yy
) q
GROUP BY custid

Rolling sum per time interval per group

Table, data and task as follows.
See the SQL Fiddle link for demo data and expected results.
create table "data"
(
"item" int
, "timestamp" date
, "balance" float
, "rollingSum" float
)
insert into "data" ( "item", "timestamp", "balance", "rollingSum" ) values
( 1, '2014-02-10', -10, -10 )
, ( 1, '2014-02-15', 5, -5 )
, ( 1, '2014-02-20', 2, -3 )
, ( 1, '2014-02-25', 13, 10 )
, ( 2, '2014-02-13', 15, 15 )
, ( 2, '2014-02-16', 15, 30 )
, ( 2, '2014-03-01', 15, 45 )
I need to get all rows in a defined time interval. The above table doesn't hold a record per item for each possible date; only dates on which changes were applied are recorded (it is possible that there are n rows per timestamp per item).
If the given interval does not fall exactly on stored timestamps, the latest timestamp before the start date (nearest smaller neighbour) should be used as the starting balance/rolling sum.
Expected results (time interval: startdate = '2014-02-13', enddate = '2014-02-20'):
"item", "timestamp" , "balance", "rollingSum"
1 , '2014-02-13' , -10 , -10
1 , '2014-02-15' , 5 , -5
1 , '2014-02-20' , 2 , -3
2 , '2014-02-13' , 15 , 15
2 , '2014-02-16' , 15 , 30
I checked questions like this one and googled a lot, but haven't found a solution yet.
I don't think it's a good idea to extend the "data" table with one row per missing date per item, since the complete interval (smallest date to latest date per item) may span several years.
Thanks in advance!
select sum(balance)
from table
where timestamp >= (select max(timestamp) from table where timestamp <= 'startdate')
and timestamp <= 'enddate'
Don't know what you mean by rolling-sum.
Here is an attempt. It seems to give the right result, though it's not pretty. It would have been easier in SQL Server 2012+:
declare @from date = '2014-02-13'
declare @to date = '2014-02-20'
;with x as
(
select
item, timestamp, balance, row_number() over (partition by item order by timestamp, balance) rn
from (select item, timestamp, balance from data
union all
select distinct item, @from, null from data) z
where timestamp <= @to
)
, y as
(
select item,
timestamp,
coalesce(balance, rollingsum) balance ,
a.rollingsum,
rn
from x d
cross apply
(select sum(balance) rollingsum from x where rn <= d.rn and d.item = item) a
where timestamp between '2014-02-13' and '2014-02-20'
)
select item, timestamp, balance, rollingsum from y
where rollingsum is not null
order by item, rn, timestamp
Result:
item timestamp balance rollingsum
1 2014-02-13 -10,00 -10,00
1 2014-02-15 5,00 -5,00
1 2014-02-20 2,00 -3,00
2 2014-02-13 15,00 15,00
2 2014-02-16 15,00 30,00
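For comparison, in PostgreSQL the same result can be produced with a window function instead of CROSS APPLY. This is only a sketch, assuming the "data" table defined above and the interval '2014-02-13' to '2014-02-20': the running sum is computed per item over the full history, and rows before the start date are reduced to the latest one per item and relabelled to the start date.
WITH running AS (
    SELECT item,
           "timestamp",
           balance,
           -- running total per item over the full history
           SUM(balance) OVER (PARTITION BY item ORDER BY "timestamp") AS rolling_sum,
           -- latest stored date before the interval start (the "nearest smaller neighbour")
           MAX("timestamp") FILTER (WHERE "timestamp" < DATE '2014-02-13')
               OVER (PARTITION BY item) AS last_before_start
    FROM "data"
)
SELECT item,
       GREATEST("timestamp", DATE '2014-02-13') AS "timestamp",  -- relabel the carried-over row
       balance,
       rolling_sum AS "rollingSum"
FROM running
WHERE "timestamp" <= DATE '2014-02-20'
  AND ("timestamp" >= DATE '2014-02-13' OR "timestamp" = last_before_start)
ORDER BY item, 2;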