I have a table:
name | surname | project | dates      | hours
aaa  | aaaa    | 1       | 12.08.2011 | 10
aaa  | aaaa    | 1       | 13.08.2011 | 8
aaa  | aaaa    | 1       | 14.08.2011 | 7
And I need a result like this:
name | surname | project | dates      | hours | dates      | hours | dates      | hours | total
aaa  | aaaa    | 1       | 12.08.2011 | 10    | 13.08.2011 | 8     | 14.08.2011 | 7     | 25
SELECT name, surname, project,
       MAX(DECODE(c, 1, dates)) dates1,
       MAX(DECODE(c, 1, hours)) hours1,
       MAX(DECODE(c, 2, dates)) dates2,
       MAX(DECODE(c, 2, hours)) hours2,
       MAX(DECODE(c, 3, dates)) dates3,
       MAX(DECODE(c, 3, hours)) hours3,
       SUM(hours) AS total
FROM (SELECT name, surname, project, dates, hours,
             ROW_NUMBER() OVER (PARTITION BY name, surname, project
                                ORDER BY dates) c
      FROM work)
GROUP BY name, surname, project
This works, but I need a dynamic SQL query because the number of rows can vary. Is that possible? Thanks.
You can generate the SQL dynamically; look into the DBMS_SQL package. Note that every row of the result still has to have the same number of columns.
Another way to do this is to return a nested table or varray of dates and hours.
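A minimal sketch of the dynamic approach, sizing the pivot from the widest group and opening a ref cursor with a dynamic OPEN FOR rather than DBMS_SQL, for brevity (the sizing query and the numbered column aliases are my assumptions):

DECLARE
  v_max PLS_INTEGER;
  v_sql VARCHAR2(32767);
  v_cur SYS_REFCURSOR;
BEGIN
  -- the widest (name, surname, project) group decides how many
  -- dates/hours column pairs the pivot needs
  SELECT MAX(cnt) INTO v_max
  FROM (SELECT COUNT(*) cnt FROM work GROUP BY name, surname, project);

  v_sql := 'SELECT name, surname, project';
  FOR i IN 1 .. v_max LOOP
    v_sql := v_sql
      || ', MAX(DECODE(c,' || i || ',dates)) dates' || i
      || ', MAX(DECODE(c,' || i || ',hours)) hours' || i;
  END LOOP;
  v_sql := v_sql
    || ', SUM(hours) total'
    || ' FROM (SELECT name, surname, project, dates, hours,'
    || ' ROW_NUMBER() OVER (PARTITION BY name, surname, project'
    || ' ORDER BY dates) c FROM work)'
    || ' GROUP BY name, surname, project';

  OPEN v_cur FOR v_sql;  -- fetch from v_cur like any cursor
END;
/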
create table your_table(type text,compdate date,amount numeric);
insert into your_table values
('A','2022-01-01',50),
('A','2022-02-01',76),
('A','2022-03-01',300),
('A','2022-04-01',234),
('A','2022-05-01',14),
('A','2022-06-01',9),
('B','2022-01-01',201),
('B','2022-02-01',33),
('B','2022-03-01',90),
('B','2022-04-01',41),
('B','2022-05-01',11),
('B','2022-06-01',5),
('C','2022-01-01',573),
('C','2022-02-01',77),
('C','2022-03-01',109),
('C','2022-04-01',137),
('C','2022-05-01',405),
('C','2022-06-01',621);
I am trying to calculate the percentage change in amount from 6 months prior to today's date, for each type. For example:
Type A decreased 82% over six months.
Type B decreased 97.5%.
Type C increased 8.4%.
How do I write this in PostgreSQL, mixed in with other statements?
It looks like you are comparing against 5 months prior, not 6, and 2022-06-01 isn't today's date.
Join the table with itself on the matching type and the desired time difference:
select
  b.type,
  b.compdate,
  a.compdate "5 months earlier",
  b.amount "current amount",
  round(100 * b.amount / a.amount - 100, 2) "change"
from your_table a
inner join your_table b
  on a.type = b.type
 and a.compdate = b.compdate - '5 months'::interval;

-- type | compdate   | 5 months earlier | current amount | change
---------+------------+------------------+----------------+--------
-- A    | 2022-06-01 | 2022-01-01       |              9 | -82.00
-- B    | 2022-06-01 | 2022-01-01       |              5 | -97.51
-- C    | 2022-06-01 | 2022-01-01       |            621 |   8.38
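Since you mentioned mixing this with other statements, the same join drops neatly into a CTE; the filter at the end is just an illustration of building on top of it:

with change as (
  select
    b.type,
    b.compdate,
    round(100 * b.amount / a.amount - 100, 2) as pct_change
  from your_table a
  join your_table b
    on a.type = b.type
   and a.compdate = b.compdate - interval '5 months'
)
select * from change
where pct_change < 0;  -- e.g. keep only the types that decreased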
I have this table inside my PostgreSQL database:
item_code | date | price
==============================
aaaaaa.1 |2019/12/08 | 3.04
bbbbbb.2 |2019/12/08 | 19.48
261893.c |2019/12/08 | 7.15
aaaaaa.1 |2019/12/17 | 4.15
bbbbbb.2 |2019/12/17 | 20
xxxxxx.5 |2019/03/12 | 3
xxxxxx.5 |2019/03/18 | 4.5
How can I calculate the average per item, per month, over the year, so that I get a result something like this:
item_code | month | price
==============================
aaaaaa.1 | 2019/12 | 3.59
bbbbbb.2 | 2019/12 | 19.74
261893.c | 2019/12 | 7.15
xxxxxx.5 | 2019/03 | 3.75
I have tried to look at and apply many alternatives but I still don't get it. I would really appreciate your help because I am new to PostgreSQL.
I don't see how the question relates to a moving average. It seems you just want group by:
select item_code, date_trunc('month', date) as date_month, avg(price) as price
from mytable
group by item_code, date_month
This gives date_month as a date, truncated to the first day of the month, which I find more useful than the format you suggested. But if you do want that format:
to_char(date, 'YYYY/MM') as date_month
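Putting both together, with the average rounded to two decimals to match your sample output (the rounding and the final ordering are my additions):

select item_code,
       to_char(date, 'YYYY/MM') as date_month,
       round(avg(price), 2) as price  -- assumes price is numeric; cast first if it is double precision
from mytable
group by item_code, date_month
order by item_code, date_month;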
I've got a requirement to build a list report showing volume by 3 grouped columns. The issue I'm having is that if nothing happened on specific days for the specific grouped columns, I can't force it to show 0.
What I'm currently getting is something like:
ABC | AA | 01/11/2017 | 1
ABC | AA | 03/11/2017 | 2
ABC | AA | 05/11/2017 | 1
What I need is:
ABC | AA | 01/11/2017 | 1
ABC | AA | 02/11/2017 | 0
ABC | AA | 03/11/2017 | 2
ABC | AA | 04/11/2017 | 0
ABC | AA | 05/11/2017 | 1
I've tried going down the route of unioning a "dummy" query with no query filters; however, there are days where nothing has happened at all for those first two columns, so it doesn't always populate.
Hope that makes sense; any help would be greatly appreciated!
To anyone who wanted an answer: I figured it out. Query 1 is for just the dates; there will always be some form of event happening daily, so it always gives a complete date range.
Query 2 is for the other two "grouped by" columns.
Create a data item in each with "1" as the result (it would work with anything, as long as both sides match).
Left join Query 1 to Query 2 on this new data item.
This gives the full combination of all 3 columns needed. The resulting "Query 3" can then be left joined again to get the measures. The final query (depending on aggregation) may need the measure data item wrapped in a COALESCE/ISNULL to produce a 0 on the days when nothing happened.
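For anyone wanting the same pattern outside the report tool: the constant-1 join described above is effectively a cross join, so a plain-SQL sketch of it looks like this (table and column names are made up for illustration):

select d.event_date, g.col1, g.col2,
       coalesce(f.volume, 0) as volume
from (select distinct event_date from events) d            -- Query 1: the dates
cross join (select distinct col1, col2 from events) g      -- Query 2: the grouped columns
left join (select event_date, col1, col2, count(*) as volume
           from events
           group by event_date, col1, col2) f              -- the measures
  on f.event_date = d.event_date
 and f.col1 = g.col1
 and f.col2 = g.col2
order by g.col1, g.col2, d.event_date;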
I have tasks that are estimated at some number of hours, and time spent minus time estimated should give the time left to spend.
Employee table
CREATE TABLE sign
(signid varchar(3), signname varchar(30));
INSERT INTO sign
(signid, signname)
VALUES
('AA', 'Adam'),
('BB', 'Bert'),
('CC', 'Cecil'),
('DD', 'David');
Task table
CREATE TABLE task
(taskid int4, taskdate date, tasksign varchar(3), taskhr numeric(10,2));
INSERT INTO task
(taskid, taskdate, tasksign, taskhr)
VALUES
(1,'2016-01-01','AA',10),
(2,'2016-02-01','BB',10),
(3,'2016-01-15','BB',10),
(4,'2016-03-01','BB',10),
(5,'2016-01-03','CC',10);
Time sheet table
CREATE TABLE hr
(hrid int4, hrsign varchar(3), hrtask int4, hrqty numeric(10,2));
INSERT INTO hr
(hrid, hrsign, hrtask, hrqty)
VALUES
(1,'AA',1,1.1),
(2,'BB',2,1.2),
(3,'CC',5,2.3),
(4,'CC',5,5);
My attempt at a simple query that subtracts spent time from estimated time gives the wrong answer:
SELECT signid,signname,to_char(taskdate, 'iyyy-iw'),sum(taskhr),sum(hrqty)
FROM sign
LEFT JOIN task ON tasksign=signid
LEFT JOIN hr ON taskid=hrtask
GROUP BY 1,2,3
ORDER BY 2,3
The answer is:
id name week task hr
AA Adam 2015-53 10 1,1000
BB Bert 2016-02 10 NULL
BB Bert 2016-05 10 1,2000
BB Bert 2016-09 10 NULL
CC Cecil 2015-53 20 7,3000
DD David NULL NULL NULL
The task hours seem to be duplicated. It should look like this:
id name week task hr
AA Adam 2015-53 10 1,1000
BB Bert 2016-02 10 NULL
BB Bert 2016-05 10 1,2000
BB Bert 2016-09 10 NULL
CC Cecil 2015-53 10 7,3000
DD David NULL NULL NULL
Any tip on how to make a query that calculates this correctly?
Fiddle: http://rextester.com/UOO16020
Joining the hr table multiplies the task table rows. Aggregate hr before joining:
select signid, signname, to_char(taskdate, 'iyyy-iw'), sum(taskhr), sum(hrqty)
from sign
left join task on tasksign = signid
left join (select hrtask, sum(hrqty) as hrqty
           from hr
           group by 1) hr on taskid = hrtask
group by 1, 2, 3
order by 2, 3;
signid | signname | to_char | sum | sum
--------+----------+---------+-------+------
AA | Adam | 2015-53 | 10.00 | 1.10
BB | Bert | 2016-02 | 10.00 |
BB | Bert | 2016-05 | 10.00 | 1.20
BB | Bert | 2016-09 | 10.00 |
CC | Cecil | 2015-53 | 10.00 | 7.30
DD | David | | |
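Since the original goal was the time left to spend, the aggregated join makes the subtraction straightforward; treating missing time-sheet rows as 0 hours spent is my assumption:

select signid, signname, to_char(taskdate, 'iyyy-iw') as week,
       sum(taskhr) as estimated,
       coalesce(sum(hrqty), 0) as spent,
       coalesce(sum(taskhr), 0) - coalesce(sum(hrqty), 0) as remaining
from sign
left join task on tasksign = signid
left join (select hrtask, sum(hrqty) as hrqty
           from hr
           group by hrtask) h on taskid = hrtask
group by 1, 2, 3
order by 2, 3;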
I'm trying to figure out how to show distinct records in groups in Crystal Reports. The view I wrote returns something like this:
Field 1 | Field 2 | Field 3
----------------------------------
10 | 111 | Record Info 1
10 | 111 | Record Info 1
10 | 222 | Record Info 2
20 | 111 | Record Info 1
20 | 222 | Record Info 2
The report groups are based on Field 1, and I want distinct Fields 2 and 3 within each group:
Field 1 | Field 2 | Field 3
----------------------------------
10 | 111 | Record Info 1
10 | 222 | Record Info 2
20 | 111 | Record Info 1
20 | 222 | Record Info 2
Fields 2 and 3 are always the same; Field 1 acts as an FK reference to the entries in the view. Selecting distinct rows in the view isn't really viable due to the huge number of columns being brought in.
Can this be done in CR?
Cheers
Create groups for Field 1 and Field 2.
Hide the Details area and the Field 1 group header and footer.
Drop all the columns you want to show into the Field 2 group header/footer.
Good luck!
You might also consider using Database | Select Distinct Records.
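For reference, that option is the report-side equivalent of a SELECT DISTINCT; if trimming the view ever becomes an option, scoping the distinct to just the three report fields side-steps the wide-view concern (the view name is illustrative):

select distinct field1, field2, field3
from report_view;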