Employee salary should display month-wise, with the months displayed horizontally as headings - postgresql

My requirement is as follows:
I am using PostgreSQL and iReport 4.0.1 for generating this report.
I have four tables: g_employee, g_year, g_period and g_salary. By joining these four tables and passing the parameters fromDate and toDate (with values like '01/02/14' and '01/05/14'), the months displayed in the headings should vary based on these parameters, as shown in the example below:
EmpName   01/02/14   01/03/14   01/04/14   01/05/14
abc       2000       3000       3000       2000
Can anyone help me get this output?

What you're describing means the number of columns would grow or shrink based on the number of months between the two parameters, which plain SQL just doesn't support.
I don't know of any way to add columns based on an interval between two parameters without procedurally generating the SQL statement.
What is possible is:
emp_id1 period1 salary
emp_id1 period2 salary
emp_id1 period3 salary
emp_id1 period4 salary
emp_id2 period1 salary
emp_id2 period2 salary
emp_id2 period3 salary
emp_id2 period4 salary
generated with something like:
select g_employee_id,
       g_period_start,
       sum(g_salary_amt) as g_salary_amt
from g_employee, g_year, g_period, g_salary
where <join everything>
  and g_period_start between date_param_1 and date_param_2
group by g_employee_id, g_period_start;
Hard to get more specific without the table structure.
As the range between date_param_1 and date_param_2 grows, the number of rows grows for each employee with pay in that "g_period".
EDIT - Other option:
A less dynamic option, which requires more parameters, would be:
select g_employee_id,
       (select g_salary_amount
        from g_period, g_salary
        where g_period_id = g_salary_period_id
          and g_salary_emp_id = g_employee_id
          and g_period_start = <DATE_PARAM_1>) as "DATE_PARAM_1_desc",
       (select g_salary_amount
        from g_period, g_salary
        where g_period_id = g_salary_period_id
          and g_salary_emp_id = g_employee_id
          and g_period_start = <DATE_PARAM_2>) as "DATE_PARAM_2_desc",
       (select g_salary_amount
        from g_period, g_salary
        where g_period_id = g_salary_period_id
          and g_salary_emp_id = g_employee_id
          and g_period_start = <DATE_PARAM_3>) as "DATE_PARAM_3_desc"
       , ..... -- dynamic not possible
from g_employee;
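For what it's worth, PostgreSQL's tablefunc extension provides crosstab(), which can turn the row-per-period result above into one column per month; the output column list still has to be declared up front, though, so it does not remove the need to know the months in advance. A rough sketch, where per_period_salaries stands in for the joined query above and the column types are assumptions:
CREATE EXTENSION IF NOT EXISTS tablefunc;

SELECT *
FROM crosstab(
    -- source query: one row per (employee, period, salary)
    $$ SELECT g_employee_id, g_period_start, g_salary_amt
       FROM per_period_salaries
       ORDER BY 1, 2 $$,
    -- category query: the months that become columns
    $$ SELECT d::date
       FROM generate_series('2014-02-01'::date, '2014-05-01'::date, interval '1 month') AS d $$
) AS ct (
    g_employee_id integer,
    "01/02/14"    numeric,
    "01/03/14"    numeric,
    "01/04/14"    numeric,
    "01/05/14"    numeric
);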

I created a table #g_employee and inserted dummy data:
create table #g_employee(empid int,yearid int,periodid int,salary int)
insert into #g_employee(empid,yearid,periodid,salary)
select 1,2014,02,2000
union
select 2,2014,02,2000
union
select 3,2014,02,2000
union
select 3,2014,03,2000
union
select 1,2014,03,3000
union
select 1,2014,04,4000
Output query as per your requirement:
Solution 1:
select empid,
       max(case when periodid = 2 and yearid = 2014 then salary end) as '01/02/2014',
       max(case when periodid = 3 and yearid = 2014 then salary end) as '01/03/2014',
       max(case when periodid = 4 and yearid = 2014 then salary end) as '01/04/2014'
from #g_employee
group by empid
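Since the original question is about PostgreSQL, roughly the same conditional aggregation would look like this there (double-quoted column aliases, and FILTER instead of CASE; g_employee is assumed to be a regular table, since Postgres does not use the # prefix for temp tables):
select empid,
       max(salary) filter (where periodid = 2 and yearid = 2014) as "01/02/2014",
       max(salary) filter (where periodid = 3 and yearid = 2014) as "01/03/2014",
       max(salary) filter (where periodid = 4 and yearid = 2014) as "01/04/2014"
from g_employee
group by empid;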
You can also do it with dynamic SQL:
Solution 2:
DECLARE @cols AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX)

select @cols = STUFF((SELECT ',' + QUOTENAME(periodid)
                      from #g_employee
                      group by periodid
                      FOR XML PATH(''), TYPE
                     ).value('.', 'NVARCHAR(MAX)')
                     , 1, 1, '')

set @query = 'SELECT empid,' + @cols + ' from
             (
                 select salary, periodid, empid
                 from #g_employee
             ) x
             pivot
             (
                 max(salary)
                 for periodid in (' + @cols + ')
             ) p '

execute(@query)
Hope this will help.

Related

SQL Server - Select with Group By together with Row_Number

I'm using SQL Server 2000 (80), so it's not possible to use the LAG function.
I have a data set with four columns:
Purchase_Date
Facility_no
Seller_id
Sale_id
I need to identify missing Sale_ids. Every Sale_id is 100% sequential, so there should not be any gaps in the order.
This code works for a specific date and store when specified, but I need it to work on the entire data set, looping through every Facility_no and every Seller_id for every Purchase_date:
declare @MAXCOUNT int

set @MAXCOUNT =
(
    select MAX(Sale_Id)
    from #table
    where Facility_no in (124)
      and Purchase_date = '2/7/2020'
      and Seller_id = 1
)

;WITH TRX_COUNT AS
(
    SELECT 1 AS Number
    union all
    select Number + 1 from TRX_COUNT
    where Number < @MAXCOUNT
)
select * from TRX_COUNT
where Number NOT IN
(
    select Sale_Id
    from #table
    where Facility_no in (124)
      and Purchase_Date = '2/7/2020'
      and Seller_id = 1
)
order by Number
OPTION (maxrecursion 0)
My Dataset
This column:
case when Sale_Id = 0
       or 1 = Sale_Id - LAG(Sale_Id) over (partition by Facility_no, Purchase_Date, Seller_id
                                           order by Sale_Id)
     then 'OK' else 'Previous Missing' end
will tell you which Seller_ids have some sale missing. If you want to go a step further and get exactly your desired output, then filter the 'Previous Missing' rows with DISTINCT and join with a tally table on NOT EXISTS.
Edit: OP mentions in the comments that they can't use LAG(). My suggestion, then, would be:
Make a temp table that has the max(Sale_id), grouped by facility/seller_id.
Then you can get your missing results with this pseudocode query:
Select ...
from temptable t
inner join tally N on N.num <= t.maxsale
where not exists (select ... from sourcetable s
                  where s.facility = t.facility and s.seller = t.seller and s.sale = N.num)
> because the only way to "construct" nonexisting combinations is to construct them all and just remove the existing ones.
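A concrete (untested) version of that pseudocode, using the column names from the question and assuming a pre-built numbers table dbo.Tally(num) that covers at least the largest Sale_id:
select t.Facility_no, t.Purchase_Date, t.Seller_id, N.num as missing_sale_id
from (
    -- highest sale id per facility/seller/date
    select Facility_no, Purchase_Date, Seller_id, max(Sale_id) as maxsale
    from #table
    group by Facility_no, Purchase_Date, Seller_id
) t
inner join dbo.Tally N on N.num <= t.maxsale
where not exists (
    select 1
    from #table s
    where s.Facility_no = t.Facility_no
      and s.Purchase_Date = t.Purchase_Date
      and s.Seller_id = t.Seller_id
      and s.Sale_id = N.num
)
order by t.Facility_no, t.Purchase_Date, t.Seller_id, N.num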
This one worked out:
;WITH cte_Rn AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY Facility_no, Purchase_Date, Seller_id
                              ORDER BY Sale_id) AS [Rn_Num]
    FROM (
        SELECT
            Facility_no,
            Purchase_Date,
            Seller_id,
            Sale_id
        FROM MyTable WITH (NOLOCK)
    ) a
)
, cte_Rn_0 AS (
    SELECT
        Facility_no,
        Purchase_Date,
        Seller_id,
        Sale_id,
        -- [Rn_Num] AS 'Skipped Sale'
        -- case when Sale_id = 0 Then [Rn_Num] - 1 Else [Rn_Num] End AS 'Skipped Sale for 0'
        [Rn_Num] - 1 AS 'Skipped Sale for 0'
    FROM cte_Rn a
)
SELECT
    Facility_no,
    Purchase_Date,
    Seller_id,
    Sale_id,
    -- [Skipped Sale],
    [Skipped Sale for 0]
FROM cte_Rn_0 a
WHERE NOT EXISTS
(
    select * from cte_Rn_0 b
    where b.Sale_id = a.[Skipped Sale for 0]
      and a.Facility_no = b.Facility_no
      and a.Purchase_Date = b.Purchase_Date
      and a.Seller_id = b.Seller_id
)
--ORDER BY Purchase_Date ASC

tSQL transpose / pivot table dynamically with multiple columns

Looking to pivot/transpose with T-SQL (or something else) on a table with multiple rows per item number, one row per Code (unit of measure).
It would have to be dynamic, as there could be a lot of different unit of measure codes per item.
Current data table:
select [Item No_], Code, [Qty_ per Unit of Measure], Weight, Cubage
from [mycompany$Item Unit of Measure]
where [Item No_] in ('007967','007968')
Desired output would be:
Additional info
We have a table that holds all the possible Unit of Measure codes, which perhaps could be used in the final code:
select
Code
from [mycompany$Unit of Measure]
How to achieve this, and what would the SQL code look like?
Suggested solution from @Larnu:
DECLARE @cols AS NVARCHAR(MAX);
DECLARE @query AS NVARCHAR(MAX);

select @cols = STUFF((SELECT distinct ',' + QUOTENAME(Code)
                      FROM [mycompany$Unit of Measure]
                      FOR XML PATH(''), TYPE
                     ).value('.', 'NVARCHAR(MAX)')
                     , 1, 1, '');

SELECT @query =
'SELECT *
 FROM
 (
     SELECT
         o.[Item No_],
         p.Code,
         o.Weight
     FROM [mycompany$Item Unit of Measure] AS o
     INNER JOIN [mycompany$Unit of Measure] AS p ON o.Code = p.Code
 ) AS t
 PIVOT
 (
     MAX(Weight)
     FOR Code IN( ' + @cols + ' )' +
 ' ) AS p ; ';

execute(@query);
However, this only gives the max value for Weight and not Cubage. Also, it doesn't meet the desired end result, as the column headers are not tagged with CODE.Weight, CODE.Cubage etc. (PALLET.Weight, PALLET.Cubage); see the sketch after the screenshot below for one way to extend it.
Screenshot of results with above code:
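One way to get both measures with CODE.Weight / CODE.Cubage style headers is to unpivot Weight and Cubage into name/value pairs first and then pivot on the combined name. A sketch along those lines, untested against the real tables; it assumes Weight and Cubage can share one value column (cast them to a common type inside the VALUES list if they differ):
DECLARE @cols  AS NVARCHAR(MAX);
DECLARE @query AS NVARCHAR(MAX);

-- one output column per Code/measure combination, e.g. [PALLET.Weight], [PALLET.Cubage]
SELECT @cols = STUFF((SELECT DISTINCT ',' + QUOTENAME(u.Code + '.' + m.measure)
                      FROM [mycompany$Unit of Measure] AS u
                      CROSS JOIN (VALUES ('Weight'), ('Cubage')) AS m(measure)
                      FOR XML PATH(''), TYPE
                     ).value('.', 'NVARCHAR(MAX)'), 1, 1, '');

SET @query = '
SELECT *
FROM
(
    -- unpivot Weight and Cubage into (col, val) pairs
    SELECT o.[Item No_],
           o.Code + ''.'' + v.measure AS col,
           v.val
    FROM [mycompany$Item Unit of Measure] AS o
    CROSS APPLY (VALUES (''Weight'', o.Weight),
                        (''Cubage'', o.Cubage)) AS v(measure, val)
) AS t
PIVOT
(
    MAX(val) FOR col IN (' + @cols + ')
) AS p;';

EXEC (@query);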

How can I SUM distinct records in a Postgres database where there are duplicate records?

Imagine a table that looks like this:
The SQL to get this data was just SELECT *
The first column is "row_id", the second is "id" (the order ID), and the third is "total" (the revenue).
I'm not sure why there are duplicate rows in the database, but when I do a SUM(total), it includes the second entry even though the order ID is the same, which makes my numbers larger than if I SELECT DISTINCT id, total, export to Excel, and sum the values manually.
So my question is: how can I SUM over just the distinct order IDs so that I get the same revenue as if I exported every distinct order ID row to Excel?
Thanks in advance!
Easy - just divide by the count:
select id, sum(total) / count(id)
from orders
group by id
See live demo.
This also handles any level of duplication (e.g. triplicates): since every duplicate row for a given id carries the same total, the sum divided by the row count gives back the single value.
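If a single grand total is wanted rather than one row per id, the same trick can be wrapped in a subquery (a sketch against the orders table used above):
select sum(id_total) as grand_total
from (
    select sum(total) / count(id) as id_total
    from orders
    group by id
) t;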
You can try something like this (with your example):
Table
create table test (
row_id int,
id int,
total decimal(15,2)
);
insert into test values
(6395, 1509, 112), (22986, 1509, 112),
(1393, 3284, 40.37), (24360, 3284, 40.37);
Query
with distinct_records as (
select distinct id, total from test
)
select a.id, b.actual_total, array_agg(a.row_id) as row_ids
from test a
inner join (select id, sum(total) as actual_total from distinct_records group by id) b
on a.id = b.id
group by a.id, b.actual_total
Result
| id | actual_total | row_ids |
|------|--------------|------------|
| 1509 | 112 | 6395,22986 |
| 3284 | 40.37 | 1393,24360 |
Explanation
We do not know the reason why orders and totals appear more than once with different row_id values, so using a common table expression (CTE) introduced by the WITH ... clause, we get the distinct id and total.
Below the CTE, we use this distinct data to do the totaling. We join the id in the original table with the aggregation over the distinct values, then comma-separate the row_ids so that the information looks cleaner.
SQLFiddle example
http://sqlfiddle.com/#!15/72639/3
Create a custom aggregate:
CREATE OR REPLACE FUNCTION sum_func (
    double precision, pg_catalog.anyelement, double precision
)
RETURNS double precision AS
$body$
    -- add the third argument (the total) to the running state in $1;
    -- the first aggregate argument (id) only takes part in the DISTINCT
    SELECT case when $3 is not null then COALESCE($1, 0) + $3 else $1 end
$body$
LANGUAGE 'sql';

CREATE AGGREGATE dist_sum (
    pg_catalog."any",
    double precision)
(
    SFUNC = sum_func,
    STYPE = float8
);
And then calculate the distinct sum like:
select dist_sum(distinct id, total)
from orders
SQLFiddle
You can use DISTINCT in your aggregate functions:
SELECT id, SUM(DISTINCT total) FROM orders GROUP BY id
Documentation here: https://www.postgresql.org/docs/9.6/static/sql-expressions.html#SYNTAX-AGGREGATES
If we can trust that the total for one order is actually one row, we could eliminate the duplicates in a sub-query by selecting the MAX of the PK id column. An example:
CREATE TABLE test2 (id int, order_id int, total int);
insert into test2 values (1,1,50);
insert into test2 values (2,1,50);
insert into test2 values (5,1,50);
insert into test2 values (3,2,100);
insert into test2 values (4,2,100);
select order_id, sum(total)
from test2 t
join (
select max(id) as id
from test2
group by order_id) as sq
on t.id = sq.id
group by order_id
sql fiddle
In difficult cases:
select
id,
(
SELECT SUM(value::int4)
FROM jsonb_each_text(jsonb_object_agg(row_id, total))
) as total
from orders
group by id
I would suggest just using a sub-query:
SELECT "a"."id", SUM("a"."total")
FROM (SELECT DISTINCT ON ("id") * FROM "Database"."Schema"."Table") AS "a"
GROUP BY "a"."id"
The above will give you the total of each id.
Use the query below if you want the full total with every duplicate removed:
SELECT SUM("a"."total")
FROM (SELECT DISTINCT ON ("id") * FROM "Database"."Schema"."Table") AS "a"
Using a subselect (http://sqlfiddle.com/#!7/cef1c/51):
select sum(total) from (
    select distinct id, total
    from orders
) t
Using CTE (http://sqlfiddle.com/#!7/cef1c/53):
with distinct_records as (
select distinct id, total from orders
)
select sum(total) from distinct_records;

Pivot Table By One Column And Multiple Aggregates

I prepared the following report. It displays three columns (AccruedInterest, Tip & CardRevenue) by month for a given year. Now I want to "rotate" the result so that the StartDate values turn into 12 columns.
I have this:
I need this:
I have tried pivoting the table, but I need multiple columns to be aggregated, as you can see.
You have to unpivot your data and then pivot it.
Note: I put my own values into your table to make each row unique, so you can verify the data is correct.
--Create YourTable
SELECT * INTO YourTable
FROM
(
SELECT CAST('2015-01-01' AS DATE) StartDate,
607.834 AS AccruedInterest,
1 AS Tip,
3 AS CardRevenue
UNION ALL
SELECT CAST('2015-02-01' AS DATE) StartDate,
643.298 AS AccruedInterest,
16.8325 AS Tip,
5 AS CardRevenue
) A;
GO
--This pivots your data
SELECT *
FROM
(
--This unpivots your data using cross apply
SELECT col,val,StartDate
FROM YourTable
CROSS APPLY
(
SELECT 'AccruedInterest', CAST(AccruedInterest AS VARCHAR(100))
UNION ALL
SELECT 'Tip', CAST(Tip AS VARCHAR(100))
UNION ALL
SELECT 'CardRevenue', CAST(CardRevenue AS VARCHAR(100))
) A(col,val)
) B
PIVOT
(
MAX(val) FOR startdate IN([2015-01-01],[2015-02-01])
) pvt
Results:
col              2015-01-01  2015-02-01
AccruedInterest  607.834     643.298
CardRevenue      3           5
Tip              1.0000      16.8325

T-SQL how to count the number of duplicate rows then print the outcome?

I have a table ProductNumberDuplicates_backups, which has two columns named ProductID and ProductNumber. There are some duplicate ProductNumbers. How can I count the distinct number of products, then print the outcome like "(n) product(s) were backed up."? Because this is inside a stored procedure, I have to use a variable @numrecord for the distinct number of rows. I put my code like this:
set @numrecord = select distinct ProductNumber
from ProductNumberDuplicates_backups where COUNT(*) > 1
group by ProductID
having Count(ProductNumber) > 1
Print cast(@numrecord as varchar) + ' product(s) were backed up.'
Obviously the error was after the = sign, as a SELECT cannot follow it. I've searched for similar cases, but they are just SELECT statements. Please help. Many thanks!
Try
select @numrecord = count(distinct ProductNumber)
from ProductNumberDuplicates_backups
Print cast(@numrecord as varchar) + ' product(s) were backed up.'
begin tran

create table ProductNumberDuplicates_backups (
    ProductNumber int
)

insert ProductNumberDuplicates_backups (ProductNumber)
select 1
union all
select 2
union all
select 1
union all
select 3
union all
select 2

select * from ProductNumberDuplicates_backups

declare @numRecord int

select @numRecord = count(ProductNumber) from
    (select ProductNumber,
            ROW_NUMBER() over (partition by ProductNumber order by ProductNumber) RowNumber
     from ProductNumberDuplicates_backups) p
where p.RowNumber > 1

print cast(@numRecord as varchar) + ' product(s) were backed up.'

rollback