How to get the minimum date of same column in DB2 - db2

I need to get the order quantity of the minimum ADATE.
I'm using the query below and getting 12 records. Now I want to select the Order_Qty of the minimum ADATE, which is 06-NOV-2018 (2018-11-06).
For every customer (there will be multiple records), I need to get the Order_Qty of the minimum ADATE column.
select
    Customer,
    OrderID,
    LocationID,
    Order_Qty,
    Sent_date,
    ADATE
from
    (
        select OrderID, LocationID, Sent_date, Order_Qty, Customer, TimeStampA
        from ARC_TBL
    ) obn
    inner join
    (
        select ADATE, TimeStampA
        from trackTBL
    ) snt
        on obn.TimeStampA = snt.TimeStampA
where Customer = 'ABC'
  and OrderID = 'XYZ100'
  and Sent_date = '2018-11-18'
  and LocationID = '250';

SELECT QTY, ADATE
FROM table
ORDER BY ADATE
FETCH FIRST 1 ROW ONLY
Explain your question in more detail and you will get better answers.
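If the goal is the Order_Qty of the earliest ADATE per customer (as the question states), one possible sketch, untested and reusing the table and column names from the question, is to rank the rows per customer by ADATE with ROW_NUMBER() and keep the first one:
-- Sketch (untested): Order_Qty of the minimum ADATE for every customer
select Customer, OrderID, LocationID, Order_Qty, Sent_date, ADATE
from (
    select obn.Customer, obn.OrderID, obn.LocationID, obn.Order_Qty, obn.Sent_date, snt.ADATE,
           row_number() over (partition by obn.Customer order by snt.ADATE) as rn
    from ARC_TBL obn
    inner join trackTBL snt
        on obn.TimeStampA = snt.TimeStampA
) t
where rn = 1;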

Related

Find difference between longest tenure and least tenure employee(s) SQL

I want a query to find the number of days between the longest- and least-tenured employees still working for the company. The output should include the number of employees with the longest tenure, the number of employees with the least tenure, and the number of days between the longest-tenured and least-tenured hiring dates.
This is what I have so far; I am having trouble getting the difference to appear in the output.
select a.count, date_tenure
from
(
    select distinct hire_date,
           current_date - hire_date as date_tenure,
           count(id) over (partition by hire_date) as count,
           rank() over (order by current_date - hire_date) as rank_asc,
           rank() over (order by current_date - hire_date desc) as rank_desc
    from employees
    where termination_date is null
    order by 2 desc
) a
where rank_desc = 1
   or rank_asc = 1
Table: employees
Columns: id, hire_date, termination_date
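One possible sketch (untested, assuming Postgres-style date arithmetic where subtracting two dates yields a number of days): rank hire dates in both directions, then aggregate the two extremes into a single row.
-- Sketch (untested): counts of longest- and least-tenured employees and the day gap between them
with ranked as (
    select hire_date,
           rank() over (order by hire_date)      as rank_oldest,  -- 1 = longest tenure
           rank() over (order by hire_date desc) as rank_newest   -- 1 = least tenure
    from employees
    where termination_date is null
)
select count(*) filter (where rank_oldest = 1) as longest_tenure_count,
       count(*) filter (where rank_newest = 1) as least_tenure_count,
       max(hire_date) - min(hire_date)         as days_between
from ranked
where rank_oldest = 1
   or rank_newest = 1;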

Output all values of group even if null for each day of the year postgresql

My problem is the following:
I have a movement table that documents whenever a car is moved from one station to another or a station gets supplied with more cars. The table's columns are: id, productId, quantity, updatedAt, date, locationId, createdAt.
I want to display the sum of the cars available for each day of the year, grouped by productId and locationId.
with you as (
    SELECT "productId",
           "locationId",
           "date" as start_date,
           SUM(SUM("Movement"."quantity")) OVER (PARTITION BY "productId", "locationId" ORDER BY "date") as schau,
           "quantity",
           LEAD("date") OVER (PARTITION BY "productId", "locationId" ORDER BY "date") as end_date
    FROM "Movement"
    GROUP BY 1, 2, 3, 5
    ORDER BY "date"
),
calendar as (
    select date '2022-01-01' + (n || ' days')::interval as calendar_date
    from generate_series(0, 365) n
)
SELECT * FROM calendar
left join you on calendar.calendar_date between start_date and end_date
The SUM(SUM(...)) delivers the rolling sum of the products available. Check. The LEAD function lets me define periods per movement, which are necessary in order to match the calendar I get from the generate_series function. This way I get a date for every day of the year. Check.
The only problem I have is that my output won't show me products that don't have an entry by that day. Is there any way to ALWAYS show all members of a group even if there is no entry for them? In my case I want to show all possible productId and locationId combinations even if there is no entry in my table for that date (I want to output 0 in that case).
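One hedged sketch (untested; it reuses the table and column names from the question and assumes Postgres): build the full grid of calendar day, productId and locationId first, then LEFT JOIN the movement periods onto it and COALESCE missing sums to 0.
-- Sketch (untested): grid of every day x every product/location, padded with 0
with movement_periods as (
    select "productId",
           "locationId",
           "date" as start_date,
           sum(sum("quantity")) over (partition by "productId", "locationId" order by "date") as running_qty,
           lead("date") over (partition by "productId", "locationId" order by "date") as end_date
    from "Movement"
    group by "productId", "locationId", "date"
),
calendar as (
    select date '2022-01-01' + n as calendar_date
    from generate_series(0, 365) n
),
grid as (
    select c.calendar_date, p."productId", p."locationId"
    from calendar c
    cross join (select distinct "productId", "locationId" from "Movement") p
)
select g.calendar_date,
       g."productId",
       g."locationId",
       coalesce(m.running_qty, 0) as cars_available
from grid g
left join movement_periods m
       on m."productId" = g."productId"
      and m."locationId" = g."locationId"
      and g.calendar_date >= m.start_date
      and (m.end_date is null or g.calendar_date < m.end_date)
order by g.calendar_date, g."productId", g."locationId";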

Max function in Postgres does not give the max value

I am writing a simple SQL query to get the latest record for every customer, and to get the max device_count if there are multiple records for a customer with the same timestamp. However, the max function doesn't seem to return the max value. Any help would be appreciated.
My SQL query -
select sub.customerid, max(sub.device_count)
from (
    SELECT customerid, device_count,
           RANK() OVER (PARTITION BY customerid ORDER BY date_time desc) AS rownum
    FROM tableA
) sub
WHERE rownum = 1
group by 1
Sample data:
customerid  device_count  date_time
A           3573          2021-07-26 02:15:09-05:00
A           4             2021-07-26 02:15:13-05:00
A           16988         2021-07-26 02:15:13-05:00
A           20696         2021-07-26 02:15:13-05:00
A           24655         2021-07-26 02:15:13-05:00
The desired output should be the row with the max device_count, which is 24655, but I get 16988 as the output.
Try this:
Sort your table using ORDER BY customerid, device_count.
Then apply the LAST_VALUE(device_count) window function over the customerid partition.
Apply LAST_VALUE() to find the latest device_count (since it is sorted ascending, the last device_count value is the max).
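A minimal sketch of that suggestion (untested; date_time is put first in the ordering so the "latest record" requirement from the question is kept, and the frame is widened so LAST_VALUE sees the whole partition):
-- Sketch (untested): last value per customer when ordered by date_time, then device_count
SELECT DISTINCT customerid,
       LAST_VALUE(device_count) OVER (
           PARTITION BY customerid
           ORDER BY date_time, device_count
           ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
       ) AS device_count
FROM tableA;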
You need to put device_count into the window function's order by and take out the aggregation:
select sub.customerid, device_count
from (
    SELECT customerid, device_count,
           RANK() OVER (PARTITION BY customerid
                        ORDER BY date_time desc, device_count desc) AS rownum
    FROM tableA
) sub
where rownum = 1;
But if the top row for a customerid has ties (in both the date_time and device_count fields), it will return all such ties, so it is better to replace RANK() with ROW_NUMBER(), as shown below.
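That replacement looks like this (same query, just swapping the ranking function):
-- ROW_NUMBER() assigns a unique number per row, so ties cannot produce duplicate results
select sub.customerid, device_count
from (
    SELECT customerid, device_count,
           ROW_NUMBER() OVER (PARTITION BY customerid
                              ORDER BY date_time desc, device_count desc) AS rownum
    FROM tableA
) sub
where rownum = 1;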

Postgres : Need distinct records count

I have a table with duplicate entries, and the objective is to get the distinct entries based on the latest timestamp.
In my case serial_no will have duplicate entries, but I select unique entries based on the latest timestamp.
The query below gives me the unique results with the latest timestamp.
But my concern is that I need the total count of unique entries.
For example, assume my table has 40 entries overall. With the query below I am able to get 20 unique rows based on the serial number.
But the total is returned as 40 instead of 20.
Any help on this, please?
SELECT *
FROM (
    SELECT DISTINCT ON (serial_no)
           id,
           serial_no,
           name,
           timestamp,
           COUNT(*) OVER () as total
    FROM product_info
    INNER JOIN my.account ON id = accountid
    WHERE lower(name) = 'hello'
    ORDER BY serial_no, timestamp DESC
    OFFSET 0
    LIMIT 10
) AS my_info
ORDER BY serial_no asc
The product_info table initially has this data:
serial_no name timestamp
11212 pulp12 2018-06-01 20:00:01
11213 mango 2018-06-01 17:00:01
11214 grapes 2018-06-02 04:00:01
11215 orange 2018-06-02 07:05:30
11212 pulp12 2018-06-03 14:00:01
11213 mango 2018-06-03 13:00:00
After the distinct query, I got all unique results based on the latest timestamp:
serial_no name timestamp total
11212 pulp12 2018-06-03 14:00:01 6
11213 mango 2018-06-03 13:00:00 6
11214 grapes 2018-06-02 04:00:01 6
11215 orange 2018-06-02 07:05:30 6
But the total appears as 6. I wanted the total to be 4, since there are only 4 unique entries.
I am not sure how to modify my existing query to get this desired result.
Postgres supports COUNT(DISTINCT column_name), so if I have understood your request, using that instead of COUNT(*) will work, and you can drop the OVER.
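One way to read that suggestion (a sketch, untested, reusing the question's tables and filter) is to compute the distinct count once in a scalar subquery rather than as a window function:
-- Sketch (untested): total = number of distinct serial_no values, computed as a scalar subquery
SELECT DISTINCT ON (serial_no)
       id,
       serial_no,
       name,
       timestamp,
       (SELECT COUNT(DISTINCT serial_no)
        FROM product_info
        INNER JOIN my.account ON id = accountid
        WHERE lower(name) = 'hello') AS total
FROM product_info
INNER JOIN my.account ON id = accountid
WHERE lower(name) = 'hello'
ORDER BY serial_no, timestamp DESC
LIMIT 10;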
What you could do is move the window function to a higher-level select statement. This is because the window function is evaluated before the distinct on and limit clauses are applied. Also, you cannot include the DISTINCT keyword within window functions - it has not been implemented yet (as of Postgres 9.6).
SELECT *,
       COUNT(*) OVER () as total -- here
FROM (
    SELECT DISTINCT ON (serial_no)
           id,
           serial_no,
           name,
           timestamp
    FROM product_info
    INNER JOIN my.account ON id = accountid
    WHERE lower(name) = 'hello'
    ORDER BY serial_no, timestamp DESC
    LIMIT 10
) AS my_info
Additionally, the OFFSET 0 is not required there, and the extra outer sort is superfluous; I've removed both.
Another way would be to include a computed column in the select clause, but this would not be as fast, since it would require one more scan of the table. This obviously assumes that your total is strictly tied to your result set, and not to rows stored in the table that get filtered out.
select count(*), serial_no from product_info group by serial_no
will give you the number of duplicates for each serial number.
The most mindless way of incorporating that information would be to join it in as a subquery:
SELECT *
FROM (
    SELECT DISTINCT ON (serial_no)
           id,
           serial_no,
           name,
           timestamp,
           COUNT(*) OVER () as total
    FROM product_info
    INNER JOIN my.account ON id = accountid
    WHERE lower(name) = 'hello'
    ORDER BY serial_no, timestamp DESC
    OFFSET 0
    LIMIT 10
) AS my_info
join (
    select count(*) as counts, serial_no
    from product_info
    group by serial_no
) as X on X.serial_no = my_info.serial_no
ORDER BY my_info.serial_no asc

array_agg group by and null

Given this table:
SELECT * FROM CommodityPricing order by dateField
"SILVER";60.45;"2002-01-01"
"GOLD";130.45;"2002-01-01"
"COPPER";96.45;"2002-01-01"
"SILVER";70.45;"2003-01-01"
"GOLD";140.45;"2003-01-01"
"COPPER";99.45;"2003-01-01"
"GOLD";150.45;"2004-01-01"
"MERCURY";60;"2004-01-01"
"SILVER";80.45;"2004-01-01"
As of 2004, COPPER was dropped and MERCURY was introduced.
How can I get the value of (array_agg(value order by date desc) ) [1] as NULL for COPPER?
select commodity,(array_agg(value order by date desc) ) --[1]
from CommodityPricing
group by commodity
"COPPER";"{99.45,96.45}"
"GOLD";"{150.45,140.45,130.45}"
"MERCURY";"{60}"
"SILVER";"{80.45,70.45,60.45}"
SQL Fiddle
select
    commodity,
    array_agg(
        case when commodity = 'COPPER' then null else value end
        order by date desc
    )
from CommodityPricing
group by commodity;
To "pad" missing rows with NULL values in the resulting array, build your query on full grid of rows and LEFT JOIN actual values to the grid.
Given this table definition:
CREATE TEMP TABLE price (
commodity text
, value numeric
, ts timestamp -- using ts instead of the inappropriate name date
);
I use generate_series() to get a list of timestamps representing the years and CROSS JOIN to a unique list of all commodities (SELECT DISTINCT ...).
SELECT commodity, (array_agg(value ORDER BY ts DESC)) AS years
FROM generate_series ('2002-01-01 00:00:00'::timestamp
, '2004-01-01 00:00:00'::timestamp
, '1y') t(ts)
CROSS JOIN (SELECT DISTINCT commodity FROM price) c(commodity)
LEFT JOIN price p USING (ts, commodity)
GROUP BY commodity;
Result:
COPPER {NULL,99.45,96.45}
GOLD {150.45,140.45,130.45}
MERCURY {60,NULL,NULL}
SILVER {80.45,70.45,60.45}
SQL Fiddle.
I cast the array to text in the fiddle, because the display sucks and would swallow NULL values otherwise.
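For reference, that cast is just the same query with ::text appended to the aggregate:
-- Same query, with the array cast to text so NULL elements stay visible in the output
SELECT commodity, (array_agg(value ORDER BY ts DESC))::text AS years
FROM generate_series('2002-01-01 00:00:00'::timestamp
                   , '2004-01-01 00:00:00'::timestamp
                   , '1y') t(ts)
CROSS JOIN (SELECT DISTINCT commodity FROM price) c(commodity)
LEFT JOIN price p USING (ts, commodity)
GROUP BY commodity;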