I have a stock transaction table like this:
Item  Date           TxnType  Qty  Price
ABC   01-April-2012  IN       200  750.00
ABC   05-April-2012  OUT      100
ABC   10-April-2012  IN        50  700.00
ABC   16-April-2012  IN        75  800.00
ABC   25-April-2012  OUT      175
XYZ   02-April-2012  IN       150  350.00
XYZ   08-April-2012  OUT      120
XYZ   12-April-2012  OUT       10
XYZ   24-April-2012  IN        90  340.00
I need the value of the inventory for each item using FIFO (first in, first out) valuation, meaning the first purchased items are consumed first.
The expected stock valuation for the above data is:
Item  Qty  Value
ABC    50  40000.00
XYZ   110  37600.00
Please help me to get the solution.
Surprisingly difficult to get right. I suspect it would be easier using SQL Server 2012, which supports running sums in window functions. Anyhow:
declare @Stock table (
    Item char(3) not null,
    [Date] datetime not null,
    TxnType varchar(3) not null,
    Qty int not null,
    Price decimal(10,2) null
)
insert into @Stock (Item, [Date], TxnType, Qty, Price) values
('ABC','20120401','IN',  200, 750.00),
('ABC','20120405','OUT', 100, null  ),
('ABC','20120410','IN',   50, 700.00),
('ABC','20120416','IN',   75, 800.00),
('ABC','20120425','OUT', 175, null  ),
('XYZ','20120402','IN',  150, 350.00),
('XYZ','20120408','OUT', 120, null  ),
('XYZ','20120412','OUT',  10, null  ),
('XYZ','20120424','IN',   90, 340.00);
;WITH OrderedIn as (
    select *, ROW_NUMBER() OVER (PARTITION BY Item ORDER BY [Date]) as rn
    from @Stock
    where TxnType = 'IN'
), RunningTotals as (
    select Item, Qty, Price, Qty as Total, 0 as PrevTotal, rn
    from OrderedIn
    where rn = 1
    union all
    select rt.Item, oi.Qty, oi.Price, rt.Total + oi.Qty, rt.Total, oi.rn
    from RunningTotals rt
    inner join OrderedIn oi
        on rt.Item = oi.Item and
           rt.rn = oi.rn - 1
), TotalOut as (
    select Item, SUM(Qty) as Qty
    from @Stock
    where TxnType = 'OUT'
    group by Item
)
select
    rt.Item,
    SUM(CASE WHEN PrevTotal > out.Qty THEN rt.Qty ELSE rt.Total - out.Qty END * Price) as Value
from RunningTotals rt
inner join TotalOut out
    on rt.Item = out.Item
where rt.Total > out.Qty
group by rt.Item
The first observation is that we don't need to do anything special for the OUT transactions - we just need to know their total quantity, which is what the TotalOut CTE calculates. The first two CTEs work with the IN transactions and compute the "interval" of stock each one represents - change the final query to just select * from RunningTotals to get a feel for that.
The final SELECT finds the rows that haven't been completely exhausted by outgoing transactions, then decides for each whether to count the whole quantity of that incoming transaction, or whether it is the transaction that straddles the outgoing total.
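As a cross-check on the expected figures, the same FIFO consumption can be sketched procedurally. This is an illustrative Python sketch, not part of the T-SQL answer; the transactions and the expected results are taken from the question:

```python
from collections import deque

def fifo_value(txns):
    """Value remaining stock per item under FIFO.

    txns: list of (item, txn_type, qty, price) in date order;
    price is None for OUT rows.
    Returns {item: (remaining_qty, remaining_value)}.
    """
    layers = {}  # item -> deque of [qty, price] purchase layers, oldest first
    for item, txn_type, qty, price in txns:
        q = layers.setdefault(item, deque())
        if txn_type == 'IN':
            q.append([qty, price])
        else:  # OUT: consume the oldest layers first
            remaining = qty
            while remaining > 0:
                if q[0][0] <= remaining:
                    remaining -= q.popleft()[0]   # layer fully consumed
                else:
                    q[0][0] -= remaining          # layer partially consumed
                    remaining = 0
    return {item: (sum(n for n, _ in q), sum(n * p for n, p in q))
            for item, q in layers.items()}

txns = [
    ('ABC', 'IN', 200, 750.00), ('ABC', 'OUT', 100, None),
    ('ABC', 'IN',  50, 700.00), ('ABC', 'IN',   75, 800.00),
    ('ABC', 'OUT', 175, None),
    ('XYZ', 'IN', 150, 350.00), ('XYZ', 'OUT', 120, None),
    ('XYZ', 'OUT', 10, None),   ('XYZ', 'IN',   90, 340.00),
]
print(fifo_value(txns))  # {'ABC': (50, 40000.0), 'XYZ': (110, 37600.0)}
```

This reproduces the expected valuation: ABC's final OUT of 175 eats the 100 left at 750 and all 50 at 700, leaving 50 units of the 800.00 layer.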
I think you need a detail transaction table for this - something like Stock, StockDetail, and StockDetailTransaction, where StockDetailTransaction holds the FIFO entries for the stock. Whenever an item moves in or out, add a record to StockDetailTransaction.
I have the following table in PostgreSQL, and I want to display the maximum sale figure for each fname along with the relevant saleDate. Please help me with the solution code.
CREATE TABLE CookieSale (
ID VARCHAR(4),
fname VARCHAR(15),
sale FLOAT,
saleDate DATE
);
INSERT INTO CookieSale
VALUES
('E001', 'Linda', 1000.00, '2016-01-30'),
('E002', 'Sally', 750.00, '2016-01-30'),
('E003', 'Zindy', 500.00, '2016-01-30'),
('E001', 'Linda', 150.00, '2016-02-01'),
('E001', 'Linda', 5000.00, '2016-02-01'),
('E002', 'Sally', 250.00, '2016-02-01'),
('E001', 'Linda', 250.00, '2016-02-02'),
('E002', 'Sally', 150.00, '2016-02-02'),
('E003', 'Zindy', 50.00, '2016-02-02');
I tried with
SELECT fname, MAX(sale), saleDate
FROM CookieSale;
I need the results to be like
"Linda | 5000.00 | 2016-02-01"
Your description and expected results are inconsistent: the description asks for the maximum sale of each fname, while the expected results show only the overall maximum. And neither has anything to do with the title. Please try to be a little more consistent.
I'll take the description as what is actually wanted. For that, use the window function RANK in a subselect and have the outer select keep only rank 1.
select fname, sale, saledate
from ( select cs.*,
              rank() over (partition by fname order by sale desc) as rk
       from cookiesale cs
     ) csr
where rk = 1;
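The greatest-n-per-group logic can be sanity-checked outside the database. A small Python sketch over the question's data, keeping every row tied for the per-name maximum just as RANK() = 1 would (illustrative only):

```python
def top_sales(rows):
    """Return the rows holding the maximum sale per fname (ties kept, like RANK() = 1)."""
    by_name = {}
    for fname, sale, saledate in rows:
        by_name.setdefault(fname, []).append((sale, saledate))
    result = {}
    for fname, sales in by_name.items():
        top = max(s for s, _ in sales)
        result[fname] = [(s, d) for s, d in sales if s == top]
    return result

rows = [
    ('Linda', 1000.00, '2016-01-30'), ('Sally', 750.00, '2016-01-30'),
    ('Zindy',  500.00, '2016-01-30'), ('Linda', 150.00, '2016-02-01'),
    ('Linda', 5000.00, '2016-02-01'), ('Sally', 250.00, '2016-02-01'),
    ('Linda',  250.00, '2016-02-02'), ('Sally', 150.00, '2016-02-02'),
    ('Zindy',   50.00, '2016-02-02'),
]
print(top_sales(rows))
# {'Linda': [(5000.0, '2016-02-01')], 'Sally': [(750.0, '2016-01-30')],
#  'Zindy': [(500.0, '2016-01-30')]}
```

Note that, like RANK, this returns more than one row per name if two sales tie for the maximum; with this data there are no ties.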
I have a fact table (table1) with the columns (products, start, end, value1, month [calculated column]), where start and end are timestamps.
What I am trying to build is a table and a bar chart showing the sum of value1 for each month, divided by a factor that depends on the month (this is a yearly report; I load one year of data into Qlik Sense).
I used start and end to generate an autoCalendar in the Qlik Sense data manager. Then I derived the month from start and stored it in the calculated column "month" in table1, using the autoCalendar feature (Month(start.autoCalendar.Month)).
After that, I created another table with two columns (month, value2), where value2 is the factor by which value1 must be divided for each month - that is, sum(value1) / 1520 for January, sum(value1) / 650 for February, and so on. The two month columns link the tables in Qlik Sense, so in my expression I can calculate sum(value1) and pick up the value2 that matches the month from table2.
The calculation works correctly, but one thing is still missing: the products do not have a value1 in every month. For example, say I have products (p1, p2, ...); table1 has data for p1 only in (Jun, Feb, Nov), and for p2 only in (Mar, Apr, May, Dec). So when the data is presented in a Qlik Sense table or a bar chart, I can only see the months which have values in the fact table. (The Qlik Sense table has two dimensions, [products] and [month], and the measure m1 = sum(value1)/value2.)
What I want is a yearly report showing all 12 months; in my example I see only 3 months for p1 and 4 for p2. When there is no data, the measure m1 should be 0, and I want that 0 to appear in my table and chart.
I think it might work if I could show the data of the Qlik Sense table as a right outer join over the relationship (table1.month >> table2.month). Is such an outer join possible in Qlik Sense, or is there a better solution to my problem?
Update
Got it. Not sure if this is the best approach, but in these cases I usually fill the missing records during the script load.
// Main table
Sales:
Load
*,
ProductId & '-' & Month as Key_Product_Month
;
Load * Inline [
ProductId, Month, SalesAmount
P1 , 1 , 10
P1 , 2 , 20
P1 , 3 , 30
P2 , 1 , 40
P2 , 2 , 50
];
// Get distinct products and assign 0 as SalesAmount
Products_Temp:
Load
distinct ProductId,
0 as SalesAmount
Resident
Sales
;
join (Products_Temp) // Cross join in this case
Load
distinct Month
Resident
Sales
;
// After the cross join Products_Temp table contains
// all possible combinations between ProductId and Month
// and for each combination SalesAmount = 0
Products_Temp_1:
Load
*,
ProductId & '-' & Month as Key_Product_Month1 // Generate the unique id
Resident
Products_Temp
;
Drop Table Products_Temp; // we don't need this anymore
Concatenate (Sales)
// Concatenate to main table only the missing ProductId-Month
// combinations that are missing
Load
*
Resident
Products_Temp_1
Where
Not Exists(Key_Product_Month, Key_Product_Month1)
;
Drop Table Products_Temp_1; // not needed any more
Drop Fields Key_Product_Month1, Key_Product_Month; // not needed any more
The table link in Qlik Sense (and QlikView) behaves more like a full outer join. If you want to show the ids from only one table (and not all), you can create an additional field in that table and then perform your calculations on top of this field instead of on the linked one. For example:
Table1:
Load
id,
value1
From
MyQVD1.qvd (qvd)
;
Table2:
Load
id,
id as MyRightId,
value2
From
MyQVD2.qvd (qvd)
;
In the example above both tables will still be linked on id field but if you want to count only the id values in the right table (Table2) you just need to type
count( MyRightId )
I know this question has been answered and I quite like Stefan's approach, but I hope my answer will help other users. I recently ran into something similar and used slightly different logic, with the following script:
// Main table
Sales:
Load * Inline [
ProductId, Month, SalesAmount
P1 , 1 , 10
P1 , 2 , 20
P1 , 3 , 30
P2 , 1 , 40
P2 , 2 , 50
];
Cartesian:
//Create a combination of all ProductId and Month and then load the existing data into this table
NoConcatenate Load distinct ProductId Resident Sales;
Join
Load Distinct Month Resident Sales;
Join Load ProductId, Month, SalesAmount Resident Sales; //Existing data loaded
Drop Table Sales;
This results in the following output table (the cross join adds the one combination missing from the source data, P2 / month 3, with a null SalesAmount):
ProductId  Month  SalesAmount
P1         1      10
P1         2      20
P1         3      30
P2         1      40
P2         2      50
P2         3      -
The null value in the new (bottom-most) row can stay like that, but if you prefer replacing it, use the Map ... Using process.
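The effect of the cartesian load - padding every product/month combination and leaving the missing measure empty - can be mirrored outside Qlik. A rough Python equivalent, illustrative only, with the data from the script above:

```python
from itertools import product

sales = [('P1', 1, 10), ('P1', 2, 20), ('P1', 3, 30),
         ('P2', 1, 40), ('P2', 2, 50)]

existing = {(p, m): amount for p, m, amount in sales}
products = sorted({p for p, _, _ in sales})
months = sorted({m for _, m, _ in sales})

# Cross join of distinct products and months, then look up the amount;
# combinations absent from the fact data get None (a null in Qlik).
cartesian = [(p, m, existing.get((p, m))) for p, m in product(products, months)]
for row in cartesian:
    print(row)
# ('P2', 3, None) is the padded row
```

Six rows come out: the five original ones plus the padded P2 / month 3 combination.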
Input
TABLE 1: HEADER
ID  ITEMNAME      COST
1   MIXEE          1800
2   REFRIGIRATOR  12000
TABLE 2: DETAILS
DID  DITEMNAME     DLINE  DCOST
1    MIXEE         1       900
2    MIXEE         2       900
3    REFRIGIRATOR  1      9000
4    REFRIGIRATOR  2      1000
Expected Output
DETAILS
DID  DITEMNAME     DLINE  DCOST
1    MIXEE         1       900
2    MIXEE         2       900
3    REFRIGIRATOR  1     12000
4    REFRIGIRATOR  2         0
Explanation: the header table contains the summary (accurate) cost, while the detail table's line-level costs may not be accurate.
So if the header cost equals the sum of the detail costs for an item, I need to keep the records as they are; otherwise I need to set the first line's cost to the header table's cost and set every following line to zero.
The detail table has around 80 lakh (8 million) records.
Okay so I think I got it. Let me know if it needs any tweaks.
Your Tables
DECLARE #Header TABLE (ID INT, ItemName VARCHAR(20), Cost INT);
DECLARE #Details TABLE (ID INT, DiteName VARCHAR(20), Dline TINYINT,DCost INT);
INSERT INTO #Header
VALUES (1,'Mixee',1800),
(2,'Refridgerator',12000);
INSERT INTO #Details
VALUES (1,'Mixee',1,900),
(2,'Mixee',2,900),
(3,'Refridgerator',1,9000),
(4,'Refridgerator',2,9000);
Actual Query
SELECT D.ID,
D.DiteName,
D.Dline,
CASE
WHEN H.Cost = sum_DCost THEN DCost
ELSE CASE
WHEN D.ID = CA.min_ID THEN H.Cost
ELSE 0
END
END AS DCost
FROM #Details AS D
CROSS APPLY (
SELECT DiteName,SUM(DCost) sum_DCost,MIN(ID) min_ID
FROM #Details
WHERE DiteName = D.DiteName
GROUP BY DiteName
) CA
INNER JOIN #Header AS H
ON D.DiteName = H.ItemName
Here is how you can do it:
SELECT d.ID, d.DITEMNAME, d.DLINE,
CASE WHEN SUM(d.DCOST) OVER(PARTITION BY d.DITEMNAME) = h.COST
THEN d.DCOST
ELSE CASE WHEN d.DLINE = 1 THEN h.COST ELSE 0 END END AS DCOST
FROM Header h
JOIN Details d ON h.ITEMNAME = d.DITEMNAME
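The reconciliation rule both answers implement - keep the detail lines when their sum matches the header, otherwise put the whole header cost on the first line and zero the rest - can be sketched in Python for a quick check (illustrative only; data from the question):

```python
def reconcile(header, details):
    """header: {item: cost}; details: list of (did, item, line, dcost) ordered by line."""
    sums = {}
    for _, item, _, dcost in details:
        sums[item] = sums.get(item, 0) + dcost
    out = []
    seen_first = set()
    for did, item, line, dcost in details:
        if sums[item] == header[item]:
            out.append((did, item, line, dcost))         # sums match: keep as-is
        elif item not in seen_first:
            seen_first.add(item)
            out.append((did, item, line, header[item]))  # first line gets header cost
        else:
            out.append((did, item, line, 0))             # remaining lines zeroed
    return out

header = {'MIXEE': 1800, 'REFRIGIRATOR': 12000}
details = [(1, 'MIXEE', 1, 900), (2, 'MIXEE', 2, 900),
           (3, 'REFRIGIRATOR', 1, 9000), (4, 'REFRIGIRATOR', 2, 1000)]
print(reconcile(header, details))
# [(1, 'MIXEE', 1, 900), (2, 'MIXEE', 2, 900),
#  (3, 'REFRIGIRATOR', 1, 12000), (4, 'REFRIGIRATOR', 2, 0)]
```

MIXEE's detail sum (1800) matches the header, so its lines survive unchanged; REFRIGIRATOR's sum (10000) does not match 12000, so line 1 takes the header cost and line 2 becomes 0.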
I am really out of ideas on how to solve this issue and need some assistance - not only a solution but also an idea of how to approach it will be welcomed.
I have the following table:
TABLE Data
(
RecordID
,DateAdd
,Status
)
with sample data like this:
11  2012-10-01  OK
11  2012-10-04  NO
11  2012-11-05  NO
22  2012-10-01  OK
33  2012-11-01  NO
33  2012-11-15  OK
And this table with the following example data:
TABLE Periods
(
PeriodID
,PeriodName
,DateStart
,DateEnd
)
1 October  2012-10-01 2012-10-31
2 November 2012-11-01 2012-11-30
What I need to do, is to populate a new table:
TABLE DataPerPeriods
(
PeriodID,
RecordID,
Status
)
That will store all possible combinations of PeriodID and RecordID, plus the latest status for the period if available. If no status is available for a given period, then use the status from previous periods. If there is no previous status at all, then NULL.
For example with the following data I need something like this:
1 11 NO   //We have status "OK" and "NO", but "NO" is the latest for the period
1 22 OK
1 33 NULL //Because there are no records for this or any previous period
2 11 NO   //From the 2012-11-05 record in this period
2 22 OK   //There are no records for this period, but the record from the previous period is available
2 33 OK   //We have status "NO" and "OK", but "OK" is the latest for the period
EDIT: I have already populated the period ids and the record ids in the last table; I need more help on the status update.
There might be a better way to do this, but this is the most straightforward path I know to get what you're looking for, unconventional as it appears. For larger datasets you may have to change your approach:
SELECT p.PeriodID, td.RecordID, statusData.[Status] FROM Periods p
CROSS JOIN (SELECT DISTINCT RecordID FROM Data) td
OUTER APPLY (SELECT TOP 1 [Status], [DateAdd]
FROM Data
WHERE [DateAdd] <= p.DateEnd
AND [RecordID] = td.RecordID
ORDER BY [DateAdd] DESC) statusData
ORDER BY p.PeriodID, td.RecordID
The CROSS JOIN is what gives you every possible combination of the DISTINCT RecordIDs and the Periods.
The OUTER APPLY selects the latest Status before the end of each Period.
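The OUTER APPLY's rule - the latest status dated on or before each period's end, or NULL if none exists - can be mirrored in Python to check the expected rows (illustrative only; data from the question):

```python
data = [(11, '2012-10-01', 'OK'), (11, '2012-10-04', 'NO'),
        (11, '2012-11-05', 'NO'), (22, '2012-10-01', 'OK'),
        (33, '2012-11-01', 'NO'), (33, '2012-11-15', 'OK')]
periods = [(1, '2012-10-31'), (2, '2012-11-30')]  # (PeriodID, DateEnd)

result = []
record_ids = sorted({rid for rid, _, _ in data})
for period_id, date_end in periods:
    for rid in record_ids:
        # All of this record's rows dated on or before the period end;
        # ISO dates compare correctly as strings, so the last one is the latest.
        candidates = sorted((d, s) for r, d, s in data
                            if r == rid and d <= date_end)
        status = candidates[-1][1] if candidates else None
        result.append((period_id, rid, status))
print(result)
# [(1, 11, 'NO'), (1, 22, 'OK'), (1, 33, None),
#  (2, 11, 'NO'), (2, 22, 'OK'), (2, 33, 'OK')]
```

Note the (2, 33) row comes out 'OK', since the 2012-11-15 'OK' record is the latest one within November.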
Check out my answer on another question to know how to grab the first or last status : Aggregate SQL Function to grab only the first from each group
OK, here's an idea. Nobody likes cursors, including me, but sometimes for things like this they do come in handy.
The idea is that this cursor loops through each of the Data records, pulling out the ID as an identifier. Inside the loop it finds a single data record and gets the count of a join that meets your criteria.
If @Count = 0, the condition is not met, and you should not insert a record for that period.
If @Count > 0, the condition is met, so insert a record for the period.
If these conditions need to be re-evaluated frequently, you can attach the query to a job and run it every minute or hour - whatever suits.
Hope this helps.
DECLARE @ID int
DECLARE merge_cursor CURSOR FAST_FORWARD FOR
    select recordID
    from data
OPEN merge_cursor
FETCH NEXT FROM merge_cursor INTO @ID
WHILE @@FETCH_STATUS = 0
BEGIN
    --get join if record is found in the periods
    declare @Count int
    select @Count = count(*)
    from data a inner join periods b
        on a.[dateadd] between b.datestart and b.dateend
    where a.recordID = @ID
    if @Count > 0
        --insert into DataPerPeriods(PeriodID, RecordID, Status)
        select b.periodid, a.recordid, a.status
        from data a inner join periods b on a.[dateadd] between b.datestart and b.dateend --between beginning of month and end of month
        where a.recordid = @ID
    else
        --insert into DataPerPeriods(PeriodID, RecordID, Status)
        select b.periodid, a.recordid, a.status
        from data a inner join periods b on a.[dateadd] < b.dateend
        where a.recordID = @ID --fix this area
    FETCH NEXT FROM merge_cursor INTO @ID
END
CLOSE merge_cursor
DEALLOCATE merge_cursor
I'm trying to optimize a recursive query for speed. The full query runs for 15 minutes.
The part I'm trying to optimize takes ~3.5 minutes to execute, and the same logic is used twice in the query.
Description:
Table Ret contains over 300K rows with 30 columns (a daily snapshot).
Table Ret_Wh is the warehouse for Ret, with over 5 million rows (snapshot history, 90 days).
datadate - the day the info was recorded (like 10-01-2012)
statusA - a status like (Red, Blue) that an account can have.
statusB - a different status like (Large, Small) that an account can have.
Statuses can change from day to day.
old - an integer age on the account. Age can be increased/decreased if there is a payment on the account; otherwise it increases by 1 each day.
account - the account number, and the primary key of a row.
In Ret the account is unique.
In Ret_Wh the account is unique per datadate.
money - dollars in the account
Both Ret and Ret_Wh have the columns listed above.
Query goal: select all accounts from Ret_Wh that had an age in a certain range at ANY time during the month, and had a specific status while in that range. Then, from those results, select the matching accounts in Ret that have a specific age "today", no matter their status.
My goal: do this in a way that doesn't take 3.5 minutes.
Pseudo_Code:
@sdt = '2012-10-01' -- or the beginning of any month
@dt = getdate()
create table #temp (account char(20))
create table #result (account char(20), money money)
while @sdt < @dt
BEGIN
    insert into #temp
    select A.account
    from Ret_Wh as A
    where A.datadate = @sdt
      and A.statusA = 'Red'
      and A.statusB = 'Large'
      and A.old between 61 and 80
    set @sdt = (add 1 day to @sdt)
END
------
select distinct
    B.account,
    B.money
into #result
from #temp as A
join (select account, money from Ret where old = 81) as B
    on A.account = B.account
I want to create a distinct list of accounts in Ret_Wh (call it #shrinking_list). Then, in the while loop, I join Ret_Wh to #shrinking_list, and at the end of each iteration I delete one account from #shrinking_list. The loop then iterates with a smaller list joined to Ret_Wh, speeding up the query as @sdt increases by one day. However, I don't know how to pass the exact account number that was selected out to a variable in the while loop, so that I can delete it from #shrinking_list.
Any ideas on that, or how to speed this up in general?
Why are you using a loop to step through the dates from @sdt to @dt one at a time?
select distinct b.account, b.money
from Ret as b
join Ret_Wh as a
    on a.account = b.account
    and a.datadate >= @sdt
    and a.datadate < @dt
    and a.statusA = 'Red'
    and a.statusB = 'Large'
    and a.old between 61 and 80
where b.old = 81
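The set-based rewrite boils down to two steps: filter Ret_Wh once over the whole date range, then join the qualifying accounts to Ret on age 81. A Python sketch of that shape, with made-up sample rows (the column names are from the question; the data is hypothetical, for illustration only):

```python
# Hypothetical sample data for illustration.
ret_wh = [  # (account, datadate, statusA, statusB, old)
    ('A1', '2012-10-03', 'Red',  'Large', 65),
    ('A2', '2012-10-05', 'Blue', 'Large', 70),
    ('A3', '2012-10-09', 'Red',  'Large', 90),
]
ret = [  # (account, old, money)
    ('A1', 81, 1200.0), ('A2', 81, 800.0), ('A3', 81, 500.0),
]
sdt, dt = '2012-10-01', '2012-11-01'

# One pass over the warehouse instead of a day-by-day loop:
qualifying = {acct for acct, d, sa, sb, old in ret_wh
              if sdt <= d < dt and sa == 'Red' and sb == 'Large'
              and 61 <= old <= 80}

# Join back to Ret, keeping only accounts aged 81 "today".
result = sorted((acct, money) for acct, old, money in ret
                if old == 81 and acct in qualifying)
print(result)  # [('A1', 1200.0)]
```

Only A1 survives: A2 fails the statusA filter and A3's age is outside 61-80, which is exactly the pruning the single range-filtered join performs.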