I am having problems recalculating running totals. We have duplicate transactions that must be deleted, and the initial and closing balances must then be recalculated from the amount, taking IsDebit into account.
My attempt used nested (parent-child) cursors: the parent selects all distinct BookingNo values and the child does the calculation. It looks very messy and didn't work, so I haven't posted it to avoid confusing things.
I know that in SQL Server 2012 you can use SUM() OVER (PARTITION BY ...), but I cannot figure out how to make it handle the deleted rows.
Below is what I have so far:
--Create Table for testing
IF object_id(N'TestTransaction', 'U') IS NOT NULL DROP TABLE TestTransaction
GO
CREATE TABLE [TestTransaction]
(
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[BookingNo] [bigint] NOT NULL,
[IsDebit] [bit] NOT NULL,
[Amount] [decimal](18, 2) NOT NULL,
[InitialBalance] [decimal](18, 2) NOT NULL,
[ClosingBalance] [decimal](18, 2) NOT NULL
) ON [PRIMARY]
GO
INSERT [TestTransaction] ([BookingNo], [IsDebit], [Amount], [InitialBalance], [ClosingBalance])
SELECT 200, 0, 100, 2000,2100 UNION ALL
SELECT 200, 0, 100, 2100,2200 UNION ALL
SELECT 200, 1, 150, 2150,2000 UNION ALL
SELECT 200, 0, 300, 2000,2300 UNION ALL
SELECT 200, 0, 400, 2300,2700 UNION ALL
SELECT 200, 0, 250, 2700,2950 UNION ALL
SELECT 200, 0, 250, 2950,3200
--- end of setup
IF OBJECT_ID('tempdb..#tmpTransToDelete') IS NOT NULL DROP TABLE #tmpTransToDelete
GO
CREATE TABLE #tmpTransToDelete
( BookingNo bigint,
IsDebit bit,
Amount decimal(18,2),
InitialBalance decimal(18,2),
ClosingBalance decimal(18,2)
)
DECLARE @RunningInitialBalance decimal(18,2), @RunningClosingBalance decimal(18,2)
INSERT #tmpTransToDelete(BookingNo,IsDebit,Amount,InitialBalance,ClosingBalance)
SELECT BookingNo,IsDebit,Amount,InitialBalance,ClosingBalance
FROM TestTransaction
WHERE ID IN (1,6)
--Delete all duplicate transactions (just to prove the point)
DELETE TestTransaction WHERE ID IN (1,6)
-- now taking into account the deleted rows recalculate the lot and update the table.
Any help? Suggestions?
Edit: the results should be
Id BookingNo IsDebit Amount InitialBalance ClosingBalance
2 200 0 100.00 2000.00 2000.00
3 200 1 150.00 2000.00 2150.00
4 200 0 300.00 2150.00 2450.00
5 200 0 400.00 2450.00 2850.00
7 200 0 250.00 2600.00 2850.00
The RunningTotal approach in my previous response would work if there were transactional data accounting for the initial balance. Since that evidently isn't the case, I'd say you can't delete any rows without also applying the relative difference to all subsequent rows as part of the same transaction. Moreover, I'm convinced your initial sample data is wrong, which only exacerbates the confusion. It seems to me it should be as follows:
SELECT 200, 0, 100, 2000,2100 UNION ALL
SELECT 200, 0, 100, 2100,2200 UNION ALL
SELECT 200, 1, 150, 2200,2050 UNION ALL
SELECT 200, 0, 300, 2050,2350 UNION ALL
SELECT 200, 0, 400, 2350,2750 UNION ALL
SELECT 200, 0, 250, 2750,3000 UNION ALL
SELECT 200, 0, 250, 3000,3250
With that rectified, here's how I'd write the delete-and-update transaction:
BEGIN TRAN
CREATE TABLE #tbd (
Id bigint
,BookingNo bigint
,Amount decimal(18,2)
);
DELETE FROM TestTransaction
OUTPUT deleted.Id
, deleted.BookingNo
, deleted.Amount * IIF(deleted.IsDebit = 0, 1, -1) AS Amount
INTO #tbd
WHERE ID IN (1,6);
WITH adj
AS (
SELECT tt.BookingNo, tt.Id, SUM(tbd.amount) AS Amount
FROM TestTransaction tt
JOIN #tbd tbd ON tt.BookingNo = tbd.BookingNo AND tbd.id <= tt.id
GROUP BY tt.BookingNo, tt.Id
)
UPDATE tt
SET InitialBalance -= adj.Amount
,ClosingBalance -= adj.Amount
FROM TestTransaction tt
JOIN adj ON tt.BookingNo = adj.BookingNo AND tt.Id = adj.Id;
COMMIT TRAN
Which yields a final result of:
Id BookingNo IsDebit Amount InitialBalance ClosingBalance
2 200 0 100.00 2000.00 2100.00
3 200 1 150.00 2100.00 1950.00
4 200 0 300.00 1950.00 2250.00
5 200 0 400.00 2250.00 2650.00
7 200 0 250.00 2650.00 2900.00
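If it helps to sanity-check that adjustment logic outside SQL Server, here is a rough equivalent using Python's sqlite3 module (the setup is the corrected sample data from above; SQLite has no OUTPUT clause, so the deleted rows are captured with a plain SELECT first, and a CASE expression stands in for IIF):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TestTransaction (
    Id INTEGER PRIMARY KEY, BookingNo INTEGER, IsDebit INTEGER,
    Amount REAL, InitialBalance REAL, ClosingBalance REAL);
INSERT INTO TestTransaction VALUES
    (1,200,0,100,2000,2100),(2,200,0,100,2100,2200),
    (3,200,1,150,2200,2050),(4,200,0,300,2050,2350),
    (5,200,0,400,2350,2750),(6,200,0,250,2750,3000),
    (7,200,0,250,3000,3250);
""")

to_delete = (1, 6)
# Capture the signed amounts of the doomed rows first
# (credits count positive, debits negative).
deleted = con.execute(
    "SELECT Id, BookingNo, Amount * CASE WHEN IsDebit = 0 THEN 1 ELSE -1 END "
    "FROM TestTransaction WHERE Id IN (?, ?)", to_delete).fetchall()
con.execute("DELETE FROM TestTransaction WHERE Id IN (?, ?)", to_delete)

# Subtract each deleted amount from every later row of the same booking,
# which is what the adj CTE computes in one pass.
for del_id, booking, amt in deleted:
    con.execute(
        "UPDATE TestTransaction SET InitialBalance = InitialBalance - ?, "
        "ClosingBalance = ClosingBalance - ? WHERE BookingNo = ? AND Id > ?",
        (amt, amt, booking, del_id))
con.commit()

rows = con.execute("SELECT Id, InitialBalance, ClosingBalance "
                   "FROM TestTransaction ORDER BY Id").fetchall()
```

Running it reproduces the balances in the result table above.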
Here's an example of a running total using your data:
SELECT BookingNo
, Amount
, IsDebit
, SUM(Amount * IIF(IsDebit = 0, 1, -1)) OVER (PARTITION BY BookingNo ORDER BY Id ROWS UNBOUNDED PRECEDING) AS RunningTotal
FROM TestTransaction
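The same windowed running total can be tried out in any engine with window-function support. For instance, a quick sketch against an in-memory SQLite database (needs SQLite 3.25+; CASE stands in for IIF, and the data is an illustrative subset):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # SQLite 3.25+ for window functions
con.executescript("""
CREATE TABLE TestTransaction (
    Id INTEGER PRIMARY KEY, BookingNo INTEGER, IsDebit INTEGER, Amount REAL);
INSERT INTO TestTransaction (BookingNo, IsDebit, Amount) VALUES
    (200,0,100),(200,0,100),(200,1,150),(200,0,300);
""")

# Same shape as the T-SQL above: signed amounts summed per booking in Id order.
rows = con.execute("""
SELECT Id,
       SUM(Amount * CASE WHEN IsDebit = 0 THEN 1 ELSE -1 END)
           OVER (PARTITION BY BookingNo ORDER BY Id
                 ROWS UNBOUNDED PRECEDING) AS RunningTotal
FROM TestTransaction
""").fetchall()
```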
I have a table of linestrings that I want to divide into chunks, each holding a list of ids no longer than a given number, keeping only lines that are within a certain distance of each other.
For example, I got a table with 14 rows
create table lines ( id integer primary key, geom geometry(linestring) );
insert into lines (id, geom) values ( 1, 'LINESTRING(0 0, 0 1)');
insert into lines (id, geom) values ( 2, 'LINESTRING(0 1, 1 1)');
insert into lines (id, geom) values ( 3, 'LINESTRING(1 1, 1 2)');
insert into lines (id, geom) values ( 4, 'LINESTRING(1 2, 2 2)');
insert into lines (id, geom) values ( 11, 'LINESTRING(2 2, 2 3)');
insert into lines (id, geom) values ( 12, 'LINESTRING(2 3, 3 3)');
insert into lines (id, geom) values ( 13, 'LINESTRING(3 3, 3 4)');
insert into lines (id, geom) values ( 14, 'LINESTRING(3 4, 4 4)');
create index lines_gix on lines using gist(geom);
I want to split it into chunks of 3 ids each, where the lines are within 2 meters of each other or of the first one.
The result I am trying to get from this example is:
| Chunk No.| Id chunk list |
|----------|----------------|
| 1 | 1, 2, 3 |
| 2 | 4, 5, 6 |
| 3 | 7, 8, 9 |
| 4 | 10, 11, 12 |
| 5 | 13, 14 |
I tried to use ST_ClusterWithin, but when lines are close to each other it returns all of them in one cluster rather than split into chunks.
I also tried some WITH RECURSIVE magic like the one in the answer provided by Paul Ramsey here, but I don't know how to modify the query to return a limited, grouped id list.
I'm not sure this is the best possible answer, so if anyone has a better method or knows how to improve it, feel free to update it. With a little modification of Paul's answer, I've managed to create the following queries, which do what I asked for.
-- Create function for easier interaction
CREATE OR REPLACE FUNCTION find_connected(integer, double precision, integer, integer[])
returns integer[] AS
$$
WITH RECURSIVE lines_r AS -- recursion feeds the query's output back into itself, continuously appending to the result
(SELECT ARRAY[id] AS idlist,
geom, id
FROM lines
WHERE id = $1
UNION ALL
SELECT array_append(lines_r.idlist, lines.id) AS idlist, -- append id list to array
lines.geom AS geom, -- keep geometry
lines.id AS id -- keep source table id
FROM (SELECT * FROM lines WHERE NOT $4 #> array[id]) lines, lines_r -- from source table and recursive table
WHERE ST_DWITHIN(lines.geom, lines_r.geom, $2) -- where lines are within 2 meters
AND NOT lines_r.idlist #> ARRAY[lines.id] -- recursive id list array not contain lines array
AND array_length(idlist, 1) <= $3
)
SELECT idlist
FROM lines_r WHERE array_length(idlist, 1) <= $3 ORDER BY array_length(idlist, 1) DESC LIMIT 1;
$$
LANGUAGE 'sql';
-- Create id chunks
WITH RECURSIVE groups_r AS (
(SELECT find_connected(id, 2, 3, ARRAY[id]) AS idlist, find_connected(id, 2, 3, ARRAY[id]) AS grouplist, id
FROM lines WHERE id = 1)
UNION ALL
(SELECT array_cat(groups_r.idlist, find_connected(lines.id, 2, 3, groups_r.idlist)) AS idlist,
find_connected(lines.id, 2, 3, groups_r.idlist) AS grouplist,
lines.id
FROM lines,
groups_r
WHERE NOT groups_r.idlist #> ARRAY[lines.id]
LIMIT 1))
SELECT
-- (SELECT array_agg(DISTINCT x) FROM unnest(idlist) t (x)) idlist, -- left for better understanding what is happening
row_number() OVER () chunk_id,
(SELECT array_agg(DISTINCT x) FROM unnest(grouplist) t (x)) grouplist,
id input_line_id
FROM groups_r;
The only problem is that performance gets quite poor as the number of ids per chunk increases. For a table with 300 rows and 20 ids per chunk, execution time is around 15 minutes, even with indexes on the geometry and id columns.
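For what it's worth, the chunking idea itself (greedily growing a chunk from a seed line, bounded by chunk size and distance) can be prototyped outside PostGIS. Below is a small Python sketch using the sample segments' endpoints; the endpoint-to-endpoint distance is a crude stand-in for ST_DWithin, and the greedy pick-lowest-id policy is an assumption, so this only illustrates the grouping, it does not replace the recursive SQL:

```python
from itertools import product
from math import dist

# Line segments keyed by id (endpoints of the sample LINESTRINGs).
lines = {
    1: [(0, 0), (0, 1)],  2: [(0, 1), (1, 1)],
    3: [(1, 1), (1, 2)],  4: [(1, 2), (2, 2)],
    11: [(2, 2), (2, 3)], 12: [(2, 3), (3, 3)],
    13: [(3, 3), (3, 4)], 14: [(3, 4), (4, 4)],
}

def seg_dist(a, b):
    # Crude stand-in for ST_DWithin: nearest pair of endpoints.
    return min(dist(p, q) for p, q in product(a, b))

def chunks(lines, max_size, threshold):
    unused = set(lines)
    out = []
    while unused:
        chunk = [min(unused)]          # seed with the lowest remaining id
        unused.discard(chunk[0])
        while len(chunk) < max_size:
            near = [i for i in sorted(unused)
                    if any(seg_dist(lines[i], lines[m]) <= threshold
                           for m in chunk)]
            if not near:
                break
            chunk.append(near[0])
            unused.discard(near[0])
        out.append(chunk)
    return out
```

With the sample data, `chunks(lines, 3, 2)` groups the ids into chunks of at most 3 connected lines.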
Having a table and data like this
CREATE TABLE solicitations
(
id SERIAL PRIMARY KEY,
name text
);
CREATE TABLE donations
(
id SERIAL PRIMARY KEY,
solicitation_id integer REFERENCES solicitations, -- can be null
created_at timestamp without time zone NOT NULL DEFAULT (now() at time zone 'utc'),
amount bigint NOT NULL DEFAULT 0
);
INSERT INTO solicitations (name) VALUES
('solicitation1'), ('solicitation2');
INSERT INTO donations (created_at, solicitation_id, amount) VALUES
('2018-06-26', null, 10), ('2018-06-26', 1, 20), ('2018-06-26', 2, 30),
('2018-06-27', null, 10), ('2018-06-27', 1, 20),
('2018-06-28', null, 10), ('2018-06-28', 1, 20), ('2018-06-28', 2, 30);
How can I make the solicitation ids dynamic in the following select statement, using only Postgres?
SELECT
"created_at"
-- make dynamic this begins
, COALESCE("no_solicitation", 0) AS "no_solicitation"
, COALESCE("1", 0) AS "1"
, COALESCE("2", 0) AS "2"
-- make dynamic this ends
FROM crosstab(
$source_sql$
SELECT
created_at::date as row_id
, COALESCE(solicitation_id::text, 'no_solicitation') as category
, SUM(amount) as value
FROM donations
GROUP BY row_id, category
ORDER BY row_id, category
$source_sql$
, $category_sql$
-- parametrize with ids from here begins
SELECT unnest('{no_solicitation}'::text[] || ARRAY(SELECT DISTINCT id::text FROM solicitations ORDER BY id))
-- parametrize with ids from here ends
$category_sql$
) AS ct (
"created_at" date
-- make dynamic this begins
, "no_solicitation" bigint
, "1" bigint
, "2" bigint
-- make dynamic this ends
)
The select should return data like this
created_at no_solicitation 1 2
____________________________________
2018-06-26 10 20 30
2018-06-27 10 20 0
2018-06-28 10 20 30
The solicitation ids that should parametrize select are the same as in
SELECT unnest('{no_solicitation}'::text[] || ARRAY(SELECT DISTINCT id::text FROM solicitations ORDER BY id))
One can fiddle the code here
I decided to use json, which is much simpler than crosstab:
WITH
all_solicitation_ids AS (
SELECT
unnest('{no_solicitation}'::text[] ||
ARRAY(SELECT DISTINCT id::text FROM solicitations ORDER BY id))
AS col
)
, all_days AS (
SELECT
-- TODO: compute days ad hoc, from min created_at day of donations to max created_at day of donations
generate_series('2018-06-26', '2018-06-28', '1 day'::interval)::date
AS col
)
, all_days_and_all_solicitation_ids AS (
SELECT
all_days.col AS created_at
, all_solicitation_ids.col AS solicitation_id
FROM all_days, all_solicitation_ids
ORDER BY all_days.col, all_solicitation_ids.col
)
, donations_ AS (
SELECT
created_at::date as created_at
, COALESCE(solicitation_id::text, 'no_solicitation') as solicitation_id
, SUM(amount) as amount
FROM donations
GROUP BY created_at, solicitation_id
ORDER BY created_at, solicitation_id
)
, donations__ AS (
SELECT
all_days_and_all_solicitation_ids.created_at
, all_days_and_all_solicitation_ids.solicitation_id
, COALESCE(donations_.amount, 0) AS amount
FROM all_days_and_all_solicitation_ids
LEFT JOIN donations_
ON all_days_and_all_solicitation_ids.created_at = donations_.created_at
AND all_days_and_all_solicitation_ids.solicitation_id = donations_.solicitation_id
)
SELECT
jsonb_object_agg(solicitation_id, amount) ||
jsonb_object_agg('date', created_at)
AS data
FROM donations__
GROUP BY created_at
which results in:
data
______________________________________________________________
{"1": 20, "2": 30, "date": "2018-06-28", "no_solicitation": 10}
{"1": 20, "2": 30, "date": "2018-06-26", "no_solicitation": 10}
{"1": 20, "2": 0, "date": "2018-06-27", "no_solicitation": 10}
Though it's not quite what I asked for: it returns only a data column instead of date, no_solicitation, 1, 2, .... To get that I'd need json_to_record, but I don't know how to produce its argument dynamically.
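If the final pivot shape is easier to build client-side, the same fill-missing-categories-with-zero logic is only a few lines of Python (the rows and category list mirror the sample data; names are illustrative):

```python
from collections import defaultdict

# Donation rows: (created_at, solicitation_id, amount); None = no solicitation.
rows = [
    ("2018-06-26", None, 10), ("2018-06-26", 1, 20), ("2018-06-26", 2, 30),
    ("2018-06-27", None, 10), ("2018-06-27", 1, 20),
    ("2018-06-28", None, 10), ("2018-06-28", 1, 20), ("2018-06-28", 2, 30),
]

# Column set derived from the solicitations table, as in the category_sql.
categories = ["no_solicitation", "1", "2"]

# Every date starts with all categories at 0, so missing cells become zeros.
pivot = defaultdict(lambda: dict.fromkeys(categories, 0))
for day, sol, amount in rows:
    key = "no_solicitation" if sol is None else str(sol)
    pivot[day][key] += amount

table = [{"date": day, **cols} for day, cols in sorted(pivot.items())]
```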
Given the following table structure
CREATE TABLE #TempTable
(
idProduct INT,
Layers INT,
LayersOnPallet INT,
id INT IDENTITY(1, 1) NOT NULL,
Summarized BIT NOT NULL DEFAULT(0)
)
and the following insert statement which generates test data
INSERT INTO #TempTable(idProduct, Layers, LayersOnPallet)
SELECT 1, 2, 4
UNION ALL
SELECT 1, 2, 4
UNION ALL
SELECT 1, 1, 4
UNION ALL
SELECT 2, 2, 4
I would like to summarize (by Layers only) those rows that share the same idProduct and whose sum of Layers equals LayersOnPallet.
A picture is worth a thousand words:
From the picture above, you can see that only the first two rows were summarized, because both have the same idProduct and their sum(Layers) equals LayersOnPallet.
How can I achieve this? Is there any way to do it using only SELECTs (not a WHILE loop)?
Thank you!
Perhaps this will do the trick. Note my comments:
-- your sample data
CREATE TABLE #TempTable
(
idProduct INT,
Layers INT,
LayersOnPallet INT,
id INT IDENTITY(1, 1) NOT NULL,
Summarized BIT NOT NULL DEFAULT(0)
)
INSERT INTO #TempTable(idProduct, Layers, LayersOnPallet)
SELECT 1, 2, 4 UNION ALL
SELECT 1, 2, 4 UNION ALL
SELECT 1, 1, 4 UNION ALL
SELECT 2, 2, 4;
-- an intermediate temp table used for processing
IF OBJECT_ID('tempdb..#processing') IS NOT NULL DROP TABLE #processing;
-- let's populate the #processing table with duplicates
SELECT
idProduct,
Layers,
LayersOnPallet,
rCount = COUNT(*)
INTO #processing
FROM #tempTable
GROUP BY
idProduct,
Layers,
LayersOnPallet
HAVING COUNT(*) > 1;
-- Remove the duplicates
DELETE t
FROM #TempTable t
JOIN #processing p
ON p.idProduct = t.idProduct
AND p.Layers = t.Layers
AND p.LayersOnPallet = t.LayersOnPallet
-- Add the new, updated record
INSERT #TempTable (idProduct, Layers, LayersOnPallet, Summarized)
SELECT
idProduct,
Layers * rCount,
LayersOnPallet, 1
FROM #processing;
DROP TABLE #processing; -- cleanup
-- Final output
SELECT idProduct, Layers, LayersOnPallet, Summarized
FROM #TempTable;
Results:
idProduct Layers LayersOnPallet Summarized
----------- ----------- -------------- ----------
1 4 4 1
1 1 4 0
2 2 4 0
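The collapse-duplicates step is easy to cross-check in plain Python. This sketch mirrors the temp-table logic above: count duplicate (idProduct, Layers, LayersOnPallet) groups, then replace each duplicated group with one summarized row:

```python
from collections import Counter

# (idProduct, Layers, LayersOnPallet) rows from the sample data.
rows = [(1, 2, 4), (1, 2, 4), (1, 1, 4), (2, 2, 4)]

counts = Counter(rows)  # rCount per distinct row, insertion-ordered
result = []
for (prod, layers, lop), n in counts.items():
    if n > 1:
        # Collapse the duplicates into a single summarized row.
        result.append((prod, layers * n, lop, 1))
    else:
        result.append((prod, layers, lop, 0))
```

The output matches the final SELECT in the answer: (1, 4, 4, 1), (1, 1, 4, 0), (2, 2, 4, 0).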
I have a simple query which returns the following rows :
Current rows:
Empl ECode DCode LCode Earn Dedn Liab
==== ==== ===== ===== ==== ==== ====
123 PerHr Null Null 13 0 0
123 Null Union Null 0 10 0
123 Null Per Null 0 20 0
123 Null Null MyHealth 0 0 5
123 Null Null 401 0 0 10
123 Null Null Train 0 0 15
123 Null Null CAFTA 0 0 20
However, I needed to see the above rows as follows :
Empl ECode DCode LCode Earn Dedn Liab
==== ==== ===== ===== ==== ==== ====
123 PerHr Union MyHealth 13 10 5
123 Null Per 401 0 20 10
123 Null Null Train 0 0 15
123 Null Null CAFTA 0 0 20
It's more like merging the succeeding rows into the preceding rows wherever Nulls are encountered for ECode, DCode & LCode. Actually, what I wanted was to roll everything up into the preceding rows.
In Oracle we had this LAST_VALUE function which we could use, but in this case, I simply cannot figure out what to do with this.
In the example above, ECode's sum value column is Earn, DCode is Dedn, and LCode is Liab; notice that whenever either of ECode, DCode, or LCode is not null, there is a corresponding value in Earn, Dedn, or the Liab columns.
By the way, we are using SQL Server 2008 R2 at work.
Hoping for your advice, thanks.
This is basically the same technique as Tango_Guy's, but without the temporary tables and with the sort made explicit. Because the number of rows per Empl is <= the number of rows already in place, I didn't need a dummy table for the leftmost table; I just filtered the base data to rows where at least one of the 3 codes matched. Also, from your discussion, Earn and ECode move together; in fact a non-zero Earn in a row without an ECode is effectively lost (a good case for a constraint: non-zero Earn is not allowed when ECode is NULL):
http://sqlfiddle.com/#!3/7bd04/3
CREATE TABLE data(ID INT IDENTITY NOT NULL,
Empl VARCHAR(3),
ECode VARCHAR(8),
DCode VARCHAR(8),
LCode VARCHAR(8),
Earn INT NOT NULL,
Dedn INT NOT NULL,
Liab INT NOT NULL ) ;
INSERT INTO data (Empl, ECode, DCode, LCode, Earn, Dedn, Liab)
VALUES ('123', 'PerHr', NULL, NULL, 13, 0, 0),
('123', NULL, 'Union', NULL, 0, 10, 0),
('123', NULL, 'Per', NULL, 0, 20, 0),
('123', NULL, NULL, 'MyHealth', 0, 0, 5),
('123', NULL, NULL, '401', 0, 0, 10),
('123', NULL, NULL, 'Train', 0, 0, 15),
('123', NULL, NULL, 'CAFTA', 0, 0, 20);
WITH basedata AS (
SELECT *, ROW_NUMBER () OVER(ORDER BY ID) AS OrigSort, ROW_NUMBER () OVER(PARTITION BY Empl ORDER BY ID) AS EmplSort
FROM data
),
E AS (
SELECT Empl, ECode, Earn, ROW_NUMBER () OVER(PARTITION BY Empl ORDER BY OrigSort) AS EmplSort
FROM basedata
WHERE ECode IS NOT NULL
),
D AS (
SELECT Empl, DCode, Dedn, ROW_NUMBER () OVER(PARTITION BY Empl ORDER BY OrigSort) AS EmplSort
FROM basedata
WHERE DCode IS NOT NULL
),
L AS (
SELECT Empl, LCode, Liab, ROW_NUMBER () OVER(PARTITION BY Empl ORDER BY OrigSort) AS EmplSort
FROM basedata
WHERE LCode IS NOT NULL
)
SELECT basedata.Empl, E.ECode, D.Dcode, L.LCode, E.Earn, D.Dedn, L.Liab
FROM basedata
LEFT JOIN E
ON E.Empl = basedata.Empl AND E.EmplSort = basedata.EmplSort
LEFT JOIN D
ON D.Empl = basedata.Empl AND D.EmplSort = basedata.EmplSort
LEFT JOIN L
ON L.Empl = basedata.Empl AND L.EmplSort = basedata.EmplSort
WHERE E.ECode IS NOT NULL OR D.DCode IS NOT NULL OR L.LCode IS NOT NULL
ORDER BY basedata.Empl, basedata.EmplSort
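The per-column renumber-and-realign idea is straightforward to verify outside SQL as well. Here's a Python sketch of the same technique: pull each code column's non-NULL values up in original order, then align them row by row (zip_longest plays the role of the LEFT JOINs on EmplSort; the sample rows are the single-Empl data from the question):

```python
from itertools import zip_longest

# (ECode, DCode, LCode, Earn, Dedn, Liab) rows for one Empl, in ID order.
rows = [
    ("PerHr", None, None, 13, 0, 0),
    (None, "Union", None, 0, 10, 0),
    (None, "Per", None, 0, 20, 0),
    (None, None, "MyHealth", 0, 0, 5),
    (None, None, "401", 0, 0, 10),
    (None, None, "Train", 0, 0, 15),
    (None, None, "CAFTA", 0, 0, 20),
]

# Each code moves together with its amount column, so keep (code, amount)
# pairs where the code is set -- the per-column ROW_NUMBER of the CTEs.
ecode = [(e, earn) for e, d, l, earn, dedn, liab in rows if e is not None]
dcode = [(d, dedn) for e, d, l, earn, dedn, liab in rows if d is not None]
lcode = [(l, liab) for e, d, l, earn, dedn, liab in rows if l is not None]

# Align the three compacted columns; shorter columns pad out with NULL/0.
merged = [
    (e[0] if e else None, d[0] if d else None, c[0] if c else None,
     e[1] if e else 0, d[1] if d else 0, c[1] if c else 0)
    for e, d, c in zip_longest(ecode, dcode, lcode)
]
```

The merged rows match the desired output table in the question.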
Not sure if it is what you need, but have you tried COALESCE?
SELECT Name, Class, Color, ProductNumber,
COALESCE(Class, Color, ProductNumber) AS FirstNotNull
FROM Production.Product ;
I have a solution, but it is very kludgy. If anyone has something better, that would be great.
However, an algorithm:
1) Get rownumbers for each distinct list of values in the columns
2) Join all columns based on rownumber
Example:
select Distinct ECode into #Ecode from source_table order by rowid;
select Distinct DCode into #Dcode from source_table order by rowid;
select Distinct LCode into #Lcode from source_table order by rowid;
select Distinct Earn into #Earn from source_table order by rowid;
select Distinct Dedn into #Dedn from source_table order by rowid;
select Distinct Liab into #Liab from source_table order by rowid;
select b.ECode, c.DCode, d.LCode, e.Earn, f.Dedn, g.Liab
from source_table a -- Note: a source for row numbers that will be >= the below
left outer join #Ecode b on a.rowid = b.rowid
left outer join #DCode c on a.rowid = c.rowid
left outer join #LCode d on a.rowid = d.rowid
left outer join #Earn e on a.rowid = e.rowid
left outer join #Dedn f on a.rowid = f.rowid
left outer join #Liab g on a.rowid = g.rowid
where
b.ecode is not null or
c.dcode is not null or
d.lcode is not null or
e.earn is not null or
f.dedn is not null or
g.liab is not null;
I didn't include Empl, since I don't know what role you want it to play. If this is all true for a given Empl, then you could just add it, join on it, and carry it through.
I don't like this solution at all, so hopefully someone else will come up with something more elegant.
I have a table containing transaction information on items. The possible transactions are purchase (type_id: 1) and sale(type_id: 2). A really stripped down version of the table "Transactions" is like the following:
Transaction_ID  Item_ID  Transaction_Type_ID  Quantity  Price  Date
1               1        1                    50        10     6/1
2               2        1                    40        20     13/1
3               1        2                    10        13     14/1
4               2        2                    20        25     3/2
5               1        2                    20        12     20/2
I have the following query:
SELECT B.Item_ID
, B.Quantity - (SELECT SUM(Quantity) FROM Transactions A WHERE A.Item_ID = B.Item_ID AND A.Transaction_Type_ID = 2) AS 'Quantity Left'
, B.Price * (B.Quantity - (SELECT SUM(Quantity) FROM Transactions A WHERE A.Item_ID = B.Item_ID AND A.Transaction_Type_ID = 2)) AS 'Purchase Amount Left'
FROM Transactions B
WHERE B.Transaction_Type_ID = 1
AND B.Quantity - (SELECT SUM(Quantity) FROM Transactions A WHERE A.Item_ID = B.Item_ID AND A.Transaction_Type_ID = 2) > 0
As you may have noticed, I am trying to get all the purchased items that are still in stock. You may also notice that there is an annoying sub-query repeated twice in the SELECT clause and once in the WHERE clause.
How can I reduce it? Is it possible to use WITH at the beginning of the statement in this case?
JOIN onto an aggregated derived table?
SELECT
    B.Item_ID
    , B.Quantity - ISNULL(A.SumQuantity, 0) AS 'Quantity Left'
    , B.Price * (B.Quantity - ISNULL(A.SumQuantity, 0)) AS 'Purchase Amount Left'
FROM
    Transactions B
LEFT JOIN
(
    SELECT Item_ID, SUM(Quantity) AS SumQuantity
    FROM Transactions
    WHERE Transaction_Type_ID = 2
    GROUP BY Item_ID
) A ON A.Item_ID = B.Item_ID
WHERE
    B.Transaction_Type_ID = 1
    AND B.Quantity - ISNULL(A.SumQuantity, 0) > 0
If you always have Transaction_Type_ID = 2 rows for every item, you can drop the LEFT JOIN and ISNULL; your current code assumes you always do.
It also looks like you're mixing entities in the same table based on Transaction_Type_ID. A simple SUM(Quantity) .. GROUP BY Item_ID would be more correct if
- sales were negative-quantity transactions
- restocks were positive-quantity transactions
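Here is a runnable check of the aggregated-derived-table approach using Python's sqlite3 (IFNULL stands in for T-SQL's ISNULL; the data is the sample from the question, with the dates omitted for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Transactions (
    Transaction_ID INTEGER PRIMARY KEY, Item_ID INTEGER,
    Transaction_Type_ID INTEGER, Quantity INTEGER, Price REAL);
INSERT INTO Transactions VALUES
    (1,1,1,50,10),(2,2,1,40,20),(3,1,2,10,13),(4,2,2,20,25),(5,1,2,20,12);
""")

# Aggregate the sales once in a derived table, then join it to the purchases,
# instead of repeating the correlated sub-query three times.
rows = con.execute("""
SELECT B.Item_ID,
       B.Quantity - IFNULL(A.SumQuantity, 0)             AS QuantityLeft,
       B.Price * (B.Quantity - IFNULL(A.SumQuantity, 0)) AS AmountLeft
FROM Transactions B
LEFT JOIN (
    SELECT Item_ID, SUM(Quantity) AS SumQuantity
    FROM Transactions
    WHERE Transaction_Type_ID = 2
    GROUP BY Item_ID
) A ON A.Item_ID = B.Item_ID
WHERE B.Transaction_Type_ID = 1
  AND B.Quantity - IFNULL(A.SumQuantity, 0) > 0
ORDER BY B.Item_ID
""").fetchall()
```

Item 1 bought 50 and sold 30, item 2 bought 40 and sold 20, so both have 20 left.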
You should be able to do something like the following query. As I'm not totally sure what you're trying to do, you may have to modify it to suit, but it should at least be a good starting point.
select Item_ID
, QuantityPurchased - QuantitySold
, Value * (QuantityPurchased - QuantitySold)
from
(
select B.Item_ID
, sum(case when B.Transaction_Type_ID = 1 then Quantity else 0 end) QuantityPurchased
, sum(case when B.Transaction_Type_ID = 1 then Quantity * Price else 0 end) Value
, sum(case when B.Transaction_Type_ID = 2 then Quantity else 0 end) QuantitySold
from Transactions B
group by B.Item_ID -- group by item only, so sales offset purchases on the same row
) x;
This should work although untested:
SELECT B.Item_ID
, B.Quantity - C.Quantity AS 'Quantity Left'
, B.Price * (B.Quantity - C.Quantity) AS 'Purchase Amount Left'
FROM Transactions B
CROSS APPLY
(SELECT SUM(Quantity) AS Quantity FROM Transactions A
WHERE A.Item_ID = B.Item_ID AND A.Transaction_Type_ID = 2) C
WHERE B.Transaction_Type_ID = 1 AND B.Quantity > C.Quantity