how to show an error message using a trigger in SAP HANA - triggers

I’m trying to create a trigger so that whenever I insert a new record into the Sales table, the Product table updates its “Inventory” based on the Sales table “quantity”:
Product table            Sales table
P_ID | QTY               P_ID | QTY
1    | 10                1    | 5
2    | 15
Code:
create trigger "KABIL_PRACTICE"."SALES_TRIGGER"
after insert on "KABIL_PRACTICE"."SALES" REFERENCING NEW ROW AS newrow for each row
begin
update "KABIL_PRACTICE"."Inventory" set "Inventory" = "Inventory" - :newrow.QTY
where "P_ID" = :newrow.P_ID ;
end;
I get the expected result when I insert a record into the Sales table with P_ID 1 and quantity 5:
updated Product table    Sales table
P_ID | QTY               P_ID | QTY
1    | 5                 1    | 5
2    | 15                1    | 5
But if I then insert another record into the Sales table with P_ID 1 and quantity 6, the sales quantity is more than the available inventory quantity, so the inventory goes to a negative value:
updated Product table    Sales table
P_ID | QTY               P_ID | QTY
1    | -1                1    | 5
2    | 15                1    | 5
                         1    | 6
I just want to report that the sales order quantity is higher than the available inventory quantity, and the inventory should not go to negative values. Is there any way to do this?
I tried this code:
create trigger "KABIL_PRACTICE"."SALES_UPDATE_TRIGGER"
before insert on "KABIL_PRACTICE"."SALES" REFERENCING NEW ROW AS newrow for each row
begin
if("Inventory" > :newrow.QTY )
Then
update "KABIL_PRACTICE"."Inventory" set "Inventory" = "Inventory" - :newrow.QTY
where "P_ID" = :newrow.P_ID ;
elseif ("Inventory" < :newrow.QTY )
Then NULL;
delete "KABIL_PRACTICE"."SALES" where "QTY" = 0;
end;

The problem you have here is a classic. Usually the two business processes "SALES" and "ORDER FULFILLMENT" are separated, so the act of selling something would not have an immediate effect on the stock level. Instead, the order fulfilment could actually use other resources (e.g. back ordering from another vendor or producing more). That way the sale would be de-coupled from the current stock levels.
Anyhow, if you want to keep it a simple dependency of "only-sell-whats-available-right-now" then you need to consider the following:
multiple sales could be going on at the same time
what to do with sales that can only be partly fulfilled, e.g. should all available items be sold, or should the whole order be treated as impossible to fulfil?
To address the first point, again, different approaches can be taken. The easiest is probably to set a lock on the inventory records you are interested in while you make the decision(s) whether to process the order (and the inventory transaction) or not.
SELECT QTY "KABIL_PRACTICE"."Inventory" WHERE P_ID = :P_ID FOR UPDATE;
This statement will acquire a lock on the relevant row(s) and return, or wait until the lock becomes available if another session already holds it.
Once the quantity of an item is retrieved, you can call the further business logic (fulfil order completely, partly or decline).
Each of these application paths could be a stored procedure grouping the necessary steps.
By COMMITing the transaction the lock will get released.
As a general remark: this should not be implemented as triggers. Triggers should generally not be involved in application paths that could lead to locking situations in order to avoid system hang situations. Also, triggers don't really allow for a good understanding of the order in which statements get executed, which easily can lead to unwanted side effects.
Rather than triggers, stored procedures can provide an interface for applications to work with your data in a meaningful and safe way.
E.g.:
procedure ProcessOrder
    for each item in order:
        check stock and lock entry
        (depending on business logic):
            subtract all available items from stock to match order as much as possible
            OR: only fulfil order items that can be fully provided and mark other items as not available; reduce sales order SUM
            OR: decline the whole order
    COMMIT;
Your application can then simply call the procedure and retrieve the outcome data (e.g. current order data, status, ...) without having to worry about how the tables need to be updated.
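For illustration, here is a rough SQLScript sketch of the "decline the whole order" path for a single order item, using the tables from the question; the procedure name and the error code are illustrative assumptions, not something from the original post:

create procedure "KABIL_PRACTICE"."PROCESS_ORDER" (
    in i_p_id integer,
    in i_qty  integer
)
language sqlscript as
begin
    declare available integer;

    -- lock the inventory row for this product; the lock is held until COMMIT
    select "Inventory" into available
      from "KABIL_PRACTICE"."Inventory"
     where "P_ID" = :i_p_id
       for update;

    if :available < :i_qty then
        -- decline the whole order
        signal sql_error_code 10002
            set message_text = 'Order quantity exceeds available inventory';
    end if;

    update "KABIL_PRACTICE"."Inventory"
       set "Inventory" = "Inventory" - :i_qty
     where "P_ID" = :i_p_id;

    insert into "KABIL_PRACTICE"."SALES" ("P_ID", "QTY")
        values (:i_p_id, :i_qty);

    -- the caller commits the transaction, which releases the lock
end;

The application would then just run something like call "KABIL_PRACTICE"."PROCESS_ORDER"(1, 5); and commit (or roll back), which releases the lock.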

Related

PostgreSQL: More efficient way of joining tables based on multiple address fields

I have a table that lists two connected values, ID and TaxNumber (TIN), that looks somewhat like this:
IDTINMap
ID         | TIN
-----------+-------
1234567890 | 654321
3456321467 | 986321
8764932312 | 245234
An ID can map to multiple TINs, and a TIN might map to multiple IDs, but there is a Unique constraint on the table for an ID, TIN pair.
This list isn't complete, and the table has about 8000 rows. I have another table, IDListing that contains metadata for about 9 million IDs including name, address, city, state, postalcode, and the ID.
What I'm trying to do is build an expanded ID - TIN map. Currently I'm doing this by first joining the IDTINMap table with IDListing on the ID field, which gives something that looks like this in a CTE that I'll call Step1 right now:
ID         | TIN    | Name      | Address        | City          | State | Zip
-----------+--------+-----------+----------------+---------------+-------+------
1234567890 | 654321 | John Doe  | 123 Easy St    | Seattle       | WA    | 65432
3456321467 | 986321 | Tyler Toe | 874 W 84th Ave | New York      | NY    | 48392
8764932312 | 245234 | Jane Poe  | 984 Oak Street | San Francisco | CA    | 12345
Then I join the IDListing table again, this time to Step1, on address, city, state, zip, and name all being equal. I know I could do something more complicated like fuzzy matching, but for right now we're just looking at exact matches. In the join I preserve the ID from Step1 as 'ReferenceID', keep the TIN, and then have another column with all the matching IDs. I don't keep any of the address/city/state/zip info, just the three numbers.
Then I can go back and insert all the distinct pairs into a final table.
I've tried this with a query and it works and gives me the desired result. However, the query is slower than desired. I'm used to joining on columns that I've indexed (like ID or TIN), but it's slow to join on all of the address fields. Is there a good way to improve this? Joining on each field individually is faster than joining on a CONCAT() of all the fields (I have tried that). I'm just wondering if there is another way I can optimize this.
Make the final result a materialized view. Refresh it when you need to update the data (every night? every three hours?). Then use this view for your normal operations.
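A sketch of that approach, wrapping the two-step join described in the question in a materialized view; the table and column names follow the question (postalcode is assumed to be the Zip column shown in the sample output), and the view and index names are illustrative:

-- Persist the result of the expensive address join once, instead of running it per query.
CREATE MATERIALIZED VIEW id_tin_expanded AS
WITH step1 AS (
    SELECT m.id, m.tin, l.name, l.address, l.city, l.state, l.postalcode
    FROM idtinmap m
    JOIN idlisting l ON l.id = m.id
)
SELECT DISTINCT s.id AS reference_id, s.tin, l2.id AS matched_id
FROM step1 s
JOIN idlisting l2
  ON  l2.name       = s.name
  AND l2.address    = s.address
  AND l2.city       = s.city
  AND l2.state      = s.state
  AND l2.postalcode = s.postalcode;

-- Refresh on whatever schedule fits (nightly, every few hours, ...).
REFRESH MATERIALIZED VIEW id_tin_expanded;

-- The materialized view can be indexed for the normal lookups.
CREATE INDEX ON id_tin_expanded (reference_id, tin);

Normal queries then read id_tin_expanded instead of re-running the address join.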

Versioning in the database

I want to store full versioning of the row every time an update is made to an amount-sensitive table.
So far, I have decided to use the following approach:
Do not allow updates.
Every time an update is made, create a new entry in the table.
However, I am undecided on what is the best database structure design for this change.
Current Structure
Primary Key: id
id(int) | amount(decimal) | other_columns
First Approach
Composite Primary Key: (id, version)
id(int) | version(int) | amount(decimal) | change_reason
1       | 1            | 100             |
1       | 2            | 20              | correction
Second Approach
Primary Key: id
Unique index on (origin_id, version)
id(int) | origin_id(int) | version(int) | amount(decimal) | change_reason
1       | NULL           | 1            | 100             | NULL
2       | 1              | 2            | 20              | correction
I would suggest a new table which stores a unique id for each item. This serves as a lookup table for all available items.
item Table:
id(int)
1000
For the table which stores all changes for an item, let's call it the item_changes table, item_id is a FOREIGN KEY to the item table's id. The relationship between the item table and the item_changes table is one-to-many.
item_changes Table:
id(int) | item_id(int) | version(int) | amount(decimal) | change_reason
1       | 1000         | 1            | 100             | NULL
2       | 1000         | 2            | 20              | correction
With this, item_id will never be NULL, as it is always a valid FOREIGN KEY to the item table's id.
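A minimal DDL sketch of that layout (generic SQL; the column types, lengths, and the surrogate key on item_changes are assumptions):

CREATE TABLE item (
    id INTEGER PRIMARY KEY
);

CREATE TABLE item_changes (
    id            INTEGER PRIMARY KEY,
    item_id       INTEGER NOT NULL REFERENCES item (id),
    version       INTEGER NOT NULL,
    amount        DECIMAL(12, 2) NOT NULL,
    change_reason VARCHAR(255),
    UNIQUE (item_id, version)  -- one row per version of an item
);

-- an "update" is simply the next version of the same item
INSERT INTO item (id) VALUES (1000);
INSERT INTO item_changes VALUES (1, 1000, 1, 100, NULL);
INSERT INTO item_changes VALUES (2, 1000, 2, 20, 'correction');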
The best method is to use Version Normal Form (VNF). Here is an answer I gave for a neat way to track all changes to specific fields of specific tables.
The static table contains the static data, such as the PK and other attributes which do not change over the life of the entity (or whose changes need not be tracked).
The version table contains all dynamic attributes that need to be tracked. The best design uses a view which joins the static table with the current version from the version table, as the current version is probably what your apps need most often. Triggers on the view maintain the static/versioned design without the app needing to know anything about it.
The link above also contains a link to a document which goes into much more detail including queries to get the current version or to "look back" at any version you need.
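As a rough sketch of that static/versioned split (generic SQL; all names are illustrative, and the triggers that maintain the design through the view are omitted here):

CREATE TABLE account_static (
    id INTEGER PRIMARY KEY
    -- attributes that never change (or whose changes are not tracked) go here
);

CREATE TABLE account_version (
    id         INTEGER NOT NULL REFERENCES account_static (id),
    version    INTEGER NOT NULL,
    amount     DECIMAL(12, 2) NOT NULL,
    changed_at TIMESTAMP NOT NULL,
    PRIMARY KEY (id, version)
);

-- Most code only needs the current version, so expose it as a view.
CREATE VIEW account_current AS
SELECT s.id, v.amount, v.changed_at
FROM account_static s
JOIN account_version v
  ON v.id = s.id
 AND v.version = (SELECT MAX(version)
                    FROM account_version
                   WHERE id = s.id);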
Why not go for SCD-2 (Slowly Changing Dimension, type 2)? It is a methodology that describes a good solution for exactly this problem and is a standard design pattern for the database. Here are the advantages of SCD-2 and an example of its use.
Type 2 - Creating a new additional record. In this methodology, all history of dimension changes is kept in the database. You capture an attribute change by adding a new row with a new surrogate key to the dimension table. Both the prior and new rows contain as attributes the natural key (or other durable identifier). 'Effective date' and 'current indicator' columns are also used in this method. There can be only one record with the current indicator set to 'Y'. For the 'effective date' columns, i.e. start_date and end_date, the end_date of the current record is usually set to 9999-12-31. Introducing changes to the dimensional model in type 2 can be a very expensive database operation, so it is not recommended in dimensions where new attributes could be added in the future.
id | amount | start_date  | end_date    | current_flag
1  | 100    | 01-Apr-2018 | 02-Apr-2018 | N
2  | 80     | 04-Apr-2018 | NULL        | Y
Detailed explanation:
Here, all you need is to add 3 extra columns, START_DATE, END_DATE, and CURRENT_FLAG, to track your records properly. When the record is inserted for the first time at the source, the table stores the value as:
id | amount | start_date  | end_date | current_flag
1  | 100    | 01-Apr-2018 | NULL     | Y
And when the same record is updated, you set the "END_DATE" of the previous record to the current system date and its "CURRENT_FLAG" to 'N', and insert the new record as below. This way you can track everything about your records:
id | amount | start_date  | end_date    | current_flag
1  | 100    | 01-Apr-2018 | 02-Apr-2018 | N
2  | 80     | 04-Apr-2018 | NULL        | Y
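A sketch of that update step in plain SQL; the table name amount_history and the business_key column that identifies the record across versions are assumptions (they are not shown in the example above), and the surrogate id is assumed to be generated automatically:

-- close out the current version of the record
UPDATE amount_history
   SET end_date     = CURRENT_DATE,
       current_flag = 'N'
 WHERE business_key = 42
   AND current_flag = 'Y';

-- insert the new version as the current one
INSERT INTO amount_history (business_key, amount, start_date, end_date, current_flag)
VALUES (42, 80, CURRENT_DATE, NULL, 'Y');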

Will huge table entries slow down query performance?

Let's say I have a table persons that looks like this:
|id | name | age |
|---|------|-----|
|1 |foo |21 |
|2 |bar |22 |
|3 |baz |23 |
and add a new column history where I store a big JSON blob of, let's say ~4MB.
|id | name | age | history |
|---|------|-----|----------|
|1 |foo |21 |JSON ~ 4MB|
|2 |bar |22 |JSON ~ 4MB|
|3 |baz |23 |JSON ~ 4MB|
Will this negatively impact queries against this table overall?
What about queries like:
SELECT name FROM persons WHERE ... (Guess: This won't impact performance)
SELECT * FROM persons WHERE ... (Guess: This will impact performance as the database needs to read and send the big history entry)
Are there any other side effects like various growing caches etc. that could slow down database performance overall?
The JSON attribute will not be stored in the table itself, but in the TOAST table that belongs to the table, which is where all variable-length entries above a certain size are stored (and compressed).
Queries that do not read the JSON values won't be affected at all, since the TOAST entries won't even be touched. Performance will be affected only if you read the JSON value, mostly because of the additional data read from storage and transmitted to the client; of course the additional data will also reside in the database cache and compete with other data there.
So your guess is right.
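If you want to see this for yourself, you can check which TOAST table backs persons and how large it is (a sketch for PostgreSQL, assuming the persons table from the question):

-- which TOAST table belongs to persons?
SELECT reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE relname = 'persons';

-- heap size vs. total size including TOAST and indexes
SELECT pg_size_pretty(pg_relation_size('persons'))       AS heap_only,
       pg_size_pretty(pg_total_relation_size('persons')) AS total_with_toast_and_indexes;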
Depending on how many transactions use this table, and on the type of those transactions (create, read, update, delete), there could be performance issues.
If you update the history a lot, you will be doing many update transactions, each of which rewrites the row and maintains the table's indexes.
Say the persons table is read every time a user logs in, and the history for that user is updated at the same time. You are then doing a select and an update on every login; if this happens a lot, it causes a lot of row and index churn and could cause contention when users are logging on while other users are also updating their history.
A better option would be to have a separate personupdates table with
a relation to the persons table.
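A minimal sketch of that split (PostgreSQL; the column types and the choice of jsonb for the history are assumptions):

CREATE TABLE persons (
    id   integer PRIMARY KEY,
    name text,
    age  integer
);

-- keep the large, frequently rewritten history out of the persons table
CREATE TABLE personupdates (
    person_id integer PRIMARY KEY REFERENCES persons (id),
    history   jsonb   NOT NULL
);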

How to disallow parallel INSERTs in PostgreSQL?

I have an ON INSERT trigger in PostgreSQL 9.2, which does some calculation and injects extra data into every row being inserted. The problem is that I don't want any two INSERT transactions to happen in parallel. I want them to go one by one, because my calculations have to be incremental and take into account previous calculation results. Is it possible to achieve this somehow?
I'm trying to create a rolling balance on a list of payments:
id | amount | balance
----------------------
1 | 50 | 50
2 | 130 | 180
3 | -75 | 105
4 | 15 | 120
The balance has to be calculated on every INSERT as the previous balance plus the new payment amount. If INSERTs happen in parallel, I get duplicates in the balance column, which is logical. I need to find a way to enforce that they are executed in strict sequential order.
A SERIALIZABLE transaction didn't help, mostly because of the problem explained here. I simply get errors on most transactions: could not serialize access due to read/write dependencies among transactions.
What did help was an explicit LOCK before every INSERT:
LOCK TABLE receipt IN SHARE ROW EXCLUSIVE MODE;
INSERT INTO receipt ...
All inserts now happen sequentially.
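For completeness, a sketch of the whole transaction, assuming the receipt table has the amount and balance columns shown above (the id is assumed to be generated by default):

BEGIN;

-- SHARE ROW EXCLUSIVE conflicts with itself and with ordinary writes,
-- so only one session at a time computes the next balance.
LOCK TABLE receipt IN SHARE ROW EXCLUSIVE MODE;

INSERT INTO receipt (amount, balance)
SELECT 130, COALESCE(MAX(balance), 0) + 130
FROM receipt;

COMMIT;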

DB2 table partitioning and delete old records based on condition

I have a table with a few million records.
col1 | col2 | col3 | some_indicator | last_updated_date
-----+------+------+----------------+--------------------
     |      |      | yes            | 2009-06-09.12.2345
     |      |      | yes            | 2009-07-09.11.6145
     |      |      | no             | 2009-06-09.12.2345
I have to delete records older than a month with some_indicator=no.
I also have to delete records older than a year with some_indicator=yes. This job will run every day.
Can I use the DB2 partitioning feature for the above requirement?
How can I partition the table using the last_updated_date column and the two some_indicator values above?
One partition should contain the records falling under the monthly delete criterion, whereas the other should contain the yearly delete criterion records.
Are there any performance issues associated with table partitioning if this table is frequently read and upserted?
Any other best practices for the above requirement would also help.
I haven't done much with partitioning (I've mostly worked with DB2 on the iSeries), but from what I understand, you don't generally want to be shuffling things between partitions (i.e. making the partition '1 month ago'). I'm not even sure it's possible. If it were, you'd have to scan some (potentially large) portion of your table every day just to move rows around (select, insert, delete, in a transaction).
Besides which, partitioning is a DB Admin problem, and it sounds like you just have a DB User problem - namely, deleting 'old' records. I'd just do this in a couple of statements:
DELETE FROM myTable
WHERE some_indicator = 'no'
AND last_updated_date < TIMESTAMP(CURRENT_DATE - 1 MONTH, TIME('00:00:00'))
and
DELETE FROM myTable
WHERE some_indicator = 'yes'
AND last_updated_date < TIMESTAMP(CURRENT_DATE - 1 YEAR, TIME('00:00:00'))
.... and you can pretty much ignore using a transaction, as you want the rows gone.
(as a side note, using 'yes' and 'no' for indicators is terrible. If you're not on a version that has a logical (boolean) type, store character '0' (false) and '1' (true))