Asynchronous DML operations tracking - Oracle 12c

I want to log the DML operations on an Oracle table, i.e. the pre- and post-state of the rows, asynchronously and without a trigger. Can anyone suggest an approach?

Related

CICS optimization

I have a CICS program which reads a DB2 table to obtain rules based on the field name. Let's say my record type is AA; this type will have at least 20 rules that I need to loop through in DB2 tables. Likewise I have a few record types and many more rules tied to each type.
I get data from MQ and for each record type I call a separate CICS program. So when I have to process a high load, the DB2 rules table is held by so many programs that it causes a performance issue.
I want to get away from DB2 and load these rules into a CICS container and maintain them periodically. But I'm not sure if this will work. I don't want to use or create VSAMs. I'm looking for some kind of storage I could use and maintain in CICS.
My question is: if I create a pipeline and container, will I be able to access them from multiple programs at the same time, and will the rules stored in the container remain after a successful GET?
Before reading further, please understand that DB2 solves all the sharing and locking problems very efficiently. I've never encountered a problem with too many transactions trying to read a DB2 table concurrently. Updating, yes; a mix of updates and reads, yes; just reading, no.
So, in order to implement your own caching of a DB2 table inside CICS you need a data store. As @BruceMartin indicates, a TS queue is an option; I would say that, given your other constraints, it is your only option.
In order to automate this you must create a trigger on your DB2 table that fires after INSERT, UPDATE, or DELETE. The trigger must cause the TS queue to be repopulated. The repopulation mechanism could be EXCI or MQ, as the code performing the repopulation must execute within CICS.
During the repopulation, all transactions reading the TS queue must wait for the repopulation to complete. This can be done with the CICS ENQ API, with a caveat. In order to prevent all these transactions from single-threading through their TS queue read by always ENQing, I suggest using two TS queues: one holds the DB2 data and the other is a "trigger" TS queue. The contents of the trigger TS queue are not significant; you can store a timestamp, "Hello, World", or "ABC", it doesn't matter.
A normal transaction attempts a read of the trigger TS queue. If the read is unsuccessful the transaction simply reads the TS queue with the DB2 data. But if the read is successful then repopulation is in progress and the transaction ENQs on a resource (call it XYZ). On return from the ENQ, DEQ and read the TS queue with the DB2 data.
During repopulation, a program started by the trigger on the DB2 table executes in CICS. It first ENQs on resource XYZ, then creates the trigger TS queue, deletes the TS queue with the DB2 data, recreates that TS queue and populates it with the new DB2 data, deletes the trigger TS queue, and finally DEQs resource XYZ. I would strongly suggest using a multi-row SELECT to obtain the DB2 data, as it is significantly more efficient than the traditional OPEN CURSOR, FETCH, CLOSE CURSOR method.
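To make the two-queue flow concrete, here is a minimal, runnable sketch of it, written in Python purely for readability. The helpers ts_read, ts_write, ts_delete, enq and deq are hypothetical in-memory stand-ins for EXEC CICS READQ TS, WRITEQ TS, DELETEQ TS, ENQ RESOURCE and DEQ RESOURCE, and the queue and resource names are assumptions, not anything CICS-defined.

```python
import threading

# In-memory stand-ins for TS queues and ENQ resources, so the sketch runs on its own.
_queues: dict = {}
_locks: dict = {}

def ts_read(name):
    items = _queues.get(name)
    return list(items) if items else None   # None models an unsuccessful READQ TS

def ts_write(name, item):
    _queues.setdefault(name, []).append(item)

def ts_delete(name):
    _queues.pop(name, None)

def enq(resource):
    _locks.setdefault(resource, threading.Lock()).acquire()

def deq(resource):
    _locks[resource].release()

DATA_QUEUE = "RULEDATA"     # TS queue holding the cached DB2 rows (name assumed)
TRIGGER_QUEUE = "RULETRIG"  # "trigger" TS queue; its contents are irrelevant
ENQ_RESOURCE = "XYZ"        # the resource name used in the text

def read_rules():
    """Reader side: a normal transaction that wants the cached DB2 data."""
    if ts_read(TRIGGER_QUEUE) is not None:
        # The trigger queue exists, so repopulation is in progress:
        # wait on the ENQ, then release it straight away.
        enq(ENQ_RESOURCE)
        deq(ENQ_RESOURCE)
    return ts_read(DATA_QUEUE)

def repopulate_rules(fresh_rows):
    """Writer side: the program driven (via EXCI or MQ) by the DB2 trigger."""
    enq(ENQ_RESOURCE)               # readers that see the trigger queue will wait here
    ts_write(TRIGGER_QUEUE, "ABC")  # contents do not matter
    ts_delete(DATA_QUEUE)
    for row in fresh_rows:          # ideally fetched with a multi-row SELECT
        ts_write(DATA_QUEUE, row)
    ts_delete(TRIGGER_QUEUE)
    deq(ENQ_RESOURCE)

repopulate_rules([("AA", "rule 1"), ("AA", "rule 2")])
print(read_rules())
```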

Is optimistic locking equivalent to Select For Update?

It is my first time using EF Core and DDD concepts. Our database is Microsoft SQL Server. We use optimistic concurrency based on the RowVersion for user requests. This handles concurrent read and writes by users.
With the DDD paradigm, user changes are not written directly to the database, nor is the logic handled in the database with a stored procedure. It is a three-step process:
get aggregate from repository that pulls it from the database
update aggregate through domain commands that implement business logic
save aggregate back to repository that writes it to the database
The separation of read and write in the application logic can lead again to race conditions between parallel commands.
Since the time between read and write in the backend is normally fairly short, those race conditions can be handled with optimistic and also pessimistic locking.
To my understanding, optimistic concurrency using RowVersion is sufficient for the lost update problem, but not for write skew, as shown in Martin Kleppmann's book "Designing Data-Intensive Applications". This would require locking the read records.
To prevent write skew a common solution is to lock the records in step 1 with FOR UPDATE or in SQL Server with the hints UPDLOCK and HOLDLOCK.
EF Core supports neither FOR UPDATE nor SQL Server's WITH table hints.
If I'm not able to lock records with EF Core, does that mean there is no way to prevent write skew except using raw SQL or stored procedures?
If I use RowVersion, I first check the RowVersion after getting the aggregate from the database. If it doesn't match I can fail fast. If it matches it is checked through EF Core in step 3 when updating the database. Is this pattern sufficient to eliminate all race conditions except write skew?
Since the write skew race condition occurs when the read and the write are on different records, it seems that a transaction that makes a decision based on a read can always be added, maybe later during development. In a complex system I would not feel safe if it is not just simple CRUD access. Is there another solution when using EF Core to prevent write skew without locking records for update?
If you tell EF Core about the RowVersion attribute, it will use it in any update statement. BUT you have to be careful to preserve the RowVersion value from your original data retrieval. The usual work pattern would retrieve the data, the user potentially edits the data, and then the user saves the data. When the user saves the data, you would normally have EF retrieve the entity, update the entity with the user's changes, and save the updates. EF uses the RowVersion in a WHERE clause to ensure nothing has changed since you read the data. This is the tricky part: you want to make sure the RowVersion is still the same as in your initial data retrieval, not the second retrieval used to update the entity before saving.
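For illustration, the check described above boils down to a compare-and-swap on the version column: update the row only where the key and the originally read RowVersion still match, and treat zero affected rows as a concurrency conflict. Below is a minimal sketch of that pattern outside EF Core, written in Python/SQLAlchemy with hypothetical table, column and connection names, and a plain integer version column standing in for SQL Server's rowversion (which SQL Server increments automatically).

```python
from sqlalchemy import create_engine, text

engine = create_engine("mssql+pyodbc://user:pass@appdsn")  # placeholder DSN

def save_order_status(order_id: int, new_status: str, expected_version: int) -> None:
    """Compare-and-swap update: succeeds only if the version read earlier is unchanged."""
    with engine.begin() as conn:
        result = conn.execute(
            text(
                "UPDATE orders "
                "SET status = :status, row_version = row_version + 1 "
                "WHERE id = :id AND row_version = :expected"
            ),
            {"status": new_status, "id": order_id, "expected": expected_version},
        )
        if result.rowcount == 0:
            # The row changed (or disappeared) since it was read: surface a
            # concurrency conflict, analogous to EF Core's DbUpdateConcurrencyException.
            raise RuntimeError("Concurrency conflict: row was modified by another transaction")
```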

How to defer a slow database operation with SQLAlchemy / Python

I've got a REST interface to a database, implemented in Python FastAPI + SQLAlchemy. I want to run some (relatively) costly operations on my data, such as calculating cryptographic hashes or signatures. My hope is to be able to:
Perform the insert, so that all referential integrity is checked by the database.
Return the REST response to the front-end (primary key of insert).
Asynchronously calculate the costly hash/signature/whatever, and update into the DB (same table).
I'd prefer to have a cross-database solution, but it seems running a trigger on insert/update might be the way to go. If so, my target in production is PostgreSQL.
Does anyone have a suggested approach they'd favour?
I would try to separate the concerns.
Build a fast REST service that does items 1 and 2 (concern: consistent data persistence);
Create a scheduled job that does item 3 in the background using pgAgent or something similar (concern: data "decoration"), or use a notify/listen setup to invoke a separate process.
Running a trigger would do the actions synchronously. As far as I understand this is not your intention.
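A minimal sketch of that separation, assuming FastAPI + SQLAlchemy with a hypothetical documents table that has a nullable content_hash column (connection string, table and column names are placeholders):

```python
import hashlib

from fastapi import FastAPI
from pydantic import BaseModel
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://user:pass@localhost/app")  # placeholder DSN
app = FastAPI()


class DocumentIn(BaseModel):
    payload: str


@app.post("/documents")
def create_document(doc: DocumentIn) -> dict:
    # Concern 1: consistent persistence; the content_hash column stays NULL for now.
    with engine.begin() as conn:
        new_id = conn.execute(
            text("INSERT INTO documents (payload) VALUES (:payload) RETURNING id"),
            {"payload": doc.payload},
        ).scalar_one()
    # Concern 2: respond immediately with the primary key.
    return {"id": new_id}


def backfill_hashes() -> None:
    """Concern 3: data decoration; run this from cron, pgAgent, or a LISTEN worker."""
    with engine.begin() as conn:
        rows = conn.execute(
            text("SELECT id, payload FROM documents WHERE content_hash IS NULL")
        ).all()
        for row in rows:
            digest = hashlib.sha256(row.payload.encode()).hexdigest()
            conn.execute(
                text("UPDATE documents SET content_hash = :h WHERE id = :id"),
                {"h": digest, "id": row.id},
            )


if __name__ == "__main__":
    backfill_hashes()  # invoked as a standalone script by the scheduler
```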

Optimize trigger to write data on data warehouse

We are using a trigger to store data in the warehouse. Whenever some process is executed, a trigger fires and stores some information in the data warehouse. When the number of transactions increases, it affects the processing time.
What would be the best way to do this activity?
I was thinking about a Foreign Data Wrapper or an AWS read replica. Any other way to do this would be appreciated as well. Or perhaps I don't have to use a trigger at all?
Here are some quick tips:
1. Reduce latency between the database servers.
2. The target database table should have fewer indexes, to improve DML performance.
3. Logical replication may solve syncing data to the warehouse.
Option 3 is an architectural change, but with it you don't need to write triggers on each table to sync the data.
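If both ends are PostgreSQL (the mention of foreign data wrappers suggests the source is), option 3 looks roughly like the sketch below, issued from Python to match the other examples. Host names, table names and credentials are placeholders, the source cluster must be running with wal_level = logical, and the warehouse must already have matching table definitions.

```python
import psycopg2

# On the source (OLTP) database: publish the tables that should reach the warehouse.
src = psycopg2.connect("host=source-db dbname=app user=admin")  # placeholder DSN
src.autocommit = True
with src.cursor() as cur:
    cur.execute("CREATE PUBLICATION wh_pub FOR TABLE orders, order_items")

# On the warehouse database: subscribe to that publication.
# CREATE SUBSCRIPTION cannot run inside a transaction block, hence autocommit.
wh = psycopg2.connect("host=warehouse-db dbname=dw user=admin")  # placeholder DSN
wh.autocommit = True
with wh.cursor() as cur:
    cur.execute(
        "CREATE SUBSCRIPTION wh_sub "
        "CONNECTION 'host=source-db dbname=app user=replicator' "
        "PUBLICATION wh_pub"
    )
```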

How mongodb handles users requests when multiple insert commands execute

I am new to MongoDB and I want to know how MongoDB handles user requests.
1: What happens if multiple users fire multiple insert or read commands at the same time?
2: When or where does a snapshot come into the picture (in which phase)?
Multiple Inserts and Multiple Reads
MongoDB allows multiple clients to read and write the same data.
In order to ensure consistency, it uses locking and other concurrency control measures to prevent multiple clients from modifying the same piece of data simultaneously.
Read this documentation; it will give you complete info about concurrency:
concurrency reference
MongoDB allows very fast writes and updates by default. The tradeoff is that you are not explicitly notified of failures. By default most drivers do asynchronous, "unsafe" writes: this means that the driver does not return an error directly, similar to INSERT DELAYED with MySQL. If you want to know whether something succeeded, you have to check for errors manually using getLastError.
MongoDB doesn't offer durability if you use the default configuration; it writes data to disk only once every minute.
This can be configured using the j option and the write concern on the insert query.
write-concern reference
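For example, with the Python driver (pymongo) you can request acknowledged, journaled writes per collection; the connection string, database and collection names below are placeholders:

```python
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
db = client["mydb"]

# w=1 waits for acknowledgement from the primary; j=True additionally waits
# until the write has been committed to the on-disk journal.
orders = db.get_collection("orders", write_concern=WriteConcern(w=1, j=True))

result = orders.insert_one({"item": "abc", "qty": 1})
print(result.inserted_id)  # populated because the write was acknowledged
```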
Snapshot
The $snapshot operator prevents the cursor from returning a document more than once because an intervening write operation results in a move of the document.
Even in snapshot mode, objects inserted or deleted during the lifetime of the cursor may or may not be returned.
snapshot reference
Hope it Helps!!
I am asking that question in the context of journaling in MongoDB. As per the MongoDB documentation, a write operation first comes into the private view. So the question is: if multiple write operations are performed at the same time, will multiple private views be created?
2: Checkpoints and snapshots: at which point in the journaling process is a snapshot of the data available?