Sample code for Cassandra trigger

First of all, the basic question is: how do I implement a trigger in Cassandra?
How do I perform a delete operation on multiple tables in a Cassandra trigger? Is there any sample code for deletes? Any detailed documentation on Cassandra triggers with sample code would be very helpful.
Thanks
Chaity

You can find documentation about the CQL CREATE TRIGGER statement here:
http://docs.datastax.com/en/cql/3.1/cql/cql_reference/trigger_r.html
Is this perhaps what you are looking for?

I hope it is not too late for a response.
In order to implement a Cassandra trigger, you need to:
implement the ITrigger interface from the Cassandra Maven dependency
build a jar with dependencies and place it under the /etc/cassandra/triggers folder (the location may vary depending on the environment: Docker, local installation, etc.)
start Cassandra and execute a CREATE TRIGGER ... CQL query (see the sketch below)
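For illustration, a minimal trigger class might look like this. This is a sketch assuming Cassandra 3.x, where ITrigger exposes a single augment(Partition) method; the package, class, keyspace, and table names are made up:

package com.example;

import java.util.Collection;
import java.util.Collections;

import org.apache.cassandra.db.Mutation;
import org.apache.cassandra.db.partitions.Partition;
import org.apache.cassandra.triggers.ITrigger;

// Cassandra calls augment() for every write to the table the trigger is
// attached to; any Mutations returned here are applied along with the
// original write.
public class AuditTrigger implements ITrigger {
    @Override
    public Collection<Mutation> augment(Partition update) {
        // Inspect update.metadata() to see which keyspace/table was written,
        // then build Mutations (including deletes) against other tables here.
        return Collections.emptyList();
    }
}

After building the jar and dropping it into the triggers folder, you would attach it with something like:

CREATE TRIGGER audit_trigger ON mykeyspace.mytable USING 'com.example.AuditTrigger';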
You can check out my sample project: https://github.com/timurt/cassandra-trigger
Inside, I implemented detection of insert, update, and delete operations for partition, row, and cell entities.
Hope this will help you!

Related

SAS Viya - Environment Manager: Job triggers

I am currently looking into SAS Viya 3.4 to replace SAS 9.4.
I was curious to see the possibilities of the Environment Manager for scheduling jobs and maintaining and creating job flows. However, I noticed that I could only drag and drop jobs into a flow and connect them, with very few configurable options. Also, as a trigger to start a job flow I was only able to select a time event. I am wondering if there are other trigger types to choose from, for example triggering a job when a specific table or file exists [or ...]. Nor did I see the possibility to trigger/start a job based on the return code of the previous job.
It also does not seem to be smart enough to make sure two jobs don't access a library with write access at the same time.
I can't see how SAS Viya could replace a job orchestration tool, yet I feel like the tool was built to replace one. Did I miss something, or is it just not possible to do this with the Environment Manager in SAS Viya?
Any help/insights are highly appreciated. I have already searched through the documentation but could not find anything. Maybe I was just looking in the wrong place?
Why 3.4 and not 3.5 (or Viya 4)?
If you want to use Viya with your own job orchestration software, you can consider this tool (built by my team): https://cli.sasjs.io/job/
We deployed it on Jenkins for this customer: https://www.sas.com/en_us/news/press-releases/2021/july/sas-partnership-with-lloyds-list-intelligence.html
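For example, a pipeline step could launch a Viya job through the CLI along these lines (an illustrative sketch: the job path and target name are made up, and the exact flags should be verified against the docs linked above):

# run a Viya job defined at the given folder path against a configured target
sasjs job execute /Public/jobs/load_sales --target myViyaServer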

jOOQ code generation fails on triggers - how to skip them?

In my application I use Flyway to migrate the database. I have an SQL file containing the database structure, which includes some CREATE TRIGGER statements. jOOQ code generation fails because it uses H2, which does not support these trigger statements. What is the best way to work around this problem?
Can I skip CREATE TRIGGER statements during code generation?
Refactor the CREATE TRIGGER statements into a separate SQL file. Can I skip SQL files based on file name for the code generation?
Can I use e.g. Docker to start a MariaDB server which is used instead of H2 for code generation?
Or maybe you have a better or nicer idea of how to deal with trigger creation?
You can ignore certain statements like this:
-- [jooq ignore start]
-- Anything between these two tokens is ignored by the jOOQ parser
CREATE TRIGGER ...
-- [jooq ignore stop]
Find the docs here: https://www.jooq.org/doc/3.1/manual/code-generation/codegen-ddl/#N90C34

Deploying DB2 user-defined functions in sequence of dependency

We have about 200 user-defined functions (UDFs) in DB2. These UDFs are generated by Data Studio into a single script file.
When we create a new DB, we need to run the script file several times because some UDFs depend on other UDFs and cannot be created until the functions they depend on have been created first.
Is there a way to generate the script file so that the order in which the UDFs are deployed takes these dependencies into account? Or is there some other technique to arrange the order efficiently?
Many thanks in advance.
That problem should only happen if the setting of auto_reval is not correct. See "Creating and maintaining database objects" for details.
Db2 allows objects to be created in an "unsorted" order. Only when an object is used (accessed) are the object and its dependent objects checked. This behavior was introduced a long time ago. Only some old, migrated databases keep auto_reval=DISABLED; some environments might set it based on configuration scripts.
If you still run into issues, try setting auto_reval=DEFERRED_FORCE.
The db2look system command can generate DDL ordered by object creation time with the -ct option, so that can help if you don't want to use the auto_reval method.
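For reference, the two approaches look roughly like this from a command line (MYDB and the output file name are placeholders):

# allow "unsorted" DDL by deferring revalidation of dependent objects
db2 update db cfg for MYDB using AUTO_REVAL DEFERRED_FORCE

# extract DDL ordered by object creation time
db2look -d MYDB -e -ct -o mydb_ddl.sql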

How to create a Derived Column in IIDR CDC for Kafka Topics?

We are currently working on a project to get data from an IBM i (formerly known as AS/400) system to Apache Kafka (Confluent Platform) with IBM IIDR CDC.
So far everything has been working fine; everything gets replicated and appears in the topics.
Now we are trying to create a derived column in a table mapping which gives us the journal entry type from the source system (IBM i).
We would like to have this information to see whether it was an insert, update, or delete operation.
Therefore we created a derived column called OPERATION as CHAR(2) with the expression &ENTTYP.
But unfortunately the Kafka topic doesn't show the value.
Can someone tell me what we are missing here?
Best regards,
Michael
I own the IBM IDR Kafka target, so let's see if I can help a bit.
You have two options. The recommended way to see audit information would be to use one of the audit KCOPs. For instance, you might use this one:
https://www.ibm.com/support/knowledgecenter/en/SSTRGZ_11.4.0/com.ibm.cdcdoc.cdckafka.doc/tasks/kcopauditavroformat.html#kcopauditavroformat
You'll note that the audit.jcf property in the example is set to CCID and ENTTYP, so you get both the operation type and the transaction id.
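For illustration, that property is just a comma-separated list of journal control fields in the KCOP configuration, along these lines (the exact file and surrounding settings are environment-specific; see the linked doc):

audit.jcf=ENTTYP,CCID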
If you are using derived columns, I believe you would follow this procedure: https://www.ibm.com/support/knowledgecenter/en/SSTRGZ_11.4.0/com.ibm.cdcdoc.mcadminguide.doc/tasks/addderivedcolumn.html
If this is not working out, open a ticket and the L2 folks will provide deeper debugging. Also, if you do end up adding a derived column: does the actual column get created in the output, just with no value in it?
Cheers,
Shawn
Your colleagues told me how to do it:
In the IDR Management Console, go to the "Filtering" tab, find the derived column in the "Filter Columns" (Source Columns) section, and mark "Replicate" next to it. Save the table mapping afterwards and check whether the value appears now.
Unfortunately a derived column isn't automatically selected for replication, but now I know how to select it.
You need to mark the new column for replication on the Filtering tab:
https://www.ibm.com/docs/en/idr/11.4.0?topic=mstkul-mapping-audit-fields-journal-control-fields-kafka-targets

tSQLt - create separate database for unit testing

I have started using tSQLt, and my question is: is it possible to have a separate database with just the testing stuff (tables, stored procedures, assemblies, etc.)?
This testing database will sit on the same instance as the actual/target database.
If I try to fake a table I get the following error:
FakeTable could not resolve the object name, 'target_db.dbo.Sometable'
Has anyone had any experience with this?
Thanks.
As you discovered, this isn't currently possible, as the mocking procedures don't accept three-part names. This has been covered on the user feedback forum of SQL Test (Red Gate's product that acts as a front end to tSQLt) at: http://sqltest.uservoice.com/forums/140716-sql-test-forum/suggestions/2421628-reduce-the-footprint
Dennis Lloyd, one of the authors of the tSQLt framework, wrote towards the end of that thread that support for a separate 'tSQLt' database was something they would keep under consideration.
There is also a related issue about mocking remote objects at http://sqltest.uservoice.com/forums/140716-sql-test-forum/suggestions/2423449-being-able-to-mock-fake-remote-objects
I hope that helps,
Dave
You can now do this, so long as the tSQLt framework is in the other database:
-- fakes a table in the current database, where tSQLt is installed
EXEC tSQLt.FakeTable '[dbo].[Position]';
-- runs FakeTable from the tSQLt installation in OtherDB to fake a table there
EXEC OtherDB.tSQLt.FakeTable '[dbo].[PositionArchive]';
Source
This means that you can at least put your tests where you want them, though you have to install the framework in the actual database under test, which is not perfect, but it's better.