Can Q Subscription in Q-replication be loaded/exported - db2

I'm new to Q-replication. I've gone through the IBM documentation on the topic. There is a step regarding Q subscriptions, and we have a requirement to replicate more than 250 tables.
What I'm supposed to do is create a replication queue map and then use it for the Q subscriptions, i.e.
-> specify the queues that will be used to transmit data
-> specify the rows and columns that we want to replicate.
This is a bulky task since we have more than 250 tables.
What I couldn't figure out is whether there is a way to export a Q subscription once it has been prepared.
What I mean is: once I create the Q subscriptions and perform Q replication offshore, can my subscription mapping be exported and sent onsite, so that the same procedure can be repeated there without manually performing the Q subscription step again?

The Replication Center, which you presumably are going to use to configure replication, will indeed generate a script that you would run to implement the changes. You can save this script and reuse it in other environments. http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.swg.im.iis.repl.qrepl.doc/topics/iiyrqrcccrunsqls.html

Related

Streaming data Complex Event Processing over files and a rather long period

My challenge:
We receive files every day with about 200,000 records. We keep the files for approximately 1 year to support re-processing, etc.
For the sake of the discussion, assume it is some sort of long-lasting fulfilment process, with a provisioning-ID that correlates records.
We need to identify flexible patterns in these files and trigger events.
Typical questions are:
if record A is followed by record B which is followed by record C, and all records occurred within 60 days, then trigger an event
if record D or record E was found, but record F did NOT follow within 30 days, then trigger an event
if both record D and record E were found (irrespective of the order), followed by ... within 24 hours, then trigger an event
Some patterns require lookups in a DB/NoSQL store or joins for additional information, either to select the record or to put into the event.
"Selecting a record" can be a simple "field-A equals", but can also be "field-A in []" or "field-A match" or "func identify(field-A, field-B)".
"Days" might also be "hours" or "in the previous month", hence more flexible than just "days". Usually we have some date/timestamp in the record. The maximum is currently "within 6 months" (cancel within the setup phase).
The created events (preferably JSON) need to contain data from all records which were part of the selection process.
We need an approach that allows us to flexibly change (add, modify, delete) the patterns, optionally re-processing the input files.
Any thoughts on how to tackle the problem elegantly? Maybe a Python or Java framework, or do any of the public cloud platforms (AWS, GCP, Azure) address the problem space especially well?
Thanks a lot for your help.
After some discussions and reading, we'll first try Apache Flink with the FlinkCEP library. From the docs and blog entries it seems to be able to do the job. It also seems to be AWS's choice, running on their EMR clusters. We didn't find any managed service on GCP or Azure providing this functionality; of course we can always deploy and manage it ourselves. Unfortunately, we didn't find a Python framework.
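For anyone weighing the same option, here is a rough sketch of how the first pattern above ("A then B then C within 60 days") could be expressed with Flink SQL's MATCH_RECOGNIZE clause, which sits on top of the same CEP library. The table, column names, and time attribute are assumptions, and in practice we would use the Java FlinkCEP API rather than SQL.

-- Assumed input: a table fulfilment_events(provisioning_id, record_type, event_time),
-- registered with event_time as its event-time attribute.
SELECT T.provisioning_id, T.a_time, T.c_time
FROM fulfilment_events
    MATCH_RECOGNIZE (
        PARTITION BY provisioning_id              -- correlate records per provisioning-ID
        ORDER BY event_time
        MEASURES
            A.event_time AS a_time,
            C.event_time AS c_time
        ONE ROW PER MATCH
        AFTER MATCH SKIP PAST LAST ROW
        PATTERN (A B C) WITHIN INTERVAL '60' DAY  -- whole sequence must fit in 60 days
        DEFINE
            A AS A.record_type = 'A',
            B AS B.record_type = 'B',
            C AS C.record_type = 'C'
    ) AS T;
-- Note: PATTERN (A B C) is strict contiguity; allowing unrelated records in between
-- needs extra pattern variables here, or followedBy() in the Java FlinkCEP API.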

How to take data from 2 databases (with same schema) and copy it into 1 database using Data Factory

I want to take data from 2 databases and copy (coalesce) it into 1 using Data Factory.
The issue is that multiple inputs are not allowed for copy activities.
So I resorted to having 2 different datasets, which are exact copies but with different names, and then putting 2 different activities into the 1 pipeline, each using its specific output dataset.
It just seems odd and wrong to do it this way.
Can I have some help?
This is what my diagram currently looks like:
Is there no way of just copying data from 2 separate databases (which have the same structure but different data) to the 1 database?
The short answer is yes. But you need to work within the constraints of how ADF handles this.
A couple of things to help...
You'll always need at least 2 activities to do this when using the copy type activity. Microsoft of course charges per activity execution in ADF, so they aren't going to allow you to take shortcuts by having many inputs and outputs per single copy activity (single charge).
The approach you show above is OK, and to pass the ADF validation, as you've found, you simply need to have the output datasets created separately and called different things, even if they still refer to the same underlying target table. This is really only a problem for the copy activity. What you could do is first land the data into separate staging tables in the Azure target database just for the copy (1:1), then have a third downstream activity that executes a stored procedure that does the union of the tables. In this case you could have 2 inputs to 1 output in the activity if you want that level of control in ADF.
Like this:
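For the stored procedure part, a minimal T-SQL sketch of what that third activity might execute, assuming two staging tables and a final target table (all names here are illustrative):

-- Staging_DbA / Staging_DbB are the 1:1 landing tables, TargetTable is the coalesced table.
CREATE PROCEDURE dbo.MergeStagedData
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.TargetTable (Col1, Col2, Col3)
    SELECT Col1, Col2, Col3 FROM dbo.Staging_DbA
    UNION ALL
    SELECT Col1, Col2, Col3 FROM dbo.Staging_DbB;

    -- Empty the staging tables ready for the next slice.
    TRUNCATE TABLE dbo.Staging_DbA;
    TRUNCATE TABLE dbo.Staging_DbB;
END

The two copy activities then each write to their own staging table, and the downstream activity calls this procedure afterwards.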
Final point: if you don't want the activities to execute in parallel, you could chain the datasets to enforce a fake dependency, or add a simple 'delay' clause to one of the copy operations. A delay on an activity would be simpler than provisioning a time slice offset.
Hope this helps

Handling multiple updates to a single db field

To give a bit of background to my issue, I've got a very basic banking system. The process at the moment goes:
A transaction is added to an Azure Service Bus
An Azure Webjob picks up this message and creates the new row in the SQL DB.
The balance (total) of the account needs to be updated with the value in the message (be it + or -).
So for example if the field is 10 and I get two updates (+10, -5), the field needs to be 15 (10 + 10 - 5); it isn't a case of just overwriting the value, it needs to do some arithmetic.
Now I'm not too sure how to handle the update of the balance, as there could be many requests coming in, so I need to update accordingly.
I figured one way is to do the update on the SQL side rather than in the WebJob, but that doesn't help with concurrent updates.
Can I do some locking on the field? But what happens to an update when it is blocked because another update is already in progress? Does it wait or fail? If it waits then this should be OK. I'm using EF.
I figured another way round this is to have another WebJob that runs on a schedule, adds up all the amounts and updates the value once, so that this is the only thing touching that field.
Thanks
One way or another, you will need to serialize write access to the account balance field (actually to the whole row).
Having a separate job that picks up "pending" inserts and eventually updates the balance will be OK if writes are more frequent than reads on your system, or if you don't always have to return the most recent balance. Otherwise, to get the current balance you will need to do something like:
SELECT balance +
       ISNULL((SELECT SUM(transaction_amount)
               FROM pending_insert pi
               WHERE pi.user_id = ac.user_id), 0) AS actual_balance
FROM account ac
WHERE ac.user_id = :user_id
That is definitely more expensive from a performance perspective, but for some systems it's perfectly fine. Another pitfall (again, it may or may not be relevant to your case) is enforcing, for instance, a non-negative balance.
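If you do go with the scheduled roll-up job, here is a hedged sketch of the batch it could run (same illustrative pending_insert / account schema as the query above):

BEGIN TRANSACTION;

-- Pull the rows to process out of the pending table atomically.
DECLARE @batch TABLE (user_id int, transaction_amount money);

DELETE FROM pending_insert
OUTPUT deleted.user_id, deleted.transaction_amount INTO @batch;

-- Apply the summed deltas to each affected balance.
UPDATE ac
SET balance = ac.balance + b.total
FROM account AS ac
JOIN (SELECT user_id, SUM(transaction_amount) AS total
      FROM @batch
      GROUP BY user_id) AS b
    ON b.user_id = ac.user_id;

COMMIT TRANSACTION;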
Alternatively, you can consistently handle banking transactions in the following way (a T-SQL sketch follows the list):
begin database transaction
find and lock row in account table
validate total amount if needed
insert record into banking_transaction
update the user account, i.e. balance = balance + transaction_amount
commit / rollback
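A minimal T-SQL sketch of those steps, assuming @user_id and @transaction_amount arrive as stored procedure parameters and that the schema matches the illustrative names above:

BEGIN TRANSACTION;

-- Find and lock the account row for the rest of the transaction.
DECLARE @balance money;
SELECT @balance = balance
FROM account WITH (UPDLOCK, HOLDLOCK)
WHERE user_id = @user_id;

-- Validate the total amount if needed (e.g. no negative balance).
IF @balance + @transaction_amount < 0
BEGIN
    ROLLBACK TRANSACTION;
    RETURN;
END;

INSERT INTO banking_transaction (user_id, transaction_amount)
VALUES (@user_id, @transaction_amount);

UPDATE account
SET balance = balance + @transaction_amount
WHERE user_id = @user_id;

COMMIT TRANSACTION;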
If multiple user accounts are involved, you have to always lock them in the same order to avoid deadlocks.
That approach is more robust, but potentially worse from concurrency point of view (again, it depends on the nature of updates in your application - here the worst case is many concurrent banking transactions for one user, updates to multiple users will go fine).
Finally, it's worth mentioning that since you are working with SQL Server, beware of deadlocks due to lock escalation. You may need to implement some retry logic in any case.
You would want to use a parameter substitution method in your SQL. You would need to find out how to do that based on the programming language you are using in your WebJob.
$updateval = -5;
Update dbtable set myvalue = myvalue + $updateval
code example:
int qn = int.Parse(TextBox3.Text);
SqlCommand cmd1 = new SqlCommand("update product set group1 = group1 + @qn where productname = @productname", con);
cmd1.Parameters.Add(new SqlParameter("@productname", TextBox1.Text));
cmd1.Parameters.Add(new SqlParameter("@qn", qn));
cmd1.ExecuteNonQuery();

How to transfer data from a SQL Server 2008 R2 database to the CRM 2011 internal database using a plugin?

Scenario:
X_source = N/A.
Y_source = SQL server 2008 R2.
Z_source = CRM 2011 database.
I have a Y_source that will be updated daily with information from X_source at certain intervals. After that is done, Z_source has to connect to the Y_source and load that information. I have no control over X_source or Y_source, but I do know that Y_source will be on the same network as Z_source.
Problem:
Since I know that there are more than 200,000 records in Y_source, I can't just fetch all the records and upload them to Z_source. I have to find a way to iterate through them either in batches or one by one. The idea I have in mind is to use T-SQL cursors, but this seems like the wrong approach.
Sources:
I have the address and credentials to both Y & Z. I also have control over Z_source.
Edit
OK, let me clear up some things that I think may be important:
Z_source is indeed a database that is separate from CRM 2011, but it is the origin of its source data.
Also, the process that updates Z_source can be external to CRM 2011, which means that as long as the database is updated, it does not matter whether CRM triggered the update or not.
The amount of Records to be handled will be well over 200,000.
I don't know if you're familiar with SSIS, but I think it could really help you!
Here are two nice posts about it: http://gotcrm.blogspot.be/2012/07/crm-2011-data-import-export-using-cozy.html and http://a33ik.blogspot.be/2012/02/integrating-crm-2011-using-sql.html
Regards,
Kévin
The solution that I came up with was to create a C# console application that connects to the Y_source and retrieves the data; then, using the CRM 2011 SDK, I took the quickstart app in Sdk/samplecode/cs/quickstart and modified it to insert into Z_source. This app runs via a Windows scheduled task 6 hours after the Y_source gets updated, so I don't need a precise trigger for this.
A few things:
Plugins in CRM 2011 are analogous to SQL triggers. CRM events, such as Create, Delete, Update, Merge, etc., trigger the execution of code you've written in a plugin. This doesn't seem appropriate for your situation as you want to do your operations in batches independently of CRM actions.
Nothing in CRM 2011 is done in set-based batches. Everything is done one database row at a time. (To prove this, profile any CRM event that you'd think should be done in one set and see the resultant SQL.) However, just because CRM 2011 can't use set-based operations doesn't mean you have to gather all your source data in SQL Server one row at a time.
So I recommend the following:
Write a quick app that pulls all the data from SQL Server at once. Call .ToList() on the result to place the result set in memory.
Loop through the list of rows, and for each, do the appropriate action in CRM 2011 (Create, Update, Delete, etc.).
For each row, include the unique identifier of that row in the CRM record so you'll know in the future whether to delete or update the record when syncing with Y_source.
Schedule your app to be run whenever Y_source is updated.
Depending on your needs, the app can become a CLR stored procedure that is scheduled or triggered in SQL Server, a console app that's run on a schedule on a server, or anything else that can accomplish the above. The recent question Schedule workflows via external tool speaks to this as well.
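If pulling all 200,000+ rows into memory with .ToList() ever becomes too heavy, a hedged alternative on the SQL Server side is keyset paging rather than a cursor; the table and key column below are assumptions:

-- Fetch the next batch after the last key the app has already synced to CRM.
-- dbo.SourceRecords and its Id column are illustrative; any indexed, increasing key works.
DECLARE @LastId int = 0;         -- persisted by the app between batches
DECLARE @BatchSize int = 1000;

SELECT TOP (@BatchSize) *
FROM dbo.SourceRecords
WHERE Id > @LastId
ORDER BY Id;

The app then records the highest Id it processed and repeats until no rows come back.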

Service Broker with SqlNotificationRequest

I am in the process of evaluating Service Broker with SQL notification for my project. My requirement is: a user places an order from System A, and it updates the Order table. As soon as an order is placed, I need to notify System B. I have done a quick POC with a trigger, Service Broker, and SqlNotificationRequest in ADO.NET, and it is working as I expected.
What I would like to ask the group:
A) What are the best practices I need to follow for this?
B) What are the disadvantages of the above approach, if any?
C) Are there any disadvantages to using triggers? If so, what are they for the above approach?
The Order table will receive around 1,000 to 1,500 orders from System A every day. I would also like to know about the performance of the above approach.
If what you're trying to do is simply push data from System A to System B, then as long as there are no clients connected to System B, you won't need SQL notification.
Instead of using triggers, you may consider Change Tracking. Take a look at the "Real Time Data Integration..." article on the Service Broker Team Blog.
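As a starting point, here is a minimal sketch of Change Tracking on the order table; the database, table, and key names are assumptions, and the table needs a primary key for this to work:

-- One-time setup (illustrative names).
ALTER DATABASE OrdersDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.OrderTable
ENABLE CHANGE_TRACKING;

-- System B polls for everything changed since the version it last processed.
DECLARE @last_sync_version bigint = 0;   -- persisted by System B between polls

SELECT ct.SYS_CHANGE_OPERATION, ct.OrderId, o.*
FROM CHANGETABLE(CHANGES dbo.OrderTable, @last_sync_version) AS ct
LEFT JOIN dbo.OrderTable AS o
    ON o.OrderId = ct.OrderId;

-- Remember this for the next poll.
SELECT CHANGE_TRACKING_CURRENT_VERSION() AS current_version;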