Reconciliation during Dataphor's registration of libraries - rdbms

When registering libraries in Dataphor, what is the difference between registering with and without reconciliation?
In my experience from learning and using this DBMS we've always registered without reconciliation. What are some example cases where we may choose one option over the other?

Registering with reconciliation means that the Data Definition Language (DDL) statements will be run against the target device(s). This is desired behavior when starting from a blank or non-existing database, where you want Dataphor to create the needed structures. Otherwise, the preferred methodology is to register without reconciliation so that any existing database is ignored, and use the DeviceReconciliationScript() operator to reconcile the changes.
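For illustration only, generating the reconciliation script in D4 might look something like the line below. The device name is hypothetical and the exact argument form of DeviceReconciliationScript() may differ in your Dataphor version, so treat this as a sketch rather than the definitive syntax.

    // Produce the DDL needed to bring the existing database in line with the catalog,
    // then review and apply it against the target device yourself.
    select DeviceReconciliationScript("MyDevice");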

Related

How to create warning message in trigger?

Is it possible to create a warning message in a trigger in Firebird 2.5?
I know I can create an exception message which will stop the user from saving the record changes, but in this instance I don't mind if the user continues.
Could I call a procedure that generates the message?
There is no mechanism in Firebird to produce warnings in PSQL code; you can only raise exceptions, which in triggers will cause the effect of the statement that fired the trigger to be undone.
In short, this is not possible.
There are workarounds possible, but they would require an 'external' protocol, like, for example, inserting the warning message into a global temporary table and requiring the calling code to explicitly select from that temporary table after execution.
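A minimal sketch of that workaround (the table, trigger, and column names are hypothetical; Firebird syntax):

    -- Warnings live only for the current transaction.
    CREATE GLOBAL TEMPORARY TABLE trigger_warnings (
        warning_text VARCHAR(200)
    ) ON COMMIT DELETE ROWS;

    SET TERM ^ ;
    CREATE TRIGGER orders_bu FOR orders BEFORE UPDATE
    AS
    BEGIN
        IF (NEW.quantity > 1000) THEN
            -- Record a warning instead of raising an exception, so the update still succeeds.
            INSERT INTO trigger_warnings (warning_text)
            VALUES ('Quantity is unusually large - please double-check.');
    END^
    SET TERM ; ^

After running its DML, the client would SELECT warning_text FROM trigger_warnings and decide what, if anything, to show the user.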
The SQL model does not provide a way to put a query on pause and then wait for extra input from the client to either continue or abort it. SQL is not a user-interactive service and there are no confirmation dialogs. You have to rethink your application design.
One possible avenue, nominally staying within the 2-tier client-server framework, would be to create temporary tables for all the data you want to save (for example, transaction-scoped GTTs) and then have TWO stored procedures. One SP would sanity-check the data and return a list of warnings, if any. The other SP would then dump the data from the GTTs into the main, persistent tables without repeating those checks.
Your client app would first select the warnings from the check-SP; if it returns any, show them to the user, and then either call the save-SP and commit, or roll back without calling the save-SP.
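A minimal sketch of the two-procedure idea (hypothetical table, column, and procedure names; Firebird syntax):

    CREATE GLOBAL TEMPORARY TABLE pending_orders (
        order_id  INTEGER,
        quantity  INTEGER
    ) ON COMMIT DELETE ROWS;

    SET TERM ^ ;
    -- Returns one row per warning; returns nothing if the staged data looks fine.
    CREATE PROCEDURE check_pending_orders
    RETURNS (warning_text VARCHAR(200))
    AS
    BEGIN
        FOR SELECT 'Order ' || order_id || ' has a suspiciously large quantity'
            FROM pending_orders
            WHERE quantity > 1000
            INTO :warning_text
        DO SUSPEND;
    END^

    -- Copies the staged rows into the persistent table without repeating the checks.
    CREATE PROCEDURE save_pending_orders
    AS
    BEGIN
        INSERT INTO orders (order_id, quantity)
        SELECT order_id, quantity FROM pending_orders;
    END^
    SET TERM ; ^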
This abuses the client/server idea, so there be dragons. First of all, you would need several GTTs and two SPs for E-V-E-R-Y pausable data-saving operation in your app. And that can be a lot.
Also, notice that the database data may change after you call the check-SP and before you call the save-SP, because some OTHER application running elsewhere could be changing and committing data during that pause. This is especially likely if your transaction is READ COMMITTED, but it can happen with a SNAPSHOT transaction too.
A better approach would be to drop the C/S scheme and go to a 3-tier model, AKA multi-tier, AKA "Application Server". That way your client app sends the "briefcase" of data to the app server; it would be the app server (not SQL triggers) doing all the data validation, and then it would save the data to the storage backend, SQL or any other.
There would, of course, still be the problem that the data could have been changed by other users while you paused one user and waited for them to read and decide. But you would have more flexibility for data reconciliation in the app server than you would have with plain SQL.

Preventing update loops for multiple databases using CDC

We have a number of legacy systems that we're unable to make changes to - however, we want to start taking data changes from these systems and applying them automatically to other systems.
We're thinking of some form of service bus (no specific tech picked yet) sitting in the middle, and a set of bus adapters (one per legacy application) to translate between database specific concepts and general update messages.
One area I've been looking at is using Change Data Capture (CDC) to monitor update activity in the legacy databases and use that information to construct appropriate messages. However, I have a concern: how best could I, as a consumer of CDC information, distinguish changes applied by the application from changes applied by the bus adapter on receipt of messages? Otherwise, the first update that gets distributed by the bus will be re-distributed by every receiver when they apply that change to their own system.
If I were implementing "poor man's" CDC - i.e. triggers - then those triggers would execute within the context/transaction/connection of the original DML statements, so I could either design them to ignore one particular user (the user applying incoming updates from the bus), or set and detect a session property to similarly ignore certain updates.
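For illustration, the session-property idea might look something like this in SQL Server (the table, key, and trigger names are hypothetical, and SESSION_CONTEXT requires SQL Server 2016 or later):

    -- The bus adapter marks its own connection before applying incoming updates.
    EXEC sp_set_session_context @key = N'applied_by_bus', @value = 1;

    -- The capture trigger skips changes made by the bus adapter so they are not re-published.
    CREATE TRIGGER trg_orders_capture ON dbo.Orders
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        IF CAST(SESSION_CONTEXT(N'applied_by_bus') AS int) = 1
            RETURN;
        -- ...otherwise record the change for the bus adapter to pick up...
    END;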
Any ideas?
If I understand your question correctly, you're trying to define a message routing structure that works with a design you've already selected (using an enterprise service bus) and a message implementation that you can use to flow data off your legacy systems that only forward-ports changes to your newer systems.
The difficulty is that you're trying to apply changes in such a way that they don't themselves generate a CDC message from the clients receiving the data bundle from your legacy systems. In fact, all you're concerned about is having your newer systems consume the data and not propagate messages back to your bus, creating unnecessary crosstalk that might grow exponentially and overload your infrastructure.
The secret is how MSSQL's CDC features reconcile changes as they propagate through the network. Specifically, note this caveat:
All the changes are logged in terms of LSN or Log Sequence Number. SQL distinctly identifies each operation of DML via a Log Sequence Number. Any committed modifications on any tables are recorded in the transaction log of the database with a specific LSN provided by SQL Server. The __$operation column values are: 1 = delete, 2 = insert, 3 = update (values before update), 4 = update (values after update).
cdc.fn_cdc_get_net_changes_dbo_Employee gives us all the records net changed falling between the LSN we provide in the function. We have three records returned by the net_change function; there was a delete, an insert, and two updates, but on the same record. In case of the updated record, it simply shows the net changed value after both the updates are complete.
For getting all the changes, execute cdc.fn_cdc_get_all_changes_dbo_Employee; there are options either to pass 'ALL' or 'ALL UPDATE OLD'. The 'ALL' option provides all the changes, but for updates, it provides the after updated values. Hence we find two records for updates. We have one record showing the first update when Jason was updated to Nichole, and one record when Nichole was updated to EMMA.
While this documentation is somewhat terse and difficult to understand, it appears that changes are logged and reconciled in LSN order. Competing changes should be discarded by this system, allowing your consistency model to work effectively.
Note also:
CDC is by default disabled and must be enabled at the database level
followed by enabling on the table.
Option B then becomes obvious: institute CDC on your legacy systems, then use your service bus to translate these changes into updates that aren't bound to CDC (using, for example, raw transactional update statements). This should allow for the one-way flow of data that you seek from the design of your system.
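As a rough sketch of the legacy-side plumbing (the schema, table, and capture-instance names are hypothetical), enabling CDC and reading the net changes between two LSNs looks something like this in T-SQL:

    -- One-time setup on each legacy database and table.
    EXEC sys.sp_cdc_enable_db;
    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'Employee',
        @role_name     = NULL;

    -- The bus adapter periodically harvests changes between two LSNs.
    DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_Employee');
    DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

    SELECT *
    FROM cdc.fn_cdc_get_net_changes_dbo_Employee(@from_lsn, @to_lsn, N'all');

In practice the adapter would persist the last @to_lsn it processed and use it as the starting point for the next harvest, rather than always reading from the minimum LSN.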
For additional methods of reconciling changes, consider the concepts raised by this Wikipedia article on "eventual consistency". Best of luck with your internal database messaging system.

Explain the steps for db2-cobol's execution process if both are db2-cobol programs

How do I run two subprograms from a main program if both are DB2-COBOL programs?
My main program, 'Mainpgm1', calls my subprograms 'subpgm1' and 'subpgm2', and I prefer static calls only.
I am now binding with a package instead of a plan, with one member, both in 'db2bind' (the bind program), along with one DBRMLIB that has a DSN name.
The main problem is: what changes are needed in 'db2bind' when I bind both DB2-COBOL programs?
And similarly, what changes are needed in 'DB2RUN' (the run program)?
Each program (or subprogram) that contains SQL needs to be pre-processed to create a DBRM. The DBRM is then bound into a PLAN that is accessed by a LOAD module at run time to obtain the correct DB/2 access paths for the SQL statements it contains.
You have gone from having all of your SQL in one program to several sub-programs. The basic process remains the same - you need a PLAN to run the program.
DBAs often suggest that if you have several sub-programs containing SQL, you create PACKAGES for them and then bind the PACKAGES into a PLAN. What was once a one-step process is now two:
Bind each DBRM into a PACKAGE
Bind the PACKAGES into a PLAN
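As a rough illustration (the collection, member, and plan names here are hypothetical, and your shop's bind options will differ), the two steps look something like these DB2 bind subcommands:

    BIND PACKAGE(MYCOLL) MEMBER(SUBPGM1) ACTION(REPLACE) VALIDATE(BIND) ISOLATION(CS)
    BIND PACKAGE(MYCOLL) MEMBER(SUBPGM2) ACTION(REPLACE) VALIDATE(BIND) ISOLATION(CS)
    BIND PLAN(MYPLAN) PKLIST(MYCOLL.*) ACTION(REPLACE)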
What is the big deal with PACKAGES?
Suppose you have 50 sub-programs containing SQL. If you create a DBRM for each of them and then bind all 50 into a PLAN as a single operation, it is going to take a lot of resources to build the PLAN, because every SQL statement in every program needs to be analyzed and access paths created for it. This isn't so bad when all 50 sub-programs are new or have been changed. However, if you have a relatively stable system and want to change 1 sub-program, you end up reBINDing all 50 DBRMs to create the PLAN - even though 49 of the 50 have not changed and will end up using exactly the same access paths. This isn't a very good approach. It is analogous to compiling all 50 sub-programs every time you make a change to any one of them.
However, if you create a PACKAGE for each sub-program, the PACKAGE is what takes the real work to build. It contains all the access paths for its associated DBRM. Now if you change just 1 sub-program, you only have to rebuild its PACKAGE by rebinding a single DBRM into the PACKAGE collection and then reBIND the PLAN. Binding a set of PACKAGES (a collection) into a PLAN is a whole lot less resource intensive than binding all the DBRMs in the system.
Once you have a PLAN containing all of the access paths used in your program, just use it. It doesn't matter if the SQL being executed is from subprogram1 or subprogram2. As long as you have associated the PLAN with the LOAD module that is being run, it should all work out.
Every installation has its own naming conventions and standards for setting up PACKAGES, COLLECTIONS and PLANS. You should review these with your Data Base Administrator before going much further.
Here is some background information concerning program preparation in a DB/2 environment:
Developing your Application

How do I listen for, load and run user-defined workflows at runtime that have been persisted using SqlWorkflowInstanceStore?

The result of SqlWorkflowInstanceStore.WaitForEvents does not tell me what type of workflow is runnable. The constructor of WorkflowApplication takes a workflow definition, and at a minimum, I need to be able to store a workflow ID in the store and query it, so that I can determine which workflow definition to load for the WorkflowApplication.
I also don't want to create a SqlWorkflowInstanceStore for each custom workflow type, since there may be thousands of different workflows.
I thought about trying to use WorkflowServiceHost, but not every workflow has a Receive activity and I don't think it is feasible to have thousands of WorkflowServiceHosts running, each supporting a different workflow type.
Ideally, I just want to query the database for a runnable workflow, determine its workflow definition ID, load the appropriate XAML from a workflow definition table, instantiate WorkflowApplication with the workflow definition, and call LoadRunnableInstance().
I would like to have a way to correlate which workflow is related to a given HasRunnableWorkflowEvent raised by the SqlWorkflowInstanceStore (along with the custom workflow definition ID), or have an alternate way of supporting potentially thousands of different custom workflow types created at runtime. I must also load balance the execution of workflows across multiple application servers.
There's a free product from Microsoft that does pretty much everything you say there, and then some. Oh, and it's excellent too.
Windows Server AppFabric. No, not Azure.
http://www.microsoft.com/windowsserver2008/en/us/app-main.aspx
-Oisin

COBOL DB2 program

If I have 1 COBOL-DB2 program which calls 2 other COBOL-DB2 subprograms, how many DBRMs, Packages, and Plans will it create? If I change any one of the subprograms, do I need to recompile and bind all the programs? I am really confused about DBRMs, Plans, and Packages.
Regards,
Manasi
Oh my... This is a huge topic, so this answer is going to be very simplified and therefore incomplete.
The answer depends somewhat on whether you are using the DB/2 pre-compiler or co-compiler. For this answer I will assume you are using the pre-compiler. If you are using the co-compiler the principle is pretty much the same, but the mechanics are a bit different.
The goal here is to generate:
a Load module from your COBOL source
a DB/2 Plan to provide your program with access paths into the DB/2 database
Everything in between just supports the mechanics of creating an appropriate DB/2 Plan for your program to run against.
The Mechanics
Each program and/or sub-program containing DB/2 statements needs to be pre-processed by the DB/2 pre-compiler. The pre-compiler creates a DBRM (Data Base Request Module). The pre-compile also alters your source program by commenting out all the EXEC SQL...END-EXEC statements and replacing them with specific calls to the DB/2 subsystem. You then use the regular COBOL compiler to compile the code emitted by the pre-processor to produce an object module, which you then link into an executable.
The DBRM produced by the pre-compile contains a listing of the SQL statements contained in your program, plus some other information that DB/2 uses to associate these specific SQL statements to your program. The DBRM is typically written to a permanent dataset (usually a PDS) and then input to the DB/2 Binder, where the specific access paths for each SQL statement in your program are compiled into a form that DB/2 can actually use. The binder does for DB/2 roughly the same function as the compiler does for COBOL. Think of the DBRM as your source code and the Binder as the compiler.
The access path information produced when the DBRM is bound needs to be stored somewhere so that it can be located and used when your program calls DB/2.
Where to put the binder output? Your options are to bind it into a Package or directly into a Plan.
The shortest route is to bind a group of DBRMs directly into a Plan. However, as with many shortcuts, this may not be the most efficient thing to do (I will touch upon the reason later on). Most larger systems will not bind DBRMs directly into Plans; they will bind into a Package. The bound Package is stored within the DB/2 sub-system (the same way a Plan is). What then is a Package?
A Package is a single bound DBRM. A Plan, on the other hand, typically contains the access paths for multiple DBRMs.
Now we have a set of Packages; each Package contains the SQL access paths for its respective DBRM, which was derived from a given program. We need to construct a Plan from these. To do this, a set of Bind Cards is created, usually by your Data Base Administrator. Bind Cards are just a sort of "source code" for the DB/2 Binder (they are not punched cards). The Bind Cards define which Packages form a given Plan. These are then submitted to the Binder, which assembles them into a Plan. Note: you may also hear mention of Collections; these are just named groupings of Packages that have been defined by your DBA.
To summarize, we have the following process:
Program -> (Pre-Compiler) -> Modified Program -> (COBOL Compiler) -> Object -> (Link) -> Load Module
Program -> (Pre-Compiler) -> DBRM -> (DB/2 Bind) -> Package
Bind Cards + Package(s) -> (DB/2 Bind) -> DB/2 Plan
The two fundamental outputs here are a Load Module (your executable COBOL program) and a DB/2 Plan. The Program and Plan come together in your JCL, where you have to give the Plan name somewhere within the EXEC statement along with the program to run.
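For example, a batch run through the TSO terminal monitor program might look roughly like this (the subsystem, plan, and library names are hypothetical; your shop's JCL will differ):

    //RUNSTEP  EXEC PGM=IKJEFT01
    //SYSTSPRT DD SYSOUT=*
    //SYSTSIN  DD *
      DSN SYSTEM(DSN1)
      RUN PROGRAM(MAINPGM1) PLAN(MYPLAN) LIB('MY.LOAD.LIBRARY')
      END
    /*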
With this brief, and highly simplified, background, let's try to answer your questions:
How many DBRMs are created?
One DBRM per program containing SQL EXEC statements.
How many Packages are created?
A Package is constructed from one DBRM. There is a 1:1 correspondence between source program and Package.
How many Plans are created?
Any given Package may be included in multiple Collections and/or multiple Bind Card sets. This means that a given Package may be included in multiple Plans.
If I change a program, what is affected?
If you bind your DBRM directly into a Plan, then you must rebind every Plan that uses that DBRM. This can be a very time consuming and error prone proposition.
However, if you bind your DBRM into a Package, then you only need to rebind that Package. Since there is a 1:1 correspondence between Program and Package, only a single bind needs to be done. The only time the Plan needs to be rebound is when a Package or Collection is added to or removed from the Bind Card set that defines it.
The advantage of using Packages should be clear from the above, and this is why most shops do not bind DBRMs directly into Plans, but use Packages instead.