Need a repeatable method for an SSIS package OnError event handler

All the SSIS packages here at work have an identical OnError event handler, and I'm looking for a way to avoid creating the same handler for every single package. The event handler first queries a table for a list of email addresses and then sends an email to those recipients, including in the body the package name, the package error, the error date and time, etc. The Execute SQL query and the email task are literally identical in every package's event handler. Is there some way to modularize this routine, perhaps by calling another package that handles it all? I want to eliminate (or nearly eliminate) the chance for developers to make a mistake while creating, recreating, and recreating yet again this identical process. The way it's done now, making a simple change to our error-handling process across all our packages will be a miserable task.

After extensive searching, I think I've settled on my best options: a custom SSIS task, a child package, or a stored procedure (in an Execute SQL task). I don't know how to make a custom task, so I'm going to opt for the child package and pass various execution state variables (like error description, error number, package name, package start time, etc.) into the child package as parameters. I assume I could make a custom task that only needs to be dropped into the error handler to work properly, but I don't have time to learn how.
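For reference, a minimal sketch of what the centralized routine might look like as a C# Script Task inside the child package. Everything specific here is an assumption rather than part of the original setup: the parameter names, the dbo.ErrorRecipients table, and the SMTP details are placeholders, and it uses SSIS 2012-style package parameters (with earlier versions the values would arrive via Parent Package Variable configurations instead).

```csharp
// Main() of the Script Task in the shared child package. The parent's OnError
// handler runs an Execute Package Task that maps System::PackageName,
// System::ErrorDescription, etc. onto the child's parameters, which must be
// listed in the Script Task's ReadOnlyVariables.
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Net.Mail;

public void Main()
{
    string packageName = (string)Dts.Variables["$Package::PackageName"].Value;
    string errorDescription = (string)Dts.Variables["$Package::ErrorDescription"].Value;
    DateTime errorTime = (DateTime)Dts.Variables["$Package::ErrorTime"].Value;

    // One place to change the recipient lookup for every package.
    var recipients = new List<string>();
    using (var conn = new SqlConnection("...audit db connection string..."))
    using (var cmd = new SqlCommand("SELECT EmailAddress FROM dbo.ErrorRecipients", conn))
    {
        conn.Open();
        using (var reader = cmd.ExecuteReader())
            while (reader.Read())
                recipients.Add(reader.GetString(0));
    }

    // And one place to change the notification format.
    using (var message = new MailMessage { From = new MailAddress("etl@example.com") })
    {
        foreach (string address in recipients)
            message.To.Add(address);
        message.Subject = "SSIS failure: " + packageName;
        message.Body = string.Format("Package: {0}\r\nError: {1}\r\nTime: {2}",
                                     packageName, errorDescription, errorTime);
        using (var smtp = new SmtpClient("smtp.example.com"))
            smtp.Send(message);
    }

    Dts.TaskResult = (int)ScriptResults.Success;
}
```

The same parameters could just as easily feed a stored procedure from an Execute SQL Task; either way, the lookup-and-notify logic lives in exactly one place.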

Related

DataStage - Loop through the file to read email IDs and send email

I have to read an input file to get the email IDs of employees and send each employee an email.
How can I do this using a DataStage job?
The file looks like this:
PERSON_ID|FName|LName|Email_ID
DataStage itself offers a Notification Stage, which is only available at the Sequence level.
As your information is in the data stream of a job, you could use a Wrapped Stage in order to send the mail from within the job.
A Wrapped Stage allows you to call an OS command for each row in your stream; sendmail etc. could be used to send the mails as you wish.
I have implemented this recently. The Wrapped Stage is tricky, so I would recommend using it in a very simple way: use it to call bash (or any other shell), prepare the mail command upfront, and simply send it to that stage.
There are some more options.
The first is using the Wrapped Stage, as Michael mentioned. Another method is writing a Parallel Routine to use in an ordinary parallel Transformer, which is quite similar.
The simplest way I know of to send an email per row is to use a server routine in a Transformer. The drawback is that server routines are deprecated, and it is not yet clear how well they can be migrated to future versions of DataStage (CP4D). This should be considered before going this route.
In each project you should have a folder Routines/Built-In/Utilities containing the server routines DSSendMailAttachmentTester and DSSendMailTester. These are originally meant to be used in the Routine Editor, just to test whether the backend is actually able to send mail.
But you can use them in a Transformer as well, as long as it is a BASIC Transformer. That means you can either write a server job using all the old-school stuff (which is probably not what you want), or use the BASIC Transformer in a parallel job. (Follow the link on how to enable it.) It gives access to BASIC transforms and functions.
I suggest copying the mentioned server routines to make your own custom one, and maybe modifying it to your needs.

How to create warning message in trigger?

Is it possible to create a warning message in a trigger in Firebird 2.5?
I know I can create an exception message which will stop the user from saving the record changes, but in this instance I don't mind if the user continues.
Could I call a procedure that generates the message?
There is no mechanism in Firebird to produce warnings in PSQL code; you can only raise exceptions, which in a trigger will cause the effects of the statement that fired the trigger to be undone.
In short, this is not possible.
Workarounds are possible, but they would require an 'external' protocol: for example, inserting the warning message into a global temporary table and requiring the calling code to explicitly select from that temporary table after execution, as in the sketch below.
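A minimal sketch of the client side of that protocol, using the Firebird ADO.NET provider (FirebirdSql.Data.FirebirdClient). The WARNINGS global temporary table, its MSG column, the ORDERS statement, and the trigger that populates the table are all assumptions for illustration:

```csharp
using System;
using FirebirdSql.Data.FirebirdClient;

static class WarningDemo
{
    // Assumes an ON COMMIT DELETE ROWS global temporary table WARNINGS(MSG)
    // that the trigger inserts into instead of raising an exception.
    static void UpdateAndShowWarnings(string connectionString)
    {
        using (var conn = new FbConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                using (var update = new FbCommand(
                    "UPDATE ORDERS SET STATUS = 'SHIPPED' WHERE ID = 42", conn, tx))
                    update.ExecuteNonQuery();          // may fire the trigger

                // The 'external' part: explicitly pick up whatever the trigger left.
                using (var check = new FbCommand("SELECT MSG FROM WARNINGS", conn, tx))
                using (var reader = check.ExecuteReader())
                    while (reader.Read())
                        Console.WriteLine("Warning: " + reader.GetString(0));

                tx.Commit();                           // the user may continue anyway
            }
        }
    }
}
```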
The SQL model does not provide a way to put a query on pause and then wait for extra input from the client to either unfreeze it or fail it. SQL is not a user-interactive service, and there are no confirmation dialogs. You have to rethink your application design.
One possible avenue, nominally staying within the 2-tier client-server framework, would be creating temporary tables for all the data you want to save (for example, transaction-scope GTTs) and then having TWO stored procedures. One SP would do the sanity checking and return a list of warnings, if any. The other SP would dump the data from the GTTs into the main, persistent tables without doing those checks.
Your client app would call the check-SP first; if it returns any warnings, show them to the user, then either call the save-SP and commit, or roll back without calling the save-SP, as in the sketch below.
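A sketch of that client flow, again with the Firebird ADO.NET provider. CHECK_ORDERS (a selectable procedure returning a WARNING column) and SAVE_ORDERS are hypothetical names, and userConfirms stands in for whatever confirmation dialog the app shows:

```csharp
using System;
using System.Collections.Generic;
using FirebirdSql.Data.FirebirdClient;

static class TwoPhaseSave
{
    // CHECK_ORDERS: selectable SP that inspects the GTT rows and reports warnings.
    // SAVE_ORDERS: SP that copies the GTT rows into the persistent tables.
    static void SaveWithConfirmation(FbConnection conn, Func<IList<string>, bool> userConfirms)
    {
        using (var tx = conn.BeginTransaction())
        {
            var warnings = new List<string>();
            using (var check = new FbCommand("SELECT WARNING FROM CHECK_ORDERS", conn, tx))
            using (var reader = check.ExecuteReader())
                while (reader.Read())
                    warnings.Add(reader.GetString(0));

            if (warnings.Count == 0 || userConfirms(warnings))
            {
                using (var save = new FbCommand("EXECUTE PROCEDURE SAVE_ORDERS", conn, tx))
                    save.ExecuteNonQuery();
                tx.Commit();
            }
            else
            {
                tx.Rollback(); // transaction-scope GTT rows vanish with the rollback
            }
        }
    }
}
```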
This is abusing the C/S idea, so there would be dragons. First of all, you would need several GTTs and two SPs for EVERY pausable data save in your app, and that can be a lot.
Also, notice that the database data may change after you call the check-SP and before you call the save-SP, because some OTHER application running elsewhere could be changing and committing data during that pause. This is especially true if your transaction is of the READ COMMITTED kind, but it can happen with a SNAPSHOT transaction too.
A better approach would be to drop the C/S scheme and go to a 3-tier model, a.k.a. multi-tier, a.k.a. "application server". That way your client app sends the "briefcase" of data to the app server; the app server (not SQL triggers) does all the data validation and then saves it to the data storage backend, SQL or any other.
There would, of course, still be the problem that the data could have been changed by other users while you paused one user and waited for him to read and decide. But you would have more flexibility for data reconciliation in the app server than you would with plain SQL.

How can I know what component failed?

When using the OnSubjobError trigger, I would like to know which component failed inside the subjob. I have read that you can check the error message of each component and select the one that is not null, but that feels like bad practice to me. Is there a variable that stores the identity of the component that failed?
I may be wrong, but I'm afraid there isn't. This is because globalVar elements are component-scoped (i.e. they are get/set by the components themselves), not subjob-scoped (which would mean being set by Talend itself, or something like that). When the subjobError signal is triggered, you lose any component-based data coming from tFileInputDelimited. For this design reason, I don't think you will be able to solve your problem without iterating inside the globalMap, searching for the error strings here and there.
Alternatively, you can use tLogCatcher, which has an 'origin' column, to spot the offending component and possibly route to different recovery subjobs depending on which component threw the exception. This is not a design I trust too much, actually, because tLogCatcher is job-scoped, while OnSubjobError is directly linked to a specific subjob only. But it could work in simple cases.

How to make an InArgument's value dependent upon the value of another InArgument at design time

I have a requirement to allow a user to specify the value of an InArgument / property from a list of valid values (e.g. a combobox). The list of valid values is determined by the value of another InArgument (the value of which will be set by an expression).
For instance, at design time:
User enters a file path into workflow variable FilePath
The DependedUpon InArgument is set to the value of FilePath
The file is queried and a list of valid values is displayed to the user to select the appropriate value (presumably via a custom PropertyValueEditor).
Is this possible?
Considering this is being done at design time, I'd strongly suggest you provide for all this logic within the designer, rather than in the Activity itself.
Design-time logic shouldn't be contained within your Activity. Your Activity should be able to run independent of any designer. Think about it this way...
You sit down and design your workflow using Activities and their designers. Once done, you install/xcopy the workflows to a server somewhere else. When the server loads that Activity prior to executing it, what happens when your design logic executes in CacheMetadata? Either it is skipped using some heuristic to determine that you are not running at design time, or you include extra logic to skip this code when it is unable to locate that file. Either way, why is a server executing this design-time code? The answer is that it shouldn't be; that code belongs with the designers.
This is why, if you look at the framework, you'll see that Activities and their designers exist in different assemblies. Your code should be the same way--design-centric code should be delivered in separate assemblies from your Activities, so that you may deliver both to designers, and only the Activity assemblies to your application servers.
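As a hedged illustration of that split (MyActivity and MyActivityDesigner are placeholder names): the designer assembly, conventionally a separate *.Design.dll, attaches design-time attributes to the Activity through the metadata store, so the Activity assembly never references any design-time types.

```csharp
using System.Activities;
using System.Activities.Presentation;
using System.Activities.Presentation.Metadata;
using System.ComponentModel;

// In the Activity assembly: no design-time references at all.
public sealed class MyActivity : CodeActivity
{
    protected override void Execute(CodeActivityContext context) { }
}

// In the designer assembly (conventionally MyActivities.Design.dll):
public class MyActivityDesigner : ActivityDesigner { }

public sealed class DesignerRegistration : IRegisterMetadata
{
    public void Register()
    {
        var builder = new AttributeTableBuilder();
        builder.AddCustomAttributes(
            typeof(MyActivity),
            new DesignerAttribute(typeof(MyActivityDesigner)));
        MetadataStore.AddAttributeTable(builder.CreateTable());
    }
}
```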
When do you want to validate this, at design time or run time?
Design time is limited because the user can use an expression that depends on another variable, and you can't read the value from there at design time. You can, however, look at the expression and possibly deduce an invalid combination that way. In that case you need to add code to the CacheMetadata function.
At run time you can get the actual values and validate them in the Execute function, as in the sketch below.
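A minimal sketch of both halves in a custom activity. The activity and argument names are placeholders, and the file format (one valid value per line) is an assumption:

```csharp
using System;
using System.Activities;

public sealed class SelectValueFromFile : CodeActivity
{
    public InArgument<string> FilePath { get; set; }
    public InArgument<string> SelectedValue { get; set; }

    protected override void CacheMetadata(CodeActivityMetadata metadata)
    {
        base.CacheMetadata(metadata); // registers the arguments as usual
        // Design time: expression values aren't available yet, but structural
        // problems can still be flagged here.
        if (FilePath == null)
            metadata.AddValidationError("FilePath must be set.");
    }

    protected override void Execute(CodeActivityContext context)
    {
        // Run time: the actual argument values are available here.
        string path = FilePath.Get(context);
        string value = SelectedValue.Get(context);
        if (Array.IndexOf(System.IO.File.ReadAllLines(path), value) < 0)
            throw new InvalidOperationException(
                string.Format("'{0}' is not a valid value according to {1}.", value, path));
    }
}
```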

How do I listen for, load and run user-defined workflows at runtime that have been persisted using SqlWorkflowInstanceStore?

The result of SqlWorkflowInstanceStore.WaitForEvents does not tell me what type of workflow is runnable. The constructor of WorkflowApplication takes a workflow definition, and at a minimum, I need to be able to store a workflow ID in the store and query it, so that I can determine which workflow definition to load for the WorkflowApplication.
I also don't want to create a SqlWorkflowInstanceStore for each custom workflow type, since there may be thousands of different workflows.
I thought about trying to use WorkflowServiceHost, but not every workflow has a Receive activity and I don't think it is feasible to have thousands of WorkflowServiceHosts running, each supporting a different workflow type.
Ideally, I just want to query the database for a runnable workflow, determine its workflow definition ID, load the appropriate XAML from a workflow definition table, instantiate WorkflowApplication with the workflow definition, and call LoadRunnableInstance().
I would like to have a way to correlate which workflow is related to a given HasRunnableWorkflowEvent raised by the SqlWorkflowInstanceStore (along with the custom workflow definition ID), or have an alternate way of supporting potentially thousands of different custom workflow types created at runtime. I must also load balance the execution of workflows across multiple application servers.
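For what it's worth, a hedged sketch of the flow described above. The Definitions and InstanceMap tables (and the Status column) are assumptions: you would maintain the instance-to-definition mapping yourself when creating each instance. It also sidesteps the stated limitation of LoadRunnableInstance by loading a specific instance with Load(instanceId) once HasRunnableWorkflowEvent signals that something is ready.

```csharp
using System;
using System.Activities;
using System.Activities.DurableInstancing;
using System.Activities.XamlIntegration;
using System.Data.SqlClient;
using System.IO;

static class RunnableInstanceLoader
{
    // InstanceMap(InstanceId, DefinitionId, Status) and Definitions(DefinitionId, Xaml)
    // are hypothetical tables populated by your own code at instance-creation time.
    static void RunOnePendingInstance(string connectionString)
    {
        Guid instanceId;
        string xaml;
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            @"SELECT TOP 1 m.InstanceId, d.Xaml
              FROM InstanceMap m
              JOIN Definitions d ON d.DefinitionId = m.DefinitionId
              WHERE m.Status = 'Pending'", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read()) return;
                instanceId = reader.GetGuid(0);
                xaml = reader.GetString(1);
            }
        }

        // Rehydrate the definition from the stored XAML, then load the instance.
        Activity definition = ActivityXamlServices.Load(new StringReader(xaml));
        var app = new WorkflowApplication(definition)
        {
            // Depending on hosting, you may first need to register an instance
            // owner on the store via CreateWorkflowOwnerCommand.
            InstanceStore = new SqlWorkflowInstanceStore(connectionString)
        };
        app.Load(instanceId);  // load this instance against its matching definition
        app.Run();
    }
}
```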
There's a free product from Microsoft that does pretty much everything you say there, and then some. Oh, and it's excellent too.
Windows Server AppFabric. No, not Azure.
http://www.microsoft.com/windowsserver2008/en/us/app-main.aspx
-Oisin