I am trying to create a trigger that sends an email based on a database event. Specifically, when a record is INSERTed into a certain table, I want an email stating that fact to go to the SysAdmin.
I can successfully do the following from a SQL window in iSeries Navigator:
CL: SNDDST TYPE(*LMSG)
    TOINTNET(('sysadmin@mycompany.com'))
    DSTD('this is the Subject Line')
    LONGMSG('This is an Email sent from iSeries box via Navigator')
...and an email gets sent, which means that the necessary SMTP stuff is there and working.
So all I'm trying to do is encapsulate this code, perhaps with some data changes (e.g. "A record has been added to the XYZ table on whatever-the-sysdate-is"). Navigator has some tantalizing examples that call CL to do some plain-vanilla things, but no clue as to how to make it work in a trigger. I know how to write triggers that do "database stuff", but not this CL stuff. And this is iSeries DB2, so I don't have access to UTL_MAIL.
I know next to nothing about CL, DDS or other iSeries internals... I would prefer not to have to create an external Java program, but will do that as a last resort...but even then, I'm having a hard time finding straightforward examples.
Thanks in advance.
First off, note that SNDDST isn't the best choice for internet mail from the IBM i. Basically, SNDDST is a relic from the SNADS networking days that IBM hacked into supporting SMTP emails. There are free alternatives, or if you're reasonably current on fixes for 7.1 then you should have the Send SMTP E-mail (SNDSMTPEMM) command available.
The Run SQL Scripts window of iNav does indeed support CL commands using the CL: prefix. But that's not the same thing as having the query engine itself understand CL.
The CL: prefix isn't going to work inside an SQL trigger.
You could, however, use the QCMDEXC stored procedure to call a CL command. But I wouldn't necessarily call that the best option.
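If you do go the QCMDEXC route, a minimal sketch looks something like this, assuming the one-parameter QSYS2.QCMDEXC procedure and the SNDSMTPEMM command are available on your release (the address and message text are placeholders; older releases need the two-parameter QSYS.QCMDEXC form that also takes the command length):

-- Sketch only: run a CL command (here SNDSMTPEMM) from SQL via QCMDEXC
CALL QSYS2.QCMDEXC('SNDSMTPEMM RCP((''sysadmin@mycompany.com'')) SUBJECT(''Test from SQL'') NOTE(''Sent via QCMDEXC and SNDSMTPEMM'')')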
The IBM i supports using "external" stored procedures and triggers. Theoretically, you could use a CL program that invokes the SNDSMTPEMM command directly. But given your desire to include data from the table, I wouldn't recommend that approach, as you'd be tied to the table structure.
Instead, create your own UTLMAILSND CL program that invokes SNDSMTPEMM. Then define the UTLMAILSND program as an external stored procedure (you can even give it a longer SQL name of UTIL_MAIL_SEND).
Now you can call your UTIL_MAIL_SEND() procedure from your SQL trigger.
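A sketch of what that definition might look like, with the library name, parameter names, and lengths as assumptions (fixed-length CHAR parameters are used here because they map most simply onto *CHAR parameters in a CL program):

-- Sketch only: adjust the parameters to match the CL program's parameter list
CREATE PROCEDURE UTIL_MAIL_SEND
  (IN MAILSUBJECT CHAR(256),
   IN MAILBODY    CHAR(1024))
  LANGUAGE CL
  EXTERNAL NAME 'MYLIB/UTLMAILSND'
  PARAMETER STYLE GENERAL;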
You need to try the SNDSMTPEMM command. It's like sliced bread compared to SNDDST TYPE(*LMSG). It supports HTML too, which makes for a lot of fun.
Yes, I used SNDSMTPEMM (skipping the HTML for now...).
One big note, however: using this command in a CL program doesn't work when it's called from SQL. I had to change it to a CLLE program.
So the final answer is as follows: a) an INSERT trigger on the table in question, which calls: b) an (external) PROCEDURE created in the database, which in turn calls: c) the compiled CLLE program object. Works like a charm.
p.s. I create the whole body of the email in the INSERT trigger, and pass it along, eventually to the CLLE program. This allows me to have just this one CLLE program to report on any INSERT/UPDATE/DELETE anywhere in the database.
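For anyone following the same path, here is a minimal sketch of parts (a) and (b) of that chain; XYZ, the trigger name, and the message text are placeholders, and the exact trigger options can vary by release:

-- Sketch only: reference N.<column> to pull values from the inserted row
CREATE TRIGGER XYZ_INSERT_NOTIFY
  AFTER INSERT ON XYZ
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
BEGIN
  DECLARE MSG VARCHAR(1024);
  SET MSG = 'A record has been added to the XYZ table on '
            CONCAT VARCHAR(CURRENT TIMESTAMP);
  CALL UTIL_MAIL_SEND('Record added to XYZ', MSG);
END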
I have to read an input file to get the email IDs of employees and send each employee an email.
How can I do this using a DataStage job?
The file looks like this:
PERSON_ID|FName|LName|Email_ID
DataStage itself offers a Notification Stage, which is only available at the Sequence level.
As your information is in the data stream of a job, you could use a Wrapped Stage to send the mail from within the job.
A Wrapped Stage allows you to call an OS command for each row in your stream; sendmail etc. could be used to send the mails as you wish.
I have implemented this recently. The Wrapped Stage is tricky, so I would recommend using it in a very simple way: use it to call bash (or any other shell), prepare the mail command upfront, and simply send it to that stage.
There are some more options.
The first is using the Wrapped Stage, as Michael mentioned. Another method is writing a Parallel Routine to use in an ordinary parallel Transformer, which is quite similar.
The simplest way of sending an email per row that I know of is using a server routine in a transformer.
A drawback is that server routines are deprecated, and it's not yet clear how well they can be migrated to future versions of DataStage (CP4D). This should be considered when going this route.
In each project you should have a folder Routines/Built-In/Utilities containing the server routines DSSendMailAttachmentTester and DSSendMailTester. These were originally meant to be used in the Routine Editor just for testing whether the backend is actually able to send mail.
But you can also use them in a Transformer, as long as it's a BASIC Transformer. That means you can either write a server job using all the old-school stuff (which is probably not what you want), or you can use the BASIC Transformer in a parallel job (see the DataStage documentation on how to enable it). It gives you access to BASIC transforms and functions.
I suggest copying the mentioned server routines to make your own custom one and maybe modify it to your needs.
Is there a way to call a program from DB2 interactive SQL on the AS/400 (STRSQL)? The program receives an argument by reference and modifies its content. In CL, you simply call it like this:
call myprogram 12345
I need to be able to call it in interactive SQL. Is there any way or workaround to do this, like launching an OS command? For example, in C you do system("your system command"). I couldn't find anything related to it.
STRSQL supports the SQL CALL statement.
The best option is to define the program as an external stored procedure:
-- note:
--   numeric --> zoned decimal
--   decimal --> packed decimal
CREATE PROCEDURE MYLIB.MYPROGRAM_SP
(IN number numeric(5,0))
LANGUAGE RPGLE
EXTERNAL NAME 'MYLIB/MYPROGRAM'
PARAMETER STYLE GENERAL;
Then you can:
CALL MYLIB.MYPROGRAM_SP(12345)
Technically, every *PGM object on the IBM i is a stored procedure. You can call it without explicitly defining it as shown above. But assumptions are made about the parms in that case. It's much better to provide the DB with the interface definition.
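To illustrate that last point, a direct call like the one below does work, but Db2 has to infer the parameter attributes from the argument itself, and that guess may not match what the program expects:

-- Works without a CREATE PROCEDURE, but the parameter definition is guessed
CALL MYLIB.MYPROGRAM(12345)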
Note that STRSQL is a 20-year-old tool; it has various limitations, including not supporting OUT or INOUT parameters of stored procedures.
A much better choice is the Run SQL Scripts component of IBM's Access Client Solutions (ACS).
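Since your program modifies the argument it receives, that parameter really wants to be INOUT so the caller can see the new value. Here is a sketch building on the example above (CREATE OR REPLACE needs a reasonably current release, and I believe ACS Run SQL Scripts will display the INOUT value after the call, which STRSQL cannot do):

-- Sketch: same external program, but the parameter declared INOUT
-- so the modified value is returned to the caller
CREATE OR REPLACE PROCEDURE MYLIB.MYPROGRAM_SP
  (INOUT number numeric(5,0))
  LANGUAGE RPGLE
  EXTERNAL NAME 'MYLIB/MYPROGRAM'
  PARAMETER STYLE GENERAL;

CALL MYLIB.MYPROGRAM_SP(12345)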
What is the difference between pre-compile and bind for a COBOL DB2 program?
How does the syntax check differ between the two processes?
If we give a wrong column name in our code, in which process will it fail?
It seems you need to do some study in the Db2 Knowledge Centre.
A pre-compile action creates a bindfile containing the static SQL present in the source code (i.e. the sections of code with EXEC SQL statements in your COBOL), in addition to a compilable form of the source code that contains the non-SQL logic and data (your PROCEDURE DIVISION and DATA DIVISION etc.).
A bind action uses both the bindfile and the database to create a package inside the database, which is the executable form of the bindfile contents. The package contains sections corresponding to your EXEC SQL blocks for static SQL.
Later, when the built (i.e. compiled and linked) application executes and wants to use the database, sections of the package are loaded from the database catalog (or read from cache) and executed by the database manager to deliver the required actions.
As each command (precompile vs. bind) serves a different purpose, the syntax differs, and it can also vary with the Db2 server platform (z/OS, i-series, Linux/Unix/Windows) and version.
Refer to the free Db2 Knowledge Center for your version of Db2 and your Db2 server platform (separate Knowledge Center documentation websites exist for Db2 for z/OS, Db2 for i-series, and Db2 for Linux/Unix/Windows).
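To make the two steps concrete, here is a rough sketch for one platform only, Db2 for Linux/Unix/Windows, using the command line processor (file and database names are placeholders, the TARGET option depends on your COBOL compiler, and on z/OS the equivalent steps are typically the precompiler step plus a BIND in your build JCL):

-- Db2 CLP sketch, run while connected to the target database
CONNECT TO sample;
-- Precompile: strips the EXEC SQL blocks out of myprog.sqb, producing
-- myprog.cbl for the COBOL compiler and myprog.bnd containing the static SQL
PRECOMPILE myprog.sqb BINDFILE USING myprog.bnd TARGET IBMCOB;
-- Bind: turns the bindfile into a package stored in the database
BIND myprog.bnd;
CONNECT RESET;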
I am working on a Ruby script (using MacRuby with Scripting Bridge) to do some processing on a FileMaker Pro database (FMP Advanced 10). I am able to read databases, tables, and records by creating a FileMakerProAdvancedApplication object:
framework 'scriptingbridge'
fm = SBApplication.applicationWithBundleIdentifier('com.filemaker.client.advanced')
The resulting object works great for reading values out of FileMaker databases, but I am confused about how to create new objects. The FileMaker scripting dictionary provides a "create" command, but it does not show up in the header generated by sdef /Applications/FileMaker\ Pro\ 10\ Advanced/FileMaker\ Pro\ Advanced.app/ | sdp -fh --basename FilemakerProAdvanced (command taken from Apple's Scripting Bridge docs). Is it possible to create new elements with FMP's script support? What am I missing?
I'm not sure I know too much about the Scripting Bridge, but I assume that it must be using AppleScript behind the scenes. When you say create new objects, do you mean records or tables?
I'm fairly certain you can't create tables (or fields) in FileMaker via AppleScript.
You can create (and delete) records within existing tables. I would fire up a copy of the AppleScript Editor, and have a look at the FileMaker script dictionary from that end.
The generated header files rarely duplicate the dictionary as seen via AppleScript for an application. There are sometimes duplicate function calls, and/or some objects and functions that are available via AppleScript are not available via Scripting Bridge. As far as I know, there is nothing to indicate why this change would be in place, and there is no way to get around this limitation.
I have been working with the stored procedure sys.xp_readerrorlog for around a week now, and what I have learned is that it accepts 7 parameters to fully refine how it should display its data. Easy enough to understand.
The question I now have is: where exactly does this stored procedure get its data from? I know you can also preview the data in the SSMS Object Explorer, under Management, in the SQL Server Logs folder, although my theory is that the dialog that opens when you read the logs also uses this procedure to display the data to the user in a grid.
I am baffled. I scouted through the system databases and found nothing (no table) which looks remotely like the output you get from this procedure:
exec sys.xp_readerrorlog 1,0,'','',null,null,N'Desc';
Can any expert tell me where the actual log data is stored, and whether it is queryable through a SELECT statement if you have admin rights?
It reads from the SQL Server error log file, which is a plain text file. There is no built-in interface to the file from T-SQL; xp_readerrorlog is widely known, but it's also undocumented, so relying on it is risky, although of course you can use it if you don't mind that risk.
Using SMO you can find the file location, but there is no special API for reading it because it's just a text file.
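If you just want to locate the file so you can open or parse it yourself, a way to get the path from T-SQL (on reasonably recent versions, I believe 2012 and later) is:

-- Returns the full path of the current SQL Server error log file
SELECT SERVERPROPERTY('ErrorLogFileName') AS error_log_path;

From there it's an ordinary text file (ERRORLOG, ERRORLOG.1, ERRORLOG.2, ... in the instance's Log directory), so anything that can read a text file can process it; there is just no supported way to SELECT directly against it.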