DB2 Scheduled Trigger

I'm new to triggers and I want to ask the proper procedure to create a trigger (or any better method) to duplicate the contents of the T4 table to the T5 table at a specified datetime.
For example, on the 1st day of every month at 23:00, I want to duplicate the contents of the T4 table to the T5 table.
Can anyone please advise on the best method?
Thank you.
CREATE TRIGGER TRIG1
AFTER INSERT ON T4
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN ATOMIC
INSERT INTO T5 VALUES (NEW.B, NEW.A);
END

It can be done with the Administrative Task Scheduler feature instead of cron. Here is a sample script:
#!/bin/sh
db2set DB2_ATS_ENABLE=YES
db2stop
db2start
db2 -v "drop db db1"
db2 -v "create db db1"
db2 -v "connect to db1"
db2 -v "CREATE TABLESPACE SYSTOOLSPACE IN IBMCATGROUP MANAGED BY AUTOMATIC STORAGE EXTENTSIZE 4"
db2 -v "create table s1.t4 (c1 int)"
db2 -v "create table s1.t5 (c1 int)"
db2 -v "insert into s1.t4 values (1)"
db2 -v "create procedure s1.copy_t4_t5() language SQL begin insert into s1.t5 select * from s1.t4; end"
db2 -v "CALL SYSPROC.ADMIN_TASK_ADD ('ATS1', CURRENT_TIMESTAMP, NULL, NULL, '0,10,20,30,40,50 * * * *', 'S1', 'COPY_T4_T5',NULL , NULL, NULL )"
date
It will create a task called 'ATS1' that calls the procedure s1.copy_t4_t5 every 10 minutes, such as 01:00, 01:10, 01:20. You may need to run the following after executing the script:
db2 -v "connect to db1"
Then, after some time, run the following to see if the t5 table has rows as expected:
db2 -v "select * from s1.t5"
For your case, the 5th parameter would be replaced with '0 23 1 * *'.
The format is 'minute hour day_of_month month weekday', so
the task will be called on the 1st day of every month at 23:00.
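For example, reusing the task name and procedure from the sample script above (a sketch; adjust the schema and procedure names to your own):
db2 -v "CALL SYSPROC.ADMIN_TASK_ADD ('ATS1', CURRENT_TIMESTAMP, NULL, NULL, '0 23 1 * *', 'S1', 'COPY_T4_T5', NULL, NULL, NULL)"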
For more information on how to modify an existing task, delete a task, or review task status, see:
Administrative Task Scheduler routines and views
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.sql.rtn.doc/doc/c0061223.html
Also, here is a good article about it:
[DB2 LUW] Sample administrative task scheduler ADMIN_TASK_ADD and ADMIN_TASK_REMOVE usage
https://www.ibm.com/support/pages/node/1140388?lang=en
Hope this helps.

Related

Script does not create tables but pgAdmin does

I am running a script that creates a database and some tables with foreign keys and inserts some data, but somehow creating the tables is not working, although it doesn't throw any error: I go to pgAdmin, look for the tables created, and there are none...
When I copy the text of my script and execute it in the Query Tool, it works fine and creates the tables.
Can you please explain to me what I am doing wrong?
Script:
DROP DATABASE IF EXISTS test01 WITH (FORCE); --drops even if in use
CREATE DATABASE test01
WITH
OWNER = postgres
ENCODING = 'UTF8'
LC_COLLATE = 'German_Germany.1252'
LC_CTYPE = 'German_Germany.1252'
TABLESPACE = pg_default
CONNECTION LIMIT = -1
IS_TEMPLATE = False
;
CREATE TABLE customers
(
customer_id INT GENERATED ALWAYS AS IDENTITY,
customer_name VARCHAR(255) NOT NULL,
PRIMARY KEY(customer_id)
);
CREATE TABLE contacts
(
contact_id INT GENERATED ALWAYS AS IDENTITY,
customer_id INT,
contact_name VARCHAR(255) NOT NULL,
phone VARCHAR(15),
email VARCHAR(100),
PRIMARY KEY(contact_id),
CONSTRAINT fk_customer
FOREIGN KEY(customer_id)
REFERENCES customers(customer_id)
ON DELETE CASCADE
);
INSERT INTO customers(customer_name)
VALUES('BlueBird Inc'),
('Dolphin LLC');
INSERT INTO contacts(customer_id, contact_name, phone, email)
VALUES(1,'John Doe','(408)-111-1234','john.doe@bluebird.dev'),
(1,'Jane Doe','(408)-111-1235','jane.doe@bluebird.dev'),
(2,'David Wright','(408)-222-1234','david.wright@dolphin.dev');
I am calling the script from a Windows console like this:
"C:\Program Files\PostgreSQL\15\bin\psql.exe" -U postgres -f "C:\Users\my user name\Desktop\db_create.sql" postgres
My script is edited in Notepad++ and saved with Encoding set to UTF-8 without BOM, as per a suggestion found here
I see you are using the -U postgres command line parameter, and also using a database name as the last parameter (postgres).
So all your SQL commands were executed while you were connected to the postgres database. The CREATE DATABASE command did create the test01 database, but CREATE TABLE and INSERT INTO were executed not against the test01 database but against the postgres database, so all your tables are in the postgres database, not in test01.
You need to split your SQL script into 2 scripts (files): the first for CREATE DATABASE, the second for the rest.
You need to execute the first script as before, like
psql.exe -U postgres -f "db_create_1.sql" postgres
And for the second one, choose the database created in the first step, like
psql.exe -U postgres -f "db_create_2.sql" test01
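Alternatively (my own suggestion, not part of the original answer), you can keep a single file and switch databases mid-script with the psql meta-command \connect, which psql executes client-side once the new database exists:
DROP DATABASE IF EXISTS test01 WITH (FORCE); --drops even if in use
CREATE DATABASE test01;
\connect test01
-- the CREATE TABLE and INSERT statements from the original script go here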

Automate Native Range Partitioning in PostgreSQL 10 for Zabbix 3.4

I would like to automate the process of partitioning a Zabbix 3.4 Database using PostgreSQL's Native Range Partitioning.
Would it be wiser to write a SQL function to perform the following below or to use a shell/python script?
Make sure at least one partition is created ahead of what's needed.
Delete any partition older than x weeks/months; for history 7 days and for trends 1 year
Below is the solution I came up with for transitioning from a populated PSQL 9.4 database with no partitioning to PSQL 10 native range partitioning.
A. Create a Zabbix Empty PSQL 10 Database.
Ensure you first create an empty Zabbix PSQL 10 DB.
# su postgres
postgres@<>:~$ createuser -P -s -e zabbix
postgres@<>:~$ psql
postgres=# create database zabbix;
postgres=# grant all privileges on database zabbix to zabbix;
B. Create tables & Native Range Partitions on clock column
Create the tables in the Zabbix DB and implement native range partitioning on the clock column. Below is an example of a manual SQL script that can be run for the history table. Perform this for all the history tables you want to partition by range.
zabbix=# CREATE TABLE public.history
(
itemid bigint NOT NULL,
clock integer NOT NULL DEFAULT 0,
value numeric(20,0) NOT NULL DEFAULT (0)::numeric,
ns integer NOT NULL DEFAULT 0
) PARTITION BY RANGE (clock);
zabbix=# CREATE TABLE public.history_old PARTITION OF public.history
FOR VALUES FROM (MINVALUE) TO (1522540800);
zabbix=# CREATE TABLE public.history_y2018m04 PARTITION OF public.history
FOR VALUES FROM (1522540800) TO (1525132800);
zabbix=# CREATE TABLE public.history_y2018m05 PARTITION OF public.history
FOR VALUES FROM (1525132800) TO (1527811200);
zabbix=# CREATE INDEX ON public.history_old USING btree (itemid, clock);
zabbix=# CREATE INDEX ON public.history_y2018m04 USING btree (itemid, clock);
zabbix=# CREATE INDEX ON public.history_y2018m05 USING btree (itemid, clock);
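To verify the partitions were attached as expected, you can describe the parent table from psql (a generic psql check, not part of the original write-up); the attached partitions are listed at the bottom of the output:
zabbix=# \d+ public.history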
C. Automate it!
I used a shell script because it is one of the simplest ways to deal with creating new partitions in PSQL 10. Make sure you're always at least one partition ahead of what's needed.
Let's call the script auto_history_tables_monthly.sh.
On a Debian 8 flavor OS running PSQL 10, place the script in a suitable directory (I used /usr/local/bin), give it the correct ownership (chown postgres:postgres /usr/local/bin/auto_history_tables_monthly.sh), and make it executable (chmod u+x /usr/local/bin/auto_history_tables_monthly.sh as the postgres user).
Create a cron job (crontab -e) for postgres user with following:
0 0 1 * * /usr/local/bin/auto_history_tables_monthly.sh | psql -d zabbix
This will run the shell script on the first of every month.
Below is the script. It uses the date command to leverage the UTC epoch value. It creates a table a month in advance and drops the partition from 2 months back. This appears to work well in conjunction with the 31-day history retention customized to my needs. Ensure the PSQL 10 DB is on UTC time for this use case.
#!/bin/bash
month_diff () {
year=$1
month=$2
delta_month=$3
x=$((12*$year+10#$month-1))
x=$((x+$delta_month))
ry=$((x/12))
rm=$(((x % 12)+1))
printf "%02d %02d\n" $ry $rm
}
month_start () {
year=$1
month=$2
date '+%s' -d "$year-$month-01 00:00:00" -u
}
month_end () {
year=$1
month=$2
month_start $(month_diff $year $month 1)
}
# Year using date
current_year=$(date +%Y)
current_month=$(date +%m)
# Math
next_date=$(month_diff $current_year $current_month 1)
next_year=$(echo $next_date|sed 's/ .*//')
next_month=$(echo $next_date|sed 's/.* //')
start=$(month_start $next_date)
end=$(month_end $next_date)
#next_month_table="public.history_y${next_year}m${next_month}"
# Create next month table for history, history_uint, history_str, history_log, history_text
sql="
CREATE TABLE IF NOT EXISTS public.history_y${next_year}m${next_month} PARTITION OF public.history
FOR VALUES FROM ($start) TO ($end);
\nCREATE TABLE IF NOT EXISTS public.history_uint_y${next_year}m${next_month} PARTITION OF public.history_uint
FOR VALUES FROM ($start) TO ($end);
\nCREATE TABLE IF NOT EXISTS public.history_str_y${next_year}m${next_month} PARTITION OF public.history_str
FOR VALUES FROM ($start) TO ($end);
\nCREATE TABLE IF NOT EXISTS public.history_log_y${next_year}m${next_month} PARTITION OF public.history_log
FOR VALUES FROM ($start) TO ($end);
\nCREATE TABLE IF NOT EXISTS public.history_text_y${next_year}m${next_month} PARTITION OF public.history_text
FOR VALUES FROM ($start) TO ($end);
\nCREATE INDEX on public.history_y${next_year}m${next_month} USING btree (itemid, clock);
\nCREATE INDEX on public.history_uint_y${next_year}m${next_month} USING btree (itemid, clock);
\nCREATE INDEX on public.history_str_y${next_year}m${next_month} USING btree (itemid, clock);
\nCREATE INDEX on public.history_log_y${next_year}m${next_month} USING btree (itemid, clock);
\nCREATE INDEX on public.history_text_y${next_year}m${next_month} USING btree (itemid, clock);
"
echo -e "$sql"
# Math
prev_date=$(month_diff $current_year $current_month -2)
prev_year=$(echo $prev_date|sed 's/ .*//')
prev_month=$(echo $prev_date|sed 's/.* //')
# Drop last month table for history, history_uint, history_str, history_log, history_text
sql="
DROP TABLE public.history_y${prev_year}m${prev_month};
\nDROP TABLE public.history_uint_y${prev_year}m${prev_month};
\nDROP TABLE public.history_str_y${prev_year}m${prev_month};
\nDROP TABLE public.history_log_y${prev_year}m${prev_month};
\nDROP TABLE public.history_text_y${prev_year}m${prev_month};
"
echo -e "$sql"
D. Then dump the data from the old database into the new one. I used pg_dump/pg_restore.
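For example (a rough sketch; the database names and file path are placeholders, and the exact flags depend on your setup):
# dump only the data from the old, unpartitioned database in custom format
pg_dump -Fc --data-only -d zabbix_old -f /tmp/zabbix_old.dump
# restore into the new partitioned DB; COPY into the parent tables
# routes the rows to the matching partitions in PSQL 10
pg_restore --data-only -d zabbix /tmp/zabbix_old.dump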
I'm sure there are more complex solutions out there, but I found this to be the simplest for the needs of auto-partitioning the Zabbix database using the PostgreSQL 10 native range partitioning functionality.
Please let me know if you need more details.
I have written detailed notes on using PostgreSQL version 11 together with pg_partman as the mechanism for native table partitioning with Zabbix (version 3.4 as of this writing).
zabbix-postgres-partitioning

Invalid column name in DB2

I'm having trouble with the column name of one of my tables.
My version of DB2 is DB2/LINUXX8664 11.1.0. I'm running it on a CentOS Linux Release 7.2.1511. My version of IBM Data Studio is 4.1.2.
The column is named "NRO_AÑO" in the table "PERIODO" in the schema "COMPRAS".
When I execute the simple query
SELECT NRO_AÑO
FROM COMPRAS.PERIODO
it yields the following error:
"NRO_AÑO" is not valid in the context where it is used.. SQLCODE=-206, SQLSTATE=42703, DRIVER=3.68.61
If I execute the query
SELECT *
FROM COMPRAS.PERIODO
it yields data with the expected columns, including NRO_AÑO (screenshot not reproduced here).
I'm guessing it has something to do with the charsets involved, but I'm not sure where to look at.
Thanks in advance.
It worked for me:
[db2inst1@server ~]$ db2 "create table compras.periodo (nro_año int)"
DB20000I The SQL command completed successfully.
[db2inst1@server ~]$ db2 "insert into compras.periodo values (1)"
DB20000I The SQL command completed successfully.
[db2inst1@server ~]$ db2 "insert into compras.periodo (nro_año) values (2)"
DB20000I The SQL command completed successfully.
[db2inst1@server ~]$ db2 "select nro_año from compras.periodo"
NRO_AÑO
-----------
1
2
2 record(s) selected.
Probably you are having a console encoding problem (PuTTY), and you should review how the column name is stored in the database:
db2 "select colname from syscat.columns where tabname = 'PERIODO'"
COLNAME
--------------------------------------------------------------------------------------------------------------------------------
NRO_AÑO
1 record(s) selected.
If the table was created from PuTTY (an SSH client) and then queried from Data Studio, characters above 128 will have different representations. Java (Data Studio) uses UTF-8, but the script used to create the table probably used another encoding (PuTTY, Windows, Notepad, etc.), and this is causing problems in the database.
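One way to confirm this (my own diagnostic suggestion, not from the original answer) is to look at the actual bytes stored for the column name; in a UTF-8 database the Ñ should be stored as the two bytes C3 91:
db2 "select colname, hex(colname) from syscat.columns where tabname = 'PERIODO' and tabschema = 'COMPRAS'"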
It worked for me when I ran the script from the DB2 command line processor on DB2 9.7.
db2 => CREATE TABLE TEMP_TABLE(NRO_AÑO INTEGER)
DB20000I The SQL command completed successfully.
db2 => INSERT INTO TEMP_TABLE(NRO_AÑO) VALUES(1)
DB20000I The SQL command completed successfully.
db2 => SELECT * FROM TEMP_TABLE
NRO_AÑO
-----------
1
1 record(s) selected.
db2 => select colname from syscat.columns where tabname = 'TEMP_TABLE'
COLNAME
------------
NRO_AÑO
1 record(s) selected.
Your issue may also be that column names need to be enclosed in quotes, as found in IBM Data Studio ver. 4, for example:
INSERT INTO DB2ADMIN.FB_WEB_POSTS ("UserName","FaceID", "FaceURL","FaceStory","FaceMessage","FaceDate","FaceStamp")
VALUES ('SocialMate','233555900032117_912837012103999', 'http://localhost/doculogs.nsf/index.html', 'Some Message or Story','Random Files Project for Lotus Notes, Google, Oracle App samples', '2017-09-09', '2017-09-23');

How to log an error into a Table from SQLPLus

I am very new to Oracle, so please bear with me if this is covered elsewhere.
I have an MS SQL box running jobs calling batch files that run scripts in SQL*Plus to ETL to an Oracle 10g database.
I have an intermittent issue with a script that is causing the ETL to fail, which at the minute, without error logging, is something of an unknown. The current solution highlights the load failure based on row counts from before and after the script has finished.
I'd like to be able to insert any errors encountered while running the offending script into an error log table on the same database receiving the data loads.
There's nothing too technical about the script; at a high level it performs the following steps, all via SQL code and no procedural calls.
Updates a table with Date and current row counts
Pulls data from a remote source into a staging table
Merges the Staging table into an intermediate staging table
Performs some transformational actions
Merges the intermediate staging table into the final Fact table
Updates a table with new row counts
Please advise whether it is possible to pass error messages, codes, line numbers, etc. via SQL*Plus into a database table, and if so, the easiest method to achieve this.
The first few lines of the script are shown below to give a flavour:
/*set echo off*/
set heading off
set feedback off
set sqlblanklines on
/* ID 1 BATCH START TIME */
INSERT INTO CITSDMI.CITSD_TIMETABLE_ORDERLINE TGT
(TGT.BATCH_START_TIME)
(SELECT SYSDATE FROM DUAL);
COMMIT;
insert into CITSDMI.CITSD_TIMETABLE_ALL_LOADS
(LOAD_NAME, LOAD_CRITICALITY,LOAD_TYPE,BATCH_START_TIME)
values
('ORDERLINE','HIGH','SMART',(SELECT SYSDATE FROM DUAL));
commit;
/* Clear the Staging Tables */
TRUNCATE TABLE STAGE_SMART_ORDERLINE;
Commit;
TRUNCATE TABLE TRANSF_FACT_ORDERLINE;
Commit;
and so it goes on with the rest of the steps.
Any assistance will be greatly appreciated.
Whilst I don't fully understand your requirement, here are a couple of pointers.
The WHENEVER command will help you control what sqlplus should do when an error occurs, e.g.
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
WHENEVER OSERROR EXIT FAILURE ROLLBACK
INSERT ...
INSERT ...
This will cause sqlplus to exit with error status 1 if any of the following statements fails.
You can also have WHENEVER SQLERROR CONTINUE ...
Since WHENEVER ... EXIT FAILURE/SUCCESS controls the exit status, the calling script/program will know whether it worked or failed.
Logging:
Use SPOOL to spool the output to a file.
Logging to a table:
The best way is to wrap your statements in PL/SQL anonymous blocks and use exception handlers to log errors.
So, putting the above together, using a UNIX shell as the invoker:
sqlplus -S /nolog <<EOF
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
CONNECT ${USRPWD}
SPOOL ${SPLFILE}
BEGIN
INSERT INTO the_table ( c1, c2 ) VALUES ( '${V1}', '${V2}' );
EXCEPTION
WHEN OTHERS THEN
INSERT INTO the_error_tab ( name, errno, errm ) VALUES ( 'the_script', SQLCODE, SQLERRM );
COMMIT;
END;
/
SPOOL OFF
QUIT
EOF
if [ ${?} -eq 0 ]
then
echo "Success!"
else
echo "Oh dear!! See ${SPLFILE}"
fi
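One refinement to the example above (a sketch of mine, using the same hypothetical table and column names): the COMMIT inside the exception handler also commits whatever the failed block had already changed. If that is not what you want, declare the logging procedure as an autonomous transaction so the error row commits independently of the main transaction:
CREATE OR REPLACE PROCEDURE log_error ( p_name IN VARCHAR2, p_errno IN NUMBER, p_errm IN VARCHAR2 )
AS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- runs in its own transaction
BEGIN
  INSERT INTO the_error_tab ( name, errno, errm ) VALUES ( p_name, p_errno, p_errm );
  COMMIT;  -- commits only the logging insert, not the caller's work
END log_error;
/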

DB2 CLI result output

When running command-line queries in MySQL you can optionally use '\G' as a statement terminator; instead of the result set columns being listed horizontally across the screen, it lists each column vertically with the corresponding data to the right. Is there a way to do the same or a similar thing with the DB2 command line utility?
Example regular MySQL result
mysql> select * from tagmap limit 2;
+----+---------+--------+
| id | blog_id | tag_id |
+----+---------+--------+
| 16 |       8 |      1 |
| 17 |       8 |      4 |
+----+---------+--------+
Example Alternate MySQL result:
mysql> select * from tagmap limit 2\G
*************************** 1. row ***************************
     id: 16
blog_id: 8
 tag_id: 1
*************************** 2. row ***************************
     id: 17
blog_id: 8
 tag_id: 4
2 rows in set (0.00 sec)
Obviously, this is much more useful when the columns are large strings, or when there are many columns in a result set, but this demonstrates the formatting better than I can probably explain it.
I don't think such an option is available with the DB2 command line client. See http://www.dbforums.com/showthread.php?t=708079 for some suggestions. For a more general set of information about the DB2 command line client you might check out the IBM DeveloperWorks article DB2's Command Line Processor and Scripting.
A little bit late, but I found this post when I searched for an option to retrieve only the selected data.
db2 -x <query> gives only the result back. More options can be found here: https://www.ibm.com/docs/en/db2/11.1?topic=clp-options
Example:
[db2inst1@a21c-db2 db2]$ db2 -n select postschemaver from files.product
POSTSCHEMAVER
--------------------------------
147.3
1 record(s) selected.
[db2inst1@a21c-db2 db2]$ db2 -x select postschemaver from files.product
147.3
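If you want something closer to MySQL's \G, one workaround (my own sketch, not a CLP feature) is to post-process the -x output with awk so that each column prints on its own line; note it splits on whitespace, so it breaks for values that contain spaces:
db2 -x "select id, blog_id, tag_id from tagmap" | awk '{ for (i = 1; i <= NF; i++) print "col" i ": " $i; print "----" }'
(tagmap here is just the table from the MySQL example in the question.)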
The DB2 command line utility always displays data in tabular format, i.e. rows horizontally and columns vertically. It does not support any other format like MySQL's \G statement terminator does. But yes, you can store column-organized data in DB2 tables when DB2_WORKLOAD=ANALYTICS is set.
db2 => connect to coldb
Database Connection Information
Database server = DB2/LINUXX8664 10.5.5
SQL authorization ID = BIMALJHA
Local database alias = COLDB
db2 => create table testtable (c1 int, c2 varchar(10)) organize by column
DB20000I The SQL command completed successfully.
db2 => insert into testtable values (2, 'bimal'),(3, 'kumar')
DB20000I The SQL command completed successfully.
db2 => select * from testtable
C1          C2
----------- ----------
          2 bimal
          3 kumar
2 record(s) selected.
db2 => terminate
DB20000I The TERMINATE command completed successfully.