Ant SQL task - print executed query and result with PostgreSQL

I'm trying to print the executed query and also the result of the query on PostgreSQL. I have a simple Ant sql task, using the standard PostgreSQL JDBC driver.
<?xml version="1.0"?>
<project name="Ant SQL task" default="sql">
    <target name="sql">
        <echo>Ant SQL task</echo>
        <sql
            driver="org.postgresql.Driver"
            url="jdbc:postgresql://hostname:port/database"
            userid="user"
            password="password"
            src="script.sql"
            print="yes"
        >
        </sql>
    </target>
</project>
The script.sql contains just
select 1 as test;
I'm getting this output:
info:
[sql] Executing resource: script.sql
[sql] test
[sql] 1
[sql]
[sql] 0 rows affected
[sql] 1 of 1 SQL statements executed successfully
I would like to print the select query in the output like this:
info:
[sql] Executing resource: script.sql
[sql] select 1 as test;
[sql] test
[sql] 1
[sql]
[sql] 0 rows affected
[sql] 1 of 1 SQL statements executed successfully
Is there a way to include the select query in the output?

1.) Set loglevel to DEBUG (2) for the JDBC driver. Except now you don't have too little info in your log file, but too much.
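As a sketch, that would be a URL parameter (the loglevel parameter is what older pgjdbc drivers accept; newer 42.x drivers use loggerLevel=DEBUG instead):

url="jdbc:postgresql://hostname:port/database?loglevel=2"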
2.) Append your queries a little:
SELECT 1 as foo, (SELECT query FROM pg_stat_activity);

 foo |                          query
-----+---------------------------------------------------------
   1 | SELECT 1 as foo, (SELECT query FROM pg_stat_activity);
(1 row)
Note that this will give you all running queries, not just the current one. You can narrow it down by one of the many columns in pg_stat_activity, but that's probably unnecessary unless this is a busy system. Set ApplicationName in your connection parameters if you decide to do that.
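One way to narrow it to the current connection, as a sketch: filter on the current backend's process ID, since pg_backend_pid() identifies the session running the query itself:

SELECT 1 as foo,
       (SELECT query FROM pg_stat_activity WHERE pid = pg_backend_pid());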
3.) Add lines to script.sql that manually repeat the next query:
SELECT 'SELECT 1;';
SELECT 1;
SELECT 'SELECT 2;';
SELECT 2;
4.) Ditch the ant sql module and use exec to run psql -e -f script.sql instead:
SELECT 1;
 ?column?
----------
        1
(1 row)
You can omit the row count with -P footer=OFF.
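A minimal sketch of such a target (the target name is arbitrary; hostname, port, database and user are the same placeholders as in the original task, and PGPASSWORD is one way of passing the password non-interactively):

<target name="sql-psql">
    <exec executable="psql" failonerror="true">
        <env key="PGPASSWORD" value="password"/>
        <arg line="-e -h hostname -p port -d database -U user -f script.sql"/>
    </exec>
</target>

With -e, psql echoes each statement before its result, which gives exactly the output layout asked for above.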

Related

COPY FROM STDIN does not work in liquibase

I'm trying to upload a lot of data from a .sql file using the COPY command for PostgreSQL.
I have the data in a file.sql in the following format:
COPY my_table(id, name, status) FROM stdin;
1 peter active
1 steve active
1 maria active
\.
And my changeset like this:
<changeSet id="sqlFile-example" author="me">
    <sqlFile encoding="UTF-8"
             path="file.sql"
             relativeToChangelogFile="true"
             endDelimiter=";"
             splitStatements="false"
    />
</changeSet>
And get this error:
[ERROR] Failed to execute goal
org.liquibase:liquibase-maven-plugin:3.6.3:update (default-cli) on
project lincoln-soft: Error setting up or running Liquibase: Migration
failed for change set
src/main/resources/db/liquibase/db-changelog.xml::sqlFile-example::me
[ERROR] Reason: liquibase.exception.DatabaseException: ERROR: unexpected message type 0x50 during COPY from stdin
[ERROR] Where:
COPY my_table, line 1 [Failed SQL: COPY my_table(id, name, status)
FROM stdin;
[ERROR] 1 peter active
[ERROR] 1 steve active
[ERROR] 1 maria active
[ERROR] \.]
Is there a way to upload this data with Liquibase?
Finally got a solution. As @a_horse_with_no_name and @Laurenz Albe
mentioned, you can't use COPY FROM STDIN directly over JDBC, so I used pg_dump to generate INSERT statements like this:
pg_dump --table=public.my_table --data-only --column-inserts my_database > /tmp/my_table_data.sql
It gives me a file my_table_data.sql with INSERT statements like this:
INSERT INTO public.my_table (id, name, status) VALUES (1, 'peter', 'active');
INSERT INTO public.my_table (id, name, status) VALUES (1, 'steve', 'active');
INSERT INTO public.my_table (id, name, status) VALUES (1, 'maria', 'active');
And then I use this Liquibase changeset to upload the SQL file:
<changeSet id="sqlFile-example" author="me">
    <sqlFile encoding="UTF-8"
             path="my_table_data.sql"
             relativeToChangelogFile="true"
             splitStatements="true"
             stripComments="true"
    />
</changeSet>
It works for me.
As Laurenz already mentioned: you can't use COPY FROM STDIN directly in JDBC (you can use the CopyManager API to implement that manually, but Liquibase doesn't support that, and I don't know of any plugin that would do it either).
I would suggest you use Liquibase's built-in ability to load CSV (text) files. Put your input data in a CSV file, e.g. my_table_data.txt, with a header line for the columns:
id,name,status
1,peter,active
1,steve,active
1,maria,active
Then use <loadData> instead of running a SQL script:
<changeSet id="sqlFile-example" author="me">
    <loadData tableName="my_table"
              file="my_table_data.txt"
              separator=","
              encoding="UTF-8"/>
</changeSet>
Mixing the COPY statement and the data in the same file only works in psql scripts.
Moreover, COPY FROM STDIN is not supported by the JDBC driver at all.
You should use INSERT statements in your script.
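For the sample data above, that means a script along these lines:

INSERT INTO my_table (id, name, status) VALUES (1, 'peter', 'active');
INSERT INTO my_table (id, name, status) VALUES (1, 'steve', 'active');
INSERT INTO my_table (id, name, status) VALUES (1, 'maria', 'active');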

Error while inserting a row in PostgreSQL using a shell script

Below is my shell script; I am trying to insert a row into PostgreSQL using a shell script,
but I am getting the error below.
Script:
i='dcm_account494401_click_2017050511_20170505_093843_556195422.csv.gz'
load_date='2017-05-12'
load_status='Fail'
message="INFO: Load into table 'stg_ft_raw_activity' completed, 362554 record(s) loaded successfully.
INFO: Load into table 'stg_ft_raw_activity' completed, 1 record(s) were loaded with replacements made for ACCEPTINVCHARS. Check 'stl_replacements' system table for details.
"
psql "host=$HOST port=$DBPORT dbname=$DBNAME user=$DBUSER password=$DBPASS" -F --no-align <<EOF
truncate table stg.notification_table;
\set fname $i
\set load_date $load_date
\set load_status $load_status
\set message $message
insert into stg.notification_table values (:'fname', :'load_date', :'load_status',:"message");
EOF
error:
Expanded display is used automatically.
TRUNCATE TABLE and COMMIT TRANSACTION
ERROR: syntax error at or near "INFO"
LINE 1: INFO: Load into table 'stg_ft_raw_activity' completed, 1 re...
^
The message column is a string value and also contains special characters. Is that the reason?
Please help me resolve this.
There is no need to use these variables with \set.
Here is an example:
message="INFO: Load into table 'stg_ft_raw_activity' completed, 362554 record(s) loaded successfully.
INFO: Load into table 'stg_ft_raw_activity' completed, 1 record(s) were loaded with replacements made for ACCEPTINVCHARS. Check 'stl_replacements' system table for details.
"
psql <<EOF
INSERT INTO message_table VALUES (\$\$$message\$\$);
EOF
This makes use of “dollar quoting” for string literals; the $ signs that are not shell variable references are escaped with backslashes.
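An alternative sketch that avoids the escaping altogether: pass the value in as a psql variable on the command line and interpolate it with :'message', which quotes the contents safely no matter what special characters they contain:

psql -v message="$message" <<EOF
INSERT INTO message_table VALUES (:'message');
EOF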

Invalid column name in DB2

I'm having trouble with the column name of one of my tables.
My version of DB2 is DB2/LINUXX8664 11.1.0. I'm running it on a CentOS Linux Release 7.2.1511. My version of IBM Data Studio is 4.1.2.
The column is named "NRO_AÑO" in the table "PERIODO" in the schema "COMPRAS".
When I execute the simple query
SELECT NRO_AÑO
FROM COMPRAS.PERIODO
it yields the following error:
"NRO_AÑO" is not valid in the context where it is used.. SQLCODE=-206, SQLSTATE=42703, DRIVER=3.68.61
If I execute the query
SELECT *
FROM COMPRAS.PERIODO
it yields data with the expected columns.
I'm guessing it has something to do with the charsets involved, but I'm not sure where to look.
Thanks in advance.
It worked for me:
[db2inst1@server ~]$ db2 "create table compras.periodo (nro_año int)"
DB20000I  The SQL command completed successfully.
[db2inst1@server ~]$ db2 "insert into compras.periodo values (1)"
DB20000I  The SQL command completed successfully.
[db2inst1@server ~]$ db2 "insert into compras.periodo (nro_año) values (2)"
DB20000I  The SQL command completed successfully.
[db2inst1@server ~]$ db2 "select nro_año from compras.periodo"

NRO_AÑO
-----------
          1
          2

  2 record(s) selected.
Probably you have a console encoding problem (PuTTY), and you should review how the column name is stored in the database:
db2 "select colname from syscat.columns where tabname = 'PERIODO'"
COLNAME
--------------------------------------------------------------------------------------------------------------------------------
NRO_AÑO
1 record(s) selected.
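To see the exact bytes stored for the column name (useful for spotting an encoding mismatch), you can additionally inspect its hex representation; a sketch:

db2 "select colname, hex(colname) from syscat.columns where tabname = 'PERIODO'"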
If you create the table from PuTTY (an SSH client) and then select from Data Studio, characters above 128 can end up with different representations. Java (Data Studio) uses UTF-8, but the script used to create the table probably used another encoding, and this causes problems in the database (PuTTY, Windows, Notepad, etc.).
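You can check which code page the database itself uses (a sketch; mydb is a placeholder for your database name):

db2 get db cfg for mydb | grep -i "code"

On a UTF-8 database this should report code page 1208 / code set UTF-8.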
It worked for me when I ran the script from the DB2 command line processor on DB2 9.7.
db2 => CREATE TABLE TEMP_TABLE(NRO_AÑO INTEGER)
DB20000I  The SQL command completed successfully.
db2 => INSERT INTO TEMP_TABLE(NRO_AÑO) VALUES(1)
DB20000I  The SQL command completed successfully.
db2 => SELECT * FROM TEMP_TABLE

NRO_AÑO
-----------
          1

  1 record(s) selected.

db2 => select colname from syscat.columns where tabname = 'TEMP_TABLE'

COLNAME
------------
NRO_AÑO

  1 record(s) selected.
Your issue may also be that column names need to be enclosed in quotes, as found in IBM Data Studio Ver 4. Example:
INSERT INTO DB2ADMIN.FB_WEB_POSTS ("UserName", "FaceID", "FaceURL", "FaceStory", "FaceMessage", "FaceDate", "FaceStamp")
VALUES ('SocialMate', '233555900032117_912837012103999', 'http://localhost/doculogs.nsf/index.html', 'Some Message or Story', 'Random Files Project for Lotus Notes, Google, Oracle App samples', '2017-09-09', '2017-09-23');
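Applied to the original query, that means using a delimited (quoted) identifier; note that quoted identifiers are case-sensitive in DB2:

SELECT "NRO_AÑO"
FROM COMPRAS.PERIODO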

How to log an error into a table from SQL*Plus

I am very new to Oracle, so please bear with me if this is covered elsewhere.
I have an MS SQL box running jobs that call batch files running scripts in SQL*Plus to ETL into an Oracle 10g database.
I have an intermittent issue with a script that is causing the ETL to fail, which at the minute, without error logging, is something of an unknown. The current solution highlights the load failure based on row counts before and after the script has finished.
I'd like to be able to insert any errors encountered while running the offending script into an error log table on the same database receiving the data loads.
There's nothing too technical about the script; at a high level it performs the following steps, all via SQL code with no procedural calls:
Updates a table with Date and current row counts
Pulls data from a remote source into a staging table
Merges the Staging table into an intermediate staging table
Performs some transformational actions
Merges the intermediate staging table into the final Fact table
Updates a table with new row counts
Please advise whether it is possible to pass error messages, codes, line numbers, etc. via SQL*Plus into a database table, and if so, the easiest method to achieve this.
A first few lines of the script are shown below to give a flavour
/*set echo off*/
set heading off
set feedback off
set sqlblanklines on
/* ID 1 BATCH START TIME */
INSERT INTO CITSDMI.CITSD_TIMETABLE_ORDERLINE TGT
(TGT.BATCH_START_TIME)
(SELECT SYSDATE FROM DUAL);
COMMIT;
insert into CITSDMI.CITSD_TIMETABLE_ALL_LOADS
(LOAD_NAME, LOAD_CRITICALITY,LOAD_TYPE,BATCH_START_TIME)
values
('ORDERLINE','HIGH','SMART',(SELECT SYSDATE FROM DUAL));
commit;
/* Clear the Staging Tables */
TRUNCATE TABLE STAGE_SMART_ORDERLINE;
Commit;
TRUNCATE TABLE TRANSF_FACT_ORDERLINE;
Commit;
and so it goes on with the rest of the steps.
Any assistance will be greatly appreciated.
Whilst not fully understanding your requirement, here are a couple of pointers.
The WHENEVER command will help you control what sqlplus should do when an error occurs, e.g.
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
WHENEVER OSERROR EXIT FAILURE ROLLBACK
INSERT ...
INSERT ...
This will cause sqlplus to exit with error status 1 if any of the following statements fail.
You can also have WHENEVER SQLERROR CONTINUE ...
Since the WHENEVER ... EXIT FAILURE/SUCCESS controls the exit status, the calling script/program will know whether it worked or failed.
Logging to a file: use SPOOL to spool the output to a file.
Logging to a table: the best way is to wrap your statements in PL/SQL anonymous blocks and use exception handlers to log errors.
So, putting the above together, using a UNIX shell as the invoker:
sqlplus -S /nolog <<EOF
WHENEVER SQLERROR EXIT FAILURE ROLLBACK
CONNECT ${USRPWD}
SPOOL ${SPLFILE}
BEGIN
INSERT INTO the_table ( c1, c2 ) VALUES ( '${V1}', '${V2}' );
EXCEPTION
WHEN OTHERS THEN
INSERT INTO the_error_tab ( name, errno, errm ) VALUES ( 'the_script', SQLCODE, SQLERRM );
COMMIT;
END;
/
SPOOL OFF
QUIT
EOF
if [ ${?} -eq 0 ]
then
echo "Success!"
else
echo "Oh dear!! See ${SPLFILE}"
fi
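One refinement worth considering, as a sketch (the_error_tab and its columns are carried over from the block above): move the logging INSERT into a procedure declared with PRAGMA AUTONOMOUS_TRANSACTION, so that committing the error log entry does not also commit whatever partial work the failed load left behind:

CREATE OR REPLACE PROCEDURE log_error (
    p_name  IN VARCHAR2,
    p_errno IN NUMBER,
    p_errm  IN VARCHAR2
) AS
    PRAGMA AUTONOMOUS_TRANSACTION;  -- commits independently of the caller's transaction
BEGIN
    INSERT INTO the_error_tab ( name, errno, errm )
    VALUES ( p_name, p_errno, p_errm );
    COMMIT;
END;
/

The exception handler in the anonymous block then shrinks to a call plus a re-raise, so the WHENEVER SQLERROR setting still sees the failure:

EXCEPTION
    WHEN OTHERS THEN
        log_error( 'the_script', SQLCODE, SQLERRM );
        RAISE;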

DB2 CLI result output

When running command-line queries in MySQL you can optionally use '\G' as a statement terminator, and instead of the result set columns being listed horizontally across the screen, it will list each column vertically, with the corresponding data to the right. Is there a way to do the same or a similar thing with the DB2 command line utility?
Example regular MySQL result
mysql> select * from tagmap limit 2;
+----+---------+--------+
| id | blog_id | tag_id |
+----+---------+--------+
| 16 |       8 |      1 |
| 17 |       8 |      4 |
+----+---------+--------+
Example Alternate MySQL result:
mysql> select * from tagmap limit 2\G
*************************** 1. row ***************************
     id: 16
blog_id: 8
 tag_id: 1
*************************** 2. row ***************************
     id: 17
blog_id: 8
 tag_id: 4
2 rows in set (0.00 sec)
Obviously, this is much more useful when the columns are large strings, or when there are many columns in a result set, but this demonstrates the formatting better than I can probably explain it.
I don't think such an option is available with the DB2 command line client. See http://www.dbforums.com/showthread.php?t=708079 for some suggestions. For a more general set of information about the DB2 command line client you might check out the IBM DeveloperWorks article DB2's Command Line Processor and Scripting.
A little bit late, but I found this post when I searched for an option to retrieve only the selected data.
So db2 -x <query> gives only the result back. More options can be found here: https://www.ibm.com/docs/en/db2/11.1?topic=clp-options
Example:
[db2inst1@a21c-db2 db2]$ db2 -n select postschemaver from files.product

POSTSCHEMAVER
--------------------------------
147.3

  1 record(s) selected.

[db2inst1@a21c-db2 db2]$ db2 -x select postschemaver from files.product
147.3
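This makes -x handy in shell scripts, for example to capture a single value into a variable (a sketch reusing the query above):

ver=$(db2 -x "select postschemaver from files.product")
echo "schema version: $ver"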
The DB2 command line utility always displays data in tabular format, i.e. rows horizontally and columns vertically. It does not support any other output format the way the \G statement terminator does for MySQL. But yes, you can store column-organized data in DB2 tables when DB2_WORKLOAD=ANALYTICS is set.
db2 => connect to coldb

   Database Connection Information

 Database server        = DB2/LINUXX8664 10.5.5
 SQL authorization ID   = BIMALJHA
 Local database alias   = COLDB

db2 => create table testtable (c1 int, c2 varchar(10)) organize by column
DB20000I  The SQL command completed successfully.
db2 => insert into testtable values (2, 'bimal'),(3, 'kumar')
DB20000I  The SQL command completed successfully.
db2 => select * from testtable

C1          C2
----------- ----------
          2 bimal
          3 kumar

  2 record(s) selected.

db2 => terminate
DB20000I  The TERMINATE command completed successfully.