WHENEVER SQLERROR in PostgreSQL ecpg causes the INTO assignment to fail

This is a piece of ecpg code. It uses FETCH to loop over the rows of a cursor. Because of WHENEVER SQLERROR DO sqlerr(), any error is routed to sqlerr(); if sqlerr() determines that the error is SQLSTATE 22002, the error is ignored and the loop continues.
int main(int argc, char *argv[]) {
    ....
    EXEC SQL DECLARE mt_cur CURSOR WITH HOLD FOR
        SELECT C1, C2, C3
        FROM MYTABLE
        ORDER BY C1;
    EXEC SQL WHENEVER SQLERROR CONTINUE;
    EXEC SQL CLOSE mt_cur;
    EXEC SQL WHENEVER SQLERROR DO sqlerr();
    EXEC SQL OPEN mt_cur;
    EXEC SQL WHENEVER NOT FOUND DO break;
    while (1) {
        EXEC SQL FETCH mt_cur INTO :v1, :v2, :v3;
        if (v3 > 5) {
            continue;
        }
        printf("%d%d\n", v1, v3);
    }
    EXEC SQL CLOSE mt_cur;
    ....
    return (0);
}

void sqlerr() {
    if (strncmp(sqlca.sqlstate, "22002", 5) == 0) {
        return;
    }
    EXEC SQL WHENEVER SQLERROR CONTINUE;
    EXEC SQL ROLLBACK WORK;
    exit(1);
}
If the data in the table is like this
C1  C2      C3
1   0       2
2   (NULL)  8
3   (NULL)  9
The correct result should be to print only 12, but it actually prints 12, 22, and 32.
After analyzing the program, my understanding is this: when the second row is fetched, column C2 is NULL, which raises an error and control enters sqlerr(). After sqlerr() returns, the INTO assignment is not performed, so v3 is never set for that row and the loop moves on; at that point v3 still holds the value 2 assigned on the first fetch.
Is my understanding correct? If so, is there a way in PostgreSQL to avoid this situation? I want the value of C3 to be assigned to v3 normally even when C2 is NULL.
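For reference, SQLSTATE 22002 is "null value, no indicator parameter": ecpg raises it when a fetched column is NULL and no indicator host variable was supplied for it. A minimal sketch of how an indicator variable could be attached to :v2 so the FETCH still completes (the indicator name v2_ind is mine, not from the program above):

EXEC SQL BEGIN DECLARE SECTION;
int v1, v2, v3;
short v2_ind;    /* set to -1 by ecpg when C2 is NULL */
EXEC SQL END DECLARE SECTION;

/* with an indicator attached to :v2, a NULL in C2 no longer raises
   SQLSTATE 22002, and v1 and v3 are assigned normally */
EXEC SQL FETCH mt_cur INTO :v1, :v2 :v2_ind, :v3;
if (v2_ind == -1) {
    /* C2 was NULL for this row; v2 is not meaningful */
}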

Related

Errors with declared array/table variable values in SAP HanaDB SQL

I am trying to add a declared variable to replace a hardcoded list of values in a "where in" clause.
Researching how Hana handles array variables, it seems like I can do this by declaring an array and then either using a select directly on it or by unnesting it first into a table, but I keep getting errors I can't resolve.
When I try it this way:
DO
BEGIN
DECLARE CODES_ARRAY NVARCHAR(100) ARRAY = ARRAY('01','02','03','04');
SELECT T0."ItemCode"
FROM OITM T0
INNER JOIN OITW T1 ON T0."ItemCode" = T1."ItemCode"
WHERE "WhsCode" IN (SELECT "code" FROM :CODES_ARRAY); -- line 9 where error occurs
END;
I get this error message sql syntax error: incorrect syntax near ")": line 9 col 54 (at pos 239)
I can't figure out what the syntax error resolution is.
So then I tried inserting a declared table variable like this:
DO
BEGIN
DECLARE CODES_ARRAY NVARCHAR(100) ARRAY = ARRAY('01','02','03','04');
DECLARE CODES_TABLE TABLE = UNNEST(:CODES_ARRAY) AS ("code"); -- line 5 where error occurs
SELECT T0."ItemCode"
FROM OITM T0
INNER JOIN OITW T1 ON T0."ItemCode" = T1."ItemCode"
WHERE "WhsCode" IN (SELECT "code" FROM CODES_TABLE); -- I know : is missing here but when adding, the same error from previous block shows up
END;
and I get this error message: identifier must be declared: 1: line 5 col 38 (at pos 123)
As far as I can tell the array variable is declared as it should be so I don't know how to resolve the error.
I've read the SAP Hana SQL Reference documentation (for array/table variables, the UNNEST function, etc.) over and over, and it seems like I've got everything set up correctly, but I can't figure out these errors. I would like to be able to use both of these approaches at different times if possible (the "array variable to table variable" approach and the "array variable only" approach).
I don't know exactly what is going on here, but one thing I notice is that each of the two error messages referenced in my post (see the first two code blocks) occurs either immediately before the use of the variable with the : (in the case of UNNEST) or immediately after the variable with the : (in the case of the SELECT ... FROM in the query).
Because of that, I wondered if the issue is "upstream" at the Hana ADO.NET query preparation and execution level. However, I ran a test and double-checked the query string just before it is executed: it appears unchanged, and the variables with : still look as they should. So at least up to the point of execution through a Hana ADO.NET HanaCommand the query looks correct, but executing it with a HanaDataReader or HanaDataAdapter returns the error messages referred to above. It may be a red herring to chase the problem from the Hana ADO.NET level, but I don't know what else to do.
Update
To further troubleshoot, I tried executing this code block below using hdbsql.exe -n XXX.XXX.XXX.XXX:30015 -u XXX -p XXX -m -I "c:\temp\test.sql" -c "#" and it works! So, the errors I see only show up when executing the same query through the Hana ADO.NET interface.
DO
BEGIN
DECLARE CODES_ARRAY NVARCHAR(10) ARRAY = ARRAY('01','02','03','04');
DECLARE CODES_TABLE TABLE ("code" NVARCHAR(10)) = UNNEST(:CODES_ARRAY) AS ("code");
SELECT T0."ItemCode"
FROM OITM T0
INNER JOIN OITW T1 ON T0."ItemCode" = T1."ItemCode"
WHERE "WhsCode" IN (SELECT "code" FROM :CODES_TABLE); -- line 10 where error occurs when using Hana ADO.NET
END;
#
The above fails when run through Hana ADO.NET with the error message sql syntax error: incorrect syntax near ")": line 10 col 54 (at pos 325), but works when executed through hdbsql.
Update
The C# code that executes the query is fairly straightforward, but for completeness of the troubleshooting effort I am including the interesting parts of our HanaHelper class. This code successfully executes hundreds of queries a day without errors or problems. This is the first time a variable of any type has been declared or used in a query through this code, and that is when the errors started showing up. As far as I can tell, the issue is tied to the use of the : when using the variable in the query.
public class HanaHelper
{
    public HanaConnection objConn = null;

    public HanaHelper(string ConnectionString)
    {
        try
        {
            objConn = new HanaConnection(ConnectionString);
        }
        catch (Exception e)
        {
            Console.WriteLine("Exception thrown by HanaConnection: {0}\n{1}", e.Message, e.InnerException);
        }
    }

    public DataSet GetData(string strSQL)
    {
        using (HanaCommand objCmd = new HanaCommand(strSQL, objConn))
        {
            using (HanaDataAdapter objDA = new HanaDataAdapter(objCmd))
            {
                DataSet objDS = new DataSet();
                try
                {
                    objDA.Fill(objDS);
                }
                catch (Exception)
                {
                    throw;
                }
                finally
                {
                    // do something interesting regardless of success or failure
                }
                objConn.Close();
                return objDS;
            }
        }
    }
}
Any clue here why the same query works through hdbsql but fails when executing through Hana ADO.NET?
Update
I figured out how to use HanaSQLTrace in the C# code so that I can inspect the prepared query text, and voilà, the source of the error messages becomes apparent: all occurrences of ":VARNAME" are replaced with "? " (a ? replaces the :, plus a space for each character in the variable name). I suppose it is trying to pre-substitute occurrences of : with a ? as if they were parameters to be substituted.
How can this behavior be disabled, or worked with, or worked around so that I can use variables in a query in Hana ADO.NET effectively?
Updated based on the OP feedback.
To refer to a variable (in order to access its value(s)) in SQLScript it's
necessary to put a colon : in front of the variable name.
The main issue, however, turns out to be the declaration of the CODES_TABLE table variable.
With HANA 2 SPS 4 the error message is
`SAP DBTech JDBC: [264]: invalid datatype: unknown type SYSTEM.TABLE: line 5 col 23`
This points to the declaration of the TABLE-typed variable CODES_TABLE, which lacks a definition of which columns the table should contain.
Adding the column definition fixes the issue.
With these changes, your code should work:
DO
BEGIN
DECLARE CODES_ARRAY NVARCHAR(100) ARRAY = ARRAY('01','02','03','04');
DECLARE CODES_TABLE TABLE ("code" NVARCHAR(100)) = UNNEST(:CODES_ARRAY) AS ("code");
-- ^ column definition ("code" NVARCHAR(100)) added here
SELECT
T0."ItemCode"
FROM
OITM T0
INNER JOIN OITW T1
ON T0."ItemCode" = T1."ItemCode"
WHERE
"WhsCode" IN (SELECT "code" FROM :CODES_TABLE);
-- ^ colon added in front of the table variable name
END;
An alternative way to declare and assign the table variable is to skip the DECLARE command and assign the UNNEST result directly:
DO
BEGIN
DECLARE CODES_ARRAY NVARCHAR(100) ARRAY = ARRAY('01','02','03','04');
CODES_TABLE = UNNEST(:CODES_ARRAY) AS ("code");
SELECT
T0."ItemCode"
FROM
OITM T0
INNER JOIN OITW T1
ON T0."ItemCode" = T1."ItemCode"
WHERE
"WhsCode" IN (SELECT "code" FROM :CODES_TABLE);
END;

TSQL OUTPUT clause in MERGE statement raises Msg 596 "Cannot continue the execution because the session is in the kill state"

I wrote a T-SQL MERGE query to merge staging data into a data warehouse (you can find it at the bottom).
If I uncomment the OUTPUT clause, I get the error mentioned in the title.
However, if I do not include it, everything works perfectly fine and the MERGE succeeds.
I know that there are some issues connected with the MERGE statement; however, most of them are connected with the type of merge.
I checked the following answer: https://dba.stackexchange.com/questions/140880/why-does-this-merge-statement-cause-the-session-to-be-killed; however, in my execution plan I cannot find exactly an index insert followed by an index merge.
Rather, what I see is the following execution plan
The code was developed on a database attached to a SQL Server 2012 (SP4) instance.
I would really appreciate a good explanation of this problem, ideally one referencing the steps in my execution plan.
Thank you.
declare @changes table (chgType varchar(50), Id varchar(18))
begin try
set xact_abort on
begin tran
;with TargetUserLogHsh as (select
hsh =hashbytes('md5',concat(coalesce([AccountName],'')
,coalesce([TaxNumber],'')))
,LastLoginCast = coalesce(CONVERT(datetime,LastLogin,103),getdate())
,* from
dw.table1)
,SourceUserLogHsh as (select
hsh =hashbytes('md5',concat(coalesce([AccountName],'')
,coalesce([TaxNumber],'')))
,LastLoginCast = coalesce(CONVERT(datetime,LastLogin,103),getdate())
,* from
sta.table1)
merge TargetUserLogHsh target
using SourceUserLogHsh source
on target.ContactId = source.ContactId and target.Lastlogincast >= source.LastLoginCast
when not matched then insert (
[AccountName]
,[TaxNumber]
,[LastLogin]
)
values (
source.[AccountName]
,source.[TaxNumber]
,source.[LastLogin]
)
when matched and target.lastlogincast = source.lastlogincast
and target.hsh != source.hsh then
update
set
[AccountName] = source.[AccountName]
,[TaxNumber] = source.[TaxNumber]
,[LastLogin] = source.[LastLogin]
output $action, inserted.contactid into @changes
;
commit tran
end try
begin catch
if @@TRANCOUNT > 0 rollback tran
select ERROR_MESSAGE()
end catch

PostgreSQL Cursor Re-open error in GnuCOBOL

I am trying to move from Oracle to PostgreSQL on GnuCOBOL. I have a piece of code which uses cursors and needs to open a cursor multiple times. However, when trying to open the cursor again I get the error ERROR: cursor "fetchtbl_c1" already exists
IDENTIFICATION DIVISION.
PROGRAM-ID. FETCHTBL.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 D-SOC-REC.
05 D-SOC-NO-1 PIC X(3).
05 FILLER PIC X.
05 D-SOC-NO-2 PIC X(3).
EXEC SQL BEGIN DECLARE SECTION END-EXEC.
01 USERNAME PIC X(30) VALUE SPACE.
01 SOC-REC-VARS.
05 SOC-NO-1 PIC X(3).
05 SOC-NO-2 PIC X(3).
EXEC SQL END DECLARE SECTION END-EXEC.
EXEC SQL INCLUDE SQLCA END-EXEC.
PROCEDURE DIVISION.
MAIN-RTN.
MOVE SPACE TO USERNAME.
EXEC SQL
CONNECT :USERNAME
END-EXEC.
IF SQLCODE NOT = ZERO DISPLAY "ERROR CONNECTING".
* DECLARE CURSOR
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT SOC_NO_1, SOC_NO_2
FROM INSP
ORDER BY SOC_NO_1
END-EXEC.
EXEC SQL
OPEN C1
END-EXEC.
IF SQLCODE = ZERO DISPLAY "OPEN SUCCESSFUL"
ELSE DISPLAY "OPEN FAILED".
* FETCH
EXEC SQL
FETCH C1 INTO :SOC-NO-1,:SOC-NO-2
END-EXEC.
IF SQLCODE = ZERO DISPLAY "FETCH SUCCESSFUL"
ELSE DISPLAY "FETCH FAILED".
PERFORM UNTIL SQLCODE NOT = ZERO
MOVE SOC-NO-1 TO D-SOC-NO-1
MOVE SOC-NO-2 TO D-SOC-NO-2
DISPLAY D-SOC-REC
EXEC SQL
FETCH C1 INTO :SOC-NO-1,:SOC-NO-2
END-EXEC
END-PERFORM.
* CLOSE CURSOR
EXEC SQL
CLOSE C1
END-EXEC.
IF SQLCODE = ZERO DISPLAY "CLOSE SUCCESSFUL"
ELSE DISPLAY "CLOSE FAILED".
* OPEN AGAIN
EXEC SQL
OPEN C1
END-EXEC.
IF SQLCODE = ZERO DISPLAY "REOPEN SUCCESSFUL"
ELSE DISPLAY "REOPEN FAILED " SQLERRMC.
* COMMIT
EXEC SQL
COMMIT WORK
END-EXEC.
* DISCONNECT
EXEC SQL
DISCONNECT ALL
END-EXEC.
* END
STOP RUN.
Pre-compiled the code using ocesql and compiled using cobc -x
Postgres Output
OPEN SUCCESSFUL
FETCH SUCCESSFUL
003 001
005 001
CLOSE SUCCESSFUL
REOPEN FAILED ERROR: cursor "fetchtbl_c1" already exists
The above code works perfectly fine (except for the connection part) in Oracle.
Oracle output
OPEN SUCCESSFUL
FETCH SUCCESSFUL
003 001
CLOSE SUCCESSFUL
REOPEN SUCCESSFUL
I have tried searching on the internet but without any luck. Can anybody help me with this?
I am using PostgreSQL version 10.3 and GnuCOBOL version 2.2.0.
There seems to be an issue with the ocesql pre-compiler. I have put a fix in ocdb.c, in the function OCDBSetResultStatus, to return a success code when there is no result resource (which happens in the open-cursor case).
This might not be entirely correct, but after a few hours of testing it appears to work fine.
Code changes:
int
OCDBSetResultStatus(int id, struct sqlca_t *st){
    struct s_conn *p_conn;
    int retval;

    p_conn = look_up_conn_lists(id);
    if(p_conn == NULL){
        //return OCDB_RES_FATAL_ERROR;
        return RESULT_ERROR;
    }
    if(p_conn->resaddr == OCDB_RES_DEFAULT_ADDRESS){
        // No result resource, so return success
        // Ankit: uncommented since there is no result resource,
        // (true in case of open cursor)
        return OCDB_RES_COMMAND_OK;
        //return RESULT_ERROR;
    }
#ifdef PGSQL_MODE_ON
    retval = OCDB_PGSetResultStatus(p_conn->resaddr,st);
#endif
    return retval;
}
Let me know if anybody faces any issues because of this change.

Postgres copy data & evaluate expression

Is it possible for a COPY command to evaluate expressions upon insertion?
For example consider the following table
create table test1 ( a int, b int)
and we have a file to import
5 , case when b = 1 then 100 else 101
25 , case when b = 1 then 100 else 101
145, case when b = 1 then 100 else 101
The following command will fail
COPY test1 FROM 'file' USING DELIMITERS ',';
with the following error
ERROR: invalid input syntax for integer
which means that it can not evaluate the case expression. Is there any workaround?
The command COPY only copies data (obviously) and does not evaluate SQL code, as explained in the documentation: http://www.postgresql.org/docs/9.3/static/sql-copy.html
As far as I know there is no workaround to make COPY evaluate SQL code.
You must preprocess your csv file and convert it to a standard sql script with INSERT statements in this form:
INSERT INTO your_table VALUES(145, CASE WHEN 1 = 1 THEN 100 ELSE 101 END);
Then execute the sql script with the client you are using. I.e. with psql you would use the -f option:
psql -d your_database -f your_sql_script

Fast Array Inserts with Postgres

In Oracle OCI and OCCI there are API facilities to perform array inserts: you build up an array of values in the client and send this array along with a prepared statement to the server, inserting thousands of entries into a table in a single shot, which results in huge performance improvements in some scenarios. Is there anything similar in PostgreSQL?
I am using the stock PostgreSQL C API.
Some pseudo code to illustrate what I have in mind:
stmt = con->prepare("INSERT INTO mytable VALUES ($1, $2, $3)");
pg_c_api_array arr(stmt);
for triplet(a, b, c) in mylongarray:
pg_c_api_variant var = arr.add();
var.bind(1, a);
var.bind(2, b);
var.bind(3, c);
stmt->bindarray(arr);
stmt->exec()
PostgreSQL has similar functionality: the COPY statement and the COPY API. It is very fast.
libpq documentation
char *data = "10\t20\40\n20\t30\t40";
pres = PQexec(pconn, "COPY mytable FROM stdin");
/* can be call repeatedly */
copy_result = PQputCopyData(pconn, data, sizeof(data));
if (copy_result != 1)
{
fprintf(stderr, "Copy to target table failed: %s\n",
PQerrorMessage(pconn));
EXIT;
}
if (PQputCopyEnd(pconn, NULL) == -1)
{
fprintf(stderr, "Copy to target table failed: %s\n",
PQerrorMessage(pconn));
EXIT;
}
pres = PQgetResult(pconn);
if (PQresultStatus(pres) != PGRES_COMMAND_OK)
{
fprintf(stderr, "Copy to target table failed:%s\n",
PQerrorMessage(pconn));
EXIT;
}
PQclear(pres);
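To tie this back to the question's triplet loop, here is a minimal sketch of feeding PQputCopyData one formatted row at a time; the struct layout and the target table mytable are assumptions taken from the question's pseudo code.

#include <stdio.h>
#include <libpq-fe.h>

struct triplet { int a, b, c; };

/* stream nrows (a, b, c) triplets into mytable through the COPY API */
int copy_triplets(PGconn *conn, const struct triplet *rows, size_t nrows)
{
    PGresult *res = PQexec(conn, "COPY mytable FROM stdin");
    if (PQresultStatus(res) != PGRES_COPY_IN) {
        fprintf(stderr, "COPY start failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return -1;
    }
    PQclear(res);

    char row[128];
    for (size_t i = 0; i < nrows; i++) {
        /* text COPY format: tab-separated columns, newline-terminated rows */
        int len = snprintf(row, sizeof(row), "%d\t%d\t%d\n",
                           rows[i].a, rows[i].b, rows[i].c);
        if (PQputCopyData(conn, row, len) != 1) {
            fprintf(stderr, "PQputCopyData failed: %s", PQerrorMessage(conn));
            return -1;
        }
    }

    if (PQputCopyEnd(conn, NULL) != 1) {
        fprintf(stderr, "PQputCopyEnd failed: %s", PQerrorMessage(conn));
        return -1;
    }

    res = PQgetResult(conn);
    int ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
    if (!ok)
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
    PQclear(res);
    return ok ? 0 : -1;
}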
As Pavel Stehule points out, there is the COPY command and, when using libpq in C, associated functions for transmitting the copy data. I haven't used these. I mostly program against PostgreSQL in Python, and have used similar functionality from psycopg2. It's extremely simple:
conn = psycopg2.connect(CONN_STR)
cursor = conn.cursor()
f = open('data.tsv')
cursor.copy_from(f, 'incoming')
f.close()
In fact I've often replaced open with a file-like wrapper object that performs some basic data cleaning first. It's pretty seamless.
I like this way of creating thousands of rows in a single command:
INSERT INTO mytable VALUES (UNNEST($1), UNNEST($2), UNNEST($3));
Bind an array of the values of column 1 to $1, an array of the values of column 2 to $2, and so on. Providing the values in columns may seem a bit strange at first when you are used to thinking in rows.
You need PostgreSQL >= 8.4 for UNNEST or your own function to convert arrays into sets.
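A minimal libpq sketch of this approach, assuming a table mytable(a int, b int, c int) and passing each column as a text-format array literal; it uses INSERT ... SELECT unnest(...) rather than VALUES, because newer PostgreSQL versions do not allow set-returning functions inside VALUES.

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* connection string is a placeholder */
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* one text-format array literal per column; in real code these
       strings would be built from the client-side arrays */
    const char *params[3] = { "{1,2,3}", "{10,20,30}", "{100,200,300}" };

    PGresult *res = PQexecParams(conn,
        "INSERT INTO mytable "
        "SELECT unnest($1::int[]), unnest($2::int[]), unnest($3::int[])",
        3,          /* number of parameters */
        NULL,       /* let the server infer parameter types */
        params,
        NULL, NULL, /* text-format parameters */
        0);         /* text-format result */

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));

    PQclear(res);
    PQfinish(conn);
    return 0;
}

One statement inserts as many rows as the arrays contain, so the number of network round trips stays constant regardless of the batch size.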