DB2 SQL Error: SQLCODE=-433, SQLSTATE=22001 when inserting Base64 CLOB data - db2

DB2 exception - DB2 SQL Error: SQLCODE=-433, SQLSTATE=22001 is thrown when inserting Base64 CLOB data 38K characters in length into a DB2 table CLOB column defined with a length of 10 MB. The database insert is done via a stored procedure called by a MuleSoft flow. We've been unable to find the root cause or a solution to this. Has anyone seen this behaviour?
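For context, SQLCODE -433 (SQL0433N, SQLSTATE 22001) means a value is too long for its target, so something in the path is narrower than the 10 MB column. A common culprit in this kind of setup is a stored-procedure parameter declared smaller than the column; a minimal sketch of what to check (procedure, parameter, and table names here are hypothetical):

-- Hypothetical sketch: the column is CLOB(10M), but if the procedure
-- parameter were declared as, say, VARCHAR(32672), a 38K-character Base64
-- payload would overflow the parameter, not the column, raising SQL0433N.
CREATE OR REPLACE PROCEDURE INSERT_PAYLOAD (IN P_PAYLOAD CLOB(10M))
LANGUAGE SQL
BEGIN
  INSERT INTO MY_TABLE (PAYLOAD) VALUES (P_PAYLOAD);
END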

Related

PostgreSQL equivalent or Best Practice of Oracle LOG ERRORS clause in INSERT statement

While doing an Oracle-to-PostgreSQL conversion, I came across INSERT statements that load a huge amount of data and store all the errored records in an ERROR table.
All the insert statements are of the form:
INSERT ALL INTO ABC
  (list of columns)
VALUES
  (column data)
LOG ERRORS INTO ..
SELECT statement;
In PostgreSQL I didn't find any equivalent that can store the errored records in an error table.
If anyone has gone through a conversion or transformation from Oracle to PostgreSQL and used best practices for this type of conversion, please share them.
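In case it helps, a common hand-rolled equivalent is a PL/pgSQL loop that inserts row by row and catches failures into an error table; a minimal sketch, assuming illustrative table names (staging_abc, abc, abc_errors) and columns:

-- Hypothetical sketch: emulate Oracle's LOG ERRORS with per-row exception
-- handling. Slower than a set-based insert, but errored rows are preserved.
DO $$
DECLARE
  rec RECORD;
BEGIN
  FOR rec IN SELECT * FROM staging_abc LOOP
    BEGIN
      INSERT INTO abc (col1, col2) VALUES (rec.col1, rec.col2);
    EXCEPTION WHEN OTHERS THEN
      INSERT INTO abc_errors (col1, col2, err_msg)
      VALUES (rec.col1, rec.col2, SQLERRM);
    END;
  END LOOP;
END $$;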

Unable to store entire CLOB data in to CLOB defined column in DB2

I am unable to load the entire CLOB data using db2 load. Only the data up to a certain length is put into the CLOB column of the table.
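One common cause, assuming the data is loaded from a delimited file: inline DEL fields are capped at roughly 32 KB, so larger CLOB values must be kept in separate files referenced via the lobsinfile modifier. A hedged sketch (file paths and table name are illustrative):

-- Hypothetical sketch: the CLOB field in data.del holds file references,
-- and LOAD reads the full values from /data/lobs instead of truncating
-- them at the inline DEL field limit.
LOAD FROM data.del OF DEL
  LOBS FROM /data/lobs
  MODIFIED BY lobsinfile
  INSERT INTO MY_SCHEMA.MY_TABLE;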

invalid input syntax for type json aws dms postgres

I'm running a task that migrates all data from a Postgres 10.4 to an RDS Postgres 10.4.
It is not able to migrate tables which have a jsonb column.
After the error, the whole table gets suspended. The table contains only 449 rows.
I have set the following error policy, and still the whole table is suspended:
"DataErrorPolicy": "IGNORE_RECORD",
"DataTruncationErrorPolicy": "IGNORE_RECORD",
"DataErrorEscalationPolicy": "SUSPEND_TABLE",
"DataErrorEscalationCount": 1000,
My expectation is that the whole table should be transferred; it can ignore a record if any JSON is wrong.
I don't know why it's giving the error 'invalid input syntax for type json'; I have checked all the JSON values and they are valid.
After debugging more, this error is being treated as a TABLE error, but why? That's why the table got suspended, since TableErrorPolicy is 'SUSPEND_TABLE'.
Why is this error considered a table error instead of a record error?
Is the JSONB column not supported by DMS, and is that why we are getting the error below?
Logs:
2020-09-01T12:10:04 I: Next table to load 'public'.'TEMP_TABLE' ID = 1, order = 0 (tasktablesmanager.c:1817)
2020-09-01T12:10:04 I: Start loading table 'public'.'TEMP_TABLE' (Id = 1) by subtask 1. Start load timestamp 0005AE3F66381F0F (replicationtask_util.c:755)
2020-09-01T12:10:04 I: REPLICA IDENTITY information for table 'public'.'TEMP_TABLE': Query status='Success' Type='DEFAULT' Description='Old values of the Primary Key columns (if any) will be captured.' (postgres_endpoint_unload.c:191)
2020-09-01T12:10:04 I: Unload finished for table 'public'.'TEMP_TABLE' (Id = 1). 449 rows sent. (streamcomponent.c:3485)
2020-09-01T12:10:04 I: Table 'public'.'TEMP_TABLE' contains LOB columns, change working mode to default mode (odbc_endpoint_imp.c:4775)
2020-09-01T12:10:04 I: Table 'public'.'TEMP_TABLE' has Non-Optimized Full LOB Support (odbc_endpoint_imp.c:4788)
2020-09-01T12:10:04 I: Load finished for table 'public'.'TEMP_TABLE' (Id = 1). 449 rows received. 0 rows skipped. Volume transferred 190376. (streamcomponent.c:3770)
2020-09-01T12:10:04 E: RetCode: SQL_ERROR SqlState: 22P02 NativeError: 1 Message: ERROR: invalid input syntax for type json; Error while executing the query (ar_odbc_stmt.c:2648)
2020-09-01T12:10:04 W: Table 'public'.'TEMP_TABLE' (subtask 1 thread 1) is suspended (replicationtask.c:2471)
Edit: after debugging more, this error is being treated as a TABLE error, but why?
The JSONB column must be nullable in the target DB.
Note: in my case, after making the JSONB column nullable, this error disappeared.
As mentioned in the AWS documentation:
In this case, AWS DMS treats JSONB data as if it were a LOB column. During the full load phase of a migration, the target column must be nullable.
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.Prerequisites
https://aws.amazon.com/premiumsupport/knowledge-center/dms-error-null-value-column/
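A minimal sketch of the corresponding fix, assuming a hypothetical JSONB column named payload:

-- Hypothetical: drop NOT NULL on the target JSONB column so the DMS full
-- load, which treats JSONB as a LOB, can populate it.
ALTER TABLE public."TEMP_TABLE" ALTER COLUMN payload DROP NOT NULL;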
AWS DMS treats the JSON data type in PostgreSQL as a LOB data type column. This means that the LOB size limitation when you use limited LOB mode applies to JSON data. For example, suppose that limited LOB mode is set to 4,096 KB. In this case, any JSON data larger than 4,096 KB is truncated at the 4,096 KB limit and fails the validation test in PostgreSQL.
Reference: AWS DMS - JSON data types being truncated
Update: you can tweak the error-handling task settings to skip erroneous rows by setting DataErrorPolicy to IGNORE_RECORD, which determines the action AWS DMS takes when there is an error related to data processing at the record level.
Some examples of data processing errors include conversion errors, errors in transformation, and bad data. The default is LOG_ERROR; with IGNORE_RECORD, the task continues and the data for that record is ignored.
Reference: AWS DMS - Error handling task settings
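For illustration, these settings live under the ErrorBehavior key of the task settings JSON; the values below mirror the ones from the question:

"ErrorBehavior": {
  "DataErrorPolicy": "IGNORE_RECORD",
  "DataTruncationErrorPolicy": "IGNORE_RECORD",
  "DataErrorEscalationPolicy": "SUSPEND_TABLE",
  "DataErrorEscalationCount": 1000
}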
You mentioned that you're migrating from PostgreSQL to PostgreSQL. Is there a specific reason to use AWS DMS?
AWS Docs: https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html#CHAP_Source.PostgreSQL.Homogeneous
When you migrate from a database engine other than PostgreSQL to a PostgreSQL database, AWS DMS is almost always the best migration tool to use. But when you are migrating from a PostgreSQL database to a PostgreSQL database, PostgreSQL tools can be more effective.
...
We recommend that you use PostgreSQL database migration tools such as pg_dump under the following conditions:
You have a homogeneous migration, where you are migrating from a source PostgreSQL database to a target PostgreSQL database.
You are migrating an entire database.
The native tools allow you to migrate your data with minimal downtime.
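For a homogeneous copy like this one, a minimal pg_dump/pg_restore sketch (hosts, user, and database name below are placeholders):

# Hypothetical: dump the source database and restore it into the RDS target.
pg_dump -h source-host -U app_user -Fc mydb > mydb.dump
pg_restore -h target-rds-host -U app_user -d mydb --no-owner mydb.dump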

SparkSQL/JDBC error com.microsoft.sqlserver.jdbc.SQLServerException: Column, parameter, or variable #7: Cannot find data type BLOB

Saving DataFrame to table with VARBINARY columns is throwing error:
com.microsoft.sqlserver.jdbc.SQLServerException: Column, parameter, or
variable #7: Cannot find data type BLOB
If I try to use VARBINARY in the createTableColumnTypes option, I get "VARBINARY not supported".
The workaround is:
change the TARGET schema to use VARCHAR.
Add .option("createTableColumnTypes", "Col1 varchar(500), Col2 varchar(500)")
While this workaround lets us go ahead with saving the rest of the data, the actual binary data from the source table (from which the data is read) is not saved correctly for these two columns - we see NULL data.
We are using MS SQL Server 2017 JDBC driver and Spark 2.3.2.
Any help or workaround to address this issue correctly, so that we don't lose data, is appreciated.
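One hedged workaround, not verified against this exact setup: pre-create the target table with VARBINARY(MAX) columns and write with SaveMode.Append, so Spark appends rows instead of generating its own CREATE TABLE (which maps binary columns to the unsupported BLOB type). Table and column names are illustrative:

-- Hypothetical target DDL: define the binary column yourself as
-- VARBINARY(MAX), then have Spark append into the existing table.
CREATE TABLE dbo.Target (
  Id   INT NOT NULL,
  Col7 VARBINARY(MAX)
);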

DB2 query for fetching clob value

In DB2 , how do we write a query to fetch value from clob datatype column
Not sure if this will help, but we ran into a similar situation at my company. For us, we were able to read the values out as a normal string using C#/.NET and the IBM iSeries data provider. The data we wanted to fetch from the CLOB was just simple text, which allowed this process to work.
For SQL PL, you can select CLOB data from the database the same as any other type, but if you use JDBC, you should use a byte[] for the CLOB data.
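For plain SQL, a minimal sketch (table and column names are hypothetical): casting a substring of the CLOB to VARCHAR yields a readable value, subject to the VARCHAR length limit:

-- Hypothetical: read the first 32000 characters of a CLOB as VARCHAR.
SELECT CAST(SUBSTR(MY_CLOB_COL, 1, 32000) AS VARCHAR(32000))
FROM MY_SCHEMA.MY_TABLE;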