Get the encoding of a char column through an SQL query in Db2

How do I get the encoding of a char column in Db2 through a query? I'm using Db2 for i 7.2. Basically, I need to check whether the encoding of a char column is "for bit data".

FOR BIT DATA is not an encoding; it is an attribute of character data types that have no encoding.
Each column's data type and, for character types, its encoding can be found in the catalog view QSYS2.SYSCOLUMNS, in the DATA_TYPE and CCSID columns respectively. I guess you'll be looking for the 'BINARY' and 'VARBIN' values.
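For example, a query along these lines (a sketch; MYLIB and MYTABLE are placeholder names, and FOR BIT DATA character columns should also stand out by reporting the "no conversion" CCSID 65535):

SELECT COLUMN_NAME, DATA_TYPE, CCSID
FROM QSYS2.SYSCOLUMNS
WHERE TABLE_SCHEMA = 'MYLIB'
  AND TABLE_NAME = 'MYTABLE';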
More information in the manual.

Related

Convert a BLOB to VARCHAR instead of VARCHAR FOR BIT

I have a BLOB field in a table that I am selecting. This field consists only of JSON data.
If I do the following:
Select CAST(JSONBLOB as VARCHAR(2000)) from MyTable
--> this returns the value in VARCHAR FOR BIT DATA format.
I just want it as a standard string or varchar, not in bit format.
That is because I need to use the JSON2BSON function to convert the JSON to BSON. JSON2BSON accepts a string, but it will not accept a VARCHAR FOR BIT DATA type...
This conversion should be easy.
I am able to do the select as VARCHAR FOR BIT DATA, manually copy the value using the UI, paste it into a select as a literal, and convert that to BSON. But I need to migrate a bunch of data in this BLOB column from JSON to BSON, and doing it manually won't be fast. I only mention this to show how simple a use case this should be.
What is the select command to essentially get this to work:
Select JSON2BSON(CAST(JSONBLOB as VARCHAR(2000))) from MyTable
--> Currently this fails because the CAST converts the value (even though it contains only text characters) to VARCHAR FOR BIT DATA rather than standard VARCHAR.
What is the suggestion to fix this?
DB2 11 on Windows.
If the data is JSON, then the table column should be CLOB in the first place...
Having the table column be a BLOB might make sense if the data is actually already BSON.
You could change the BLOB into a CLOB using the CONVERTTOCLOB procedure; then you should be OK.
https://www.ibm.com/support/knowledgecenter/SSEPGG_11.5.0/com.ibm.db2.luw.apdv.sqlpl.doc/doc/r0055119.html
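For a single value, the call has roughly this shape (a sketch based on the linked page, not a drop-in script: in Db2-LUW the procedure is part of the DBMS_LOB module, the variable names here are illustrative, CCSID 1208 assumes the BLOB bytes are UTF-8, and an anonymous block like this needs an alternate statement terminator in the CLP):

BEGIN
  DECLARE v_blob BLOB(2M);
  DECLARE v_clob CLOB(2M) DEFAULT '';
  DECLARE v_dest_offset BIGINT DEFAULT 1;
  DECLARE v_src_offset BIGINT DEFAULT 1;
  DECLARE v_lang_context INTEGER DEFAULT 0;
  DECLARE v_warning INTEGER;

  -- one JSON document from the question's table
  SET v_blob = (SELECT JSONBLOB FROM MyTable FETCH FIRST 1 ROW ONLY);

  -- convert the bytes to character data
  CALL DBMS_LOB.CONVERTTOCLOB(v_clob, v_blob, LENGTH(v_blob),
       v_dest_offset, v_src_offset, 1208, v_lang_context, v_warning);

  -- v_clob now holds plain character data, usable where a string is expected
END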
You can use this function to remove the "FOR BIT DATA" flag on a column:
-- Pass-through function: returns the same bytes, but the result type
-- is a plain VARCHAR without the FOR BIT DATA attribute
CREATE OR REPLACE FUNCTION DB_BINARY_TO_CHARACTER(A VARCHAR(32672 OCTETS) FOR BIT DATA)
RETURNS VARCHAR(32672 OCTETS)
NO EXTERNAL ACTION
DETERMINISTIC
BEGIN ATOMIC
  RETURN A;
END
Or, if you are on Db2 11.5, the function SYSIBMADM.UTL_RAW.CAST_TO_VARCHAR2 will also work.
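For example, reusing the question's JSONBLOB and MyTable names (and assuming the function above has been created in the current schema):

SELECT JSON2BSON(DB_BINARY_TO_CHARACTER(CAST(JSONBLOB AS VARCHAR(2000)))) FROM MyTable;

The inner CAST still produces VARCHAR FOR BIT DATA, but the pass-through function's result type drops the FOR BIT DATA attribute, so JSON2BSON sees a plain VARCHAR.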

How to convert a BLOB column type to CLOB in DB2

I have a table in DB2 with a column of type BLOB, and I would like to convert it to a CLOB type. My approach is to create a new column with CLOB type, copy all the data from the BLOB column to the CLOB column, drop the BLOB column, and rename the CLOB column. However, I am not sure how to do the second step, i.e. copying the data from the BLOB column to the CLOB column. How can I do this in DB2? Thanks in advance.
If you are using Db2-LUW v10.5 or higher, consider using the supplied stored procedure CONVERTTOCLOB for that purpose. Such a conversion makes sense when you know the data is character-based.
You can use the CONVERTTOCLOB stored procedure, but if you want to convert a BLOB to a CLOB within a query, and you don't mind it being truncated to 32K bytes, then you can use VARCHAR() to convert it. Here follows an example:
create table b(b blob) organize by row;
insert into b values blob(x'F09F9880');
select cast(VARCHAR(b) as VARCHAR(4) FOR MIXED DATA) from b;
1
----
😀
Note that the CAST(... AS VARCHAR(x) FOR MIXED DATA) part converts away from VARCHAR FOR BIT DATA, so the output is not shown in hex format.

How to understand column types from an SQL file?

I am not very familiar with column types. From another country, with another system, they sent me an SQL file and claim that there is an image in it. I guess it is a byte array; however, I couldn't insert it into PostgreSQL. When I try to insert it, I get:
LINE 1: ...ES ('00246c4e-1bc8-4dde-bb89-e9dee69990d5', '0', 0xffa0ffa40...
^
********** Error **********
ERROR: syntax error at or near "xffa0f
Could you please help me create the related table with appropriate column types?
I know that it is not a good question; however, here is the start of the SQL file:
INSERT INTO `fps` VALUES ('00246c4e-1bc8-4dde-bb89-e9dee69990d5', '0', 0xffa0ffa4003a0907000932d325cd000ae0f3199a010a41eff19a010b8e2......
What is the type of 0xffa0ff....?
'00246c4e-1bc8-4dde-bb89-e9dee69990d5' is a UUID.
'0' is just a character string. There are a few different string types to choose from. However, if all of these values are integers, you may want to create the column as an INTEGER instead.
0xff... is a hex string, though not in a format that Postgres will recognise. You can store this data in a bytea column, but in order for the INSERT to succeed, you will need to modify the script, replacing, for example,
0xab...ef
with
'\xab...ef'
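Putting it together, the table and a corrected INSERT could look roughly like this (the column names are guesses from the values; note also that the backticks around `fps` are MySQL identifier quoting and must be removed for PostgreSQL):

CREATE TABLE fps (
    id    uuid,
    flag  text,     -- or integer, if every value is a number
    image bytea     -- the byte array / image data
);

INSERT INTO fps VALUES (
    '00246c4e-1bc8-4dde-bb89-e9dee69990d5',
    '0',
    '\xffa0ffa4003a0907'  -- first bytes only; the full value is truncated above
);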

DB2 DBCLOB data INSERT with Unicode data

The problem at hand is to insert data into a DB2 table which has a DBCLOB column. The table's encoding is Unicode. The subsystem is MIXED=YES with the Japanese CCSID set (290, 930, 300). The application is bound ENCODING CCSID.
I was successful in FETCHing the DBCLOB's data in Unicode; no problem there. But when I turn around and try to INSERT it back, the inserted data is being interpreted as not being Unicode: it seems DB2 thinks it is EBCDIC DBCS/GRAPHIC, and the inserted row shows Unicode 0xFEFE. When I manually update the data being inserted to valid DBCS, the data inserts OK and shows the expected Unicode DBCS values.
To insert the data I am using a dynamically prepared INSERT statement with a placeholder for the DBCLOB column. The SQLVAR entry associated with the placeholder is a DBCLOB_LOCATOR with the CCSID set to 1200.
A DBCLOB locator is created by doing a SET dbclobloc = SUBSTR(dbclob, 1, length). The created locator is then put into the SQLDA, and the prepared INSERT is executed.
It seems DB2 is ignoring the 1200 CCSID associated with the DBCLOB_LOCATOR SQLVAR. Attempts to put a CAST(? AS DBCLOB CCSID UNICODE) on the placeholder in the INSERT do not help, because by that point DB2 seems to have made up its mind about the encoding of the data to be inserted.
I am stuck :( Any ideas?
Greg
I think I figured it out and it is not good: the SET statement for the DBCLOB_LOCATOR is static SQL and the DBRM is bound ENCODING EBCDIC. Hence DB2 has no choice but to assume the data is in the CCSID of the plan.
I also tried what the books suggest and used a SELECT ... FROM SYSIBM.SYSDUMMYU to set the DBCLOB_LOCATOR. This should have told DB2 that the data was coming in Unicode. But it failed again, with symptoms indicating it still assumed the DBCS EBCDIC CCSID.
Not good.
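For reference, the two statements described above have roughly this shape (a sketch only; the host-variable names are illustrative):

-- 1) Static SET: runs under the DBRM's ENCODING EBCDIC, so the source
--    data is assumed to be EBCDIC DBCS
EXEC SQL SET :dbclobloc = SUBSTR(:dbclob, 1, :len);

-- 2) The SYSDUMMYU variant that, per the books, should force Unicode
EXEC SQL SELECT SUBSTR(:dbclob, 1, :len) INTO :dbclobloc
         FROM SYSIBM.SYSDUMMYU;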

Can I use nvarchar data type in plpgsql?

Can I use nvarchar data type in plpgsql?
If so, do I have to follow the same syntax?
If not, then what is the alternative?
Just use the text or varchar types.
There's no need for, or support for, nvarchar. All PostgreSQL text-like types are always in the database's text encoding; you can't choose a 1-byte or 2-byte type like in MS SQL Server. Generally you just use a UTF-8 encoded DB, and everything works with no hassle.
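A minimal plpgsql illustration (a hypothetical function) using text and varchar where nvarchar would appear in SQL Server:

CREATE OR REPLACE FUNCTION greet(name text)
RETURNS text
LANGUAGE plpgsql
AS $$
DECLARE
    greeting varchar(100);   -- plain varchar; no nvarchar needed
BEGIN
    greeting := 'Hello, ' || name;   -- any Unicode text works in a UTF-8 database
    RETURN greeting;
END;
$$;

-- usage: SELECT greet('José');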