How to convert a BLOB column type to CLOB in DB2

I have a table in DB2 with a column of type BLOB, and I would like to convert it to a CLOB type. My approach is to create a new column with the CLOB type, copy all the data from the BLOB column to the CLOB column, drop the BLOB column, and rename the CLOB column. However, I am not sure how to do the second step, i.e. copying the data from the BLOB column to the CLOB column. How can I do this in DB2? Thanks in advance.

If you are using Db2-LUW v10.5 or higher, consider using the supplied stored procedure CONVERTTOCLOB for that purpose. Such a conversion makes sense when you know the data is character based.
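For the copy step of the add/copy/drop/rename approach you describe, a sketch follows. It is hedged: it assumes the DBMS_LOB.CONVERTTOCLOB module procedure, that the BLOB bytes really are text in the database code page, and illustrative names (t, id, b_col, c_col); run it with a non-default statement terminator such as @.
ALTER TABLE t ADD COLUMN c_col CLOB(10M)@

BEGIN
  DECLARE v_clob     CLOB(10M);
  DECLARE v_dest_off INTEGER;
  DECLARE v_src_off  INTEGER;
  DECLARE v_lang     INTEGER;
  DECLARE v_warn     INTEGER;

  -- copy each BLOB into the new CLOB column, converting byte data
  -- to character data in the database code page
  FOR r AS SELECT id, b_col FROM t WHERE b_col IS NOT NULL DO
    SET v_clob     = '';
    SET v_dest_off = 1;
    SET v_src_off  = 1;
    SET v_lang     = DBMS_LOB.DEFAULT_LANG_CTX; -- assumed module constant
    CALL DBMS_LOB.CONVERTTOCLOB(
      v_clob, r.b_col, LENGTH(r.b_col), -- convert the whole BLOB
      v_dest_off, v_src_off,
      DBMS_LOB.DEFAULT_CSID,            -- database code page
      v_lang, v_warn);
    UPDATE t SET c_col = v_clob WHERE id = r.id;
  END FOR;
END@

-- swap the columns; dropping a column leaves the table in
-- reorg-pending state, so REORG TABLE t before heavy use
ALTER TABLE t DROP COLUMN b_col@
ALTER TABLE t RENAME COLUMN c_col TO b_col@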

You can use the CONVERTTOCLOB stored procedure, but if you want to convert a BLOB to a CLOB within a query, and you don't mind it being truncated to 32K bytes, then you can use VARCHAR() to convert it. Here follows an example:
create table b(b blob) organize by row;
insert into b values blob(x'F09F9880');
select cast(VARCHAR(b) as VARCHAR(4) FOR MIXED DATA) from b;
1
----
😀
Note that the CAST(... AS VARCHAR(x) FOR MIXED DATA) part converts the value from VARCHAR FOR BIT DATA, so the output is not shown in hex format.
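For contrast, without the re-cast the same query returns VARCHAR(4) FOR BIT DATA, which the command line processor renders in hex rather than as characters:
select VARCHAR(b) from b;
-- displays the raw bytes F09F9880 instead of 😀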

Related

Convert a BLOB to VARCHAR instead of VARCHAR FOR BIT

I have a BLOB field in a table that I am selecting from. This field's data consists only of JSON.
If I do the following:
Select CAST(JSONBLOB as VARCHAR(2000)) from MyTable
--> this returns the value in VARCHAR FOR BIT DATA format.
I just want it as a standard string or varchar, not in bit format.
That is because I need to use the JSON2BSON function to convert the JSON to BSON. JSON2BSON accepts a string, but it will not accept a VARCHAR FOR BIT DATA type...
This conversion should be easy.
I am able to do the select as VARCHAR FOR BIT DATA, manually copy it using the UI, paste it into a select literal, and convert that to BSON. But I need to migrate a bunch of data in this BLOB from JSON to BSON, and doing it manually won't be fast. I only describe it to show how simple a use case this should be.
What is the select command to essentially get this to work:
Select JSON2BSON(CAST(JSONBLOB as VARCHAR(2000))) from MyTable
--> Currently this fails because the CAST converts the value (even though it contains only text characters) to the VARCHAR FOR BIT DATA type and not standard VARCHAR.
What is the suggestion to fix this?
DB2 11 on Windows.
If the data is JSON, then the table column should be CLOB in the first place...
Having the table column a BLOB might make sense if the data is actually already BSON.
You could change the BLOB into a CLOB using the CONVERTTOCLOB procedure; then you should be OK.
https://www.ibm.com/support/knowledgecenter/SSEPGG_11.5.0/com.ibm.db2.luw.apdv.sqlpl.doc/doc/r0055119.html
You can use this function to remove the "FOR BIT DATA" flag on a column:
-- An identity function: the body just returns its input, but the
-- signature re-types VARCHAR ... FOR BIT DATA as plain character VARCHAR
CREATE OR REPLACE FUNCTION DB_BINARY_TO_CHARACTER(A VARCHAR(32672 OCTETS) FOR BIT DATA)
  RETURNS VARCHAR(32672 OCTETS)
  NO EXTERNAL ACTION
  DETERMINISTIC
BEGIN ATOMIC
  RETURN A;
END
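With that function in place, the failing select from the question can be rewritten like this (a sketch using the question's own names):
SELECT JSON2BSON(DB_BINARY_TO_CHARACTER(CAST(JSONBLOB AS VARCHAR(2000))))
FROM MyTable;
The inner CAST still produces VARCHAR FOR BIT DATA; the function's only job is to re-type that value as plain VARCHAR, which JSON2BSON accepts.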
Or, if you are on Db2 11.5, the function SYSIBMADM.UTL_RAW.CAST_TO_VARCHAR2 will also work.

MariaDB CAST or CONVERT function does not work

My table has a varchar(64) column; when I try to convert it to char(36), it does not work.
SELECT
CONVERT('00edff66-3ef4-4447-8319-fc8eb45776ab',CHAR(36)) AS A,
CAST('00edff66-3ef4-4447-8319-fc8eb45776ab' AS CHAR(36)) as B
This is the result of describing the output (screenshot not reproduced): both columns come back as VARCHAR, not CHAR(36).
It's like this because it is derived from MySQL, where there was no CAST AS VARCHAR option. In MySQL there was only CAST AS CHAR, which produced a VARCHAR. Here are the supported options in MySQL 5.6:
https://dev.mysql.com/doc/refman/5.6/en/cast-functions.html#function_cast
Note that they explicitly mention that "No padding occurs for values shorter than N characters". Later, MariaDB added CAST AS VARCHAR only to make it more cross-platform compatible with systems like Oracle, PostgreSQL, MSSQL, etc.:
https://jira.mariadb.org/browse/MDEV-11283
But CAST AS CHAR and CAST AS VARCHAR are still the same, and arguably they should be: why would you need a fixed-length datatype in RAM? It only matters when you store the value.
For example, if you have a table with a CHAR column:
CREATE TABLE tbltest(col CHAR(10));
and you insert data cast as VARCHAR, for example:
INSERT INTO tbltest(col) VALUES(CAST('test' AS VARCHAR(5)));
it will be stored as CHAR(10), because that is the datatype the table uses.
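A minimal check of the "no padding" behaviour quoted above (the alias is illustrative):
SELECT LENGTH(CAST('test' AS CHAR(10))) AS len;
-- len is 4, not 10: the result of the CAST behaves like VARCHAR,
-- with no space padding up to the declared length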

Bytea to actual text value in postgresql

I have a table to store file information in postgresql.
select id,filestream,name from Table_file_info
Here filestream is of the bytea datatype. How do I get the bytea data back as the actual text (the content of my file) in PostgreSQL?
I tried the query below:
select encode(filestream, 'escape')::text as name from Table_file_info
but I am getting this:
ICAgICAgICAgc2FkZnNhZGZhZCBzZGRkZGRkZGRkIFRlc3R0dA==
The actual content of my file is: sadfsadfad sddddddddd Testtt
It looks like base64, meaning your file was first converted to base64 and then converted to bytea (which is kind of pointless, since base64 is already text):
select encode(decode(encode(filestream,'escape'),'base64'),'escape') from Table_file_info;
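If the decoded bytes are text, convert_from is an alternative to the outer encode(..., 'escape') call and names the text encoding explicitly (UTF8 below is an assumption about the file's encoding):
-- decode(...) strips the base64 layer; convert_from turns the
-- resulting bytea into text interpreted as UTF-8
select convert_from(decode(encode(filestream, 'escape'), 'base64'), 'UTF8')
from Table_file_info;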

Alter a Column from INTEGER To BIGINT

In my database I have several fields with INTEGER Type. I need to change some of them to BIGINT.
So my question is, can I just use the following command?
ALTER TABLE MyTable ALTER COLUMN MyIntegerColumn TYPE BIGINT;
Will the contained data be converted the correct way? After the conversion, is this column a "real" BIGINT column?
I know this is not possible if there are constraints on the column (trigger, foreign key, ...). But if there are no constraints, is it possible to do it this way?
Or is it better to convert it via a helper column:
MyIntegerColumn -> MyIntegerColumnBac -> MyBigIntColumn
When you execute
ALTER TABLE MyTable ALTER COLUMN MyIntegerColumn TYPE BIGINT;
Firebird will not convert existing data from INTEGER to BIGINT; instead it will create a new format version for the table.
When inserting new rows or updating existing rows, the value will be stored as a BIGINT, but when reading Firebird will convert 'old' rows on the fly from INTEGER to BIGINT. This happens transparently for you as the user. This is to prevent needing to rewrite all existing rows, which could be costly (IO, garbage collection of old versions of rows, etc).
So please, do use ALTER TABLE .. ALTER COLUMN; do not do MyIntegerColumn -> MyIntegerColumnBac -> MyBigIntColumn. There are some exceptions to this rule: for example, (potentially) lossy character set transformations are better done via a helper column, to prevent transliteration errors on select if a character does not exist in the new character set, as is changing a (var)char column to be shorter (which can't be done with ALTER COLUMN).
To be a little more specific: when a row is written in the database it contains a format version (aka version count) of that row. The format version points to a description of a row (datatypes, etc) how Firebird should read that row. An alter table will create a new format version, and that format will be applied when writing new rows or updating existing rows. When reading an old row, Firebird will apply necessary transformation to present that row as the new format (for example adding new columns with their default values, transforming a data type of a column).
These format versions are also the reason why the number of ALTER TABLEs is restricted: if you apply more than 255 alter tables to a single table, you must back up and restore the database before further changes are allowed to that table (the format version is a single byte).
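If you are curious, you can watch the format number grow in the system tables; a small sketch (RDB$FORMAT holds the relation's current format version):
-- each ALTER TABLE bumps RDB$FORMAT; a backup/restore cycle resets it
SELECT RDB$RELATION_NAME, RDB$FORMAT
FROM RDB$RELATIONS
WHERE RDB$RELATION_NAME = 'MYTABLE';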

How to insert (raw bytes from file data) using a plain text script

Database: Postgres 9.1
I have a table called logos defined like this:
create type image_type as enum ('png');
create table logos (
  id      UUID primary key,
  bytes   bytea not null,
  type    image_type not null,
  created timestamp with time zone default current_timestamp not null
);
create index logo_id_idx on logos(id);
I want to be able to insert records into this table in 2 ways.
The first (and most common) way rows will be inserted into the table is that a user will provide a PNG image file via an HTML file-upload form. The code processing the request on the server will receive a byte array containing the data of the PNG image file and insert a record into the table using something very similar to what is explained here. There are plenty of examples on the internet of how to insert byte arrays into a PostgreSQL field of type bytea. This is an easy exercise. An example of the insert code would look like this:
insert into logos (id, bytes, type, created) values (?, ?, ?, now())
And the bytes would be set with something like:
...
byte[] bytes = ... // read PNG file into a byte array.
...
ps.setBytes(2, bytes);
...
The second way rows will be inserted into the table is from a plain-text script. This is needed only to populate test data into the table for automated tests, or to initialize the database with a few records for a remote development environment.
Regardless of how the data is entered in the table, the application will obviously need to be able to select the bytea data from the table and convert it back into a PNG image.
Question
How does one properly encode a byte array, to be able to insert the data from within a script, in such a way that only the original bytes contained in the file are stored in the database?
I can write code to read the file and spit out insert statements to populate the script. But I don't know how to encode the byte array for the plain-text script such that, when the script is run from psql, the image data will be the same as if the file had been inserted using the setBytes JDBC code.
I would like to run the script with something like this:
psql -U username -d dataBase -a -f test_data.sql
The easiest way, IMO, to represent bytea data in an SQL file is to use the hex format:
8.4.1. bytea Hex Format
The "hex" format encodes binary data as 2 hexadecimal digits per byte, most significant nibble first. The entire string is preceded by the sequence \x (to distinguish it from the escape format). In some contexts, the initial backslash may need to be escaped by doubling it, in the same cases in which backslashes have to be doubled in escape format; details appear below. The hexadecimal digits can be either upper or lower case, and whitespace is permitted between digit pairs (but not within a digit pair nor in the starting \x sequence). The hex format is compatible with a wide range of external applications and protocols, and it tends to be faster to convert than the escape format, so its use is preferred.
Example:
SELECT E'\\xDEADBEEF';
Converting an array of bytes to hex should be trivial in any language that a sane person (such as yourself) would use to write the SQL file generator.
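For example, a generated line of test_data.sql could look like this; the UUID is made up and the bytes are just the 8-byte PNG signature rather than a whole file, but the pattern is the same:
-- \x... is the bytea hex format from the quote above; inside an E''
-- string the backslash must be doubled
insert into logos (id, bytes, type, created)
values ('b7f3a1c2-4d5e-4f60-8a9b-0c1d2e3f4a5b', E'\\x89504E470D0A1A0A', 'png', now());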