Can I use nvarchar data type in plpgsql? - postgresql

Can I use nvarchar data type in plpgsql?
If so, do I have to follow the same syntax?
If not, then what is the alternative?

Just use the text or varchar types.
There's no need for, or support for, nvarchar. All PostgreSQL text-like types are stored in the database's text encoding; you can't choose a 1-byte or 2-byte type per column like in MS SQL Server. Generally you just use a UTF-8 encoded DB, and everything works with no hassle.
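Where T-SQL would declare nvarchar variables, PL/pgSQL just uses text (or varchar(n) if a length cap matters). A minimal sketch — the function name and strings here are made up for illustration:

```sql
-- Hypothetical example: 'greet' is an invented name, not from the question.
CREATE OR REPLACE FUNCTION greet(p_name text)
RETURNS text AS $$
DECLARE
    v_greeting text := 'Hello, ';   -- where T-SQL would use nvarchar
BEGIN
    RETURN v_greeting || p_name;
END;
$$ LANGUAGE plpgsql;

-- In a UTF-8 database this handles any script:
SELECT greet('世界');
```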

Related

Convert a BLOB to VARCHAR instead of VARCHAR FOR BIT

I have a BLOB field in a table that I am selecting. This field data consists only of JSON data.
If I do the following:
Select CAST(JSONBLOB as VARCHAR(2000)) from MyTable
--> this returns the value in VARCHAR FOR BIT DATA format.
I just want it as a standard string or varchar - not in bit format.
That is because I need to use JSON2BSON function to convert the JSON to BSON. JSON2BSON accepts a string but it will not accept a VarChar for BIT DATA type...
This conversion should be easy.
I am able to do the select as VARCHAR FOR BIT DATA, manually copy the value using the UI, paste it into a select literal, and convert that to BSON. But I need to migrate a bunch of data in this BLOB from JSON to BSON, and doing it manually won't be fast. I only mention it to show how simple a use case this should be.
What is the select command to essentially get this to work:
Select JSON2BSON(CAST(JSONBLOB as VARCHAR(2000))) from MyTable
--> Currently this fails because the CAST converts this (even though it's only text characters) to VARCHAR FOR BIT DATA and not standard VARCHAR.
What is the suggestion to fix this?
DB2 11 on Windows.
If the data is JSON, then the table column should be CLOB in the first place...
Having the table column a BLOB might make sense if the data is actually already BSON.
You could change the BLOB into a CLOB using the CONVERTTOCLOB procedure; then you should be OK.
https://www.ibm.com/support/knowledgecenter/SSEPGG_11.5.0/com.ibm.db2.luw.apdv.sqlpl.doc/doc/r0055119.html
You can use this function to remove the "FOR BIT DATA" flag on a column
CREATE OR REPLACE FUNCTION DB_BINARY_TO_CHARACTER(A VARCHAR(32672 OCTETS) FOR BIT DATA)
RETURNS VARCHAR(32672 OCTETS)
NO EXTERNAL ACTION
DETERMINISTIC
BEGIN ATOMIC
RETURN A;
END
or if you are on Db2 11.5 the function SYSIBMADM.UTL_RAW.CAST_TO_VARCHAR2 will also work
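Once that function exists, the failing statement from the question can be wrapped like this (a sketch reusing the question's own table and column names; untested):

```sql
-- DB_BINARY_TO_CHARACTER strips the FOR BIT DATA attribute, so JSON2BSON
-- receives an ordinary VARCHAR:
SELECT JSON2BSON(DB_BINARY_TO_CHARACTER(CAST(JSONBLOB AS VARCHAR(2000))))
FROM MyTable;
```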

How to convert oid datatype to varchar during updation

I want to update a column image_name, whose data type is oid, using the image1 column, whose data type is varchar.
During the update I'm getting an error like:
update mst_memberdetail set image_name=image1;
column "image" is of type oid but expression is of type character varying
Can you please tell me how to explicitly convert the datatype?
I assume that image_name is an OID that refers to a large object? If so, you probably don't want to copy that directly into a varchar, since you will get escaped binary (possibly hexadecimal). This is something you want to do in a client library, not in the db backend, if that is your use case.
Alternatively you could write a PL/Perl function to unescape the binary, and use the lo_* functions to read it, pass it to PL/Perl and update the varchar, but if you have encoding issues that will fail, and it will be slow.
If you are using lobs I would just recommend leaving them as is.
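On newer PostgreSQL (9.4+), the lo_* route can be sketched in plain SQL, with no PL/Perl needed — but the same caveat applies: this only works if the large objects really contain text in a known encoding:

```sql
-- Hedged sketch: copies each large object's contents into the varchar column.
-- convert_from raises an error if the bytes aren't valid UTF-8.
UPDATE mst_memberdetail
SET image1 = convert_from(lo_get(image_name), 'UTF8')
WHERE image_name IS NOT NULL;
```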

what is the equivalent of varbinary(10) in postgresql

I'm trying to define a binary field of 10 bytes. In SQL Server I'd use varbinary(10). I know that bytea replaces varbinary(MAX) for images, but I didn't find any documentation on limiting its length.
Is there a way to do that?
You want to look at bit(n), not bytea:
http://www.postgresql.org/docs/current/static/datatype-bit.html
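For completeness, two hedged options (the table and column names are made up): bit varying caps the width in bits, while a CHECK constraint caps a bytea in bytes, which is closest to varbinary(10):

```sql
CREATE TABLE bin_demo (
    b80 bit varying(80),                        -- up to 80 bits = 10 bytes
    v10 bytea CHECK (octet_length(v10) <= 10)   -- up to 10 bytes, like varbinary(10)
);
```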

Convert an UTF-8 varchar to bytea with latin1 encoding in PostgreSQL

I'm porting a proprietary (and old) checksum algorithm from a MySQL procedure to PostgreSQL 8.4.
The whole database is UTF-8, but for this algorithm I need to convert the UTF-8 input to a bytea value with latin1 encoding. In MySQL, variables can be of different encodings and conversion is performed on the fly. Is there any function in PostgreSQL to do such a conversion?
The only alternative I see is to write a custom utf8_convert() C function which returns a bytea value and uses internally iconv() to convert the input to latin1. But I want to avoid such C functions.
From String Functions and Operators:
convert_to(text_in_database,'LATIN1')
But you have to be sure that the text can be encoded in Latin1 — you'll get an exception otherwise.
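A quick illustration of both outcomes:

```sql
-- 'ü' has a Latin-1 code point, so this returns a 6-byte bytea:
SELECT convert_to('Müller', 'LATIN1');

-- '€' does not exist in Latin-1, so this raises an error along the lines of
-- "character ... has no equivalent in encoding LATIN1":
-- SELECT convert_to('€', 'LATIN1');
```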

How to get and change encoding schema for a DB2 z/OS database using dynamic SQL statement

A DB2 for z/OS database has been set up for me. Now I want to know the encoding scheme of the database and change it to Unicode if the database uses a different encoding.
How can I do this? Can I do this using dynamic SQL statements in my Java application?
Thanks!
You need to specify that the encoding scheme is UNICODE when you are creating your table (and database and tablespace) by using the CCSID UNICODE clause.
According to the documentation:
By default, the encoding scheme of a table is the same as the encoding scheme of its table space. Also by default, the encoding scheme of the table space is the same as the encoding scheme of its database. You can override the encoding scheme with the CCSID clause in the CREATE TABLESPACE or CREATE TABLE statement. However, all tables within a table space must have the same CCSID.
For more, see Creating a Unicode Table in the DB2 for z/OS documentation.
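A sketch of the DDL and the catalog lookup (all object names here are invented; see the linked documentation for the full syntax):

```sql
-- Creating Unicode objects on DB2 for z/OS:
CREATE DATABASE MYDB CCSID UNICODE;
CREATE TABLESPACE MYTS IN MYDB CCSID UNICODE;
CREATE TABLE MYTAB (ID INTEGER, NAME VARCHAR(100))
    IN MYDB.MYTS CCSID UNICODE;

-- Checking an existing database's scheme via the catalog
-- ('U' = Unicode, 'E' = EBCDIC, 'A' = ASCII):
SELECT NAME, ENCODING_SCHEME
FROM SYSIBM.SYSDATABASE
WHERE NAME = 'MYDB';
```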
You are able to create tables via Java/JDBC, but I doubt that you will be able to create databases and tablespaces that way. I wouldn't recommend it anyway; find your closest z/OS DBA and get that person to help you.