How to get and change the encoding scheme for a DB2 z/OS database using dynamic SQL statements

A DB2 for z/OS database has been set up for me. Now I want to find out the encoding scheme of the database and change it to Unicode if it uses any other encoding.
How can I do this? Can I do it using dynamic SQL statements in my Java application?
Thanks!

You need to specify that the encoding scheme is UNICODE when you are creating your table (and database and tablespace) by using the CCSID UNICODE clause.
According to the documentation:
By default, the encoding scheme of a table is the same as the encoding scheme of its table space. Also by default, the encoding scheme of the table space is the same as the encoding scheme of its database. You can override the encoding scheme with the CCSID clause in the CREATE TABLESPACE or CREATE TABLE statement. However, all tables within a table space must have the same CCSID.
For more, see Creating a Unicode Table in the DB2 for z/OS documentation.
You can create tables via Java/JDBC, but I doubt that you will be able to create databases and table spaces that way. I wouldn't recommend it anyway; I would find your closest z/OS DBA and get that person to help you.
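If you just want to check the current encoding scheme, the DB2 catalog exposes it. A minimal sketch follows; the database, table space, and table names are hypothetical, and I believe SYSIBM.SYSDATABASE.ENCODING_SCHEME holds 'E' (EBCDIC), 'A' (ASCII), or 'U' (Unicode):
-- check the encoding scheme of a database (hypothetical name MYDB)
SELECT NAME, ENCODING_SCHEME
  FROM SYSIBM.SYSDATABASE
 WHERE NAME = 'MYDB';
-- create the objects as Unicode from the start
CREATE TABLESPACE MYTS IN MYDB CCSID UNICODE;
CREATE TABLE MYSCHEMA.MYTAB
  (ID   INTEGER NOT NULL,
   NAME VARCHAR(100))
  IN MYDB.MYTS CCSID UNICODE;
As far as I know, there is no simple ALTER that converts an existing EBCDIC database to Unicode; the usual route is to unload the data, recreate the objects with CCSID UNICODE, and reload.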

Related

Proc SQL to Postgres UTF8 database: non-printable character shown as normal question mark, can't remove

I need advice for a problem I'm facing:
I use SAS 9.4 (desktop version) to connect to a Postgres database with the Unicode Postgres ODBC driver.
I'm using a PROC SQL statement to retrieve the data and create a SAS data file.
There is one issue:
One entry has the following value in the database in pgAdmin: "CAR "
But when I look at the SAS data file that PROC SQL created, it looks like this: "CAR ?"
Just a normal question mark.
The compress function with _FIELD = compress(_FIELD, ,'kw'); doesn't seem to work, as the question mark is just a normal question mark and not a non-printable character.
The Postgres database has UTF8 as its server encoding.
The ODBC connection uses the Unicode Postgres drivers.
I tried running SAS with the English option (creates a Wlatin1 dataset) and with the Unicode option (creates a UTF8 dataset), but nothing changes.
I would like to be able to remove this character.
Any tips or suggestions would be helpful.
Thanks!
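One diagnostic that may help before trying to strip the character: look at the raw bytes on the Postgres side to find out which code point it actually is. A sketch, with hypothetical table and column names:
-- inspect the raw bytes of the offending value
SELECT my_field,
       encode(convert_to(my_field, 'UTF8'), 'hex') AS raw_bytes
  FROM my_table
 WHERE my_field LIKE 'CAR%';
Once the code point is known, it can be removed by its actual value rather than by the question mark that SAS substitutes for it.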

How to create a synonym for a table in PostgreSQL

I am migrating this Oracle command to PostgreSQL:
CREATE SYNONYM &user..emp FOR &schema..emp;
Please suggest how I can migrate the above command.
PostgreSQL does not support SYNONYM or ALIAS. SYNONYM is a non-SQL:2003 feature implemented by Microsoft SQL Server 2005 (I think). While it does provide an interesting abstraction, due to the lack of relational integrity it can be considered a risk.
That is, you can create a synonym and advertise it to your programmers, and code gets written around it, including stored procedures; then one day the backend of this synonym (or link, or pointer) is changed or deleted, leading to a runtime error. I don't even think a PREPARE would catch that.
It is the same trap as symbolic links in Unix or null pointers in C/C++.
Instead, create a DB view; it is even updatable as long as it selects from a single table only.
CREATE VIEW schema_name.synonym_name AS SELECT * FROM schema_name.table_name;
You don't need synonyms.
There are two approaches:
1. Using the schema search path:
ALTER DATABASE xyz SET search_path = schema1, schema2, ...;
Put the schema that holds the table on the search_path of the database (or user); the table can then be used without schema qualification.
2. Using a view:
CREATE VIEW dest_schema.tab AS SELECT * FROM source_schema.tab;
The first approach is good if you have a lot of synonyms for objects in the same schema.
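A small usage sketch of the first approach, with hypothetical database, schema, and table names:
-- put the schema that owns the table on the database-wide search path
ALTER DATABASE mydb SET search_path = app_schema, public;
-- in new sessions, unqualified names resolve through the search path
SELECT * FROM emp;  -- finds app_schema.emp without qualification
Note that ALTER DATABASE ... SET only takes effect for sessions started after the change.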

DB2 DBCLOB data INSERT with Unicode data

The problem at hand is to insert data into a DB2 table which has a DBCLOB column. The table's encoding is Unicode. The subsystem is MIXED=YES with the Japanese CCSID set (290, 930, 300). The application is bound ENCODING CCSID.
I was successful in FETCHING the DBCLOB's data in Unicode; no problem there. But when I turn around and try to INSERT it back, the inserted data is interpreted as not being Unicode; it seems DB2 thinks it's EBCDIC DBCS/GRAPHIC, and the inserted row shows Unicode 0xFEFE. When I manually update the data being inserted to valid DBCS, the data inserts OK and shows the expected Unicode DBCS values.
To insert the data I am using a dynamically prepared INSERT statement with a placeholder for the DBCLOB column. The SQLVAR entry associated with the placeholder is a DBCLOB_LOCATOR with the CCSID set to 1200.
A DBCLOB locator is created by doing SET dbclobloc = SUBSTR(dbclob, 1, length). The created locator is put into the SQLDA, and then the prepared INSERT is executed.
It seems DB2 is ignoring the 1200 CCSID associated with the DBCLOB_LOCATOR SQLVAR. Attempts to put a CAST(? AS DBCLOB CCSID UNICODE) around the placeholder in the INSERT do not help, because by that time DB2 seems to have already made up its mind about the encoding of the data to be inserted.
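For reference, this is roughly what the attempted workaround looked like (table and column names are hypothetical); as described above, the cast did not change DB2's interpretation of the input:
-- dynamically prepared INSERT with an explicit cast on the placeholder
INSERT INTO MYSCHEMA.MYTAB (DBCLOB_COL)
  VALUES (CAST(? AS DBCLOB CCSID UNICODE));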
I am stuck :( Any ideas?
Greg
I think I figured it out, and it is not good: the SET statement for the DBCLOB_LOCATOR is static SQL, and the DBRM is bound ENCODING EBCDIC. Hence DB2 has no choice but to assume the data is in the CCSID of the plan.
I also tried what the books suggest and used a SELECT ... FROM SYSIBM.SYSDUMMYU to set the DBCLOB_LOCATOR. This should have told DB2 that the data was coming in as Unicode, but it failed again, with symptoms indicating it still assumed the DBCS EBCDIC CCSID.
Not good.

Is it safe to change Collation on Postgres (keeping encoding)?

I have a Postgres 9.3 database which, by mistake, has been set to:
but I need it to be:
Since the encoding doesn't change, is it safe to dump the DB and restore it later (see here) into a database with the new collation / character type?
Perfectly safe -- the collation is just telling Postgres which set of rules to apply when sorting text.
You can even set it dynamically on a per-query basis in the ORDER BY clause, and you should be able to alter it without needing to dump the database.
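For example, a per-query override (hypothetical table and column names; the collation must exist in pg_collation):
-- override the column's collation for this query only
SELECT name
  FROM users
 ORDER BY name COLLATE "en_US";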

How to add collations UNICODE_CI/_AI to an old database

I have an old database running on a Firebird 2.5 server which is missing the UNICODE_AI and UNICODE_CI collations. A direct selection on the metadata tables confirms this:
select * from RDB$COLLATIONS where rdb$collation_name like 'U%'
returns UNICODE, UNICODE_FSS, UTF8 and UCS_BASIC.
How can I add these collations to my current database?
Doing a backup and restore of the database solved the issue.