JBoss jBPM task comment does not accept Chinese

JBoss jBPM user task comments do not accept Chinese characters.
This seems related to UTF-8 encoding. I tried to alter the table charset, but it is not working yet:
ALTER TABLE task_comment MODIFY text longtext CHARACTER SET utf8mb4
COLLATE utf8mb4_unicode_ci;
How can I fix this issue?
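For reference, a minimal troubleshooting sketch, assuming jBPM persists comments to MySQL (task_comment is the table from the question; the rest are standard MySQL commands). The connection charset matters as much as the column charset, so the JDBC connection must also use utf8mb4:
-- check the effective charset of the table and its columns
SHOW CREATE TABLE task_comment;
-- check server- and connection-level charsets
SHOW VARIABLES LIKE 'character_set_%';
-- force the current connection to utf8mb4 for a quick test
SET NAMES utf8mb4;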

Related

Upper() doesn't uppercase accented characters

Within our Postgres 12 database, using the united_states.utf8 collation, we have a dataset with Spanish data that includes accented characters. However, when we upper() the values in the field, unaccented characters are correctly uppercased, but accented characters are not.
upper('anuncio genérico para web del cliente') gives 'ANUNCIO GENéRICO PARA WEB DEL CLIENTE'
How can I correct this to get the expected result of 'ANUNCIO GENÉRICO PARA WEB DEL CLIENTE'?
I have tried forcing the string into the C and POSIX collations, but these are ANSI only.
I have discovered the problem and a solution to it. My DB, as mentioned in the question, used 'English_United States.utf8' collation as default to match the encoding of UTF8. However on a windows environment, the 'English_United Kingdom.1252' collation works with UTF8 encoding, despite being specified for ANSI, and returns the uppercase characters as expected.
To resolve the issue I had to create the collation in the DB using:
CREATE COLLATION "English_United Kingdom.1252" (LC_COLLATE='English_United Kingdom.1252', LC_CTYPE='English_United Kingdom.1252');
This places it in the public schema. You can then manually correct the issue in queries by calling COLLATE "English_United Kingdom.1252" against any strings with accents, or use
alter table [table_name] alter column [column_name] type text collate "English_United Kingdom.1252"
against any columns that have accented characters to fix the collation permanently. Unfortunately, there is no way to change the default collation for a DB once it has been created without doing a full backup, drop, and restore.
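A minimal usage sketch of the per-query fix described above, using the sample string from the question (this assumes the CREATE COLLATION statement above has already been run):
SELECT upper('anuncio genérico para web del cliente' COLLATE "English_United Kingdom.1252");
-- expected: 'ANUNCIO GENÉRICO PARA WEB DEL CLIENTE'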

Problems with COLLATE in PostgreSQL 12

My problem:
I work in Windows 10 and my computer is set up for Portuguese (pt_BR);
I'm building a database in PostgreSQL where I need certain columns to remain in Portuguese, but others to be in en_US - namely, those storing numbers and currency. I need $ instead of R$ and 1,000.00 instead of 1.000,00.
I tried to create columns this way using the COLLATE clause, as:
CREATE TABLE crm.TESTE (
    prodserv_id varchar(30) NOT NULL,
    prodserv_name varchar(140) NULL,
    fk_prodservs_rep_acronym varchar(4) NULL,
    prodserv_price numeric NULL COLLATE "en_US",
    CONSTRAINT pk_prodservs_prodserv_id PRIMARY KEY (prodserv_id)
);
But I get the error message:
SQL Error [42704]: ERROR: collation "en_US" for encoding "UTF8" does not exist
Database metadata shows Default Encoding: UTF8 and Collate Portuguese_Brazil.1252
It will be deployed at my ISP, which runs Linux.
Any suggestions would be greatly appreciated.
Thanks in advance.
A collation defines how strings are compared. It is not applicable to numerical data.
Moreover, PostgreSQL uses the operating system's collations, which causes problems when porting a database from Windows to other operating systems. The collation would be called English on Windows and en_US.utf8 on operating systems that use glibc.
To influence the formatting of numbers and currency symbols, set the lc_numeric and lc_monetary parameters appropriately (English on Windows, en_US elsewhere). Note that while lc_monetary affects the string representation of the money data type, these settings do not influence the string representation of numbers. You need to use to_char like this:
SELECT to_char(1000, '999G999G999D00 L');
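A short sketch putting the pieces together; the locale names follow the note above ('English' on Windows, en_US elsewhere) and must exist on the host OS:
-- session-level settings (use 'English' on Windows)
SET lc_numeric = 'en_US.utf8';
SET lc_monetary = 'en_US.utf8';
-- G, D and L pick up the group separator, decimal point and currency symbol from the locale
SELECT to_char(1000, '999G999G999D00 L');  -- e.g. '1,000.00 $'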

ORA-12899 - value too large for column when upgrading to Oracle 12C

My project is going through a tech upgrade so we are upgrading Oracle DB from 11g to 12c. SAP DataServices is upgraded to version 14.2.7.1156.
The tables in Oracle 12c default to varchar (byte) when they should be varchar (char). I understand this is normal. So I altered the session for each datastore, running
`ALTER session SET nls_length_semantics=CHAR;`
When I create a new table with varchar (1), I am able to load Unicode characters such as Chinese characters (e.g. 东) into the new table from Oracle.
However, when I try to load the same Unicode character into the same table via SAPDS, it throws the error 'ORA-12899 - value too large for column'. My datastore settings are:
Locale
Language: default
Code Page: utf-8
Server code page: utf-8
Additional session parameters:
ALTER session SET nls_length_semantics=CHAR
I would really appreciate knowing what settings I need to change in my SAP BODS, since my Oracle side seems to be working fine.
I think you should consider modifying the table column from varchar2(x BYTE) to varchar2(x CHAR) to allow Unicode (UTF-8) data and avoid ORA-12899.
create table test1 (name varchar2(100));
insert into test1 values ('east');
insert into test1 values ('东');
alter table test1 modify name varchar2(100 char);
-- check CHAR_USED (B = byte semantics, C = char semantics) for each column:
select column_name, data_type, char_used from user_tab_columns where table_name='TEST1';
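As a complementary check (a sketch; nls_session_parameters is a standard Oracle view), you can confirm that the ALTER SESSION issued by the datastore actually took effect:
select value from nls_session_parameters where parameter = 'NLS_LENGTH_SEMANTICS';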

Change MySQL-Charset from utf8 to utf8mb4 with PHPMyAdmin

When I want to change the MySQL charset from utf8 (utf8_general_ci) to utf8mb4 (utf8mb4_unicode_ci) with PHPMyAdmin, is it sufficient if I do these things?
Change the database collation to "utf8mb4_unicode_ci"
Change the table collations to "utf8mb4_unicode_ci"
Change every text column to "utf8mb4_unicode_ci"
Change set_charset in my PHP code to "utf8mb4"
Is this correct, or is something still missing? Where could problems occur?
Make all collations the same:
for the database - utf8mb4_unicode_ci
for tables - utf8mb4_unicode_ci
for columns - utf8mb4_unicode_ci
If you want to avoid future problems with string functions caused by mixed collations, it is also good to set the default collation for the server.
Edit my.cnf (my.ini):
[mysqld]
collation-server = utf8mb4_unicode_ci
init-connect='SET NAMES utf8mb4'
character-set-server = utf8mb4
This matters because if you later create new tables from a script rather than manually, they will be created with the server's default settings, and string functions may stop working properly if those defaults do not match.
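For the conversion itself, a rough sketch of the statements such a change corresponds to (mydb and mytable are placeholder names; CONVERT TO rewrites the table and all its text columns in one step):
-- database default, affects only newly created tables
ALTER DATABASE mydb CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
-- convert an existing table and all its text columns
ALTER TABLE mytable CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;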

How to get and change the encoding scheme for a DB2 z/OS database using dynamic SQL statements

A DB2 for z/OS database has been set up for me. Now I want to know the encoding scheme of the database and change it to Unicode if it uses another encoding.
How can I do this? Can I do this using dynamic SQL statements in my Java application?
Thanks!
You need to specify that the encoding scheme is UNICODE when you are creating your table (and database and tablespace) by using the CCSID UNICODE clause.
According to the documentation:
By default, the encoding scheme of a table is the same as the encoding scheme of its table space. Also by default, the encoding scheme of the table space is the same as the encoding scheme of its database. You can override the encoding scheme with the CCSID clause in the CREATE TABLESPACE or CREATE TABLE statement. However, all tables within a table space must have the same CCSID.
For more, see Creating a Unicode Table in the DB2 for z/OS documentation.
You are able to create tables via Java/JDBC, but I doubt that you will be able to create databases and tablespaces that way. I wouldn't recommend it anyway; I would find your closest z/OS DBA and get that person to help you.
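For the "how do I get the encoding scheme" part, a sketch against the DB2 for z/OS catalog (assuming the database is named MYDB; to my knowledge ENCODING_SCHEME is 'U' for Unicode, 'E' for EBCDIC, 'A' for ASCII):
-- read the current encoding scheme from the catalog
SELECT NAME, ENCODING_SCHEME FROM SYSIBM.SYSDATABASE WHERE NAME = 'MYDB';
-- when creating objects, request Unicode explicitly
CREATE TABLESPACE MYTS IN MYDB CCSID UNICODE;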