I have a database with one column of the type nvarchar. If I write
INSERT INTO table VALUES ("玄真")
It shows ¿¿ in the table. What should I do?
I'm using SQL Developer.
Use single quotes, rather than double quotes, to create a text literal, and for an NVARCHAR2/NCHAR text literal you need to prefix it with N:
SQL Fiddle
Oracle 11g R2 Schema Setup:
CREATE TABLE table_name ( value NVARCHAR2(20) );
INSERT INTO table_name VALUES (N'玄真');
Query 1:
SELECT * FROM table_name
Results:
| VALUE |
|-------|
| 玄真 |
First, using NVARCHAR might not even be necessary.
The 'N' character data types are for storing data that doesn't 'fit' in the database's defined character set; there's an auxiliary character set, defined as the NCHAR character set, for exactly that purpose. It's kind of a band-aid: once you create a database, it can be difficult to change its character set. Moral of this story: take great care in defining the character set when creating your database, and do not just accept the defaults.
Here's a scenario (LiveSQL) where we're storing a Chinese string in both NVARCHAR and VARCHAR2.
CREATE TABLE SO_CHINESE ( value1 NVARCHAR2(20), value2 varchar2(20 char));
INSERT INTO SO_CHINESE VALUES (N'玄真', '我很高興谷歌翻譯。');
select * from SO_CHINESE;
Note that both character sets are in the Unicode family. Note also that I told my VARCHAR2 column to hold 20 characters (the "20 char" in the definition). That's because some characters may require up to 4 bytes to be stored; a plain definition of (20), which defaults to byte semantics, would give you room to store only 5 of those characters.
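If you want to be explicit about the length semantics, Oracle lets you state them per column. A minimal sketch (the table and column names here are made up for illustration):

CREATE TABLE semantics_demo (
  v_bytes VARCHAR2(20 BYTE), -- 20 bytes: as few as 5 characters if each one needs 4 bytes
  v_chars VARCHAR2(20 CHAR)  -- 20 characters, whatever their byte width
);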
Let's look at the same scenario using SQL Developer and my local database.
And to confirm the character sets:
SQL> clear screen
SQL> set echo on
SQL> set sqlformat ansiconsole
SQL> select *
2 from database_properties
3 where PROPERTY_NAME in
4 ('NLS_CHARACTERSET',
5 'NLS_NCHAR_CHARACTERSET');
PROPERTY_NAME              PROPERTY_VALUE   DESCRIPTION
NLS_NCHAR_CHARACTERSET     AL16UTF16        NCHAR Character set
NLS_CHARACTERSET           AL32UTF8         Character set
First of all, you should establish the Chinese character encoding on your database, for example
UTF-8, Chinese_Hong_Kong_Stroke_90_BIN, Chinese_PRC_90_BIN, Chinese_Simplified_Pinyin_100_BIN ...
I'll show you an example with SQL Server 2008 (Management Studio), which includes all of these collations; you can find similar character encodings in other databases (MySQL, SQLite, MongoDB, MariaDB...).
Create the database with Chinese_PRC_90_BIN, though you can choose another collation:
Select a Page (Left Header) Options > Collation > Choose the Collation
Create a Table with the same Collation:
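If you prefer a script to the Management Studio dialogs, something along these lines should be equivalent (a sketch only: the database name and column name below are made up, and only ChineseTable comes from the insert that follows):

CREATE DATABASE ChineseDb COLLATE Chinese_PRC_90_BIN;
GO
USE ChineseDb;
GO
-- a VARCHAR column relies on the collation's code page (936) to hold the Chinese text;
-- an NVARCHAR column with an N'...' literal would work regardless of the collation
CREATE TABLE ChineseTable (name VARCHAR(100) COLLATE Chinese_PRC_90_BIN);
GO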
Execute the Insert Statement
INSERT INTO ChineseTable VALUES ('玄真');
Related
I've been having some problems trying to save a word into a column limited to varchar(9).
create database big_text
LOCALE 'en_US.utf8'
ENCODING UTF8
create table big_text(
description VARCHAR(9) not null
)
-- OK: here 'é' is the precomposed character, so the word is 9 code points
insert into big_text (description) values ('sintético')
-- Error here: this 'é' is 'e' + combining acute accent, so the word is 10 code points
insert into big_text (description) values ('sintético')
I already know that the problem is that one word uses 'é' -> LATIN SMALL LETTER E WITH ACUTE (in this case there is only 1 code point) while the other word uses 'e' followed by COMBINING ACUTE ACCENT (in this case there are 2 code points).
How can I store the same word, in either representation, in a varchar(9) column? Is there some configuration that makes the database understand both forms? I thought that the database being UTF8 would be enough, but it isn't.
I'd appreciate any explanation that could help me understand where I'm wrong. Thank you!
edit: Actually, I would like to know if there is any way for Postgres to normalize automatically for me.
A possible workaround is to use a CHECK constraint to enforce the character-length limit.
show lc_ctype;
lc_ctype
-------------
en_US.UTF-8
create table big_text(
description VARCHAR not null CHECK (length(normalize(description)) <= 9)
)
-- Note shortened string. Explanation below.
select 'sintético'::varchar(9);
varchar
----------
sintétic
insert into big_text values ('sintético');
INSERT 0 1
select description, length(description) from big_text;
description | length
-------------+--------
sintético | 10
insert into big_text values ('sintético test');
ERROR: new row for relation "big_text" violates check constraint "big_text_description_check"
DETAIL: Failing row contains (sintético test).
From the Character Types documentation, here is the explanation for the string truncation vs. the error you got when inserting:
An attempt to store a longer string into a column of these types will result in an error, unless the excess characters are all spaces, in which case the string will be truncated to the maximum length. (This somewhat bizarre exception is required by the SQL standard.)
If one explicitly casts a value to character varying(n) or character(n), then an over-length value will be truncated to n characters without raising an error. (This too is required by the SQL standard.)
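To see what the normalize() call in the CHECK constraint is doing, here is a small illustration; it assumes Postgres 13 or later, where normalize() exists and defaults to NFC:

-- the decomposed spelling: 'e' + U+0301 combining acute = 10 code points
select length(E'sinte\u0301tico')            as raw_len,  -- 10
       length(normalize(E'sinte\u0301tico')) as nfc_len;  -- 9, the accent collapses into é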
We are migrating DB2 data to PostgreSQL 11.x using AWS DMS. We have varchar fields in DB2 with trailing spaces, and without any TRIM these fields work fine when used in a WHERE clause; I think DB2 handles the trailing spaces internally, as these are varchar fields. But after moving to PostgreSQL these fields do not work without TRIM, and sometimes they give unexpected results even with TRIM. Below is the detailed problem.
Source: DB2 - RECIP_NUM -- VARCHAR(10) -- 'ST001 '
select RECIP_NUMBER, SERV_TYPE, LENGTH(SERV_TYPE) AS before_trim_COL_LENGTH, LENGTH(trim(SERV_TYPE)) AS after_trim_COL_LENGTH
from serv_type rst
WHERE SERV_TYPE = 'ST001' -- THIS WORKS FINE WITHOUT TRIM
Output: (screenshot of the DB2 output)
Target: PGSQL -- RECIP_NUM -- VARCHAR(10) -- 'ST001 '
select RECIP_NUMBER, SERV_TYPE, LENGTH(SERV_TYPE) AS COL_LENGTH
from serv_type rst
WHERE trim(SERV_TYPE) = 'ST001' -- THIS IS NOT GIVING ANY OUTPUT WITHOUT TRIM
Output: (screenshot of the PostgreSQL output)
Is there any way we can tell PostgreSQL to ignore the trailing spaces of a VARCHAR Column?
Postgres doesn't follow the SQL standard, which requires the shorter string to be padded when comparing VARCHAR or TEXT strings; it only pads CHAR strings. Therefore, you can use ...WHERE SERV_TYPE::char = 'ST001'::char to simulate the Db2 behaviour. Note, though, that this will preclude the use of an index on SERV_TYPE, the same as when using trim(SERV_TYPE).
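A small sketch of that idea (the demo table is made up; I give the cast an explicit length matching the column definition so nothing is truncated):

-- trailing spaces are significant when comparing varchar values
create table serv_type_demo (serv_type varchar(10));
insert into serv_type_demo values ('ST001     ');

select * from serv_type_demo where serv_type = 'ST001';            -- no rows
select * from serv_type_demo where serv_type::char(10) = 'ST001';  -- matches: CHAR comparison ignores trailing blanks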
In my table product I have a column product_name of type VARCHAR and size 100: product_name varchar(100).
When I try to insert a name with special characters like this one:
°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%°°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°%âä°
I get this error:
ERROR : org.hibernate.util.JDBCExceptionReporter:78 : logExceptions() : Error for batch element #1: DB2 SQL Error: SQLCODE=-302, SQLSTATE=22001, SQLERRMC=null, DRIVER=3.57.82
My product_name column can hold 100 characters; for me, 'â' is 1 character.
Is there another type in DB2 (other than varchar) that I can set for the product_name column,
so that I can execute this query:
alter table product alter column product_name set data type otherType(100);
Check out the
STRING_UNITS
database configuration parameter. You can use it to switch from the default byte length semantics to character length semantics. This means char(100) is by default interpreted by DB2 as 100 bytes; since a character in a Unicode database can span 1-4 bytes, 100 bytes are not necessarily enough to store 100 characters. After switching to STRING_UNITS = CODEUNITS32, a column defined as varchar(100) holds 100 characters.
So you do not need another type, just another database configuration setting.
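For example, a sketch assuming Db2 LUW 11.1 or later (mydb stands in for your database name):

-- switch the database default from byte (OCTETS) to character (CODEUNITS32) semantics
UPDATE DB CFG FOR mydb USING STRING_UNITS CODEUNITS32;

-- or state the units explicitly for just the one column from the question
ALTER TABLE product ALTER COLUMN product_name SET DATA TYPE VARCHAR(100 CODEUNITS32);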
I am trying to run a CSV import using the COPY command for some data that includes a guillemet (»). Redshift complains that the column value is too long for the varchar column I have defined. The error in the "Loads" tab in the Redshift GUI displays this character as two dots: .. - had it been treated as one character, it would have fit in the varchar column. It's not clear whether there is some sort of conversion error occurring or whether this is a display issue.
When trying to do plain INSERTs I run into strange behavior as well:
dev=# create table test (name varchar(3));
CREATE TABLE
dev=# insert into test values ('bla');
INSERT 0 1
3 characters treated as 4?
dev=# insert into test values ('bl»');
ERROR: value too long for type character varying(3)
dev=# insert into test values ('b»');
INSERT 0 1
Why does char_length return 2?
dev=# select char_length(name), name from test;
char_length | name
-------------+------
2 | b»
I've checked the client encoding and database encodings and those all seem to be UTF8/UNICODE.
You need to increase the length of your varchar field. Multibyte characters use more than one byte, and the length in a Redshift varchar definition is byte-based, so your special character may be taking more than one byte. If it still doesn't work, refer to the Redshift documentation page below:
http://docs.aws.amazon.com/redshift/latest/dg/multi-byte-character-load-errors.html
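For the example from the question, » occupies two bytes in UTF-8, so a sketch of the fix would be:

-- VARCHAR(n) in Redshift is n bytes; 'bl»' needs 2 + 2 = 4 bytes
create table test_fixed (name varchar(4));
insert into test_fixed values ('bl»');
select name, char_length(name), octet_length(name) from test_fixed;
-- name | char_length | octet_length
-- bl»  |           3 |            4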
I've run into a problem in a project I'm working on: some of the string values in a specific SQL Server 2008 table column contain Unicode characters. For example, instead of a dash some strings will instead contain an EM DASH (http://www.fileformat.info/info/unicode/char/2014/index.htm).
The column values that contain Unicode characters are causing problems when I send HTTP requests to a third-party server. Is there a way to query what rows contain one-or-more Unicode characters, so I can at least begin to identify how many rows need to be fixed?
You want to find all strings that contain one or more characters outside ASCII characters 32-126.
I think this should do the job.
SELECT *
FROM your_table
WHERE your_column LIKE N'%[^ -~]%' collate Latin1_General_BIN
One way you can do it is to see which rows no longer equal themselves when converted to a datatype that doesn't support unicode.
CREATE TABLE myStrings (
string nvarchar(max) not null
)
INSERT INTO myStrings (string)
SELECT 'This is not unicode' union all
SELECT 'This has '+nchar(500)+' unicode' union all
SELECT 'This also does not have unicode' union all
SELECT 'This has lots of unicode '+nchar(600)+nchar(700)+nchar(800)+'!'
SELECT cast(string as varchar(max))
FROM myStrings
SELECT *
FROM myStrings
WHERE cast(cast(string as varchar(max)) as nvarchar(max)) <> string
SELECT *
FROM your_table
WHERE your_column LIKE N'%[^ -~]%' collate Latin1_General_BIN
finds all strings that contain one or more characters within ASCII characters 32-126.
I thought the purpose was to find strings where ASCII characters are not in the range 32-126?
NOT is possible with LIKE. Wouldn't this work?
SELECT *
FROM your_table
WHERE your_column NOT LIKE N'%[^ -~]%'
No collate required.