I have psql (PostgreSQL) 10.10 and client_encoding is UTF8. Entries are made by an older Delphi version that cannot use UTF8, so special characters in the database are not stored as UTF8; a ™ sign, for instance, is represented by \u0099. Is it possible to force a conversion when the character is entered into the database? Switching Delphi is not an option right now. I am sorry if this is a basic question; my knowledge of databases is limited.
It looks like your Delphi client is not using LATIN1, but WINDOWS-1252, because ™ is code point 0x99 in that encoding.
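You can confirm that mapping from any UTF8 session; a quick sketch using convert_from (which converts bytes in a named encoding to the database encoding):
SELECT convert_from('\x99'::bytea, 'WIN1252');  -- byte 0x99 read as WINDOWS-1252 yields ™ in a UTF8 database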
You can change client_encoding per session, and that is what you should do.
Either let your application execute
SET client_encoding = WIN1252;
or set the PGCLIENTENCODING environment variable, or specify client_encoding as part of the connection string.
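A minimal sketch of verifying, from psql or any session, that the setting is in effect:
SHOW server_encoding;            -- the database itself stays UTF8
SET client_encoding = WIN1252;   -- only this session's input/output conversion changes
SHOW client_encoding;            -- should now report WIN1252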
I have a DB server and client with encoding UTF8, and the client sometimes writes data in SJIS or LATIN1, which fails with:
ERROR: invalid byte sequence for encoding "UTF8": 0xd5 0x78
I tried setting client_encoding to SJIS and then it worked.
I wonder why this error happened,
because I thought UTF8 supported both of them, as described in these docs:
https://www.postgresql.org/docs/11/multibyte.html#id-1.6.10.5.7
Is there any way to make the conversion automatic without setting the encoding manually?
No, you need to explicitly tell it what encoding the client uses.
Postgres does support automatic conversion between encodings, but it does not support automatic encoding detection. If the client is assumed to use UTF-8 but actually sends SJIS bytes, those bytes may simply be rejected as invalid.
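A minimal sketch of the per-client approach, assuming a hypothetical table t with a text column col:
SET client_encoding = 'SJIS';        -- this client declares what it really sends
INSERT INTO t (col) VALUES ('...');  -- SJIS bytes arrive, the server stores them as UTF8
RESET client_encoding;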
I cannot seem to get Hungarian accented characters to store properly in my Firebird database, despite using the ISO8859_2 character set and ISO_HUN collation.
This string for example:
Magyar Képzőművészeti Egyetem, Festő szak, mester: Klimó Károly
gets displayed as
Magyar Képzomuvészeti Egyetem, Festo szak, mester: Klimo Karoly
What am I doing wrong?
Your string is UTF8-encoded. It works fine with IBExpert and a UTF8 database. Make sure that you're using the correct character set everywhere (DB connection, DB column, string).
I have a PostgreSQL 9.0 server instance and a database using UTF8 character encoding with German_Germany.1252 collation. I'm trying to get my libpq error messages on the client as US-ASCII strings. To this end I do:
PQsetClientEncoding( connection, "SQL_ASCII" );
which returns no error. However, the strings returned from PQerrorMessage() still seem to be UTF8.
Is the return value from PQerrorMessage() always guaranteed to be UTF8, no matter the client/server settings?
SQL_ASCII as a client encoding means "pass the bytes through as is", which is exactly what you didn't want. There actually isn't any client encoding that corresponds to plain ASCII. If your messages are in German, you might want a setting such as LATIN1 or LATIN9. Otherwise, change the message language to English and the messages will be ASCII anyway.
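A sketch of both options in SQL, assuming the server permits them (changing lc_messages at the session level normally requires superuser rights):
SET client_encoding = 'LATIN1';  -- keep German messages, encoded so the client can read them
SET lc_messages = 'C';           -- or: untranslated English messages, which are plain ASCII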
I am getting the following exception:
Caused by: org.postgresql.util.PSQLException: ERROR: character 0xefbfbd of encoding "UTF8" has no equivalent in "WIN1252"
Is there a way to eradicate such characters, either via SQL or programmatically?
(An SQL solution would be preferred.)
I was thinking of connecting to the DB using WIN1252, but that would give the same problem.
I had a similar issue, which I solved by setting the encoding to UTF8 with \encoding UTF8 in the client before attempting the INSERT INTO foo (SELECT * from bar WHERE x=y);. My client was using WIN1252 encoding but the database was UTF8, hence the error.
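Roughly what that looks like in a psql session (foo and bar being the example tables above; the SELECT can equally be written without the parentheses):
\encoding UTF8
INSERT INTO foo SELECT * FROM bar WHERE x = y;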
More info is available on the PostgreSQL wiki under Character Set Support (devel docs).
What do you do when you get this message? Do you import a file into Postgres? As devstuff said, it is a BOM character. This is a character Windows writes first to a text file when it is saved in UTF8 encoding; it is an invisible, zero-width character, so you won't see it when opening the file in a text editor.
Try opening the file in, for example, Notepad, saving it as ANSI encoding, and adding (or replacing a similar existing line with) a set client_encoding to 'WIN1252' line in your file.
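A sketch of how the top of such an import file might look after that edit:
-- first statement in the SQL file, before any INSERTs
SET client_encoding TO 'WIN1252';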
Don't eradicate the characters; they're real and used for good reasons. Instead, eradicate Win1252.
I had a very similar issue. I had a linked server from SQL Server to a PostgreSQL database, and some of the data in the table I was selecting from with an OPENQUERY statement had characters with no equivalent in Win1252. The problem was that the System DSN entry (found under the ODBC Data Source Administrator) I had used for the connection was configured to use PostgreSQL ANSI(x64) rather than PostgreSQL Unicode(x64). Creating a new data source with Unicode support, creating a new modified linked server, and referencing the new linked server in the OPENQUERY resolved the issue for me. Happy days.
That looks like the byte sequence 0xEF, 0xBF, 0xBD, which is the UTF8-encoded form of the Unicode replacement character U+FFFD (not quite the byte-order-mark U+FEFF, whose UTF8 form is 0xEF, 0xBB, 0xBF).
I'm not sure what Postgres's normal behaviour is, but a BOM is normally used only for encoding detection at the beginning of an input stream and is usually not returned as part of the result; a replacement character, on the other hand, usually means some earlier conversion already substituted it for a character it could not map.
In any case, your exception is due to this code point not having a mapping in the Win1252 code page. This will occur with most other non-Latin characters too, such as those used in Asian scripts.
Can you change the database encoding to be UTF8 instead of WIN1252? This will allow your columns to contain almost any character.
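An existing database's encoding cannot simply be altered in place, so in practice that means creating a UTF8 database and reloading the data into it; a rough sketch (the new database name is illustrative):
SELECT datname, pg_encoding_to_char(encoding) AS encoding FROM pg_database;  -- check current encodings
CREATE DATABASE mydb_utf8 ENCODING 'UTF8' TEMPLATE template0;                -- restore a dump into this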
I was able to get around it by using Postgres' substring function and selecting that instead:
select substring(comments from 1 for 200) from billing
The comment pointing out that the special character appeared at the start of each field was a great help in finally resolving it.
This problem appeared for us around 19/11/2016 with our old Access 97 app accessing a PostgreSQL 9.1 DB.
It was solved by changing the driver to UNICODE instead of ANSI (see plang's comment).
Here's what worked for me:
1. Enable ad-hoc queries in sp_configure.
2. Add an ODBC DSN for your linked PostgreSQL server.
3. Make sure you have both the ANSI and Unicode (x64) drivers (try with both).
4. Run a query like the one below, changing the UID, server IP, database name and password.
5. Keep the inner query (the last argument) in PostgreSQL syntax.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'ad hoc distributed queries', 1
RECONFIGURE
GO
SELECT * FROM OPENROWSET('MSDASQL',
'Driver=PostgreSQL Unicode(x64);
uid=loginid;
Server=1.2.3.41;
port=5432;
database=dbname;
pwd=password',
'select * FROM table_name limit 10;')
I faced this issue when my Windows 10 was using Mandarin (China) as the default language. The problem occurred because I tried to import a database with UTF-8. Checking via psql with \l showed that the collate and ctype were Mandarin (China).
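For reference, roughly the same check as \l can be done with a query against pg_database:
SELECT datname, pg_encoding_to_char(encoding) AS encoding, datcollate, datctype
FROM pg_database;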
The solution: reset the OS language back to US English and re-install PostgreSQL. Once the collate is back to UTF-8, you can set your OS language back again.
I wrote up the full context and solution here: https://www.yodiw.com/fix-utf8-encoding-win1252-cputf8-postgresql-windows-10/