PostgreSQL - COPY FROM - "unacceptable encoding" error

I get an error when trying to use the COPY command to load data from a CSV file with UCS-2 LE BOM encoding (as reported by Notepad++).
COPY pub.calls (............ )
FROM 'c:\IMPORT\calls.csv'
WITH
DELIMITER ','
HEADER
CSV
ENCODING 'UCS2';
The error is something like this:
SQL Error [22023]: ERROR: the argument of the encoding parameter should be an acceptable encoding name.
UCS-2 gives the same error.

For the list of supported character sets, see:
https://www.postgresql.org/docs/current/static/multibyte.html
or, in psql, type \encoding and press Tab twice for autocompletion:
postgres=# \encoding
BIG5 EUC_JP GB18030 ISO_8859_6 JOHAB LATIN1 LATIN3 LATIN6 LATIN9 SJIS UTF8 WIN1252 WIN1255 WIN1258
EUC_CN EUC_KR GBK ISO_8859_7 KOI8R LATIN10 LATIN4 LATIN7 MULE_INTERNAL SQL_ASCII WIN1250 WIN1253 WIN1256 WIN866
EUC_JIS_2004 EUC_TW ISO_8859_5 ISO_8859_8 KOI8U LATIN2 LATIN5 LATIN8 SHIFT_JIS_2004 UHC WIN1251 WIN1254 WIN1257 WIN874
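Since UCS-2/UTF-16 is not in that list, the file has to be re-encoded to one of the supported encodings (usually UTF8) before COPY can read it. A minimal sketch of that conversion in Python, assuming the file really is UCS-2 LE with a BOM as Notepad++ reports (calls_utf8.csv is just a made-up name for the converted file):
# Python's "utf-16" codec consumes the BOM and infers the byte order from it;
# newline="" keeps the original line endings untouched
with open(r"c:\IMPORT\calls.csv", "r", encoding="utf-16", newline="") as src, \
     open(r"c:\IMPORT\calls_utf8.csv", "w", encoding="utf-8", newline="") as dst:
    for line in src:
        dst.write(line)
Afterwards, point the COPY statement at the converted file and use ENCODING 'UTF8'.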

Related

How do I fix an encoding error in order to upload a 5GB text file using psql? (ERROR: invalid byte sequence for encoding "UTF8": 0x92)

I am trying to upload a series of tables (.txt files) into a PostgreSQL database that runs on my Windows 10 desktop. I use psql to upload the files. I have successfully uploaded a couple of tables, but the largest one (5 GB with over 20 million rows) is giving me trouble:
databasename=# \copy table1 FROM 'C:\Users\tablename.txt' DELIMITER ',' CSV HEADER;
ERROR: character with byte sequence 0x9d in encoding "WIN1252" has no equivalent in encoding "UTF8"
CONTEXT: COPY table1, line 581330
I found an answer here which suggested I check the client encoding...
databasename=# SHOW client_encoding;
client_encoding
-----------------
WIN1252
(1 row)
and then change it, which I tried:
databasename=# SET CLIENT_ENCODING TO 'utf8';
SET
I then try the same copy command again and get the following error:
ERROR: invalid byte sequence for encoding "UTF8": 0x92
CONTEXT: COPY table1, line 206051
I've read a little about 0x92 here. It sounds like there is a character in the file which cannot be encoded when I try to perform the \copy command.
Some background:
I was able to upload about 1 million rows into SQL Server 2019 (free version) using the SQL Server Import and Export Wizard. (I stopped the import because it was taking too long.) I was also able to view the file in R using read.csv. Not sure if any of this is helpful. Thank you all in advance.
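The two errors point in different directions: 0x92 is the curly apostrophe in Windows-1252, while 0x9d is not assigned to any character in Windows-1252 at all, so the file looks like mostly Windows-1252 text with a few stray bytes. A quick way to see what is really on an offending line is to dump its raw bytes; a rough sketch (path and line number taken from the messages above):
# print the raw bytes of the line COPY complained about
path = r"C:\Users\tablename.txt"
target = 206051  # line number reported in the "invalid byte sequence" error
with open(path, "rb") as f:
    for lineno, raw in enumerate(f, start=1):
        if lineno == target:
            print(raw)  # the \x.. escapes show exactly which bytes are there
            break
Once you know which bytes (or which real encoding) you are dealing with, you can either clean those lines or choose a client_encoding / ENCODING value in which they are valid.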

Does the Postgres COPY function support UTF-16 encoded files?

I am trying to use the PostgreSQL COPY function to insert a UTF-16 encoded CSV into a table. However, when running the query below:
COPY temp
FROM 'C:\folder\some_file.csv'
WITH (
DELIMITER E'\t',
FORMAT csv,
HEADER);
I get the error below:
ERROR: invalid byte sequence for encoding "UTF8": 0xff
CONTEXT: COPY temp, line 1
SQL state: 22021
and when I run the same query but add the encoding setting ENCODING 'UTF-16' or ENCODING 'UTF 16' to the WITH block, I get the error below:
ERROR: argument to option "encoding" must be a valid encoding name
LINE 13: ENCODING 'UTF 16' );
^
SQL state: 22023
Character: 377
I've looked through the Postgres documentation to try to find the correct encoding, but haven't managed to find anything. Is this because the COPY function does not support UTF-16 encoded files? I would have thought that this would almost certainly be possible!
I'm running Postgres 12 on Windows 10 Pro.
Any help would be hugely appreciated!
No, you cannot do that.
UTF-16 is not in the list of supported encodings.
PostgreSQL will never support an encoding that is not an extension of ASCII.
You will have to convert the file to UTF-8.
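If keeping a converted copy of the file on disk is not attractive, the decoding can also be done on the fly on the client and streamed into COPY FROM STDIN. A sketch using psycopg2 (the connection string is a placeholder; table name, path, and options are taken from the question):
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")  # placeholder connection string
# the text-mode file decodes UTF-16 (BOM included); psycopg2 re-encodes what it
# reads into the connection's client encoding before sending it to the server
with conn, conn.cursor() as cur, \
     open(r"C:\folder\some_file.csv", "r", encoding="utf-16") as f:
    cur.copy_expert(
        "COPY temp FROM STDIN WITH (FORMAT csv, DELIMITER E'\\t', HEADER)", f
    )
conn.close()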

PostgreSQL - ERROR: invalid byte sequence for encoding "UTF8": 0xeb 0x6e 0x74

I am working with PostgreSQL and getting the error below during the execution of an insert statement from a batch script (command line).
ERROR: invalid byte sequence for encoding "UTF8": 0xeb 0x6e 0x74
I have checked client_encoding with the SHOW client_encoding command and it shows UTF8.
I also checked the database properties with the command
select * from pg_database where datname='<mydbName>'
In the output:
datcollate = English_United States.1252
datctype = English_United States.1252
How to resolve this issue?
If the three bytes quoted by the error message are supposed to encode the string “ënt”, you can solve your problem by setting the correct client encoding, e.g.
SET client_encoding = WIN1252;
This happened because you have special characters in the column names. Rename the columns to remove the special characters, change the encoding to WIN1252, and it will work.
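A quick way to check the byte interpretation behind the first suggestion (a small sketch, run outside of psql):
# 0xeb 0x6e 0x74 read as Windows-1252 is the string "ënt",
# which explains why those bytes are rejected when the client claims to send UTF-8
print(bytes([0xEB, 0x6E, 0x74]).decode("cp1252"))  # prints: ënt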

PostgreSQL: Export data from SQL Server 2008 R2 to PostgreSQL 9.5

I have a table whose data I need to export from SQL Server to PostgreSQL.
Steps I followed:
Step 1: Export data from SQL Server:
Source: SQL Server Table
Destination: Flat file Destination
Table Or Query to copy: Query
Query:
SELECT
COALESCE(convert(varchar(max),id),'NULL') + '|'
+COALESCE(convert(varchar(max),Name),'NULL') + '|'
+COALESCE(convert(varchar(max),EDate,121),'NULL') AS A
FROM tbl_Employee;
File Name: file.txt
Step 2: Copy to PostgreSQL.
Command:
\COPY tbl_employee FROM '$FilePath\file.txt' DELIMITER '|' NULL AS 'NULL' ENCODING 'LATIN1'
I am getting the following error message:
ERROR: invalid byte sequence for encoding "UTF8": 0xc1 0x20
You tell Postgres the source is encoded as LATIN1:
\copy ... ENCODING 'LATIN1'
But that's either not the case or the file is damaged; otherwise we would not see the error message. What is the true encoding of '$FilePath\file.txt'?
The current client_encoding is not relevant for this since, quoting the manual on COPY:
ENCODING
Specifies that the file is encoded in the encoding_name. If this option is omitted, the current client encoding is used.
(\copy is just a wrapper for SQL COPY in psql.)
And your server_encoding is largely irrelevant, too - as long as Postgres can use a built-in conversion and the target encoding contains all characters of the source encoding - which is the case for LATIN1 -> UTF8: iso_8859_1_to_utf8.
So the remaining source of error is your file, which is almost certainly not valid LATIN1.
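One way to answer the "true encoding" question is to let a small script report where the file stops being valid UTF-8 and what the surrounding bytes are. A sketch (file.txt stands in for the real path, which the question only gives as '$FilePath\file.txt'):
# check for a UTF-16 byte order mark and locate the first sequence that is not valid UTF-8
data = open(r"file.txt", "rb").read()
if data[:2] in (b"\xff\xfe", b"\xfe\xff"):
    print("file starts with a UTF-16 byte order mark")
try:
    data.decode("utf-8")
    print("file decodes cleanly as UTF-8")
except UnicodeDecodeError as exc:
    print(exc)  # reports the byte offset of the first failure
    print(data[max(exc.start - 20, 0):exc.start + 20])  # raw bytes around the bad spot
Whatever this shows (a BOM, Windows-1252 bytes, damaged data) tells you which ENCODING to pass to \copy, or whether the export step itself needs to change.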

JasperReports Server: DB encoding issue - the Cyrillic symbols are not displaying

I'm trying to run a report on JasperReports Server. My MySQL DB encoding is cp1251.
The result of running the report is:
� "Emika" Ltd, 3
������� 2012
The ? marks should be Russian characters. I found a solution that suggests setting the URL in the report datasource to
jdbc:mysql://localhost:3306/database?useUnicode=true&characterEncoding=cp1251.
But it doesn't work. What am I doing wrong?
The connection settings are:
Variable_name Value
character_set_client cp1251
character_set_connection cp1251
character_set_database cp1251
character_set_filesystem binary
character_set_results cp1251
character_set_server cp1251
character_set_system utf8
I resolved the problem. I set the parameter init-connect="SET NAMES utf8" in the file /etc/mysql/my.cnf.
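For reference, that parameter lives in the [mysqld] section of /etc/mysql/my.cnf; only the init-connect line comes from the fix above:
[mysqld]
init-connect="SET NAMES utf8"
Note that MySQL does not execute init-connect for accounts with the SUPER privilege, so test with the application's own database account.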