sphinxsearch: "?" signs instead of non-Latin characters

I get "?" (question marks) instead of non-Latin characters in search results. I'm running Sphinx 2.0.3 on the latest Debian. This is not my first time working with Sphinx, but it's the first time I've hit this problem. The encoding and the database connection are fully UTF-8:
character_set_client = utf8
character_set_connection = utf8
character_set_database = utf8
character_set_results = utf8
character_set_server = utf8
character_set_system = utf8
collation_connection = utf8_general_ci
collation_database = utf8_general_ci
collation_server = utf8_general_ci
In the Sphinx config:
sql_query_pre = SET NAMES utf8
sql_query_pre = SET CHARACTER SET utf8
In the index section of the Sphinx config:
charset_type = utf-8
charset_table = 0..9, A..Z->a..z, _, a..z, U+410..U+42F->U+430..U+44F, U+430..U+44F
What am I doing wrong?

Instead of sql_query_pre = SET CHARACTER SET utf8, use:
sql_query_pre = SET CHARACTER_SET_RESULTS=utf8
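As a sketch of how the corrected pre-queries fit into the config (the source/index names, the SQL query, and the index path are placeholders, not from the original question; connection settings are omitted):
source src_main
{
    # ... sql_host / sql_user / sql_pass / sql_db connection settings ...
    sql_query_pre   = SET NAMES utf8
    sql_query_pre   = SET CHARACTER_SET_RESULTS=utf8
    sql_query       = SELECT id, title, body FROM documents
}
index idx_main
{
    source          = src_main
    path            = /var/lib/sphinxsearch/data/idx_main
    charset_type    = utf-8
    charset_table   = 0..9, A..Z->a..z, _, a..z, U+410..U+42F->U+430..U+44F, U+430..U+44F
}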

Related

PostgreSQL - how to check the encoding of .sql files in a .bat file

In my .bat file I merge all my .sql files to run, but I get this error:
character with byte sequence 0x81 in encoding "WIN1252" has no equivalent in encoding "UTF8"
How can I configure the .bat file to check the encoding and exit if it finds a problem?
My database is created like this:
CREATE DATABASE WITH ENCODING = 'UTF8';
ALTER DATABASE SET client_encoding = 'WIN1252';
Thank you.
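One possible sketch (not from the original post; it assumes the .sql files are supposed to be UTF-8 and that iconv is available on the machine, e.g. from Git for Windows or MSYS2) is to round-trip each file through iconv inside the .bat and abort on the first invalid byte:
@echo off
rem Check every .sql file for valid UTF-8 before merging or running anything.
for %%f in (*.sql) do (
    iconv -f UTF-8 -t UTF-8 "%%f" > NUL 2>&1
    if errorlevel 1 (
        echo Encoding problem in %%f
        exit /b 1
    )
)
echo All .sql files look like valid UTF-8.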

Does pg_dump preserve all Unicode characters when .sql file is ANSI?

I use
pg_dump.exe -U postgres -f "file-name.sql" database-name
to back up UTF-8 encoded databases on PostgreSQL 8.4 and 9.5 on a Windows host. Some of them may have foreign characters such as Chinese, Thai, etc. stored in character columns.
The resulting .sql file shows ANSI encoding when opened in Notepad++ (I'm NOT applying ANSI to opened files by default). How do I know if Unicode characters are always preserved in the dump file? Should I be using an archive (object) backup file instead?
Quote from the manual:
By default, the dump is created in the database encoding.
There is no difference between a text file in ANSI encoding and one in UTF-8 if no extended characters are used. Maybe your dump contains no special characters, and thus the editor doesn't identify it as UTF-8.
If you want the SQL dump in a specific encoding, use the --encoding=encoding parameter or the PGCLIENTENCODING environment variable.
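For example, reusing the command from the question (a sketch; --encoding and PGCLIENTENCODING are the two documented ways to force the dump encoding):
pg_dump.exe -U postgres --encoding=UTF8 -f "file-name.sql" database-name
rem or, equivalently, via the environment variable:
set PGCLIENTENCODING=UTF8
pg_dump.exe -U postgres -f "file-name.sql" database-name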

Character with byte sequence 0x9d in encoding 'WIN1252' has no equivalent in encoding 'UTF8'

I am reading a CSV file in my SQL script and copying its data into a PostgreSQL table. The line of code is below:
\copy participants_2013 from 'C:/Users/Acrotrend/Desktop/mip_sahil/mip/reelportdata/Participating_Individual_Extract_Report_MIPJunior_2013_160414135957.Csv' with CSV delimiter ',' quote '"' HEADER;
I am getting the following error: character with byte sequence 0x9d in encoding 'WIN1252' has no equivalent in encoding 'UTF8'.
Can anyone tell me what the cause of this issue is and how I can resolve it?
The problem is that 0x9D is not a valid byte value in WIN1252.
There's a table here: https://en.wikipedia.org/wiki/Windows-1252
The problem may be that you are importing a UTF-8 file and PostgreSQL is defaulting to Windows-1252 (which I believe is the default on many Windows systems).
You need to change the code page of your Windows command line with chcp before running the script. Or, in PostgreSQL, you can run:
SET CLIENT_ENCODING TO 'utf8';
before importing the file.
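A sketch of the chcp variant (the database name and script file are placeholders, not from the question); code page 65001 is Windows' UTF-8 code page:
chcp 65001
psql -U postgres -d mydb -f import_script.sql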
Simply specify encoding 'UTF8' in the \copy command, e.g. (I broke it into two lines for readability, but keep it all on one line):
\copy dest_table from 'C:/src-data.csv'
(format csv, header true, delimiter ',', encoding 'UTF8');
More details:
The problem is that the client encoding is set to WIN1252, most likely because the import is running on a Windows machine, but the file contains UTF-8 characters.
You can check the Client Encoding with
SHOW client_encoding;
client_encoding
-----------------
WIN1252
Every encoding has numeric ranges of valid codes. Are you sure your data really are in WIN1252 encoding?
Postgres is very strict and will not import files with broken encoding. You can use iconv, which can work in a tolerant mode and remove the broken characters. After cleaning with iconv you can import the file.
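For example (a sketch; the file names are placeholders): with -c, iconv silently drops any characters it cannot convert, and the cleaned file can then be imported:
iconv -c -f UTF-8 -t UTF-8 participants.csv > participants_clean.csv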
I had this problem today, and it was because a TEXT column contained fancy quotes that had been copy/pasted from an external source.

Invalid byte sequence for encoding UTF8 in PostgreSQL

When I try to do the following in the psql Windows command shell:
INSERT INTO NAMES (surname) VALUES ('børre')
I get the following:
ERROR: invalid byte sequence for encoding "UTF8": 0x9b
SHOW client_encoding and SHOW server_encoding both give "utf8".
Why can't the server's UTF8 encoding handle ø? I've tried changing the client_encoding to latin1, which solves the problem in the terminal, but if I insert via Python or other means, the character isn't saved as UTF-8.

Postgres using cp1252 encoding?

I have a Postgres database that uses UTF-8 as its encoding and has client_encoding set to UTF8 as well. However, when running a script file that should itself be UTF-8 encoded, Postgres seems to assume the encoding is really cp1252 and gives me the following error:
FEHLER: Zeichen mit Byte-Folge 0x81 in Kodierung "WIN1252" hat keine Entsprechung in Kodierung "UTF8"
(in English: ERROR: character with byte sequence 0x81 in encoding "WIN1252" has no equivalent in encoding "UTF8")
What is wrong here? Shouldn't the DB assume the file is in UTF8, instead of trying to convert it from cp1252? I even added the line
SET client_encoding='UNICODE';
But that didn't change anything (as said, the database is already configured that way...)
I had to manually insert the BOM, then it worked. (What the heck!)
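If you need to do that without an editor, one possible sketch (the file names are placeholders; this assumes a Unix-like shell such as Git Bash) is to prepend the three UTF-8 BOM bytes 0xEF 0xBB 0xBF to the script:
printf '\xEF\xBB\xBF' > script_with_bom.sql
cat script.sql >> script_with_bom.sql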