Hungarian characters in Firebird database - firebird

I cannot seem to get Hungarian accented characters to store properly in my Firebird database despite using ISO8859_2 character set and ISO_HUN collation.
This string for example:
Magyar Képzőművészeti Egyetem, Festő szak, mester: Klimó Károly
gets displayed as
Magyar Képzomuvészeti Egyetem, Festo szak, mester: Klimo Karoly
What am I doing wrong?

Your string is UTF8-encoded. It works fine with IBExpert and a UTF8 database. Make sure that you are using the correct character set everywhere: the database connection, the column, and the string itself.
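The lossy ő → o substitution is a telltale sign: ő and ű exist in ISO8859_2 but not in ISO8859_1/WIN1252, so any hop through the wrong character set has to substitute them (Windows "best-fit" conversion maps them to plain o and u). A quick Python sketch to illustrate:

```python
# 'ő' (U+0151) is in ISO 8859-2 but not in ISO 8859-1, so a connection
# using the wrong charset cannot represent it and must substitute.
s = "Képzőművészeti"

assert s.encode("iso8859_2").decode("iso8859_2") == s  # round-trips cleanly

try:
    s.encode("iso8859_1")  # Latin-1 has no 'ő' or 'ű'
except UnicodeEncodeError as e:
    print("lossy:", e.reason)
```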

Related

Can PostgreSQL convert entries to UTF-8 even though the input is Latin1?

I have psql (PostgreSQL) 10.10 and client_encoding is UTF8. Entries are made by an older Delphi version that cannot use UTF-8, so the special characters in the DB are not stored as UTF-8; a ™ sign is represented by \u0099, for instance. Is it possible to force a conversion when the sign is entered into the database? Switching Delphi is not an option right now. I am sorry if this is a basic question; my knowledge of databases is limited.
It looks like your Delphi client is not using LATIN1 but WINDOWS-1252, because ™ is code point 0x99 in that encoding.
You can change client_encoding per session, and that is what you should do.
Either let your application execute
SET client_encoding = WIN1252;
or set the PGCLIENTENCODING environment variable or specify client_encoding as part of the connect string.
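The diagnosis is easy to verify: the byte 0x99 is the trademark sign in WINDOWS-1252, but maps to the invisible C1 control character U+0099 under LATIN1. A Python sketch (assuming the Delphi side sends raw Windows-1252 bytes):

```python
# 0x99 is '™' in Windows-1252, but an unprintable C1 control
# character (U+0099) in Latin-1 -- matching the symptom above.
raw = b"\x99"
print(raw.decode("cp1252"))         # ™
print(repr(raw.decode("latin-1")))  # '\x99'
```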

Convert Emoji UTF8 to Unicode in Powershell

I need to persist data in a database with powershell. Occasionally the data contains an emoji.
DB: On the DB side everything should be fine. The columns are declared as NVARCHAR, which makes it possible to persist emojis. When I inserted the data manually, the emojis were displayed correctly after I queried them (🤝💰).
I tested it with example data in SSMS and it worked perfectly.
Powershell: When preparing the SQL statement in PowerShell, I noticed that the emojis come out mangled as UTF-8 mojibake (ðŸ¤ðŸ’°). Basically gibberish.
Is a conversion from UTF-8 to Unicode even necessary? How can I persist the emojis as 🤝💰 and not as ðŸ¤ðŸ’°/1f600?
My colleague had the correct answer to this problem.
To persist emojis in an MS SQL database, you need to declare the column as nvarchar(max) (max is not strictly necessary), which I had already done.
I tried to persist example data that I had hardcoded in my PS script like this:
@{ description = "Example Description😊😊" }
Apparently VS Code adds some kind of encoding on top of the data (our guess).
What basically solved the issue was simply requesting the data from the API and persisting it into the database with an N-prefixed string literal, matching the nvarchar(max) column datatype.
Example:
SET @displayNameFormatted = N'"+$displayName+"'
And then include that variable in my insert statement.
Does this answer your question? "Use the NVARCHAR(size) datatype and prefix the string literal with N:"
Add emoji / emoticon to SQL Server table
One emoji in PowerShell is two UTF-16 surrogate characters, since its code point is too high to fit in 16 bits. See Surrogates and Supplementary Characters.
'🤝'.Length
2
Microsoft has made "Unicode" a confusing term, since that is what they call the UTF-16 LE encoding.
PowerShell 5.1 doesn't automatically recognize scripts encoded as UTF-8 without a BOM.
We don't know what your command line actually is, but see also: Unicode support for Invoke-Sqlcmd in PowerShell
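The surrogate-pair behaviour is easy to reproduce outside PowerShell too; here is a Python sketch of the same counting (the emoji's code point, U+1F91D, lies above U+FFFF):

```python
# '🤝' (U+1F91D) is outside the Basic Multilingual Plane, so UTF-16
# stores it as a surrogate pair: two 16-bit code units.
s = "\U0001F91D"
utf16 = s.encode("utf-16-le")
print(len(utf16) // 2)  # 2 code units, which is what '🤝'.Length reports in PowerShell
print(len(s))           # 1 code point, which is what Python counts
```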

postgresql odbc is not storing utf8 correctly into json/jsonb column

I'm using ODBC in my application (client side), trying to insert a list of UTF-8 encoded characters into a PostgreSQL json column. I'm using the PostgreSQL Unicode ODBC driver v11.1.
On the server side, the server encoding and client encoding are both set to UTF-8.
The odbc binding code in the application is :
SQLBindParameter(hstmt, ordinal, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_CHAR, columnSize, 0/*decimalDigits*/, valuePtr, 0/*bufferLength*/, strLen_or_indPtr);
However, the actual content that is stored is scrambled: for example, ö is stored as Ã¶.
It looks like the driver, or something on the Postgres server, treats my input string as WIN1252 and converts it to UTF-8, so that
ö becomes Ã¶.
My question is: what am I missing here?
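The symptom can be reproduced exactly: take the UTF-8 bytes of ö and decode them as WIN1252, and the string grows by one character per accented letter. Somewhere between SQLBindParameter and the server, the UTF-8 input is evidently being decoded a second time. A Python sketch of the corruption:

```python
# Reproduce the corruption: the UTF-8 bytes of 'ö' (0xC3 0xB6) misread
# as Windows-1252 turn into the two characters 'Ã' and '¶'.
original = "ö"
utf8_bytes = original.encode("utf-8")  # b'\xc3\xb6'
mangled = utf8_bytes.decode("cp1252")  # driver/server misreads as WIN1252
print(mangled)  # Ã¶
```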

Escape Cyrillic, Chinese, Arabic, Hebrew characters in postgresql

I'm trying to load records from a flat file into a Postgres table. I'm doing it with the COPY command, which has worked well so far.
But now I am receiving fields containing words in Chinese, Japanese, Cyrillic, and other languages, and when I try to load them I get an error.
How can I escape those characters in Postgres? I searched, but I have not found any reference to this kind of topic.
You should not escape the characters, you should load them as they are.
If your database encoding is UTF8, that's no problem; if it is not UTF8, change that.
For each file, figure out what its encoding is and use the ENCODING option of COPY or the environment variable PGCLIENTENCODING so that PostgreSQL knows which encoding the file is in.
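If you are unsure what encoding a given file uses, a trial decode can narrow it down before you pick the value for COPY's ENCODING option. A rough Python sketch (the helper name, candidate list, and file contents are illustrative):

```python
# Hypothetical helper: try a few likely encodings and report the
# first one that decodes the raw bytes without errors.
data = "Привет, 你好, שלום".encode("utf-8")  # stand-in for the file's bytes

def guess_encoding(raw, candidates=("utf-8", "cp1251", "latin-1")):
    for enc in candidates:
        try:
            raw.decode(enc)
            return enc
        except UnicodeDecodeError:
            continue
    return "unknown"

enc = guess_encoding(data)
print(enc)  # utf-8
# Then tell COPY about it, e.g.:
#   COPY my_table FROM '/path/data.csv' WITH (ENCODING 'UTF8');
```

Note that a trial decode is only a heuristic: Latin-1 accepts any byte sequence, so order the candidates from strictest to most permissive.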

Character encoding for Postgres API function return values?

I have a 9.0 postgres server instance and a database using UTF8 character encoding with German_Germany.1252 collation. I'm trying to get my libpq error messages on the client as US-ASCII strings. To this end I do:
PQsetClientEncoding( connection, "SQL_ASCII" );
which returns no error. However, the strings returned from PQerrorMessage() still seem to be UTF8.
Is the return value from PQerrorMessage always guaranteed to be UTF8? No matter the client/server settings?
SQL_ASCII as a client encoding means to pass the bytes through as-is, which is exactly what you didn't want. There actually isn't any client encoding that corresponds to plain ASCII. If your messages are in German, you might want a setting such as LATIN1 or LATIN9. Otherwise, change the language to English and the messages will be pure ASCII anyway.
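The point about German messages is easy to check: typical umlauts fit in LATIN1/LATIN9 but not in pure ASCII, so there is no client encoding that could deliver German text as US-ASCII. A Python sketch (the message text is illustrative, not actual libpq output):

```python
# German diagnostic text encodes fine in Latin-1, but pure ASCII
# cannot represent the umlauts at all.
msg = "FEHLER: Spalte »größe« existiert nicht"  # illustrative sample text

msg.encode("latin-1")  # succeeds: all characters are in Latin-1

try:
    msg.encode("ascii")
except UnicodeEncodeError:
    print("not representable in ASCII")
```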