I am trying to read Chinese text stored in a table in a PostgreSQL database, but I have not succeeded.
I am using Npgsql as the provider (npgsql.dll) with the ADO.NET classes NpgsqlConnection, NpgsqlCommand, NpgsqlDataReader, and NpgsqlDataAdapter.
Can anyone help me?
If your database has encoding SQL_ASCII, you are lost.
Other than that, set the connection string parameter Client Encoding to the value your .NET application expects.
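For example, assuming a UTF-8 client (the server name, credentials, and database below are placeholders):

```
Host=myserver;Username=myuser;Password=mypassword;Database=mydb;Client Encoding=UTF8
```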
I have a generic ODBC application reading and writing data via ODBC to some database (it could be MS SQL Server, MySQL, or anything else). The data received and sent can be Unicode. I'm using SQL_C_WCHAR for my bindings in this case.
So I have two questions here:
Can I determine the encoding in which the data came from the ODBC data source?
In which encoding should I send data to the ODBC data source? I'm running parameterised insert statement for this purpose.
My research showed that some data sources have connection options to set the encoding, but I want to write a generic application that works with anything.
I couldn't find any ODBC option that tells me the encoding of the data source. Is there something like that? The ODBC docs just say to use SQL_C_WCHAR. Is SQL_C_WCHAR UTF-16?
I did some more research, and both the Microsoft docs and the unixODBC docs seem to indicate that ODBC only supports UCS-2, so I think all data sent or received needs to be UCS-2 encoded.
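The UCS-2/UTF-16 distinction only matters outside the Basic Multilingual Plane: for BMP characters the two encodings are byte-identical, while a character beyond the BMP needs a UTF-16 surrogate pair, which a strict UCS-2 consumer sees as two separate code units. A small Python sketch, used here only to illustrate the byte-level behaviour:

```python
# A BMP character: UCS-2 and UTF-16 produce the same single 16-bit code unit
assert "é".encode("utf-16-le") == b"\xe9\x00"

# A character outside the BMP (U+1F600): UTF-16 needs a surrogate pair,
# i.e. two 16-bit code units (four bytes) -- not representable in UCS-2
emoji = "\U0001F600"
assert len(emoji.encode("utf-16-le")) == 4
```

So binding SQL_C_WCHAR is safe for BMP text either way; only supplementary-plane characters expose the difference.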
I have an agent written in LotusScript (IBM Domino 9.0.1 on Windows 10) that reads records from a DB2 database (on CentOS) and writes them to Notes documents. The DB2 table contains international names such as "Łódź" in VARCHAR fields.
The DB2 database was created as UTF-8 (code page 1208), and Domino natively supports Unicode. Unfortunately, the value loaded into the Notes document is not "Łódź" as it should be, but "? Ód?".
How can I import special characters from DB2 into Domino NSF databases correctly?
Thank you.
To import the table I used the following code, taken from the OpenNTF XSnippets:
https://openntf.org/XSnippets.nsf/snippet.xsp?id=db2-run-from-lotusscript-into-notes-form
Find where the codepage conversion is happening. Alter the lotusscript to dump the hex of the received data for the column-concerned to a file or in a dialog-box. If the hex codes differ from what is in the column, then it may be your Db2-client that is using the wrong codepage. Are you aware of the DB2CODEPAGE environment variable for Windows? That might help if it is the Db2-client that is doing the codepage conversion.
I.e., setting the environment variable DB2CODEPAGE=1208 may help, although careful testing is required to ensure it does not cause other symptoms that are mentioned online.
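As an illustration of that hex check (Python is used here just to show the bytes; the agent itself is LotusScript): if the column really holds UTF-8, the bytes for "Łódź" should be C5 81 C3 B3 64 C5 BA, whereas a lossy conversion through a Western single-byte code page replaces the Polish characters with question marks:

```python
# The correct UTF-8 (code page 1208) bytes for "Łódź"
assert "Łódź".encode("utf-8").hex(" ") == "c5 81 c3 b3 64 c5 ba"

# What a client converting to a Western code page (e.g. cp1252) produces:
# characters missing from the code page degrade to "?"
lossy = "Łódź".encode("cp1252", errors="replace").decode("cp1252")
assert lossy == "?ód?"
```

If the hex dumped by the agent already shows the degraded form, the conversion happened in the Db2 client layer before LotusScript ever saw the data.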
Our application successfully communicates with various databases (MSSQL, Oracle, Firebird) via ADO, so I'm trying to use ADO to add PostgreSQL support to our software, using the standard PostgreSQL ODBC provider. All was fine until I ran into a problem with large TEXT fields. When I use the Unicode version of the provider and try to read a TEXT field AsString, the application crashes with an EAccessViolation in the method RemoveMediumFreeBlock(). The ANSI version works, but it truncates the content of the field (I guess at the default 8190-character LongVarChar limit). Small TEXT fields are read OK.
Could you suggest what to do about this issue?
Is there a better option for working with PostgreSQL via ADO in Delphi?
I'm trying to understand PostgreSQL and Npgsql in regards to "Full Text Search". Is there something in the Npgsql project that helps doing those searches on a database?
I found the NpgsqlTsVector.cs/NpgsqlTsQuery.cs classes in the Npgsql source code project. Can they be used for "Full Text Search", and, if so, how?
Yes, since 3.0.0 Npgsql has special support for PostgreSQL's full text search types (tsvector and tsquery).
Make sure to read the PostgreSQL docs and understand the two types and how they work.
Npgsql's support for these types means that it allows you to seamlessly send and receive tsvector and tsquery from PostgreSQL. In other words, you can create an instance of NpgsqlTsVector, populate it with the lexemes you want, and then set it as a parameter in an NpgsqlCommand just like any other parameter type (the same goes for reading a tsvector or tsquery).
For more generic help on using Npgsql to interact with PostgreSQL you can read the Npgsql docs.
I had an application that used a Sybase ASA 8 database. However, the application is not working anymore and the vendor went out of business.
Therefore, I've been trying to extract the data from the database, which contains Arabic characters. When I connect to the database and display the contents, the Arabic characters do not display correctly; instead they look something like ÇáÏãÇã, which is incorrect.
I tried to export the data to a text file. Same result. Tried to save the text file with UTF-8 encoding, but to no avail.
I have no idea what collation the tables are set to. Is there a way to export the data correctly, or convert it to the correct encoding?
The problem was solved by exporting the data from the database using "Windows-1252" encoding, and then importing it into other applications with "Windows-1256" encoding.
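That export/import trick works because the bytes were stored in the Arabic single-byte code page (Windows-1256) but were being displayed as Western (Windows-1252); reinterpreting the same bytes under the right code page recovers the text. A Python sketch of the same round trip:

```python
# Arabic text ("Dammam") as Windows-1256 bytes
raw = "الدمام".encode("cp1256")

# A client that assumes Windows-1252 renders mojibake
mojibake = raw.decode("cp1252")
assert mojibake == "ÇáÏãÇã"

# Undo it: re-encode with the wrong code page, decode with the right one
assert mojibake.encode("cp1252").decode("cp1256") == "الدمام"
```

This only works losslessly because every byte in the Arabic text happens to map to a defined Windows-1252 character; bytes that Windows-1252 leaves undefined would be dropped or mangled on export.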
When you connect to the database, use the CHARSET=UTF-8 connection parameter. That will tell the server to convert the data to UTF-8 before sending it to the client application. Then you can save the data from the client to a file.
This, of course, is assuming that the data was saved with the correct character set to begin with. If it wasn't, you may be out of luck.
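In a connection string that parameter would look something like this (the DSN and credentials are placeholders, not values from the question):

```
DSN=MyAsaDatabase;UID=dba;PWD=sql;CHARSET=UTF-8
```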