Sybase ASA 8 character issue - £ vs ú

I have an issue at a client where the same data field displays differently in our legacy application installed on two different machines. The character in question is the UK pound sign £ which on some machines displays as ú.
I have tried to over-type this character with £ on machines where it's wrong, but this then "breaks" it on machines where it was working correctly before.
Oddly, this issue has started to spread to other machines even though there have been no changes to the application for several years, and the client assures me that no new software or updates have been applied to them. The display of the field's value is consistent across all connections to the database, i.e. through our application, Interactive SQL, and Crystal Reports 8, 9 & 10.
All client machines are connecting via ODBC to the same ASA 8.0.2.4234 database server service over TCP/IP.

The Sybase ODBC clients will almost certainly be using different character sets. Without knowing a bit more about your application it is difficult to know where the wrong character set is being picked up, but you can override the character set in use when you set up the ODBC data source.
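For what it's worth, this specific £/ú swap is the classic symptom of an OEM/ANSI code-page mismatch: the byte 0xA3 is £ in Windows-1252 (ANSI) but ú in the DOS OEM code pages 437 and 850, so the same stored byte renders differently depending on which character set each client assumes. As a sketch of the override, assuming your ASA 8 driver build supports the CharSet (CS) connection parameter (the DSN name and credentials here are placeholders - the same setting is usually exposed as a character-set field in the ODBC Administrator dialog):

    DSN=LegacyApp;UID=dba;PWD=sql;CharSet=cp1252

Forcing every client to the same character set should also stop the value from "breaking" on one machine when it is corrected on another.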

Related

PostgreSQL + Delphi XE7 + ADO + ODBC

Our application successfully communicates with various databases (MSSQL, Oracle, Firebird) via ADO, so I'm trying to use ADO to add a PostgreSQL option to our software as well. I use the standard PostgreSQL ODBC provider. All was fine until I ran into a problem with large TEXT fields. When I use the Unicode version of the provider and try to read a TEXT field AsString, the application simply crashes with an EAccessViolation in RemoveMediumFreeBlock(). The ANSI version works, but it truncates the content of the field (I suspect at the driver's default 8190-character LongVarChar limit). Small TEXT fields are read fine.
Could you suggest what to do about this issue?
Is there a better way to work with PostgreSQL via ADO in Delphi?
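As a note on the truncation: the 8190 figure matches psqlODBC's "Max LongVarChar" datasource option, which defaults to 8190 and cuts off longer TEXT values. A sketch of raising it via the connection string - the keyword spelling should be verified against your installed driver version, and server/database/credentials are placeholders:

    Driver={PostgreSQL ANSI};Server=dbhost;Port=5432;Database=mydb;Uid=me;Pwd=secret;MaxLongVarcharSize=-1

The same option can be changed on the DSN's Datasource options page; -1 is commonly documented as "unlimited" in recent driver builds.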

Database encoding in PostgreSQL

I have recently started using PostgreSQL for creating and updating existing SQL databases. Being rather new to this, I ran into the issue of selecting the correct encoding while creating a new database. UTF-8 (the default) did not work for me, as the data to be stored is in various languages (English, Chinese, Japanese, Russian, etc.) and also includes symbolic characters.
Question: what is the right database encoding to satisfy my needs?
Any help is highly appreciated.
There are four different encoding settings at play here:
The server side encoding for the database
The client_encoding that the PostgreSQL client announces to the PostgreSQL server. The PostgreSQL server assumes that text coming from the client is in client_encoding and converts it to the server encoding.
The operating system default encoding. This is the default client_encoding set by psql if you don't provide a different one. Other client drivers might have different defaults; e.g. PgJDBC always uses utf-8.
The encoding of any files or text being sent via the client driver. This is usually the OS default encoding, but it might be a different one - for example, your OS might be set to use utf-8 by default, but you might be trying to COPY some CSV content that was saved as latin-1.
You almost always want the server encoding set to utf-8. It's the rest that you need to change depending on what's appropriate for your situation. You would have to give more detail (exact error messages, file contents, etc) to be able to get help with the details.
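A minimal sketch of the first two settings in SQL (the database name is a placeholder):

    -- Server-side encoding is fixed at creation time; template0 avoids
    -- inheriting an incompatible encoding/locale from template1.
    CREATE DATABASE mydb WITH ENCODING 'UTF8' TEMPLATE template0;

    -- Per-session client encoding: the server converts between this
    -- and the server encoding automatically.
    SET client_encoding TO 'UTF8';

    -- Verify both ends.
    SHOW server_encoding;
    SHOW client_encoding;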

Different Firebird query results from dev machine and production server (cyrillic characters)

I'm connecting to a remote Firebird 2.1 DB server and querying data that contains some Cyrillic characters together with some Latin ones.
The problem is that when I deploy the app on the production system, the Cyrillic characters look like this: ÂÚÇÄÓØÍÀ. In addition, when trying to log what comes in from the DB, the Cyrillic content is simply skipped in the log file (i.e. I'm not seeing the ÂÚÇÄÓØÍÀ at all).
At this point I'm not sure whether I'm getting inconsistent data from the DB or whether the production environment can't recognise those characters for some reason.
I've been poking at this for quite some time now and have run out of ideas, so any hints would be great.
The dev machine I use runs Windows 7 Ultimate SP1, and my system locale is Bulgarian.
The production server is accessed via Parallels Plesk Panel, and I'm not sure what's underneath.
If you did not specify any character set in your connection properties, then almost all Firebird drivers default to the connection character set NONE. This means that Firebird will send the bytes of strings exactly as they are stored in the database, without any conversion; on the other side, the driver will use the default system character set to convert those bytes to strings. If you use multiple systems with various default system character sets, you will get different results.
You should always explicitly specify a connection character set (WIN1251 in your case), unless you really know what you are doing.
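How you spell that depends on the driver. Two sketches (host, path and credentials are placeholders; check the exact keyword against your driver's documentation):

    # Jaybird (JDBC): Firebird character set passed as a URL property
    jdbc:firebirdsql://dbhost:3050/C:/data/app.fdb?encoding=WIN1251

    # Firebird ADO.NET provider: Charset keyword in the connection string
    DataSource=dbhost;Database=C:\data\app.fdb;User=SYSDBA;Password=masterkey;Charset=WIN1251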

Refused database connection causes encoding error

I have a PostgreSQL server with several databases. Each user can only connect to certain databases.
So far so good. I wanted to test that everything worked, so I used pgAdmin III to log in as a restricted user. When I try to connect to a database the user has no connection rights to, something seems to happen to the log file!
It can no longer be read by the server-status window. All I get is a flood of messages about invalid byte sequences for encoding UTF8.
The only way to stop those message windows is to kill the program and force PostgreSQL to create a new log file.
Can anyone explain why that happens and how I can stop it?
OK, I think the problem is the "ü" in "für". The error message seems to be complaining about character code 0xFC, which in latin-1 (and similar encodings) is a lower-case u with umlaut.
Messages sent back via a database connection should be translated to the client encoding. However, the log file contains output from a variety of sources, and according to this there were issues fairly recently (2012):
It's a known issue, I'm afraid. The PostgreSQL postmaster logs in the
system locale, and the PostgreSQL backends log in whatever encoding
their database is in. They all write to the same log file, producing a
log file full of mixed encoding data that'll choke many text editors.
So - I'm guessing your system locale is 8859-1 (or -15 perhaps) whereas pgAdmin is expecting UTF-8. Short term, you could set the system encoding to UTF-8; longer term, drop a bug report over to the pgAdmin team - one error message is helpful, but after that it should probably just put hex codes in the text or some such.
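If changing the whole system locale is too invasive, PostgreSQL can also be told which locale to use for server messages on its own. A sketch, assuming a UTF-8 locale is installed on the host (the locale name is a placeholder - use one your OS actually provides):

    # postgresql.conf - log server messages in a UTF-8 locale so the
    # log file ends up in one consistent encoding
    lc_messages = 'en_US.UTF-8'

This takes effect after a configuration reload.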

Oracle 11g handling newlines differently than 10g

We have a web service (closed source) that accesses an Oracle 11g database, which was recently upgraded from 10g to 11g. It returns records, one of the columns being of type NCLOB. The string sent to the web service may contain Windows newlines (\r\n). Unfortunately, I'm not sure what, if anything, the web service does to manipulate the data sent to/received from the DB.
In 10g, the string returned from the NCLOB column was Big Endian Unicode, and all '\r' were dropped, so newlines came back as \n.
In 11g, the string returned from the NCLOB is ASCII encoded, and all '\r' are replaced with '\n', so newlines come back as \n\n.
Does this seem reasonable? Honestly, we've been handling Oracle newline issues for a while (the behavior of 10g), and I'm pretty sure that this is a result of upgrading to 11g. Does anyone have information on differences between 10g and 11g, related to newline or escape character sequence storage or the NCLOB datatype? I'm trying to do damage control here and point the finger at Oracle 11g, but need some evidence.
How this is interpreted on the client depends on the client NLS settings. You need to check whether your client NLS settings changed as part of the upgrade - especially the setting of the NLS_LANG environment variable. This needs to be set on a per-user basis; on Windows it might be set in the registry.
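A quick way to compare what each layer believes, using the standard data-dictionary views:

    -- What this session negotiated (driven by NLS_LANG on the client)
    SELECT * FROM nls_session_parameters;

    -- What the instance and the database itself are set to
    SELECT * FROM nls_instance_parameters;
    SELECT * FROM nls_database_parameters;

Run the first query from a client that shows the old 10g behaviour and from one that shows the new behaviour. On Windows, NLS_LANG often lives in the registry under the Oracle home's key, so each Oracle home (and therefore each upgrade) can carry its own value.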