I hope someone from Georgia might be able to help with my setup.
I have a problem with a Georgian database on SQL Server 2008 R2. The server is set to the default General Latin 1 collation and runs on Windows 7 installed with the default English language. I use this server to work with English, German, Slovak, Russian, Hebrew, and Latvian databases without any problem.
Now, when I try to create a database with the Georgian_Modern_Sort_CI_AS collation, the database and its structure are created successfully, but when I later try to use it, it fails with the error "The Collation specified by SQL Server is not supported."
I noticed that on the MSDN page about collations, https://msdn.microsoft.com/en-us/library/ms143508%28v=sql.105%29.aspx, the Georgian collation is marked with a star, but I couldn't find any explanation of what the star means.
I checked the regional settings in Windows and noticed that Georgian is not available in the list of system locales. I can install Georgian as a display language, but that does not change the available system locales.
Any idea what I should do to be able to work with Georgian databases?
Ouch, not sure if this is good news, but I found something in the SQL Server 2005 documentation saying there is no way to combine Georgian (or Hindi) with varchar :/
The last example returns 0 (Unicode) as the code page for Hindi. This example illustrates the fact that many locales, such as Georgian and Hindi, do not have code pages, as they are Unicode-only collations. Those collations are not appropriate for columns that use the char, varchar, or text data type, and some collations have been deprecated. For a list of available collations and which collations are Unicode-only, see Collation Settings in Setup in SQL Server 2005 Books Online.
resources:
https://technet.microsoft.com/en-us/library/bb330962(v=sql.90).aspx
When must we use NVARCHAR/NCHAR instead of VARCHAR/CHAR in SQL Server?
Does someone have a different way to avoid using NVARCHAR for Georgian script or Hindi?
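The Unicode-only restriction is easy to reproduce outside SQL Server: Georgian script simply does not exist in any Windows ANSI code page, so a single-byte char/varchar column has nowhere to put it. A small sketch (Python stands in here for illustration; the code pages tried are the same Windows ones SQL Server collations map to):

```python
# Georgian script has no Windows ANSI code page, which is why its SQL Server
# collations are Unicode-only (code page 0).
georgian = "ქართული"  # "Georgian" written in Georgian script

for codepage in ("cp1252", "cp1251", "cp1250", "cp1254"):
    try:
        georgian.encode(codepage)
        print(f"{codepage}: OK")
    except UnicodeEncodeError:
        print(f"{codepage}: cannot represent Georgian")

# Unicode encodings (nvarchar stores UCS-2/UTF-16) handle it without loss:
assert georgian.encode("utf-8").decode("utf-8") == georgian
```

Every ANSI code page fails, which matches the documentation quoted above: for Georgian (and Hindi) there is no varchar-compatible code page, so nvarchar/nchar is the supported route.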
Related
I have an agent written in LotusScript (IBM Domino 9.0.1 - Windows 10) that reads records from a DB2 database and writes them to Notes documents. The table in DB2 (on CentOS) contains international names in varchar fields, such as "Łódź".
The DB2 database was created as UTF-8 (code page 1208), and Domino natively supports Unicode. Unfortunately, the value loaded into the Notes document is not "Łódź" as it should be, but "? Ód?".
How can I correctly import special characters from DB2 into Domino NSF databases?
Thank you
To import the table I used the following code, taken from the OpenNTF XSnippets:
https://openntf.org/XSnippets.nsf/snippet.xsp?id=db2-run-from-lotusscript-into-notes-form
Find where the code-page conversion is happening. Alter the LotusScript to dump the hex of the received data for the column concerned to a file or a dialog box. If the hex codes differ from what is in the column, then it may be your Db2 client that is using the wrong code page. Are you aware of the DB2CODEPAGE environment variable for Windows? That might help if it is the Db2 client that is doing the code-page conversion.
That is, setting the environment variable DB2CODEPAGE=1208 may help, although careful testing is required to ensure it does not cause other symptoms that are mentioned online.
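For illustration only (this is not a claim about what your Db2 client actually does, just the same class of damage): a lossy conversion through a Western single-byte code page such as Windows-1252 mangles "Łódź" in almost exactly the way the question shows, because Ł and ź have no code point there:

```python
# "Łódź" cannot be fully represented in Windows-1252: Ł (U+0141) and ź (U+017A)
# have no code point there, so a lossy conversion replaces them with '?',
# while plain ó survives.
original = "Łódź"

lossy = original.encode("cp1252", errors="replace").decode("cp1252")
print(lossy)  # ?ód?  -- the same kind of damage as the "? Ód?" in the question

# A clean UTF-8 round trip (code page 1208 on the Db2 side) loses nothing:
assert original.encode("utf-8").decode("utf-8") == original
```

Dumping the hex as suggested above tells you on which side of the connection this replacement happens.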
*How can I confirm that Chinese characters are supported by my Oracle database?*
See this answer to understand how to retrieve the current NLS parameters, and the full list of possible character sets. I presume that to support Chinese, the database should have either AL32UTF8 or an appropriate national character set from that list.
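AL32UTF8 is Oracle's name for UTF-8, which can represent any Chinese character, while a Western single-byte character set cannot. A quick sketch of the difference (Python used for illustration; the SELECT in the comment is the standard way to see the database character set):

```python
# In Oracle itself you would check the database character set with:
#   SELECT value FROM nls_database_parameters
#   WHERE parameter = 'NLS_CHARACTERSET';
chinese = "中文"

utf8_bytes = chinese.encode("utf-8")  # AL32UTF8 behaviour: always representable
try:
    chinese.encode("cp1252")          # a Western single-byte charset, by contrast
    single_byte_ok = True
except UnicodeEncodeError:
    single_byte_ok = False

print(len(utf8_bytes), single_byte_ok)  # 6 False  (3 bytes per character)
```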
I have recently started using PostgreSQL for creating and updating existing SQL databases. Being rather new to this, I ran into the issue of selecting the correct encoding when creating a new database. UTF-8 (the default) did not work for me, as the data to be included is in various languages (English, Chinese, Japanese, Russian, etc.) and also includes symbolic characters.
Question: What is the right database encoding type to satisfy my needs.
Any help is highly appreciated.
There are four different encoding settings at play here:
The server side encoding for the database
The client_encoding that the PostgreSQL client announces to the PostgreSQL server. The PostgreSQL server assumes that text coming from the client is in client_encoding and converts it to the server encoding.
The operating system default encoding. This is the default client_encoding set by psql if you don't provide a different one. Other client drivers might have different defaults; e.g. PgJDBC always uses UTF-8.
The encoding of any files or text being sent via the client driver. This is usually the OS default encoding, but it might be a different one - for example, your OS might be set to use utf-8 by default, but you might be trying to COPY some CSV content that was saved as latin-1.
You almost always want the server encoding set to utf-8. It's the rest that you need to change depending on what's appropriate for your situation. You would have to give more detail (exact error messages, file contents, etc) to be able to get help with the details.
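The fourth point is the one that usually bites. A minimal sketch of that mismatch (the file content is made up): bytes saved as latin-1 are not valid UTF-8, so a server whose client_encoding is UTF8 rejects them:

```python
# Bytes written as latin-1 (e.g. a CSV exported by a Windows tool) are not
# valid UTF-8: é becomes the single byte 0xE9, which is an incomplete UTF-8
# sequence. A COPY into a connection with client_encoding = UTF8 would fail.
csv_bytes = "name\ncafé\n".encode("latin-1")

try:
    csv_bytes.decode("utf-8")
    outcome = "decoded cleanly"
except UnicodeDecodeError:
    outcome = "invalid byte sequence"  # PostgreSQL reports a similar error

print(outcome)
```

The fix is not to change the server encoding but to tell the client what the file really is, e.g. `SET client_encoding = 'latin1';` before the COPY.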
I'm connecting to a remote Firebird 2.1 DB server and querying data that contains some Cyrillic characters together with some Latin ones.
The problem is that when I deploy the app on the production system, the Cyrillic characters look like this: ÂÚÇÄÓØÍÀ. In addition, when I try to log what comes in from the DB, the Cyrillic content is simply skipped in the log file (i.e. I don't see the ÂÚÇÄÓØÍÀ at all).
At this point I'm not sure whether I'm getting inconsistent data from the DB or the production environment can't recognize those characters for some reason.
I've been going around in circles for quite some time now and have run out of ideas, so any hints would be great.
The dev machine I use runs Windows 7 Ultimate SP1. My system locale is Bulgarian.
The production server is accessed via Parallels Plesk Panel, and I'm not sure what's underneath.
If you did not specify any character set in your connection properties, then almost all Firebird drivers default to the connection character set NONE. This means that Firebird sends the bytes of strings as they are stored in the database, without any conversion, and on the other side the driver uses the default system character set to convert those bytes to strings. If you use multiple systems with different default system character sets, you will get different results.
You should always explicitly specify a connection character set (WIN1251 in your case), unless you really know what you are doing.
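The garbage in the question is consistent with exactly this mechanism: WIN1251 bytes decoded as Windows-1252. A hypothetical reconstruction (the Bulgarian word ВЪЗДУШНА is only a guess at a plausible original; the byte-level mechanism is the point):

```python
# Charset NONE: Firebird ships the raw stored bytes; the client then guesses.
stored = "ВЪЗДУШНА".encode("cp1251")  # bytes as stored under WIN1251

misread = stored.decode("cp1252")     # client decoding with a Western default
print(misread)                        # ÂÚÇÄÓØÍÀ -- the garbage from the question

# With the connection character set explicitly WIN1251, decoding is correct:
print(stored.decode("cp1251"))        # ВЪЗДУШНА
```

The same bytes, two system defaults, two different strings: that is why dev (Bulgarian locale) looks fine and production does not.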
The project I am working on now is upgrading the database from MySQL to PostgreSQL in the Zend Framework. I migrated the database to PostgreSQL with the "ESF Database Migration Toolkit". However, field names like "Emp_FirstName" and "Emp_LastName" are stored in PostgreSQL as "emp_firstname" and "emp_lastname". This caused errors in the code. And when I updated the field in PostgreSQL to Emp_FirstName, it shows the error
********** Error **********
ERROR: column "Emp_FirstName" does not exist
SQL state: 42703
Character: 8
Is it possible to keep the field names exactly as they are in MySQL?
The migration tool isn't double-quoting identifiers, so their case is being flattened to lower case by PostgreSQL. Your code must be quoting the identifiers, so their case is preserved. PostgreSQL is case-sensitive and case-flattens unquoted identifiers, whereas MySQL is case-insensitive on Windows and Mac and case-sensitive on *nix.
See the PostgreSQL manual section on identifiers and keywords for details on PostgreSQL's behaviour. You should probably read that anyway, so you understand how string quoting works among other things.
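A toy model of the folding rule (illustrative Python, not PostgreSQL's actual parser) shows why the error message names a column that "does not exist":

```python
# Toy model of PostgreSQL identifier folding: unquoted identifiers are folded
# to lower case, double-quoted identifiers keep their case exactly.
def pg_fold(identifier: str) -> str:
    if identifier.startswith('"') and identifier.endswith('"'):
        return identifier[1:-1]   # quoted: case preserved
    return identifier.lower()     # unquoted: folded to lower case

# The migration tool emitted the column unquoted; the application queries it quoted:
print(pg_fold('Emp_FirstName'))    # emp_firstname  (what actually got created)
print(pg_fold('"Emp_FirstName"'))  # Emp_FirstName  (what the code asks for)
# The two never match, hence: column "Emp_FirstName" does not exist
```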
You need to pick one of these options:
Change your code not to quote identifiers;
Change your migration tool to quote identifiers when creating the schema;
Hand migrate the schema instead of using a migration tool;
Fix the quoting of identifiers in the tool-generated SQL by hand; or
Lower-case all identifiers so the quoting doesn't matter for Pg.
The last option won't help you when you add Oracle support and discover that Oracle upper-cases all identifiers, so I'd recommend picking one of the first four options. I didn't find a way to get the migration tool to quote identifiers in a quick 30-second Google search, but I didn't spend much time on it. I'd look first for options to control the quoting mode in the migration tool.
PostgreSQL does not have a configuration option to always treat identifiers as quoted or to use case-insensitive identifier comparisons.
This is far from the only incompatibility you will encounter, so be prepared to change queries and application code. In some cases you might even need one query for MySQL and another for PostgreSQL if you plan to continue to support MySQL.
If you weren't using sql_mode = ANSI and STRICT mode in MySQL, you'll have a lot more trouble porting than if you were, since both options bring MySQL closer to standard SQL behaviour.