I need to make changes to an old legacy project in Delphi 7.
I need to save a TStringList to a file with Unicode encoding. All resources I have found describe a second parameter for specifying an encoding in the SaveToFile()/LoadFromFile() methods, but there is no such parameter in Delphi 7. It was probably added in later versions.
How can I save UTF-8 text to a file (.csv) in Delphi 7?
The parameter you are looking for was introduced in Delphi 2009, when Delphi's String type migrated from AnsiString to UnicodeString.
Prior to Delphi 2009, you will have to encode the TStringList entries to UTF-8 yourself. You can put UTF-8 data in an AnsiString (UTF8String in those versions was just an alias for AnsiString), and TStringList will save the data as-is.
You can use the RTL's UTF8Encode() function for that conversion, but be aware that prior to Delphi 2009 it does not support Unicode codepoints above U+FFFF (it converts each UTF-16 code unit on its own, so surrogate pairs are not handled). If you need to handle codepoints that high, you will have to use the Win32 WideCharToMultiByte() function with CP_UTF8 instead.
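For illustration, here is a minimal sketch of that conversion, written in C++Builder syntax to match the other code in this document (the same WideCharToMultiByte() call is available from Delphi 7; the WideToUtf8()/SaveUtf8Csv() helper names are just made up for the example):

// Convert a WideString to UTF-8 with WideCharToMultiByte() and keep the raw
// UTF-8 bytes in an AnsiString so TStringList saves them untouched.
#include <vcl.h>
#include <windows.h>

AnsiString WideToUtf8(const WideString &WS)
{
    if (WS.Length() == 0) return AnsiString();
    // First call: ask how many bytes the UTF-8 form needs.
    int Len = WideCharToMultiByte(CP_UTF8, 0, WS.c_bstr(), WS.Length(),
                                  NULL, 0, NULL, NULL);
    AnsiString Result;
    Result.SetLength(Len);
    // Second call: do the actual conversion into the AnsiString's buffer.
    WideCharToMultiByte(CP_UTF8, 0, WS.c_bstr(), WS.Length(),
                        Result.c_str(), Len, NULL, NULL);
    return Result;
}

void SaveUtf8Csv(const WideString &Text, const AnsiString &FileName)
{
    TStringList *List = new TStringList;
    try
    {
        List->Text = WideToUtf8(Text);   // already UTF-8 encoded bytes
        List->SaveToFile(FileName);      // saved as-is, no re-encoding
    }
    __finally
    {
        delete List;
    }
}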
I believe Windows currently defaults to UTF-16 for “Unicode”, but this may not be the case in the future.
For this reason, would it be better to use
[System.Text.Encoding]::UTF8.GetString($someByteArray)
instead of the following?
[System.Text.Encoding]::Unicode.GetString($someByteArray)
“this may not be the case in the future.”
Unicode isn't a potentially-variable encoding; it's just Microsoft's (sadly misleading) name for UTF-16LE.
It isn't going to change. Even if Microsoft moved towards implementing Windows APIs natively in UTF-8 or UTF-32 (something there's no sign of ever happening), System.Text.Encoding.Unicode would have to remain UTF-16LE as that is how it is defined by the .NET specification.
“would it be better to use UTF8 instead of Unicode?”
Use UTF8 if the byte array contains UTF-8-encoded bytes, and use Unicode if they are in UTF-16LE.
If you get to choose what encoding is used to store data at rest, UTF-8 is usually the better choice for space efficiency reasons.
First, yes, Windows defaults to UTF-16. Personally, I would use UTF-8, because most of the applications I write have to communicate with Linux applications or over some form of HTTP, where UTF-8 is far more common.
Besides, even if all your code is used with Microsoft systems, it is easy to convert to UTF-8, and a simple search-and-replace regular expression could change everything over to Unicode (UTF-16) if .NET ever started requiring it.
My J2ME client application reads a text input stream using UTF-8:
reader = new InputStreamReader(in,"UTF-8");
and my server, when a client connects, sends text using this statement:
AContext->Connection->IOHandler->WriteLn(cxMemo1->Text,TEncoding::UTF8);
but the resulting text shows weird characters like ?????????????????????????? ?????????????
Where am I going wrong?
Also, when I tried to send a UTF-8 encoded data file this way:
AContext->Connection->IOHandler->WriteFile("c:\\fids.xml");
the result is the same!
Indy 10 completely supports UTF-8 encoding. I have myself worked with its TIdFTP component and successfully uploaded Unicode text files. From what I can make of it, either:
Your connection/transfer type is set to ftASCII rather than ftBinary.
Your J2ME applet/host platform does not support UTF-8
'?' characters occur when data is going through a Unicode-to-Ansi conversion to an Ansi charset that does not support the Unicode characters being converted.
What version of C++Builder are you using? In versions prior to CB2009, you should tell Indy the encoding of the AnsiString data that you are passing in. Indy defaults to ASCII (i.e., TIdTextEncoding::ASCII) for most String-based operations. That can be overridden when needed, either with optional AAnsiEncoding parameters, the TIdIOHandler::DefAnsiEncoding property, or the global IdGlobal::GIdDefaultAnsiEncoding setting. If you do not specify the correct encoding, the AnsiString data may not be converted to Unicode correctly before then being converted to UTF-8. For example:
AContext->Connection->IOHandler->WriteLn(cxMemo1->Text, TIdTextEncoding_UTF8, TIdTextEncoding_Default);
Or:
AContext->Connection->IOHandler->DefAnsiEncoding = TIdTextEncoding_Default;
AContext->Connection->IOHandler->WriteLn(cxMemo1->Text, TIdTextEncoding_UTF8);
You can optionally also use the TIdIOHandler::DefStringEncoding property if you do not want to specify the UTF-8 encoding on every call:
AContext->Connection->IOHandler->DefStringEncoding = TIdTextEncoding_UTF8;
AContext->Connection->IOHandler->WriteLn(cxMemo1->Text);
Now, with that said, the fact that WriteFile() is also sending data that J2ME is not handling correctly tells me that Indy is not the root of the issue. WriteFile() simply dumps the raw file data as-is to the connection, without any interpretation at all. If you send a UTF-8 encoded file, then UTF-8 encoded octets will be sent to J2ME.
I suggest you use a packet sniffer, such as Wireshark, to verify the data that Indy is sending. That will tell you for sure whether Indy is really at fault or not.
PS: Notice in the examples above that I use Indy's TIdTextEncoding macros instead of TEncoding directly. This is because Indy's TIdTextEncoding logic works around some bugs in Embarcadero's TEncoding classes. Also, we're going to phase out direct support for TEncoding in Indy 11 and expand on TIdTextEncoding so that Indy has more control than Embarcadero offers.
I am creating a ColdFusion page that takes language-translation data stored in a table in my database and generates static .js files for each language pairing (English to ___, etc.).
I am now starting to work on Russian; I was able to get the other languages to work fine.
However, when it saves the file, all the text looks like question marks. Even when I run my translation app, the text for just that language shows up as all ?????.
I have tried writing it via cffile as UTF-8 or ISO-8859-1, but neither seems to get it to display properly.
Any suggestions?
Have you tried ISO-8859-5? I believe it's the encoding that "should" be used for Russian.
By all means do use UTF-8 over any other encoding type. You need to make sure that:
your cfm templates were written to disk with UTF-8 encoding (Notepad++ handles that nicely, and so does Eclipse or the new ColdFusion Builder)
your database was created with the proper codepage for nvarchar (and varchar) datatypes
your database connection handles UTF-8
How to go about the last two items depends on your database back-end. Coldfusion is quite agnostic in that regard, as it will happily use any jdbc driver that you may need.
When working in a multi-character set environment, character set conversion issues can occur and it can be difficult to determine where the conversion issue occurred.
There are two categories into which conversion issues can be placed. The first involves sending data in the wrong format to the client API. Although this cannot happen with Unicode APIs, it is possible with all other client APIs and results in garbage data.
The second category of issue involves a character that does not have an equivalent in the final character set, or in one of the intermediate character sets. In this case, a substitution character is used. This is called lossy conversion and can happen with any client API. You can avoid lossy conversions by configuring the database to use UTF-8 for the database character set.
The advantage of UTF-8 over any other encoding is that you can handle any number of languages in the same database / client.
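As a small illustration of that lossy conversion (not ColdFusion-specific; just a plain Win32 C++ sketch with a made-up string): converting Cyrillic text to a Western single-byte codepage such as Windows-1252 has no valid mapping, so the API substitutes the default character '?', which is exactly the question marks described in the question.

#include <windows.h>
#include <iostream>

int main()
{
    // "Запуск" (Russian) given as explicit UTF-16 code units, so the source
    // file's own encoding does not matter.
    const wchar_t text[] = L"\x0417\x0430\x043F\x0443\x0441\x043A";
    char buf[64] = {0};
    BOOL usedDefault = FALSE;

    // Convert to Windows-1252 (Western European). None of these characters
    // exist there, so each one becomes the default substitution character '?'.
    WideCharToMultiByte(1252, 0, text, -1, buf, sizeof(buf), NULL, &usedDefault);

    std::cout << buf << std::endl;                                  // prints ??????
    std::cout << (usedDefault ? "lossy" : "lossless") << std::endl; // prints lossy
    return 0;
}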
I can't personally reproduce this problem at all. Is the ColdFusion template that is making the call itself UTF-8? (with or without a BOM it matters not for Russian). In any case UTF-8 is absolutely what you should be using. Make sure you get a UTF-8 compliant editor. Which is most things on Mac. On Windows you could use Scite or GVim.
The correct encoding to use in a .js file is whatever encoding the parent page is in. Whilst there are methods to serve JavaScript using a different encoding to the page including it, they don't work on all browsers.
So make sure your web page is being saved and served in an encoding that contains the Russian characters, and then save the .js file using the same encoding. That will be either:
ISO-8859-5. A single-byte encoding with Cyrillic in the high bytes, similar to Windows code page 1251. cp1251 will be the default encoding when you save in a text editor from a Russian install of Windows;
or UTF-8. A multi-byte encoding that contains every character. All modern websites should be using UTF-8.
(ISO-8859-1 is Western European and does not include any Cyrillic. It is similar to code page 1252, the default on a Western Windows install. It's of no use to you.)
So, best is to save both the cf template and the js file as UTF-8, and add <cfprocessingdirective pageencoding="utf-8"> if CF doesn't pick it up automatically.
If you can't control the encoding of the page that includes the script (for example because it's a third party), then you can't use any non-ASCII characters directly. You would have to use JavaScript string literal escapes instead:
var translation_ru = {
    launchMyCalendar: '\u0417\u0430\u043f\u0443\u0441\u043a \u041c\u043e\u0439 \u043a\u0430\u043b\u0435\u043d\u0434\u0430\u0440\u044c'
};
When it saves to the file it is "·ÐßãáÚ ¼ÞÙ ÚÐÛÕÝÔÐàì", so the charset is wrong.
Looks like you've saved as ISO-8859-5 and then copied the file to a Western server where it is being read as cp1252 (the default Western codepage): the bytes for "Запуск Мой календарь" in ISO-8859-5, viewed as cp1252, are exactly "·ÐßãáÚ ¼ÞÙ ÚÐÛÕÝÔÐàì".
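A quick way to check this (a plain Win32 sketch; the byte values are taken from the garbage above and the helper name is made up): the same bytes decoded as ISO-8859-5 (codepage 28595) give the Russian text back, while decoding them as Windows-1252 reproduces exactly the characters shown above.

#include <windows.h>
#include <string>
#include <vector>

// Decode raw single-byte text using a given Windows codepage.
std::wstring DecodeBytes(const char *bytes, int len, UINT codePage)
{
    int wlen = MultiByteToWideChar(codePage, 0, bytes, len, NULL, 0);
    if (wlen <= 0) return std::wstring();
    std::vector<wchar_t> buf(wlen);
    MultiByteToWideChar(codePage, 0, bytes, len, &buf[0], wlen);
    return std::wstring(buf.begin(), buf.end());
}

// "Запуск" stored as ISO-8859-5 bytes (the first word of the translation above):
const char raw[] = { '\xB7', '\xD0', '\xDF', '\xE3', '\xE1', '\xDA' };

// DecodeBytes(raw, 6, 28595) -> L"Запуск"  (ISO-8859-5: correct)
// DecodeBytes(raw, 6, 1252)  -> L"·ÐßãáÚ"  (Windows-1252: the garbage above)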
I also just found out that my text editor of choice, TextPad, doesn't support Unicode.
Yes, that was my reason for no longer using it too. EmEditor (commercial) and Notepad++ (open-source) are good replacements.
I work on an app that gets distributed via a single installer containing multiple localizations. The build process includes a script that updates the .ism string table with translations for each supported language.
This works fine for languages like French and German. But when testing the installer in, e.g., Japanese, the text shows up as a series of squares. It's unlikely to be a font problem, since the InstallShield-supplied strings show up fine; only the string table entries are mangled. So the problem seems to be that the strings are in the wrong encoding.
The .ism is in XML format, with UTF-8 declared as its encoding, so I assumed the strings needed to be UTF-8 encoded as well. Do they actually need to use the encoding of the target platform? Is there any concern, then, about targets having different encodings, e.g. Chinese systems using one GB encoding versus another? What is the right thing to do here?
Edit: Using InstallShield 2009, since there is apparently a difference between that and 2010.
In InstallShield 2009 and earlier, the encoding is a base-64 encoding of the binary string in the ANSI encoding specific to the language in question (e.g. CP932 for Japanese). In InstallShield 2010 and later, it will still accept that or use UTF-8, depending on other columns in that table.
Thanks (and an up-vote) go to Michael Urman for pointing us in the right direction, but this is the actual working algorithm (with InstallShield 2009), reverse-engineered by a co-worker (a code sketch follows below the list):
Start with a unicode (multi-byte-character) string
Write out the length as the encoded-length field in the ism-file
Encode the string as UTF-16-little-endian
Base-64 encode using the uuencode dictionary, except with ` (back-tick) instead of space.
Write the result to the ism-file, escaping XML entities
Be aware that base-64ing using the uuencode dictionary is not the same as using the uuencode algorithm. Standard uuencode produces a set of newline-separated lines, including a header, footers and one or more data lines, each of which begins with a length-character. If you're implementing this using a uuencode codec, you'll need to strip all of that off.
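For what it's worth, here is a rough sketch of those steps in plain C++ (not the co-worker's actual code). It assumes base64-style 3-byte grouping over the uuencode character set (6-bit value + 0x20) with ` substituted for space; the exact length-field format and any padding rules are assumptions, and the helper names are made up:

#include <cstddef>
#include <string>
#include <vector>

// Map a 6-bit value to its uuencode character, with '`' substituted for
// space (i.e. for the value 0).
static char UuChar(unsigned v)
{
    return (v == 0) ? '`' : static_cast<char>(v + 0x20);
}

// Step 3: encode the wide string as UTF-16LE bytes (assumes 16-bit wchar_t,
// as on Windows).
static std::vector<unsigned char> ToUtf16LE(const std::wstring &s)
{
    std::vector<unsigned char> bytes;
    for (std::size_t i = 0; i < s.size(); ++i)
    {
        bytes.push_back(static_cast<unsigned char>(s[i] & 0xFF));        // low byte first
        bytes.push_back(static_cast<unsigned char>((s[i] >> 8) & 0xFF)); // then high byte
    }
    return bytes;
}

// Steps 4-5 (minus the XML escaping): base64-style grouping, 3 bytes -> 4
// characters, but written with the uuencode dictionary instead of A-Za-z0-9+/.
static std::string EncodeIsmString(const std::wstring &s)
{
    std::vector<unsigned char> b = ToUtf16LE(s);
    std::string out;
    for (std::size_t i = 0; i < b.size(); i += 3)
    {
        unsigned long block = 0;
        std::size_t n = 0;
        for (; n < 3 && i + n < b.size(); ++n)
            block |= static_cast<unsigned long>(b[i + n]) << (8 * (2 - n));
        // One output character for every 6 bits actually present in the group.
        for (std::size_t k = 0; k <= n; ++k)
            out += UuChar(static_cast<unsigned>((block >> (6 * (3 - k))) & 0x3F));
    }
    // The caller still writes the length field and escapes XML entities.
    return out;
}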
I'm also trying to figure this out...
I've inherited some InstallShield 12 (which is pre-2009) projects with string table entries containing characters outside the range of base64 'target' characters.
For example, one of the Japanese strings is:
4P!H&$9!O'<4!R&\=!E&,=``#$(80!C&L=0!P"00!G`&4`;#!T`)(PI##S,+DPR##\,.LP5S!^,%DP`C
After much searching I happened upon Base85 encoding, which looks much closer to being plausible, but I have not yet verified that it is the solution.
I'm using CodeGear C++Builder 2007. I'm trying to read a string value containing a path from the registry. This path can contain Unicode characters, for example Russian.
I have added a string value with regedit and verified by exporting it that the value really contains the expected Unicode characters. The results in S1, S2, and S3 below all contain '?' (0x3F) instead of the Unicode characters. What am I missing?
TRegistry *Registry = new TRegistry;
try
{
    Registry->RootKey = HKEY_CURRENT_USER;
    if (Registry->OpenKey("Software\\qwe\\asd", false))
    {
        AnsiString S1 = Registry->ReadString("zxc");
        WideString S2 = Registry->ReadString("zxc");
        UTF8String S3 = Registry->ReadString("zxc");
    }
}
__finally
{
    delete Registry;
}
/Björn
The VCL in C++Builder (and Delphi) 2007 uses Ansi, not Unicode. TRegistry::ReadString() is internally calling the Win32 API RegQueryValueExA() function instead of RegQueryValueExW(), and TRegistry::ReadString() returns an AnsiString that uses the OS default Ansi codepage. Any Unicode data gets automatically converted to Ansi by the OS before your code ever sees it. The '?' character means that a Unicode character got converted to an Ansi codepage that does not support that character. It does not matter what string type you assign the result of ReadString() to; the Unicode data has already been lost before ReadString() even exits.
If you need to read Unicode data as Unicode, then you need to call RegQueryValueExW() directly instead of using TRegistry::ReadString() (or upgrade to C++Builder 2009 or later, which now use Unicode).
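For example, a minimal sketch (using the key and value names from the question, with error handling trimmed) that reads the value with the Unicode API so no Ansi conversion ever happens:

#include <vcl.h>      // WideString
#include <windows.h>
#include <vector>

WideString ReadUnicodeRegString()
{
    WideString Result;
    HKEY hKey;
    if (RegOpenKeyExW(HKEY_CURRENT_USER, L"Software\\qwe\\asd", 0,
                      KEY_QUERY_VALUE, &hKey) == ERROR_SUCCESS)
    {
        DWORD dwType = 0, dwSize = 0;
        // First call: ask how many bytes the value needs.
        if (RegQueryValueExW(hKey, L"zxc", NULL, &dwType, NULL, &dwSize) == ERROR_SUCCESS
            && dwType == REG_SZ && dwSize > 0)
        {
            std::vector<wchar_t> buf((dwSize / sizeof(wchar_t)) + 1, L'\0');
            // Second call: read the UTF-16 data directly, no Ansi conversion.
            if (RegQueryValueExW(hKey, L"zxc", NULL, &dwType,
                                 reinterpret_cast<LPBYTE>(&buf[0]), &dwSize) == ERROR_SUCCESS)
            {
                Result = WideString(&buf[0]);
            }
        }
        RegCloseKey(hKey);
    }
    return Result;
}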
http://do-the-right-things.blogspot.com/2008/03/codegear-delphi-2006nets-tregistry.html
CodeGear Delphi 2006.Net's TRegistry fails in Framework 2 SP1
I don't know whether C++ 2007 is also affected, but if it is, maybe there is a patch available somewhere.