Using a RichTextBox to retrieve the Text property in C++ - Unicode

I am using a hidden RichTextBox to retrieve the Text property from a RichEditCtrl.
rtb->Text; returns the text portion of either English or national languages – just great!
But I need this text in \u12232?\u32232? form instead of national characters and symbols, so that it works with my db and RichEditCtrl. Any idea how to get from “пассажирским поездом Невский” to “\u12415?\u12395?\u23554?\u20219?\u30456?\u35527?\u21729?” (where each national character is represented as “\u23232?”)?
If you have one, that would be great.
I am using Visual Studio 2008 C++, a combination of MFC and managed code.
Cheers, and have a wonderful weekend.

If you need a System::String as an output as well, then something like this would do it:
using namespace System;
using namespace System::Text;

String^ s = rtb->Text;
StringBuilder^ sb = gcnew StringBuilder(s->Length);
for (int i = 0; i < s->Length; ++i) {
    // Note the doubled backslash: a bare \u is an escape sequence in C++/CLI source.
    sb->AppendFormat("\\u{0:D5}?", (int)s[i]);
}
String^ result = sb->ToString();  // not s->ToString(), which would just copy the input
By the way, are you sure the format is as described? \u is a traditional escape sequence for a hexadecimal Unicode code point, exactly 4 hex digits long, e.g. \u0F3A. It is also not normally followed by ?. If you actually want that, the format specifier {0:X4} should do the trick.
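If the conventional four-hex-digit form is what you need, the loop body changes accordingly (a sketch, under the same assumptions about rtb, s, and sb as the snippet above):

for (int i = 0; i < s->Length; ++i) {
    // {0:X4} emits exactly four uppercase hex digits, e.g. \u0F3A.
    sb->AppendFormat("\\u{0:X4}", (int)s[i]);
}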

You don't need to use escaping to put formatted Unicode in a RichText control. You can use UTF-8. See my answer here: Unicode RTF text in RichEdit.
I'm not sure what your restrictions are on your database, but maybe you can use UTF-8 there too.

Related

Weird Normalization on .NET

I am trying to normalize a string (using .NET Standard 2.0) using Form D, and it works perfectly when running on a Windows machine.
[TestMethod]
public void TestChars()
{
    var original = "é";
    var normalized = original.Normalize(NormalizationForm.FormD);
    var originalBytesCsv = string.Join(',', Encoding.Unicode.GetBytes(original));
    Assert.AreEqual("233,0", originalBytesCsv);
    var normalizedBytesCsv = string.Join(',', Encoding.Unicode.GetBytes(normalized));
    Assert.AreEqual("101,0,1,3", normalizedBytesCsv);
}
When I run this on Linux, it returns "253,255" for both strings, before and after normalization. These two bytes form the value 65533, which is the Unicode replacement character, used when something goes wrong with encoding. That's the part where I am lost.
What am I missing here? Can someone point me in the right direction?
It might be related to the encoding of the source file. I'm not sure which encodings .NET on Linux supports, but to be on the safe side, you should use plain ASCII source files and Unicode escapes for non-ASCII characters:
var original = "\u00e9";
There is no text but encoded text.
When communicating text to a person or program, both the bytes and the character encoding are essential.
The C# compiler (like all programs that process text, except in special cases like JSON) must know which character encoding the input files use. You must inform it accurately. The default is UTF-8, and that is a fine choice, especially for C# files, which are, lexically, sequences of Unicode code points.
If you used your editor or IDE or file transfer without full mindfulness of these requirements, you might have used an unintended character encoding.
For example, "é" saved as Windows-1252 (the single byte 0xE9) but read as UTF-8 (where 0xE9 is a leading code unit that should be followed by two continuation code units) would give � to indicate this mishandling to the reader.
To be on the safe side, use UTF-8 everywhere but do it mindfully.
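To make the failure mode concrete, here is a minimal Win32 C++ sketch (an illustration added here, not part of the original answer) that decodes that Windows-1252 byte as if it were UTF-8:

#include <windows.h>
#include <cstdio>

int main()
{
    // "é" as its single Windows-1252 byte, mislabeled as UTF-8.
    const char bytes[] = { '\xE9', '\0' };
    wchar_t out[4] = {};
    // With dwFlags = 0 (Vista and later), invalid UTF-8 sequences are
    // replaced with U+FFFD instead of being rejected outright.
    MultiByteToWideChar(CP_UTF8, 0, bytes, -1, out, 4);
    std::printf("U+%04X\n", (unsigned)out[0]);  // prints U+FFFD
}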

Converting emoji from hex code to Unicode

I want to use emojis in my iOS and Android app. I checked the list of emojis here and it lists out the hex code for the emojis. When I try to use the hex code such as U+1F600 directly, I don't see the emoji within the app. I found one other way of representing an emoji which looks like \uD83D\uDE00. When using this notation, the emoji is seen within the app without any extra code. I think this is a Unicode string for the emoji. I think this is more of a general question than one specific to emojis. How can I convert an emoji hex code to the Unicode string shown above? I didn't find any list where this form of the emojis is given.
It seems that your question is really one of "how do I display a character, knowing its code point?"
This question turns out to be rather language-dependent! Modern languages have little trouble with this. In Swift, we do this:
$ swift
Welcome to Apple Swift version 3.0.2 (swiftlang-800.0.63 clang-800.0.42.1). Type :help for assistance.
1> "\u{1f600}"
$R0: String = "😀"
In JavaScript, it is the same:
$ node
> "\u{1f600}"
'😀'
In Java, you have to do a little more work. If you want to use the code point directly you can say:
new StringBuilder().appendCodePoint(0x1f600).toString();
The sequence "\uD83D\uDE00" also works in all three languages. This is because those "characters" are actually what Unicode calls surrogates and when they are combined together a certain way they stand for a single character. The details of how this all works can be found on the web in many places (look for UTF-16 encoding). The algorithm is there. In a nutshell you take the code point, subtract 10000 hex, and spread out the 20 bits of that difference like this: 110110xxxxxxxxxx110111xxxxxxxxxx.
But rather than worrying about this translation, you should use the code point directly if your language supports it well. You might also be able to copy-paste the emoji character into a good text editor (make sure the encoding is set to UTF-8). If you need to use the surrogates, your best bet is to look up a Unicode chart that shows you something called the "UTF-16 encoding."
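For reference, a small C++ sketch of the encoding step described above (purely illustrative):

#include <cstdio>

int main()
{
    // UTF-16 encoding of a supplementary code point: subtract 0x10000,
    // then split the remaining 20 bits across the two surrogates.
    unsigned cp = 0x1F600;                   // GRINNING FACE
    unsigned v  = cp - 0x10000;              // 20-bit value
    unsigned hi = 0xD800 + (v >> 10);        // lead surrogate: top 10 bits
    unsigned lo = 0xDC00 + (v & 0x3FF);      // trail surrogate: bottom 10 bits
    std::printf("\\u%04X\\u%04X\n", hi, lo); // prints \uD83D\uDE00
}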
In Delphi XE, #$1F600 is equivalent to #55357#56832, i.e. the surrogate pair D83D DE00 (the grinning-face smiley).
Within a program, I use it in the following way:
const smilepage: array [1..3] of WideString = (#$1F600, #$1F60A, #$2764);
JavaScript - both directions:
let hex = "😀".codePointAt(0).toString(16)
let emo = String.fromCodePoint("0x"+hex);
console.log(hex, emo);

How to discover what codepage to use when converting RTF hex literals to Unicode

I'm parsing RTF 1.5+ files generated by Word 2003+ that may have content from other languages. This content is usually encoded as hex literals (\'xx). I would like to convert these literals to unicode values.
I know my document's code page by looking for ansicpg (\ansi\ansicpg1252).
When I use the ansicpg codepage to decode to Unicode, many languages (like French) seem to convert to the Unicode char values that I expect.
However, when I see Russian text (like below), codepage 1252 decodes the content to gibberish.
\f277\lang1049\langfe1033\langnp1049\insrsid5989826\charrsid6817286
\'d1\'f2\'f0\'e0\'ed\'e8\'f6\'fb \'e1\'e5\'e7 \'ed\'e0\'e7\'e2\'e0\'ed\'e8\'ff. \'dd\'f2
\'e0 \'f1\'f2\'f0\'e0\'ed\'e8\'f6\'e0 \'ed\'e5 \'e4\'ee\'eb\'e6\'ed\'e0
\'ee\'f2\'ee\'e1\'f0\'e0\'e6\'e0\'f2\'fc\'f1\'ff \'e2 \'f2\'e0\'e1\'eb\'e8\'f6\'e5
\'e2 \'f1\'ee\'e4\'e5\'f0\'e6\'e0\'ed\'e8\'e8.
I assume that lang1049, langfe1033, langnp1049 should provide me clues so I can programmatically choose a different (non-default) code page for the text that they reference? If so, where can I find information that explains how to map a lang* code to a codepage? Or should I be looking for some other RTF command/directive to provide me with the information I'm looking for? (Or must I use \f277 as a font reference and see if it has an associated codepage?)
\lang really only marks up particular stretches of the text as being in a particular language, and shouldn't impact what code page is to be used for the old non-Unicode \' escapes.
Putting an \ansicpg token in the header should perhaps do it, but it seems to be ignored by Word (for both raw bytes and \' escapes).
Or must I use \f277 as a font reference and see if it has an associated codepage?
It looks that way. Changing the \fcharset of the font assigned to a particular stretch of text is the only way I can get Word to change how it treats the bytes, anyway. The codes in this token (see e.g. here for a list) are, aggravatingly, different again from both the language ID and the code page number.
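If you do end up mapping \fcharsetN values to code pages programmatically, Win32 provides TranslateCharsetInfo for exactly that translation. A minimal sketch (the helper name is mine):

#include <windows.h>
#include <cstdio>

// Map a GDI charset code (the N in \fcharsetN) to a Windows code page.
UINT CodePageFromCharset(DWORD charset)
{
    CHARSETINFO csi = {};
    // With TCI_SRCCHARSET the first argument carries the charset value
    // itself, cast into the pointer parameter (as documented).
    if (TranslateCharsetInfo(reinterpret_cast<DWORD*>(static_cast<UINT_PTR>(charset)),
                             &csi, TCI_SRCCHARSET))
        return csi.ciACP;  // the matching ANSI code page
    return CP_ACP;         // fall back to the system default
}

int main()
{
    // \fcharset204 is RUSSIAN_CHARSET; this should print cp1251.
    std::printf("charset 204 -> cp%u\n", CodePageFromCharset(204));
}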
It is not spelled out very clearly, but according to MSDN you can use the RichEdit control to convert the RTF to UTF-8 format:
http://msdn.microsoft.com/en-us/library/windows/desktop/bb774304(v=vs.85).aspx
Take a look at the SF_USECODEPAGE flag for the EM_STREAMOUT message.
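A sketch of how that message is typically used (assuming an existing RichEdit window handle; the helper and callback names are mine):

#include <windows.h>
#include <richedit.h>
#include <string>

// Called repeatedly by the control with successive chunks of output.
static DWORD CALLBACK StreamOutCallback(DWORD_PTR cookie, LPBYTE buf, LONG cb, LONG* pcb)
{
    reinterpret_cast<std::string*>(cookie)->append(reinterpret_cast<char*>(buf), cb);
    *pcb = cb;   // report the whole chunk as consumed
    return 0;    // 0 = success, keep streaming
}

std::string GetRtfAsUtf8(HWND hRichEdit)
{
    std::string out;
    EDITSTREAM es = {};
    es.dwCookie = reinterpret_cast<DWORD_PTR>(&out);
    es.pfnCallback = StreamOutCallback;
    // With SF_USECODEPAGE, the high word of wParam carries the code page.
    SendMessage(hRichEdit, EM_STREAMOUT,
                ((WPARAM)CP_UTF8 << 16) | SF_USECODEPAGE | SF_RTF,
                reinterpret_cast<LPARAM>(&es));
    return out;
}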

Unicode RTF text in RichEdit

I'm having trouble getting a RichEdit control to display unicode RTF text. My application is Unicode, so all strings are wchar_t strings.
If I create the control as "RichEdit20A" I can use e.g. SetWindowText, and the text is displayed with the proper formatting. If I create the control as "RichEdit20W" then using SetWindowText shows the text verbatim, i.e. all the RTF code is displayed. The same happens if I use the EM_SETTEXTEX message, specifying codepage 1200, which MSDN tells me is used to indicate Unicode.
I've tried using the StreamIn function, but this only seems to work if I stream in ASCII text. If I stream in widechars then I get empty text in the control. I use the SF_RTF|SF_UNICODE flags, and MSDN hints that this combination may not be allowed.
So what to do? Is there any way to get widechars into a RichEdit without losing RTF interpretation, or do I need to encode it? I've thought about trying UTF-8, or perhaps use the encoding facilities in RTF, but am unsure what the best choice is.
I had to do this recently, and noticed the same sorts of observations you're making.
It seems that, despite what MSDN almost suggests, the "RTF" parser will only work with 8-bit encodings. So what I ended up doing was using UTF-8, which is an 8 bit encoding but still can represent the full range of Unicode characters. You can get UTF-8 from a PWSTR via WideCharToMultiByte():
PWSTR WideString = /* Some string... */;
DWORD WideLength = (DWORD)wcslen(WideString) + 1;
PSTR Utf8;
DWORD Length;
INT ReturnedLength;

// A UTF-8 representation shouldn't be longer than 4 times the size
// of the UTF-16 one.
Length = WideLength * 4;
Utf8 = (PSTR)malloc(Length);
if (!Utf8) { /* TODO: handle failure */ }

ReturnedLength = WideCharToMultiByte(CP_UTF8,
                                     0,
                                     WideString,
                                     WideLength - 1,
                                     Utf8,
                                     Length - 1,
                                     NULL,
                                     NULL);
if (ReturnedLength)
{
    // Need to zero terminate...
    Utf8[ReturnedLength] = 0;
}
else { /* TODO: handle failure */ }
Once you have it in UTF-8, you can do:
SETTEXTEX TextInfo = {0};
TextInfo.flags = ST_SELECTION;
TextInfo.codepage = CP_UTF8;
SendMessage(hRichText, EM_SETTEXTEX, (WPARAM)&TextInfo, (LPARAM)Utf8);
And of course (I left this out originally, but while I'm being explicit...):
free(Utf8);
RTF is ASCII; any character outside ASCII is encoded using an escape sequence.
RTF 1.9.1 specification (March 2008)
Take a look at the \uN control word in the RTF specification: you have to convert your wide string into a string of Unicode escapes like \u902?\u300?\u888?
http://www.biblioscape.com/rtf15_spec.htm#Heading9
The number in this case is the character's decimal code, and the question mark is the character that will replace the Unicode character in case the RichEdit control does not support Unicode (RichEdit v1.0).
For example, for the Unicode string L"TIME" the RTF data will be "\u84?\u73?\u77?\u69?".
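A short C++ sketch of that conversion (illustrative; note that \uN takes a signed 16-bit decimal value, so code units above 32767 must be written as negative numbers):

#include <cstdio>
#include <string>

// Encode a UTF-16 string as RTF \uN? escapes (assumes 16-bit wchar_t, as on Windows).
std::string ToRtfUnicodeEscapes(const std::wstring& text)
{
    std::string rtf;
    for (wchar_t ch : text) {
        char buf[16];
        // The trailing '?' is the fallback shown by readers without Unicode support.
        std::snprintf(buf, sizeof(buf), "\\u%d?", (int)(short)ch);
        rtf += buf;
    }
    return rtf;
}

int main()
{
    // Prints \u84?\u73?\u77?\u69? as in the example above.
    std::printf("%s\n", ToRtfUnicodeEscapes(L"TIME").c_str());
}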

Search for Unicode text inside Windows XP

Is there a way of searching for Unicode characters inside a text file under Windows XP? For example, suppose I wish to find text documents with the euro symbol. Although the standard XP search allows me to search for the euro symbol, it does not produce any matches when I know there should be at least a few. Wingrep has the same issue. Is there any simple software or setting that I have missed?
The input encoding of the search field (in Windows XP, UTF-16) may not match the encoding of the text file (probably UTF-8).
I haven't used this tool (freeware), but it might work for your needs.
On Windows, or any other system, how can you find out whether a document is Unicode (i.e., contains at least one Unicode character) or not?
To achieve this, just use this simple code. Note that it is written in C#; you should use your own equivalent.
public bool IsUnicode(string str)
{
    int asciiBytesCount = System.Text.Encoding.ASCII.GetByteCount(str);
    int unicodeBytesCount = System.Text.Encoding.UTF8.GetByteCount(str);
    // UTF-8 needs more bytes than ASCII exactly when the string
    // contains characters outside the ASCII range.
    return asciiBytesCount != unicodeBytesCount;
}
If you do not want to write any code, you can find out whether a document contains any Unicode characters by looking at the encoding type shown when you save the document.