I'm using Cocos Code IDE to create a simple project with Lua.
main.lua was saved as UTF-8 without BOM and contains the following snippet:
local label1 = cc.Label:createWithTTF("長い","fonts/Marker Felt.ttf",32);
local label2 = cc.Label:createWithTTF("LONG","fonts/Marker Felt.ttf",32);
local label3 = cc.Label:createWithTTF("rất dài","fonts/Marker Felt.ttf",32);
label1 is a Japanese string and was rendered correctly, as was label2, an ANSI string. However, label3 wasn't rendered correctly; in fact, it came out as "rt dài".
I tried other Unicode fonts that clearly have those characters, but I still couldn't get them rendered correctly. What have I done wrong?
I had the same issue, and I have solved it.
I figured out that the Marker Felt.ttf font file included in the Cocos project is not a Unicode version. After a while, I found the Unicode version of the Marker Felt font on my Mac at: System/Library/Fonts/MarkerFelt.ttc
I'm using FreeType 2.5.3 in a portable OpenGL application.
My issue is that I can't get Unicode characters rendered on my Windows machine, while I get them correctly on Linux-based systems (Lubuntu, OS X, Android).
I'm using the well-known arialuni.ttf (23 MB), so I'm pretty sure it contains everything. In fact, I had this working on my previous Windows installation (Win7); then I re-installed Win7 from another source, and now Unicode is not working right.
Specifically, when I draw a string, only the Latin characters are rendered while the Unicode ones get skipped. I dug deeper and found that the character codes are not what they should be in the wstring. For example, I'm using some Greek letters in the string, like γ, which I know should have a code point of 947.
My engine just iterates over the wstring characters and maps each code point to another vector that holds texture coordinates, so I can draw the glyph.
The problem is that on my Windows 7 machine the wstring does not give me 947 for a γ; instead it gives me 179. In addition, the character Ά comes back as two characters with code 206 (??) instead of one with code 902.
It's just simple iteration over the wstring, like:
for (size_t c = 0, sz = wtext.size(); c < sz; c++) {
    uint32_t ch = wtext[c]; // code point
    ...
}
This only happens on my newly installed Win7; it worked before on another Win7 system, along with all my Linux machines. Now it's broken on this one, and also on my XP virtual machine.
I don't use any wide formatting functions for this, just something like:
wstring wtext = L"blΆh";
In addition, I can see my glyphs being rendered correctly in my OpenGL texture, so it's not a font issue either. My font generator uses the Greek range of roughly 900-950 code points to collect the glyphs.
I add the code points per language with this:
FT_UInt charcode;
FT_ULong character = FT_Get_First_Char(face, &charcode);
do {
    character = FT_Get_Next_Char(face, character, &charcode);
    ...
} while (charcode);
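For reference, in the FreeType API FT_Get_First_Char and FT_Get_Next_Char return the character code and write the glyph index through the pointer argument, so in the snippet above the variable named charcode actually receives the glyph index and character holds the char code. The iteration pattern from the FreeType documentation, with names matching what each value really is, looks like this:

FT_ULong charcode;
FT_UInt  gindex;

// Walk every character code defined in the face's selected charmap.
charcode = FT_Get_First_Char(face, &gindex);
while (gindex != 0) {
    // charcode is the code point, gindex the glyph index that maps to it
    // ... record the (charcode, gindex) pair here ...
    charcode = FT_Get_Next_Char(face, charcode, &gindex);
}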
Not sure why, but I fixed it by saving the source file as UTF-8 with BOM rather than plain UTF-8 (which I had by default).
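That fix makes sense: without a BOM, Visual C++ typically reads the source file in the system code page, so the UTF-8 bytes of γ (0xCE 0xB3) become the two characters 206 and 179, which is exactly what was observed above. If you would rather not depend on how the file is saved, universal-character-name escapes are a portable alternative; a minimal sketch (BMP-only, since wchar_t is 16-bit on Windows):

#include <cstdint>
#include <cstdio>
#include <string>

int main() {
    // \u escapes do not depend on how the compiler guesses the file's encoding.
    std::wstring wtext = L"bl\u0386h"; // \u0386 is Ά (code point 902)

    for (size_t c = 0, sz = wtext.size(); c < sz; c++) {
        uint32_t ch = wtext[c]; // code point, as in the loop above
        std::printf("%u\n", ch); // prints 98, 108, 902, 104
    }
}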
I have loaded the Mono Chinese/Japanese font onto my ZM400 printer. So far I have had no success printing Chinese and English together in the same field.
Here is some example code:
^XA^CW1,B:ANMDS.TTF
^SEB:GB.DAT^CI14
^FO100,100^A1,50,50^FD中文English Here^FS
^XZ
Since I changed the international code to 14 (with ^CI14), it only prints the Chinese text without the English text.
I have also tried using the ^FL command, but can't seem to get it to work.
Does anyone have a working example of printing Chinese / Japanese text along with English text on the same FD (data field)?
You should probably use ^CI28 (UTF-8), and make sure that your labels are encoded in UTF-8.
As far as I know, ^CI14 only supports Asian encodings.
If anyone is looking at how to do this, I imagine what I did for Japanese will also work for Chinese.
Firstly, I didn't want to purchase the Asian Font Pack because I think it's a bit of a rip-off, so I found a suitable open-source Japanese Unicode font. I then uploaded it to the printer using Zebra Tools; make sure you upload it as a file, NOT using the font upload.
Then I managed to get it printing by escaping the characters.
So my final ZPL is
^XA
^LL150
^CI28^A#N,60,60,E:OSAKA.TTF
^FO0,0
^FH
^FD_5F_E3_81_93_E3_82_8C_E3_81_AF_E4_BD_95_E3_81_A8_E8_A8_80_E3_81_A3_E3_81_A6_E3_81_84_E3_81_BE_E3_81_99^FS
^XZ
Essentially you have to escape each byte of the string's UTF-8 encoding (the original Japanese is これは何と言っています).
You also have to put ^FH in front of ^FD so the printer knows you're escaping characters.
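If you need to build that escaped field programmatically, here is a minimal C++ sketch (a hypothetical helper, not part of any Zebra SDK) that hex-escapes the bytes of a UTF-8 string for use after ^FH^FD, assuming the default '_' escape character:

#include <cstdio>
#include <string>

// Hex-escape every byte of a UTF-8 string into the _XX form accepted after ^FH^FD.
std::string zplHexEscape(const std::string& utf8) {
    std::string out;
    char buf[4];
    for (unsigned char byte : utf8) {
        std::snprintf(buf, sizeof(buf), "_%02X", byte);
        out += buf;
    }
    return out;
}

int main() {
    // Assumes this source file is saved as UTF-8, so the literal's bytes are its UTF-8 encoding.
    std::printf("^FH^FD%s^FS\n", zplHexEscape("これは何と言っています").c_str());
}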
Hopefully this helps the poster and anyone else who is looking to overcome problems with ZPL and Unicode fonts / characters.
I have figured out why. The Chinese text needs to be in "gibberish" format.
What I mean by gibberish is this: when you use Chinese in ZPL code, it needs to be the text in Windows code page format. Chinese text in that Windows code page format will display as gibberish in an English environment.
For example, in ZPL code, your field might look like this:
^H ~!!####$ (this gibberish is actually the Chinese text encoded in the Windows code page)
However, you can't type in Unicode Chinese, because ZPL would not print it:
^H 中文 (this is Chinese text in Unicode format)
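If you would rather generate those code-page bytes from a normal Unicode string than paste the gibberish by hand, a small Win32 sketch along these lines could do it; code page 936 (Simplified Chinese) is an assumption here, so use whatever code page matches your ^SE/^CI setup:

#include <windows.h>
#include <string>

// Convert a UTF-8 string to bytes in the given Windows code page
// (e.g. 936 for Simplified Chinese). Error handling omitted.
std::string utf8ToCodePage(const std::string& utf8, UINT codePage) {
    int wlen = MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(), nullptr, 0);
    std::wstring wide(wlen, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(), &wide[0], wlen);

    int len = WideCharToMultiByte(codePage, 0, wide.data(), wlen, nullptr, 0, nullptr, nullptr);
    std::string out(len, '\0');
    WideCharToMultiByte(codePage, 0, wide.data(), wlen, &out[0], len, nullptr, nullptr);
    return out; // these are the bytes to place after ^FD
}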
I have an old program written in VB6.
I am trying to get it work right on Windows 8.1.
Everything works, except sending text in Hebrew to the printer.
The printer prints "???" instead of Hebrew characters.
It is obvious that this is an encoding problem, but I don't find a way to solve it.
The program works on Windows 7 without any problem!
The relevant code:
Printer.Font.Charset = 177 'Hebrew encoding
Printer.Print "<text in Hebrew>"
Printer.EndDoc
If someone has an advice, I will appreciate it a lot.
Thanks!
It usually means the font used does not have those characters. Arial has stuff like גּוּלּ֧֧֧֯.
object.FontName [= font]
The FontName property syntax has these parts:
object: An object expression that evaluates to an object in the Applies To list.
font: A string expression specifying the font name to use.
Remarks
The default for this property is determined by the system. Fonts available with Visual Basic vary depending on your system configuration, display devices, and printing devices. Font-related properties can be set only to values for which fonts exist.
In general, you should change FontName before setting size and style attributes with the FontSize, FontBold, FontItalic, FontStrikethru, and FontUnderline properties.
You might need to set the Language for non-Unicode programs to Hebrew. In Windows 8 you do it under Control Panel > Region > Administrative > Change system locale.
I am using the free PDF library libharu to generate a PDF file,
but I have an encoding problem: I cannot draw Thai-language text on the PDF file;
all the text shows as "???..".
Does somebody know how to fix it?
Thanks
I have succeeded in rendering ideographic text (not Thai, but Chinese and Japanese) using libharu. First of all, I used Unicode mode; please refer to the HPDF_UseUTFEncodings() function documentation.
For C language, here is a sequence of libharu API calls needed to overcome your trouble:
HPDF_UseUTFEncodings(docHandle);
HPDF_SetCurrentEncoder(docHandle, "UTF-8");
Here docHandle is a valid HPDF_Doc object.
The next part is working properly with UTF fonts:
const char * libFontName = HPDF_LoadTTFontFromFile(docHandle, fontFileName.c_str(), font_embed::EmbedFonts);
HPDF_Font font = HPDF_GetFont(docHandle, libFontName, "UTF-8");
After these calls you can render Unicode text containing Thai characters. Also note the embedding flag (3rd parameter of LoadTTFontFromFile): without embedding, your PDF file may be unreadable because of external font references. If you are not too worried about the output PDF size, you can just embed the fonts.
I've tested a couple of Thai .ttf fonts found via Google and they rendered OK. Also (it may be important, but I'm not sure) I'm using the fork of libharu at https://github.com/kdeforche/libharu, which has now been merged into the master branch.
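Putting those calls together, a minimal end-to-end sketch might look like the following; the font path, output name, and sample string are placeholders, error handling is omitted, and HPDF_TRUE is the embedding flag in mainline libharu (the fork's font_embed::EmbedFonts plays the same role):

#include "hpdf.h"

int main(void) {
    HPDF_Doc pdf = HPDF_New(NULL, NULL);

    // Switch libharu into UTF-8 mode before loading the font.
    HPDF_UseUTFEncodings(pdf);
    HPDF_SetCurrentEncoder(pdf, "UTF-8");

    // Embed the TTF so the PDF is readable without the font installed.
    const char *fontName = HPDF_LoadTTFontFromFile(pdf, "thai_font.ttf", HPDF_TRUE);
    HPDF_Font font = HPDF_GetFont(pdf, fontName, "UTF-8");

    HPDF_Page page = HPDF_AddPage(pdf);
    HPDF_Page_SetFontAndSize(page, font, 18);
    HPDF_Page_BeginText(page);
    HPDF_Page_TextOut(page, 50, 750, "สวัสดี"); // assumes this source file is saved as UTF-8
    HPDF_Page_EndText(page);

    HPDF_SaveToFile(pdf, "thai.pdf");
    HPDF_Free(pdf);
    return 0;
}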
When you write text to the PDF, use the correct font and encoding. In the libharu documentation you have all the possibilities: https://github.com/libharu/libharu/wiki/Fonts
In your case, you must use the Thai character set, ISO8859-11 (TIS 620-2569).
An example (in Spanish):
HPDF_Font fontEn = HPDF_GetFont(pdf, "Helvetica-Bold", "ISO8859-2");
HPDF_Page_TextOut(page1, 50.00, 750.00, [@"Código para correcta codificación en libharu" cStringUsingEncoding:NSISOLatin1StringEncoding]);
How can I use Norwegian characters and show them in a UILabel in my application?
If I use them directly, the label shows garbage values.
The characters should work. I'm using a UILabel which loads from a nib file into the view of a ViewController. The label works in Times New Roman and Palatino (and presumably other fonts) when I set the text with the following code: [myLabel setText:@"ÆØÅ æøå"];. I would try copying and pasting that line directly and see what happens.
I would guess that you're getting your characters from somewhere whose representation is not quite right in NSString terms. I grabbed these from Wikipedia. (I don't speak Norwegian so I'm not sure if you need other non-English alphabet characters - as far as I could see these were the only ones.)