Print Chinese / Japanese character in Zebra Printer with ZPL - unicode

I have loaded the Mono Chinese/Japanese font onto my ZM400 printer. So far I have had no success printing Chinese and English together in the same field.
Here is some example code:
^XA^CW1,B:ANMDS.TTF
^SEB:GB.DAT^CI14
^FO100,100^A1,50,50^FD中文English Here^FS
^XZ
Since I changed the international encoding to 14 (with ^CI14), it only prints the Chinese text, not the English text.
I have also tried using the ^FL command, but can't seem to get it to work.
Does anyone have a working example of printing Chinese / Japanese text along with English text on the same FD (data field)?

You should probably use ^CI28 (UTF-8), and make sure that your labels are encoded in UTF-8.
As far as I know, ^CI14 only supports Asian encodings.
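For example, with ^CI28 the field data is just UTF-8 bytes. A minimal sketch of sending such a label (Python; the font mapping mirrors the question, and the printer hostname is hypothetical):
import socket

zpl = "^XA^CW1,B:ANMDS.TTF^CI28^FO100,100^A1,50,50^FD中文English Here^FS^XZ"

with socket.create_connection(("printer.local", 9100)) as s:  # hypothetical host; Zebra printers typically listen on TCP 9100
    s.sendall(zpl.encode("utf-8"))  # the bytes on the wire must be UTF-8 for ^CI28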

If anyone is looking at how to do this, I imagine what I did for Japanese will work for Chinese.
Firstly, I didn't want to purchase the Asian Font Pack because I think it's a bit of a rip-off, so I found an appropriate open-source Japanese Unicode font. I then uploaded it to the printer using Zebra Tools... make sure you upload it as a file, NOT using the font upload.
Then I managed to get it printing by escaping the characters.
So my final ZPL is:
^XA
^LL150
^CI28^A@N,60,60,E:OSAKA.TTF
^FO0,0
^FH
^FD_5F_E3_81_93_E3_82_8C_E3_81_AF_E4_BD_95_E3_81_A8_E8_A8_80_E3_81_A3_E3_81_A6_E3_81_84_E3_81_BE_E3_81_99^FS
^XZ
Essentially you have to hex-escape each byte of the UTF-8 value (the original Japanese is これは何と言っています).
You also have to put ^FH in front of ^FD so the printer knows you're escaping characters.
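A rough sketch of automating that escaping (Python, assuming the default '_' escape character for ^FH):
def zpl_escape(text):
    # Hex-escape every UTF-8 byte so it can be used after ^CI28 + ^FH in a ^FD field.
    return "".join("_%02X" % b for b in text.encode("utf-8"))

print(zpl_escape("これは何と言っています"))
# -> _E3_81_93_E3_82_8C_E3_81_AF... (the escaped bytes used in the ^FD line above)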
Hopefully this helps the poster and anyone else who is looking to overcome problems with ZPL and Unicode fonts / characters.

I have figured out why. The Chinese text needs to be in "gibberish" format.
What I mean by gibberish is this: when you put Chinese in ZPL code, it needs to be the raw bytes of the Windows code-page encoding. Those code-page bytes, viewed in an English/Latin environment, display as gibberish.
For example, in ZPL code, your field might look like this:
^H ~!!####$ (this gibberish is actually the Chinese text as Windows code-page bytes, displayed as Latin characters)
However, you can't type in Unicode Chinese, because ZPL would not print it:
^H 中文 (this is Chinese text in Unicode format)
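A sketch of producing those code-page bytes (Python; GB2312/CP936 is an assumption here, use whichever code page the printer is configured for):
text = "中文"
data = text.encode("gb2312")
print(data)                    # b'\xd6\xd0\xce\xc4' - the raw code-page bytes
print(data.decode("latin-1"))  # 'ÖÐÎÄ' - the same bytes viewed as Latin text, i.e. the "gibberish"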

Related

Chinese in Japanese encoding

This may sound like a stupid question. I typed some Chinese characters into an empty text file in the VS Code text editor (default UTF-8). Then I saved the file in an encoding for Japanese, Shift JIS, which apparently doesn't cover all the characters I had typed.
However, before I closed the file, all the Chinese characters were displayed properly in VS Code. After I closed the file and reopened it using the Shift JIS encoding, several characters are displayed as a question mark (?). I guess these are the Chinese characters not covered by the Japanese encoding?
What happened in the process? Is there any way I can get back the Chinese characters that are now shown as "?"? I don't really understand how encoding works in this scenario...
Not all encodings cover all characters. (Unicode encodings, in principle, do, but even they don't have quite everything yet.) If you save some text in an encoding which does not include all characters in that text, something has to give.
Options:
you get an error message,
nothing saves at all,
the characters which cannot be included are silently dropped,
the characters which cannot be included are converted to some other character (such as the question mark).
Once that conversion is done, the data is lost, and cannot be recovered. Why not use UTF-8 or another Unicode encoding? (GB 18030 might be the best for large amounts of Chinese text.)
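The replacement behavior is easy to demonstrate (a Python sketch; 简 is a simplified-Chinese character outside Shift JIS, while 体 and 字 happen to exist in it):
text = "简体字"
data = text.encode("shift_jis", errors="replace")
print(data.decode("shift_jis"))  # '?体字' - the unmappable character is now a literal '?', unrecoverably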

Read Turkish characters from a txt file

I am trying to read string data from a txt file which has special Turkish characters in it.
I want to store the content in a string. I tried some methods like textscan and fileread but, instead of special Turkish characters like ş, ç, ı, ö, ğ, I get some weird symbols. Is there any way to do that?
I created a file called turkish.txt with the characters you mentioned (ş,ç,ı,ö,ğ). Trying to read it gave me the following:
fid = fopen('turkish.txt', 'r', 'n', 'UTF-8');
str = fread(fid);
native2unicode(str')
ans =
ÿþ_, ç , 1, ö ,
As you can see, ş, ı and ğ are not rendered correctly. If you type
help slCharacterEncoding
you can see a list of the encodings most commonly supported by platforms. I played with the encodings a little; some that I checked were:
ISO-8859-1
US-ASCII
Windows-1252
Shift_JIS
The last one is a Japanese encoding. It covers some of the Turkish characters, such as ç and ö, which were rendered correctly, but not all of them.
If you skim through the docs it says:
If you want to use a different character encoding, you need to start MATLAB with the appropriate locale settings for your operating system. Consult your operating system manual to change the locale setting.
The instructions for setting the locale on windows platforms, which I haven't tried, can be found here.
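The underlying issue is just bytes being decoded with the wrong encoding, which is easy to reproduce outside MATLAB (a Python sketch; ISO-8859-9 is the Latin-5 Turkish code page):
data = "ş, ç, ı, ö, ğ".encode("utf-8")  # the bytes as stored on disk

for enc in ("utf-8", "iso-8859-9", "windows-1252"):
    print(enc, "->", data.decode(enc, errors="replace"))
# Only the decoding that matches the file's actual encoding round-trips cleanly.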
Hope it helps.

How to convert Punjabi Unicode to English text?

I have records saved in a SQL Server database in the form of Punjabi Unicode. Now I want to convert this Punjabi Unicode to English text. Is there any utility which can help me? Please reply if anyone has a solution, paid or free. Thanks in advance.
The question is nonsensical -- in the sense that it makes no sense.
Unicode is not a language. It merely provides a mapping from characters to numeric code points, in such a way that text written in Punjabi characters stays Punjabi when another font is applied. There is no "English" Unicode, and no "Punjabi" Unicode either.
You can only translate from Punjabi to English using translation software. (Given the current state of automatic translation software, you are better off with a human who is fluent in both languages.)
If you want to convert Punjabi Unicode into visually "English" text, for example:
ਨਿੱਕੀ ਕਹਾਣੀ (Unicode)
in`kI khwxI (converted to Gurmukhi Lipi; it displays as English letters, but when you change its font to GurmukhiLipi it shows as Punjabi)
You can check my website: it was previously in Unicode and is now in Gurbani Lipi (I have installed a plugin that renders the English letters as Punjabi).

MFC multibyte application shows junk "????" on pasting Chinese characters, but typing works

Our MFC application uses the Multi-Byte Character Set (MBCS). The OS is Windows 7.
We can type Simplified Chinese characters using the virtual keyboard, but copying and pasting Chinese characters from Google Translate into an edit box in the application shows the junk characters "????".
Is this a known issue with MBCS applications? Is there a workaround?
When copying and pasting into a multi-byte app, the Unicode characters are converted into the local code page. If they can't be converted, you'll get ?. You really should be compiling and distributing your app as Unicode, otherwise you'll be fighting these sorts of issues all the time.
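That lossy conversion is easy to reproduce (a Python sketch; CP1252 stands in for whatever ANSI code page the app runs under):
text = "中文 abc"
print(text.encode("cp1252", errors="replace").decode("cp1252"))  # '?? abc' - unmappable characters become '?'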
If you can't re-compile in Unicode try catching the 'Paste' action and handle the clipboard yourself. Use GetClipboardData and read the value for CF_UNICODETEXT, which will be the valid text. You'll then need to do your own conversion to the correct multi-byte format.

Convert non-English characters into Unicode (UTF-8)

I copied a large amount of text from another system to my PC. When I viewed the text on my PC, it looked weird. So I copied all the fonts from the other PC and installed them on mine too. Now the text looks okay, but it seems it is not actually Unicode. For example, if I copy the text and paste it into a UTF-8-capable editor such as Notepad++, I get only the English characters "bgah;".
How can I convert this whole text into Unicode text, like the one below, so I can copy the text and paste it anywhere else?
பெயர்
The above text was manually obtained using http://www.google.com/transliterate/indic/Tamil
I need this conversion done so that I can copy the text into database tables.
'Ja-01' is a font with a custom 'visual encoding'.
That is to say, the sequence of characters really is "bgah;", and it only looks like Tamil to you because the font's shapes for the Latin characters bg look like பெ.
This is always to be avoided, because storing the content as "bgah;" loses the ability to search and process it as real Tamil. This approach was nevertheless common in the pre-Unicode days, especially for less-widespread scripts without mature encoding standards; this application probably predates widespread use of TSCII.
Because it is a custom encoding not shared by any other font, it is very unlikely you will find an existing tool to convert content in this encoding to proper Unicode. It does not appear to follow any standard character ordering, so you will have to look at the font (e.g. in charmap.exe), note down every character, find the matching Unicode character, and map between them.
For example, here's a trivial Python script to replace characters in a file:
mapping = {
    u'a': u'\u0BAF',  # Tamil letter Ya
    u'b': u'\u0BAA',  # Tamil letter Pa
    u'g': u'\u0BC6',  # Tamil vowel sign E (combining)
    u'h': u'\u0BB0',  # Tamil letter Ra
    u';': u'\u0BCD',  # Tamil sign virama (combining)
    # fill in the rest of the mapping information here!
}

with open('ja01data.txt', 'rb') as fp:
    data = fp.read().decode('utf-8')

for char in mapping:
    data = data.replace(char, mapping[char])

with open('utf8data.txt', 'wb') as fp:
    fp.write(data.encode('utf-8'))
The font you found is getting you into trouble. The actual cell text is "bgah;"; it gets rendered as பெயர் because you found a font that maps those 8-bit non-Unicode characters to Tamil shapes. So reading it, or pasting it into Notepad++, is going to produce "bgah;", since that's the real text. It can only ever be rendered properly again by forcing whatever program displays the string to use that same font.
Ditch the font and enter the text as real Unicode, so it looks like பெயர் everywhere.
"bgah" looks like a Baamini based system, which is pre-unicode. It was popular in Canada (and the SL Tamil diaspora in general) in the 90s.
As the others mentioned, it looks like a custom visual encoding that mimics the performance of a foreign script while maintaining ASCII encoding.
Google "Baamini to unicode convertor". The University of Colombo seems to have put one up: http://www.ucsc.cmb.ac.lk/ltrl/services/feconverter/?maps=t_b-u.xml
Let me know if this works. If not, I can ask around and get something for you.
You could first check whether the encoding is TSCII, as this sounds most probable. It is an 8-bit encoding, and the fonts you copied are probably based on it. Check whether the TSCII-to-UTF-8 converter at SourceForge is suitable. The project there is called "Any Tamil Encoding to Unicode", but they say that only TSCII is supported for now.