How to draw Thai text to a PDF file using the libharu library

I am using the free PDF library libharu to generate a PDF file, but I have an encoding problem: I cannot draw Thai-language text on the PDF page; all the text shows as "???..".
Does anybody know how to fix it?
Thanks

I have succeeded in rendering non-Latin text (not Thai, but Chinese and Japanese) using libharu. First of all, I used Unicode mode; please refer to the HPDF_UseUTFEncodings() function documentation.
For C, here is the sequence of libharu API calls needed to overcome the problem:
HPDF_UseUTFEncodings(docHandle);
HPDF_SetCurrentEncoder(docHandle, "UTF-8");
Here docHandle is a valid HPDF_Doc object.
The next part is loading a font for UTF text:
const char *libFontName = HPDF_LoadTTFontFromFile(docHandle, fontFileName, HPDF_TRUE); /* HPDF_TRUE embeds the font */
HPDF_Font font = HPDF_GetFont(docHandle, libFontName, "UTF-8");
After these calls you can render Unicode text containing Thai characters. Also note the embedding flag (the third parameter of HPDF_LoadTTFontFromFile): without embedding, your PDF may be unreadable on machines that lack the font, because the file only holds external font references. If you are not worried about output PDF size, just embed the fonts.
I've tested a couple of Thai .ttf fonts found through Google and they rendered OK. Also (it may be important, but I'm not sure) I'm using the fork of libharu at https://github.com/kdeforche/libharu, which has since been merged into the master branch.
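Putting the pieces together, here is a minimal end-to-end sketch in C. This is my illustration rather than code from the answer above: the font path "Garuda.ttf" and the output file name are placeholders, and HPDF_UseUTFEncodings requires a libharu build with UTF-8 support (such as the fork mentioned above).

#include <stdio.h>
#include "hpdf.h"

int main(void)
{
    /* NULL error handler: check return values manually instead. */
    HPDF_Doc pdf = HPDF_New(NULL, NULL);
    if (!pdf) {
        fprintf(stderr, "HPDF_New failed\n");
        return 1;
    }

    /* Switch the document to UTF-8 text handling. */
    HPDF_UseUTFEncodings(pdf);
    HPDF_SetCurrentEncoder(pdf, "UTF-8");

    /* Load and embed a TrueType font containing Thai glyphs
       ("Garuda.ttf" is a placeholder - any Thai-capable .ttf works). */
    const char *fontName = HPDF_LoadTTFontFromFile(pdf, "Garuda.ttf", HPDF_TRUE);
    HPDF_Font font = HPDF_GetFont(pdf, fontName, "UTF-8");

    HPDF_Page page = HPDF_AddPage(pdf);
    HPDF_Page_SetFontAndSize(page, font, 16);

    HPDF_Page_BeginText(page);
    /* UTF-8 bytes for the Thai greeting "sawatdi". */
    HPDF_Page_TextOut(page, 50, 750,
                      "\xE0\xB8\xAA\xE0\xB8\xA7\xE0\xB8\xB1"
                      "\xE0\xB8\xAA\xE0\xB8\x94\xE0\xB8\xB5");
    HPDF_Page_EndText(page);

    HPDF_SaveToFile(pdf, "thai.pdf");
    HPDF_Free(pdf);
    return 0;
}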

When you write text to the PDF, use the correct font and encoding. The libharu documentation lists all the possibilities: https://github.com/libharu/libharu/wiki/Fonts
In your case, you must use the ISO8859-11 (Thai, TIS 620-2569) character set.
An example (the string is Spanish for "Code for correct encoding in libharu"; the string conversion is Objective-C):
HPDF_Font fontEn = HPDF_GetFont(pdf, "Helvetica-Bold", "ISO8859-2");
HPDF_Page_TextOut(page1, 50.00, 750.00, [@"Código para correcta codificación en libharu" cStringUsingEncoding:NSISOLatin1StringEncoding]);
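Applied to Thai, a hedged sketch of the same single-byte approach in plain C (the font file "Garuda.ttf" and the variable tis620_text are my assumptions; the bytes passed to HPDF_Page_TextOut must already be TIS-620/ISO8859-11 encoded, not UTF-8):

/* Load a Thai-capable TrueType font and pair it with the ISO8859-11 encoder. */
const char *thaiFontName = HPDF_LoadTTFontFromFile(pdf, "Garuda.ttf", HPDF_TRUE);
HPDF_Font thaiFont = HPDF_GetFont(pdf, thaiFontName, "ISO8859-11");

HPDF_Page_SetFontAndSize(page1, thaiFont, 14);
HPDF_Page_BeginText(page1);
/* tis620_text: a char buffer holding TIS-620 encoded Thai, one byte per character. */
HPDF_Page_TextOut(page1, 50.0, 700.0, tis620_text);
HPDF_Page_EndText(page1);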

Related

Cyrillic is not displayed in TextMeshPro

Cyrillic is not displayed in TextMeshPro; squares are displayed instead of Russian letters. I looked on the Internet for other people's analyses of the same issue, but I did not understand them and they did not help. In the Font Asset Creator it seems the Hex range was indicated correctly, but it still does not work. Can anyone help me figure out what the problem is?
Unity employee Stephan_B explains:
In the Font Asset Creator - Character Set, you have selected ASCII
which will only include the ASCII character set in the font asset.
To include characters for different languages like Russian which uses
the Cyrillic character set, you have to include those in your
selection. According to the Unicode Chart where you can learn about
the Unicode range for all languages, Cyrillic is located in the range
of 0400-04FF.
Here is an example of an SDF Font Asset that I can use as a Fallback,
using the Unicode Range 400-4FF, which includes the Cyrillic set.
Please take the time to watch the video about Font Asset Creation as
all these options are explained including localization. I also
strongly suggest you watch the video about Material Presets which is
equally important.
I also suggest you use SDF 16 or SDF 32 as this will give you the most
flexibility and allow you to use Material Presets to define different
visual styles for your text.

How to include special characters in A-frame web VR (čšž...)

I want to create a website readable not only in English, but I have problems with special characters. I've tried ASCII HTML entities.
Any ideas?
If you have trouble with the text component, there are three ways I can think of:
1) The proper way: find or generate a font from a font set containing those characters. The docs describe how to use custom fonts:
<a-entity text="text: Hello World; font: ../fonts/CustomFnt.fnt;
fontImage: ../fonts/CustomFnt.png"></a-entity>
But you need to have a font file plus a .png grid with the font images.
The docs provide a link to a tool for generating fonts, as well as a tutorial.
2) Check out Don McCurdy's custom font generator!
3) The workaround: you could make a transparent image containing your text and put it on an <a-plane>, like I did here.
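For that workaround, the markup can be as simple as this (the image path is a placeholder; transparent: true keeps the plane's background invisible):

<a-plane material="src: url(text-with-diacritics.png); transparent: true"
         width="2" height="1" position="0 1.6 -2"></a-plane>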

Where can I get a Font-family to language pair map for Microsoft Word

I am programmatically generating a bilingual MS Word 2011 file (containing text in two languages) using docx4j. My plan is to set the font family of each piece of text based on its language. E.g., given a Latin and an Indian language, all English text would use 'Times New Roman' and all Hindi text a Devanagari font.
The MS Word documentation doesn't have any information on this. Any help finding a list of all prominent languages MS Word supports and their corresponding font families is appreciated.
The starting point is the rFonts element.
As it says:
This element specifies the fonts which shall be used to display the
text contents of this run. Within a single run, there may be up to
four types of content present which shall each be allowed to use a
unique font:
• ASCII
• High ANSI
• Complex Script
• East Asian
The use of each of these fonts shall be determined by the Unicode
character values of the run content, unless manually overridden via
use of the cs element
For further commentary, and for the actual algorithm docx4j uses (in its PDF output) to mimic Word's behaviour, see RunFontSelector.
To simplify a bit, you need to work out which of the four attributes Word would use for your Hindi text (based on its Unicode character values), then set that attribute to the font you want.
You can set the attribute to an actual font name, or use a theme reference (see the RunFontSelector code for how that works).
If I were you, I'd create a docx in Word that is set up the way you like, then look at its underlying XML. If it uses theme references in the font attributes, you can either use that docx as a template for your docx4j work, or manually 'resolve' the references and replace them with the actual font names.
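For illustration, a run set up that way might carry rFonts like this. This is only a sketch: 'Mangal' is just one common Devanagari font, and Word often writes theme references (w:asciiTheme and friends) instead of literal names:

<w:r>
  <w:rPr>
    <!-- ascii/hAnsi cover Latin text; cs covers complex scripts such as Devanagari -->
    <w:rFonts w:ascii="Times New Roman" w:hAnsi="Times New Roman" w:cs="Mangal"/>
  </w:rPr>
  <w:t>...</w:t>
</w:r>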
If you want to programmatically reproduce what Word has created for you, you can upload your docx to the docx4j webapp to generate suitable code.
Finally, note that the fonts need to be available on the computer opening the docx (unless they are embedded in it); if they aren't, another font may be substituted.

Convert non-English characters into Unicode (UTF-8)

I copied a large amount of text from another system to my PC. When I viewed the text on my PC, it looked weird, so I copied all the fonts from the other PC and installed them on mine too. Now the text looks okay, but it seems it is not actually in Unicode. For example, if I copy the text and paste it into a UTF-8-capable editor such as Notepad++, I get only English characters ("bgah;"), as shown below.
How can I convert this whole text into Unicode text, like the one below, so I can copy the text and paste it anywhere else?
பெயர்
The above text was manually obtained using http://www.google.com/transliterate/indic/Tamil
I need this conversion done so I can copy the text into database tables.
'Ja-01' is a font with a custom 'visual encoding'.
That is to say, the sequence of characters really is "bgah;" and it only looks like Tamil to you because the font's shapes for the Latin characters bg look like பெ.
This is always to be avoided, because storing the content as "bgah;" means you lose the ability to search and process it as real Tamil; but this approach was common in the pre-Unicode days, especially for less-widespread scripts without mature encoding standards. This application probably predates widespread use of TSCII.
Because it is a custom encoding not shared by any other font, it is very unlikely you will find an existing tool to convert content in this encoding to proper Unicode. It does not appear to follow any standard character ordering, so you will have to look at the font (e.g. in charmap.exe), note down every character, find the matching Unicode character, and map between them.
For example here's a trivial Python script to replace characters in a file:
mapping = {
    u'a': u'\u0BAF',  # Tamil letter Ya
    u'b': u'\u0BAA',  # Tamil letter Pa
    u'g': u'\u0BC6',  # Tamil vowel sign E (combining)
    u'h': u'\u0BB0',  # Tamil letter Ra
    u';': u'\u0BCD',  # Tamil sign virama (combining)
    # fill in the rest of the mapping information here!
}

with open('ja01data.txt', 'rb') as fp:
    data = fp.read().decode('utf-8')

for char in mapping:
    data = data.replace(char, mapping[char])

with open('utf8data.txt', 'wb') as fp:
    fp.write(data.encode('utf-8'))
The font you found is getting you into trouble. The actual cell text is "bgah;"; it only gets rendered as பெயர் because you found a font that works that way with 8-bit non-Unicode characters. So reading it or pasting it into Notepad++ produces "bgah;", since that is the real text. It can only ever be rendered properly again by forcing the displaying program to use that same font.
Ditch the font and enter the text as real Unicode, as shown above (பெயர்).
"bgah" looks like a Baamini based system, which is pre-unicode. It was popular in Canada (and the SL Tamil diaspora in general) in the 90s.
As the others mentioned, it looks like a custom visual encoding that mimics the performance of a foreign script while maintaining ASCII encoding.
Google "Baamini to unicode convertor". The University of Colombo seems to have put one up: http://www.ucsc.cmb.ac.lk/ltrl/services/feconverter/?maps=t_b-u.xml
Let me know if this works. If not, I can ask around and get something for you.
You could first check whether the encoding is TSCII, as this sounds most probable. It is an 8-bit encoding, and the fonts you copied are probably based on it. Check whether the TSCII-to-UTF-8 converter at SourceForge is suitable. The project there is called "Any Tamil Encoding to Unicode", but they say that only TSCII is supported for now.

Non-Unicode to Unicode conversion, for any font!

I have an HTML file with text encoded in a non-Unicode font. I need to convert that file to Unicode. I searched for a converter, but most converters work only for a specific list of fonts, not for all fonts.
My font is very specific; the text is in Devanagari script.
I have the file and I have the font. Please suggest a tool or technique. Thanks.
Unicode is not about fonts; it is about encoding. You need to find a converter that can convert your text to Unicode. What is the encoding of your text?
Apache Tika has the ability to pull text from PDF files using knowledge of font behavior, so if the file is in fact a PDF you have a chance. If you have a text file full of font indices in no particular encoding, you have a big programming job ahead of you.
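If the file really is a PDF, one quick way to see what Tika recovers is the third-party tika-python wrapper; this is my sketch, not part of the answer above, and it assumes Java is installed (the wrapper fetches the Tika server jar on first use):

# Sketch: dump whatever text Apache Tika can recover from a PDF.
# Assumes 'pip install tika' (the third-party tika-python wrapper).
from tika import parser

parsed = parser.from_file('document.pdf')
print(parsed.get('content') or '(no text recovered)')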