I tried to output some Unicode text using both iText-2.1.7 and iText-5.1.3.
While Devanagari characters got stacked properly, I am unable to see Tibetan characters properly stacked.
Instead, each character occupies a separate space. I tried both ARIALUNI.TTF and TibMachUni-1.901b.ttf as the BaseFont, but without success.
Googling gave me a post from 2009 which indicated it was not readily possible then.
I am stuck in the middle of a Unicode project and would appreciate any clues on how to proceed.
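For reference, this is roughly the kind of code I am using (a minimal sketch against the iText 5.x API; the output path and the sample Tibetan string are only placeholders):

    import com.itextpdf.text.Document;
    import com.itextpdf.text.Font;
    import com.itextpdf.text.Paragraph;
    import com.itextpdf.text.pdf.BaseFont;
    import com.itextpdf.text.pdf.PdfWriter;
    import java.io.FileOutputStream;

    public class TibetanTest {
        public static void main(String[] args) throws Exception {
            Document document = new Document();
            PdfWriter.getInstance(document, new FileOutputStream("tibetan.pdf"));
            document.open();
            // Embed a font that contains the Tibetan block; the glyphs still
            // come out side by side instead of being stacked.
            BaseFont bf = BaseFont.createFont("TibMachUni-1.901b.ttf",
                    BaseFont.IDENTITY_H, BaseFont.EMBEDDED);
            document.add(new Paragraph("\u0F40\u0FB1\u0F72", new Font(bf, 14)));
            document.close();
        }
    }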
Currently the only ligaturizer within iText is the ArabicLigaturizer; I'm actually surprised that Devanagari works for you.
Please see the post from earlier this year by Bruno Lowagie (the primary developer), along with the post that he linked to; specifically:
None of the current iText developers understand Hindi or any other
Indic language, so it's very difficult for them (if not impossible) to
write such an IndicLigaturizer
Replace "Indic" with "Tibetan" or any other language that needs ligatures.
I am using the Actions on Google Trivia Game template.
Special characters (parentheses) are not displaying in the chat window.
In Google Sheets, I have entered them in the following format:
Question: How to Add an item to the end of the list
Answer1: append()
Answer2: extend()
In Google Assistant, they are displayed without parentheses. How can I give questions and answers with parentheses and other special characters?
This is a good one - it looks like the processor that uses what you entered removes special characters. This does seem odd when you look at the question and the suggestion chips.
However... it makes sense if you think about how people are expected to answer the question. If you run it in "speaker" mode, it won't display the suggestion chips, but users will be expected to give an answer verbally. It is pretty difficult to say an answer with parentheses, so the system automatically removes them from what is expected.
I have a situation where I want to open a website by clicking a desktop icon, and I found AppJS to be the answer. I created an app with just a full-window iframe. The only problem I am facing is that Unicode characters are not shown properly (all I see are boxes in place of text). I could not figure out where I should look. Please give me some pointers, or maybe an alternative that fulfills the original purpose.
Thanks in advance
In recent months, I've seen that many friends (many of them coders) have started writing long, elaborate emails with footnotes for links, meaning that they write paragraphs [1] and then [2] put the links at the bottom [3]. Like this:
[1] www.example.com/1
[2] www.example.com/2
[3] www.example.com/3
I think it is smart and everything, but I don't understand the process of putting in the numbers while you write: I add references both when writing and when editing the text, I swap paragraphs, and thus I make a mess of the numbering.
Is it a common practice used by some community, or is there some editor/plugin that automatically puts the right number in the footnotes? Did this mutate from Markdown?
It is definitely not Markdown, which compiles to HTML, so you would see an anchor instead of plain text.
I think it could be a habit of people who want to send plain-text mail (not HTML encoded) without putting URLs inside the paragraphs, to keep them readable.
It may be influenced by the emerging Markdown community, where square brackets are used for footnotes, as opposed to wiki markup or LaTeX, which use curly brackets.
For a quick example, check Stack Overflow's own flavor of Markdown for links.
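For instance, a reference-style link in Markdown looks like the snippet below (URLs are illustrative); when compiled, the bracketed text becomes an anchor instead of staying as a plain [1]:

    Here is a paragraph with a [reference-style link][1] in it.

    [1]: http://www.example.com/1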
Maybe you should also check the W3C Web Annotation Working Group for some news.
A quick question regarding Angular forms and Japanese characters. I am using Angular 1.2.17 and a modern Chrome browser on Mac OS X (latest).
I am writing an AngularJS application for the Japanese market. Everything works great when displaying kana etc. on the HTML pages. There are no issues with the web server or database; UTF-8 support is present throughout the application.
However, the AngularJS forms do not read kanji / hiragana / katakana unless the word or sentence starts with a Latin character. Angular's $scope appears unable to recognize that the JP characters have been typed at all unless they are prefixed with a Latin character.
Example:
こにちわ does NOT register when typed into the input field, and hence form validation fails because it thinks a required field is empty.
Whereas:
adsfこにちわ does register and the form can be submitted successfully. End to end, the JP characters are handled correctly and get stored in the DB correctly, so Angular / JS is parsing the UTF-8 text correctly. The issue is likely something to do with how Angular binds the data ($scope) when only JP characters are provided; it doesn't seem to handle this properly by default.
Does anyone know of any HTML or Angular configuration (a required Angular module or parameter, meta tags, etc.) that would coerce the form to behave properly? I have not tested this, but I am pretty certain the issue is not specific to JP characters; anyone working with a non-Latin alphabet has likely experienced the same behaviour.
Must be missing something obvious here.
Thank you for any help at all!
OK, updating this question very late; I actually solved it very shortly after asking.
This turned out to be a timewaster question. Apologies.
But if anyone should come across a similar problem, please check for any regex declarations on the form fields, for instance ng-pattern="/^[a-zA-Z]/".
Yes, this will do what it says and exclude kanji. Surprisingly, it does NOT then put a helpful validation error on the form field, so from a UI perspective it appears that the foreign-language characters simply weren't registered.
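For anyone who hits the same thing, here is a minimal before/after sketch (the form and field names are made up; the Unicode ranges are just one way to allow Japanese input):

    <form name="demoForm">
      <!-- Problematic: the pattern only matches values that start with a Latin letter,
           so こにちわ fails the "pattern" validator and looks like an empty required field -->
      <input type="text" name="latinOnly" ng-model="answerLatin"
             ng-pattern="/^[a-zA-Z]/" required>

      <!-- One possible fix: explicitly allow Japanese characters as well
           (U+3040-U+30FF hiragana/katakana, U+4E00-U+9FFF common CJK ideographs) -->
      <input type="text" name="jpFriendly" ng-model="answerJp"
             ng-pattern="/^[a-zA-Z\u3040-\u30FF\u4E00-\u9FFF]/" required>
    </form>

Dropping ng-pattern entirely and relying on required alone also works if there is no real need to restrict the alphabet.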
Is it possible to customize reCAPTCHA to display words in English only?
Recently, I found that the text could be displayed in another language, such as Hebrew.
Here is an example:
To be honest, it is not possible for ordinary users with a Roman-alphabet keyboard to type such words, and not many know that the image can be refreshed.
AFAIK, via the API you may only customize the interface, not the images.
reCAPTCHA uses scans from real books, so sometimes even Latin-alphabet books contain some non-Latin characters.
But there should be no problem here. reCAPTCHA always displays two words: one that is unknown even to reCAPTCHA (probably the Hebrew word in this case), and another one, which is actually checked.
So the user may misspell the Hebrew word, but it's OK as long as they type the other (Latin) word as expected.
(These are only guesses, but I think that's how this thing works.)
Have you looked at http://code.google.com/apis/recaptcha/docs/customization.html#i18n?
That's the API. It talks about setting the translation, but I've never used it, so I'm not 100% sure whether it can do what you want.
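For what it's worth, the customization described there is just a small JavaScript object defined before the widget loads (a sketch of the documented reCAPTCHA v1 options; the values are illustrative). Note that lang only translates the widget's interface; it does not filter which words appear in the image:

    <script type="text/javascript">
      var RecaptchaOptions = {
        theme : 'clean',   // widget appearance
        lang  : 'en'       // interface translation only
      };
    </script>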
Due to where reCAPTCHA gets its captcha strings (text scanned from books), it could very well be limited to languages that use the Latin alphabet.
I bet that "Reviled" is the challenge word (the one scanned from the book) and the other is the test word (the one it uses to verify whether the person who typed the challenge word is actually typing something legitimate).