How do you translate Sakai tool names and descriptions?

The names of several Sakai tools always appear in English even if I have set the Java default locale to Russian.
I see this problem with the following tools in a new Sakai 10 build: Roster and Sign-up.
How do I translate these tool names and descriptions?

Typically, strings are collected into .properties files in the resource bundle of a Sakai tool. The strings in these files must be carefully translated into the new language, and the translated files are named using language codes: pt_BR for Brazilian Portuguese, for example (Java resource bundles use underscores in file names; the equivalent HTML/BCP 47 tag is pt-BR). There are limits on the size of strings in properties files, but most strings won't exceed this limit.
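For example (the bundle and key names here are hypothetical; check the actual tool's resources directory for the real ones), a tool whose strings live in roster.properties would pick up a Russian translation from a sibling roster_ru.properties automatically:

import java.util.Locale;
import java.util.ResourceBundle;

public class BundleDemo {
    public static void main(String[] args) {
        // Assumes two files on the classpath (names are hypothetical):
        //   roster.properties    -> tool.title=Roster
        //   roster_ru.properties -> tool.title=<Russian translation>
        // Note: before Java 9, .properties files are read as ISO-8859-1,
        // so Cyrillic text must be escaped as \uXXXX sequences (native2ascii).
        ResourceBundle bundle = ResourceBundle.getBundle("roster", new Locale("ru"));
        // Falls back to roster.properties when no Russian file exists.
        System.out.println(bundle.getString("tool.title"));
    }
}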

Related

Where can I get a Font-family to language pair map for Microsoft Word

I am programmatically generating an MS Word 2011 bilingual file (containing text from two languages) using docx4j. My plan is to set the font family of the text based on the language of the text. E.g., when a Latin and an Indian language are passed, all text containing English will have 'Times New Roman' and Hindi will have 'Devanagari' as their font types.
MS Word documentation doesn't have any information on this. Any help finding a list of all prominent languages MS Word supports, and their corresponding font families, would be appreciated.
The starting point is the rFonts element.
As it says:
This element specifies the fonts which shall be used to display the
text contents of this run. Within a single run, there may be up to
four types of content present which shall each be allowed to use a
unique font:
• ASCII
• High ANSI
• Complex Script
• East Asian
The use of each of these fonts shall be determined by the Unicode
character values of the run content, unless manually overridden via
use of the cs element
For further commentary, and the actual algorithm used by docx4j (in its PDF output), which aims to mimic Word, see RunFontSelector.
To simplify a bit, you need to work out which of the 4 attributes Word would use for your Hindi (from its Unicode character values), then set that attribute to the font you want.
You can set the attribute to an actual font name, or use a theme reference (see the RunFontSelector code for how that works).
If I were you, I'd create a docx in Word which is set up as you like, then look at its underlying XML. If it uses theme references in the font attributes, you can either use the docx you created as a template for your docx4j work, or you can manually 'resolve' the references and replace them with the actual font names.
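As a minimal docx4j sketch of that last step, under assumptions rather than as the definitive recipe: the font names below are placeholders ('Mangal' is just one common Devanagari font; substitute whatever you settled on when inspecting Word's XML), and Devanagari is handled by Word as Complex Script, so it goes in the cs slot:

import org.docx4j.wml.ObjectFactory;
import org.docx4j.wml.RFonts;
import org.docx4j.wml.RPr;

public class RunFontSketch {
    // Builds run properties that map each content type to a font.
    static RPr bilingualRunProperties() {
        ObjectFactory factory = new ObjectFactory();
        RFonts rFonts = factory.createRFonts();
        rFonts.setAscii("Times New Roman"); // ASCII range
        rFonts.setHAnsi("Times New Roman"); // High ANSI range
        rFonts.setCs("Mangal");             // Complex Script slot (covers Devanagari)
        RPr rPr = factory.createRPr();
        rPr.setRFonts(rFonts);
        return rPr; // attach to a run with run.setRPr(...)
    }
}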
If you want to programmatically reproduce what Word has created for you, you can upload your docx to the docx4j webapp to generate suitable code.
Finally, note that the fonts need to be available on the computer opening the docx (unless the fonts are embedded in the docx). If they aren't, another font may be substituted.

Displaying Chinese characters on a form from an INI File

My plugin reads control caption text from an INI file (an ANSI file containing UTF-8 encoded text) in order to display multiple languages. The key point is that it is a plugin: I have no control over, nor the ability to change, this INI file's format or file type.
They are currently being read into my plugin with TINIFile.ReadString and stored as a string. I can modify this (data type, read method, etc) as needed.
The main application reads from its own application language files that are UCS-2 Little Endian encoded as a TXT file. These display fine when the language is changed, even when the Windows OS is kept in English (in other words no OS locale changes need to be made for the application to switch display languages).
My plugin's form cannot display Asian characters (Chinese, Japanese, Korean, etc). English language is fine.
I have tried various fonts and various combinations of AnsiString, String, etc. What am I missing to be able to display Asian characters on the form? I have not found an existing question that covers specifically how my language text is being read into the plugin.
If the .INI file reader does not interpret the contents of the values, and passes all values through transparently, then you need to decode the raw UTF-8 bytes into a properly encoded Unicode string yourself.
There is a similar question at Delphi 2010: how do I convert a UTF8-encoded PAnsiChar to a UnicodeString? that explains how to do the conversion. You may need to extract the contents into a RawByteString to avoid the implicit conversions.
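The idea is language-independent: you have raw bytes that are valid UTF-8, and you must decode them as UTF-8 instead of letting the runtime assume the ANSI code page. A sketch of just that decode step, in Java for illustration (the Delphi equivalent is in the linked question):

import java.nio.charset.StandardCharsets;

public class DecodeSketch {
    public static void main(String[] args) {
        // Pretend these bytes came straight out of the INI file:
        // the UTF-8 encoding of U+4E2D U+6587 ("Chinese" written in Chinese).
        byte[] raw = {(byte) 0xE4, (byte) 0xB8, (byte) 0xAD,
                      (byte) 0xE6, (byte) 0x96, (byte) 0x87};
        // Wrong: new String(raw) decodes with the platform default charset.
        // Right: decode explicitly as UTF-8.
        String text = new String(raw, StandardCharsets.UTF_8);
        System.out.println(text); // prints: 中文
    }
}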

Dynamically generating Ge'ez unicodes

Hi. If you look at the image above, you will see a set of very weird-looking characters displayed along with some Latin characters. The weird ones are Eritrean characters. They are the characters we use in my country. So, to go straight to the point, I am hoping to create even the simplest possible bit of software, or maybe even a batch file (if possible), to help me make these characters usable on the web and make PCs understand and display them when they are typed, just like Arabic, Hindi, Chinese... characters are used. I think, since the question of 'creating a language' is rare, or because I may not know the correct term to use, when I searched the internet for any tutorial, or even a freelancer, or anything, all I got was... nothing. So I am hoping that, if anyone can give me a step-by-step guide, or even just a clue about how to create this, it would be very helpful.
Thanks.
Your question asks "how to create a language", so I will describe all the pieces that need to be in place for a new language (or, more accurately, writing system). You ask specifically about the Eritrean alphabet, so I will provide specific examples of how that is supported on modern systems, and try to provide pointers for the pieces you are missing. The answer is long, and provides lots of links, to support both explanations.
To work with a script like Ge'ez (also known as Ethiopic, the script used to write Amharic in Ethiopia and Tigrinya in Eritrea) you need a few things. The first is a way to encode the characters; a set of numbers representing each character, that the computer can use to represent the text. Luckily, Unicode has become widespread, and Unicode is designed to be a universal character set that includes all of the world's languages. Unicode 3.0 introduced Ethiopic in the range U+1200-U+137F, and later versions added supplements of more obscure characters in the ranges U+1380-U+139F, U+2D80-U+2DDF and U+AB00-U+AB2F. If you wanted to support a language that Unicode didn't yet support, you would either need to use the private use area and define your own mapping of characters to code points, or submit a proposal to have your script added to Unicode; for example, see the proposal for Ethiopic.
Now, Unicode is just a character set; an abstract mapping between characters and numbers. To actually transmit these characters as a sequence of bytes, you use a character encoding. There are many encodings; some of them, like ASCII and ISO-8859-1 only cover a subset of the full Unicode character set, while others, like UTF-8 and UTF-16, cover the full range. For documents on the web, UTF-8 is the recommended character encoding; you should never use anything else if you can help it. In UTF-8, you can write Ge'ez directly in the document, for example: ኤርትራ. One thing to watch out for is that some programs (especially on Windows) will offer you "Unicode" as an encoding, when they mean UTF-16; you want to make sure to choose UTF-8, as it's more efficient and more compatible with a wider variety of software.
If you are using encodings that don't cover the full range of Unicode, or you don't have a good way to type those characters, and you are writing HTML or XML, you can use numeric character references instead. To do this, you write the Unicode code point of the character you want to refer to between &# and ;. You can write the number in decimal, or in hexadecimal prefixed with an x. For example, ሀ (U+1200) can be written &#4608; or &#x1200; (the semicolon at the end is important; it wasn't working for you in the comments because you were missing it).
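If you want to generate such references programmatically rather than look them up, it is a short loop over the code points; a small sketch in Java:

public class NcrSketch {
    public static void main(String[] args) {
        String geez = "ሀለ"; // ETHIOPIC SYLLABLE HA, ETHIOPIC SYLLABLE LA
        geez.codePoints().forEach(cp ->
            // For U+1200 this prints: &#4608; = &#x1200;
            System.out.printf("&#%d; = &#x%X;%n", cp, cp));
    }
}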
Now that you have a character set, and a way of encoding it, you need a way to display it. Some scripts are easier to display than others. For all scripts, you need a font; a file defining how each character looks. A font contains a collection of glyphs, or drawings of each character. Some scripts, like the Latin alphabet (the alphabet used for English and most European languages), are relatively simple; each character is a separate glyph, and how they are drawn doesn't depend on what characters come before or after (though diacritics and ligatures can make it a little more complicated). Others, like Arabic and the Indic scripts, are more complex: letters join cursively or are reordered and combined, so how they are drawn can depend on the characters near them. These scripts require special rendering support like Uniscribe or DirectWrite on Windows, Pango on Linux, or advanced font technology like Apple Advanced Typography or Graphite.
Luckily, Ge'ez is a fairly simple writing system that doesn't require any specialized rendering support or advanced font systems. Each of the characters is a separate glyph, and it doesn't require any reordering, so a normal OpenType font, displayed with the rendering systems already available on most computers, will do the job. But you still need the font in order to be able to display the characters. To create your own font, you can use FontForge (a free/open source tool), Fontographer, FontLab Studio, or other similar software.
For Ethiopic, you don't need to create your own. There are numerous fonts available that include the Ethiopic characters, but one that I would recommend is Abyssinica SIL from SIL (the Summer Institute of Linguistics), which does a lot of great work for minority languages and writing systems. Their fonts are available under a free license that allows you to use, redistribute, and modify the font, so they are quite flexible and can be used in a wide variety of situations. Windows has shipped with Nyala, which includes Ethiopic characters, since Windows Vista, and Ebrima added support for Ethiopic characters in Windows 8; so people on Windows Vista or later should be able to view Ethiopic characters already. Mac OS X ships with Kefa as of 10.6.
Once you have the font, you will be able to view Ethiopic characters. But other people reading your documents might not have those fonts (if they are using an older version of Windows or Mac OS X, if they didn't install all of the fonts that came with Windows, or the like), in which case the characters will probably show up as boxes or question marks on their machine. You could give those people a redistributable font like Abyssinica SIL, or they could buy a font that includes Ethiopic characters, but that can be inconvenient. For working with word processor documents or plain text, that's probably the best you can do; they will need the font installed on their computer to be able to display the text. If you create a PDF on your computer, it should embed the fonts that it needs to display the text, so creating a PDF can be a convenient way to include uncommon fonts with your document.
On a web page, you can use web fonts to link to a font from your stylesheet, allowing the user's web browser to load that font for that web page. Web fonts are supported all the way back to IE 6, and in recent versions of most other web browsers, so they are actually quite widely supported. Different web browsers support different font file formats (EOT, TTF, OpenType, SVG, and WOFF), and slightly different syntaxes for the CSS (older versions of IE are based on an older draft), so it can be a bit tricky to make a page that is compatible with all browsers. Luckily, people have automated that process. Some web fonts are available online from Google Web Fonts or FontSquirrel, but sadly, I couldn't find any Ethiopic fonts already hosted. However, you can upload a font to FontSquirrel, and it will convert it into all of the major formats, and provide example CSS that will work on all modern browsers. Note that you should only do this with fonts that allow web embedding; not all fonts do. Since Abyssinica SIL is available under the Open Font License, you can use it, and I've run it through FontSquirrel for you; you can see how it works (check out the Glyphs & Languages tab), or download the kit. To use it, just put the font files (.ttf, .eot, .svg, and .woff) on your server in the same directory as your CSS, and include the following in your CSS:
@font-face {
    font-family: 'abyssinica_silregular';
    src: url('abyssinicasil-r.eot');
    src: url('abyssinicasil-r.eot?#iefix') format('embedded-opentype'),
         url('abyssinicasil-r.woff') format('woff'),
         url('abyssinicasil-r.ttf') format('truetype'),
         url('abyssinicasil-r.svg#abyssinica_silregular') format('svg');
    font-weight: normal;
    font-style: normal;
}
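Then reference the declared family in your page styles with a fallback, e.g. body { font-family: 'abyssinica_silregular', serif; } (the family name must match the one declared above).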
Now that you know how to encode Ethiopic, view Ethiopic characters, and share documents containing Ethiopic characters, you are probably going to want to type them into documents. If you are using HTML, you could just type the numeric character reference described above. In other documents, you could just copy and paste the characters from a chart of all of them, like the Wikipedia page. But that would become pretty cumbersome. Depending on your system and settings, you can also use Unicode Hex Input to enter arbitrary Unicode characters, but that is also cumbersome.
To fully support typing a script on your computer, you need a keyboard layout or input method. Some scripts can be typed with a simple keyboard layout, which says which keys correspond to which characters. If a script has more characters than there are keys on the keyboard, Shift and Alt (or Option on the Mac) can be used to map to more characters. Dead keys can also be used to expand the range of characters that you type; dead keys are sequences of two or more keystrokes that produce a single glyph; for example, on Mac OS X, to type "á", you can type Option-E A. To create a keyboard layout on Windows, you can use the Microsoft Keyboard Layout Creator. Mac OS X uses an XML format for keyboard layouts, so you can create one directly, or use Ukelele from SIL to create one more easily. On systems using X11 (like Linux), you can create your own XKB layouts.
If you need more characters than can be supported with modifiers and dead keys, like typing Chinese or Japanese, then you need a full-fledged input method. An input method allows you to run arbitrary code to map what someone types into the text it produces; for example, in a Japanese input method, you may type a phonetic representation of what you are writing, and it will show you a drop-down list of possible characters that match that representation, allowing you to choose the appropriate ones. Windows provides the Input Method Manager for writing input methods, Mac OS X the Input Method Kit, and X11 has a few ways to do it, such as SCIM and iBus.
The standard input method for Ethiopic makes extensive use of dead keys. It looks like the most popular existing input method for Ethiopic is Keyman, which is a commercial input method that works on Mac and Windows, and in addition there's a free variant, KMFL, that works on Linux. SIL has keyboard downloads for this input method; they also have a keyboard layout for Mac OS X which uses dead keys to achieve the same thing. Mac OS X has more extensive dead key support, so it doesn't require an input method to support this form of input, while on Windows you need to use an input method like Keyman to be able to enter input this way. Google has a free input method for Windows, Google Input Tools for Windows, which supports Amharic, and allows you to customize its input schemes; you could try adapting their Amharic support for Tigrinya.
If you just need to support input on a web site, you could do this in JavaScript, by writing an input method in JavaScript that transliterates from what someone types into Ethiopic. I do not know of any existing frameworks for doing this; however, I have found Korean and Japanese input methods implemented in JavaScript. You could take a look at how those are implemented. Upon looking further, I've found that Tavultesoft, who make Keyman, also have KeymanWeb, a JavaScript based input method that you can buy and embed in your site. MediaWiki also has an input method extension Narayam, that includes a JavaScript based input method for MediaWiki based sites like Wikipedia, which includes an experimental Amharic input method. There is also a draft W3C IME API, which helps provide an interface between web apps and native IMEs, as well as JavaScript based IMEs. Given that it's still a draft, I don't know if it is yet supported anywhere.
With all the above (a character set, encoding, fonts, rendering support, and an input method), you will be able to create, share, and view documents in your script. If that's all you need, great; the above will allow you to work with documents in a given script. But for full support for a language on your computer, not just its script or writing system, there are two more pieces that you need: a locale, and your software to be localized (translated and adapted) for your language.
A locale specifies how programs should manipulate text in a given script, language, culture, and/or encoding. There are many common text processing operations that programs do: displaying numbers, displaying dates and times, sorting strings or names, and so on. How these should work can differ based on the language, script, and culture of the person using the program; for instance, in Swedish "ü" is sorted along with "y", while in English and German it's sorted along with "u". Differences may not be based on language: both Mexico and Spain use Spanish, but in Mexico numbers are displayed with . as the decimal separator (1½ is written "1.5"), while in Spain , is used as the decimal separator (1½ is written "1,5"). A locale specifies all of these rules. Because the locale can vary based on language, culture, and sometimes other factors, the language and country are usually used to specify the locale, and other information can be used as well.
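Locale libraries expose these rules directly; this Java sketch shows both of the differences just described (exact results depend on your JDK's locale data, so treat the expected values as approximate):

import java.text.Collator;
import java.text.NumberFormat;
import java.util.Locale;

public class LocaleRulesSketch {
    public static void main(String[] args) {
        // Decimal separator: Mexico vs Spain
        System.out.println(NumberFormat.getInstance(new Locale("es", "MX")).format(1.5)); // 1.5
        System.out.println(NumberFormat.getInstance(new Locale("es", "ES")).format(1.5)); // 1,5
        // Collation: Swedish groups "ü" with "y" (after "v"); German groups it with "u" (before "v")
        Collator sv = Collator.getInstance(new Locale("sv"));
        Collator de = Collator.getInstance(new Locale("de"));
        System.out.println(sv.compare("ü", "v") > 0); // expected: true
        System.out.println(de.compare("ü", "v") < 0); // expected: true
    }
}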
The most widely used standard for naming locales is RFC 4646 (BCP 47). Locales are usually specified as "ln-CC" with the language code ln and country code CC: US English is en-US, British English is en-GB, and French in France is fr-FR. If more information needs to be specified, it can be included. For instance, Serbian can be written with either Latin or Cyrillic, and so Serbian in Serbia can be either sr-Latn-CS or sr-Cyrl-CS. Tigrinya in Eritrea is written ti-ER.
There are a variety of different formats for defining the rules that a particular locale has. Windows uses NLP files, a custom format that can be created with Microsoft Locale Builder. POSIX (Unix/Linux) locales can be created using localedef. Many systems these days are moving towards the Unicode Common Locale Data Repository (CLDR), which specifies a standardized format for locale data as well as a comprehensive database of locales for many of the world's languages. ICU is a library for C and Java (and used by many other environments) for manipulating Unicode text according to Unicode rules and locale data; they have a good browser for the data from the CLDR and their own locale data. For example, take a look at their entry for ti-ER.
Finally, for full support of a language, you need to translate the software itself into that language. There are, of course, many pieces of software, and each one contains many strings that need to be translated. Some software is not designed to be translated; it has not been internationalized. Some software can only be translated by whoever created it; the strings are built into the program and cannot be easily modified by a third party. But it is possible to localize some software, translating it to your language and culture. If the software has already been localized for several other languages and cultures, it is likely to be flexible enough to support a new language, and if it uses formats that are easily modifiable for localization information, it can be modified by third parties.
For instance, applications on Mac OS X store their localization data in separate files within the application bundle. There is a tool called AppleGlot (you need to register for the Mac Developer Program and go to the downloads area to find it) which can help you extract that data, provide a file with all of the strings which need to be translated, and allow you to combine the translations with the application again once you are done. For open source software, such as much of the software available on Linux, you can work with the developers to provide translations. Some software uses gettext for translation strings, which uses the PO file format that you can edit using poedit. Some uses Qt, for which you can use Qt Linguist. Or, for dealing with a wide variety of formats, you can use a commercial offering like Swordfish or Transifex.
Of course, no one person can do all of the above; it takes many people working together to build support for a new language on modern computer systems. This is all intended to be a high-level tour of all of the components that go into language support for a given language, with references that will help you follow up on whichever aspects you would like to work on, as well as demonstrate what already works for Tigrinya and the Ge'ez script.
If they are Unicode characters they should be displayable just like characters of any other language. I googled it and found this, hopefully they're the same ones you're asking about:
የ ዩ ዪ ያ ዬ ይ ዮ
ዸ ዺ ዻ ዼ ዽ ዾ
See? No extra work required to display them on web browsers or other programs.
These are characters from the Unicode Ethiopic set (U+1200..U+137C), encoded in UTF-8:
Line 1:
የ = 0xE1 0x8B 0xA8 = U+12E8 = ETHIOPIC SYLLABLE YA
ዩ = 0xE1 0x8B 0xA9 = U+12E9 = ETHIOPIC SYLLABLE YU
ዪ = 0xE1 0x8B 0xAA = U+12EA = ETHIOPIC SYLLABLE YI
ያ = 0xE1 0x8B 0xAB = U+12EB = ETHIOPIC SYLLABLE YAA
ዬ = 0xE1 0x8B 0xAC = U+12EC = ETHIOPIC SYLLABLE YEE
ይ = 0xE1 0x8B 0xAD = U+12ED = ETHIOPIC SYLLABLE YE
ዮ = 0xE1 0x8B 0xAE = U+12EE = ETHIOPIC SYLLABLE YO
Line 2:
ዸ = 0xE1 0x8B 0xB8 = U+12F8 = ETHIOPIC SYLLABLE DDA
ዺ = 0xE1 0x8B 0xBA = U+12FA = ETHIOPIC SYLLABLE DDI
ዻ = 0xE1 0x8B 0xBB = U+12FB = ETHIOPIC SYLLABLE DDAA
ዼ = 0xE1 0x8B 0xBC = U+12FC = ETHIOPIC SYLLABLE DDEE
ዽ = 0xE1 0x8B 0xBD = U+12FD = ETHIOPIC SYLLABLE DDE
ዾ = 0xE1 0x8B 0xBE = U+12FE = ETHIOPIC SYLLABLE DDO
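A table like this is easy to reproduce in any Unicode-aware language; for example, in Java:

import java.nio.charset.StandardCharsets;

public class EthiopicBytes {
    public static void main(String[] args) {
        for (String s : new String[] {"የ", "ዩ", "ዪ"}) {
            StringBuilder bytes = new StringBuilder();
            for (byte b : s.getBytes(StandardCharsets.UTF_8)) {
                bytes.append(String.format("0x%02X ", b));
            }
            // e.g. የ = 0xE1 0x8B 0xA8 = U+12E8
            System.out.printf("%s = %s= U+%04X%n", s, bytes, s.codePointAt(0));
        }
    }
}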
Using Ethiopian characters on web pages is mostly a matter of fonts these days. (You may also have a problem with entering them conveniently, but this depends on your authoring environment.) People using e.g. Windows 7 have at least one font containing them, but old computers typically lack such fonts. The following fonts contain them (there may be others):
Code 2000, formerly freeware; the author has disappeared, so its status is obscure
Unifont, a free bitmap font
FreeSerif, a free font
Nyala, distributed with some versions of Windows
SunExt-A, a free font
Fixedsys Excelsior, a free bitmap font I suppose (haven’t tested)
I would probably use FreeSerif as a downloadable font, with @font-face.
Just came across the same problem, but there is an easy solution: Google now provides webfonts for many languages, including Ethiopic:
http://www.google.com/fonts/earlyaccess
To write Amharic or Tigrigna in web forms you can simply use the Any Key Firefox add-on https://addons.mozilla.org/en-US/firefox/addon/any-key/ and there is one for Chrome too!
But to create an editor using JavaScript, you can see a site here http://www.lexilogos.com/keyboard/amharic.htm and try to figure out how they implemented it!
You probably want to look at
http://senamirmir.org/
which, unless I am wrong, has done what you want to do.
If you don't like their fonts, SIL Abyssinica should be fine too (but it only includes one writing style).
The layout status will vary from system to system; to target *nix-like systems you need a layout merged into
http://www.freedesktop.org/wiki/Software/XKeyboardConfig/
@Samaya, by now you have probably got the answer you were looking for. But let me add what I think. Based on your original question, I think you are trying to develop a small piece of software which could be selected as a utility (as a feature) and be used to display Geez alphabets without the need to install a separate Geez application. For that, I reckon, the utility would have to be developed in a way that it could be selected as a language feature in an operating system (like Amharic in Windows, for instance). However, your subsequent comments seem to focus more on displaying Geez characters on the web. As many have suggested, we already have that functionality. But if you still want to develop an application for it, I would suggest you keep an array of Unicode values (U+1260 for በ, for instance) and a matching array of transcriptions of your choice from a keyboard ("be" for በ, for instance). Your application would then match the transcriptions as keyboard keys are entered against the Unicode values, to show the right alphabet in Geez; a sketch of this follows below. I am not sure if I fully understood what you are looking for, but I myself, with colleagues, did a project that included this type of work for a particular application. By the way, do you have to install Geez software to view a Tigrigna/Geez transcript-based website? If so, check your browser version.
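A minimal sketch of that matching idea (the Latin key sequences and the three syllables are made up for illustration; a real input method would cover the whole syllabary and do longest-match handling of keystrokes):

import java.util.LinkedHashMap;
import java.util.Map;

public class GeezTranslitSketch {
    // Hypothetical Latin-to-Geez mapping; extend to the full syllabary.
    private static final Map<String, String> SYLLABLES = new LinkedHashMap<>();
    static {
        SYLLABLES.put("be", "\u1260"); // በ ETHIOPIC SYLLABLE BA
        SYLLABLES.put("bu", "\u1261"); // ቡ ETHIOPIC SYLLABLE BU
        SYLLABLES.put("bi", "\u1262"); // ቢ ETHIOPIC SYLLABLE BI
    }

    static String transliterate(String latin) {
        String out = latin;
        for (Map.Entry<String, String> e : SYLLABLES.entrySet()) {
            out = out.replace(e.getKey(), e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(transliterate("bebu")); // በቡ
    }
}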

Website localization for multibyte languages

I have started to code a multi-language feature for a medium-sized website with a lot of hardcoded text. As the website is supposed to be translated into Japanese and Korean (multibyte character sets), I am considering the following:
If I use string externalization, do the strings for Japanese or Korean need to be in escaped Unicode form within the locale file (i.e. &#21488;&#21271; instead of 台北 as the string value)?
Would it make more sense to store the localization in a DB (e.g. MySQL) and retrieve the respective values via a localization function in PHP?
Your thought input is much appreciated.
Best regards
$0.02 from someone who has some experience with i18n...
Keep your translations in human-readable form, as it will likely be translators and not coders managing these resources.
If this text (hard-coded, you say) is not subject to frequent change, then you may wish to store these resources as files that you read in at runtime.
If this text is subject to frequent change, then you may wish to explore other alternatives for storing resources, such as databases or in-memory key-value stores.
Depending upon your requirements, you may want to consider a mixture of the above.
But I strongly suggest that you avoid mixing code (the HTML character entities) with your translation resources. Most translators will not understand what they mean and may break them when they are translating. And on the flip-side, a programmer may not understand how to insert code or formatting into the translation resources properly, unless they actually understand that language.
tl;dr
- use UTF-8
- don't mix any code/formatting into the translations themselves
- how you store the translations depends upon your requirements
I doubt that string externalization will be your biggest problem. But let me give you some advice.
String externalization
Of course you would need to separate translatable strings from the code. I would recommend storing translations in plain-text, UTF-8 encoded files containing key-value pairs:
some.key=some translation
Of course you would need to write a helper script to resolve this at runtime. The script would need to detect the end user's language.
Language detection
Web browsers are nice enough to send an Accept-Language header each time they send a request. What you need to do is read the contents of this header and check whether you support any of the languages the user has listed. If so, read the resource file (as defined above) and return strings for the given language; return your default language otherwise. The code example below will give you the most desired language (which is not necessarily one you support):
<?php
$locale = Locale::acceptFromHttp($_SERVER['HTTP_ACCEPT_LANGUAGE']);
echo $locale;
?>
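Note that Locale::acceptFromHttp() simply returns the header's best locale, which you may not support; the same intl extension also provides Locale::lookup(), which matches the request against a list of locales you do support.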
This is still not the biggest of your challenges.
Styles and style sheets
The real problem with multilingual web sites or web applications is styles. People tend to put style definitions in-line, which is problematic to say the least. Also, designers tend to think that Arial is the best font for the entire Universe, and that emphasis always has to come with a bold font. The only problem is, the font might be unreadable under some circumstances.
I must admit, I don't know why it happens, but most of the time web browsers tend to ignore the bold attribute for Asian scripts (which is good), but sometimes they do not, and it can become a major challenge for end users if your font definition is, say, font-family:Arial; font-size:10px;.
The other problem could be colors. Depending on your web site design, some colors used might be inappropriate for target customers. That is because we all tend to assign meaning to colors based on our cultural background.
Images containing localizable text can also give you a headache; you would need to either externalize such texts (and write them down just like any other HTML element), or prepare a multilingual resource structure (i.e. put all images into directories named after the language code: "en", "ja", "ko").
The real challenge, however, is hard-coded formatting tags like <b>, <i>, <u>, <strong>, etc. Nobody should use them nowadays; style classes should be used instead, but common practice is different. You would probably need to replace them with style classes; each element can have more than one style class, which to my surprise is not common knowledge (for example <p class="main boldText">).
OK, once you have your styles externalized, you would probably be forced to implement some sort of CSS localization mechanism. This is needed in light of what I wrote above. The easiest way to do it is to create a directory structure similar to the one I mentioned before - "en" for English base CSS files, "ja" for Japanese and "ko" for Korean - so each language has its own, separate set of CSS files. This is similar to UI skins, except in this case the user won't be able to choose the skin; you will decide which CSS to present to them - you would be detecting the language anyway.
As for in-line style definitions (<p style="whatever">), after you define the CSS L10n mechanism, you can override any style by forcing it with the !important keyword. That is, unless somebody in their very wrong mind put this keyword in an in-line style definition.
Concatenations
Well, this is your biggest challenge. Even people who understand the need for string externalization tend to concatenate strings like this:
$result = $label . ": " . $product;
$message = $your_basket_is . " " . $basket_status . ".";
This poses a serious problem for internationalization (and, if it is not resolved, for localization as well). That is because the order of words in a sentence tends to be different after translating the text into a different language (this especially applies to Korean). Also, I showed you hard-coded punctuation, which is not necessarily correct for Asian languages. That is what I have to go through on a daily basis :/
What you would probably need to do is remove such concatenations, or use some means of message formatting. The PHP example (taken directly from the web page I am referencing) would be:
<?php
$fmt = new MessageFormatter("en_US", "{0,number,integer} monkeys on {1,number,integer} trees make {2,number} monkeys per tree");
echo $fmt->format(array(4560, 123, 4560/123));
$fmt = new MessageFormatter("de", "{0,number,integer} Affen auf {1,number,integer} Bäumen sind {2,number} Affen pro Baum");
echo $fmt->format(array(4560, 123, 4560/123));
?>
As you can see in this example, numbers are also formatted to match the locale style. This leads us to:
Locale aware formatting
Dates, times, numbers, currencies and other similar information need to be formatted according to the user-detected locale. There is a slight difference here: you should attempt to do that even if you do not support related language resources (do not have translations). Of course, for the currency symbol you would use whatever your real currency is, not the user's default, but the format should respect the end user's cultural background.
Summary
I have just presented you with a short introduction to multilingual web site design with a focus on the Japanese and Korean target markets. If at some point you need to support Simplified Chinese as well, support for the GB18030 encoding will probably be needed too. That would be very challenging...
You do not want to store all your text as HTML entities. It'll drive you mad. The only reason to do this is if you need to serve your document in an ASCII encoding and cannot embed the characters directly. But in this day and age there's no reason for that; serve your document as UTF-8 and write and store your contents in UTF-8 and be done with it.
Whether or not to store translations in the database depends on many factors, including performance, caching, whether you need to be able to search for the text, whether the text should be editable by non-programmers etc. Usually .mo/.po translation files with gettext are a good way to go unless proven otherwise.

How to install a platform dictionary in Eclipse

For spell-checking purposes I would like to install an additional "platform dictionary" in my Eclipse IDE.
You can see the list of platform dictionaries installed under Window > Preferences > General > Editors > Text Editors > Spelling, in the field "Platform dictionary". In my Helios Service Release 1 there is only English (UK and USA). I would like to add the language of my country, so I can write comments in my language and have spell checking. The Eclipse help doesn't explain how.
If you can't find your language's word list, you can generate one using aspell:
aspell --lang=pl dump master | aspell --lang=pl expand | tr ' ' '\n' > pl.dict
On Ubuntu, aspell generates the list in UTF-8; on other systems you can add an encoding option:
--encoding=utf-8
I am not sure you can add a "Platform dictionary", so that leaves you with a "user defined" one:
Eclipse supports a standard one-word-per-line format for the 'dictionary' file.
You can find several of those at Kevin's Word List on SourceForge.net, including links to other sites.
If you can't find a good wordlist and can't run aspell, you can also get wordlists from Debian. On Windows, I used the Swiss German wordlist. Click "all", pick a mirror, download the .deb file, use 7-Zip or similar to open it, open the data.tar inside, and find the file you are looking for. In my case it was /usr/share/dict/swiss.
Thank you @VonC, @Konrad Nowicki and @Alex Schröder for the earlier answers. I didn't find the other answers fully satisfying, so I wanted to write my own answer. Your question:
How to install a platform dictionary in Eclipse?
For English:
Since you mentioned that US and UK English is included, no need to explain.
For non-English languages, i.e. languages with non-ASCII characters like åäöü (the example here is Swedish, tested and working):
Download a text file of the words. My method for finding one: I googled swedish word list txt (just change swedish to whatever language you're looking for) and found this GitHub repo with a Swedish dictionary (the link worked 2018-09-25): https://github.com/martinlindhe/wordlist_swedish. As long as your dictionary file is encoded as UTF-8 and uses Unix (LF) end-of-line characters, it should be fine. If it isn't UTF-8, you'll have to convert the åäöü characters to UTF-8, for example with Notepad++. If the dictionary has other EOL characters, Windows (CR LF) or Macintosh (CR), convert them to Unix (LF), again for example with Notepad++. Then open a text editor such as Notepad++ and append the new dictionary to your custom dictionary where it is located, often %userprofile%/eclipse/dictionary.txt or ~/eclipse/dictionary.txt depending on where you installed Eclipse. Restart Eclipse and it should work.
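If you would rather script the encoding and EOL cleanup than do it in an editor, the conversion is small; a sketch in Java 11+ (the file names are placeholders, and the input is assumed to already be UTF-8; pass a different charset to readString if it isn't):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class NormalizeWordlist {
    public static void main(String[] args) throws Exception {
        Path in = Path.of("swedish-raw.txt");      // placeholder input file
        Path out = Path.of("swedish-utf8-lf.txt"); // placeholder output file
        String text = Files.readString(in, StandardCharsets.UTF_8);
        // Normalize Windows (CR LF) and Macintosh (CR) line endings to Unix (LF).
        text = text.replace("\r\n", "\n").replace("\r", "\n");
        Files.writeString(out, text, StandardCharsets.UTF_8);
    }
}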