Markdown with Unicode bullets / itemization marks

I use a keyboard layout (Neo 2) which lets me directly enter many Unicode characters – for example • (U+2022, “bullet”), – (U+2013, “en dash”) and — (U+2014, “em dash”).
I’d like to use these characters in Markdown files. Raw MD files would then already look halfway marked-up, and I’m very much used to typing those characters. Are there Markdown dialects which support this?

No, there are currently (2019) no Markdown dialects that support Unicode list-item markers, such as the "bullet": •.
The reference for that claim is Babelmark, a GitHub-hosted tool for comparing the output of various Markdown implementations. As of this writing, the Markdown source
• item 1
• item 2
is rendered as a regular paragraph of text, not a list, by all of the 35 Markdown implementations the tool incorporates — which, arguably, are all implementations of practical relevance. The on-screen output of the above would typically look like this:
• item 1 • item 2
Of note in this particular context is that Markdown inventor John Gruber considers the lack of support for the actual bullet • to denote bulleted lists a “glaring omission”. In a blog post from 2017 he goes on to explain that, when he worked on the first Markdown parser back in 2003, he would have included Unicode syntax markers, first and foremost • for lists, had it not been for character-encoding mismatches, a frequent real-world issue at the time, which is why he restricted the special characters to the 7-bit ASCII range.
CommonMark, the post-hoc standardization effort for Markdown syntax, has not included •. A lengthy and sometimes heated discussion on this very topic has been running on its message board since 2014, with contributions from some of CommonMark's most prominent proponents. The last word, however, may yet be spoken, as the finalized 1.0 specification still awaits publication.
For what it's worth, reStructuredText, a text-file format with a minimal-markup design philosophy very similar to Markdown's, does support Unicode list markers (•, but also ‣ and ⁃); that support was added in 2006.
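If you want to use • in Markdown today, one practical workaround (a sketch of mine, not a feature of any dialect) is to normalize Unicode bullets to standard Markdown markers in a preprocessing step, before the parser sees the file:

import re

def normalize_bullets(markdown_text):
    # Rewrite "• item" at the start of a line into the standard "- item",
    # leaving any • used inside running text alone.
    return re.sub(r"(?m)^([ \t]*)•[ \t]+", r"\1- ", markdown_text)

print(normalize_bullets("• item 1\n• item 2\n"))  # "- item 1\n- item 2\n"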

Related

Is the Demotic script represented in Unicode?

Does Unicode have signs for the Demotic script? Is there any font containing such signs?
Unicode has assigned 1072 characters for Egyptian hieroglyphs and for Hieratic (which is the parent system of Demotic and the cursive version of hieroglyphs), so I wonder whether there is any Unicode support for Demotic too.
Although Demotic is still not encoded, there are already texts encoded in rich-text documents (using specific fonts).
They are based on the Coptic script, with a few additions for the diacritical yodh on some letters; this works with some ligatures and slightly modified letter forms. It is not purely a "hack", because the Coptic script was in fact developed from Demotic (in its cursive form used in Thebes), with the simplified forms from Greek adapted for the Late Ancient Egyptian language. That language was transcribed, in the same period and the same area of Thebes, with BOTH the Demotic and Coptic scripts, while the Demotic script also coexisted with Hieratic, i.e. the highly simplified cursive form of the complex hieroglyphs.
You can see this here:
https://ucbclassics.dreamhosters.com/djm/demotic.html
This work is the working base for a future encoding of Demotic in Unicode, but many researchers can already use this font (and the keyboard input layout, which is based on Classical Greek with a few modifications) on macOS, Windows and now Linux as well, within several office word processors, and now also on the web (provided the web browser supports OpenType features and webfonts). This still does not allow pure plain text, but it works: using the Coptic encoding with just a few additional generic diacritics, near-plain-text is possible and even directly readable by Egyptologists.
So the good question is: will Demotic be encoded separately, or will Unicode just consider unifying it with Coptic, with the few additions needed? Unicode already chose to unify Egyptian hieroglyphs with Egyptian Hieratic, but this is quite controversial, as Hieratic is very far from the hieroglyphs (currently encoded in their monumental form carved on stone, which was used with lots of variants over two millennia) and much nearer to Demotic.
So maybe Demotic will be encoded separately by Unicode (to avoid breaking the modern Coptic script still used today) but unified with Hieratic (which would be separated from the hieroglyphs). This would create a Unicode "Hieratic-Demotic" script, i.e. "Late Egyptian Cursive" (not to be confused with "Egyptian Cursive Hieroglyphs", which is extremely similar to the older monumental hieroglyphs but was developed to be painted on papyrus instead of being carved in monumental stone, so its forms are much less angular and somewhat simplified by the speed of drawing with a brush, with lower brush precision and diffusion of ink on papyrus). For now nothing is decided. But Egyptologists already have their tools to create documents easily and discuss them... using a rich-text form.
There are other existing fonts, but most of them are not free. They initially required proprietary rich-text formats, but this is no longer the case with free office suites like LibreOffice and OpenOffice (which can also process MS Office formats, all supporting the ODF format as well as the old MS formats). Note that ODF is easily convertible to HTML+CSS, which makes publication on the web possible too.
Note that for Egyptian Demotic you need far fewer characters than for Egyptian hieroglyphs and Hieratic: using the Coptic set (mostly based on Greek) with a few diacritics (far fewer than those used in Classical Greek!) along with rich text and specific font designs is still the best choice today.
But the most important problem with borrowing the Coptic script for writing Demotic is directionality (note that this is also a problem within the Greek script for writing Ancient Greek...).
Also, Unicode still does not support boustrophedon correctly, and does not provide a suitable layout model for the hieroglyphs that are encoded, at the level at which Unicode adapted its model for Hangul square composition and for the vertical rendering of sinographic scripts. This will also be a problem for other scripts still to be encoded (e.g. SignWriting, or chemical, mathematical and musical notations; all of them have modern uses but require specific layouts that are still not representable in plain text with the Unicode encoding alone).
So you can't do everything you want with Unicode plain text alone; you need rich-text formats. A solution may be found with HTML+CSS, supported by OpenType, long before Unicode decides to do something, or resigns itself to doing nothing for a long time (most modern scripts are already encoded, and fewer companies are interested in paying for the development of paleographic scripts, and in paying their membership to add them and work on them), or before there are new proposals to encode complex text layouts better than today's basic directionality (plus the syllabic square layout of Hangul, and the Arabic-like and Brahmic conjoining layouts, all of them fully supported by their specific properties)!
Another source you may look at for a candidate font is
http://paleography.atspace.com/
which introduces this set of 279 paleographic fonts for 30 old scripts, available at:
https://download.cnet.com/Paleofonts/3000-2190_4-10547504.html
or individually at:
https://github.com/reclaimed/paleofonts
(which is now where all the archived fonts reside).
However, this huge set contains only one "Demotic" font (in fact for the "Meroitic Demotic" script, not Egyptian Demotic itself; it has partial coverage, with just mappings on top of ASCII Latin letters and without the needed diacritics and necessary ligatures). And this legacy font set does not have the quality we expect today: no OpenType features (only TrueType), missing or incomplete Unicode mappings, partial coverage, poor metrics, no hinting. They are just good enough to replace fallback fonts that would otherwise display mojibake in Unicode, or for legacy texts transliterated to other input scripts.
So many of these paleographic scripts will have to be developed by community efforts (e.g. within the open-source Noto project, with the help of Unicode contributors and other open-source developers to work on them and to find and discuss the rare resources used by paleographers). You'll have to be very patient, or try to develop your own community of interest with the rare specialist linguists spread across universities around the world on very small budgets, who often have poor knowledge of the technical requirements for developing modern fonts.
However, there is now a renewal of effort, because font-development tools are easier and more reliable to use, and just a few people with good contacts (in various working languages) could seriously help develop this support, which many linguists and underfunded students would appreciate for their work to revive this important human heritage. Egyptian Demotic, with its 2600 years of active use and its real importance for the many cultures with which it was in contact, is really a big gap we should fill. Unicode is just waiting for proposals, active experimentation and talks (which should also involve other standards bodies, like the W3C for CSS Text and OpenType for font designs, plus various OS vendors). Of course, if this development requires encoding additional characters in the UCS for plain-text usage of these scripts, the ISO working groups will be involved too and will need to agree with Unicode (but we know that this can take many years after new scripts are proposed for encoding, or after any existing script is disunified).

What are valid uses of U+0080 through U+009F?

I'm making a virtual computer with a custom font and programming environment (Mini Micro), all Unicode based. I have need for a few custom glyphs in my environment. I know about the Private Use Areas, but I'm wondering about the "control" code points at U+0080 through U+009F. I can't find any documentation on what these points are for beyond "control".
Would it be a gross abuse of Unicode to tuck a few of my custom glyphs in there? What would be a proper use of them?
Wikipedia lists their meanings. You get two of them for your own use: U+0091 and U+0092 ("private use one" and "private use two").
The 0x80–0x9F range you refer to is generally called the C1 control characters. Like other control codes, the C1s are for code extension, and by their very nature some are generally left open for further expansion and thus have only vague standardization.
The original and most comprehensive reference is probably ECMA-48 - up to the Fifth Edition in June 1991. (The link takes you to a free download in PDF format.)
For additional glyphs, C1 codes would not be appropriate. In effect, the whole idea of control codes is that they are the special case of non-graphical codes.
Unicode has continued to evolve, with an emoji block that has a lot of "characters" you might not expect. Let's try one: 💎, officially called GEM STONE (U+1F48E). I used this copy/paste website to insert it; you might look to see whether something you can use has been standardized in the emoji code block.
One of the interesting things about the emoji characters is that they are double-wide, even in a fixed-width font.
Microsoft uses them for smart quotes, the euro sign and a few other symbols in its Latin-1 extension, cp1252. As this character encoding is frequently mislabeled as Latin-1, using these code points for other purposes can cause problems, especially as Latin-1 proper is supposed to be code-point equivalent to Unicode. This Wikipedia page gives some history and the meanings of these control characters.
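A quick Python demonstration (mine, not from the answers above) of why the mislabeling matters: the same bytes decode to printable symbols under cp1252, but to invisible C1 controls under real Latin-1:

data = b"\x93quoted\x94 \x80"          # bytes in the 0x80-0x9F range

# cp1252 treats them as graphic characters...
print(data.decode("cp1252"))           # “quoted” €

# ...while true Latin-1 maps each byte one-to-one onto a code point,
# so the same bytes become the invisible C1 controls U+0093, U+0094, U+0080.
print([hex(ord(c)) for c in data.decode("latin-1")])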

Dynamically generating Ge'ez unicodes

Hi. If you look at the image above, you will see a set of very weird-looking characters displayed along with some Latin characters. The weird ones are Eritrean characters. They are the characters we use in my country. So, to go straight to the point, I am hoping to create even the simplest possible bit of software, or maybe even a batch file (if possible), to help me make these characters usable on the web and make PCs understand and display them as they are typed, just like Arabic, Hindi, Chinese... characters are used. I think, since the question of 'creating a language' is rare, or because I may not know the correct term to use, when I searched the internet for any tutorial or even a freelancer, all I got was... nothing. So I am hoping that if anyone can give me a step-by-step guide, or even just a clue about how to create this, it would be very helpful.
Thanks.
Your question asks "how to create a language", so I will describe all the pieces that need to be in place for a new language (or more accurately, writing system). You ask specifically about the Eritrean alphabet, so I will provide specific examples of how that is supported on modern systems, and try to provide you pointers for the pieces you are missing. The answer is long, and provides lots of links, to support the two explanations.
To work with a script like Ge'ez (also known as Ethiopic, the script used to write Amharic in Ethiopia and Tigrinya in Eritrea) you need a few things. The first is a way to encode the characters: a set of numbers representing each character that the computer can use to represent the text. Luckily, Unicode has become widespread, and Unicode is designed to be a universal character set that includes all of the world's languages. Unicode 3.0 introduced Ethiopic in the range U+1200-U+137F, and later versions added supplements of more obscure characters in the ranges U+1380-U+139F, U+2D80-U+2DDF and U+AB00-U+AB2F. If you wanted to support a language that Unicode didn't yet support, you would either need to use the private use area and define your own mapping of characters to code points, or submit a proposal to have your script added to Unicode; for example, see the proposal for Ethiopic.
Now, Unicode is just a character set; an abstract mapping between characters and numbers. To actually transmit these characters as a sequence of bytes, you use a character encoding. There are many encodings; some of them, like ASCII and ISO-8859-1 only cover a subset of the full Unicode character set, while others, like UTF-8 and UTF-16, cover the full range. For documents on the web, UTF-8 is the recommended character encoding; you should never use anything else if you can help it. In UTF-8, you can write Ge'ez directly in the document, for example: ኤርትራ. One thing to watch out for is that some programs (especially on Windows) will offer you "Unicode" as an encoding, when they mean UTF-16; you want to make sure to choose UTF-8, as it's more efficient and more compatible with a wider variety of software.
If you are using encodings that don't cover the full range of Unicode, or you don't have a good way to type those characters, and you are writing HTML or XML, you can use numeric character references instead. To do this, you write the Unicode code point of the character you want to refer to between &# and ;. You can write the number in decimal, or in hexadecimal prefixed with an x. For example, ሀ (U+1200) can be written &#4608; or &#x1200; (the semicolon at the end is important; it wasn't working for you in the comments because you were missing it).
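As a quick illustration (my sketch, not part of the original answer), both reference forms can be generated from a character's code point:

def ncr(ch):
    # Return the decimal and hexadecimal numeric character references for ch.
    cp = ord(ch)
    return f"&#{cp};", f"&#x{cp:X};"

print(ncr("ሀ"))  # ('&#4608;', '&#x1200;')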
Now that you have a character set, and a way of encoding it, you need a way to display it. Some scripts are easier to display than others. For all scripts, you need a font: a file defining how each character looks. A font contains a collection of glyphs, or drawings of each character. Some scripts, like the Latin alphabet (the alphabet used for English and most European languages), are relatively simple: each character is a separate glyph, and how they are drawn doesn't depend on what characters come before or after (though diacritics and ligatures can make it a little more complicated). Others, like Arabic and the Indic scripts, are written in cursive, where letters join to each other, so how they are drawn can depend on the characters near them. These scripts require special rendering support like Uniscribe or DirectWrite on Windows, Pango on Linux, or advanced font technology like Apple Advanced Typography or Graphite.
Luckily, Ge'ez is a fairly simple writing system that doesn't require any specialized rendering support or advanced font systems. Each of the characters is a separate glyph, and it doesn't require any reordering. So a normal OpenType font, displayed with the rendering systems already available on most computers, will do the job. But you still need the font in order to be able to display the characters. To create your own font, you can use FontForge (a free/open-source tool), Fontographer, FontLab Studio, or other similar software.
For Ethiopic, you don't need to create your own. There are numerous fonts available that include the Ethiopic characters, but one that I would recommend is Abyssinica SIL from SIL (the Summer Institute of Linguistics), which does a lot of great work for minority languages and writing systems. Their fonts are available under a free license, that allows you to use the font, redistribute the font, and modify the font, so their fonts are quite flexible and can be used in a wide variety of situations. Windows ships with Nyala, which includes Ethiopic characters, since Windows Vista, and Ebrima, which added support for Ethiopic characters in Windows 8; so people on Windows Vista or later should be able to view Ethiopic characters already. Mac OS X ships with Kefa as of 10.6.
Once you have the font, you will be able to view Ethiopic characters. But other people reading your documents might not have those fonts (if they are using an older version of Windows or Mac OS X, if they didn't install all of the fonts that came with Windows, or the like), in which case the characters will probably show up as boxes or question marks on their machine. You could give those people a redistributable font like Abyssinica SIL, or they could buy a font that includes Ethiopic characters, but that can be inconvenient. For working with word processor documents or plain text, that's probably the best you can do; they will need the font installed on their computer to be able to display the text. If you create a PDF on your computer, it should embed the fonts that it needs to display the text, so creating a PDF can be a convenient way to include uncommon fonts with your document.
On a web page, you can use web fonts to link to a font from your stylesheet, allowing the user's web browser to load that font for that web page. Web fonts are supported all the way back to IE 6, and in recent versions of most other web browsers, so they are actually quite widely supported. Different web browsers support different font file formats (EOT, TTF, OpenType, SVG, and WOFF), and slightly different syntaxes for the CSS (older versions of IE are based on an older draft), so it can be a bit tricky to make a page that is compatible with all browsers. Luckily, people have automated that process. Some web fonts are available online from Google Web Fonts or FontSquirrel, but sadly, I couldn't find any Ethiopic fonts already hosted. However, you can upload a font to FontSquirrel, and it will convert it into all of the major formats, and provide example CSS that will work on all modern browsers. Note that you should only do this with fonts that allow web embedding; not all fonts do. Since Abyssinica SIL is available under the Open Font License, you can use it, and I've run it through FontSquirrel for you; you can see how it works (check out the Glyphs & Languages tab), or download the kit. To use it, just put the font files (.ttf, .eot, .svg, and .woff) on your server in the same directory as your CSS, and include the following in your CSS:
@font-face {
    font-family: 'abyssinica_silregular';
    src: url('abyssinicasil-r.eot');
    src: url('abyssinicasil-r.eot?#iefix') format('embedded-opentype'),
         url('abyssinicasil-r.woff') format('woff'),
         url('abyssinicasil-r.ttf') format('truetype'),
         url('abyssinicasil-r.svg#abyssinica_silregular') format('svg');
    font-weight: normal;
    font-style: normal;
}
Now that you know how to encode Ethiopic, view Ethiopic characters, and share documents containing Ethiopic characters, you are probably going to want to type them into documents. If you are using HTML, you could just type the numeric character reference described above. In other documents, you could just copy and paste the characters from a chart of all of them, like the Wikipedia page. But that would become pretty cumbersome. Depending on your system and settings, you can also use Unicode Hex Input to enter arbitrary Unicode characters, but that is also cumbersome.
To fully support typing a script on your computer, you need a keyboard layout or input method. Some scripts can be typed with a simple keyboard layout, which says which keys correspond to which characters. If a script has more characters than there are keys on the keyboard, Shift and Alt (or Option on the Mac) can be used to map to more characters. Dead keys can also be used to expand the range of characters that you type; dead keys are sequences of two or more keystrokes that produce a single glyph; for example, on Mac OS X, to type "á", you can type Option-E A. To create a keyboard layout on Windows, you can use the Microsoft Keyboard Layout Creator. Mac OS X uses an XML format for keyboard layouts, so you can create one directly, or use Ukelele from SIL to create one more easily. On systems using X11 (like Linux), you can create your own XKB layouts.
If you need more characters than can be supported with modifiers and dead keys, as when typing Chinese or Japanese, then you need a full-fledged input method. An input method allows you to run arbitrary code to map what someone types into the text it produces; for example, in a Japanese input method, you may type a phonetic representation of what you are writing, and it will show you a drop-down list of possible characters that match that representation, allowing you to choose the appropriate ones. Windows provides the Input Method Manager for writing input methods, Mac OS X the Input Method Kit, and X11 has a few ways to do it, such as SCIM and iBus.
The standard input method for Ethiopic makes extensive use of dead keys. It looks like the most popular existing input method for Ethiopic is Keyman, which is a commercial input method that works on Mac and Windows, and in addition there's a free variant, KMFL, that works on Linux. SIL has keyboard downloads for this input method; they also have a keyboard layout for Mac OS X which uses dead keys to achieve the same thing. Mac OS X has more extensive dead key support, so it doesn't require an input method to support this form of input, while on Windows you need to use an input method like Keyman to be able to enter input this way. Google has a free input method for Windows, Google Input Tools for Windows, which supports Amharic, and allows you to customize its input schemes; you could try adapting their Amharic support for Tigrinya.
If you just need to support input on a web site, you could do this in JavaScript, by writing an input method in JavaScript that transliterates from what someone types into Ethiopic. I do not know of any existing frameworks for doing this; however, I have found Korean and Japanese input methods implemented in JavaScript. You could take a look at how those are implemented. Upon looking further, I've found that Tavultesoft, who make Keyman, also have KeymanWeb, a JavaScript based input method that you can buy and embed in your site. MediaWiki also has an input method extension Narayam, that includes a JavaScript based input method for MediaWiki based sites like Wikipedia, which includes an experimental Amharic input method. There is also a draft W3C IME API, which helps provide an interface between web apps and native IMEs, as well as JavaScript based IMEs. Given that it's still a draft, I don't know if it is yet supported anywhere.
With all the above (a character set, encoding, fonts, rendering support, and an input method), you will be able to create, share, and view documents in your script. If that's all you need, great; the above will allow you to work with documents in a given script. But for full support for a language on your computer, not just its script or writing system, there are two more pieces that you need: a locale, and your software to be localized (translated and adapted) for your language.
A locale specifies how programs should manipulate text in a given script, language, culture, and/or encoding. There are many common text processing operations that programs do: displaying numbers, displaying dates and times, sorting strings or names, and so on. How these should work can differ based on the language, script, and culture of the person using the program; for instance, in Swedish "ü" is sorted along with "y", while in English and German it's sorted along with "u". Differences may not be based on language: both Mexico and Spain use Spanish, but in Mexico numbers are displayed with . as the decimal separator (1½ is written "1.5"), while in Spain , is used as the decimal separator (1½ is written "1,5"). A locale specifies all of these rules. Because the locale can vary based on language, culture, and sometimes other factors, the language and country are usually used to specify the locale, and other information can be used as well.
The most widely used standard for naming locales is RFC 4646 (BCP 47). Locales are usually specified as "ln-CC", with the language code ln and country code CC: US English is en-US, British English is en-GB, and French in France is fr-FR. If more information needs to be specified, it can be included. For instance, Serbian can be written with either Latin or Cyrillic, and so Serbian in Serbia can be either sr-Latn-CS or sr-Cyrl-CS. Tigrinya in Eritrea is written ti-ER.
There are a variety of different formats for defining the rules that a particular locale has. Windows uses NLP files, a custom format that can be created with Microsoft Locale Builder. POSIX (Unix/Linux) locales can be created using localedef. Many systems these days are moving towards the Unicode Common Locale Data Registry, which specifies a standardized format for locale data as well as a comprehensive database of locales for many of the worlds languages. ICU is a library for C and Java (and used by many other environments) for manipulating Unicode text according to Unicode rules and locale data; they have a good browser for the data from the CLDR and their own locale data. For example, take a look at their entry for ti-ER.
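To make locale data concrete, here is a small Python sketch using the third-party Babel library (my example, not the answer's; Babel draws its rules from the CLDR mentioned above):

import datetime
from babel.dates import format_date
from babel.numbers import format_decimal

# The same value, formatted according to different CLDR locales.
print(format_decimal(1.5, locale="en_US"))  # 1.5
print(format_decimal(1.5, locale="es_ES"))  # 1,5

# Date formatting follows the locale's conventions too.
print(format_date(datetime.date(2013, 1, 15), locale="ti_ER"))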
Finally, for full support of a language, you need to translate the software itself into that language. There are, of course, many pieces of software, and each one contains many strings that need to be translated. Some software is not designed to be translated; it has not been internationalized. Some software can only be translated by whoever created it; the strings are built into the program and cannot be easily modified by a third party. But it is possible to localize some software, translating it to your language and culture. If the software has already been localized for several other languages and cultures, it is likely to be flexible enough to support a new language, and if it uses formats that are easily modifiable for localization information, it can be modified by third parties.
For instance, applications on Mac OS X store their localization data in separate files within the application bundle. There is a tool called AppleGlot (you need to register for the Mac Developer Program and go to the downloads area to find it) which can help you extract that data, provide a file with all of the strings which need to be translated, and allow you to combine that with the application again once you have. For open source software, such as much software available on Linux, you can work with the developers to provide translation. Some software uses gettext for translation strings, which use the PO file format that you can edit using poedit. Some uses Qt, for which you can use Qt Linguist. Or for dealing with a wide variety of formats, you can use a commercial offering like Swordfish or Transifex.
Of course, no one person can do all of the above; it takes many people working together to build support for a new language on modern computer systems. This is all intended to be a high-level tour of all of the components that go into language support for a given language, with references that will help you follow up on whichever aspects you would like to work on, as well as demonstrate what already works for Tigrinya and the Ge'ez script.
If they are Unicode characters they should be displayable just like characters of any other language. I googled it and found this, hopefully they're the same ones you're asking about:
የ ዩ ዪ ያ ዬ ይ ዮ
ዸ ዺ ዻ ዼ ዽ ዾ
See? No extra work required to display them on web browsers or other programs.
These are characters from the Unicode Ethiopic set (U+1200..U+137C), encoded in UTF-8:
Line 1:
የ = 0xE1 0x8B 0xA8 = U+12E8 = ETHIOPIC SYLLABLE YA
ዩ = 0xE1 0x8B 0xA9 = U+12E9 = ETHIOPIC SYLLABLE YU
ዪ = 0xE1 0x8B 0xAA = U+12EA = ETHIOPIC SYLLABLE YI
ያ = 0xE1 0x8B 0xAB = U+12EB = ETHIOPIC SYLLABLE YAA
ዬ = 0xE1 0x8B 0xAC = U+12EC = ETHIOPIC SYLLABLE YEE
ይ = 0xE1 0x8B 0xAD = U+12ED = ETHIOPIC SYLLABLE YE
ዮ = 0xE1 0x8B 0xAE = U+12EE = ETHIOPIC SYLLABLE YO
Line 2:
ዸ = 0xE1 0x8B 0xB8 = U+12F8 = ETHIOPIC SYLLABLE DDA
ዺ = 0xE1 0x8B 0xBA = U+12FA = ETHIOPIC SYLLABLE DDI
ዻ = 0xE1 0x8B 0xBB = U+12FB = ETHIOPIC SYLLABLE DDAA
ዼ = 0xE1 0x8B 0xBC = U+12FC = ETHIOPIC SYLLABLE DDEE
ዽ = 0xE1 0x8B 0xBD = U+12FD = ETHIOPIC SYLLABLE DDE
ዾ = 0xE1 0x8B 0xBE = U+12FE = ETHIOPIC SYLLABLE DDO
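To reproduce a table like this yourself, a few lines of Python (my sketch, not part of the answer above) print the UTF-8 bytes, code point and character name:

import unicodedata

for ch in "የዩዪያዬይዮ":
    utf8 = ch.encode("utf-8").hex(" ")
    print(f"{ch} = {utf8} = U+{ord(ch):04X} = {unicodedata.name(ch)}")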
Using Ethiopic characters on web pages is mostly a matter of fonts these days. (You may also have a problem with entering them conveniently, but this depends on your authoring environment.) People using e.g. Windows 7 have at least one font containing them, but old computers typically lack such fonts. The following fonts contain them (there may be others):
Code 2000, formerly freeware; the author has disappeared, so its status is obscure
Unifont, a free bitmap font
FreeSerif, a free font
Nyala, distributed with some versions of Windows
SunExt-A, a free font
Fixedsys Excelsior, a free bitmap font I suppose (haven’t tested)
I would probably use FreeSerif as a downloadable font, with @font-face.
I just came across the same problem, but there is an easy solution: Google now provides webfonts for many languages, including Ethiopic:
http://www.google.com/fonts/earlyaccess
To write Amharic or Tigrinya in web forms, you can simply use the Any Key Firefox add-on (https://addons.mozilla.org/en-US/firefox/addon/any-key/), and there is one for Chrome too!
But to create an editor using JavaScript, you can look at a site like http://www.lexilogos.com/keyboard/amharic.htm and try to figure out how they implemented it!
You probably want to look at
http://senamirmir.org/
which unless I am wrong has done what you want to do.
If you don't like their fonts SIL Abyssinica should be fine too (but it only includes one writing style).
The layout status will vary from system to system; to target *nix-like systems you need a layout merged into
http://www.freedesktop.org/wiki/Software/XKeyboardConfig/
@Samaya, by now you have probably got the answer you were looking for, but let me add what I think. Based on your original question, I think you are trying to develop a small piece of software which can be selected as a utility (as a feature) and used to display Ge'ez letters without the need to install a separate Ge'ez application. For that, I reckon, the utility should be developed in a way that lets it be selected as a language feature in an operating system (like Amharic in Windows, for instance). However, your subsequent comments seem to focus more on displaying Ge'ez characters on the web. As many have suggested, we already have that functionality. But if you still want to develop an application for it, I would suggest you have a Unicode array (U+1260 = በ, for instance) and a matching array of the transcriptions of your choice from a keyboard ("be" = በ, for instance). Your application would then read the transcription array as keyboard keys are entered and match it against the Unicode array to show the right letter in Ge'ez. I'm not sure I fully understood what you're looking for, but I myself, with colleagues, did a project that included this type of work. By the way, do you have to install Ge'ez software to view a Tigrinya/Ge'ez website? If so, check the version of your browser.
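That array-matching idea can be sketched in a few lines of Python (illustrative only: the transcriptions below are hypothetical, and a real input method would cover the whole syllabary):

# Hypothetical transcription-to-Ge'ez mapping, as described above.
TRANSLIT = {
    "ha": "ሀ",  # U+1200
    "be": "በ",  # U+1260
    "ye": "የ",  # U+12E8
}

def to_geez(typed):
    out, i = [], 0
    while i < len(typed):
        chunk = typed[i:i + 2]         # all keys here are two letters long
        if chunk in TRANSLIT:
            out.append(TRANSLIT[chunk])
            i += 2
        else:                          # pass anything unmapped through unchanged
            out.append(typed[i])
            i += 1
    return "".join(out)

print(to_geez("beye"))  # በየ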

Understanding the terms - Character Encodings, Fonts, Glyphs

I am trying to understand this stuff so that I can effectively work on internationalizing a project at work. I have just started, and would very much like to know from your expertise whether I've understood these concepts correctly. So far, here is the dumbed-down version (for my understanding) of what I've gathered from the web:
Character Encodings -> Sets of rules that tell the OS how to store characters, e.g. ISO8859-1, MSWIN1252, UTF-8, UCS-2, UTF-16. These rules are also called code pages/character sets, which map individual characters to numbers. Apparently Unicode handles this a bit differently from the others: instead of a direct mapping from a number (code point) to a glyph, it maps the code point to an abstract "character" which might be represented by different glyphs. [http://www.joelonsoftware.com/articles/Unicode.html]
Fonts -> These are implementations of character encodings. They are files of different formats (TrueType, OpenType, PostScript) that contain a mapping from each character in an encoding to a number.
Glyphs -> These are the visual representations of characters stored in the font files.
And based on the above understanding I have the below questions,
1) For the OS to understand an encoding, should it be installed separately? Or would installing a font that supports an encoding suffice? Is it okay to compare an encoding to a network protocol, say TCP, since it is just a set of rules? (Which of course begs the question: how does the OS understand these network protocols when I do not install them? :-p)
2) Will a font always have the complete implementation of a code page, or just part of it? Is there a tool that I can use to see each character in a font (.TTF file)? [Windows Font Viewer shows what a style of the font looks like but doesn't list the characters in the font file.]
3) Does a font file support multiple encodings? Is there a way to know which encoding(s) a font supports?
I apologize for asking too many questions, but I had these in my mind for some time and I couldn't find any site that is simple enough for my understanding. Any help/links for understanding this stuff would be most welcome. Thanks in advance.
If you want to learn more, of course I can point you to some resources:
Unicode, writing systems, etc.
The best source of information would probably be this book by Jukka:
Unicode Explained
If you were to follow the link, you'd also find these books:
CJKV Information Processing - deals with Chinese, Japanese, Korean and Vietnamese in detail but to me it seems quite hard to read.
Fonts & Encodings - personally I haven't read this book, so I can't tell you if it is good or not. Seems to be on topic.
Internationalization
If you want to learn about i18n, I can mention countless resources. But let's start with a book that will save you a great deal of time (you won't become an i18n expert overnight, you know):
Developing International Software - it might be 8 years old, but it is still worth every cent you're going to spend on it. The programming examples may relate to Windows (C++ and .NET), but the i18n and L10n knowledge is really there. A colleague of mine once said it saved him about two years of learning. As far as I can tell, he wasn't overstating.
You might be interested in some blogs or web sites on the topic:
Sorting it all out - Michael Kaplan's blog, often on i18n support on Windows platform
Global by design - John Yunker is actively posting bits of i18n knowledge to this site
Internationalization (I18n), Localization (L10n), Standards, and Amusements - also known as i18nguy, the web site where you can find more links, tutorials and stuff.
Java Internationalization
I am afraid that I am not aware of many up-to-date resources on that topic (that is, publicly available ones). The only current resource I know is the Java Internationalization trail. Unfortunately, it is fairly incomplete.
JavaScript Internationalization
If you are developing web applications, you probably also need something related to i18n in JavaScript. Unfortunately, the support is rather poor, but there are a few libraries which help deal with the problem. The most notable examples would be Dojo Toolkit and Globalize.
The former is a bit heavy, although it supports many aspects of i18n; the latter is lightweight, but unfortunately a lot is missing. If you choose to use Globalize, you might be interested in Jukka's latest book:
Going Global with JavaScript & Globalize.js - I read this, and as far as I can tell, it is great. It doesn't cover the topics you were originally asking about, but it is still worth reading, even just for hands-on examples of how to use Globalize.
"Apparently unicode handles this a bit differently than others. ie., instead of a direct mapping from a number(code point) to a glyph, it maps the code point to an abstract "character" which might be represented by different glyphs."
In the Unicode Character Encoding Model, there are 4 levels:
Abstract Character Repertoire (ACR) — The set of characters to be encoded.
Coded Character Set (CCS) — A one-to-one mapping from characters to integer code points.
Character Encoding Form (CEF) — A mapping from code points to a sequence of fixed-width code units.
Character Encoding Scheme (CES) — A mapping from code units to a serialized sequence of bytes.
For example, the character 𝄞 is represented by the code point U+1D11E in the Unicode CCS, the two code units D834 DD1E in the UTF-16 CEF, and the four bytes 34 D8 1E DD in the UTF-16LE CES.
In most older encodings like US-ASCII, the CEF and CES are trivial: Each character is directly represented by a single byte representing its ASCII code.
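You can verify those levels directly in Python (a sketch of mine, not from the answer):

ch = "\U0001D11E"                 # MUSICAL SYMBOL G CLEF

# CCS: the abstract code point.
print(f"U+{ord(ch):04X}")         # U+1D11E

# CES: the serialized bytes in UTF-16LE.
data = ch.encode("utf-16-le")
print(data.hex(" "))              # 34 d8 1e dd

# CEF: reassemble the little-endian bytes into 16-bit code units.
units = [data[i] | (data[i + 1] << 8) for i in range(0, len(data), 2)]
print([f"{u:04X}" for u in units])  # ['D834', 'DD1E']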
"1) For the OS to understand an encoding, should it be installed separately?"
The OS doesn't have to understand an encoding. You're perfectly free to use a third-party encoding library like ICU or GNU libiconv to convert between your encoding and the OS's native encoding, at the application level.
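For example, in Python (my illustration, not the answer's) an application converts between encodings without any OS involvement:

raw = b"caf\xe9"                 # "café" encoded as ISO-8859-1 bytes
text = raw.decode("iso-8859-1")  # bytes -> Unicode text
utf8 = text.encode("utf-8")      # Unicode text -> UTF-8 bytes
print(text, utf8)                # café b'caf\xc3\xa9'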
"2) Will a font always have the complete implementation of a code page, or just part of it?"
In the days of 7-bit (128-character) and 8-bit (256-character) encodings, it was common for fonts to include glyphs for the entire code page. It is not common today for fonts to include all 100,000+ assigned characters in Unicode.
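There are tools to inspect exactly which characters a font maps; for instance, with the third-party fontTools library in Python (the font filename is just an example):

from fontTools.ttLib import TTFont

font = TTFont("AbyssinicaSIL-R.ttf")      # any .ttf/.otf file
cmap = font["cmap"].getBestCmap()         # {code point: glyph name}
print(len(cmap), "characters mapped")
print("U+1200 covered:", 0x1200 in cmap)  # ETHIOPIC SYLLABLE HA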
I'll provide you with short answers to your questions.
It's generally not the OS that supports an encoding but the applications. Encodings are used to convert a stream of bytes to lists of characters. For example, in C# reading a UTF-8 string will automatically make it UTF-16 if you tell it to treat it as a string.
No matter what encoding you use, C# will simply use UTF-16 internally, and when you want to, for example, print a string from a foreign encoding, it will convert it to UTF-16 first, then look up the corresponding characters in the character tables (fonts) and show the glyphs.
I don't recall ever seeing a complete font. I don't have much experience with working with fonts either, so I cannot give you an answer for this one.
The answer to this one is in #1, but a short summary: fonts are usually encoding-independent, meaning that as long as the system can convert the input encoding to the font encoding you'll be fine.
Bonus answer: On "how does the OS understand network protocols it doesn't know?": again it's not the OS that handles them but the application. As long as the OS knows where to redirect the traffic (which application) it really doesn't need to care about the protocol. Low-level protocols usually do have to be installed, to allow the OS to know where to send the data.
This answer is based on my understanding of encodings, which may be wrong. Do correct me if that's the case!

Fast, Unicode-capable, cross-platform programmer's text editor that shows invisibles like ZWSP?

Our publishing workflow includes Windows and Linux machines (there are some Macs too, but not in the critical-path workflow). Many texts include both English and Khmer and are marked-up in XML.
XML Copy Editor is the best cross-platform open-source XML editor I've discovered. It utilizes the Scintilla editing component, which is generally good with Unicode but which does not enable non-printing or invisible characters like U+200B (zero-width space) and U+200C (zero-width non-joiner) to be displayed. Khmer does not separate words with a space character as Western languages do, so ZWSP is used in electronic texts to enable applications to break lines easily.
Ideally I'd edit the markup and the content in a single editor, but XML awareness is less important at times than being able to display invisibles. (OpenOffice.org Writer and Microsoft Word are the only two apps I know that will display ZWSP. They are not suitable for the markup and text manipulations that need to be done to prepare manuscripts for publication, unfortunately, although I guess they're fine for authoring.)
I tried out a promising editor last week, but a search-and-replace regex operation that took under a second in TextPad 4.7.3 lasted over twenty seconds. So I want to mention that speed and the ability to handle large (up to 150 MB) files are also a concern.
Is there a good, fast, free or not-too-expensive text editor, with versions on Windows and Linux and maybe Mac too, Unicode-aware and capable of displaying invisibles like ZWSP? One that has syntax highlighting, can handle large files, and is customizable enough that I won't tear my hair out in frustration?
I don't know about ZWSP in particular, but EditPadPro is good, fast, not expensive, has a very good regex engine and is Unicode-aware (and well-suited to editing XML, too). The developer (Jan Goyvaerts) lives in Thailand and knows about requirements for Eastern scripts and languages, so chances are good that it will be able to handle these texts.
EditPad Pro does not (yet) have the ability to visualize non-printable characters other than the ASCII space and tab. Version 6 does recognize ZWSP as a word boundary when doing word wrapping and selecting words by double-clicking or Ctrl+Shift+Left/Right.
What you can do is search for the regular expression \u200B. Though this doesn't make the zero-width space visible, it will select it and put the cursor after it. You could use the regex \u200B\X and turn on the Highlight button on the search panel to highlight each grapheme after U+200B. You could even use the syntax coloring scheme editor to edit the provided XML scheme to use that regex to always highlight each grapheme after U+200B.
EditPad Pro easily handles 150 MB files and has a powerful regex engine (same as used in RegexBuddy and PowerGREP). Maximum file size is 2 GB. Windows only.
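Editors aside, a small script can at least report where invisibles occur in a file; a Python sketch (the filename is just an example):

# Replace invisible characters with visible tags so they can be spotted.
INVISIBLES = {"\u200b": "<ZWSP>", "\u200c": "<ZWNJ>"}

with open("chapter.xml", encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        marked = line
        for ch, tag in INVISIBLES.items():
            marked = marked.replace(ch, tag)
        if marked != line:
            print(f"{lineno}: {marked}", end="")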
I'm using CKEditor; it's cross-platform and completely supports Unicode.
Take a look at it.