Is there a way to render multiple characters as one glyph in a TrueType font? - unicode

I am looking for a way to render several characters (e.g. the sequence tlh) as one glyph in a TrueType font. This should not simply leave out the l and h in the font, as the standalone characters l and h have their own appearance in the Klingon writing system.
This rendering is required for displaying Latin transcriptions of Klingon texts. Klingon characters are transcribed into Latin as follows.
a b ch D e gh H I j l m n ng o p q Q r S t tlh u v w y ʼ
I am hence looking for a way to render the character sequences ch, gh, ng and tlh as their Klingon equivalents without losing the ability to render a standalone n, t or l.
Is this somehow possible in a TrueType font? Or would you recommend another font format?
Up to now, a transcription known as xifan hol is usually used for Latin rendering, in which the longer sequences are simply replaced by single characters, e.g. tlh → x or ng → f. I would like to avoid this workaround.

What is the maximum number of character changes for different Unicode normalization forms?

Using different Unicode normalization forms may result in different output lengths for the same input. For example:
>>> import unicodedata
>>> c = "å"
>>> a = len(unicodedata.normalize("NFC", c))
>>> b = len(unicodedata.normalize("NFKD", c))
>>> print(a, b)
1 2
When you change c, what is the maximum value of b/a?
As of the time of writing (Unicode 13.0), the biggest length difference between a character’s NFC and NFKD expansions is a factor of 18. U+FDFA ﷺ ARABIC LIGATURE SALLALLAHOU ALAYHE WASALLAM is unaffected by normalisation forms C and D, but decomposes into a sequence of 18 codepoints (صلى الله عليه وسلم) under KC and KD.
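This can be checked directly with Python's unicodedata module (a quick sketch; the figures reflect Unicode data as of the time of writing):
>>> import unicodedata
>>> s = "\uFDFA"
>>> len(unicodedata.normalize("NFC", s))
1
>>> len(unicodedata.normalize("NFKD", s))
18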
However, there is no formally defined limit to this in the standard. Future updates could in theory add a new character with an even longer decomposition mapping. The Unicode Standard only guarantees an upper bound for NFC. Per Unicode Stability Policy:
Canonical mappings (Decomposition_Mapping property values) are always limited so that no string when normalized to NFC expands to more than 3× in length (measured in code units).

Why does LATIN SMALL LETTER DOTLESS I + COMBINING DOT ABOVE not get normalized to "i" in NFC form?

Example in Python:
>>> import unicodedata
>>> s = 'ı̇'
>>> len(s)
2
>>> list(s)
['ı', '̇']
>>> print(", ".join(map(unicodedata.name, s)))
LATIN SMALL LETTER DOTLESS I, COMBINING DOT ABOVE
>>> normalized = unicodedata.normalize('NFC', s)
>>> print(", ".join(map(unicodedata.name, normalized)))
LATIN SMALL LETTER DOTLESS I, COMBINING DOT ABOVE
As you can see, NFC normalization does not compose the dotless i + a dot to a normal i. Is there a rationale for this? Is this an oversight? Or is it not included because NFC is supposed to be the perfect inverse of NFD (and one wouldn't want to decompose i to dotless i + dot)?
While NFC isn't the "perfect inverse" of NFD, this follows from NFC being defined in terms of the same decomposition mappings as NFD. NFC is basically defined as NFD followed by recomposing certain NFD decomposition pairs. Since there's no decomposition mapping for LATIN SMALL LETTER I, it can never be the result of a recomposition.
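For illustration, this is easy to check with Python's unicodedata module: "i" has no decomposition mapping, whereas a character like "å" does, which is why NFC can recompose the latter but never the former:
>>> import unicodedata
>>> unicodedata.decomposition('i')
''
>>> unicodedata.decomposition('å')
'0061 030A'
>>> unicodedata.normalize('NFC', 'a\u030A')
'å'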

Why should the length of the string be greater than or equal to the number of states in the pumping lemma?

If L is a regular language, then there exists a constant n (which depends on L) such that every string w in L whose length is greater than or equal to n can be divided into three strings, w = xyz.
Here |w| is the length of the string and n is the number of states.
Why should we require |w| to be greater than or equal to n?
And what is the pumping length?
If you look at the complete statement of the lemma (http://en.wikipedia.org/wiki/Pumping_lemma_for_regular_languages), you can see that it actually states that every such string is formed by a prefix x, a part y that can be repeated any number of times, and a suffix z. Now it is obvious that, in the shortest case (when the repeating part is taken only once), the length of w equals the number of states needed for the language. This Wikipedia image is very useful:
http://en.wikipedia.org/wiki/File:Pumping-Lemma_xyz_svg.svg
You seem to be misunderstanding the lemma (which you also have not stated completely), and mixing aspects of a proof with what you did state. The lemma says that for every regular language L, there is a constant p such that every string of at least p symbols that belongs to L has a non-empty substring of length no greater than p that can be "pumped", always yielding another element of L. The constant p is the (a) "pumping length".
This can be proved by observing that if a language is regular then there is a finite state automaton that accepts it, and taking p to be the number of states in that automaton (details omitted).
That does not imply, however, that the number of states in the smallest FSA that recognizes a given regular language is the smallest possible pumping length for that language. For instance, consider the language consisting of the union of { a^n } and { b^n } for all n. You need a four-state FSA to recognize this language, but its minimum pumping length is 1.
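As a rough sanity check (not a proof), here is a small Python sketch that spot-checks that a pumping length of 1 works for that language, taking y to be the first symbol of each sample string:
import re

def in_L(w):
    """Membership test for L, the union of { a^n } and { b^n } for all n."""
    return re.fullmatch(r"a*|b*", w) is not None

def pumpable_with_p_1(w):
    # Split w = xyz with |xy| <= 1 and |y| >= 1: x = "", y = first symbol, z = the rest.
    x, y, z = "", w[:1], w[1:]
    return all(in_L(x + y * i + z) for i in range(4))  # spot-check i = 0..3

words = [c * n for c in "ab" for n in range(1, 6)]
print(all(in_L(w) and pumpable_with_p_1(w) for w in words))  # True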

Set of unambiguous looking letters & numbers for user input

Is there an existing subset of the alphanumerics that is easier to read? In particular, is there a subset that has fewer characters that are visually ambiguous, and by removing (or equating) certain characters we reduce human error?
I know "visually ambiguous" is somewhat waffly of an expression, but it is fairly evident that D, O and 0 are all similar, and 1 and I are also similar. I would like to maximize the size of the set of alpha-numerics, but minimize the number of characters that are likely to be misinterpreted.
The only precedent I am aware of for such a set is the Canadian postal code system, which removes the letters D, F, I, O, Q, and U; that subset was created to aid the postal system's OCR process.
My initial thought is to use only capital letters and numbers as follows:
A
B = 8
C = G
D = 0 = O = Q
E = F
H
I = J = L = T = 1 = 7
K = X
M
N
P
R
S = 5
U = V = Y
W
Z = 2
3
4
6
9
This problem may be difficult to separate from the given typeface. The distinctiveness of the characters in the chosen typeface could significantly affect the potential visual ambiguity of any two characters, but I expect that in most modern typefaces the characters equated above will have a similar enough appearance to warrant equating them.
I would be grateful for thoughts on the above: are these equivalences suitable, or are there perhaps more characters that should be equated? Would lowercase characters be more suitable?
I needed a replacement for hexadecimal (base 16) for similar reasons (e.g. for encoding a key). The best I could come up with is the following set of 16 characters, which can be used as a replacement for hexadecimal:
Hexadecimal: 0 1 2 3 4 5 6 7 8 9 A B C D E F
Replacement: H M N 3 4 P 6 7 R 9 T W C X Y F
In the replacement set, we consider the following:
All characters used have major distinguishing features that would only be omitted in a truly awful font.
Vowels A E I O U omitted to avoid accidentally spelling words.
Sets of characters that could potentially be very similar or identical in some fonts are avoided completely (none of the characters in any set are used at all):
0 O D Q
1 I L J
8 B
5 S
2 Z
By avoiding these characters completely, the hope is that the user will enter the correct characters, rather than trying to correct mis-entered characters.
For sets of less similar but potentially confusing characters, we only use one character in each set, hopefully the most distinctive:
Y U V
Here Y is used, since it always has the lower vertical section, and a serif in serif fonts.
C G
Here C is used, since it seems less likely that a C would be entered as a G than vice versa.
X K
Here X is used, since it is more consistent in most fonts.
F E
Here F is used, since it is not a vowel.
In the case of these similar sets, entry of any character in the set could be automatically converted to the one that is actually used (the first one listed in each set). Note that E must not be automatically converted to F if hexadecimal input might be used (see below).
Note that there are still similar-sounding letters in the replacement set; this is pretty much unavoidable. When reading aloud, a phonetic alphabet should be used.
Where characters that are also present in standard hexadecimal are used in the replacement set, they are used for the same base-16 value. In theory mixed input of hexadecimal and replacement characters could be supported, provided E is not automatically converted to F.
Since this is just a character replacement, it should be easy to convert to/from hexadecimal.
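For illustration, here is a minimal Python sketch of that conversion, using a pair of translation tables built from the mapping above:
HEXCHARS = "0123456789ABCDEF"
REPLACEMENT = "HMN34P67R9TWCXYF"

to_replacement = str.maketrans(HEXCHARS, REPLACEMENT)
to_hexadecimal = str.maketrans(REPLACEMENT, HEXCHARS)

print("C0FFEE".translate(to_replacement))  # CHFFYY
print("CHFFYY".translate(to_hexadecimal))  # C0FFEE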
Upper case seems best for the "canonical" form for output, although lower case also looks reasonable, except perhaps for "h" and "n", which should still be relatively clear in most fonts:
h m n 3 4 p 6 7 r 9 t w c x y f
Input can of course be case-insensitive.
There are several similar systems for base 32; see http://en.wikipedia.org/wiki/Base32. However, these obviously need to introduce more similar-looking characters, in return for an additional 25% more information per character.
Apparently the following set was also used for Windows product keys in base 24, but again has more similar-looking characters:
B C D F G H J K M P Q R T V W X Y 2 3 4 6 7 8 9
My set of 23 unambiguous characters is:
c,d,e,f,h,j,k,m,n,p,r,t,v,w,x,y,2,3,4,5,6,8,9
I needed a set of unambiguous characters for user input, and I couldn't find anywhere that others have already produced a character set and set of rules that fit my criteria.
My requirements:
No capitals: this is supposed to be used in URIs, and typed by people who might not have a lot of typing experience, for whom even the shift key can slow them down and cause uncertainty. I also want someone to be able to say "all lowercase" so as to reduce uncertainty, so I want to avoid capital letters.
Few or no vowels: an easy way to avoid creating foul language or surprising words is to simply omit most vowels. I think keeping "e" and "y" is ok.
Resolve ambiguity consistently: I'm open to using some ambiguous characters, so long as I only use one character from each group (e.g., out of lowercase s, uppercase S, and five, I might only use five); that way, on the backend, I can just replace any of these ambiguous characters with the one correct character from their group. So, the input string "3Sh" would be replaced with "35h" before I look up its match in my database.
Only needed to create tokens: I don't need to encode information like base64 or base32 do, so the exact number of characters in my set doesn't really matter, besides my wanting it to be as large as possible. It only needs to be useful for producing random UUID-type id tokens.
Strongly prefer non-ambiguity: I think it's much more costly for someone to enter a token and have something go wrong than it is for someone to have to type out a longer token. There's a tradeoff, of course, but I want to strongly prefer non-ambiguity over brevity.
The confusable groups of characters I identified:
A/4
b/6/G
8/B
c/C
f/F
9/g/q
i/I/1/l/7 - just too ambiguous to use; note that a European "1" can look a lot like many people's "7"
k/K
o/O/0 - just too ambiguous to use
p/P
s/S/5
v/V
w/W
x/X
y/Y
z/Z/2
Unambiguous characters:
I think this leaves only 9 totally unambiguous lowercase/numeric chars, with no vowels:
d,e,h,j,m,n,r,t,3
Adding back in one character from each of those ambiguous groups (and trying to prefer the character that looks most distinct, while avoiding uppercase), there are 23 characters:
c,d,e,f,h,j,k,m,n,p,r,t,v,w,x,y,2,3,4,5,6,8,9
Analysis:
Using the rule of thumb that a UUID with a numerical equivalent range of N possibilities is sufficient to avoid collisions for sqrt(N) instances:
an 8-digit UUID using this character set should be sufficient to avoid collisions for about 300,000 instances
a 16-digit UUID using this character set should be sufficient to avoid collisions for about 80 billion instances.
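For what it's worth, here is a minimal Python sketch of generating such tokens from this 23-character set (the function name and default length are just examples):
import secrets

ALPHABET = "cdefhjkmnprtvwxy2345689"  # the 23 characters listed above

def make_token(length=8):
    # 23**8 is about 7.8e10 possibilities, so roughly sqrt(23**8) ~ 280,000
    # tokens can be issued before collisions become likely.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(make_token())
print(make_token(16))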
Mainly drawing inspiration from this ux thread, mentioned by #rwb,
Several programs use similar things. The list in your post seems to be very similar to those used in these programs, and I think it should be enough for most purposes. You can always add redundancy (error correction) to "forgive" minor mistakes; this will require you to space out your codes (see Hamming distance; a small sketch follows below), though.
No references as to the particular method used in deriving the lists, except trial and error with humans (which is great for non-OCR: your users are humans).
It may make sense to use character grouping (say, groups of 5) to increase context ("first character in the second of 5 groups")
Ambiguity can be eliminated by using complete nouns (from a dictionary with few look-alikes; word-edit-distance may be useful here) instead of characters. People may confuse "1" with "i", but few will confuse "one" with "ice".
Another option is to make your code into a (fake) word that can be read out loud. A markov model may help you there.
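As a small sketch of the "space out your codes" idea mentioned above: only hand out codes whose pairwise Hamming distance is at least 3, so that a single mistyped character can still be detected and corrected (the sample codes here are arbitrary):
def hamming(a, b):
    """Number of positions at which two equal-length codes differ."""
    return sum(x != y for x, y in zip(a, b))

def far_enough(code, accepted, min_dist=3):
    return all(hamming(code, c) >= min_dist for c in accepted)

accepted = []
for candidate in ["HMN34P", "HMN34X", "RW967C", "RW967T", "C4T9RW"]:
    if far_enough(candidate, accepted):
        accepted.append(candidate)

print(accepted)  # ['HMN34P', 'RW967C', 'C4T9RW'] -- only codes at distance >= 3 survive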
If you have the option to use only capitals, I created this set based on characters which users commonly mistyped; however, this wholly depends on the font they read the text in.
Characters to use: A C D E F G H J K L M N P Q R T U V W X Y 3 4 6 7 9
Characters to avoid:
B similar to 8
I similar to 1
O similar to 0
S similar to 5
Z similar to 2
What you seek is an unambiguous, efficient human-computer code. What I recommend is to encode the entire data with literal (meaningful) words, nouns in particular.
I have been developing software to do just that, and most efficiently. I call it WCode. Technically it's just Base-1024 encoding, wherein you use words instead of symbols.
Here are the links:
Presentation: https://docs.google.com/presentation/d/1sYiXCWIYAWpKAahrGFZ2p5zJX8uMxPccu-oaGOajrGA/edit
Documentation: https://docs.google.com/folder/d/0B0pxLafSqCjKOWhYSFFGOHd1a2c/edit
Project: https://github.com/San13/WCode (Please wait while I get around to uploading...)
This would be a general problem in OCR. Thus, for an end-to-end solution where the OCR encoding is controlled, specialised fonts have been developed to solve the "visual ambiguity" issue you mention.
See: http://en.wikipedia.org/wiki/OCR-A_font
As additional information: you may want to know about Base32 encoding, wherein the symbol for the digit '1' is not used, as it may be confused with the letter 'l'.
Unambiguous looking letters for humans are also unambiguous for optical character recognition (OCR). By removing all pairs of letters that are confusing for OCR, one obtains:
!+2345679:BCDEGHKLQSUZadehiopqstu
See https://www.monperrus.net/martin/store-data-paper
It depends how large you want your set to be. For example, just the set {0, 1} will probably work well. Similarly the set of digits only. But probably you want a set that's roughly half the size of the original set of characters.
I have not done this, but here's a suggestion. Pick a font, pick an initial set of characters, and write some code to do the following. Draw each character to fit into an n-by-n square of black and white pixels, for n = 1 through (say) 10. Cut away any all-white rows and columns from the edge, since we're only interested in the black area. That gives you a list of 10 codes for each character. Measure the distance between any two characters by how many of these codes differ. Estimate what distance is acceptable for your application. Then do a brute-force search for a set of characters which are that far apart.
Basically, use a script to simulate squinting at the characters and see which ones you can still tell apart.
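Here is a rough Python sketch of that squint test, assuming Pillow is installed; "DejaVuSans.ttf" is just a placeholder font path, not something prescribed above:
from itertools import combinations
from PIL import Image, ImageDraw, ImageFont

def codes(ch, font_path="DejaVuSans.ttf", sizes=range(1, 11)):
    """Render ch white-on-black, crop the empty border, downsample to n-by-n grids."""
    img = Image.new("L", (64, 64), 0)
    font = ImageFont.truetype(font_path, 48)
    ImageDraw.Draw(img).text((8, 4), ch, font=font, fill=255)
    img = img.crop(img.getbbox())  # cut away the all-background rows and columns
    return [tuple(img.resize((n, n)).point(lambda p: 255 if p >= 128 else 0).getdata())
            for n in sizes]

def distance(a, b):
    """How many of the 10 per-size codes differ between characters a and b."""
    return sum(ca != cb for ca, cb in zip(codes(a), codes(b)))

for a, b in combinations("0ODQ1IL", 2):
    print(a, b, distance(a, b))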
Here's some Python I wrote to encode and decode integers using the system of characters described above.
def base20encode(i):
    """Convert an integer into a base-20 string of unambiguous characters."""
    if not isinstance(i, int):
        raise TypeError('This function must be called on an integer.')
    chars, s = '012345689ACEHKMNPRUW', ''
    while i > 0:
        i, remainder = divmod(i, 20)
        s = chars[remainder] + s
    return s or '0'  # i == 0 encodes to '0'

def base20decode(s):
    """Fold ambiguous characters onto their unambiguous equivalents, then return the integer value of the resulting base-20 string."""
    if not isinstance(s, str):
        raise TypeError('This function must be called on a string.')
    # Map look-alikes to the canonical character of their group (e.g. B -> 8, X -> K).
    s = s.translate(str.maketrans('BGDOQFIJLT7XSVYZ', '8C000E11111K5UU2'))
    chars, i, exponent = '012345689ACEHKMNPRUW', 0, 1
    for digit in s[::-1]:
        i += chars.index(digit) * exponent
        exponent *= 20
    return i

base20decode(base20encode(10))
base58:123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz

Prove language irregular with the pumping lemma

I am trying to prove that the following language is not regular using the pumping lemma:
L = { a^i b^j | i^2 > j }
Any tips on this? I am completely stuck.
Thanks.
The pumping lemma says:
If a language A is regular => there is a number p (pumping length) where, if s is any string in L such that |s| >= p, then s may be divided into three pieces s=xyz, satisfying the following condition:
xy^iz is in L for each i>=0
|y|>=0
p>=|xy|
The right way to show that a certain language L is not regular is to suppose L regular and try to reach a contradiction.
Let's try to demonstrate that L = { 0^n 1^n | n >= 0 } is not regular.
We start assuming to the contrary that L is regular.
You can think about this kind of demonstration as a game:
Challenger: He chooses the pumping length p. You cannot make any assumptions about it.
You: Now it is your turn: choose the "kind" of string that represents the irregularity of the language.
Let's say that the string is of the form 0^p 1^p.
A good tip in this step is to try to limit the adversary's next move.
Challenger: He presents to you a string s of the form 0^p 1^p.
You: It's time to pump! If you chose the form of the string correctly in your previous move, you can make some assumptions. In our case, for example, we know that the substring y consists only of 0s (at least one 0, because |y| > 0), since |xy| <= p and the first p elements are 0s.
Now we show that there exists an i >= 0 such that xy^iz is not in L. For example, for i = 2 the string xyyz has more 0s than 1s and so is not a member of L. This is a contradiction, so L is not regular.
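For concreteness, here is a tiny Python sketch of that i = 2 pumping step (the particular split shown is just one example; any split with |xy| <= p forces y to consist of 0s):
def in_L(w):
    """Membership test for L = { 0^n 1^n | n >= 0 }."""
    n = len(w) // 2
    return len(w) % 2 == 0 and w == "0" * n + "1" * n

p = 5                            # whatever pumping length the challenger picks
s = "0" * p + "1" * p            # our chosen string 0^p 1^p
x, y, z = "", "0" * 2, s[2:]     # one possible split with |xy| <= p, |y| > 0
print(in_L(s), in_L(x + y * 2 + z))  # True False: the pumped string leaves L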
Never forget to demonstrate why the pumped string cannot be a member of L.
If you have any doubt, feel free to ask :)
Cheers.
To the above answer, "The pumping lemma says: If a language A is regular => there is a number p (pumping length) where, if s is any string in L such that |s| >= p, then s may be divided into three pieces s=xyz, satisfying the following condition:"
You mean "If a language L is regular"
Also, the three conditions
1. xy^iz is in L for each i>=0
2. |y|>=0
3. p>=|xy|
The second should be just |y| > 0 not >=
Say you choose the string:
a^3 b^5
aaabbbbb, which is in the language (3^2 = 9 > 5).
Now your opponent can choose XYZ.
Their options:
1.) X(empty)Y(some a's)
2.) X(some a's)Y(some a's and some b's)
3.) X(some a's)Y(some a's)
Based on their possible choices, you pump up Y using Y^i where i is an arbitrary number of your choice.
Say they choose 1.)
X(-)Y(a)Z(aabbbbb)
If you "pump" Y^i choosing i = 0, the new string becomes aabbbbb, which is not in the language (2^2 = 4 is not greater than 5).
Repeat this for each possible choice of the opponent, if you can pump up Y in a way that produces a string that is not in the language L, then you've succeeded in proving that the language is not regular.