Simplifying a 9 variable boolean expression

I am trying to create a tic-tac-toe program as a mental exercise and I have the board states stored as booleans like so:
http://i.imgur.com/xBiuoAO.png
I would like to simplify this boolean expression...
(a&b&c) | (d&e&f) | (g&h&i) | (a&d&g) | (b&e&h) | (c&f&i) | (a&e&i) | (g&e&c)
My first thoughts were to use a Karnaugh Map but there were no solvers online that supported 9 variables.
And here's the question:
First of all, how would I know if a boolean condition is already as simple as possible?
And second: what does the above boolean condition simplify to?

2. Simplified condition:
The original expression
a&b&c|d&e&f|g&h&i|a&d&g|b&e&h|c&f&i|a&e&i|g&e&c
can be simplified to the following, given that & binds more tightly than |
e&(d&f|b&h|a&i|g&c)|a&(b&c|d&g)|i&(g&h|c&f)
which is 4 characters shorter and performs at most 18 & and | evaluations in the worst case (the original one counted 23)
There is no shorter boolean formula (see point below). If you switch to matrices, maybe you can find another solution.
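If you want to double-check that claim yourself, a brute-force comparison over all 2^9 = 512 assignments is instant. A small Python sketch (not part of the proof below):

from itertools import product

def original(a, b, c, d, e, f, g, h, i):
    return (a and b and c) or (d and e and f) or (g and h and i) or \
           (a and d and g) or (b and e and h) or (c and f and i) or \
           (a and e and i) or (g and e and c)

def simplified(a, b, c, d, e, f, g, h, i):
    return (e and ((d and f) or (b and h) or (a and i) or (g and c))) or \
           (a and ((b and c) or (d and g))) or \
           (i and ((g and h) or (c and f)))

# The two formulas agree on all 512 assignments of the nine cells.
assert all(bool(original(*v)) == bool(simplified(*v))
           for v in product([False, True], repeat=9))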
1. Making sure we got the smallest formula
Normally, it is very hard to find the smallest formula. See this recent paper if you are more interested. But in our case, there is a simple proof.
We will reason about a formula being the smallest with respect to the formula size, where for a variable a, size(a)=1, for a boolean operation size(A&B) = size(A|B) = size(A) + 1 + size(B), and for negation size(!A) = size(A) (thus we can suppose that we have Negation Normal Form at no cost).
With respect to that size, our formula has size 37.
The proof that you cannot do better consists in first remarking that there are 8 rows to check, and that there is always a pair of letters distinguishing 2 different rows. Since we can regroup these 8 checks into no fewer than 3 conjuncts with the remaining variable, the number of variables in the final formula should be at least 8*2+3 = 19, from which we can deduce the minimal tree size.
Detailed proof
Let us suppose that a given formula F is the smallest and in NNF format.
F cannot contain negated variables like !a. For that, remark that F should be monotonic, that is, if it returns "true" (there is a winning row), then changing one of the variables from false to true should not change that result. According to Wikipedia, F can be written without negation. Even better, we can prove that we can remove the negations. Following this answer, we could convert to and from DNF format, removing negated variables in the middle or replacing them by true.
F cannot contain a sub-tree like a disjunction of two variables a|b.
For this formula to be useful and not exchangeable with either a or b, it would mean that there are contradicting assignments such that for example
F[a|b] = true and F[a] = false, therefore that a = false and b = true because of monotonicity. Also, in this case, turning b to false makes the whole formula false because false = F[a] = F[a|false] >= F[a|b](b = false).
Therefore there is a row passing by b which is the cause of the truth, and it cannot go through a, hence for example e = true and h = true.
And the checking of this row passes by the expression a|b for testing b. However, it means that with a, e, h being true and all the others set to false, F is still true, which contradicts the purpose of the formula.
Every subtree looking like a&b checks a unique row. So the last letter should appear just above the corresponding disjunction (a&b|...)&{c somewhere for sure here}, or this leaf is useless and either a or b can be removed safely. Indeed, suppose that c does not appear above, and consider the game where a&b&c is true and all other variables are false. Then the expression where c is supposed to appear above returns false, so a&b will always be useless. So there is a shorter expression obtained by removing a&b.
There are 8 independent branches, so there are at least 8 subtrees of the form a&b. We cannot regroup them using a disjunction of 2 conjunctions since a, f and h never share the same rows, so there must be 3 outer variables. 8*2+3 makes 19 variables appear in the final formula.
A tree with 19 variables cannot have fewer than 18 operators, so in total the size has to be at least 19+18 = 37.
You can have variants of the above formula.
QED.

One option is doing the Karnaugh map manually. Since you have 9 variables, that makes for a 2^4 by 2^5 grid, which is rather large, and by the looks of the equation, probably not very interesting either.
By inspection, it doesn't look like a Karnaugh map will give you any useful information (Karnaugh maps basically reduce expressions such as ((!a)&b) | (a&b) into b), so in that sense of simplification, your expression is already as simple as it can get. But if you want to reduce the number of computations, you can factor out a few variables using the distributivity of AND over OR.
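As a tiny illustration of that factoring, here is a Python sketch over just the two winning lines that share the corner a:

from itertools import product

# The two winning lines through the corner a share that variable, so
# (a & b & c) | (a & d & g) can become a & ((b & c) | (d & g)),
# which uses one fewer AND and short-circuits as soon as a is false.
for a, b, c, d, g in product([False, True], repeat=5):
    assert ((a and b and c) or (a and d and g)) == (a and ((b and c) or (d and g)))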

The best way to think of this is how a person would think of it. No person would say to themselves, "a and b and c, or if d and e and f," etc. They would say "Any three in a row, horizontally, vertically, or diagonally."
Also, instead of doing eight checks (3 rows, 3 columns, and 2 diagonals), you can do just four checks (three rows and one diagonal), then rotate the board 90 degrees, then do the same checks again.
Here's what you end up with. These functions all assume that the board is a three-by-three matrix of booleans, where true represents a winning symbol, and false represents a not-winning symbol.
def win?(board)
  winning_row_or_diagonal?(board) ||
    winning_row_or_diagonal?(rotate_90(board))
end

def winning_row_or_diagonal?(board)
  winning_row?(board) || winning_diagonal?(board)
end

def winning_row?(board)
  3.times.any? do |row_number|
    three_in_a_row?(board, row_number, 0, 0, 1)
  end
end

def winning_diagonal?(board)
  three_in_a_row?(board, 0, 0, 1, 1)
end

def three_in_a_row?(board, x, y, delta_x, delta_y)
  3.times.all? do |i|
    board[x + i * delta_x][y + i * delta_y]
  end
end

def rotate_90(board)
  board.transpose.map(&:reverse)
end
The matrix rotate is from here: https://stackoverflow.com/a/3571501/238886
Although this code is quite a bit more verbose, each function is clear in its intent. Rather than a long boolean expression, the code now expresses the rules of tic-tac-toe.

You know it's as simple as possible when there are no common sub-terms to extract (e.g. if you had "a&b" in two different trios).
You know your tic tac toe solution must already be as simple as possible because any pair of boxes can belong to at most only one winning line (only one straight line can pass through two given points), so (a & b) can't be reused in any other win you're checking for.
(Also, "simple" can mean a lot of things; specifying what you mean may help you answer your own question. )


How does aggregate generalise fold and fold generalise reduce?

As far as I understand aggregate is a generalisation of fold which in turn is a generalisation of reduce.
Similarly, combineByKey is a generalisation of aggregateByKey, which in turn is a generalisation of foldByKey, which in turn is a generalisation of reduceByKey.
However I have trouble finding simple examples for each of those seven methods which can in turn only be expressed by them and not their less general versions. For example I found http://blog.madhukaraphatak.com/spark-rdd-fold/ giving an example for fold, but I have been able to use reduce in the same situation as well.
What I found out so far:
I read that the more generalised methods can be more efficient, but that would be a non-functional requirement and I would like to get examples which can not be implemented with the more specific method.
I also read that e.g. the function passed to fold only has to be associative, while the one for reduce has to be commutative additionally: https://stackoverflow.com/a/25158790/4533188 (However, I still don't know any good simple example.) whereas in https://stackoverflow.com/a/26635928/4533188 I read that fold needs both properties to hold...
We could think of the zero value as a feature (e.g. for fold over reduce) as in "add all elements and add 3" and using 3 as the zero value, but that would be misleading, because 3 would be added for each partition, not just once. Also this is simply not the purpose of fold as far as I understood - it wasn't meant as a feature, but as a necessity to implement it to be able to take non-commutative functions.
What would simple examples for those seven methods be?
Let's work through what is actually needed logically.
First, note that if your collection is unordered, any set of (binary) operations on it need to be both commutative and associative, or you'll get different answers depending on which (arbitrary) order you choose each time. Since reduce, fold, and aggregate all use binary operations, if you use these things on a collection that is unordered (or is viewed as unordered), everything must be commutative and associative.
reduce is an implementation of the idea that if you can take two things and turn them into one thing, you can collapse an arbitrarily long collection into a single element. Associativity is exactly the property that it doesn't matter how you pair things up as long as you eventually pair them all and keep the left-to-right order unchanged, so that's exactly what you need.
a   b   c   d          a   b   c   d          a   b   c   d
a # b   c   d          a # b   c   d          a   b # c   d
(a#b)   c # d          (a#b) # c   d          a   (b#c)   d
(a#b) # (c#d)          ((a#b)#c) # d          a # ((b#c)#d)
All of the above are the same as long as the operation (here called #) is associative. There is no reason to swap around which things go on the left and which go on the right, so the operation does not need to be commutative (addition is: a+b == b+a; concat is not: ab != ba).
reduce is mathematically simple and requires only an associative operation
Reduce is limited, though, in that it doesn't work on empty collections, and in that you can't change the type. If you're working sequentially, you can use a function that takes a new type and the old type, and produces something with the new type. This is a sequential fold (left-fold if the new type goes on the left, right-fold if it goes on the right). There is no choice about the order of operations here, so commutativity and associativity and everything are irrelevant. There's exactly one way to work through your list sequentially. (If you want your left-fold and right-fold to always be the same, then the operation must be associative and commutative, but since left- and right-folds don't generally get accidentally swapped, this isn't very important to ensure.)
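For instance, a sequential left fold in plain Python (functools.reduce with an initial value) changes the element type int into an accumulator of type str, and no algebraic law is needed because there is only one possible evaluation order:

from functools import reduce

# A sequential left fold: the accumulator is a str ("the new type goes on the
# left"), the elements are ints. There is only one evaluation order, so no
# associativity or commutativity is required of the operation.
assert reduce(lambda acc, x: acc + str(x), [1, 2, 3, 4], "") == "1234"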
The problem comes when you want to work in parallel. You can't sequentially go through your collection; that's not parallel by definition! So you have to insert the new type at multiple places! Let's call our fold operation #, and we'll say that the new type goes on the left. Furthermore, we'll say that we always start with the same element, Z. Now we could do any of the following (and more):
a   b   c   d              a   b   c   d              a   b   c   d
Z#a b   c   d              Z#a b   Z#c d              Z#a Z#b Z#c Z#d
(Z#a) # b   c   d          (Z#a) # b   (Z#c) # d
((Z#a)#b) # c   d
(((Z#a)#b)#c) # d
Now we have a collection of one or more things of the new type. (If the original collection was empty, we just take Z.) We know what to do with that! Reduce! So we make a reduce operation for our new type (let's call it $, and remember it has to be associative), and then we have aggregate:
a   b   c   d              a   b   c   d                  a   b   c   d
Z#a b   c   d              Z#a b   Z#c d                  Z#a Z#b Z#c Z#d
(Z#a) # b   c   d          (Z#a) # b   (Z#c) # d          Z#a $ Z#b   Z#c $ Z#d
((Z#a)#b) # c   d          ((Z#a)#b) $ ((Z#c)#d)          ((Z#a)$(Z#b)) $ ((Z#c)$(Z#d))
(((Z#a)#b)#c) # d
Now, these things all look really different. How can we make sure that they end up to be the same? There is no single concept that describes this, but the Z# operation has to be zero-like and $ and # have to be homomorphic, in that we need (Z#a)#b == (Z#a)$(Z#b). That's the actual relationship that you need (and it is technically very similar to a semigroup homomorphism). There are all sorts of ways to pick badly even if everything is associative and commutative. For example, if Z is the double value 0.0 and # is actually +, then Z is zero-like and # is associative and commutative. But if $ is actually *, which is also associative and commutative, everything goes wrong:
(0.0+2) * (0.0+3) == 2.0 * 3.0 == 6.0
((0.0+2) + 3) == 2.0 + 3 == 5.0
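You can simulate that failure without Spark; a small Python sketch of "fold each partition from Z, then merge the partials" shows how the answer now depends on how the data was partitioned:

from functools import reduce

Z = 0.0
seq_op  = lambda acc, x: acc + x    # '#': fold an element into the accumulator
comb_op = lambda a, b: a * b        # '$': merge two partial accumulators

def aggregate(partitions):
    # Fold each partition starting from Z, then merge the partial results.
    partials = [reduce(seq_op, part, Z) for part in partitions]
    return reduce(comb_op, partials)

print(aggregate([[2, 3]]))      # one partition:  (0.0+2)+3          -> 5.0
print(aggregate([[2], [3]]))    # two partitions: (0.0+2) * (0.0+3)  -> 6.0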
One example of a non-trivial aggregate is building a collection, where # is the "append an element" operator and $ is the "concat two collections" operation.
aggregate is tricky and requires an associative reduce operation, plus a zero-like value and a fold-like operation that is homomorphic to the reduce
The bottom line is that aggregate is not simply a generalization of reduce.
But there is a simplification (less general form) if you're not actually changing the type. If Z is actually z and is an actual zero, we can just stick it in wherever we want and use reduce. Again, we don't need commutativity conceptually; we just stick in one or more z's and reduce, and our # and $ operations can be the same thing, namely the original # we used in the reduce:
a   b   c   d             ()  <- empty
z#a   z#b                 z
z#a   (z#b)#c
z#a   ((z#b)#c)#d
(z#a)#((z#b)#c)#d
If we just delete the z's from here, it works perfectly well, and in fact is equivalent to if (empty) z else reduce. But there's another way it could work too. If the operation # is also commutative, and z is not actually a zero but just is a fixed point of # (meaning z#z == z but z#a is not necessarily just a), then you can run the same thing, and since commutativity lets you switch the order around, you can conceptually gather all the z's together at the beginning and then merge them all together.
And this is a parallel fold, which is really a rather different beast than a sequential fold.
(Note that neither fold nor aggregate is strictly a generalization of reduce, even for unordered collections where operations have to be associative and commutative, as some operations do not have a sensible zero! For instance, reducing strings by shortest length has as its "zero" the longest possible string, which conceptually doesn't exist, and practically is an absurd waste of memory.)
fold requires an associative reduce operation plus either a zero value or a reduce operation that's commutative plus a fixed-point value
Now, when would you ever use a parallel fold that wasn't just a reduceOrElse(zero)? Probably never, actually, though they can exist. For example, if you have a ring, you often have fixed points of the type we need. For instance, 10 % 45 == (10*10) % 45, and * is both associative and commutative in integers mod 45. Thus, if our collection is numbers mod 45, we can fold with a "zero" of 10 and an operation of *, and parallelize however we please while still getting the same result. Pretty weird.
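A quick check of that mod-45 example in plain Python (the point is that 10 is a fixed point of * modulo 45, not a true identity):

# 10 is a fixed point of multiplication mod 45 (10*10 % 45 == 10) but not an
# identity (10*7 % 45 == 25, not 7). Because extra factors of 10 collapse into
# one, any partitioning that multiplies the "zero" 10 in more than once still
# agrees with the sequential fold.
assert (10 * 10) % 45 == 10
assert (10 * 7) % 45 == 25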
However, note that you can just plug the zero and operation of fold into aggregate and get exactly the same result, so aggregate is a proper generalization of fold.
So, bottom line:
Reduce requires only an associative merge operation, but doesn't change the type, and doesn't work on empty collections.
Parallel fold tries to extend reduce but requires either a true zero, or a fixed point plus a commutative merge operation.
Aggregate changes the type by (conceptually) running sequential folds followed by a (parallel) reduce, but there are complex relationships between the reduce operation and the fold operation--basically they have to be doing "the same thing".
An unordered collection (e.g. a set) always requires an associative and commutative operation for any of the above.
With regard to the byKey stuff: it's just the same as this, except it only applies it to the collection of values associated with a (potentially repeated) key.
If Spark actually requires commutativity where the above analysis does not suggest it's needed, one could reasonably consider that a bug (or at least an unnecessary limitation of the implementation, given that operations like map and filter preserve order on ordered RDDs).
the function passed to fold only has to be associative, while the one for reduce has to be commutative additionally.
It is not correct. fold on RDDs requires the function to be commutative as well. It is not the same operation as fold on Iterable, which is pretty well described in the official documentation:
This behaves somewhat differently from fold operations implemented for non-distributed collections in functional languages like Scala. This fold operation may be applied to partitions individually, and then fold those results into the final result, rather than apply the fold to each element sequentially in some defined ordering. For functions that are not commutative, the result may differ from that of a fold applied to a non-distributed collection.
As you can see, the order of merging partial values is not part of the contract, hence the function used for fold has to be commutative.
I read that the more generalised methods can be more efficient
Technically speaking there should be no significant difference. For fold vs reduce you can check my answers to reduce() vs. fold() in Apache Spark and Why is the fold action necessary in Spark?
Regarding the *byKey methods, all are implemented using the same basic construct, combineByKeyWithClassTag, and can be reduced to three simple operations:
createCombiner - create "zero" value for a given partition
mergeValue - merge values into accumulator
mergeCombiners - merge accumulators created for each partition.
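A hedged PySpark sketch tying the three together (this assumes a live SparkContext named sc; reduce(op), fold(zero, op) and aggregate(zero, seqOp, combOp) are the standard RDD methods):

rdd = sc.parallelize([1, 2, 3, 4], 2)

# reduce: a single merge operation, associative (and, for Spark, commutative).
total = rdd.reduce(lambda a, b: a + b)                        # 10

# fold: the same merge operation plus a zero value; the zero is folded in
# once per partition, which is why it must really behave like a zero.
total_too = rdd.fold(0, lambda a, b: a + b)                   # 10

# aggregate: changes the type (int -> (sum, count) pair) using a per-partition
# seqOp ('#' above) and a cross-partition combOp ('$' above) that must agree.
sum_count = rdd.aggregate((0, 0),
                          lambda acc, x: (acc[0] + x, acc[1] + 1),
                          lambda a, b: (a[0] + b[0], a[1] + b[1]))   # (10, 4)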

And operator in Lisp

Why does the and operator return a value? What does the returned value depend on?
When I try the following example -
(write (and a b c d)) ; prints the greatest integer among a b c d
where a, b, c and d are positive integers, then and returns the greatest of them.
However when one of a, b, c and d is 0 or negative, then the smallest integer is returned. Why is this the case?
As stated in the documentation:
The macro and evaluates each form one at a time from left to right. As soon as any form evaluates to nil, and returns nil without evaluating the remaining forms. If all forms but the last evaluate to true values, and returns the results produced by evaluating the last form. If no forms are supplied, (and) returns t.
So the returned value doesn't depend on which value is the "greatest" or "smallest" of the arguments.
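For comparison, Python's and has the same return-the-last-form behaviour (with the caveat that Python treats 0 as false, whereas in Lisp only nil is false):

print(1 and 2 and 3 and 4)   # 4 -- the last value, which only happens to be the largest here
print(4 and 3 and 2 and 1)   # 1 -- still the last value, not the greatest
print(1 and 0 and 3)         # 0 -- the first falsy operand; in Lisp (and 1 0 3) is 3, since 0 is not nil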
and can be regarded as a generalization of the two-place if. That is to say, (and a b) works exactly like (if a b). With and in the place of if we can add more conditions: (and a1 a2 a3 a4 ... aN b). If they all yield true, then b is returned. If we use if to express this, it is more verbose, because we still have to use and: (if (and a1 a2 a3 ... aN) b). and also generalizes in that it can be used with only one argument (if cannot), and even with no arguments at all (yields t in that case).
Mathematically, and forms a monoid. The identity element of that monoid is t, which is why (and) yields t. To describe the behavior of the N-argument and, we just need these rules:
(and) -> yield t
(and x y) -> { if x is true evaluate and yield y
{ otherwise yield nil
Now it turns out that this rule for a two-place and obeys the associative law: namely (and (and x y) z) behaves the same way as (and x (and y z)). The effect of the above rules is that no matter in which of these two ways we group the terms in this compound expression, x, y, and z are evaluated from left to right, and evaluation either stops at the first nil which it encounters and yields nil, or else it evaluates through to the end and yields the value of z.
Thus because we have this nice associative property, the rational thing to do in a nice language like Lisp which isn't tied to infix operators is to recognize that since the associative grouping doesn't matter, let's just have the flat syntax (and x1 x2 x3 ... xN), with any number of arguments including zero. This syntax denotes any one of the possible associations of N-1 binary ands which all yield the same behavior.
In other words, let's not make the poor programmer write a nested (and (and (and ...) ...) ...) to express a four-term and, and just let them write (and ...) with four arguments.
Summary:
the zero-place and yields t, which has to do with t being the identity element for the and operation.
the two-place and yields the second value if the first one is true. This is a useful equivalence to the two-place if. Binary and could be defined as yielding t when both arguments are true, but that would be less useful. In Lisp, any value that is not nil is a Boolean true. If we replace a non-nil value with t, it remains Boolean true, but we have lost potentially useful information.
the behavior of the n-place and is a consequence of the associative property; or rather preserving the equivalence between the flat N-argument form and all the possible binary groupings which are already equivalent to each other thanks to the associative property.
One consequence of all this is that we can have an extended if, like (if cond1 cond2 cond3 cond4 ... condN then-form), where then-form is evaluated and yielded if all the conditions are true. We just have to spell if using the and symbol.

Which hash functions are orthogonal to each other?

I'm interested in multi-level data integrity checking and correcting, where multiple error-correcting codes are being used (they can be 2 of the same type of code).
Is there a list of which codes are orthogonal to what? Or do you need to use the same hashing function but with different parameters or usage?
I expect that the first-level ECC will be a Reed-Solomon code, though I do not actually have control over this first function, hence I cannot use a single code with improved capabilities.
Note that I'm not concerned with encryption security.
Edit: This is not a duplicate of "When are hash functions orthogonal to each other?", because that question essentially asks what the definition of orthogonal hash functions is. I want examples of hash functions that are orthogonal.
I'm not certain it is even possible to enumerate all orthogonal hash functions. However, you only asked for some examples, so I will endeavour to provide some as well as some intuition as to what properties seem to lead to orthogonal hash functions.
From a related question, these two functions are orthogonal to each other:
Domain: Reals --> Codomain: Reals
f(x) = x + 1
g(x) = x + 2
This is a pretty obvious case. It is easier to determine orthogonality if the hash functions are (both) perfect hash functions such as these are. Please note that the term "perfect" is meant in the mathematical sense, not in the sense that these should ever be used as hash functions.
It is a more or less trivial case for perfect hash functions to satisfy orthogonality requirements. Whenever the functions are injective they are perfect hash functions and are thus orthogonal. Similar examples:
Domain: Integers --> Codomain: Integers
f(x) = 2x
g(x) = 3x
In the previous case, the functions are injective but not bijective: each element in the domain maps to a distinct element in the codomain, but there are many elements in the codomain that are not mapped to at all. These are still adequate for both perfect hashing and orthogonality. (Note that if the Domain/Codomain were Reals, this would be a bijection.)
Functions that are not injective are more tricky to analyze. However, it is always the case that if one function is injective and the other is not, they are not orthogonal:
Domain: Reals --> Codomain: Reals
f(x) = e^x // Injective -- every x produces a unique value
g(x) = x^2 // Not injective -- every positive value is produced by two different x's
So one trick is thus to know that one function is injective and the other is not. But what if neither is injective? I do not presently know of an algorithm for the general case that will determine this other than brute force.
Domain: Naturals --> Codomain: Naturals
j(x) = ceil(sqrt(x))
k(x) = ceil(x / 2)
Neither function is injective, in this case because of the rounding introduced by ceil combined with a restricted domain. (In practice most hash functions will not have a domain more permissive than integers.) Testing out values will show that j will have non-unique results when k will not, and vice versa:
j(1) = ceil(sqrt(1)) = ceil(1) = 1
j(2) = ceil(sqrt(2)) = ceil(~1.41) = 2
k(1) = ceil(1 / 2) = ceil(0.5) = 1
k(2) = ceil(2 / 2) = ceil(1) = 1
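This sort of testing is easy to automate. Here is a small Python sketch that probes injectivity on a finite sample (passing on a sample does not, of course, prove injectivity over the whole domain):

from math import ceil, sqrt

def injective_on(f, domain):
    # Brute force: f is injective on `domain` if no two distinct inputs share an output.
    seen = {}
    for x in domain:
        y = f(x)
        if y in seen and seen[y] != x:
            return False
        seen[y] = x
    return True

print(injective_on(lambda x: 2 * x, range(-1000, 1000)))        # True
print(injective_on(lambda x: x * x, range(-1000, 1000)))        # False: x and -x collide
print(injective_on(lambda x: ceil(sqrt(x)), range(1, 1000)))    # False
print(injective_on(lambda x: ceil(x / 2), range(1, 1000)))      # False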
But what about these functions?
Domain: Integers --> Codomain: Reals
m(x) = cos(x^3) % 117
n(x) = ceil(e^x)
In these cases, neither of the functions are injective (due to the modulus and the ceil) but when do they have a collision? More importantly, for what tuples of values of x do they both have a collision? These questions are hard to answer. I would suspect they are not orthogonal, but without a specific counterexample, I'm not sure I could prove that.
These are not the only hash functions you could encounter, of course. So the trick to determining orthogonality is first to see if they are both injective. If so, they are orthogonal. Second, see if exactly one is injective. If so, they are not orthogonal. Third, see if you can see the pieces of the function that are causing them to not be injective, see if you can determine its period or special cases (such as x=0) and try to come up with counter-examples. Fourth, visit math-stack-exchange and hope someone can tell you where they break orthogonality, or prove that they don't.

MATLAB: how to stack up arrays "shape-agnostically"?

Suppose that f is a function of one parameter whose output is an n-dimensional (m1 × m2… × mn) array, and that B is a vector of length k whose elements are all valid arguments for f.
I am looking for a convenient, and more importantly, "shape-agnostic", MATLAB expression (or recipe) for producing the (n+1)-dimensional (m1 × m2 ×…× mn × k) array obtained by "stacking" the k n-dimensional arrays f(b), where the parameter b ranges over B.
To do this in numpy, I would use an expression like this one:
C = concatenate([f(b)[..., None] for b in B], -1)
In case it's of any use, I'll unpack this numpy expression below (see APPENDIX), but the feature of it that I want to emphasize now is that it is entirely agnostic about the shapes/sizes of f(b) and B. For the types of applications I have in mind, the ability to write such "shape-agnostic" code is of utmost importance. (I stress this point because much MATLAB code I come across for doing this sort of manipulation is decidedly not "shape-agnostic", and I don't know how to make it so.)
APPENDIX
In general, if A is a numpy array, then the expression A[..., None] can be thought as "reshaping" A so that it gets one extra, trivial, dimension. Thus, if f(b) is an n-dimensional (m1 × m2… × mn) array, then, f(b)[..., None] is the corresponding (n+1)-dimensional (m1 × m2 ×…× mn × 1) array. (The reason for adding this trivial dimension will become clear below.)
With this clarification out of the way, the meaning of the first argument to concatenate, namely:
[f(b)[..., None] for b in B]
is not too hard to decipher. It is a standard Python "list comprehension", and it evaluates to the sequence of the k (n+1)-dimensional (m1 × m2 ×…× mn × 1) arrays f(b)[..., None], as the parameter b ranges over the vector B.
The second argument to concatenate is the "axis" along which the concatenation is to be performed, expressed as the index of the corresponding dimension of the arrays to be concatenated. In this context, the index -1 plays the same role as the end keyword does in MATLAB. Therefore, the expression
concatenate([f(b)[..., None] for b in B], -1)
says "concatenate the arrays f(b)[..., None] along their last dimension". It is in order to provide this "last dimension" to concatenate over that it becomes necessary to reshape the f(b) arrays (with, e.g., f(b)[..., None]).
One way of doing that is:
% input:
f=@(x) x*ones(2,2)
b=1:3;
%%%%
X=arrayfun(f,b,'UniformOutput',0);
X=cat(ndims(X{1})+1,X{:});
Maybe there are more elegant solutions?
Shape agnosticity is an important difference between the philosophies underlying NumPy and Matlab; it's a lot harder to accomplish in Matlab than it is in NumPy. And in my view, shape agnosticity is a bad thing, too -- the shape of matrices has mathematical meaning. If some function or class were to completely ignore the shape of the inputs, or change them in a way that is not in accordance with mathematical notations, then that function destroys part of the language's functionality and intent.
In programmer terms, it's an actually useful feature designed to prevent shape-related bugs. Granted, it's often a "programmatic inconvenience", but that's no reason to adjust the language. It's really all in the mindset.
Now, having said that, I doubt an elegant solution for your problem exists in Matlab :) My suggestion would be to stuff all of the requirements into the function, so that you don't have to do any post-processing:
f = @(x) bsxfun(@times, permute(x(:), [2:numel(x) 1]), ones(2,2, numel(x)) )
Now obviously this is not quite right, since f(1) doesn't work and f(1:2) does something other than f(1:4), so obviously some tinkering has to be done. But as the ugliness of this oneliner already suggests, a dedicated function might be a better idea. The one suggested by Oli is pretty decent, provided you lock it up in a function of its own:
function y = f(b)
    g = @(x) x*ones(2,2); % or whatever else you want
    y = arrayfun(g, b, 'uni', false);
    y = cat(ndims(y{1})+1, y{:});
end
so that f(b) for any b produces the right output.

Set of unambiguous looking letters & numbers for user input

Is there an existing subset of the alphanumerics that is easier to read? In particular, is there a subset that has fewer characters that are visually ambiguous, and by removing (or equating) certain characters we reduce human error?
I know "visually ambiguous" is somewhat waffly of an expression, but it is fairly evident that D, O and 0 are all similar, and 1 and I are also similar. I would like to maximize the size of the set of alpha-numerics, but minimize the number of characters that are likely to be misinterpreted.
The only precedent I am aware of for such a set is the Canada Postal code system that removes the letters D, F, I, O, Q, and U, and that subset was created to aid the postal system's OCR process.
My initial thought is to use only capital letters and numbers as follows:
A
B = 8
C = G
D = 0 = O = Q
E = F
H
I = J = L = T = 1 = 7
K = X
M
N
P
R
S = 5
U = V = Y
W
Z = 2
3
4
6
9
This problem may be difficult to separate from the given type face. The distinctiveness of the characters in the chosen typeface could significantly affect the potential visual ambiguity of any two characters, but I expect that in most modern typefaces the above characters that are equated will have a similar enough appearance to warrant equating them.
I would be grateful for thoughts on the above – are the above equations suitable, or perhaps are there more characters that should be equated? Would lowercase characters be more suitable?
I needed a replacement for hexadecimal (base 16) for similar reasons (e.g. for encoding a key, etc.), the best I could come up with is the following set of 16 characters, which can be used as a replacement for hexadecimal:
0 1 2 3 4 5 6 7 8 9 A B C D E F Hexadecimal
H M N 3 4 P 6 7 R 9 T W C X Y F Replacement
In the replacement set, we consider the following:
All characters used have major distinguishing features that would only be omitted in a truly awful font.
Vowels A E I O U omitted to avoid accidentally spelling words.
Sets of characters that could potentially be very similar or identical in some fonts are avoided completely (none of the characters in any set are used at all):
0 O D Q
1 I L J
8 B
5 S
2 Z
By avoiding these characters completely, the hope is that the user will enter the correct characters, rather than trying to correct mis-entered characters.
For sets of less similar but potentially confusing characters, we only use one character in each set, hopefully the most distinctive:
Y U V
Here Y is used, since it always has the lower vertical section, and a serif in serif fonts
C G
Here C is used, since it seems less likely that a C would be entered as G, than vice versa
X K
Here X is used, since it is more consistent in most fonts
F E
Here F is used, since it is not a vowel
In the case of these similar sets, entry of any character in the set could be automatically converted to the one that is actually used (the first one listed in each set). Note that E must not be automatically converted to F if hexadecimal input might be used (see below).
Note that there are still similar-sounding letters in the replacement set, this is pretty much unavoidable. When reading aloud, a phonetic alphabet should be used.
Where characters that are also present in standard hexadecimal are used in the replacement set, they are used for the same base-16 value. In theory mixed input of hexadecimal and replacement characters could be supported, provided E is not automatically converted to F.
Since this is just a character replacement, it should be easy to convert to/from hexadecimal.
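For example, a Python sketch of that conversion using str.maketrans (the two alphabets are exactly the ones proposed above):

HEX = '0123456789ABCDEF'
REP = 'HMN34P67R9TWCXYF'

to_replacement = str.maketrans(HEX, REP)
to_hexadecimal = str.maketrans(REP, HEX)

key = 'C0FFEE'
encoded = key.translate(to_replacement)       # 'CHFFYY'
assert encoded.translate(to_hexadecimal) == key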
Upper case seems best for the "canonical" form for output, although lower case also looks reasonable, except for "h" and "n", which should still be relatively clear in most fonts:
h m n 3 4 p 6 7 r 9 t w c x y f
Input can of course be case-insensitive.
There are several similar systems for base 32, see http://en.wikipedia.org/wiki/Base32 However these obviously need to introduce more similar-looking characters, in return for an additional 25% more information per character.
Apparently the following set was also used for Windows product keys in base 24, but again has more similar-looking characters:
B C D F G H J K M P Q R T V W X Y 2 3 4 6 7 8 9
My set of 23 unambiguous characters is:
c,d,e,f,h,j,k,m,n,p,r,t,v,w,x,y,2,3,4,5,6,8,9
I needed a set of unambiguous characters for user input, and I couldn't find anywhere that others have already produced a character set and set of rules that fit my criteria.
My requirements:
No capitals: this supposed to be used in URIs, and typed by people who might not have a lot of typing experience, for whom even the shift key can slow them down and cause uncertainty. I also want someone to be able to say "all lowercase" so as to reduce uncertainty, so I want to avoid capital letters.
Few or no vowels: an easy way to avoid creating foul language or surprising words is to simply omit most vowels. I think keeping "e" and "y" is ok.
Resolve ambiguity consistently: I'm open to using some ambiguous characters, so long as I only use one character from each group (e.g., out of lowercase s, uppercase S, and five, I might only use five); that way, on the backend, I can just replace any of these ambiguous characters with the one correct character from their group. So, the input string "3Sh" would be replaced with "35h" before I look up its match in my database.
Only needed to create tokens: I don't need to encode information like base64 or base32 do, so the exact number of characters in my set doesn't really matter, besides my wanting to to be as large as possible. It only needs to be useful for producing random UUID-type id tokens.
Strongly prefer non-ambiguity: I think it's much more costly for someone to enter a token and have something go wrong than it is for someone to have to type out a longer token. There's a tradeoff, of course, but I want to strongly prefer non-ambiguity over brevity.
The confusable groups of characters I identified:
A/4
b/6/G
8/B
c/C
f/F
9/g/q
i/I/1/l/7 - just too ambiguous to use; note that european "1" can look a lot like many people's "7"
k/K
o/O/0 - just too ambiguous to use
p/P
s/S/5
v/V
w/W
x/X
y/Y
z/Z/2
Unambiguous characters:
I think this leaves only 9 totally unambiguous lowercase/numeric chars, with no vowels:
d,e,h,j,m,n,r,t,3
Adding back in one character from each of those ambiguous groups (and trying to prefer the character that looks most distinct, while avoiding uppercase), there are 23 characters:
c,d,e,f,h,j,k,m,n,p,r,t,v,w,x,y,2,3,4,5,6,8,9
Analysis:
Using the rule of thumb that a UUID with a numerical equivalent range of N possibilities is sufficient to avoid collisions for sqrt(N) instances:
an 8-digit UUID using this character set should be sufficient to avoid collisions for about 300,000 instances
a 16-digit UUID using this character set should be sufficient to avoid collisions for about 80 billion instances.
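The arithmetic behind those two figures, for anyone who wants to rerun it (23 characters per position, birthday-style sqrt bound):

from math import isqrt

print(isqrt(23 ** 8))    # 279841        -> roughly 300,000 instances
print(isqrt(23 ** 16))   # 78310985281   -> roughly 80 billion instances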
Mainly drawing inspiration from this UX thread, mentioned by @rwb:
Several programs use similar things. The list in your post seems to be very similar to those used in these programs, and I think it should be enough for most purposes. You can always add redundancy (error correction) to "forgive" minor mistakes; this will require you to space out your codes (see Hamming distance), though.
There are no references as to the particular method used in deriving the lists, except trial and error with humans (which is great for non-OCR: your users are humans).
It may make sense to use character grouping (say, groups of 5) to increase context ("first character in the second of 5 groups")
Ambiguity can be eliminated by using complete nouns (from a dictionary with few look-alikes; word-edit-distance may be useful here) instead of characters. People may confuse "1" with "i", but few will confuse "one" with "ice".
Another option is to make your code into a (fake) word that can be read out loud. A markov model may help you there.
If you have the option to use only capitals, I created this set based on characters which users commonly mistyped; however, this wholly depends on the font the text is read in.
Characters to use: A C D E F G H J K L M N P Q R T U V W X Y 3 4 6 7 9
Characters to avoid:
B similar to 8
I similar to 1
O similar to 0
S similar to 5
Z similar to 2
What you seek is an unambiguous, efficient human-computer code. What I recommend is to encode the entire data with literal (meaningful) words, nouns in particular.
I have been developing software to do just that, and most efficiently. I call it WCode. Technically it's just Base-1024 encoding, wherein you use words instead of symbols.
Here are the links:
Presentation: https://docs.google.com/presentation/d/1sYiXCWIYAWpKAahrGFZ2p5zJX8uMxPccu-oaGOajrGA/edit
Documentation: https://docs.google.com/folder/d/0B0pxLafSqCjKOWhYSFFGOHd1a2c/edit
Project: https://github.com/San13/WCode (Please wait while I get around uploading...)
This is a general problem in OCR. Thus for end-to-end solutions where the OCR encoding is controlled, specialised fonts have been developed to solve the "visual ambiguity" issue you mention.
See: http://en.wikipedia.org/wiki/OCR-A_font
As additional information: you may want to know about Base32 encoding, wherein the symbol for the digit '1' is not used, as it may be confused with the symbol for the letter 'l'.
Unambiguous looking letters for humans are also unambiguous for optical character recognition (OCR). By removing all pairs of letters that are confusing for OCR, one obtains:
!+2345679:BCDEGHKLQSUZadehiopqstu
See https://www.monperrus.net/martin/store-data-paper
It depends how large you want your set to be. For example, just the set {0, 1} will probably work well. Similarly the set of digits only. But probably you want a set that's roughly half the size of the original set of characters.
I have not done this, but here's a suggestion. Pick a font, pick an initial set of characters, and write some code to do the following. Draw each character to fit into an n-by-n square of black and white pixels, for n = 1 through (say) 10. Cut away any all-white rows and columns from the edge, since we're only interested in the black area. That gives you a list of 10 codes for each character. Measure the distance between any two characters by how many of these codes differ. Estimate what distance is acceptable for your application. Then do a brute-force search for a set of characters which are that far apart.
Basically, use a script to simulate squinting at the characters and see which ones you can still tell apart.
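A rough Python sketch of that squinting idea using Pillow (the font path, canvas size and character list are placeholders; "distance" here is simply the number of coarse pixels that differ after downscaling):

from itertools import combinations
from PIL import Image, ImageDraw, ImageFont, ImageOps

def glyph_bitmap(ch, font, n=10):
    # Render the character, crop away the all-white border, downscale to n x n, threshold.
    img = Image.new("L", (64, 64), 255)
    ImageDraw.Draw(img).text((4, 4), ch, font=font, fill=0)
    img = img.crop(ImageOps.invert(img).getbbox())   # bounding box of the dark pixels
    img = img.resize((n, n))
    return [1 if p < 128 else 0 for p in img.getdata()]

def distance(a, b):
    return sum(x != y for x, y in zip(a, b))

font = ImageFont.truetype("DejaVuSans.ttf", 48)   # placeholder font and size
chars = "0O1IL7S5Z2B8AHKMNWX"
bitmaps = {c: glyph_bitmap(c, font) for c in chars}

# Print the five most confusable pairs; a full version would brute-force search
# for a large subset whose pairwise distances all exceed a chosen threshold.
pairs = sorted(combinations(chars, 2), key=lambda p: distance(bitmaps[p[0]], bitmaps[p[1]]))
for a, b in pairs[:5]:
    print(a, b, distance(bitmaps[a], bitmaps[b]))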
Here's some Python I wrote to encode and decode integers using the system of characters described above.
def base20encode(i):
    """Convert integer into base20 string of unambiguous characters."""
    if not isinstance(i, int):
        raise TypeError('This function must be called on an integer.')
    chars, s = '012345689ACEHKMNPRUW', ''
    while i > 0:
        i, remainder = divmod(i, 20)
        s = chars[remainder] + s
    return s

def base20decode(s):
    """Convert string to unambiguous chars and then return integer from resultant base20."""
    if not isinstance(s, str):
        raise TypeError('This function must be called on a string.')
    # Fold each ambiguous character onto the canonical character it is equated with.
    s = s.translate(str.maketrans('BGDOQFIJLT7KSVYZ', '8C000E11111X5UU2'))
    chars, i, exponent = '012345689ACEHKMNPRUW', 0, 1
    for number in s[::-1]:
        i += chars.index(number) * exponent
        exponent *= 20
    return i
base20decode(base20encode(10))
base58: 123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz (the Base58 alphabet, which omits 0, O, I and l)