Working with Sets as Functions - Scala

From a FP course:
type Set = Int => Boolean // Predicate
/**
 * Indicates whether a set contains a given element.
 */
def contains(s: Set, elem: Int): Boolean = s(elem)
Why would that make sense?
assert(contains(x => true, 100))
Basically what it does is provide the value 100 to the function x => true. I.e., we provide 100, it returns true.
But how is this related to sets?
Whatever we put, it returns true. Where is the sense of it?
I understand that we can provide our own set implementation/function as a parameter that represents whether the provided value is inside the set (or not) - only then does this implementation fill the contains function with some sense/meaning/logic/functionality.
But so far it looks like a nonsense function. It is named contains, but the name does not reflect the logic. We could call it apply(), because what it does is apply a function (the 1st argument) to a value (the 2nd argument). The name contains alone only hints at what the author might have wanted to say. Isn't it too abstract, maybe?

In the code snippet you show above, any set S is represented by what is called its characteristic function, i.e., a function that, given some integer i, checks whether i is contained in the set S or not. Thus you can think of such a characteristic function f as if it were the set
{ i | i is an integer and f(i) is true }
If you think of any function with type Int => Boolean as a set (which is what the type synonym Set = Int => Boolean suggests), then you could describe contains like this:
Given a set f and an integer i, contains(f, i) checks whether i is an element of f or not.
Some example sets might make the idea more obvious:
Set                              Characteristic function
empty set                        x => false
universal set                    x => true
set of odd numbers               x => x % 2 == 1
set of even numbers              x => x % 2 == 0
set of numbers smaller than 10   x => x < 10
Example: The set {1, 2, 3} can be represented by
val S: Set = (x => 1 <= x && x <= 3)
If you want to know whether some number n is in this set you could do
contains(S, n)
but of course (as you already observed yourself) you would get the same result by directly doing
S(n)
While the latter is shorter, the former is perhaps easier to read (since the intention is more obvious).

Sets (both mathematically and in the context of computer representation) can be represented in various different ways. Using characteristic functions is one possibility. The idea is that a subset S of a given universal set U is completely determined by a function f : U -> {true, false} (called the characteristic function of the subset), simply because you can treat f(u) as answering the question "is u an element of S?".
Any particular choice of representing sets has advantages and disadvantages when compared to other methods. In particular, some representations are better suited to be modeled in a functional language than in imperative languages. If we compare managing sets as characteristic functions vs. as (either sorted or unsorted) lists (or arrays), then, for instance, creating unions, intersections, and set difference, is very efficient with characteristic functions but not so efficient with lists. Checking for the existence of an element is as easy as computing f(-) with characteristic functions, as opposed to searching a list. However, printing out the elements in the set is immediate with a list, but may require lots of computations with a characteristic function.
Having said that, a fundamental difference is that with characteristic functions one can model infinite sets, while this is impossible with arrays. Of course, no infinite set is ever materialized in memory, but a set like (x: BigInt) => (x % 2) == 0 truly represents the set of all even integers, and one can actually compute with it (as long as you don't try to print all of its elements).
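To make that comparison concrete, here is a minimal Scala sketch (using the representation from the question) of how cheap the set operations are with characteristic functions; the helpers union, intersect and diff are just illustrative names, not from any library:
type Set = Int => Boolean

def contains(s: Set, elem: Int): Boolean = s(elem)
def union(s: Set, t: Set): Set     = x => s(x) || t(x)   // combine two predicates
def intersect(s: Set, t: Set): Set = x => s(x) && t(x)
def diff(s: Set, t: Set): Set      = x => s(x) && !t(x)

val evens: Set = x => x % 2 == 0              // conceptually infinite, still usable
val smallEvens = intersect(evens, x => x < 10)
// contains(smallEvens, 4)  == true
// contains(smallEvens, 12) == false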
So, every representation has pros and cons (duh).

Related

Are side effects everything that cannot be found in a pure function?

Is it safe to say that the following dichotomy holds:
Each given function is
either pure
or has side effects
If so, side effects (of a function) are anything that can't be found in a pure function.
This very much depends on the definitions that you choose. It is definitely fair to say that a function is pure or impure. A pure function always returns the same result and does not modify the environment. An impure function can return different results when it is executed repeatedly (which can be caused by doing something to the environment).
Are all impurities side-effects? I would not say so - a function can depend on something in the environment in which it executes. This could be reading some configuration, the GPS location, or reading data from the internet. These are not really "side effects", because the function does not do anything to the world.
I think there are two different kinds of impurities:
Output impurity is when a function does something to the world. In Haskell, this is modelled using monads - an impure function a -> b is actually a function a -> M b where M captures the other things that it does to the world.
Input impurity is when a function requires something from the environment. An impure function a -> b can be modelled as a function C a -> b where the type C captures other things from the environment that the function may access.
Monads and output impurities are certainly better known, but I think input impurities are equally important. I wrote my PhD thesis about input impurities, which I call coeffects, so this might be a biased answer.
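A small Scala sketch of the two kinds described above (the types Logged and Env are made up for illustration, not taken from any library): an output-impure a -> b becomes a -> M b by returning the result together with what was done to the world, and an input-impure a -> b becomes C a -> b by taking the required context as an extra argument.
// Output impurity: a -> M b, where M records what the function did to the world.
final case class Logged[B](value: B, log: List[String])
def greet(name: String): Logged[String] =
  Logged(s"Hello, $name", List(s"greeted $name"))

// Input impurity: C a -> b, where the function also reads from a context.
final case class Env(config: Map[String, String])
def dbUrl(env: Env): String =
  env.config.getOrElse("db.url", "jdbc:default")
Both greet and dbUrl remain pure as Scala functions; the impurity has simply been made explicit in their types.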
For a function to be pure it must:
not be affected by side-effects (i.e. always return same value for same arguments)
not cause side-effects
But, you see, this defines functional purity by the presence or absence of side effects. You are trying to apply backwards logic to deduce the definition of side effects from pure functions, which logically should work, but in practice the definition of a side effect has nothing to do with functional purity.
I don't see problems with the definition of a pure function: a pure function is a function. I.e. it has a domain, a codomain and maps the elements of the former to the elements of the latter. It's defined on all inputs. It doesn't do anything to the environment, because "the environment" at this point doesn't exist: there are no machines that can execute (for some definition of "execute") a given function. There is just a total mapping from something to something.
Then some capitalist decides to invade the world of well-defined functions and enslave such pure creatures, but their fair essence can't survive in our cruel reality, functions become dirty and start to make the CPU warmer.
So it's the environment that is responsible for making the CPU warmer, and it makes perfect sense to talk about purity before the function was abused and executed by its owner. In the same way, referential transparency is a property of a language: it doesn't hold in the environment in general, because there can be a bug in the compiler, or a meteorite can fall upon your head and your program will stop producing the same result.
But there are other creatures: the dark inhabitants of the underworld. They look like functions, but they are aware of the environment and can interact with it: read variables, send messages and launch missiles. We call these fallen relatives of functions "impure" or "effectful" and avoid them as much as possible, because their nature is so dark that it's impossible to reason about them.
So there is clearly a big difference between those functions which can interact with the outside and those which can't. However, the definition of "outside" can vary too. The State monad is modelled using only pure tools, but we think about f : Int -> State Int Int as an effectful computation. Moreover, non-termination and exceptions (error "...") are effects, but Haskellers usually don't consider them so.
Summarizing, a pure function is a well-defined mathematical concept, but we usually consider functions in programming languages and what is pure there depends on your point of view, so it doesn't make much sense to talk about dichotomies when involved concepts are not well-defined.
A way to define purity of a function f is ∀x∀y x = y ⇒ f x = f y, i.e. given the same argument the function returns the same result, or it preserves equality.
This isn't what people usually mean when they talk about "pure functions"; they usually mean "pure" as "does not have side effects". I haven't figured out how to qualify a "side effect" (comments welcome!) so I don't have anything to say about it.
Nonetheless, I'll explore this concept of purity because it might offer some related insight. I'm no expert here; this is mostly me just rambling. I do however hope it sparks some insightful (and corrective!) comments.
To understand purity we have to know what equality we are talking about. What does x = y mean, and what does f x = f y mean?
One choice is the Haskell semantic equality. That is, equality of the semantics Haskell assigns to its terms. As far as I know there are no official denotational semantics for Haskell, but Wikibooks Haskell Denotational Semantics offers a reasonable standard that I think the community more or less agrees to ad-hoc. When Haskell says its functions are pure this is the equality it refers to.
Another choice is a user-defined equality (i.e. (==)) by deriving the Eq class. This is relevant when using denotational design — that is, we are assigning our own semantics to terms. With this choice we can accidentally write functions which are impure; Haskell is not concerned with our semantics.
I will refer to the Haskell semantic equality as = and the user-defined equality as ==. Also I assume that == is an equality relation — this does not hold for some instances of == such as for Float.
When I use x == y as a proposition, what I really mean is (x == y) = True ∨ (x == y) = ⊥, because x == y :: Bool and ⊥ :: Bool. In other words, when I say x == y is true, I mean that if it computes to something other than ⊥ then it computes to True.
If x and y are equal according to Haskell's semantics then they are equal according to any other semantic we may choose.
Proof: if x = y then x == y ≡ x == x and x == x is true because == is pure (according to =) and reflexive.
Similarly we can prove ∀f∀x∀y x = y ⇒ f x == f y. If x = y then f x = f y (because f is pure), therefore f x == f y ≡ f x == f x and f x == f x is true because == is pure and reflexive.
Here is a silly example of how we can break purity for a user-defined equality.
data Pair a = Pair a a

instance (Eq a) => Eq (Pair a) where
  Pair x _ == Pair y _ = x == y

swap :: Pair a -> Pair a
swap (Pair x y) = Pair y x
Now we have:
Pair 0 1 == Pair 0 2
But:
swap (Pair 0 1) /= swap (Pair 0 2)
Note: ¬(Pair 0 1 = Pair 0 2) so we were not guaranteed that our definition of (==) would be okay.
A more compelling example is to consider Data.Set. If x, y, z :: Set A then you would hope this holds, for example:
x == y ⇒ (Set.union z) x == (Set.union z) y
Especially when Set.fromList [1,2,3] and Set.fromList [3,2,1] denote the same set but probably have different (hidden) representations (not equivalent by Haskell's semantics). That is to say we want to be sure that ∀z Set.union z is pure according to (==) for Set.
Here is a type I have played with:
import Data.List (group)

newtype Spry a = Spry [a]

instance (Eq a) => Eq (Spry a) where
  Spry xs == Spry ys = fmap head (group xs) == fmap head (group ys)
A Spry is a list which has non-equal adjacent elements. Examples:
Spry [] == Spry []
Spry [1,1] == Spry [1]
Spry [1,2,2,2,1,1,2] == Spry [1,2,1,2]
Given this, what is a pure implementation (according to == for Spry) for flatten :: Spry (Spry a) -> Spry a such that if x is an element of a sub-spry it is also an element of the flattened spry (i.e. something like ∀x∀xs∀i x ∈ xs[i] ⇒ x ∈ flatten xs)? Exercise for the reader.
It is also worth noting that the functions we've been talking about are across the same domain, so they have type A → A. That is except for when we proved ∀f∀x∀y x = y ⇒ f x == f y which crosses from Haskell's semantic domain to our own. This might be a homomorphism of some sorts… maybe a category theorist could weigh in here (and please do!).
Side effects are part of the definition of the language. In the expression
f e
the side effects of e are all the parts of e's behavior that are 'moved out' and become part of the behavior of the application expression, rather than being passed into f as part of the value of e.
For a concrete example, consider this program:
f x = x; x
f (print 3)
where conceptually the syntax x; x means 'run x, then run it again and return the result'.
In a language where print writes to stdout as a side effect, this writes
3
because the output is part of the semantics of the application expression.
In a language where the output of print is not a side effect, this writes
3
3
because the output is part of the semantics of the x variable inside the definition of f.
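A rough Scala analogue of the two readings (purely illustrative): with a by-value parameter, the effect of println(3) belongs to the application expression and runs once before the call; with a by-name parameter, the expression is re-evaluated, effect and all, each time x is used inside f.
object SideEffectDemo {
  def fByValue(x: Unit): Unit   = { x; x }  // x is already just (); the effect happened at the call site
  def fByName(x: => Unit): Unit = { x; x }  // x is re-evaluated on every use

  def main(args: Array[String]): Unit = {
    fByValue(println(3))   // prints "3" once
    fByName(println(3))    // prints "3" twice
  }
}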

And operator Lisp

Why does the and operator return a value? What does the returned value depend on?
When I try the following example -
(write (and a b c d)) ; prints the greatest integer among a b c d
where a, b, c and d are positive integers, then and returns the greatest of them.
However when one of a, b, c and d is 0 or negative, then the smallest integer is returned. Why is this the case?
As stated in the documentation:
The macro and evaluates each form one at a time from left to right. As soon as any form evaluates to nil, and returns nil without evaluating the remaining forms. If all forms but the last evaluate to true values, and returns the results produced by evaluating the last form. If no forms are supplied, (and) returns t.
So the returned value doesn't depend on which value is the "greatest" or "smallest" of the arguments.
and can be regarded as a generalization of the two-place if. That is to say, (and a b) works exactly like (if a b). With and in the place of if we can add more conditions: (and a1 a2 a3 a4 ... aN b). If they all yield true, then b is returned. If we use if to express this, it is more verbose, because we still have to use and: (if (and a1 a2 a3 ... aN) b). and also generalizes in that it can be used with only one argument (if cannot), and even with no arguments at all (yields t in that case).
Mathematically, and forms a monoid. The identity element of that monoid is t, which is why (and) yields t. To describe the behavior of the N-argument and, we just need these rules:
(and)     -> yield t
(and x y) -> if x evaluates to true, evaluate and yield y; otherwise yield nil
Now it turns out that this rule for a two-place and obeys the associative law: namely (and (and x y) z) behaves the same way as (and x (and y z)). The effect of the above rules is that no matter in which of these two ways we group the terms in this compound expression, x, y, and z are evaluated from left to right, and evaluation either stops at the first nil which it encounters and yields nil, or else it evaluates through to the end and yields the value of z.
Thus because we have this nice associative property, the rational thing to do in a nice language like Lisp which isn't tied to infix operators is to recognize that since the associative grouping doesn't matter, let's just have the flat syntax (and x1 x2 x3 ... xN), with any number of arguments including zero. This syntax denotes any one of the possible associations of N-1 binary ands which all yield the same behavior.
In other words, let's not make the poor programmer write a nested (and (and (and ...) ...) ...) to express a four-term and, and just let them write (and ...) with four arguments.
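For what it's worth, the same "identity element plus associativity" structure can be expressed as a fold; here is a Scala sketch (illustrative only, and note that Scala's && works on plain Booleans, so it does not reproduce Lisp's "return the value of the last form" behavior):
def andAll(xs: Boolean*): Boolean = xs.foldLeft(true)(_ && _)

// andAll()                  == true   (the zero-argument case: the identity)
// andAll(true, true, false) == false
// andAll(true, true, true)  == true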
Summary:
the zero-place and yields t, which has to do with t being the identity element for the and operation.
the two-place and yields the second value if the first one is true. This is a useful equivalence to the two-place if. Binary and could be defined as yielding t when both arguments are true, but that would be less useful. In Lisp, any value that is not nil is a Boolean true. If we replace a non-nil value with t, it remains Boolean true, but we have lost potentially useful information.
the behavior of the n-place and is a consequence of the associative property; or rather preserving the equivalence between the flat N-argument form and all the possible binary groupings which are already equivalent to each other thanks to the associative property.
One consequence of all this is that we can have an extended if, like (if cond1 cond2 cond3 cond4 ... condN then-form), where then-form is evaluated and yielded if all the conditions are true. We just have to spell if using the and symbol.

Scala constraint based types and literals

I was wondering whether it would be possible in Scala to define a type like NegativeNumber. This type would represent a negative number and it would be checked by the compiler similarly to Ints, Strings etc.
val x: NegativeNumber = -34
val y: NegativeNumber = 34 // should not compile
Likewise:
val s: ContainsHello = "hello world"
val s: ContainsHello = "foo bar" // this should not compile either
I could use these types just like other types, eg:
def myFunc(x: ContainsHello): Unit = println(s"$x contains hello")
These constrained types could be backed by casual types (Int, String).
Is it possible to implement these types (maybe with macros)?
How about custom literals?
val neg = -34n //neg is of type NegativeNumber because of the suffix
val pos = 34n // compile error
Unfortunately, no, this isn't something you can easily check at compile time - at least not if you aren't restricting the operations on your type. If your goal is simply to check that a number literal is negative, you could easily write a macro that checks this property. However, I do not see any benefit in proving that a negative literal is indeed negative.
The problem isn't a limitation of Scala - which has a very strong type system - but the fact that (in a reasonably complex program) you can't statically know every possible state. You can however try to overapproximate the set of all possible states.
Let us consider the example of introducing a type NegativeNumber that only ever represents a negative number. For simplicity, we define only one operation: plus.
Say you only allowed adding NegativeNumbers to each other; then the type system could be used to guarantee that each NegativeNumber is indeed a negative number (the sum of two negative numbers is always negative). But this seems really restrictive, so a useful example would certainly allow us to add at least a NegativeNumber and a general Int.
What if you had an expression val z: NegativeNumber = plus(x, y) where you don't know the values of x and y statically (maybe they are returned by a function)? How do you know (statically) that z is indeed a negative number?
An approach to solving the problem would be to use abstract interpretation, which is run on a representation of your program (source code, abstract syntax tree, ...).
For example, you could define a Lattice on the numbers with the following elements:
Top: all numbers
+: all positive numbers
0: the number 0
-: all negative numbers
Bottom: not a number - only introduced so that each pair of elements has a greatest lower bound
with the ordering Top > (+, 0, -) > Bottom.
Then you'd need to define semantics for your operations. Taking the commutative method plus from our example:
plus(Bottom, something) is always Bottom, as you cannot calculate something using invalid numbers
plus(Top, x), x != Bottom is always Top, because adding an arbitrary number to any number is always an arbitrary number
plus(+, +) is +, because adding two positive numbers will always yield a positive number
plus(-, -) is -, because adding two negative numbers will always yield a negative number
plus(0, x), x != Bottom is x, because 0 is the identity of the addition.
The problem is that plus(-, +) will be Top, because you don't know whether the result is positive, zero, or negative. So to be statically safe, you'd have to take the conservative approach and disallow such an operation.
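A minimal Scala sketch of that sign lattice and the plus rules above (the names are illustrative, not from any analysis library):
sealed trait Sign
case object Top    extends Sign  // all numbers
case object Pos    extends Sign  // all positive numbers
case object Zero   extends Sign  // the number 0
case object Neg    extends Sign  // all negative numbers
case object Bottom extends Sign  // not a number

def plus(x: Sign, y: Sign): Sign = (x, y) match {
  case (Bottom, _) | (_, Bottom) => Bottom // invalid inputs stay invalid
  case (Top, _) | (_, Top)       => Top    // adding "any number" yields "any number"
  case (Zero, s)                 => s      // 0 is the identity of addition
  case (s, Zero)                 => s
  case (Pos, Pos)                => Pos    // positive + positive is positive
  case (Neg, Neg)                => Neg    // negative + negative is negative
  case _                         => Top    // mixed signs: over-approximate
}
plus(Neg, Pos) returns Top, which is exactly the over-approximation that forces the conservative rejection described above.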
There are more sophisticated numerical domains but ultimately, they all suffer from the same problem: They represent an overapproximation to the actual program state.
I'd say the problem is similar to integer overflow/underflow: Generally, you don't know statically whether an operation exhibits an overflow - you only know this at runtime.
It could be possible if SIP-23 were implemented, using implicit parameters as a form of refinement types. However, it would be of questionable value, as the Scala compiler and type system are not really well equipped for proving interesting things about, for example, integers. For that it would be much nicer to use a language with dependent types (Idris etc.) or refinement types checked by an SMT solver (LiquidHaskell etc.).

which hash functions are orthogonal to each other?

I'm interested in multi-level data integrity checking and correcting, where multiple error-correcting codes are being used (they can be 2 of the same type of code). I'm under the impression that a system using 2 codes would achieve maximum effectiveness if the 2 hash codes being used were orthogonal to each other.
Is there a list of which codes are orthogonal to what? Or do you need to use the same hashing function but with different parameters or usage?
I expect that the first-level ECC will be a Reed-Solomon code, though I do not actually have control over this first function, hence I cannot use a single code with improved capabilities.
Note that I'm not concerned with encryption security.
Edit: This is not a duplicate of
When are hash functions orthogonal to each other?, since that question essentially asks what the definition of orthogonal hash functions is. I want examples of hash functions that are orthogonal.
I'm not certain it is even possible to enumerate all orthogonal hash functions. However, you only asked for some examples, so I will endeavour to provide some as well as some intuition as to what properties seem to lead to orthogonal hash functions.
From a related question, these two functions are orthogonal to each other:
Domain: Reals --> Codomain: Reals
f(x) = x + 1
g(x) = x + 2
This is a pretty obvious case. It is easier to determine orthogonality if the hash functions are (both) perfect hash functions such as these are. Please note that the term "perfect" is meant in the mathematical sense, not in the sense that these should ever be used as hash functions.
It is a more or less trivial case for perfect hash functions to satisfy orthogonality requirements. Whenever the functions are injective they are perfect hash functions and are thus orthogonal. Similar examples:
Domain: Integers --> Codomain: Integers
f(x) = 2x
g(x) = 3x
In this example, each function is injective but not bijective: distinct elements of the domain map to distinct elements of the codomain, but there are many elements of the codomain that are not mapped to at all. These are still adequate for both perfect hashing and orthogonality. (Note that if the domain/codomain were the Reals, these would be bijections.)
Functions that are not injective are more tricky to analyze. However, it is always the case that if one function is injective and the other is not, they are not orthogonal:
Domain: Reals --> Codomain: Reals
f(x) = e^x // Injective -- every x produces a unique value
g(x) = x^2 // Not injective -- every positive output can be produced by two different x's
So one trick is thus to know that one function is injective and the other is not. But what if neither is injective? I do not presently know of an algorithm for the general case that will determine this other than brute force.
Domain: Naturals --> Codomain: Naturals
j(x) = ceil(sqrt(x))
k(x) = ceil(x / 2)
Neither function is injective, in this case because of the obviously non-injective ceil combined with a restricted domain. (In practice most hash functions will not have a domain more permissive than integers.) Testing out values will show that j will have non-unique results when k will not and vice versa:
j(1) = ceil(sqrt(1)) = ceil(1) = 1
j(2) = ceil(sqrt(2)) = ceil(~1.41) = 2
k(1) = ceil(1 / 2) = ceil(0.5) = 1
k(2) = ceil(2 / 2) = ceil(1) = 1
But what about these functions?
Domain: Integers --> Codomain: Reals
m(x) = cos(x^3) % 117
n(x) = ceil(e^x)
In these cases, neither of the functions is injective (due to the modulus and the ceil), but when do they have a collision? More importantly, for which pairs of values of x do they both have a collision? These questions are hard to answer. I would suspect they are not orthogonal, but without a specific counterexample, I'm not sure I could prove that.
These are not the only hash functions you could encounter, of course. So the trick to determining orthogonality is first to see if they are both injective. If so, they are orthogonal. Second, see if exactly one is injective. If so, they are not orthogonal. Third, see if you can see the pieces of the function that are causing them to not be injective, see if you can determine its period or special cases (such as x=0) and try to come up with counter-examples. Fourth, visit math-stack-exchange and hope someone can tell you where they break orthogonality, or prove that they don't.

Simplifying a 9 variable boolean expression

I am trying to create a tic-tac-toe program as a mental exercise and I have the board states stored as booleans like so:
http://i.imgur.com/xBiuoAO.png
I would like to simplify this boolean expression...
(a&b&c) | (d&e&f) | (g&h&i) | (a&d&g) | (b&e&h) | (c&f&i) | (a&e&i) | (g&e&c)
My first thoughts were to use a Karnaugh Map but there were no solvers online that supported 9 variables.
and here's the question:
First of all, how would I know if a boolean condition is already as simple as possible?
and second: What is the above boolean condition simplified?
2. Simplified condition:
The original expression
a&b&c|d&e&f|g&h&i|a&d&g|b&e&h|c&f&i|a&e&i|g&e&c
can be simplified to the following, knowing that & binds more tightly than |
e&(d&f|b&h|a&i|g&c)|a&(b&c|d&g)|i&(g&h|c&f)
which is 4 characters shorter and performs at most 18 & and | evaluations in the worst case (the original one needed 23).
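As a quick sanity check (not part of the original argument), you can brute-force all 2^9 assignments in a few lines of Scala and confirm that the factored formula agrees with the original one:
val allAgree = (0 until 512).forall { n =>
  def bit(k: Int) = ((n >> k) & 1) == 1
  val (a, b, c) = (bit(0), bit(1), bit(2))
  val (d, e, f) = (bit(3), bit(4), bit(5))
  val (g, h, i) = (bit(6), bit(7), bit(8))
  val original =
    a&&b&&c || d&&e&&f || g&&h&&i || a&&d&&g || b&&e&&h || c&&f&&i || a&&e&&i || g&&e&&c
  val factored =
    e&&(d&&f || b&&h || a&&i || g&&c) || a&&(b&&c || d&&g) || i&&(g&&h || c&&f)
  original == factored
}
// allAgree evaluates to true: both formulas accept exactly the same board states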
There is no shorter boolean formula (see point below). If you switch to matrices, maybe you can find another solution.
1. Making sure we got the smallest formula
Normally, it is very hard to find the smallest formula. See this recent paper if you are more interested. But in our case, there is a simple proof.
We will reason about a formula being the smallest with respect to the formula size, where for a variable a, size(a)=1, for a boolean operation size(A&B) = size(A|B) = size(A) + 1 + size(B), and for negation size(!A) = size(A) (thus we can suppose that we have Negation Normal Form at no cost).
With respect to that size, our formula has size 37.
The proof that you cannot do better consists in first remarking that there are 8 rows to check, and that there is always a pair of letters distinguishing 2 different rows. Since we can regroup these 8 checks into no fewer than 3 conjuncts with the remaining variable, the number of variables in the final formula must be at least 8*2+3 = 19, from which we can deduce the minimal tree size.
Detailed proof
Let us suppose that a given formula F is the smallest and in NNF format.
F cannot contain negated variables like !a. To see this, remark that F must be monotonic: if it returns "true" (there is a winning row), then changing one of the variables from false to true should not change that result. According to Wikipedia, F can then be written without negation. Even better, we can prove that we can remove the negations: following this answer, we could convert to and from DNF format, removing negated variables along the way or replacing them by true.
F cannot contain a sub-tree like a disjunction of two variables a|b.
For this formula to be useful and not exchangeable with either a or b, it would mean that there are contradicting assignments such that for example
F[a|b] = true and F[a] = false, hence a = false and b = true because of monotonicity. Also, in this case, turning b to false makes the whole formula false, because false = F[a] = F[a|false] >= F[a|b](b = false).
Therefore there is a row passing through b which is the cause of the truth, and it cannot go through a, hence for example e = true and h = true.
And the check of this row goes through the expression a|b for testing b. However, it means that with a, e, h being true and all others set to false, F is still true, which contradicts the purpose of the formula.
Every subtree looking like a&b checks a unique row. So the last letter should appear just above the corresponding disjunction (a&b|...)&{c somewhere for sure here}, or this leaf is useless and either a or b can be removed safely. Indeed, suppose that c does not appear above, and the game is where a&b&c is true and all other variables are false. Then the expression where c is supposed to be above returns false, so a&b will be always useless. So there is a shorter expression by removing a&b.
There are 8 independent branches, so there are at least 8 subtrees of type a&b. We cannot regroup them using a disjunction of 2 conjunctions since a, f and h never share the same rows, so there must be 3 outer variables. 8*2+3 makes 19 variables appear in the final formula.
A tree with 19 variables cannot have fewer than 18 operators, so in total the size has to be at least 19+18 = 37.
You can have variants of the above formula.
QED.
One option is doing the Karnaugh map manually. Since you have 9 variables, that makes for a 2^4 by 2^5 grid, which is rather large, and by the looks of the equation, probably not very interesting either.
By inspection, it doesn't look like a Karnaugh map will give you any useful information (Karnaugh maps basically reduce expressions such as ((!a)&b) | (a&b) into b), so in that sense of simplification, your expression is already as simple as it can get. But if you want to reduce the number of computations, you can factor out a few variables using the distributivity of the AND operators over ORs.
The best way to think of this is how a person would think of it. No person would say to themselves, "a and b and c, or if d and e and f," etc. They would say "Any three in a row, horizontally, vertically, or diagonally."
Also, instead of doing eight checks (3 rows, 3 columns, and 2 diagonals), you can do just four checks (three rows and one diagonal), then rotate the board 90 degrees, then do the same checks again.
Here's what you end up with. These functions all assume that the board is a three-by-three matrix of booleans, where true represents a winning symbol, and false represents a not-winning symbol.
def win?(board)
  winning_row_or_diagonal?(board) ||
    winning_row_or_diagonal?(rotate_90(board))
end

def winning_row_or_diagonal?(board)
  winning_row?(board) || winning_diagonal?(board)
end

def winning_row?(board)
  3.times.any? do |row_number|
    three_in_a_row?(board, row_number, 0, 0, 1)
  end
end

def winning_diagonal?(board)
  three_in_a_row?(board, 0, 0, 1, 1)
end

# Walks three cells starting at (x, y), stepping by (delta_x, delta_y).
def three_in_a_row?(board, x, y, delta_x, delta_y)
  3.times.all? do |i|
    board[x + i * delta_x][y + i * delta_y]
  end
end

def rotate_90(board)
  board.transpose.map(&:reverse)
end
The matrix rotate is from here: https://stackoverflow.com/a/3571501/238886
Although this code is quite a bit more verbose, each function is clear in its intent. Rather than a long boolean expression, the code now expresses the rules of tic-tac-toe.
You know it's as simple as possible when there are no common sub-terms to extract (e.g. if you had "a&b" in two different trios).
You know your tic tac toe solution must already be as simple as possible because any pair of boxes can belong to at most only one winning line (only one straight line can pass through two given points), so (a & b) can't be reused in any other win you're checking for.
(Also, "simple" can mean a lot of things; specifying what you mean may help you answer your own question. )