Can't demonstrate why !A&B OR !A&C OR !C&B = !C&B OR !A&C - boolean

I'm beginning a course on boolean logic and I got this boolean expression I need to prove. After a few hours of research I tried Wolfram Alpha, but unlike other equations it doesn't explain step by step how it simplified the longer expression. It's also pretty easy to see with truth tables that the (!A&B) term isn't necessary, but I can't demonstrate it. How should I do it?
The expression:
!A&B OR !A&C OR !C&B = !C&B OR !A&C
And a link to the Wolfram Alpha input: Wolfram
Thanks in advance, have a nice day.

Here is a derivation:
!A&B | !A&C | !C&B
= !A&B&(C | !C) | !A&C&(B | !B) | !C&B&(A | !A) // x & T = x
= !A&B&C | !A&B&!C | !A&B&C | !A&!B&C | A&B&!C | !A&B&!C // distributive
= !A&B&C | !A&B&!C | !A&!B&C | A&B&!C // x | x = x
= !A&B&!C | A&B&!C | !A&B&C | !A&!B&C // commutative
= B&!C&(!A | A) | !A&C&(B | !B) // distributive
= B&!C | !A&C // x | !x = T, x & T = x
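The derivation can be sanity-checked by brute force over all eight assignments; a quick Python sketch (function names are just illustrative):

```python
from itertools import product

def original(A, B, C):
    # !A&B | !A&C | !C&B
    return (not A and B) or (not A and C) or (not C and B)

def simplified(A, B, C):
    # B&!C | !A&C
    return (B and not C) or (not A and C)

# Compare the two expressions on every assignment of (A, B, C).
assert all(original(*v) == simplified(*v) for v in product([False, True], repeat=3))
```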

There are two ways of proving these kinds of equalities. One is formal: find a chain of equalities that arrives at the target formula. The other is intuitive: understand why the equality holds. Let me try the latter.
In your case, after rewriting the left hand side of your equation we have to show that:
(!C&B OR !A&C) OR !A&B = !C&B OR !A&C
which has the form p OR q = p, right?
So the question becomes: when does p OR q = p hold? In other words, when does q add nothing to p? Well, if p is a consequence of q, then q adds nothing to p. That is: if q -> p (i.e., p is a consequence of q), then p OR q = p (please prove this formally!).
So, we have to show that !C&B OR !A&C is a consequence of !A&B. But this is easy, because !A&B = true implies A = false and B = true. So, if C = false we have !C&B = true, and if C = true then !A&C = true. Hence in both cases !C&B OR !A&C = true.
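This implication argument can also be checked mechanically; a small Python sketch (p and q here name the two sides of the absorption step):

```python
from itertools import product

def q(A, B, C):
    # the absorbed term: !A&B
    return not A and B

def p(A, B, C):
    # the remaining expression: !C&B OR !A&C
    return (not C and B) or (not A and C)

for v in product([False, True], repeat=3):
    # q -> p holds on every assignment ...
    assert (not q(*v)) or p(*v)
    # ... hence p OR q collapses to p.
    assert (p(*v) or q(*v)) == p(*v)
```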


How do I create a simplified logic circuit for [(A’B’)’ + (A’+ B’)’]’, and what is the simplified Boolean expression?

I have to draw the simplified logic circuit of the given expression [(A’B’)’ + (A’+ B’)’]’ and also get the simplified boolean expression.
From DeMorgan's theorem:
((A'B')' + (A'+ B')')' = (A+B + AB)'
(A+B + AB)' = (A+B)'.(AB)'
(A+B)'.(AB)' = (A'.B').(A'+B')
Assume X = A', Y = B'.
We can conclude that (XY)(X+Y) = XY, since both (XY) and (X+Y) must be 1 to produce a 1 at the output; as seen from its truth table, (XY)(X+Y) is identical to XY.
So, as the final result:
((A'B')' + (A'+ B')')' = (A+B + AB)' = (A+B)'.(AB)' = (A'.B').(A'+B') = A'B'
(circuit diagrams omitted: before simplifying, after simplifying, and a comparison of outputs to make sure)
(A'B')' = A + B
Using DeMorgan's theorem (AB)' = A' + B'
(A'+B')' = AB
Again, using DeMorgan's theorem (A+B)' = A'B'
Therefore, now we have the expression:
(A+B + AB)'
Taking A+B as X and AB as Y
(X+Y)' = X'Y'
= (A+B)'·(AB)'
Now, creating a logic circuit is fairly simple for this expression: inputs A and B are fed to a NOR gate and a NAND gate simultaneously, whose outputs act as inputs to an AND gate.
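Both answers can be double-checked by evaluating the original expression and the simplified A'B' (i.e. NOR) on all four inputs; a quick Python sketch:

```python
from itertools import product

def original(A, B):
    # ((A'B')' + (A' + B')')'
    return not ((not (not A and not B)) or (not (not A or not B)))

def simplified(A, B):
    # A'B', i.e. NOR(A, B)
    return not A and not B

# The two expressions agree on every assignment of (A, B).
assert all(original(*v) == simplified(*v) for v in product([False, True], repeat=2))
```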

Apple turicreate always returns the same label

I'm test-driving turicreate to resolve a classification issue, in which data consists of 10-tuples (q,w,e,r,t,y,u,i,o,p,label), where 'q..p' is a sequence of characters (for now of 2 types, + and -), like this:
q,w,e,r,t,y,u,i,o,p,label
-,+,+,e,e,e,e,e,e,e,type2
+,+,e,e,e,e,e,e,e,e,type1
-,+,e,e,e,e,e,e,e,e,type2
'e' is just a padding character, so that vectors have a fixed length of 10.
Note: the data is significantly tilted toward one label (90% of it), and the dataset is small, < 100 points.
I use Apple's vanilla script to prepare and process the data (derived from here):
import turicreate as tc
# Load the data
data = tc.SFrame('data.csv')
# Note, for sake of investigating why predictions do not work on Swift, the model is deliberately over-fitted, with split 1.0
train_data, test_data = data.random_split(1.0)
print(train_data)
# Automatically picks the right model based on your data.
model = tc.classifier.create(train_data, target='label', features = ['q','w','e','r','t','y','u','i','o','p'])
# Generate predictions (class/probabilities etc.), contained in an SFrame.
predictions = model.classify(train_data)
# Evaluate the model, with the results stored in a dictionary
results = model.evaluate(train_data)
print("***********")
print(results['accuracy'])
print("***********")
model.export_coreml("MyModel.mlmodel")
Note: the model is over-fitted on the whole data (for now). Convergence seems OK:
PROGRESS: Model selection based on validation accuracy:
PROGRESS: ---------------------------------------------
PROGRESS: BoostedTreesClassifier : 1.0
PROGRESS: RandomForestClassifier : 0.9032258064516129
PROGRESS: DecisionTreeClassifier : 0.9032258064516129
PROGRESS: SVMClassifier : 1.0
PROGRESS: LogisticClassifier : 1.0
PROGRESS: ---------------------------------------------
PROGRESS: Selecting BoostedTreesClassifier based on validation set performance.
And the classification works as expected (although over-fitted)
However, when I use the mlmodel in my code, no matter what, it always returns the same label, here 'type2'. The rule here is: type1 = only "+" and "e"; type2 = all other combinations.
I tried using the text_classifier, the results are far less accurate...
I have no idea what I'm doing wrong....
Just in case someone wants to check, for a small data set, here's the raw data.
q,w,e,r,t,y,u,i,o,p,label
-,+,+,e,e,e,e,e,e,e,type2
-,+,e,e,e,e,e,e,e,e,type2
+,+,-,+,e,e,e,e,e,e,type2
-,-,+,-,e,e,e,e,e,e,type2
+,e,e,e,e,e,e,e,e,e,type1
-,-,+,+,e,e,e,e,e,e,type2
+,-,+,-,e,e,e,e,e,e,type2
-,+,-,-,e,e,e,e,e,e,type2
+,-,-,+,e,e,e,e,e,e,type2
+,+,e,e,e,e,e,e,e,e,type1
+,+,-,-,e,e,e,e,e,e,type2
-,+,-,e,e,e,e,e,e,e,type2
-,-,-,-,e,e,e,e,e,e,type2
-,-,e,e,e,e,e,e,e,e,type2
-,-,-,e,e,e,e,e,e,e,type2
+,+,+,+,e,e,e,e,e,e,type1
+,-,+,+,e,e,e,e,e,e,type2
+,+,+,e,e,e,e,e,e,e,type1
+,-,-,-,e,e,e,e,e,e,type2
+,-,-,e,e,e,e,e,e,e,type2
+,+,+,-,e,e,e,e,e,e,type2
+,-,e,e,e,e,e,e,e,e,type2
+,-,+,e,e,e,e,e,e,e,type2
-,-,+,e,e,e,e,e,e,e,type2
+,+,-,e,e,e,e,e,e,e,type2
e,e,e,e,e,e,e,e,e,e,type1
-,+,+,-,e,e,e,e,e,e,type2
-,-,-,+,e,e,e,e,e,e,type2
-,e,e,e,e,e,e,e,e,e,type2
-,+,+,+,e,e,e,e,e,e,type2
-,+,-,+,e,e,e,e,e,e,type2
And the swift code:
//Helper
extension MyModelInput {
    public convenience init(v: [String]) {
        self.init(q: v[0], w: v[1], e: v[2], r: v[3], t: v[4], y: v[5], u: v[6], i: v[7], o: v[8], p: v[9])
    }
}

let classifier = MyModel()
let data = ["-,+,+,e,e,e,e,e,e,e,e", "-,+,e,e,e,e,e,e,e,e,e", "+,+,-,+,e,e,e,e,e,e,e", "-,-,+,-,e,e,e,e,e,e,e", "+,e,e,e,e,e,e,e,e,e,e"]
data.forEach { (tt) in
    let gg = MyModelInput(v: tt.components(separatedBy: ","))
    if let prediction = try? classifier.prediction(input: gg) {
        print(prediction.labelProbability)
    }
}
The python code saves a MyModel.mlmodel file, which you can add to any Xcode project and use the code above.
Note: the Python part works fine; for example:
+---+---+---+---+---+---+---+---+---+---+-------+
| q | w | e | r | t | y | u | i | o | p | label |
+---+---+---+---+---+---+---+---+---+---+-------+
| + | + | + | + | e | e | e | e | e | e | type1 |
+---+---+---+---+---+---+---+---+---+---+-------+
is labelled as expected. But when using the Swift code, the label comes out as type2. This thing is driving me berserk (and yes, I checked that the mlmodel replaces the old one whenever I create a new version, and also in Xcode).

How to specify a mathematical expression using Z specification?

How can I specify a mathematical expression using Z notation?
I think free types are appropriate for this situation because expressions have a recursive nature. Please consider that we can have parentheses and variables in our expression, and only ( + , - , / , * ) are allowed. For example:
A + 2 * ( 3 - B ) / 4
Please help me.
You need to use an axiomatic definition: the definition of an object is constrained by conditions. There is a schema for this in Z, which is:
| Declaration
------------------------------
| Predicate
|
|
A recursive example:
[USERNAME] - an already defined type.
Given a username and a sequence of usernames (a sequence is a function N1 --> USERNAME), return the number of times the given username appears in the sequence.
|-occurs : USERNAME × seq USERNAME → N //here you define the input and what is returned.
---------------------------------------
|∀ u: USERNAME; s: seq USERNAME •
|s = < > => occurs(u,s) = 0
|s ≠ < > and head(s) = u => occurs(u,s) = 1+occurs(u,tail(s))
|s ≠ < > and head(s) ≠ u => occurs(u,s) = occurs(u,tail(s))
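The occurs definition above is just structural recursion on the sequence. The same three cases can be sketched in Python (purely illustrative, not part of Z):

```python
def occurs(u, s):
    # s = <>  =>  occurs(u, s) = 0
    if not s:
        return 0
    head, tail = s[0], s[1:]
    # head(s) = u  =>  occurs(u, s) = 1 + occurs(u, tail(s))
    if head == u:
        return 1 + occurs(u, tail)
    # head(s) /= u  =>  occurs(u, s) = occurs(u, tail(s))
    return occurs(u, tail)

assert occurs("bob", ["bob", "eve", "bob"]) == 2
```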

Alloy model an algebraic group

I am trying to model the structure of an algebraic group with Alloy.
A group just has a set of elements and a binary operation with certain properties, so I thought it would be a good fit for Alloy.
This is what I started with
sig Number {}
/* I call it Number but this is really just a name for some objects that are going to be in the group */

sig Group {
    member: set Number,
    product: member -> member -> member
    /* This is the part I'm really not sure about: the Group is supposed to have a well-defined binary operation, so I thought maybe I could write it like this, sort of as a curried function... I think it's actually a ternary relation in Alloy language since it takes two members and returns a third member */
}{
    // I want to write the other group properties as appended facts here.
    some e: member | all g: member | g->e->g in product // identity element
    all g: member | some i: member | g->i->e in product /* inverses exist; I think there's a problem here because I want the e to be the same as in the previous line */
    all a,b,c: member | if a->b->c and c->d->e and b->c->f then a->f->e // transitivity
    all a,b: member | a->b->c in product // product is well defined
}
I've only just learned a bit of Alloy myself, but your "inverses exist" problem looks straightforward from a predicate logic perspective; replace your first two properties with
some e: member {
    all g: member | g->e->g in product // identity element
    all g: member | some i: member | g->i->e in product // inverses exist
}
By putting the inverse property in the scope of the quantifier of e, it is referring to that same e.
I haven't tested this.
Here is one way of encoding groups in Alloy:
module group[E]

pred associative[o : E->E->E] { all x, y, z : E | (x.o[y]).o[z] = x.o[y.o[z]] }
pred commutative[o : E->E->E] { all x, y : E | x.o[y] = y.o[x] }
pred is_identity[i : E, o : E->E->E] { all x : E | (i.o[x] = x and x = x.o[i]) }
pred is_inverse[b : E->E, i : E, o : E->E->E] { all x : E | (b[x].o[x] = i and i = x.o[b[x]]) }

sig Group {
    op : E -> E ->one E,
    inv : E one->one E,
    id : E
}{
    associative[op] and is_identity[id, op] and is_inverse[inv, id, op]
}

sig Abelian extends Group {}{ commutative[op] }

unique_identity: check {
    all g : Group, id' : E | (is_identity[id', g.op] implies id' = g.id)
} for 13 but exactly 1 Group

unique_inverse: check {
    all g : Group, inv' : E->E | (is_inverse[inv', g.id, g.op] implies inv' = g.inv)
} for 13 but exactly 1 Group
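The axioms the module encodes (associativity, identity, inverses) can be illustrated outside Alloy too. A small Python check of the same three predicates for a concrete group, addition modulo 3 (chosen purely for illustration):

```python
from itertools import product

n = 3
elements = range(n)

def op(x, y):
    # the group operation: addition mod n
    return (x + y) % n

identity = 0

def inv(x):
    # the inverse function: negation mod n
    return (-x) % n

# associative
assert all(op(op(x, y), z) == op(x, op(y, z)) for x, y, z in product(elements, repeat=3))
# is_identity
assert all(op(identity, x) == x == op(x, identity) for x in elements)
# is_inverse
assert all(op(inv(x), x) == identity == op(x, inv(x)) for x in elements)
```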

How to write the following boolean expression?

I've got three boolean values A, B and C. I need to write an IF statement which will execute if and only if no more than one of these values is True. In other words, here is the truth table:
A | B | C | Result
---+---+---+--------
0 | 0 | 0 | 1
0 | 0 | 1 | 1
0 | 1 | 0 | 1
0 | 1 | 1 | 0
1 | 0 | 0 | 1
1 | 0 | 1 | 0
1 | 1 | 0 | 0
1 | 1 | 1 | 0
What is the best way to write this? I know I can enumerate all possibilities, but that seems... too verbose. :P
Added: Just had one idea:
!(A && B) && !(B && C) && !(A && C)
This checks that no two values are set. The suggestion about sums is OK as well. Even more readable maybe...
(A?1:0) + (B?1:0) + (C?1:0) <= 1
P.S. This is for production code, so I'm going more for code readability than performance.
Added 2: Already accepted answer, but for the curious ones - it's C#. :) The question is pretty much language-agnostic though.
How about treating them as integer 1's and 0's, and checking that their sum is at most 1?
EDIT:
Now that we know it's C#/.NET, I think the most readable solution would look somewhat like:
public static class Extensions
{
    public static int ToInt(this bool b)
    {
        return b ? 1 : 0;
    }
}
The above, tucked away in a class library (App_Code?) where we don't have to see it, yet can easily access it (ctrl+click in R#, for instance); then the implementation will simply be:
public bool noMoreThanOne(params bool[] bools)
{
    return bools.Sum(b => b.ToInt()) <= 1;
}
...
bool check = noMoreThanOne(true, true, false, any, amount, of, bools);
You should familiarize yourself with Karnaugh maps. The concept is most often applied to electronics but is very useful here too. It's very easy (though the Wikipedia explanation does look long -- it's thorough).
(A XOR B XOR C) OR NOT (A OR B OR C)
Edit: As pointed out by Vilx, this isn't right.
If A and B are both 1, and C is 0, A XOR B will be 0, the overall result will be 0.
How about:
NOT (A AND B) AND NOT (A AND C) AND NOT (B AND C)
If you turn the logic around, you want the condition to be false if you have any pair of booleans that are both true:
if (! ((a && b) || (a && c) || (b && c))) { ... }
For something completely different, you can put the booleans in an array and count how many true values there are:
if ((new bool[] { a, b, c }).Where(x => x).Count() <= 1) { ... }
I'd go for maximum maintainability and readability.
static bool ZeroOrOneAreTrue(params bool[] bools)
{
    return NumThatAreTrue(bools) <= 1;
}

static int NumThatAreTrue(params bool[] bools)
{
    return bools.Where(b => b).Count();
}
There are many answers here, but I have another one!
a ^ b ^ c ^ (a == b && b == c)
A general way of finding a minimal boolean expression for a given truth table is to use a Karnaugh map:
http://babbage.cs.qc.edu/courses/Minimize/
There are several online minimizers on the web. The one here (linked from the article; it's in German, though) finds the following expression:
(!A && !B) || (!A && !C) || (!B && !C)
If you're going for code readability, though, I would probably go with the idea of "sum<=1". Take care that not all languages guarantee that false==0 and true==1 -- but you're probably aware of this since you've taken care of it in your own solution.
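That minimized form can be verified against the truth table in a few lines; a Python sketch:

```python
from itertools import product

def no_more_than_one(A, B, C):
    # the truth table: at most one input true
    return A + B + C <= 1

def minimized(A, B, C):
    # (!A && !B) || (!A && !C) || (!B && !C)
    return (not A and not B) or (not A and not C) or (not B and not C)

# The minimized expression matches the truth table on all eight rows.
assert all(no_more_than_one(*v) == minimized(*v) for v in product([False, True], repeat=3))
```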
Good ol' logic:
+ = OR
. = AND
R = Abar.Bbar.Cbar + Abar.Bbar.C + Abar.B.Cbar + A.Bbar.Cbar
= Abar.Bbar.(Cbar + C) + Abar.B.Cbar + A.Bbar.Cbar
= Abar.Bbar + Abar.B.Cbar + A.Bbar.Cbar
= Abar.Bbar + CBar(A XOR B)
= NOT(A OR B) OR (NOT C AND (A XOR B))
Take the hint and simplify further if you want.
And yeah, get your self familiar with Karnaugh Maps
Depends whether you want something where it's easy to understand what you're trying to do, or something that's as logically simple as can be. Other people are posting logically simple answers, so here's one where it's more clear what's going on (and what the outcome will be for different inputs):
def only1st(a, b, c):
    return a and not b and not c

# note: the all-false row of the truth table also counts as "no more than one",
# so it needs its own check alongside the three "exactly one" cases
if only1st(a, b, c) or only1st(b, a, c) or only1st(c, a, b) or not (a or b or c):
    print("Yes")
else:
    print("No")
I like the addition solution, but here's a hack to do that with bit fields as well.
inline bool OnlyOneBitSet(int x)
{
    // clears the lowest set bit; if the result is zero, at most one bit was set
    // (parentheses needed: == binds tighter than & in C)
    return (x & (x - 1)) == 0;
}

// macro for int conversion
#define BOOLASINT(x) ((x) ? 1 : 0)

// turn bools a, b, c into the bit field cba
int i = (BOOLASINT(a) << 0) | (BOOLASINT(b) << 1) | (BOOLASINT(c) << 2);
if (OnlyOneBitSet(i)) { /* tada */ }
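The x & (x - 1) trick clears the lowest set bit, so the result is zero exactly when at most one bit was set; a quick Python illustration (function names are just for this sketch):

```python
def only_one_or_zero_bits(x):
    # x & (x - 1) clears the lowest set bit of x
    return x & (x - 1) == 0

def at_most_one_true(a, b, c):
    # pack booleans a, b, c into bits 0..2 and test the packed word
    return only_one_or_zero_bits((a << 0) | (b << 1) | (c << 2))

assert at_most_one_true(True, False, False)
assert not at_most_one_true(True, True, False)
assert at_most_one_true(False, False, False)
```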
Code demonstration of d's solution:
int total = 0;
if (A) total++;
if (B) total++;
if (C) total++;
if (total <= 1) // iff no more than one is true
{
    // execute
}