I've got three boolean values A, B and C. I need to write an IF statement which will execute if and only if no more than one of these values is True. In other words, here is the truth table:
A | B | C | Result
---+---+---+--------
0 | 0 | 0 | 1
0 | 0 | 1 | 1
0 | 1 | 0 | 1
0 | 1 | 1 | 0
1 | 0 | 0 | 1
1 | 0 | 1 | 0
1 | 1 | 0 | 0
1 | 1 | 1 | 0
What is the best way to write this? I know I can enumerate all possibilities, but that seems... too verbose. :P
Added: Just had one idea:
!(A && B) && !(B && C) && !(A && C)
This checks that no two values are set. The suggestion about sums is OK as well. Even more readable maybe...
(A?1:0) + (B?1:0) + (C?1:0) <= 1
P.S. This is for production code, so I'm going more for code readability than performance.
Added 2: Already accepted answer, but for the curious ones - it's C#. :) The question is pretty much language-agnostic though.
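For reference, here is a quick C# sketch of the two checks above, wrapped in helper methods purely for readability (the method names are just for illustration):
static bool AtMostOneTrue_Pairwise(bool a, bool b, bool c)
{
    return !(a && b) && !(b && c) && !(a && c);
}

static bool AtMostOneTrue_Sum(bool a, bool b, bool c)
{
    return (a ? 1 : 0) + (b ? 1 : 0) + (c ? 1 : 0) <= 1;
}
Both return true for exactly the four rows of the truth table whose Result is 1.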
How about treating them as integer 1s and 0s, and checking that their sum is at most 1?
EDIT:
Now that we know that it's C#/.NET, I think the most readable solution would look somewhat like this:
public static class Extensions
{
    public static int ToInt(this bool b)
    {
        return b ? 1 : 0;
    }
}
The above is tucked away in a class library (App_Code?) where we don't have to see it, yet can easily access it (Ctrl+click in R#, for instance), and then the implementation will simply be:
public bool noMoreThanOne(params bool[] bools)
{
    return bools.ToList().Sum(b => b.ToInt()) <= 1;
}
...
bool check = noMoreThanOne(true, true, false, any, amount, of, bools);
You should familiarize yourself with Karnaugh maps. The concept is most often applied to electronics but is very useful here too. It's very easy (though the Wikipedia explanation does look long -- it's thorough).
(A XOR B XOR C) OR NOT (A OR B OR C)
Edit: As pointed out by Vilx, this isn't right.
If A, B and C are all 1, A XOR B XOR C is 1, so the overall result is 1 when it should be 0.
How about:
NOT (A AND B) AND NOT (A AND C) AND NOT (B AND C)
If you turn the logic around, you want the condition to be false if you have any pair of booleans that are both true:
if (! ((a && b) || (a && c) || (b && c))) { ... }
For something completely different, you can put the booleans in an array and count how many true values there are:
if ((new bool[] { a, b, c }).Where(x => x).Count() <= 1) { ... }
I'd go for maximum maintainability and readability.
static bool ZeroOrOneAreTrue(params bool[] bools)
{
    return NumThatAreTrue(bools) <= 1;
}

static int NumThatAreTrue(params bool[] bools)
{
    return bools.Where(b => b).Count();
}
There are many answers here, but I have another one!
a ^ b ^ c ^ (a == b && b == c)
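It's not obvious at first glance, so here is a quick brute-force check (a C# sketch; it works because ^ on bools is logical XOR) that this matches the truth table from the question for all eight inputs:
foreach (bool a in new[] { false, true })
foreach (bool b in new[] { false, true })
foreach (bool c in new[] { false, true })
{
    bool expected = (a ? 1 : 0) + (b ? 1 : 0) + (c ? 1 : 0) <= 1;
    bool actual = a ^ b ^ c ^ (a == b && b == c);
    Console.WriteLine($"{a} {b} {c}: {actual} (expected {expected})");
}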
A general way of finding a minimal boolean expression for a given truth table is to use a Karnaugh map:
http://babbage.cs.qc.edu/courses/Minimize/
There are several online minimizers on the web. The one here (linked to from the article, it's in German, though) finds the following expression:
(!A && !B) || (!A && !C) || (!B && !C)
If you're going for code readability, though, I would probably go with the idea of "sum<=1". Take care that not all languages guarantee that false==0 and true==1 -- but you're probably aware of this since you've taken care of it in your own solution.
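In C#, for example, the conversion has to be explicit anyway; a minimal sketch (Convert.ToInt32 maps false to 0 and true to 1):
bool atMostOne = Convert.ToInt32(a) + Convert.ToInt32(b) + Convert.ToInt32(c) <= 1;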
Good ol' logic:
+ = OR
. = AND
R = Abar.Bbar.Cbar + Abar.Bbar.C + Abar.B.Cbar + A.Bbar.Cbar
  = Abar.Bbar.(Cbar + C) + Abar.B.Cbar + A.Bbar.Cbar
  = Abar.Bbar + Abar.B.Cbar + A.Bbar.Cbar
  = Abar.Bbar + Cbar.(A XOR B)
  = NOT(A OR B) OR (NOT C AND (A XOR B))
Take the hint and simplify further if you want.
And yeah, get yourself familiar with Karnaugh maps.
Depends whether you want something where it's easy to understand what you're trying to do, or something that's as logically simple as can be. Other people are posting logically simple answers, so here's one where it's more clear what's going on (and what the outcome will be for different inputs):
def only1st(a, b, c):
    return a and not b and not c

if only1st(a, b, c) or only1st(b, a, c) or only1st(c, a, b):
    print("Yes")
else:
    print("No")
I like the addition solution, but here's a hack to do that with bit fields as well.
inline bool OnlyOneBitSet(int x)
{
    // x & (x - 1) clears the lowest set bit; if the result is zero,
    // no more than one bit was set. (Note the parentheses: == binds
    // tighter than & in C.)
    return (x & (x - 1)) == 0;
}

// macro for int conversion
#define BOOLASINT(x) ((x)?1:0)

// turn bools a, b, c into the bit field cba
int i = (BOOLASINT(a) << 0) | (BOOLASINT(b) << 1) | (BOOLASINT(c) << 2);
if (OnlyOneBitSet(i)) { /* tada */ }
Code demonstration of d's solution:
int total = 0;
if (A) total++;
if (B) total++;
if (C) total++;

if (total <= 1) // iff no more than one is true
{
    // execute
}
My doubt is about the basic theory of the logical OR operator. Specifically, logical OR returns true if at least one operand is true.
For instance, in the OR expression (x < 0 || x > 8), using x = 5, when I evaluate the two operands I interpret both of them as false.
But I have an example that does not fit with this rule. On the contrary, the expression works as a range between 0 and 8, both included.
Here is the code:
#include <stdio.h>
int main(void)
{
int x ; //This is the variable for being evaluated
do
{
printf("Imput a figure between 1 and 8 : ");
scanf("%i", &x);
}
while ( x < 1 || x > 8); // Why this expression write in this way determinate the range???
{
printf("Your imput was ::: %d ",x);
printf("\n");
}
printf("\n");
}
I have modified my first question. I really appreciate any help in order to clarify my doubt.
In advance, thank you very much. Otto
It's not a while loop; it's a do ... while loop. The formatting makes it hard to see. Reformatted:
#include <stdio.h>
int main(void) {
    int x;

    // Execute the code in the `do { }` block once, no matter what.
    // Keep executing it again and again, so long as the condition
    // in `while ( )` is true.
    do {
        printf("Imput a figure between 1 and 8 : ");
        scanf("%i", &x);
    } while (x < 1 || x > 8);

    // This creates a new scope. While perfectly valid C,
    // this does absolutely nothing in this particular case here.
    {
        printf("Your imput was ::: %d ", x);
        printf("\n");
    }

    printf("\n");
}
The block with the two printf calls is not part of the loop. The while (x < 1 || x > 8) makes it so that the code in the do { } block runs, so long as x < 1 or x > 8. In other words, it runs until x is between 1 and 8. This has the effect of asking the user to input a number again and again, until they finally input a number that's between 1 and 8.
Given the concrete syntax of a language, I would like to define a function "instance" with signature str (type[&T]) that could be called with the reified type of the syntax and return a valid instance of the language.
For example, with this syntax:
lexical IntegerLiteral = [0-9]+;
start syntax Exp
= IntegerLiteral
| bracket "(" Exp ")"
> left Exp "*" Exp
> left Exp "+" Exp
;
A valid return of instance(#Exp) could be "1+(2*3)".
The reified type of a concrete syntax definition does contain information about the productions, but I am not sure if this approach is better than a dedicated data structure. Any pointers on how I could implement it?
The most natural thing is to use the Tree data-type from the ParseTree module in the standard library. It is the format that the parser produces, but you can also use it yourself. To get a string from the tree, simply print it in a string like so:
str s = "<myTree>";
A relatively complete random tree generator can be found here: https://github.com/cwi-swat/drambiguity/blob/master/src/GenerateTrees.rsc
The core of the implementation is this:
Tree randomChar(range(int min, int max)) = char(arbInt(max + 1 - min) + min);
Tree randomTree(type[Tree] gr)
= randomTree(gr.symbol, 0, toMap({ <s, p> | s <- gr.definitions, /Production p:prod(_,_,_) <- gr.definitions[s]}));
Tree randomTree(\char-class(list[CharRange] ranges), int rec, map[Symbol, set[Production]] _)
= randomChar(ranges[arbInt(size(ranges))]);
default Tree randomTree(Symbol sort, int rec, map[Symbol, set[Production]] gr) {
    p = randomAlt(sort, gr[sort], rec);
    return appl(p, [randomTree(delabel(s), rec + 1, gr) | s <- p.symbols]);
}

default Production randomAlt(Symbol sort, set[Production] alts, int rec) {
    int w(Production p) = rec > 100 ? p.weight * p.weight : p.weight;
    int total(set[Production] ps) = (1 | it + w(p) | Production p <- ps);

    r = arbInt(total(alts));
    count = 0;

    for (Production p <- alts) {
        count += w(p);
        if (count >= r) {
            return p;
        }
    }

    throw "could not select a production for <sort> from <alts>";
}
It is a simple recursive function which randomly selects productions from a reified grammar.
The trick towards termination lies in the weight of each rule. This is computed a priori, such that every rule has its own weight in the random selection. We take care to give the set of rules that lead to termination at least 50% chance of being selected (as opposed to the recursive rules) (code here: https://github.com/cwi-swat/drambiguity/blob/master/src/Termination.rsc)
Grammar terminationWeights(Grammar g) {
    deps = dependencies(g.rules);
    weights = ();
    recProds = {p | /p:prod(s,[*_,t,*_],_) := g, <delabel(t), delabel(s)> in deps};

    for (nt <- g.rules) {
        prods = {p | /p:prod(_,_,_) := g.rules[nt]};
        count = size(prods);
        recCount = size(prods & recProds);
        notRecCount = size(prods - recProds);

        // at least 50% of the weight should go to non-recursive rules if they exist
        notRecWeight = notRecCount != 0 ? (count * 10) / (2 * notRecCount) : 0;
        recWeight = recCount != 0 ? (count * 10) / (2 * recCount) : 0;

        weights += (p : p in recProds ? recWeight : notRecWeight | p <- prods);
    }

    return visit (g) {
        case p:prod(_, _, _) => p[weight=weights[p]]
    }
}
@memo
rel[Symbol, Symbol] dependencies(map[Symbol, Production] gr)
    = {<delabel(from), delabel(to)> | /prod(Symbol from, [*_, Symbol to, *_], _) := gr}+;
Note that this randomTree algorithm will not terminate on grammars that are not "productive" (i.e. grammars that have only rules like `syntax E = E;`).
Also it can generate trees that are filtered by the disambiguation rules, so you can check for this by running the parser on a generated string and looking for parse errors. It can also generate ambiguous strings.
By the way, this code was inspired by the PhD thesis of Naveneetha Vasudevan of King's College, London.
I am trying to understand Boolean logic and operators.
I found this example but can't understand why this expression will evaluate to the one shown below.
Say, a = 0, b = 1, c = 0
Expression                Evaluates to
val1 = !(a || b || c);    !(0 || 1 || 0) = !(1) = 0
As I see it, val1 is 'not a, or not b, or not c', so why does it evaluate to not 1?
Not(a or b or c) evaluates the or operations first, so it's not the same as (not a) or (not b) or (not c).
Indeed, it's the same as (not a) AND (not b) AND (not c).
Either operand to an OR being true will give a true result, and then the NOT flips that to a false result for the expression as a whole.
As with integer or real number arithmetic, order of operation can greatly alter the result.
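If it helps, here is a small C# sketch (the variable names are arbitrary) showing that !(a || b || c) agrees with (!a && !b && !c), and not with (!a || !b || !c), for every combination:
foreach (bool a in new[] { false, true })
foreach (bool b in new[] { false, true })
foreach (bool c in new[] { false, true })
{
    Console.WriteLine(!(a || b || c) == (!a && !b && !c)); // always True
    Console.WriteLine(!(a || b || c) == (!a || !b || !c)); // False for the six mixed cases
}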
.... val1 is not a or not b or not c ...
No, this is incorrect. The 0 || 1 || 0 inside the parentheses is evaluated first. The example has it right.
Let's trace it. We start with
val1 = !(0 || 1 || 0)
The parenthesised expression is evaluated first: (0 || 1 || 0) is 1, because at least one operand is 1. That leaves
val1 = !(1)
and the ! then negates it, so val1 = 0.
Let's go step-by-step.
val1 = !(0 || 1 || 0);
Firstly, 0 || 1 will evaluate to 1, because || means 'true if at least one of them is true, otherwise false', and 1 = true, 0 = false.
So now it is
val1 = !(1 || 0);
Here 1 || 0 will again evaluate to 1, because at least one of them is 1. Now we've got val1 = !(1);. The ! means the opposite of the input, so !(1) = 0.
As I see it, val1 is 'not a, or not b, or not c', so why does it evaluate to not 1?
Because what you say would be written as val1 = !0 || !1 || !0. It's quite different, because it doesn't have the parentheses. Parentheses mean 'evaluate everything inside them first'.
I often see code like
int hashCode() {
    return a ^ b;
}
Why XOR?
Of all bit-operations XOR has the best bit shuffling properties.
This truth-table explains why:
A B AND
0 0 0
0 1 0
1 0 0
1 1 1
A B OR
0 0 0
0 1 1
1 0 1
1 1 1
A B XOR
0 0 0
0 1 1
1 0 1
1 1 0
As you can see, AND and OR do a poor job of mixing bits.
OR will on average produce 3/4 one-bits. AND, on the other hand, will on average produce 3/4 zero-bits. Only XOR has an even one-bit vs. zero-bit distribution. That makes it so valuable for hash-code generation.
Remember that for a hash-code you want to use as much information of the key as possible and get a good distribution of hash-values. If you use AND or OR you'll get numbers that are biased towards either numbers with lots of zeros or numbers with lots of ones.
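To make the distribution claim concrete, here is a rough C# sketch (not from the original answer; the names are ad hoc) that combines random 32-bit values with AND, OR and XOR and tallies the fraction of one-bits in each result:
var rng = new Random();
long andOnes = 0, orOnes = 0, xorOnes = 0;
const int trials = 100_000;

int CountOnes(uint x)
{
    int n = 0;
    while (x != 0) { n += (int)(x & 1); x >>= 1; }
    return n;
}

for (int i = 0; i < trials; i++)
{
    uint a = ((uint)rng.Next(1 << 16) << 16) | (uint)rng.Next(1 << 16);
    uint b = ((uint)rng.Next(1 << 16) << 16) | (uint)rng.Next(1 << 16);
    andOnes += CountOnes(a & b);
    orOnes  += CountOnes(a | b);
    xorOnes += CountOnes(a ^ b);
}

// Expect roughly 25% ones for AND, 75% for OR, 50% for XOR.
Console.WriteLine($"AND: {andOnes / (32.0 * trials):P1}");
Console.WriteLine($"OR:  {orOnes / (32.0 * trials):P1}");
Console.WriteLine($"XOR: {xorOnes / (32.0 * trials):P1}");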
XOR has the following advantages:
It does not depend on order of computation i.e. a^b = b^a
It does not "waste" bits. If you change even one bit in one of the components, the final value will change.
It is quick, a single cycle on even the most primitive computer.
It preserves uniform distribution. If the two pieces you combine are uniformly distributed so will the combination be. In other words, it does not tend to collapse the range of the digest into a narrower band.
More info here.
The XOR operator is reversible. For example, suppose I have the bit string 0 0 1 and I XOR it with another bit string 1 1 1; the output is
0 xor 1 = 1
0 xor 1 = 1
1 xor 1 = 0
Now I can again XOR the 1st string with the result to get the 2nd string:
0 xor 1 = 1
0 xor 1 = 1
1 xor 0 = 1
So that makes the 2nd string a key. This behavior is not found with the other bit operators.
Please see this for more info --> Why is XOR used in cryptography?
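A tiny C# sketch of the same round trip (the values are just the 3-bit example from above):
int message = 0b001;
int key     = 0b111;

int encrypted = message ^ key;   // 0b110
int decrypted = encrypted ^ key; // XOR with the same key again restores 0b001

Console.WriteLine(Convert.ToString(encrypted, 2).PadLeft(3, '0')); // 110
Console.WriteLine(decrypted == message);                           // True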
There is another use case: objects in which (some) fields must be compared without regard to their order. For example, if you want the pair (a, b) to always be equal to the pair (b, a).
XOR has the property that a ^ b = b ^ a, so it can be used in hash function in such cases.
Examples: (full code here)
definition:
final class Connection {
    public final int A;
    public final int B;

    // some code omitted

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        Connection that = (Connection) o;
        return (A == that.A && B == that.B || A == that.B && B == that.A);
    }

    @Override
    public int hashCode() {
        return A ^ B;
    }

    // some code omitted
}
usage:
HashSet<Connection> s = new HashSet<>();
s.add(new Connection(1, 3));
s.add(new Connection(2, 3));
s.add(new Connection(3, 2));
s.add(new Connection(1, 3));
s.add(new Connection(2, 1));
s.remove(new Connection(1, 2));
for (Connection x : s) {
    System.out.println(x);
}
// output:
// Connection{A=2, B=3}
// Connection{A=1, B=3}
probably a simple question but I seem to be suffering from programmer's block. :)
I have three boolean values: A, B, and C. I would like to save the state combination as an unsigned tinyint (max 255) into a database and be able to derive the states from the saved integer.
Even though there are only a limited number of combinations, I would like to avoid hard-coding each state combination to a specific value (something like if A=true and B=true has the value 1).
I tried to assign values to the variables so (A=1, B=2, C=3) and then adding, but I can't differentiate between A and B being true from i.e. only C being true.
I am stumped but pretty sure that it is possible.
Thanks
Binary maths, I think. Assign each flag a value that's a power of 2 (1, 2, 4, 8, etc.), then you can use the bitwise AND operator & to determine which ones are set.
Say A = 1, B = 2 , C= 4
00000111 => A B and C => 7
00000101 => A and C => 5
00000100 => C => 4
then to determine them :
if( val & 4 ) // same as if (C)
if( val & 2 ) // same as if (B)
if( val & 1 ) // same as if (A)
if((val & 4) && (val & 2) ) // same as if (C and B)
No need for a state table.
Edit: to reflect comment
If the tinyint has a maximum value of 255 => you have 8 bits to play with and can store 8 boolean values in there
binary math as others have said
encoding:
myTinyInt = A*1 + B*2 + C*4 (assuming you convert A,B,C to 0 or 1 beforehand)
decoding:
bool A = (myTinyInt & 1) != 0 (& is the bitwise AND operator in many languages; the parentheses matter where != binds tighter than &)
bool B = (myTinyInt & 2) != 0
bool C = (myTinyInt & 4) != 0
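A small C# sketch of the same round trip (the method names are just illustrative):
static byte Encode(bool a, bool b, bool c)
{
    return (byte)((a ? 1 : 0) | (b ? 2 : 0) | (c ? 4 : 0));
}

static (bool A, bool B, bool C) Decode(byte stored)
{
    return ((stored & 1) != 0, (stored & 2) != 0, (stored & 4) != 0);
}

// e.g. Encode(true, false, true) == 5, and Decode(5) gives (true, false, true)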
I'll add that you should find a way to not use magic numbers. You can build masks into constants using the Left Logical/Bit Shift with a constant bit position that is the position of the flag of interest in the bit field. (Wow... that makes almost no sense.) An example in C++ would be:
enum Flags {
    kBitMask_A = (1 << 0),
    kBitMask_B = (1 << 1),
    kBitMask_C = (1 << 2),
};

uint8_t byte = 0;          // byte = 0b00000000
byte |= kBitMask_A;        // Set A,   byte = 0b00000001
byte |= kBitMask_C;        // Set C,   byte = 0b00000101
if (byte & kBitMask_A) {   // Test A,  (0b00000101 & 0b00000001) = T
    byte &= ~kBitMask_A;   // Clear A, byte = 0b00000100
}
In any case, I would recommend looking for Bitset support in your favorite programming language. Many languages will abstract the logical operations away behind normal arithmetic or "test/set" operations.
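In C#, for instance, a [Flags] enum keeps the powers of two and the masking behind readable names (a sketch; the enum and member names are made up):
[Flags]
enum States : byte
{
    None = 0,
    A = 1 << 0,
    B = 1 << 1,
    C = 1 << 2,
}

States s = States.A | States.C;   // store (byte)s in the tinyint column
bool hasA = (s & States.A) != 0;  // or s.HasFlag(States.A)
bool hasB = (s & States.B) != 0;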
Need to use binary...
A = 1,
B = 2,
C = 4,
D = 8,
E = 16,
F = 32,
G = 64,
H = 128
This means A + B = 3 but C = 4, so you'll never have two conflicting values. I've listed the maximum you can have for a single byte: 8 values (bits).