How to convert numbers into boolean in Julia?

I want to convert numbers into the equivalent boolean, based on the official docs:
"false is numerically equal to 0 and true is numerically equal to 1."
So I would like to convert numeric values to their boolean equivalents. Example of the expected behavior:
julia> Bool(5)
true
But, Julia gives me:
ERROR: InexactError: Bool(5)
Stacktrace:
[1] Bool(x::Int64)
@ Base .\float.jl:158
[2] top-level scope
@ REPL[26]:1
But if I say:
julia> Bool(0.0)
false
It looks good on 0 and 0.0! But it doesn't work on numbers like 5 or 2.1, etc.

The quick answer is that it seems like you want to test whether a value is non-zero, and you can test whether a number x is non-zero like this: x != 0. Whether a number is considered true or false depends on what language you're in, however; more on that below.
Having gotten the quick answer out of the way, we can talk about why. The docs say that false and true are numerically equal to zero and one. We can test this out ourselves:
julia> false == 0
true
julia> true == 1.0
true
You can see that the equality holds regardless of numerical type—integers or floats or even more esoteric number types like complexes or rationals. We can also convert booleans to other types of numbers:
julia> Int(false)
0
julia> Float64(true)
1.0
We can convert those values back to booleans:
julia> Bool(0)
false
julia> Bool(1.0)
true
What you can't do is convert a number that isn't equal to zero or one to boolean:
julia> Bool(5)
ERROR: InexactError: Bool(5)
That is because the Bool type can only represent the values zero and one exactly and trying to convert any other numeric value to Bool will give an InexactError. This is the same error you get if you try to convert a float that isn't integer-valued to an integer type:
julia> Int(5.0)
5
julia> Int(5.5)
ERROR: InexactError: Int64(5.5)
Or an integer to a smaller type that isn't big enough to represent that value:
julia> Int8(123)
123
julia> Int8(1234)
ERROR: InexactError: trunc(Int8, 1234)
The exact same thing is happening here: Bool is not big enough to represent the value 5, so you get an error if you try to convert the value 5 to Bool.
The convention that many languages use for truthiness of numbers is that values representing zero are falsey and non-zero values are truthy. Note that there is no sound mathematical reason for this: zero is not false and non-zero numbers are not true; it's just a convention that comes from the C programming language, which historically didn't have a boolean type and uses this convention for treating integers as true/false in conditionals. This convention is far from universal, however, as there are popular languages that don't follow it: in Lisps and Ruby, for example, all numbers are truthy. I wrote a post recently on the Julia discourse forum about differing notions of truthiness in different languages and why Julia rejects truthiness and instead requires you to write out an explicit condition, like comparison with zero for numbers or non-emptiness for collections.
Since the condition you actually want to check is whether a number is equal to zero or not, that's just what you should do: explicitly compare the number to zero by doing x != 0. This will evaluate to a boolean value: true if x is non-zero and false if x is equal to zero. There's a predicate function that does this as well: iszero(x) checks if x is zero, which can be handy if you want to e.g. count how many zero or non-zero values there are in a collection:
julia> v = rand(-2:2, 100);
julia> count(iszero, v)
18
julia> count(!iszero, v)
82
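And if you want actual Bool values rather than just a count, you can broadcast the comparison over a collection; a small sketch (the vector x here is made up for illustration):
julia> x = [0, 5, 2.1];

julia> x .!= 0
3-element BitVector:
 0
 1
 1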


Checking whether values are Upper case or lower case

I'm trying to build a function in q/kdb+ that can detect whether the string passed to the function is upper case (true) or lower case (false).
I have made several attempts but I keep hitting a roadblock.
Here's my function; I would appreciate the help. TIA
isAllCaps: {[input] if[input = lower; show 0b; show 1b]}
The above function takes an input as specified. The if statement then checks whether the input is lower case; if it is, it should return false (0b), and if not, return true (1b). I'm really struggling with something so simple here. I'm just getting the following error:
evaluation error:
type
[1] isAllCaps:{[input] if[input = lower; show 0b; show 1b]}
^
[0] isAllCaps"a"
I have also tried other methods but only certain inputs were coming out to be successful.
For instance:
isAllCaps: {[x] if[ x = upper type 10h; show 1b; show 0b]}
This equates to: if x is an upper-case value of string type (10h), show true, else show false. Again, I'm getting a type error and I don't know why. All the test inputs are strings, e.g. "John_Citizen" etc.
EDIT
Tried this but I'm getting 2 outputs.
isAllCaps: {[L] if[L = lower L; show 0b; show 1b] }
Am I missing something?
Try the following code:
isAllCaps: {[L] show L~upper L};
It shows 1b if the string is upper case, and 0b otherwise.
There are 2 mistakes in your code:
~ should be used to compare strings. = does character-wise comparison. I.e. "aa"~"aa" gives 1b, but "aa"="aa" gives 11b
Use if-else, instead of if. See 10.1.1 Basic Conditional Evaluation for more details
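For reference, here is one way the corrected function could look using Cond ($[;;]) so that exactly one branch is evaluated and its value is returned; this is just a sketch combining the two fixes above, not the only possible solution:
isAllCaps: {[L] $[L~upper L; 1b; 0b]}
isAllCaps "JOHN_CITIZEN"   / 1b
isAllCaps "John_Citizen"   / 0b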
You can use the in-built uppercase alphabet .Q.A to achieve what you want:
{all x in .Q.A}
This lambda will return 1b if the input string consists only of capital letters, and 0b otherwise (note that non-letter characters such as digits or underscores will also make it return 0b).
Start by removing some misapprehensions.
lower returns the lower case of its argument.
show displays its argument and returns it. You can use it, as you have, to ensure the function's result is printed on the console, but it is not required in the way that, for example, return is in JavaScript.
If you have a boolean as the result of a test, that can simply be your result.
Compare strings for equality using Match (~) rather than Equal (=).
That said, two strategies: test for lower-case chars .Q.a, or convert to upper case and see if there is a difference.
q){not any x in .Q.a}"AAA_123"
1b
q){not any x in .Q.a}"AaA_123"
0b
The second strategy could hardly be simpler to write:
q){x~upper x}"AaA_123"
0b
q){x~upper x}"AAA_123"
1b
Together with the Apply . operator, the Zen monks construction gives you fast ‘point-free’ code – no need for a lambda:
q).[~] 1 upper\"AaA_123"
0b
q).[~] 1 upper\"AAA_123"
1b
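To see what the Zen monks construction feeds to Match, you can evaluate the scan on its own: 1 upper\x returns the original string together with the result of applying upper once, and Apply . then passes that pair to ~ (same example string as above):
q)1 upper\"AaA_123"
"AaA_123"
"AAA_123"
q).[~]("AaA_123";"AAA_123")
0b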

Julia: Immutable composite types

I am still totally new to Julia and very irritated by the following behaviour:
immutable X
x::ASCIIString
end
"Foo" == "Foo"
true
X("Foo") == X("Foo")
false
but with Int instead of ASCIIString
immutable Y
y::Int
end
3 == 3
true
Y(3) == Y(3)
true
I had expected X("Foo") == X("Foo") to be true. Can anyone clarify why it is not?
Thank you.
Julia has two kinds of equality comparison:
If you want to check that x and y are identical, in the sense that no program could distinguish them, then the right choice is the is(x,y) function, and the equivalent operator for this kind of comparison is ===. The tricky part is that two mutable objects are identical only if their memory addresses are the same, but when you compare two immutable objects it returns true if their contents are the same at the bit level.
2 === 2 #=> true, because numbers are immutable
"Foo" === "Foo" #=> false
The == operator, or its equivalent isequal(x,y) function, performs what is called generic comparison: it returns true if, firstly, a suitable method exists for the types of the arguments and, secondly, that method returns true. So what if no such method is defined? Then == falls back to the === operator.
Now, in the case above, you have an immutable type that does not define ==, so you are really calling the === operator, and it checks whether the two objects are identical in contents at the bit level. They are not, because each X holds a reference to a different string object, and "Foo" !== "Foo".
Check the source in base/operators.jl.
Read the documentation.
EDIT:
As @Andrew mentioned, refer to the Julia documentation: Strings are an immutable data type, so why is "test" !== "test" #=> true? If you look into the structure of the String data type, e.g. with xdump("test") #=> ASCIIString data: Array(UInt8,(4,)) UInt8[0x74,0x65,0x73,0x74], you find that Strings are composite types with an important data field. Julia Strings are essentially a sequence of bytes stored in the data field of the String type, and isimmutable("test".data) #=> false.
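If what you want is field-by-field equality for a type like this, one common approach is to define == for the type yourself. A minimal sketch in current Julia syntax (struct and String have since replaced immutable and ASCIIString; this was not part of the original answer):
julia> struct X
           x::String
       end

julia> Base.:(==)(a::X, b::X) = a.x == b.x;

julia> X("Foo") == X("Foo")
true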

Turn off Warning: Extension: Conversion from LOGICAL(4) to INTEGER(4) at (1) for gfortran?

I am intentionally casting an array of boolean values to integers but I get this warning:
Warning: Extension: Conversion from LOGICAL(4) to INTEGER(4) at (1)
which I don't want. Can I either
(1) Turn off that warning in the Makefile?
or (more favorably)
(2) Explicitly make this cast in the code so that the compiler doesn't need to worry?
The code will look something like this:
A = (B.eq.0)
where A and B are both size (n,1) integer arrays. B will be filled with integers ranging from 0 to 3. I need to use this type of command again later with something like A = (B.eq.1) and I need A to be an integer array where it is 1 if and only if B is the requested integer, otherwise it should be 0. These should act as boolean values (1 for .true., 0 for .false.), but I am going to be using them in matrix operations and summations where they will be converted to floating point values (when necessary) for division, so logical values are not optimal in this circumstance.
Specifically, I am looking for the fastest, most vectorized version of this command. It is easy to write a wrapper for testing elements, but I want this to be a vectorized operation for efficiency.
I am currently compiling with gfortran, but would like whatever methods are used to also work in ifort as I will be compiling with intel compilers down the road.
update:
Both merge and where work perfectly for the example in question. I will look into performance metrics on these and select the best for vectorization. I am also interested in how this will work with matrices, not just arrays, but that was not my original question so I will post a new one unless someone wants to expand their answer to how this might be adapted for matrices.
I have not found a compiler option to solve (1).
However, the type conversion is pretty simple. The documentation for gfortran specifies that .true. is mapped to 1, and .false. to 0.
Note that this conversion is not specified by the standard, and different values could be used by other compilers. In particular, you should not depend on the exact values.
A simple merge will do the trick for scalars and arrays:
program test
integer :: int_sca, int_vec(3)
logical :: log_sca, log_vec(3)
log_sca = .true.
log_vec = [ .true., .false., .true. ]
int_sca = merge( 1, 0, log_sca )
int_vec = merge( 1, 0, log_vec )
print *, int_sca
print *, int_vec
end program
To address your updated question, this is trivial to do with merge:
A = merge(1, 0, B == 0)
This can be performed on scalars and arrays of arbitrary dimensions. For the latter, this can easily be vectorized by the compiler. You should consult the manual of your compiler for that, though.
The where statement in Casey's answer can be extended in the same way.
Since you convert them to floats later on, why not assign them as floats right away? Assuming that A is real, this could look like:
A = merge(1., 0., B == 0)
Another method, to complement @AlexanderVogt's answer, is to use the where construct.
program test
implicit none
integer :: int_vec(5)
logical :: log_vec(5)
log_vec = [ .true., .true., .false., .true., .false. ]
where (log_vec)
int_vec = 1
elsewhere
int_vec = 0
end where
print *, log_vec
print *, int_vec
end program test
This will assign 1 to the elements of int_vec that correspond to true elements of log_vec and 0 to the others.
The where construct will work for any rank array.
For this particular example you could avoid the logical altogether:
A=1-(3-B)/3
With integer division and B in the range 0 to 3, this evaluates to 1 where B is non-zero and 0 where B is zero; drop the leading 1- if you want the B == 0 indicator instead. Of course it is not so good for readability, but it might be OK performance-wise.
Edit: running performance tests, this is 2-3x faster than the where construct, and of course absolutely standards-conforming. In fact you can throw in an absolute value and generalize as:
integer,parameter :: h=huge(1)
A=1-(h-abs(B))/h
and still beat the where loop.
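For what it's worth, here is a minimal standalone check (not from the original answers; the array contents and names are made up) showing that the generalized arithmetic form produces the same 0/1 pattern as merge with a non-zero mask:
program check_trick
  implicit none
  integer, parameter :: h = huge(1)
  integer :: B(4) = [0, 1, 2, 3]
  integer :: A_merge(4), A_arith(4)

  A_merge = merge(1, 0, B /= 0)    ! 1 where B is non-zero, 0 where B is zero
  A_arith = 1 - (h - abs(B)) / h   ! same pattern via integer division

  print *, A_merge                 ! prints 0 1 1 1
  print *, A_arith                 ! prints 0 1 1 1
end program check_trick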

Can you set x == y equal to a Boolean in any programming language where x and y are of the same primitive type?

This may be an odd question, and normally I would just go test it, but I'm not near a computer with any IDEs. Can you set a Boolean equal to the expression x == y? The only reason I ask is that in an if statement you can write if (x == y), which asks whether the two are equal and, if so, does something. If the statement x == y evaluates to true, can you assign that result to a variable?
Yes you can, in Java you can do:
boolean z = (x==y);
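A minimal compilable sketch (the class and variable names are made up for illustration):
public class BoolAssign {
    public static void main(String[] args) {
        int x = 3, y = 3;
        boolean z = (x == y);   // the comparison itself evaluates to a boolean value
        System.out.println(z);  // prints "true"
    }
}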

Perl booleans, negation (and how to explain it)?

I'm new here. After reading through how to ask and how to format, I hope this will be an OK question. I'm not very skilled in Perl, but it is the programming language I know best.
I have been trying to apply Perl to real life, but without gaining much understanding - especially not from my wife. I tell her that:
if she didn't bring me 3 beers in the evening, that means I got zero (or no) beers.
As you probably guessed, without much success. :(
Now factually. From perlop:
Unary "!" performs logical negation, that is, "not".
Languages that have boolean types (which can have only two "values") are fine:
if it is not one value -> it must be the other one.
so naturally:
!true -> false
!false -> true
But Perl doesn't have boolean variables - it has only a truthiness system, where everything that is not 0, '0', undef, or '' is TRUE. The problem comes when applying logical negation to a non-logical value, e.g. numbers.
E.g. if some number IS NOT 3, that means it IS ZERO or empty, instead of the real-life meaning, where if something is NOT 3 it can be anything but 3 (e.g. zero too).
So the next code:
use 5.014;
use strictures;
my $not_3beers = !3;
say defined($not_3beers) ? "defined, value>$not_3beers<" : "undefined";
say $not_3beers ? "TRUE" : "FALSE";
my $not_4beers = !4;
printf qq{What is not 3 nor 4 mean: They're same value: %d!\n}, $not_3beers if( $not_3beers == $not_4beers );
say qq(What is not 3 nor 4 mean: @{[ $not_3beers ? "some bears" : "no bears" ]}!) if( $not_3beers eq $not_4beers );
say ' $not_3beers>', $not_3beers, "<";
say '-$not_3beers>', -$not_3beers, "<";
say '+$not_3beers>', -$not_3beers, "<";
prints:
defined, value><
FALSE
What is not 3 nor 4 mean: They're same value: 0!
What is not 3 nor 4 mean: no bears!
$not_3beers><
-$not_3beers>0<
+$not_3beers>0<
Moreover:
perl -E 'say !!4'
what is not-not-4 IS 1, instead of 4!
The above statements with my wife are "false" (meaning 0) :), but I am really trying to teach my son Perl, and after a while he asked my wife: why, if something is not 3, does that mean it is 0?
So the questions are:
how to explain this to my son
why Perl has this design, i.e. why !0 is always 1
is there something "behind" this that requires !0 to be not just any random (non-zero) number, but exactly 1?
as I already said, I don't know other languages well - is !3 == 0 in every language?
I think you are focusing too much on negation and too little on what Perl booleans mean.
Historical/Implementation Perspective
What is truth? The detection of a voltage higher than x volts.
On a higher abstraction level: If this bit here is set.
The abstraction of a sequence of bits can be considered an integer. Is this integer false? Yes, if no bit is set, i.e. the integer is zero.
A hardware-oriented language will likely use this definition of truth, e.g. C and all C descendants, including Perl.
The negation of 0 could be bitwise negation (all bits flipped to 1), or we could just set the last bit to 1. The results would usually be decoded as the integers -1 and 1 respectively, but the latter is more energy-efficient.
Pragmatic Perspective
It is convenient to think of all numbers but zero as true when we deal with counts:
my $wordcount = ...;
if ($wordcount) {
say "We found $wordcount words";
} else {
say "There were no words";
}
or
say "The array is empty" unless #array; # notice scalar context
A pragmatic language like Perl will likely consider zero to be false.
Mathematical Perspective
There is no reason for any number to be false, every number is a well-defined entity. Truth or falseness emerges solely through predicates, expressions which can be true or false. Only this truth value can be negated. E.g.
¬(x ≤ y) where x = 2, y = 3
is false. Many languages which have a strong foundation in maths won't consider anything false but a special false value. In Lisps, '() or nil is usually false, but 0 will usually be true. That is, a value is only true if it is not nil!
In such mathematical languages, !3 == 0 is likely a type error.
Re: Beers
Beers are good. Any number of beers are good, as long as you have one:
my $beers = ...;
if (not $beers) {
say "Another one!";
} else {
say "Aaah, this is good.";
}
Boolification of a beer-counting variable just tells us if you have any beers. Consider !! to be a boolification operator:
my $enough_beer = !! $beers;
The boolification doesn't concern itself with the exact amount. But maybe any number ≥ 3 is good. Then:
my $enough_beer = ($beers >= 3);
The negation is not enough beer:
my $not_enough_beer = not($beers >= 3);
or
my $not_enough_beer = not $beers;
fetch_beer() if $not_enough_beer;
Sets
A Perl scalar does not symbolize a whole universe of things. Especially, not 3 is not the set of all entities that are not three. Is the expression 3 a truthy value? Yes. Therefore, not 3 is a falsey value.
The suggested behaviour of 4 == not 3 to be true is likely undesirable: 4 and “all things that are not three” are not equal, the four is just one of many things that are not three. We should write it correctly:
4 != 3 # four is not equal to three
or
not( 4 == 3 ) # the same
It might help to think of ! and not as logical-negation-of, but not as except.
How to teach
It might be worth introducing mathematical predicates: expressions which can be true or false. If we only ever “create” truthness by explicit tests, e.g. length($str) > 0, then your issues don't arise. We can name the results: my $predicate = (1 < 2), but we can decide to never print them out, instead: print $predicate ? "True" : "False". This sidesteps the problem of considering special representations of true or false.
Considering values to be true/false directly would then only be a shortcut, e.g. foo if $x can be considered a shortcut for
foo if defined $x and length($x) > 0 and $x != 0;
Perl is all about shortcuts.
Teaching these shortcuts, and the various contexts of Perl and where they turn up (numeric/string/boolean operators), could be helpful:
List Context
Even-sized List Context
Scalar Context
Numeric Context
String Context
Boolean Context
Void Context
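For example, here is a small sketch (the variable name $f is made up) showing the same false value in three of those contexts:
use strict;
use warnings;
use feature 'say';

my $f = !3;                                # Perl's canonical false value
say "numeric: ", 0 + $f;                   # prints 0
say "string:  [", $f, "]";                 # prints []  (empty string)
say "boolean: ", ($f ? "true" : "false");  # prints false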
as I already said, I don't know other languages well - is !3 == 0 in every language?
Yes. In C (and thus C++), it's the same.
#include <stdio.h>

int main(void) {
    int i = 3;
    int n = !i;    /* logical negation of a non-zero value gives 0 */
    int nn = !n;   /* negating 0 gives 1 */
    printf("!3=%i ; !!3=%i\n", n, nn);
    return 0;
}
Prints (see http://codepad.org/vOkOWcbU )
!3=0 ; !!3=1
how to explain this to my son
Very simple. !3 means "the opposite of some non-false value", which is of course false. This is called "context": in the Boolean context imposed by the negation operator, "3" is NOT a number, it's a statement of true/false.
The result is also not a "zero" but merely Perl's convenient representation of false, which turns into a zero if used in a numeric context (but an empty string if used in a string context - see the difference between 0 + !3 and !3 . "a").
The Boolean context is just a special kind of scalar context where no conversion to a string or a number is ever performed. (perldoc perldata)
why perl has this design, so why !0 is everytime 1
See above. Among other likely reasons (though I don't know if that was Larry's main reason), C has the same logic and Perl took a lot of its syntax and ideas from C.
For a VERY good underlying technical detail, see the answers here: " What do Perl functions that return Boolean actually return " and here: " Why does Perl use the empty string to represent the boolean false value? "
Is there something "behind" this that requires !0 to be not just any random (non-zero) number, but exactly 1?
Nothing aside from simplicity of implementation. It's easier to produce a "1" than a random number.
If you're asking the different question of "why is it 1 instead of the original number that was negated to get 0", the answer to that is simple: by the time the Perl interpreter gets to negate that zero, it no longer knows/remembers that the zero was the result of "!3" as opposed to some other expression that evaluated to zero/false.
If you want to test that a number is not 3, then use this:
my_variable != 3;
Using the syntax !3, since ! is a boolean operator, first converts 3 into a boolean (even though Perl may not have an official boolean type, it still works this way); since 3 is non-zero, it is converted to the equivalent of true. Then !true yields false, which, when converted back in an integer context, gives 0. Continuing with that logic shows how !!3 converts 3 to true, which is then inverted to false, then inverted again back to true, and, if this value is used in an integer context, converted to 1. This is true of most modern programming languages (although maybe not some of the more logic-centered ones), though the exact syntax may vary somewhat depending on the language...
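You can watch that two-step logic from the command line; the 0 + ... below just forces numeric context so the false value prints as 0 rather than as the empty string:
perl -E 'say 0 + !3'     # 3 is true, so !3 is false, which is 0 in numeric context
0
perl -E 'say 0 + !!3'    # negating again gives true, which is 1
1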
Logically negating a false value requires some value be chosen to represent the resulting true value. "1" is as good a choice as any. I would say it is not important which value is returned (or conversely, it is important that you not rely on any particular true value being returned).