Please explain perl operator associativity - perl

The documentation explains Perl operator precedence and associativity. I am interested in the -> and * operators. Both have left associativity.
I have the following example, and it seems that -> has right associativity, not left as the documentation states:
my @args;
sub call { print "call\n"; shift @args }
sub test { print "test\n"; 1 }
sub t4 { print "t4\n"; 4 }
@args=( \&test, 2, 3 );
call()->( @args, t4 );
sub m1 { print "m1\n"; 2 }
sub m2 { print "m2\n"; 4 }
m1()*(@args, m2);
The output is:
t4
call
test
m1
m2
Because -> has left associativity, I expected the following output:
call
t4
test
m1
m2
What did I miss? Why is t4 called before call?

You are asking about operand evaluation order (the order in which operands are evaluated), not operator associativity (the order in which operators of the same precedence are evaluated).
The operand evaluation order is undefined (or at least undocumented) for many operators, including ->. You will probably find it consistent in practice, but you shouldn't rely on that.
You can use statement breaks to force the desired order:
my $sub = call();
$sub->( @args, t4 );
This would also work:
$_->( @args, t4 ) for call();
Operator Associativity
Associativity is the order in which operators of the same precedence are evaluated.
For example, * and / have the same precedence, so associativity determines that
3 / 4 * 5 * 6
is equivalent to
( ( 3 / 4 ) * 5 ) * 6 = 22.5
and not
3 / ( 4 * ( 5 * 6 ) ) = 0.025
As you can see, associativity can have a direct effect on the outcome of the expression.
Operand Evaluation Order
Operand evaluation order is the order in which operands are evaluated.
For example, operand evaluation order determines whether 1+2 is evaluated before 3+4, or vice-versa in
( 1 + 2 ) * ( 3 + 4 )
Unless evaluating an operand has a side effect (e.g. changing a variable or printing to the screen), operand evaluation order has no effect on the outcome of the expression. As such, some languages leave it undefined to allow for optimization.
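For instance, here's a minimal Perl sketch (the sub names are made up) where the result is identical either way, but the printed trace would expose which operand was evaluated first:
use strict;
use warnings;

sub lhs { print "lhs\n"; 2 }   # printing is the observable side effect
sub rhs { print "rhs\n"; 3 }

my $product = lhs() * rhs();   # 6 regardless of evaluation order
print "$product\n";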
In Perl, it's undefined (or at least undocumented) for many operators. It's only documented for the following:
The comma operator (, and =>): left to right
Binary boolean operators (and, or, &&, || and //): left, then maybe right[1]
Conditional operators (? :): left, then either second or third[1]
List and scalar assignment operators (=): right, then left[2]
Binding operators (=~ and !~): left, then right[3]
[1] Required for short-circuited evaluation.
[2] Required for my $x = $x;.
[3] Required to function at all.
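For example, the documented "left, then maybe right" order for && is exactly what makes short-circuiting observable (again with made-up sub names):
use strict;
use warnings;

sub cheap_false { print "cheap_false\n"; 0 }
sub expensive   { print "expensive\n";   1 }

my $r = cheap_false() && expensive();   # prints only "cheap_false";
                                        # expensive() is never called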

Associativity is a rule that tells you how to parse an expression where the same operator appears twice, or more than one operator with the same precedence level appears. For example, associativity determines that:
foo()->bar()->baz()
is equivalent to
(foo()->bar())->baz() # left associative
and not
foo()->(bar()->baz()) # right associative
Since your code only has one -> operator in it, the associativity of that operator is irrelevant.
You seem to be looking for a rule about the order of evaluation of operands to the -> operator. As far as I know, there is no guaranteed order of evaluation for most operators in perl (much like C). If the order matters, you should split the expression into multiple statements with an explicit temporary variable:
my $tmp = call();
$tmp->( @args, t4 );

Associativity describes the order that multiple operators of equal precedence are handled. It has nothing to do with whether the left arguments or the right arguments are processed first. -> being left associative means that a->b->c is processed as (a->b)->c, not as a->(b->c).

Related

Calculating the e number using Raku

I'm trying to calculate the e constant (AKA Euler's Number) from the formula e = 1/0! + 1/1! + 1/2! + 1/3! + ⋯
In order to calculate the factorial and division in one shot, I wrote this:
my @e = 1, { state $a=1; 1 / ($_ * $a++) } ... *;
say reduce * + * , @e[^10];
But it didn't work out. How to do it correctly?
I analyze your code in the section Analyzing your code. Before that I present a couple of fun sections of bonus material.
One liner One letter1
say e; # 2.718281828459045
"A treatise on multiple ways"2
Click the above link to see Damian Conway's extraordinary article on computing e in Raku.
The article is a lot of fun (after all, it's Damian). It's a very understandable discussion of computing e. And it's a homage to Raku's bicarbonate reincarnation of the TIMTOWTDI philosophy espoused by Larry Wall.3
As an appetizer, here's a quote from about halfway through the article:
Given that these efficient methods all work the same way—by summing (an initial subset of) an infinite series of terms—maybe it would be better if we had a function to do that for us. And it would certainly be better if the function could work out by itself exactly how much of that initial subset of the series it actually needs to include in order to produce an accurate answer...rather than requiring us to manually comb through the results of multiple trials to discover that.
And, as so often in Raku, it’s surprisingly easy to build just what we need:
sub Σ (Unary $block --> Numeric) {
    (0..∞).map($block).produce(&[+]).&converge
}
Analyzing your code
Here's the first line, generating the series:
my @e = 1, { state $a=1; 1 / ($_ * $a++) } ... *;
The closure ({ code goes here }) computes a term. A closure has a signature, either implicit or explicit, that determines how many arguments it will accept. In this case there's no explicit signature. The use of $_ (the "topic" variable) results in an implicit signature that requires one argument that's bound to $_.
The sequence operator (...) repeatedly calls the closure on its left, passing the previous term as the closure's argument, to lazily build a series of terms until the endpoint on its right, which in this case is *, shorthand for Inf aka infinity.
The topic in the first call to the closure is 1. So the closure computes and returns 1 / (1 * 1) yielding the first two terms in the series as 1, 1/1.
The topic in the second call is the value of the previous one, 1/1, i.e. 1 again. So the closure computes and returns 1 / (1 * 2), extending the series to 1, 1/1, 1/2. It all looks good.
The next closure computes 1 / (1/2 * 3) which is 0.666667. That term should be 1 / (1 * 2 * 3). Oops.
Making your code match the formula
Your code is supposed to match the formula for e as the sum of the reciprocals of the factorials: e = 1/0! + 1/1! + 1/2! + 1/3! + ⋯
In this formula, each term is computed based on its position in the series. The kth term in the series (where k=0 for the first 1) is just factorial k's reciprocal.
(So it's got nothing to do with the value of the prior term. Thus $_, which receives the value of the prior term, shouldn't be used in the closure.)
Let's create a factorial postfix operator:
sub postfix:<!> (\k) { [×] 1 .. k }
(× is an infix multiplication operator, a nicer looking Unicode alias of the usual ASCII infix *.)
That's shorthand for:
sub postfix:<!> (\k) { 1 × 2 × 3 × .... × k }
(I've used pseudo metasyntactic notation inside the braces to denote the idea of including as many terms as required.)
More generally, putting an infix operator op in square brackets at the start of an expression forms a composite prefix operator that is the equivalent of a reduce call using &[op]. See the Reduction metaoperator docs for more info.
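A quick illustration of that equivalence (using only standard builtins):
say [×] 1 .. 5;           # 120
say reduce &[×], 1 .. 5;  # 120, the longhand spelling of the same reduction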
Now we can rewrite the closure to use the new factorial postfix operator:
my @e = 1, { state $a=1; 1 / $a++! } ... *;
Bingo. This produces the right series.
... until it doesn't, for a different reason. The next problem is numeric accuracy. But let's deal with that in the next section.
A one liner derived from your code
Maybe compress the three lines down to one:
say [+] .[^10] given 1, { 1 / [×] 1 .. ++$ } ... Inf
.[^10] applies to the topic, which is set by the given. (^10 is shorthand for 0..9, so the above code computes the sum of the first ten terms in the series.)
I've eliminated the $a from the closure computing the next term. A lone $ is the same as (state $), an anonymous state scalar. I made it a pre-increment instead of post-increment to achieve the same effect as you did by initializing $a to 1.
We're now left with the final (big!) problem, pointed out by you in a comment below.
Provided neither of its operands is a Num (a float, and thus approximate), the / operator normally returns a 100% accurate Rat (a limited precision rational). But if the denominator of the result exceeds 64 bits then that result is converted to a Num, which trades accuracy for performance, a tradeoff we don't want to make here. We need to take that into account.
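Here's a small sketch of that silent downgrade (assuming stock Rakudo behaviour; the exact threshold aside, the type change is the point):
say (1 / 10**10).^name;   # Rat (the denominator still fits in 64 bits)
say (1 / 10**30).^name;   # Num (the denominator overflowed, so precision is lost)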
To specify unlimited precision as well as 100% accuracy, simply coerce the operation to use FatRats. To do this correctly, just make (at least) one of the operands a FatRat (and none of the others a Num):
say [+] .[^500] given 1, { 1.FatRat / [×] 1 .. ++$ } ... Inf
I've verified this to 500 decimal digits. I expect it to remain accurate until the program crashes due to exceeding some limit of the Raku language or Rakudo compiler. (See my answer to Cannot unbox 65536 bit wide bigint into native integer for some discussion of that.)
Footnotes
1 Raku has a few important mathematical constants built in, including e, i, and pi (and its alias π). Thus one can write Euler's Identity in Raku somewhat like it looks in math books. With credit to RosettaCode's Raku entry for Euler's Identity:
# There's an invisible character between <> and i⁢π character pairs!
sub infix:<⁢> (\left, \right) is tighter(&infix:<**>) { left * right };
# Raku doesn't have built in symbolic math so use approximate equal
say e**i⁢π + 1 ≅ 0; # True
2 Damian's article is a must read. But it's just one of several admirable treatments that are among the 100+ matches for a google for 'raku "euler's number"'.
3 See TIMTOWTDI vs TSBO-APOO-OWTDI for one of the more balanced views of TIMTOWTDI written by a fan of python. But there are downsides to taking TIMTOWTDI too far. To reflect this latter "danger", the Perl community coined the humorously long, unreadable, and understated TIMTOWTDIBSCINABTE -- There Is More Than One Way To Do It But Sometimes Consistency Is Not A Bad Thing Either, pronounced "Tim Toady Bicarbonate". Strangely enough, Larry applied bicarbonate to Raku's design and Damian applies it to computing e in Raku.
There are fractions in $_. Thus you need 1 / (1/$_ * $a++), or rather $_ / $a++.
In Raku you could do this calculation step by step:
1.FatRat, 1, 2, 3 ... *    # 1 1 2 3 4 5 6 7 8 9 ...
andthen .produce: &[*]     # 1 1 2 6 24 120 720 5040 40320 362880
andthen .map: 1/*          # 1 1 1/2 1/6 1/24 1/120 1/720 1/5040 1/40320 1/362880 ...
andthen .produce: &[+]     # 1 2 2.5 2.666667 2.708333 2.716667 2.718056 2.718254 2.718279 2.718282 ...
andthen .[50].say          # 2.71828182845904523536028747135266249775724709369995957496696762772

Does pattern match in Raku have guard clause?

In scala, pattern match has guard pattern:
val ch = 23
val sign = ch match {
  case _: Int if 10 < ch => 65
  case '+' => 1
  case '-' => -1
  case _ => 0
}
Is the Raku version like this?
my $ch = 23;
given $ch {
    when Int and * > 10 { say 65 }
    when '+' { say 1 }
    when '-' { say -1 }
    default  { say 0 }
}
Is this right?
Update: as jjmerelo suggested, i post my result as follows, the signature version is also interesting.
multi washing_machine(Int \x where * > 10 ) { 65 }
multi washing_machine(Str \x where '+' ) { 1 }
multi washing_machine(Str \x where '-' ) { -1 }
multi washing_machine(\x) { 0 }
say washing_machine(12); # 65
say washing_machine(-12); # 0
say washing_machine('+'); # 1
say washing_machine('-'); # -1
say washing_machine('12'); # 0
say washing_machine('洗衣机'); # 0
TL;DR I've written another answer that focuses on using when. This answer focuses on using an alternative to that which combines Signatures, Raku's powerful pattern matching construct, with a where clause.
"Does pattern match in Raku have guard clause?"
Based on what little I know about Scala, some/most Scala pattern matching actually corresponds to using Raku signatures. (And guard clauses in that context are typically where clauses.)
Quoting Martin Odersky, Scala's creator, from The Point of Pattern Matching in Scala:
instead of just matching numbers, which is what switch statements do, you match what are essentially the creation forms of objects
Raku signatures cover several use cases (yay, puns). These include the Raku equivalent of the functional programming paradigmatic use in which one matches values' or functions' type signatures (cf Haskell) and the object oriented programming paradigmatic use in which one matches against nested data/objects and pulls out desired bits (cf Scala).
Consider this Raku code:
class body   { has ( $.head, @.arms, @.legs ) }   # Declare a class (object structure).
class person { has ( $.mom, $.body, $.age ) }     # And another that includes first.

multi person's-age-and-legs                       # Declare a function that matches ...
  ( person                                        # ... a person ...
    ( :$age where * > 40,                         # ... whose age is over 40 ...
      :$body ( :@legs, *% ),                      # ... noting their body's legs ...
      *% ) )                                      # ... and ignoring other attributes.
  { say "$age {+@legs}" }                         # Display age and number of legs.

my $age = 42;                                     # Let's demo handy :$var syntax below.

person's-age-and-legs                             # Call function declared above ...
  person                                          # ... passing a person.
    .new:                                         # Explicitly construct ...
      :$age,                                      # ... a middle aged ...
      body => body.new:
        :head,
        :2arms,
        legs => <left middle right>               # ... three legged person.

# Displays "42 3"
Notice where there's a close equivalent to a Scala pattern matching guard clause in the above -- where * > 40. (This can be nicely bundled up into a subset type.)
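For instance, a sketch of bundling that guard into a subset type (the name OverForty is made up):
subset OverForty of Int where * > 40;

sub check(OverForty $age) { say "$age passes the guard" }

check 42;     # 42 passes the guard
# check 30;   # would fail the OverForty constraint at call time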
We could define other multis that correspond to different cases, perhaps pulling out the "names" of the person's legs ('left', 'middle', etc.) if their mom's name matches a particular regex or whatever -- you hopefully get the picture.
A default case (multi) that doesn't bother to deconstruct the person could be:
multi person's-age-and-legs (|otherwise)
{ say "let's not deconstruct this person" }
(In the above we've prefixed a parameter in a signature with | to slurp up all remaining structure/arguments passed to a multi. Given that we do nothing with that slurped structure/data, we could have written just (|).)
Unfortunately, I don't think signature deconstruction is mentioned in the official docs. Someone could write a book about Raku signatures. (Literally. Which of course is a great way -- the only way, even -- to write stuff. My favorite article that unpacks a bit of the power of Raku signatures is Pattern Matching and Unpacking from 2013 by Moritz. Who has authored Raku books. Here's hoping.)
Scala's match/case and Raku's given/when seem simpler
Indeed.
As @jjmerelo points out in the comments, using signatures means there's a multi foo (...) { ...} for each and every case, which is much heavier syntactically than case ... => ....
In mitigation:
Simpler cases can just use given/when, just like you wrote in the body of your question;
Raku will presumably one day get non-experimental macros that can be used to implement a construct that looks much closer to Scala's match/case construct, eliding the repeated multi foo (...)s.
From what I see in this answer, that's not really an implementation of a guard pattern in the same sense Haskell has them. However, Perl 6 does have guards in the same sense Scala has: using default patterns combined with ifs.
The Haskell to Perl 6 guide does have a section on guards. It hints at the use of where as guards; so that might answer your question.
TL;DR You've encountered what I'd call a WTF?!?: when Type and ... fails to check the and clause. This answer talks about what's wrong with the when and how to fix it. I've written another answer that focuses on using where with a signature.
If you want to stick with when, I suggest this:
when (condition when Type) { ... } # General form
when (* > 10 when Int) { ... } # For your specific example
This is (imo) unsatisfactory, but it does first check the Type as a guard, and then the condition if the guard passes, and works as expected.
"Is this right?"
No.
given $ch {
    when Int and * > 10 { say 65 }
}
This code says 65 for any given integer, not just one over 10!
WTF?!? Imo we should mention this on Raku's trap page.
We should also consider filing an issue asking Rakudo to warn or fail to compile if a when construct starts with a compile-time constant value that's a type object and continues with and (or &&, andthen, etc.). It could either fail at compile time or display a warning.
Here's the best option I've been able to come up with:
when (* > 10 when Int) { say 65 }
This takes advantage of the statement modifier (aka postfix) form of when inside the parens. The Int is checked before the * > 10.
This was inspired by Brad++'s new answer which looks nice if you're writing multiple conditions against a single guard clause.
I think my variant is nicer than the other options I've come up with in previous versions of this answer, but still unsatisfactory inasmuch as I don't like the Int coming after the condition.
Ultimately, especially if/when RakuAST lands, I think we will experiment with new pattern matching forms. Hopefully we'll come up with something that nicely eliminates this wart.
Really? What's going on?
We can begin to see the underlying problem with this code:
.say for ('TrueA' and 'TrueB'),
         ('TrueB' and 'TrueA'),
         (Int and 42),
         (42 and Int)
displays:
TrueB
TrueA
(Int)
(Int)
The and construct boolean evaluates its left hand argument. If that evaluates to False, it returns it, otherwise it returns its right hand argument.
In the first line, 'TrueA' boolean evaluates to True so the first line returns the right hand argument 'TrueB'.
In the second line 'TrueB' evaluates to True so the and returns its right hand argument, in this case 'TrueA'.
But what happens in the third line? Well, Int is a type object. Type objects boolean evaluate to False! So the and duly returns its left hand argument which is Int (which the .say then displays as (Int)).
This is the root of the problem.
(To continue to the bitter end, the compiler evaluates the expression Int and * > 10; immediately returns the left hand side argument to and which is Int; then successfully matches that Int against whatever integer is given -- completely ignoring the code that looks like a guard clause (the and ... bit).)
If you were using such an expression as the condition of, say, an if statement, the Int would boolean evaluate to False and you'd get a false negative. Here you're using a when which uses .ACCEPTS which leads to a false positive (it is an integer but it's any integer, disregarding the supposed guard clause). This problem quite plausibly belongs on the traps page.
Years ago I wrote a comment mentioning that you had to be more explicit about matching against $_ like this:
my $ch = 23;
given $ch {
    when $_ ~~ Int and $_ > 10 { say 65 }
    when '+' { say 1 }
    when '-' { say -1 }
    default  { say 0 }
}
After coming back to this question, I realized there was another way.
when can safely be inside of another when construct.
my $ch = 23;
given $ch {
    when Int:D {
        when $_ > 10 { say 65 }
        proceed
    }
    when '+' { say 1 }
    when '-' { say -1 }
    default  { say 0 }
}
Note that the inner when will succeed out of the outer one, which will succeed out of the given block.
If the inner when doesn't match we want to proceed on to the outer when checks and default, so we call proceed.
This means that we can also group multiple when statements inside of the Int case, saving having to do repeated type checks. It also means that those inner when checks don't happen at all if we aren't testing an Int value.
when Int:D {
    when $_ < 10 { say 5 }
    when 10      { say 10 }
    when $_ > 10 { say 65 }
}

What is the difference between Logical and Comparison operators in MySQL?

Studying MySQL, I ran into some confusion about how operators are categorized.
NOT is a Logical operator (Details)
while NOT LIKE, LIKE, IS NOT, IS NULL are Comparison operators. (Details)
I'm unable to grasp the real difference.
A logical operator's operands are booleans, whereas comparison operators may have operands of any type.
The comparison operators test the relationship between their operands according to an ordering over the operands' type set, and return a boolean result: 1 < 2, 'hello' > 'aardvark', CURRENT_DATE = '2013-12-30', 'peanut' LIKE 'pea%', 'walnut' NOT LIKE 'pea%', '' IS NOT NULL, etc.
Booleans, on the other hand, don't have an "ordering" by which such relationships can be established*—it's pretty meaningless, for example, to say that FALSE < TRUE. Instead, we think about them in terms of their "truth" and the operators which act upon them in terms of their logical validity: TRUE AND TRUE, FALSE XOR TRUE, NOT FALSE, etc.
Of course, there are many situations where the same logical result can be expressed in multiple ways—for example:
1 < 2 is logically the same as both 2 > 1 and NOT (1 >= 2)
'walnut' NOT LIKE 'pea%' is logically the same as NOT ('walnut' LIKE 'pea%')
'' IS NOT NULL is logically the same as NOT ('' IS NULL)
However, negating a comparison involves a logical operation (negation) in addition to the comparison operation, whereas a single comparison operation that immediately yields the desired result is typically more concise/readable and may be easier for the computer to optimise.
* Some languages (such as MySQL) don't have real boolean types, instead using zero and non-zero integers to represent FALSE and TRUE respectively. Consequently an ordering does exist over their booleans, albeit that doesn't affect the conceptual distinction.
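Putting the two categories side by side, here's a hypothetical query (the table and column names are invented) in which each comparison operator yields a boolean and the logical operators combine those booleans:
-- Hypothetical table and columns, for illustration only.
SELECT *
FROM   products
WHERE  (price > 10 AND price <= 100)             -- comparisons joined by logical AND
   OR  (name LIKE 'pea%' AND discontinued IS NULL);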
Simply put, NOT LIKE, LIKE, IS NOT, and IS NULL are used to compare two values. For example:
SELECT * FROM `table_name` WHERE `name` NOT LIKE 'examplename';
SELECT * FROM `table_name` WHERE `key_col` IS NULL;
NOT returns the logical inverse of the value. For the most part, it's the same as rewriting the condition with !=. Example:
SELECT * FROM `student` WHERE NOT (student_id = 1);
The operator returns 1 if the operand is 0 and returns 0 if the operand is nonzero. It returns NULL if the operand is NULL.
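You can check that truth table directly:
SELECT NOT 0, NOT 1, NOT NULL;   -- returns 1, 0, NULL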
Visit : MySQL Operators
Visit : NOT ! Operator
In simple words:
A comparison operator compares two values according to the operator used, e.g. =, !=, >, <, >=, <=, and so on.
A logical operator, on the other hand, takes the conditions (one or more) produced by comparison operators and combines them to yield the final result.

What's the difference between | and || in MATLAB?

What is the difference between the | and || logical operators in MATLAB?
I'm sure you've read the documentation for the short-circuiting operators, and for the element-wise operators.
One important difference is that element-wise operators can operate on arrays whereas the short-circuiting operators apply only to scalar logical operands.
But probably the key difference is the issue of short-circuiting. For the short-circuiting operators, the expression is evaluated from left to right and as soon as the final result can be determined for sure, then remaining terms are not evaluated.
For example, consider
x = a && b
If a evaluates to false, then we know that a && b evaluates to false irrespective of what b evaluates to. So there is no need to evaluate b.
Now consider this expression:
NeedToMakeExpensiveFunctionCall && ExpensiveFunctionCall
where we imagine that ExpensiveFunctionCall takes a long time to evaluate. If we can perform some other, cheap, test that allows us to skip the call to ExpensiveFunctionCall, then we can avoid calling ExpensiveFunctionCall.
So, suppose that NeedToMakeExpensiveFunctionCall evaluates to false. In that case, because we have used short-circuiting operators, ExpensiveFunctionCall will not be called.
In contrast, if we used the element-wise operator and wrote the function like this:
NeedToMakeExpensiveFunctionCall & ExpensiveFunctionCall
then the call to ExpensiveFunctionCall would never be skipped.
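To make that concrete, here's a minimal sketch as a function file (the names are made up); it stays outside of any if/while condition so that & evaluates both operands:
% sketch.m
function sketch()
    needCall = false;

    a = needCall && expensiveCheck();   % expensiveCheck() is never called
    b = needCall &  expensiveCheck();   % expensiveCheck() is called anyway

    fprintf('a = %d, b = %d\n', a, b);  % both are 0 (false)
end

function tf = expensiveCheck()
    disp('expensiveCheck was called');  % the side effect reveals the call
    tf = true;
end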
In fact the MATLAB documentation, which I do hope you have read, includes an excellent example that illustrates the point very well:
x = (b ~= 0) && (a/b > 18.5)
In this case we cannot perform a/b if b is zero. Hence the test for b ~= 0. The use of the short-circuiting operator means that we avoid calculating a/b when b is zero and so avoid the run-time error that would arise. Clearly the element-wise logical operator would not be able to avoid the run-time error.
For a longer discussion of short-circuit evaluation, refer to the Wikipedia article on the subject.
Logical Operators
MATLAB offers three types of logical operators and functions:
| is Element-wise — operates on corresponding elements of logical arrays.
Example:
Vector inputs A and B:
A = [0 1 1 0 1];
B = [1 1 0 0 1];
A | B   % returns [1 1 1 0 1]
|| is Short-circuit — operates on scalar logical expressions.
Example:
|| : Returns logical 1 (true) if either input, or both, evaluate to true, and logical 0 (false) if they do not.
Operand: logical expressions containing scalar values.
A || B (B is only evaluated if A is false)
A = 1;
B = 0;
C = (A || (B == 1));
C is 1; because A is true, the right-hand expression B == 1 is never evaluated, and B is still 0 after this expression.
The third type is Bit-wise — these operate on corresponding bits of integer values or arrays.
reference link
|| is used for scalar inputs
| takes array input in if/while statements
From the source:
Always use the && and || operators when short-circuiting is required. Using the elementwise operators (& and |) for short-circuiting can yield unexpected results.
Short-circuit || means that operands are evaluated only if necessary.
In the expression expr1 || expr2, if expr1 evaluates to TRUE there is no need to evaluate the second operand: the result will always be TRUE. If you have a long chain of short-circuit operators, A || B || C || D, and A evaluates to true, the others won't be evaluated.
If you substitute the element-wise | to get A | B | C | D, then all operands will be evaluated regardless of the previous ones.
| represents OR as a logical operator. || is also a logical operator, called a short-circuit OR.
The most important advantage of short-circuit operators is that you can use them to evaluate an expression only when certain conditions are satisfied. For example, you want to execute a function only if the function file resides on the current MATLAB path. Short-circuiting keeps the following code from generating an error when the file, myfun.m, cannot be found:
comp = (exist('myfun.m') == 2) && (myfun(x) >= y)
Similarly, this statement avoids attempting to divide by zero:
x = (b ~= 0) && (a/b > 18.5)
You can also use the && and || operators in if and while statements to take advantage of their short-circuiting behavior:
if (nargin >= 3) && (ischar(varargin{3}))

What's the difference between & and && in MATLAB?

What is the difference between the & and && logical operators in MATLAB?
The single ampersand & is the logical AND operator. The double ampersand && is again a logical AND operator that employs short-circuiting behaviour. Short-circuiting just means the second operand (right hand side) is evaluated only when the result is not fully determined by the first operand (left hand side).
A & B (A and B are evaluated)
A && B (B is only evaluated if A is true)
&& and || take scalar inputs and short-circuit always. | and & take array inputs and short-circuit only in if/while statements. For assignment, the latter do not short-circuit.
See these doc pages for more information.
As already mentioned by others, & is a logical AND operator and && is a short-circuit AND operator. They differ in how the operands are evaluated as well as whether or not they operate on arrays or scalars:
& (AND operator) and | (OR operator) can operate on arrays in an element-wise fashion.
&& and || are short-circuit versions for which the second operand is evaluated only when the result is not fully determined by the first operand. These can only operate on scalars, not arrays.
Both are logical AND operations. The && though, is a "short-circuit" operator. From the MATLAB docs:
They are short-circuit operators in that they evaluate their second operand only when the result is not fully determined by the first operand.
See more here.
& is a logical elementwise operator, while && is a logical short-circuiting operator (which can only operate on scalars).
For example (pardon my syntax).
If..
A = [True True False True]
B = False
A & B = [False False False False]
..or..
B = True
A & B = [True True False True]
For &&, the right operand is only calculated if the left operand is true, and the result is a single boolean value.
x = (b ~= 0) && (a/b > 18.5)
Hope that's clear.
&& and || are short circuit operators operating on scalars. & and | operate on arrays, and use short-circuiting only in the context of if or while loop expressions.
A good rule of thumb when constructing arguments for use in conditional statements (IF, WHILE, etc.) is to always use the &&/|| forms, unless there's a very good reason not to. There are two reasons...
As others have mentioned, the short-circuiting behavior of &&/|| is similar to most C-like languages. That similarity / familiarity is generally considered a point in its favor.
Using the && or || forms forces you to write the full code for deciding your intent for vector arguments. When a = [1 0 0 1] and b = [0 1 0 1], is a&b true or false? I can't remember the rules for MATLAB's &, can you? Most people can't. On the other hand, if you use && or ||, you're FORCED to write the code "in full" to resolve the condition.
Doing this, rather than relying on MATLAB's resolution of vectors in & and |, leads to code that's a little bit more verbose, but a LOT safer and easier to maintain.
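As a concrete sketch using those same vectors, relying on MATLAB's standard rule that an if condition with an array argument counts as true only when all of its elements are nonzero:
a = [1 0 0 1];
b = [0 1 0 1];

disp(a & b)               % element-wise result: 0 0 0 1

if a & b                  % the array condition is true only when ALL
    disp('all set')       %   elements are nonzero, so this branch is skipped
else
    disp('not all set')   % this is what prints
end

% a && b would instead raise an error about needing logical scalar operands,
% forcing you to spell out your intent, e.g. any(a) && any(b) or all(a & b).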