This is a rhetorical question about the uplus function in MATLAB, or its corresponding operator, the unary plus +.
Is there a case where this operator is useful? Even better, is there a case where this operator is necessary?
It is not strictly necessary: a language without a unary plus simply does not allow you to write +1. Obviously you could just write 1, but when importing data that always carries an explicit + or - sign, it is very nice to have.
Searching through some source code, I found a curious use of +:
A=+A
which replaced the code:
if ~isnumeric(A)
    A = double(A);
end
It casts chars and logicals to double, but all numeric data types remain untouched.
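For example, at the MATLAB prompt:

x = +'abc'      % [97 98 99], class double
y = +true       % 1, class double
z = +int8(5)    % int8(5), unchanged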
It can be useful when defining new numeric types.
Suppose you define quaternion and overload uplus:
classdef quaternion
...
end
Then in your code you can write:
x = quaternion(...);
y = [+x, -x];
z = +quaternion.Inf;
t = -quaternion.Inf;
If you don't, you cannot have the same syntax as for other numeric types.
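A minimal sketch of what such an overload could look like (the properties and method bodies here are illustrative, not a real quaternion implementation):

classdef quaternion
    properties
        w = 0; x = 0; y = 0; z = 0;
    end
    methods
        function q = uplus(q)
            % unary plus: return the quaternion unchanged
        end
        function q = uminus(q)
            % unary minus: negate every component
            q.w = -q.w; q.x = -q.x; q.y = -q.y; q.z = -q.z;
        end
    end
end

With uplus defined, expressions like y = [+x, -x] work just as they do for built-in numeric types.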
PS: To the question "is it useful" (in the sense of mandatory for some syntaxes) ... well, I can't find any reason ... but sometimes writing '+x' makes things clearer when reading back the code.
I'm not sure if this fully constitutes "useful" or if it's the best programming practice, but in some cases, one may wish to use the unary + for symmetry/clarity reasons. There's probably a better example, but I'm thinking of something like this:
A = [+1 -1 +1;
     -1 +1 -1;
     +1 -1 +1];
As for the uplus function, it's kind of a NOOP for numeric operations. If one writes a function that requires a function-handle input to specify an operation to perform, it might be useful to have a do-nothing option.
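For example, a routine that takes the operation as a function handle can be handed @uplus as the identity (a small illustrative sketch):

function y = apply_op(op, x)
% apply a caller-supplied elementwise operation to x
y = op(x);
end

apply_op(@uminus, 5)   % returns -5
apply_op(@uplus, 5)    % returns 5: the "do nothing" option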
Lastly, numeric operators can be overloaded for other classes. The uplus function could have more use in other built-in classes or even one you might want to write yourself.
According to this manual page, we can use reduce to perform reductions such as summation (+):
var a = (+ reduce A) / num;
var b = + reduce abs(A);
var c = sqrt(+ reduce A**2);
and maximum value/location:
var (maxVal, maxLoc) = maxloc reduce zip(A, A.domain);
Here, Chapel defines reduce to be an infix operator rather than a function (e.g., reduce( A, + )). IMHO, the latter form seems to be a bit more readable because the arguments are always separated by parentheses. So I am wondering if there is some reason for this choice (e.g., to simplify some parallel syntax) or just a matter of history (convention)?
I'd say the answer is a matter of history / convention. A lot of Chapel's array and domain features were heavily inspired by the ZPL language from the University of Washington, and I believe this syntax was taken reasonably directly from ZPL.
At the time, we didn't have a notion of passing things like functions and operators around in Chapel, which is probably one of the reasons that we didn't consider more of a function-based approach. (Even now, first-class function support in Chapel is still somewhat in its infancy, and I don't believe we have a way to pass operators around).
I'd also say that Chapel is a language that generally favors syntax for key patterns rather than taking more of a "make everything look like a function / method call" approach (e.g., ranges are supported via a literal syntax and several key operators rather than using an object type with methods).
None of this is to say that the choice was obviously right or couldn't be reconsidered.
I'm trying to make an n-nested loop method in Julia:
using Base.Cartesian  # for @nloops and @nexprs

function fun(n::Int64)
    @nloops n i d->1:3 begin
        @nexprs n j->(print(i_j))
    end
end
But the @nloops definition is limited to
_nloops(::Int64, ::Symbol, ::Expr, ::Expr...)
and I get a "no method matching" error for
_nloops(::Symbol, ::Symbol, ::Expr, ::Expr)
(n reaches the macro as a symbol, not as an integer literal).
Is there any way to make this work? Any help greatly appreciated
EDIT:
What I ended up doing was using the combinations method
For my problem, I needed to get all k-combinations of indices to pull values from an array, so the loops would have to look like:
for i_1 in 1:100
    for i_2 in i_1:100
        ...
            for i_k in i_{k-1}:100
The number of loops needs to be a compile-time constant – a numeric literal, in fact: the code generated for the function body cannot depend on a function argument. Julia's generated functions won't help either since n is just a plain value and not part of the type of any argument. Your best bet for having the number of nested loops depend on a runtime value like n is to use recursion.
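For example, here is a recursive sketch (the names are illustrative) that reproduces the i_k in i_{k-1}:100 pattern from the question's edit:

function nested(k, lo, hi, idx, f)
    # emit k nested loops, each starting where the previous index left off
    if k == 0
        f(idx)
    else
        for i in lo:hi
            nested(k - 1, i, hi, (idx..., i), f)
        end
    end
end

# equivalent to: for i_1 in 1:5, for i_2 in i_1:5, for i_3 in i_2:5
nested(3, 1, 5, (), println)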
In julia-0.4 and above, you can now do this:
function fun(n::Int)
    for I in CartesianRange(ntuple(d->1:3, n))
        @show I
    end
end
In most cases you don't need the Base.Cartesian macros anymore (although there are still some exceptions). It's worth noting that, just as described in StefanKarpinski's answer, this loop will not be "type stable" because n is not a compile-time constant; if performance matters, you can use the "function barrier technique." See http://julialang.org/blog/2016/02/iteration for more information about all topics related to these matters.
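A sketch of that function-barrier idea: move the loop into a helper function so that the loop body gets compiled for the concrete type of the iterator (the names here are illustrative):

function kernel(R)
    # inside this function R has a concrete type, so the loop compiles to fast code
    count = 0
    for I in R
        count += 1
    end
    return count
end

function fun(n::Int)
    R = CartesianRange(ntuple(d -> 1:3, n))  # type of R depends on the runtime value n
    return kernel(R)  # one dynamic dispatch here; the loop itself stays fast
end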
Suppose one needs to select the real solutions after solving some equation.
Is this the correct and optimal way to do it, or is there a better one?
restart;
mu := 3.986*10^5; T:= 8*60*60:
eq := T = 2*Pi*sqrt(a^3/mu):
sol := solve(eq,a);
select(x->type(x,'realcons'),[sol]);
I could not find real as a type, so I used realcons. At first I did this:
select(x->not(type(x,'complex')),[sol]);
which did not work, since in Maple the number 5 is considered complex! So I ended up with no solutions.
type(5,'complex');
(* true *)
Also, I could not find an isreal() type of function (unless I missed one).
Is there a better way to do this that one should use?
update:
To answer the comment below about 5 not being supposed to be complex in Maple:
restart;
type(5,complex);
true
type(5,'complex');
true
interface(version);
Standard Worksheet Interface, Maple 18.00, Windows 7, February
From the help:
The type(x, complex) function returns true if x is an expression of the form
a + I b, where a (if present) and b (if present) are finite and of type realcons.
Your solutions sol are all of type complex(numeric). You can select only the real ones with type,numeric, i.e.
restart;
mu := 3.986*10^5: T:= 8*60*60:
eq := T = 2*Pi*sqrt(a^3/mu):
sol := solve(eq,a);
20307.39319, -10153.69659 + 17586.71839 I, -10153.69659 - 17586.71839 I
select( type, [sol], numeric );
[20307.39319]
By using the multiple-argument calling form of the select command, we can avoid using a custom operator as the first argument. You won't notice it for your small example, but it should be more efficient to do so. Other commands such as map perform similarly, to avoid having to make an additional function call for each individual test.
The types numeric and complex(numeric) cover real and complex integers, rationals, and floats.
The types realcons and complex(realcons) include the previous, but also allow for an application of evalf during the test. So Int(sin(x),x=1..3), Pi, and sqrt(2) are all of type realcons since, following an application of evalf, they become floats of type numeric.
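For example (with x unassigned):

type(sqrt(2), numeric);
false
type(sqrt(2), realcons);
true
type(Int(sin(x), x=1..3), realcons);
true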
The above is about types. There are also properties to consider. Types are properties, but not necessarily vice versa. There is a real property, but no real type. The is command can test for a property, and while it is often used for mixed numeric-symbolic tests under assumptions (on the symbols) it can also be used in tests like yours.
select( is, [sol], real );
[20307.39319]
It is less efficient to use is for your example. If you know that you have a collection of (possibly non-real) floats then type,numeric should be an efficient test.
And, just to muddy the waters... there is a type nonreal.
remove( type, [sol], nonreal );
[20307.39319]
One possibility is to restrict the domain before the calculation takes place.
Here is an explanation on the Maplesoft website regarding restricting the domain:
4 Basic Computation
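For instance, one way to restrict the domain up front is the RealDomain package, whose version of solve returns only the real solutions (a sketch based on the question's own example):

restart;
with(RealDomain):
mu := 3.986*10^5: T := 8*60*60:
solve(T = 2*Pi*sqrt(a^3/mu), a);
20307.39319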
UPD: Basically, according to this and that, 5 is NOT considered complex in Maple, so there might be some bug/error/mistake (try checking what may be wrong there).
For instance, try putting complex without quotes.
Your way seems very logical according to this.
UPD2: According to the Maplesoft website, all the type checks are done with the type() function, so there is no separate isreal() function.
Say I have an object of type A. Consider this case for any function of type A -> A (i.e., it takes an object of type A and returns another object of type A):
foo = func(foo)
Here, the simplest behavior would be for the result of func(foo) to be copied into foo.
Is it possible to optimize this so that foo gets modified in place by func?
There are no constraints on the language used. What I want to know is what constraints and properties the language must have to enable such an optimization. Are there any existing languages which perform such an optimization?
Example (in pseudocode):

type Matrix = List<List<int>>

Matrix rotate90Deg(Matrix x):
    Matrix result(x.columns, x.rows)  # assume a constructor taking the number of rows and columns
    for (int i = 0; i < x.rows; i++):
        for (int j = 0; j < x.columns; j++):
            result[i][j] = x[j][i]
    return result
Matrix a = [[1,2,3],[4,5,6],[7,8,9]]
a = rotate90Deg(a)
Here, is it possible to optimize the code so that it doesn't allocate memory for a new matrix (result) and instead just modifies the original matrix passed in?
First of all, you have to realize that some operations inherently cannot be computed in place. Matrix-matrix multiplication is an example of this, and rotate90Deg would fall under this category, since such an operation is effectively a matrix multiplication by an appropriate permutation matrix.
Now as for your example, you actually coded up a matrix transpose function. Matrix transpose can be done in-place since you are swapping pairs of numbers, but I doubt that any compiler can automatically detect this and optimize it for you. Indeed, there are many, many tricks one can use to make matrix transpose cache-friendly and gain huge performance increases. Nevertheless, with a naive implementation, you will almost certainly end up with something very similar to what Aditya Kumar describes in his answer.
As I have foreshadowed by using the word "naive" earlier, programmers can coax the compiler to inline lots and lots of things in extremely optimized ways through advanced templating and other meta-programming techniques. (At least in C++, and maybe other languages that allow you to overload operator =.) For anyone interested in a case study of how this is done and what is involved, take a look at the Eigen matrix library, and how it handles a simple operation like u = v + w; where the three variables are all matrices of floats. Following is a brief overview of the key points.
A naive implementation would overload operator+ to return a temporary and operator= to copy that temporary to the result. Of course, in C++11 it is pretty easy to avoid the final copy during assignment by way of move constructors, but you would still have unnecessary temporaries with anything a little more complex involving multiple operators on the right-hand side, like u = 3.15f * u.transposed() + 5.0f;, since each operator/method would return a temporary, and that temporary would have to be looped over in order to process the next operator.
Long story short, what Eigen does is rather than perform each operation when the corresponding function call occurs, the calls return a templated functor of sorts which merely describes the operation that needs to take place, and all the actual work ends up happening in operator =, thus enabling the compiler to emit a single, inlined loop for traversing the data only once and doing the operation truly in-place.
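To make the mechanism concrete, here is a heavily simplified sketch of the expression-template idea; it is illustrative only and nothing like Eigen's actual implementation:

#include <cstddef>
#include <vector>

// CRTP base so that our operator+ only matches our own expression types.
template <typename E>
struct Expr {
    const E& self() const { return static_cast<const E&>(*this); }
};

// Represents "l + r" without computing it; evaluation is deferred.
template <typename L, typename R>
struct Sum : Expr<Sum<L, R>> {
    const L& l;
    const R& r;
    Sum(const L& l_, const R& r_) : l(l_), r(r_) {}
    float operator[](std::size_t i) const { return l[i] + r[i]; }
    std::size_t size() const { return l.size(); }
};

struct Vec : Expr<Vec> {
    std::vector<float> data;
    explicit Vec(std::size_t n) : data(n) {}
    float  operator[](std::size_t i) const { return data[i]; }
    float& operator[](std::size_t i)       { return data[i]; }
    std::size_t size() const { return data.size(); }

    // All the real work happens in operator=: one fused loop, no temporaries.
    template <typename E>
    Vec& operator=(const Expr<E>& e) {
        for (std::size_t i = 0; i < size(); ++i) data[i] = e.self()[i];
        return *this;
    }
};

// operator+ just builds a lightweight node describing the operation.
template <typename L, typename R>
Sum<L, R> operator+(const Expr<L>& l, const Expr<R>& r) {
    return Sum<L, R>(l.self(), r.self());
}

int main() {
    Vec u(3), v(3), w(3);
    u = v + w;      // expands to one loop: u[i] = v[i] + w[i]
    u = u + v + w;  // still a single loop; no temporary Vec is ever built
}

Each operator+ call merely records the computation; the single loop in operator= then writes the result directly into u, with no intermediate vector allocated.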
Yes, it is possible, and this optimization is provided by at least C++11 (via inlining).
To explain the optimization a little, consider this example:
foo_t foo;
foo = func(foo); // #1

foo_t func(foo_t foo1) {
    foo_t new_foo;
    // operate on new_foo by using foo1
    return new_foo;
}
There are three instances of foo_t being made:
foo is copied and passed as foo1 to func;
new_foo is created;
new_foo is assigned to foo by copying the contents of new_foo into foo.
All three copies can be eliminated, provided some invariants hold:
foo (the argument passed to the function) is never used later with its original value. This is equivalent to saying that foo is 'dead' at line #1, which is established here because foo is reassigned.
new_foo is local to func, so its lifetime does not extend beyond func. This also holds here: new_foo is created on the stack, and the lifetime of stack objects is bounded by the function in which they were created.
In C++ this can be achieved by inlining the function func. After inlining, the code will basically look like this:

foo_t foo;
foo_t new_foo;
// operate on new_foo by using foo
foo = new_foo;
Although C++ provides inlining as a language feature, almost any optimizing compiler does inlining these days.
Now, whether this extra new_foo will be optimized away depends on what kind of operations you perform on new_foo and foo. For some data types it is trivial: the compiler can do a 'copy propagation' followed by 'dead code elimination' to remove new_foo completely.
Is there an idiomatic way in Matlab to bind the value of an expression to the nth return value of another expression?
For example, say I want an array of indices corresponding to the maximum value of a number of vectors stored in a cell array. I can do that by
function I = max_index(varargin)
[~, I] = max(varargin{:});

cellfun(@max_index, my_data);
But this requires defining a function (max_index) specific to each case in which one wants to select a particular return value in an expression. I can of course define a generic function that does what I want:
function y = nth_return(n,fun,varargin)
[vals{1:n}] = fun(varargin{:});
y = vals{n};
And call it like:
cellfun(@(x) nth_return(2,@max,x), my_data)
Adding such functions, however, makes code snippets less portable and harder to understand. Is there an idiomatic way to achieve the same result without having to rely on the custom nth_return function?
As far as I know, this is not possible in any way other than the solutions you mention. So just use the syntax:
[~,I]=max(var);
Or indeed create an extra function. But I would advise against this: just write the extra line of code, in case you want to use the output in another function. I found two earlier questions on Stack Overflow which address the same topic and seem to confirm that this is not possible.
Skipping outputs with anonymous function in MATLAB
How to elegantly ignore some return values of a MATLAB function?
The reason why the ~ operator was added to MATLAB some versions ago was to prevent you from saving variables you do not need. If there were a syntax like the one you are searching for, this would not have been necessary.