Comparing the terms "memoize" and "cache", and reading Wikipedia's memoization entry, do people agree that using the term "memoize" implies:
The memoized result is kept in the process's memory; in other words, it isn't stored in memcached.
One only "memoizes" functions, as in mathematical functions (e.g. Fibonacci), not values that may change over time (e.g. the number of registered users on the web site)?
If you're doing anything other than the above, then you're just caching a result?
I'm not sure, but my understanding is that memoization requires that, for a function y = f(u), f be deterministic (that is, for a given u, y must always be the same), so that the results of f may be stored.
Caching to me seems to be more of a problem of determining what pieces of data are accessed frequently, and keeping those data in fast storage.
The former is deterministic while the latter is stochastic.
I believe that memoizing a function allows you to locally cache the result of a function for a given set of arguments. It's almost like:
function f(a, b, c) {
  if (a == 1 && b == 2 && !c) {
    return 5;
  } else if (a == 1 && b == 3 && !c) {
    return 17.2;
  } /* ... etc ... */

  // some horribly long/complex/expensive calculation
  return result;
}
but with the initial huge "if" block being handled automatically and far more efficiently, and being added to as the function is called with different arguments.
Note that you can only memoize a function that is deterministic and has no side effects. This means that the function's result can only depend on its inputs, and it cannot change anything when it is run.
So in short, memoization is function-local caching in very specific circumstances, and so it's a specialisation of regular caching.
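To make that concrete, here is a rough sketch in Scala (my own illustration, with made-up names MemoDemo, memoize and slowCalc; it is not code from any of the answers) of what that automatically-managed "if block" amounts to: a function-local map from argument tuples to previously computed results.

import scala.collection.mutable

object MemoDemo {
  // Wrap a two-argument function in a lookup table keyed by its arguments.
  // Only valid if f is deterministic and has no side effects.
  def memoize[A, B, R](f: (A, B) => R): (A, B) => R = {
    val cache = mutable.Map.empty[(A, B), R]
    (a: A, b: B) => cache.getOrElseUpdate((a, b), f(a, b))
  }

  // Hypothetical stand-in for the "horribly long/complex/expensive calculation".
  def slowCalc(a: Int, b: Int): Double = {
    Thread.sleep(100) // simulate expensive work
    a * 1.5 + b
  }

  val fastCalc = memoize(slowCalc _)

  def main(args: Array[String]): Unit = {
    println(fastCalc(1, 2)) // computed once, then stored in the function-local cache
    println(fastCalc(1, 2)) // answered from the cache, no recomputation
  }
}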
From my understanding, yes, memoization is caching, used to speed up a time-critical program such as one that calculates a number sequence (e.g. the Fibonacci sequence).
It is debatable as the terms can loosely be used interchangeably.
To me, the sole implication of 'memoize' would be that the 'caching' is of previous inputs rather than precomputed tables. That is, memoization is a process by which a function remembers its own return values.
For example,
atomic_int test(void)
{
atomic_int tmp = ATOMIC_VAR_INIT(14);
tmp = 47; // Looks like atomic_store
atomic_int mc; // Probably just uninitialised data
memcpy(&mc,&tmp,sizeof(mc)); // Probably equivalent to a copy
tmp = mc + 4; // Arithmetic
return tmp; // A copy - perhaps load then store
}
Clang is happy with all this. I've read section 7.17 of the standard, and it says a lot about the memory model and the defined functions (init, store, load, etc.) but doesn't say anything about the usual operations (+, =, etc.).
Also of interest is the behaviour of passing struct wot { atomic_int value; } to functions.
I would like to believe that assignment behaves identically to an atomic load then store using memory_order_seq_cst.
Even more optimistically, I would like to believe that struct assignment, passing to function, returning from function and even memcpy also behaves identically to carefully copying the bit pattern across under memory_order_seq_cst.
I can't find any supporting evidence for either belief in the standard though. There's definitely a chance that assignment and memcpy of atomic primitives is undefined behaviour.
How should primitive operations on atomic primitives behave?
Thanks!
Operations on objects that are _Atomic qualified (and atomic_int is just another way of writing that) are guaranteed to have sequential consistency. You find that mentioned at the end of the semantics section for each of the operations. (And maybe the mention for assignment is missing.)
Your code is not correct in two places: initialization must use the ATOMIC_VAR_INIT macro (7.17.2.1), and the memcpy is undefined (the sizes might not agree), although it will probably work on most architectures.
Also the line
tmp = mc + 4; // Arithmetic
doesn't do what your comment claims. This is not arithmetic on an atomic object, but a load followed by an ordinary addition. More interesting would be
mc += 4; // Arithmetic
which is an atomic operation with sequential consistency.
Suppose one needs to select the real solutions after solving some equation.
Is this the correct and optimal way to do it, or is there a better one?
restart;
mu := 3.986*10^5; T:= 8*60*60:
eq := T = 2*Pi*sqrt(a^3/mu):
sol := solve(eq,a);
select(x->type(x,'realcons'),[sol]);
I could not find real as a type, so I used realcons. At first I did this:
select(x->not(type(x,'complex')),[sol]);
which did not work, since in Maple 5 is considered complex! So I ended up with no solutions.
type(5,'complex');
(* true *)
Also, I could not find an isreal() type of function (unless I missed one).
Is there a better way to do this that one should use?
Update:
To answer the comment below claiming that 5 is not supposed to be complex in Maple:
restart;
type(5,complex);
true
type(5,'complex');
true
interface(version);
Standard Worksheet Interface, Maple 18.00, Windows 7, February
From help
The type(x, complex) function returns true if x is an expression of the form
a + I b, where a (if present) and b (if present) are finite and of type realcons.
Your solutions sol are all of type complex(numeric). You can select only the real ones with type,numeric, ie.
restart;
mu := 3.986*10^5: T:= 8*60*60:
eq := T = 2*Pi*sqrt(a^3/mu):
sol := solve(eq,a);
20307.39319, -10153.69659 + 17586.71839 I, -10153.69659 - 17586.71839 I
select( type, [sol], numeric );
[20307.39319]
By using the multiple-argument calling form of the select command we can avoid using a custom operator as the first argument. You won't notice it for your small example, but it should be more efficient to do so. Other commands such as map work similarly, avoiding an additional function call for each individual test.
The types numeric and complex(numeric) cover real and complex integers, rationals, and floats.
The types realcons and complex(realcons) include the previous, but also allow for an application of evalf during the test. So Int(sin(x),x=1..3) and Pi and sqrt(2) are all of type realcons, since following an application of evalf they become floats of type numeric.
The above is about types. There are also properties to consider. Types are properties, but not necessarily vice versa. There is a real property, but no real type. The is command can test for a property, and while it is often used for mixed numeric-symbolic tests under assumptions (on the symbols) it can also be used in tests like yours.
select( is, [sol], real );
[20307.39319]
It is less efficient to use is for your example. If you know that you have a collection of (possibly non-real) floats then type,numeric should be an efficient test.
And, just to muddy the waters... there is a type nonreal.
remove( type, [sol], nonreal );
[20307.39319]
One possibility is to restrict the domain before the calculation takes place.
Here is an explanation on the Maplesoft website regarding restricting the domain:
4 Basic Computation
UPD: Basically, according to this and that, 5 is NOT considered complex in Maple, so there might be some bug/error/mistake (try checking what may be wrong there).
For instance, try putting complex without quotes.
Your way seems very logical according to this.
UPD2: According to the Maplesoft website, all type checks are done with the type() function, so there is no separate isreal() function.
Reading Scala docs written by the experts, one can get the impression that tail recursion is better than a while loop, even when the latter is more concise and clearer. This is one example:
import scala.annotation.tailrec

object Helpers {
  implicit class IntWithTimes(val pip: Int) {
    // Recursive
    def times(f: => Unit): Unit = {
      @tailrec
      def loop(counter: Int): Unit = {
        if (counter > 0) { f; loop(counter - 1) }
      }
      loop(pip)
    }

    // Explicit loop
    def :#(f: => Unit): Unit = {
      var lc = pip
      while (lc > 0) { f; lc -= 1 }
    }
  }
}
(To be clear, the expert was not addressing looping at all, but in the example they chose to write a loop in this fashion as if by instinct, which is what raised the question for me: should I develop a similar instinct?)
The only aspect of the while loop that could be better is that the iteration variable should be local to the body of the loop and the mutation of that variable should happen in one fixed place, but Scala chooses not to provide that syntax.
Clarity is subjective, but the question is does the (tail) recursive style offer improved performance?
I'm pretty sure that, due to the limitations of the JVM, not every potentially tail-recursive function will be optimised into a loop by the Scala compiler, so the short (and sometimes wrong) answer to your question on performance is no.
The long answer to your more general question (having an advantage) is a little more contrived. Note that, by using while, you are in fact:
creating a new variable that holds a counter.
mutating that variable.
Off-by-one errors and the perils of mutability will ensure that, in the long run, you'll introduce bugs with a while pattern. In fact, your times function could easily be implemented as:
def times(f: => Unit): Unit = (1 to pip) foreach (_ => f)
Which is not only simpler and smaller, but also avoids creating transient variables and mutating state. In fact, if the function you were calling returned results that mattered, the while construction would become even more difficult to read. Please attempt to implement the following using nothing but whiles:
def replicate(l: List[Int])(times: Int) = l.flatMap(x => List.fill(times)(x))
Then proceed to define a tail-recursive function that does the same.
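For comparison, here is one way such a tail-recursive version might look (my own sketch, not code from the answer); note that it needs an explicit accumulator and a final reverse, which is exactly the bookkeeping the flatMap one-liner hides:

import scala.annotation.tailrec

def replicateRec(l: List[Int])(times: Int): List[Int] = {
  @tailrec
  def loop(remaining: List[Int], acc: List[Int]): List[Int] = remaining match {
    case Nil     => acc.reverse                            // done: restore the original order
    case x :: xs => loop(xs, List.fill(times)(x) ::: acc)  // push `times` copies of x
  }
  loop(l, Nil)
}

// replicateRec(List(1, 2, 3))(2) == List(1, 1, 2, 2, 3, 3)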
UPDATE:
I hear you saying: "hey! that's cheating! foreach is neither a while nor a tail-rec call". Oh really? Take a look at Scala's definition of foreach for Lists:
def foreach[B](f: A => B) {
  var these = this
  while (!these.isEmpty) {
    f(these.head)
    these = these.tail
  }
}
If you want to learn more about recursion in Scala, take a look at this blog post. Once you are into functional programming, go crazy and read Rúnar's blog post. Even more info here and here.
In general, a directly tail recursive function (i.e., one that always calls itself directly and cannot be overridden) will always be optimized into a while loop by the compiler. You can use the @tailrec annotation to verify that the compiler is able to do this for a particular function.
As a general rule, any tail recursive function can be rewritten (usually automatically by the compiler) as a while loop and vice versa.
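As a small illustration of that equivalence (my own example, not the answer's): summing the integers from 1 to n can be written either way, and the @tailrec version compiles down to essentially the same loop as the while version.

import scala.annotation.tailrec

// Tail-recursive form; @tailrec makes the compiler reject it if the rewrite is impossible.
def sumTo(n: Int): Int = {
  @tailrec
  def go(i: Int, acc: Int): Int =
    if (i > n) acc else go(i + 1, acc + i)
  go(1, 0)
}

// Roughly what the compiler turns it into: a plain while loop with two mutable variables.
def sumToLoop(n: Int): Int = {
  var i = 1
  var acc = 0
  while (i <= n) { acc += i; i += 1 }
  acc
}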
The purpose of writing functions in a (tail) recursive style is not to maximize performance or even conciseness, but to make the intent of the code as clear as possible, while simultaneously minimizing the chance of introducing bugs (by eliminating mutable variables, which generally make it harder to keep track of what the "inputs" and "outputs" of the function are). A properly written recursive function consists of a series of checks for terminating conditions (using either cascading if-else or a pattern match) with the recursive call(s) (plural only if not tail recursive) made if none of the terminating conditions are met.
The benefit of using recursion is most dramatic when there are several different possible terminating conditions. A series of if conditionals or patterns is generally much easier to comprehend than a single while condition with a whole bunch of (potentially complex and inter-related) boolean expressions &&'d together, especially if the return value needs to be different depending on which terminating condition is met.
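For instance (a made-up example of mine, not from the answer), a bounded search that can stop for three different reasons reads naturally as one case per outcome, while a while loop would have to pack those conditions into a single guard and then re-test them afterwards to decide what to return:

import scala.annotation.tailrec

object SearchDemo {
  sealed trait SearchResult
  case object NotFound               extends SearchResult
  case object BudgetExhausted        extends SearchResult
  final case class Found(index: Int) extends SearchResult

  // Each terminating condition is its own case; only the last case recurses.
  @tailrec
  def search(xs: List[Int], target: Int, budget: Int, index: Int = 0): SearchResult =
    xs match {
      case _ if budget <= 0      => BudgetExhausted
      case Nil                   => NotFound
      case h :: _ if h == target => Found(index)
      case _ :: tail             => search(tail, target, budget - 1, index + 1)
    }
}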
Did these experts say that performance was the reason? I'm betting their reasons have more to do with expressive code and functional programming. Could you cite examples of their arguments?
One interesting reason why recursive solutions can be more efficient than more imperative alternatives is that they very often operate on lists and in a way that uses only head and tail operations. These operations are actually faster than random-access operations on more complex collections.
Another reason that while-based solutions may be less efficient is that they can become very ugly as the complexity of the problem increases...
(I have to say, at this point, that your example is not a good one, since neither of your loops does anything useful. Your recursive loop is particularly atypical since it returns nothing, which implies that you are missing a major point about recursive functions: the functional bit. A recursive function is much more than another way of repeating the same operation n times.)
While loops do not return a value and require side effects to achieve anything. They are a control structure which only works for very simple tasks, because each iteration of the loop has to examine all of the state to decide what to do next. The loop's boolean expression may also have to become very complex if there are multiple potential exit paths (or that complexity has to be distributed throughout the code in the loop, which can be ugly and obfuscatory).
Recursive functions offer the possibility of a much cleaner implementation. A good recursive solution breaks a complex problem down into simpler parts, then delegates each part to another function which can deal with it - the trick being that the other function is itself (or possibly a mutually recursive function, though that is rarely seen in Scala - unlike the various Lisp dialects, where it is common - because of the poor tail call support). The recursively called function receives in its parameters only the simpler subset of data and only the relevant state; it returns only the solution to the simpler problem. So, in contrast to the while loop,
Each iteration of the function only has to deal with a simple subset of the problem
Each iteration only cares about its inputs, not the overall state
Success in each subtask is clearly defined by the return value of the call that handled it.
State from different subtasks cannot become entangled (since it is hidden within each recursive function call).
Multiple exit points, if they exist, are much easier to represent clearly.
Given these advantages, recursion can make it easier to achieve an efficient solution. Especially if you count maintainability as an important factor in long-term efficiency.
I'm going to go find some good examples of code to add. Meanwhile, at this point I always recommend The Little Schemer. I would go on about why but this is the second Scala recursion question on this site in two days, so look at my previous answer instead.
Say I have an object of type A. Consider this case for any function of the type A -> A (i.e. takes object of type A and returns another object of type A):
foo = func(foo)
Here, the simplest case would be for the result of func(foo) to be copied into foo.
Is it possible to optimize this so that:
foo gets modified in place in func
There are no constraints on the language used. What I want to know is what constraints and properties the language must have to enable such an optimization. Are there any existing languages which perform such an optimization?
Example(in pseudo code):
type Matrix = List<List<int>>

Matrix rotate90Deg(Matrix x):
    Matrix result(x.columns, x.rows)  # Assume a constructor taking the number of rows and columns.
    for (int i = 0; i < x.rows; i++):
        for (int j = 0; j < x.columns; j++):
            result[i][j] = x[j][i]
    return result

Matrix a = [[1,2,3],[4,5,6],[7,8,9]]
a = rotate90Deg(a)
Here, is it possible to optimize the code so that it doesn't allocate memory for a new matrix (result), and instead just modifies the original matrix passed in?
First of all, you have to realize that some operations inherently cannot be computed in-place. Matrix-matrix multiplication is an example of this, and rotate90Deg would fall under this category, since such an operation is actually a matrix multiplication by the appropriate multiplication matrix.
Now as for your example, you actually coded up a matrix transpose function. Matrix transpose can be done in-place since you are swapping pairs of numbers, but I doubt that any compilers can automatically detect this and optimize it for you. Indeed, there are many, many tricks one can use to make matrix transpose cache-friendly and gain huge performance increases. Nevertheless, with a naive implementation, you will almost certainly end up with something very similar to what Aditya Kumar describes in his answer.
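To make the "swapping pairs" point concrete, here is what a hand-written in-place transpose of a square matrix looks like (a Scala sketch of my own; the interesting question in this answer is whether a compiler could derive such a rewrite automatically, which is a much harder problem):

// Transpose a square matrix in place by swapping a(i)(j) with a(j)(i) above the diagonal.
def transposeInPlace(a: Array[Array[Int]]): Unit = {
  val n = a.length
  var i = 0
  while (i < n) {
    var j = i + 1
    while (j < n) {
      val tmp = a(i)(j)
      a(i)(j) = a(j)(i)
      a(j)(i) = tmp
      j += 1
    }
    i += 1
  }
}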
As I have foreshadowed by using the word "naive" earlier, programmers can coax the compiler to inline lots and lots of things in extremely optimized ways through advanced templating and other meta-programming techniques. (At least in C++, and maybe other languages that allow you to overload operator =.) For anyone interested in a case study of how this is done and what is involved, take a look at the Eigen matrix library, and how it handles a simple operation like u = v + w; where the three variables are all matrices of floats. Following is a brief overview of the key points.
A naive implementation would overload operator+ to return a temporary and operator= to copy that temporary to the result. Of course, in C++11 it is pretty easy to avoid the final copy during assignment by way of move constructors, but you will still have unnecessary temporaries if you had something a little more complex with multiple operators on the right hand side like u = 3.15f * u.transposed() + 5.0f; since each operator/method would return a temporary, and that temporary would have to be looped over in order to process the next operator.
Long story short, what Eigen does is rather than perform each operation when the corresponding function call occurs, the calls return a templated functor of sorts which merely describes the operation that needs to take place, and all the actual work ends up happening in operator =, thus enabling the compiler to emit a single, inlined loop for traversing the data only once and doing the operation truly in-place.
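Eigen does this with C++ expression templates, but the shape of the idea is language-agnostic: the arithmetic operators only build a lightweight description of the computation, and the assignment step runs a single loop that writes each element of the destination once. Below is a rough Scala sketch of that shape (my own illustration, not Eigen's actual API; it ignores size checks and any aliasing between the destination and the expression).

// An expression only knows its length and how to produce element i on demand.
trait VecExpr {
  def length: Int
  def apply(i: Int): Float
  def +(that: VecExpr): VecExpr = Add(this, that)
  def *(s: Float): VecExpr      = Scale(this, s)
}

final case class Add(a: VecExpr, b: VecExpr) extends VecExpr {
  def length = a.length
  def apply(i: Int) = a(i) + b(i)
}

final case class Scale(a: VecExpr, s: Float) extends VecExpr {
  def length = a.length
  def apply(i: Int) = a(i) * s
}

final class Vec(val data: Array[Float]) extends VecExpr {
  def length = data.length
  def apply(i: Int) = data(i)

  // The only loop in the whole scheme: evaluate the expression tree element by
  // element, writing straight into this vector's storage, with no temporary vectors.
  def :=(e: VecExpr): Unit = {
    var i = 0
    while (i < data.length) { data(i) = e(i); i += 1 }
  }
}

object VecDemo {
  def main(args: Array[String]): Unit = {
    val u, v, w = new Vec(Array.fill(1000)(1.0f))
    u := (v + w) * 3.15f // builds three small expression nodes, then loops once inside :=
  }
}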
Yes, it is possible, and this optimization is provided by at least C++ (via inlining).
To explain the optimization a little, consider this example:
foo_t foo;
foo = func(foo); // #1

foo_t func(foo_t foo1) {
  foo_t new_foo;
  // operate on new_foo by using foo1
  return new_foo;
}
There are three instances of foo_t being made:
foo is copied and passed as foo1 to func
new_foo is created.
new_foo is assigned to foo by copying the contents of new_foo into foo.
All the three copies can be eliminated provided there are some invariants.
foo (the argument passed to the function) is never used later with its original value. This is equivalent to saying that foo is 'dead' at line #1, which is established here because foo is reassigned.
new_foo is local to func, and its lifetime does not extend beyond func. This is also established here: the way new_foo is created, it lives on the stack, and the lifetime of a stack object ends with the function in which it was created.
In C++ this can be achieved by inlining the function func. After inlining, the code will basically look like this:
foo_t foo;
foo_t new_foo;
// operate on new_foo by using foo
foo = new_foo;
Although C++ provides inlining as a language feature, almost any optimizing compiler does inlining these days.
Now it depends on what kind of operations you perform on new_foo and foo whether this extra new_foo will be optimized away or not. For some data types it is trivial (the compiler can do 'copy propagation' followed by 'dead-code elimination' to remove new_foo completely).
I understand the different notions of functional programming on their own: side effects, immutability, pure functions, referential transparency. But I can't connect them together in my head. For example, I have the following questions:
What is the relation between referential transparency and immutability? Does one imply the other?
Sometimes 'side effects' and 'immutability' are used interchangeably. Is that correct?
This question requires some especially nit-picky answers, since it's about defining common vocabulary.
First, a function is a kind of mathematical relation between a "domain" of inputs and a "range" (or codomain) of outputs. Every input produces an unambiguous output. For example, the integer addition function + accepts input in the domain Int x Int and produces outputs in the range Int.
object Ex0 {
  def +(x: Int, y: Int): Int = x + y
}
Given any values for x and y, clearly + will always produce the same result. This is a function. If the compiler were extra clever, it could insert code to cache the results of this function for each pair of inputs, and perform a cache lookup as an optimization. That's clearly safe here.
The problem is that in software, the term "function" has been somewhat abused: although functions accept arguments and return values as declared in their signature, they can also read and write to some external context. For example:
import scala.util.Random

class Ex1 {
  def +(x: Int): Int = x + Random.nextInt
}
We can't think of this as a mathematical function anymore, because for a given value of x, + can produce different results (depending on a random value, which doesn't appear anywhere in +'s signature). The result of + can not be safely cached as described above. So now we have a vocabulary problem, which we solve by saying that Ex0.+ is pure, and Ex1.+ isn't.
Okay, since we've now accepted some degree of impurity, we need to define what kind of impurity we're talking about! In this case, we've said the difference is that we can cache Ex0.+'s results associated with its inputs x and y, and that we can't cache Ex1.+'s results associated with its input x. The term we use to describe cacheability (or, more properly, substitutability of a function call with its output) is referential transparency.
All pure functions are referentially transparent, but some referentially transparent functions aren't pure. For example:
object Ex2 {
  var lastResult: Int = 0
  def +(x: Int, y: Int): Int = {
    lastResult = x + y
    lastResult
  }
}
Here we're not reading from any external context, and the value produced by Ex2.+ for any inputs x and y will always be cacheable, as in Ex0. This is referentially transparent, but it does have a side effect, which is to store the last value computed by the function. Somebody else can come along later and grab lastResult, which will give them some sneaky insight into what's been happening with Ex2.+!
A side note: you can also argue that Ex2.+ is not referentially transparent, because although caching is safe with respect to the function's result, the side effect is silently ignored in the case of a cache "hit." In other words, introducing a cache changes the program's meaning, if the side effect is important (hence Norman Ramsey's comment)! If you prefer this definition, then a function must be pure in order to be referentially transparent.
Now, one thing to observe here is that if we call Ex2.+ twice or more in a row with the same inputs, lastResult will not change. The side effect of invoking the method n times is equivalent to the side effect of invoking the method only once, and so we say that Ex2.+ is idempotent. We could change it:
object Ex3 {
  var history: Seq[Int] = Seq.empty
  def +(x: Int, y: Int): Int = {
    val result = x + y
    history = history :+ result
    result
  }
}
Now, each time we call Ex3.+, the history changes, so the function is no longer idempotent.
Okay, a recap so far: a pure function is one that neither reads from nor writes to any external context. It is both referentially transparent and side effect free. A function that reads from some external context is no longer referentially transparent, whereas a function that writes to some external context is no longer free of side effects. And finally, a function which when called multiple times with the same input has the same side effect as calling it only once, is called idempotent. Note that a function with no side effects, such as a pure function, is also idempotent!
So how does mutability and immutability play into all this? Well, look back at Ex2 and Ex3. They introduce mutable vars. The side effects of Ex2.+ and Ex3.+ is to mutate their respective vars! So mutability and side effects go hand-in-hand; a function that operates only on immutable data must be side effect free. It might still not be pure (that is, it might not be referentially transparent), but at least it will not produce side effects.
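To make that last distinction concrete (my own example, not the answer's): the function below mutates nothing, so it is free of side effects, yet it reads external context, so it is not referentially transparent.

object Ex4 {
  // No writes to any external context, hence no side effects.
  // Still impure: the result depends on when the function is called,
  // so a call cannot be replaced by a previously cached value.
  def plusCurrentSecond(x: Int): Long =
    x + System.currentTimeMillis() / 1000
}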
A logical follow-up question to this might be: "what are the benefits of a pure functional style?" The answer to that question is more involved ;)
"No" to the first - one implies the other, but not the converse, and a qualified "Yes" to the second.
"An expression is said to be referentially transparent if it can be replaced with its value without changing the behavior of a program".
Immutable input suggests that an expression (function) will always evaluate to the same value, and therefore be referentially transparent.
However (mergeconflict has kindly corrected me on this point), being referentially transparent does not necessarily require immutability.
By definition, a side-effect is an aspect of a function; meaning that when you call a function, it changes something.
Immutability is an aspect of data; it cannot be changed.
Calling a function on such data does imply that there can be no side effects on it (in Scala, that's limited to "no changes to the immutable object(s)" - the developer still has responsibilities and decisions).
While side-effect and immutability don't mean the same thing, they are closely related aspects of a function and the data the function is applied to.
Since Scala is not a pure functional programming language, one must be careful when considering the meaning of such statements as "immutable input" - the scope of the input to a function may include elements other than those passed as parameters. Similarly for considering side-effects.
It rather depends on the specific definitions you use (there can be disagreement, see for example Purity vs Referential transparency), but I think this is a reasonable interpretation:
Referential transparency and 'purity' are properties of functions/expressions. A function/expression may or may not have side-effects. Immutability, on the other hand, is a property of objects, not functions/expressions.
Referential transparency, side-effects and purity are closely related: 'pure' and 'referentially transparent' are equivalent, and those notions are equivalent to the absence of side-effects.
An immutable object may have methods which are not referentially transparent: those methods will not change the object itself (as that would make the object mutable), but may have other side-effects such as performing I/O or manipulating their (mutable) parameters.
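A small Scala illustration of that last point (my own example, not from the answer): the case class below is immutable and deposit is pure, yet report performs I/O, so it is not referentially transparent under the stricter reading discussed above.

final case class Account(id: Long, balance: BigDecimal) {
  // Pure: returns a new Account and leaves this one untouched.
  def deposit(amount: BigDecimal): Account = copy(balance = balance + amount)

  // Does not mutate the Account either, but it performs I/O,
  // so replacing a call with its (cached) result would change the program.
  def report(): Account = {
    println(s"Account $id has balance $balance")
    this
  }
}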