I wonder whether MATLAB is Turing complete (computationally universal)?

I wonder whether MATLAB is Turing complete (= computationally universal), i.e. whether it "can be used to simulate any single-taped Turing machine"?

Being Turing complete is really a pretty low bar for real-world languages. According to Wikipedia (emphasis mine):
To show that something is Turing complete, it is enough to show that
it can be used to simulate some Turing complete system. For example,
an imperative language is Turing complete if it has conditional
branching (e.g., "if" and "goto" statements, or a "branch if zero"
instruction. See OISC) and the ability to change arbitrary memory
locations (e.g., the ability to maintain an arbitrary number of
variables). Since this is almost always the case, most if not all
imperative languages are Turing complete if we ignore any limitations
of finite memory.
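As an aside on the OISC mentioned in the quote: the bar really is that low. Here is a rough, illustrative sketch (in C++ only for concreteness) of a subleq one-instruction machine, which has nothing but memory writes and a conditional branch, and is nevertheless Turing complete if we ignore finite memory:
#include <vector>

// subleq a b c:  mem[b] -= mem[a];  if mem[b] <= 0, jump to c, else fall through.
// Conditional branching plus writable memory -- exactly the two ingredients above.
// No bounds checking; halting is signalled by jumping to a negative address.
void run_subleq(std::vector<long>& mem) {
    long pc = 0;
    while (pc >= 0 && pc + 2 < static_cast<long>(mem.size())) {
        const long a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
        mem[b] -= mem[a];
        pc = (mem[b] <= 0) ? c : pc + 3;
    }
}
Any language in which you can write this loop (MATLAB included) clears the bar.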
Beyond that, MATLAB has many of the features you would expect from a relatively modern 3GL/4GL. It is complete with a VM, I/O, user interface constructs, mathematical operators (obviously), datatypes, user-defined functions, etc. You can even deliver MATLAB programs outside the MATLAB environment.
Note that whether or not it's a good language is an entirely different question.

Yes, a high-level programming language.

I assume you are distinguishing between programming languages and scripting languages, and that MATLAB, by its nature, looks to you like a scripting language? If so, your answer might depend on what you consider a programming language.
I believe MATLAB is Turing-complete and has a reasonably strict and usable syntax, so I'd call it a programming language. At the same time, csh is probably Turing-complete too, but it's so dramatically odd to program in that I'd call it a scripting language.

Related

Simulation speed of SystemC vs SystemVerilog

Does anyone have a pointer to measurements of how fast a behavioral model written in SystemC is compared to the same model written in SystemVerilog? Or at least first hand experience of the relative simulation speed of the two, when modeling the same thing?
To clarify, this is for high-level system simulation, where the model is the minimum code needed to mimic the behavior. It's decidedly not synthesizable. In SystemVerilog, it would use as many non-synthesizable behavioral constructs as possible, for maximum execution speed. The point is, we're asking about an apples-to-apples comparison that someone actually did, where they tried to do both: stay high level (non-synthesizable), and make the code as close to equivalent as possible between the two languages.
It would be great if they stated the SystemVerilog environment used, and what machine each model was run on, so the results can be translated to other setups. Hopefully this can be useful to a variety of people in choosing an approach.
Thanks!
As with other language benchmarks, this is not possible to answer in general, because it depends on multiple factors such as modeling style, simulator/compiler vendor and version, and compilation options.
Also, there are tools to convert Verilog to C++ and C++ to Verilog, so you can often get close to the simulation performance of one approach by converting from one language to the other.
Verilog can be converted to C++/SystemC using Verilator: https://www.veripool.org/wiki/verilator
To convert C++/SystemC into Verilog you can use HLS (high-level synthesis) tools.
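For readers who have not seen SystemC: a behavioral model is just ordinary C++ written against the SystemC class library. A minimal, made-up sketch of the style being discussed (module and signal names are invented; this is not from any benchmark):
#include <systemc.h>
#include <iostream>

// A purely behavioral adder: recompute the sum whenever either input changes.
SC_MODULE(Adder) {
    sc_in<int>  a, b;
    sc_out<int> sum;

    void compute() { sum.write(a.read() + b.read()); }

    SC_CTOR(Adder) {
        SC_METHOD(compute);
        sensitive << a << b;
    }
};

int sc_main(int, char*[]) {
    sc_signal<int> a, b, sum;
    Adder adder("adder");
    adder.a(a);
    adder.b(b);
    adder.sum(sum);

    a = 2;
    b = 3;
    sc_start(1, SC_NS);                    // run long enough for the method to fire
    std::cout << sum.read() << std::endl;  // prints 5
    return 0;
}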

Does functional programming reduce the Von Neumann bottleneck?

I believe (from doing some reading) that reading/writing data across the bus from CPU caches to main memory places a considerable constraint on how fast a computational task (which needs to move data across the bus) can complete - the Von Neumann bottleneck.
I have come across a few articles which mention that functional programming can be more performant than other paradigms, such as the imperative approach (e.g. OO), in certain models of computation.
Can someone please explain some of the ways that purely functional programming can reduce this bottleneck? i.e., are any of the following points found (in general) to be true?
Using immutable data structures means generally less data is moving across that bus - fewer writes?
Using immutable data structures means that data is more likely to stay in the CPU cache - because fewer updates to existing state mean less flushing of objects from the cache?
Is it possible that using immutable data structures means we may often never even read the data back from main memory, because we create an object during a computation and keep it in local cache, and then, in the same time slice, create a new immutable object from it (if an update is needed) and never use the original again? i.e., we end up working mostly with objects that are sitting in local cache.
Oh man, that’s a classic. John Backus’ 1977 ACM Turing Award lecture is all about that: “Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs.” (The paper, “Lambda: The Ultimate Goto,” was presented at the same conference.)
I’m guessing that either you or whoever raised this question had that lecture in mind. What Backus called “the von Neumann bottleneck” was “a connecting tube that can transmit a single word between the CPU and the store (and send an address to the store).”
CPUs do still have a data bus, although in modern computers, it’s usually wide enough to hold a vector of words. Nor have we gotten away from the problem that we need to store and look up a lot of addresses, such as the links to daughter nodes of lists and trees.
But Backus was not just talking about physical architecture (emphasis added):
Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself but where to find it.
In that sense, functional programming has been largely successful at getting people to write higher-level functions, such as maps and reductions, rather than “word-at-a-time thinking” such as the for loops of C. If you try to perform an operation on a lot of data in C, today, then just like in 1977, you need to write it as a sequential loop. Potentially, each iteration of the loop could do anything to any element of the array, or any other program state, or even muck around with the loop variable itself, and any pointer could potentially alias any of these variables. At the time, that was true of the DO loops of Backus’ first high-level language, Fortran, as well, except maybe the part about pointer aliasing. To get good performance today, you try to help the compiler figure out that, no, the loop doesn’t really need to run in the order you literally specified: this is an operation it can parallelize, like a reduction or a transformation of some other array or a pure function of the loop index alone.
But that’s no longer a good fit for the physical architecture of modern computers, which are all vectorized symmetric multiprocessors—like the Cray supercomputers of the late ’70s, but faster.
Indeed, the C++ Standard Template Library now has algorithms on containers that are totally independent of the implementation details or the internal representation of the data, and Backus’ own creation, Fortran, added FORALL and PURE in 1995.
When you look at today’s big data problems, you see that the tools we use to solve them resemble functional idioms a lot more than the imperative languages Backus designed in the ’50s and ’60s. You wouldn’t write a bunch of for loops to do machine learning in 2018; you’d define a model in something like Tensorflow and evaluate it. If you want to work with big data with a lot of processors at once, it’s extremely helpful to know that your operations are associative, and therefore can be grouped in any order and then combined, allowing for automatic parallelization and vectorization. Or that a data structure can be lock-free and wait-free because it is immutable. Or that a transformation on a vector is a map that can be implemented with SIMD instructions on another vector.
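To make the associativity point concrete in ordinary code (this is my illustration, not something from Backus): the parallel algorithms added in C++17 are allowed to regroup a reduction into chunks precisely because the caller accepts that regrouping does not change the result.
#include <execution>
#include <numeric>
#include <vector>
#include <iostream>

int main() {
    std::vector<double> xs(1'000'000, 1.5);

    // std::reduce may regroup and reorder the additions, so the library is free
    // to split the range into chunks, reduce them on different cores (and with
    // SIMD), and then combine the partial results.
    double total = std::reduce(std::execution::par_unseq,
                               xs.begin(), xs.end(), 0.0);

    std::cout << total << '\n';
}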
Examples
Last year, I wrote a couple short programs in several different languages to solve a problem that involved finding the coefficients that minimized a cubic polynomial. A brute-force approach in C11 looked, in relevant part, like this:
static variable_t ys[MOST_COEFFS];
// #pragma omp simd safelen(MOST_COEFFS)
for ( size_t j = 0; j < n; ++j )
    ys[j] = ((a3s[j]*t + a2s[j])*t + a1s[j])*t + a0s[j];
variable_t result = ys[0];
// #pragma omp simd reduction(min:y)
for ( size_t j = 1; j < n; ++j ) {
    const variable_t y = ys[j];
    if (y < result)
        result = y;
} // end for j
The corresponding section of the C++14 version looked like this:
const variable_t result =
    (((a3s*t + a2s)*t + a1s)*t + a0s).min();
In this case, the coefficient vectors were std::valarray objects, a special type of object in the STL that has restrictions on how its components can be aliased and on which member operations are allowed, and those restrictions on what is safe to vectorize sound a lot like the restrictions on pure functions. The list of allowed reductions, like .min() at the end, is, not coincidentally, similar to the instances of Data.Semigroup. You’ll see a similar story these days if you look through <algorithm> in the STL.
Now, I’m not going to claim that C++ has become a functional language. As it happened, I made all the objects in the program immutable and automatically cleaned up via RAII, but that’s just because I’ve had a lot of exposure to functional programming and that’s how I like to code now. The language itself doesn’t impose such things as immutability, garbage collection or absence of side-effects. But when we look at what Backus in 1977 said was the real von Neumann bottleneck, “an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand,” does that apply to the C++ version? The operations are linear algebra on coefficient vectors, not word-at-a-time. And the ideas C++ borrowed to do this—and the ideas behind expression templates even more so—are largely functional concepts. (Compare that snippet to how it would’ve looked in K&R C, and how Backus defined a functional program to compute inner product in section 5.2 of his Turing Award lecture in 1977.)
I also wrote a version in Haskell, but I don’t think it’s as good an example of escaping that kind of von Neumann bottleneck.
It’s absolutely possible to write functional code that meets all of Backus’ descriptions of the von Neumann bottleneck. Looking back on the code I wrote this week, I’ve done it myself. A fold or traversal that builds a list? They’re high-level abstractions, but they’re also defined as sequences of word-at-a-time operations, and half or more of the data passed through the bottleneck when you create and traverse a singly-linked list is the addresses of other data! They’re efficient ways to put data through the von Neumann bottleneck, and that’s basically why I did it: they’re great patterns for programming von Neumann machines.
If we’re interested in coding a different way, however, functional programming gives us tools to do so. (I’m not going to claim it’s the only thing that does.) Express a reduction as a foldMap, apply it to the right kind of vector, and the associativity of the monoidal operation lets you split up the problem into chunks of whatever size you want and combine the pieces later. Make an operation a map rather than a fold, on a data structure other than a singly-linked list, and it can be automatically parallelized or vectorized. Or transformed in other ways that produce the same result, since we’ve expressed the result at a higher level of abstraction, not a particular sequence of word-at-a-time operations.
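In the same spirit, here is a small standalone C++ sketch (again just an illustration) of the “map rather than a fold” point: declaring the operation as an independent per-element transformation is what licenses the library to parallelize or vectorize it.
#include <algorithm>
#include <execution>
#include <vector>

int main() {
    std::vector<float> xs(1 << 20, 2.0f), ys(1 << 20);

    // Each output element depends only on the corresponding input element,
    // so the unsequenced-parallel policy may use threads and SIMD freely.
    std::transform(std::execution::par_unseq, xs.begin(), xs.end(), ys.begin(),
                   [](float x) { return x * x + 1.0f; });
}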
My examples so far have been about parallel programming, but I’m sure quantum computing will shake up what programs look like a lot more fundamentally.

Which language should I prefer working with if I want to use the Fast Artificial Neural Network Library (FANN)?

I am working on reducing the dimensionality of a set of (Boolean) vectors, with both the number and the dimensionality of the vectors tending to be of the order of 10^5-10^6, using autoencoders. Hence, even though speed is not of the essence (it is supposed to be a pre-computation for a clustering algorithm), one would obviously expect the computations to take a reasonable amount of time. Seeing how the library itself was written in C++, would it be a good idea to stick to it, or to code in Java (since the rest of the code is written in Java)? Or would it not matter at all?
That question is difficult to answer. It depends on:
How computationally demanding will your code be? If the hard part is done by the library and your code only generates the input and post-processes the output, Java would be a valid choice. Compare it to MATLAB: the language itself is very slow, but the built-in algorithms are super-fast.
How skilled are you (or your team, or your future students) in Java and C++? Consider that learning C++ takes a lot of time. If you have only a small-scale project, it could be easier to buy a bigger machine, or to wait two days instead of one for the results.
Do you have legacy code in one of the languages that you want to couple to or re-use?
Overall, I would advise you to set up a benchmark example in whichever language you like more. Then give it a try. If the speed is OK, stick with it. If you wait too long, think about alternatives (new hardware, parallel execution, a different language).
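If you do end up on the C++ side: FANN exposes a plain C API, so driving it from C++ is little code. A rough sketch (layer sizes, file names and training parameters below are placeholders, not recommendations):
#include <fann.h>

int main() {
    // Three layers, e.g. an autoencoder-like shape: input, bottleneck, output.
    struct fann *ann = fann_create_standard(3, 256, 32, 256);

    fann_set_activation_function_hidden(ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output(ann, FANN_SIGMOID_SYMMETRIC);

    // Train from a file in FANN's training-data format.
    fann_train_on_file(ann, "train.data",
                       /*max_epochs=*/1000,
                       /*epochs_between_reports=*/100,
                       /*desired_error=*/0.001f);

    fann_save(ann, "autoencoder.net");
    fann_destroy(ann);
    return 0;
}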

What gives Lisp its excellent math performance?

I am reading this in a Lisp textbook:
Lisp can perform some amazing feats with numbers, especially when compared with most other languages. For instance, here we’re using the function expt to calculate the fifty-third power of 53:
CL> (expt 53 53)
24356848165022712132477606520104725518533453128685640844505130879576720609150223301256150373
Most languages would choke on a calculation involving such a large number.
Yes, that's cool, but the author fails to explain why Lisp can do this more easily and more quickly than other languages.
Surely there is a simple reason, can anyone explain?
This is a good example of "worse is not always better".
The New Jersey approach
The "traditional" languages, like C/C++/Java, have limited range integer arithmetics based on the hardware capabilities, e.g., int32_t - signed 32-bit numbers which silently overflow when the result does not fit into 32 bits. This is very fast and often seems good enough for practical purposes, but causes subtle hard to find bugs.
The MIT/Stanford style
Lisp took a different approach.
It has a "small" unboxed integer type fixnum, and when the result of fixnum arithmetics does not fit into a fixnum, it is automatically and transparently promoted to an arbitrary size bignum, so you always get mathematically correct results. This means that, unless the compiler can prove that the result is a fixnum, it has to add code which will check whether a bignum has to be allocated. This, actually, should have a 0 cost on modern architecture, but was a non-trivial decision when it was made 4+ decades ago.
The "traditional" languages, when they offer bignum arithmetics, do that in a "library" way, i.e.,
it has to be explicitly requested by the user;
the bignums are operated upon in a clumsy way: BigInteger.add(a,b) instead of a+b;
the cost is incurred even when the actual number is small and would fit into a machine int.
Note that the Lisp approach is quite in line with the Lisp tradition of doing the right thing at a possible cost of some extra complexity. It is also manifested in the automated memory management, which is now mainstream, but was viciously attacked in the past. The lisp approach to integer arithmetics has now been used in some other languages (e.g., python), so the progress is happening!
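To make the "check whether a bignum has to be allocated" point concrete, here is a rough C++ sketch of the fast-path/slow-path shape such compiled code takes. The Number/Bignum types and helper names are invented for illustration; real Lisp runtimes use tagged words and their own arbitrary-precision code.
#include <cstdint>
#include <memory>

struct Bignum { /* arbitrary-precision digits would live here */ };

struct Number {
    bool is_fixnum;
    int64_t fixnum;                   // valid when is_fixnum is true
    std::shared_ptr<Bignum> big;      // valid otherwise
};

Number from_fixnum(int64_t x) { return {true, x, nullptr}; }

Number promote_to_bignum(int64_t a, int64_t b) {
    // Stub: a real implementation would compute a + b in arbitrary precision.
    return {false, 0, std::make_shared<Bignum>()};
}

// Conceptually, what (+ a b) on two fixnums compiles down to:
Number add(int64_t a, int64_t b) {
    int64_t result;
    // GCC/Clang builtin: returns true if the signed addition overflowed.
    if (__builtin_add_overflow(a, b, &result))
        return promote_to_bignum(a, b);  // rare slow path: allocate a bignum
    return from_fixnum(result);          // common fast path: still one machine word
}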
To add to what wvxvw wrote, it's easier in Lisp because bignums are built into the language. You can juggle large numbers around just as you would, say, ints in C or Java.

Turing-completeness of a modified version of Brainfuck

Is Brainfuck Turing-complete if the cells are bits, and the + and - operations simply flip a bit? Is there a simple proof that Brainfuck-like languages are Turing-complete regardless of the cell size, or do I need to think of a program that simulates a Turing machine? How would I know if there isn't one?
EDIT: I found an answer to my question: Brainfuck with bit cells is called Boolfuck. Ordinary Brainfuck can be reduced to it, so Boolfuck is Turing-complete.
This answer should suit you well; it has a very specific definition of what features make a language Turing complete.
Here's the gist of it:
In general, for an imperative language to be Turing-complete, it needs:
A form of conditional repetition or conditional jump (e.g., while, if+goto)
A way to read and write some form of storage (e.g., variables, tape)
A Turing-complete language can "simulate any single-taped Turing machine." Brainfuck and Boolfuck are both Turing complete because they meet those requirements.
Also note that if one is Turing complete, the other must be, because they are nearly the same: in Brainfuck you operate on byte cells, while in Boolfuck you operate on bits, which can be grouped together to emulate bytes.
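To see how little machinery those two requirements amount to, here is a minimal Brainfuck interpreter sketch (in C++; no error handling): the tape is the storage, and [ and ] are the conditional jumps. With 1-bit cells, + and - would simply flip the current bit, and the same structure still works.
#include <cstdio>
#include <string>
#include <vector>

void run(const std::string& prog) {
    std::vector<unsigned char> tape(30000, 0);
    size_t ptr = 0;                              // data pointer into the tape
    for (size_t pc = 0; pc < prog.size(); ++pc) {
        switch (prog[pc]) {
            case '>': ++ptr; break;
            case '<': --ptr; break;
            case '+': ++tape[ptr]; break;        // with bit cells: flip the bit
            case '-': --tape[ptr]; break;
            case '.': std::putchar(tape[ptr]); break;
            case ',': tape[ptr] = static_cast<unsigned char>(std::getchar()); break;
            case '[':                            // jump forward past matching ] if cell is 0
                if (tape[ptr] == 0)
                    for (int depth = 1; depth; ) {
                        ++pc;
                        if (prog[pc] == '[') ++depth;
                        if (prog[pc] == ']') --depth;
                    }
                break;
            case ']':                            // jump back to matching [ if cell is nonzero
                if (tape[ptr] != 0)
                    for (int depth = 1; depth; ) {
                        --pc;
                        if (prog[pc] == ']') ++depth;
                        if (prog[pc] == '[') --depth;
                    }
                break;
        }
    }
}

int main() {
    run("++++++++[>++++++++<-]>+.");   // prints 'A' (ASCII 65)
}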