Software simulation of a quantum computer

While we are waiting for our quantum computers, is it possible to write a software simulation of one? I suspect the answer is no, but hope the reasons why not will throw some light on the mystery.

Implementing one isn't that hard. The problem is that the computational and memory cost is exponential in the number of quantum bits you want to simulate.
Basically, a quantum computer operates on all possible n-bit states at once, and the number of those states grows like 2^n.
The size of an operator grows even faster, since it's a matrix over that state space: (2^n)^2 = 2^(2*n) = 4^n.
So I expect a good classical computer to be able to simulate a quantum computer up to about 20 qubits, but it will be rather slow.
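To make that concrete, here is a rough back-of-the-envelope sketch in Python (my own illustration, not taken from any particular simulator), assuming one 16-byte complex number per entry:

    # Rough memory cost of brute-force simulation, assuming complex128 (16 bytes) per entry.
    def state_bytes(n):
        return 16 * (2 ** n)            # 2^n amplitudes in the state vector

    def operator_bytes(n):
        return 16 * (2 ** n) ** 2       # a dense 2^n x 2^n matrix, i.e. 4^n entries

    for n in (10, 20, 30):
        print(n, "qubits:",
              state_bytes(n) / 2 ** 20, "MiB state,",
              operator_bytes(n) / 2 ** 30, "GiB operator")

At 20 qubits the state vector is still only 16 MiB, but a dense 20-qubit operator would already be 16 TiB, which is why practical simulators apply gates without ever building the full matrix.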

They do exist. Here's a browser-based one. Here's one written in C++. Here's one written in Java. But, as stated by CodesInChaos, a quantum computer operates on all probability amplitudes at once. So imagine a 3-qubit quantum register; a typical state for it to be in looks like this:
a1|000> + a2|001> + a3|010> + a4|011> + a5|100> + a6|101> + a7|110> + a8|111>
It's a superposition of all the possible combinations. What's worse is that those probability amplitudes are complex numbers, so an n-qubit register requires 2^n complex amplitudes, i.e. 2 * 2^n real numbers. For a 32-qubit register, that's 2^33 = 8589934592 real numbers.
And as CodesInChaos said, the unitary matrices used to transform those states are that size squared: 2^32 by 2^32, or 2^64 = 18446744073709551616 complex entries for 32 qubits. Applying one is a matrix-vector product over all of those amplitudes... They're computationally costly, to say the least.
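To see what "operating on all amplitudes at once" looks like in code, here is a minimal NumPy sketch of my own (assuming nothing beyond numpy, and not how the linked simulators are implemented) that stores a 3-qubit state as 8 complex amplitudes and applies a Hadamard gate as a full 8x8 matrix-vector product:

    import numpy as np

    # 3-qubit state vector: 2^3 = 8 complex amplitudes, ordered |000>, |001>, ..., |111>
    state = np.zeros(8, dtype=complex)
    state[0] = 1.0                          # start in |000>

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    I = np.eye(2)

    # Applying H to the leftmost qubit means building the full 8x8 operator
    U = np.kron(H, np.kron(I, I))
    state = U @ state                       # one matrix-vector product over all amplitudes

    print(np.round(state, 3))               # equal amplitudes on |000> and |100>

Real simulators usually apply the 2x2 gate directly to pairs of amplitudes instead of materialising the 2^n x 2^n matrix, but the 2^n amplitudes themselves still have to be stored.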

My answer is yes:
You can simulate the behaviour of a quantum machine by simulating the algorithm the machine runs. For example, the D-Wave quantum machine uses a technique called quantum annealing, which can be compared to the classical simulated annealing algorithm (see the sketch after the references below).
References:
1. Quantum annealing
2. Simulated annealing
3. Optimization by simulated annealing: Quantitative studies
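For comparison, here is a minimal classical simulated annealing sketch in Python. This is a toy of my own (the objective function, cooling schedule and step size are all arbitrary choices), and it is classical thermal annealing, not the quantum annealing that D-Wave hardware performs:

    import math, random

    def simulated_annealing(f, x0, steps=10000, t_start=1.0, t_end=1e-3):
        """Minimise f over the reals with a simple geometric cooling schedule."""
        x, fx = x0, f(x0)
        for k in range(steps):
            t = t_start * (t_end / t_start) ** (k / steps)   # temperature decays over time
            cand = x + random.gauss(0, 0.5)                  # random neighbour
            fc = f(cand)
            # Always accept improvements; accept worse moves with Boltzmann probability
            if fc < fx or random.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
        return x, fx

    # Toy objective with several local minima
    print(simulated_annealing(lambda x: x * x + 3 * math.sin(5 * x), x0=4.0))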

As Wikipedia states:
A classical computer could in principle (with exponential resources) simulate a quantum algorithm, as quantum computation does not violate the Church–Turing thesis.

There is a very long list of languages, frameworks and simulators.
Some simulate the quantum equations at a low level, others just the gates.
Microsoft Quantum Development Kit (Q#)
Microsoft LIQUi|>
IBM Quantum Experience
Rigetti Forest
ProjectQ
QuTiP
OpenFermion
Qbsolv
ScaffCC
Quantum Computing Playground (Google)
Raytheon BBN
Quirk
It would be great to know your opinions on their capabilities and ease of use.
https://quantumcomputingreport.com/resources/tools/
https://github.com/topics/quantum-computing?o=desc&s=stars

Years ago I attended a talk at a Perl conference where Damian Conway (I believe) was speculating on some of this. A bit later there was a Perl module made available that did some of this stuff. Search CPAN for Quantum::Superpositions.

Yet another reason why classical simulation of quantum computing is hard: you need almost perfect - i.e. as perfect as possible - random number generators to simulate measurement.
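Concretely, measurement is usually simulated by sampling a basis state with probability equal to the squared magnitude of its amplitude, so the quality of the (pseudo)random generator directly shapes the measurement statistics. A minimal sketch of my own, assuming NumPy and the state-vector representation used above:

    import numpy as np

    def measure(state, rng=np.random.default_rng()):
        """Collapse an n-qubit state vector: return the index of a basis state
        sampled with probability |amplitude|^2."""
        probs = np.abs(state) ** 2
        probs /= probs.sum()                      # guard against rounding drift
        return rng.choice(len(state), p=probs)    # this is where RNG quality matters

    # Example: measuring (|000> + |100>)/sqrt(2) should give 0 or 4 about equally often
    state = np.zeros(8, dtype=complex)
    state[0] = state[4] = 1 / np.sqrt(2)
    print([int(measure(state)) for _ in range(10)])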

Quipper is a full-blown simulation EDSL for quantum computing, implemented in Haskell.
I have experience simulating the behaviour of several QC algorithms with it, such as the Deutsch, Deutsch–Jozsa, Simon's and Shor's algorithms, and it's very straightforward.
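For reference, the Deutsch algorithm is small enough that even a bare state-vector simulation fits in a few lines. The sketch below is plain NumPy rather than Quipper (so it only mimics what a Quipper program would compute), and the oracle construction and function names are my own choices for illustration:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I = np.eye(2)

    def deutsch(f):
        """Decide whether f: {0,1} -> {0,1} is constant or balanced
        with a single (simulated) oracle query."""
        # Oracle U_f: |x>|y> -> |x>|y XOR f(x)>, basis order |00>, |01>, |10>, |11>
        U_f = np.zeros((4, 4))
        for x in (0, 1):
            for y in (0, 1):
                U_f[2 * x + (y ^ f(x)), 2 * x + y] = 1

        state = np.kron([1, 0], [0, 1])          # start in |0>|1>
        state = np.kron(H, H) @ state            # Hadamard on both qubits
        state = U_f @ state                      # one oracle query
        state = np.kron(H, I) @ state            # Hadamard on the query qubit
        p_first_is_one = np.sum(np.abs(state[2:]) ** 2)
        return "balanced" if p_first_is_one > 0.5 else "constant"

    print(deutsch(lambda x: 0))   # constant
    print(deutsch(lambda x: x))   # balanced

Measuring the first qubit as 0 means f is constant, 1 means it is balanced, after a single oracle query.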

Another reason why classical simulation of quantum computation is hard: to keep track of what is going on, you may want to know after each action of an n-qubit gate (n > 1) whether the outgoing qubits are entangled or not. This must be calculated classically, but is known to be NP-hard.
See here: https://stackoverflow.com/a/23327816/363429

Related

Neural network: Just some regression?

I just started reading about neural networks. I thought they were something magic and extremely intelligent, but at the end of the day it just seems to be a large math function with many "undefined" constants? The learning is just another way of doing some kind of (more or less "stupid") regression? Is this true? To me this doesn't seem very brilliant, so I am a bit surprised why it works so well.
Thank you very much
It has been proved that an artificial neural network with just one hidden layer is a universal approximator; that is, under proper parameterisation, it can approximate any continuous function (see universal approximation theorem). More importantly, as the Wikipedia article mentions:
Work by Hava Siegelmann and Eduardo D. Sontag has provided a proof that a specific recurrent architecture with rational valued weights (as opposed to full precision real number-valued weights) has the full power of a Universal Turing Machine using a finite number of neurons and standard linear connections.
This means that, at least in theory, a neural net is as clever as your expensive PC. And this is true without taking into account all modern extensions, e.g. Long Short-Term Memory networks. As one of the comments mentions, though, the real problem is learnability, i.e. how to find the right set of parameters for the task under consideration.
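To connect this back to the "is it just regression?" question: a one-hidden-layer network really is just a parameterised function, and training it by gradient descent on squared error is literally nonlinear least-squares regression. A minimal NumPy sketch of my own (toy sizes, initialisation and learning rate chosen arbitrarily) fitting sin(x):

    import numpy as np

    # The network is the function  f(x) = tanh(x @ W1 + b1) @ W2 + b2
    # and "training" adjusts W1, b1, W2, b2 to minimise squared error.
    rng = np.random.default_rng(0)
    X = np.linspace(-3, 3, 200).reshape(-1, 1)
    y = np.sin(X)                                   # target function to approximate

    hidden = 30
    W1 = rng.normal(0, 1.0, (1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, 1)); b2 = np.zeros(1)
    lr = 0.1

    for step in range(5000):
        h = np.tanh(X @ W1 + b1)                    # hidden layer activations
        pred = h @ W2 + b2                          # network output
        err = pred - y
        # Plain backpropagation: gradient descent on mean squared error
        grad_W2 = h.T @ err / len(X); grad_b2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)            # backprop through tanh
        grad_W1 = X.T @ dh / len(X); grad_b1 = dh.mean(0)
        W1 -= lr * grad_W1; b1 -= lr * grad_b1
        W2 -= lr * grad_W2; b2 -= lr * grad_b2

    print("final MSE:", float(np.mean(err ** 2)))   # should end up well below the variance of y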

What are the available approaches to interconnecting simulation systems?

I am looking for a distributed simulation algorithm which allows me to couple multiple standalone systems. The systems I am targeting for interconnection use different formalisms, e.g. discrete time and continuous simulation paradigms. The only algorithms I have found so far come from the field of parallel discrete event simulation (PDES), such as the classical Chandy/Misra "null"-message protocol, which has some very undesirable problems. My question is: what other approaches, besides PDES algorithms, are known and can be used for interconnecting simulation systems?
Not an algorithm, but there are two IEEE standards out there that define protocols intended to address your issue: High-Level Architecture (HLA) and Distributed Interactive Simulation (DIS). HLA has a much greater presence in the analytic discrete-event simulation community where I hang out; DIS tends to get more use in training applications. If you'd like to check out some applications papers, go to the Winter Simulation Conference / INFORMS-sponsored paper archive site and search for HLA, you'll get 448 hits.
Be forewarned, trying to make this stuff work in general requires some pretty weird plumbing, lots of kludges, and can be very fragile.

Matlab and GPU/CUDA programming

I need to run several independent analyses on the same data set.
Specifically, I need to run bunches of 100 glm (generalized linear models) analyses and was thinking to take advantage of my video card (GTX580).
As I have access to Matlab and the Parallel Computing Toolbox (and I'm not good with C++), I decided to give it a try.
I understand that a single GLM is not ideal for parallel computing, but as I need to run 100-200 in parallel, I thought that using parfor could be a solution.
My problem is that it is not clear to me which approach I should follow. I wrote a gpuArray version of the matlab function glmfit, but using parfor doesn't have any advantage over a standard "for" loop.
Has this anything to do with the matlabpool setting? It is not even clear to me how to set this to "see" the GPU card. By default, it is set to the number of cores in the CPU (4 in my case), if I'm not wrong.
Am I completely wrong on the approach?
Any suggestion would be highly appreciated.
Edit
Thanks. I'm aware of GPUmat and Jacket, and I could start writing in C without too much effort, but I'm testing the GPU computing possibilities for a department where everybody uses Matlab or R. The final goal would be a cluster based on C2050 cards and the MATLAB Distributed Computing Server (or at least that was the first project).
Reading the ads from MathWorks I was under the impression that parallel computing was possible even without C skills. It is impossible to ask the researchers in my department to learn C, so I'm guessing that GPUmat and Jacket are the better solutions, even if the limitations are quite big and support for several commonly used routines like glm is non-existent.
How can they be interfaced with a cluster? Do they work with some job distribution system?
I would recommend you try either GPUMat (free) or AccelerEyes Jacket (buy, but has free trial) rather than the Parallel Computing Toolbox. The toolbox doesn't have as much functionality.
To get the most performance, you may want to learn some C (no need for C++) and code in raw CUDA yourself. Many of these high level tools may not be smart enough about how they manage memory transfers (you could lose all your computational benefits from needlessly shuffling data across the PCI-E bus).
Parfor will help you utilize multiple GPUs, but not a single GPU. The thing is that a single GPU can do only one thing at a time, so parfor on a single GPU and a plain for loop on a single GPU will achieve exactly the same effect (as you are seeing).
Jacket tends to be more efficient, as it can combine multiple operations and run them together, and it has more features; but most departments already have the Parallel Computing Toolbox and not Jacket, so that can be an issue. You can try the demo to check.
No experience with GPUmat.
The Parallel Computing Toolbox is getting better; what you need is some large matrix operations. GPUs are good at doing the same thing many times over, so you need to either combine your code somehow into one operation or make each operation big enough. We are talking about a need for ~10000 things in parallel at least, and not as a set of 1e4 small matrices but rather as one large matrix with at least 1e4 elements.
I do find that with the Parallel Computing Toolbox you still need quite a bit of inline CUDA code to be effective (it's still pretty limited). It does better allow you to inline kernels and transform Matlab code into kernels, though.
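To illustrate the "make each operation big enough" point, here is a sketch of my own in Python/NumPy (not Matlab, and with ordinary least squares standing in for a full GLM just to keep it short) comparing a per-model loop with one batched solve. The batched form is the shape of computation that GPU-oriented tools such as gpuArray tend to reward:

    import numpy as np

    rng = np.random.default_rng(1)
    n_obs, n_pred, n_models = 10000, 5, 200
    X = rng.normal(size=(n_obs, n_pred))            # shared design matrix
    Y = rng.normal(size=(n_obs, n_models))          # one response column per model

    # Looping: one small solve per model (analogous to a parfor over glmfit calls)
    betas_loop = np.column_stack(
        [np.linalg.lstsq(X, Y[:, j], rcond=None)[0] for j in range(n_models)])

    # Batched: a single large solve for all 200 models at once
    betas_batch = np.linalg.lstsq(X, Y, rcond=None)[0]

    print(np.allclose(betas_loop, betas_batch))     # same coefficients, one big operation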

Why do we use neural networks in computers?

Why do we use neural networks? They are biologically inspired. Aren't there any other solutions that are more "suitable" for computers?
In other words: why do we use the human brain as a model of inspiration for artificial intelligence?
Neural networks aren't really very biological. They resemble, at a very general level, the architecture of neurons, but it's a great exaggeration to say that they work "just like the brain" (an exaggeration that's encouraged by some neural-net advocates, alas).
Neural nets are mostly used for fuzzy, difficult problems that don't yield to traditional algorithmic approaches. IOWs, there are more "suitable" solutions for computers, but sometimes those solutions don't work, and in those cases one approach is a neural network.
Why do we use neural networks?
Because they're simple to construct, and often appear to be a good approach to certain classes of problems, such as pattern recognition.
Aren't there any more solutions that're more "suitable" for computers?
Yes, implementations that more closely match a computer's architecture can be more suitable for the computer, but then can be less suitable for an effective solution.
Why do we use the human brain as a model for inspiration for artificial intelligence?
Because our brain is the best example we have of something intelligent.
Neural Networks are still used for two reasons.
They are easy to understand for people who don't want to delve into the math of a more complicated algorithm.
They have a really good name. I mean, when you roll into a CEO's office to sell him your model, which would you rather say: Neural Network or Support Vector Machine? When he asks how it works you can just say "just like the neurons in your brain", which is something most people understand. If you try to explain a support vector machine, Mr. CEO is going to be lost (not because he is dumb but because SVMs are harder to understand).
Sometimes they are still useful; however, I think the training time is often just too long.
I don't understand the question. Neural nets are suitable for certain functions, and not others. The same is true for various other sorts of classes of algorithms, regardless of what they might have been inspired by.
If we have a good many inputs to something, and we want some outputs, and we have a set of example inputs with known desired outputs, and we don't want to calculate a function ourselves, neural nets are excellent. We feed in the example inputs, compare the output to the example outputs, and adjust the inner workings of the NN in an automatic fashion, to make the NN output closer to the desired output.
This sort of function derivation is very useful in various forms of pattern recognition and general classification. It isn't a panacea, of course. It has no explanatory power (in that you can't look at the innards to see why it classifies something in a particular way), it doesn't offer guarantees of correctness within certain limits, validating how well it works is difficult, and gathering enough examples for training and validation can be expensive or even impossible. The trick is to know when to use a NN and what sort to use.
There are, of course, people who oversell the things as some sort of super solution or even an explanation of human thought, and you might be reacting to them.
Neural networks are only "inspired" by the neural structure of our brain, but they are not even close to the complexity of the behaviour of a real neuron (to date there is no neuron model that captures the complexity of a SINGLE neuron, let alone a neuronal population...).
Although "neural", machine "learning" and other "pseudo-bio" terms (like "genetic algorithms") are very "cool", that does not mean that they are actually based on real biological processes;
they may only very approximately resemble a biological situation.
NB: of course this does not make them useless! They're very very important in many fields!
Neural networks have been around for a while, and were originally developed to model, as closely as our understanding at the time allowed, the way neurons work in the brain. They represent a network of neurons, hence "neural network." Since computers and brains are very different hardware-wise, implementing anything like a brain on a computer is going to be rather clunky. However, as others have stated so far, neural networks can be useful for some problems that are vague, such as pattern recognition, facial recognition, and similar uses. They are also still useful as a basic model of how neurons connect, and are often used in cognitive science and other fields of artificial intelligence to try to understand how small parts of the complex human brain might make simple decisions. Unfortunately, once a neural network "learns" something, it is very difficult to understand how it actually makes its decisions.
There are, of course, many misuses of neural networks and in most non-research applications, other algorithms have been developed that are much more accurate. If a piece of business software proudly proclaims it uses a neural network, chances are it probably doesn't need it, and might be using it to inefficiently perform a task that could be performed in a much easier way. Unless the software is actually "learning" on the fly, which is very rare, neural networks are pretty much useless. And even when the software is "learning", sometimes neural networks aren't the best way to go.
While I admit I tinker with neural networks because of my hopes of creating high-level AI, you can look at a neural network as being more than just an artificial representation of a human brain: it is also a mathematical construct.
For example, let's say you have a function y = f(x), or more abstractly y = f(x1, x2, ..., xn-1, xn). Neural networks themselves act as functions, or even as sets of functions, taking in a large input and producing some output: [y1, y2, ..., yn-1, yn] = f(x1, x2, ..., xn-1, xn).
Furthermore, they are not static, but can keep adapting and learning, and eventually extrapolate (predict) interesting things. Their abstractness can even result in them coming up with unique solutions to problems that haven't been thought up yet. For example, the TD-Gammon program learned to play backgammon at close to world-champion level, and top players remarked that it played endgames unlike anything they had seen before. (That's pretty awesome if you ask me, considering the complexity of NNs.)
And then when you look at recurrent neural networks (i.e. networks that have internal feedback loops, or pipe their output back into their input while consuming new input), they can solve even more interesting problems and map even more complex functions.
In a nutshell, neural networks are like very abstract, high-dimensional functions, capable of mapping/learning very interesting things that would otherwise be impractical to program directly. For example, directly computing the total net gravitational force on a large number of objects is expensive (you have to evaluate every pairwise interaction), but once a neural network has learned an approximation of that mapping it can produce an answer far more cheaply than the direct calculation. Just look at how fast your brain processes physics, spatial data, images and sound when you dream. That's the potential computational power of neural networks. The way they store data is very clever as well (in synaptic patterns, i.e. memories).
Artificial intelligence is a branch of computer science devoted to making computers behave more 'biologically.' This is useful when you want a computer to do human (biological) things like play chess or imitate casual conversation.
Human brains are much more efficient and powerful in some ways than the most powerful computers, so it makes sense to try to imitate a biological way of processing information.
Most neural networks I'm aware of are nothing more than flexible interpolators. Backpropagation of errors is easy and fast; here are some possible uses:
Classification of data
Some games (modern backgammon AIs beat the best players in the world, the evaluation function is a neural net)
Pattern recognition (OCR?)
There is nothing here particularly related to human intelligence. There are other uses of neural nets; I have seen an implementation of associative memory which allowed for degradation without (much) data loss, pretty much like the brain, which sees some neurons die over time.

Which open source physical simulation methods are worth porting to GPU?

I am writing a report, and I would like to know which open source physical simulation methods (like molecular dynamics, Brownian dynamics, etc.) that have not been ported yet would, in your opinion, be worth porting to GPU or other specialized hardware that could potentially speed up the calculation.
Links to the projects would be really appreciated.
Thanks in advance
Any physical simulation technique, be it finite difference, finite element, or boundary element, could benefit from a port to GPU. Same for Monte Carlo simulations of financial models. Anything that could use that smoking floating point processing, really.
I am currently working on a quantum chemistry application on GPU. As far as I am aware, quantum chemistry is one of the most demanding areas in terms of total CPU time. There have been a number of papers regarding GPUs and quantum chemistry; you can research those.
As far as methods go, all of them are open source. Are you asking about a particular program? Then you can look at PyQuante or MPQC. For molecular dynamics, look at HOOMD. You can also Google QCD on GPU.