Why less code for microkernel than monolithic? - distributed-computing

Why do we require less code to write a monolithic kernel than a microkernel although a monolithic kernel is big in size?

Related

In neural networks, why is the number of neurons conventionally set to 2^n?

For example, when stacking Dense layers, we conventionally set the number of neurons to 256, 128, 64, ... and so on.
My question is:
What's the reason for conventionally using 2^n neurons? Does this make the code run faster? Does it save memory? Or are there other reasons?
It's historical. Early neural network implementations for GPU Computing (written in CUDA, OpenCL etc) had to concern themselves with efficient memory management to do data parallelism.
Generally speaking, you have to align N computations on physical processors. The number of physical processors is usually a power of 2. Therefore, if the number of computations is not a power of 2, the computations can't be mapped 1:1 and have to be moved around, requiring additional memory management (further reading here). This was only relevant for parallel batch processing, i.e. having the batch size as a power of 2 gave you slightly better performance. Interestingly, having other hyperparameters such as the number of hidden units as a power of 2 never had a measurable benefit - I assume as neural networks got more popular, people simply started adapting this practice without knowing why and spreading it to other hyperparameters.
Nowadays, some low-level implementations might still benefit from this convention but if you're using CUDA with Tensorflow or Pytorch in 2020 with a modern GPU architecture, you're very unlikely to encounter any difference between a batch size of 128 and 129 as these systems are highly optimized for very efficient data parallelism.
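If you want to check this claim on your own hardware, here is a minimal sketch, assuming PyTorch is installed (layer sizes and repeat counts are arbitrary choices for illustration); it times a forward pass at batch sizes 128 and 129, and on recent GPUs the difference is usually negligible.

# Minimal sketch: compare forward-pass time for batch sizes 128 vs 129.
# Assumes PyTorch is installed; falls back to CPU if no CUDA device is found.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 128),
).to(device)

def time_batch(batch_size, repeats=100):
    x = torch.randn(batch_size, 1024, device=device)
    # Warm-up so one-time initialization does not pollute the measurement.
    for _ in range(10):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / repeats

for bs in (128, 129):
    print(f"batch size {bs}: {time_batch(bs) * 1e3:.3f} ms per forward pass")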

What is a high performance version of LAPACK and BLAS?

This page of IMSL says
To obtain improved performance we recommend linking with High Performance versions of LAPACK and BLAS, if available.
What are High Performance versions of LAPACK and BLAS?
There are plenty of good implementations to pick from:
Intel MKL is likely the best on Intel machines. It's not free though, so that may be a problem.
According to their benchmark, OpenBLAS compares quite well with Intel MKL and is free
Eigen is also an option and has a largish (albeit old) benchmark showing good performance on small matrices (though it's not technically a drop-in BLAS library)
ATLAS, OSKI, POSKI are examples of auto-tuned kernels which will claim to work on many architectures
Generally, it is quite hard to pick one of these without benchmarking because:
some implementations work better on different types of matrices. For example Eigen works better on matrices with small rank (100s)
some are optimised for specific architectures (e.g. Intel's)
in some cases the multithreading of the BLAS library may conflict with a multithreaded application (e.g. OpenBLAS)
developer's benchmarks may tend to emphasise cases which work better on their implementation.
I would suggest picking one or two of these libraries that apply to your use case and benchmarking them for your particular application on your particular (or a similar) machine. This is quite easy to do even after compiling your code.
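As a rough illustration of such a benchmark, here is a sketch using NumPy (not part of the question, just a convenient front end to whatever BLAS is installed): it prints which BLAS/LAPACK the build is linked against and times a large matrix multiplication, so you can swap the underlying library (MKL, OpenBLAS, reference BLAS) and compare.

# Sketch: identify the BLAS NumPy is linked against and time a DGEMM-sized matmul.
import time
import numpy as np

np.show_config()          # prints which BLAS/LAPACK NumPy was built against

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                 # dispatched to the underlying BLAS dgemm
elapsed = time.perf_counter() - start

# Rough GFLOP/s estimate for an n x n x n matrix multiply (~2*n^3 flops).
print(f"{n}x{n} matmul: {elapsed:.3f} s, ~{2 * n**3 / elapsed / 1e9:.1f} GFLOP/s")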
LAPACK and BLAS are performance libraries that provide basic linear algebra operations for systems of linear equations. You can find such libraries useful in computer vision (for example, object detection and classification), classical algorithms, modelling, ...
TASKING provides a full C implementation of the LAPACK and BLAS performance libraries. Both libraries are ISO C99 compliant, with full documentation and examples; you can check it here
http://www.tasking.com/products/tasking-lapack-performance-libraries

pycuda vs theano vs pylearn2

I am currently learning GPU programming to improve the performance of machine learning algorithms. Initially I tried to learn CUDA programming in pure C, then I found pycuda, which to me is a wrapper around the CUDA library, and then I found theano and pylearn2 and got a little confused:
I understand them in this way:
pycuda: python wrapper for cuda library
theano: similar to numpy but transparent to GPU and CPU
pylearn2: deep learning package which builds on theano and implements several machine learning/deep learning models
Since I am new to GPU programming, should I start learning from a C/C++ implementation, or is starting from pycuda enough, or even starting from theano? For example, I would like to implement a randomForest model after learning GPU programming. Thanks.
Your understanding is almost right. I would just add some remarks about Theano. It's much more than a NumPy that can run on the GPU. Theano is really a math expression compiler: it translates symbolic math expressions into highly optimized C/CUDA code, targeted at both CPU and GPU. The code it generates is often much more efficient than what most programmers would write. Theano can also do symbolic differentiation (very useful for gradient-based optimization) and has features to improve numerical stability (which is probably useful, though I don't know to what extent). It's very likely Theano will be enough to implement what you need. If you still decide to learn CUDA or PyCUDA, choose the one based on the language you will use, C++ or Python.
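As a small illustration of the symbolic style (a sketch assuming Theano is installed; it follows the classic tutorial pattern), you define an expression, ask Theano for its gradient, and compile both into callable functions:

# Sketch: Theano builds a symbolic expression graph, differentiates it
# symbolically, and compiles it to optimized CPU/GPU code.
import theano
import theano.tensor as T

x = T.dvector('x')                # symbolic vector of doubles
y = T.sum(x ** 2)                 # symbolic expression: sum of squares
gy = T.grad(y, x)                 # symbolic gradient dy/dx = 2*x

f = theano.function([x], y)       # compile the expression
g = theano.function([x], gy)      # compile its gradient

print(f([1.0, 2.0, 3.0]))         # 14.0
print(g([1.0, 2.0, 3.0]))         # [2. 4. 6.]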

Dedicated distributed system to handle matlab jobs

I'm a software engineer and am currently looking to set up a distributed system at my laboratory so that I can process some MATLAB jobs on it. I have looked into MATLAB MPI, but I want to know if there is some way to set up such a system here without paying any fee.
I have spent a lot of time looking at that very issue, and the short answer is: nope, not possible.
There are two long answers. First, if you're constrained to using Matlab, then all roads lead back to MathWorks. One possibility is that you could compile your code, you'd need to buy the compiler from Mathworks, though, and then you could run the compiled code on whatever grid infrastructure you wish, such as Hadoop.
Second, for this reason, I have found it much better to simply port the code to another language, usually an open-source one. For the work I tend to do, Octave is a poor replacement for Matlab. Instead, R and Python are great for most of the same functionality. Personally, I lean a lot more toward R than Python, but that's because R is a better fit for these applications (i.e. they're very statistical in nature).
I've ported a lot of Matlab code to R and it's not too bad. Porting to Python would be easier in general, though, and there is a very large Matlab refugee community that has switched to Python.
Once you're in either Python or R, there are a lot of options for MPI, multicore tools, distributed systems, GPU tools, and more. In fact, you may find the migration easier by writing some of the distributed functions in Python or R, loading up an easy to use grid system, and then have Matlab submit the job to the server. Your local code could be the same, but then you can work on porting only the gridded parts, which you'd probably have to devote some time to write in Matlab, anyway.
I wouldn't say it's completely impossible; you can use TCP/IP sockets to build a client/server application (you will find many MEX implementations of BSD sockets on the File Exchange).
The architecture is simple: your main MATLAB client script sends jobs (code along with any needed data, serialized) to nodes to evaluate and send back results when done. These nodes would be distributed MATLAB instances running the server part, which listens for connections and runs anything it receives through the EVAL function.
Obviously it is up to you to write code that can be divided into separable tasks.
This is not as sophisticated as what is offered by the Distributed Computing Toolbox, but basically does the same thing...
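To show how small that architecture can be, here is a sketch of the same client/server idea written in Python rather than MATLAB (host, port, and the example job are made up for illustration); the MATLAB version described above would do the equivalent with MEX BSD sockets and EVAL.

# Sketch of the job-distribution pattern: a worker evaluates whatever
# expression it receives and returns the result. Error handling and
# serialization are kept minimal on purpose.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007    # hypothetical values for illustration

def worker():
    # A node: accept one connection, evaluate the received job, send back the result.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            job = conn.recv(4096).decode()   # the job, sent as source text
            result = str(eval(job))          # analogous to MATLAB's EVAL
            conn.sendall(result.encode())

def submit(job):
    # The client: send a job to a node and wait for the result.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(job.encode())
        return cli.recv(4096).decode()

if __name__ == "__main__":
    threading.Thread(target=worker, daemon=True).start()
    time.sleep(0.5)                          # give the worker time to start listening
    print(submit("sum(i * i for i in range(1000))"))   # -> 332833500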

CUDA and MATLAB for loop optimization

I'm going to attempt to optimize some code written in MATLAB, by using CUDA. I recently started programming CUDA, but I've got a general idea of how it works.
So, say I want to add two matrices together. In CUDA, I could write an algorithm that would utilize a thread to calculate the answer for each element in the result matrix. However, isn't this technique probably similar to what MATLAB already does? In that case, wouldn't the efficiency be independent of the technique and attributable only to the hardware level?
The technique might be similar, but remember that with CUDA you have hundreds of threads running simultaneously. If MATLAB is using threads and those threads are running on a quad core, you are only going to get 4 threads executed per clock cycle, while you might have a couple of hundred threads running on CUDA in that same clock cycle.
So to answer your question: YES, the efficiency in this example is independent of the technique and attributable only to the hardware.
The answer is unequivocally yes, all the efficiencies are hardware level. I don't know exactly how MATLAB works, but the advantage of CUDA is that multiple threads can be executed simultaneously, unlike MATLAB.
On a side note, if your problem is small, or requires many read/write operations, CUDA will probably only be an additional headache.
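To make the "one thread per element" idea from the question concrete, here is a sketch using PyCUDA (mentioned earlier in this page; it assumes PyCUDA and a CUDA device are available, and the array size and block size are arbitrary): each GPU thread adds exactly one pair of elements.

# Sketch: elementwise addition with one CUDA thread per element, via PyCUDA.
import numpy as np
import pycuda.autoinit                 # initializes a CUDA context
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                         // guard against the last partial block
        c[i] = a[i] + b[i];
}
""")
add = mod.get_function("add")

n = 1024 * 1024
a = np.random.randn(n).astype(np.float32)
b = np.random.randn(n).astype(np.float32)

a_gpu = gpuarray.to_gpu(a)
b_gpu = gpuarray.to_gpu(b)
c_gpu = gpuarray.empty_like(a_gpu)

threads = 256
blocks = (n + threads - 1) // threads
add(a_gpu, b_gpu, c_gpu, np.int32(n), block=(threads, 1, 1), grid=(blocks, 1))

assert np.allclose(c_gpu.get(), a + b)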
CUDA has official support for MATLAB.
[need link]
You can make use of MEX files to run code on the GPU from MATLAB.
The bottleneck is the speed at which data is transferred from CPU RAM to the GPU, so if transfers are minimized and done in large chunks, the speedup is great.
For simple things, it's better to use the gpuArray support in the MATLAB Parallel Computing Toolbox (PCT). You can check it here
http://www.mathworks.de/de/help/distcomp/using-gpuarray.html
For things like adding gpuArrays, multiplications, mins, maxs, etc., the implementation they use tends to be OK. I did find that for batch operations on small matrices, like abs(y-Hx).^2, you're better off writing a small kernel that does it for you.
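For reference, the kind of batched expression meant here looks like the following (a NumPy sketch with made-up sizes, used only to show the shape of the problem): many small residuals abs(y - H*x).^2 evaluated at once. A handwritten kernel can fuse the subtract, abs, and square into a single pass over the data instead of launching several separate elementwise operations.

# Sketch of the batched small-matrix expression abs(y - H*x).^2 (made-up sizes).
# Each of the `batch` problems has its own small H, x and y.
import numpy as np

batch, m, n = 10000, 8, 4
H = np.random.randn(batch, m, n)
x = np.random.randn(batch, n, 1)
y = np.random.randn(batch, m, 1)

# Vectorized over the batch: one batched matmul plus three elementwise passes.
# A fused GPU kernel would do the subtract/abs/square in a single pass.
residual = np.abs(y - H @ x) ** 2
print(residual.shape)   # (10000, 8, 1)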