Ways to accelerate reduce operations on Xeon CPU, GPU and Xeon Phi - hpc

I have an application where reduce operations (like sum, max) on a large matrix are the bottleneck. I need to make this as fast as possible. Are there vector instructions in MKL to do that?
Is there a special hardware unit to deal with it on a Xeon CPU, GPU or MIC?
How are reduce operations implemented on this hardware in general?

You can implement your own simple reductions using the KNC vpermd and vpermf32x4 instructions, as well as the swizzle modifiers, to do cross-lane operations inside the vector units.
The C intrinsic equivalents of these would be the _mm512_{mask_}permute* and _mm512_{mask_}swizzle* families.
However, I recommend that you first look at the array notation reduce operations, which already have high-performance implementations on the MIC.
Look at the reduction operations available here, and also check out this video by Taylor Kidd from Intel talking about array notation reductions on the Xeon Phi, starting at 20 min 30 s.
EDIT: I noticed you are also looking for CPU-based solutions. The array notation reductions will work very well on the Xeon too.
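For what it's worth, here is a minimal sketch of what the array notation route can look like. It assumes the Intel C/C++ compiler with Cilk Plus array notation support (which is what the answer above refers to); the array and its contents are made up purely for illustration.
// sum.cpp -- array notation reductions (Intel Cilk Plus); compile e.g. with: icc -O3 sum.cpp
#include <cstdio>

int main() {
    const int n = 1024;
    float a[n];
    for (int i = 0; i < n; ++i) a[i] = 1.0f;     // dummy data
    float total   = __sec_reduce_add(a[0:n]);    // vectorised sum over the whole array section
    float biggest = __sec_reduce_max(a[0:n]);    // vectorised max
    std::printf("sum = %f, max = %f\n", total, biggest);
    return 0;
}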

It turns out that none of this hardware has a built-in reduce circuit.
I had imagined something like sixteen 17-bit adders attached to a 128-bit vector register for a reduce-sum operation. Maybe this is because no one has encountered a significant bottleneck with reduce operations. Well, the best solution I found is #pragma omp parallel for reduction in OpenMP. I am yet to test its performance though.
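For reference, a minimal sketch of that OpenMP reduction in C++ (the data and sizes are placeholders; the max reduction clause needs OpenMP 3.1 or later):
// reduce.cpp -- OpenMP sum and max reductions; compile e.g. with: g++ -O3 -fopenmp reduce.cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    const long long n = 1 << 24;
    std::vector<float> data(n, 1.0f);            // dummy matrix flattened into one array

    float sum = 0.0f;
    float mx  = data[0];
    #pragma omp parallel for reduction(+:sum) reduction(max:mx)
    for (long long i = 0; i < n; ++i) {
        sum += data[i];
        mx = std::max(mx, data[i]);
    }
    std::printf("sum = %f, max = %f\n", sum, mx);
    return 0;
}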

This operation is going to be bandwidth-limited, and thus vectorization almost certainly doesn't matter. You want the hardware with the most memory bandwidth. An Intel Xeon Phi processor has more aggregate bandwidth (though not more bandwidth per core) than a Xeon processor.

Related

In neural networks, why is the number of neurons conventionally set to 2^n?

For example, when stacking Dense layers, we conventionally set the number of neurons to 256, 128, 64, and so on.
My question is:
What's the reason for conventionally using 2^n neurons? Does this make the code run faster? Does it save memory? Or are there other reasons?
It's historical. Early neural network implementations for GPU computing (written in CUDA, OpenCL, etc.) had to concern themselves with efficient memory management to achieve data parallelism.
Generally speaking, you have to align N computations on physical processors. The number of physical processors is usually a power of 2. Therefore, if the number of computations is not a power of 2, the computations can't be mapped 1:1 and have to be moved around, requiring additional memory management (further reading here). This was only relevant for parallel batch processing, i.e. having the batch size as a power of 2 gave you slightly better performance. Interestingly, having other hyperparameters such as the number of hidden units as a power of 2 never had a measurable benefit - I assume as neural networks got more popular, people simply started adapting this practice without knowing why and spreading it to other hyperparameters.
Nowadays, some low-level implementations might still benefit from this convention but if you're using CUDA with Tensorflow or Pytorch in 2020 with a modern GPU architecture, you're very unlikely to encounter any difference between a batch size of 128 and 129 as these systems are highly optimized for very efficient data parallelism.

Why is Ethernet using CRC-32 and not CRC-32C?

It has been shown that CRC-32C provides better results (an improved Hamming distance and a faster implementation) than CRC-32. Why is Ethernet still using the old CRC-32 and not CRC-32C?
"Faster implementation" is not true for the hardware that is normally used to implement the data link layer.
You may be referring to the fact that one particular processor architecture, x86-64, has a CRC instruction that uses the CRC-32C polynomial. However the ARM architecture (aarch64) CRC instruction uses the CRC-32 polynomial. Go figure.
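As an aside, a minimal sketch of how that x86-64 instruction is exposed to C/C++ code, assuming a CPU and compiler with SSE4.2 support (the buffer and test string are made up for illustration):
// crc32c.cpp -- the x86-64 CRC32 instruction (CRC-32C polynomial) via SSE4.2 intrinsics
// Compile e.g. with: g++ -O2 -msse4.2 crc32c.cpp
#include <nmmintrin.h>
#include <cstddef>
#include <cstdint>
#include <cstdio>

uint32_t crc32c(const uint8_t* data, std::size_t len) {
    uint32_t crc = 0xFFFFFFFFu;                  // conventional initial value
    for (std::size_t i = 0; i < len; ++i)
        crc = _mm_crc32_u8(crc, data[i]);        // one byte at a time, for simplicity
    return crc ^ 0xFFFFFFFFu;                    // conventional final inversion
}

int main() {
    const uint8_t msg[] = "123456789";
    std::printf("CRC-32C: %08x\n", crc32c(msg, 9));  // should print e3069283 for this standard test vector
    return 0;
}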
One could argue that yet another polynomial should be used, since Koopman has characterized the performance of many polynomials with better performance than either of the ones you mention. But none of that really matters since ...
All of the legacy hardware has to support the original CRC, and there is little motivation to provide an alternate CRC, whose use would somehow need to be negotiated between the transmitter and receiver. There is no noticeable performance advantage for typical noise sources, which produce rare single-bit errors.

Application performance vs Peak performance

I have questions about real application performance running on a cluster vs cluster peak performance.
Let's say one HPC cluster report that it has peak performance of 1 Petaflops. How is this calculated?
To me, it seems that there are two metrics. One is the performance calculated from the hardware specifications. The other comes from running HPL. Is my understanding correct?
When I read about a real application running on the system at full scale, the developers mention that it could achieve 10% of peak performance. How is this measured, and why can't it achieve peak performance?
Thanks
Peak performance is what the system is theoretically able to deliver. It is the product of the total number of CPU cores, the core clock frequency, and the number of FLOPs one core makes per clock tick. That performance can never be reached in practice because no real application consists of 100% fully vectorised tight loops that only operate on data held in the L1 data cache. In many cases data doesn't even fit in the last-level cache and the memory interface is usually not fast enough to deliver data at the same rate at which the CPU is able to process it. One ubiquitous example from HPC is the multiplication of a sparse matrix with a vector. It is so memory intensive (i.e. many loads and stores per arithmetic operation) that on many platforms it only achieves a fraction of the peak performance.
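To make that concrete with purely hypothetical numbers: a cluster of 1,000 nodes, each with two 16-core CPUs clocked at 2.5 GHz and able to execute 16 double-precision FLOPs per core per cycle, has a peak of
1,000 nodes x 32 cores x 2.5e9 cycles/s x 16 FLOPs/cycle = 1.28e15 FLOP/s, i.e. 1.28 petaflops.
No real application sustains that figure.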
Things get even worse when multiple nodes are networked together on a massive scale, as data transfers can introduce huge additional delays. Performance in those cases is determined mainly by the ratio of local data processing to data transfer. HPL is particularly good in that respect - it does a lot of vectorised local processing and does not move much data across the CPUs/nodes. That's not the case with many real-world parallel programs, which is also the reason why many are questioning the applicability of HPL to assessing cluster performance nowadays. Alternative benchmarks are already emerging, for example the HPCG benchmark (from the people who brought you HPL).
The theoretical (peak) value is based on the capability of each individual core in the cluster, which depends on clock frequency, number of floating point units, parallel instruction issuing capacity, vector register sizes, etc. which are design characteristics of the core. The flops/s count for each core in the cluster is then aggregated to get the cluster flops/s count.
For a car the equivalent theoretical performance would be the maximum speed it can reach given the specification of its engine.
For a program to reach the theoretical count, it has to perform specific operations in a specific order so that the instruction-level parallelism is maximum and all floating-point units are working constantly without delay due to synchronization or memory access, etc. (See this SO question for more insights)
For a car, it is equivalent to measuring top speed on a straight line with no wind.
But of course, chances that such a program computes something of interest are small. So benchmarks like HPL use actual problems in linear algebra, with a highly optimized and tuned implementation, but which is still imperfect due to IO operations and the fact that the order of operations is not optimal.
For a car, it could be compared to measuring the top average speed on a race track with straight lines, curves, etc.
If the program requires a lot of network or disk communication, which are operations that take many clock cycles, then the CPU often has to sit idle waiting for data before it can perform arithmetic operations, effectively wasting a lot of computing power. The actual performance is then estimated by dividing the number of floating-point operations (additions and multiplications) the program performs by the time it takes to perform them.
For a car, this would correspond to measuring the top average speed in town with red lights, etc. by calculating the length of the trip divided by the time needed to accomplish it.
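As a rough illustration of that last estimate (achieved FLOP/s = floating-point operations divided by elapsed time), here is a hedged C++ sketch; the loop and sizes are invented for the example and the naive timing ignores warm-up effects:
// flops.cpp -- crude estimate of achieved FLOP/s for a simple axpy-style loop
// Compile e.g. with: g++ -O3 flops.cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t n = 100000000;
    std::vector<double> x(n, 1.0), y(n, 2.0);

    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)
        y[i] = 2.5 * x[i] + y[i];                // 1 multiply + 1 add per element
    auto t1 = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(t1 - t0).count();
    double flops   = 2.0 * n;                    // total floating-point operations performed
    std::printf("check value: %f\n", y[n - 1]);  // use the result so the loop is not optimised away
    std::printf("%.2f GFLOP/s (this loop is memory-bound, so expect far below peak)\n",
                flops / seconds / 1e9);
    return 0;
}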

Difference between data processed by CPU and GPU in OpenCL

I have some matlab code that uses several large MEX functions and I want to speed things up by using openCL ( I am replacing parts of code of the MEX functions with openCL code using openCL API ). I've translated a small part of the code into an openCL kernel and I am already facing difficulties.
Some elements of the resulting matrix after execution on GPU are different from the corresponding elements of the resulting matrix when the original MEX function is called and the error is less than 0.01. This leads to a small error in the final result but I fear the error will accumulate as I translate more code.
This is probably related to different precision of the calculations on the CPU and GPU. Does anyone know how to ensure the same precision? I am running 64-bit Matlab R2012b on Ubuntu 12.04. The hardware I am using is an Intel Core2 Duo E4700 and an NVIDIA GeForce GT 520.
The small differences between results on your CPU and GPU are easily explained as arising from differences in floating-point precision if you have modified your code from using double precision (64-bit) f-p numbers on the CPU to using single-precision (32-bit) f-p numbers on the GPU.
I would not call this difference an error, rather it is an artefact of doing arithmetic on computers with floating-point numbers. The results you were getting on your CPU-only code were already different from any theoretically 'true' result. Much of the art of numerical computing is in keeping the differences between theoretical and actual computations small enough (whatever the heck that means) for the entire duration of a computation. It would take more time and space than I have now to expand on this, but surprises arising from lack of understanding of what floating-point arithmetic is, and isn't, are a rich source of questions here on SO. Some of the answers to those questions are very illuminating. This one should get you started.
If you have taken care to use the same precision on both CPU and GPU then the differences you report may be explained by the non-associativity of floating-point arithmetic: in floating-point arithmetic it is not guaranteed that (a+b)+c == a+(b+c). The order of operations matters; if you have any SIMD going on I'd bet that the order of operations is not identical on the two implementations. Even if you haven't, what have you done to ensure that operations are ordered the same on both GPU and CPU?
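A tiny illustration of that non-associativity (the values are chosen only to make the effect visible in single precision):
// assoc.cpp -- floating-point addition is not associative
#include <cstdio>

int main() {
    float a = 1.0e8f, b = -1.0e8f, c = 1.0f;
    float left  = (a + b) + c;                   // (0) + 1 = 1
    float right = a + (b + c);                   // b + c rounds back to -1.0e8f, so the result is 0
    std::printf("(a+b)+c = %g, a+(b+c) = %g\n", left, right);
    return 0;
}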
As to what you should do about it, that's rather up to you. You could (though I personally wouldn't recommend it) write your own routines for doing double-precision f-p arithmetic on the GPU. If you choose to do this, expect to wave goodbye to much of the speed-up that the GPU promises.
A better course of action is to ensure that your single-precision software provides sufficient accuracy for your purposes. For example, in the world I work in our original measurements from the environment are generally not accurate to more than about 3 significant figures, so any results that our codes produce have no validity after about 3 s-f. So if I can keep the errors in the 5th and lower s-fs that's good enough.
Unfortunately, from your point of view, getting enough accuracy from single-precision computations isn't necessarily achieved just by globally replacing double with float and recompiling; you may (and generally will) need to implement different algorithms, ones which take more time to guarantee more accuracy and which do not drift as much as computations proceed. Again, you'll lose some of the speed advantage that GPUs promise.
A common problem is that floating-point values are kept in an 80-bit CPU register instead of being truncated and stored each time. In these cases, the additional precision leads to deviations. So you may want to check what options your compiler offers to counter such issues. It can also be interesting to compare release and debug builds.
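For instance (assuming GCC on x86, where the x87 80-bit registers are the usual culprit), flags such as -ffloat-store, which forces intermediates to be stored at their declared precision, or -mfpmath=sse -msse2, which avoids the x87 unit altogether, are the kind of options meant here; other compilers have analogous switches.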

GPU performance request, what's the best solution?

I work on an audio processing project that needs to do a lot of basic computations (+, -, *), such as FFT (Fast Fourier Transform) calculations.
We're considering using a graphics card to accelerate these computations. But we don't know if this is the best solution. Our desired solution needs to be a good computation system costing less than $500.
We use Matlab for programming, and we have a sound-card acquisition device that has to be plugged into the system.
Do you know of a solution other than a graphics card + motherboard to do a lot of computation?
You can use the free Matlab CUDA library to perform the computations on the GPU. $500 will get you a very decent NVIDIA GPU. Beware that GPUs have limited video memory and will run out of memory with large data volumes even faster than Matlab.
I have benchmarked an 8-core Intel CPU against an NVIDIA 8800 GPU (128 stream processors) with GPUmat; for 512 KB datasets the GPU ran at the same speed as the 8-core Intel at 2 GHz, including transfer times to the GPU memory. For serious GPU work I recommend a dedicated card separate from the one you are using to drive the monitor. Use the cheap integrated Intel video on the motherboard to drive the monitor and hand the array computations to the NVIDIA card.
Parallel Computing Toolbox from MathWorks now includes GPU support. In particular, elementwise operations and arithmetic are supported, as well as 1- and 2-dimensional FFTs (along with a whole bunch of other stuff to support hand-written CUDA code if you have that). If you're interested in performing calculations in double-precision, the recent Tesla and Quadro branded cards will give you the best performance.
Here's a trivial example showing how you might use the GPU in MATLAB using Parallel Computing Toolbox:
gA = gpuArray( rand(1000) );   % copy a 1000x1000 random matrix to the GPU
gB = fft( 1 + gA * 3 );        % elementwise arithmetic and an FFT of each column, executed on the GPU