I have a naive question about the maximal size for a counter. For example, the following code couldn't be done in a reasonable time, because it needs at least 2^512 arithmetic operations, or, more essentially, it needs to change the value of i 2^512 times!
c = 2^512;   /* 2 to the power 512 */
for (i = 1; i < c; i++) {
    j = j + 1 / ( i * i + 1 );
}
But when I use the computer algebra software Mathematica, it gives me the answer in less than one second. My question is: how can it achieve this?
P.S. My naive idea about the size of a counter comes from my understanding of complexity. The books I have read (not too formal, because they focus on the complexity of the arithmetic operations only) always omit the cost of maintaining the index. I can imagine this only if the counter is small.
At a guess, as your loop termination condition is fixed at 2^512, Mathematica might be able to treat this as a summed geometric sequence and so calculate it using a formula rather than having to iterate through all the loop values.
Take a look at the Wikipedia entry on Geometric Progression and the Wolfram page on Geometric Series.
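To make that concrete with the simplest example: a geometric series has the closed form

\[ \sum_{i=0}^{n-1} r^i = \frac{1 - r^n}{1 - r} \qquad (r \neq 1), \]

so even for n = 2^512 the result costs a handful of operations (an exponentiation by squaring, a subtraction, a division) rather than 2^512 loop iterations. The series in the question, with terms 1/(i^2 + 1), is not geometric, but Mathematica's Sum can evaluate it symbolically to a closed form in the same spirit.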
If this were in a normal programming language like C++, Java or C#, you'd be absolutely right! Also, 2^512 is a very large number and would overflow the "normal" datatypes in those languages.
Assuming you mean 2 to the power of 512 and not 2 xor 512 (which is 514).
For a Mandelbrot generator I want to use fixed-point arithmetic going from 32 up to maybe 1024 bits as you zoom in.
Now normally SSE or AVX is no help there due to the lack of add-with-carry, and doing normal integer arithmetic is faster. But in my case I have literally millions of pixels that all need to be computed. So I have a huge vector of values that all need to go through the same iterative formula over and over, a million times too.
So I'm not looking at doing a fixed point add/sub/mul on single values but doing it on huge vectors. My hope is that for such vector operations AVX/AVX2 can still be utilized to improve the performance despite the lack of native add with carry.
Does anyone know of a library for fixed-point arithmetic on vectors, or some example code showing how to emulate add-with-carry on AVX/AVX2?
FP extended precision gives more bits per clock cycle (because double FMA throughput is 2/clock vs. 32x32=>64-bit at 1 or 2/clock on Intel CPUs); consider using the same tricks that Prime95 uses with FMA for integer math. With care it's possible to use FPU hardware for bit-exact integer work.
For your actual question: since you want to do the same thing to multiple pixels in parallel, probably you want to do carries between corresponding elements in separate vectors, so one __m256i holds 64-bit chunks of 4 separate bigintegers, not 4 chunks of the same integer.
Register pressure is a problem for very wide integers with this strategy. Perhaps you can usefully branch on there being no carry propagation past the 4th or 6th vector of chunks, or something, by using vpmovmskb on the compare result to generate the carry-out after each add. An unsigned add has a carry-out when a+b < a (unsigned compare).
But AVX2 only has signed integer compares (for greater-than), not unsigned. And with carry-in, (a+b+c_in) == a is possible with b=carry_in=0 or with b=0xFFF... and carry_in=1 so generating carry-out is not simple.
To solve both those problems, consider using chunks with manual wrapping to 60-bit or 62-bit or something, so they're guaranteed to be signed-positive and so carry-out from addition appears in the high bits of the full 64-bit element. (Where you can vpsrlq ymm, 62 to extract it for addition into the vector of next higher chunks.)
Maybe even 63-bit chunks would work here so carry appears in the very top bit, and vmovmskpd can check if any element produced a carry. Otherwise vptest can do that with the right mask.
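As a minimal C sketch of that chunked layout (my own illustration: one __m256i per chunk position, four independent bigints in parallel, 62-bit chunks so the carry lands in the high bits):

#include <immintrin.h>

/* dst = a + b, where a[i] and b[i] hold chunk i (low 62 bits used) of four
   independent big integers, one per 64-bit lane. Chunks are < 2^62, so the
   64-bit lane addition can never wrap and the carry-out appears in bit 62. */
static void bigint_add62(__m256i *dst, const __m256i *a, const __m256i *b, int nchunks)
{
    const __m256i MASK62 = _mm256_set1_epi64x((1LL << 62) - 1);
    __m256i carry = _mm256_setzero_si256();
    for (int i = 0; i < nchunks; i++) {
        __m256i sum = _mm256_add_epi64(_mm256_add_epi64(a[i], b[i]), carry);
        carry  = _mm256_srli_epi64(sum, 62);     /* per-lane carry: 0 or 1 */
        dst[i] = _mm256_and_si256(sum, MASK62);  /* keep the low 62 bits */
    }
}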
This is a hand-wavy kind of brainstorm answer; I don't have any plans to expand it into a detailed answer. If anyone wants to write actual code based on this, please post your own answer so we can upvote that (if it turns out to be a useful idea at all).
Just for kicks, without claiming that this will be actually useful, you can extract the carry bit of an addition by just looking at the upper bits of the input and output values.
unsigned result = a + b + last_carry; // add a, b and (optionally) the last carry
unsigned carry = (a & b)       // carry if both a AND b have the upper bit set
               |               // OR
               ((a ^ b)        // upper bits of a and b are different AND
                & ~result);    // upper bit of the result is not set
carry >>= sizeof(unsigned)*8 - 1; // shift the upper bit down to the lowest bit
With SSE2/AVX2 this could be implemented with two additions, four logic operations and one shift, but it works for arbitrary (supported) integer sizes (uint8, uint16, uint32, uint64). With AVX2 you'd need 7 uops to get four 64-bit additions with carry-in and carry-out.
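For illustration, that bit trick mapped onto AVX2 intrinsics might look like this (a sketch; the function name and the 0-or-1 carry-lane convention are my assumptions), matching the count above with two additions, four logic operations and one shift:

#include <immintrin.h>

/* Per-lane 64-bit add-with-carry: returns a + b + cin per lane and writes the
   per-lane carry-out (0 or 1) to *cout. cin lanes must each be 0 or 1. */
static inline __m256i adc_epi64(__m256i a, __m256i b, __m256i cin, __m256i *cout)
{
    __m256i sum   = _mm256_add_epi64(_mm256_add_epi64(a, b), cin);
    __m256i carry = _mm256_or_si256(
        _mm256_and_si256(a, b),                            /* both top bits set */
        _mm256_andnot_si256(sum, _mm256_xor_si256(a, b))); /* differ AND result clear */
    *cout = _mm256_srli_epi64(carry, 63);                  /* top bit down to bit 0 */
    return sum;
}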
Especially since multiplying 64x64-->128 is not possible either (it would require four 32x32-->64 products and some additions, or three 32x32-->64 products and even more additions, as well as special-case handling), you will likely not be more efficient than with mul and adc (unless maybe register pressure is your bottleneck).
As Peter and Mystical suggested, working with smaller limbs (still stored in 64 bits) can be beneficial. On the one hand, with some trickery, you can use FMA for 52x52-->104 products. On the other hand, you can actually add up to 2^k-1 numbers of 64-k bits before you need to carry the upper bits into the next limb.
I am building a neural network running on an FPGA, and the last piece of the puzzle is running a sigmoid function in hardware. This is either:
1/(1 + e^-x)
or
(atan(x) + 1) / 2
Unfortunately, x here is a float value (a real value in SystemVerilog).
Are there any tips on how to implement either of these functions in SystemVerilog?
This is really confusing to me since both of these functions are complex, and I don't even know where to begin implementing them due to the added complexity of being float values.
One simpler way to do this is to create a memory/array for the function. However, that option can be highly inefficient.
x should be the input address for the memory and the value at that location can be the output of the function.
Suppose the values of your function are as follows (this is just an example):
x = 0 => f(0) = 1
x = 1 => f(1) = 2
x = 2 => f(2) = 3
x = 3 => f(3) = 4
So you can create an array for this which stores the output values:
int a[4] = '{1, 2, 3, 4};
I just did this with Vivado HLS, which allows you to write circuits in C.
Here is my C code.
#include <math.h>

void exp_array(float a[10], float b[10])   /* renamed: exp() would clash with math.h */
{
    int i;
    for (i = 0; i < 10; i++)
    {
        b[i] = expf(a[i]);   /* single-precision exp */
    }
}
But there is one problem: it is impossible to create an unsized array this way. Maybe there is another way that I don't know.
As you seem to realize, type real is not synthesizable. You need to operate on the integer mantissa and integer exponent separately and combine them when you are done, having tracked the sign. Once you take care of e^-x, the rest should be straightforward.
Try this page for a quick explanation: https://www.geeksforgeeks.org/floating-point-representation-digital-logic/
and search on "floating point digital design" for more explanations/examples.
Do you really need a floating-point number for this? Is fixed point sufficient?
Considering (atan(x) + 1) / 2, quite likely the only useful values of x are those where the exponent is fairly small (if the exponent is large, atan(x) saturates at pi/2 and the answer is effectively constant).
The atan of a fixed-point number can be calculated in HW fairly easily; there are CORDIC methods (see https://zipcpu.com/dsp/2017/08/30/cordic.html) and direct methods; see for example https://dspguru.com/dsp/tricks/fixed-point-atan2-with-self-normalization/
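For a flavor of how CORDIC computes atan, here is a small C model (my own sketch; the Q2.29 angle format, the 16-iteration count, and the assumption x > 0 are arbitrary illustrative choices):

#include <math.h>
#include <stdint.h>

#define ITERS 16
#define ANGLE_SCALE (1 << 29)   /* angles in Q2.29 radians */

int32_t cordic_atan(int32_t y, int32_t x)   /* returns ~atan(y/x), assumes x > 0 */
{
    /* atan(2^-i) table; in hardware this would be a small constant ROM. */
    static int32_t tab[ITERS];
    if (tab[0] == 0)
        for (int i = 0; i < ITERS; i++)
            tab[i] = (int32_t)lround(atan(ldexp(1.0, -i)) * ANGLE_SCALE);

    int32_t z = 0;
    for (int i = 0; i < ITERS; i++) {          /* vectoring mode: drive y to 0 */
        int32_t xs = x >> i, ys = y >> i;      /* shifts replace multiplies */
        if (y >= 0) { x += ys; y -= xs; z += tab[i]; }
        else        { x -= ys; y += xs; z -= tab[i]; }
    }
    return z;   /* angle in Q2.29; x ends up scaled by the CORDIC gain */
}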
FPGA design flows in which hardware (FPGA) is targeted generally do not support floating point numbers in the FPGA fabric. Fixed point with limited precision is more commonly used.
A limited precision fixed point approach:
Use Matlab to create an array of samples for your math function such that the largest value is +/- .99999. For 8-bit precision (actually 7 bits plus a sign bit), multiply those numbers by 128, round, and drop the fractional part. Write those numbers to a text file in 2s-complement hex format. In SystemVerilog you can implement a ROM using that text file: use $readmemh() to read these numbers into a memory-style variable (one that has both a packed and an unpacked dimension). Link to a tutorial:
https://projectf.io/posts/initialize-memory-in-verilog/.
Now you have a ROM with limited precision samples of your function
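For illustration, here is a small C program (my own sketch, not part of the recipe above) that writes such a $readmemh()-compatible file; mapping the sigmoid onto a signed range via 2*sigmoid(x) - 1 is my choice, so the samples span roughly +/- .99999 as described:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* 256 samples of 2*sigmoid(x) - 1 over x in [-8, 8), scaled to 8-bit signed. */
    for (int i = 0; i < 256; i++) {
        double x = -8.0 + 16.0 * i / 256.0;
        double y = 2.0 / (1.0 + exp(-x)) - 1.0;   /* in (-1, 1) */
        int q = (int)lround(y * 128.0);           /* multiply by 128, round */
        if (q > 127) q = 127;                     /* clamp to signed 8-bit max */
        printf("%02x\n", q & 0xff);               /* 2s-complement hex, one per line */
    }
    return 0;
}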
Section 21.4 "Loading memory array data from a file" in the SystemVerilog specification provides the definition of $readmemh(). Here is that doc:
https://ieeexplore.ieee.org/document/8299595
If you need floating point, one possibility is to use a processor soft core with a floating-point unit implemented in FPGA fabric, and run software on that core. The core interfaces with the rest of the FPGA fabric over a physical bus such as AXI4-Stream. See:
https://www.xilinx.com/products/design-tools/microblaze.html to get started.
It is a very different workflow from ordinary FPGA design and uses different tools. A C or C++ compiler with math libraries (tan, exp, div, etc.) would be used along with the processor core.
Another possibility for fixed point is an FPGA with a hard-core processor; Xilinx Zynq is one of them. This is a complex and powerful approach. A free book provides knowledge on how to use the Zynq:
http://www.zynqbook.com/.
This workflow is even more complex than the soft-core approach because the Zynq is a more complex platform (hard processor and FPGA integrated on one chip).
It's pretty hard to implement non-linear functions like that in hardware, and on top of that floating-point arithmetic is even more costly. It's definitely better (and recommended) to work with fixed-point arithmetic, as mentioned in the answers before. The number of precision bits in fixed-point arithmetic will depend on your required accuracy and error tolerance.
For hardware implementations, any kind of non-linear function can be approximated as a piecewise-linear function, using the ROM-based implementation approach described in previous answers. The number of sample points that you take from the non-linear function determines your accuracy: the more samples you store, the better the approximation. Often in hardware, the number of samples you can store is restricted by the amount of fast/local memory available to you. In that case, to optimize memory resources, you can add a little extra compute and perform linear interpolation between stored samples to calculate the needed values, as sketched below.
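A C model of that LUT-plus-interpolation idea (my own sketch; the Q8.8 format, table size, and input range are arbitrary illustrative choices):

#include <math.h>
#include <stdint.h>

#define N 32                        /* number of linear segments */
static int32_t lut[N + 1];          /* one extra endpoint for interpolation */

/* Sample the sigmoid over [-8, 8) into a Q8.8 fixed-point table. */
void lut_init(void)
{
    for (int i = 0; i <= N; i++) {
        double x = -8.0 + 16.0 * i / N;
        lut[i] = (int32_t)lround(256.0 / (1.0 + exp(-x)));
    }
}

/* Piecewise-linear sigmoid: x and result in Q8.8, x in [-2048, 2047]. */
int32_t sigmoid_q8(int32_t x_q8)
{
    int32_t t = x_q8 + 8 * 256;     /* shift input range to [0, 4096) */
    int idx   = t >> 7;             /* 4096 / 32 segments = 128 = 2^7 each */
    int32_t f = t & 127;            /* fractional position within the segment */
    return lut[idx] + (((lut[idx + 1] - lut[idx]) * f) >> 7);
}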
I'm implementing a tangent distance based OCR solution for the iPhone, which heavily relies on fast multiplication of floating-point matrices of size 253x7. For the proof of concept, I've implemented my own naive matrix routines like this:
Matrix operator*(const Matrix& matrix) const {
if(cols != matrix.rows) throw "cant multiply!";
Matrix result(rows, matrix.cols);
for(int i = 0; i < result.rows; i++){
for(int j = 0; j < result.cols; j++){
T tmp = 0;
for(int k = 0; k < cols; k++){
tmp += at(i,k) * matrix.at(k,j);
}
result.at(i,j) = tmp;
}
}
return result;
}
As you can see, it's pretty basic.
After the PoC performed well, I decided to push the performance limits further by incorporating the Accelerate framework's matrix multiplication (which presumably uses SIMD and other fancy stuff to do the heavy lifting...):
Matrix operator*(const Matrix& m) const {
if(cols != m.rows) throw "cant multiply!";
Matrix result(rows,m.cols);
cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, rows, m.cols, cols, 1, matrix, cols, m.matrix, m.cols, 1, result.matrix, result.cols);
return result;
}
Shockingly (at least to me), the above code took double the time to multiply the matrices! I tried using single precision instead of double, because I suspected it was related to the CPU's word size (32-bit float vs. 64-bit double on a 32-bit ARM), but there was no performance gain...
What am I doing wrong? Are my 253x7 matrices too small for a noticeable performance boost over the naive implementation?
A couple of questions:
253 x 7 multiplied by what size matrix? If you’re doing, say, 253x7 * 7x1, then a general-purpose multiply routine is going to spend most of its time in edging code, and there’s very little that a tuned library can do that will make it faster than a naive implementation.
What hardware are you timing on, and what iOS version? Especially for double precision, older hardware and older iOS versions are more limited performance-wise. On a Cortex-A8, for example, double-precision arithmetic is completely unpipelined, so there’s almost nothing a library can do to beat a naive implementation.
If the other matrix isn’t ridiculously small, and the hardware is recent, please file a bug (unexpectedly low performance is absolutely a bug). Small matrices with high aspect ratio are quite difficult to make fast in a general-purpose matrix-multiply, but it’s still a good bug to file.
If the hardware/iOS version is old, you may want to use Accelerate anyway, as it should perform significantly better on newer hardware/software.
If the other matrix is just tiny, then there may not be much to be done. There is no double-precision SIMD on ARM, the matrices are too small to benefit from cache blocking, and the dimensions of the matrices would also be too small to benefit much from loop unrolling.
If you know a priori that your matrices will be exactly 253x7 * 7x???, you should be able to do much better than both a naive implementation and any general-purpose library by completely unrolling the inner dimension of the matrix multiplication.
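If the second matrix really is 7xN, a fully-unrolled plain-C sketch might look like this (my illustration, assuming contiguous row-major float storage):

/* C = A * B, where A is rows x 7 and B is 7 x cols, all row-major.
   The inner dimension (7) is known at compile time, so the k-loop
   is unrolled away entirely. */
void mul_253x7(const float *A, const float *B, float *C, int rows, int cols)
{
    for (int i = 0; i < rows; i++) {
        const float *a = A + i * 7;
        for (int j = 0; j < cols; j++) {
            C[i * cols + j] =
                a[0] * B[0 * cols + j] + a[1] * B[1 * cols + j] +
                a[2] * B[2 * cols + j] + a[3] * B[3 * cols + j] +
                a[4] * B[4 * cols + j] + a[5] * B[5 * cols + j] +
                a[6] * B[6 * cols + j];
        }
    }
}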
Basically, yes. The "x7" portion is likely too small to make the overhead of CBLAS worth it. The cost of making a function call, plus all the flexibility that the CBLAS functions give you, takes a while to make back up. Every time you pass an option like CblasNoTrans, remember that there's an if() in there to manage that option. cblas_dgemm in particular accumulates into C, so it has to read the previous result element, apply a multiply, and then add before storing. That's a lot of extra work.
You may want to try the vDSP functions rather than CBLAS. vDSP_mmul is a bit simpler and doesn't accumulate into the result. I've had good luck with vDSP_* on small data sets (a few thousand elements).
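For reference, a vDSP_mmul call for the shapes in this question might look like the sketch below (assuming contiguous single-precision row-major data; the wrapper name is mine):

#include <Accelerate/Accelerate.h>

/* C = A * B with A: 253x7, B: 7xN, C: 253xN, contiguous row-major floats. */
void mmul_253x7(const float *A, const float *B, float *C, int N)
{
    vDSP_mmul(A, 1, B, 1, C, 1, 253, (vDSP_Length)N, 7);
}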
That said, my experience with this is that naïve C implementations can often be quite fast on small data sets. Avoiding a function call is a huge benefit. Speaking of which, make sure that your at() call is inlined. Otherwise you're wasting a lot of time in your loop. You can likely speed up the C implementation by using pointer additions to move serially through your matrices rather than the multiplies required for random access through []. On a matrix this small, it may or may not be worth it; you'd have to profile a bit. Looking at the assembler output is highly instructive.
Keep in mind that you absolutely must profile this stuff on the device. Performance in the simulator is irrelevant. It's not just that the simulator is faster; it's completely different. Things that are wildly faster on the simulator can be much slower on device.
I want to generate a large number of random numbers (uniformly distributed on the interval [0,1]). Currently the generation of these random numbers is causing my program to run quite slowly, however the program only needs them to be calculated to around 5 decimal places.
I'm not entirely sure of how MATLAB generates random numbers, but if there is a way of only calculating them to 5 decimal places then it will (hopefully greatly) speed up my program.
Is there a way of doing such a thing?
Thanks very much.
To answer your question, yes, you can generate single precision random numbers, like this:
r = rand(..., 'single'); %Reference: http://www.mathworks.com/help/matlab/ref/rand.html
Single precision numbers have 7 (ish) significant figures when printed as decimal.
To echo some comments above, I don't think this will buy you much performance. The first thing to do if rand is really your slow operation is to batch the calls. That is, instead of:
for ix = 1:1000
    y = rand(1,1,'single');
end
use:
yVector = rand(1000,1,'single');
As already mentioned, you can instruct RAND to generate numbers directly as single precision, and it's definitely best to generate the numbers in a decent-sized chunk. If you still need more performance and you have Parallel Computing Toolbox and a supported NVIDIA GPU, the gpuArray.rand function can be even faster, especially if you select the Philox generator like so:
parallel.gpu.RandStream('Philox4x32-10')
Assuming you actually have a proper code layout where you generate a lot of numbers into an array, this can be a solution for low precision. Note that I have not tested it, but it is said to be fast:
R = randi([0 100000],500,300)/100000
This will generate 150,000 low-precision random numbers between 0 and 1.
The djb2 algorithm has a hash function for strings.
unsigned long hash(unsigned char *str)
{
    unsigned long hash = 5381;
    int c;
    while ((c = *str++))
        hash = ((hash << 5) + hash) + c; /* hash * 33 + c */
    return hash;
}
Why are 5381 and 33 so important?
This hash function is similar to a Linear Congruential Generator (LCG - a simple class of functions that generate a series of pseudo-random numbers), which generally has the form:
X = (a * X) + c; // "mod M", where M = 2^32 or 2^64 typically
Note the similarity to the djb2 hash function... a=33, M=2^32. In order for an LCG to have a "full period" (i.e. as random as it can be), a must have certain properties:
a-1 is divisible by all prime factors of M (a-1 is 32, which is divisible by 2, the only prime factor of 2^32)
a-1 is a multiple of 4 if M is a multiple of 4 (yes and yes)
In addition, c and M are supposed to be relatively prime (which will be true for odd values of c).
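For concreteness, a textbook LCG satisfying those conditions (the constants are the classic Numerical Recipes choices, not djb2's):

#include <stdint.h>

/* a - 1 = 1664524 is a multiple of 4 (and of 2, the only prime factor
   of M = 2^32), and c = 1013904223 is odd, so the period is the full 2^32. */
static uint32_t lcg_state = 1;

uint32_t lcg_next(void)
{
    lcg_state = 1664525u * lcg_state + 1013904223u;   /* X = (a*X + c) mod 2^32 */
    return lcg_state;
}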
So as you can see, this hash function somewhat resembles a good LCG. And when it comes to hash functions, you want one that produces a "random" distribution of hash values given a realistic set of input strings.
As for why this hash function is good for strings, I think it has a good balance of being extremely fast while providing a reasonable distribution of hash values. But I've seen many other hash functions which claim to have much better output characteristics but involve many more lines of code. For instance, see this page about hash functions.
EDIT: This good answer explains why 33 and 5381 were chosen for practical reasons.
33 was chosen because:
1) As stated before, multiplication is easy to compute using shift and add.
2) As you can see from the shift and add implementation, using 33 makes two copies of most of the input bits in the hash accumulator, and then spreads those bits relatively far apart. This helps produce good avalanching. Using a larger shift would duplicate fewer bits, using a smaller shift would keep bit interactions more local and make it take longer for the interactions to spread.
3) The shift of 5 is relatively prime to 32 (the number of bits in the register), which helps with avalanching. While there are enough characters left in the string, each bit of an input byte will eventually interact with every preceding bit of input.
4) The shift of 5 is a good shift amount when considering ASCII character data. An ASCII character can sort of be thought of as a 4-bit character-type selector and a 4-bit character-of-type selector. E.g. the digits all have 0x3 in the first 4 bits. So an 8-bit shift would cause bits with a certain meaning to mostly interact with other bits that have the same meaning. A 4-bit or 2-bit shift would similarly produce strong interactions between like-minded bits. The 5-bit shift causes many of the four low-order bits of a character to interact strongly with many of the upper 4 bits of the same character.
As stated elsewhere, the choice of 5381 isn't too important and many other choices should work as well here.
This is not a fast hash function, since it processes its input a character at a time and doesn't try to use instruction-level parallelism. It is, however, easy to write. Quality of the output divided by ease of writing the code is likely to hit a sweet spot.
On modern processors, multiplication is much faster than it was when this algorithm was developed and other multiplication factors (e.g. 2^13 + 2^5 + 1) may have similar performance, slightly better output, and be slightly easier to write.
Contrary to an answer above, a good non-cryptographic hash function doesn't want to produce a random output. Instead, given two inputs that are nearly identical, it wants to produce widely different outputs. If your input values are randomly distributed, you don't need a good hash function; you can just use an arbitrary set of bits from your input. Some of the modern hash functions (Jenkins 3, Murmur, probably CityHash) produce a better distribution of outputs than random given inputs that are highly similar.
On 5381, Dan Bernstein (djb2) says in this article:
[...] practically any good multiplier works. I think you're worrying about the fact that 31c + d doesn't cover any reasonable range of hash values if c and d are between 0 and 255. That's why, when I discovered the 33 hash function and started using it in my compressors, I started with a hash value of 5381. I think you'll find that this does just as well as a 261 multiplier.
The whole thread is here if you're interested.
Ozan Yigit has a page on hash functions which says:
[...] the magic of number 33 (why it works better than many other constants, prime or not) has never been adequately explained.
Maybe because 33 == 2^5 + 1 and many hashing algorithms use 2^n + 1 as their multiplier?
Credit to Jerome Berger
Update:
This seems to be borne out by the current version of the software package djb2 originally came from: cdb
The notes I linked to describe the heart of the hashing algorithm as using h = ((h << 5) + h) ^ c to do the hashing... x << 5 is a fast hardware way to use 2^5 as the multiplier.
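As a concrete illustration of that identity (just the arithmetic, nothing cdb-specific):

unsigned long times33(unsigned long h)
{
    return (h << 5) + h;   /* h*2^5 + h == h*33: one shift and one add */
}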