Fastest way to get the first prime numbers up to a limit

What is the fastest way, in terms of CPU time, to get all the prime numbers up to a limit?

It takes 4.8 s on my machine to sieve all primes below 1,000,000,000. It's even faster than reading them from a file!
public static void main(String[] args) {
    long startingTime = System.nanoTime();
    int limit = 1_000_000_000;
    BitSet bitSet = findPrimes(limit);
    System.out.println(2);
    System.out.println(3);
    System.out.println(5);
    System.out.println(7);
    limit = limit / 2 + 1;
    for (int i = 6; i < limit; i++) {
        if (!bitSet.get(i)) {
            int p = i * 2 - 1;
            //System.out.println(p);
        }
    }
    System.out.println("Done in " + (System.nanoTime() - startingTime) / 1_000_000 + " milliseconds");
}
public static BitSet findPrimes(int limit) {
    BitSet bitSet = new BitSet(limit / 2 + 1);
    int size = (int) Math.sqrt(limit);
    size += size % 2;
    for (int prime = 3; prime < size; prime += 2) {
        if (bitSet.get(prime / 2 + 1)) {
            continue;
        }
        // start marking at prime * prime (index prime * prime / 2 + 1);
        // starting at the prime's own index would mark the prime itself as composite
        for (int i = prime * prime / 2 + 1; i < bitSet.size(); i += prime) {
            bitSet.set(i);
        }
    }
    return bitSet;
}
The most efficient way of computing prime values is to use a Sieve of Eratosthenes, devised by the ancient Greeks more than two millennia ago.
The problem, however, is that it requires a lot of memory: my Java app simply crashes when I declare a boolean array of 1,000,000,000 elements.
So I made two memory optimizations to make it work:
Since all the primes after 2 are odd numbers, I halve the array size by mapping index i to the odd number p = i * 2 - 1.
Then, instead of using an array of booleans, I use a BitSet, which works with bit operations on an array of longs.
With those two optimizations, the Sieve of Eratosthenes computes the BitSet in 4.8 seconds. As for printing it out, that's another story ;)

There are a number of ways to quickly calculate primes -- the 'fastest' ultimately depends on many factors (hardware, algorithm, data format, etc.). This has already been answered here: Fastest way to list all primes below N. However, that is for Python, and the question becomes more interesting when we consider C (or other lower-level languages).
You can see:
https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
https://en.wikipedia.org/wiki/Sieve_of_Atkin
Asymptotically, Sieve of Atkin has a better computational complexity (O(N) or lower* -- see notes), but in practice the Sieve of Eratosthenes (O(N log log N)) is perfectly fine, and it has the advantage of being extremely easy to implement:
#include <stdbool.h>
/* Calculate 'isprime[i]' to tell whether 'i' is prime, for all 0 <= i < N */
void primes(int N, bool* isprime) {
    int i, j;
    /* first, set odd numbers to prime, evens to not-prime */
    for (i = 0; i < N; ++i) isprime[i] = i % 2 == 1;
    /* special cases for i<3 */
    isprime[0] = isprime[1] = false;
    /* Compute square root via Newton's algorithm
     * (this can be replaced by 'i * i <= N' in the for loop)
     */
    int sqrtN = (N >> 10) + 1024;
    for (i = 0; i < 32; ++i) sqrtN = (N / sqrtN + sqrtN) / 2;
    /* iterate through all odd numbers */
    isprime[2] = true;
    for (i = 3; i <= sqrtN /* or: i*i <= N */; i += 2) {
        /* check if this odd number is still prime (i.e. if it has not been marked off) */
        if (isprime[i]) {
            /* mark off multiples of the prime */
            j = 2 * i;
            isprime[j] = false;
            for (j += i; j < N; j += 2 * i) {
                isprime[j] = false;
            }
        }
    }
}
That code is pretty clear and concise, but it takes around ~6 s to compute primes < 1_000_000_000 (1 billion) and uses N * sizeof(bool) == N == 1000000000 bytes of memory. Not great.
We can use bit-fiddling and the uint64_t type (#include <stdint.h>) to produce code that runs with the same complexity, but hopefully much faster and using less memory. Right off the bat we can reduce the memory to N/8 bytes (1 bit per boolean instead of 1 byte). Additionally, by only tracking odd numbers we shave off another factor of 2, taking our final usage to N/16 bytes.
Here's the code:
/* Calculate the ((i/2) % 64)'th bit of 'isprime[(i/2) / 64]' to tell whether 'i' is prime,
 * for all 0 <= i < N with 'i % 2 == 1' (only odd numbers are stored)
 *
 * The array 'isprime' can be seen as one long bitstring:
 *
 * isprime:
 *    [0]       [1]
 * +---------+---------+
 * |01234....|01234....| ...
 * +---------+---------+
 *
 * where each block is a 64 bit integer. Therefore, we can map the odd natural numbers to this with the following formula:
 *   i -> index=((i / 2) / 64), bit=((i / 2) % 64)
 *
 * So, '1' is located at 'isprime[0][0]' (where the second index represents the bit),
 * '3' is at 'isprime[0][1]', '5' at 'isprime[0][2]', and finally, '129' is at 'isprime[1][0]'
 *
 * And so forth.
 *
 * Then, we use that bit as a boolean to indicate whether the 'i' that maps to it is prime
 */
void primes_bs(int N, uint64_t* isprime) {
    uint64_t i, j, b/* block */, c/* bit */;
    /* length of the array is roundup(N / 128) */
    int len = (N + 127) / 128;
    /* set all bits (all odd numbers start out marked prime) */
    for (i = 0; i < len; ++i) isprime[i] = ~0ULL;
    /* set i==1 to not prime */
    isprime[0] &= ~1ULL;
    /* Compute square root via Newton's algorithm */
    int sqrtN = (N >> 10) + 1024;
    for (i = 0; i < 32; ++i) sqrtN = (N / sqrtN + sqrtN) / 2;
    /* Iterate through every word/chunk and handle its bits */
    uint64_t chunk;
    for (b = 0; b <= (sqrtN + 127) / 128; ++b) {
        chunk = isprime[b];
        c = 0;
        while (chunk && c < 64) {
            if (chunk & 1ULL) {
                /* hot bit, so is prime */
                i = 128 * b + 2 * c + 1;
                /* iterate 3i, 5i, 7i, etc...
                 * BUT, j is the index, so is basically 3i/2, 5i/2, 7i/2,
                 * that way we don't need as many divisions per iteration,
                 * and we can use 'j += i' (since the index only needs 'i'
                 * added to it, whereas the value needs '2i')
                 */
                for (j = 3 * i / 2; j < N / 2; j += i) {
                    isprime[j / 64] &= ~(1ULL << (j % 64));
                }
            }
            /* re-read the word, since the marking loop may have modified it */
            c++;
            chunk = isprime[b] >> c;
        }
    }
    /* set extra bits to 0 -- unused */
    if (N % 128 != 0) isprime[len - 1] &= (1ULL << ((N / 2) % 64)) - 1;
}
This code calculates primality for all numbers less than 1000000000 (1_000_000_000) in ~2.7 s on my machine, which is a great improvement! And it uses much less memory (1/16th of what the other method uses).
However, if you're using it in other code, the interface is trickier -- you can use a function to help:
/* Calculate if 'x' is prime from a dense bitset */
bool calc_is_prime(uint64_t* isprime, uint64_t x) {
    /**/ if (x == 2) return true; /* must be included, since 'isprime' only encodes odd numbers */
    else if (x % 2 == 0) return false;
    return (isprime[(x / 2) / 64] >> ((x / 2) % 64)) & 1ULL;
}
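As a usage sketch (my addition), here is a minimal driver assuming the primes_bs and calc_is_prime functions above are in scope:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <stdbool.h>
int main(void) {
    int N = 100;
    /* one uint64_t covers 128 numbers (64 bits, odd numbers only) */
    uint64_t *isprime = calloc((N + 127) / 128, sizeof(uint64_t));
    if (isprime == NULL) return 1;
    primes_bs(N, isprime);
    for (uint64_t x = 2; x < (uint64_t)N; ++x) {
        if (calc_is_prime(isprime, x)) printf("%llu ", (unsigned long long)x);
    }
    printf("\n");
    free(isprime);
    return 0;
}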
You could go even further and generalize this method of mapping indices beyond odd-only values (https://en.wikipedia.org/wiki/Wheel_factorization), but the resulting code may well be slower -- since the inner loops and strides are no longer powers of 2, you either need a table lookup or some very clever trickery. The main case where you would want wheel factorization is if you were very memory constrained: wheel factoring with primorial(3) == 30 allows you to store 30 numbers in every 8 bits (as opposed to 2 numbers per bit), and with primorial(5) == 2310 you could store 2310 numbers in every 480 bits.
However, large wheel factorizations will, again, change some of the instructions from bit shifts and power-of-two operations into table lookups, and for large wheels this could actually end up being much slower.
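For illustration only, here is a rough sketch of the mod-30 wheel mapping described above (the bitmap layout and helper names are my own, hypothetical choices):
#include <stdint.h>
#include <stdbool.h>
/* bit position for each residue mod 30 that is coprime to 30; -1 otherwise */
static const int8_t wheel_bit[30] = {
    -1,  0, -1, -1, -1, -1, -1,  1, -1, -1,
    -1,  2, -1,  3, -1, -1, -1,  4, -1,  5,
    -1, -1, -1,  6, -1, -1, -1, -1, -1,  7
};
/* look up x in a mod-30 wheel bitmap: one byte covers 30 numbers.
 * 2, 3 and 5 themselves must be special-cased by the caller. */
bool wheel_is_candidate(const uint8_t *bitmap, uint64_t x) {
    int8_t bit = wheel_bit[x % 30];
    if (bit < 0) return false;  /* divisible by 2, 3 or 5 */
    return (bitmap[x / 30] >> bit) & 1;
}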
NOTES:
Technically, versions of the Sieve of Atkin exist which are O(N / log log N), but they no longer use the same dense array (if they did, the complexity would have to be at least O(N)).


How to generate a synthesizable pseudo-random generator from 1 to 52 in SystemVerilog

Could someone explain how to create a pseudo-random number generator that has a range of 1 to 51 and can have its value used within something like the for loop below? This has to be in SystemVerilog and synthesizable. I have read about CRCs and LFSRs but was not quite sure how to modify them to fit within the values I needed (1-52), or how to fit one within a for loop, as most examples took up an entire module.
Thank you for your time.
initial begin
    for (int i = 0; i < 52; i++) begin
        int j = // insert PSEUDO RANDOM NUM GEN 0 to 51
        if (array[j] == 0) begin
            i = i - 1;
            continue;
        end
        else
            array2[i] = array[j];
        array[j] = 0;
    end
end
(array and array2 both have 52 values of bit size 4)
I don't know what your application is, but what you could do is generate numbers in the modular ring mod 53 (53 is a prime number). Since you did not mention any other requirements, I will take the simplest generator I could imagine, one that can be implemented without multiplications and will generate numbers 1 <= n < 52.
function [5:0] next_item(input logic [5:0] v);
    if (v < 26) begin
        next_item = (v << 1); // 2 * v % 53 = 2 * v
    end
    else if (v > 26) begin
        // (2 * v) % 53 = (2 * v - 52 - 1) % 53 = 2 * (v - 26) - 1
        next_item = ((v - 26) << 1) - 1; // shift by 1, i.e. times 2, per the identity above
    end
    else begin
        // v = 26, you want to exclude 2 * v % 53 = 52, so we skip it
        // by applying the next item again, and it will be
        next_item = 51;
    end
endfunction
And you could use it like this:
parameter SEED = 20; // any number 1 <= N < 52 you like
initial begin
    int j = SEED;
    for (int i = 0; i < 52; i++) begin
        if (array[j-1] == 0) begin
            i = i - 1; // retry this slot with the next generated index
        end
        else begin
            array2[i] = array[j-1];
        end
        j = next_item(j); // must advance even on a retry, or j would repeat forever
    end
end
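As a quick sanity check (outside SystemVerilog), the same recurrence can be modeled in plain C to confirm it visits every value in [1, 51] exactly once per period; this little harness is my own addition, not part of the answer:
#include <stdio.h>
#include <stdbool.h>
/* same recurrence as next_item above: v -> 2*v mod 53, with 52 skipped */
static int next_item(int v) {
    if (v < 26) return 2 * v;
    if (v > 26) return 2 * (v - 26) - 1;
    return 51; /* v == 26 would map to 52, which is excluded */
}
int main(void) {
    bool seen[52] = { false };
    int v = 20; /* any seed in [1, 51] */
    for (int i = 0; i < 51; i++) {
        seen[v] = true;
        v = next_item(v);
    }
    for (int i = 1; i <= 51; i++) {
        if (!seen[i]) { printf("missed %d\n", i); return 1; }
    }
    printf("all of 1..51 visited exactly once\n");
    return 0;
}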

Log2 approximation in fixed-point

I've already implemented a fixed-point log2 function using a lookup table and a low-order polynomial approximation, but I'm not quite happy with its accuracy across the entire 32-bit fixed-point range [-1,+1). The input format is s0.31 and the output format is s15.16.
I'm posting this question here so that another user can post his answer (some comments were exchanged in another thread, but they preferred to provide a comprehensive answer in a separate thread). Any other answers are welcome; I would much appreciate it if you could provide some speed vs. accuracy details of your algorithm and its implementation.
Thanks.
By simply counting the leading zero bits in a fixed-point number x, one can determine log2(x) to the closest strictly smaller integer. On many processor architectures, there is a "count leading zeros" machine instruction or intrinsic. Where this is not available, a fairly efficient implementation of clz() can be constructed in a variety of ways, one of which is included in the code below.
To compute the fractional part of the logarithm, the two main obvious contenders are interpolation in a table and minimax polynomial approximation. In this specific case, quadratic interpolation in a fairly small table seems to be the more attractive option. Write x = 2^i * (1+f), with 0 ≤ f < 1. We determine i as described above and use the leading bits of f to index into the table. A parabola is fit through this and the two following table entries, computing the parameters of the parabola on the fly. The result is rounded, and a heuristic adjustment is applied to partially compensate for the truncating nature of fixed-point arithmetic. Finally, the integer portion is added, yielding the final result.
It should be noted that the computation involves right shifts of signed integers which may be negative. We need those right shifts to map to arithmetic right shifts at machine code level, something which is not guaranteed by the ISO-C standard. However, in practice most compilers do what is desired. In this case I used the Intel compiler on an x64 platform running Windows.
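If you would rather not rely on that implementation-defined behavior, a portable arithmetic-right-shift helper can be used instead; this little function is my own addition (the code below does not use it):
#include <stdint.h>
/* portable arithmetic right shift for possibly negative 32-bit values */
static int32_t asr32(int32_t x, int s) {
    if (x >= 0) return x >> s;  /* well-defined for non-negative values */
    return ~(~x >> s);          /* shifts in one bits from the left */
}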
With a 66-entry table of 32-bit words, the maximum absolute error can be reduced to 8.18251e-6, so full s15.16 accuracy is achieved.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <math.h>
#define FRAC_BITS_OUT (16)
#define INT_BITS_OUT (15)
#define FRAC_BITS_IN (31)
#define INT_BITS_IN ( 0)
/* count leading zeros: intrinsic or machine instruction on many architectures */
int32_t clz (uint32_t x)
{
    uint32_t n, y;
    n = 31 + (!x);
    if ((y = (x & 0xffff0000U))) { n -= 16;  x = y; }
    if ((y = (x & 0xff00ff00U))) { n -=  8;  x = y; }
    if ((y = (x & 0xf0f0f0f0U))) { n -=  4;  x = y; }
    if ((y = (x & 0xccccccccU))) { n -=  2;  x = y; }
    if ((    (x & 0xaaaaaaaaU))) { n -=  1; }
    return n;
}
#define LOG2_TBL_SIZE (6)
#define TBL_SIZE ((1 << LOG2_TBL_SIZE) + 2)
/* for i = [0,65]: log2(1 + i/64) * (1 << 31) */
const uint32_t log2Tab [TBL_SIZE] =
{
0x00000000, 0x02dcf2d1, 0x05aeb4dd, 0x08759c50,
0x0b31fb7d, 0x0de42120, 0x108c588d, 0x132ae9e2,
0x15c01a3a, 0x184c2bd0, 0x1acf5e2e, 0x1d49ee4c,
0x1fbc16b9, 0x22260fb6, 0x24880f56, 0x26e2499d,
0x2934f098, 0x2b803474, 0x2dc4439b, 0x30014ac6,
0x32377512, 0x3466ec15, 0x368fd7ee, 0x38b25f5a,
0x3acea7c0, 0x3ce4d544, 0x3ef50ad2, 0x40ff6a2e,
0x43041403, 0x450327eb, 0x46fcc47a, 0x48f10751,
0x4ae00d1d, 0x4cc9f1ab, 0x4eaecfeb, 0x508ec1fa,
0x5269e12f, 0x5440461c, 0x5612089a, 0x57df3fd0,
0x59a80239, 0x5b6c65aa, 0x5d2c7f59, 0x5ee863e5,
0x60a02757, 0x6253dd2c, 0x64039858, 0x65af6b4b,
0x675767f5, 0x68fb9fce, 0x6a9c23d6, 0x6c39049b,
0x6dd2523d, 0x6f681c73, 0x70fa728c, 0x72896373,
0x7414fdb5, 0x759d4f81, 0x772266ad, 0x78a450b8,
0x7a231ace, 0x7b9ed1c7, 0x7d17822f, 0x7e8d3846,
0x80000000, 0x816fe50b
};
#define RND_SHIFT (31 - FRAC_BITS_OUT)
#define RND_CONST ((1 << RND_SHIFT) / 2)
#define RND_ADJUST (0x10d) /* established heuristically */
/*
   compute log2(x) in s15.16 format, where x is in s0.31 format
   maximum absolute error 8.18251e-6 @ 0x20352845 (0.251622232)
*/
int32_t fixed_log2 (int32_t x)
{
    int32_t f1, f2, dx, a, b, approx, lz, i, idx;
    uint32_t t;
    /* x = 2**i * (1 + f), 0 <= f < 1. Find i */
    lz = clz (x);
    i = INT_BITS_IN - lz;
    /* normalize f */
    t = (uint32_t)x << (lz + 1);
    /* index table of log2 values using LOG2_TBL_SIZE msbs of fraction */
    idx = t >> (32 - LOG2_TBL_SIZE);
    /* difference between argument and smallest sampling point */
    dx = t - (idx << (32 - LOG2_TBL_SIZE));
    /* fit parabola through closest three sampling points; find coeffs a, b */
    f1 = (log2Tab[idx+1] - log2Tab[idx]);
    f2 = (log2Tab[idx+2] - log2Tab[idx]);
    a = f2 - (f1 << 1);
    b = (f1 << 1) - a;
    /* find function value for argument by computing ((a*dx+b)*dx) */
    approx = (int32_t)((((int64_t)a)*dx) >> (32 - LOG2_TBL_SIZE)) + b;
    approx = (int32_t)((((int64_t)approx)*dx) >> (32 - LOG2_TBL_SIZE + 1));
    approx = log2Tab[idx] + approx;
    /* round fractional part of result */
    approx = (((uint32_t)approx) + RND_CONST + RND_ADJUST) >> RND_SHIFT;
    /* combine integer and fractional parts of result */
    return (i << FRAC_BITS_OUT) + approx;
}
/* convert from s15.16 fixed point to double-precision floating point */
double fixed_to_float_s15_16 (int32_t a)
{
    return a / 65536.0;
}
/* convert from s0.31 fixed point to double-precision floating point */
double fixed_to_float_s0_31 (int32_t a)
{
    return a / (65536.0 * 32768.0);
}
int main (void)
{
    double a, res, ref, err, maxerr = 0.0;
    int32_t x, start, end;
    start = 0x00000001;
    end = 0x7fffffff;
    printf ("testing fixed_log2 with inputs in [%17.10e, %17.10e)\n",
            fixed_to_float_s0_31 (start), fixed_to_float_s0_31 (end));
    for (x = start; x < end; x++) {
        a = fixed_to_float_s0_31 (x);
        ref = log2 (a);
        res = fixed_to_float_s15_16 (fixed_log2 (x));
        err = fabs (res - ref);
        if (err > maxerr) {
            maxerr = err;
        }
    }
    printf ("max. err = %g\n", maxerr);
    return EXIT_SUCCESS;
}
For completeness, I am showing the minimax polynomial approximation below. The coefficients for such approximations can be generated by several tools such as Maple, Mathematica, Sollya or with homebrew code using the Remez algorithm, which is what I used here. The code below shows the original floating-point coefficients, the dynamic scaling used to maximize accuracy in intermediate computation, and the heuristic adjustments applied to mitigate the impact of non-rounding fixed-point arithmetic.
A typical approach for the computation of log2(x) is to write x = 2^i * (1+f) and use an approximation of log2(1+f) for (1+f) in [√½, √2], which means that we use a polynomial p(f) on the primary approximation interval [√½-1, √2-1].
The intermediate computation scales up operands as far as feasible for improved accuracy under the restriction that we want to use a 32-bit mulhi operation as its basic building block, as this is a native instruction on many 32-bit architectures, accessible either via inline machine code or as an intrinsic. As in the table-based code, there are right shifts of signed data which may be negative, and such right shifts must map to arithmetic right shifts, something that ISO-C doesn't guarantee but most C compilers do.
I managed to get the maximum absolute error for this variant down to 1.11288e-5, so almost full s15.16 accuracy but slightly worse than for the table-based variant. I suspect I should have added one additional term to the polynomial.
/* on 32-bit architectures, there is often an instruction/intrinsic for this */
int32_t mulhi (int32_t a, int32_t b)
{
    return (int32_t)(((int64_t)a * (int64_t)b) >> 32);
}
#define RND_SHIFT (25 - FRAC_BITS_OUT)
#define RND_CONST ((1 << RND_SHIFT) / 2)
#define RND_ADJUST (-2) /* established heuristically */
/*
   compute log2(x) in s15.16 format, where x is in s0.31 format
   maximum absolute error 1.11288e-5 @ 0x5a82689f (0.707104757)
*/
int32_t fixed_log2 (int32_t x)
{
    int32_t lz, i, f, p, approx;
    uint32_t t;
    /* x = 2**i * (1 + f), 0 <= f < 1. Find i */
    lz = clz (x);
    i = INT_BITS_IN - lz;
    /* force (1+f) into range [sqrt(0.5), sqrt(2)] */
    t = (uint32_t)x << lz;
    if (t > (uint32_t)(1.414213562 * (1U << 31))) {
        i++;
        t = t >> 1;
    }
    /* compute log2(1+f) for f in [-0.2929, 0.4142] */
    f = t - (1U << 31);
    p = + (int32_t)(-0.206191055 * (1U << 31) - 1);
    p = mulhi (p, f) + (int32_t)( 0.318199910 * (1U << 30) - 18);
    p = mulhi (p, f) + (int32_t)(-0.366491705 * (1U << 29) + 22);
    p = mulhi (p, f) + (int32_t)( 0.479811855 * (1U << 28) - 2);
    p = mulhi (p, f) + (int32_t)(-0.721206390 * (1U << 27) + 37);
    p = mulhi (p, f) + (int32_t)( 0.442701618 * (1U << 26) + 35);
    p = mulhi (p, f) + (f >> (31 - 25));
    /* round fractional part of the result */
    approx = (p + RND_CONST + RND_ADJUST) >> RND_SHIFT;
    /* combine integer and fractional parts of result */
    return (i << FRAC_BITS_OUT) + approx;
}

Recursive use of self-implemented cuIDCT.cu leads to changing output every time the code is re-run

I have implemented a CUDA version of the inverse discrete cosine transform (IDCT) by "translating" the MATLAB built-in function idct.m into CUDA:
My implementation, cuIDCT.cu, works when m = n and both m and n are even numbers.
cuIDCT.cu
#include <stdio.h>
#include <stdlib.h>
#include <cuda.h>
#include <cufft.h>
#include <cuComplex.h>
// round up n/m
inline int iDivUp(int n, int m)
{
    return (n + m - 1) / m;
}
typedef cufftComplex complex;
#define PI 3.1415926535897932384626433832795028841971693993751
__global__
void idct_ComputeWeightsKernel(const int n, complex *ww)
{
    const int pos = threadIdx.x + blockIdx.x * blockDim.x;
    if (pos >= n) return;
    ww[pos].x = sqrtf(2*n) * cosf(pos*PI/(2*n));
    ww[pos].y = sqrtf(2*n) * sinf(pos*PI/(2*n));
}
__global__
void idct_ComputeEvenKernel(const float *b, const int n, const int m, complex *ww, complex *y)
{
    const int ix = threadIdx.x + blockIdx.x * blockDim.x;
    const int iy = threadIdx.y + blockIdx.y * blockDim.y;
    if (ix >= n || iy >= m) return;
    const int pos = ix + iy*n;
    // Compute precorrection factor
    ww[0].x = ww[0].x / sqrtf(2);
    ww[0].y = ww[0].y / sqrtf(2);
    y[iy + ix*m].x = ww[iy].x * b[pos];
    y[iy + ix*m].y = ww[iy].y * b[pos];
}
__global__
void Reordering_a0_Kernel(complex *y, const int n, const int m, complex *yy)
{
    const int ix = threadIdx.x + blockIdx.x * blockDim.x;
    const int iy = threadIdx.y + blockIdx.y * blockDim.y;
    if (ix >= n || iy >= m) return;
    const int pos = ix + iy*n;
    yy[iy + ix*n].x = y[pos].x / (float) n;
    yy[iy + ix*n].y = y[pos].y / (float) n;
}
__global__
void Reordering_a_Kernel(complex *yy, const int n, const int m, float *a)
{
    const int ix = threadIdx.x + blockIdx.x * blockDim.x;
    const int iy = threadIdx.y + blockIdx.y * blockDim.y;
    if (ix >= n || iy >= m) return;
    const int pos = ix + iy*n;
    // Re-order elements of each column according to equations (5.93) and (5.94) in Jain
    if (iy < n/2)
    {
        a[ix + 2*iy*n]     = yy[pos].x;
        a[ix + (2*iy+1)*n] = yy[ix + (m-iy-1)*n].x;
    }
}
/**
 * a = idct(b), where a is of size [n m].
 * @param b, input array
 * @param n, first dimension of a
 * @param m, second dimension of a
 * @param a, output array
 */
void cuIDCT(float *h_in, int n, int m, float *h_out) // a is of size [n m]
{
    const int data_size = n * m * sizeof(float);
    // device memory allocation
    float *d_in, *d_out;
    cudaMalloc(&d_in, data_size);
    cudaMalloc(&d_out, data_size);
    // transfer data from host to device
    cudaMemcpy(d_in, h_in, data_size, cudaMemcpyHostToDevice);
    // compute IDCT using CUDA
    // begin============================================
    // Compute weights
    complex *ww;
    cudaMalloc(&ww, n*sizeof(complex));
    dim3 threads(256);
    dim3 blocks(iDivUp(n, threads.x));
    idct_ComputeWeightsKernel<<<blocks, threads>>>(n, ww);
    complex *y;
    complex *yy;
    cufftHandle plan;
    dim3 threads1(32, 6);
    dim3 blocks2(iDivUp(n, threads1.x), iDivUp(m, threads1.y)); // for even case
    int Length[1] = {m}; // for each IFFT, the length is m
    cudaMalloc(&y, n*m*sizeof(complex));
    idct_ComputeEvenKernel<<<blocks2, threads1>>>(d_in, n, m, ww, y);
    cufftPlanMany(&plan, 1, Length,
                  Length, 1, m,
                  Length, 1, m, CUFFT_C2C, n);
    cufftExecC2C(plan, y, y, CUFFT_INVERSE); // y is of size [n m]
    cudaMalloc(&yy, n*m*sizeof(complex));
    Reordering_a0_Kernel<<<blocks2, threads1>>>(y, n, m, yy);
    Reordering_a_Kernel<<<blocks2, threads1>>>(yy, n, m, d_out);
    // end============================================
    // transfer result from device to host
    cudaMemcpy(h_out, d_out, data_size, cudaMemcpyDeviceToHost);
    // cleanup
    cufftDestroy(plan);
    cudaFree(ww);
    cudaFree(y);
    cudaFree(yy);
    cudaFree(d_in);
    cudaFree(d_out);
}
Then I compared the result of my CUDA IDCT (i.e. cuIDCT.cu) against MATLAB idct.m using the following code:
a test main.cpp function, and
a MATLAB main function main.m to read result from CUDA and compare it against MATLAB.
main.cpp
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <helper_functions.h>
#include <stdlib.h>
#include <stdio.h>
// N must equal to M, and both must be even numbers
#define N 256
#define M 256
void WriteDataFile(const char *name, int w, int h, const float *in, const float *out)
{
FILE *stream;
stream = fopen(name, "wb");
float data = 202021.25f;
fwrite(&data, sizeof(float), 1, stream);
fwrite(&w, sizeof(w), 1, stream);
fwrite(&h, sizeof(h), 1, stream);
for (int i = 0; i < h; i++)
for (int j = 0; j < w; j++)
{
const int pos = j + i * h;
fwrite(in + pos, sizeof(float), 1, stream);
fwrite(out + pos, sizeof(float), 1, stream);
}
fclose(stream);
}
void cuIDCT(float *b, int n, int m, float *a);
int main()
{
// host memory allocation
float *h_in = new float [N * M];
float *h_out = new float [N * M];
float *h_temp = new float [N * M];
// input data initialization
for (int i = 0; i < N * M; i++)
{
h_in[i] = (float)rand()/(float)RAND_MAX;
h_out[i] = h_in[i];
h_temp[i] = h_in[i];
}
// please comment either one case for testing
// test case 1: use cuIDCT.cu once
// cuIDCT(h_in, N, M, h_out);
// test case 2: iteratively use cuIDCT.cu
for (int i = 0; i < 4; i++)
{
if (i % 2 == 0)
cuIDCT(h_out, N, M, h_temp);
else
cuIDCT(h_temp, N, M, h_out);
}
// write data, for further visualization using MATLAB
WriteDataFile("test.flo", N, M, h_in, h_out);
// cleanup
delete [] h_in;
delete [] h_out;
delete [] h_temp;
cudaDeviceReset();
}
main.m
clc; clear;
% read
[h_in, h_out] = read_data('test.flo');
% MATLAB result; for test case 1, comment out the for-loop
matlab_out = h_in;
for i = 1:4
    matlab_out = idct(matlab_out);
end
% compare
err = matlab_out - h_out;
% show
figure(1);
subplot(221); imshow(h_in, []);       title('h\_in');       colorbar
subplot(222); imshow(h_out, []);      title('h\_out');      colorbar
subplot(223); imshow(matlab_out, []); title('matlab\_out'); colorbar
subplot(224); imshow(err, []);        title('error map');   colorbar
disp(['maximum error between CUDA and MATLAB is ' ...
    num2str(max(max(abs(err))))])
I ran the code in Visual Studio 11 (i.e. VS2012) on Windows 7 with an Nvidia Tesla K20c GPU, using CUDA Toolkit version 7.5, and my MATLAB version is R2015b.
My test steps:
For test case 1: un-comment test case 1 and comment out test case 2.
Run main.cpp.
Run main.m in MATLAB.
Repeat steps 1 and 2 (without any change, just re-run the code).
I repeated step 3 twenty times. The output result is unchanged, and the results in main.m are:
results of test case 1
The maximum error is 7.7152e-07.
For test case 2: un-comment test case 2 and comment out test case 1.
Run main.cpp.
Run main.m in MATLAB.
Repeat steps 1 and 2 (without any change, just re-run the code).
I repeated step 3 twenty times. The output result changes from run to run, and the results in main.m are (not enough reputation to put all images, only the wrong case is shown below):
one situation (the wrong one) of test case 2
The maximum error is 0.45341 (2 times), 0.44898 (1 time), 0.26186 (1 time), 0.26301 (1 time), and 9.5716e-07 (15 times).
From the test results, my conclusion is:
From test case 1: cuIDCT.cu is numerically correct (error ~10^-7) relative to idct.m.
From test case 2: recursive use of cuIDCT.cu leads to unstable results (i.e. the output changes every time the code is re-run and may sometimes be numerically wrong, error ~0.1).
My question:
From test case 1 we know cuIDCT.cu is numerically correct relative to idct.m. But why does recursive use of cuIDCT.cu lead to a different output each time the code is re-run?
Any help or suggestions are highly appreciated.
I believe the variability in your results is coming about due to this code in your idct_ComputeEvenKernel:
// Compute precorrection factor
ww[0].x = ww[0].x / sqrtf(2);
ww[0].y = ww[0].y / sqrtf(2);
It's not entirely clear what your intent is here, but it's doubtful that this code could be doing what you want. You may be confused about the CUDA execution model.
The above code will be executed by every CUDA thread that you launch for that kernel that passes the thread check:
if (ix >= n || iy >= m) return;
I believe this means 65536 threads will all execute this code in that kernel. Furthermore, the threads will execute that code in more-or-less any order (not all CUDA threads execute in lock-step). They may even step on each other as they are trying to write out their values to the location ww[0]. So the final result in ww[0] will be quite unpredictable.
When I comment out those lines of code, the results become stable for me (albeit different from what they were with those lines in place), unchanging from run to run.
I'd like to point something else out. Wherever you are calculating the .x and .y values of a complex quantity, my suggestion would be to rework the code from this (for example):
y[iy + ix*m].x = ww[iy].x * b[pos];
y[iy + ix*m].y = ww[iy].y * b[pos];
to this:
complex temp1, temp2;
temp1 = ww[iy];
temp2.x = temp1.x * b[pos];
temp2.y = temp1.y * b[pos];
y[iy + ix*m] = temp2;
At least according to my testing, the compiler doesn't seem to be making this optimization for you, and one side-effect benefit is that it's much easier to test your code with cuda-memcheck --tool initcheck .... In the first realization, the compiler will load y[iy + ix*m] as an 8 byte quantity, modify either 4 or 8 bytes of it, then store y[iy + ix*m] as an 8 byte quantity. The second realization should be more efficient (it eliminates the load of y[]), and eliminates the load of an uninitialized quantity (y[]), which the cuda-memcheck tool will report as a hazard.
This variability I'm describing should be possible whether you run either the 1-pass version of your code or the 4-pass version of your code. Therefore I think your statements about the 1-pass version being correct are suspect. I think if you run the 1-pass version enough, you will eventually see variability (although it may require varying initial memory conditions, or running on different GPU types). Even in your own results, we see that 15 out of 20 runs of the 4 pass code produce "correct" results, i.e. the residual error is ~1e-7
Here's my modified cuIDCT.cu file, modified from the version you posted here. The assumption I'm making below is that you wanted to compute the scaling on ww[0] only once, in which case we can easily handle that arithmetic as an addendum to the previous idct_ComputeWeightsKernel:
#include <stdio.h>
#include <stdlib.h>
#include <cuda.h>
#include <cufft.h>
#include <cuComplex.h>
#include <helper_cuda.h>
#include "assert.h"
// round up n/m
inline int iDivUp(int n, int m)
{
    return (n + m - 1) / m;
}
typedef cufftComplex complex;
#define PI 3.1415926535897932384626433832795028841971693993751
#define cufftSafeCall(err) __cufftSafeCall(err, __FILE__, __LINE__)
inline void __cufftSafeCall(cufftResult err, const char *file, const int line)
{
    if (CUFFT_SUCCESS != err) {
        fprintf(stderr, "CUFFT error in file '%s', line %d\nerror %d: %s\nterminating!\n",
                file, line, err, _cudaGetErrorEnum(err));
        cudaDeviceReset(); assert(0);
    }
}
__global__
void idct_ComputeWeightsKernel(const int n, complex *ww)
{
    const int pos = threadIdx.x + blockIdx.x * blockDim.x;
    if (pos >= n) return;
    complex temp;
    temp.x = sqrtf(2*n) * cosf(pos*PI/(2*n));
    temp.y = sqrtf(2*n) * sinf(pos*PI/(2*n));
    if (pos == 0) {
        temp.x /= sqrtf(2);
        temp.y /= sqrtf(2);
    }
    ww[pos] = temp;
}
__global__
void idct_ComputeEvenKernel(const float *b, const int n, const int m, complex *ww, complex *y)
{
    const int ix = threadIdx.x + blockIdx.x * blockDim.x;
    const int iy = threadIdx.y + blockIdx.y * blockDim.y;
    if (ix >= n || iy >= m) return;
    const int pos = ix + iy*n;
    /* handle this in idct_ComputeWeightsKernel
    // Compute precorrection factor
    ww[0].x = ww[0].x / sqrtf(2);
    ww[0].y = ww[0].y / sqrtf(2);
    */
    complex temp1, temp2;
    temp1 = ww[iy];
    temp2.x = temp1.x * b[pos];
    temp2.y = temp1.y * b[pos];
    y[iy + ix*m] = temp2;
}
__global__
void Reordering_a0_Kernel(complex *y, const int n, const int m, complex *yy)
{
    const int ix = threadIdx.x + blockIdx.x * blockDim.x;
    const int iy = threadIdx.y + blockIdx.y * blockDim.y;
    if (ix >= n || iy >= m) return;
    const int pos = ix + iy*n;
    complex temp1, temp2;
    temp1 = y[pos];
    temp2.x = temp1.x / (float) n;
    temp2.y = temp1.y / (float) n;
    yy[iy + ix*n] = temp2;
}
__global__
void Reordering_a_Kernel(complex *yy, const int n, const int m, float *a)
{
    const int ix = threadIdx.x + blockIdx.x * blockDim.x;
    const int iy = threadIdx.y + blockIdx.y * blockDim.y;
    if (ix >= n || iy >= m) return;
    const int pos = ix + iy*n;
    // Re-order elements of each column according to equations (5.93) and (5.94) in Jain
    if (iy < n/2)
    {
        a[ix + 2*iy*n]     = yy[pos].x;
        a[ix + (2*iy+1)*n] = yy[ix + (m-iy-1)*n].x;
    }
}
/**
 * a = idct(b), where a is of size [n m].
 * @param b, input array
 * @param n, first dimension of a
 * @param m, second dimension of a
 * @param a, output array
 */
void cuIDCT(float *h_in, int n, int m, float *h_out) // a is of size [n m]
{
    const int data_size = n * m * sizeof(float);
    // device memory allocation
    float *d_in, *d_out;
    checkCudaErrors(cudaMalloc(&d_in, data_size));
    checkCudaErrors(cudaMalloc(&d_out, data_size));
    // transfer data from host to device
    checkCudaErrors(cudaMemcpy(d_in, h_in, data_size, cudaMemcpyHostToDevice));
    // compute IDCT using CUDA
    // begin============================================
    // Compute weights
    complex *ww;
    checkCudaErrors(cudaMalloc(&ww, n*sizeof(complex)));
    dim3 threads(256);
    dim3 blocks(iDivUp(n, threads.x));
    idct_ComputeWeightsKernel<<<blocks, threads>>>(n, ww);
    complex *y;
    complex *yy;
    cufftHandle plan;
    dim3 threads1(32, 6);
    dim3 blocks2(iDivUp(n, threads1.x), iDivUp(m, threads1.y)); // for even case
    int Length[1] = {m}; // for each IFFT, the length is m
    checkCudaErrors(cudaMalloc(&y, n*m*sizeof(complex)));
    idct_ComputeEvenKernel<<<blocks2, threads1>>>(d_in, n, m, ww, y);
    cufftSafeCall(cufftPlanMany(&plan, 1, Length,
                                Length, 1, m,
                                Length, 1, m, CUFFT_C2C, n));
    cufftSafeCall(cufftExecC2C(plan, y, y, CUFFT_INVERSE)); // y is of size [n m]
    checkCudaErrors(cudaMalloc(&yy, n*m*sizeof(complex)));
    Reordering_a0_Kernel<<<blocks2, threads1>>>(y, n, m, yy);
    cudaMemset(d_out, 0, data_size);
    Reordering_a_Kernel<<<blocks2, threads1>>>(yy, n, m, d_out);
    // end============================================
    // transfer result from device to host
    checkCudaErrors(cudaMemcpy(h_out, d_out, data_size, cudaMemcpyDeviceToHost));
    // cleanup
    cufftDestroy(plan);
    checkCudaErrors(cudaFree(ww));
    checkCudaErrors(cudaFree(y));
    checkCudaErrors(cudaFree(yy));
    checkCudaErrors(cudaFree(d_in));
    checkCudaErrors(cudaFree(d_out));
}
You'll note I threw an extra cudaMemset on d_out in there because it helped me clean up an issue with cuda-memcheck --tool initcheck .... It shouldn't be necessary; you can delete it if you want.

Integer division in Scala [duplicate]

(note: not the same as this other question since the OP never explicitly specified rounding towards 0 or -Infinity)
JLS 15.17.2 says that integer division rounds towards zero. If I want floor()-like behavior for positive divisors (I don't care about the behavior for negative divisors), what's the simplest way to achieve this that is numerically correct for all inputs?
int ifloor(int n, int d)
{
    /* returns q such that n = d*q + r where 0 <= r < d
     * for all integer n, d where d > 0
     *
     * d = 0 should have the same behavior as `n/d`
     *
     * nice-to-have behaviors for d < 0:
     * option (a). same as above:
     *   returns q such that n = d*q + r where 0 <= r < -d
     * option (b). rounds towards +infinity:
     *   returns q such that n = d*q + r where d < r <= 0
     */
}
long lfloor(long n, long d)
{
    /* same behavior as ifloor, except for long integers */
}
(update: I want to have a solution both for int and long arithmetic.)
If you can use third-party libraries, Guava has this: IntMath.divide(int, int, RoundingMode.FLOOR) and LongMath.divide(long, long, RoundingMode.FLOOR). (Disclosure: I contribute to Guava.)
If you don't want to use a third-party library for this, you can still look at the implementation.
(I'm doing everything for longs since the answer for ints is the same, just substitute int for every long and Integer for every Long.)
You could just Math.floor a double division result (though beware that a double cannot represent every long exactly), otherwise...
Original answer:
return n/d - ( ( n % d != 0 ) && ( (n<0) ^ (d<0) ) ? 1 : 0 );
Optimized answer:
public static long lfloordiv(long n, long d) {
    long q = n/d;
    if (q*d == n) return q;
    return q - ((n^d) >>> (Long.SIZE - 1));
}
(For completeness, using a BigDecimal with a ROUND_FLOOR rounding mode is also an option.)
New edit: Now I'm just trying to see how far it can be optimized for fun. Using Mark's answer the best I have so far is:
public static long lfloordiv2(long n, long d) {
    if (d >= 0) {
        n = -n;
        d = -d;
    }
    long tweak = (n >>> (Long.SIZE - 1)) - 1;
    return (n + tweak) / d + tweak;
}
(Uses cheaper operations than the above, but slightly longer bytecode (29 vs. 26)).
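For reference, here is a rough C99 translation of lfloordiv (my sketch, not part of the original answer); C has no >>> operator, so an unsigned cast stands in for Java's logical shift:
#include <stdint.h>
/* floor division for 64-bit integers, d != 0; mirrors lfloordiv above */
int64_t lfloordiv_c(int64_t n, int64_t d) {
    int64_t q = n / d;         /* C99 division truncates toward zero, like Java */
    if (q * d == n) return q;  /* exact: no adjustment needed */
    /* subtract 1 exactly when n and d have opposite signs */
    return q - (int64_t)(((uint64_t)(n ^ d)) >> 63);
}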
There's a rather neat formula for this that works when n < 0 and d > 0: take the bitwise complement of n, do the division, and then take the bitwise complement of the result.
int ifloordiv(int n, int d)
{
    if (n >= 0)
        return n / d;
    else
        return ~(~n / d);
}
For the remainder, a similar construction works (compatible with ifloordiv in the sense that the usual invariant ifloordiv(n, d) * d + ifloormod(n, d) == n is satisfied) giving a result that's always in the range [0, d).
int ifloormod(int n, int d)
{
    if (n >= 0)
        return n % d;
    else
        return d + ~(~n % d);
}
For negative divisors, the formulas aren't quite so neat. Here are expanded versions of ifloordiv and ifloormod that follow your 'nice-to-have' behavior option (b) for negative divisors.
int ifloordiv(int n, int d)
{
    if (d >= 0)
        return n >= 0 ? n / d : ~(~n / d);
    else
        return n <= 0 ? n / d : (n - 1) / d - 1;
}
int ifloormod(int n, int d)
{
    if (d >= 0)
        return n >= 0 ? n % d : d + ~(~n % d);
    else
        return n <= 0 ? n % d : d + 1 + (n - 1) % d;
}
For d < 0, there's an unavoidable problem case when d == -1 and n is Integer.MIN_VALUE, since then the mathematical result overflows the type. In that case, the formula above returns the wrapped result, just as the usual Java division does. As far as I'm aware, this is the only corner case where we silently get 'wrong' results.
return BigDecimal.valueOf(n).divide(BigDecimal.valueOf(d), RoundingMode.FLOOR).longValue();

iPhone FFT with Accelerate framework vDSP

I'm having difficulty implementing an FFT using vDSP. I understand the theory but am looking for a specific code example please.
I have data from a wav file as below:
Question 1. How do I put the audio data into the FFT?
Question 2. How do I get the output data out of the FFT?
Question 3. The ultimate goal is to check for low frequency sounds. How would I do this?
-(OSStatus)open:(CFURLRef)inputURL {
    OSStatus result = -1;
    result = AudioFileOpenURL(inputURL, kAudioFileReadPermission, 0, &mAudioFile);
    if (result == noErr) {
        //get format info
        UInt32 size = sizeof(mASBD);
        result = AudioFileGetProperty(mAudioFile, kAudioFilePropertyDataFormat, &size, &mASBD);
        UInt32 dataSize = sizeof packetCount;
        result = AudioFileGetProperty(mAudioFile, kAudioFilePropertyAudioDataPacketCount, &dataSize, &packetCount);
        NSLog(@"File Opened, packet Count: %d", packetCount);
        UInt32 packetsRead = packetCount;
        UInt32 numBytesRead = -1;
        if (packetCount > 0) {
            //allocate buffer
            audioData = (SInt16*)malloc(2 * packetCount);
            //read the packets
            result = AudioFileReadPackets(mAudioFile, false, &numBytesRead, NULL, 0, &packetsRead, audioData);
            NSLog(@"Read %d bytes, %d packets", numBytesRead, packetsRead);
        }
    }
    return result;
}
FFT code below:
log2n = N;
n = 1 << log2n;
stride = 1;
nOver2 = n / 2;
printf("1D real FFT of length log2 ( %d ) = %d\n\n", n, log2n);
/* Allocate memory for the input operands and check its availability,
 * use the vector version to get 16-byte alignment. */
A.realp = (float *) malloc(nOver2 * sizeof(float));
A.imagp = (float *) malloc(nOver2 * sizeof(float));
originalReal = (float *) malloc(n * sizeof(float));
obtainedReal = (float *) malloc(n * sizeof(float));
if (originalReal == NULL || A.realp == NULL || A.imagp == NULL) {
    printf("\nmalloc failed to allocate memory for the real FFT "
           "section of the sample.\n");
    exit(0);
}
/* Generate an input signal in the real domain. */
for (i = 0; i < n; i++)
    originalReal[i] = (float) (i + 1);
/* Look at the real signal as an interleaved complex vector by
 * casting it. Then call the transformation function vDSP_ctoz to
 * get a split complex vector, which for a real signal, divides into
 * an even-odd configuration. */
vDSP_ctoz((COMPLEX *) originalReal, 2, &A, 1, nOver2);
/* Set up the required memory for the FFT routines and check its
 * availability. */
setupReal = vDSP_create_fftsetup(log2n, FFT_RADIX2);
if (setupReal == NULL) {
    printf("\nFFT_Setup failed to allocate enough memory for "
           "the real FFT.\n");
    exit(0);
}
/* Carry out a Forward and Inverse FFT transform. */
vDSP_fft_zrip(setupReal, &A, stride, log2n, FFT_FORWARD);
vDSP_fft_zrip(setupReal, &A, stride, log2n, FFT_INVERSE);
/* Verify correctness of the results, but first scale by 2n. */
scale = (float) 1.0 / (2 * n);
vDSP_vsmul(A.realp, 1, &scale, A.realp, 1, nOver2);
vDSP_vsmul(A.imagp, 1, &scale, A.imagp, 1, nOver2);
/* The output signal is now in split complex form. Use the function
 * vDSP_ztoc to get an interleaved real vector. */
vDSP_ztoc(&A, 1, (COMPLEX *) obtainedReal, 2, nOver2);
/* Check for accuracy by looking at the inverse transform results. */
Compare(originalReal, obtainedReal, n);
Thanks
You put your audio sample data into the real part of the input, and zero the imaginary part.
If you are just interested in the magnitude of each bin in the frequency domain then you calculate sqrt(re*re + im*im) for each output bin. If you're only interested in relative magnitude then you can drop the sqrt and just calculate the squared magnitude, (re*re + im*im).
You would look at the magnitudes of the bin or bins (see (2)) that correspond to your frequency or frequencies of interest. If your sample rate is Fs, and your FFT size is N, then the corresponding frequency for output bin i is given by f = i * Fs / N. Conversely if you are interested in a specific frequency f then the bin of interest, i, is given by i = N * f / Fs.
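To make that mapping concrete, here is a tiny helper pair (my own sketch, not part of vDSP):
/* map an FFT output bin to its center frequency, and a frequency to its nearest bin */
float bin_to_freq(int i, float Fs, int N) { return (float)i * Fs / (float)N; }
int   freq_to_bin(float f, float Fs, int N) { return (int)(f * (float)N / Fs + 0.5f); }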
Additional note: you will need to apply a suitable window function (e.g. Hann aka Hanning) to your FFT input data, prior to calculating the FFT itself.
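For example, a Hann window can be computed and applied in plain C along these lines (a sketch of the standard definition; the helper name is mine, and vDSP also provides window-generation routines):
#include <math.h>
/* apply a Hann window in place to N samples before the FFT */
void apply_hann(float *x, int N) {
    const float pi = 3.14159265f;
    for (int i = 0; i < N; i++) {
        float w = 0.5f * (1.0f - cosf(2.0f * pi * (float)i / (float)(N - 1)));
        x[i] *= w;
    }
}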
You can check Apple’s documentation and take good care of data packing.
Here is my example:
// main.cpp
// FFTTest
//
// Created by Harry-Chris Stamatopoulos on 11/23/12.
//
/*
This is an example of a hilbert transformer using
Apple's VDSP fft/ifft & other VDSP calls.
Output signal has a PI/2 phase shift.
COMPLEX_SPLIT vector "B" was used to cross-check
real and imaginary parts coherence with the original vector "A"
that is obtained straight from the fft.
Tested and working.
Cheers!
*/
#include <iostream>
#include <Accelerate/Accelerate.h>
#define PI 3.14159265
#define DEBUG_PRINT 1
int main(int argc, const char * argv[])
{
    float fs = 44100;    //sample rate
    float f0 = 440;      //sine frequency
    uint32_t i = 0;
    uint32_t L = 1024;
    /* vector allocations */
    float *input = new float[L];
    float *output = new float[L];
    float *mag = new float[L/2];
    float *phase = new float[L/2];
    for (i = 0; i < L; i++)
    {
        input[i] = cos(2*PI*f0*i/fs);
    }
    uint32_t log2n = log2f((float)L);
    uint32_t n = 1 << log2n;
    //printf("FFT LENGTH = %lu\n", n);
    FFTSetup fftSetup;
    COMPLEX_SPLIT A;
    COMPLEX_SPLIT B;
    A.realp = (float*) malloc(sizeof(float) * L/2);
    A.imagp = (float*) malloc(sizeof(float) * L/2);
    B.realp = (float*) malloc(sizeof(float) * L/2);
    B.imagp = (float*) malloc(sizeof(float) * L/2);
    fftSetup = vDSP_create_fftsetup(log2n, FFT_RADIX2);
    /* Carry out the forward FFT transform. */
    vDSP_ctoz((COMPLEX *) input, 2, &A, 1, L/2);
    vDSP_fft_zrip(fftSetup, &A, 1, log2n, FFT_FORWARD);
    mag[0] = sqrtf(A.realp[0]*A.realp[0]);
    //get phase
    vDSP_zvphas(&A, 1, phase, 1, L/2);
    phase[0] = 0;
    //get magnitude
    for (i = 1; i < L/2; i++) {
        mag[i] = sqrtf(A.realp[i]*A.realp[i] + A.imagp[i] * A.imagp[i]);
    }
    //after done with possible phase and mag processing re-pack the vectors in VDSP format
    B.realp[0] = mag[0];
    B.imagp[0] = mag[L/2 - 1];
    //unwrap, process & re-wrap phase
    for (i = 1; i < L/2; i++) {
        phase[i] -= 2*PI*i * fs/L;
        phase[i] -= PI / 2;
        phase[i] += 2*PI*i * fs/L;
    }
    //construct real & imaginary part of the output packed vector (input to ifft)
    for (i = 1; i < L/2; i++) {
        B.realp[i] = mag[i] * cosf(phase[i]);
        B.imagp[i] = mag[i] * sinf(phase[i]);
    }
#if DEBUG_PRINT
    for (i = 0; i < L/2; i++)
    {
        printf("A REAL = %f \t A IMAG = %f \n", A.realp[i], A.imagp[i]);
        printf("B REAL = %f \t B IMAG = %f \n", B.realp[i], B.imagp[i]);
    }
#endif
    //ifft
    vDSP_fft_zrip(fftSetup, &B, 1, log2n, FFT_INVERSE);
    //scale factor
    float scale = (float) 1.0 / (2*L);
    //scale values
    vDSP_vsmul(B.realp, 1, &scale, B.realp, 1, L/2);
    vDSP_vsmul(B.imagp, 1, &scale, B.imagp, 1, L/2);
    //unpack B to real interleaved output
    vDSP_ztoc(&B, 1, (COMPLEX *) output, 2, L/2);
    // print output signal values to console
    printf("Shifted signal x = \n");
    for (i = 0; i < L/2; i++)
        printf("%f\n", output[i]);
    //release resources: arrays allocated with new[] need delete[], not free()
    delete [] input;
    delete [] output;
    delete [] mag;
    delete [] phase;
    free(A.realp);
    free(A.imagp);
    free(B.imagp);
    free(B.realp);
    vDSP_destroy_fftsetup(fftSetup);
}
One thing you need to be careful about is the DC component of the calculated FFT. I compared my results with the fftw library's FFT, and the imaginary part of the transform calculated with the vDSP library always had a different value at index 0 (which means 0 frequency, i.e. DC).
Another measure I applied was to divide both the real and imaginary parts by a factor of 2. I guess this is due to the algorithm used in the function. Also, both of these quirks occurred in the FFT process but not in the IFFT process.
I used vDSP_fft_zrip.
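Based on that observation, here is a minimal sketch of the compensation (reusing the names from the listing above), halving both parts right after the forward transform:
/* vDSP_fft_zrip's forward output is scaled by 2 relative to the textbook DFT,
   so halve both parts after the forward transform */
float half = 0.5f;
vDSP_vsmul(A.realp, 1, &half, A.realp, 1, L/2);
vDSP_vsmul(A.imagp, 1, &half, A.imagp, 1, L/2);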