Operations on a 2D array in a CUDA kernel for MATLAB

Suppose I have the following serial C:
int add(int **a, int **b, int n)
{
    for (int i = 0; i < n; i++)
    {
        for (int j = 0; j < n; j++)
        {
            a[i][j] += b[i][j];
        }
    }
    return 0;
}
I think the best way to parallelise it is to realise it is a 2D problem and use 2D thread blocks as per CUDA kernel - nested for loop
With that in mind I started writing my CUDA kernel like this:
__global__ void calc(int **A, int **B, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;

    if (i >= n || j >= n)
        return;

    A[i][j] += B[i][j];
}
nvcc tells me that:
./addm.cu(13): Warning: Cannot tell what pointer points to, assuming global memory space
./addm.cu(13): Warning: Cannot tell what pointer points to, assuming global memory space
./addm.cu(13): Warning: Cannot tell what pointer points to, assuming global memory space
1) Am I correct in my philosophy?
2) I think I understand blocks, threads, etc. but I don't understand what
int i= blockIdx.x * blockDim.x + threadIdx.x;
int j= blockIdx.y * blockDim.y + threadIdx.y;
does
3) Is this the most efficient/fastest way of performing operations on a 2D array in general? i.e not just matrix addition it could be any "element by element" operation.
4) Will I be able to call it from MATLAB? Normally it freaks out when the prototype is of the form type** var.
Thanks guys

The compiler warnings you are getting come from the fact that on older GPUs, the memory structure is not "flat". The compiler can't know which memory space the addresses held by the pointer arrays your kernel works with belong to, so it warns you that it is assuming the operation is being performed in global memory. If you compile the code for a Fermi card (sm_20 or sm_21 architecture), you won't see that warning because the memory model on those cards is "flat", and pointers are correctly interpreted by the hardware at runtime, so the compiler doesn't need to handle it at compile time.
To answer each of your questions:
Yes. And no. The overall idea is about 90% right, but there are several implementation issues which will become apparent from the answers which follow.
CUDA C has built-in variables that allow each thread to determine its "coordinates" in the execution grid in which it is running, as well as the dimensions of each block and of the grid itself. threadIdx.{xyz} provides the thread coordinates within a block, and blockIdx.{xyz} the block coordinates within the grid. blockDim.{xyz} and gridDim.{xyz} provide the dimensions of the block and the grid, respectively (note that not all hardware supports 3D grids). CUDA uses column-major order for numbering threads within each block and blocks within each grid. The calculation you are querying computes the equivalent {i,j} coordinate in a 2D grid using the thread and block coordinates and the block size. This is discussed in some detail in the first few pages of the "Programming model" chapter of the CUDA programming guide.
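To make the index arithmetic concrete, here is a small worked example in comment form (the 16x16 block size is an arbitrary assumption chosen purely for illustration):
// Suppose the kernel is launched with blockDim = (16, 16).
// The thread at threadIdx = (3, 5) inside block blockIdx = (2, 1) computes
//   i = blockIdx.x * blockDim.x + threadIdx.x = 2 * 16 + 3 = 35
//   j = blockIdx.y * blockDim.y + threadIdx.y = 1 * 16 + 5 = 21
// so that thread is responsible for element (35, 21) of the 2D problem, and the
// i >= n || j >= n check discards threads whose coordinates fall outside the array.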
No, and I say that for two reasons.
Firstly, using arrays of pointers for memory access is not a good idea in CUDA. Two levels of pointer indirection hugely increase the latency penalty of getting to your data. The key difference between a typical GPU architecture and a modern CPU architecture is the memory system. GPUs have stunningly high peak memory bandwidth but very high access latency, whereas CPUs are designed for minimal latency. So having to read and follow two pointers to fetch a value from memory is a very big performance penalty. Store your 2D array or matrix in linear memory instead. This is what BLAS, LAPACK and MATLAB do anyway.
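As a sketch of what storing the matrix in linear memory looks like on the device side (the names h_A, h_B, d_A and d_B are illustrative assumptions, not identifiers from the question):
// Store the n x n matrix as one contiguous block of n*n ints and address
// element (i, j) as A[i*n + j] (row-major), so each access is a single
// pointer dereference rather than a chase through an array of row pointers.
int *d_A = NULL, *d_B = NULL;
cudaMalloc((void **)&d_A, n * n * sizeof(int));
cudaMalloc((void **)&d_B, n * n * sizeof(int));
cudaMemcpy(d_A, h_A, n * n * sizeof(int), cudaMemcpyHostToDevice);  // h_A, h_B: flat host arrays
cudaMemcpy(d_B, h_B, n * n * sizeof(int), cudaMemcpyHostToDevice);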
Secondly, every thread in your code is performing four integer arithmetic operations of setup overhead (the index calculations) for every one "productive" integer operation (the addition). There are strategies to reduce that, usually involving having each thread process more than one array element.
If I were to write a kernel for that operation, I would do it something like the code at the bottom of my answer. This uses linear memory and a 1D grid. A suitable number of threads to properly occupy the GPU processes the whole input array, with each thread processing many inputs.
No. As I mentioned earlier in my answer, Matlab uses linear memory to store matrices, not an array of pointers. This doesn't match the layout your kernel code is expecting.
Sample code:
__global__ void calc(int *A, int *B, int N)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int s = blockDim.x * gridDim.x;

    for (; i < N; i += s) {
        A[i] += B[i];
    }
}
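For completeness, a host-side launch of this grid-stride kernel might look like the sketch below; the 128-thread blocks and 112 blocks are illustrative guesses at "a suitable number of threads", not values from the answer, and dA/dB are assumed to be flat device arrays holding all n*n elements:
int threads = 128;                         // threads per block (assumed)
int blocks  = 112;                         // enough blocks to keep the GPU busy (assumed)
calc<<<blocks, threads>>>(dA, dB, n * n);  // N is the total element count of the flat array
cudaDeviceSynchronize();                   // wait for the kernel before using the result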

I am assuming you are working with an n-by-n, row-major order array. Try the following:
__global__ void calc(int *A, int *B, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;

    if (i < n && j < n) {
        A[i*n + j] += B[i*n + j];
    }
}
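A matching host-side launch for this 2D kernel could look like the following sketch (the 16x16 block shape is an assumption, and dA/dB stand for int* device buffers of length n*n that are not shown in the answer):
dim3 block(16, 16);
dim3 grid((n + block.x - 1) / block.x,     // round up so the grid covers every column
          (n + block.y - 1) / block.y);    // and every row of the n x n array
calc<<<grid, block>>>(dA, dB, n);
cudaDeviceSynchronize();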


Does unrolling a loop affect the accuracy of the computations within?

Summarized question Does unrolling a loop affect the accuracy of the computations performed within the loop? And if so, why?
Elaboration and background I am writing a compute shader using HLSL for use in a Unity project (2021.2.9f1). Parts of my code include numerical procedures and highly oscillatory functions, meaning that high computational accuracy is essential.
When comparing my results with an equivalent procedure in Python, I noticed that some deviations in the order of 1e-5. This was concerning, as I did not expect such large errors to be the result of precision differences, e.g., the float-precision in trigonometric or power functions in HLSL.
Ultimately, after much debugging, I now believe the choice of unrolling or not unrolling a loop to be the cause of the deviation. However, I do find this strange, as I cannot seem to find any sources indicating that unrolling a loop affects accuracy beyond the usual "space–time tradeoff".
For clarification, if considering my Python results as the correct solution, unrolling the loop in HLSL gives me better results than what not unrolling gives.
Minimal working example Below is an MWE consisting of a C# script for Unity, the corresponding compute shader where the computations are performed, and a screenshot of my console when running in Unity (2021.2.9f1). Forgive me for a somewhat messy implementation of Newton's method, but I chose to keep it since I believe it might be a cause of this deviation. That is, if simply computing cos(x), there is no difference between the unrolled and non-unrolled versions. Nonetheless, I still fail to understand how the simple addition of [unroll(N)] in the testing kernel changes the result...
// C# for Unity
using UnityEngine;

public class UnrollTest : MonoBehaviour
{
    [SerializeField] ComputeShader CS;
    ComputeBuffer CBUnrolled, CBNotUnrolled;
    readonly int N = 3;

    private void Start()
    {
        CBUnrolled = new ComputeBuffer(N, sizeof(double));
        CBNotUnrolled = new ComputeBuffer(N, sizeof(double));

        CS.SetBuffer(0, "_CBUnrolled", CBUnrolled);
        CS.SetBuffer(0, "_CBNotUnrolled", CBNotUnrolled);
        CS.Dispatch(0, (int)((N + (64 - 1)) / 64), 1, 1);

        double[] ansUnrolled = new double[N];
        double[] ansNotUnrolled = new double[N];
        CBUnrolled.GetData(ansUnrolled);
        CBNotUnrolled.GetData(ansNotUnrolled);

        for (int i = 0; i < N; i++)
        {
            Debug.Log("Unrolled ans = " + ansUnrolled[i] +
                      " - Not Unrolled ans = " + ansNotUnrolled[i] +
                      " -- Difference is: " + (ansUnrolled[i] - ansNotUnrolled[i]));
        }

        CBUnrolled.Release();
        CBNotUnrolled.Release();
    }
}
#pragma kernel CSMain

RWStructuredBuffer<double> _CBUnrolled, _CBNotUnrolled;

// Dummy function for Newton's method
double fDummy(double k, double fnh, double h, double theta)
{
    return fnh * fnh * k * h * cos(theta) * cos(theta) - (double) tanh(k * h);
}

// Derivative of the dummy function above using a central finite difference scheme.
double dfDummy(double k, double fnh, double h, double theta)
{
    return (fDummy(k + (double) 1e-3, fnh, h, theta) - fDummy(k - (double) 1e-3, fnh, h, theta)) / (double) 2e-3;
}

// Function to solve.
double f(double fnh, double h, double theta)
{
    // Solved using Newton's method.
    int max_iter = 50;
    double epsilon = 1e-8;
    double fxn, dfxn;

    // Define initial guess for k, hereby denoted as x.
    double xn = 10.0;
    for (int n = 0; n < max_iter; n++)
    {
        fxn = fDummy(xn, fnh, h, theta);
        if (abs(fxn) < epsilon)   // A solution is found.
            return xn;

        dfxn = dfDummy(xn, fnh, h, theta);
        if (dfxn == 0.0)          // No solution found.
            return xn;

        xn = xn - fxn / dfxn;
    }

    // No solution found.
    return xn;
}

[numthreads(64,1,1)]
void CSMain(uint3 threadID : SV_DispatchThreadID)
{
    int N = 3;

    // ---------------
    double fnh = 0.9, h = 4.53052, theta = -0.161, dtheta = 0.01; // Example values.
    for (int i = 0; i < N; i++)   // Not being unrolled
    {
        _CBNotUnrolled[i] = f(fnh, h, theta);
        theta += dtheta;
    }

    // ---------------
    fnh = 0.9, h = 4.53052, theta = -0.161, dtheta = 0.01; // Example values.
    [unroll(N)] for (int j = 0; j < N; j++)   // Being unrolled.
    {
        _CBUnrolled[j] = f(fnh, h, theta);
        theta += dtheta;
    }
}
Image of Unity console when running the above
Edit After some more testing, the deviation has been narrowed down to the following code, giving a difference of about 1e-17 between the exact same code unrolled vs not unrolled. Despite the small difference, I still consider it a valid example of the issue, as I believe they should be equal.
[numthreads(64, 1, 1)]
void CSMain(uint3 threadID : SV_DispatchThreadID)
{
    if ((int) threadID.x != 1)
        return;

    int N = 3;
    double k = 1.0;

    // ---------------
    double fnh = 0.9, h = 4.53052, theta = -0.161, dtheta = 0.01; // Example values.
    for (int i = 0; i < N; i++)   // Not being unrolled
    {
        _CBNotUnrolled[i] = (k + (double) 1e-3) * theta - (k - (double) 1e-3) * theta;
        theta += dtheta;
    }

    // ---------------
    fnh = 0.9, h = 4.53052, theta = -0.161, dtheta = 0.01; // Example values.
    [unroll(N)]
    for (int j = 0; j < N; j++)   // Being unrolled.
    {
        _CBUnrolled[j] = (k + (double) 1e-3) * theta - (k - (double) 1e-3) * theta;
        theta += dtheta;
    }
}
Image of Unity console when running the edited script above
Edit 2 The following is the compiled code for the kernel given in Edit 1. Unfortunately, my experience with assembly language is limited, and I am not capable of spotting if this script shows any errors, or if it is useful to the problem at hand.
**** Platform Direct3D 11:
Compiled code for kernel CSMain
keywords: <none>
binary blob size 648:
//
// Generated by Microsoft (R) D3D Shader Disassembler
//
//
// Note: shader requires additional functionality:
// Double-precision floating point
//
//
// Input signature:
//
// Name Index Mask Register SysValue Format Used
// -------------------- ----- ------ -------- -------- ------- ------
// no Input
//
// Output signature:
//
// Name Index Mask Register SysValue Format Used
// -------------------- ----- ------ -------- -------- ------- ------
// no Output
cs_5_0
dcl_globalFlags refactoringAllowed | enableDoublePrecisionFloatOps
dcl_uav_structured u0, 8
dcl_uav_structured u1, 8
dcl_input vThreadID.x
dcl_temps 2
dcl_thread_group 64, 1, 1
0: ine r0.x, vThreadID.x, l(1)
1: if_nz r0.x
2: ret
3: endif
4: dmov r0.xy, d(-0.161000l, 0.000000l)
5: mov r0.z, l(0)
6: loop
7: ige r0.w, r0.z, l(3)
8: breakc_nz r0.w
9: dmul r1.xyzw, r0.xyxy, d(1.001000l, 0.999000l)
10: dadd r1.xy, -r1.zwzw, r1.xyxy
11: store_structured u1.xy, r0.z, l(0), r1.xyxx
12: dadd r0.xy, r0.xyxy, d(0.010000l, 0.000000l)
13: iadd r0.z, r0.z, l(1)
14: endloop
15: store_structured u0.xy, l(0), l(0), l(-0.000000,-0.707432,0,0)
16: store_structured u0.xy, l(1), l(0), l(0.000000,-0.702312,0,0)
17: store_structured u0.xy, l(2), l(0), l(-918250586112.000000,-0.697192,0,0)
18: ret
// Approximately 0 instruction slots used
Edit 3 After reaching out to Microsoft (see https://learn.microsoft.com/en-us/an...nrolling-a-loop-affect-the-accuracy-of-t.html), they stated that the problem is more about Unity. This is because
"The pragma unroll [(n)] is keil compiler which Unity uses topic"
This is driver-, hardware-, compiler-, and Unity-dependent.
In essence, the HLSL specification has somewhat looser guarantees for rounding behavior of mathematical operations than regular IEEE-754 floating point.
First, it is implementation-dependent whether operations round up or down.
IEEE-754 requires floating-point operations to produce a result that
is the nearest representable value to an infinitely-precise result,
known as round-to-nearest-even. Direct3D 10, however, defines a looser
requirement: 32-bit floating-point operations produce a result that is
within one unit-last-place (1 ULP) of the infinitely-precise result.
This means that, for example, hardware is allowed to truncate results
to 32-bit rather than perform round-to-nearest-even, as that would
result in error of at most one ULP.
See https://learn.microsoft.com/en-us/windows/win32/direct3d10/d3d10-graphics-programming-guide-resources-float-rules#32-bit-floating-point-rules
Going one step further, the HLSL compiler itself has many fast-math optimizations that can violate IEEE-754 float conformance; see, for example:
D3DCOMPILE_IEEE_STRICTNESS - Forces strict compile, which might not allow for legacy syntax. By default, the compiler disables strictness on deprecated syntax.
D3DCOMPILE_OPTIMIZATION_LEVEL3 - Directs the compiler to use the highest optimization level. If you set this constant, the compiler produces the best possible code but might take significantly longer to do so. Set this constant for final builds of an application when performance is the most important factor.
D3DCOMPILE_PARTIAL_PRECISION - Directs the compiler to perform all computations with partial precision. If you set this constant, the compiled code might run faster on some hardware.
Source: https://learn.microsoft.com/en-us/windows/win32/direct3dhlsl/d3dcompile-constants
This particularly matters for your scenario, because if optimizations are enabled, the existence of loop unrolling can trigger constant folding optimizations that reduce the computational cost of your code and change the precision of its results (potentially even improving them). Note that when constant folding occurs, the compiler has to decide how to perform rounding, and that might disagree with what your hardware FPUs would do.
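To see that effect in isolation, here is a standalone C-style sketch (my own illustration, not HLSL and not what Unity's toolchain actually emits) built around the expression from the question: the algebraically folded form performs fewer roundings than the form written out, so the two can disagree at roughly the 1e-17 scale in double precision.
#include <stdio.h>

int main(void)
{
    double k = 1.0, theta = -0.161;

    /* Evaluated as written: two additions, two multiplies and a subtraction,
       each of which is rounded separately. */
    double as_written = (k + 1e-3) * theta - (k - 1e-3) * theta;

    /* Algebraically folded form (exactly 2e-3 * theta): a single rounding.
       A constant-folding optimizer may effectively evaluate something like this. */
    double folded = 2e-3 * theta;

    printf("as written: %.20g\n", as_written);
    printf("folded    : %.20g\n", folded);
    /* The difference is typically of order 1e-17 or smaller (it can be zero for
       some inputs), matching the size of the deviation reported in the edits above. */
    printf("difference: %.3g\n", as_written - folded);
    return 0;
}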
Oh, and note that IEEE-754 does not place constraints on the precision, let alone require implementation, of "additional operations" (e.g. sin, cos, tanh, atan, ln, etc); it purely recommends them.
See a very common case where this goes wrong: sin gets quantized to 4 different values on Intel integrated graphics, but otherwise has reasonable precision on other hardware: sin(x) only returns 4 different values for moderately large input on GLSL fragment shader, Intel HD4000
Also, note that Unity does not guarantee that a float in shader is actually a 32-bit float; on certain hardware (e.g. mobile), it can even be backed by a 16-bit half or an 11-bit fixed.
High precision: float
Highest precision floating point value; generally 32 bits (just like float from regular programming languages).
...
One complication of float/half/fixed data type usage is that PC GPUs are always high precision. That is, for all the PC (Windows/Mac/Linux) GPUs, it does not matter whether you write float, half or fixed data types in your shaders. They always compute everything in full 32-bit floating point precision.
The half and fixed types only become relevant when targeting mobile
GPUs, where these types primarily exist for power (and sometimes
performance) constraints. Keep in mind that you need to test your
shaders on mobile to see whether or not you are running into
precision/numerical issues.
Even on mobile GPUs, the different precision support varies between
GPU families.
Source: https://docs.unity3d.com/Manual/SL-DataTypesAndPrecision.html
I don't believe Unity exposes compiler flags to developers; you are at its whim as to what optimizations it passes to dxc/fxc. Given it's primarily used for games, you can bet they enable optimizations.
Source: https://forum.unity.com/threads/possible-to-set-directx-compiler-flags-in-shaders.453790/
Finally, check out "Floating-Point Determinism" by Bruce Dawson if you want an in-depth dive into this topic; I will add that this problem also exists if you want consistent results between languages (since languages can implement math functions themselves rather than using hardware intrinsics, e.g. for better precision), when cross-compiling (since different compilers / backends can optimize differently or use different system libraries), or when running managed code across different runtimes (e.g. since the JIT can do different optimizations).

Running a program in matlab [duplicate]

I have gone through Google and Stack Overflow search, but nowhere was I able to find a clear and straightforward explanation for how to calculate time complexity.
What do I know already?
Say for code as simple as the one below:
char h = 'y'; // This will be executed 1 time
int abc = 0; // This will be executed 1 time
Say for a loop like the one below:
for (int i = 0; i < N; i++) {
    Console.Write("Hello, World!!");
}
int i=0; This will be executed only once.
The time is actually counted for the initialisation i=0, not the declaration.
i < N; This will be executed N+1 times
i++ This will be executed N times
So the number of operations required by this loop is {1 + (N+1) + N} = 2N + 2. (But this still may be wrong, as I am not confident about my understanding.)
OK, so I think I know these small, basic calculations, but in most cases I have seen the time complexity given as O(N), O(n^2), O(log n), O(n!), and many others.
How to find time complexity of an algorithm
You add up how many machine instructions it will execute as a function of the size of its input, and then simplify the expression to the largest term (when N is very large), dropping any constant factor.
For example, lets see how we simplify 2N + 2 machine instructions to describe this as just O(N).
Why do we remove the two 2s ?
We are interested in the performance of the algorithm as N becomes large.
Consider the two terms 2N and 2.
What is the relative influence of these two terms as N becomes large? Suppose N is a million.
Then the first term is 2 million and the second term is only 2.
For this reason, we drop all but the largest terms for large N.
So, now we have gone from 2N + 2 to 2N.
Traditionally, we are only interested in performance up to constant factors.
This means that we don't really care if there is some constant multiple of difference in performance when N is large. The unit of 2N is not well-defined in the first place anyway. So we can multiply or divide by a constant factor to get to the simplest expression.
So 2N becomes just N.
This is an excellent article: Time complexity of algorithm
The below answer is copied from above (in case the excellent link goes bust)
The most common metric for calculating time complexity is Big O notation. This removes all constant factors so that the running time can be estimated in relation to N as N approaches infinity. In general you can think of it like this:
statement;
Is constant. The running time of the statement will not change in relation to N.
for ( i = 0; i < N; i++ )
    statement;
Is linear. The running time of the loop is directly proportional to N. When N doubles, so does the running time.
for ( i = 0; i < N; i++ ) {
    for ( j = 0; j < N; j++ )
        statement;
}
Is quadratic. The running time of the two loops is proportional to the square of N. When N doubles, the running time increases by a factor of four.
while ( low <= high ) {
    mid = ( low + high ) / 2;
    if ( target < list[mid] )
        high = mid - 1;
    else if ( target > list[mid] )
        low = mid + 1;
    else
        break;
}
Is logarithmic. The running time of the algorithm is proportional to the number of times N can be divided by 2. This is because the algorithm divides the working area in half with each iteration.
void quicksort (int list[], int left, int right)
{
    int pivot = partition (list, left, right);
    quicksort(list, left, pivot - 1);
    quicksort(list, pivot + 1, right);
}
Is N * log (N). The running time consists of N loops (iterative or recursive) that are logarithmic, thus the algorithm is a combination of linear and logarithmic.
In general, doing something with every item in one dimension is linear, doing something with every item in two dimensions is quadratic, and dividing the working area in half is logarithmic. There are other Big O measures such as cubic, exponential, and square root, but they're not nearly as common. Big O notation is described as O ( <type> ) where <type> is the measure. The quicksort algorithm would be described as O (N * log(N )).
Note that none of this has taken into account best, average, and worst case measures. Each would have its own Big O notation. Also note that this is a VERY simplistic explanation. Big O is the most common, but it's also more complex than I've shown. There are also other notations such as big omega, little o, and big theta. You probably won't encounter them outside of an algorithm analysis course. ;)
Taken from here - Introduction to Time Complexity of an Algorithm
1. Introduction
In computer science, the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input.
2. Big O notation
The time complexity of an algorithm is commonly expressed using big O notation, which excludes coefficients and lower order terms. When expressed this way, the time complexity is said to be described asymptotically, i.e., as the input size goes to infinity.
For example, if the time required by an algorithm on all inputs of size n is at most 5n^3 + 3n, the asymptotic time complexity is O(n^3). More on that later.
A few more examples:
1 = O(n)
n = O(n^2)
log(n) = O(n)
2n + 1 = O(n)
3. O(1) constant time:
An algorithm is said to run in constant time if it requires the same amount of time regardless of the input size.
Examples:
array: accessing any element
fixed-size stack: push and pop methods
fixed-size queue: enqueue and dequeue methods
4. O(n) linear time
An algorithm is said to run in linear time if its time execution is directly proportional to the input size, i.e. time grows linearly as input size increases.
Consider the following examples. Below I am linearly searching for an element, and this has a time complexity of O(n).
int find = 66;
var numbers = new int[] { 33, 435, 36, 37, 43, 45, 66, 656, 2232 };
for (int i = 0; i < numbers.Length; i++)
{
    if (find == numbers[i])
    {
        return;
    }
}
More Examples:
Array: Linear Search, Traversing, Find minimum etc
ArrayList: contains method
Queue: contains method
5. O(log n) logarithmic time:
An algorithm is said to run in logarithmic time if its time execution is proportional to the logarithm of the input size.
Example: Binary Search
Recall the "twenty questions" game - the task is to guess the value of a hidden number in an interval. Each time you make a guess, you are told whether your guess is too high or too low. Twenty questions game implies a strategy that uses your guess number to halve the interval size. This is an example of the general problem-solving method known as binary search.
6. O(n^2) quadratic time
An algorithm is said to run in quadratic time if its time execution is proportional to the square of the input size.
Examples:
Bubble Sort
Selection Sort
Insertion Sort
7. Some useful links
Big-O Misconceptions
Determining The Complexity Of Algorithm
Big O Cheat Sheet
Several examples of loop.
O(n): Time complexity of a loop is considered as O(n) if the loop variable is incremented / decremented by a constant amount. For example, the following loops have O(n) time complexity.
// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}

for (int i = n; i > 0; i -= c) {
    // some O(1) expressions
}
O(n^c): Time complexity of nested loops is equal to the number of times the innermost statement is executed. For example, the following sample loops have O(n^2) time complexity.
for (int i = 1; i <= n; i += c) {
    for (int j = 1; j <= n; j += c) {
        // some O(1) expressions
    }
}

for (int i = n; i > 0; i -= c) {
    for (int j = i+1; j <= n; j += c) {
        // some O(1) expressions
    }
}
For example, selection sort and insertion sort have O(n^2) time complexity.
O(log n): Time complexity of a loop is considered as O(log n) if the loop variable is divided / multiplied by a constant amount.
for (int i = 1; i <= n; i *= c) {
    // some O(1) expressions
}

for (int i = n; i > 0; i /= c) {
    // some O(1) expressions
}
For example, binary search has O(log n) time complexity.
O(log log n): Time complexity of a loop is considered as O(log log n) if the loop variable is reduced / increased exponentially by a constant amount.
// Here c is a constant greater than 1
for (int i = 2; i <= n; i = pow(i, c)) {
    // some O(1) expressions
}

// Here fun is sqrt or cuberoot or any other constant root
for (int i = n; i > 0; i = fun(i)) {
    // some O(1) expressions
}
One example of time complexity analysis
int fun(int n)
{
    for (int i = 1; i <= n; i++)
    {
        for (int j = 1; j < n; j += i)
        {
            // Some O(1) task
        }
    }
}
Analysis:
For i = 1, the inner loop is executed n times.
For i = 2, the inner loop is executed approximately n/2 times.
For i = 3, the inner loop is executed approximately n/3 times.
For i = 4, the inner loop is executed approximately n/4 times.
…………………………………………………….
For i = n, the inner loop is executed approximately n/n times.
So the total time complexity of the above algorithm is (n + n/2 + n/3 + … + n/n), which becomes n * (1/1 + 1/2 + 1/3 + … + 1/n)
The important thing about the series (1/1 + 1/2 + 1/3 + … + 1/n) is that it is O(log n) (it is the harmonic series). So the time complexity of the above code is O(n·log n).
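As a quick empirical check (a standalone sketch of my own, not part of the quoted analysis), one can count the inner-loop iterations and compare them with n times the harmonic sum:
#include <stdio.h>

int main(void)
{
    int n = 100000;
    long long count = 0;
    double harmonic = 0.0;

    for (int i = 1; i <= n; i++)
    {
        harmonic += 1.0 / i;          // accumulate 1/1 + 1/2 + ... + 1/n
        for (int j = 1; j < n; j += i)
            count++;                  // same inner loop as fun() above
    }

    // count grows like n * harmonic, and the harmonic sum grows like log(n),
    // which is why the total work is O(n log n).
    printf("iterations = %lld, n * harmonic sum = %.0f\n", count, n * harmonic);
    return 0;
}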
Time complexity with examples
1 - Basic operations (arithmetic, comparisons, accessing array’s elements, assignment): The running time is always constant O(1)
Example:
read(x) // O(1)
a = 10; // O(1)
a = 1,000,000,000,000,000,000 // O(1)
2 - If-then-else statement: Take the maximum running time of the two or more possible branches.
Example:
age = read(x)                              // (1+1) = 2
if age < 17 then begin                     // 1
    status = "Not allowed!";               // 1
end else begin
    status = "Welcome! Please come in";    // 1
    visitors = visitors + 1;               // 1+1 = 2
end;
So, the complexity of the above pseudo code is T(n) = 2 + 1 + max(1, 1+2) = 6. Thus, its big oh is still constant T(n) = O(1).
3 - Looping (for, while, repeat): Running time for this statement is the number of loops multiplied by the number of operations inside that looping.
Example:
total = 0;                      // 1
for i = 1 to n do begin         // (1+1)*n = 2n
    total = total + i;          // (1+1)*n = 2n
end;
writeln(total);                 // 1
So, its complexity is T(n) = 1+4n+1 = 4n + 2. Thus, T(n) = O(n).
4 - Nested loops (looping inside looping): Since there is at least one loop inside the main loop, the running time of this statement is O(n^2) or O(n^3).
Example:
for i = 1 to n do begin          // (1+1)*n = 2n
    for j = 1 to n do begin      // (1+1)*n*n = 2n^2
        x = x + 1;               // (1+1)*n*n = 2n^2
        print(x);                // (n*n) = n^2
    end;
end;
Common running time
There are some common running times when analyzing an algorithm:
O(1) – Constant time
Constant time means the running time is constant, it’s not affected by the input size.
O(n) – Linear time
When an algorithm accepts input of size n, it performs on the order of n operations as well.
O(log n) – Logarithmic time
An algorithm with running time O(log n) is slightly faster than O(n). Commonly, the algorithm divides the problem into subproblems of the same size. Example: binary search algorithm, binary conversion algorithm.
O(n log n) – Linearithmic time
This running time is often found in "divide & conquer algorithms" which divide the problem into sub problems recursively and then merge them in n time. Example: Merge Sort algorithm.
O(n^2) – Quadratic time
Look at the Bubble Sort algorithm!
O(n^3) – Cubic time
It follows the same principle as O(n^2).
O(2^n) – Exponential time
It is very slow as the input gets larger; if n = 1,000,000, T(n) would be 2^1,000,000. The Brute Force algorithm has this running time.
O(n!) – Factorial time
The slowest!!! Example: Travelling salesman problem (TSP)
It is taken from this article. It is very well explained and you should give it a read.
When you're analyzing code, you have to analyse it line by line, counting every operation/recognizing time complexity. In the end, you have to sum it to get whole picture.
For example, you can have one simple loop with linear complexity, but later in that same program you can have a triple loop that has cubic complexity, so your program will have cubic complexity. Function order of growth comes into play right here.
Let's look at what are possibilities for time complexity of an algorithm, you can see order of growth I mentioned above:
Constant time has an order of growth of 1, for example: a = b + c.
Logarithmic time has an order of growth of log N. It usually occurs when you're dividing something in half (binary search, trees, even loops), or multiplying something in the same way.
Linear. The order of growth is N, for example
int p = 0;
for (int i = 1; i < N; i++)
    p = p + 2;
Linearithmic. The order of growth is N·log N. It usually occurs in divide-and-conquer algorithms.
Cubic. The order of growth is N^3. A classic example is a triple loop where you check all triplets:
int x = 0;
for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
        for (int k = 0; k < N; k++)
            x = x + 2;
Exponential. The order of growth is 2^N. It usually occurs when you do exhaustive search, for example, check subsets of some set.
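For instance, a small sketch (my own illustration, not from the list above) that exhaustively visits every subset of an N-element set performs 2^N iterations:
#include <stdio.h>

int main(void)
{
    int N = 20;                        // 2^20 = 1,048,576 subsets already
    long long subsets = 0;

    for (long long mask = 0; mask < (1LL << N); mask++)  // one bitmask per subset
    {
        // ... examine the subset encoded by the set bits of mask ...
        subsets++;
    }

    printf("checked %lld subsets\n", subsets);  // adding one element to the set doubles this count
    return 0;
}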
Loosely speaking, time complexity is a way of summarising how the number of operations or run-time of an algorithm grows as the input size increases.
Like most things in life, a cocktail party can help us understand.
O(N)
When you arrive at the party, you have to shake everyone's hand (do an operation on every item). As the number of attendees N increases, the time/work it will take you to shake everyone's hand increases as O(N).
Why O(N) and not cN?
There's variation in the amount of time it takes to shake hands with people. You could average this out and capture it in a constant c. But the fundamental operation here --- shaking hands with everyone --- would always be proportional to O(N), no matter what c was. When debating whether we should go to a cocktail party, we're often more interested in the fact that we'll have to meet everyone than in the minute details of what those meetings look like.
O(N^2)
The host of the cocktail party wants you to play a silly game where everyone meets everyone else. Therefore, you must meet N-1 other people and, because the next person has already met you, they must meet N-2 people, and so on. The sum of this series is N^2/2 - N/2. As the number of attendees grows, the N^2 term gets big fast, so we just drop everything else.
O(N^3)
You have to meet everyone else and, during each meeting, you must talk about everyone else in the room.
O(1)
The host wants to announce something. They ding a wineglass and speak loudly. Everyone hears them. It turns out it doesn't matter how many attendees there are, this operation always takes the same amount of time.
O(log N)
The host has laid everyone out at the table in alphabetical order. Where is Dan? You reason that he must be somewhere between Adam and Mandy (certainly not between Mandy and Zach!). Given that, is he between George and Mandy? No. He must be between Adam and Fred, and between Cindy and Fred. And so on... we can efficiently locate Dan by looking at half the set and then half of that set. Ultimately, we look at O(log_2 N) individuals.
O(N log N)
You could find where to sit down at the table using the algorithm above. If a large number of people came to the table, one at a time, and all did this, that would take O(N log N) time. This turns out to be how long it takes to sort any collection of items when they must be compared.
Best/Worst Case
You arrive at the party and need to find Inigo - how long will it take? It depends on when you arrive. If everyone is milling around you've hit the worst-case: it will take O(N) time. However, if everyone is sitting down at the table, it will take only O(log N) time. Or maybe you can leverage the host's wineglass-shouting power and it will take only O(1) time.
Assuming the host is unavailable, we can say that the Inigo-finding algorithm has a lower-bound of O(log N) and an upper-bound of O(N), depending on the state of the party when you arrive.
Space & Communication
The same ideas can be applied to understanding how algorithms use space or communication.
Knuth has written a nice paper about the former entitled "The Complexity of Songs".
Theorem 2: There exist arbitrarily long songs of complexity O(1).
PROOF: (due to Casey and the Sunshine Band). Consider the songs Sk defined by (15), but with
V_k = 'That's the way,' U 'I like it, ' U
U = 'uh huh,' 'uh huh'
for all k.
For the mathematically-minded people: The master theorem is another useful thing to know when studying complexity.
O(n) is big O notation used for writing the time complexity of an algorithm. When you add up the number of executions in an algorithm, you'll get an expression such as 2N + 2. In this expression, N is the dominating term (the term having the largest effect on the expression as its value increases or decreases). Now O(N) is the time complexity, while N is the dominating term.
Example
For i = 1 to n;
    j = 0;
    while (j <= n);
        j = j + 1;
Here the total number of executions for the inner loop is n+1 and the total number of executions for the outer loop is n(n+1)/2, so the total number of executions for the whole algorithm is n + 1 + n(n+1)/2 = (n^2 + 3n + 2)/2.
Here n^2 is the dominating term, so the time complexity of this algorithm is O(n^2).
Other answers concentrate on the big-O-notation and practical examples. I want to answer the question by emphasizing the theoretical view. The explanation below is necessarily lacking in details; an excellent source to learn computational complexity theory is Introduction to the Theory of Computation by Michael Sipser.
Turing Machines
The most widespread model to investigate any question about computation is a Turing machine. A Turing machine has a one dimensional tape consisting of symbols which is used as a memory device. It has a tapehead which is used to write and read from the tape. It has a transition table determining the machine's behaviour, which is a fixed hardware component that is decided when the machine is created. A Turing machine works at discrete time steps doing the following:
It reads the symbol under the tapehead.
Depending on the symbol and its internal state, which can only take finitely many values, it reads three values s, σ, and X from its transition table, where s is an internal state, σ is a symbol, and X is either Right or Left.
It changes its internal state to s.
It changes the symbol it has read to σ.
It moves the tapehead one step according to the direction in X.
Turing machines are powerful models of computation. They can do everything that your digital computer can do. They were introduced before the advent of digital modern computers by the father of theoretical computer science and mathematician: Alan Turing.
Time Complexity
It is hard to define the time complexity of a single problem like "Does white have a winning strategy in chess?" because there is a machine which runs for a single step giving the correct answer: Either the machine which says directly 'No' or directly 'Yes'. To make it work we instead define the time complexity of a family of problems L each of which has a size, usually the length of the problem description. Then we take a Turing machine M which correctly solves every problem in that family. When M is given a problem of this family of size n, it solves it in finitely many steps. Let us call f(n) the longest possible time it takes M to solve problems of size n. Then we say that the time complexity of L is O(f(n)), which means that there is a Turing machine which will solve an instance of it of size n in at most C.f(n) time where C is a constant independent of n.
Isn't it dependent on the machines? Can digital computers do it faster?
Yes! Some problems can be solved faster by other models of computation, for example two-tape Turing machines solve some problems faster than those with a single tape. This is why theoreticians prefer to use robust complexity classes such as NL, P, NP, PSPACE, EXPTIME, etc. For example, P is the class of decision problems whose time complexity is O(p(n)) where p is a polynomial. The class P does not change even if you add ten thousand tapes to your Turing machine, or use other types of theoretical models such as random access machines.
A Difference in Theory and Practice
It is usually assumed that the time complexity of integer addition is O(1). This assumption makes sense in practice because computers use a fixed number of bits to store numbers for many applications. There is no reason to assume such a thing in theory, so the time complexity of addition is O(k) where k is the number of bits needed to express the integer.
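To make the O(k) cost concrete, here is a small sketch (my own illustration) of schoolbook addition on integers stored as arrays of decimal digits; the loop runs once per digit, so the work grows linearly with the number of digits k:
#include <stdio.h>

// Add two k-digit numbers stored least-significant digit first.
// sum must have room for k+1 digits to hold a possible final carry.
void add_digits(const int *a, const int *b, int *sum, int k)
{
    int carry = 0;
    for (int i = 0; i < k; i++)
    {
        int s = a[i] + b[i] + carry;
        sum[i] = s % 10;
        carry = s / 10;
    }
    sum[k] = carry;
}

int main(void)
{
    int a[] = {9, 9, 9, 1};   // 1999, least-significant digit first
    int b[] = {2, 0, 0, 1};   // 1002
    int sum[5];

    add_digits(a, b, sum, 4);
    for (int i = 4; i >= 0; i--)
        printf("%d", sum[i]); // prints 03001, i.e. 3001 with a leading zero
    printf("\n");
    return 0;
}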
Finding The Time Complexity of a Class of Problems
The straightforward way to show that the time complexity of a problem is O(f(n)) is to construct a Turing machine which solves it in O(f(n)) time. Creating Turing machines for complex problems is not trivial; one needs some familiarity with them. A transition table for a Turing machine is rarely given; instead it is described at a high level. It becomes easier to see how long a machine will take to halt as one becomes familiar with them.
Showing that a problem is not O(f(n)) time complexity is another story... Even though there are some results like the time hierarchy theorem, there are many open problems here. For example, whether problems in NP are in P, i.e. solvable in polynomial time, is one of the seven millennium prize problems in mathematics, whose solver will be awarded 1 million dollars.

Speeding up GPU neural network

I'm trying to implement a neural network to run on a GPU using the Thrust and CUBLAS libraries, but I'm having a lot of trouble getting it to run faster than our current multithreaded and vectorized CPU implementation. The network has a single hidden layer with logistic units and an output layer with linear units, and here is the code for that:
// Functor to add bias before computing logistic
template <typename T>
struct bias_logistic_f {
    __host__ __device__
    T operator()(const T& x, const T& y) const {
        return 1/(1+exp(-(x+y)));
    }
};
bias_logistic_f<FLT> bias_logistic;

// Thrust vectors for input/hidden/output units
thrust::device_vector<FLT> batch(batch_rows*ndim);
thrust::device_vector<FLT> hid(batch_rows*nhid);
thrust::device_vector<FLT> gpu_code(ndata*ncode);

// ...Load data and network weights...

// Multiply input (batch) by weights (vis2hid)
// Our matrices are stored row-major, but BLAS wants column-major,
// so pretend they're transposed and compute hid' = vis2hid' * batch'
cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, nhid, batch_rows, ndim,
            &alpha, thrust::raw_pointer_cast(&vis2hid[0]), nhid,
            thrust::raw_pointer_cast(&batch[0]), ndim,
            &beta, thrust::raw_pointer_cast(&hid[0]), nhid);

// Add hidbiases to hid and compute logistic
thrust::transform(hid.begin(), hid.end(), hidbiases.begin(), hid.begin(),
                  bias_logistic);

// Multiply hid by weights (hid2code)
cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, ncode, batch_rows, nhid,
            &alpha, thrust::raw_pointer_cast(&hid2code[0]), ncode,
            thrust::raw_pointer_cast(&hid[0]), nhid,
            &beta, thrust::raw_pointer_cast(&gpu_code[b*batch_rows*ncode]), ncode);

// Add codebiases
thrust::transform(gpu_code.begin() + b*batch_rows*ncode, gpu_code.begin() + (b+1)*batch_rows*ncode,
                  codebiases.begin(), gpu_code.begin() + b*batch_rows*ncode,
                  thrust::plus<FLT>());
Our input data is a sparse matrix with about 150,000 rows and 6,500 columns, with about 100 non-zero elements per row on average. This is too large to store the full matrix as a dense matrix on the GPU, so what I do is loop through the sparse matrix expanding batches of 1,000 rows each to input into the neural network:
for(int b=0; b<nbatch; ++b) {
    // Zero out batch b
    thrust::fill(batch.begin(), batch.end(), 0.0f);

    // batch_val contains the non-zero values for the current batch, batch_idx the indices within the batch,
    // and batch_ptr indexes into batch_val/batch_idx.
    // This is like CSR format except instead of compressing rows, it's compressing submatrices of 1,000 rows.
    thrust::scatter(batch_val.begin() + batch_ptr[b],
                    batch_val.begin() + batch_ptr[b+1],
                    batch_idx.begin() + batch_ptr[b],
                    batch.begin());

    // ...Input batch to network (shown above)...
}
Our CPU implementation does the same thing, using STL vectors. When I ran both and compared their run times, I was surprised to find that the GPU code takes about 38 seconds on average to process our data, while the CPU code only takes about 27 seconds. It could be that some of this difference is due to the GPU being a few years old (a Tesla C1060) while the server is a newer 24-core machine. But still I would've thought with thousands of threads available, it wouldn't end up being 50% slower.
Any ideas how I can make this code run faster? I'm new to GPU programming so I'm at a loss as to what I could be doing wrong. Is there a more efficient way to deal with sparse matrices than what I'm doing here, such as using the CUSPARSE library? Or would it be a better idea to forget about the high-level libraries altogether and just write my own kernels in CUDA to combine the matrix multiplication/logistic/addition steps?

Fast way to compute (1:N)'*(1:N)

I am looking for a fast way to compute
(1:N)'*(1:N)
for reasonably large N. I feel like the symmetry of the problem makes it so that actually doing the multiplications and additions is wasteful.
The question of why you want to do this really matters.
In the theoretical sense, the triangular approach suggested in the other answers will save you operations. @jgmao's answer is especially interesting in reducing multiplies.
In the practical sense, number of CPU operations is no longer the metric to minimize when writing fast code. Memory bandwidth dominates when you have so few CPU operations, so tuned cache-aware access patterns are how to make this go fast. Matrix multiplication code is implemented extremely efficiently, since it's such a common operation, and every implementation of the BLAS numeric library worth its salt will use optimized access patterns, and SIMD computation as well.
Even if you wrote straight C and reduced your op count to the theoretic minimum, you'd probably still not beat the full matrix multiply. What this boils down to is to find the numeric primitive which most closely matches your operation.
All that said, there's a BLAS operation which gets a little closer than DGEMM (matrix multiply). It's called DSYRK, the rank-k update, and it can be used for exactly A'*A. The MEX function I wrote for this a long time ago is below. I haven't messed with it in a long time, but it did work when I first wrote it, and did in fact run faster than a straight A'*A.
/* xtrx.c: calculates x'*x taking advantage of the symmetry.
   Peter Boettcher <email removed>
   Last modified: <Thu Jan 23 13:53:02 2003> */

#include "mex.h"

const double one = 1;
const double zero = 0;

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    double *x, *z;
    int i, j, mrows, ncols;

    if(nrhs != 1) mexErrMsgTxt("One input required.");

    x = mxGetPr(prhs[0]);
    mrows = mxGetM(prhs[0]);
    ncols = mxGetN(prhs[0]);

    plhs[0] = mxCreateDoubleMatrix(ncols, ncols, mxREAL);
    z = mxGetPr(plhs[0]);

    /* Call the FORTRAN BLAS routine for rank k update */
    dsyrk_("U", "T", &ncols, &mrows, &one, x, &mrows, &zero, z, &ncols);

    /* Result is in the upper triangle. Copy it down to the lower part */
    for(i = 0; i < ncols; i++)
        for(j = i+1; j < ncols; j++)
            z[i*ncols + j] = z[j*ncols + i];
}
MATLAB's matrix multiplication is generally pretty fast, but here are a couple of ways to get just the upper triangular matrix. They are slower than naïvely computing the v'*v (or using a MEX wrapper that calls the more appropriate symmetric rank k update function in BLAS, not surprisingly!). Anyway, here are a few MATLAB-only solutions:
The first uses linear indexing:
% test vector
N = 1e3;
v = 1:N;
% compute upper triangle of product
[ii, jj] = find(triu(ones(N)));
upperMask = false(N,N);
upperMask(ii + N*(jj-1)) = true;
Mu = zeros(N);
Mu(upperMask) = v(ii).*v(jj); % other lines always the same computation
% validate
M = v'*v;
isequal(triu(M),Mu)
This next way won't be faster than the naive approach either, but here's another solution to compute the lower triangle with bsxfun:
Ml = bsxfun(@(x,y) [zeros(y-1,1); x(y:end)*y],v',v);
For the upper triangle:
Mu = bsxfun(@(x,y) [x(1:y)*y; zeros(numel(x)-y,1)],v',v);
isequal(triu(M),Mu)
Another solution for the whole matrix using cumsum for this special case (where v=1:N). This one is actually close in speed.
M = cumsum(repmat(v,[N 1]));
Maybe these can be a starting point for something better.
This is 3 times faster than (1:N).'*(1:N) provided an int32 result is acceptable (it's even faster if the numbers are small enough to use int16 instead of int32):
N = 1000;
aux = int32(1:N);
result = bsxfun(@times,aux.',aux);
Benchmarking:
>> N = 1000; aux = int32(1:N); tic, for count = 1:1e2, bsxfun(@times,aux.',aux); end, toc
Elapsed time is 0.734992 seconds.
>> N = 1000; aux = 1:N; tic, for count = 1:1e2, aux.'*aux; end, toc
Elapsed time is 2.281784 seconds.
Note that aux.'*aux cannot be used for aux = int32(1:N).
As pointed out by @DanielE.Shub, if the result is needed as a double matrix, a final cast has to be done, and in that case the gain is very small:
>> N = 1000; aux = int32(1:N); tic, for count = 1:1e2, double(bsxfun(@times,aux.',aux)); end, toc
Elapsed time is 2.173059 seconds.
Given the special ordered structure of the input, consider the case N=4:
(1:4)'*(1:4) = [1 2  3  4
                2 4  6  8
                3 6  9 12
                4 8 12 16]
You will find that the 1st row is just (1:N), and from the second row (j=2) onwards, each row is the previous row (j-1) plus (1:N).
So 1. you do not need to do many multiplications; instead, you can generate the matrix with N*N additions.
2. Since the output is symmetric, only half of the output matrix needs to be computed, so the total computation is (N-1)+(N-2)+...+1 ≈ N^2/2 additions.
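A minimal C-style sketch of this additive construction (my own illustration, not MATLAB code from the answer): the first row is 1..N, each later row of the upper triangle is the previous row plus 1..N, and the lower triangle is filled in by symmetry.
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int N = 5;
    int *M = (int *)calloc((size_t)(N * N), sizeof(int));  // row-major N x N result

    // First row is 1..N.
    for (int j = 0; j < N; j++)
        M[j] = j + 1;

    // Each later row (upper triangle only) is the previous row plus (1..N).
    for (int i = 1; i < N; i++)
        for (int j = i; j < N; j++)
            M[i*N + j] = M[(i-1)*N + j] + (j + 1);

    // Mirror the upper triangle into the lower triangle.
    for (int i = 1; i < N; i++)
        for (int j = 0; j < i; j++)
            M[i*N + j] = M[j*N + i];

    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%4d", M[i*N + j]);
        printf("\n");
    }
    free(M);
    return 0;
}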

Code or compiler: optimizing an IIR filter in C for the iPhone 4 and later

I've been profiling my almost-finished project and I'm seeing that about three-quarters of the CPU time is spent in this IIR filter function (which is called hundreds of thousands of times in about a second currently on the target hardware) so with everything else working well I am wondering if it can be optimized for my specific hardware and software target. My targets are only iPhone 4 and newer, only iOS 4.3 and newer, only LLVM 4.x. A little bit of imprecision is probably OK if there are gains to be made.
static float filter(const float a, const float *b, const float c, float *d, const int e, const float x)
{
    float return_value = 0;

    d[0] = x;
    d[1] = c * d[0] + a * d[1];

    int j;
    for (j = 2; j <= e; j++) {
        return_value += (d[j] += a * (d[j + 1] - d[j - 1])) * b[j];
    }

    for (j = e + 1; j > 1; j--) {
        d[j] = d[j - 1];
    }

    return (return_value);
}
Any suggestions about speeding it up appreciated, also interested in your opinion if it is possible to optimize beyond the default compiler optimization at all. I am wondering if it is something where NEON SIMD would help (that is new ground for me) or if VFP can be exploited, or if LLVM autovectorization would help.
I've tried the following LLVM flags:
-ffast-math (didn't make a notable difference)
-O4 (made a big difference on the iPhone 4S with a 25% reduction in time, but no notable difference on my minimum target device the iPhone 4, improvement of which is my main goal)
-O3 -mllvm -unroll-allow-partial -mllvm -unroll-runtime -funsafe-math-optimizations -ffast-math -mllvm -vectorize -mllvm -bb-vectorize-aligned-only (LLVM autovectorization flags from Hal Finkel's slides here: http://llvm.org/devmtg/2012-04-12/Slides/Hal_Finkel.pdf, made things slower than the default LLVM optimization for an Xcode release target)
Open to other flags, different approaches, and changes to the function. I'd prefer to leave the input and return types and values alone. There is actually a discussion of using NEON intrinsic functions for FIR here: https://pixhawk.ethz.ch/_media/software/optimization/neon_support_in_the_arm_compiler.pdf but I don't have quite enough experience with its subject to successfully apply the information to my own case. Thank you for any clarification.
EDIT My apologies for not noting this sooner. After investigating aka.nice's suggestion I noticed that the values passed in for e, a and c are always the same values and I know them before runtime, so approaches incorporating this info are an option.
Here are some transformations that could be made on the code to use vDSP routines. These transformations make use of various temporary buffers named T0, T1, and T2. Each of these is an array of float with enough space for e-1 elements.
First, use a temporary buffer to compute a * b[j]. This changes the original code:
for (j = 2; j <= e; j++) {
    return_value += (d[j] += a * (d[j + 1] - d[j - 1])) * b[j];
}
to:
vDSP_vsmul(b+2, 1, &a, T0, 1, e-1);
for (j = 2; j <= e; j++)
    return_value += (d[j] += (d[j+1] - d[j-1])) * T0[j-2];
Then use vDSP_vmul to compute d[j+1] * T0[j-2]:
vDSP_vsmul(b+2, 1, &a, T0, 1, e-1);
vDSP_vmul(d+3, 1, T0, 1, T1, 1, e-1);
for (j = 2; j <= e; j++)
    return_value += (d[j] += T1[j-2] - d[j-1] * T0[j-2]);
Next, promote vDSP_vmul to vDSP_vma (vector multiply add) to compute d[j] + d[j+1] * T0[j-2]:
vDSP_vsmul(b+2, 1, &a, T0, 1, e-1);
vDSP_vma(d+3, 1, T0, 1, d+2, 1, T1, 1, e-1);
for (j = 2; j <= e; j++)
    return_value += (d[j] = T1[j-2] - d[j-1] * T0[j-2]);
I suppose I would time that and see if there is any improvement. There are some issues:
SIMD code works best when data is 16-byte aligned. The use of array indices such as j-1 and j+1 prevents this. The ARM processors in phones are not as bad with unaligned data as some other processors, but performance will vary from model to model.
If e is large (more than a few thousand), then T0 and d may be evicted from cache during the vDSP_vma operation, and the following loop will have to reload them. There is a technique called strip mining to reduce the effect of this. I will not detail it now, but, essentially, the operation is partitioned into smaller strips of the array.
The IIR in the final loop may still bottleneck the processor. There are routines in vDSP for performing some IIRs (such as vDSP_deq22), but it is not clear whether this filter can be expressed in a way that is a good enough match to a vDSP routine to gain more performance than might be lost by the transformation.
The summation in the final loop to calculate return_value could also be removed from the loop and replaced with a vDSP routine (likely vDSP_sve), but I suspect the slack caused by the IIR will permit the additions to be done without adding significant execution time to the loop.
The above is off the top of my head; I have not tested the code. I suggest making the transformations one-by-one so you can test the code after each change and identify any errors before going on.
If you can find a satisfactory filter that is not an IIR, more performance optimizations may be available.
I'd prefer to leave the input and return types and values alone…
Nevertheless, moving your rendering from float to integer would help considerably.
Localizing that change to the implementation you present won't be useful. But if you expand it to reimplement just the FIR as integer, it can quickly pay off (unless the sizes are guaranteed to always be incredibly small -- then conversion/move times cost more). Of course, moving larger portions of the render graph to integer will introduce larger gains and require even fewer conversions.
Another consideration would be to look at the utilities in Accelerate.framework (potentially saving you from writing your own asm).
I tried this little exercise to rewrite your filter with the delay operator z.
For example, for e=4, I renamed the input u and the output y:
d1*z= u
d2*z= c*u + a*d1
d3*z= d2 + a*(d3-d1*z)
d4*z= d3 + a*(d4-d2*z)
d5*z= d4 + a*(d5-d3*z)
y = (b2*d3*z + b3*d4*z + b4*d5*z)
Note that the di are the filter states.
d3*z is the next value of d3 (it appears to be variable d2 in your code)
You can then eliminate the di to write the transfer function y/u in z.
You will then find that a minimal representation requires only e states, by factoring/simplifying the above transfer function.
The denominator is z*(z-a)^3, that is, a pole at 0 and another at a with multiplicity (e-1).
You can then put your filter in a standard state space matrix representation:
z*X = A*X + B*u
y = C*X + d*u
With this particular form of the poles, you can decompose the transfer function into a partial fraction expansion and obtain the matrices A & B in this special form (MATLAB-like notation):
A = [0 1 0 0;      B = [0;
     0 a 1 0;           0;
     0 0 a 1;           0;
     0 0 0 a]           1]
C & d are a bit less easy though...
They are extracted from the numerators and direct term of partial fraction expansion
They are polynomials in bi, c (degree 1) and a (degree e)
For e=4, I have
C=[ a^3*b2 - a^2*b3 + a*b4 ,
-a^2*b2 + a*b3 + (c-a^2)*b4 ,
a*b2 + (c-a^2)*b3 + (-2*a^2*c-2*a^2-a+a^4)*b4 ,
(c-a^2)*b2 + (-a^2-a^2*c-a)*b3 + (-2*a*c+2*a^3)*b4 ]
d= -a*b2 - a*c*b3 + a^2*b4
If you can find the recurrence in e governing C & d, and precompute them,
then the filter can be reduced to these simple vector ops:
z*X = a*[ 0; x2 ; x3 ; x4 ... xe ] + [x2 ; x3 ; x4 ... xe ; u ];
Y = C*[ x1 ; x2 ; x3 ; x4 ... xe ] + d*u
Or expressed as a function (Xnext,y)=filter(X,u,a,C,d,e) pseudo code:
y = dot_product(C, X) + d*u;   // (like BLAS _DOT)
Xnext(1:e-1) = X(2:e);         // this is a memcopy (like BLAS _COPY)
Xnext(e) = u;
X(1) = 0;
Xnext = a*X + Xnext;           // this is an in-place vector muladd (like BLAS _AXPY)
X = Xnext;                     // another memcopy outside the function (can be moved inside).
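A rough C translation of that pseudo code (my own sketch, untested; it assumes the coefficient vector C of length e and the direct term d have been precomputed as described above, and uses plain loops instead of BLAS calls):
// One step of the reduced filter: consumes input u, returns output y,
// and advances the e-element state X in place.
static float filter_step(float *X, float u, float a, const float *C, float d, int e)
{
    // y = dot_product(C, X) + d*u          (the BLAS _DOT step)
    float y = d * u;
    for (int i = 0; i < e; i++)
        y += C[i] * X[i];

    // z*X = a*[0; x2; ...; xe] + [x2; x3; ...; xe; u]
    // Updating front to back means each X[i+1] read below is still the old value.
    X[0] = X[1];                        // a*0  + x2
    for (int i = 1; i < e - 1; i++)
        X[i] = a * X[i] + X[i + 1];     // a*xi + x(i+1)
    X[e - 1] = a * X[e - 1] + u;        // a*xe + u

    return y;
}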
Note that if you use BLAS functions, your code will be portable to many hardware targets, not just Apple-centric ones, and I guess the performance won't be much different.
EDIT: about partial fraction expansion
The pure partial fraction expansion would give a diagonal state space representation, and a matrix B full of 1s. This can be an interesting variant too. (filters in parallel)
My variant used above is more like a cascade or ladder (filters in series).