How can I implement 2^x in s5.26 fixed-point arithmetic, for input values in the range [-31.9, 31.9], using a minimax polynomial approximation for exp2()? And how can I generate the polynomial using the Sollya tool mentioned in the following related link:
Power of 2 approximation in fixed point
Since fixed-point arithmetic generally does not include an "infinity" encoding representing overflowed results, any implementation of exp2() for an s5.26 format will be limited to inputs in the interval (-32, 5), resulting in outputs in [0, 32).
The computation of transcendental functions typically consists of argument reduction, a core approximation, and construction of the final result. In the case of exp2(a), a reasonable argument reduction scheme is to split a into an integer part i and a fractional part f, such that a == i + f, with f in [-0.5, 0.5]; for example, a = 3.7 splits into i = 4 and f = -0.3. One then computes exp2(f) and scales the result by 2^i, which corresponds to a shift in fixed-point arithmetic: exp2(a) = exp2(f) * exp2(i).
The common design choices for the computation of exp2(f) are interpolation in tabulated values of exp2(), or polynomial approximation. Since we need 31 result bits for the largest arguments, accurate interpolation would probably have to be quadratic to keep the table size reasonable. Since many modern processors (including those used in embedded systems) provide a fast integer multiplier, I will focus here on approximation by polynomial. For this, we want a polynomial with the minimax property, that is, one that minimizes the maximum error with respect to the reference.
Both commercial and free tools offer built-in capabilities to generate minimax approximations, e.g. Mathematica's MiniMaxApproximation command, Maple's minimax command, and Sollya's fpminimax command. One might also choose to build one's own infrastructure based on the Remez algorithm, which is the approach I have used. As opposed to floating-point arithmetic, which typically rounds to nearest-or-even, fixed-point arithmetic is usually restricted to truncation of intermediate results. This introduces additional error during expression evaluation. As a consequence, it is usually a good idea to perform a heuristic-based search for small adjustments to the coefficients of the generated approximation, to partially balance those accumulating one-sided errors.
Because we need up to 31 bits in the result, and because coefficients in core approximations are typically less than unity in magnitude, we cannot use the native fixed-point precision, here s5.26, for polynomial evaluation. Instead, we want to scale up the operands in intermediate computation to fully use the available range of 32-bit integers, by dynamically adjusting the fixed-point format we are working in. For reasons of efficiency, it seems advisable to arrange the computation such that multiplications use re-normalization right shifts by 32 bits. This will often allow the elimination of explicit shifts on 32-bit processors.
Since intermediate computation uses signed data, right shifts of signed, negative operands will occur. We want those right shifts to map to arithmetic right shift instructions, something the C standard does not guarantee. But on most commonly used platforms, C compilers do what is desirable for us. Otherwise, it may be necessary to resort to intrinsics or inline assembly. I developed the code below with the Microsoft compiler on an x64 platform.
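For illustration, here is a minimal sketch (my own addition, not part of the code below) of a pure-C fallback that always behaves like an arithmetic right shift; it assumes the usual two's-complement representation, and on common compilers it reduces to a single shift instruction:
/* portable arithmetic right shift for signed 32-bit operands, 0 <= s <= 31;
   requires <stdint.h> for int32_t */
static int32_t asr32 (int32_t a, int s)
{
return (a < 0) ? ~(~a >> s) : (a >> s);
}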
In the evaluation of the polynomial approximation for exp2(f), the original floating-point coefficients, the dynamic scaling, and the heuristic adjustments are all clearly visible. The code below does not quite achieve full accuracy for large arguments. The biggest absolute error is 1.10233e-7, for the argument 0x12de9c5b = 4.71739332: fixed_exp2() returns 0x693ab6a3, while the accurate result would be 0x693ab69c. Presumably full accuracy could be achieved by increasing the degree of the polynomial core approximation by one.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <math.h>
/* on 32-bit architectures, there is often an instruction/intrinsic for this */
int32_t mulhi (int32_t a, int32_t b)
{
return (int32_t)(((int64_t)a * (int64_t)b) >> 32);
}
/* compute exp2(a) in s5.26 fixed-point arithmetic */
int32_t fixed_exp2 (int32_t a)
{
int32_t i, f, r, s;
/* split a = i + f, such that f in [-0.5, 0.5] */
i = (a + 0x2000000) & ~0x3ffffff; // 0.5
f = a - i;
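/* compute s = 5 - n, where n is the (rounded) integer part of a; s is used at the end
   as the right-shift count that converts the 2**31-scaled core result into
   exp2(f) * 2**n in s5.26 format */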
s = ((5 << 26) - i) >> 26;
f = f << 5; /* scale up for maximum accuracy in intermediate computation */
/* approximate exp2(f)-1 for f in [-0.5, 0.5] */
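/* Horner scheme with dynamic scaling: f is now scaled by 2**31, the leading coefficient
   by 2**36, and each mulhi(r, f) = (r * f) >> 32 halves the scale factor, matching the
   2**35, 2**34, ... scaling of the subsequent coefficients. The small added constants
   (996, 99, ...) are the heuristic adjustments that partially balance truncation errors. */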
r = (int32_t)(1.53303146e-4 * (1LL << 36) + 996);
r = mulhi (r, f) + (int32_t)(1.33887795e-3 * (1LL << 35) + 99);
r = mulhi (r, f) + (int32_t)(9.61833261e-3 * (1LL << 34) + 121);
r = mulhi (r, f) + (int32_t)(5.55036329e-2 * (1LL << 33) + 51);
r = mulhi (r, f) + (int32_t)(2.40226507e-1 * (1LL << 32) + 8);
r = mulhi (r, f) + (int32_t)(6.93147182e-1 * (1LL << 31) + 5);
r = mulhi (r, f);
/* add 1, scale based on integral portion of argument, round the result */
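/* r holds (exp2(f) - 1) scaled by 2**30; doubling it and adding 1.0 scaled by 2**31
   gives exp2(f) * 2**31, and the rounded right shift by s converts this to
   exp2(f) * 2**i in s5.26 format */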
r = ((((uint32_t)r * 2) + (uint32_t)(1.0*(1LL << 31)) + ((1U << s) / 2) + 1) >> s);
/* when argument < -26.5, result underflows to zero */
if (a < -0x6a000000) r = 0;
return r;
}
/* convert from s5.26 fixed point to double-precision floating point */
double fixed_to_float (int32_t a)
{
return a / 67108864.0;
}
int main (void)
{
double a, res, ref, err, maxerr = 0.0;
int32_t x, start, end;
start = -0x7fffffff; // -31.999999985
end = 0x14000000; // 5.000000000
printf ("testing fixed_exp2 with inputs in [%.9f, %.9f)\n",
fixed_to_float (start), fixed_to_float (end));
for (x = start; x < end; x++) {
a = fixed_to_float (x);
ref = exp2 (a);
res = fixed_to_float (fixed_exp2 (x));
err = fabs (res - ref);
if (err > maxerr) {
maxerr = err;
}
}
printf ("max. abs. err = %g\n", maxerr);
return EXIT_SUCCESS;
}
A table-based alternative would trade off table storage against a reduction in the amount of computation that is performed. Depending on the size of the L1 data cache, this may or may not increase performance. One possible approach is to tabulate 2^f - 1 for f in [0, 1). Then split the function argument into an integer i and a fraction f, such that f is in [0, 1). In order to keep the table reasonably small, use quadratic interpolation, with the coefficients of the polynomial computed on the fly from three consecutive table entries. The result is slightly adjusted by a heuristically determined offset to somewhat compensate for the truncating nature of fixed-point arithmetic.
The table is indexed by leading bits of the fraction f. Using seven bits for the index (resulting in a table of 128+2 entries), accuracy is slightly worse than with the previous minimax polynomial approximation. Maximum absolute error is 1.74935e-7. It occurs for an argument of 0x11580000 = 4.33593750, where fixed_exp2() returns 0x50c7d771, whereas the accurate result would be 0x50c7d765.
/* For i in [0,129]: (exp2 (i/128.0) - 1.0) * (1 << 31) */
static const uint32_t expTab [130] =
{
0x00000000, 0x00b1ed50, 0x0164d1f4, 0x0218af43,
0x02cd8699, 0x0383594f, 0x043a28c4, 0x04f1f656,
0x05aac368, 0x0664915c, 0x071f6197, 0x07db3580,
0x08980e81, 0x0955ee03, 0x0a14d575, 0x0ad4c645,
0x0b95c1e4, 0x0c57c9c4, 0x0d1adf5b, 0x0ddf0420,
0x0ea4398b, 0x0f6a8118, 0x1031dc43, 0x10fa4c8c,
0x11c3d374, 0x128e727e, 0x135a2b2f, 0x1426ff10,
0x14f4efa9, 0x15c3fe87, 0x16942d37, 0x17657d4a,
0x1837f052, 0x190b87e2, 0x19e04593, 0x1ab62afd,
0x1b8d39ba, 0x1c657368, 0x1d3ed9a7, 0x1e196e19,
0x1ef53261, 0x1fd22825, 0x20b05110, 0x218faecb,
0x22704303, 0x23520f69, 0x243515ae, 0x25195787,
0x25fed6aa, 0x26e594d0, 0x27cd93b5, 0x28b6d516,
0x29a15ab5, 0x2a8d2653, 0x2b7a39b6, 0x2c6896a5,
0x2d583eea, 0x2e493453, 0x2f3b78ad, 0x302f0dcc,
0x3123f582, 0x321a31a6, 0x3311c413, 0x340aaea2,
0x3504f334, 0x360093a8, 0x36fd91e3, 0x37fbefcb,
0x38fbaf47, 0x39fcd245, 0x3aff5ab2, 0x3c034a7f,
0x3d08a39f, 0x3e0f680a, 0x3f1799b6, 0x40213aa2,
0x412c4cca, 0x4238d231, 0x4346ccda, 0x44563ecc,
0x45672a11, 0x467990b6, 0x478d74c9, 0x48a2d85d,
0x49b9bd86, 0x4ad2265e, 0x4bec14ff, 0x4d078b86,
0x4e248c15, 0x4f4318cf, 0x506333db, 0x5184df62,
0x52a81d92, 0x53ccf09a, 0x54f35aac, 0x561b5dff,
0x5744fccb, 0x5870394c, 0x599d15c2, 0x5acb946f,
0x5bfbb798, 0x5d2d8185, 0x5e60f482, 0x5f9612df,
0x60ccdeec, 0x62055b00, 0x633f8973, 0x647b6ca0,
0x65b906e7, 0x66f85aab, 0x68396a50, 0x697c3840,
0x6ac0c6e8, 0x6c0718b6, 0x6d4f301f, 0x6e990f98,
0x6fe4b99c, 0x713230a8, 0x7281773c, 0x73d28fde,
0x75257d15, 0x767a416c, 0x77d0df73, 0x792959bb,
0x7a83b2db, 0x7bdfed6d, 0x7d3e0c0d, 0x7e9e115c,
0x80000000, 0x8163daa0
};
int32_t fixed_exp2 (int32_t x)
{
int32_t f1, f2, dx, a, b, approx, idx, i, f;
/* extract integer portion; 2**i is realized as a shift at the end */
i = (x >> 26);
/* extract fraction f so we can compute 2^f, 0 <= f < 1 */
f = x & 0x3ffffff;
/* index table of exp2 values using 7 most significant bits of fraction */
idx = (uint32_t)f >> (26 - 7);
/* difference between argument and next smaller sampling point */
dx = f - (idx << (26 - 7));
/* fit parabola through closest 3 sampling points; find coefficients a,b */
f1 = (expTab[idx+1] - expTab[idx]);
f2 = (expTab[idx+2] - expTab[idx]);
a = f2 - (f1 << 1);
b = (f1 << 1) - a;
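/* With expTab[idx] as the origin, the parabola through the samples (0,0), (1,f1), (2,f2)
   (x measured in table steps) is y(x) = ((a*x + b)*x) / 2: note (a + b)/2 == f1 and
   (2*a + b)*2/2 == f2. The evaluation below rescales dx to table-step units via the
   >> (26 - 7) shifts and folds the final division by 2 into the last shift. */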
/* find function value offset for argument x by computing ((a*dx+b)*dx) */
approx = a;
approx = (int32_t)((((int64_t)approx)*dx) >> (26 - 7)) + b;
approx = (int32_t)((((int64_t)approx)*dx) >> (26 - 7 + 1));
/* combine integer and fractional parts of result, round result */
approx = (((expTab[idx] + (uint32_t)approx + (uint32_t)(1.0*(1LL << 31)) + 22U) >> (30 - 26 - i)) + 1) >> 1;
/* flush underflow to 0 */
if (i < -27) approx = 0;
return approx;
}
I have images with different rotational orientations. I want to find the correct rotation angle using cross-correlation maximization. Since my image set is big, I wanted to speed up the normxcorr2 function using the mex file here.
I used the following code to calculate matched_angle:
function [matched_angle, max_corr_vecq, matched_angle_mex, max_corr_vecq_mex] = get_correct_rotation(moving, fixed)
for theta = 360:-10:10
rotated = imrotate(moving, theta,'bicubic','crop');
corr2d_map = normxcorr2(double(rotated), double(fixed));
corr2d_map_mex = normxcorr2_mex(double(rotated), double(fixed),'full');
[max_corr_vec(theta/10), ~] = max(corr2d_map(:));
[max_corr_vec_mex(theta/10), ~] = max(corr2d_map_mex(:));
end
% Interpolate correlation max vector for half degree resolution
max_corr_vecq = interp1(10:10:360, max_corr_vec, 0.5:0.5:360, 'spline');
[~, matched_angle] = max(max_corr_vecq);
matched_angle = 0.5 * matched_angle;
% Interpolate correlation max vector for half degree resolution
max_corr_vecq_mex = interp1(10:10:360, max_corr_vec_mex, 0.5:0.5:360, 'spline');
[~, matched_angle_mex] = max(max_corr_vecq_mex);
matched_angle_mex = 0.5 * matched_angle_mex;
end
However, using the same two images (Moving Template Image & Fixed Reference Image) with the two functions normxcorr2 and normxcorr2_mex gives totally different results.
plot(0.5:0.5:360, max_corr_vecq, 'linewidth',2); hold on;
plot(0.5:0.5:360, max_corr_vecq_mex, 'linewidth',2);
legend({'MATLAB Built-in', 'MEX'});
set(gca, 'FontSize', 14, 'FontWeight', 'bold');
See Result Plot.
Does anyone have an idea what is going on? I could not find anything regarding the accuracy of that mex file. And according to the author:
the following are equivalent:
result = normxcorr2_mex(template, image, 'full');
AND
result = normxcorr2(template, image);
except that normxcorr2_mex has 0's in the 'invalid' area along the boundary
which should not be a problem in my case, since I am only checking the maximum correlation value.
Since my previous answer, I have found the normxcorr2_mex library to be consistently slower (than MATLAB) and incorrect in all of my use cases.
As I really needed a C++ implementation (that I could verify with MATLAB), I created my own. The code is listed here:
/* normxcorr2_mex.cpp
*
* A MATLAB-mex wrapper around a C/C++ implementation of the Normalised Cross Correlation algorithm described
* by @dafnahaktana in https://stackoverflow.com/questions/44591037/speed-up-calculation-of-maximum-of-normxcorr2.
*
* This module uses the 'integral image' data structure described in the posted MATLAB/Octave code (based upon the
* original Industrial Light & Magic paper at http://scribblethink.org/Work/nvisionInterface/nip.pdf), but replaces
* the "naive" correlation step with a Fourier transform implementation for larger template sizes.
*
* Daniel Eaton released a MATLAB-mex library (http://www.cs.ubc.ca/research/deaton/remarks_ncc.html) with the
* same function name as this one in 2013. Indeed, I acknowledge [and flatteringly plagiarise] his interface and
* naming convention. Unfortunately, I was unable to duplicate the speed (wrt MATLAB's normxcorr2) improvements he
* claimed with the image sizes I required. Curiously, I also observed different results using his library compared
* with MATLAB's built-in function (despite the claim that they are identical). This was also noted by others here:
* https://stackoverflow.com/questions/48641648/different-results-of-normxcorr2-and-normxcorr2-mex. This module
* does match normxcorr2 on both the MATLAB R2016b and R2017a/b versions tested, using the (accompanying) test script.
* Like Daniel's module, however, this function returns only the 'valid' region of correlation values, i.e. it
* doesn't pad the output array to match the input image size.
*
* This function is called via:
* NCC = normxcorr2_mex (TEMPLATE, A);
* Where:
* TEMPLATE - The (double precision) matrix to correlate with A.
* A - (Double precision) input matrix for correlation with the TEMPLATE. Note size(A) > size(TEMPLATE).
* NCC - is the computed normalised cross correlation coefficients of the matrices TEMPLATE and A.
* The size of the correlation coefficient matrix is given as:
*
* size(NCC) = [(Ar - TEMPLATEr + 1), (Ac - TEMPLATEc + 1)] ; where:
*
* Ar, Ac and TEMPLATEr, TEMPLATEc are the number of (rows, cols) of A and TEMPLATE respectively.
*
* This module requires the Eigen C++ library (http://eigen.tuxfamily.org/index.php?title=Main_Page) for compilation
* and may be compiled within MATLAB via:
*
* mex -I'[Path to]\eigen-3.3.5' normxcorr2_mex.cpp
*
* Since NCC is such a computationally intensive task, this module may be linked against the openMP library to exploit a
* pool of worker threads and distribute some of the embarrassingly parallel operations within it across a number of CPU cores.
* Only rudimentary use is made of the library, but the following compilation option provides speedups generally
* exceeding 50%:
*
* mex -I'[Path to]\eigen-3.3.5' CXXFLAGS="$CXXFLAGS -fopenmp" LDFLAGS="$LDFLAGS -fopenmp" normxcorr2_mex.cpp
*
*
* You are free to do with this code as you wish. For this reason, it is released under the UNLICENSE model:
*
* This is free and unencumbered software released into the public domain.
*
* Anyone is free to copy, modify, publish, use, compile, sell, or
* distribute this software, either in source code form or as a compiled
* binary, for any purpose, commercial or non-commercial, and by any
* means.
*
* In jurisdictions that recognize copyright laws, the author or authors
* of this software dedicate any and all copyright interest in the
* software to the public domain. We make this dedication for the benefit
* of the public at large and to the detriment of our heirs and
* successors. We intend this dedication to be an overt act of
* relinquishment in perpetuity of all present and future rights to this
* software under copyright law.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
* IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* For more information, please refer to <http://unlicense.org/>
*/
#include "mex.h"
#include <cstring>
#include <algorithm>
#include <limits>
#include <vector>
#include <cmath>
#include <complex>
#include <iostream>
// If we're compiled/linked with openMP, turn off Eigen's parallelisation.
// (These macros must be defined before the Eigen headers are included to take effect.)
#ifdef _OPENMP
#define EIGEN_DONT_PARALLELIZE
#define EIGEN_NO_DEBUG
#endif
#include <Eigen/Core>
#include <unsupported/Eigen/FFT>
using namespace Eigen;
// For very small input templates, performing the raw 2D correlation in the spatial domain may be faster than
// the transform domain (due to the overhead that the latter involves). The decision which approach to use is
// made at runtime by comparing the size (=rows*cols) of the input TEMPLATE matrix with the following constant.
// Feel free to experiment with this value in your own application!
#define TEMPLATE_SIZE_THRESHOLD 401
// 2D Cross-correlation performed via the "naive approach" (laborious spatial domain convolution).
ArrayXXd spatialXcorr (const Ref<const ArrayXXd>& img, const Ref<const ArrayXXd>& templ)
{
int32_t r, c;
ArrayXXd xcorr2(img.rows()-templ.rows()+1, img.cols()-templ.cols()+1);
for (r=0; r<(img.rows()-templ.rows()+1); r++)
for (c=0; c<(img.cols()-templ.cols()+1); c++)
xcorr2(r,c) = (templ*img.block(r,c,templ.rows(),templ.cols())).sum();
return(xcorr2);
}
// 2D Cross-correlation performed via Fourier transform
ArrayXXd transformXcorr (const Ref<const ArrayXXd>& img, const Ref<const ArrayXXd>& templ)
{
ArrayXXd xcorr2(img.rows()-templ.rows()+1, img.cols()-templ.cols()+1);
// Copy the input arrays into a matrix the next power-of-2 up in size
int32_t nextPow2r = (int32_t)(pow(2.0, round(0.5+log((double)(img.rows()))/log(2.0))));
int32_t nextPow2c = (int32_t)(pow(2.0, round(0.5+log((double)(img.cols()))/log(2.0))));
MatrixXd imgPwr2 = MatrixXd::Zero(nextPow2r, nextPow2c);
MatrixXd templPwr2 = MatrixXd::Zero(nextPow2r, nextPow2c);
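// Note: padding to a power of 2 >= the *image* size is sufficient here because only the
// 'valid' correlation region is extracted at the end; wrap-around from the circular
// convolution only affects output rows/cols below templ.rows()-1 / templ.cols()-1,
// which are discarded.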
// A -> copied to top-left corner.
// TEMPLATE is rotated 180 degrees to account for rotation/flip performed during convolution.
imgPwr2.block(0, 0, img.rows(), img.cols()) = img.matrix();
templPwr2.block(0, 0, templ.rows(), templ.cols()) = (templ.matrix().colwise().reverse()).rowwise().reverse();
// Perform 2D FFTs via sequential 1D transforms (Rows first, then columns)
MatrixXcd imgFT(nextPow2r, nextPow2c), templFT(nextPow2r, nextPow2c), prodFT(nextPow2r, nextPow2c);
// Rows first...
#ifdef _OPENMP
// If using parallel threads, each thread must have its own copy of the eigenFFT plan;
// this is unnecessary for single-threaded execution, as each evaluation of the FFT is
// identical in length and data type. Creating the plan is computationally expensive,
// so in the single-threaded case we do it once, outside of the loop (which reduces the
// run time by a factor > 2).
#pragma omp parallel for schedule(dynamic)
for (int32_t r=0; r<nextPow2r; r++) {
VectorXcd rowVec(nextPow2c);
FFT<double> eigenFFT;
#else
VectorXcd rowVec(nextPow2c);
FFT<double> eigenFFT;
for (int32_t r=0; r<nextPow2r; r++) {
#endif
eigenFFT.fwd(rowVec, imgPwr2.row(r));
imgFT.row(r) = rowVec;
eigenFFT.fwd(rowVec, templPwr2.row(r));
templFT.row(r) = rowVec;
}
// ...then columns.
#ifdef _OPENMP
#pragma omp parallel for schedule(dynamic)
for (int32_t c=0; c<nextPow2c; c++) {
VectorXcd colVec(nextPow2r);
FFT<double> eigenFFT;
#else
VectorXcd colVec(nextPow2r);
for (int32_t c=0; c<nextPow2c; c++) {
#endif
eigenFFT.fwd(colVec, imgFT.col(c));
imgFT.col(c) = colVec;
eigenFFT.fwd(colVec, templFT.col(c));
templFT.col(c) = colVec;
}
// Multiply the complex Fourier-domain matrices
prodFT = imgFT.cwiseProduct(templFT);
// Transform (complex) Fourier product back -> (real) spatial domain (2D IFFT).
// Reuse templPwr2 as the output variable for efficiency.
// Rows first (again)...
#ifdef _OPENMP
#pragma omp parallel for schedule(dynamic)
for (int32_t r=0; r<nextPow2r; r++) {
FFT<double> eigenFFT;
VectorXcd rowVec(nextPow2c);
#else
for (int32_t r=0; r<nextPow2r; r++) {
#endif
eigenFFT.inv(rowVec, prodFT.row(r));
prodFT.row(r) = rowVec;
}
// ...and lastly, columns.
#ifdef _OPENMP
#pragma omp parallel for schedule(dynamic)
for (int32_t c=0; c<nextPow2c; c++) {
FFT<double> eigenFFT;
VectorXcd colVec(nextPow2r);
#else
for (int32_t c=0; c<nextPow2c; c++) {
#endif
eigenFFT.inv(colVec, prodFT.col(c));
templPwr2.col(c) = colVec.real();
}
// Extract the valid region of correlation coefficients
xcorr2 = templPwr2.array().block(templ.rows()-1, templ.cols()-1, img.rows()-templ.rows()+1, img.cols()-templ.cols()+1);
return(xcorr2);
}
// Normalised cross-correlation top-level function
ArrayXXd normxcorr2 (const Ref<const ArrayXXd>& templ, const Ref<const ArrayXXd>& img)
{
ArrayXXd templZMean(templ.rows(), templ.cols());
ArrayXXd scalingCoeffs(img.rows() - templ.rows() +1, img.cols() - templ.cols() +1);
ArrayXXd normxcorr(img.rows()-templ.rows()+1, img.cols()-templ.cols()+1);
ArrayXXd integralImg(img.rows()+2, img.cols()+2), integralImgSq(img.rows()+2, img.cols()+2);
ArrayXXd windowMeanA = ArrayXXd::Zero(img.rows() - templ.rows() +1, img.cols() - templ.cols() +1);
ArrayXXd windowMeanASq = ArrayXXd::Zero(img.rows() - templ.rows() +1, img.cols() - templ.cols() +1);
// Calculate the standard deviation of the TEMPLATE
double templSizeRcp = 1.0/(double)(templ.rows()*templ.cols());
templZMean = templ-templ.mean();
double templateStd = sqrt((templZMean.pow(2)).sum()*templSizeRcp);
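// Overall, for each template-sized window W of A this function computes
//   NCC = sum(TZM .* W) / (N * std(TEMPLATE) * std(W))
// where TZM = TEMPLATE - mean(TEMPLATE) and N = rows*cols of the TEMPLATE. Since
// sum(TZM) == 0, sum(TZM .* W) equals sum(TZM .* (W - mean(W))), i.e. the numerator of
// the usual normalised cross-correlation. The integral images below give mean(W) and
// std(W) cheaply; scalingCoeffs gathers the 1/(N * std * std) factor applied at the end.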
// Compute mean and standard deviation of input matrix A over the template window size. Firstly...
// Construct array for computing the integral image(s) + zero pad the edges to avoid boundary issues
integralImg.block(0, 0, 1, integralImg.cols()) = ArrayXXd::Zero(1, integralImg.cols());
integralImg.block(0, 0, integralImg.rows(), 1) = ArrayXXd::Zero(integralImg.rows(), 1);
integralImg.block(0, integralImg.cols()-1, integralImg.rows(), 1) = ArrayXXd::Zero(integralImg.rows(), 1);
integralImg.block(integralImg.rows()-1, 0, 1, integralImg.cols()) = ArrayXXd::Zero(1, integralImg.cols());
integralImgSq.block(0, 0, 1, integralImgSq.cols()) = ArrayXXd::Zero(1, integralImgSq.cols());
integralImgSq.block(0, 0, integralImgSq.rows(), 1) = ArrayXXd::Zero(integralImgSq.rows(), 1);
integralImgSq.block(0, integralImgSq.cols()-1, integralImgSq.rows(), 1) = ArrayXXd::Zero(integralImgSq.rows(), 1);
integralImgSq.block(integralImgSq.rows()-1, 0, 1, integralImgSq.cols()) = ArrayXXd::Zero(1, integralImgSq.cols());
// Calculate cumulative sum. Along the length of each row first...
for (int32_t r=0; r<img.rows(); r++) {
double sum = 0.0;
double sumSq = 0.0;
for (int32_t c=0; c<img.cols(); c++) {
sum += img(r,c);
sumSq += (img(r,c)*img(r,c));
integralImg(r+1, c+1) = sum;
integralImgSq(r+1, c+1) = sumSq;
}
}
// ...and then down each column.
for (int32_t c=1; c<=img.cols(); c++) {
double sum = 0.0;
double sumSq = 0.0;
for (int32_t r=1; r<=img.rows(); r++) {
sum += integralImg(r,c);
sumSq += integralImgSq(r,c);
integralImg(r,c) = sum;
integralImgSq(r,c) = sumSq;
}
}
// Determine start/finish indexes for the boundaries of the summed area
int32_t rStart = (int32_t)(0.5 + templ.rows()/2.0);
int32_t rEnd = img.rows() - rStart + (templ.rows() % 2);
int32_t cStart = (int32_t)(0.5 + templ.cols()/2.0);
int32_t cEnd = img.cols() - cStart + (templ.cols() % 2);
// Evaluate the sum of intensities
windowMeanA += ( integralImg.block(templ.rows(), templ.cols(), rEnd-rStart+1, cEnd-cStart+1) \
- integralImg.block(templ.rows(), 0, rEnd-rStart+1, cEnd-cStart+1) \
- integralImg.block(0, templ.cols(), rEnd-rStart+1, cEnd-cStart+1) \
+ integralImg.block(0, 0, rEnd-rStart+1, cEnd-cStart+1) )*templSizeRcp;
// Evaluate the sum of intensities (squared)
windowMeanASq += ( integralImgSq.block(templ.rows(), templ.cols(), rEnd-rStart+1, cEnd-cStart+1) \
- integralImgSq.block(templ.rows(), 0, rEnd-rStart+1, cEnd-cStart+1) \
- integralImgSq.block(0, templ.cols(), rEnd-rStart+1, cEnd-cStart+1) \
+ integralImgSq.block(0, 0, rEnd-rStart+1, cEnd-cStart+1) )*templSizeRcp;
// Calculate the standard deviation (squared) of A over the template size window
// Standard deviation = sqrt(windowMeanASq - windowMeanA.square());
scalingCoeffs = (windowMeanASq - windowMeanA.square());
// Amalgamate the element-by-element test/square root with other coefficients scaling for efficiency
for (int32_t r=0; r<scalingCoeffs.rows(); r++)
for (int32_t c=0; c<scalingCoeffs.cols(); c++)
if (scalingCoeffs(r,c) > 0)
scalingCoeffs(r,c) = templSizeRcp/(templateStd*sqrt(scalingCoeffs(r,c)));
else
scalingCoeffs(r,c) = std::numeric_limits<double>::quiet_NaN();
// Decide which 2D correlation approach to use (transform or spatial domain)
if ((templ.rows()*templ.cols()) > TEMPLATE_SIZE_THRESHOLD)
normxcorr = scalingCoeffs*transformXcorr(img, templZMean);
else
normxcorr = scalingCoeffs*spatialXcorr(img, templZMean);
return(normxcorr);
}
// ******************** Minimal MEX wrapper ********************
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
// Check the number of arguments
if (nrhs != 2)
mexErrMsgIdAndTxt("MATLAB:normxcorr2_mex", "Usage: NCC = normxcorr2_mex (TEMPLATE, A);");
// Verify input array sizes
size_t rowsTempl = mxGetM(prhs[0]);
size_t colsTempl = mxGetN(prhs[0]);
size_t rowsA = mxGetM(prhs[1]);
size_t colsA = mxGetN(prhs[1]);
if ((rowsA <= rowsTempl) || (colsA <= colsTempl))
mexErrMsgIdAndTxt("MATLAB:normxcorr2_mex", "Size of TEMPLATE must be less than input matrix A.");
#ifdef _OPENMP
// Required for Eigen versions < 3.3 and for *some* non-compliant C++11 compilers.
// (Warn Eigen our application might be calling it from multiple threads).
initParallel();
#endif
// Perform correlation
ArrayXXd xcorr(rowsA-rowsTempl+1, colsA-colsTempl+1);
xcorr = normxcorr2 (Map<ArrayXXd>(mxGetPr(prhs[0]), rowsTempl, colsTempl), Map<ArrayXXd>(mxGetPr(prhs[1]), rowsA, colsA));
// Return data to MATLAB
plhs[0] = mxCreateDoubleMatrix(rowsA-rowsTempl+1, colsA-colsTempl+1, mxREAL);
Map<ArrayXXd> (mxGetPr(plhs[0]), xcorr.rows(), xcorr.cols()) = xcorr;
return;
}
As per the comments in the header, save the file as normxcorr2_mex.cpp and compile with:
mex -I'[Path to]\eigen-3.3.5' normxcorr2_mex.cpp
for single-threaded operation, or with
mex -I'[Path to]\eigen-3.3.5' CXXFLAGS="$CXXFLAGS -fopenmp" LDFLAGS="$LDFLAGS -fopenmp" normxcorr2_mex.cpp
for multi-threaded openMP support.
The timing and correct operation of the code can be verified with the following MATLAB script:
% testHarness.m
%
% Verify the results of the compiled normxcorr2_mex() function against
% MATLAB's inbuilt normxcorr2() function. This takes aaaaages to run!
%% Simulation/comparison parameters
nRunsA = 50; % Number of trials for accuracy comparison
nRunsT = 30; % Number of repetitions for execution time determination
nStepsT = 50; % Number of input matrix size steps to take in execution time measurement
maxImSize = [1343 1745]; % (Deliberately non-round-number) maximum image size for tests
maxTemplSize = [248 379]; % Maximum image template size
%% Accuracy comparison
sumSqErr = zeros(1, nRunsA);
fprintf(2, 'Accuracy comparison\n');
for nRun = 1:nRunsA
fprintf('Run %d (of %d)\n', nRun, nRunsA);
% Create input images/templates of random content and size
randSizeScale = 0.02 + 0.98*rand(1, 2);
img = rand(round(maxImSize.*randSizeScale));
templ = rand(round(maxTemplSize.*randSizeScale));
% MATLAB's inbuilt function
resultMatPadded = normxcorr2(templ, img);
% Remove unwanted padding
[rTempl, cTempl] = size(templ);
[rImg, cImg] = size(img);
resultMat = resultMatPadded(rTempl:rImg, cTempl:cImg);
% MEX function
resultMex = normxcorr2_mex(templ, img);
% Compare results
sumSqErr(nRun) = sum(sum( (resultMat-resultMex).^2 ));
end
figure;
plot(sumSqErr);
title('Accuracy comparison between MATLAB and MEX normxcorr2');
xlabel('Run #');
ylabel('\Sigma |MATLAB-MEX|^2');
grid on;
%% Timing comparison
avMatT = zeros(1, nStepsT);
avMexT = zeros(1, nStepsT);
fprintf(2, 'Timing comparison\n');
for stp = 1:nStepsT
fprintf('Run %d (of %d)\n', stp, nStepsT);
% Create input images/templates of random content and progressively larger size
img = rand(round(maxImSize*stp/nStepsT));
templ = rand(round(maxTemplSize.*stp/nStepsT));
% MATLAB's function
tStart = tic;
for exec = 1:nRunsT
dummy = normxcorr2(templ, img);
end
avMatT(stp) = toc(tStart)/nRunsT;
% MEX function
tStart = tic;
for exec = 1:nRunsT
dummy = normxcorr2_mex(templ, img);
end
avMexT(stp) = toc(tStart)/nRunsT;
end
figure;
plot((1:nStepsT)/(0.01*nStepsT), avMatT, 'rx-', (1:nStepsT)/(0.01*nStepsT), avMexT, 'bo-');
title('Execution time comparison between MATLAB and MEX normxcorr2');
xlabel('Input array size [% of maximum]');
ylabel('Evaluation time [s]');
legend('MATLAB', 'MEX');
grid on;
The above C++/mex implementation and MATLAB's inbuilt normxcorr2 function agree to a level approaching the limits of the underlying double-precision data type. It turns out that the recent MATLAB normxcorr2 is hard to beat in speed though - even when using openMP - as this comparative timing plot shows when run on my elderly i7-980 CPU.
Unfortunately I don't have an explanation, but I can confirm the issue appears to be with the library and not your implementation. I had issues building the normxcorr2_mex library with the MinGW64 compiler under Windows, which made me wary of possible variations between builds. Builds under both Debian Linux and Windows exhibit the same (incorrect) behaviour compared to MATLAB's built-in normxcorr2 function, as shown in the plot included here.
To assist anyone else building the library under Windows, I had to coerce the C++ compiler with the following command line:
mex -O CXXFLAGS="$CXXFLAGS -std=c++03 -fpermissive" normxcorr2_mex.cpp cv_src/*.cpp
Incidentally, I also found the mex implementation to be an order of magnitude slower than MATLAB's!
I have a 900×1 vector of values (in MATLAB). Every 9 consecutive values should be averaged, without overlap, resulting in a 100×1 vector of values. The problem is that the averaging should be weighted based on a weighting vector of [1 2 1;2 4 2;1 2 1]. Is there any efficient way to do that averaging? I've heard about the conv function in MATLAB; is it helpful?
conv works by sliding a kernel through your data. But in your case, you need the mask to be jumping through your data, so I don't think conv will work for you.
If you want to use existing MATLAB functions, you can do this (I have to assume your weighting matrix has only one dimension):
kernel = [1;2;1;2;4;2;1;2;1];
in_matrix = reshape(in_matrix, 9, 100);
base = sum(kernel);
out_matrix = bsxfun(@times, in_matrix, kernel);
result = sum(out_matrix,1)/base;
I don't know if there is any clever way to speed this up. bsxfun allows singleton expansion, but maybe not dimension reduction.
A faster way would be to use mex. Open a new file in the editor, paste the following code, and save the file as weighted_average.c.
#include "mex.h"
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
double *in_matrix, *kernel, *out_matrix, base;
int niter;
size_t nrows_data, nrows_kernel;
/* Get the number of elements along the first dimension of the input matrices. */
nrows_kernel = mxGetM(prhs[1]);
nrows_data = mxGetM(prhs[0]);
/* Create output matrix*/
plhs[0] = mxCreateDoubleMatrix((mwSize)nrows_data/nrows_kernel,1,mxREAL);
/* Get a pointer to the real data */
in_matrix = mxGetPr(prhs[0]);
kernel = mxGetPr(prhs[1]);
out_matrix = mxGetPr(plhs[0]);
/* Sum the elements in weighting array */
base = 0;
for (int i = 0; i < nrows_kernel; i +=1)
{
base += kernel[i];
}
/* Perform calculation */
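/* Assumes nrows_data is an exact multiple of nrows_kernel (e.g. 900 and 9); also relies
   on out_matrix being zero-initialised by mxCreateDoubleMatrix for the += accumulation. */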
niter = nrows_data/nrows_kernel;
for (int i = 0; i < niter ; i += 1)
{
for (int j = 0; j < nrows_kernel; j += 1)
{
out_matrix[i] += in_matrix[i*nrows_kernel+j]*kernel[j];
}
out_matrix[i] /= base;
}
}
Then, in the command window, type
mex weighted_average.c
To use it:
result = weighted_average(input, kernel);
Note that both input and kernel have to be M×1 matrices. On my computer, the first method took 0.0012 seconds. The second method took 0.00007 seconds. That's an order of magnitude faster than the first method.
I am trying to parallelize a section of my Matlab code using OpenMP in a mex file. The section in the Matlab code that I want to parallelize is:
for i = 1 : n
D(:, i) = CALC(A, B(:,i), C(i));
end
I have written this in order to parallelize it:
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
size_t r,n,i,G;
double *A, *B, *C, *D;
int nthreads;
nthreads = 4;
A = mxGetPr(prhs[0]); /* first input matrix */
B = mxGetPr(prhs[1]); /* second input matrix */
C = mxGetPr(prhs[2]);/* third input matrix */
/* dimensions of input matrices */
r = mxGetN(prhs[0]);
n = mxGetN(prhs[1]);
plhs[0] = mxCreateDoubleMatrix(r,n, mxREAL);
D = mxGetPr(plhs[0]);
G=n/nthreads;
omp_set_num_threads(nthreads);
#pragma omp parallel for schedule (dynamic, G)
{
for i = 1 : n
D(:, i) = CALC(A, B(:,i), C(i));
}
}
CALC is a Matlab function I have written. My challenge is how to use mexCallMATLAB to call the CALC function from the mex file so that it executes in parallel inside my mex file and returns each column of D (i.e. D(:, i)) back to my Matlab code.
Sorry for the lengthy question. Any help I can get on this will be highly appreciated.
The MEX API is not thread-safe, and mexCallMATLAB can only be invoked from MATLAB's main thread, so OpenMP worker threads inside a mex file cannot call back into CALC. You need to use multiple MATLAB processes to be able to run multiple calls in parallel. The easiest way would be to use the Parallel Computing Toolbox and use parfor instead of the for loop.
I'm trying to get the same FFT results with FFTW and Matlab. I use MEX files to check whether FFTW behaves as expected. I think I have everything correct, but:
I get absurd values from FFTW,
I do not get the same results when running the FFTW code several times on the same input signal.
Can someone help me get FFTW right?
--
EDIT 1: I finally figured out what was wrong, BUT...
FFTW is very unstable: I get the right spectrum 1 time out of 5!
How come? Plus, when I get it right, it doesn't have the expected symmetry (which is not a very serious problem, but that's too bad).
--
Here is the Matlab code to compare both:
fs = 2000; % sampling rate
T = 1/fs; % sampling period
t = (0:T:0.1); % time vector
f1 = 50; % frequency in Hertz
omega1 = 2*pi*f1; % angular frequency in radians
phi = 2*pi*0.25; % arbitrary phase offset = 3/4 cycle
x1 = cos(omega1*t + phi); % sinusoidal signal, amplitude = 1
%%
mex -I/usr/local/include -L/usr/local/lib/ -lfftw3 mexfftw.cpp
N=256;
S1=mexfftw(x1,N);
S2=fft(x1,N);
plot(abs(S1)),hold,plot(abs(S2),'r'), legend('FFTW','Matlab')
Here is the MEX file:
/*********************************************************************
* mex -I/usr/local/include -L/usr/local/lib/ -lfftw3 mexfftw.cpp
* Use above to compile !
*
********************************************************************/
#include <matrix.h>
#include <mex.h>
#include "fftw3.h"
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]) {
//declare variables
mxArray *sig_v, *fft_v;
int nfft;
const mwSize *dims;
double *s, *fr, *fi;
int dimx, dimy, numdims;
//associate inputs
sig_v = mxDuplicateArray(prhs[0]);
nfft = static_cast<int>(mxGetScalar(prhs[1]));
//figure out dimensions
dims = mxGetDimensions(prhs[0]);
numdims = mxGetNumberOfDimensions(prhs[0]);
dimy = (int)dims[0]; dimx = (int)dims[1];
//associate outputs
fft_v = plhs[0] = mxCreateDoubleMatrix(nfft, 1, mxCOMPLEX);
//associate pointers
s = mxGetPr(sig_v);
fr = mxGetPr(fft_v);
fi = mxGetPi(fft_v);
//do something
double *in;
fftw_complex *out;
fftw_plan p;
in = (double*) fftw_malloc(sizeof(double) * dimy);
out = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * nfft);
p = fftw_plan_dft_r2c_1d(nfft, s, out, FFTW_ESTIMATE);
fftw_execute(p); /* repeat as needed */
for (int i=0; i<nfft; i++) {
fr[i] = out[i][0];
fi[i] = out[i][1];
}
fftw_destroy_plan(p);
fftw_free(in);
fftw_free(out);
return;
}
Matlab uses the FFTW library to perform its FFTs; on my platform (Mac OS) this leads to issues with the linker, as mex replaces the desired library with Matlab's version of FFTW. To avoid this, statically link to the library using mex -I/usr/local/include /usr/local/lib/libfftw3.a mexfftw.cpp.
The input of fftw_plan_dft_r2c_1d is not destroyed, so you don't need to duplicate the input (note: this is not true for fftw_plan_dft_c2r_1d). The output has size nfft/2+1, as the output of a real FFT is Hermitian (conjugate-symmetric). So, to get the full nfft-point output, use:
for (i=0; i<nfft/2+1; i++) {
fr[i] = out[i][0];
fi[i] = out[i][1];
}
for (i=1; i<nfft/2+1; i++) {
fr[nfft-i] = out[i][0];
fi[nfft-i] = -out[i][1]; /* negative-frequency bins are the complex conjugates */
}
Should "p = fftw_plan_dft_r2c_1d(nfft, s, out, FFTW_ESTIMATE);"
be
"p = fftw_plan_dft_r2c_1d(nfft, in, out, FFTW_ESTIMATE);".
'in' is 16-byte aligned, but 's' may not.
I am not sure whether it will cause a problem. I have similar code using FFTW;
it sometimes gives me the correct result and other times NaN. I also tried to test
my code with Python ctypes, and it shows the same weird behavior.
Finally, I found the post Checking fftw3 with valgrind, and it helped me.
For me, the issue was heap storage reserved by FFTW that is not freed
even after the program terminates.
fftw_cleanup()
resolves my problem. Maybe it can help you too.
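Putting these suggestions together, here is a minimal sketch (an illustration, not the original poster's code) of how the "//do something" portion of the mex file could be arranged. It reuses the variable names s, dimx, dimy, nfft, fr and fi from the question, and copies the signal into a zero-padded, fftw_malloc'd (and therefore aligned) buffer so that exactly nfft real samples are read, which is also what fft(x1, N) does in Matlab:
/* requires #include <cstring> for memset/memcpy */
int nsig = dimy * dimx; /* number of real input samples */
int ncopy = (nsig < nfft) ? nsig : nfft;
double *in = (double*) fftw_malloc(sizeof(double) * nfft);
fftw_complex *out = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * (nfft/2 + 1));
memset(in, 0, sizeof(double) * nfft); /* zero-pad up to nfft samples */
memcpy(in, s, sizeof(double) * ncopy); /* copy the signal into the aligned buffer */
fftw_plan p = fftw_plan_dft_r2c_1d(nfft, in, out, FFTW_ESTIMATE);
fftw_execute(p);
for (int i = 0; i < nfft/2 + 1; i++) { /* non-negative frequencies */
fr[i] = out[i][0];
fi[i] = out[i][1];
}
for (int i = 1; i < nfft/2 + 1; i++) { /* negative frequencies: complex conjugates */
fr[nfft - i] = out[i][0];
fi[nfft - i] = -out[i][1];
}
fftw_destroy_plan(p);
fftw_free(in);
fftw_free(out);
fftw_cleanup();
With the plan executed on an aligned buffer that actually contains nfft samples, the run-to-run instability (which comes from reading past the end of the 201-sample input when nfft = 256) should disappear, and mirroring the conjugate half reproduces the symmetry of Matlab's fft output.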