Porting signal windowing code from MATLAB to Java

This is part of the code for a spectral subtraction algorithm; I'm trying to optimize it for Android. Please help me.
This is the MATLAB code:
function Seg=segment(signal,W,SP,Window)
% SEGMENT chops a signal into overlapping windowed segments
% A = SEGMENT(X,W,SP,WIN) returns a matrix whose columns are the segmented
% and windowed frames of the one-dimensional input signal X. W is the
% number of samples per window, default value W=256. SP is the shift
% percentage, default value SP=0.4. WIN is the window that is multiplied by
% each segment, and its length should be W. The default window is a Hamming
% window.
% 06-Sep-04
% Esfandiar Zavarehei

if nargin<3
    SP=.4;
end
if nargin<2
    W=256;
end
if nargin<4
    Window=hamming(W);
end

Window=Window(:); % make it a column vector

L=length(signal);
SP=fix(W.*SP);      % shift in samples
N=fix((L-W)/SP+1);  % number of segments

Index=(repmat(1:W,N,1)+repmat((0:(N-1))'*SP,1,W))';
hw=repmat(Window,1,N);
Seg=signal(Index).*hw;
And this is our Java code for this function:
public class MatrixAndSegments
{
    public int numberOfSegments;
    public double[][] res;

    public MatrixAndSegments(int numberOfSegments, double[][] res)
    {
        this.numberOfSegments = numberOfSegments;
        this.res = res;
    }
}

public MatrixAndSegments segment(double[] signal_in, int samplesPerWindow, double shiftPercentage, double[] window)
{
    // default shiftPercentage = 0.4
    // default samplesPerWindow = 256 // W
    // default window = hamming

    int L = signal_in.length;
    shiftPercentage = fix(samplesPerWindow * shiftPercentage); // SP
    int numberOfSegments = fix((L - samplesPerWindow) / shiftPercentage + 1); // N

    double[][] reprowMatrix = reprowtrans(samplesPerWindow, numberOfSegments);
    double[][] repcolMatrix = repcoltrans(numberOfSegments, shiftPercentage, samplesPerWindow);

    // Index=(repmat(1:W,N,1)+repmat((0:(N-1))'*SP,1,W))';
    double[][] index = new double[samplesPerWindow + 1][numberOfSegments + 1];
    for (int x = 1; x < samplesPerWindow + 1; x++)
    {
        for (int y = 1; y < numberOfSegments + 1; y++) // numberOfSegments was 3
        {
            index[x][y] = reprowMatrix[x][y] + repcolMatrix[x][y];
        }
    }

    // hamming window
    double[] hammingWindow = this.HammingWindow(samplesPerWindow);
    double[][] HW = repvector(hammingWindow, numberOfSegments);

    double[][] seg = new double[samplesPerWindow][numberOfSegments];
    for (int y = 1; y < numberOfSegments + 1; y++)
    {
        for (int x = 1; x < samplesPerWindow + 1; x++)
        {
            seg[x - 1][y - 1] = signal_in[(int) index[x][y] - 1] * HW[x - 1][y - 1];
        }
    }

    MatrixAndSegments Matrixseg = new MatrixAndSegments(numberOfSegments, seg);
    return Matrixseg;
}

public int fix(double val) {
    if (val < 0) {
        return (int) Math.ceil(val);
    }
    return (int) Math.floor(val);
}

public double[][] repvector(double[] vec, int replications)
{
    double[][] result = new double[vec.length][replications];
    for (int x = 0; x < vec.length; x++) {
        for (int y = 0; y < replications; y++) {
            result[x][y] = vec[x];
        }
    }
    return result;
}

public double[][] reprowtrans(int end, int replications)
{
    double[][] result = new double[end + 1][replications + 1];
    for (int x = 1; x <= end; x++) {
        for (int y = 1; y <= replications; y++) {
            result[x][y] = x;
        }
    }
    return result;
}

public double[][] repcoltrans(int end, double multiplier, int replications)
{
    double[][] result = new double[replications + 1][end + 1];
    for (int x = 1; x <= replications; x++) {
        for (int y = 1; y <= end; y++) {
            result[x][y] = (y - 1) * multiplier;
        }
    }
    return result;
}

public double[] HammingWindow(int size)
{
    double[] window = new double[size];
    for (int i = 0; i < size; i++)
    {
        window[i] = 0.54 - 0.46 * Math.cos(2.0 * Math.PI * i / (size - 1));
    }
    return window;
}

"Porting" Matlab code statement by statement to Java is a bad approach.
Data is rarely manipulated in Matlab using loops and addressing individual elements (because the Matlab interpreter/VM is rather slow), but rather through calls to block processing functions (which have been carefully written and optimized). This leads to a very idiosyncratic programming style in which repmat, reshape, find, fancy indexing et al. are used to do operations which would be much more naturally expressed through Java loops.
For example, to multiply each column of a matrix A by a vector v, you will write in matlab:
A = diag(v) * A
or
A = repmat(v', 1, size(A, 2)) .* A
This solution:
for i = 1:size(A, 2),
    A(:, i) = A(:, i) .* v';
end;
is inefficient.
But it would be terribly foolish to try to do the same thing in Java and invoke a matrix product or to build a matrix with repeated copies of v. Instead, just do:
for (int i = 0; i < rows; i++) {
    for (int j = 0; j < columns; j++) {
        a[i][j] *= v[i];
    }
}
I suggest you try to understand what this Matlab function is actually doing, instead of focusing on how it is doing it, and reimplement it from scratch in Java, forgetting all of the Matlab implementation except the specification given in the comments. Half of the code you have written is useless, indeed. Actually, it seems to me that this function wouldn't be needed at all, and what it does could be efficiently integrated into the caller's code.
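To make that concrete, here is a minimal from-scratch sketch of what segment actually does (chop the signal into N overlapping frames of W samples, advancing by fix(W*SP) samples each time, and multiply each frame by the window). This is my own illustration, not the original code: frames are stored as rows rather than Matlab's columns, the names are mine, and positive arguments are assumed so that plain int truncation behaves like fix:

public static double[][] segmentFrames(double[] signal, int w, double shiftPercentage, double[] window) {
    int shift = (int) (w * shiftPercentage);   // SP = fix(W*SP), assuming positive arguments
    int n = (signal.length - w) / shift + 1;   // N = fix((L-W)/SP + 1)
    double[][] frames = new double[n][w];      // frames[s][k] corresponds to Matlab's Seg(k+1, s+1)
    for (int s = 0; s < n; s++) {
        int offset = s * shift;                // start index of frame s
        for (int k = 0; k < w; k++) {
            frames[s][k] = signal[offset + k] * window[k];
        }
    }
    return frames;
}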

Related

How to generate non-repeating Random Numbers in Unity

I am trying to create a simple Bingo game and want to make sure the numbers are not repeating on the bingo card. I have a random number generator, but for some reason the code I'm using doesn't work, as the same numbers constantly repeat. Could somebody please take a look at my code below and either tell me what I need to fix or fix the code for me?
public Grid(int width, int height, float cellSize)
{
    this.width = width;
    this.height = height;
    this.cellSize = cellSize;
    gridArray = new int[width, height];
    debugTextArray = new TextMesh[width, height];

    for (int x = 0; x < gridArray.GetLength(0); x++)
    {
        for (int y = 0; y < gridArray.GetLength(1); y++)
        {
            debugTextArray[x, y] = UtilsClass.CreateWorldText(gridArray[x, y].ToString(), null, GetWorldPosition(x, y) + new Vector3(cellSize, cellSize) * .5f, 20, Color.white, TextAnchor.MiddleCenter);
            Debug.DrawLine(GetWorldPosition(x, y), GetWorldPosition(x, y + 1), Color.white, 100f);
            Debug.DrawLine(GetWorldPosition(x, y), GetWorldPosition(x + 1, y), Color.white, 100f);
        }
    }
    Debug.DrawLine(GetWorldPosition(0, height), GetWorldPosition(width, height), Color.white, 100f);
    Debug.DrawLine(GetWorldPosition(width, 0), GetWorldPosition(width, height), Color.white, 100f);

    for (int x = 0; x <= 4; x++)
    {
        RandomValue(0, x);
        RandomValue(1, x);
        RandomValue(2, x);
        RandomValue(3, x);
        RandomValue(4, x);
    }
}

private Vector3 GetWorldPosition(int x, int y)
{
    return new Vector3(x, y) * cellSize;
}

public void RandomValue(int x, int y)
{
    if (x >= 0 && y >= 0 && x < width && y < height)
    {
        list = new List<int>(new int[Lenght]);
        for (int j = 0; j < 25; j++)
        {
            Rand = UnityEngine.Random.Range(1, 50);
            while (list.Contains(Rand))
            {
                Rand = UnityEngine.Random.Range(1, 50);
            }
            list[j] = Rand;
            gridArray[x, y] = list[j];
        }
        debugTextArray[x, y].text = gridArray[x, y].ToString();
        debugTextArray[2, 2].text = "Free";
    }
}
Basically your concept in RandomValue() is correct, but the problem is that it only checks within a single call, so you have to bring the uniqueness check up to the Grid() level. You need a list containing all approved values, and check Contains() in Grid().
But in fact you can do it all in one go.
Make sure width*height is not larger than maxValue (note that the int version of Random.Range excludes its upper bound, so Random.Range(1, maxValue) only ever produces maxValue - 1 distinct values).
Dictionary<Vector2Int, int> CreateBingoGrid(int width, int height, int maxValue)
{
    var grid = new Dictionary<Vector2Int, int>();
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            var num = Random.Range(1, maxValue);
            while (grid.ContainsValue(num))
            {
                num = Random.Range(1, maxValue);
            }
            grid.Add(new Vector2Int(x, y), num);
        }
    }
    return grid;
}
As mentioned in the comments on your question, it's probably easiest to just shuffle the numbers in the range [1,50] and then take the first 25 or however many you want.
The reason your code isn't working properly and you see a lot of repeats is that you're calling the RandomValue() function in several separate calls, and the list variable used to check whether a value is already on the chart lives inside that function. That means each call only ever checks the values it has generated itself, in this case a single row.
Also, if you make a list that you know will always be the same size, you should use an array instead. Lists are for when you want the size to be adjustable.
Solution 1:
A very simple way to generate an array with the numbers 1-50 would be to do this:
//Initialize array with the numbers 1-50
int[] numbers = new int[50];
for (int i = 0; i < numbers.Length; i++)
{
    numbers[i] = i + 1;
}
//Shuffle the array (Fisher-Yates)
for (int i = 0; i < numbers.Length; i++)
{
    int tmp = numbers[i];
    int r = Random.Range(i, numbers.Length);
    numbers[i] = numbers[r];
    numbers[r] = tmp;
}
//Get first 'n' numbers
int[] result = new int[n];
Array.Copy(numbers, 0, result, 0, n);
return result;
I'm not sure if it's the most efficient way, but it would work.
Solution 2:
To change your code to check against the entire list, I would change this section:
for (int x = 0; x <= 4; x++)
{
    RandomValue(0, x);
    RandomValue(1, x);
    RandomValue(2, x);
    RandomValue(3, x);
    RandomValue(4, x);
}
To something like this:
List<int> values = new List<int>();
for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        int r = RandomValue(1, 50);
        while (values.Contains(r))
        {
            r = RandomValue(1, 50);
        }
        values.Add(r);
        gridArray[x, y] = r;
    }
}

int RandomValue(int min, int max) {
    return UnityEngine.Random.Range(min, max);
}
Hope this helps!

Same C code, different results with the TI v5.2.5 and gcc 5.4.1 C99 compilers

I am using an MSP432P401R to compute the FFT of SAR ADC samples. I did the FFT in MATLAB and got the same results as an online C compiler, but the Code Composer Studio IDE gives different output than the MATLAB results. I thought it could be a compiler issue, so I re-read the code, made some changes and tried again, but I am still not getting results like MATLAB's.
The online C compiler was gcc 5.4.1 (c99); in CCS, the TI v5.2.5 compiler is used.
float m;
float ur, ui, sr, si,tr, ti;
long double Temp_A[256],ArrayA[256]={2676,2840,2838,2832,2826,2818,2814,2808,
2804,2798,2790,2784,2778,2770,2764,2758,2752,2746,2740,2734,
2726,2720,2714,2706,2700,2692,2686,2680,2674,2668,2660,2654,
2646,2642,2634,2624,2618,2612,2604,2598,2590,2584,2576,2570,
2562,2556,2550,2542,2536,2530,2522,2512,2508,2498,2490,2484,
2478,2470,2462,2454,2448,2442,2432,2426,2420,2414,2404,2398,
2390,2382,2374,2368,2360,2352,2346,2338,2330,2322,2314,2306,
2300,2294,2286,2278,2272,2262,2258,2250,2238,2234,2228,2220,
2208,2202,2192,2186,2178,2170,2164,2156,2150,2142,2134,2126,
2116,2110,2104,2096,2088,2078,2070,2062,2054,2046,2040,2034,
2026,2018,2010,2002,1994,1986,1978,1970,1962,1954,1946,1936,
1930,1922,1914,1908,1902,1894,1886,1876,1868,1860,1852,1846,
1838,1830,1822,1814,1804,1796,1790,1784,1776,1768,1760,1754,
1746,1738,1728,1720,1714,1708,1698,1692,1684,1674,1668,1656,
1656,1644,1640,1628,1624,1612,1610,1598,1596,1584,1580,1570,
1564,1554,1546,1540,1532,1526,1520,1512,1504,1496,1490,1482,
1474,1468,1462,1454,1446,1438,1432,1424,1420,1410,1404,1398,
1392,1384,1376,1370,1364,1356,1348,1342,1336,1328,1322,1316,
1308,1300,1294,1286,1280,1276,1270,1262,1254,1248,1242,1236,
1230,1222,1216,1210,1206,1198,1192,1188,1178,1172,1168,1162,
1154,1148,1144,1138,1132,1126,1120,1114,1108,1102,1096,1090,
1084,1080,1074,1068,1062,1058,1052,1048},ArrayA_IMX[256]={0};
unsigned int jm1,i;
unsigned int ip,l;
void main(void)
{
    WDT_A->CTL = WDT_A_CTL_PW | WDT_A_CTL_HOLD;
    VCORE();
    CLK();
    P1DIR |= BIT5;   // CLK--AD7352 OUTPUT DIRECTION
    P1DIR |= BIT7;   // CHIP SELECT--AD7352 OUTPUT DIRECTION
    P5DIR &= ~BIT0;  // SDATAA--AD7352 INPUT DIRECTION P5.0
    P5DIR &= ~BIT2;  // SDATAB--AD7352 INPUT DIRECTION P5.2
    while (1)
    {
        bit_reversal(ArrayA);
        fft(ArrayA, ArrayA_IMX);
    }
}
void bit_reversal(long double REX[])
{
    int i, i2, n, m;
    int k, j;
    long double tx;  // tx must be long double, or the swapped samples are truncated to int
    n = 1;
    m = 8;
    for (i = 0; i < m; i++)
    {
        n *= 2;
    }
    i2 = n >> 1;
    j = 0;
    for (i = 0; i < n - 1; i++)
    {
        if (i < j)
        {
            tx = REX[i];
            //ty = IMX[i];
            REX[i] = REX[j];
            //IMX[i] = IMX[j];
            REX[j] = tx;
            //IMX[j] = ty;
        }
        k = i2;
        while (k <= j)
        {
            j -= k;
            k >>= 1;
        }
        j += k;
    }
}
void fft(long double REX[], long double IMX[])
{
    unsigned int N, nm1, nd2, le, le2, j;  // not declared in the posted snippet; declared here so the listing compiles

    N = 256;
    nm1 = N - 1;
    nd2 = N / 2;
    m = log10l(N) / log10l(2);
    j = nd2;
    for (l = 1; l <= m; l++)
    {
        le = powl(2, l);
        le2 = le / 2;
        ur = 1;
        ui = 0;
        // Calculate sine and cosine values
        sr = cosl(M_PI / le2);
        si = -sinl(M_PI / le2);
        // Loop for each sub DFT
        for (j = 1; j <= le2; j++)
        {
            jm1 = j - 1;
            // Loop for each butterfly
            for (i = jm1; i <= nm1; i += le)
            {
                ip = i + le2;
                tr = REX[ip] * ur - IMX[ip] * ui;
                ti = REX[ip] * ui + IMX[ip] * ur;
                REX[ip] = REX[i] - tr;
                IMX[ip] = IMX[i] - ti;
                REX[i] = REX[i] + tr;
                IMX[i] = IMX[i] + ti;
            }
            tr = ur;
            ur = tr * sr - ui * si;
            ui = tr * si + ui * sr;
        }
    }
}

Best Way to Add 3 Numbers (or 4, or N) in Java - Kahan Sums?

I found a completely different answer to this question; the whole original question makes no sense anymore. However, the answer may be useful, so I will modify it a bit...
I want to sum up three double numbers, say a, b, and c, in the most numerically stable way possible.
I think using a Kahan Sum would be the way to go.
However, a strange thought occurred to me: would it make sense to:
First sum up a, b, and c and remember the (absolute value of the) compensation.
Then sum up a, c, b.
If the (absolute value of the) compensation of the second sum is smaller, use this sum instead.
Proceed similarly with b, a, c and the other permutations of the numbers.
Return the sum with the smallest associated absolute compensation.
Would I get a more "stable" addition of three numbers this way? Or does the order of the numbers in the sum have no (usable) impact on the compensation left at the end of the summation? By (usable) I mean to ask whether the compensation value itself is stable enough to contain information that I can use.
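(For reference, plain Kahan summation in Java looks like the sketch below; this is the textbook algorithm, not code from the question. The final value of c is the leftover compensation discussed above.)

static double kahanSum(double... values) {
    double sum = 0.0;
    double c = 0.0;               // running compensation: low-order bits lost so far
    for (double v : values) {
        double y = v - c;         // remove the previously lost bits from the next summand
        double t = sum + y;       // the low-order bits of y may be lost here...
        c = (t - sum) - y;        // ...and are recovered here (algebraically zero)
        sum = t;
    }
    return sum;
}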
(I am using the Java programming language, although I think this does not matter here.)
Many thanks,
Thomas.
I think I found a much more reliable way to solve the "Add 3" (or "Add 4" or "Add N") numbers problem.
First of all, I implemented my idea from the original post. It resulted in quite a lot of code which, initially, seemed to work. However, it failed in the following case: adding Double.MAX_VALUE, 1, and -Double.MAX_VALUE. The result was 0.
@njuffa's comments inspired me to dig somewhat deeper, and at http://code.activestate.com/recipes/393090-binary-floating-point-summation-accurate-to-full-p/, I found that in Python, this problem has been solved quite nicely. To see the full code, I downloaded the Python source (Python 3.5.1rc1 - 2015-11-23) from https://www.python.org/getit/source/, where we can find the following method (under the PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2):
static PyObject*
math_fsum(PyObject *self, PyObject *seq)
{
    PyObject *item, *iter, *sum = NULL;
    Py_ssize_t i, j, n = 0, m = NUM_PARTIALS;
    double x, y, t, ps[NUM_PARTIALS], *p = ps;
    double xsave, special_sum = 0.0, inf_sum = 0.0;
    volatile double hi, yr, lo;

    iter = PyObject_GetIter(seq);
    if (iter == NULL)
        return NULL;

    PyFPE_START_PROTECT("fsum", Py_DECREF(iter); return NULL)

    for(;;) {                               /* for x in iterable */
        assert(0 <= n && n <= m);
        assert((m == NUM_PARTIALS && p == ps) ||
               (m >  NUM_PARTIALS && p != NULL));

        item = PyIter_Next(iter);
        if (item == NULL) {
            if (PyErr_Occurred())
                goto _fsum_error;
            break;
        }
        x = PyFloat_AsDouble(item);
        Py_DECREF(item);
        if (PyErr_Occurred())
            goto _fsum_error;

        xsave = x;
        for (i = j = 0; j < n; j++) {       /* for y in partials */
            y = p[j];
            if (fabs(x) < fabs(y)) {
                t = x; x = y; y = t;
            }
            hi = x + y;
            yr = hi - x;
            lo = y - yr;
            if (lo != 0.0)
                p[i++] = lo;
            x = hi;
        }

        n = i;                              /* ps[i:] = [x] */
        if (x != 0.0) {
            if (! Py_IS_FINITE(x)) {
                /* a nonfinite x could arise either as
                   a result of intermediate overflow, or
                   as a result of a nan or inf in the
                   summands */
                if (Py_IS_FINITE(xsave)) {
                    PyErr_SetString(PyExc_OverflowError,
                                    "intermediate overflow in fsum");
                    goto _fsum_error;
                }
                if (Py_IS_INFINITY(xsave))
                    inf_sum += xsave;
                special_sum += xsave;
                /* reset partials */
                n = 0;
            }
            else if (n >= m && _fsum_realloc(&p, n, ps, &m))
                goto _fsum_error;
            else
                p[n++] = x;
        }
    }

    if (special_sum != 0.0) {
        if (Py_IS_NAN(inf_sum))
            PyErr_SetString(PyExc_ValueError,
                            "-inf + inf in fsum");
        else
            sum = PyFloat_FromDouble(special_sum);
        goto _fsum_error;
    }

    hi = 0.0;
    if (n > 0) {
        hi = p[--n];
        /* sum_exact(ps, hi) from the top, stop when the sum becomes
           inexact. */
        while (n > 0) {
            x = hi;
            y = p[--n];
            assert(fabs(y) < fabs(x));
            hi = x + y;
            yr = hi - x;
            lo = y - yr;
            if (lo != 0.0)
                break;
        }
        /* Make half-even rounding work across multiple partials.
           Needed so that sum([1e-16, 1, 1e16]) will round-up the last
           digit to two instead of down to zero (the 1e-16 makes the 1
           slightly closer to two). With a potential 1 ULP rounding
           error fixed-up, math.fsum() can guarantee commutativity. */
        if (n > 0 && ((lo < 0.0 && p[n-1] < 0.0) ||
                      (lo > 0.0 && p[n-1] > 0.0))) {
            y = lo * 2.0;
            x = hi + y;
            yr = x - hi;
            if (y == yr)
                hi = x;
        }
    }
    sum = PyFloat_FromDouble(hi);

_fsum_error:
    PyFPE_END_PROTECT(hi)
    Py_DECREF(iter);
    if (p != ps)
        PyMem_Free(p);
    return sum;
}
This summation method is different from Kahan's method: it uses a variable number of compensation variables. When adding the i-th number, at most i additional compensation variables (stored in the array p) get used. This means that if I want to add 3 numbers, I may need 3 additional variables; for 4 numbers, I may need 4 additional variables. Since the number of used variables may increase from n to n+1 only after the n-th summand is loaded, I can translate the above code to Java as follows:
/**
 * Compute the exact sum of the values in the given array
 * {@code summands} while destroying the contents of said array.
 *
 * @param summands
 *          the summand array – will be summed up and destroyed
 * @return the accurate sum of the elements of {@code summands}
 */
private static final double __destructiveSum(final double[] summands) {
    int i, j, n;
    double x, y, t, xsave, hi, yr, lo;
    boolean ninf, pinf;

    n = 0;
    lo = 0d;
    ninf = pinf = false;

    for (double summand : summands) {
        xsave = summand;
        for (i = j = 0; j < n; j++) {
            y = summands[j];
            if (Math.abs(summand) < Math.abs(y)) {
                t = summand;
                summand = y;
                y = t;
            }
            hi = summand + y;
            yr = hi - summand;
            lo = y - yr;
            if (lo != 0.0) {
                summands[i++] = lo;
            }
            summand = hi;
        }

        n = i; /* ps[i:] = [summand] */
        if (summand != 0d) {
            if ((summand > Double.NEGATIVE_INFINITY)
                    && (summand < Double.POSITIVE_INFINITY)) {
                summands[n++] = summand; // all finite, good, continue
            } else {
                if (xsave <= Double.NEGATIVE_INFINITY) {
                    if (pinf) {
                        return Double.NaN;
                    }
                    ninf = true;
                } else {
                    if (xsave >= Double.POSITIVE_INFINITY) {
                        if (ninf) {
                            return Double.NaN;
                        }
                        pinf = true;
                    } else {
                        return Double.NaN;
                    }
                }
                n = 0;
            }
        }
    }

    if (pinf) {
        return Double.POSITIVE_INFINITY;
    }
    if (ninf) {
        return Double.NEGATIVE_INFINITY;
    }

    hi = 0d;
    if (n > 0) {
        hi = summands[--n];
        /*
         * sum_exact(ps, hi) from the top, stop when the sum becomes inexact.
         */
        while (n > 0) {
            x = hi;
            y = summands[--n];
            hi = x + y;
            yr = hi - x;
            lo = y - yr;
            if (lo != 0d) {
                break;
            }
        }
        /*
         * Make half-even rounding work across multiple partials. Needed so
         * that sum([1e-16, 1, 1e16]) will round-up the last digit to two
         * instead of down to zero (the 1e-16 makes the 1 slightly closer to
         * two). With a potential 1 ULP rounding error fixed-up, math.fsum()
         * can guarantee commutativity.
         */
        if ((n > 0) && (((lo < 0d) && (summands[n - 1] < 0d)) || //
                ((lo > 0d) && (summands[n - 1] > 0d)))) {
            y = lo * 2d;
            x = hi + y;
            yr = x - hi;
            if (y == yr) {
                hi = x;
            }
        }
    }
    return hi;
}
This function will take the array summands and add up the elements while simultaneously using it to store the compensation variables. Since we load the summand at index i before the array element at said index may become used for compensation, this will work.
Since the array will be small if the number of variables to add is small and won't escape the scope of our method, I think there is a decent chance that it will be allocated directly on the stack by the JIT, which may make the code quite fast.
I admit that I did not fully understand why the authors of the original code handled infinities, overflows, and NaNs the way they did. Here my code deviates from the original. (I hope I did not mess it up.)
Either way, I can now sum up 3, 4, or n double numbers by doing:
public static final double add3(final double x0, final double x1,
        final double x2) {
    return __destructiveSum(new double[] { x0, x1, x2 });
}

public static final double add4(final double x0, final double x1,
        final double x2, final double x3) {
    return __destructiveSum(new double[] { x0, x1, x2, x3 });
}
If I want to sum up 3 or 4 long numbers and obtain the precise result as a double, I will have to deal with the fact that doubles can only represent longs in -9007199254740992..9007199254740992L exactly. But this can easily be done by splitting each long into two parts:
public static final long add3(final long x0, final long x1,
        final long x2) {
    double lx;
    return (long) __destructiveSum(new double[] { //
        lx = x0, //
        (x0 - ((long) lx)), //
        lx = x1, //
        (x1 - ((long) lx)), //
        lx = x2, //
        (x2 - ((long) lx)), //
    });
}

public static final long add4(final long x0, final long x1,
        final long x2, final long x3) {
    double lx;
    return (long) __destructiveSum(new double[] { //
        lx = x0, //
        (x0 - ((long) lx)), //
        lx = x1, //
        (x1 - ((long) lx)), //
        lx = x2, //
        (x2 - ((long) lx)), //
        lx = x3, //
        (x3 - ((long) lx)), //
    });
}
I think this should be about right. At least I can now add Double.MAX_VALUE, 1, and -Double.MAX_VALUE and get 1 as the result.
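A quick sanity check of that claim (my own test snippet, assuming the add3 method above is in scope):

public static void main(String[] args) {
    // Naive left-to-right summation absorbs the 1 and yields 0.0:
    System.out.println(Double.MAX_VALUE + 1d + (-Double.MAX_VALUE)); // 0.0
    // The compensated sum keeps it:
    System.out.println(add3(Double.MAX_VALUE, 1d, -Double.MAX_VALUE)); // 1.0
}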

imregionalmax Matlab function's equivalent in OpenCV

I have an image of connected components (filled circles). If I want to segment them I can use the watershed algorithm. I prefer writing my own function for watershed instead of using the built-in one in OpenCV. How do I find the regional maxima of objects using OpenCV?
I wrote a function myself. My results were quite similar to MATLAB's, although not exact. This function is implemented for CV_32F, but it can easily be modified for other types.
I mark all the points that are not part of a minimum region by checking all the neighbors. The remaining regions are either minima, maxima or areas of inflection.
I use connected components to label each region.
I check each region for any point belonging to a maximum; if yes, then I push that label into a vector.
Finally, I sort the bad labels, erase all duplicates and then mark all the points in the output as not minima.
All that remains are the regions of minima.
Here is the code:
// output is a binary image
// 1: not a min region
// 0: part of a min region
// 2: not sure if min or not
// 3: uninitialized
void imregionalmin(cv::Mat& img, cv::Mat& out_img)
{
    // pad the border of img with 1 and copy to img_pad
    cv::Mat img_pad;
    cv::copyMakeBorder(img, img_pad, 1, 1, 1, 1, IPL_BORDER_CONSTANT, 1);

    // initialize output to 3, uninitialized
    out_img = cv::Mat::ones(img.rows, img.cols, CV_8U) + 2;

    // initialize pointers to matrices
    float* in = (float *)(img_pad.data);
    uchar* out = (uchar *)(out_img.data);

    // size of matrix
    int in_size = img_pad.cols * img_pad.rows;
    int out_size = img.cols * img.rows;

    int x, y;
    for (int i = 0; i < out_size; i++) {
        // find x, y indexes
        y = i % img.cols;
        x = i / img.cols;
        neighborCheck(in, out, i, x, y, img_pad.cols); // all regions are either min or max
    }

    cv::Mat label;
    cv::connectedComponents(out_img, label);

    int* lab = (int *)(label.data);
    in = (float *)(img.data);
    in_size = img.cols * img.rows;

    std::vector<int> bad_labels;
    for (int i = 0; i < out_size; i++) {
        // find x, y indexes
        y = i % img.cols;
        x = i / img.cols;
        if (lab[i] != 0) {
            if (neighborCleanup(in, out, i, x, y, img.rows, img.cols) == 1) {
                bad_labels.push_back(lab[i]);
            }
        }
    }

    std::sort(bad_labels.begin(), bad_labels.end());
    bad_labels.erase(std::unique(bad_labels.begin(), bad_labels.end()), bad_labels.end());

    for (int i = 0; i < out_size; ++i) {
        if (lab[i] != 0) {
            if (std::find(bad_labels.begin(), bad_labels.end(), lab[i]) != bad_labels.end()) {
                out[i] = 0;
            }
        }
    }
}

int inline neighborCleanup(float* in, uchar* out, int i, int x, int y, int x_lim, int y_lim)
{
    int index;
    for (int xx = x - 1; xx < x + 2; ++xx) {
        for (int yy = y - 1; yy < y + 2; ++yy) {
            if (((xx == x) && (yy == y)) || xx < 0 || yy < 0 || xx >= x_lim || yy >= y_lim)
                continue;
            index = xx * y_lim + yy;
            if ((in[i] == in[index]) && (out[index] == 0))
                return 1;
        }
    }
    return 0;
}

void inline neighborCheck(float* in, uchar* out, int i, int x, int y, int x_lim)
{
    int indexes[8], cur_index;
    indexes[0] = x * x_lim + y;
    indexes[1] = x * x_lim + y + 1;
    indexes[2] = x * x_lim + y + 2;
    indexes[3] = (x + 1) * x_lim + y + 2;
    indexes[4] = (x + 2) * x_lim + y + 2;
    indexes[5] = (x + 2) * x_lim + y + 1;
    indexes[6] = (x + 2) * x_lim + y;
    indexes[7] = (x + 1) * x_lim + y;
    cur_index = (x + 1) * x_lim + y + 1;

    for (int t = 0; t < 8; t++) {
        if (in[indexes[t]] < in[cur_index]) {
            out[i] = 0;
            break;
        }
    }

    if (out[i] == 3)
        out[i] = 1;
}
The following listing is a function similar to Matlab's "imregionalmax". It looks for at most nLocMax local maxima above threshold, where the found local maxima are at least minDistBtwLocMax pixels apart. It returns the actual number of local maxima found. Notice that it uses OpenCV's minMaxLoc to find global maxima. It is "opencv-self-contained" except for the (easy to implement) function vdist, which computes the (Euclidean) distance between points (r,c) and (row,col).
input is a one-channel CV_32F matrix, and locations is an nLocMax (rows) by 2 (columns) CV_32S matrix.
int imregionalmax(Mat input, int nLocMax, float threshold, float minDistBtwLocMax, Mat locations)
{
    Mat scratch = input.clone();
    int nFoundLocMax = 0;
    for (int i = 0; i < nLocMax; i++) {
        Point location;
        double maxVal;
        minMaxLoc(scratch, NULL, &maxVal, NULL, &location);
        if (maxVal > threshold) {
            nFoundLocMax += 1;
            int row = location.y;
            int col = location.x;
            locations.at<int>(i, 0) = row;
            locations.at<int>(i, 1) = col;
            int r0 = (row - minDistBtwLocMax > -1 ? row - minDistBtwLocMax : 0);
            int r1 = (row + minDistBtwLocMax < scratch.rows ? row + minDistBtwLocMax : scratch.rows - 1);
            int c0 = (col - minDistBtwLocMax > -1 ? col - minDistBtwLocMax : 0);
            int c1 = (col + minDistBtwLocMax < scratch.cols ? col + minDistBtwLocMax : scratch.cols - 1);
            for (int r = r0; r <= r1; r++) {
                for (int c = c0; c <= c1; c++) {
                    if (vdist(Point2DMake(r, c), Point2DMake(row, col)) <= minDistBtwLocMax) {
                        scratch.at<float>(r, c) = 0.0;
                    }
                }
            }
        } else {
            break;
        }
    }
    return nFoundLocMax;
}
I do not know if it is what you want, but in my answer to this post, I gave some code to find local maxima (peaks) in a grayscale image (resulting from a distance transform).
The approach relies on subtracting the original image from the dilated image and finding the zero pixels: grayscale dilation replaces each pixel by the maximum of its neighborhood, so the difference is zero exactly where a pixel is at least as large as all of its neighbors. A sketch of this follows.
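For illustration, here is a minimal sketch of that dilate-and-compare trick, written with the OpenCV Java bindings since this thread started from Java/Android; the kernel size and image type are my assumptions, not part of the original answer:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

// img: one-channel grayscale image (e.g. CV_32F output of a distance transform)
Mat dilated = new Mat();
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3));
Imgproc.dilate(img, dilated, kernel);           // every pixel becomes the max of its 3x3 neighborhood

Mat peaks = new Mat();
Core.compare(img, dilated, peaks, Core.CMP_GE); // img >= dilated only where the pixel equals its local max
// peaks is now 255 at local maxima (dilated - img == 0 there) and 0 elsewhere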
I hope it helps,
Good luck
I had the same problem some time ago, and the solution was to reimplement the imregionalmax algorithm in OpenCV/C++. It is not that complicated, because you can find the C++ source code of the function in the Matlab distribution (somewhere in the toolbox directory). All you have to do is read it carefully and understand the algorithm described there. Then rewrite it, or remove the Matlab-specific checks, and you'll have it.

Teaching a Neural Net: Bipolar XOR

I'm trying to teach a neural net with 2 inputs, 4 hidden nodes (all in the same layer) and 1 output node. The binary representation works fine, but I have problems with the bipolar one. I can't figure out why, but the total error will sometimes converge to the same number around 2.xx. My sigmoid is 2/(1 + exp(-x)) - 1. Perhaps I'm applying the sigmoid in the wrong place. For example, to calculate the output error, should I be comparing the sigmoided output with the expected value, or with the sigmoided expected value?
I was following this website: http://galaxy.agh.edu.pl/~vlsi/AI/backp_t_en/backprop.html , but they use different functions than I was instructed to use. Even when I did try to implement their functions, I still ran into the same problem. Either way I get stuck about half the time at the same number (a different number for different implementations). Please tell me if I have made a mistake in my code somewhere or if this is normal (I don't see how it could be). Momentum is set to 0. Is this a common 0-momentum problem? The error functions we are supposed to be using are:
if ui is an output unit:
Error(i) = (Ci - ui) * f'(Si)
if ui is a hidden unit:
Error(i) = Error(Output) * weight(i to output) * f'(Si)
public double sigmoid(double x) {
    double fBipolar, fBinary, temp;
    temp = (1 + Math.exp(-x));
    fBipolar = (2 / temp) - 1;
    fBinary = 1 / temp;
    if (bipolar) {
        return fBipolar;
    } else {
        return fBinary;
    }
}

// Initialize the weights to random values.
private void initializeWeights(double neg, double pos) {
    for (int i = 0; i < numInputs + 1; i++) {
        for (int j = 0; j < numHiddenNeurons; j++) {
            inputWeights[i][j] = Math.random() - pos;
            if (inputWeights[i][j] < neg || inputWeights[i][j] > pos) {
                print("ERROR ");
                print(inputWeights[i][j]);
            }
        }
    }
    for (int i = 0; i < numHiddenNeurons + 1; i++) {
        hiddenWeights[i] = Math.random() - pos;
        if (hiddenWeights[i] < neg || hiddenWeights[i] > pos) {
            print("ERROR ");
            print(hiddenWeights[i]);
        }
    }
}

// Computes output of the NN without training. I.e. a forward pass
public double outputFor(double[] argInputVector) {
    for (int i = 0; i < numInputs; i++) {
        inputs[i] = argInputVector[i];
    }
    double weightedSum = 0;
    for (int i = 0; i < numHiddenNeurons; i++) {
        weightedSum = 0;
        for (int j = 0; j < numInputs + 1; j++) {
            weightedSum += inputWeights[j][i] * inputs[j];
        }
        hiddenActivation[i] = sigmoid(weightedSum);
    }
    weightedSum = 0;
    for (int j = 0; j < numHiddenNeurons + 1; j++) {
        weightedSum += (hiddenActivation[j] * hiddenWeights[j]);
    }
    return sigmoid(weightedSum);
}

// Computes the derivative of f
public static double fPrime(double u) {
    double fBipolar, fBinary;
    fBipolar = 0.5 * (1 - Math.pow(u, 2));
    fBinary = u * (1 - u);
    if (bipolar) {
        return fBipolar;
    } else {
        return fBinary;
    }
}

// This method is used to update the weights of the neural net.
public double train(double[] argInputVector, double argTargetOutput) {
    double output = outputFor(argInputVector);
    double lastDelta;

    double outputError = (argTargetOutput - output) * fPrime(output);
    if (outputError != 0) {
        for (int i = 0; i < numHiddenNeurons + 1; i++) {
            hiddenError[i] = hiddenWeights[i] * outputError * fPrime(hiddenActivation[i]);
            deltaHiddenWeights[i] = learningRate * outputError * hiddenActivation[i] + (momentum * lastDelta);
            hiddenWeights[i] += deltaHiddenWeights[i];
        }
        for (int in = 0; in < numInputs + 1; in++) {
            for (int hid = 0; hid < numHiddenNeurons; hid++) {
                lastDelta = deltaInputWeights[in][hid];
                deltaInputWeights[in][hid] = learningRate * hiddenError[hid] * inputs[in] + (momentum * lastDelta);
                inputWeights[in][hid] += deltaInputWeights[in][hid];
            }
        }
    }
    return 0.5 * (argTargetOutput - output) * (argTargetOutput - output);
}
General coding comments:
initializeWeights(-1.0, 1.0);
may not actually get the initial values you were expecting.
initializeWeights should probably have:
inputWeights[i][j] = Math.random() * (pos - neg) + neg;
// ...
hiddenWeights[i] = (Math.random() * (pos - neg)) + neg;
instead of:
Math.random() - pos;
so that this works:
initializeWeights(0.0, 1.0);
and gives you initial values between 0.0 and 1.0 rather than between -1.0 and 0.0.
lastDelta is used before it is assigned a value:
deltaHiddenWeights[i] = learningRate * outputError * hiddenActivation[i] + (momentum * lastDelta);
I'm not sure if the + 1 on numInputs + 1 and numHiddenNeurons + 1 is necessary.
Remember to watch out for integer division: 5/2 = 2, not 2.5!
Use 5.0/2.0 instead. In general, add the .0 in your code when the output should be a double.
Most importantly, have you trained the NeuralNet long enough?
Try running it with numInputs = 2, numHiddenNeurons = 4, learningRate = 0.9, and train for 1,000 or 10,000 times.
Using numHiddenNeurons = 2 it sometimes gets "stuck" when trying to solve the XOR problem.
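For instance, a minimal training loop over the four bipolar XOR patterns could look like the sketch below, where net stands for an instance of the question's network class (the name is hypothetical):

// Bipolar XOR training set: inputs and targets in {-1, +1}.
double[][] patterns = { { -1, -1 }, { -1, 1 }, { 1, -1 }, { 1, 1 } };
double[] targets = { -1, 1, 1, -1 };

for (int epoch = 0; epoch < 10000; epoch++) {
    double totalError = 0;
    for (int p = 0; p < patterns.length; p++) {
        totalError += net.train(patterns[p], targets[p]); // train() returns the squared error
    }
    if (totalError < 0.01) {
        break; // converged
    }
}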
See also XOR problem - simulation