Problems with rsGetElementAt_uchar4 - renderscript

I'm trying to implement a median filter in RenderScript, but the code does not work reliably. Reading elements from a row other than the current one, rsGetElementAt_uchar4(inputAlloc, x, y + a), causes errors. What is the problem? Is there an example of such a filter using RenderScript?
#pragma version(1)
#pragma rs java_package_name(a.myapplication)
#pragma rs_fp_relaxed
rs_allocation inputAlloc;
int bWidht, bHeight;
static uchar4 arrpix[9];
static uchar4 buff;
uchar4 __attribute__((kernel)) median(uchar4 in, uint32_t x, uint32_t y)
{
uchar4 arrpix[9];
uchar4 buff;
if((x<bWidht) && (y<bHeight)){
arrpix[0] = rsGetElementAt_uchar4(inputAlloc, x -1 , y - 1);
arrpix[1] = rsGetElementAt_uchar4(inputAlloc, x , y - 1);
arrpix[2] = rsGetElementAt_uchar4(inputAlloc, x +1 , y - 1);
arrpix[3] = rsGetElementAt_uchar4(inputAlloc, x -1 , y );
arrpix[4] = in;
arrpix[5] = rsGetElementAt_uchar4(inputAlloc, x +1 , y );
arrpix[6] = rsGetElementAt_uchar4(inputAlloc, x -1 , y + 1);
arrpix[7] = rsGetElementAt_uchar4(inputAlloc, x , y + 1);
arrpix[8] = rsGetElementAt_uchar4(inputAlloc, x +1 , y + 1);
for(int i=0; i<4; i++)
for(int i=0; i<=8; i++){
if(arrpix[i].r>arrpix[i+1].r){
buff.r = arrpix[i].r; arrpix[i].r = arrpix[i+1].r;
arrpix[i+1].r = buff.r;}
if(arrpix[i].g>arrpix[i+1].g){
buff.g = arrpix[i].g; arrpix[i].g = arrpix[i+1].g;
arrpix[i+1].g = buff.g;}
if(arrpix[i].b>arrpix[i+1].b){
buff.b = arrpix[i].b; arrpix[i].b = arrpix[i+1].b;
arrpix[i+1].b = buff.b;}
}
}
return arrpix[4];
}

You need to check x > 0 and y > 0 as well, because at the left and top edges x - 1 and y - 1 become -1 (0 - 1 = -1, and with unsigned coordinates that wraps around), so those reads fall outside the allocation.
The sorting loops at the bottom don't look completely correct either: both the outer and the inner loop use i as the counter, and the inner loop reads arrpix[i + 1] with i going up to 8, which runs past the end of the array. Can you fix the spacing, and did you mean to use a different variable in one of the loops? A corrected sketch is shown below.
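For reference, here is a minimal sketch of how the kernel could look with both fixes applied. The border handling (simply returning the input pixel on the outermost row/column) is my own assumption, and the globals keep the names from the question:

#pragma version(1)
#pragma rs java_package_name(a.myapplication)
#pragma rs_fp_relaxed

rs_allocation inputAlloc;
int bWidht, bHeight;

uchar4 __attribute__((kernel)) median(uchar4 in, uint32_t x, uint32_t y)
{
    // Skip the outer border so x-1, y-1, x+1 and y+1 always stay inside the allocation
    if (x == 0 || y == 0 || x >= (uint32_t)(bWidht - 1) || y >= (uint32_t)(bHeight - 1))
        return in;

    uchar4 p[9];
    p[0] = rsGetElementAt_uchar4(inputAlloc, x - 1, y - 1);
    p[1] = rsGetElementAt_uchar4(inputAlloc, x,     y - 1);
    p[2] = rsGetElementAt_uchar4(inputAlloc, x + 1, y - 1);
    p[3] = rsGetElementAt_uchar4(inputAlloc, x - 1, y);
    p[4] = in;
    p[5] = rsGetElementAt_uchar4(inputAlloc, x + 1, y);
    p[6] = rsGetElementAt_uchar4(inputAlloc, x - 1, y + 1);
    p[7] = rsGetElementAt_uchar4(inputAlloc, x,     y + 1);
    p[8] = rsGetElementAt_uchar4(inputAlloc, x + 1, y + 1);

    // Bubble-sort each colour channel independently; the median ends up in p[4].
    // Note the inner index never exceeds 7, so p[i + 1] stays inside the array.
    for (int pass = 0; pass < 8; pass++) {
        for (int i = 0; i < 8 - pass; i++) {
            if (p[i].r > p[i + 1].r) { uchar t = p[i].r; p[i].r = p[i + 1].r; p[i + 1].r = t; }
            if (p[i].g > p[i + 1].g) { uchar t = p[i].g; p[i].g = p[i + 1].g; p[i + 1].g = t; }
            if (p[i].b > p[i + 1].b) { uchar t = p[i].b; p[i].b = p[i + 1].b; p[i + 1].b = t; }
        }
    }
    // Alpha is left untouched, exactly as in the original code.
    return p[4];
}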

Related

How to generate non-repeating Random Numbers in Unity

I am trying to create a simple Bingo game and want to make sure the numbers are not repeating on the bingo card. I have a random number generator, but for some reason the code I'm using doesn't work as the same numbers will constantly repeat. Could somebody please take a look at my code below and either tell me what I need to fix or fix the code for me?
public Grid(int width, int height, float cellSize)
{
this.width = width;
this.height = height;
this.cellSize = cellSize;
gridArray = new int[width, height];
debugTextArray = new TextMesh[width, height];
for (int x = 0; x < gridArray.GetLength(0); x++)
{
for (int y = 0; y < gridArray.GetLength(1); y++)
{
debugTextArray[x, y] = UtilsClass.CreateWorldText(gridArray[x, y].ToString(), null, GetWorldPosition(x, y) + new Vector3(cellSize, cellSize) * .5f, 20, Color.white, TextAnchor.MiddleCenter);
Debug.DrawLine(GetWorldPosition(x, y), GetWorldPosition(x, y + 1), Color.white, 100f);
Debug.DrawLine(GetWorldPosition(x, y), GetWorldPosition(x + 1, y), Color.white, 100f);
}
}
Debug.DrawLine(GetWorldPosition(0, height), GetWorldPosition(width, height), Color.white, 100f);
Debug.DrawLine(GetWorldPosition(width, 0), GetWorldPosition(width, height), Color.white, 100f);
for (int x = 0; x <= 4; x++)
{
RandomValue(0, x);
RandomValue(1, x);
RandomValue(2, x);
RandomValue(3, x);
RandomValue(4, x);
}
}
private Vector3 GetWorldPosition(int x, int y)
{
return new Vector3(x, y) * cellSize;
}
public void RandomValue(int x, int y)
{
if (x >= 0 && y >= 0 && x < width && y < height)
{
list = new List<int>(new int[Lenght]);
for (int j = 0; j < 25; j++)
{
Rand = UnityEngine.Random.Range(1, 50);
while (list.Contains(Rand))
{
Rand = UnityEngine.Random.Range(1, 50);
}
list[j] = Rand;
gridArray[x, y] = list[j];
}
debugTextArray[x, y].text = gridArray[x, y].ToString();
debugTextArray[2, 2].text = "Free";
}
}
Basically your concept in the RandomValue() function is correct, but the problem is that it only checks within the same column, so you have to bring that concept up to the Grid() level: keep one list of all approved values and check Contains() against it in Grid().
In fact you can do it all in one go.
Make sure width*height is not larger than maxValue.
Dictionary<Vector2Int, int> CreateBingoGrid(int width, int height, int maxValue)
{
var grid = new Dictionary<Vector2Int, int>();
for (int x = 0; x < width; x++)
{
for (int y = 0; y < height; y++)
{
var num = Random.Range(1, maxValue);
while (grid.ContainsValue(num))
{
num = Random.Range(1, maxValue);
}
grid.Add(new Vector2Int(x, y), num);
}
}
return grid;
}
As mentioned in the comment on your question, it's probably easiest to just shuffle the numbers in the range [1,50] and then take the first 25, or however many you want.
The reason your code isn't working properly and you see a lot of repeats is that you call the RandomValue() function multiple separate times, and the list you compare against (to check whether a value is already on the card) lives inside that function. It will therefore only ever check the values it generated in that one call, in this case only one row.
Also, if you make a list that you know will always be the same size, you should use an array instead. Lists are for when you want the size to be adjustable.
Solution 1:
A very simple way to generate an array with the numbers 1-50 would be to do this:
//Initialize Array
int[] numbers = new int[50];
for (int i = 0; i < numbers.Length; i++)
{
numbers[i] = i + 1;
}
//Shuffle Array
for (int i = 0; i < numbers.Length; i++ )
{
int tmp = numbers[i];
int r = Random.Range(i, numbers.Length);
numbers[i] = numbers[r];
numbers[r] = tmp;
}
//Get first 'n' numbers
int[] result = new int[n];
Array.Copy(numbers, 0, result, 0, n);
return result;
I'm not sure if it's the most efficient way, but it would work.
Solution 2:
To change your code to check against the entire list, I would change this section:
for (int x = 0; x <= 4; x++)
{
RandomValue(0, x);
RandomValue(1, x);
RandomValue(2, x);
RandomValue(3, x);
RandomValue(4, x);
}
To something like this:
List<int> values = new List<int>();
for (int y = 0; y < height; y++)
{
for (int x = 0; x < width; x++)
{
int r = RandomValue(1, 50);
while (values.Contains(r))
{
r = RandomValue(1, 50);
}
values.Add(r);
gridArray[x, y] = r;
}
}
int RandomValue(int min, int max) {
return UnityEngine.Random.Range(min, max);
}
Hope this helps!

Same C code different results TIv5.2.5 and gcc 5.4.1 c99 compiler

I am using an MSP432P401R to compute the FFT of SAR ADC samples. I did the FFT in MATLAB and got the same results as with an online C compiler, but the Code Composer Studio IDE gives different output than MATLAB. I thought it could be a compiler issue, so I re-read the code, made some changes and tried again, but I still don't get results like MATLAB's.
The online C compiler was gcc 5.4.1 (C99).
In CCS the TI v5.2.5 compiler is used.
float m;
float ur, ui, sr, si,tr, ti;
long double Temp_A[256],ArrayA[256]={2676,2840,2838,2832,2826,2818,2814,2808,
2804,2798,2790,2784,2778,2770,2764,2758,2752,2746,2740,2734,
2726,2720,2714,2706,2700,2692,2686,2680,2674,2668,2660,2654,
2646,2642,2634,2624,2618,2612,2604,2598,2590,2584,2576,2570,
2562,2556,2550,2542,2536,2530,2522,2512,2508,2498,2490,2484,
2478,2470,2462,2454,2448,2442,2432,2426,2420,2414,2404,2398,
2390,2382,2374,2368,2360,2352,2346,2338,2330,2322,2314,2306,
2300,2294,2286,2278,2272,2262,2258,2250,2238,2234,2228,2220,
2208,2202,2192,2186,2178,2170,2164,2156,2150,2142,2134,2126,
2116,2110,2104,2096,2088,2078,2070,2062,2054,2046,2040,2034,
2026,2018,2010,2002,1994,1986,1978,1970,1962,1954,1946,1936,
1930,1922,1914,1908,1902,1894,1886,1876,1868,1860,1852,1846,
1838,1830,1822,1814,1804,1796,1790,1784,1776,1768,1760,1754,
1746,1738,1728,1720,1714,1708,1698,1692,1684,1674,1668,1656,
1656,1644,1640,1628,1624,1612,1610,1598,1596,1584,1580,1570,
1564,1554,1546,1540,1532,1526,1520,1512,1504,1496,1490,1482,
1474,1468,1462,1454,1446,1438,1432,1424,1420,1410,1404,1398,
1392,1384,1376,1370,1364,1356,1348,1342,1336,1328,1322,1316,
1308,1300,1294,1286,1280,1276,1270,1262,1254,1248,1242,1236,
1230,1222,1216,1210,1206,1198,1192,1188,1178,1172,1168,1162,
1154,1148,1144,1138,1132,1126,1120,1114,1108,1102,1096,1090,
1084,1080,1074,1068,1062,1058,1052,1048},ArrayA_IMX[256]={0};
unsigned int jm1,i;
unsigned int ip,l;
void main(void)
{
WDT_A->CTL = WDT_A_CTL_PW |WDT_A_CTL_HOLD;
VCORE();
CLK();
P1DIR |= BIT5; //CLK--AD7352 OUTPUT DIRECTION
P1DIR |= BIT7; //CHIP SELECT--AD7352 OUTPUT DIRECTION
P5DIR &= ~BIT0; //SDATAA--AD7352 INPUT DIRECTION P5.0
P5DIR &= ~BIT2; //SDATAB--AD7352 INPUT DIRECTION P5.2
while(1)
{
bit_reversal(ArrayA);
fft(ArrayA,ArrayA_IMX);
}
}
void bit_reversal(long double REX[])
{
int i,i2,n,m;
int tx,k,j;
n = 1;
m=8;
for (i=0;i<m;i++)
{
n *= 2;
}
i2 = n >> 1;
j = 0;
for (i=0;i<n-1;i++)
{
if (i < j)
{
tx = REX[i];
//ty = IMX[i];
REX[i] = REX[j];
//IMX[i] = IMX[j];
REX[j] = tx;
//IMX[j] = ty;
}
k = i2;
while (k <= j)
{
j -= k;
k >>= 1;
}
j += k;
}
}
void fft(long double REX[],long double IMX[])
{
N = 256;
nm1 = N - 1;
nd2 = N / 2;
m = log10l(N) / log10l(2);
j = nd2;
for (l = 1; l <= m; l++)
{
le = powl(2, l);
le2 = le / 2;
ur = 1;
ui = 0;
// Calculate sine and cosine values
sr = cosl(M_PI/le2);
si = -sinl(M_PI/le2);
// Loop for each sub DFT
for (j = 1; j <= le2; j++)
{
jm1 = j - 1;
// Loop for each butterfly
for (i = jm1; i <= nm1; i += le)
{
ip = i + le2;
tr = REX[ip]*ur - IMX[ip]*ui;
ti = REX[ip]*ui + IMX[ip]*ur;
REX[ip] = REX[i] - tr;
IMX[ip] = IMX[i] - ti;
REX[i] = REX[i] + tr;
IMX[i] = IMX[i] + ti;
}
tr = ur;
ur = tr*sr - ui*si;
ui = tr*si + ui*sr;
}
}
}

How to smooth exterior color gradient in Mandelbrot fractal generation?

This is my progress with a mandelbrot fractal generation:
Where the differences between neighbouring colors are small, it has a nice "blending" effect. However, as the distance between colors becomes larger, you can see the separation between colors very clearly. I was wondering: how would I achieve a blending effect without using something like bicubic interpolation as post-processing?
Attached is the code I have to generate the fractal:
public static void drawFractal()
{
Complex Z;
Complex C;
double x;
double y;
// The min and max values should be between -2 and +2
double minX = -2.0; // use -2 for the full-range fractal image
double minY = -2.0; // use -2 for the full-range fractal image
double maxX = 2.0; // use 2 for the full-range fractal image
double maxY = 2.0; // use 2 for the full-range fractal image
double xStepSize = ( maxX - minX ) / width;
double yStepSize = ( maxY - minY ) / height;
int maxIterations = 100;
int maxColors = 0xFF0000;
// for each pixel on the screen
for( x = minX; x < maxX; x = x + xStepSize)
{
for ( y = minY; y < maxY; y = y + yStepSize )
{
C = new Complex( x, y );
Z = new Complex( 0, 0 );
int iter = getIterValue( Z, C, 0, maxIterations );
int myX = (int) ( ( x - minX ) / xStepSize );
int myY = (int) ( ( y - minY ) / yStepSize );
if ( iter < maxIterations )
{
myPixel[ myY * width + myX ] = iter * ( maxColors / maxIterations ) / 50;
}
}
}
}

How to make the blackboard text appear clearer using MATLAB?

What sequence of filters should I apply if I want the final image to be clearer, with a digital kind of look? I mean only two distinct colors: one for the board and one for the chalk writing.
When it comes to identifying text in images, you are better off using the Stroke Width Transform (SWT).
Here's a small result I obtained on your image (the basic transform plus connected components, without filtering).
My mex implementation, based on code from here:
#include "mex.h"
#include <vector>
#include <map>
#include <set>
#include <algorithm>
#include <math.h>
using namespace std;
#define PI 3.14159265
struct Point2d {
int x;
int y;
float SWT;
};
struct Point2dFloat {
float x;
float y;
};
struct Ray {
Point2d p;
Point2d q;
std::vector<Point2d> points;
};
void strokeWidthTransform(const float * edgeImage,
const float * gradientX,
const float * gradientY,
bool dark_on_light,
float * SWTImage,
int h, int w,
std::vector<Ray> & rays) {
// First pass
float prec = .05f;
for( int row = 0; row < h; row++ ){
const float* ptr = edgeImage + row*w;
for ( int col = 0; col < w; col++ ){
if (*ptr > 0) {
Ray r;
Point2d p;
p.x = col;
p.y = row;
r.p = p;
std::vector<Point2d> points;
points.push_back(p);
float curX = (float)col + 0.5f;
float curY = (float)row + 0.5f;
int curPixX = col;
int curPixY = row;
float G_x = gradientX[ col + row*w ];
float G_y = gradientY[ col + row*w ];
// normalize gradient
float mag = sqrt( (G_x * G_x) + (G_y * G_y) );
if (dark_on_light){
G_x = -G_x/mag;
G_y = -G_y/mag;
} else {
G_x = G_x/mag;
G_y = G_y/mag;
}
while (true) {
curX += G_x*prec;
curY += G_y*prec;
if ((int)(floor(curX)) != curPixX || (int)(floor(curY)) != curPixY) {
curPixX = (int)(floor(curX));
curPixY = (int)(floor(curY));
// check if pixel is outside boundary of image
if (curPixX < 0 || (curPixX >= w) || curPixY < 0 || (curPixY >= h)) {
break;
}
Point2d pnew;
pnew.x = curPixX;
pnew.y = curPixY;
points.push_back(pnew);
if ( edgeImage[ curPixY*w+ curPixX ] > 0) {
r.q = pnew;
// dot product
float G_xt = gradientX[ curPixY*w + curPixX ];
float G_yt = gradientY[ curPixY*w + curPixX ];
mag = sqrt( (G_xt * G_xt) + (G_yt * G_yt) );
if (dark_on_light){
G_xt = -G_xt/mag;
G_yt = -G_yt/mag;
} else {
G_xt = G_xt/mag;
G_yt = G_yt/mag;
}
if (acos(G_x * -G_xt + G_y * -G_yt) < PI/2.0 ) {
float length = sqrt( ((float)r.q.x - (float)r.p.x)*((float)r.q.x - (float)r.p.x) + ((float)r.q.y - (float)r.p.y)*((float)r.q.y - (float)r.p.y));
for (std::vector<Point2d>::iterator pit = points.begin(); pit != points.end(); pit++) {
float* pSWT = SWTImage + w * pit->y + pit->x;
if (*pSWT < 0) {
*pSWT = length;
} else {
*pSWT = std::min(length, *pSWT);
}
}
r.points = points;
rays.push_back(r);
}
break;
}
}
}
}
ptr++;
}
}
}
bool Point2dSort(const Point2d &lhs, const Point2d &rhs) {
return lhs.SWT < rhs.SWT;
}
void SWTMedianFilter(float * SWTImage, int h, int w,
std::vector<Ray> & rays, float maxWidth = -1 ) {
for (std::vector<Ray>::iterator rit = rays.begin(); rit != rays.end(); rit++) {
for (std::vector<Point2d>::iterator pit = rit->points.begin(); pit != rit->points.end(); pit++) {
pit->SWT = SWTImage[ w*pit->y + pit->x ];
}
std::sort(rit->points.begin(), rit->points.end(), &Point2dSort);
//std::nth_element( rit->points.begin(), rit->points.end(), rit->points.size()/2, &Point2dSort );
float median = (rit->points[rit->points.size()/2]).SWT;
if ( maxWidth > 0 && median >= maxWidth ) {
median = -1;
}
for (std::vector<Point2d>::iterator pit = rit->points.begin(); pit != rit->points.end(); pit++) {
SWTImage[ w*pit->y + pit->x ] = std::min(pit->SWT, median);
}
}
}
typedef std::vector< std::set<int> > graph_t; // graph as a list of neighbors per node
void connComp( const graph_t& g, std::vector<int>& c, int i, int l ) {
// starting from node i, label this connected component with label l
if ( i < 0 || i >= (int)g.size() ) {
return;
}
std::vector< int > stack;
// push i
stack.push_back(i);
c[i] = l;
while ( ! stack.empty() ) {
// pop
i = stack.back();
stack.pop_back();
// go over all neighbors
for ( std::set<int>::const_iterator it = g[i].begin(); it != g[i].end(); it++ ) {
if ( c[*it] < 0 ) {
stack.push_back( *it );
c[ *it ] = l;
}
}
}
}
int findNextToLabel( const graph_t& g, const vector<int>& c ) {
for ( int i = 0 ; i < c.size(); i++ ) {
if ( c[i] < 0 ) {
return i;
}
}
return c.size();
}
int connected_components(const graph_t& g, vector<int>& c) {
// check for empty graph!
if ( g.empty() ) {
return 0;
}
int i = 0;
int num_conn = 0;
do {
connComp( g, c, i, num_conn );
num_conn++;
i = findNextToLabel( g, c );
} while ( i < g.size() );
return num_conn;
}
std::vector< std::vector<Point2d> >
findLegallyConnectedComponents(const float* SWTImage, int h, int w,
std::vector<Ray> & rays) {
std::map<int, int> Map;
std::map<int, Point2d> revmap;
std::vector<std::vector<Point2d> > components; // empty
int num_vertices = 0, idx = 0;
graph_t g;
// Number vertices for graph. Associate each point with number
for( int row = 0; row < h; row++ ){
for (int col = 0; col < w; col++ ){
idx = col + w * row;
if (SWTImage[idx] > 0) {
Map[idx] = num_vertices;
Point2d p;
p.x = col;
p.y = row;
revmap[num_vertices] = p;
num_vertices++;
std::set<int> empty;
g.push_back(empty);
}
}
}
if ( g.empty() ) {
return components; // nothing to do with an empty graph...
}
for( int row = 0; row < h; row++ ){
for (int col = 0; col < w; col++ ){
idx = col + w * row;
if ( SWTImage[idx] > 0) {
// check pixel to the right, right-down, down, left-down
int this_pixel = Map[idx];
float thisVal = SWTImage[idx];
if (col+1 < w) {
float right = SWTImage[ w*row + col + 1 ];
if (right > 0 && (thisVal/right <= 3.0 || right/thisVal <= 3.0)) {
g[this_pixel].insert( Map[ w*row + col + 1 ] );
g[ Map[ w*row + col + 1 ] ].insert( this_pixel );
//boost::add_edge(this_pixel, map.at(row * SWTImage->width + col + 1), g);
}
}
if (row+1 < h) {
if (col+1 < w) {
float right_down = SWTImage[ w*(row+1) + col + 1 ];
if (right_down > 0 && (thisVal/right_down <= 3.0 || right_down/thisVal <= 3.0)) {
g[ this_pixel ].insert( Map[ w*(row+1) + col + 1 ] );
g[ Map[ w*(row+1) + col + 1 ] ].insert(this_pixel);
// boost::add_edge(this_pixel, map.at((row+1) * SWTImage->width + col + 1), g);
}
}
float down = SWTImage[ w*(row+1) + col ];
if (down > 0 && (thisVal/down <= 3.0 || down/thisVal <= 3.0)) {
g[ this_pixel ].insert( Map[ w*(row+1) + col ] );
g[ Map[ w*(row+1) + col ] ].insert( this_pixel );
//boost::add_edge(this_pixel, map.at((row+1) * SWTImage->width + col), g);
}
if (col-1 >= 0) {
float left_down = SWTImage[ w*(row+1) + col - 1 ];
if (left_down > 0 && (thisVal/left_down <= 3.0 || left_down/thisVal <= 3.0)) {
g[ this_pixel ].insert( Map[ w*(row+1) + col - 1 ] );
g[ Map[ w*(row+1) + col - 1 ] ].insert( this_pixel );
//boost::add_edge(this_pixel, map.at((row+1) * SWTImage->width + col - 1), g);
}
}
}
}
}
}
std::vector<int> c(num_vertices, -1);
int num_comp = connected_components(g, c);
components.reserve(num_comp);
//std::cout << "Before filtering, " << num_comp << " components and " << num_vertices << " vertices" << std::endl;
for (int j = 0; j < num_comp; j++) {
std::vector<Point2d> tmp;
components.push_back( tmp );
}
for (int j = 0; j < num_vertices; j++) {
Point2d p = revmap[j];
(components[c[j]]).push_back(p);
}
return components;
}
enum {
EIN = 0,
GXIN,
GYIN,
DOLFIN,
MAXWIN,
NIN };
void mexFunction( int nout, mxArray* pout[], int nin, const mxArray* pin[] ) {
//
// make sure images are input transposed so that they are arranged row-major in memory
//
mxAssert( nin == NIN, "wrong number of inputs" );
mxAssert( nout > 1, "only one output" );
int h = mxGetN( pin[EIN] ); // inputs are transposed!
int w = mxGetM( pin[EIN] );
mxAssert( mxIsClass( pin[EIN], mxSINGLE_CLASS ) && h == mxGetN( pin[EIN] ) && w == mxGetM( pin[EIN] ), "edge map incorrect");
mxAssert( mxIsClass( pin[GXIN], mxSINGLE_CLASS ) && h == mxGetN( pin[GXIN] ) && w == mxGetM( pin[GXIN] ), "edge map incorrect");
mxAssert( mxIsClass( pin[GYIN], mxSINGLE_CLASS ) && h == mxGetN( pin[GYIN] ) && w == mxGetM( pin[GYIN] ), "edge map incorrect");
const float * edgeImage = (float*) mxGetData( pin[EIN] );
const float * gradientX = (float*) mxGetData( pin[GXIN] );
const float * gradientY = (float*) mxGetData( pin[GYIN] );
bool dark_on_light = mxGetScalar( pin[DOLFIN] ) != 0 ;
float maxWidth = mxGetScalar( pin[MAXWIN] );
// allocate output
pout[0] = mxCreateNumericMatrix( w, h, mxSINGLE_CLASS, mxREAL );
float * SWTImage = (float*) mxGetData( pout[0] );
// set SWT to -1
for ( int i = 0 ; i < w*h; i++ ) {
SWTImage[i] = -1;
}
std::vector<Ray> rays;
strokeWidthTransform ( edgeImage, gradientX, gradientY, dark_on_light, SWTImage, h, w, rays );
SWTMedianFilter ( SWTImage, h, w, rays, maxWidth );
// connected components
if ( nout > 1 ) {
// Calculate legally connect components from SWT and gradient image.
// return type is a vector of vectors, where each outer vector is a component and
// the inner vector contains the (y,x) of each pixel in that component.
std::vector<std::vector<Point2d> > components = findLegallyConnectedComponents(SWTImage, h, w, rays);
pout[1] = mxCreateNumericMatrix( w, h, mxSINGLE_CLASS, mxREAL );
float* pComp = (float*) mxGetData( pout[1] );
for ( int i = 0 ; i < w*h; i++ ) {
pComp[i] = 0;
}
for ( int ci = 0 ; ci < components.size(); ci++ ) {
for ( std::vector<Point2d>::iterator it = components[ci].begin() ; it != components[ci].end(); it++ ) {
pComp[ w * it->y + it->x ] = ci + 1;
}
}
}
}
Matlab function calling stroke-width-transform (SWT) mex-file:
function [swt swtcc] = SWT( img, dol, maxWidth )
if size( img, 3 ) == 3
img = rgb2gray(img);
end
img = im2single(img);
edgeMap = single( edge( img, 'canny', .15 ) );
img = imfilter( img, fspecial('gauss',[5 5], 0.3*(2.5-1)+.8) );
gx = imfilter( img, fspecial('prewitt')' ); %//'
gy = imfilter( img, fspecial('prewitt') );
gx = single(medfilt2( gx, [3 3] ));
gy = single(medfilt2( gy, [3 3] ));
[swt swtcc] = swt_mex( edgeMap.', gx.', gy.', dol, maxWidth ); %//'
swt = swt'; %//'
swtcc = double(swtcc'); %//'
Try this:
I = imread('...'); % Your board image
ThreshConstant = 1; % Try to vary this constant.
bw = im2bw(I , ThreshConstant * graythresh(I)); % Black-white image
SegmentedImg = I.*repmat(uint8(bw), [1 1 3]);
Just do imshow(bw); and you will normally get a well-segmented two-color image.
If the threshold is too strong, try values of ThreshConstant between 0.5 and 1.5.
Or you could try this:
im = imread('http://i.imgur.com/uJIXp13.jpg'); %the image posted above
im2=rgb2gray(im);
maxp=uint16(max(max(im2)));
minp=uint16(min(min(im2)));
bw=im2bw(im2,(double(minp+maxp))/(2*255)); % the threshold as Alexandre said, but using the min/max intensity as the threshold
bw=~bw; % invert so you get black letters on a white background instead of white letters on a black one :P
imshow(bw)
This should be the result.
Keep in mind that you can use this technique adaptively with a window, computing the threshold of each window, for best results; a small sketch of that follows.
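As a rough illustration of that windowed idea, here is a sketch (my own, not tuned for your exact image): it thresholds each block with its own Otsu level instead of one global level. The 64-by-64 block size is an arbitrary choice, and blockproc needs the Image Processing Toolbox.

im = imread('http://i.imgur.com/uJIXp13.jpg'); % the image posted above
im2 = rgb2gray(im);
% threshold every block with its own Otsu level instead of a single global one
fun = @(block) ~im2bw(block.data, graythresh(block.data));
bw = blockproc(im2, [64 64], fun);
imshow(bw)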

imregionalmax matlab function's equivalent in opencv

I have an image of connected components (filled circles). If I want to segment them I can use the watershed algorithm. I prefer writing my own watershed function instead of using the inbuilt one in OpenCV. How do I find the regional maxima of objects using OpenCV?
I wrote a function myself. My results were quite similar to MATLAB, although not exact. This function is implemented for CV_32F but it can easily be modified for other types.
I mark all the points that are not part of a minimum region by checking all the neighbors. The remaining regions are either minima, maxima or areas of inflection.
I use connected components to label each region.
I check each region for any point belonging to a maxima, if yes then I push that label into a vector.
Finally I sort the bad labels, erase all duplicates and then mark all the points belonging to those labels in the output as not minima.
All that remains are the regions of minima.
Here is the code:
// output is a binary image
// 1: not a min region
// 0: part of a min region
// 2: not sure if min or not
// 3: uninitialized
void imregionalmin(cv::Mat& img, cv::Mat& out_img)
{
// pad the border of img with 1 and copy to img_pad
cv::Mat img_pad;
cv::copyMakeBorder(img, img_pad, 1, 1, 1, 1, IPL_BORDER_CONSTANT, 1);
// initialize binary output to 2, unknown if min
out_img = cv::Mat::ones(img.rows, img.cols, CV_8U)+2;
// initialize pointers to matrices
float* in = (float *)(img_pad.data);
uchar* out = (uchar *)(out_img.data);
// size of matrix
int in_size = img_pad.cols*img_pad.rows;
int out_size = img.cols*img.rows;
int x, y;
for (int i = 0; i < out_size; i++) {
// find x, y indexes
y = i % img.cols;
x = i / img.cols;
neighborCheck(in, out, i, x, y, img_pad.cols); // all regions are either min or max
}
cv::Mat label;
cv::connectedComponents(out_img, label);
int* lab = (int *)(label.data);
in = (float *)(img.data);
in_size = img.cols*img.rows;
std::vector<int> bad_labels;
for (int i = 0; i < out_size; i++) {
// find x, y indexes
y = i % img.cols;
x = i / img.cols;
if (lab[i] != 0) {
if (neighborCleanup(in, out, i, x, y, img.rows, img.cols) == 1) {
bad_labels.push_back(lab[i]);
}
}
}
std::sort(bad_labels.begin(), bad_labels.end());
bad_labels.erase(std::unique(bad_labels.begin(), bad_labels.end()), bad_labels.end());
for (int i = 0; i < out_size; ++i) {
if (lab[i] != 0) {
if (std::find(bad_labels.begin(), bad_labels.end(), lab[i]) != bad_labels.end()) {
out[i] = 0;
}
}
}
}
int inline neighborCleanup(float* in, uchar* out, int i, int x, int y, int x_lim, int y_lim)
{
int index;
for (int xx = x - 1; xx < x + 2; ++xx) {
for (int yy = y - 1; yy < y + 2; ++yy) {
if (((xx == x) && (yy==y)) || xx < 0 || yy < 0 || xx >= x_lim || yy >= y_lim)
continue;
index = xx*y_lim + yy;
if ((in[i] == in[index]) && (out[index] == 0))
return 1;
}
}
return 0;
}
void inline neighborCheck(float* in, uchar* out, int i, int x, int y, int x_lim)
{
int indexes[8], cur_index;
indexes[0] = x*x_lim + y;
indexes[1] = x*x_lim + y+1;
indexes[2] = x*x_lim + y+2;
indexes[3] = (x+1)*x_lim + y+2;
indexes[4] = (x + 2)*x_lim + y+2;
indexes[5] = (x + 2)*x_lim + y + 1;
indexes[6] = (x + 2)*x_lim + y;
indexes[7] = (x + 1)*x_lim + y;
cur_index = (x + 1)*x_lim + y+1;
for (int t = 0; t < 8; t++) {
if (in[indexes[t]] < in[cur_index]) {
out[i] = 0;
break;
}
}
if (out[i] == 3)
out[i] = 1;
}
The following listing is a function similar to MATLAB's "imregionalmax". It looks for at most nLocMax local maxima above threshold, where the found local maxima are at least minDistBtwLocMax pixels apart. It returns the actual number of local maxima found. Notice that it uses OpenCV's minMaxLoc to find global maxima. It is "opencv-self-contained" except for the (easy to implement) helpers vdist, which computes the (Euclidean) distance between the points (r,c) and (row,col), and Point2DMake, which builds such a point.
input is one-channel CV_32F matrix, and locations is nLocMax (rows) by 2 (columns) CV_32S matrix.
int imregionalmax(Mat input, int nLocMax, float threshold, float minDistBtwLocMax, Mat locations)
{
Mat scratch = input.clone();
int nFoundLocMax = 0;
for (int i = 0; i < nLocMax; i++) {
Point location;
double maxVal;
minMaxLoc(scratch, NULL, &maxVal, NULL, &location);
if (maxVal > threshold) {
nFoundLocMax += 1;
int row = location.y;
int col = location.x;
locations.at<int>(i,0) = row;
locations.at<int>(i,1) = col;
int r0 = (row-minDistBtwLocMax > -1 ? row-minDistBtwLocMax : 0);
int r1 = (row+minDistBtwLocMax < scratch.rows ? row+minDistBtwLocMax : scratch.rows-1);
int c0 = (col-minDistBtwLocMax > -1 ? col-minDistBtwLocMax : 0);
int c1 = (col+minDistBtwLocMax < scratch.cols ? col+minDistBtwLocMax : scratch.cols-1);
for (int r = r0; r <= r1; r++) {
for (int c = c0; c <= c1; c++) {
if (vdist(Point2DMake(r, c),Point2DMake(row, col)) <= minDistBtwLocMax) {
scratch.at<float>(r,c) = 0.0;
}
}
}
} else {
break;
}
}
return nFoundLocMax;
}
I do not know if it is what you want, but in my answer to this post, I gave some code to find local maxima (peaks) in a grayscale image (resulting from a distance transform).
The approach relies on subtracting the original image from the dilated image and finding the zero pixels.
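For illustration, here is a minimal OpenCV sketch of that idea (this is not the code from the linked answer; the default 3x3 structuring element and a single-channel CV_32F input are my assumptions):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// A pixel is a local maximum when grayscale dilation (a max filter) leaves its
// value unchanged, i.e. the difference between the dilated and original image is zero there.
cv::Mat localMaxima(const cv::Mat& img) // one-channel CV_32F
{
    cv::Mat dilated, peaks;
    cv::dilate(img, dilated, cv::Mat());          // default 3x3 structuring element
    cv::compare(img, dilated, peaks, cv::CMP_GE); // equal to its own dilation -> 255
    return peaks;                                 // CV_8U mask of the local maxima
}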
I hope it helps,
Good luck
I had the same problem some time ago, and the solution was to reimplement the imregionalmax algorithm in OpenCV/C++. It is not that complicated, because you can find the C++ source code of the function in the MATLAB distribution (somewhere in the toolbox folder). All you have to do is read it carefully and understand the algorithm described there, then rewrite it or remove the MATLAB-specific checks and you'll have it.