Reading image in matlab in a format acceptable to mex - matlab

Hi, I use the imread function in MATLAB to read an image. But now, if I want to use this image in a MEX file, what should I do?
I applied the im2double function to the image and then passed it to the MEX file, but the results I get are not acceptable.
So is there any other function which can be used?

Matlab has different data types, for example uint8, int32 and, most commonly, double. You can see these in the workspace or by using the whos command.
These data types are quite similar to the ones in C(++). So, for example, we could write this piece of C++/MEX code, printint.cpp:
#include <matrix.h>
#include <mex.h>

void mexFunction(int nargout, mxArray *argout[], int nargin, const mxArray *argin[]) {
    int nCols = mxGetDimensions(argin[0])[1];
    int nRows = mxGetDimensions(argin[0])[0];
    int *data = (int *)mxGetData(argin[0]);
    for (int row = 0; row < nRows; ++row) {
        for (int col = 0; col < nCols - 1; ++col) {
            mexPrintf("%3d, ", data[nRows*col + row]); // column-major storage
        }
        mexPrintf("%3d\n", data[nRows*(nCols-1) + row]);
    }
}
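To try it, compile from the MATLAB prompt (assuming a MEX compiler is configured):
>> mex printint.cpp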
This code simply prints all the numbers in a matrix, but it assumes that the elements in the matrix are integers. mxGetData returns a void pointer, which may point to any kind of data, including doubles. I cast the void pointer to an integer pointer, which is risky but allowed in C(++).
This does, however, allow me to run printint on doubles, or on any kind of matrix for that matter:
>> doubles=randi(10,[2,5])
doubles =
8 3 1 7 10
1 1 9 4 1
>> whos('doubles')
Name Size Bytes Class Attributes
doubles 2x5 80 double
>> printint(doubles)
0, 0, 0, 0, 0
1075838976, 1072693248, 1074266112, 1072693248, 1072693248
One can see that the output is garbage: I have interpreted doubles as if they were integers. This may also cause segmentation faults, namely when the sizeof of the actual input data is smaller than the sizeof of an int, as with uint8. If one inputs integers, the method does work:
>> integers = int32(doubles)
integers =
8 3 1 7 10
1 1 9 4 1
>> whos('integers')
Name Size Bytes Class Attributes
integers 2x5 40 int32
>> printint(integers)
8, 3, 1, 7, 10
1, 1, 9, 4, 1
I typically assume a certain data type to be input, since I use mex to speed up certain parts of the code and don't want to spend time on creating flexible and foolproof APIs. This comes at the cost of running into unexpected behavior when the input and assumed data types do not match. Therefore, one may consider programmatically testing the data type of the input in order to handle it correctly.
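For instance, a minimal type guard for printint could look like this (a sketch; the message texts are illustrative):

#include <matrix.h>
#include <mex.h>

void mexFunction(int nargout, mxArray *argout[], int nargin, const mxArray *argin[]) {
    // Fail loudly instead of reinterpreting the wrong type.
    if (nargin < 1)
        mexErrMsgTxt("Expected one input argument.");
    if (!mxIsInt32(argin[0]))
        mexErrMsgTxt("Expected an int32 matrix; convert with int32(...) first.");
    // ... from here on it is safe to cast mxGetData(argin[0]) to (int *) ...
}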
Your problem: in case you use I=imread(filename), Matlab returns a rows x cols x channels 3-D matrix of type uint8, which corresponds to an unsigned char in C(++). In order to work with this data, one could write a simple class to wrap the image. This is done here:
#include <matrix.h>
#include <mex.h>

class MatlabImage {
public:
    MatlabImage(const mxArray *data) {
        this->data = (unsigned char *)mxGetData(data);
        this->rows = mxGetDimensions(data)[0];
        this->cols = mxGetDimensions(data)[1];
        this->channels = mxGetDimensions(data)[2];
    }
    unsigned char *data;
    int rows, cols, channels;
    int get(int row, int col, int channel) {
        // Column-major layout with 0-based indices.
        return this->data[row + rows*(col + cols*channel)];
    }
};

void mexFunction(int nargout, mxArray *argout[], int nargin, const mxArray *argin[]) {
    MatlabImage image(argin[0]);
    int row = 316; int col = 253;
    mexPrintf("I(%d, %d) = (%3d, %3d, %3d)\n", row, col,
              image.get(row, col, 0), image.get(row, col, 1), image.get(row, col, 2));
}
This can then be used in Matlab like this:
>> I = imread('workspace.png');
>> whos('I')
Name Size Bytes Class Attributes
I 328x344x3 338496 uint8
>> getpixel(I)
I(316, 253) = (197, 214, 233)
>> I(317,254,:)
ans(:,:,1) =
197
ans(:,:,2) =
214
ans(:,:,3) =
233
One can also work with doubles in a similar way: double(I) would convert the image to doubles, and im2double(I) does the same but then divides all values by 255, such that they lie between 0 and 1. In both cases one would need to cast the result of mxGetData(...) to (double *) instead of (unsigned char *).
Finally, note that C(++) starts indexing from 0 and Matlab from 1, so passing indices from one to the other always requires incrementing or decrementing them somewhere.
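For completeness, a double variant of the wrapper could look like this (an untested sketch; only the pointer and return types change, and it assumes the caller passed double(I) or im2double(I)):

#include <matrix.h>
#include <mex.h>

class MatlabImageDouble {
public:
    MatlabImageDouble(const mxArray *data) {
        this->data = (double *)mxGetData(data); // now a double matrix
        this->rows = mxGetDimensions(data)[0];
        this->cols = mxGetDimensions(data)[1];
        this->channels = mxGetDimensions(data)[2];
    }
    double *data;
    int rows, cols, channels;
    double get(int row, int col, int channel) {
        return this->data[row + rows*(col + cols*channel)];
    }
};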

Convert to double and divide by 255. In the MEX file, a Matlab double array is used to represent the image.

Related

same Int has different values in Swift, mysterious riddle

With this code:
let rand : Int = Int(arc4random())
NSLog("rand = %d %i %# \(rand)",rand,rand,String(rand))
I get:
rand = -1954814774 -1954814774 2340152522 2340152522
Why are all 4 values not the same?
arc4random generates an unsigned 32-bit int. Int is probably 64 bits on your machine, so you get the same number and it doesn't overflow. But %i and %d are signed 32-bit format specifiers. See here and here. That's why you get a negative number when arc4random returns a number greater than 2^31-1, aka Int32.max.
For example, when 2340152522 is generated, you get -1954814774 in the %i position because:
Int32(bitPattern: 2340152522) == -1954814774
On the other hand, converting an Int to String won't change the number. Int is a signed 64-bit integer.
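For comparison, the same reinterpretation can be sketched in C++ (not the Swift mechanism itself, just the same two's-complement effect):

#include <cstdint>
#include <cstdio>

int main()
{
    uint32_t raw = 2340152522u;           // the value arc4random produced
    int32_t reinterpreted = (int32_t)raw; // same 32 bits, read as signed
    printf("%u -> %d\n", raw, reinterpreted); // 2340152522 -> -1954814774
    return 0;
}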

How to use a fixed point sin function in Vivado HLS

I am calculating the intersection point of two lines given in the polar coordinate system:
typedef ap_fixed<16,3,AP_RND> t_lines_angle;
typedef ap_fixed<16,14,AP_RND> t_lines_rho;
bool get_intersection(
    hls::Polar_<t_lines_angle, t_lines_rho>* lineOne,
    hls::Polar_<t_lines_angle, t_lines_rho>* lineTwo,
    Point* point)
{
    float angleL1 = lineOne->angle.to_float();
    float angleL2 = lineTwo->angle.to_float();
    t_lines_angle rhoL1 = lineOne->rho.to_float();
    t_lines_angle rhoL2 = lineTwo->rho.to_float();

    t_lines_angle ct1 = cosf(angleL1);
    t_lines_angle st1 = sinf(angleL1);
    t_lines_angle ct2 = cosf(angleL2);
    t_lines_angle st2 = sinf(angleL2);

    t_lines_angle d = ct1*st2 - st1*ct2;

    // we make sure that the lines intersect
    // which means that parallel lines are not possible
    point->X = (int)((st2*rhoL1 - st1*rhoL2)/d);
    point->Y = (int)((-ct2*rhoL1 + ct1*rhoL2)/d);
    return true;
}
After synthesis for our FPGA I saw that the 4 implementations of the float sine (and cosine) take 4800 LUTs per implementation, which sums up to roughly 19,000 LUTs for these 4 functions. I want to reduce the LUT count by using a fixed-point sine. I already found an implementation of CORDIC, but I am not sure how to use it. The input of the function is an integer, but I have an ap_fixed datatype. How can I map this ap_fixed to an integer? And how can I map my 3.13 fixed point to the required 2.14 fixed point?
With the help of one of my colleagues I figured out a quite easy solution that does not require any hand-written implementations or manipulation of the fixed-point data:
use #include "hls_math.h" and the hls::sinf() and hls::cosf() functions.
It is important to say that the input of the functions should be ap_fixed<32, I> where I <= 32. The output of the functions can be assigned to a different type, e.g. ap_fixed<16, I>.
Example:
void CalculateSomeTrig(ap_fixed<16,5>* angle, ap_fixed<16,5>* output)
{
    ap_fixed<32,5> functionInput = *angle;
    *output = hls::sinf(functionInput);
}
LUT consumption: in my case the LUT consumption was reduced to 400 LUTs for each implementation of the function.
You can use bit-slicing to get the fraction and the integer parts of the ap_fixed variable, and then manipulate them to get the new ap_fixed. Perhaps something like:
constexpr int max(int a, int b) { return a > b ? a : b; }

template <int W2, int I2, int W1, int I1>
ap_fixed<W2, I2> convert(ap_fixed<W1, I1> f)
{
    // Read fraction part as integer:
    ap_fixed<max(W2, W1) + 1, max(I2, I1) + 1> result = f(W1 - I1 - 1, 0);
    // Shift by the original number of bits in the fraction part
    result >>= W1 - I1;
    // Add the integer part
    result += f(W1 - 1, W1 - I1);
    return result;
}
I haven't tested this code well, so take it with a grain of salt.
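For the 3.13-to-2.14 mapping asked about, the call might then look like this (same untested caveat; ap_fixed<16,3> is the 3.13 format and ap_fixed<16,2> the 2.14 one):

ap_fixed<16,3> angle_3_13 = 1.2345;
ap_fixed<16,2> angle_2_14 = convert<16, 2>(angle_3_13); // W1=16, I1=3 are deduced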

Assigning a value to the pointer to an object

#include <cstdio>

class Distance
{
public:
    int a;
};

int main()
{
    Distance d1;           // declaring object
    char *p = (char *)&d1; // view the object's bytes
    *p = 1;                // overwrite its first byte
    printf("\n %d ", d1.a);
    return 0;
}
This is my code.
When I set the value of 'a' to 256 or 512, I get 257 and 513 respectively, but for a value like 1000 I get 769, and for values like 16, 128, or 100 I get 1.
First I thought it might be related to powers of 2 being incremented by 1 due to changes in their binary representation, but adding 1 to the binary representation of 1000 won't give me 769.
Please help me to understand this code.
*p = 1 sets the first byte in memory (the least significant byte on a little-endian machine) to 00000001.
As you're type-casting int to char:
binary for (int)1000 is (binary)0000001111101000
You're assigning (int)1 to the last 8 bits, i.e. (binary)0000001100000001, which is 769.
Using 256 or 512 worked because the last 8 bits that you change are all zeros: (int)256 is (binary)0000000100000000, so setting the low byte to 1 gives (binary)0000000100000001, which is (int)257; likewise 512 becomes 513.
You get 1 for 16, 128, or 100 because those values fit entirely within the low byte; overwriting that byte with 1 replaces the whole stored value, so a becomes 1.
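A self-contained version of the experiment (a sketch; the printed result assumes a little-endian machine with a 32-bit int):

#include <cstdio>

int main()
{
    int a = 1000;         // 0x000003E8
    char *p = (char *)&a;
    *p = 1;               // overwrite the lowest-addressed byte
    // On little-endian hardware that byte is the least significant one,
    // so a becomes 0x00000301, i.e. 769.
    printf("%d\n", a);
    return 0;
}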

CUSP GMRES error with complex number

I am trying to use CUSP to solve a complex matrix using the GMRES method.
When compiling, I get an error that says:
"no suitable conversion function from "cusp::complex" to "float" exists"
and if I go to see where the error comes from, it points to gmres.inl, line 143:
resid[0] = s[0];
where resid and s type are
cusp::array1d<NormType,cusp::host_memory> resid(1);
cusp::array1d<ValueType,cusp::host_memory> s(R+1);
typedef typename LinearOperator::value_type ValueType;
typedef typename LinearOperator::memory_space MemorySpace;
typedef typename norm_type<ValueType>::type NormType;
Why is it telling me float, when the type of both is complex?
code:
// create an empty sparse matrix structure (CSR format)
cusp::csr_matrix<int, complex<float>, cusp::device_memory> A;
// initialize matrix from file
cusp::io::read_matrix_market_file(A, PATH);
// allocate storage for solution (x) and right hand side (b)
cusp::array1d<complex<float>, cusp::device_memory> x(A.num_rows, 0);
cusp::array1d<complex<float>, cusp::device_memory> b(A.num_rows, 1);
// set stopping criteria:
// iteration_limit = 10000
// relative_tolerance = 1e-6
cusp::verbose_monitor<complex<float>> monitor(b, 10000, 1e-6);
int restart = 50;
// set preconditioner (identity)
cusp::identity_operator<complex<float>, cusp::device_memory> M(A.num_rows, A.num_rows);
// solve the linear system A x = b
cusp::krylov::gmres(A, x, b, restart, monitor, M);

how can I count the number of set bits in a uint in specman?

I want to count the number of set bits in a uint in Specman:
var x: uint;
gen x;
var x_set_bits: uint;
x_set_bits = ?;
What's the best way to do this?
One way I've seen is:
x_set_bits = pack(NULL, x).count(it == 1);
pack(NULL, x) converts x to a list of bits.
count acts on the list and counts all the elements for which the condition holds. In this case the condition is that the element equals 1, which comes out to the number of set bits.
I don't know Specman, but another way I've seen this done looks a bit cheesy yet tends to be efficient: keep a 256-element array in which each element holds the number of set bits in the corresponding index value. For example (pseudocode):
bit_count = [0, 1, 1, 2, 1, ...]
Thus, bit_count[2] == 1, because the value 2, in binary, has a single "1" bit. Similarly, bit_count[255] == 8.
Then, break the uint into bytes, use the byte values to index into the bit_count array, and add the results. Pseudocode:
total = 0
for byte in list_of_bytes
total = total + bit_count[byte]
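Written out in C++, the table-lookup idea might look like this (a sketch for a 32-bit value; Specman syntax will differ):

#include <cstdint>
#include <cstdio>

// bit_count[v] holds the number of set bits in the byte value v.
static uint8_t bit_count[256];

static void init_table()
{
    for (int v = 0; v < 256; ++v)
        bit_count[v] = bit_count[v / 2] + (v & 1);
}

static int count_set_bits(uint32_t x)
{
    int total = 0;
    for (int i = 0; i < 4; ++i) { // break the uint into 4 bytes
        total += bit_count[x & 0xFF];
        x >>= 8;
    }
    return total;
}

int main()
{
    init_table();
    printf("%d\n", count_set_bits(0xF0F0F0F0u)); // prints 16
    return 0;
}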
EDIT
This issue shows up in the book Beautiful Code, in the chapter by Henry S. Warren. Also, Matt Howells shows a C-language implementation that efficiently calculates a bit count. See this answer.