How to expand the GTK window containing a really tiny GRID inside a scrolled window? - gtk

I have some code that places GTK entries in a grid; my goal is to append them to the grid to make a table of values, something like a "spreadsheet". The entries are in the grid, the grid is in a GTK scrolled window, and the scrolled window is in the main window.
Yet I am not getting the expected result: the grid comes out really tiny, even though there is plenty of room for it to expand.
Here is what I am getting (image):
Here is what I want (image):
Here is my code:
#include <gtk/gtk.h>

int main(int argc, char **argv){
    gtk_init(&argc, &argv);

    GtkWidget *mainwindow;
    mainwindow = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_title(GTK_WINDOW(mainwindow), "Example");
    g_signal_connect(mainwindow, "destroy", G_CALLBACK(gtk_main_quit), NULL);

    int n = 5;
    GtkWidget *entries[n][n];
    GtkWidget *grid;
    GtkWidget *scrolledwindow;

    grid = gtk_grid_new();
    gtk_grid_set_row_spacing(GTK_GRID(grid), 5);
    gtk_grid_set_column_spacing(GTK_GRID(grid), 3);
    scrolledwindow = gtk_scrolled_window_new(NULL, NULL);

    for(int i = 0; i < n; i++){
        for(int j = 0; j < n; j++){
            entries[i][j] = gtk_entry_new();
            gtk_grid_attach(GTK_GRID(grid), entries[i][j], i, j, 1, 1);
        }
    }

    gtk_container_add(GTK_CONTAINER(scrolledwindow), grid);
    gtk_container_add(GTK_CONTAINER(mainwindow), scrolledwindow);
    gtk_widget_show_all(mainwindow);

    gtk_main();
    return 0;
}
Searching the internet, I found this question:
Gtk scroll window size in a grid, but it's in Python, and I don't know what values, other than NULL and NULL, I should assign to a and b in the call gtk_scrolled_window_new(a, b);

You can use the gtk_scrolled_window_set_propagate_natural_width and gtk_scrolled_window_set_propagate_natural_height functions of GtkScrolledWindow. (As for the two arguments of gtk_scrolled_window_new: they are the horizontal and vertical GtkAdjustment objects, and passing NULL simply lets the scrolled window create its own, so NULL and NULL are fine here.)
For example, in your snippet:
// ...
scrolledwindow = gtk_scrolled_window_new(NULL, NULL);

// Use the child widget's natural dimensions when requesting the scrolled
// window's own natural size:
gtk_scrolled_window_set_propagate_natural_width(GTK_SCROLLED_WINDOW(scrolledwindow), TRUE);
gtk_scrolled_window_set_propagate_natural_height(GTK_SCROLLED_WINDOW(scrolledwindow), TRUE);

for(int i = 0; i < n; i++){
// ...
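Putting the pieces together, the whole program would look roughly as follows (a minimal sketch, untested; note that these two setters were only added in GTK 3.22, so older GTK 3 releases do not have them):

#include <gtk/gtk.h>

int main(int argc, char **argv){
    gtk_init(&argc, &argv);

    GtkWidget *mainwindow = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_title(GTK_WINDOW(mainwindow), "Example");
    g_signal_connect(mainwindow, "destroy", G_CALLBACK(gtk_main_quit), NULL);

    GtkWidget *grid = gtk_grid_new();
    gtk_grid_set_row_spacing(GTK_GRID(grid), 5);
    gtk_grid_set_column_spacing(GTK_GRID(grid), 3);

    GtkWidget *scrolledwindow = gtk_scrolled_window_new(NULL, NULL);
    // Let the scrolled window request its child's natural size (GTK >= 3.22):
    gtk_scrolled_window_set_propagate_natural_width(GTK_SCROLLED_WINDOW(scrolledwindow), TRUE);
    gtk_scrolled_window_set_propagate_natural_height(GTK_SCROLLED_WINDOW(scrolledwindow), TRUE);

    int n = 5;
    for(int i = 0; i < n; i++){
        for(int j = 0; j < n; j++){
            GtkWidget *entry = gtk_entry_new();
            gtk_grid_attach(GTK_GRID(grid), entry, i, j, 1, 1);
        }
    }

    gtk_container_add(GTK_CONTAINER(scrolledwindow), grid);
    gtk_container_add(GTK_CONTAINER(mainwindow), scrolledwindow);
    gtk_widget_show_all(mainwindow);

    gtk_main();
    return 0;
}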

Related

Convert Dicom image pixel value to Unity color

I use SimpleITK to read a DICOM image in Unity. I create a list of Unity Texture2D objects to store the DICOM slices. My question is: how can I convert the pixel values from the DICOM image to a Unity Color?
List<Texture2D> _imageListTexture = new List<Texture2D>();
for (int k = 0; k < depth; k++)
{
    Texture2D _myTex = new Texture2D(width, height, TextureFormat.ARGB32, true);
    for (int j = 0; j < height; j++)
    {
        for (int i = 0; i < width; i++)
        {
            int idx = i + width * (j + height * k);
            float pixel = liverVoxels[idx]; //We have the pixel value
            Color32 _col = new Color32();   //Unity has rgba but we have only one value
            //What to do here?
            _myTex.SetPixel(i, j, _col);
        }
    }
    _myTex.Apply();
    _imageListTexture.Add(_myTex);
}
It's as simple as reusing the single intensity for all three colour channels. Note that Color32 takes bytes, so the float value has to be clamped and cast (this assumes the DICOM intensities have already been rescaled to the 0-255 range):
float pixel = liverVoxels[idx]; //We have the pixel value
byte v = (byte)Mathf.Clamp(pixel, 0f, 255f);
Color32 _col = new Color32(v, v, v, 255);
_myTex.SetPixel(i, j, _col);

Converting an arma::mat adjacency matrix into an igraph graph in C (Rcpp)

I use Armadillo objects in some (Rcpp) code where I work with matrices.
The matrices are adjacency matrices, and I need to quickly compute the components of the underlying network; I thought I could do this via igraph.
But I already fail at converting the adjacency matrix into something that igraph can use.
#include <RcppArmadillo.h>
#include <iostream>
#include <igraph-0.7.1\include\igraph.h>

using namespace arma;

// [[Rcpp::depends(RcppArmadillo)]]
// [[Rcpp::export]]
vec component_membership(const mat& adjacencymatrix) {
    igraph_t g;
    igraph_adjacency(&g, &adjacencymatrix, IGRAPH_ADJ_DIRECTED);
    // here is more code that is immaterial to my problem
}
On compilation it complains
cannot convert 'const mat* {aka const arma::Mat<double>*}' to
'igraph_matrix_t*' for argument '2' to
'int igraph_adjacency(igraph_t*, igraph_matrix_t*, igraph_adjacency_t)'
I understand why that is the case: igraph_matrix_t and arma::mat must be fundamentally different data types. But how do I convert between them, i.e., how do I fix this easily?
As you suspected, igraph_matrix_t and arma::mat are completely different types. The igraph documentation lists no method for constructing an igraph_matrix_t from a plain C array, so I think one has to do it by hand. Something like this might work (totally untested!):
igraph_matrix_t m;
int rc = igraph_matrix_init(&m, adjacencymatrix.n_rows, adjacencymatrix.n_cols);
for (unsigned long j = 0; j < adjacencymatrix.n_cols; ++j)
    for (unsigned long i = 0; i < adjacencymatrix.n_rows; ++i)
        igraph_matrix_set(&m, i, j, adjacencymatrix(i, j));
Following @Ralf_Stubner's suggestion, I ended up using the following helper functions. I'm not sure this is the smartest way, but I thought I'd share it anyway:
void armamat_to_igraph_matrix(const mat &x_in, igraph_matrix_t *x_out) {
    igraph_matrix_init(x_out, x_in.n_rows, x_in.n_cols);
    for (unsigned long j = 0; j < x_in.n_cols; ++j)
        for (unsigned long i = 0; i < x_in.n_rows; ++i)
            igraph_matrix_set(x_out, i, j, x_in(i, j));
    return;
}

void igraph_vector_to_armauvec(const igraph_vector_t *x_in, uvec &x_out) {
    x_out = uvec(igraph_vector_size(x_in));
    for (unsigned long j = 0; j < igraph_vector_size(x_in); ++j)
        x_out(j) = igraph_vector_e(x_in, j);
    return;
}

void igraph_vector_to_armavec(const igraph_vector_t *x_in, vec &x_out) {
    x_out = vec(igraph_vector_size(x_in));
    for (unsigned long j = 0; j < igraph_vector_size(x_in); ++j)
        x_out(j) = igraph_vector_e(x_in, j);
    return;
}
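For completeness, here is a hedged sketch of how component_membership might use these helpers to obtain the (weak) component membership of each vertex. It assumes the igraph 0.7.1 C API included in the question (in particular igraph_clusters, which is not shown in the original code) and is untested:

// [[Rcpp::export]]
vec component_membership(const mat& adjacencymatrix) {
    // Convert the Armadillo adjacency matrix to an igraph matrix
    igraph_matrix_t m;
    armamat_to_igraph_matrix(adjacencymatrix, &m);

    // Build the graph from the adjacency matrix
    igraph_t g;
    igraph_adjacency(&g, &m, IGRAPH_ADJ_DIRECTED);

    // Compute component membership per vertex
    igraph_vector_t membership, csize;
    igraph_integer_t no_of_components;
    igraph_vector_init(&membership, 0);
    igraph_vector_init(&csize, 0);
    igraph_clusters(&g, &membership, &csize, &no_of_components, IGRAPH_WEAK);

    // Copy the result back into an Armadillo vector
    vec result;
    igraph_vector_to_armavec(&membership, result);

    // Free the igraph objects
    igraph_vector_destroy(&csize);
    igraph_vector_destroy(&membership);
    igraph_destroy(&g);
    igraph_matrix_destroy(&m);

    return result;
}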

MATLAB to OpenCV code conversion

I am trying to convert some MATLAB code to OpenCV, but I am stuck on the following lines, as I don't know much programming.
MATLAB code:
[indx_row, indx_col] = find(mask == 1);
Indx_Row = indx_row;
Indx_Col = indx_col;
for ib = 1:nB;
    istart = (ib-1)*n + 1;
    iend = min(ib*n, N);
    indx_row = Indx_Row(istart:iend);
    indx_col = Indx_Col(istart:iend);
OpenCV code:
vector<Point> index_rowCol;
for(int i = 0; i < mask.rows; i++)
{
    for(int j = 0; j < mask.cols; j++)
    {
        if( mask.at<float>(i,j) == 1 )
        {
            Point pixel;
            pixel.x = j;
            pixel.y = i;
            index_rowCol.push_back(pixel);
        }
    }
}

//Code about the "for loop" in the MATLAB code
for(int ib = 0; ib < nB; ib++)
{
    int istart = (ib-1)*n;
    int iend = std::min( ib*n, N );

    index_rowCol.clear(); // Clearing "index_rowCol" so that we can fill it again from "istart" to "iend"
    for(int j = istart; j < iend; j++)
    {
        index_rowCol.push_back( Index_RowCol[j] );
    }
}
I am unable to tell whether this is OK or not.
I think the mistake is in the index arithmetic rather than in min itself.
Here
for ib = 1:nB
    istart = (ib-1)*n + 1;
    iend = min(ib*n, N);
ib runs over 1, 2, ..., nB, so istart and iend are 1-based and the blocks cover elements 1..N, with min(ib*n, N) simply clipping the last block.
In the C++ implementation
for(int ib = 0; ib < nB; ib++)
{
    int istart = (ib-1)*n;
    int iend = std::min( ib*n, N );
ib starts at 0, so istart = (ib-1)*n is -n on the very first iteration and the ranges no longer line up; with 0-based indexing it should be istart = ib*n and iend = std::min((ib+1)*n, N), as sketched below.
To better understand how the code behaves, use step-by-step debugging: set a breakpoint, run the code, and step through it (F10 in the MATLAB editor).
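A hedged sketch of the corrected block loop under that assumption (0-based istart/iend; N is taken to be the total number of mask points, i.e. Index_RowCol.size(); untested):

// Walk through Index_RowCol (all mask points) in blocks of n, mirroring the MATLAB loop.
for (int ib = 0; ib < nB; ib++)
{
    int istart = ib * n;                       // MATLAB's (ib-1)*n + 1, shifted to 0-based
    int iend   = std::min((ib + 1) * n, N);    // MATLAB's min(ib*n, N)

    index_rowCol.clear();                      // refill with the current block only
    for (int j = istart; j < iend; j++)
        index_rowCol.push_back(Index_RowCol[j]);

    // ... process the current block here ...
}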

Scaling in inverse FFT by cuFFT

When I plot the values obtained by a program using cuFFT and compare the results with Matlab's, I get the same shape of graph and the maxima and minima occur at the same points. However, the values produced by cuFFT are much greater than those produced by Matlab. The Matlab code is
fs = 1000; % sample freq
D = [0:1:4]'; % pulse delay times
t = 0 : 1/fs : 4000/fs; % signal evaluation time
w = 0.5; % width of each pulse
yp = pulstran(t,D,'rectpuls',w);
filt = conj(fliplr(yp));
xx = fft(yp,1024).*fft(filt,1024);
xx = (abs(ifft(xx)));
and the CUDA code with the same input is:
cufftExecC2C(plan, (cufftComplex *)d_signal, (cufftComplex *)d_signal, CUFFT_FORWARD);
cufftExecC2C(plan, (cufftComplex *)d_filter_signal, (cufftComplex *)d_filter_signal, CUFFT_FORWARD);
ComplexPointwiseMul<<<blocksPerGrid, threadsPerBlock>>>(d_signal, d_filter_signal, NX);
cufftExecC2C(plan, (cufftComplex *)d_signal, (cufftComplex *)d_signal, CUFFT_INVERSE);
The cuFFT code also performs a 1024-point FFT, with a batch size of 2.
Even with a scaling factor of NX=1024, the values do not come out correct. Please tell me what to do.
This is a late answer to remove this question from the unanswered list.
You are not giving enough information to diagnose your problem, since you do not specify how you set up the cuFFT plan. You do not even specify whether the Matlab and cuFFT signals have exactly the same shape (so that you are off by just a scaling factor) or only approximately the same shape. However, let me make the following two observations:
The yp vector has 4001 elements (t runs from 0 to 4000/fs in steps of 1/fs); by contrast, with fft(yp,1024) you perform an FFT that truncates the signal to 1024 elements;
The inverse cuFFT does not perform the scaling by the number of vector elements.
For the sake of convenience (it could be useful to other users), I'm reporting below a simple FFT/IFFT scheme which also includes the scaling, performed here with the CUDA Thrust library.
#include <stdio.h>

#include <cufft.h>

#include <thrust/host_vector.h>
#include <thrust/device_vector.h>

/*********************/
/* SCALE BY CONSTANT */
/*********************/
class Scale_by_constant
{
    private:
        float c_;

    public:
        Scale_by_constant(float c) { c_ = c; };

        __host__ __device__ float2 operator()(float2 &a) const
        {
            float2 output;
            output.x = a.x / c_;
            output.y = a.y / c_;
            return output;
        }
};

int main(void){

    const int N = 4;

    // --- Setting up input device vector
    thrust::device_vector<float2> d_vec(N, make_cuComplex(1.f, 2.f));

    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, 1);

    // --- Perform in-place direct Fourier transform
    cufftExecC2C(plan, thrust::raw_pointer_cast(d_vec.data()), thrust::raw_pointer_cast(d_vec.data()), CUFFT_FORWARD);

    // --- Perform in-place inverse Fourier transform
    cufftExecC2C(plan, thrust::raw_pointer_cast(d_vec.data()), thrust::raw_pointer_cast(d_vec.data()), CUFFT_INVERSE);

    // --- Scale by the transform length to complete the inverse transform
    thrust::transform(d_vec.begin(), d_vec.end(), d_vec.begin(), Scale_by_constant((float)(N)));

    // --- Setting up output host vector
    thrust::host_vector<float2> h_vec(d_vec);

    for (int i=0; i<N; i++) printf("Element #%i; Real part = %f; Imaginary part: %f\n", i, h_vec[i].x, h_vec[i].y);

    getchar();
}
With the introduction of the cuFFT callback feature, the normalization required by the inverse FFT can be embedded directly within the cufftExecC2C call by defining the normalization operation as a __device__ function.
Besides the cuFFT User Guide, for the cuFFT callback feature see
CUDA Pro Tip: Use cuFFT Callbacks for Custom Data Processing
Below is an example of implementing the IFFT normalization with a cuFFT store callback.
#include <stdio.h>
#include <assert.h>
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <cufft.h>
#include <cufftXt.h>
/********************/
/* CUDA ERROR CHECK */
/********************/
#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=true)
{
    if (code != cudaSuccess)
    {
        fprintf(stderr, "GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
        if (abort) exit(code);
    }
}
/*********************/
/* CUFFT ERROR CHECK */
/*********************/
// See http://stackoverflow.com/questions/16267149/cufft-error-handling
#ifdef _CUFFT_H_
static const char *_cudaGetErrorEnum(cufftResult error)
{
    switch (error)
    {
        case CUFFT_SUCCESS:
            return "CUFFT_SUCCESS";
        case CUFFT_INVALID_PLAN:
            return "CUFFT_INVALID_PLAN";
        case CUFFT_ALLOC_FAILED:
            return "CUFFT_ALLOC_FAILED";
        case CUFFT_INVALID_TYPE:
            return "CUFFT_INVALID_TYPE";
        case CUFFT_INVALID_VALUE:
            return "CUFFT_INVALID_VALUE";
        case CUFFT_INTERNAL_ERROR:
            return "CUFFT_INTERNAL_ERROR";
        case CUFFT_EXEC_FAILED:
            return "CUFFT_EXEC_FAILED";
        case CUFFT_SETUP_FAILED:
            return "CUFFT_SETUP_FAILED";
        case CUFFT_INVALID_SIZE:
            return "CUFFT_INVALID_SIZE";
        case CUFFT_UNALIGNED_DATA:
            return "CUFFT_UNALIGNED_DATA";
    }

    return "<unknown>";
}
#endif
#define cufftSafeCall(err) __cufftSafeCall(err, __FILE__, __LINE__)
inline void __cufftSafeCall(cufftResult err, const char *file, const int line)
{
    if( CUFFT_SUCCESS != err) {
        fprintf(stderr, "CUFFT error in file '%s', line %d\nerror %d: %s\nterminating!\n",
                file, line, (int)err, _cudaGetErrorEnum(err));
        cudaDeviceReset(); assert(0);
    }
}
__device__ void IFFT_Scaling(void *dataOut, size_t offset, cufftComplex element, void *callerInfo, void *sharedPtr) {

    float *scaling_factor = (float*)callerInfo;

    float2 output;
    output.x = cuCrealf(element);
    output.y = cuCimagf(element);

    output.x = output.x / scaling_factor[0];
    output.y = output.y / scaling_factor[0];

    ((float2*)dataOut)[offset] = output;
__device__ cufftCallbackStoreC d_storeCallbackPtr = IFFT_Scaling;
/********/
/* MAIN */
/********/
int main() {

    const int N = 16;

    cufftHandle plan;

    float2 *h_input   = (float2*)malloc(N*sizeof(float2));
    float2 *h_output1 = (float2*)malloc(N*sizeof(float2));
    float2 *h_output2 = (float2*)malloc(N*sizeof(float2));

    float2 *d_input;   gpuErrchk(cudaMalloc((void**)&d_input,   N*sizeof(float2)));
    float2 *d_output1; gpuErrchk(cudaMalloc((void**)&d_output1, N*sizeof(float2)));
    float2 *d_output2; gpuErrchk(cudaMalloc((void**)&d_output2, N*sizeof(float2)));

    float *h_scaling_factor = (float*)malloc(sizeof(float));
    h_scaling_factor[0] = 16.0f;
    float *d_scaling_factor; gpuErrchk(cudaMalloc((void**)&d_scaling_factor, sizeof(float)));
    gpuErrchk(cudaMemcpy(d_scaling_factor, h_scaling_factor, sizeof(float), cudaMemcpyHostToDevice));

    for (int i=0; i<N; i++) {
        h_input[i].x = 1.0f;
        h_input[i].y = 0.f;
    }
    gpuErrchk(cudaMemcpy(d_input, h_input, N*sizeof(float2), cudaMemcpyHostToDevice));

    cufftSafeCall(cufftPlan1d(&plan, N, CUFFT_C2C, 1));

    cufftSafeCall(cufftExecC2C(plan, d_input, d_output1, CUFFT_FORWARD));
    gpuErrchk(cudaMemcpy(h_output1, d_output1, N*sizeof(float2), cudaMemcpyDeviceToHost));
    for (int i=0; i<N; i++) printf("Direct transform - %d - (%f, %f)\n", i, h_output1[i].x, h_output1[i].y);

    cufftCallbackStoreC h_storeCallbackPtr;
    gpuErrchk(cudaMemcpyFromSymbol(&h_storeCallbackPtr, d_storeCallbackPtr, sizeof(h_storeCallbackPtr)));
    cufftSafeCall(cufftXtSetCallback(plan, (void **)&h_storeCallbackPtr, CUFFT_CB_ST_COMPLEX, (void **)&d_scaling_factor));

    cufftSafeCall(cufftExecC2C(plan, d_output1, d_output2, CUFFT_INVERSE));
    gpuErrchk(cudaMemcpy(h_output2, d_output2, N*sizeof(float2), cudaMemcpyDeviceToHost));
    for (int i=0; i<N; i++) printf("Inverse transform - %d - (%f, %f)\n", i, h_output2[i].x, h_output2[i].y);

    cufftSafeCall(cufftDestroy(plan));

    gpuErrchk(cudaFree(d_input));
    gpuErrchk(cudaFree(d_output1));
    gpuErrchk(cudaFree(d_output2));

    return 0;
}
EDIT
The "moment" the callback operation is performed is specified by CUFFT_CB_ST_COMPLEX in the call to cufftXtSetCallback. Notice that you can then have load and store callbacks with the same cuFFT plan.
PERFORMANCE
I'm adding a further answer to compare the callback performance with the non-callback version of the same code for this particular case of IFFT scaling. The code I'm using is
#include <stdio.h>
#include <assert.h>
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <cufft.h>
#include <cufftXt.h>
#include <thrust/device_vector.h>
#include "Utilities.cuh"
#include "TimingGPU.cuh"
//#define DISPLAY
/*******************************/
/* THRUST FUNCTOR IFFT SCALING */
/*******************************/
class Scale_by_constant
{
    private:
        float c_;

    public:
        Scale_by_constant(float c) { c_ = c; };

        __host__ __device__ float2 operator()(float2 &a) const
        {
            float2 output;
            output.x = a.x / c_;
            output.y = a.y / c_;
            return output;
        }
};
/**********************************/
/* IFFT SCALING CALLBACK FUNCTION */
/**********************************/
__device__ void IFFT_Scaling(void *dataOut, size_t offset, cufftComplex element, void *callerInfo, void *sharedPtr) {

    float *scaling_factor = (float*)callerInfo;

    float2 output;
    output.x = cuCrealf(element);
    output.y = cuCimagf(element);

    output.x = output.x / scaling_factor[0];
    output.y = output.y / scaling_factor[0];

    ((float2*)dataOut)[offset] = output;
}
__device__ cufftCallbackStoreC d_storeCallbackPtr = IFFT_Scaling;
/********/
/* MAIN */
/********/
int main() {

    const int N = 100000000;

    cufftHandle plan; cufftSafeCall(cufftPlan1d(&plan, N, CUFFT_C2C, 1));

    TimingGPU timerGPU;

    float2 *h_input   = (float2*)malloc(N*sizeof(float2));
    float2 *h_output1 = (float2*)malloc(N*sizeof(float2));
    float2 *h_output2 = (float2*)malloc(N*sizeof(float2));

    float2 *d_input;   gpuErrchk(cudaMalloc((void**)&d_input,   N*sizeof(float2)));
    float2 *d_output1; gpuErrchk(cudaMalloc((void**)&d_output1, N*sizeof(float2)));
    float2 *d_output2; gpuErrchk(cudaMalloc((void**)&d_output2, N*sizeof(float2)));

    // --- Callback function parameters (the IFFT scaling factor is the transform length)
    float *h_scaling_factor = (float*)malloc(sizeof(float));
    h_scaling_factor[0] = (float)N;
    float *d_scaling_factor; gpuErrchk(cudaMalloc((void**)&d_scaling_factor, sizeof(float)));
    gpuErrchk(cudaMemcpy(d_scaling_factor, h_scaling_factor, sizeof(float), cudaMemcpyHostToDevice));
    // --- Initializing the input on the host and moving it to the device
    for (int i = 0; i < N; i++) {
        h_input[i].x = 1.0f;
        h_input[i].y = 0.f;
    }
    gpuErrchk(cudaMemcpy(d_input, h_input, N * sizeof(float2), cudaMemcpyHostToDevice));

    // --- Execute direct FFT on the device and move the results to the host
    cufftSafeCall(cufftExecC2C(plan, d_input, d_output1, CUFFT_FORWARD));
#ifdef DISPLAY
    gpuErrchk(cudaMemcpy(h_output1, d_output1, N * sizeof(float2), cudaMemcpyDeviceToHost));
    for (int i=0; i<N; i++) printf("Direct transform - %d - (%f, %f)\n", i, h_output1[i].x, h_output1[i].y);
#endif

    // --- Execute inverse FFT with subsequent scaling on the device and move the results to the host
    timerGPU.StartCounter();
    cufftSafeCall(cufftExecC2C(plan, d_output1, d_output2, CUFFT_INVERSE));
    thrust::transform(thrust::device_pointer_cast(d_output2), thrust::device_pointer_cast(d_output2) + N, thrust::device_pointer_cast(d_output2), Scale_by_constant((float)(N)));
#ifdef DISPLAY
    gpuErrchk(cudaMemcpy(h_output2, d_output2, N * sizeof(float2), cudaMemcpyDeviceToHost));
    for (int i=0; i<N; i++) printf("Inverse transform - %d - (%f, %f)\n", i, h_output2[i].x, h_output2[i].y);
#endif
    printf("Timing NO callback %f\n", timerGPU.GetCounter());

    // --- Setup store callback
//  timerGPU.StartCounter();
    cufftCallbackStoreC h_storeCallbackPtr;
    gpuErrchk(cudaMemcpyFromSymbol(&h_storeCallbackPtr, d_storeCallbackPtr, sizeof(h_storeCallbackPtr)));
    cufftSafeCall(cufftXtSetCallback(plan, (void **)&h_storeCallbackPtr, CUFFT_CB_ST_COMPLEX, (void **)&d_scaling_factor));

    // --- Execute inverse callback FFT on the device and move the results to the host
    timerGPU.StartCounter();
    cufftSafeCall(cufftExecC2C(plan, d_output1, d_output2, CUFFT_INVERSE));
#ifdef DISPLAY
    gpuErrchk(cudaMemcpy(h_output2, d_output2, N * sizeof(float2), cudaMemcpyDeviceToHost));
    for (int i=0; i<N; i++) printf("Inverse transform - %d - (%f, %f)\n", i, h_output2[i].x, h_output2[i].y);
#endif
    printf("Timing callback %f\n", timerGPU.GetCounter());

    cufftSafeCall(cufftDestroy(plan));

    gpuErrchk(cudaFree(d_input));
    gpuErrchk(cudaFree(d_output1));
    gpuErrchk(cudaFree(d_output2));

    return 0;
}
For such large 1D arrays and such simple processing (scaling), the timing on a Kepler K20c is the following:
Non-callback 69.029762 ms
Callback     65.868607 ms
So there is not much improvement. I expect that the improvement one does see comes from avoiding a separate kernel call in the non-callback case. For smaller 1D arrays, there is either no improvement or the non-callback case runs faster.

How to pass an array of double images to a mexFunction in Matlab

I have already passed a single image to my mexFunction, but now I need to pass an array of images and I am struggling to get it right. This is my code for the simple image. It works perfectly, but when I go to 3D I don't understand how the information is arranged in the mxArray.
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, mxArray *prhs[])
{
    mxArray *matrixIn = prhs[0];
    inputImage = (double *)mxGetPr(matrixIn);
    int x = int(dims[0]);
    int y = int(dims[1]);

    volume3D image(inputImage, x, y, 1);
}

volume3D::volume3D(double* image, int x, int y, int z)
{
    allocateVolume( x, y, z);
    for(int i=0; i<xSize; i++)
        for(int j=0; j<ySize; j++) {
            volume[i][j][0] = double(image[(i)*x+j]);
        }
}
I did something like this to pass it the other way around:
mwSize mrows, ncols;
mrows = mxGetM(prhs[0]);
ncols = mxGetN(prhs[0]);

plhs[0] = mxCreateNumericMatrix(mrows, ncols, mxDOUBLE_CLASS, mxREAL);
double *matlabTumorMap = mxGetPr(plhs[0]);

const mwSize *dims = mxGetDimensions( plhs[0]);
int x = int(dims[0]);
int y = int(dims[1]);
int z = int(dims[2]);

mwIndex subs[3];

mexPrintf("x %i\n", x);
mexPrintf("y %i\n", y);
mexPrintf("z %i\n", z);

mxArray *matrixTumor = plhs[0];
for(subs[0]=0; subs[0]<x; subs[0]++)
    for(subs[1]=0; subs[1]<y; subs[1]++)
        for(subs[2]=0; subs[2]<z; subs[2]++)
        {
            mwIndex idx = mxCalcSingleSubscript( matrixTumor, 3, subs);
            matlabTumorMap[idx] = tumorMap.getVoxel(subs[0], subs[1], subs[2]);
        }
According to http://www.mathworks.de/help/techdoc/apiref/bqoqnz0.html, there is mxCalcSingleSubscript, which helps you calculate these indices.
Something like
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, mxArray *prhs[])
{
    mxArray *matrixIn = prhs[0];
    volume3D image(matrixIn);
}

volume3D::volume3D(mxArray* matrixIn)
{
    assert(mxGetNumberOfDimensions(matrixIn) >= 3);
    const mwSize *dims = mxGetDimensions(matrixIn);
    int x = int(dims[0]);
    int y = int(dims[1]);
    int z = int(dims[2]);

    double *image = mxGetPr(matrixIn);
    mwIndex subs[3];

    allocateVolume( x, y, z);
    for(subs[0]=0; subs[0]<x; subs[0]++)
        for(subs[1]=0; subs[1]<y; subs[1]++)
            for(subs[2]=0; subs[2]<z; subs[2]++) {
                mwIndex idx = mxCalcSingleSubscript(matrixIn, 3, subs);
                /* <unsure> */ volume[subs[0]][subs[1]][subs[2]] /* </unsure> */ = image[idx];
            }
}
BTW: pay attention when mixing C and C++ - it can lead to even more headaches due to name mangling etc.
You are doing it right.
The only problem is your indexing, I think; MATLAB stores its data column-major, so you should write:
volume[i][j][0] = double(image[i + j*x]);
And you also forgot the line:
const mwSize *dims = mxGetDimensions(matrixIn);
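To illustrate the 3D case the question asks about, here is a hedged sketch of how the same column-major indexing generalizes when the mxArray has three dimensions (the names volume, allocateVolume and volume3D follow the question's code; untested):

volume3D::volume3D(const mxArray* matrixIn)
{
    const mwSize *dims = mxGetDimensions(matrixIn);
    int x = int(dims[0]);   // rows
    int y = int(dims[1]);   // columns
    int z = (mxGetNumberOfDimensions(matrixIn) >= 3) ? int(dims[2]) : 1;   // slices

    double *image = mxGetPr(matrixIn);   // column-major data, as stored by MATLAB
    allocateVolume(x, y, z);

    // Zero-based element (i,j,k) lives at linear index i + x*(j + y*k)
    for (int k = 0; k < z; k++)
        for (int j = 0; j < y; j++)
            for (int i = 0; i < x; i++)
                volume[i][j][k] = image[i + x*(j + y*k)];
}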