Issues running mex commands on Octave

I am using Ubuntu and am trying to use Octave to run a Matlab script that invokes the mex compiler on some .cpp files: https://github.com/yuxng/MDP_Tracking/blob/master/compile.m. I already have the OpenCV requirement installed, but I am receiving some errors.
Basically, the commands in Octave are
compile
MOT_test
The errors I got from the compile command are shown below. Based on my web search, plus this thread, it seems this issue of compiling the MDP Tracker remains unsolved. I have minimal experience with mex and Octave, so there is not much more I can do on my own.
octave:4> compile
In file included from imResampleMex.cpp:7:0:
wrappers.hpp:22:24: error: ‘wrCalloc’ declared as an ‘inline’ variable
inline void* wrCalloc( size_t num, size_t size ) { return calloc(num,size); }
^~~~~~
wrappers.hpp:22:24: error: ‘size_t’ was not declared in this scope
wrappers.hpp:22:36: error: ‘size_t’ was not declared in this scope
inline void* wrCalloc( size_t num, size_t size ) { return calloc(num,size); }
^~~~~~
wrappers.hpp:22:48: error: expression list treated as compound expression in initializer [-fpermissive]
inline void* wrCalloc( size_t num, size_t size ) { return calloc(num,size); }
^
wrappers.hpp:23:24: error: ‘wrMalloc’ declared as an ‘inline’ variable
inline void* wrMalloc( size_t size ) { return malloc(size); }
^~~~~~
wrappers.hpp:23:24: error: ‘size_t’ was not declared in this scope
wrappers.hpp: In function ‘void wrFree(void*)’:
wrappers.hpp:24:44: error: ‘free’ was not declared in this scope
inline void wrFree( void * ptr ) { free(ptr); }
^
wrappers.hpp: At global scope:
wrappers.hpp:29:17: error: ‘size_t’ was not declared in this scope
void* alMalloc( size_t size, int alignment ) {
^~~~~~
wrappers.hpp:29:30: error: expected primary-expression before ‘int’
void* alMalloc( size_t size, int alignment ) {
^~~
wrappers.hpp:29:44: error: expression list treated as compound expression in initializer [-fpermissive]
void* alMalloc( size_t size, int alignment ) {
^
imResampleMex.cpp: In function ‘void resampleCoef(int, int, int&, int*&, int*&, T*&, int*, int)’:
imResampleMex.cpp:22:39: error: ‘alMalloc’ cannot be used as a function
wts = (T*)alMalloc(nMax*sizeof(T),16);
^
imResampleMex.cpp:23:43: error: ‘alMalloc’ cannot be used as a function
yas = (int*)alMalloc(nMax*sizeof(int),16);
^
imResampleMex.cpp:24:43: error: ‘alMalloc’ cannot be used as a function
ybs = (int*)alMalloc(nMax*sizeof(int),16);
^
imResampleMex.cpp: In function ‘void resample(T*, T*, int, int, int, int, int, T)’:
imResampleMex.cpp:49:43: error: ‘alMalloc’ cannot be used as a function
T *C = (T*) alMalloc((ha+4)*sizeof(T),16); for(y=ha; y<ha+4; y++) C[y]=0;
^
warning: mkoctfile exited with failure status
warning: called from
mkoctfile at line 171 column 5
mex at line 29 column 18
compile at line 17 column 2
Compilation finished.
octave:5>
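For anyone hitting the same wall: every failure above boils down to size_t, calloc, malloc, and free being undeclared inside wrappers.hpp, which is the classic symptom of a standard header that older toolchains pulled in transitively but newer GCC releases no longer do. A likely fix (a sketch based only on the error log above, not a confirmed patch for the repository) is to include the C standard library header at the top of wrappers.hpp, before the first use of those names:

/* wrappers.hpp -- sketch of a possible fix, not a confirmed patch */
#include <stdlib.h> /* declares size_t, calloc, malloc, free (valid in C and C++) */

inline void* wrCalloc( size_t num, size_t size ) { return calloc(num,size); }
inline void* wrMalloc( size_t size ) { return malloc(size); }
inline void wrFree( void * ptr ) { free(ptr); }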

Related

Error: expected initializer before 'bool'

I have this header file, and the GNU GCC compiler gives me an error on the line with the declaration of the bool function. I cannot find anything wrong.
#ifndef GAMECREATOR_H
#define GAMECREATOR_H
bool isPerfectSquare (int x);
#endif // GAME_CREATOR_H
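A common cause of this particular error, assuming the header is being compiled as C rather than C++ (an assumption; the post does not say), is that bool is not a built-in keyword in C before C23, so the header has to include <stdbool.h> before using it. A minimal sketch:

#ifndef GAMECREATOR_H
#define GAMECREATOR_H

#include <stdbool.h> /* assumption: compiled as C, where bool needs this header */

bool isPerfectSquare (int x);

#endif // GAME_CREATOR_H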

How to parse extended integer type in python C extension module?

I am trying to pass a (large) integer from Python to an extension module, but I am unable to parse Python's arbitrary-precision integers into 256-bit unsigned integers (uint256). Here is the C callee:
#include <Python.h>

typedef unsigned _ExtInt(256) uint256;

static PyObject* test(PyObject* self, PyObject* args)
{
    uint256 x;
    if (!PyArg_ParseTuple(args, "O", &x)) {
        puts("Could not parse the python arg");
        return NULL;
    }
    // simple addition
    x += (uint256) 1;
    return Py_BuildValue("O", x);
}
// ... initialize extension module here ...
In Python I run something like
import extension_module
extension_module.test(1)
And I get the error:
Bus error: 10
Or
Segmentation fault: 11
However, if I remove the simple addition x += (uint256) 1;, it will at least not throw any error, and it returns the argument.
How do I parse extended-integer types in my C extension module?
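One possible approach (a sketch, not from the original post): PyArg_ParseTuple's "O" format stores a PyObject * into x instead of converting the integer, so that pointer is what the later arithmetic and Py_BuildValue operate on. You can instead receive the object and copy its bits through a 32-byte buffer using CPython's private _PyLong_AsByteArray/_PyLong_FromByteArray helpers; note these are not part of the stable API and may change between versions:

#include <Python.h>
#include <string.h>

typedef unsigned _ExtInt(256) uint256;

static PyObject* test(PyObject* self, PyObject* args)
{
    PyObject *obj;
    if (!PyArg_ParseTuple(args, "O!", &PyLong_Type, &obj))
        return NULL;

    /* Copy the Python int into a little-endian 32-byte buffer. */
    unsigned char buf[32]; /* 256 bits */
    if (_PyLong_AsByteArray((PyLongObject *)obj, buf, sizeof buf,
                            /* little_endian = */ 1, /* is_signed = */ 0) < 0)
        return NULL; /* raises OverflowError if the int does not fit */

    uint256 x;
    memcpy(&x, buf, sizeof x); /* assumes a little-endian host */

    x += (uint256) 1;

    /* Convert back the same way. */
    memcpy(buf, &x, sizeof buf);
    return _PyLong_FromByteArray(buf, sizeof buf, 1, 0);
}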

Print UTF-32 string with wprintf

I am migrating some code from using wchar_t to char32_t, and when compiling with the -Werror=pointer-sign flag set, I am getting the following issue:
// main.c
#include <uchar.h>
#include <wchar.h>

int main(void) {
    wprintf(U"some data\n");
}
Compiling: gcc -std=c11 -Werror=pointer-sign main.c
Output:
main.c: In function ‘main’:
main.c:5:10: error: pointer targets in passing argument 1 of ‘wprintf’ differ in signedness [-Werror=pointer-sign]
wprintf(U"some data\n");
^~~~~~~~~~~~~~
In file included from main.c:2:
/usr/include/wchar.h:587:12: note: expected ‘const wchar_t * restrict’ {aka ‘const int * restrict’} but argument is of type ‘unsigned int *’
extern int wprintf (const wchar_t *__restrict __format, ...)
^~~~~~~
To remedy this, I can do:
wprintf((const int *)U"some data\n");
//or
printf("%ls\n", U"some data");
This is quite a pain, though. Is there a nicer, easier way to do this? What is the real difference between const unsigned int * and const signed int *, aside from the data type pointed to? Is this possibly dangerous, or should I just disable the flag altogether?
char32_t is an unsigned type.
wchar_t is either signed or unsigned, depending on implementation. In your case, it is signed.
You can't pass a pointer-to-unsigned where a pointer-to-signed is expected. So yes, you need a type-cast, however you should be casting to const wchar_t *, since that is what wprintf() actually expects (wchar_t just happens to be implemented as an int on your compiler, but don't cast to that directly):
wprintf((const wchar_t *)U"some data\n");
It doesn't get much cleaner than that, unless you wrap it in your own function, eg:
#include <stdarg.h> // va_list, va_start, va_end
#include <uchar.h>  // char32_t
#include <wchar.h>  // vwprintf

int wprintf32(const char32_t *str, ...)
{
    va_list args;
    va_start(args, str);
    int result = vwprintf((const wchar_t *)str, args);
    va_end(args);
    return result;
}
wprintf32(U"some data\n");
Note that this code will not work properly at all on platforms where sizeof(wchar_t) < sizeof(char32_t), such as Windows. On those platforms, where sizeof(wchar_t) is 2, you will have to actually convert your string data from UTF-32 to UTF-16 instead, eg:
int wprintf32(const char32_t *str, ...)
{
    va_list args;
    int result;
    va_start(args, str);
    if (sizeof(wchar_t) < sizeof(char32_t))
    {
        // The converted copy must not shadow the str parameter, or
        // convert_to_utf16() would receive an uninitialized pointer.
        wchar_t *wstr = convert_to_utf16(str); // <-- for you to implement
        result = vwprintf(wstr, args);
        free(wstr); // assumes convert_to_utf16() allocates with malloc()
    }
    else
        result = vwprintf((const wchar_t *)str, args);
    va_end(args);
    return result;
}
wprintf32(U"some data\n");

Freeglut doesn't initialize when using it from Swift

I've tried to use the Freeglut library in a Swift 4 project. When the
void glutInit(int *argcp, char **argv);
function is bridged to Swift, its declaration is
func glutInit(_ pargc: UnsafeMutablePointer<Int32>!, _ argv: UnsafeMutablePointer<UnsafeMutablePointer<Int8>?>!)
Since I don't need the real arguments from the command line, I want to make up the two arguments myself. I tried to define **argv in the Bridging-Header.h file
#include <OpenGL/gl.h>
#include <GL/glut.h>
char ** argv[1] = {"t"};
and use them in main.swift
func main() {
    var argcp: Int32 = 1
    glutInit(&argcp, argv!) // EXC_BAD_ACCESS
    glutInitDisplayMode(UInt32(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH))
    glutCreateWindow("my project")
    glutDisplayFunc(display)
    initOpenGL()
    glutMainLoop()
}
but with that I get Thread 1: EXC_BAD_ACCESS (code=1, address=0x74) at the line with glutInit().
How can I initialize glut properly? How can I get an UnsafeMutablePointer<UnsafeMutablePointer<Int8>?>! so that it works?
The reason the correct C code char * argv[1] = {"t"}; does not work is that Swift imports a fixed-size C array as a tuple, not as a pointer to its first element.
But your char ** argv[1] = {"t"}; is completely wrong: each element of argv needs to be a char **, but you are assigning a char * ("t"). Xcode must have shown you a warning at the first build:
warning: incompatible pointer types initializing 'char **' with an expression of type 'char [2]'
You had better treat incompatible-pointer-types warnings as errors, unless you completely understand what you are doing.
And in general, you should not put definitions that generate actual code or data, such as char * argv[1] = {"t"};, in a header file.
You can do this in Swift instead.
As you know, when you want to pass a pointer to a single element of type T, you declare a var of type T and pass &varName to the function you call. That is what you do with argcp in your code.
Likewise, when you want to pass a pointer to multiple elements of type T, you declare a var of type [T] (Array<T>) and pass &arrName to the function you call. (Ignoring the immutable case, to simplify.)
The parameter argv matches this case, where T == UnsafeMutablePointer<Int8>?. So declare a var of type [UnsafeMutablePointer<Int8>?]:
func main() {
    var argc: Int32 = 1
    var argv: [UnsafeMutablePointer<Int8>?] = [
        strdup("t")
    ]
    defer { argv.forEach { free($0) } }
    glutInit(&argc, &argv)
    //...
}
But I wonder whether you really need to pass anything to glutInit().
You can try something like this:
func main() {
    var argc: Int32 = 0 //<- 0
    glutInit(&argc, nil)
    //...
}
I'm not sure whether freeglut accepts this, but you can find articles on the web saying that it works in some implementations of GLUT.

What does the error "conflicting types for '___'" mean?

I got an error that said "error: conflicting types for '____'". What does that mean?
Quickfix:
Make sure that your functions are declared once and only once before they are called. For example, change:
main(){ myfun(3.4); }
double myfun(double x){ return x; }
To:
double myfun(double x){ return x; }
main(){ myfun(3.4); }
Or add a separate function declaration:
double myfun(double x);
main(){ myfun(3.4); }
double myfun(double x){ return x; }
Possible causes for the error
Function was called before being declared
Function defined overrides a function declared in an included header.
Function was defined twice in the same file
Declaration and definition don't match (see the example after this list)
Declaration conflict in the included headers
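For instance, the "declaration and definition don't match" case can be as small as a prototype and a definition that disagree on a type (a made-up minimal example):

int foo(int x);                    /* declaration says int                */
double foo(double x) { return x; } /* error: conflicting types for 'foo'  */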
What's really going on
error: conflicting types for ‘foo’ means that a function was defined more than once with different type signatures.
A file that includes two functions with the same name but different return types would throw this error, for example:
int foo(){return 1;}
double foo(){return 1.0;}
Indeed, when compiled with GCC we get the following errors:
foo.c:5:8: error: conflicting types for ‘foo’
double foo(){return 1.0;}
^
foo.c:4:5: note: previous definition of ‘foo’ was here
int foo(){return 1;}
^
Now, if instead we had a file with two function definitions sharing the same name and the same type signature:
double foo(){return 1;}
double foo(){return 1.0;}
We would get a 'redefinition' error instead:
foo.c:5:8: error: redefinition of ‘foo’
double foo(){return 1.0;}
^
foo.c:4:8: note: previous definition of ‘foo’ was here
double foo(){return 1;}
^
Implicit function declaration
So why does the following code throw error: conflicting types for ‘foo’?
main(){ foo(); }
double foo(){ return 1.0; }
The reason is implicit function declaration.
When the compiler first encounters foo() in the main function, it will assume a type signature for the function foo of int foo(). By default, implicit functions are assumed to return integers, and the input argument types are derived from what you're passing into the function (in this case, nothing).
Obviously, the compiler is wrong to make this assumption, but the specs for the C (and thus Objective-C) language are old, cranky, and not very clever. Maybe implicitly declaring functions saved some development time by reducing compiler complexity back in the day, but now we're stuck with a terrible feature that should have never made it into the language. In fact, implicit declarations were made illegal in C99.
That said, once you know what's going on, it should be easy to dig out the root cause of your problem.
It's probably because a function with that name already exists in a library you are using. It happened to me with this function:
I was using stdio.h, which already declares a function named getline.
int getline(char s[], int lim)
{
    int c, i;
    for (i = 0; i < lim - 1 && (c = getchar()) != EOF && c != '\n'; ++i)
        s[i] = c;
    if (c == '\n') {
        s[i] = c;
        ++i;
    }
    s[i] = '\0';
    return i;
}
When I changed "getline" to "getlinexxx", gcc compiled it:
int getlinexxx(char s[], int lim)
{
    int c, i;
    for (i = 0; i < lim - 1 && (c = getchar()) != EOF && c != '\n'; ++i)
        s[i] = c;
    if (c == '\n') {
        s[i] = c;
        ++i;
    }
    s[i] = '\0';
    return i;
}
And the problem was gone.
What datatype is '___'?
My guess is that you're trying to initialize a variable of a type that can't accept the initial value. Like saying int i = "hello";
If you're trying to assign it from a call that returns an NSMutableDictionary, that's probably your trouble. Posting the line of code would definitely help diagnose warnings and errors in it.