How to pass values to MEX files - MATLAB

Hi, I want to send a value from MATLAB to my MEX function. The value is generally about 10 digits long, so I have used the unsigned long long data type.
But I have difficulty accessing it from the MEX file.
mxGetPr returns the double type, so is there some type conversion I have to do?

Yes, I just encountered this. You generally shouldn't use mxGetPr anymore. The better way is to first check the type, like this:
if (!mxIsClass(prhs[0], "double"))
{
    mexErrMsgTxt("Data must be of type double!!!\n");
}
Then access the data through (double *)mxGetData(prhs[0]), or in your case (unsigned long long int *)mxGetData(prhs[0]).
You can look up mxIsClass and mxGetData for more info.
Edit: The documentation for mxIsClass also lists the different class-name strings you can test for.
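For the asker's uint64 case specifically, here is a minimal gateway sketch. It assumes the value is passed from MATLAB as uint64(...) (so the class to check for is "uint64" rather than "double"); the message text and printing are just illustrative:

#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    /* Expect a single uint64 scalar, e.g. called from MATLAB as mymex(uint64(1234567890)) */
    if (nrhs < 1 || !mxIsClass(prhs[0], "uint64"))
        mexErrMsgTxt("Input must be of type uint64!\n");

    unsigned long long value = *(unsigned long long *)mxGetData(prhs[0]);
    mexPrintf("Received value: %llu\n", value);
}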

Related

Why are randomly generated numbers UInt32 by default in Swift?

I stumbled upon the fact that when a random number is generated in Swift, it is of type UInt32 by default instead of type Int. What's the reason?
I suspect you are referring to arc4random. This function returns UInt32 because the underlying C function (also called arc4random) returns uint32_t, which is the C equivalent of Swift's UInt32.
I would assume it is done for speed. If you want Int random numbers, take a look at GKRandomSource in GameplayKit: https://developer.apple.com/documentation/gameplaykit/gkrandomsource
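For reference, the underlying C declarations that Swift imports (as found on BSD/macOS, shown slightly simplified) look roughly like this; uint32_t is what Swift surfaces as UInt32:

#include <stdint.h>

uint32_t arc4random(void);
uint32_t arc4random_uniform(uint32_t upper_bound);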

C: cast and dereference a pointer - strict aliasing

In http://blog.regehr.org/archives/1307, the author claims that the following snippet has undefined behavior:
unsigned long bogus_conversion(double d) {
    unsigned long *lp = (unsigned long *)&d;
    return *lp;
}
The argument is based on http://port70.net/~nsz/c/c11/n1570.html#6.5p7, which specifies the allowed access circumstances. However, footnote 88 for this bullet point says the list is only for checking aliasing, so I think this snippet is fine, assuming sizeof(long) == sizeof(double).
My question is whether the above snippet is allowed.
The snippet is erroneous, but not because of aliasing. First, there is a simple rule that says it is wrong to dereference a pointer to an object through a type different from its effective type. Here the effective type is double, so the access is an error.
This safeguard is in the standard because the bit representation of a double need not be a valid representation for unsigned long, although such a mismatch would be quite exotic nowadays.
Second, from a more practical point of view, double and unsigned long may have different alignment properties, and accessing the object that way may produce a bus error or simply incur a run-time penalty.
Generally, casting pointers like that is almost always wrong, has no defined behavior, is bad style, and is mostly useless anyhow. Focusing on aliasing when arguing about these problems is a bad habit that probably originates in incomprehensible and scary gcc warnings.
If you really want to know the bit representation of some type, there are some exceptions to the "effective type" rule. There are two portable solutions that are well defined by the C standard:
Use unsigned char* and inspect the bytes (a sketch of this appears at the end of this answer).
Use a union that comprises both types, store the value through one member and read it through the other. By doing that you are telling the compiler that you want an object that can be seen through both types. But here you should not use unsigned long as the target type but uint64_t, since you have to be sure that the size is exactly what you think it is and that there are no trap representations.
To illustrate that, here is the same function as in the question but with defined behavior.
unsigned long valid_conversion(double d) {
    union {
        unsigned long ul;
        double d;
    } ub = { .d = d, };
    return ub.ul;
}
My compiler (gcc on Debian, nothing fancy) compiles this to exactly the same assembler as the code in the question; the difference is that you know this code is portable.
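For completeness, a minimal sketch of the first option, inspecting the bytes through unsigned char* (character types may alias any object); the function name and the hex dump are purely illustrative:

#include <stdio.h>

void dump_bytes(double d)
{
    const unsigned char *p = (const unsigned char *)&d;
    for (size_t i = 0; i < sizeof d; ++i)
        printf("%02x ", p[i]);    /* print the object representation byte by byte */
    printf("\n");
}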

Working with opaque types (Char and Long)

I'm trying to export a Scala implementation of an algorithm for use in JavaScript. I'm using @JSExport. The algorithm works with Scala Char and Long values, which are marked as opaque in the interoperability guide.
I'd like to know (a) what this means; and (b) what the recommendation is for dealing with this.
I presume it means I should avoid Char and Long and work with String plus a run-time check on length (or perhaps use a shapeless Sized collection) and Int instead.
But other ideas welcome.
More detail...
The kind of code I'm looking at is:
#JSExport("Foo")
class Foo(val x: Int) {
#JSExport("add")
def add(n: Int): Int = x+n
}
...which works just as expected: new Foo(1).add(2) produces 3.
Replacing the types with Long, the same call reports:
java.lang.ClassCastException: 1 is not an instance of scala.scalajs.runtime.RuntimeLong (and something similar with methods that take and return Char).
Being opaque means that
There is no corresponding JavaScript type
There is no way to create a value of that type from JavaScript (except if there is an @JSExported constructor)
There is no way of manipulating a value of that type (other than calling @JSExported methods and fields)
It is still possible to receive a value of that type from Scala.js code, pass it around, and give it back to Scala.js code. It is also always possible to call .toString(), because java.lang.Object.toString() is @JSExported. Besides toString(), neither Char nor Long export anything, so you can't do anything else with them.
Hence, as you have experienced, a JavaScript 1 cannot be used as a Scala.js Long, because it's not of the right type. Neither is 'a' a valid Char (but it's a valid String).
Therefore, as you have inferred yourself, you must indeed avoid opaque types, and use other types instead if you need to create/manipulate them from JavaScript. The Scala.js side can convert back and forth using the standard tools in the language, such as someChar.toInt and someInt.toChar.
The choice of which type is best depends on your application. For Char, it could be Int or String. For Long, it could be String, a pair of Ints, or possibly even Double if the possible values never use more than 52 bits of precision.

Problems with using Delphi data types with a C DLL

I am trying to use a .dll which has been written in C (although it wraps around a MATLAB .dll).
The function I am trying to use is defined in C as:
__declspec(dllexport) int ss_scaling_subtraction(double* time, double** signals, double* amplitudes, int nSamples, int nChannels, double* intensities);
The .dll requires, amongst other things, a 2-dimensional array. When I tried to use
array of array of double
in the declaration, the compiler gave an error, so I defined my own data type:
T2DArray = array of array of double;
I import the .dll function in a unit like so:
function ss_scaling_subtraction(const time: array of double; const signals: T2DArray; const amplituides: array of double; const nSamples: integer; const nChannels: integer; var intensities: array of double): integer; cdecl; external 'StirScanDLL.dll';
However, when I call this function, I get an access violation from the .dll.
Creating a new data type
T1DArray = array of double;
and changing
array of double
to
T1DArray
in the declaration seems to make things run, but the result is still not correct.
I have read on here that it can be dangerous to pass Delphi data types to DLLs coded in a different language, so I thought this might be causing the issue.
But how do I NOT use a Delphi data type when I HAVE to use one to properly declare the function in the first place?!
Extra info: I have already opened the MATLAB runtime compiler libs and opened the entry point to the StirScanDLL.dll.
The basic problem here is one of binary interop mismatch. Simply put, a pointer to an array is not the same thing at a binary level as a Delphi open array parameter. Whilst they both semantically represent an array, the binary representation differs.
The C function is declared as follows:
__declspec(dllexport) int ss_scaling_subtraction(
    double* time,
    double** signals,
    double* amplitudes,
    int nSamples,
    int nChannels,
    double* intensities
);
Declare your function like so in Delphi:
function ss_scaling_subtraction(
    time: PDouble;
    signals: PPDouble;
    amplitudes: PDouble;
    nSamples: Integer;
    nChannels: Integer;
    intensities: PDouble
): Integer; cdecl; external 'StirScanDLL.dll';
If you find that PPDouble is not declared, define it thus:
type
  PPDouble = ^PDouble;
That is, pointer to pointer to double.
Now what remains is to call the functions. Declare your arrays in Delphi as dynamic arrays. Like this:
var
  time, amplitudes, intensities: TArray<Double>;
  signals: TArray<TArray<Double>>;
If you have an older pre-generics Delphi then declare some types:
type
  TDoubleArray = array of Double;
  T2DDoubleArray = array of TDoubleArray;
Then declare the variables with the appropriate types.
Next you need to allocate the arrays, and populate any that have data passing from caller to callee.
SetLength(time, nSamples); // I'm guessing here as to the length
SetLength(signals, nSamples, nChannels); // again, guessing
Finally it is time to call the function. Now it turns out that the good designers of Delphi arranged for dynamic arrays to be stored as pointers to the first element. That means that they are a simple cast away from being used as parameters.
retval := ss_scaling_subtraction(
  PDouble(time),
  PPDouble(signals),
  PDouble(amplitudes),
  nSamples,
  nChannels,
  PDouble(intensities)
);
Note that the casting of the dynamic arrays seen here does rely on an implementation detail. So, some people might argue that it would be better to use, for instance, @time[0] and so on for the one-dimensional arrays, and to create an array of PDouble for the signals and copy over the addresses of the first elements of the inner arrays. Personally I am comfortable with relying on this implementation detail. It certainly makes the coding a lot simpler.
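To make the layout concrete, here is a purely hypothetical C callee with the same double** shape. This is not the real StirScanDLL code, just a sketch of why an array of row pointers is what the C side ends up dereferencing:

/* Hypothetical callee, for illustration only: "rows" must be an array of
   nRows pointers, each pointing to a row of nCols doubles. */
double sum2d(double** rows, int nRows, int nCols)
{
    double total = 0.0;
    for (int r = 0; r < nRows; ++r)        /* rows[r] is itself a pointer */
        for (int c = 0; c < nCols; ++c)
            total += rows[r][c];           /* which is then indexed into */
    return total;
}

A Delphi array of array of Double allocated with SetLength has exactly that shape: the outer array holds one pointer per inner row, which is why the PPDouble cast lines up with double**.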
One final piece of advice. Interop can be tricky. It's easy to get wrong. When you get it wrong, the code compiles, but then dies horribly at runtime. With cryptic error messages. Leading to much head scratching.
So, start with the simplest possible interface. A function that receives scalar parameters. Say, receives an integer, and returns an integer. Prove that you can do that. Then move on to floating point scalars. Then one dimensional arrays. Finally two dimensional arrays. Each step along the way, build up the complexity. When you hit a problem you'll know that it is down to the most recently added parameter.
You've not taken that approach. You've gone straight for the kill and implemented everything in your first attempt. And when it fails, you've no idea where to look. Break a problem into small pieces, and build the more complex problem out of those smaller pieces.

Ambiguity of function overloading - Integers vs. Doubles

Suppose I wish to have 2 functions, one that generates a random integer within a given range, and one that generates a random double within a given range.
int GetRandomNumber( int min, int max );
double GetRandomNumber( double min, double max );
Notice that the method names are the same. I'm trying to decide whether to name the functions that or...
int GetRandomInteger( int min, int max );
double GetRandomDouble( double min, double max );
The first option has the benefit of the user not having to worry about which one they are calling. They can just call GetRandomNumber with integers or doubles and get a result.
The second option is more explicit in the names, but it reveals unneeded information to the caller.
I know this is petty, but I care about petty things.
Edit: How would C++ behave regarding implicit conversion?
Example:
GetRandomNumber( 1, 1 );
The arguments could be implicitly converted so that the double version of GetRandomNumber is called. Obviously I don't want this to occur. Will C++ prefer the int version over the double version?
I prefer your second example, it is explicit and leaves no ambiguity in interpretation. It is better to err on the side of being explicit in method names to clearly illuminate the purpose and function of that member.
The only downside to your approach is that you have coupled the name of the method to the return type, which is not ideal in the event that you want to change the return type of one of these methods. However, in that case it would be better to add a new method and not break compatibility in your API anyway.
I prefer the second version. I like overloading a function when ultimately the two functions do the same thing; in this case they return different types so they're not quite the same thing. Another possibility if your language supports it is to define it as a generic/template method, like:
T GetRandom<T>(T min, T max);
A function name should tell what the function does; I do not see a point in cramming the types into the names. So definitely go for overloading - that's what it is for.
I prefer the overloaded method. Look at the Math.Min() method in .NET. It has 11 overloads: int, double, byte, etc.
I usually prefer the first example because it doesn't pollute the namespace. For example when calling it, if I pass ints, I'm expecting to get back an int. If I pass in doubles, I probably expect to get back a double. The compiler will give you an error if you write:
//this causes an error
double d = GetRandomNumber(1,10);
so it's not really a big issue. And you can always cast the arguments if you need an int but have doubles for input...
In some languages you cannot vary the return type of overloaded functions; that would require the second example.
Assuming C++, the second also avoids problems with ambiguity. If you said:
GetRandomNumber( 1, 5.0 );
which one should be used? In fact, it is a compilation error.
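A small sketch (using the names from the question; the bodies are throwaway stubs) showing how the resolution plays out:

#include <cstdlib>

int GetRandomNumber(int min, int max)
{
    return min + std::rand() % (max - min + 1);
}

double GetRandomNumber(double min, double max)
{
    return min + (max - min) * (std::rand() / static_cast<double>(RAND_MAX));
}

int main()
{
    GetRandomNumber(1, 10);      // exact match: the int overload is chosen
    GetRandomNumber(1.0, 10.0);  // exact match: the double overload is chosen
    // GetRandomNumber(1, 5.0);  // error: ambiguous - neither overload is better
                                 // for both arguments
}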
I think the ideal solution would be
Int32.GetRandom(int min, int max)
Double.GetRandom(double min, double max)
but, alas, static extension methods are not possible (yet?).
The .NET Framework seems to favor the first option (System.Math class):
public static decimal Abs(decimal value)
public static int Abs(int value)
Like Andrew, I would personally favor the second option to avoid ambiguity, although I do think this is a matter of taste.