Why are randomly generated numbers UInt32 by default in Swift?

I stumbled upon the fact that when a random number is generated in Swift, it is of type UInt32 by default instead of type Int.
What's the reason?

I suspect you are referring to arc4random. This function returns UInt32 because the underlying C function (also called arc4random) returns uint32_t, which is the C equivalent of Swift's UInt32.
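If you want an Int, you can convert the result explicitly (a minimal sketch of my own, not from the original answer):
import Foundation
// arc4random_uniform(n) returns a UInt32 in 0..<n. Converting with Int(...)
// is safe here because the result is always smaller than the upper bound.
let upperBound: UInt32 = 100
let value: Int = Int(arc4random_uniform(upperBound))
print(value) // some Int in 0..<100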

I would assume it's for speed. If you want Int random numbers, take a look at GKRandomSource in GameplayKit: https://developer.apple.com/documentation/gameplaykit/gkrandomsource
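A minimal GameplayKit sketch (my own illustration, assuming you just want an Int below some upper bound):
import GameplayKit
// GKRandomSource.sharedRandom() returns a shared random source whose
// nextInt(upperBound:) already produces an Int, so no UInt32 conversion is needed.
let roll = GKRandomSource.sharedRandom().nextInt(upperBound: 6) // Int in 0...5
print("Rolled \(roll)")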

Related

Precondition failed: Negative count not allowed

Error:
Precondition failed: Negative count not allowed: file /BuildRoot/Library/Caches/com.apple.xbs/Sources/swiftlang/swiftlang-900.0.74.1/src/swift/stdlib/public/core/StringLegacy.swift, line 49
Code:
String(repeating: "a", count: -1)
Thinking:
Well, it doesn't make sense to repeat some string a negative number of times. Since we have types in Swift, why not use a UInt?
Here we have some documentation about it:
Use UInt only when you specifically need an unsigned integer type with the same size as the platform's native word size. If this isn't the case, Int is preferred, even when the values to be stored are known to be nonnegative. A consistent use of Int for integer values aids code interoperability, avoids the need to convert between different number types, and matches integer type inference, as described in Type Safety and Type Inference.
Apple Docs
OK, so Int is preferred and the API is just following the rules, but why is the String API designed like that? Why isn't this initializer private, with a public one taking a UInt or something like that? Is there a "real" reason? Is this some "undefined behavior" kind of thing?
Also: https://forums.developer.apple.com/thread/98594
This isn't undefined behavior; in fact, a precondition indicates the exact opposite: an explicit check was made to ensure that the given count is nonnegative.
As to why the parameter is an Int and not a UInt — this is a consequence of two decisions made early in the design of Swift:
Unlike C and Objective-C, Swift does not allow implicit (or even explicit) casting between integer types. You cannot pass an Int to a function which takes a UInt, and vice versa, nor will the following cast succeed: myInt as? UInt. Swift's preferred method of converting is using initializers: UInt(myInt)
Since Ints are more generally applicable than UInts, they are the preferred integer type.
As such, since converting between Ints and UInts can be cumbersome and verbose, the easiest way to interoperate between the largest number of APIs is to write them all in terms of the common integer currency type: Int. As the docs you quote mention, this "aids code interoperability, avoids the need to convert between different number types, and matches integer type inference"; trapping at runtime on invalid input is a tradeoff of this decision.
In fact, Int is so strongly ingrained in Swift that when Apple framework interfaces are imported into Swift from Objective-C, NSUInteger parameters and return types are converted to Int and not UInt, for significantly easier interoperability.
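A short sketch of what that looks like in practice (my own illustration, not part of the original answer):
import Foundation
// Conversions between integer types are always explicit, via initializers.
let unsigned: UInt = 3
let repeated = String(repeating: "a", count: Int(unsigned)) // "aaa"
// Imported Objective-C APIs use Int even where the original type is NSUInteger:
let array: NSArray = [1, 2, 3]
let count: Int = array.count // count is NSUInteger in Objective-C
// Clamping keeps a computed count from ever hitting the negative-count precondition.
let requested = -1
let safe = String(repeating: "a", count: max(requested, 0)) // ""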

Swift syntax: UnsafeMutablePointers in CGPDFDocument.getVersion

Can anyone explain how I'm supposed to use the method 'getVersion' for CGPDFDocument in Swift?
Apple's documentation gives:
func getVersion(majorVersion: UnsafeMutablePointer<Int32>,
                minorVersion: UnsafeMutablePointer<Int32>)
"On return, the values of the majorVersion and minorVersion parameters are set to the major and minor version numbers of the document respectively."
So I supply two variables as arguments of the function, and they get filled with the values on exit? Do they need to point to something in particular before the method is called? Why not just type them as integers, if that's what the returned values are?
You use it like this:
var major: Int32 = 0
var minor: Int32 = 0
document.getVersion(majorVersion: &major, minorVersion: &minor)
print("Version: \(major).\(minor)")
The function expects pointers, but if you pass in plain Int32 variables with the & operator, the Swift compiler is smart enough to call the function with pointers to the variables. This is documented in Using Swift with Cocoa and Objective-C: Interacting with C APIs.
The main reason the function works like this is probably that it's a very old C function that has been imported into Swift. C doesn't support tuples as return values; using pointers as in-out parameters is a way to have the function return more than one value. Arguably, it would have been a better design to define a custom struct for the return type so that the function could return the two values in a single type, but the original developers of this function apparently didn't think it was necessary, perhaps unsurprisingly, because this pattern is very common in C.
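To illustrate that point, here is a hypothetical Swift wrapper (my own sketch, not part of Core Graphics) that hides the out-parameters behind a tuple:
import CoreGraphics
extension CGPDFDocument {
    // Calls the imported C-style API and repackages the two out-parameters
    // as a single tuple, which is closer to what a Swift-first design might return.
    var pdfVersion: (major: Int32, minor: Int32) {
        var major: Int32 = 0
        var minor: Int32 = 0
        getVersion(majorVersion: &major, minorVersion: &minor)
        return (major, minor)
    }
}
// Usage: let (major, minor) = document.pdfVersion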

Problems with using Delphi data types with a C DLL

I am trying to use a .dll which has been written in C (although it wraps around a MATLAB .dll).
The function I am trying to use is defined in C as:
__declspec(dllexport) int ss_scaling_subtraction(double* time, double** signals, double* amplitudes, int nSamples, int nChannels, double* intensities);
The .dll requires, amongst other things, a 2-dimensional array. When I tried to use:
Array of array of double
in the declaration, the compiler gave an error, so I defined my own data type:
T2DArray = Array of array of double;
I initialise the .dll function in a unit like so:
function ss_scaling_subtraction(const time: array of double; const signals: T2DArray; const amplitudes: array of double; const nSamples: integer; const nChannels: integer; var intensities: array of double): integer; cdecl; external 'StirScanDLL.dll';
However, when calling this function, I get an access violation from the .dll.
Creating a new data type
T1DArray = array of double;
and changing array of double to T1DArray in the declaration seems to make things run, but the result is still not correct.
I have read on here that it can be dangerous to pass Delphi data types to DLLs coded in a different language, so I thought this might be causing the issue.
But how do I NOT use a Delphi data type when I HAVE to use it to properly declare the function in the first place?!
Extra info: I have already opened the MATLAB runtime compiler libs and opened the entry point to the StirScanDLL.dll.
The basic problem here is one of binary interop mismatch. Simply put, a pointer to an array is not the same thing at a binary level as a Delphi open array parameter. Whilst they both semantically represent an array, the binary representation differs.
The C function is declared as follows:
__declspec(dllexport) int ss_scaling_subtraction(
    double* time,
    double** signals,
    double* amplitudes,
    int nSamples,
    int nChannels,
    double* intensities
);
Declare your function like so in Delphi:
function ss_scaling_subtraction(
    time: PDouble;
    signals: PPDouble;
    amplitudes: PDouble;
    nSamples: Integer;
    nChannels: Integer;
    intensities: PDouble
): Integer; cdecl; external 'StirScanDLL.dll';
If you find that PPDouble is not declared, define it thus:
type
  PPDouble = ^PDouble;
That is, pointer to pointer to double.
Now what remains is to call the functions. Declare your arrays in Delphi as dynamic arrays. Like this:
var
  time, amplitudes, intensities: TArray<Double>;
  signals: TArray<TArray<Double>>;
If you have an older pre-generics Delphi then declare some types:
type
  TDoubleArray = array of Double;
  T2DDoubleArray = array of TDoubleArray;
Then declare the variables with the appropriate types.
Next you need to allocate the arrays, and populate any that have data passing from caller to callee.
SetLength(time, nSamples); // I'm guessing here as to the length
SetLength(signals, nSamples, nChannels); // again, guessing
Finally it is time to call the function. Now it turns out that the good designers of Delphi arranged for dynamic arrays to be stored as pointers to the first element. That means that they are a simple cast away from being used as parameters.
retval := ss_scaling_subtraction(
  PDouble(time),
  PPDouble(signals),
  PDouble(amplitudes),
  nSamples,
  nChannels,
  PDouble(intensities)
);
Note that the casting of the dynamic arrays seen here does rely on an implementation detail. So, some people might argue that it would be better to use, for instance, @time[0] and so on for the one-dimensional arrays, and to create an array of PDouble for the signals and copy over the addresses of the first elements of the inner arrays. Personally I am comfortable with relying on this implementation detail. It certainly makes the coding a lot simpler.
One final piece of advice. Interop can be tricky. It's easy to get wrong. When you get it wrong, the code compiles, but then dies horribly at runtime. With cryptic error messages. Leading to much head scratching.
So, start with the simplest possible interface. A function that receives scalar parameters. Say, receives an integer, and returns an integer. Prove that you can do that. Then move on to floating point scalars. Then one dimensional arrays. Finally two dimensional arrays. Each step along the way, build up the complexity. When you hit a problem you'll know that it is down to the most recently added parameter.
You've not taken that approach. You've gone straight for the kill and implemented everything in your first attempt. And when it fails, you've no idea where to look. Break a problem into small pieces, and build the more complex problem out of those smaller pieces.

How to pass values to mex files

Hi, I want to send a value from my MATLAB code to my MEX function. The value is generally about 10 digits long and I have used the unsigned long long data type.
But I have difficulty accessing it from the MEX file.
mxGetPr returns a pointer to double, so is there some type conversion I have to do?
Yes, I just encountered this. You shouldn't use mxGetPr anymore in general. The better way to do it is to first check the type like this:
if (!mxIsClass(prhs[0], "double"))
{
    mexErrMsgTxt("Data must be of type double!!!\n");
}
Then access the data through (double *)mxGetData(prhs[0]), or in your case (unsigned long long *)mxGetData(prhs[0]) (checking for the "uint64" class instead of "double" in that case).
You can look up mxIsClass and mxGetData for more info.
Edit: Also here's a list of the different types for mxIsClass

Ambiguity of function overloading - Integers vs. Doubles

Suppose I wish to have 2 functions, one that generates a random integer within a given range, and one that generates a random double within a given range.
int GetRandomNumber( int min, int max );
double GetRandomNumber( double min, double max );
Notice that the method names are the same. I'm trying to decide whether to name the functions that or...
int GetRandomInteger( int min, int max );
double GetRandomDouble( double min, double max );
The first option has the benefit of the user not having to worry about which one they are calling. They can just call GetRandomNumber with integers or doubles and get a result.
The second option is more explicit in the names, but it reveals unneeded information to the caller.
I know this is petty, but I care about petty things.
Edit: How would C++ behave regarding implicit conversion?
Example:
GetRandomNumber( 1, 1 );
The integer arguments could be implicitly converted to call the double version of GetRandomNumber. Obviously I don't want this to occur. Will C++ prefer the int version over the double version?
I prefer your second example; it is explicit and leaves no ambiguity in interpretation. It is better to err on the side of being explicit in method names to clearly illuminate the purpose and function of that member.
The only downside to your approach is that you have coupled the name of the method to the return type, which is not ideal in the event that you want to change the return type of one of these methods. However, in that case it would be better to add a new method and not break compatibility in your API anyway.
I prefer the second version. I like overloading a function when ultimately the two functions do the same thing; in this case they return different types so they're not quite the same thing. Another possibility if your language supports it is to define it as a generic/template method, like:
T GetRandom<T>(T min, T max);
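As a sketch of that generic idea in Swift (my own illustration; the original discussion is about C++/.NET, so this is only a comparison), the concrete type is inferred from the arguments:
// One generic function covers all fixed-width integer types (assumes min <= max)...
func getRandomNumber<T: FixedWidthInteger>(min: T, max: T) -> T {
    return T.random(in: min...max)
}
// ...and an overload covers floating-point types.
func getRandomNumber<T: BinaryFloatingPoint>(min: T, max: T) -> T
    where T.RawSignificand: FixedWidthInteger {
    return T.random(in: min...max)
}
let i = getRandomNumber(min: 1, max: 10)     // T is Int here
let d = getRandomNumber(min: 1.0, max: 10.0) // T is Double here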
A function name should tell what the function does; I do not see a point in cramming the types into the names. So definitely go for overloading - that's what it is for.
I prefer the overloaded method. Look at the Math.Min() method in .NET. It has 11 overloads: int, double, byte, etc.
I usually prefer the first example because it doesn't pollute the namespace. For example when calling it, if I pass ints, I'm expecting to get back an int. If I pass in doubles, I probably expect to get back a double. The compiler will give you an error if you write:
//this causes an error
double d = GetRandomNumber(1,10);
so it's not really a big issue. and you can always cast the arguments if you need an int but have doubles for input...
In some languages you cannot vary the return type of overloaded functions; this would require the second example.
Assuming C++, the second also avoids problems with ambiguity. If you said:
GetRandomNumber( 1, 5.0 );
which one should be used? In fact, it is a compilation error.
I think the ideal solution would be
Int32.GetRandom(int min, int max)
Double.GetRandom(double min, double max)
but, alas, static extension methods are not possible (yet?).
The .NET Framework seems to favor the first option (System.Math class):
public static decimal Abs(decimal value)
public static int Abs(int value)
Like Andrew, I would personally favor the second option to avoid ambiguity, although I do think this is a matter of taste.