I'm working with a C API from Swift, and for one of the methods that I need to call I need to pass an
UnsafeMutablePointer<UnsafeMutablePointer<UnsafeMutablePointer<Int8>>>
More Info:
Swift Interface:
public func presage_predict(prsg: presage_t, _ result: UnsafeMutablePointer<UnsafeMutablePointer<UnsafeMutablePointer<Int8>>>) -> presage_error_code_t
Original C:
presage_error_code_t presage_predict(presage_t prsg, char*** result);
Generally, if a function takes an UnsafePointer<T> parameter,
then you can pass a variable of type T as an "inout" argument with &. In your case, T is
UnsafeMutablePointer<UnsafeMutablePointer<Int8>>
which is the Swift mapping of char **. So you can call the C function
as
var prediction : UnsafeMutablePointer<UnsafeMutablePointer<Int8>> = nil
if presage_predict(prsg, &prediction) == PRESAGE_OK { ... }
From the documentation and sample code of the Presage library I
understand that this allocates an array of strings and assigns the
address of this array to the variable pointed to by prediction.
To avoid a memory leak, these strings have to be released eventually
with
presage_free_string_array(prediction)
To demonstrate that this actually works, I have taken the first
part of the demo code at presage_c_demo.c and translated it
to Swift:
// Duplicate the C strings to avoid premature deallocation:
let past = strdup("did you not sa")
let future = strdup("")
func get_past_stream(arg: UnsafeMutablePointer<Void>) -> UnsafePointer<Int8> {
return UnsafePointer(past)
}
func get_future_stream(arg: UnsafeMutablePointer<Void>) -> UnsafePointer<Int8> {
return UnsafePointer(future)
}
var prsg = presage_t()
presage_new(get_past_stream, nil, get_future_stream, nil, &prsg)
var prediction : UnsafeMutablePointer<UnsafeMutablePointer<Int8>> = nil
if presage_predict(prsg, &prediction) == PRESAGE_OK {
for var i = 0; prediction[i] != nil; i++ {
// Convert C string to Swift `String`:
let pred = String.fromCString(prediction[i])!
print ("prediction[\(i)]: \(pred)")
}
presage_free_string_array(prediction)
}
free(past)
free(future)
This actually worked and produced the output
prediction[0]: say
prediction[1]: said
prediction[2]: savages
prediction[3]: saw
prediction[4]: sat
prediction[5]: same
There may be a better way, but this runs in a playground and defines a value r with the type you want:
func ptrFromAddress<T>(p:UnsafeMutablePointer<T>) -> UnsafeMutablePointer<T>
{
return p
}
var myInt:Int8 = 0
var p = ptrFromAddress(&myInt)
var q = ptrFromAddress(&p)
var r = ptrFromAddress(&q)
What's the point of defining ptrFromAddress, which seems like it does nothing? My thinking is that the section of the Swift interop book which discusses mutable pointers shows many ways to initialize them by passing some expression as an argument (like &x), but does not seem to show corresponding ways where you simply call UnsafeMutablePointer's initializer. So let's define a no-op function just to use those special initialization methods based on argument passing.
Update:
While I believe the method above is correct, it was pointed out by @alisoftware in another forum that this seems to be a safer and more idiomatic way to do the same thing:
var myInt: Int8 = 0
withUnsafeMutablePointer(&myInt) { (var p) in
withUnsafeMutablePointer(&p) { (var pp) in
withUnsafeMutablePointer(&pp) { (var ppp) in
// Do stuff with ppp which is a UnsafeMutablePointer<UnsafeMutablePointer<UnsafeMutablePointer<Int8>>>
}
}
}
It's more idiomatic because you're using the function withUnsafeMutablePointer which is supplied by the Swift standard library, rather than defining your own helper. It's safer because you are guaranteed that the UnsafeMutablePointer is only alive during the extent of the call to the closure (so long as the closure itself does not store the pointer).
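To illustrate that lifetime guarantee with a small sketch of my own (same Swift 2-era syntax as the snippet above), do all the work with the pointer inside the closure and don't let it escape:
var myInt: Int8 = 0
// Fine: the pointer is only used while the closure runs.
let currentValue = withUnsafeMutablePointer(&myInt) { p in p.memory }
// Risky: storing the pointer and using it after the call is exactly the
// kind of escape the closure-based API is meant to discourage.
var escaped: UnsafeMutablePointer<Int8> = nil
withUnsafeMutablePointer(&myInt) { p in escaped = p }
// Don't dereference escaped here.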
Related
I got some unexpected behavior using UnsafeMutablePointer on an observed property in a struct I created (on Xcode 10.1, Swift 4.2). See the following playground code:
struct NormalThing {
var anInt = 0
}
struct IntObservingThing {
var anInt: Int = 0 {
didSet {
print("I was just set to \(anInt)")
}
}
}
var normalThing = NormalThing(anInt: 0)
var ptr = UnsafeMutablePointer(&normalThing.anInt)
ptr.pointee = 20
print(normalThing.anInt) // "20\n"
var intObservingThing = IntObservingThing(anInt: 0)
var otherPtr = UnsafeMutablePointer(&intObservingThing.anInt)
// "I was just set to 0."
otherPtr.pointee = 20
print(intObservingThing.anInt) // "0\n"
Seemingly, modifying the pointee on an UnsafeMutablePointer to an observed property doesn't actually modify the value of the property. Also, the act of assigning the pointer to the property fires the didSet action. What am I missing here?
Any time you see a construct like UnsafeMutablePointer(&intObservingThing.anInt), you should be extremely wary about whether it'll exhibit undefined behaviour. In the vast majority of cases, it will.
First, let's break down exactly what's happening here. UnsafeMutablePointer doesn't have any initialisers that take inout parameters, so what initialiser is this calling? Well, the compiler has a special conversion that allows a & prefixed argument to be converted to a mutable pointer to the 'storage' referred to by the expression. This is called an inout-to-pointer conversion.
For example:
func foo(_ ptr: UnsafeMutablePointer<Int>) {
ptr.pointee += 1
}
var i = 0
foo(&i)
print(i) // 1
The compiler inserts a conversion that turns &i into a mutable pointer to i's storage. Okay, but what happens when i doesn't have any storage? For example, what if it's computed?
func foo(_ ptr: UnsafeMutablePointer<Int>) {
ptr.pointee += 1
}
var i: Int {
get { return 0 }
set { print("newValue = \(newValue)") }
}
foo(&i)
// prints: newValue = 1
This still works, so what storage is being pointed to by the pointer? To solve this problem, the compiler:
Calls i's getter, and places the resultant value into a temporary variable.
Gets a pointer to that temporary variable, and passes that to the call to foo.
Calls i's setter with the new value from the temporary.
Effectively doing the following:
var j = i // calling `i`'s getter
foo(&j)
i = j // calling `i`'s setter
It should hopefully be clear from this example that this imposes an important constraint on the lifetime of the pointer passed to foo – it can only be used to mutate the value of i during the call to foo. Attempting to escape the pointer and using it after the call to foo will result in a modification of only the temporary variable's value, and not i.
For example:
func foo(_ ptr: UnsafeMutablePointer<Int>) -> UnsafeMutablePointer<Int> {
return ptr
}
var i: Int {
get { return 0 }
set { print("newValue = \(newValue)") }
}
let ptr = foo(&i)
// prints: newValue = 0
ptr.pointee += 1
ptr.pointee += 1 takes place after i's setter has been called with the temporary variable's new value, so it has no effect.
Worse than that, it exhibits undefined behaviour, as the compiler doesn't guarantee that the temporary variable will remain valid after the call to foo has ended. For example, the optimiser could de-initialise it immediately after the call.
Okay, but as long as we only get pointers to variables that aren't computed, we should be able to use the pointer outside of the call it was passed to, right? Unfortunately not; it turns out there are lots of other ways to shoot yourself in the foot when escaping inout-to-pointer conversions!
To name just a few (there are many more!):
A local variable is problematic for a similar reason to our temporary variable from earlier – the compiler doesn't guarantee that it will remain initialised until the end of the scope it's declared in. The optimiser is free to de-initialise it earlier.
For example:
func bar() {
var i = 0
let ptr = foo(&i)
// Optimiser could de-initialise `i` here.
// ... making this undefined behaviour!
ptr.pointee += 1
}
A stored variable with observers is problematic because under the hood it's actually implemented as a computed variable that calls its observers in its setter.
For example:
var i: Int = 0 {
willSet(newValue) {
print("willSet to \(newValue), oldValue was \(i)")
}
didSet(oldValue) {
print("didSet to \(i), oldValue was \(oldValue)")
}
}
is essentially syntactic sugar for:
var _i: Int = 0
func willSetI(newValue: Int) {
print("willSet to \(newValue), oldValue was \(i)")
}
func didSetI(oldValue: Int) {
print("didSet to \(i), oldValue was \(oldValue)")
}
var i: Int {
get {
return _i
}
set {
willSetI(newValue: newValue)
let oldValue = _i
_i = newValue
didSetI(oldValue: oldValue)
}
}
A non-final stored property on classes is problematic as it can be overridden by a computed property.
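For example, here is a minimal sketch of my own showing how a subclass can replace the stored property with a computed one:
class Base {
    var x = 0 // a non-final stored property
}
class Derived: Base {
    // The subclass overrides the stored property with a computed one,
    // so "the storage of x" is not something the compiler can safely hand out a pointer to.
    override var x: Int {
        get { return super.x }
        set {
            print("overridden setter: \(newValue)")
            super.x = newValue
        }
    }
}
let d: Base = Derived()
d.x = 5 // calls the overridden (computed) setter, even through a Base reference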
And this isn't even considering cases that rely on implementation details within the compiler.
For this reason, the compiler only guarantees stable and unique pointer values from inout-to-pointer conversions on stored global and static stored variables without observers. In any other case, attempting to escape and use a pointer from an inout-to-pointer conversion after the call it was passed to will lead to undefined behaviour.
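To make the guaranteed case concrete, here is a sketch of my own (escapePointer is just a made-up helper in the same spirit as foo):
var someGlobal = 0 // a stored global variable with no observers
// escapePointer simply hands the converted pointer back to the caller.
func escapePointer(_ ptr: UnsafeMutablePointer<Int>) -> UnsafeMutablePointer<Int> {
    return ptr
}
// This is the one case described above where escaping is guaranteed to be well-defined:
// the pointer really does point at someGlobal's storage.
let globalPtr = escapePointer(&someGlobal)
globalPtr.pointee = 42
print(someGlobal) // 42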
Okay, but how does my example with the function foo relate to your example of calling an UnsafeMutablePointer initialiser? Well, UnsafeMutablePointer has an initialiser that takes an UnsafeMutablePointer argument (as a result of conforming to the underscored _Pointer protocol which most standard library pointer types conform to).
This initialiser is effectively the same as the foo function – it takes an UnsafeMutablePointer argument and returns it. Therefore when you do UnsafeMutablePointer(&intObservingThing.anInt), you're escaping the pointer produced from the inout-to-pointer conversion – which, as we've discussed, is only valid if it's used on a stored global or static variable without observers.
So, to wrap things up:
var intObservingThing = IntObservingThing(anInt: 0)
var otherPtr = UnsafeMutablePointer(&intObservingThing.anInt)
// "I was just set to 0."
otherPtr.pointee = 20
is undefined behaviour. The pointer produced from the inout-to-pointer conversion is only valid for the duration of the call to UnsafeMutablePointer's initialiser. Attempting to use it afterwards results in undefined behaviour. As matt demonstrates, if you want scoped pointer access to intObservingThing.anInt, you want to use withUnsafeMutablePointer(to:).
I'm actually currently working on implementing a warning (which will hopefully transition to an error) that will be emitted on such unsound inout-to-pointer conversions. Unfortunately I haven't had much time lately to work on it, but all things going well, I'm aiming to start pushing it forwards in the new year, and hopefully get it into a Swift 5.x release.
In addition, it's worth noting that while the compiler doesn't currently guarantee well-defined behaviour for:
var normalThing = NormalThing(anInt: 0)
var ptr = UnsafeMutablePointer(&normalThing.anInt)
ptr.pointee = 20
From the discussion on #20467, it looks like this will likely be something that the compiler does guarantee well-defined behaviour for in a future release, due to the fact that the base (normalThing) is a fragile stored global variable of a struct without observers, and anInt is a fragile stored property without observers.
I'm pretty sure the problem is that what you're doing is illegal. You can't just declare an unsafe pointer and claim that it points at the address of a struct property. (In fact, I don't even understand why your code compiles in the first place; what initializer does the compiler think this is?) The correct way, which gives the expected results, is to ask for a pointer that does point at that address, like this:
struct IntObservingThing {
var anInt: Int = 0 {
didSet {
print("I was just set to \(anInt)")
}
}
}
var intObservingThing = IntObservingThing(anInt: 0)
withUnsafeMutablePointer(to: &intObservingThing.anInt) { ptr -> Void in
ptr.pointee = 20 // I was just set to 20
}
print(intObservingThing.anInt) // 20
Summary:
I made a mistake in my Swift code and I've fixed it. Then I asked myself why this happened and how I could avoid it. I tried several approaches, but none of them really helps.
I put the mistake and my thinking below. I hope you could teach me the right method to avoid this kind of mistake, but any idea, hint, or suggestion would be appreciated.
How can I avoid this kind of logic error?
The following is an excerpt from my assignment for Stanford cs193p course.
class SomeClass {
...
var accumulator: Double?
func someFunc() {
// I use `localAccumulator` and write `self.` for clarity.
if let localAccumulator = self.accumulator { // A
// `self.accumulator` is modified in this method
performPendingBinaryOperation() // B
pendingBinaryOperation = PendingBinaryOperation(firstOperand: localAccumulator) // C
}
}
private func performPendingBinaryOperation() {
accumulator = pendingBinaryOperation.perform(with: accumulator)
}
...
}
The problem here is that line B has changed the value of the instance property self.accumulator, and line C should use the new value stored in self.accumulator, but it uses the outdated local var localAccumulator, which was copied from self.accumulator's old value.
It was easy to find out the logic error via debugger. But then I reflected on my mistake, and was trying to look for a method to avoid this kind of logic error.
Method 1: use nil checking rather than optional binding
if self.accumulator != nil { // A
// `self.accumulator` is modified in this method
performPendingBinaryOperation() // B
pendingBinaryOperation = PendingBinaryOperation(firstOperand: self.accumulator!) // C
}
Actually, what really matters here is the force unwrapped self.accumulator!, it ensures the value comes from the real source. Using nil checking rather than optional binding can force me to force unwrap on self.accumulator.
But in some Swift style guides(GitHub, RayWenderlich, LinkedIn), force unwrapping is not encouraged. They prefer optional binding.
Method 2: use assertions.
if let localAccumulator = self.accumulator { // A
// `self.accumulator` is modified in this method
performPendingBinaryOperation() // B
assert(localAccumulator == self.accumulator) // D
pendingBinaryOperation = PendingBinaryOperation(firstOperand: localAccumulator) // C
}
I insert an assertion to check whether localAccumulator is still equal to self.accumulator. This works: it will stop running once self.accumulator is modified unexpectedly. But it's easy to forget to add this assertion line.
Method 3: SwiftLint
To find a way to detect this kind of error, I've skimmed all of SwiftLint's rules, and also got a basic understanding of SourceKitten (one of SwiftLint's dependencies). It seems too complicated to detect this kind of error with SwiftLint, especially when I make this pattern more general.
Some similar cases
Case 1: guard optional binding
func someFunc() {
guard let localAccumulator = self.accumulator else { // A
return
}
// `self.accumulator` is modified in this method
performPendingBinaryOperation() // B
pendingBinaryOperation = PendingBinaryOperation(firstOperand: localAccumulator) // C
}
In this case, it's much more difficult for a human to notice the error, because with guard optional binding localAccumulator has a broader scope than it does with if optional binding.
Case 2: value copy caused by function passing parameters
// Assume that this function will be called somewhere else with `self.accumulator` as its argument, like `someFunc(self.accumulator)`
func someFunc(_ localAccumulator: Double?) {
// `self.accumulator` is modified in this method
performPendingBinaryOperation() // B
pendingBinaryOperation = PendingBinaryOperation(firstOperand: localAccumulator) // C
}
In this case, localAccumulator copies from self.accumulator when the function is called; then self.accumulator changes in line B, and line C expects self.accumulator's new value but gets the old value from localAccumulator.
In fact, the basic pattern is as below,
var x = oldValue
let y = x
functionChangingX() // assign x newValue
functionExpectingX(y) // expecting y is newValue, but it's oldValue
x ~ self.accumulator
y ~ localAccumulator
functionChangingX ~ performPendingBinaryOperation
functionExpectingX ~ PendingBinaryOperation.init
This error pattern looks so common that I guess there should be a name for it.
Anyway, back to my question, how can I avoid this kind of logic error?
This example shows what I understood from your problem:
var name: String? = "Mike"
func doSomething() {
// Check if "name" is set (not nil)
if let personName = name {
addLastNameToGlobalName() // modifies global var "name"
//
// Here, you want to use the updated "name" value. As you mention, at this point,
// "name" could/might/should be different than "personName" (it could
// even be nil).
//
greet(person: ???) // use "name"? (which is optional), "personName"?, or what?
}
}
To me, the general approach is the problem here. The fact that you are:
Checking that a global optional-value is not nil
Calling a function that alters this global optional-value
Wanting to use the updated global optional-value as a non-optional value
A simple change in the approach/design of greeting this person would allow you to avoid the type of "logic error" that you mention.
Example 1:
var name: String? = "Mike"
func doSomething() {
// Check if "name" is set (not nil)
if let personName = name {
greet(person: fullName(for: personName))
}
}
Example 2:
var name: String? = "Mike"
func doSomething() {
// Check if "name" is set (not nil)
if let personName = name {
let fullName = addLastNameToGlobalName() // modifies global var "name" and returns the new value
greet(person: fullName)
}
}
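Put in terms of the generic pattern from the question, the idea is to have the mutating function return the new value, so the caller never works from a stale copy. A sketch reusing the question's made-up names:
var x = 1 // oldValue
// Instead of mutating x as a side effect and hoping every caller re-reads it,
// return the new value explicitly.
func functionChangingX() -> Int {
    x = 2 // newValue
    return x
}
func functionExpectingX(_ y: Int) {
    print("got \(y)")
}
functionExpectingX(functionChangingX()) // got 2 (always the new value, never a stale copy)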
Let's say I do the following in C++:
int i = 1;
int* ptr = &i;
*ptr = 2;
cout << i << '\n';
And I want to do something similar in swift. Could I do the following?
var i : Int = 1
var iptr : UnsafeMutablePointer<Int> = &i
iptr.memory = 2
print(i)
And achieve the same result?
Yes-ish.
You can't do it exactly as you've attempted in the question. It won't compile. Swift won't let you directly access the address of a value like this. At the end of the day, the reason is mostly because there's simply no good reason to do so.
We do see the & operator in Swift however.
First of all, there is the inout keyword when declaring function parameters:
func doubleIfPositive(inout value: Float) -> Bool {
if value > 0 {
value *= 2
return true
}
return false
}
And to call this method, we'd need the & operator:
let weMadeARadian = doubleIfPositive(&pi)
We can see it similarly used when we have a function which takes an argument of type UnsafeMutablePointer (and other variants of these pointer structs). In this specific case, it's primarily for interoperability with C & Objective-C, where we could declare a method as such:
bool doubleIfPositive(float *value) {
    if (*value > 0) {
        *value *= 2;
        return true;
    }
    return false;
}
The Swift interface for that method ends up looking something like this:
func doubleIfPositive(value: UnsafeMutablePointer<Float>) -> Bool
And calling this method from Swift actually looks just like it did before when using the inout approach:
let weMadeARadian = doubleIfPositive(&pi)
But these are the only two uses of this & operator I can find in Swift.
With that said, we can write a function that makes use of the second form of passing an argument into a method with the & operator and returns that variable wrapped in an unsafe mutable pointer. It looks like this:
func addressOf<T>(value: UnsafeMutablePointer<T>) -> UnsafeMutablePointer<T> {
return value
}
And it behaves about as you'd expect from your original code snippet:
var i: Int = 1
var iPtr = addressOf(&i)
iPtr.memory = 2
print(i) // prints 2
As noted by Kevin in the comments, we can also directly allocate memory if we want.
var iPtr = UnsafeMutablePointer<Int>.alloc(1)
The argument 1 here is effectively the amount of space to allocate. This says we want to allocate enough memory for a single Int.
This is roughly equivalent to the following C code:
int * iPtr = malloc(1 * sizeof(int));
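One thing worth adding (my note, not part of the original answer): memory obtained this way is not managed for you, so with the same Swift 2-era API you also have to initialise and release it yourself:
var iPtr = UnsafeMutablePointer<Int>.alloc(1)
iPtr.initialize(2)   // put a value into the freshly allocated memory
print(iPtr.memory)   // 2
iPtr.destroy()       // deinitialise the value (trivial for Int, but good practice)
iPtr.dealloc(1)      // give the memory back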
BUT...
If you're doing any of this for anything other than interoperability with C or Objective-C, you're most likely not Swifting correctly. So before you start running around town with pointers to value types in Swift, please, make sure it's what you absolutely need to be doing. I've been writing Swift since release, and I've never found the need for any of these shenanigans.
Like this (not the only way, but it's clear):
var i : Int = 1
withUnsafeMutablePointer(&i) {
iptr -> () in
iptr.memory = 2
}
print(i)
Not a very interesting example, but it is completely parallel to your pseudo-code, and we really did reach right into the already allocated memory and alter it, which is what you wanted to do.
This sort of thing gets a lot more interesting when what you want to do is something like cycle thru memory just as fast as doing pointer arithmetic in C.
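For instance, here is a sketch of my own (same era of Swift as above) that walks an array's underlying buffer directly instead of going through the array subscript:
var values = [1, 2, 3, 4]
values.withUnsafeMutableBufferPointer { (inout buffer: UnsafeMutableBufferPointer<Int>) -> Void in
    for i in 0..<buffer.count {
        buffer[i] *= 2   // direct access to the array's contiguous storage
    }
}
print(values) // [2, 4, 6, 8]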
I would like to learn and use more functional programming in Swift. So, I've been trying various things in playground. I don't understand Reduce, though. The basic textbook examples work, but I can't get my head around this problem.
I have an array of strings called "toDoItems". I would like to get the longest string in this array. What is the best practice for handling the initial nil value in such cases? I think this probably happens often. I thought of writing a custom function and use it.
func optionalMax(maxSofar: Int?, newElement: Int) -> Int {
if let definiteMaxSofar = maxSofar {
return max(definiteMaxSofar, newElement)
}
return newElement
}
// Just testing - nums is an array of Ints. Works.
var maxValueOfInts = nums.reduce(0) { optionalMax($0, $1) }
// ERROR: cannot invoke 'reduce' with an argument list of type '(nil, (_,_)->_)'
var longestOfStrings = toDoItems.reduce(nil) { optionalMax(count($0), count($1)) }
It might just be that Swift does not automatically infer the type of your initial value. Try making it clear by explicitly declaring it:
var longestOfStrings = toDoItems.reduce(nil as Int?) { optionalMax($0, count($1)) }
By the way, notice that I do not call count on $0 (your accumulator), since it is not a String but an optional Int (Int?).
Generally, to avoid confusion when reading the code later, I explicitly label the accumulator as a and the element coming in from the sequence as x:
var longestOfStrings = toDoItems.reduce(nil as Int?) { a, x in optionalMax(a, count(x)) }
This way should be clearer than $0 and $1 in code when the accumulator or the single element are used.
Hope this helps
Initialise it with an empty string "" rather than nil. Or you could even initialise it with the first element of the array, but an empty string seems better.
Here's a second go at this (after writing some wrong code the first time): it will return the longest string, if you are happy with an empty string being returned for an empty array:
toDoItems.reduce("") { count($0) > count($1) ? $0 : $1 }
Or if you want nil, use
toDoItems.reduce(nil as String?) { $0 == nil || count($0!) < count($1) ? $1 : $0 }
The problem is that the compiler cannot infer the types you are using for your seed and accumulator closure if you seed with nil, and you also need to check for the nil accumulator yourself before force-unwrapping $0.
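For example, with a small sample array (the items are just made up for illustration):
let toDoItems = ["buy milk", "call the plumber", "nap"]
let longestOfStrings = toDoItems.reduce(nil as String?) { $0 == nil || count($0!) < count($1) ? $1 : $0 }
println(longestOfStrings) // Optional("call the plumber")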
I'm doing some performance testing of Swift vs Objective-C.
I created a Mac OS hybrid Swift/Objective-C project that creates large arrays of prime numbers using either Swift or Objective-C.
It's got a decent UI and shows the results in a clear display. You can check out the project on Github if you're interested. It's called SwiftPerformanceBenchmark.
The Objective-C code uses a malloc'ed C array of ints, and the Swift code uses an Array object.
The Objective-C code is therefore a lot faster.
I've read about creating an Array-like wrapper around a buffer of bytes using code like this:
let size = 10000
var ptr = UnsafeMutablePointer<Int>(malloc(UInt(size * sizeof(Int))))
var bytes = UnsafeBufferPointer<Int>(start: ptr, count: size)
I'd like to modify my sample program so I can switch between my Array<Int> storage and using an UnsafeBufferPointer<Int> at runtime with a checkbox in the UI.
Thus I need a base type for my primes array that will hold either an Array<Int> or an UnsafeBufferPointer<Int>. I'm still too weak on Swift syntax to figure out how to do this.
For my Array- based code, I'll have to use array.append(value), and for the UnsafeBufferPointer<Int>, which is pre-filled with data, I'll use array[index]. I guess if I have to I could pre-populate my Array object with placeholder values so I could use array[index] syntax in both cases.
Can somebody give me a base type that can hold either an Array<Int> or an UnsafeBufferPointer<Int>, and the type-casts to allocate either type at runtime?
EDIT:
Say, for example, I have the following:
let count = 1000
var swiftArray:[Int]?
let useSwiftArrays = checkbox.isChecked
typealias someType = //A type that lets me use either unsafeArray or swiftArray
var primesArray: someType?
if useSwiftArrays
{
//Create a swift array version
swiftArray = [Int](count: count, repeatedValue: 0)
primesArray = someType(swiftArray)
}
else
{
var ptr = UnsafeMutablePointer<Int>(malloc(UInt(count * sizeof(Int))))
var unsafeArray = UnsafeBufferPointer<Int>(start: ptr, count: count)
primesArray = someType(unsafeArray)
}
if let requiredPrimes = primesArray
{
requiredPrimes[0] = 2
}
@MartinR's suggestion should help get code that can switch between the two. But there's a shortcut you can take to prove whether the performance difference is between Swift arrays and C arrays, and that's to switch the Swift compiler optimization to -Ounchecked. Doing this eliminates the bounds checks on array indices etc that you would be doing manually by using unsafe pointers.
If I download your project from github and do that, I find that the Objective-C version is twice as fast as the Swift version. But... that’s because sizeof(int) is 4, but sizeof(Int) is 8. If you switch the C version to use 8-byte arithmetic as well...
p.s. it works the other way around as well, if I switch the Swift code to use UInt32, it runs at 2x the speed.
OK, it’s not pretty but here is a generic function that will work on any kind of collection, which means you can pass in either an Array, or an UnsafeMutableBufferPointer, which means you can use it on a malloc’d memory range, or using the array’s .withUnsafeMutableBufferPointer.
Unfortunately, some of the necessities of the generic version make it slightly less efficient than the non-generic version when used on an array. But it does show quite a nice performance boost over arrays in -O when used with a buffer:
func storePrimes<C: MutableCollectionType where C.Generator.Element: IntegerType>(inout store: C) {
if isEmpty(store) { return }
var candidate: C.Generator.Element = 3
var primeCount = store.startIndex
store[primeCount++] = 2
var isPrime: Bool
while primeCount != store.endIndex {
isPrime = true
var oldPrimeCount = store.startIndex
for oldPrime in store {
if oldPrimeCount++ == primeCount { break }
if candidate % oldPrime == 0 { isPrime = false; break }
if candidate < oldPrime &* oldPrime { isPrime = true; break }
}
if isPrime { store[primeCount++] = candidate }
candidate = candidate.advancedBy(2)
}
}
let totalCount = 2_000_000
var primes = Array<CInt>(count: totalCount, repeatedValue: 0)
let startTime = CFAbsoluteTimeGetCurrent()
storePrimes(&primes)
// or…
primes.withUnsafeMutableBufferPointer { (inout buffer: UnsafeMutableBufferPointer<CInt>) -> Void in
storePrimes(&buffer)
}
let now = CFAbsoluteTimeGetCurrent()
let totalTime = now - startTime
println("Total time: \(totalTime), per second: \(Double(totalCount)/totalTime)")
I am not 100% sure if I understand your problem correctly, but perhaps this goes in the direction you need.
Both Array and UnsafeMutableBufferPointer conform to MutableCollectionType (which requires a subscript getter and setter).
So this function would accept both types:
func foo<T : MutableCollectionType where T.Generator.Element == Int, T.Index == Int>(inout storage : T) {
storage[0] = 1
storage[1] = 2
}
Example with buffer pointer:
let size = 2
var ptr = UnsafeMutablePointer<Int>(malloc(UInt(size * sizeof(Int))))
var buffer = UnsafeMutableBufferPointer<Int>(start: ptr, count: size)
foo(&buffer)
for elem in buffer {
println(elem)
}
Example with array:
var array = [Int](count: 2, repeatedValue: 0)
foo(&array)
for elem in array {
println(elem)
}
For non-mutating functions you can use CollectionType
instead of MutableCollectionType.
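For example, here is a non-mutating counterpart of foo (a sketch of my own, in the same Swift 1.x syntax) that works on both the array and the buffer pointer from the examples above:
func sumOfElements<T : CollectionType where T.Generator.Element == Int>(storage: T) -> Int {
    var total = 0
    for elem in storage {
        total += elem
    }
    return total
}
println(sumOfElements(array))    // 3 (the array holds [1, 2] after foo)
println(sumOfElements(buffer))   // 3 (the buffer likewise holds [1, 2])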