I'm trying to convert an interleaved DSPComplex vector to a DSPSplitComplex vector, using vDSP_ctoz from the Swift Accelerate library. The last line of the code below produces the error Segmentation fault: 11
I don't understand why vDSP_ctoz would try to access out-of-bounds memory when I've allocated large vectors and am only trying to process a small number of elements. The vectors are size 2048 and the argument for N (number of elements to process) in vDSP_ctoz is 1.
I've also tried using different stride and N values when calling vDSP_ctoz, to no avail.
// set stride values
let dspComplexStride = MemoryLayout<DSPComplex>.stride
let dspSplitComplexStride = MemoryLayout<DSPSplitComplex>.stride
// make interleaved vector
var interleaved = UnsafeMutablePointer<DSPComplex>.allocate(capacity: 2048)
for index in 0..<16 {
interleaved[index] = DSPComplex(real: Float(2*index), imag: Float(2*index+1))
}
// make split vector
var splitComplex = UnsafeMutablePointer<DSPSplitComplex>.allocate(capacity: 2048)
vDSP_ctoz(
interleaved, dspComplexStride, splitComplex, dspSplitComplexStride, 1
)
DSPSplitComplex is a structure containing pointers to arrays,
so you need a single DSPSplitComplex element and must allocate
storage for its realp and imagp properties.
The "stride" arguments are not measured in bytes but in "element" units.
So you pass __IZ == 1 because you want to fill contiguous elements
in the destination arrays.
It may not be obvious that you have to pass __IC == 2 for the source array, i.e.
the stride of the source array is given in Float units and not in
DSPComplex units. This can be deduced from the vDSP_ctoz documentation
which mentions that the function effectively does
for (n = 0; n < N; ++n)
{
Z->realp[n*IZ] = C[n*IC/2].real;
Z->imagp[n*IZ] = C[n*IC/2].imag;
}
Finally, the last argument of vDSP_ctoz is the number of elements to
process.
Putting it all together, this is how it should work:
import Accelerate
let N = 16
var interleaved = UnsafeMutablePointer<DSPComplex>.allocate(capacity: N)
for index in 0..<N {
interleaved[index] = DSPComplex(real: Float(2*index), imag: Float(2*index+1))
}
let realp = UnsafeMutablePointer<Float>.allocate(capacity: N)
let imagp = UnsafeMutablePointer<Float>.allocate(capacity: N)
var splitComplex = DSPSplitComplex(realp: realp, imagp: imagp)
vDSP_ctoz(interleaved, 2, &splitComplex, 1, vDSP_Length(N))
for index in 0..<N {
print(splitComplex.realp[index], splitComplex.imagp[index])
}
and of course you have to release the memory eventually.
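For the buffers allocated above, that just means calling deallocate() on each pointer once you no longer need the data, for example:

// Release the manually allocated memory when done.
interleaved.deallocate()
realp.deallocate()
imagp.deallocate()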
Could someone explain to me the logic behind this hashMap algorithm? I'm getting confused about how the algorithm finds the target sum. I'm starting to learn about algorithms, so it's a little confusing for me. I made comments in my code to pinpoint each line of code, but I'm not sure I'm grasping the logic correctly. I'm just looking for an easier way to understand how the algorithm works to avoid confusing myself.
//**calculate Two Number Sum
func twoNumberSum(_ array: [Int], _ targetSum: Int) -> [Int] {
//1) initialize our Array to hold Integer Value: Boolean value to store value into hashTable
var numbersHashMap = [Int:Bool]()
//2) create placeHolder called number that iterates through our Array.
for number in array {
//3) variable = y - x
let match = targetSum - number
//4) ??
if let exists = numbersHashMap[match], exists {
//5) match = y / number = x
return [match, number] //
} else {
//6) Store number in HashTable and repeats
numbersHashMap[number] = true
}
}
return []
}
twoNumberSum([3,5,-4, 8, 11, 1, -1, -6], 10)
// x = Number
// y = Unknown *Solve for Y*
Sure, I can walk you through it. So we have a list of numbers, and we are trying to find two numbers that add together to make the specified target. To do this, for each number x, we check if (target - x) is in the hashmap. If it is not, then we add x to the hashmap. If it is, then we return x and (target - x).
Step 4 in your code is the part where we check if (target - x) is in the hashmap. To see why this makes sense, let's walk through an example.
Say we have [2, 3, -1] and our target is 1. In this case, we first consider x = 2 and check our hashmap for (target - x) = (1 - 2) = -1. Since -1 is not in the hashmap, we add 2 to the hashmap. We then consider x = 3 and check for (1 - 3) = -2. Again, -2 is not in the hashmap, so we add 3 to the hashmap. Now we consider x = -1. In this case, when we check (target - x) = (1 - (-1)) = 2, 2 is in the hashmap. Intuitively, we have already "seen" 2, and know that 2 and -1 can be added to get our value.
This is what provides the speed optimization over checking every two numbers in the list.
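For example, calling the function above on that walk-through input returns the pair as soon as the match is found:

twoNumberSum([2, 3, -1], 1)   // [2, -1] -- 2 was already in the hashmap when -1 was reached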
Before anything else, I checked whether this kind of question fits Stack Overflow, and based on one similar (JavaScript) question and on this meta question: https://meta.stackexchange.com/questions/129598/which-computer-science-programming-stack-exchange-sites-do-i-post-on -- it does.
So here it goes. The challenge is pretty simple, in my opinion:
Given five positive integers, find the minimum and maximum values that
can be calculated by summing exactly four of the five integers. Then
print the respective minimum and maximum values as a single line of
two space-separated long integers.
For example, given the array [1, 3, 5, 7, 9], our minimum sum is 1 + 3 + 5 + 7 = 16 and our maximum sum is 3 + 5 + 7 + 9 = 24. We would print
16 24
Input Constraint:
1 <= arr[i] <= (10^9)
My solution is pretty simple. This is the best I could do:
func miniMaxSum(arr: [Int]) -> Void {
let sorted = arr.sorted()
let reversed = Array(sorted.reversed())
var minSum = 0
var maxSum = 0
_ = sorted
.filter({ $0 != sorted.last!})
.map { minSum += $0 }
_ = reversed
.filter({ $0 != reversed.last!})
.map { maxSum += $0 }
print("\(minSum) \(maxSum)")
}
As you can see, I have two sorted arrays: one ascending and one descending. I'm removing the last element of each of the two sorted arrays. The way I remove the last element is with filter, which probably creates the problem. But from there, I thought I could easily get the minimum and maximum sum of the 4 elements.
I had 13/14 test cases passed. And my question is, what could be a test case in which this solution is likely to fail?
Problem link: https://www.hackerrank.com/challenges/mini-max-sum/problem
Here
_ = sorted
.filter({ $0 != sorted.last!})
.map { minSum += $0 }
your expectation is that all but the largest element are added. But that is only correct if the largest element is unique. (And similarly for the maximum sum.)
Choosing an array with all identical elements makes the problem more apparent:
miniMaxSum(arr: [1, 1, 1, 1, 1])
// 0 0
A simpler solution would be to compute the sum of all elements once, and then get the results by subtracting the largest and the smallest array element, respectively. I'll leave the implementation to you :)
Here is the O(n) solution:
func miniMaxSum(arr: [Int]) {
var smallest = Int.max
var greatest = Int.min
var sum = 0
for x in arr {
sum += x
smallest = min(smallest, x)
greatest = max(greatest, x)
}
print(sum - greatest, sum - smallest, separator: " ")
}
I know this isn't codereview.stackexchange.com, but I think some clean up is in order, so I'll start with that.
let reversed = Array(sorted.reversed())
The whole point of the ReversedCollection that is returned by Array.reversed() is that it doesn't cause a copy of elements, and it doesn't take up any extra memory or time to produce. It's merely a wrapper around a collection, and it intercepts indexing operations and changes them to imitate a buffer that's been reversed. Asked for .first? It'll give you .last of its wrapped collection. Asked for .last? It'll return .first, etc.
By initializing a new Array from sorted.reversed(), you're causing an unnecessary copy, and defeating the point of ReversedCollection. There are some circumstances where this might be necessary (e.g. you want to pass a pointer to a buffer of reversed elements to a C API), but this isn't one of them.
So we can just change that to let reversed = sorted.reversed()
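A quick illustration of that wrapper behaviour (a sketch of my own, not part of the original answer):

let values = [1, 2, 3, 4, 5]
let reversedView = values.reversed()   // ReversedCollection, no elements are copied

print(reversedView.first!)   // 5 -- the wrapped collection's last element
print(reversedView.last!)    // 1 -- the wrapped collection's first element
print(Array(reversedView))   // [5, 4, 3, 2, 1] -- only now is a copy made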
-> Void doesn't do anything, omit it.
sorted.filter({ $0 != sorted.last!}) is inefficient.
... but more than that, this is the source of your error. There's a bug in this. If you have an array like [1, 1, 2, 3, 3], your minSum will be 4 (the sum of [1, 1, 2]), when it should be 7 (the sum of [1, 1, 2, 3]). Similarly, the maxSum will be 8 (the sum of [2, 3, 3]) rather than 9 (the sum of [1, 2, 3, 3]).
You're doing a scan of the whole array, doing sorted.count equality checks, only to discard an element with a known position (the last element). Instead, use dropLast(), which returns a collection that wraps the input, but whose operations mask the existence of the last element.
_ = sorted
.dropLast()
.map { minSum += $0 }
_ = reversed
.dropLast()
.map { maxSum += $0 }
_ = someCollection.map(f)
... is an anti-pattern. The distinguishing feature of map compared to forEach is that it produces an array storing the return values of the closure as evaluated on every input element. If you're not going to use the result, use forEach:
sorted.dropLast().forEach { minSum += $0 }
reversed.dropLast().forEach { maxSum += $0 }
However, there's an even better way. Rather than summing by mutating a variable and manually adding to it, instead use reduce to do so. This is ideal because it allows you to remove the mutability of minSum and maxSum.
let minSum = sorted.dropLast().reduce(0, +)
let maxSum = reversed.dropLast().reduce(0, +)
You don't really need the reversed variable at all. You could just achieve the same thing by operating over sorted and using dropFirst() instead of dropLast():
func miniMaxSum(arr: [Int]) {
let sorted = arr.sorted()
let minSum = sorted.dropLast().reduce(0, +)
let maxSum = sorted.dropFirst().reduce(0, +)
print("\(minSum) \(maxSum)")
}
Your code assumes the input size is always 5. It's good to document that in the code:
func miniMaxSum(arr: [Int]) {
assert(arr.count == 5)
let sorted = arr.sorted()
let minSum = sorted.dropLast().reduce(0, +)
let maxSum = sorted.dropFirst().reduce(0, +)
print("\(minSum) \(maxSum)")
}
A generalization of your solution uses a lot of extra memory, which you might not have available to you.
This problem fixes the number of summed numbers (always 4) and the number of input numbers (always 5). It could be generalized to picking summedElementCount numbers out of an arr of any size (a sketch of that generalization follows the complexity notes below). In this case, sorting and summing twice is inefficient:
Your solution has a space complexity of O(arr.count)
This is caused by the need to hold the sorted array. If you were allowed to mutate arr in-place, this could be reduced to O(1).
Your solution has a time complexity of O((arr.count * log_2(arr.count)) + summedElementCount)
Derivation: sorting first (which takes O(arr.count * log_2(arr.count))), and then summing the first and the last summedElementCount elements (each of which takes O(summedElementCount))
O(arr.count * log_2(arr.count)) + (2 * O(summedElementCount))
= O(arr.count * log_2(arr.count)) + O(summedElementCount) // Annihilation of multiplication by a constant factor
= O((arr.count * log_2(arr.count)) + summedElementCount) // Addition law for big O
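Here is a minimal sketch of that generalized sort-based version, using the same hypothetical signature as the queue-based example further below:

func miniMaxSum(arr: [Int], summedElementCount: Int) {
    // Sort once, then sum the smallest and the largest summedElementCount values.
    let sorted = arr.sorted()
    let minSum = sorted.prefix(summedElementCount).reduce(0, +)
    let maxSum = sorted.suffix(summedElementCount).reduce(0, +)
    print("\(minSum) \(maxSum)")
}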
This problem could instead be solved with a bounded priority queue, like the MinMaxPriorityQueue in Google's Guava library for Java. It's simply a wrapper around a min-max heap that maintains a fixed number of elements; when a new element is added to a full queue, the greatest element (according to the provided comparator) is evicted. If you had something like this available to you in Swift, you could do:
func miniMaxSum(arr: [Int], summedElementCount: Int) {
let minQueue = MinMaxPriorityQueue<Int>(size: summedElementCount, comparator: <)
let maxQueue = MinMaxPriorityQueue<Int>(size: summedElementCount, comparator: >)
for i in arr {
minQueue.offer(i)
maxQueue.offer(i)
}
let (minSum, maxSum) = (minQueue.reduce(0, +), maxQueue.reduce(0, +))
print("\(minSum) \(maxSum)")
}
This solution has a space complexity of only O(summedElementCount) extra space, needed to hold the two queues, each of max size summedElementCount.
This is less than the previous solution, because summedElementCount <= arr.count
This solution has a time complexity of O(arr.count * log_2(summedElementCount))
Derivation: the for loop does arr.count iterations, each consisting of a log_2(summedElementCount) operation on both queues.
O(arr.count) * (2 * O(log_2(summedElementCount)))
= O(arr.count) * O(log_2(summedElementCount)) // Annihilation of multiplication by a constant factor
= O(arr.count * log_2(summedElementCount)) // Multiplication law for big O
It's unclear to me whether this is better or worse than O((arr.count * log_2(arr.count)) + summedElementCount). If you know, please let me know in the comments below!
Try this one, it was accepted:
func miniMaxSum(arr: [Int]) -> Void {
let sorted = arr.sorted()
let minSum = sorted[0...3].reduce(0, +)
let maxSum = sorted[1...4].reduce(0, +)
print("\(minSum) \(maxSum)"
}
Try this-
func miniMaxSum(arr: [Int]) -> Void {
var minSum = 0
var maxSum = 0
var minChecked = false
var maxChecked = false
let numMax = arr.reduce(Int.min, { max($0, $1) })
print("Max number in array: \(numMax)")
let numMin = arr.reduce(Int.max, { min($0, $1) })
print("Min number in array: \(numMin)")
for item in arr {
if !minChecked && numMin == item {
minChecked = true
} else {
maxSum = maxSum + item
}
if !maxChecked && numMax == item {
maxChecked = true
} else {
minSum = minSum + item
}
}
print("\(minSum) \(maxSum)")
}
Try this:
func miniMaxSum(arr: [Int]) -> Void {
let min = arr.min()
let max = arr.max()
let total = arr.reduce(0, +)
print(total - max!, total - min!, separator: " ")
}
I have an array of size n. I would like to fill it with values from a geometric series with a functional approach.
What function should I use?
The result should be an array such as :
[a, a^2, a^3, ... a^n]
You can use sequence(first:next:) to compute powers
of a by repeated multiplication, limit the (lazily evaluated) sequence with prefix(_:) to the desired number of entries, and then create an array from the truncated sequence. Example:
let a = 0.5 // The base
let n = 4 // The maximal exponent
let series = Array(sequence(first: a, next: { $0 * a }).prefix(n))
print(series) // [0.5, 0.25, 0.125, 0.0625]
Another option can be to enumerate the sequence without creating an
actual array:
for x in sequence(first: a, next: { $0 * a }).prefix(n) {
// do something with `x`
}
You can create such a geometric series by simply calling map on a range and doing the power operation inside map.
import Foundation // for pow

func createGeometricSeries(ofSize n: Int, _ a: Int) -> [Int] {
    return (1...n).map { Int(pow(Double(a), Double($0))) }
}

createGeometricSeries(ofSize: 3, 2) // [2, 4, 8]
You can use map for that:
let resultingArray = yourArray.map({ a * $0 })
The resulting array will meet your requirement.
You can find more about it in the Apple documentation.
vDSP_maxv is not assigning the max value to output in the code below.
I expected the last line to print 2, but instead it prints something different each time, usually a very large or small number like 2.8026e-45
I've read this tutorial, the documentation, and the inline documentation in the header file for vDSP_maxv, but I don't see why the code below isn't producing the expected result.
Making numbers an UnsafePointer instead of an UnsafeMutablePointer didn't work, nor did a number of other things I've tried, so maybe I'm missing something fundamental.
import Accelerate
do {
// INPUT - pointer pointing at: 0.0, 1.0, 2.0
let count = 3
let numbers = UnsafeMutablePointer<Float>
.allocate(capacity: count)
defer { numbers.deinitialize() }
for i in 0..<count {
(numbers+i).initialize(to: Float(i))
}
// OUTPUT
var output = UnsafeMutablePointer<Float>
.allocate(capacity: 1)
// FIND MAX
vDSP_maxv(
numbers,
MemoryLayout<Float>.stride,
output,
vDSP_Length(count)
)
print(output.pointee) // prints various numbers, none of which are expected
}
You are misusing the stride parameter of vDSP_maxv.
You need to pass the stride as a number of elements, not a number of bytes.
vDSP_maxv(_:_:_:_:)
*C = -INFINITY;
for (n = 0; n < N; ++n)
if (*C < A[n*I])
*C = A[n*I];
In the pseudocode above, I represents the stride parameter, and you can see that passing 4 (MemoryLayout<Float>.stride) for I would generate indexes exceeding the bounds of A (your numbers).
Some other parts are changed to fit my preference, but the most important thing is the second parameter of vDSP_maxv:
import Accelerate
do {
// INPUT - pointer pointing at: 0.0, 1.0, 2.0
let numbers: [Float] = [0.0, 1.0, 2.0]
// OUTPUT
var output: Float = Float.nan
// FIND MAX
vDSP_maxv(
numbers,
1, //<- when you want to use all elements in `numbers` continuously, you need to pass `1`
&output,
vDSP_Length(numbers.count)
)
print(output) //-> 2.0
}
x is an object that holds an array called point.
x implements the subscript operator so you can do things like x[i] to get the array's ith element (of type T, which is usually an Int or Double).
This is what I want to do:
x[0...2] = [0...2]
But I get an error that says ClosedInterval<T> is not convertible to Int/Double.
Edit1:
Here is my object x:
let x = Point<Double>(dimensions:3)
For kicks and giggles: define x as [1.0,2.0,0.0]
I can get the first n elements via x[0...2].
What I want to know is how to update x[0...2] to hold [0.0, 0.0, 0.0] in one fell swoop. Intuitively, I would want to do x[0...2] = [0...2]. This does not work, as can be seen in the answers. I want to update x without iteration (on my end), while hiding the fact that x is not actually an array.
[0...2] is an array with one element which, at best, will be a Range<Int> from 0 through 2. You can't assign that to a slice containing, say, Ints.
x[0...2] on the other hand is (probably) a slice, and Sliceable only defines a get subscript, not a setter. So even if the types were more compatible - that is, if you tried x[0...2] = 0...2, which at least is attempting to replace a range within x with the values of a similarly-sized collection - it still wouldn't work.
edit: as #rintaro points out, Array does support a setter subscript for ranges – so if x were an array you could do x[0...2] = Slice(0...2) – but it has to be a slice you assign, so I'd still go with replaceRange.
If what you mean is you want to replace entries 0 through 2 with some values, what you want is replaceRange, as long as your collection conforms to RangeReplaceableCollection (which, for example, Array does):
var x = [0,1,2,3,4,5]
var y = [200,300,400]
x.replaceRange(2..<5, with: y)
// x is now [0,1,200,300,400,5]
Note, the replaced range and y don't have to be the same size, the collection will expand/contract as necessary.
Also, y doesn't have to be an array, it can be any kind of collection (it has to be a collection though, not a sequence). So the above code could have been written as:
var x = [0,1,2,3,4,5]
var y = lazy(2...4).map { $0 * 100 }
x.replaceRange(2..<5, with: y)
edit: so, per your edit, to in-place zero out an array of any size in one go, you can do:
var x = [1.0,2.0,0.0]
// range to replace is the whole array's range,
// Repeat just generates any given value n times
x.replaceRange(indices(x), with: Repeat(count: x.count, repeatedValue: 0.0))
Adjust the range (and count of replacing entries) accordingly if you want to just zero out a subrange.
Given your example Point class, here is how you could implement this behavior assuming it's backed by an array under the hood:
struct Point<T: FloatLiteralConvertible> {
private var _vals: [T]
init(dimensions: Int) {
_vals = Array(count: dimensions, repeatedValue: 0.0)
}
mutating func replaceRange
<C : CollectionType where C.Generator.Element == T>
(subRange: Range<Array<T>.Index>, with newElements: C) {
// just forwarding on the request - you could perhaps
// do some additional validation first to ensure dimensions
// aren't being altered...
_vals.replaceRange(subRange, with: newElements)
}
}
var x = Point<Double>(dimensions:3)
x.replaceRange(0...2, with: [1.1,2.2,3.3])
You need to implement subscript(IntervalType) to handle the case of multiple assignments like this. That isn't done for you automatically.
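A minimal sketch of such a subscript in current Swift syntax (IntervalType has since been replaced by range types such as ClosedRange), assuming Point is backed by an array as in the example above:

struct Point<T: ExpressibleByFloatLiteral> {
    private var vals: [T]

    init(dimensions: Int) {
        vals = Array(repeating: 0.0, count: dimensions)
    }

    // Single-element access: x[i]
    subscript(index: Int) -> T {
        get { return vals[index] }
        set { vals[index] = newValue }
    }

    // Range access with a setter, forwarding to replaceSubrange
    subscript(range: ClosedRange<Int>) -> [T] {
        get { return Array(vals[range]) }
        set { vals.replaceSubrange(range, with: newValue) }
    }
}

var p = Point<Double>(dimensions: 3)
p[0...2] = [0.0, 0.0, 0.0]   // assigns all three components in one statement
print(p[0...2])              // [0.0, 0.0, 0.0]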