Split a large MLMultiArray into smaller chunks ([MLMultiArray])? - Swift

I have a large MLMultiArray containing 15360 values.
Sample:
Float32 1 x 15360
[14.78125,-0.6308594,5.609375,13.57812,-1.871094,-19.65625,9.5625,8.640625,-2.728516,3.654297,-3.189453,-1.740234...]
Is there a way to convert this huge array into 120 small MLMultiArrays of 128 elements each, without changing the order of the values, in the most efficient way possible?
The entire array of 15360 elements is available at this link.

You can stride over the element count in steps of 128 and copy each slice into a new MLMultiArray:
import CoreML

let chunkSize = 128
let chunkedArrays = try? stride(from: 0, to: yourLargeArray.count, by: chunkSize)
    .map { offset -> MLMultiArray in
        let chunk = try MLMultiArray(shape: [NSNumber(value: chunkSize)],
                                     dataType: yourLargeArray.dataType)
        // Copy one 128-element slice, preserving the original order.
        for i in 0..<chunkSize {
            chunk[i] = yourLargeArray[offset + i]
        }
        return chunk
    }
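If performance matters, note that the NSNumber subscript above boxes every value. A faster variant copies the raw bytes of each slice directly; this is only a sketch with a hypothetical helper name, assuming the array really holds Float32 values and its count divides evenly by the chunk size:
import CoreML

func chunked(_ source: MLMultiArray, by chunkSize: Int) throws -> [MLMultiArray] {
    precondition(source.dataType == .float32 && source.count % chunkSize == 0)
    let src = source.dataPointer.assumingMemoryBound(to: Float32.self)
    return try stride(from: 0, to: source.count, by: chunkSize).map { offset in
        let chunk = try MLMultiArray(shape: [NSNumber(value: chunkSize)],
                                     dataType: .float32)
        let dst = chunk.dataPointer.assumingMemoryBound(to: Float32.self)
        // Bulk-copy one slice; order within and across chunks is preserved.
        dst.assign(from: src + offset, count: chunkSize)
        return chunk
    }
}
Called as chunked(yourLargeArray, by: 128), this yields the same 120 chunks without per-element NSNumber traffic.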

Related

Swift 4, how to remove two nth characters from sentence, and start counting each letter in sentence from 1 instead of 0

I've come up with this code to remove two characters from a sentence; however, I was wondering how to start counting the sentence from 1 instead of 0 when removing characters.
Example: the user enters the following in each text field:
textfield.text: Hi, thankyou
inputlabelOne.text: 2
inputLabelTwo.text: 5
My code:
var numberOne = Int(inputLabelOne.text!)!
var numberTwo = Int(inputLabelTwo.text!)!
var text = textfield.text!
var num1 = numberOne
var num2 = numberTwo
if let oneIndex = text.index(text.startIndex, offsetBy: num1, limitedBy: text.endIndex),
   let twoIndex = text.index(text.startIndex, offsetBy: num2, limitedBy: text.endIndex) {
    text.remove(at: oneIndex)
    text.remove(at: twoIndex)
    outputLabel.text = "sentence with removed letters: \(text)"
}
Firstly, you need to get the first index of the string:
let startIndex = string.startIndex
Next, you need to get the String.Index of the character at the first position:
let index1 = string.index(startIndex, offsetBy: num1 - 1)
I subtract one because the first character has index 0.
Next, you can remove this character:
string.remove(at: index1)
The same goes for the second character:
let offset = num1 > num2 ? 1 : 2
let index2 = string.index(startIndex, offsetBy: num2 - offset)
string.remove(at: index2)
If num1 is less than num2, the offset is 2 because we have already removed one character before that position.
If num1 is greater than num2, the offset is 1, the same as for num1.
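Putting those pieces together, a runnable sketch using the example input from the question ("Hi, thankyou" with positions 2 and 5):
var string = "Hi, thankyou"
let num1 = 2
let num2 = 5
let index1 = string.index(string.startIndex, offsetBy: num1 - 1)
string.remove(at: index1)
let offset = num1 > num2 ? 1 : 2
let index2 = string.index(string.startIndex, offsetBy: num2 - offset)
string.remove(at: index2)
print(string) // H, hankyou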
To avoid the mutating-while-iterating mistake you have to remove the characters backwards, starting at the highest index.
To get 1-based indices, just subtract 1 from each offset:
var text = "Hi, thankyou"
let inputLabelOne = 2
let inputLabelTwo = 5
if let oneIndex = text.index(text.startIndex, offsetBy: inputLabelOne - 1, limitedBy: text.endIndex),
   let twoIndex = text.index(text.startIndex, offsetBy: inputLabelTwo - 1, limitedBy: text.endIndex) {
    text.remove(at: twoIndex)
    text.remove(at: oneIndex)
}
print(text) // H, hankyou
or as a function:
func remove(at indices: [Int], from text: inout String) {
    let sortedIndices = indices.sorted(by: >)
    for index in sortedIndices {
        if let anIndex = text.index(text.startIndex, offsetBy: index - 1, limitedBy: text.endIndex) {
            text.remove(at: anIndex)
        }
    }
}
remove(at: [inputLabelOne, inputLabelTwo], from: &text)

Swift 4 - efficient conversion of Double to big-endian

I need to convert a Double to big-endian in order to write it to a file, using an oil-industry binary file standard that was originally defined for IBM half-inch 9-track tapes in the 1970s!
I need really efficient Swift 4 code, because this conversion is inside two nested-loops and will be executed upwards of 100,000 times.
You can create a UInt64 containing the big-endian representation of the Double with
let value = 1.0
var n = value.bitPattern.bigEndian
In order to write that to a file you might need to convert it
to Data:
let data = Data(buffer: UnsafeBufferPointer(start: &n, count: 1))
print(data as NSData) // <3ff00000 00000000>
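An equivalent way to get the same bytes, as a sketch, uses withUnsafeBytes(of:) instead of constructing an UnsafeBufferPointer by hand:
let data2 = withUnsafeBytes(of: &n) { Data($0) }
print(data2 as NSData) // <3ff00000 00000000>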
If many contiguous floating-point values are written to the file, then it would be more efficient to create a [UInt64] array with the big-endian representations and convert that to Data, for example:
let values = [1.0, 2.0, 3.0, 4.0]
let array = values.map { $0.bitPattern.bigEndian }
let data = array.withUnsafeBufferPointer { Data(buffer: $0) }
(All the above compiles with Swift 3 and 4.)
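For the actual file output, the resulting Data can then be written in one call; a minimal sketch, assuming a hypothetical destination path:
let url = URL(fileURLWithPath: "/tmp/doubles.bin") // hypothetical output path
try data.write(to: url) // writes the big-endian bytes in array order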
I successfully implemented Martin's array suggestion. I decided I should use some "interesting" test values and one thing led to another! Here's my test playground. I hope it is of interest:
//: Playground - noun: a place where people can play
import UIKit
func convert(doubleArray: [Double]) {
    // Convert and display the little-endian (native) bytes
    let littleEndianArray = doubleArray.map { $0.bitPattern }
    var data = littleEndianArray.withUnsafeBufferPointer { Data(buffer: $0) }
    print("Little-endian : ", data as NSData)
    // Convert and display the big-endian bytes
    let bigEndianArray = doubleArray.map { $0.bitPattern.bigEndian }
    data = bigEndianArray.withUnsafeBufferPointer { Data(buffer: $0) }
    print("Big-endian : ", data as NSData)
}
// Values below are from:
// https://en.wikipedia.org/wiki/Double-precision_floating-point_format
let nan = Double.nan
let plusInfinity = +1.0 / 0.0
let maxDouble = +1.7976931348623157E308
let smallestNumberGreaterThanOne = +1.0000000000000002
let plusOne = +1.0
let maxSubnormalPositiveDouble = +2.2250738585072009E-308
let minSubnormalPositiveDouble = +4.9E-324
let plusZero = +0.0
let minusZero = -0.0
let maxSubnormalNegativeDouble = -4.9E-324
let minSubnormalNegativeDouble = -2.2250738585072009E-308
let minusOne = -1.0
let largestNumberLessThanOne = -1.0000000000000002
let minDouble = -1.7976931348623157E308
let minusInfinity = -1.0 / 0.0
let smallestNumber = "+1.0000000000000002"
let largestNumber = "-1.0000000000000002"
print("\n\nPrint little-endian and big-endian Doubles")
print("\n\nDisplay: NaN and +0.0 to +1.0")
print(" Min. Subnormal Max. Subnormal")
print(" Not a Number Plus Zero Positive Double Positive Double Plus One")
print(String(format: "Decimal : NaN %+8.6e %+8.6e %+8.6e %+8.6e", plusZero, minSubnormalPositiveDouble, maxSubnormalPositiveDouble, plusOne))
var doubleArray = [nan, plusZero, minSubnormalPositiveDouble, maxSubnormalPositiveDouble, plusOne]
convert(doubleArray: doubleArray)
print("\n\nDisplay: +1.0 to +Infinity")
print(" Smallest Number ")
print(" Plus One Greater Than 1.0 Max. Double +Infinity")
print(String(format: "Decimal : %+8.6e \(smallestNumber) %+8.6e%+8.6e", plusOne, maxDouble, plusInfinity))
doubleArray = [plusOne, smallestNumberGreaterThanOne, maxDouble, plusInfinity]
convert(doubleArray: doubleArray)
print("\n\nDisplay: NaN and -0.0 to -1.0")
print(" Min. Subnormal Max. Subnormal")
print(" Not a Number Minus Zero Negative Double Negative Double Minus One")
print(String(format: "Decimal : NaN %+8.6e %+8.6e %+8.6e %+8.6e", minusZero, maxSubnormalNegativeDouble, minSubnormalNegativeDouble, minusOne))
doubleArray = [nan, minusZero, maxSubnormalNegativeDouble, minSubnormalNegativeDouble, minusOne]
convert(doubleArray: doubleArray)
print("\n\nDisplay: -1.0 to -Infinity")
print(" Smallest Number ")
print(" Minus One Less Than -1.0 Min. Double -Infinity")
print(String(format: "Decimal : %+8.6e \(largestNumber) %+8.6e%+8.6e", minusOne, minDouble, minusInfinity))
doubleArray = [minusOne, largestNumberLessThanOne, minDouble, minusInfinity]
convert(doubleArray: doubleArray)

Swift 3: Filter a range

In Swift 2 it was possible to filter a range like this:
let range: Range<Int> = 1..<100
let mult4 = range
.filter{n in n % 4 == 0}
In Swift3 the range seems to have lost its filter method. Any suggestions?
You have to use a countable range:
let range: CountableRange<Int> = 1..<100
// Or simply: let range = 1..<100
let mult4 = range.filter { n in n % 4 == 0 }
A (Closed)Range describes an "interval" and cannot be enumerated, whereas a Countable(Closed)Range is a collection of consecutive values.
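For example, a Range<Double> is a pure interval, so there is nothing to enumerate or filter; it only supports containment tests:
let interval: Range<Double> = 1.0..<100.0
interval.contains(3.5) // true
// interval.filter { ... } // does not compile: Range<Double> is not a collection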
You can use stride, starting at 4 so the result matches filtering 1..<100:
let mult4 = Array(stride(from: 4, to: 100, by: 4))
let range: Range<Int> = 1..<100
let mult4 = [Int](range.lowerBound..<range.upperBound).filter { n in n % 4 == 0 }

vDSP_conv occasionally returns NaNs

I'm using vDSP_conv to perform autocorrelation. Mostly it works just fine, but every so often it fills the output array with NaNs.
The code:
func corr_test() {
    var pass = 0
    var x = [Float]()
    for i in 0..<2000 {
        x.append(Float(i))
    }
    while true {
        print("pass \(pass)")
        let corr = autocorr(x)
        if corr[1].isNaN {
            print("!!!")
        }
        pass += 1
    }
}

func autocorr(a: [Float]) -> [Float] {
    let resultLen = a.count * 2 + 1
    let padding = [Float].init(count: a.count, repeatedValue: 0.0)
    let a_pad = padding + a + padding
    var result = [Float].init(count: resultLen, repeatedValue: 0.0)
    vDSP_conv(a_pad, 1, a_pad, 1, &result, 1, UInt(resultLen), UInt(a_pad.count))
    return result
}
The output:
pass ...
pass 169
pass 170
pass 171
(lldb) p corr
([Float]) $R0 = 4001 values {
[0] = 2.66466637E+9
[1] = NaN
[2] = NaN
[3] = NaN
[4] = NaN
...
I'm not sure what's going on here. I think I'm handling the zero padding correctly, since if I weren't, I don't think I'd be getting correct results 99% of the time.
Ideas? Thanks.
Figured it out. The key was this comment from https://developer.apple.com/library/mac/samplecode/vDSPExamples/Listings/DemonstrateConvolution_c.html :
// “The signal length is padded a bit. This length is not actually passed to the vDSP_conv routine; it is the number of elements
// that the signal array must contain. The SignalLength defined below is used to allocate space, and it is the filter length
// rounded up to a multiple of four elements and added to the result length. The extra elements give the vDSP_conv routine
// leeway to perform vector-load instructions, which load multiple elements even if they are not all used. If the caller did not
// guarantee that memory beyond the values used in the signal array were accessible, a memory access violation might result.”
“Padded a bit.” Thanks for being so specific. Anyway here's the final working product:
func autocorr(a: [Float]) -> [Float] {
    let filterLen = a.count
    let resultLen = filterLen * 2 - 1
    // Round the filter length up to a multiple of four, then add the result
    // length, per the padding requirement quoted above.
    let signalLen = ((filterLen + 3) & 0xFFFFFFFC) + resultLen
    let padding1 = [Float].init(count: a.count - 1, repeatedValue: 0.0)
    let padding2 = [Float].init(count: (signalLen - padding1.count - a.count), repeatedValue: 0.0)
    let signal = padding1 + a + padding2
    var result = [Float].init(count: resultLen, repeatedValue: 0.0)
    vDSP_conv(signal, 1, a, 1, &result, 1, UInt(resultLen), UInt(filterLen))
    // Remove the first n-1 values, which are just mirrored from the end,
    // so that [0] always holds the zero-lag autocorrelation.
    result.removeFirst(filterLen - 1)
    return result
}
Note that the results here aren't normalized.
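If a normalized autocorrelation is wanted, one common convention (a sketch, not part of the answer above, written in current Swift syntax with a hypothetical helper name) divides by the zero-lag value so that result[0] becomes 1.0:
import Accelerate

func normalized(_ corr: [Float]) -> [Float] {
    // Divide every lag by the zero-lag value so the result starts at 1.0.
    guard let r0 = corr.first, r0 != 0 else { return corr }
    var scale = 1 / r0
    var out = [Float](repeating: 0, count: corr.count)
    vDSP_vsmul(corr, 1, &scale, &out, 1, vDSP_Length(corr.count))
    return out
}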

How to offset a Range<T> in Swift?

Imagine we have an arbitrary Range<T> and we want to create a new range with startIndex and endIndex advanced by 50 units.
My first thought was to do this:
let startIndex = advance(range.startIndex, 50)
let endIndex = advance(range.endIndex, 50)
var newRange = startIndex..<endIndex
But this gives "fatal error: can not increment endIndex". (Well, it does with Range<String.Index>. I haven't tried it with other generic parameters.) I've tried quite a few permutations of this, including assigning range.startIndex and range.endIndex to new variables, etc. Nothing works.
Let me stress that I'm looking for a solution that works with any T. GoZoner gave an answer below that I haven't tried with Int, but I wouldn't be surprised if it worked. However, no permutation of it I tried will work when T is String.Index
So, how can I do this?
There’s a second version of advance that takes a maximum index not to go beyond:
let s = "Hello, I must be going."
let range = s.startIndex..<s.endIndex
let startIndex = advance(range.startIndex, 50, s.endIndex)
let endIndex = advance(range.endIndex, 50, s.endIndex)
var newRange = startIndex..<endIndex
if newRange.isEmpty {
    println("new range out of bounds")
}
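In Swift 3 and later, advance(_:_:_:) became the collection method index(_:offsetBy:limitedBy:), so the same clamping pattern looks like this (a sketch):
let str = "Hello, I must be going."
let newStart = str.index(str.startIndex, offsetBy: 50, limitedBy: str.endIndex) ?? str.endIndex
let newEnd = str.index(str.endIndex, offsetBy: 50, limitedBy: str.endIndex) ?? str.endIndex
let clamped = newStart..<newEnd // empty here: both offsets run past the end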
Try:
1> var r1 = 1..<50
r1: Range<Int> = 1..<50
2> var r2 = (r1.startIndex+50)..<(r1.endIndex+50)
r2: Range<Int> = 51..<100