How to convert Data of Int16 audio samples to array of float audio samples - swift

I'm currently working with audio samples.
I get them from AVAssetReader and have a CMSampleBuffer with something like this:
guard let sampleBuffer = readerOutput.copyNextSampleBuffer() else {
    guard reader.status == .completed else { return nil }
    // Completed
    // samples is an array of Int16
    let samples = sampleData.withUnsafeBytes {
        Array(UnsafeBufferPointer<Int16>(
            start: $0, count: sampleData.count / MemoryLayout<Int16>.size))
    }
    // The only way I found to convert [Int16] -> [Float]...
    return samples.map { Float($0) / Float(Int16.max) }
}
guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else {
    return nil
}
let length = CMBlockBufferGetDataLength(blockBuffer)
let sampleBytes = UnsafeMutablePointer<UInt8>.allocate(capacity: length)
CMBlockBufferCopyDataBytes(blockBuffer, 0, length, sampleBytes)
sampleData.append(sampleBytes, count: length)
}   // closes the enclosing read loop from the surrounding code (not shown)
As you can see, the only way I found to convert [Int16] -> [Float] is samples.map { Float($0) / Float(Int16.max) }, but doing this increases my processing time. Is there another way to cast a pointer of Int16 to a pointer of Float?

"Casting" or "rebinding" a pointer only changes the way how memory is
interpreted. You want to compute floating point values from integers,
the new values have a different memory representation (and also a different
size).
Therefore you somehow have to iterate over all input values
and compute the new values. What you can do is to omit the Array
creation:
let samples = sampleData.withUnsafeBytes {
    UnsafeBufferPointer<Int16>(start: $0, count: sampleData.count / MemoryLayout<Int16>.size)
}
return samples.map { Float($0) / Float(Int16.max) }
Another option would be to use the vDSP functions from the
Accelerate framework:
import Accelerate
// ...
let numSamples = sampleData.count / MemoryLayout<Int16>.size
var factor = Float(Int16.max)
var floats: [Float] = Array(repeating: 0.0, count: numSamples)
// Int16 array to Float array:
sampleData.withUnsafeBytes {
    vDSP_vflt16($0, 1, &floats, 1, vDSP_Length(numSamples))
}
// Scaling:
vDSP_vsdiv(&floats, 1, &factor, &floats, 1, vDSP_Length(numSamples))
I don't know if that is faster, you'll have to check.
(Update: It is faster, as ColGraff demonstrated in his answer.)
An explicit loop is also much faster than using map:
let factor = Float(Int16.max)
let samples = sampleData.withUnsafeBytes {
    UnsafeBufferPointer<Int16>(start: $0, count: sampleData.count / MemoryLayout<Int16>.size)
}
var floats: [Float] = Array(repeating: 0.0, count: samples.count)
for i in 0..<samples.count {
    floats[i] = Float(samples[i]) / factor
}
return floats
An additional option in your case might be to use CMBlockBufferGetDataPointer() to access the samples in place, instead of copying them into allocated memory with CMBlockBufferCopyDataBytes().
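A minimal sketch of that approach, using the same Swift 3-era CoreMedia signatures as the code above (blockBuffer is the buffer from CMSampleBufferGetDataBuffer, and the returned pointer is only valid while that buffer is alive):
var lengthAtOffset = 0
var totalLength = 0
var dataPointer: UnsafeMutablePointer<Int8>? = nil
let status = CMBlockBufferGetDataPointer(blockBuffer, 0, &lengthAtOffset, &totalLength, &dataPointer)
// Assumes a contiguous block buffer (lengthAtOffset == totalLength).
if status == kCMBlockBufferNoErr, let dataPointer = dataPointer {
    let sampleCount = totalLength / MemoryLayout<Int16>.size
    dataPointer.withMemoryRebound(to: Int16.self, capacity: sampleCount) { int16Pointer in
        let samples = UnsafeBufferPointer(start: int16Pointer, count: sampleCount)
        // ... scale `samples` to Float with vDSP or an explicit loop, as shown above
    }
}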

You can do considerably better if you use the Accelerate Framework for the conversion:
import Accelerate
// Set up a random [Int16]
var randomInt = [Int16]()
randomInt.reserveCapacity(10000)
for _ in 0..<randomInt.capacity {
    let value = Int16(Int32(arc4random_uniform(UInt32(UInt16.max))) - Int32(UInt16.max / 2))
    randomInt.append(value)
}
// Time elapsed helper: https://stackoverflow.com/a/25022722/887210
func printTimeElapsedWhenRunningCode(title: String, operation: () -> ()) {
    let startTime = CFAbsoluteTimeGetCurrent()
    operation()
    let timeElapsed = CFAbsoluteTimeGetCurrent() - startTime
    print("Time elapsed for \(title): \(timeElapsed) s.")
}
// Testing
printTimeElapsedWhenRunningCode(title: "vDSP") {
    var randomFloat = [Float](repeating: 0, count: randomInt.capacity)
    vDSP_vflt16(randomInt, 1, &randomFloat, 1, vDSP_Length(randomInt.capacity))
}
printTimeElapsedWhenRunningCode(title: "map") {
    _ = randomInt.map { Float($0) }
}
// Results
//
// Time elapsed for vDSP: 0.000429034233093262 s.
// Time elapsed for map: 0.00233501195907593 s.
That's roughly a 5x improvement.
(Edit: Added some changes suggested by Martin R)

@MartinR and @ColGraff gave really good answers; thank you everybody for the fast replies.
However, I found an easier way to do it without any computation at all. AVAssetReaderAudioMixOutput requires an audio settings dictionary. In it we can set the key AVLinearPCMIsFloatKey: true. This way I read my data like this:
let samples = sampleData.withUnsafeBytes {
    UnsafeBufferPointer<Float>(start: $0,
                               count: sampleData.count / MemoryLayout<Float>.size)
}
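For reference, the kind of audio settings this refers to might look like the following sketch (Swift 3-era constant names to match the rest of this thread; `asset`, the bit depth, and the interleaving flags are assumptions rather than the poster's exact code):
import AVFoundation

let audioSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatLinearPCM,
    AVLinearPCMBitDepthKey: 32,           // Float samples are 32 bits wide
    AVLinearPCMIsFloatKey: true,          // deliver Float samples instead of Int16
    AVLinearPCMIsBigEndianKey: false,
    AVLinearPCMIsNonInterleaved: false
]
// `asset` is assumed to be the AVAsset being read.
let readerOutput = AVAssetReaderAudioMixOutput(audioTracks: asset.tracks(withMediaType: AVMediaTypeAudio),
                                               audioSettings: audioSettings)
// Then add it to the reader and copy sample buffers as before.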

for: Xcode 8.3.3 • Swift 3.1
extension Collection where Iterator.Element == Int16 {
    var floatArray: [Float] {
        return map { Float($0) }
    }
}
usage:
let int16Array: [Int16] = [1, 2, 3, 4]
let floatArray = int16Array.floatArray

Related

Why do I get popping noises from my Core Audio program?

I am trying to figure out how to use Apple's Core Audio APIs to record and play back linear PCM audio without any file I/O. (The recording side seems to work just fine.)
The code I have is pretty short, and it works somewhat. However, I am having trouble identifying the source of clicks and pops in the output. I've been beating my head against this for many days with no success.
I have posted a git repo here, with a command-line program that shows where I'm at: https://github.com/maxharris9/AudioRecorderPlayerSwift/tree/main/AudioRecorderPlayerSwift
I put in a couple of functions to prepopulate the recording. The tone generator (makeWave) and noise generator (makeNoise) are just in here as debugging aids. I'm ultimately trying to identify the source of the messed-up output when you play back a recording from audioData:
// makeWave(duration: 30.0, frequency: 441.0) // appends to `audioData`
// makeNoise(frameCount: Int(44100.0 * 30)) // appends to `audioData`
_ = Recorder() // appends to `audioData`
_ = Player() // reads from `audioData`
Here's the player code:
var lastIndexRead: Int = 0
func outputCallback(inUserData: UnsafeMutableRawPointer?, inAQ: AudioQueueRef, inBuffer: AudioQueueBufferRef) {
guard let player = inUserData?.assumingMemoryBound(to: Player.PlayingState.self) else {
print("missing user data in output callback")
return
}
let sliceStart = lastIndexRead
let sliceEnd = min(audioData.count, lastIndexRead + bufferByteSize - 1)
print("slice start:", sliceStart, "slice end:", sliceEnd, "audioData.count", audioData.count)
if sliceEnd >= audioData.count {
player.pointee.running = false
print("found end of audio data")
return
}
let slice = Array(audioData[sliceStart ..< sliceEnd])
let sliceCount = slice.count
// doesn't fix it
// audioData[sliceStart ..< sliceEnd].withUnsafeBytes {
// inBuffer.pointee.mAudioData.copyMemory(from: $0.baseAddress!, byteCount: Int(sliceCount))
// }
memcpy(inBuffer.pointee.mAudioData, slice, sliceCount)
inBuffer.pointee.mAudioDataByteSize = UInt32(sliceCount)
lastIndexRead += sliceCount + 1
// enqueue the buffer, or re-enqueue it if it's a used one
check(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil))
}
struct Player {
struct PlayingState {
var packetPosition: UInt32 = 0
var running: Bool = false
var start: Int = 0
var end: Int = Int(bufferByteSize)
}
init() {
var playingState: PlayingState = PlayingState()
var queue: AudioQueueRef?
// this doesn't help
// check(AudioQueueNewOutput(&audioFormat, outputCallback, &playingState, CFRunLoopGetMain(), CFRunLoopMode.commonModes.rawValue, 0, &queue))
check(AudioQueueNewOutput(&audioFormat, outputCallback, &playingState, nil, nil, 0, &queue))
var buffers: [AudioQueueBufferRef?] = Array<AudioQueueBufferRef?>.init(repeating: nil, count: BUFFER_COUNT)
print("Playing\n")
playingState.running = true
for i in 0 ..< BUFFER_COUNT {
check(AudioQueueAllocateBuffer(queue!, UInt32(bufferByteSize), &buffers[i]))
outputCallback(inUserData: &playingState, inAQ: queue!, inBuffer: buffers[i]!)
if !playingState.running {
break
}
}
check(AudioQueueStart(queue!, nil))
repeat {
CFRunLoopRunInMode(CFRunLoopMode.defaultMode, BUFFER_DURATION, false)
} while playingState.running
// delay to ensure queue emits all buffered audio
CFRunLoopRunInMode(CFRunLoopMode.defaultMode, BUFFER_DURATION * Double(BUFFER_COUNT + 1), false)
check(AudioQueueStop(queue!, true))
check(AudioQueueDispose(queue!, true))
}
}
I captured the audio with Audio Hijack, and noticed that the jumps are indeed correlated with the size of the buffer.
Why is this happening, and what can I do to fix it?
I believe you were beginning to zero in on, or at least suspect, the cause of the popping you are hearing: it's caused by discontinuities in your waveform.
My initial hunch was that you were generating the buffers independently (i.e. assuming that each buffer starts at time=0), but I checked out your code and it wasn't that. I suspect some of the calculations in makeWave were at fault. To check this theory I replaced your makeWave with the following:
func makeWave(offset: Double, numSamples: Int, sampleRate: Float64, frequency: Float64, numChannels: Int) -> [Int16] {
    var data = [Int16]()
    for sample in 0..<numSamples / numChannels {
        // time in s
        let t = offset + Double(sample) / sampleRate
        let value = Double(Int16.max) * sin(2 * Double.pi * frequency * t)
        for _ in 0..<numChannels {
            data.append(Int16(value))
        }
    }
    return data
}
This function removes the double loop in the original, accepts an offset so it knows which part of the wave is being generated and makes some changes to the sampling of the sine wave.
When Player is modified to use this function you get a lovely steady tone. I'll add the changes to player soon. I can't in good conscience show the quick and dirty mess it is now to the public.
Based on your comments below, I refocused on your player. The issue was that the audio buffers expect byte counts, but the slice count and some other calculations were based on counts of Int16 elements. The following version of outputCallback will fix it. Concentrate on the use of the new variable bytesPerChannel.
func outputCallback(inUserData: UnsafeMutableRawPointer?, inAQ: AudioQueueRef, inBuffer: AudioQueueBufferRef) {
    guard let player = inUserData?.assumingMemoryBound(to: Player.PlayingState.self) else {
        print("missing user data in output callback")
        return
    }
    let bytesPerChannel = MemoryLayout<Int16>.size
    let sliceStart = lastIndexRead
    let sliceEnd = min(audioData.count, lastIndexRead + bufferByteSize / bytesPerChannel)
    if sliceEnd >= audioData.count {
        player.pointee.running = false
        print("found end of audio data")
        return
    }
    let slice = Array(audioData[sliceStart ..< sliceEnd])
    let sliceCount = slice.count
    print("slice start:", sliceStart, "slice end:", sliceEnd, "audioData.count", audioData.count, "slice count:", sliceCount)
    // need to be careful to convert from counts of Ints to bytes
    memcpy(inBuffer.pointee.mAudioData, slice, sliceCount * bytesPerChannel)
    inBuffer.pointee.mAudioDataByteSize = UInt32(sliceCount * bytesPerChannel)
    lastIndexRead += sliceCount
    // enqueue the buffer, or re-enqueue it if it's a used one
    check(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil))
}
I did not look at the Recorder code, but you may want to check if the same sort of error crept in there.

How to extract UnsafePointer<CGFloat> from UnsafePointer<CGPoint> - Swift

I’m playing around with learning about pointers in Swift.
For instance, this code starts with a CGPoint array, creates an UnsafePointer and then extracts all the x values into a CGFloat array:
import Foundation
let points = [CGPoint(x:1.2, y:3.33), CGPoint(x:1.5, y:1.21), CGPoint(x:1.48, y:3.97)]
print(points)
let ptr = UnsafePointer(points)
print(ptr)
func xValues(buffer: UnsafePointer<CGPoint>, count: Int) -> [CGFloat]? {
return UnsafeBufferPointer(start: buffer, count: count).map { $0.x }
}
let x = xValues(buffer: ptr, count: points.count)
print(x)
And the expected output is:
[Foundation.CGPoint(x: 1.2, y: 3.33), Foundation.CGPoint(x: 1.5, y: 1.21), Foundation.CGPoint(x: 1.48, y: 3.97)]
0x0000556d6b818aa0
Optional([1.2, 1.5, 1.48])
Now I’d like to have the xValues function return directly UnsafePointer<CGFloat>, instead of going through [CGFloat].
How do I do that, is that possible?
It is unsafe to return pointers like that. As mentioned in the comments, you should use the withUnsafeBufferPointer method to access the underlying buffer:
let points = [
    CGPoint(x: 1.2, y: 3.33),
    CGPoint(x: 1.5, y: 1.21),
    CGPoint(x: 1.48, y: 3.97)
]
let xValues = points.withUnsafeBufferPointer { buffer in
    return buffer.map { $0.x }
}
If you need a pointer to the array of CGFloat just use the same method as above:
xValues.withUnsafeBufferPointer { buffer in
    // Do things with UnsafeBufferPointer<CGFloat>
}
A good Swift Pointer tutorial here.
Edit
Here is a working example:
let points = [
    CGPoint(x: 1.2, y: 3.33),
    CGPoint(x: 1.5, y: 1.21),
    CGPoint(x: 1.48, y: 3.97)
]
// Create, initialize, and defer deallocation
let ptr = UnsafeMutablePointer<CGFloat>.allocate(capacity: points.count)
ptr.initialize(repeating: 0.0, count: points.count)
defer {
    ptr.deallocate()
}
// Populate the pointer
points.withUnsafeBufferPointer { buffer in
    for i in 0..<buffer.count {
        ptr.advanced(by: i).pointee = buffer[i].x
    }
}
// Do things with UnsafeMutablePointer<CGFloat>, for instance:
let buffer = UnsafeBufferPointer(start: ptr, count: points.count)
for (index, value) in buffer.enumerated() {
    print("index: \(index), value: \(value)")
}
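If you really want something shaped like the original xValues(buffer:count:), one safer variation (a hypothetical helper, not part of the answer above; it continues from the snippet above and reuses points) is to hand the caller a temporary buffer through a closure instead of returning a pointer:
// Exposes the x values as an UnsafeBufferPointer<CGFloat> only for the lifetime
// of the closure, so no pointer escapes.
func withXValues<R>(of points: [CGPoint], _ body: (UnsafeBufferPointer<CGFloat>) -> R) -> R {
    let xs = points.map { $0.x }
    return xs.withUnsafeBufferPointer(body)
}

let sum = withXValues(of: points) { buffer -> CGFloat in
    buffer.reduce(0, +)
}
print(sum)   // 4.18 for the three points above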

Get all sound frequencies of a WAV-file using Swift and AVFoundation

I would like to capture all frequencies between given timespans in a WAV file. The intent is to do some audio analysis in a later step. As a test, I've used the application "Sox" to generate a 1-second-long WAV file which contains only a single tone at 13000 Hz. I want to read the file and find that frequency.
I'm using AVFoundation (which is important) to read the file. Since the input data is in PCM, I need to use an FFT to get the actual frequencies, which I do using the Accelerate framework. However, I don't get the expected result (13000 Hz), but rather a lot of values I don't understand. I'm new to audio development, so any hint about where my code is failing is appreciated. The code includes a few comments where the issue occurs.
Thanks in advance!
Code:
import AVFoundation
import Accelerate
class Analyzer {
// This function is implemented using the code from the following tutorial:
// https://developer.apple.com/documentation/accelerate/vdsp/fast_fourier_transforms/finding_the_component_frequencies_in_a_composite_sine_wave
func fftTransform(signal: [Float], n: vDSP_Length) -> [Int] {
let observed: [DSPComplex] = stride(from: 0, to: Int(n), by: 2).map {
return DSPComplex(real: signal[$0],
imag: signal[$0.advanced(by: 1)])
}
let halfN = Int(n / 2)
var forwardInputReal = [Float](repeating: 0, count: halfN)
var forwardInputImag = [Float](repeating: 0, count: halfN)
var forwardInput = DSPSplitComplex(realp: &forwardInputReal,
imagp: &forwardInputImag)
vDSP_ctoz(observed, 2,
&forwardInput, 1,
vDSP_Length(halfN))
let log2n = vDSP_Length(log2(Float(n)))
guard let fftSetUp = vDSP_create_fftsetup(
log2n,
FFTRadix(kFFTRadix2)) else {
fatalError("Can't create FFT setup.")
}
defer {
vDSP_destroy_fftsetup(fftSetUp)
}
var forwardOutputReal = [Float](repeating: 0, count: halfN)
var forwardOutputImag = [Float](repeating: 0, count: halfN)
var forwardOutput = DSPSplitComplex(realp: &forwardOutputReal,
imagp: &forwardOutputImag)
vDSP_fft_zrop(fftSetUp,
&forwardInput, 1,
&forwardOutput, 1,
log2n,
FFTDirection(kFFTDirection_Forward))
let componentFrequencies = forwardOutputImag.enumerated().filter {
$0.element < -1
}.map {
return $0.offset
}
return componentFrequencies
}
func run() {
// The frequencies array is a array of frequencies which is then converted to points on sinus curves (signal)
let n = vDSP_Length(4*4096)
let frequencies: [Float] = [1, 5, 25, 30, 75, 100, 300, 500, 512, 1023]
let tau: Float = .pi * 2
let signal: [Float] = (0 ... n).map { index in
frequencies.reduce(0) { accumulator, frequency in
let normalizedIndex = Float(index) / Float(n)
return accumulator + sin(normalizedIndex * frequency * tau)
}
}
// These signals are then restored using the fftTransform function above, giving the exact same values as in the "frequencies" variable
let frequenciesRestored = fftTransform(signal: signal, n: n).map({Float($0)})
assert(frequenciesRestored == frequencies)
// Now I want to do the same thing, but reading the frequencies from a file (which includes a constant tone at 13000 Hz)
let file = { PATH TO A WAV-FILE WITH A SINGLE TONE AT 13000Hz RUNNING FOR 1 SECOND }
let asset = AVURLAsset(url: URL(fileURLWithPath: file))
let track = asset.tracks[0]
do {
let reader = try AVAssetReader(asset: asset)
let sampleRate = 48000.0
let outputSettingsDict: [String: Any] = [
AVFormatIDKey: kAudioFormatLinearPCM,
AVSampleRateKey: Int(sampleRate),
AVLinearPCMIsNonInterleaved: false,
AVLinearPCMBitDepthKey: 16,
AVLinearPCMIsFloatKey: false,
AVLinearPCMIsBigEndianKey: false,
]
let output = AVAssetReaderTrackOutput(track: track, outputSettings: outputSettingsDict)
output.alwaysCopiesSampleData = false
reader.add(output)
reader.startReading()
typealias audioBuffertType = Int16
autoreleasepool {
while (reader.status == .reading) {
if let sampleBuffer = output.copyNextSampleBuffer() {
var audioBufferList = AudioBufferList(mNumberBuffers: 1, mBuffers: AudioBuffer(mNumberChannels: 0, mDataByteSize: 0, mData: nil))
var blockBuffer: CMBlockBuffer?
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
sampleBuffer,
bufferListSizeNeededOut: nil,
bufferListOut: &audioBufferList,
bufferListSize: MemoryLayout<AudioBufferList>.size,
blockBufferAllocator: nil,
blockBufferMemoryAllocator: nil,
flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
blockBufferOut: &blockBuffer
);
let buffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))
for buffer in buffers {
let samplesCount = Int(buffer.mDataByteSize) / MemoryLayout<audioBuffertType>.size
let samplesPointer = audioBufferList.mBuffers.mData!.bindMemory(to: audioBuffertType.self, capacity: samplesCount)
let samples = UnsafeMutableBufferPointer<audioBuffertType>(start: samplesPointer, count: samplesCount)
let myValues: [Float] = samples.map {
let value = Float($0)
return value
}
// Here I would expect my array to include multiple "13000" which is the frequency of the tone in my file
// I'm not sure what the variable 'n' does in this case, but changing it seems to change the result.
// The value should be twice as high as the highest measurable frequency (Nyquist frequency) (13000),
// but this crashes the application:
let mySignals = fftTransform(signal: myValues, n: vDSP_Length(2 * 13000))
assert(mySignals[0] == 13000)
}
}
}
}
}
catch {
print("error!")
}
}
}
The test clip can be generated using:
sox -G -n -r 48000 ~/outputfile.wav synth 1.0 sine 13000

How to create a function which measures time performance of algorithms?

If I want to evaluate the time performance of a few algorithms using Date() or DispatchTime, how can I create a function that does this?
For example, take this binary search algorithm below. How can I pass it as a closure parameter and have the closure do all of the time measuring, using any of the Swift timekeeping methods below? Please answer with an example of a closure. Thanks.
let startingPoint = Date()
let startingPoint = Dispatch().now
func binarySearchForValue(searchValue: Int, array: [Int]) -> Bool {
    var leftIndex = 0
    var rightIndex = array.count - 1
    while leftIndex <= rightIndex {
        let middleIndex = (leftIndex + rightIndex) / 2
        let middleValue = array[middleIndex]
        if middleValue == searchValue {
            return true
        }
        if searchValue < middleValue {
            rightIndex = middleIndex - 1
        }
        if searchValue > middleValue {
            leftIndex = middleIndex + 1
        }
    }
    return false
}
Since you may want to measure different functions, it probably makes sense to capture the arguments for the function in the closure instead of including their types in the signature. But I did use a generic type for the return value. I hope that this is what you're after:
func measure<R>(_ label: String, operation: () -> R) -> R {
    let start = DispatchTime.now()
    let result = operation()
    let end = DispatchTime.now()
    let nanoTime = end.uptimeNanoseconds - start.uptimeNanoseconds
    let timeInterval = Double(nanoTime) / 1_000_000_000
    print(String(format: "%@: %.9f s", label, timeInterval))
    return result
}
Here's how you use it:
let result = measure("search") { binarySearchForValue(searchValue: 3, array: [1, 3, 8]) }
print(result) // that's the result of the function that was measured, not the measurement
measure("some function") { functionWithoutReturnValue() }
If the function has no return value, R will be (), so that should work too. Just don't assign the result to anything (see the example above). If you want to do something with the measurement other than printing it to the console, you can do that, too. But you didn't specify that in your question, so I went with print.
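For example, a variation that hands back the elapsed time together with the result (a hypothetical sketch, not part of the answer above) could look like this:
import Dispatch

func measured<R>(_ operation: () -> R) -> (result: R, seconds: Double) {
    let start = DispatchTime.now()
    let result = operation()
    let end = DispatchTime.now()
    let seconds = Double(end.uptimeNanoseconds - start.uptimeNanoseconds) / 1_000_000_000
    return (result, seconds)
}

let (found, seconds) = measured { binarySearchForValue(searchValue: 3, array: [1, 3, 8]) }
print(found, seconds)   // e.g. "true 1.25e-06"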
I'm not sure what exactly you are after here, and the solution below will only fit one specific function signature to test:
func testBench(search: Int, array: [Int], testCase test: (Int, [Int]) -> Bool) {
    let start = DispatchTime.now()
    _ = test(search, array)
    let end = DispatchTime.now()
    print("\(start) - \(end)")
}
called like this
testBench(search: 3, array: [6,7,5,3]) {binarySearchForValue(searchValue: $0, array: $1)}
You should use XCTest to measure performance. It gives you proper stats for your method, e.g.:
func testMyCodesPerformance() {
    measure {
        someClass.doSomethingFancy()
    }
}
You can do a lot more with XCTest's measure (formerly measureBlock) for performance testing.
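A complete test case for the binary search from the question might look like this sketch (the class and test names are made up, and it assumes binarySearchForValue is visible to the test target):
import XCTest

class SearchPerformanceTests: XCTestCase {
    func testBinarySearchPerformance() {
        let array = Array(0..<1_000_000)
        measure {
            // XCTest runs this block several times and reports the average and standard deviation.
            _ = binarySearchForValue(searchValue: 987_654, array: array)
        }
    }
}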

UInt32 array to String Byte Array in Swift

Before I start I would like to apologise if I say something crazy.
I am working on an app that uses a C library. Among other things, it shares idArrays.
I have the part that decodes an idArray; it was given to me:
func decodeArrayID(aArray: UnsafeMutablePointer<CChar>, aTokenLen: UInt32) -> ([UInt32], String) {
    let arrayCount = Int(aTokenLen / 4)
    var idArrayTemp = [UInt32]()
    var idArrayStringTemp = ""
    for i in 0..<arrayCount {
        let idValue = decodeArrayIDItem(index: i, array: aArray)
        idArrayTemp.append(idValue)
        idArrayStringTemp += "\(idValue) "
    }
    return (idArrayTemp, idArrayStringTemp)
}
func decodeArrayIDItem(index: Int, array: UnsafeMutablePointer<CChar>) -> UInt32 {
    var value: UInt32 = UInt32(array[index * 4]) & 0xFF
    value <<= 8
    value |= UInt32(array[index * 4 + 1]) & 0xFF
    value <<= 8
    value |= UInt32(array[index * 4 + 2]) & 0xFF
    value <<= 8
    value |= UInt32(array[index * 4 + 3]) & 0xFF
    return value
}
As we can see, the idArray is sent through an UnsafeMutablePointer<CChar> (a.k.a. UnsafeMutablePointer<Int8>).
Now I am working on the encoding part. The function takes an array of UInt32 values, converts it into a byte array, and then converts that into a string for sending through the library.
So far I have the following code, but it doesn't work:
func encodeIDArray(idArray: [UInt32]) -> String {
    var aIDArray8: [UInt8] = [UInt8]()
    for var value in idArray {
        let count = MemoryLayout<UInt32>.size
        let bytePtr = withUnsafePointer(to: &value) {
            $0.withMemoryRebound(to: UInt8.self, capacity: count) {
                UnsafeBufferPointer(start: $0, count: count)
            }
        }
        aIDArray8 += Array(bytePtr)
    }
    let stringTest = String(data: Data(aIDArray8), encoding: .utf8)
    return stringTest!
}
A test result for the input [1, 2] returns "\u{01}\0\0\0\u{02}\0\0\0", and something tells me it's not quite right...
Thank you
Edited
The C functions are:
DllExport void STDCALL DvProviderAvOpenhomeOrgPlaylist1EnableActionIdArray(THandle aProvider, CallbackPlaylist1IdArray aCallback, void* aPtr);
where CallbackPlaylist1IdArray is
typedef int32_t (STDCALL *CallbackPlaylist1IdArray)(void* aPtr, IDvInvocationC* aInvocation, void* aInvocationPtr, uint32_t* aToken, char** aArray, uint32_t* aArrayLen);
and the value assigned to aArray is the one that gets the byte array.
I believe you are on the right track:
func encodeIDArray(idArray: [UInt32]) -> String {
    var aIDArray8: [UInt8] = [UInt8]()
    for var value in idArray {
        let count = MemoryLayout<UInt32>.size
        let bytePtr = withUnsafePointer(to: &value) {
            $0.withMemoryRebound(to: UInt8.self, capacity: count) { v in
                // Just change it so it doesn't return the pointer itself, but the result of the rebind
                UnsafeBufferPointer(start: v, count: count)
            }
        }
        aIDArray8 += Array(bytePtr)
    }
    let stringTest = String(data: Data(aIDArray8), encoding: .utf8)
    return stringTest!
}
Change your test to some valid values in the ASCII table, like this:
encodeIDArray(idArray: [65, 66, 67]) // "ABC"
I hope it helps you... Good luck, and let me know if it works in your case.
You can copy the [UInt32] array values to the allocated memory without creating an intermediate [Int8] array, and use the bigEndian
property instead of bit shifting and masking:
func writeCArrayValue(from pointer: UnsafeMutablePointer<UnsafeMutablePointer<Int8>?>?,
                      withUInt32Values array: [UInt32]) {
    pointer?.pointee = UnsafeMutablePointer<Int8>.allocate(capacity: MemoryLayout<UInt32>.size * array.count)
    pointer?.pointee?.withMemoryRebound(to: UInt32.self, capacity: array.count) {
        for i in 0..<array.count {
            $0[i] = array[i].bigEndian
        }
    }
}
In the same way you can do the decoding:
func decodeArrayID(aArray: UnsafeMutablePointer<CChar>, aTokenLen: UInt32) -> [UInt32] {
    let arrayCount = Int(aTokenLen / 4)
    var idArrayTemp = [UInt32]()
    aArray.withMemoryRebound(to: UInt32.self, capacity: arrayCount) {
        for i in 0..<arrayCount {
            idArrayTemp.append(UInt32(bigEndian: $0[i]))
        }
    }
    return idArrayTemp
}
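A minimal round trip of these two helpers (hypothetical usage, not part of the original answer) would look like this:
let ids: [UInt32] = [1, 2, 70_000]
var cArray: UnsafeMutablePointer<Int8>? = nil
withUnsafeMutablePointer(to: &cArray) {
    writeCArrayValue(from: $0, withUInt32Values: ids)
}
if let cArray = cArray {
    let decoded = decodeArrayID(aArray: cArray, aTokenLen: UInt32(ids.count * 4))
    print(decoded)   // [1, 2, 70000]
    // Remember to deallocate the memory that writeCArrayValue allocated once you are done with it.
}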
You can't convert a binary buffer to a string and expect it to work. You should base64 encode your binary data. That IS a valid way to represent binary data as strings.
Consider the following code:
//Utility function that takes a typed pointer to a data buffer an converts it to an array of the desired type of object
func convert<T>(count: Int, data: UnsafePointer<T>) -> [T] {
let buffer = UnsafeBufferPointer(start: data, count: count);
return Array(buffer)
}
//Create an array of UInt32 values
let intArray: [UInt32] = Array<UInt32>(1...10)
print("source arrray = \(intArray)")
let arraySize = MemoryLayout<UInt32>.size * intArray.count
//Convert the array to a Data object
let data = Data(bytes: UnsafeRawPointer(intArray),
count: arraySize)
//Convert the binary Data to base64
let base64String = data.base64EncodedString()
print("Array as base64 data = ", base64String)
if let newData = Data(base64Encoded: base64String) {
newData.withUnsafeBytes { (bytes: UnsafePointer<UInt32>)->Void in
let newArray = convert(count:10, data: bytes)
print("After conversion, newArray = ", newArray)
}
} else {
fatalError("Failed to base-64 decode data!")
}
The output of that code is:
source array = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Array as base64 data = AQAAAAIAAAADAAAABAAAAAUAAAAGAAAABwAAAAgAAAAJAAAACgAAAA==
After conversion, newArray = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Program ended with exit code: 0
Although I really appreciate all the answers, I have finally figured out what was happening. I have to say that Duncan's answer was the closest to my problem.
So far I had interpreted char** as a String. It turns out that it can also be a pointer to an array (correct me if I am wrong!). Converting the array to a String gave a format that the library didn't like, and it could not be decoded on the other end.
The way I ended up doing is:
func encodeIDArray(idArray: [UInt32]) -> [Int8] {
    var aIDArray8 = [UInt8](repeating: 0, count: idArray.count * 4)
    for i in 0..<idArray.count {
        // Mask before converting so that IDs larger than one byte don't trap.
        aIDArray8[i * 4]     = UInt8((idArray[i] >> 24) & 0xff)
        aIDArray8[i * 4 + 1] = UInt8((idArray[i] >> 16) & 0xff)
        aIDArray8[i * 4 + 2] = UInt8((idArray[i] >> 8) & 0xff)
        aIDArray8[i * 4 + 3] = UInt8(idArray[i] & 0xff)
    }
    return aIDArray8.map { Int8(bitPattern: $0) }
}
and then I assign the value of the C variable in Swift like this:
let myArray = encodeIDArray(idArray:theArray)
writeCArrayValue(from: aArrayPointer, withValue: myArray)
func writeCArrayValue(from pointer: UnsafeMutablePointer<UnsafeMutablePointer<Int8>?>?, withValue array: [Int8]) {
    pointer?.pointee = UnsafeMutablePointer<Int8>.allocate(capacity: array.count)
    memcpy(pointer?.pointee, array, array.count)
}
aArrayPointer is the char** used by the library.