I got a decoded AVFrame whose format shows 160 (videotoolbox_vld). After googling some articles (here) and reading the FFmpeg source code (here, and here), the CVBuffer handle should be at AVFrame.data[3]. But the CVBuffer I get seems invalid: every CVPixelBufferGetXXX() function returns 0 or nil.
If I use av_hwframe_transfer_data() the way FFmpeg's examples/hw_decode.c does, the frame can be downloaded from the HW buffer to a SW buffer, and its AVFrame.format becomes nv12. After converting it to bgra via sws_scale, the frame displays on the view with the correct content.
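Roughly, that working path looks like this (a sketch; pFrameOpt is my decoded frame, as in the code further below):
var swFrameOpt: UnsafeMutablePointer<AVFrame>? = av_frame_alloc()
// Downloads the VideoToolbox surface into a software frame.
if av_hwframe_transfer_data(swFrameOpt, pFrameOpt, 0) == 0 {
    // swFrameOpt?.pointee.format is now nv12; hand it to sws_scale for bgra.
}
av_frame_free(&swFrameOpt)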
So I think the VideoToolbox-decoded frame itself is fine; the way I convert AVFrame.data[3] to a CVBuffer is probably what's wrong. I've just learned how to access C pointers in Swift, and I'm not sure how to read a resource handle (CVBuffer) out of a pointer correctly.
The following is how I try to extract the CVBuffer from the AVFrame:
var pFrameOpt: UnsafeMutablePointer<AVFrame>? = av_frame_alloc()
avcodec_receive_frame(..., pFrameOpt)

let data3: UnsafeMutablePointer<UInt8>? = pFrameOpt?.pointee.data.3
data3?.withMemoryRebound(to: CVBuffer.self, capacity: 1) { pCvBuf in
    let fW = pFrameOpt!.pointee.width    // print 3840
    let fH = pFrameOpt!.pointee.height   // print 2160
    let fFmt = pFrameOpt!.pointee.format // print 160

    let cvBuf: CVBuffer = pCvBuf.pointee
    let a1 = CVPixelBufferGetDataSize(cvBuf)               // print 0
    let a2 = CVPixelBufferGetPixelFormatType(cvBuf)        // print 0
    let a3 = CVPixelBufferGetWidth(cvBuf)                  // print 0
    let a4 = CVPixelBufferGetHeight(cvBuf)                 // print 0
    let a5 = CVPixelBufferGetBytesPerRow(cvBuf)            // print 0
    let a6 = CVPixelBufferGetBytesPerRowOfPlane(cvBuf, 0)  // print 0
    let a7 = CVPixelBufferGetWidthOfPlane(cvBuf, 0)        // print 0
    let a8 = CVPixelBufferGetHeightOfPlane(cvBuf, 0)       // print 0
    let a9 = CVPixelBufferGetPlaneCount(cvBuf)             // print 0
    let a10 = CVPixelBufferIsPlanar(cvBuf)                 // print false
    let a11 = CVPixelBufferGetIOSurface(cvBuf)             // print nil
    let a12 = CVPixelBufferGetBaseAddress(cvBuf)           // print nil
    let a13 = CVPixelBufferGetBaseAddressOfPlane(cvBuf, 0) // print nil
    let b1 = CVImageBufferGetCleanRect(cvBuf)   // print 0, 0, 0, 0
    let b2 = CVImageBufferGetColorSpace(cvBuf)  // print nil
    let b3 = CVImageBufferGetDisplaySize(cvBuf) // print 0, 0, 0, 0
    let b4 = CVImageBufferGetEncodedSize(cvBuf) // print 0, 0, 0, 0
    let b5 = CVImageBufferIsFlipped(cvBuf)      // print false

    // bad exec
    var cvTextureOut: CVMetalTexture?
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, ..., cvBuf, nil, .bgra8Unorm, 3840, 2160, 0, ...)
}
CVBuffer is not a fixed size, so rebinding the memory won't work this way. You need to do this:
Unmanaged<CVBuffer>.fromOpaque(data3!).takeUnretainedValue()
(Take it unretained: the AVFrame still owns the buffer, so takeRetainedValue would over-release it.)
However, the bottom line is that FFmpeg's VideoToolbox backend does not create its CVPixelBuffer with kCVPixelBufferMetalCompatibilityKey set to true, so you won't be able to call CVMetalTextureCacheCreateTextureFromImage(...) successfully in any case.
You could consider using a CVPixelBufferPool with appropriate settings (including kCVPixelBufferMetalCompatibilityKey set to true) and then using VTPixelTransferSession to quickly copy FFmpeg's pixel buffer to your own.
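A minimal sketch of that approach, assuming srcBuf is the CVPixelBuffer pulled out of AVFrame.data[3] and the frames are 3840x2160 NV12 (names here are illustrative; VTPixelTransferSession is macOS, and iOS 16+):
import CoreVideo
import VideoToolbox

let attrs: [CFString: Any] = [
    kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
    kCVPixelBufferWidthKey: 3840,
    kCVPixelBufferHeightKey: 2160,
    kCVPixelBufferMetalCompatibilityKey: true
]
var poolOpt: CVPixelBufferPool?
CVPixelBufferPoolCreate(kCFAllocatorDefault, nil, attrs as CFDictionary, &poolOpt)

var sessionOpt: VTPixelTransferSession?
VTPixelTransferSessionCreate(allocator: kCFAllocatorDefault, pixelTransferSessionOut: &sessionOpt)

var dstBufOpt: CVPixelBuffer?
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, poolOpt!, &dstBufOpt)
// Copies srcBuf into a Metal-compatible pixel buffer that you own.
VTPixelTransferSessionTransferImage(sessionOpt!, from: srcBuf, to: dstBufOpt!)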
It turns out I was wrongly treating the void* as a CVPixelBuffer* instead of casting the void* directly to a CVPixelBuffer. I couldn't find a Swift way to do that C-style cast from a raw pointer to an object reference (using as! CVPixelBuffer crashes), so I created a C function to do the casting from void* to CVPixelBufferRef.
// util.h
#include <CoreVideo/CVPixelBuffer.h>
CVPixelBufferRef CastToCVPixelBuffer(void* p);
// util.c
CVPixelBufferRef CastToCVPixelBuffer(void* p)
{
return (CVPixelBufferRef)p;
}
// BridgeHeader.h
#include "util.h"
Then pass the UnsafeMutablePointer<UInt8> in and get the CVPixelBuffer handle out:
let pFrameOpt: UnsafeMutablePointer<AVFrame>? = ...
let data3: UnsafeMutablePointer<UInt8>? = pFrameOpt?.pointee.data.3
let cvBuf: CVBuffer = CastToCVPixelBuffer(data3).takeUnretainedValue()
let width = CVPixelBufferGetWidth(cvBuf) // print 3840
let height = CVPixelBufferGetHeight(cvBuf) // print 2160
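For completeness, the same cast works in pure Swift via the Unmanaged route from the answer above, without the C shim:
let cvBuf: CVBuffer = Unmanaged<CVBuffer>.fromOpaque(UnsafeRawPointer(data3!)).takeUnretainedValue()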
Try this:
let cvBuf: CVBuffer = Array(UnsafeMutableBufferPointer(start: data3, count: 3))
    .withUnsafeBufferPointer {
        $0.baseAddress!.withMemoryRebound(to: CVBuffer.self, capacity: 1) { $0 }
    }.pointee
or maybe even
let cvBuf: CVBuffer = unsafeBitcast(data3, to: CVBuffer.self) // bit-cast the pointer value itself; a buffer pointer wrapper has a different size and would trap
/**
    @function   CVPixelBufferGetBaseAddressOfPlane
    @abstract   Returns the base address of the plane at planeIndex in the PixelBuffer.
    @discussion Retrieving the base address for a PixelBuffer requires that the buffer base address be locked
                via a successful call to CVPixelBufferLockBaseAddress. On OSX 10.10 and earlier, or iOS 8 and
                earlier, calling this function with a non-planar buffer will have undefined behavior.
    @param      pixelBuffer Target PixelBuffer.
    @param      planeIndex  Identifying the plane.
    @result     Base address of the plane, or NULL for non-planar CVPixelBufferRefs.
*/
@available(iOS 4.0, *)
public func CVPixelBufferGetBaseAddressOfPlane(_ pixelBuffer: CVPixelBuffer, _ planeIndex: Int) -> UnsafeMutableRawPointer?
Maybe you can try calling CVPixelBufferLockBaseAddress before using CVPixelBufferGetBaseAddressOfPlane.
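Something like this (a small sketch using the cvBuf from the question):
CVPixelBufferLockBaseAddress(cvBuf, .readOnly)
let plane0 = CVPixelBufferGetBaseAddressOfPlane(cvBuf, 0) // valid only while the buffer is locked
// ... read the pixel data here ...
CVPixelBufferUnlockBaseAddress(cvBuf, .readOnly)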
Related
I am trying to create a real-time video processing app in which I need to get the RGBA values of all pixels for each frame, process them with an external library, and display the result. The way I am getting each pixel's RGBA value is too slow, and I was wondering if there is a faster way, perhaps using vImage. This is my current code, and the way I get all the pixels of the current frame:
guard let cgImage = context.makeImage() else {
    return nil
}
guard let data = cgImage.dataProvider?.data,
      let bytes = CFDataGetBytePtr(data) else {
    fatalError("Couldn't access image data")
}
assert(cgImage.colorSpace?.model == .rgb)
let bytesPerPixel = cgImage.bitsPerPixel / cgImage.bitsPerComponent
gp.async {
    for y in 0 ..< cgImage.height {
        for x in 0 ..< cgImage.width {
            let offset = (y * cgImage.bytesPerRow) + (x * bytesPerPixel)
            let components = (r: bytes[offset], g: bytes[offset + 1], b: bytes[offset + 2])
            print("[x:\(x), y:\(y)] \(components)")
        }
        print("---")
    }
}
This is the version using vImage, but there is a memory leak somewhere and I cannot access the pixels:
guard
    let format = vImage_CGImageFormat(cgImage: cgImage),
    var buffer = try? vImage_Buffer(cgImage: cgImage, format: format) else {
        exit(-1)
}
let rowStride = buffer.rowBytes / MemoryLayout<Pixel_8>.stride / format.componentCount
do {
    let componentCount = format.componentCount
    var argbSourcePlanarBuffers: [vImage_Buffer] = (0 ..< componentCount).map { _ in
        guard let buffer1 = try? vImage_Buffer(width: Int(buffer.width),
                                               height: Int(buffer.height),
                                               bitsPerPixel: format.bitsPerComponent) else {
            fatalError("Error creating source buffers.")
        }
        return buffer1
    }
    vImageConvert_ARGB8888toPlanar8(&buffer,
                                    &argbSourcePlanarBuffers[0],
                                    &argbSourcePlanarBuffers[1],
                                    &argbSourcePlanarBuffers[2],
                                    &argbSourcePlanarBuffers[3],
                                    vImage_Flags(kvImageNoFlags))
    let n = rowStride * Int(argbSourcePlanarBuffers[1].height) * format.componentCount
    let start = buffer.data.assumingMemoryBound(to: Pixel_8.self)
    var ptr = UnsafeBufferPointer(start: start, count: n)
    print(Array(argbSourcePlanarBuffers)[1]) // prints the first 15 interleaved values
    buffer.free()
}
You can access the underlying pixels in a vImage buffer to do this.
For example, given an image named cgImage, use the following code to populate a vImage buffer:
guard
    let format = vImage_CGImageFormat(cgImage: cgImage),
    let buffer = try? vImage_Buffer(cgImage: cgImage, format: format) else {
        exit(-1)
}
let rowStride = buffer.rowBytes / MemoryLayout<Pixel_8>.stride / format.componentCount
Note that a vImage buffer's data may be wider than the image (see: https://developer.apple.com/documentation/accelerate/finding_the_sharpest_image_in_a_sequence_of_captured_images), which is why I've added rowStride.
To access the pixels as a single buffer of interleaved values, use:
do {
    let n = rowStride * Int(buffer.height) * format.componentCount
    let start = buffer.data.assumingMemoryBound(to: Pixel_8.self)
    let ptr = UnsafeBufferPointer(start: start, count: n)
    print(Array(ptr)[0 ... 15]) // prints the first 16 interleaved values
}
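For example, to read a single pixel (x, y) from that interleaved view (a sketch; the component order depends on your format, RGBA assumed here):
let x = 20, y = 10
let offset = (y * rowStride + x) * format.componentCount // rowStride is in pixels
let r = ptr[offset], g = ptr[offset + 1], b = ptr[offset + 2], a = ptr[offset + 3]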
To access the pixels as a buffer of Pixel_8888 values, use the following (make sure format.componentCount is 4):
do {
    let n = rowStride * Int(buffer.height)
    let start = buffer.data.assumingMemoryBound(to: Pixel_8888.self)
    let ptr = UnsafeBufferPointer(start: start, count: n)
    print(Array(ptr)[0 ... 3]) // prints the first 4 pixels
}
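And per-pixel access in that view is a single index (a small sketch):
let x = 20, y = 10
let pixel = ptr[y * rowStride + x] // one (UInt8, UInt8, UInt8, UInt8) tuple per pixel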
This is the slowest way to do it. A faster way is with a custom Core Image filter.
Faster than that is to write your own OpenGL shader (or rather, its equivalent in Metal for current devices).
I've written OpenGL shaders, but have not worked with Metal yet.
Both allow you to write graphics code that runs directly on the GPU.
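To give an idea of the Core Image route, here is a hedged sketch of a custom color kernel that runs per pixel on the GPU (CIColorKernel(source:) is the older CIKernel-language API; Metal-backed CIKernels are the current equivalent, and the kernel name here is illustrative):
import CoreImage

let swapRB = CIColorKernel(source: """
    kernel vec4 swapRB(__sample s) { return s.bgra; }
    """)!

func process(_ image: CIImage) -> CIImage? {
    return swapRB.apply(extent: image.extent, arguments: [image])
}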
I am trying to use Metal argument buffers to access data in a Metal compute kernel.
The buffer has an entry when I print out the value CPU-side, but the Xcode debugger shows my argument buffer as empty on the GPU.
I can see my buffer with the sentinel value as an indirect resource in the debugger, but there is no pointer to it in the argument buffer.
Here is the Swift code:
import MetalKit

do {
    let device = MTLCreateSystemDefaultDevice()!

    let capture_manager = MTLCaptureManager.shared()
    let capture_desc = MTLCaptureDescriptor()
    capture_desc.captureObject = device
    try capture_manager.startCapture(with: capture_desc)

    let argument_desc = MTLArgumentDescriptor()
    argument_desc.dataType = MTLDataType.pointer
    argument_desc.index = 0
    argument_desc.arrayLength = 1024

    let argument_encoder = device.makeArgumentEncoder(arguments: [argument_desc])!
    let argument_buffer = device.makeBuffer(length: argument_encoder.encodedLength, options: MTLResourceOptions())
    argument_encoder.setArgumentBuffer(argument_buffer, offset: 0)

    var sentinel: UInt32 = 12345
    let ptr = UnsafeRawPointer.init(&sentinel)
    let buffer = device.makeBuffer(bytes: ptr, length: 4, options: MTLResourceOptions.storageModeShared)!
    argument_encoder.setBuffer(buffer, offset: 0, index: 0)

    let source = try String(contentsOf: URL.init(fileURLWithPath: "/path/to/kernel.metal"))
    let library = try device.makeLibrary(source: source, options: MTLCompileOptions())
    let function = library.makeFunction(name: "main0")!
    let pipeline = try device.makeComputePipelineState(function: function)

    let queue = device.makeCommandQueue()!
    let encoder = queue.makeCommandBuffer()!
    let compute_encoder = encoder.makeComputeCommandEncoder()!
    compute_encoder.setComputePipelineState(pipeline)
    compute_encoder.setBuffer(argument_buffer, offset: 0, index: 0)
    compute_encoder.useResource(buffer, usage: MTLResourceUsage.read)
    compute_encoder.dispatchThreads(MTLSize.init(width: 1, height: 1, depth: 1), threadsPerThreadgroup: MTLSize.init(width: 1, height: 1, depth: 1))
    compute_encoder.endEncoding()

    encoder.commit()
    encoder.waitUntilCompleted()
    capture_manager.stopCapture()
} catch {
    print(error)
    exit(1)
}
And the compute kernel:
#include <metal_stdlib>
#include <simd/simd.h>

using namespace metal;

struct Argument {
    constant uint32_t *ptr [[id(0)]];
};

kernel void main0(constant Argument *bufferArray [[buffer(0)]]) {
    constant uint32_t *ptr = bufferArray[0].ptr;
    uint32_t y = *ptr;
}
If anyone has any ideas, I'd greatly appreciate it!
It seems that Metal optimizes the kernel away (or something along those lines) since it only performs read operations.
Changing the kernel to write to the buffer makes everything work and show up properly in the Xcode debugger.
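If you make the kernel write through the pointer, remember to widen the declared usage on the Swift side as well (a one-line change to the code above):
compute_encoder.useResource(buffer, usage: [.read, .write])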
I'm having trouble converting a linear PCM buffer to a compressed AAC ELD (Enhanced Low Delay) buffer.
I got some working code for the conversion into the iLBC format from this question:
AVAudioCompressedBuffer to UInt8 array and vice versa
That approach worked fine.
I changed the format input to this:
let packetCapacity = 8
let maximumPacketSize = 96
lazy var capacity = packetCapacity * maximumPacketSize // 768
let convertedSampleRate: Double = 16000
lazy var aaceldFormat: AVAudioFormat = {
    var descriptor = AudioStreamBasicDescription(mSampleRate: convertedSampleRate, mFormatID: kAudioFormatMPEG4AAC_ELD, mFormatFlags: 0, mBytesPerPacket: 0, mFramesPerPacket: 0, mBytesPerFrame: 0, mChannelsPerFrame: 1, mBitsPerChannel: 0, mReserved: 0)
    return AVAudioFormat(streamDescription: &descriptor)!
}()
The conversion to a compressed buffer worked fine, and I was able to convert the buffer to a UInt8 array.
However, the conversion back to a PCM buffer didn't work. The input block for the conversion back to a buffer looks like this:
func convertToBuffer(uints: [UInt8], outcomeSampleRate: Double) -> AVAudioPCMBuffer? {
    // Convert to buffer
    let compressedBuffer = AVAudioCompressedBuffer(format: aaceldFormat, packetCapacity: AVAudioPacketCount(packetCapacity), maximumPacketSize: maximumPacketSize)
    compressedBuffer.byteLength = UInt32(capacity)
    compressedBuffer.packetCount = AVAudioPacketCount(packetCapacity)

    var compressedBytes = uints
    compressedBytes.withUnsafeMutableBufferPointer {
        compressedBuffer.data.copyMemory(from: $0.baseAddress!, byteCount: capacity)
    }

    guard let audioFormat = AVAudioFormat(
        commonFormat: AVAudioCommonFormat.pcmFormatFloat32,
        sampleRate: outcomeSampleRate,
        channels: 1,
        interleaved: false
    ) else { return nil }

    guard let uncompressor = getUncompressingConverter(outputFormat: audioFormat) else { return nil }

    var newBufferAvailable = true
    let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
        if newBufferAvailable {
            outStatus.pointee = .haveData
            newBufferAvailable = false
            return compressedBuffer
        } else {
            outStatus.pointee = .noDataNow
            return nil
        }
    }

    guard let uncompressedBuffer = AVAudioPCMBuffer(pcmFormat: audioFormat, frameCapacity: AVAudioFrameCount(audioFormat.sampleRate / 10)) else { return nil }

    var conversionError: NSError?
    uncompressor.convert(to: uncompressedBuffer, error: &conversionError, withInputFrom: inputBlock)
    if let err = conversionError {
        print("couldn't decompress compressed buffer", err)
    }
    return uncompressedBuffer
}
The error block after the convert method triggers and prints "too few bits left in input buffer". It also seems like the input block only gets called once.
I've tried different approaches, and this seems to be one of the most common outcomes. I'm also not sure whether the problem is in the initial conversion from the PCM buffer to the UInt8 array, although I do get a UInt8 array filled with 768 values every 0.1 seconds (sometimes the array contains a few zeros at the end, which doesn't happen with the iLBC format).
Questions:
1. Is the initial conversion from PCM buffer to UInt8 array done with the right approach? Are packetCapacity, capacity and maximumPacketSize valid? -> Again, this seems to work.
2. Am I missing something in the conversion back to a PCM buffer? And am I using the variables correctly?
3. Has anyone achieved this conversion without using C in the project?
EDIT: I also worked with the approach from this post:
Decode AAC to PCM format using AVAudioConverter Swift
It works fine with the AAC format, but not with AAC_LD or AAC_ELD.
I would need to set the AudioQueueBufferRef's mAudioData. I tried with copyMemory:
inBuffer.pointee.copyMemory(from: lastItemOfArray, byteCount: byteCount) // byteCount is 512
but it doesn't work.
The AudioQueueNewOutput() queue is properly set up for the Int16 PCM format.
Here is my code:
class CustomObject {
    var pcmInt16DataArray = [UnsafeMutableRawPointer]() // this contains pcmInt16 data
}

let callback: AudioQueueOutputCallback = { (
    inUserData: UnsafeMutableRawPointer?,
    inAQ: AudioQueueRef,
    inBuffer: AudioQueueBufferRef) in

    guard let aqp: CustomObject = inUserData?.bindMemory(to: CustomObject.self, capacity: 1).pointee else { return }
    var numBytes: UInt32 = inBuffer.pointee.mAudioDataBytesCapacity
    /// Set inBuffer.pointee.mAudioData to pcmInt16DataArray.popLast()
    /// How can I set the mAudioData here??
    inBuffer.pointee.mAudioDataByteSize = numBytes
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil)
}
From the Apple doc: https://developer.apple.com/documentation/audiotoolbox/audioqueuebuffer?language=objc
mAudioData:
The audio data owned by the audio queue buffer. The buffer address cannot be changed.
So I guess the solution would be to set a new value at the same address.
Does anybody know how to do it?
UPDATE:
The incoming audio is a PCM signal (little endian) sampled at 48 kHz. Here are my settings:
var dataFormat = AudioStreamBasicDescription()
dataFormat.mSampleRate = 48000
dataFormat.mFormatID = kAudioFormatLinearPCM
dataFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsNonInterleaved
dataFormat.mChannelsPerFrame = 1
dataFormat.mFramesPerPacket = 1
dataFormat.mBitsPerChannel = 16
dataFormat.mBytesPerFrame = 2
dataFormat.mBytesPerPacket = 2
And I am collecting the incoming data into:
var pcmData = [UnsafeMutableRawPointer]()
You're close!
Try this:
inBuffer.pointee.mAudioData.copyMemory(from: lastItemOfArray, byteCount: Int(numBytes))
or this:
memcpy(inBuffer.pointee.mAudioData, lastItemOfArray, Int(numBytes))
Audio Queue Services was tough enough to work with when it was pure C. Now that we have to do so much bridging to get the API to work with Swift, it's a real pain. If you have the option, try out AVAudioEngine.
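For a taste of that, here's a rough AVAudioEngine sketch (not a drop-in replacement: mixer inputs generally want deinterleaved Float32, so your Int16 samples would need converting first, and pcmBuffer here is an assumed, already-filled AVAudioPCMBuffer):
import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let format = AVAudioFormat(standardFormatWithSampleRate: 48000, channels: 1)!
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: format)
try! engine.start()
player.scheduleBuffer(pcmBuffer) {
    // Schedule the next buffer here.
}
player.play()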
A few other things to check:
Make sure your AudioQueue has the same format that you've defined in your AudioStreamBasicDescription.
var queue: AudioQueueRef?
// assumes userData has already been initialized and configured
AudioQueueNewOutput(&dataFormat, callBack, &userData, nil, nil, 0, &queue)
Confirm you have allocated and primed the queue's buffers.
let numBuffers = 3
// using forced optionals here for brevity
for _ in 0..<numBuffers {
    var buffer: AudioQueueBufferRef?
    if AudioQueueAllocateBuffer(queue!, userData.bufferByteSize, &buffer) == noErr {
        userData.mBuffers.append(buffer!)
        callBack(inUserData: &userData, inAQ: queue!, inBuffer: buffer!)
    }
}
Consider making your callback a function.
func callBack(inUserData: UnsafeMutableRawPointer?, inAQ: AudioQueueRef, inBuffer: AudioQueueBufferRef) {
    let numBytes: UInt32 = inBuffer.pointee.mAudioDataBytesCapacity
    memcpy(inBuffer.pointee.mAudioData, pcmData, Int(numBytes))
    inBuffer.pointee.mAudioDataByteSize = numBytes
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil)
}
Also, see if you can get some basic PCM data to play through your audio queue before attempting to bring in the server side data.
var pcmData: [Int16] = []
for _ in 0..<frameCount {
    let element = Int16.random(in: Int16.min...Int16.max) // noise
    pcmData.append(element)
}
Before I start, I would like to apologise if I say something crazy.
I am working on an app that wraps a C library. Among other things, it shares idArrays.
The part that decodes an idArray was given to me:
func decodeArrayID(aArray: UnsafeMutablePointer<CChar>, aTokenLen: UInt32) -> ([UInt32], String) {
    let arrayCount = Int(aTokenLen / 4)
    var idArrayTemp = [UInt32]()
    var idArrayStringTemp = ""
    for i in 0..<arrayCount {
        let idValue = decodeArrayIDItem(index: i, array: aArray)
        idArrayTemp.append(idValue)
        idArrayStringTemp += "\(idValue) "
    }
    return (idArrayTemp, idArrayStringTemp)
}

func decodeArrayIDItem(index: Int, array: UnsafeMutablePointer<CChar>) -> UInt32 {
    var value: UInt32 = UInt32(array[index * 4]) & 0xFF
    value <<= 8
    value |= UInt32(array[index * 4 + 1]) & 0xFF
    value <<= 8
    value |= UInt32(array[index * 4 + 2]) & 0xFF
    value <<= 8
    value |= UInt32(array[index * 4 + 3]) & 0xFF
    return value
}
As you can see, the idArray comes through an UnsafeMutablePointer<CChar> (i.e. a char*).
Now I am working on the encoding part. The function takes an array of UInt32 values, converts it into a byte array, and converts that into a string for sending through the library.
So far I have the following code, but it doesn't work:
func encodeIDArray(idArray: [UInt32]) -> String {
    var aIDArray8 = [UInt8]()
    for var value in idArray {
        let count = MemoryLayout<UInt32>.size
        let bytePtr = withUnsafePointer(to: &value) {
            $0.withMemoryRebound(to: UInt8.self, capacity: count) {
                UnsafeBufferPointer(start: $0, count: count)
            }
        }
        aIDArray8 += Array(bytePtr)
    }
    let stringTest = String(data: Data(aIDArray8), encoding: .utf8)
    return stringTest!
}
A test for the input [1, 2] returns "\u{01}\0\0\0\u{02}\0\0\0", and something tells me that's not quite right...
Thank you
Edited
The C functions are:
DllExport void STDCALL DvProviderAvOpenhomeOrgPlaylist1EnableActionIdArray(THandle aProvider, CallbackPlaylist1IdArray aCallback, void* aPtr);
where CallbackPlaylist1IdArray is
typedef int32_t (STDCALL *CallbackPlaylist1IdArray)(void* aPtr, IDvInvocationC* aInvocation, void* aInvocationPtr, uint32_t* aToken, char** aArray, uint32_t* aArrayLen);
and aArray is the parameter that receives the byte array.
I believe you are on the right track.
func encodeIDArray(idArray: [UInt32]) -> String {
    var aIDArray8 = [UInt8]()
    for var value in idArray {
        let count = MemoryLayout<UInt32>.size
        let bytePtr = withUnsafePointer(to: &value) {
            $0.withMemoryRebound(to: UInt8.self, capacity: count) { v in
                // Just change it so you don't return the pointer itself, but the result of the rebound closure
                UnsafeBufferPointer(start: v, count: count)
            }
        }
        aIDArray8 += Array(bytePtr)
    }
    let stringTest = String(data: Data(aIDArray8), encoding: .utf8)
    return stringTest!
}
Change your test to some valid values in the ASCII table, like this:
encodeIDArray(idArray: [65, 66, 67]) // "ABC"
I hope it helps you... Good luck, and let me know if it works in your case.
You can copy the [UInt32] array values to the allocated memory without creating an intermediate [Int8] array, and use the bigEndian property instead of bit shifting and masking:
func writeCArrayValue(from pointer: UnsafeMutablePointer<UnsafeMutablePointer<Int8>?>?,
                      withUInt32Values array: [UInt32]) {
    pointer?.pointee = UnsafeMutablePointer<Int8>.allocate(capacity: MemoryLayout<UInt32>.size * array.count)
    pointer?.pointee?.withMemoryRebound(to: UInt32.self, capacity: array.count) {
        for i in 0..<array.count {
            $0[i] = array[i].bigEndian
        }
    }
}
In the same way you can do the decoding:
func decodeArrayID(aArray: UnsafeMutablePointer<CChar>, aTokenLen: UInt32) -> [UInt32] {
    let arrayCount = Int(aTokenLen / 4)
    var idArrayTemp = [UInt32]()
    aArray.withMemoryRebound(to: UInt32.self, capacity: arrayCount) {
        for i in 0..<arrayCount {
            idArrayTemp.append(UInt32(bigEndian: $0[i]))
        }
    }
    return idArrayTemp
}
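A quick round trip of the two helpers above (a sketch: two UInt32 values occupy 8 bytes, so aTokenLen is 8):
var cArray: UnsafeMutablePointer<Int8>? = nil
writeCArrayValue(from: &cArray, withUInt32Values: [1, 2])
let decoded = decodeArrayID(aArray: cArray!, aTokenLen: 8)
print(decoded) // [1, 2]
cArray?.deallocate()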
You can't convert a binary buffer to a string and expect it to work. You should base64 encode your binary data. That IS a valid way to represent binary data as strings.
Consider the following code:
//Utility function that takes a typed pointer to a data buffer and converts it to an array of the desired type of object
func convert<T>(count: Int, data: UnsafePointer<T>) -> [T] {
    let buffer = UnsafeBufferPointer(start: data, count: count)
    return Array(buffer)
}

//Create an array of UInt32 values
let intArray: [UInt32] = Array<UInt32>(1...10)
print("source array = \(intArray)")

let arraySize = MemoryLayout<UInt32>.size * intArray.count

//Convert the array to a Data object
let data = Data(bytes: UnsafeRawPointer(intArray), count: arraySize)

//Convert the binary Data to base64
let base64String = data.base64EncodedString()
print("Array as base64 data = ", base64String)

if let newData = Data(base64Encoded: base64String) {
    newData.withUnsafeBytes { (bytes: UnsafePointer<UInt32>) -> Void in
        let newArray = convert(count: 10, data: bytes)
        print("After conversion, newArray = ", newArray)
    }
} else {
    fatalError("Failed to base-64 decode data!")
}
The output of that code is:
source array = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Array as base64 data = AQAAAAIAAAADAAAABAAAAAUAAAAGAAAABwAAAAgAAAAJAAAACgAAAA==
After conversion, newArray = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Program ended with exit code: 0
Although I really appreciate all the answers, I finally figured out what was happening, and I have to say that Duncan's answer was the closest to my problem.
Until now I had interpreted char** as a String. It turns out it can also be a pointer to an array (correct me if I am wrong!). Converting the array to a String gave a format the library didn't like, and it could not be decoded on the other end.
The way I ended up doing it is:
func encodeIDArray(idArray: [UInt32]) -> [Int8] {
    var aIDArray8 = [UInt8](repeating: 0, count: idArray.count * 4)
    for i in 0..<idArray.count {
        // Mask before converting so the UInt8 initializer can't trap on values > 255.
        aIDArray8[i * 4]     = UInt8((idArray[i] >> 24) & 0xff)
        aIDArray8[i * 4 + 1] = UInt8((idArray[i] >> 16) & 0xff)
        aIDArray8[i * 4 + 2] = UInt8((idArray[i] >> 8) & 0xff)
        aIDArray8[i * 4 + 3] = UInt8(idArray[i] & 0xff)
    }
    return aIDArray8.map { Int8(bitPattern: $0) }
}
and then I assign the value of the C variable in Swift like this:
let myArray = encodeIDArray(idArray:theArray)
writeCArrayValue(from: aArrayPointer, withValue: myArray)
func writeCArrayValue(from pointer: UnsafeMutablePointer<UnsafeMutablePointer<Int8>?>?, withValue array: [Int8]) {
    pointer?.pointee = UnsafeMutablePointer<Int8>.allocate(capacity: array.count)
    memcpy(pointer?.pointee, array, array.count)
}
aArrayPointer is the char** used by the library.