Getting RGBA values for all pixels of a CGImage in Swift

I am trying to create a real-time video processing app in which I need to get the RGBA values of all pixels for each frame, process them with an external library, and display the result. The way I am currently reading the RGBA value of each pixel is too slow, and I was wondering whether there is a faster way, perhaps using vImage. This is my current code, showing how I get all the pixels of the current frame:
guard let cgImage = context.makeImage() else {
    return nil
}
guard let data = cgImage.dataProvider?.data,
      let bytes = CFDataGetBytePtr(data) else {
    fatalError("Couldn't access image data")
}
assert(cgImage.colorSpace?.model == .rgb)
let bytesPerPixel = cgImage.bitsPerPixel / cgImage.bitsPerComponent
gp.async {
    for y in 0 ..< cgImage.height {
        for x in 0 ..< cgImage.width {
            let offset = (y * cgImage.bytesPerRow) + (x * bytesPerPixel)
            let components = (r: bytes[offset], g: bytes[offset + 1], b: bytes[offset + 2])
            print("[x:\(x), y:\(y)] \(components)")
        }
        print("---")
    }
}
This is my version using vImage, but it has a memory leak and I cannot access the pixels:
guard
    let format = vImage_CGImageFormat(cgImage: cgImage),
    var buffer = try? vImage_Buffer(cgImage: cgImage,
                                    format: format) else {
    exit(-1)
}
let rowStride = buffer.rowBytes / MemoryLayout<Pixel_8>.stride / format.componentCount
do {
    let componentCount = format.componentCount
    var argbSourcePlanarBuffers: [vImage_Buffer] = (0 ..< componentCount).map { _ in
        guard let buffer1 = try? vImage_Buffer(width: Int(buffer.width),
                                               height: Int(buffer.height),
                                               bitsPerPixel: format.bitsPerComponent) else {
            fatalError("Error creating source buffers.")
        }
        return buffer1
    }
    vImageConvert_ARGB8888toPlanar8(&buffer,
                                    &argbSourcePlanarBuffers[0],
                                    &argbSourcePlanarBuffers[1],
                                    &argbSourcePlanarBuffers[2],
                                    &argbSourcePlanarBuffers[3],
                                    vImage_Flags(kvImageNoFlags))
    let n = rowStride * Int(argbSourcePlanarBuffers[1].height) * format.componentCount
    let start = buffer.data.assumingMemoryBound(to: Pixel_8.self)
    var ptr = UnsafeBufferPointer(start: start, count: n)
    print(Array(argbSourcePlanarBuffers)[1]) // prints a buffer description, not the pixel values
    buffer.free()
}

You can access the underlying pixels in a vImage buffer to do this.
For example, given an image named cgImage, use the following code to populate a vImage buffer:
guard
    let format = vImage_CGImageFormat(cgImage: cgImage),
    let buffer = try? vImage_Buffer(cgImage: cgImage,
                                    format: format) else {
    exit(-1)
}
let rowStride = buffer.rowBytes / MemoryLayout<Pixel_8>.stride / format.componentCount
Note that a vImage buffer's data may be wider than the image (see: https://developer.apple.com/documentation/accelerate/finding_the_sharpest_image_in_a_sequence_of_captured_images), which is why I've added rowStride.
To access the pixels as a single buffer of interleaved values, use:
do {
    let n = rowStride * Int(buffer.height) * format.componentCount
    let start = buffer.data.assumingMemoryBound(to: Pixel_8.self)
    let ptr = UnsafeBufferPointer(start: start, count: n)
    print(Array(ptr)[0 ... 15]) // prints the first 16 interleaved values
}
To access the pixels as a buffer of Pixel_8888 values, use the following (make sure that format.componentCount is 4):
do {
    let n = rowStride * Int(buffer.height)
    let start = buffer.data.assumingMemoryBound(to: Pixel_8888.self)
    let ptr = UnsafeBufferPointer(start: start, count: n)
    print(Array(ptr)[0 ... 3]) // prints the first 4 pixels
}
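To read a single pixel at (x, y) from the interleaved buffer, you can combine rowStride with the component count (a sketch, assuming a 4-component, 8-bit format; the component order depends on the image's bitmap info):
// A sketch: random access to the pixel at (x, y); `buffer`, `format` and
// `rowStride` are the values computed above.
let x = 100, y = 100 // an arbitrary pixel coordinate, assumed to be in range
let componentCount = format.componentCount
let pixels = buffer.data.assumingMemoryBound(to: Pixel_8.self)
let offset = (y * rowStride + x) * componentCount
let pixel = (pixels[offset], pixels[offset + 1], pixels[offset + 2], pixels[offset + 3])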

This is the slowest way to do it. A faster way is with a custom Core Image filter.
Faster than that is to write your own OpenGL shader (or rather, its equivalent in Metal for current devices). I've written OpenGL shaders, but have not worked with Metal yet.
Both allow you to write graphics code that runs directly on the GPU.
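For a rough idea of the Core Image route, something like the following keeps the per-pixel work on the GPU (a sketch only; the built-in CIColorInvert filter stands in for your custom processing):
import CoreImage

let ciContext = CIContext() // GPU-backed by default; create once and reuse

func process(_ cgImage: CGImage) -> CGImage? {
    let input = CIImage(cgImage: cgImage)
    // A built-in filter as a placeholder for custom per-pixel processing:
    guard let filter = CIFilter(name: "CIColorInvert") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    guard let output = filter.outputImage else { return nil }
    return ciContext.createCGImage(output, from: output.extent)
}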

Related

How to ignore cache when repeatedly reading from disk

I am writing an app that contains a small benchmark for I/O operations.
For write operations, I am using a FileHandle, which works pretty well. I am testing my old USB stick, and my calculation results in values of roughly 20 MB/s, which seems correct.
However, when reading, the values jump up to 8 GB/s. Although I would love to have a USB stick that fast... I think this has to do with some sort of caching.
Here is the code that I am using (some bits were removed):
guard let handle = FileHandle(forUpdatingAtPath: url.path) else { return }
let data = Data(repeating: 0, count: 2 * 1024 * 1024)
var startTime = Date.timestamp
// Write Test
while Date.timestamp - startTime < 5.0
{
    handle.write(data)
    try? handle.synchronize()
    // ...
}
// Go back to beginning of file.
try? handle.seek(toOffset: 0)
// Remove everything at the end of the file
try? handle.truncate(atOffset: blockSize)
startTime = Date.timestamp
// Read Test
while Date.timestamp - startTime < 5.0
{
    autoreleasepool
    {
        if let handle = try? FileHandle(forReadingFrom: fileUrl), let data = try? handle.readToEnd()
        {
            let count = UInt64(data.count)
            self.readData += count
            self.totalReadData += count
            handle.close()
        }
        // I also tried FileManager.default.contents(atPath: ) - same result
    }
}
I also tried this piece of code (it's either from Martin R. here on SO or from Quinn on the Apple forums):
let fd = open(fileUrl.path, O_RDONLY)
_ = fcntl(fd, F_NOCACHE, 1)
var buffer = Data(count: 1024 * 1024)
buffer.withUnsafeMutableBytes { ptr in
    let amount = read(fd, ptr.baseAddress, ptr.count)
    self.readData += UInt64(amount)
    self.totalReadData += UInt64(amount)
}
close(fd)
The code itself works... but there is still caching.
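Expanded into a full read loop, that approach looks roughly like this (a sketch; the 1 MB buffer size is arbitrary):
let fd = open(fileUrl.path, O_RDONLY)
guard fd >= 0 else { return }
_ = fcntl(fd, F_NOCACHE, 1) // ask the kernel not to cache reads on this descriptor
var buffer = Data(count: 1024 * 1024)
while true {
    let amount = buffer.withUnsafeMutableBytes { ptr in
        read(fd, ptr.baseAddress, ptr.count)
    }
    guard amount > 0 else { break } // 0 = EOF, -1 = error
    self.readData += UInt64(amount)
    self.totalReadData += UInt64(amount)
}
close(fd)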
TL;DR: How can I disable caching when writing to and reading from a file using Swift?

Why do I get popping noises from my Core Audio program?

I am trying to figure out how to use Apple's Core Audio APIs to record and play back linear PCM audio without any file I/O. (The recording side seems to work just fine.)
The code I have is pretty short, and it works somewhat. However, I am having trouble with identifying the source of clicks and pops in the output. I've been beating my head against this for many days with no success.
I have posted a git repo here, with a command-line program that shows where I'm at: https://github.com/maxharris9/AudioRecorderPlayerSwift/tree/main/AudioRecorderPlayerSwift
I put in a couple of functions to prepopulate the recording. The tone generator (makeWave) and noise generator (makeNoise) are just in here as debugging aids. I'm ultimately trying to identify the source of the messed up output when you play back a recording in audioData:
// makeWave(duration: 30.0, frequency: 441.0) // appends to `audioData`
// makeNoise(frameCount: Int(44100.0 * 30)) // appends to `audioData`
_ = Recorder() // appends to `audioData`
_ = Player() // reads from `audioData`
Here's the player code:
var lastIndexRead: Int = 0

func outputCallback(inUserData: UnsafeMutableRawPointer?, inAQ: AudioQueueRef, inBuffer: AudioQueueBufferRef) {
    guard let player = inUserData?.assumingMemoryBound(to: Player.PlayingState.self) else {
        print("missing user data in output callback")
        return
    }
    let sliceStart = lastIndexRead
    let sliceEnd = min(audioData.count, lastIndexRead + bufferByteSize - 1)
    print("slice start:", sliceStart, "slice end:", sliceEnd, "audioData.count", audioData.count)
    if sliceEnd >= audioData.count {
        player.pointee.running = false
        print("found end of audio data")
        return
    }
    let slice = Array(audioData[sliceStart ..< sliceEnd])
    let sliceCount = slice.count
    // doesn't fix it
    // audioData[sliceStart ..< sliceEnd].withUnsafeBytes {
    //     inBuffer.pointee.mAudioData.copyMemory(from: $0.baseAddress!, byteCount: Int(sliceCount))
    // }
    memcpy(inBuffer.pointee.mAudioData, slice, sliceCount)
    inBuffer.pointee.mAudioDataByteSize = UInt32(sliceCount)
    lastIndexRead += sliceCount + 1
    // enqueue the buffer, or re-enqueue it if it's a used one
    check(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil))
}
struct Player {
    struct PlayingState {
        var packetPosition: UInt32 = 0
        var running: Bool = false
        var start: Int = 0
        var end: Int = Int(bufferByteSize)
    }

    init() {
        var playingState: PlayingState = PlayingState()
        var queue: AudioQueueRef?
        // this doesn't help
        // check(AudioQueueNewOutput(&audioFormat, outputCallback, &playingState, CFRunLoopGetMain(), CFRunLoopMode.commonModes.rawValue, 0, &queue))
        check(AudioQueueNewOutput(&audioFormat, outputCallback, &playingState, nil, nil, 0, &queue))
        var buffers: [AudioQueueBufferRef?] = Array<AudioQueueBufferRef?>.init(repeating: nil, count: BUFFER_COUNT)
        print("Playing\n")
        playingState.running = true
        for i in 0 ..< BUFFER_COUNT {
            check(AudioQueueAllocateBuffer(queue!, UInt32(bufferByteSize), &buffers[i]))
            outputCallback(inUserData: &playingState, inAQ: queue!, inBuffer: buffers[i]!)
            if !playingState.running {
                break
            }
        }
        check(AudioQueueStart(queue!, nil))
        repeat {
            CFRunLoopRunInMode(CFRunLoopMode.defaultMode, BUFFER_DURATION, false)
        } while playingState.running
        // delay to ensure queue emits all buffered audio
        CFRunLoopRunInMode(CFRunLoopMode.defaultMode, BUFFER_DURATION * Double(BUFFER_COUNT + 1), false)
        check(AudioQueueStop(queue!, true))
        check(AudioQueueDispose(queue!, true))
    }
}
I captured the audio with Audio Hijack, and noticed that the jumps are indeed correlated with the size of the buffer.
Why is this happening, and what can I do to fix it?
I believe you were beginning to zero in on, or at least suspect, the cause of the popping you are hearing: it's caused by discontinuities in your waveform.
My initial hunch was that you were generating the buffers independently (i.e. assuming that each buffer starts at time=0), but I checked out your code and it wasn't that. I suspect some of the calculations in makeWave were at fault. To check this theory I replaced your makeWave with the following:
func makeWave(offset: Double, numSamples: Int, sampleRate: Float64, frequency: Float64, numChannels: Int) -> [Int16] {
    var data = [Int16]()
    for sample in 0..<numSamples / numChannels {
        // time in s
        let t = offset + Double(sample) / sampleRate
        let value = Double(Int16.max) * sin(2 * Double.pi * frequency * t)
        for _ in 0..<numChannels {
            data.append(Int16(value))
        }
    }
    return data
}
This function removes the double loop in the original, accepts an offset so it knows which part of the wave is being generated, and makes some changes to the sampling of the sine wave.
When Player is modified to use this function, you get a lovely steady tone. I'll add the changes to Player soon. I can't in good conscience show the quick and dirty mess it is now to the public.
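To illustrate, consecutive chunks keep the phase continuous when each call advances the offset by the duration already generated (hypothetical usage; the sample rate, channel count, and chunk size here are arbitrary):
// Hypothetical usage: advance `offset` by the duration of the samples generated
// so far, so the sine phase is continuous from one chunk to the next.
let sampleRate: Float64 = 44100
let numChannels = 2
let framesPerChunk = 4096
var offset = 0.0
for _ in 0 ..< 10 {
    audioData += makeWave(offset: offset, numSamples: framesPerChunk * numChannels,
                          sampleRate: sampleRate, frequency: 441.0, numChannels: numChannels)
    offset += Double(framesPerChunk) / sampleRate
}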
Based on your comments below, I refocused on your player. The issue was that the audio buffers expect byte counts, but the slice count and some other calculations were based on Int16 counts. The following version of outputCallback will fix it. Concentrate on the use of the new variable bytesPerChannel.
func outputCallback(inUserData: UnsafeMutableRawPointer?, inAQ: AudioQueueRef, inBuffer: AudioQueueBufferRef) {
    guard let player = inUserData?.assumingMemoryBound(to: Player.PlayingState.self) else {
        print("missing user data in output callback")
        return
    }
    let bytesPerChannel = MemoryLayout<Int16>.size
    let sliceStart = lastIndexRead
    let sliceEnd = min(audioData.count, lastIndexRead + bufferByteSize / bytesPerChannel)
    if sliceEnd >= audioData.count {
        player.pointee.running = false
        print("found end of audio data")
        return
    }
    let slice = Array(audioData[sliceStart ..< sliceEnd])
    let sliceCount = slice.count
    print("slice start:", sliceStart, "slice end:", sliceEnd, "audioData.count", audioData.count, "slice count:", sliceCount)
    // need to be careful to convert from counts of Ints to bytes
    memcpy(inBuffer.pointee.mAudioData, slice, sliceCount * bytesPerChannel)
    inBuffer.pointee.mAudioDataByteSize = UInt32(sliceCount * bytesPerChannel)
    lastIndexRead += sliceCount
    // enqueue the buffer, or re-enqueue it if it's a used one
    check(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil))
}
I did not look at the Recorder code, but you may want to check if the same sort of error crept in there.

How to get CVPixelBuffer handle from UnsafeMutablePointer<UInt8> in Swift?

I got a decoded AVFrame whose format shows 160/Videotoolbox_vld. After googling some articles (here) and looking through the FFmpeg source code (here, and here), it appears the CVBuffer handle should be at AVFrame.data[3]. But the CVBuffer I got seems invalid; any CVPixelBufferGetXXX() function returns 0 or nil.
If I use av_hwframe_transfer_data() the way FFmpeg's example hw_decode.c does, the sample can be downloaded from the HW buffer to a SW buffer. Its AVFrame.format will be nv12. After converting it via sws_scale to bgra, the sample can be shown in a view with the correct content.
I think the VideoToolbox decoded frame is OK, and the way I convert AVFrame.data[3] to a CVBuffer may be wrong. I have just learned how to access a C pointer in Swift, but I am not sure how to correctly read a resource handle (CVBuffer) from a pointer.
The following is how I try to extract the CVBuffer from the AVFrame:
var pFrameOpt: UnsafeMutablePointer<AVFrame>? = av_frame_alloc()
avcodec_receive_frame(..., pFrameOpt)
let data3: UnsafeMutablePointer<UInt8>? = pFrameOpt?.pointee.data.3
data3?.withMemoryRebound(to: CVBuffer.self, capacity: 1) { pCvBuf in
    let fW = pFrameOpt!.pointee.width    // print 3840
    let fH = pFrameOpt!.pointee.height   // print 2160
    let fFmt = pFrameOpt!.pointee.format // print 160
    let cvBuf: CVBuffer = pCvBuf.pointee
    let a1 = CVPixelBufferGetDataSize(cvBuf)               // print 0
    let a2 = CVPixelBufferGetPixelFormatType(cvBuf)        // print 0
    let a3 = CVPixelBufferGetWidth(cvBuf)                  // print 0
    let a4 = CVPixelBufferGetHeight(cvBuf)                 // print 0
    let a5 = CVPixelBufferGetBytesPerRow(cvBuf)            // print 0
    let a6 = CVPixelBufferGetBytesPerRowOfPlane(cvBuf, 0)  // print 0
    let a7 = CVPixelBufferGetWidthOfPlane(cvBuf, 0)        // print 0
    let a8 = CVPixelBufferGetHeightOfPlane(cvBuf, 0)       // print 0
    let a9 = CVPixelBufferGetPlaneCount(cvBuf)             // print 0
    let a10 = CVPixelBufferIsPlanar(cvBuf)                 // print false
    let a11 = CVPixelBufferGetIOSurface(cvBuf)             // print nil
    let a12 = CVPixelBufferGetBaseAddress(cvBuf)           // print nil
    let a13 = CVPixelBufferGetBaseAddressOfPlane(cvBuf, 0) // print nil
    let b1 = CVImageBufferGetCleanRect(cvBuf)              // print 0, 0, 0, 0
    let b2 = CVImageBufferGetColorSpace(cvBuf)             // print nil
    let b3 = CVImageBufferGetDisplaySize(cvBuf)            // print 0, 0, 0, 0
    let b4 = CVImageBufferGetEncodedSize(cvBuf)            // print 0, 0, 0, 0
    let b5 = CVImageBufferIsFlipped(cvBuf)                 // print false
    // bad exec
    var cvTextureOut: CVMetalTexture?
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, ..., cvBuf, nil, .bgra8Unorm, 3840, 2160, 0, ...)
}
CVBuffer is not a fixed size, so rebinding the memory won't work in this way. You need to do this:
Unmanaged<CVBuffer>.fromOpaque(data!).takeRetainedValue()
However, the bottom line is FFmpeg's VideoToolbox backend is not creating a CVPixelBuffer with kCVPixelBufferMetalCompatibilityKey set to true. You won't be able to call CVMetalTextureCacheCreateTextureFromImage(...) successfully in any case.
You could consider using a CVPixelBufferPool with appropriate settings (including kCVPixelBufferMetalCompatibilityKey set to true) and then using VTPixelTransferSession to quickly copy FFmpeg's pixel buffer to your own.
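A sketch of that approach (assuming the 4K BGRA frames from the question; sourcePixelBuffer stands for the buffer recovered with Unmanaged above, and error handling is elided):
import VideoToolbox

// Create the transfer session once and reuse it.
var session: VTPixelTransferSession?
VTPixelTransferSessionCreate(allocator: kCFAllocatorDefault, pixelTransferSessionOut: &session)

// A pool of Metal-compatible destination buffers.
let pixelBufferAttributes: [CFString: Any] = [
    kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
    kCVPixelBufferWidthKey: 3840,
    kCVPixelBufferHeightKey: 2160,
    kCVPixelBufferMetalCompatibilityKey: true
]
var pool: CVPixelBufferPool?
CVPixelBufferPoolCreate(kCFAllocatorDefault, nil, pixelBufferAttributes as CFDictionary, &pool)

var destination: CVPixelBuffer?
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool!, &destination)

// Copy (and convert, if needed) FFmpeg's buffer into the Metal-compatible one.
VTPixelTransferSessionTransferImage(session!, from: sourcePixelBuffer, to: destination!)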
It seems I wrongly cast the void* to a CVPixelBuffer* instead of casting the void* directly to a CVPixelBuffer. I could not find a Swift way to do such a C-style cast from a pointer to an object reference. (Using as! CVPixelBuffer causes a crash.)
So I created a function in C that casts a void* to a CVPixelBufferRef:
// util.h
#include <CoreVideo/CVPixelBuffer.h>
CVPixelBufferRef CastToCVPixelBuffer(void* p);

// util.c
CVPixelBufferRef CastToCVPixelBuffer(void* p)
{
    return (CVPixelBufferRef)p;
}

// BridgeHeader.h
#include "util.h"
Then I pass the UnsafeMutablePointer<UInt8> in and get the CVPixelBuffer handle out:
let pFrameOpt: UnsafeMutablePointer<AVFrame>? = ...
let data3: UnsafeMutablePointer<UInt8>? = pFrameOpt?.pointee.data.3
let cvBuf: CVBuffer = CastToCVPixelBuffer(data3).takeUnretainedValue()
let width = CVPixelBufferGetWidth(cvBuf) // print 3840
let height = CVPixelBufferGetHeight(cvBuf) // print 2160
Try this
let cvBuf: CVBuffer = Array(UnsafeMutableBufferPointer(start: data3, count: 3))
    .withUnsafeBufferPointer {
        $0.baseAddress!.withMemoryRebound(to: CVBuffer.self, capacity: 1) { $0 }
    }.pointee
or maybe even
let cvBuf: CVBuffer = unsafeBitcast(UnsafeMutableBufferPointer(start: data3, count: 3), to: CVBuffer.self)
/**
    @function   CVPixelBufferGetBaseAddressOfPlane
    @abstract   Returns the base address of the plane at planeIndex in the PixelBuffer.
    @discussion Retrieving the base address for a PixelBuffer requires that the buffer base address be locked
                via a successful call to CVPixelBufferLockBaseAddress. On OSX 10.10 and earlier, or iOS 8 and
                earlier, calling this function with a non-planar buffer will have undefined behavior.
    @param      pixelBuffer Target PixelBuffer.
    @param      planeIndex  Identifying the plane.
    @result     Base address of the plane, or NULL for non-planar CVPixelBufferRefs.
*/
@available(iOS 4.0, *)
public func CVPixelBufferGetBaseAddressOfPlane(_ pixelBuffer: CVPixelBuffer, _ planeIndex: Int) -> UnsafeMutableRawPointer?
Maybe you can try calling CVPixelBufferLockBaseAddress before using CVPixelBufferGetBaseAddressOfPlane.
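Something like this (a quick sketch):
CVPixelBufferLockBaseAddress(cvBuf, .readOnly)
let base = CVPixelBufferGetBaseAddressOfPlane(cvBuf, 0)
// ... read the plane's pixel data here ...
CVPixelBufferUnlockBaseAddress(cvBuf, .readOnly)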

AudioKit, exporting AVAudioPCMBuffer array to audio file with fade in/out

I'm capturing audio from AKLazyTap and rendering the accumulated [AVAudioPCMBuffer] to an audio file, in the background, while my app's audio is running. This works great, but I want to add fade in/out to clean up the result. I see the convenience extension for adding fades to a single AVAudioPCMBuffer, but I'm not sure how I'd do it on an array. I'd thought to concatenate the buffers, but there doesn't appear to be support for that. Does anyone know if that's currently possible? Basically it would require something similar to copy(from:readOffset:frames), but would need to have a write offset as well...
Or maybe there's an easier way?
UPDATE
Okay, after studying some related AK code, I tried directly copying the buffer data over to a single, long buffer, then applying the fade convenience function. But this gives me an empty (well, 4 KB) file. Is there some obvious error here that I'm just not seeing?
func renderBufferedAudioToFile(_ audioBuffers: [AVAudioPCMBuffer], withStartOffset startOffset: Int, endOffset: Int, fadeIn: Float64, fadeOut: Float64, atURL url: URL) {
    // strip off the file name
    let name = String(url.lastPathComponent.split(separator: ".")[0])
    var url = self.module.stateManager.audioCacheDirectory
    // UNCOMPRESSED
    url = url.appendingPathComponent("\(name).caf")
    let format = Conductor.sharedInstance.sourceMixer.avAudioNode.outputFormat(forBus: 0)
    var settings = format.settings
    settings["AVLinearPCMIsNonInterleaved"] = false
    // temp buffer for fades
    let totalFrameCapacity = audioBuffers.reduce(0) { $0 + $1.frameLength }
    guard let tempAudioBufferForFades = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: totalFrameCapacity) else {
        print("Failed to create fade buffer!")
        return
    }
    // write ring buffer to file.
    let file = try! AVAudioFile(forWriting: url, settings: settings)
    var writeOffset: AVAudioFrameCount = 0
    for i in 0 ..< audioBuffers.count {
        var buffer = audioBuffers[i]
        let channelCount = Int(buffer.format.channelCount)
        if i == 0 && startOffset != 0 {
            // copy a subset of samples in the buffer
            if let subset = buffer.copyFrom(startSample: AVAudioFrameCount(startOffset)) {
                buffer = subset
            }
        } else if i == audioBuffers.count - 1 && endOffset != 0 {
            if let subset = buffer.copyTo(count: AVAudioFrameCount(endOffset)) {
                buffer = subset
            }
        }
        // write samples into single, long buffer
        for i in 0 ..< buffer.frameLength {
            for n in 0 ..< channelCount {
                tempAudioBufferForFades.floatChannelData?[n][Int(i + writeOffset)] = (buffer.floatChannelData?[n][Int(i)])!
            }
        }
        print("buffer \(i), writeOffset = \(writeOffset)")
        writeOffset = writeOffset + buffer.frameLength
    }
    // update!
    tempAudioBufferForFades.frameLength = totalFrameCapacity
    if let bufferWithFades = tempAudioBufferForFades.fade(inTime: fadeIn, outTime: fadeOut) {
        try! file.write(from: bufferWithFades)
    }
}
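For reference, the inner per-sample loop could also be written as one memcpy per channel (a sketch, assuming deinterleaved float buffers of the same format):
// A sketch: bulk-copy each channel instead of copying sample by sample.
for n in 0 ..< channelCount {
    let src = buffer.floatChannelData![n]
    let dst = tempAudioBufferForFades.floatChannelData![n] + Int(writeOffset)
    memcpy(dst, src, Int(buffer.frameLength) * MemoryLayout<Float>.size)
}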

How to convert Data of Int16 audio samples to array of float audio samples

I'm currently working with audio samples.
I get them from AVAssetReader and have a CMSampleBuffer with something like this:
guard let sampleBuffer = readerOutput.copyNextSampleBuffer() else {
    guard reader.status == .completed else { return nil }
    // Completed
    // samples is an array of Int16
    let samples = sampleData.withUnsafeBytes {
        Array(UnsafeBufferPointer<Int16>(
            start: $0, count: sampleData.count / MemoryLayout<Int16>.size))
    }
    // The only way I found to convert [Int16] -> [Float]...
    return samples.map { Float($0) / Float(Int16.max) }
}
guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else {
    return nil
}
let length = CMBlockBufferGetDataLength(blockBuffer)
let sampleBytes = UnsafeMutablePointer<UInt8>.allocate(capacity: length)
CMBlockBufferCopyDataBytes(blockBuffer, 0, length, sampleBytes)
sampleData.append(sampleBytes, count: length)
As you can see, the only way I found to convert [Int16] -> [Float] is samples.map { Float($0) / Float(Int16.max) }, but doing this increases my processing time. Is there another way to cast a pointer of Int16 to a pointer of Float?
"Casting" or "rebinding" a pointer only changes the way how memory is
interpreted. You want to compute floating point values from integers,
the new values have a different memory representation (and also a different
size).
Therefore you somehow have to iterate over all input values
and compute the new values. What you can do is to omit the Array
creation:
let samples = sampleData.withUnsafeBytes {
    UnsafeBufferPointer<Int16>(start: $0, count: sampleData.count / MemoryLayout<Int16>.size)
}
return samples.map { Float($0) / Float(Int16.max) }
Another option would be to use the vDSP functions from the Accelerate framework:
import Accelerate
// ...
let numSamples = sampleData.count / MemoryLayout<Int16>.size
var factor = Float(Int16.max)
var floats: [Float] = Array(repeating: 0.0, count: numSamples)
// Int16 array to Float array:
sampleData.withUnsafeBytes {
    vDSP_vflt16($0, 1, &floats, 1, vDSP_Length(numSamples))
}
// Scaling:
vDSP_vsdiv(&floats, 1, &factor, &floats, 1, vDSP_Length(numSamples))
I don't know if that is faster; you'll have to check.
(Update: It is faster, as ColGraff demonstrated in his answer.)
An explicit loop is also much faster than using map:
let factor = Float(Int16.max)
let samples = sampleData.withUnsafeBytes {
    UnsafeBufferPointer<Int16>(start: $0, count: sampleData.count / MemoryLayout<Int16>.size)
}
var floats: [Float] = Array(repeating: 0.0, count: samples.count)
for i in 0..<samples.count {
    floats[i] = Float(samples[i]) / factor
}
return floats
An additional option in your case might be to use CMBlockBufferGetDataPointer() instead of CMBlockBufferCopyDataBytes() into allocated memory.
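That would look something like this (a sketch in the same Swift 3-era style as the question; the pointer is only valid while the block buffer is alive, and it assumes the block buffer's data is contiguous):
var totalLength = 0
var dataPointer: UnsafeMutablePointer<Int8>? = nil
var floats: [Float] = []
if CMBlockBufferGetDataPointer(blockBuffer, 0, nil, &totalLength, &dataPointer) == kCMBlockBufferNoErr,
   let dataPointer = dataPointer {
    let count = totalLength / MemoryLayout<Int16>.size
    dataPointer.withMemoryRebound(to: Int16.self, capacity: count) { samples in
        // Convert straight from the block buffer's memory; no intermediate copy.
        floats = (0 ..< count).map { Float(samples[$0]) / Float(Int16.max) }
    }
}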
You can do considerably better if you use the Accelerate Framework for the conversion:
import Accelerate

// Set up a random [Int16]
var randomInt = [Int16]()
randomInt.reserveCapacity(10000)
for _ in 0..<randomInt.capacity {
    let value = Int16(Int32(arc4random_uniform(UInt32(UInt16.max))) - Int32(UInt16.max / 2))
    randomInt.append(value)
}

// Time elapsed helper: https://stackoverflow.com/a/25022722/887210
func printTimeElapsedWhenRunningCode(title: String, operation: () -> ()) {
    let startTime = CFAbsoluteTimeGetCurrent()
    operation()
    let timeElapsed = CFAbsoluteTimeGetCurrent() - startTime
    print("Time elapsed for \(title): \(timeElapsed) s.")
}

// Testing
printTimeElapsedWhenRunningCode(title: "vDSP") {
    var randomFloat = [Float](repeating: 0, count: randomInt.capacity)
    vDSP_vflt16(randomInt, 1, &randomFloat, 1, vDSP_Length(randomInt.capacity))
}
printTimeElapsedWhenRunningCode(title: "map") {
    _ = randomInt.map { Float($0) }
}

// Results
//
// Time elapsed for vDSP: 0.000429034233093262 s.
// Time elapsed for map:  0.00233501195907593 s.
That makes it about 5 times faster.
(Edit: Added some changes suggested by Martin R)
@MartinR and @ColGraff gave really good answers, and thank you everybody for the fast replies.
However, I found an easier way to do it without any computation. AVAssetReaderAudioMixOutput requires an audio settings dictionary, in which we can set the key AVLinearPCMIsFloatKey: true. This way I read my data like this:
let samples = sampleData.withUnsafeBytes {
    UnsafeBufferPointer<Float>(start: $0,
                               count: sampleData.count / MemoryLayout<Float>.size)
}
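i.e. with output settings along these lines (a sketch; the bit depth and interleaving choices are assumptions, and audioTracks stands for the asset's audio tracks):
let outputSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatLinearPCM,
    AVLinearPCMBitDepthKey: 32,
    AVLinearPCMIsFloatKey: true,       // samples arrive as Float, no conversion needed
    AVLinearPCMIsBigEndianKey: false,
    AVLinearPCMIsNonInterleaved: false
]
let readerOutput = AVAssetReaderAudioMixOutput(audioTracks: audioTracks, audioSettings: outputSettings)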
for: Xcode 8.3.3 • Swift 3.1
extension Collection where Iterator.Element == Int16 {
    var floatArray: [Float] {
        return flatMap { Float($0) }
    }
}
usage:
let int16Array: [Int16] = [1, 2, 3, 4]
let floatArray = int16Array.floatArray