AudioKit 4.2.3 Crash Microphone Frequency Analysis Swift 4.1

I just updated to the latest AudioKit version 4.2.3 and Swift 4.1, and I'm getting a crash at AudioKit.start() that I can't decipher. Please let me know if you need more of the error output.
AURemoteIO::IOThread (21): EXC_BAD_ACCESS (code=1, address=0x100900000)
FYI, I am also using AVAudioRecorder to record the microphone input to a file and playing it back with AVAudioPlayer later in the view controller. However, since I did not get this crash before updating, I don't believe those factors are responsible; it seems to be something with the tracker input.
import UIKit
import Speech
import AudioKit

class RecordVoiceViewController: UIViewController {
    var tracker: AKFrequencyTracker!
    var silence: AKBooster!
    var mic: AKMicrophone!

    let noteFrequencies = [16.35, 17.32, 18.35, 19.45, 20.6, 21.83, 23.12, 24.5, 25.96, 27.5, 29.14, 30.87]
    let noteNamesWithSharps = ["C", "C♯", "D", "D♯", "E", "F", "F♯", "G", "G♯", "A", "A♯", "B"]
    let noteNamesWithFlats = ["C", "D♭", "D", "E♭", "E", "F", "G♭", "G", "A♭", "A", "B♭", "B"]

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        AKSettings.audioInputEnabled = true
        mic = AKMicrophone()
        tracker = AKFrequencyTracker.init(mic, hopSize: 200, peakCount: 2000)
        silence = AKBooster(tracker, gain: 0)
    }

    func startAudioKit() {
        AudioKit.output = self.silence
        do {
            try AudioKit.start()
        } catch {
            AKLog("Something went wrong.")
        }
    }
}
What's interesting is that when I initialize the tracker without the hopSize and peakCount, like:
tracker = AKFrequencyTracker.init(mic)
it does not crash; however, it also doesn't return the correct frequency. I'm super thankful for any help. Thanks!

I've faced exactly the same issue, but finally found a temporary solution.
All you need to do is add an additional node between the AKMicrophone and the AKFrequencyTracker; in my case it was an AKHighPassFilter.
Here's the code that works properly:
let microphone = AKMicrophone()
let filter = AKHighPassFilter(microphone, cutoffFrequency: 200, resonance: 0)
let tracker = AKFrequencyTracker(filter)
let silence = AKBooster(tracker, gain: 0)
AKSettings.audioInputEnabled = true
AudioKit.output = silence
try! AudioKit.start()
Hope this helps, good luck!
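For reference, this is roughly how the workaround might slot into the asker's existing viewWillAppear, assuming the same stored properties plus an extra filter property (the property name and the cutoff/resonance values are just placeholders to adapt):
var filter: AKHighPassFilter!

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    AKSettings.audioInputEnabled = true
    mic = AKMicrophone()
    // Insert a node between the microphone and the tracker, as in the workaround above.
    filter = AKHighPassFilter(mic, cutoffFrequency: 200, resonance: 0)
    tracker = AKFrequencyTracker(filter)
    silence = AKBooster(tracker, gain: 0)
}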

Related

AVAudioSourceNode not working in swift playgrounds

I am trying to run a simple oscillator using the new AVAudioSourceNode Apple introduced in the latest release. The code is excerpted from the example code Apple released, available here.
However, whenever I run this in a Swift playground, the callback is fired but no sound is emitted. When I move this code to an iOS app, it works fine. Any idea what's happening? AFAIK other audio nodes work well in Playgrounds, so I'm not sure why this specific one fails. See the code below, run using Xcode 11 on macOS 10.15.
import AVFoundation
import PlaygroundSupport

let audioEngine = AVAudioEngine()
let mainMixerNode = audioEngine.mainMixerNode
let outputNode = audioEngine.outputNode
let format = outputNode.inputFormat(forBus: 0)

let incrementAmount = 1.0 / Float(format.sampleRate)
var time: Float = 0.0

func sineWave(time: Float) -> Float {
    return sin(2.0 * Float.pi * 440.0 * time)
}

let sourceNode = AVAudioSourceNode { (_, _, frameCount, audioBufferList) -> OSStatus in
    let bufferListPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frameIndex in 0..<Int(frameCount) {
        let sample = sineWave(time: time)
        time += incrementAmount
        for buffer in bufferListPointer {
            let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
            buf[frameIndex] = sample
        }
    }
    return noErr
}

audioEngine.attach(sourceNode)
audioEngine.connect(sourceNode, to: mainMixerNode, format: format)
audioEngine.connect(mainMixerNode, to: outputNode, format: nil)
mainMixerNode.outputVolume = 0.5

audioEngine.prepare()
do {
    try audioEngine.start()
} catch {
    print(error.localizedDescription)
}

PlaygroundPage.current.needsIndefiniteExecution = true
It seems that Playground result logging really hurts the performance of real-time processing blocks. I had the same problem, and then I moved the AVAudioSourceNode code to a different .swift file in the Sources folder, as suggested here.
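A minimal sketch of that restructuring, assuming a helper file placed in the playground's Sources folder (the file name, function name, and parameters are made up for illustration):
// Sources/SignalGenerator.swift: compiled as a module, so the render block
// is not subject to the playground's per-line result logging.
import AVFoundation

public func makeSineSourceNode(sampleRate: Double, frequency: Float = 440.0) -> AVAudioSourceNode {
    let increment = Float(1.0 / sampleRate)
    var time: Float = 0.0
    return AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
        let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
        for frame in 0..<Int(frameCount) {
            let sample = sin(2.0 * Float.pi * frequency * time)
            time += increment
            for buffer in buffers {
                let buf = UnsafeMutableBufferPointer<Float>(buffer)
                buf[frame] = sample
            }
        }
        return noErr
    }
}
The playground page then only attaches and connects the node, e.g. let sourceNode = makeSineSourceNode(sampleRate: format.sampleRate), keeping the per-sample loop out of the logged top-level code.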

DispatchGroup On Do, Try, Catch?

I'm working with the AudioKit framework and looking to use DispatchGroup to make a method work asynchronously. I'd like the player.load method to run only after the audioFile has been created; right now it's throwing an error ~50% of the time and I suspect it's due to timing. I've used DispatchGroup with success in other circumstances, but never in a do/try/catch. Is there a way to make this part of the function work with it? If not, is there a way to set it up with a closure? Thanks!
func createPlayer(fileName: String) -> AKPlayer {
    let player = AKPlayer()
    let audioFile: AKAudioFile
    player.mixer >>> mixer
    do {
        try audioFile = AKAudioFile(readFileName: "\(fileName).mp3")
        player.load(audioFile: audioFile)
        print("AudioFile \(fileName), \(audioFile) loaded")
    } catch {
        print("No audio file read, looking for \(fileName).mp3")
    }
    player.isLooping = false
    player.fade.inTime = 2 // in seconds
    player.fade.outTime = 2
    player.stopEnvelopeTime = 2
    player.completionHandler = {
        print("Completion")
        self.player.detach()
    }
    player.play()
    return player
}
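One way to sequence this without DispatchGroup is to wrap the file read in an asynchronous helper with a completion closure and only call load/play once the read has succeeded. A rough sketch under that assumption; the helper name, queue choice, and completion-based signature are all illustrative, not an AudioKit API:
func loadAudioFile(named fileName: String, completion: @escaping (AKAudioFile?) -> Void) {
    // Do the (potentially slow) file read off the main thread...
    DispatchQueue.global(qos: .userInitiated).async {
        let audioFile = try? AKAudioFile(readFileName: "\(fileName).mp3")
        // ...and hand the result back on the main thread.
        DispatchQueue.main.async {
            completion(audioFile)
        }
    }
}

func createPlayer(fileName: String, completion: @escaping (AKPlayer) -> Void) {
    let player = AKPlayer()
    player.mixer >>> mixer // same routing as in the question
    loadAudioFile(named: fileName) { audioFile in
        guard let audioFile = audioFile else {
            print("No audio file read, looking for \(fileName).mp3")
            return
        }
        player.load(audioFile: audioFile) // only runs after the read has finished
        player.play()
        completion(player)
    }
}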

Crossfade Loop in AudioKit

Does AudioKit provide any options to crossfade a loop? I've experimented with using an AKBooster to fade an AKSequencer in and out; however, I set a varying tempo/rate on the fly, which complicates when to start the fades. AKWaveTable provides a great looping option, but I'm not sure if there's any way to create a "soft" loop that crossfades from it. I'm looking to soft-loop the following example:
import UIKit
import AudioKit

class ViewController: UIViewController {
    let mixer = AKMixer()
    let wavePlayer = AKWaveTable(file: (try! AKAudioFile(readFileName: "sample.mp3")),
                                 startPoint: Sample(44100),
                                 endPoint: Sample(44100),
                                 rate: 1,
                                 volume: 1,
                                 maximumSamples: 0,
                                 completionHandler: {},
                                 loadCompletionHandler: {})

    func play() {
        wavePlayer.play()
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        wavePlayer >>> mixer
        AudioKit.output = mixer
        wavePlayer.loopEnabled = true
        wavePlayer.play(from: Sample(44100))
        do {
            try AudioKit.start()
        } catch {
            AKLog("AudioKit did not start!")
        }
        play()
    }
}
Thanks!
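One possible direction, not a confirmed AKWaveTable feature but a sketch to verify: AKPlayer exposes built-in fades and looping, so if switching players is acceptable, something like the following might give a soft loop. Whether the fades are re-applied on every loop pass depends on the AudioKit version, so treat that as an assumption:
import AudioKit

// Sketch: AKPlayer instead of AKWaveTable, leaning on its built-in fades.
let file = try! AKAudioFile(readFileName: "sample.mp3")
let player = AKPlayer(audioFile: file)
player.isLooping = true
player.buffering = .always // assumption: buffered looping keeps the loop seamless
player.fade.inTime = 0.5 // seconds
player.fade.outTime = 0.5

AudioKit.output = player
do {
    try AudioKit.start()
} catch {
    AKLog("AudioKit did not start!")
}
player.play()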

Swift - Recorded Video is Mirrored on Front Camera - How to flip?

I'm trying to mirror the recorded video from a capture session. The video preview for the front-facing camera shows a mirrored version; however, when I go to save the file and play it back, the captured video is actually mirrored. I'm using Apple's AVCam demo as a reference and can't seem to figure this out. Please help.
I've tried creating an AVCaptureConnection and setting its .isVideoMirrored property. However, I get this error:
cannot be added to the session because the source and destination media types are incompatible'
I would have thought mirroring the video would be much easier. I think I may be creating my connection incorrectly. The code below doesn't actually add the connection when I call the .canAddConnection check.
var captureSession: AVCaptureSession!
var defaultVideoDevice: AVCaptureDevice?
var movieOutput: AVCaptureMovieFileOutput?
var avCaptureConnection: AVCaptureConnection!

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    captureSession = AVCaptureSession()

    // Set up the camera
    if let dualCameraDevice = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .front) {
        defaultVideoDevice = dualCameraDevice
    } else if let frontCameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front) {
        // If the dual camera isn't available, default to the front wide-angle camera.
        defaultVideoDevice = frontCameraDevice
    }

    guard let videoDevice = defaultVideoDevice else {
        print("Default video device is unavailable.")
        // setupResult = .configurationFailed
        captureSession.commitConfiguration()
        return
    }

    let videoDeviceInput = try! AVCaptureDeviceInput(device: videoDevice)
    if captureSession.canAddInput(videoDeviceInput) {
        captureSession.addInput(videoDeviceInput)
    }

    let movieOutput = AVCaptureMovieFileOutput()
    // Video input ports for the AVCaptureConnection
    let videoInput: [AVCaptureInput.Port] = videoDeviceInput.ports

    if captureSession.canAddOutput(movieOutput) {
        captureSession.beginConfiguration()
        captureSession.addOutput(movieOutput)
        captureSession.sessionPreset = .medium
Then I try to set up the AVCaptureConnection and set the parameters for mirroring. Please tell me if there is an easier way to mirror the output / playback.
        avCaptureConnection = AVCaptureConnection(inputPorts: videoInput, output: movieOutput)
        avCaptureConnection.isEnabled = true
        // Mirror the capture connection?
        avCaptureConnection.automaticallyAdjustsVideoMirroring = false
        avCaptureConnection.isVideoMirrored = false
        // Check if we can add a connection
        if captureSession.canAddConnection(avCaptureConnection) {
            // Add the connection
            captureSession.addConnection(avCaptureConnection)
        }
        captureSession.commitConfiguration()
        self.movieOutput = movieOutput
        setupLivePreview()
    }
}
Somewhere else in the code, connected to an IBAction, I start the recording:
// Start recording video to a temporary file.
let outputFileName = NSUUID().uuidString
let outputFilePath = (NSTemporaryDirectory() as NSString).appendingPathComponent((outputFileName as NSString).appendingPathExtension("mov")!)
print("Recording in tap function")
movieOutput.startRecording(to: URL(fileURLWithPath: outputFilePath), recordingDelegate: self)
I think I'm using AVCaptureConnection incorrectly, especially because of the error stating that the media types are incompatible. If there is a proper way to implement this, please let me know. I'm also open to suggestions for an easier way to mirror the playback. Thank you!
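Not a confirmed fix for this exact setup, but a common approach is to skip building an AVCaptureConnection by hand and instead configure the connection the session creates automatically when the movie output is added. A sketch, assuming the same captureSession and movieOutput as above:
captureSession.beginConfiguration()
if captureSession.canAddOutput(movieOutput) {
    // addOutput(_:) forms the input/output connections automatically.
    captureSession.addOutput(movieOutput)
    captureSession.sessionPreset = .medium
}

// Configure the auto-created video connection instead of adding a manual one.
if let connection = movieOutput.connection(with: .video), connection.isVideoMirroringSupported {
    connection.automaticallyAdjustsVideoMirroring = false
    connection.isVideoMirrored = true // record the same mirrored view the preview shows
}
captureSession.commitConfiguration()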

How to capture depth data from camera in iOS 11 and Swift 4?

I'm trying to get depth data from the camera in iOS 11 with AVDepthData, though when I set up a photoOutput with the AVCapturePhotoCaptureDelegate, photo.depthData is nil.
So I tried setting up the AVCaptureDepthDataOutputDelegate with an AVCaptureDepthDataOutput, though I don't know how to capture the depth photo.
Has anyone ever gotten an image from AVDepthData?
Edit:
Here's the code I tried:
// delegates: AVCapturePhotoCaptureDelegate & AVCaptureDepthDataOutputDelegate

@IBOutlet var image_view: UIImageView!
@IBOutlet var capture_button: UIButton!

var captureSession: AVCaptureSession?
var sessionOutput: AVCapturePhotoOutput?
var depthOutput: AVCaptureDepthDataOutput?
var previewLayer: AVCaptureVideoPreviewLayer?

@IBAction func capture(_ sender: Any) {
    self.sessionOutput?.capturePhoto(with: AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg]), delegate: self)
}

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    self.previewLayer?.removeFromSuperlayer()
    self.image_view.image = UIImage(data: photo.fileDataRepresentation()!)
    let depth_map = photo.depthData?.depthDataMap
    print("depth_map:", depth_map) // is nil
}

func depthDataOutput(_ output: AVCaptureDepthDataOutput, didOutput depthData: AVDepthData, timestamp: CMTime, connection: AVCaptureConnection) {
    print("depth data") // never called
}

override func viewDidLoad() {
    super.viewDidLoad()

    self.captureSession = AVCaptureSession()
    self.captureSession?.sessionPreset = .photo

    self.sessionOutput = AVCapturePhotoOutput()
    self.depthOutput = AVCaptureDepthDataOutput()
    self.depthOutput?.setDelegate(self, callbackQueue: DispatchQueue(label: "depth queue"))

    do {
        let device = AVCaptureDevice.default(for: .video)
        let input = try AVCaptureDeviceInput(device: device!)
        if (self.captureSession?.canAddInput(input))! {
            self.captureSession?.addInput(input)
            if (self.captureSession?.canAddOutput(self.sessionOutput!))! {
                self.captureSession?.addOutput(self.sessionOutput!)
                if (self.captureSession?.canAddOutput(self.depthOutput!))! {
                    self.captureSession?.addOutput(self.depthOutput!)
                    self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
                    self.previewLayer?.frame = self.image_view.bounds
                    self.previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
                    self.previewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
                    self.image_view.layer.addSublayer(self.previewLayer!)
                }
            }
        }
    } catch {}

    self.captureSession?.startRunning()
}
I'm trying two things: one where the depth data is nil, and one where the depth delegate method is never called.
Does anyone know what I'm missing?
First, you need to use the dual camera, otherwise you won't get any depth data.
let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back)
And keep a reference to your queue:
let dataOutputQueue = DispatchQueue(label: "data queue", qos: .userInitiated, attributes: [], autoreleaseFrequency: .workItem)
You'll also probably want to synchronize the video and depth data
var outputSynchronizer: AVCaptureDataOutputSynchronizer?
Then you can synchronize the two outputs in your viewDidLoad() method like this:
if sessionOutput?.isDepthDataDeliverySupported {
sessionOutput?.isDepthDataDeliveryEnabled = true
depthDataOutput?.connection(with: .depthData)!.isEnabled = true
depthDataOutput?.isFilteringEnabled = true
outputSynchronizer = AVCaptureDataOutputSynchronizer(dataOutputs: [sessionOutput!, depthDataOutput!])
outputSynchronizer!.setDelegate(self, queue: self.dataOutputQueue)
}
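To actually receive the synchronized pairs, the class also has to conform to AVCaptureDataOutputSynchronizerDelegate and implement its callback. A minimal sketch, assuming the same depthDataOutput property as above (the class name is a placeholder):
extension YourViewController: AVCaptureDataOutputSynchronizerDelegate {
    func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                                didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
        // Pull the depth sample (if any) that belongs to this synchronized group.
        guard let syncedDepth = synchronizedDataCollection.synchronizedData(for: depthDataOutput!) as? AVCaptureSynchronizedDepthData,
              !syncedDepth.depthDataWasDropped else {
            return
        }
        let depthData = syncedDepth.depthData // AVDepthData for this frame
        print("Got depth data at \(syncedDepth.timestamp): \(depthData)")
    }
}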
I would recommend watching WWDC session 507 - they also provide a full sample app that does exactly what you want.
https://developer.apple.com/videos/play/wwdc2017/507/
To give more details on @klinger's answer, here is what you need to do to get the depth data for each pixel. I wrote some comments, hope it helps!
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    // ## Convert Disparity to Depth ##
    guard let depthData = photo.depthData?.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32) else { return }
    let depthDataMap = depthData.depthDataMap // AVDepthData -> CVPixelBuffer

    // ## Data Analysis ##

    // Useful data
    let width = CVPixelBufferGetWidth(depthDataMap) // 768 on an iPhone 7+
    let height = CVPixelBufferGetHeight(depthDataMap) // 576 on an iPhone 7+
    CVPixelBufferLockBaseAddress(depthDataMap, CVPixelBufferLockFlags(rawValue: 0))

    // Convert the base address to a safe pointer of the appropriate type
    let floatBuffer = unsafeBitCast(CVPixelBufferGetBaseAddress(depthDataMap), to: UnsafeMutablePointer<Float32>.self)

    // Read the data (returns a value of type Float)
    // Valid coordinates: x in 0..<width, y in 0..<height (this indexing assumes tightly packed rows)
    let x = 0, y = 0 // the pixel to sample
    let distanceAtXYPoint = floatBuffer[y * width + x]
    print("Depth at (\(x), \(y)) of \(width)x\(height) map: \(distanceAtXYPoint)")

    CVPixelBufferUnlockBaseAddress(depthDataMap, CVPixelBufferLockFlags(rawValue: 0))
}
There are two ways to do this, and you are trying to do both at once:
Capture depth data along with the image. This is done by using the photo.depthData object from photoOutput(_:didFinishProcessingPhoto:error:). I explain why this did not work for you below.
Use a AVCaptureDepthDataOutput and implement depthDataOutput(_:didOutput:timestamp:connection:). I am not sure why this did not work for you, but implementing depthDataOutput(_:didOutput:timestamp:connection:) might help you figure out why.
I think that #1 is a better option, because it pairs the depth data with the image. Here's how you would do that:
@IBAction func capture(_ sender: Any) {
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
    settings.isDepthDataDeliveryEnabled = true
    self.sessionOutput?.capturePhoto(with: settings, delegate: self)
}

// ...

override func viewDidLoad() {
    // ...
    self.sessionOutput = AVCapturePhotoOutput()
    self.sessionOutput?.isDepthDataDeliveryEnabled = true
    // ...
}
Then, depth_map shouldn't be nil. Make sure to read both this and this (separate but similar pages) for more information about obtaining depth data.
For #2, I'm not quite sure why depthDataOutput(_:didOutput:timestamp:connection:) isn't being called, but you should implement depthDataOutput(_:didDrop:timestamp:connection:reason:) to see if depth data is being dropped for some reason.
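For reference, a minimal sketch of that drop callback; the signature is from AVCaptureDepthDataOutputDelegate and the logging is just illustrative:
func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                     didDrop depthData: AVDepthData,
                     timestamp: CMTime,
                     connection: AVCaptureConnection,
                     reason: AVCaptureOutput.DataDroppedReason) {
    // If this fires, depth frames are being produced but discarded,
    // e.g. because the callback queue is too slow (.lateData) or buffers ran out (.outOfBuffers).
    print("Dropped depth data at \(timestamp), reason: \(reason.rawValue)")
}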
The way you initialize your capture device is not right.
You should use the dual camera mode.
In Objective-C it looks like this:
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInDualCamera mediaType:AVMediaTypeVideo position:AVCaptureDevicePositionBack];