Cannot use Voice Isolation with AVAudioRecorder or AudioUnit - swift

I'm trying to record voice audio with either AVAudioRecorder or AUAudioUnit.
In both cases, after recording has started, calling AVCaptureDevice.showSystemUserInterface(.microphoneModes) and selecting Voice Isolation produces the following error:
"Voice Isolation and Wide Spectrum are currently unavailable"
TLDR: What do I need to allow the user to change to voice isolation mode?

I have an application that plays audio in real time, and the voice isolation mode is available after writing the following:
private let audioEngine = AVAudioEngine()

// ... (setup code omitted) ...

let audioInput = audioEngine.inputNode
audioInput.isVoiceProcessingBypassed = true
do {
    // Enabling voice processing on the input node is what makes the system
    // microphone modes (Voice Isolation, Wide Spectrum) selectable.
    try audioInput.setVoiceProcessingEnabled(true)
} catch {
    print("Could not enable voice processing \(error)")
    return
}
let audioFormat = audioEngine.inputNode.outputFormat(forBus: 0)
audioEngine.connect(audioInput, to: audioEngine.mainMixerNode, format: audioFormat)
Since this uses AVAudioEngine, I believe your objective can be achieved by simply changing the AVAudioEngine output destination.
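For example, here is a minimal, untested sketch (recordingURL and the buffer size are placeholders) that records the voice-processed input by tapping the node instead of using AVAudioRecorder:

let recordingURL = FileManager.default.temporaryDirectory.appendingPathComponent("recording.caf")
do {
    // Write whatever the (voice-processed) input node produces straight to a file.
    let outputFile = try AVAudioFile(forWriting: recordingURL, settings: audioFormat.settings)
    audioInput.installTap(onBus: 0, bufferSize: 4096, format: audioFormat) { buffer, _ in
        try? outputFile.write(from: buffer)
    }
    try audioEngine.start()
} catch {
    print("Could not start recording: \(error)")
}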

Related

Why does AVCaptureDevice.default return nil in swiftui?

I've been trying to create my own in-app camera, but it crashes when I try to set up the device.
func setUp() {
    do {
        self.session.beginConfiguration()
        let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .front)
        let input = try AVCaptureDeviceInput(device: device!)
        if self.session.canAddInput(input) {
            self.session.addInput(input)
        }
        if self.session.canAddOutput(self.output) {
            self.session.addOutput(self.output)
        }
        self.session.commitConfiguration()
    } catch {
        print(error.localizedDescription)
    }
}
When I run the program, it crashes at the input line because I force-unwrap device, which is nil.
I have set the required authorization so the app can use the camera, and it still ends up with a nil value.
If anyone has any clue how to solve this, it would be much appreciated.
You're asking for a builtInDualCamera, i.e. one that supports:
Automatic switching from one camera to the other when the zoom factor, light level, and focus position allow.
Higher-quality zoom for still captures by fusing images from both cameras.
Depth data delivery by measuring the disparity of matched features between the wide and telephoto cameras.
Delivery of photos from constituent devices (wide and telephoto cameras) from a single photo capture request.
And you're requiring it to be on the front of the phone. I don't know of any iPhone that has such a camera on the front, including the latest models. You likely meant to request position: .back, as in the example code. But keep in mind that not all phones have a dual camera on the back, either.
You might want to use default(for:) to request the default "video" camera rather than requiring a specific type of camera. Alternately, you can use an AVCaptureDevice.DiscoverySession to find a camera based on specific characteristics.
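For example (a sketch; the device types listed are just illustrative):

// Fall back from the specific device type to the system default camera.
let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back)
    ?? AVCaptureDevice.default(for: .video)

// Or search by characteristics and take the first match.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInDualCamera, .builtInWideAngleCamera],
    mediaType: .video,
    position: .back)
let camera = discovery.devices.first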

AudioKit error message: Too Many Frames to Process

I'm using the (very cool) AudioKit framework to process audio for a macOS music visualizer app. My audio source ("mic") is iTunes 12 via Rogue Amoeba Loopback.
In the Xcode debug window, I'm seeing the following error message each time I launch my app:
kAudioUnitErr_TooManyFramesToProcess : inFramesToProcess=513, mMaxFramesPerSlice=512
I've gathered from searches that this is probably related to sample rate, but I haven't found a clear description of what this error indicates (or if it even matters). My app is functioning normally, but I'm wondering if this could be affecting efficiency.
EDIT: The error message does not appear if I use Audio MIDI Setup to set the Loopback device output to 44.1kHz. (I set it initially to 48.0kHz to match my other audio devices, which I keep configured to the video standard.)
Keeping Loopback at 44.1kHz is an acceptable solution, but now my question would be: Is it possible to avoid this error even with a 48.0kHz input? (I tried AKSettings.sampleRate = 48000 but that made no difference.) Or can I just safely ignore the error in any case?
AudioKit is initialized thusly:
AKSettings.audioInputEnabled = true
mic = AKMicrophone()
do {
    try mic.setDevice(AudioKit.inputDevices![inputDeviceNumber])
} catch {
    AKLog("Device not set")
}
amplitudeTracker = AKAmplitudeTracker(mic)
AudioKit.output = AKBooster(amplitudeTracker, gain: 0)
do {
    try AudioKit.start()
} catch {
    AKLog("AudioKit did not start")
}
mic.start()
amplitudeTracker?.start()
This line saved my app:
try? AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.02)
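For context, here is where that line would go relative to the startup code above; note that AVAudioSession is an iOS API, so for the macOS app in the question you would adjust the device's buffer size elsewhere (this placement is my assumption):

// Set the preferred buffer duration before starting AudioKit (iOS only).
try? AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.02)
do {
    try AudioKit.start()
} catch {
    AKLog("AudioKit did not start")
}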

Synchronizing Playback of Multiple Audio Files in Audio Kit

I am developing a small audio sequencer application using AudioKit. I only need to play back 4 channels of audio, but I need them to be perfectly synchronized down to the sample level. When I run a test using just two audio files, I can hear that they are not synchronized. The difference is only a few samples, but even a one-sample discrepancy would be a problem. I am currently using multiple AKClipPlayer objects routed to an AKMixer object, and I trigger them with a basic for loop like this:
private var clipPlayers: [AKClipPlayer] = []

func play() {
    for player in clipPlayers {
        player.play()
    }
}
Is sample accurate playback timing of multiple audio files possible using AudioKit?
Yes, you need to schedule playback to start in the future with play(at:).
// This can take longer than expected, so do this before choosing a future time
clipPlayers.forEach { $0.prepare(withFrameCount: 10_000) }
let nearFuture = AVAudioTime.now() + 0.2
clipPlayers.forEach { $0.play(at: nearFuture) }
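If you're not using AudioKit's AVAudioTime conveniences (now() and + here are AudioKit extensions), a plain AVFoundation sketch of the same idea:

import AVFoundation

// Build "0.2 s from now" as a host-time based AVAudioTime.
let delay = AVAudioTime.hostTime(forSeconds: 0.2)
let startTime = AVAudioTime(hostTime: mach_absolute_time() + delay)
clipPlayers.forEach { $0.play(at: startTime) }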

increase volume of audio file recorded with swift

I am developing an application with swift. I would like to be able to increase the volume of a recorded file. Is there a way to do it directly inside the application?
I found AudioKit here and this question, but they didn't help me much.
Thanks!
With AudioKit
Option A:
Do you just want to import a file, then play it louder than you imported it? You can use an AKBooster for that.
import AudioKit

do {
    let file = try AKAudioFile(readFileName: "yourfile.wav")
    let player = try AKAudioPlayer(file: file)
    // Define your gain below. >1 means amplifying it to be louder.
    let booster = AKBooster(player, gain: 1.3)
    AudioKit.output = booster
    try AudioKit.start()
    // And then to play your file:
    player.play()
} catch {
    // Log your error
}
Just set the gain value of booster to make it louder.
Option B: You could also try normalizing the audio file, which essentially applies a constant multiplier across the recording (relative to the highest signal level in the recording) so that it reaches a new target maximum that you define. Here, I set it to -4 dB.
if let url = Bundle.main.url(forResource: "sound", withExtension: "wav"),
   let file = try? AKAudioFile(forReading: url) {
    // Set the new max level (in dB) for the gain here.
    if let normalizedFile = try? file.normalized(newMaxLevel: -4) {
        print(normalizedFile.maxLevel)
        // Play your normalizedFile...
    }
}
This method raises the amplitude of the whole file to the new maximum level in dB, so it won't affect the dynamics (SNR) of your file, and it only increases the gain by the amount needed to reach that new maximum (so you can safely apply it to ALL of your files to make them uniform).
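For example, if a file currently peaks at -10 dB, normalizing to a -4 dB target applies a uniform +6 dB of gain (roughly a 2x amplitude multiplier) to every sample.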
With AVAudioPlayer
Option A: If you want to adjust/control volume, AVAudioPlayer has a volume member but the docs say:
The playback volume for the audio player, ranging from 0.0 through 1.0 on a linear scale.
Where 1.0 is the volume of the original file and the default. So you can only make it quieter with that. Here's the code for it, in case you're interested:
let soundFileURL = Bundle.main.url(forResource: "sound", withExtension: "mp3")!
let audioPlayer = try? AVAudioPlayer(contentsOf: soundFileURL, fileTypeHint: AVFileType.mp3.rawValue)
// Only play once.
audioPlayer?.numberOfLoops = 0
// Set the volume of playback here (1.0 is the volume of the original file).
audioPlayer?.volume = 1.0
audioPlayer?.play()
Option B: If your sound file is too quiet, it might be coming out of the phone's receiver (the earpiece). In that case, you could try overriding the output port to use the speaker instead:
do {
    try AVAudioSession.sharedInstance().overrideOutputAudioPort(.speaker)
} catch {
    print("Override failed: \(error)")
}
You can also set that permanently with this code (but I can't guarantee your app will get into the App Store):
let audioSession = AVAudioSession.sharedInstance()
try? audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord, with: AVAudioSessionCategoryOptions.defaultToSpeaker)
Option C: If Option B doesn't do it for you, you might be out of luck on 'how to make AVAudioPlayer play louder.' You're best off editing the source file with some external software yourself - I can recommend Audacity as a good option to do this.
Option D: One last option I've only heard of. You could also look into MPVolumeView, which has UI to control the system output and volume. I'm not too familiar with it, though it may be approaching legacy status at this point.
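If you do explore it, a minimal sketch (parentView is a placeholder for whatever view you embed it in):

import MediaPlayer

// MPVolumeView is a UIView exposing the system volume slider
// (and an AirPlay route button).
let volumeView = MPVolumeView(frame: CGRect(x: 20, y: 100, width: 280, height: 44))
parentView.addSubview(volumeView)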
I want to mention a few things here because I was working on a similar problem.
Contrary to what's written in the Apple docs for the AVAudioPlayer.volume property (https://developer.apple.com/documentation/avfoundation/avaudioplayer/1389330-volume), the volume can go higher than 1.0, and this actually works. I bumped the volume up to 100.0 in my application, and the recorded audio is way louder and easier to hear.
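In code, that is just the volume property from the earlier snippets (values this high will clip if the source is already near full scale):

// Despite the documented 0.0-1.0 range, values above 1.0 amplify playback in practice.
audioPlayer?.volume = 100.0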
Another thing that helped me was setting the mode of AVAudioSession like so:
do {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, options: [.defaultToSpeaker, .allowBluetooth])
    try session.setMode(.videoRecording)
    try session.setActive(true)
} catch {
    debugPrint("Problem with AVAudioSession")
}
session.setMode(.videoRecording) is the key line here. It routes the audio through the phone's louder speakers instead of the phone-call speaker next to the front camera. I was having a problem with this and posted a question that helped me here:
AVAudioPlayer NOT playing through speaker after recording with AVAudioRecorder
There are several standard AudioKit DSP components that can increase the volume.
For example, you can use a simple method like AKBooster: http://audiokit.io/docs/Classes/AKBooster.html
Or use the following code:
AKSettings.defaultToSpeaker = true
See more details in this post:
https://github.com/audiokit/AudioKit/issues/599
https://github.com/AudioKit/AudioKit/issues/586

Using Apple's new AudioEngine to change Pitch of AudioPlayer sound

I am currently trying to get Apple's new audio engine working with my current audio setup. Specifically, I am trying to change the pitch with Audio Engine, which apparently is possible according to this post.
I have also looked into other pitch changing solutions including Dirac and ObjectAL, but unfortunately both seem to be pretty messed up in terms of working with Swift, which I am using.
My question is: how do I change the pitch of an audio file using Apple's new audio engine? I am able to play sounds using AVAudioPlayer, but I don't understand how the file is referenced in audioEngine. In the code on the linked page there is a 'format' that refers to the audio file, but I don't understand how to create a format, or what it does.
I am playing sounds with this simple code:
let path = NSBundle.mainBundle().pathForResource(String(randomNumber), ofType:"m4r")
let fileURL = NSURL(fileURLWithPath: path!)
player = AVAudioPlayer(contentsOfURL: fileURL, error: nil)
player.prepareToPlay()
player.play()
You use an AVAudioPlayerNode, not an AVAudioPlayer.
engine = AVAudioEngine()
playerNode = AVAudioPlayerNode()
engine.attachNode(playerNode)
Then you can attach an AVAudioUnitTimePitch.
let mixer = engine.mainMixerNode

auTimePitch = AVAudioUnitTimePitch()
auTimePitch.pitch = 1200 // In cents. The default value is 0.0. The range of values is -2400 to 2400.
auTimePitch.rate = 2 // The default value is 1.0. The range of supported values is 1/32 to 32.0.
engine.attachNode(auTimePitch)
engine.connect(playerNode, to: auTimePitch, format: mixer.outputFormatForBus(0))
engine.connect(auTimePitch, to: mixer, format: mixer.outputFormatForBus(0))
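To actually hear audio through this graph you still need to load a file, schedule it on the player node, and start the engine. A sketch in current Swift spelling (attachNode/outputFormatForBus are now attach(_:)/outputFormat(forBus:)); "sound.m4r" is a placeholder file name:

do {
    let url = Bundle.main.url(forResource: "sound", withExtension: "m4r")!
    let audioFile = try AVAudioFile(forReading: url)
    engine.attach(playerNode)
    engine.attach(auTimePitch)
    engine.connect(playerNode, to: auTimePitch, format: audioFile.processingFormat)
    engine.connect(auTimePitch, to: engine.mainMixerNode, format: audioFile.processingFormat)
    // Schedule the whole file, then start rendering and playback.
    playerNode.scheduleFile(audioFile, at: nil, completionHandler: nil)
    try engine.start()
    playerNode.play()
} catch {
    print("Engine setup failed: \(error)")
}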