webrtc macOS change input source - swift

I'm trying to change the audio input source (microphone) in my macOS app like this:
var engine = AVAudioEngine()

private func activateNewInput(_ id: AudioDeviceID) {
    let input = engine.inputNode
    let inputUnit = input.audioUnit!
    var inputDeviceID: AudioDeviceID = id
    let status = AudioUnitSetProperty(inputUnit,
                                      kAudioOutputUnitProperty_CurrentDevice,
                                      kAudioUnitScope_Global,
                                      0,
                                      &inputDeviceID,
                                      UInt32(MemoryLayout<AudioDeviceID>.size))
    if status != 0 {
        NSLog("Could not change input: \(status)")
    }
    /*
    engine.prepare()
    do {
        try engine.start()
    }
    catch {
        NSLog("\(error.localizedDescription)")
    }
    */
}
The status returned by AudioUnitSetProperty is successful (zero), yet WebRTC keeps using the default input source no matter what.
On iOS this is typically done with AVAudioSession, which is missing on macOS; that's why I use AudioUnitSetProperty from the AudioToolbox framework. I also checked RTCPeerConnectionFactory: it has no constructor to inject an ADM (audio device module), nor any API to control audio input. I use WebRTC branch 106.
Any ideas or hints are much appreciated.
Thanks
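A possible direction (not something verified in this thread): the stock WebRTC audio device module on macOS typically captures from the system default input device rather than from your AVAudioEngine graph, so changing the system default input may be enough. A minimal Core Audio sketch, assuming you already hold a valid AudioDeviceID:

import CoreAudio

// Sketch: change the system-wide default input device instead of retargeting
// the engine's input unit. Note this affects every app that follows the
// default device, not just yours.
func setDefaultInputDevice(_ id: AudioDeviceID) -> OSStatus {
    var deviceID = id
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultInputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain) // kAudioObjectPropertyElementMaster on older SDKs
    return AudioObjectSetPropertyData(
        AudioObjectID(kAudioObjectSystemObject),
        &address,
        0, nil,
        UInt32(MemoryLayout<AudioDeviceID>.size),
        &deviceID)
}

Whether WebRTC's capture actually tracks the default device is worth verifying against your branch 106 build.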

Related

Noisiness when playing sound using an AVAudioSourceNode

I'm using TinySoundFont to use SF2 files on watchOS. I want to play the raw audio generated by the framework in real time (which means calling tsf_note_on as soon as the corresponding button is pressed and calling tsf_render_short as soon as new data is needed). I'm using an AVAudioSourceNode to achieve that.
Despite the sound rendering fine when I render it into a file, it's really noisy when played using the AVAudioSourceNode. (Based on the answer from Rob Napier, this might be because I ignore the timestamp property - I'm looking for a solution that addresses that concern.) What causes this issue and how can I fix it?
I'm looking for a solution that renders audio realtime and not precalculates it, since I want to handle looping sounds correctly as well.
You can download a sample GitHub project here.
ContentView.swift
import SwiftUI
import AVFoundation

struct ContentView: View {
    @ObservedObject var settings = Settings.shared

    init() {
        settings.prepare()
    }

    var body: some View {
        Button("Play Sound") {
            Settings.shared.playSound()
            if !settings.engine.isRunning {
                do {
                    try settings.engine.start()
                } catch {
                    print(error)
                }
            }
        }
    }
}
Settings.swift
import SwiftUI
import AVFoundation

class Settings: ObservableObject {
    static let shared = Settings()
    var engine: AVAudioEngine!
    var sourceNode: AVAudioSourceNode!
    var tinySoundFont: OpaquePointer!

    func prepare() {
        let soundFontPath = Bundle.main.path(forResource: "GMGSx", ofType: "sf2")
        tinySoundFont = tsf_load_filename(soundFontPath)
        tsf_set_output(tinySoundFont, TSF_MONO, 44100, 0)
        setUpSound()
    }

    func setUpSound() {
        if let engine = engine,
           let sourceNode = sourceNode {
            engine.detach(sourceNode)
        }
        engine = .init()
        let mixerNode = engine.mainMixerNode
        let audioFormat = AVAudioFormat(
            commonFormat: .pcmFormatInt16,
            sampleRate: 44100,
            channels: 1,
            interleaved: false
        )
        guard let audioFormat = audioFormat else {
            return
        }
        sourceNode = AVAudioSourceNode(format: audioFormat) { silence, timeStamp, frameCount, audioBufferList in
            guard let data = self.getSound(length: Int(frameCount)) else {
                return 1
            }
            let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
            data.withUnsafeBytes { (intPointer: UnsafePointer<Int16>) in
                for index in 0 ..< Int(frameCount) {
                    let value = intPointer[index]
                    // Set the same value on all channels (due to the inputFormat, there's only one channel though).
                    for buffer in ablPointer {
                        let buf: UnsafeMutableBufferPointer<Int16> = UnsafeMutableBufferPointer(buffer)
                        buf[index] = value
                    }
                }
            }
            return noErr
        }
        engine.attach(sourceNode)
        engine.connect(sourceNode, to: mixerNode, format: audioFormat)
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback)
        } catch {
            print(error)
        }
    }

    func playSound() {
        tsf_note_on(tinySoundFont, 0, 60, 1)
    }

    func getSound(length: Int) -> Data? {
        let array = [Int16]()
        var storage = UnsafeMutablePointer<Int16>.allocate(capacity: length)
        storage.initialize(from: array, count: length)
        tsf_render_short(tinySoundFont, storage, Int32(length), 0)
        let data = Data(bytes: storage, count: length)
        storage.deallocate()
        return data
    }
}
The AVAudioSourceNode initializer takes a render block. In the mode you're using (live playback), this is a real-time callback, so you have a very tight deadline to fill the block with the requested data and return it so it can be played. You don't have a ton of time to do calculations. You definitely don't have time to access the filesystem.
In your block, you're re-computing an entire WAV every render cycle, then writing it to disk, then reading it from disk, then filling in the block that was requested. You ignore the timestamp requested, and always fill the buffer starting at sample zero. The mismatch is what's causing the buzzing. The fact that you're so slow about it is probably what's causing the pitch-drop.
Depending on the size of your files, the simplest way to implement this is to first decode everything into memory, and fill in the buffers for the timestamps and lengths requested. It looks like your C code already generates PCM data, so there's no need to convert it into a WAV file. It seems to already be in the right format.
Apple provides a good sample project for a Signal Generator that you should use as a starting point. Download that and make sure it works as expected. Then work to swap in your SF2 code. You may also find the video on this helpful: What’s New in AVAudioEngine.
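To make the "decode into memory first" idea concrete, here is a rough sketch (not the sample project's code): the PCM is assumed to be fully rendered into a `samples` array up front, and the render block only copies the next `frameCount` values from a running read position. A production version would honor the timestamp rather than a simple counter.

import AVFoundation

// Sketch: serve a preloaded mono Float32 buffer from an AVAudioSourceNode.
// `samples` is assumed to already contain the fully rendered PCM data.
final class PreloadedSource {
    private let samples: [Float]
    private var readIndex = 0   // advances monotonically; a fuller version would
                                // derive the position from the supplied timestamp

    init(samples: [Float]) {
        self.samples = samples
    }

    lazy var node = AVAudioSourceNode { [weak self] _, _, frameCount, audioBufferList in
        guard let self = self else { return noErr }
        let abl = UnsafeMutableAudioBufferListPointer(audioBufferList)
        guard let out = abl[0].mData?.assumingMemoryBound(to: Float.self) else { return noErr }
        for frame in 0..<Int(frameCount) {
            // Copy the next sample, or silence once the source is exhausted.
            let index = self.readIndex + frame
            out[frame] = index < self.samples.count ? self.samples[index] : 0
        }
        self.readIndex += Int(frameCount)
        return noErr
    }
}

Attach `node` to the engine and connect it to the mixer as with any other source node; the point is that no rendering or file I/O happens inside the real-time callback.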
The easiest tool to use here is probably an AVAudioPlayerNode. Your SoundFontHelper is making things much more complicated, so I've removed it and just call TSF directly from Swift. To do this, create a file called tsf.c as follows:
#define TSF_IMPLEMENTATION
#include "tsf.h"
And add it to BridgingHeader.h:
#import "tsf.h"
Simplify ContentView to this:
import SwiftUI

struct ContentView: View {
    @ObservedObject var settings = Settings.shared

    init() {
        // You'll want error handling here.
        try! settings.prepare()
    }

    var body: some View {
        Button("Play Sound") {
            settings.play()
        }
    }
}
And that leaves the new version of Settings, which is the meat of it:
import SwiftUI
import AVFoundation

class Settings: ObservableObject {
    static let shared = Settings()
    var engine = AVAudioEngine()
    let playerNode = AVAudioPlayerNode()
    var tsf: OpaquePointer
    var outputFormat = AVAudioFormat()

    init() {
        let soundFontPath = Bundle.main.path(forResource: "GMGSx", ofType: "sf2")
        tsf = tsf_load_filename(soundFontPath)
        engine.attach(playerNode)
        engine.connect(playerNode, to: engine.mainMixerNode, format: nil)
        updateOutputFormat()
    }

    // For simplicity, this object assumes the outputFormat does not change during its lifetime.
    // It's important to watch for route changes, and recreate this object if they occur. For details, see:
    // https://developer.apple.com/documentation/avfaudio/avaudiosession/responding_to_audio_session_route_changes
    func updateOutputFormat() {
        outputFormat = engine.mainMixerNode.outputFormat(forBus: 0)
    }

    func prepare() throws {
        // Start the engine
        try AVAudioSession.sharedInstance().setCategory(.playback)
        try engine.start()
        playerNode.play()
        updateOutputFormat()

        // Configure TSF. The only important thing here is the sample rate, which can be different on different hardware.
        // Core Audio has a defined format of "deinterleaved 32-bit floating point."
        tsf_set_output(tsf,
                       TSF_STEREO_UNWEAVED,            // mode
                       Int32(outputFormat.sampleRate), // sampleRate
                       0)                              // gain
    }

    func play() {
        tsf_note_on(tsf,
                    0,   // preset_index
                    60,  // key (middle C)
                    1.0) // velocity

        // These tones have a long falloff, so you want a lot of source data. This is 10s.
        let frameCount = 10 * Int(outputFormat.sampleRate)

        // Create a buffer for the samples
        let buffer = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: AVAudioFrameCount(frameCount))!
        buffer.frameLength = buffer.frameCapacity

        // Render the samples. Do not mix. This buffer has been extended to
        // the needed size by the assignment to `frameLength` above. The call to
        // `assumingMemoryBound` is known to be correct because the format is Float32.
        let ptr = buffer.audioBufferList.pointee.mBuffers.mData?.assumingMemoryBound(to: Float.self)
        tsf_render_float(tsf,
                         ptr,                // buffer
                         Int32(frameCount),  // samples
                         0)                  // mixing (do not mix)

        // All done. Play the buffer, interrupting whatever is currently playing
        playerNode.scheduleBuffer(buffer, at: nil, options: .interrupts)
    }
}
You can find the full version at my fork. You can also see the first commit, which is another approach that maintains your SoundFontHelper and does conversions to deal with it, but it's much simpler to just render the audio correctly in the first place.

Where is object id of 0 coming from in this Swift code that is supposed to pass audio from an input device to an output device? See error message

I am fairly new to this and trying to learn, so please excuse me if the following is not correct, but what I think it is supposed to do is:
init an input and output node
create an audio unit for each node
assign a "CurrentDevice" of a particular device id for each audio unit
connect the input and output
func start() {
    let engine = AVAudioEngine()
    let inputNode = engine.inputNode
    let outputNode = engine.outputNode

    guard let inputUnit: AudioUnit = inputNode.audioUnit else { return }
    guard let outputUnit: AudioUnit = outputNode.audioUnit else { return }

    var inputDeviceID: AudioDeviceID = 46   // External input device
    var outputDeviceID: AudioDeviceID = 57  // Mac mini speakers

    AudioUnitSetProperty(inputUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &inputDeviceID, UInt32(MemoryLayout<AudioDeviceID>.size))
    AudioUnitSetProperty(outputUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &outputDeviceID, UInt32(MemoryLayout<AudioDeviceID>.size))

    let bus = 0
    engine.connect(inputNode, to: outputNode, fromBus: bus, toBus: bus, format: nil)
    engine.prepare()
    do {
        try engine.start()
    } catch {
        print("error")
    }
}
When I run this, the logs fill with:
AudioObjectGetPropertyDataSize: no object with given ID 0
AudioObjectGetPropertyData: no object with given ID 0
AudioObjectGetPropertyDataSize: no object with given ID 0
AudioObjectGetPropertyData: no object with given ID 0
AudioObjectGetPropertyDataSize: no object with given ID 0
AudioObjectGetPropertyData: no object with given ID 0
...
Where is this ID of 0 coming from?
If I comment out either AudioUnitSetProperty(...) call, the error does not happen.
What am I doing wrong here?
What you're doing looks correct to me, although I usually do this using AUAudioUnit:
try inputNode.auAudioUnit.setDeviceID(inputDeviceID)
try outputNode.auAudioUnit.setDeviceID(outputDeviceID)
I don't know that it will make a difference in this case, though.
Previously there was an undocumented limitation on AVAudioEngine restricting simultaneous input and output to the default system device (https://www.mail-archive.com/coreaudio-api@lists.apple.com/msg01663.html). I am not sure if that limitation is still present on Big Sur.
One possible workaround (I haven't had a chance to try yet) is to create an aggregate device containing the input and output devices you'd like and use that device with AVAudioEngine.
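For reference, a rough sketch of that aggregate-device idea (untested, as the answer says). The device UIDs are placeholders you would look up via kAudioDevicePropertyDeviceUID, and the dictionary keys are the aggregate-device constants from AudioHardware.h, which are assumed to import into Swift as String constants:

import CoreAudio

// Sketch: build an aggregate device from two existing devices' UID strings.
// The resulting AudioObjectID can then be used as the engine's device.
func makeAggregateDevice(inputUID: String, outputUID: String) -> AudioObjectID {
    let description: [String: Any] = [
        kAudioAggregateDeviceNameKey: "Engine Aggregate",
        kAudioAggregateDeviceUIDKey: "com.example.engine-aggregate", // any unique string
        kAudioAggregateDeviceSubDeviceListKey: [
            [kAudioSubDeviceUIDKey: inputUID],
            [kAudioSubDeviceUIDKey: outputUID]
        ]
    ]
    var aggregateID = AudioObjectID(kAudioObjectUnknown)
    let status = AudioHardwareCreateAggregateDevice(description as CFDictionary, &aggregateID)
    if status != noErr {
        print("AudioHardwareCreateAggregateDevice failed: \(status)")
    }
    return aggregateID
}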
AudioObjectID 0 is kAudioObjectUnknown. I suspect your attempt to set the device left the engine in an error state when it failed.
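Relatedly, the hard-coded IDs 46 and 57 are not stable across reboots or machines. A small sketch for listing the device IDs the HAL currently knows about, so they can be verified before calling AudioUnitSetProperty:

import CoreAudio

// Sketch: list all AudioDeviceIDs currently registered with the HAL.
func allAudioDeviceIDs() -> [AudioDeviceID] {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDevices,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain) // kAudioObjectPropertyElementMaster on older SDKs
    var dataSize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject),
                                         &address, 0, nil, &dataSize) == noErr else { return [] }
    var deviceIDs = [AudioDeviceID](repeating: 0,
                                    count: Int(dataSize) / MemoryLayout<AudioDeviceID>.size)
    guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                     &address, 0, nil, &dataSize, &deviceIDs) == noErr else { return [] }
    return deviceIDs
}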

How to send Microphone and InApp Audio CMSampleBuffer to webRTC in swift?

I am working on a screen-broadcast application. I want to send my screen recording to a WebRTC server.
override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
    //if source!.isSocketConnected {
    switch sampleBufferType {
    case RPSampleBufferType.video:
        // Handle video sample buffer
        source?.processVideoSampleBuffer(sampleBuffer)
        break
    case RPSampleBufferType.audioApp:
        // Handle audio sample buffer for app audio
        source?.processInAppAudioSampleBuffer(sampleBuffer)
        break
    case RPSampleBufferType.audioMic:
        // Handle audio sample buffer for mic audio
        source?.processAudioSampleBuffer(sampleBuffer)
        break
    @unknown default:
        break
    }
}
// VideoBuffer sending method
func startCaptureLocalVideo(sampleBuffer: CMSampleBuffer) {
    let _pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    if let pixelBuffer = _pixelBuffer {
        let rtcPixelBuffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)
        let timeStampNs = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)) * 1000000000
        let rtcVideoFrame = RTCVideoFrame(buffer: rtcPixelBuffer, rotation: RTCVideoRotation._90, timeStampNs: Int64(timeStampNs))
        localVideoSource!.capturer(videoCapturer!, didCapture: rtcVideoFrame)
    }
}
I succeeded in sending the VIDEO sample buffer to WebRTC, but I am stuck on the AUDIO part.
I did not find any way to send the AUDIO buffer to WebRTC.
Thank you so much for your answer.
I found the solution for that: just follow the guideline at this link:
https://github.com/pixiv/webrtc/blob/branch-heads/pixiv-m78/README.pixiv.md
The WebRTC team no longer supports the native framework, so we need to modify the WebRTC source code and rebuild it to use inside another app.
Luckily, I found someone who forked the WebRTC project and updated the function that passes the CMSampleBuffer from the broadcast extension to the RTCPeerConnection.
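The fork's exact injection API isn't shown in this thread, but whichever hook the modified build exposes, the usual first step is pulling the raw PCM and its format out of the CMSampleBuffer. A sketch of that extraction (the hand-off to WebRTC itself is left out because it depends on the fork):

import CoreMedia
import Foundation

// Sketch: copy the PCM payload and its stream description out of an audio
// CMSampleBuffer so it can be handed to the audio-injection hook of the
// modified WebRTC build.
func pcmData(from sampleBuffer: CMSampleBuffer) -> (data: Data, format: AudioStreamBasicDescription)? {
    guard let formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(formatDesc)?.pointee,
          let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return nil }

    var length = 0
    var dataPointer: UnsafeMutablePointer<Int8>? = nil
    // Assumes the block buffer is contiguous; a fuller version would use
    // CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer instead.
    guard CMBlockBufferGetDataPointer(blockBuffer,
                                      atOffset: 0,
                                      lengthAtOffsetOut: nil,
                                      totalLengthOut: &length,
                                      dataPointerOut: &dataPointer) == kCMBlockBufferNoErr,
          let pointer = dataPointer else { return nil }

    return (Data(bytes: pointer, count: length), asbd)
}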

kAudioUnitType_MusicEffect as AVAudioUnit

I'd like to use my kAudioUnitType_MusicEffect AU in an AVAudioEngine graph. So I try to call:
[AVAudioUnitMIDIInstrument instantiateWithComponentDescription:desc options:kAudioComponentInstantiation_LoadInProcess completionHandler:
but that just yields a normal AVAudioUnit, so the MIDI selectors (like -[AVAudioUnit sendMIDIEvent:data1:data2:]) are unrecognized. It seems AVAudioUnitMIDIInstrument instantiateWithComponentDescription only works with kAudioUnitType_MusicDevice.
Any way to do this? (Note: OS X 10.11)
Make a subclass and call instantiateWithComponentDescription from its init.
Gory details and github project in this blog post
http://www.rockhoppertech.com/blog/multi-timbral-avaudiounitmidiinstrument/#avfoundation
This uses Swift and kAudioUnitSubType_MIDISynth but you can see how to do it.
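A condensed sketch of that subclass idea for the music-effect case (my own illustration, not the blog's code; the subtype and manufacturer are whatever your AU reports). The point is that going through AVAudioUnitMIDIInstrument's init(audioComponentDescription:) keeps the MIDI send selectors available:

import AVFoundation

// Sketch: subclass AVAudioUnitMIDIInstrument so a kAudioUnitType_MusicEffect
// component still exposes sendMIDIEvent(_:data1:data2:) and friends.
class MusicEffectMIDIUnit: AVAudioUnitMIDIInstrument {
    init(subType: OSType, manufacturer: OSType) {
        var desc = AudioComponentDescription()
        desc.componentType = kAudioUnitType_MusicEffect
        desc.componentSubType = subType
        desc.componentManufacturer = manufacturer
        desc.componentFlags = 0
        desc.componentFlagsMask = 0
        super.init(audioComponentDescription: desc)
    }
}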
This works. It's a subclass. You add it to the engine and you route the signal through it.
class MyAVAudioUnitDistortionEffect: AVAudioUnitEffect {
    override init() {
        var description = AudioComponentDescription()
        description.componentType = kAudioUnitType_Effect
        description.componentSubType = kAudioUnitSubType_Distortion
        description.componentManufacturer = kAudioUnitManufacturer_Apple
        description.componentFlags = 0
        description.componentFlagsMask = 0
        super.init(audioComponentDescription: description)
    }

    func setFinalMix(finalMix: Float) {
        let status = AudioUnitSetParameter(
            self.audioUnit,
            AudioUnitParameterID(kDistortionParam_FinalMix),
            AudioUnitScope(kAudioUnitScope_Global),
            0,
            finalMix,
            0)
        if status != noErr {
            print("error \(status)")
        }
    }
}

iOS: Keep application running in background

How do I keep my application running in the background?
Would I have to jailbreak my iPhone to do this? I just need this app to check something from the internet every set interval and notify when needed, for my own use.
Yes, you can, and there's no need to jailbreak. Check out the "Implementing long-running background tasks" section of this doc from Apple.
From Apple's doc:
Declaring Your App’s Supported Background Tasks
Support for some types of background execution must be declared in advance by the app that uses them. An app declares support for a service using its Info.plist file. Add the UIBackgroundModes key to your Info.plist file and set its value to an array containing one or more of the following strings: (see Apple's doc from link mentioned above.)
I guess this is what you need:
When an iOS application goes to the background, are lengthy tasks paused?
iOS Application Background Downloading
This might help you ...
Enjoy Coding :)
Use local notifications to do that. Note that this won't check continuously: you set a time at which your specific event is checked, and you can shorten the gap by decreasing the interval. Read more about local notifications at:
http://developer.apple.com/library/mac/#documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/Introduction/Introduction.html
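That answer predates the UserNotifications framework; with the modern API, a repeating local notification looks roughly like this (the interval and text are placeholders):

import UserNotifications

// Sketch: schedule a repeating local notification as a periodic reminder to
// open the app and refresh (modern UserNotifications API; the original answer
// refers to the older UILocalNotification era).
func scheduleCheckReminder() {
    let center = UNUserNotificationCenter.current()
    center.requestAuthorization(options: [.alert, .sound]) { granted, _ in
        guard granted else { return }
        let content = UNMutableNotificationContent()
        content.title = "Time to check"
        content.body = "Open the app to refresh its data."
        // Repeating triggers must be at least 60 seconds apart.
        let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 15 * 60, repeats: true)
        let request = UNNotificationRequest(identifier: "periodic-check",
                                            content: content,
                                            trigger: trigger)
        center.add(request)
    }
}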
I found a way to keep the app running in the background by playing silence.
Make sure that you selected audio playback in Background Modes.
Also, don't use this method for a long time, since it consumes CPU and battery, but I think it's a suitable way to keep the app alive for a few minutes.
Just create an instance of SilencePlayer, call play(), and then stop() when you're done.
import AVFoundation
import AudioToolbox

public class SilencePlayer {
    private var audioQueue: AudioQueueRef? = nil

    public private(set) var isStarted = false

    public func play() {
        if isStarted { return }
        print("Playing silence")
        let avs = AVAudioSession.sharedInstance()
        try! avs.setCategory(AVAudioSessionCategoryPlayback, with: .mixWithOthers)
        try! avs.setActive(true)
        isStarted = true
        var streamFormat = AudioStreamBasicDescription(
            mSampleRate: 16000,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
            mBytesPerPacket: 2,
            mFramesPerPacket: 1,
            mBytesPerFrame: 2,
            mChannelsPerFrame: 1,
            mBitsPerChannel: 16,
            mReserved: 0
        )
        let status = AudioQueueNewOutput(
            &streamFormat,
            SilenceQueueOutputCallback,
            nil, nil, nil, 0,
            &audioQueue
        )
        print("OSStatus for silence \(status)")
        var buffers = Array<AudioQueueBufferRef?>.init(repeating: nil, count: 3)
        for i in 0..<3 {
            // Allocate each buffer, then mark 320 bytes of it as valid data
            // before priming the queue with silence.
            AudioQueueAllocateBuffer(audioQueue!, 320, &(buffers[i]))
            buffers[i]?.pointee.mAudioDataByteSize = 320
            SilenceQueueOutputCallback(nil, audioQueue!, buffers[i]!)
        }
        let startStatus = AudioQueueStart(audioQueue!, nil)
        print("Start status for silence \(startStatus)")
    }

    public func stop() {
        guard isStarted else { return }
        print("Called stop silence")
        if let aq = audioQueue {
            AudioQueueStop(aq, true)
            audioQueue = nil
        }
        try! AVAudioSession.sharedInstance().setActive(false)
        isStarted = false
    }
}

fileprivate func SilenceQueueOutputCallback(_ userData: UnsafeMutableRawPointer?, _ audioQueueRef: AudioQueueRef, _ bufferRef: AudioQueueBufferRef) -> Void {
    let pointer = bufferRef.pointee.mAudioData
    let length = bufferRef.pointee.mAudioDataByteSize
    memset(pointer, 0, Int(length))
    if AudioQueueEnqueueBuffer(audioQueueRef, bufferRef, 0, nil) != 0 {
        AudioQueueFreeBuffer(audioQueueRef, bufferRef)
    }
}
Tested on iOS 10 and Swift 4
I know this is not the answer to your question, but I think it is a solution.
This assumes that you're trying to check something or fetch data from the internet on a regular basis?
Create a service that checks the internet at a set interval for whatever it is you want to know, and sends a push notification to alert you if the server is down or whatever you're trying to monitor has changed state. Just an idea.
Yes, you can do something like this. For that you need to add an entry to Info.plist to tell the OS that your app will run in the background. I did this when I wanted to send the user's location to a server at particular intervals: I set "Required background modes" to "App registers for location updates".
You can write a handler of type UIBackgroundTaskIdentifier.
You can already do this in the applicationDidEnterBackground method.
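For completeness, a minimal sketch of that background-task pattern, assuming a standard UIApplicationDelegate; the actual work is a placeholder:

import UIKit

// Sketch: request extra execution time when entering the background.
// iOS only grants a limited window, so the task must finish quickly.
func applicationDidEnterBackground(_ application: UIApplication) {
    var taskID: UIBackgroundTaskIdentifier = .invalid
    taskID = application.beginBackgroundTask {
        // Expiration handler: always end the task, or the app is killed.
        application.endBackgroundTask(taskID)
        taskID = .invalid
    }
    DispatchQueue.global().async {
        // ... perform the periodic check against the server here ...
        application.endBackgroundTask(taskID)
        taskID = .invalid
    }
}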