How to send microphone and in-app audio CMSampleBuffers to WebRTC in Swift?

I am working on a screen-broadcast application. I want to send my screen recording to a WebRTC server.
override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
    //if source!.isSocketConnected {
    switch sampleBufferType {
    case RPSampleBufferType.video:
        // Handle video sample buffer
        source?.processVideoSampleBuffer(sampleBuffer)
    case RPSampleBufferType.audioApp:
        // Handle audio sample buffer for app audio
        source?.processInAppAudioSampleBuffer(sampleBuffer)
    case RPSampleBufferType.audioMic:
        // Handle audio sample buffer for mic audio
        source?.processAudioSampleBuffer(sampleBuffer)
    @unknown default:
        break
    }
}
// VideoBuffer sending method
func startCaptureLocalVideo(sampleBuffer: CMSampleBuffer) {
    if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        let rtcPixelBuffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)
        let timeStampNs = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)) * 1_000_000_000
        let rtcVideoFrame = RTCVideoFrame(buffer: rtcPixelBuffer, rotation: RTCVideoRotation._90, timeStampNs: Int64(timeStampNs))
        localVideoSource!.capturer(videoCapturer!, didCapture: rtcVideoFrame)
    }
}
I managed to send the video sample buffer to WebRTC, but I am stuck on the audio part.
I could not find any way to send an audio buffer to WebRTC.
Thank you so much for your answer.

I found the solution for that; just go to this link and follow the guide:
https://github.com/pixiv/webrtc/blob/branch-heads/pixiv-m78/README.pixiv.md
The WebRTC team no longer supports the native framework, so we need to modify the WebRTC source code and rebuild it for use inside another app.
Luckily, I found someone who forked the WebRTC project and added a function that passes a CMSampleBuffer from the broadcast extension to the RTCPeerConnection.
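For reference, here is a minimal sketch of unpacking the PCM payload of an audio CMSampleBuffer in the broadcast extension. The call that actually hands the samples to WebRTC is whatever the forked build exposes (see the README linked above); deliverAudioToWebRTC below is only a hypothetical placeholder for that entry point.

import CoreMedia
import AudioToolbox

// Hypothetical stand-in for the injection API added by the fork.
func deliverAudioToWebRTC(_ data: UnsafeMutableRawPointer, _ byteCount: Int, _ format: AudioStreamBasicDescription) {
    // Replace with the function exposed by the modified WebRTC build.
}

func processInAppAudioSampleBuffer(_ sampleBuffer: CMSampleBuffer) {
    var audioBufferList = AudioBufferList()
    var blockBuffer: CMBlockBuffer?
    let status = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        sampleBuffer,
        bufferListSizeNeededOut: nil,
        bufferListOut: &audioBufferList,
        bufferListSize: MemoryLayout<AudioBufferList>.size,
        blockBufferAllocator: nil,
        blockBufferMemoryAllocator: nil,
        flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
        blockBufferOut: &blockBuffer)
    guard status == 0,
          let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription)?.pointee
    else { return }

    // Walk the (usually single) audio buffer and forward its raw PCM bytes.
    let buffers = UnsafeMutableAudioBufferListPointer(&audioBufferList)
    for buffer in buffers {
        guard let data = buffer.mData else { continue }
        deliverAudioToWebRTC(data, Int(buffer.mDataByteSize), asbd)
    }
}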

Related

webrtc macOS change input source

I'm trying to change the audio input source (microphone) in my macOS app like this:
var engine = AVAudioEngine()

private func activateNewInput(_ id: AudioDeviceID) {
    let input = engine.inputNode
    let inputUnit = input.audioUnit!
    var inputDeviceID: AudioDeviceID = id
    let status = AudioUnitSetProperty(inputUnit,
                                      kAudioOutputUnitProperty_CurrentDevice,
                                      kAudioUnitScope_Global,
                                      0,
                                      &inputDeviceID,
                                      UInt32(MemoryLayout<AudioDeviceID>.size))
    if status != 0 {
        NSLog("Could not change input: \(status)")
    }
    /*
    engine.prepare()
    do {
        try engine.start()
    }
    catch {
        NSLog("\(error.localizedDescription)")
    }
    */
}
The status returned by AudioUnitSetProperty is successful (zero), yet WebRTC keeps using the default input source no matter what.
On iOS this is typically done with AVAudioSession, which is missing on macOS. That's why I use AudioUnitSetProperty from the Audio Toolbox framework. I also checked RTCPeerConnectionFactory: it has no constructor to inject an ADM (audio device module), nor any API to control audio input. I use WebRTC branch 106.
Any ideas or hints are much appreciated.
Thanks
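One thing to note (an assumption on my part, not verified against branch 106): WebRTC creates its own capture audio unit internally, so setting kAudioOutputUnitProperty_CurrentDevice on AVAudioEngine's input unit likely never reaches it. Since the stock macOS audio device module follows the system default input device, a sketch that switches the system-wide default via the CoreAudio HAL might be worth trying:

import CoreAudio

// Sketch: change the system-wide default input device via the CoreAudio HAL.
// Assumption: WebRTC's built-in macOS ADM tracks the system default input,
// so switching the default switches what WebRTC captures.
func setDefaultInputDevice(_ id: AudioDeviceID) -> OSStatus {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultInputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain) // use ...ElementMaster on older SDKs
    var deviceID = id
    return AudioObjectSetPropertyData(
        AudioObjectID(kAudioObjectSystemObject),
        &address,
        0, nil,
        UInt32(MemoryLayout<AudioDeviceID>.size),
        &deviceID)
}

The obvious downside is that this changes the default input for the whole system, not just for the app.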

WebRTC: How to capture video from the camera library?

I can't load a video picked with UIImagePickerController using WebRTC.
With a file saved in the app bundle it works, but not if I use UIImagePickerController and its delegate method
UIImagePickerControllerDelegate.imagePickerController(_:didFinishPickingMediaWithInfo:),
where I take the path from the media info like this:
(info[.mediaURL] as! URL).path
This is the code I use to start capturing a video file:
public func startCaptureLocalVideoFile(name: String, renderer: RTCVideoRenderer) {
    print("startCaptureLocalVideoFile")
    stopLocalCapture()
    localRenderer = renderer
    videoCapturer = RTCFileVideoCapturer(delegate: videoSource)
    guard let capturer = videoCapturer as? RTCFileVideoCapturer else {
        print("WebRTCService can't get capturer")
        return
    }
    capturer.startCapturing(fromFileNamed: name) { error in
        print("startCapturing error ", error)
        return
    }
    localVideoTrack?.add(renderer)
}
so I get this media info:
info [__C.UIImagePickerControllerInfoKey(_rawValue: UIImagePickerControllerMediaURL): file:///private/var/mobile/Containers/Data/PluginKitPlugin/5F7A4469-5006-4590-8F59-396CD86A083B/tmp/trim.B46C5878-BAF2-432B-B627-9787D74CE7B0.MOV, __C.UIImagePickerControllerInfoKey(_rawValue: UIImagePickerControllerMediaType): public.movie, __C.UIImagePickerControllerInfoKey(_rawValue: UIImagePickerControllerReferenceURL): assets-library://asset/asset.MOV?id=33EECFB7-514A-435A-AA19-26A055FB9F06&ext=MOV]
and this error:
startCapturing error Error Domain=org.webrtc.RTCFileVideoCapturer Code=2001 "(null)" UserInfo={NSUnderlyingError=File /private/var/mobile/Containers/Data/PluginKitPlugin/5F7A4469-5006-4590-8F59-396CD86A083B/tmp/trim.B46C5878-BAF2-432B-B627-9787D74CE7B0.MOV not found in bundle}
It seems RTCFileVideoCapturer works with Bundle.main only, and we can't write to that.
Am I doing this right? Is there another way to accomplish this?
Thanks for the help!
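RTCFileVideoCapturer resolves the file name against the main bundle (that is what the error above is saying), so it cannot read the temporary URL returned by UIImagePickerController. One alternative, sketched below under the assumption that videoSource (RTCVideoSource) and videoCapturer are the same objects used in the question: read the picked file with AVAssetReader and push the frames to the video source yourself. Pacing the frames against their presentation timestamps is omitted here and would be needed for correct playback speed.

import AVFoundation
import WebRTC

// Sketch: read the picked movie with AVAssetReader and push frames to the
// RTCVideoSource directly, bypassing RTCFileVideoCapturer's bundle lookup.
func captureVideoFile(at url: URL, videoSource: RTCVideoSource, videoCapturer: RTCVideoCapturer) throws {
    let asset = AVURLAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                            kCVPixelFormatType_420YpCbCr8BiPlanarFullRange])
    reader.add(output)
    reader.startReading()

    while reader.status == .reading,
          let sampleBuffer = output.copyNextSampleBuffer(),
          let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        let rtcPixelBuffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        let timeStampNs = Int64(CMTimeGetSeconds(pts) * 1_000_000_000)
        let frame = RTCVideoFrame(buffer: rtcPixelBuffer,
                                  rotation: ._0,
                                  timeStampNs: timeStampNs)
        // Deliver the frame as if it came from a live capturer.
        videoSource.capturer(videoCapturer, didCapture: frame)
    }
}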

AudioKit Creating Sinewave Tone When Returning from Background

I'm using AudioKit to run an AKSequencer() that plays both mp3 and wav files using AKMIDISampler(). Everything works great, except when the app has been in the background for 30+ minutes and is then brought back up for use. It then seems to lose all of its audio connections and plays the "missing file" sine-wave tone mentioned in other threads. The app can happily enter the background momentarily, the user can quit, etc. without the tone appearing. It seems to happen only when the app is left in the background for a long time and then brought up again.
I've tried changing the order of AudioKit.start() and file loading, but nothing seems to completely eliminate the issue.
My current workaround is simply to prevent the user's display from timing out, but that does not address many of the use cases in which the issue occurs.
Is there a way to handle whatever error I'm setting up that creates this tone? Here is a representative example of what I'm doing with ~40 audio files.
//viewController
override func viewDidLoad() {
    sequencer.setupSequencer()
}

class SamplerWav {
    let audioWav = AKMIDISampler()

    func loadWavFile() {
        try? audioWav.loadWav("some_wav_audio_file")
    }
}

class SamplerMp3 {
    let audioMp3 = AKMIDISampler()
    let audioMp3_akAudioFile = try! AKAudioFile(readFileName: "some_other_audio_file.mp3")

    func loadMp3File() {
        try? audioMp3.loadAudioFile(audioMp3_akAudioFile)
    }
}

class Sequencer {
    let sequencer = AKSequencer()
    let mixer = AKMixer()
    let subMix = AKMixer()
    let samplerWav = SamplerWav()
    let samplerMp3 = SamplerMp3()
    var callbackTrack: AKMusicTrack!
    let callbackInstr = AKMIDICallbackInstrument()

    func setupSequencer() {
        AudioKit.output = mixer
        try! AudioKit.start()
        callbackTrack = sequencer.newTrack()
        callbackTrack?.setMIDIOutput(callbackInstr.midiIn)
        samplerWav.loadWavFile()
        samplerMp3.loadMp3File()
        samplerWav.audioWav >>> subMix
        samplerMp3.audioMp3 >>> subMix
        subMix >>> mixer
    }

    // Typically run from a callback track
    func playbackSomeSound() {
        try? samplerWav.audioWav.play(noteNumber: 60, velocity: 100, channel: 1)
    }
}
Thanks! I'm a big fan of AudioKit.

After some trial and error, here's a workflow that seems to address the issue in my case (a sketch of the lifecycle wiring is shown below):
- create my callback track(s) once, from viewDidLoad
- stop AudioKit and call .detach() on all my AKMIDISampler tracks and any routing in willResignActive
- start AudioKit again, then reload and reroute all of the audio files/tracks from didBecomeActive
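A minimal sketch of that lifecycle wiring, assuming AudioKit 4; teardownAudio() and rebuildAudio() are hypothetical helpers wrapping the detach and reload/reroute steps described above:

import UIKit
import AudioKit

override func viewDidLoad() {
    super.viewDidLoad()
    sequencer.setupSequencer()          // callback track(s) created once here
    let center = NotificationCenter.default
    center.addObserver(self, selector: #selector(appWillResignActive),
                       name: UIApplication.willResignActiveNotification, object: nil)
    center.addObserver(self, selector: #selector(appDidBecomeActive),
                       name: UIApplication.didBecomeActiveNotification, object: nil)
}

@objc func appWillResignActive() {
    try? AudioKit.stop()
    sequencer.teardownAudio()           // hypothetical: .detach() samplers and routing
}

@objc func appDidBecomeActive() {
    try? AudioKit.start()
    sequencer.rebuildAudio()            // hypothetical: reload files and re-route >>> mixers
}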

How to get the audio to flow as expected when subclassing SPTCoreAudioController

I am attempting to override Spotify's SPTCoreAudioController in order to get at the audio buffer so I can do some processing. I was able to subclass it and override the correct function. As a baseline, I attempted to pass the audioFrames and frameCount straight back to the audio pipeline by calling super.attempt...(). The audio flows as expected, but it speeds up after the first few seconds. I was expecting it simply to pass through and play back normally. Can anyone explain why this is happening and/or point me to what I need to learn in order to work with the audio frames that are passed to me?
Here is the code:
class CoreAudioController: SPTCoreAudioController {
    override func attempt(toDeliverAudioFrames audioFrames: UnsafeRawPointer!, ofCount frameCount: Int, streamDescription audioDescription: AudioStreamBasicDescription) -> Int {
        print("attempt to deliver audio frames")
        super.attempt(toDeliverAudioFrames: audioFrames, ofCount: frameCount, streamDescription: audioDescription)
        return frameCount
    }
}
Here is where I pass the custom controller above to the Spotify AudioStreamingController:
func initializePlayer(authSession: SPTSession) {
    if self.player == nil {
        self.audioController = CoreAudioController()
        self.player = SPTAudioStreamingController.sharedInstance()
        self.player!.playbackDelegate = self
        self.player!.delegate = self
        try! self.player?.start(withClientId: auth?.clientID, audioController: self.audioController, allowCaching: false)
        //try! player?.start(withClientId: auth?.clientID)
        self.player!.login(withAccessToken: authSession.accessToken)
    }
}
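One possible explanation (not verified): the return value of attempt(toDeliverAudioFrames:...) tells the SDK how many frames were actually accepted, and super may return fewer than frameCount when its buffer is full, so always returning frameCount could make playback run ahead. Below is a sketch of the override that preserves super's return value and reads the samples, assuming the AudioStreamBasicDescription describes interleaved 16-bit PCM (check audioDescription before relying on that):

class CoreAudioController: SPTCoreAudioController {
    override func attempt(toDeliverAudioFrames audioFrames: UnsafeRawPointer!,
                          ofCount frameCount: Int,
                          streamDescription audioDescription: AudioStreamBasicDescription) -> Int {
        // Interpret the raw pointer as interleaved Int16 samples (assumption).
        let channels = Int(audioDescription.mChannelsPerFrame)
        let sampleCount = frameCount * channels
        let samples = audioFrames.bindMemory(to: Int16.self, capacity: sampleCount)

        // Example processing: a crude peak level, just to show access to the data.
        var peak: UInt16 = 0
        for i in 0..<sampleCount {
            peak = max(peak, samples[i].magnitude)
        }
        print("peak level:", peak)

        // Hand the (unmodified) frames back and report what was actually accepted.
        return super.attempt(toDeliverAudioFrames: audioFrames,
                             ofCount: frameCount,
                             streamDescription: audioDescription)
    }
}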

How to pass audio from iPhone microphone as buffer

I am trying to send audio recorded from the iPhone microphone to an IP camera.
I have an SDK (written in C) to communicate with the camera, and this is the function I need to pass the data to. We are talking about sending real-time audio.
/*
#Name: FosSdk_SendTalkData.
#Description: Send the data of talk.
#param handle: the handle of current connection information.
#param data: The data need to send.
#param len: Len of data.
#return: Please refer to the enum of FOSCMD_RESULT to get more information.
*/
FOSSDK FOSCMD_RESULT FOSAPI FosSdk_SendTalkData(FOSHANDLE handle, char *data, int len);
This is the corresponding Swift signature (I am currently using Swift):
FosSdk_SendTalkData(handle: UInt32, data: UnsafeMutablePointer<Int8>!, len: Int32)
How do I record audio from the iPhone microphone and pass the audio buffer correctly to FosSdk_SendTalkData?
Thanks in advance for any clarification/help.
EDIT
I managed to get the audio buffer using AVFoundation and the AVCaptureAudioDataOutputSampleBufferDelegate method, from which I obtain a sample buffer. However, my implementation does not work. One thing I noticed is that the app crashes when passing the length parameter to FosSdk_SendTalkData. If, for example, I pass Int32(1) as the length, the app does not crash, but passing Int32(1) makes no sense.
This snippet of code helped me a lot, though it is a bit old so I needed to make some edits: http://timestocomemobile.com/2014/10/swift-data-to-c-pointer-and-back-again.html
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    guard let blockBufferRef = CMSampleBufferGetDataBuffer(sampleBuffer) else { return }
    let lengthOfBlock = CMBlockBufferGetDataLength(blockBufferRef)
    guard let data = NSMutableData(length: Int(lengthOfBlock)) else { return }
    CMBlockBufferCopyDataBytes(blockBufferRef, 0, lengthOfBlock, data.mutableBytes)
    // I need to pass the data as UnsafeMutablePointer<Int8>
    let dataUnsafeMutablePointer = data.mutableBytes.assumingMemoryBound(to: Int8.self)
    FosSdk_SendTalkData(CameraConfigurationManager.mhandle, dataUnsafeMutablePointer, Int32(lengthOfBlock))
}
The AVCaptureSession starts correctly and I can see that sample buffers are collected every few milliseconds. I just need to fill the function parameters correctly and make these audio buffers play on the IP camera.
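For what it's worth, here is a sketch of an alternative capture path using AVAudioEngine plus AVAudioConverter. The target format (8 kHz, mono, interleaved 16-bit PCM) is an assumption; the camera SDK documentation should say what FosSdk_SendTalkData really expects, and the handle is passed in exactly as in the question.

import AVFoundation

let engine = AVAudioEngine()

// Sketch: tap the microphone, convert to the (assumed) talk format, and push
// the raw Int8 bytes to FosSdk_SendTalkData.
func startTalk(handle: UInt32) throws {
    let input = engine.inputNode
    let inputFormat = input.outputFormat(forBus: 0)
    guard let talkFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                         sampleRate: 8000,       // assumption: check the SDK docs
                                         channels: 1,
                                         interleaved: true),
          let converter = AVAudioConverter(from: inputFormat, to: talkFormat) else { return }

    input.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { buffer, _ in
        let ratio = talkFormat.sampleRate / inputFormat.sampleRate
        let capacity = AVAudioFrameCount(Double(buffer.frameLength) * ratio) + 16
        guard let converted = AVAudioPCMBuffer(pcmFormat: talkFormat, frameCapacity: capacity) else { return }

        // Feed the tapped buffer to the converter exactly once per callback.
        var consumed = false
        var error: NSError?
        converter.convert(to: converted, error: &error) { _, outStatus in
            if consumed {
                outStatus.pointee = .noDataNow
                return nil
            }
            consumed = true
            outStatus.pointee = .haveData
            return buffer
        }
        guard error == nil, let channel = converted.int16ChannelData else { return }

        let byteCount = Int(converted.frameLength) * MemoryLayout<Int16>.size
        channel[0].withMemoryRebound(to: Int8.self, capacity: byteCount) { bytes in
            _ = FosSdk_SendTalkData(handle, bytes, Int32(byteCount))
        }
    }
    try engine.start()
}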