How to stream MP3 URLs (iOS) in Swift?

Currently the code below can play local MP3 files. For example, if I call
audio.scheduleFile(NSBundle.mainBundle().URLForResource("Moon River", withExtension: "mp3")!)
it plays the local file correctly. Now I want to be able to stream non-local (remote) URLs.
What do I need to do in order to stream MP3 URLs?
class Audio: NSObject {
    var graph: AUGraph
    var filePlayerAU: AudioUnit
    var filePlayerNode: AUNode
    var outputAU: AudioUnit
    var fileID: AudioFileID
    var currentFrame: Int64

    override init() {
        graph = AUGraph()
        filePlayerAU = AudioUnit()
        filePlayerNode = AUNode()
        outputAU = AudioUnit()
        fileID = AudioFileID()
        currentFrame = 0
        super.init()

        NewAUGraph(&graph)

        // Add file player node
        var cd = AudioComponentDescription(componentType: OSType(kAudioUnitType_Generator),
                                           componentSubType: OSType(kAudioUnitSubType_AudioFilePlayer),
                                           componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
                                           componentFlags: 0, componentFlagsMask: 0)
        AUGraphAddNode(graph, &cd, &filePlayerNode)

        // Add output node
        var outputNode = AUNode()
        cd.componentType = OSType(kAudioUnitType_Output)
        cd.componentSubType = OSType(kAudioUnitSubType_RemoteIO)
        AUGraphAddNode(graph, &cd, &outputNode)

        // Graph must be opened before we can get node info!
        AUGraphOpen(graph)
        AUGraphNodeInfo(graph, filePlayerNode, nil, &filePlayerAU)
        AUGraphNodeInfo(graph, outputNode, nil, &outputAU)

        AUGraphConnectNodeInput(graph, filePlayerNode, 0, outputNode, 0)
        AUGraphInitialize(graph)

        registerCallbackForAU(filePlayerAU, nil)
    }

    func scheduleFile(url: NSURL) {
        AudioFileOpenURL(url, 1, 0, &fileID)

        // Step 1: schedule the file(s)
        // kAudioUnitProperty_ScheduledFileIDs takes an array of AudioFileIDs
        var filesToSchedule = [fileID]
        AudioUnitSetProperty(filePlayerAU,
                             AudioUnitPropertyID(kAudioUnitProperty_ScheduledFileIDs),
                             AudioUnitScope(kAudioUnitScope_Global), 0, filesToSchedule,
                             UInt32(sizeof(AudioFileID)))
    }

    func scheduleRegion() {
        // Step 2: Schedule the regions of the file(s) to play
        // Swift forces us to fill out the structs completely, even if they are not used
        let smpteTime = SMPTETime(mSubframes: 0, mSubframeDivisor: 0,
                                  mCounter: 0, mType: 0, mFlags: 0,
                                  mHours: 0, mMinutes: 0, mSeconds: 0, mFrames: 0)
        var timeStamp = AudioTimeStamp(mSampleTime: 0, mHostTime: 0, mRateScalar: 0,
                                       mWordClockTime: 0, mSMPTETime: smpteTime,
                                       mFlags: UInt32(kAudioTimeStampSampleTimeValid), mReserved: 0)
        var region = ScheduledAudioFileRegion()
        region.mTimeStamp = timeStamp
        region.mCompletionProc = nil
        region.mCompletionProcUserData = nil
        region.mAudioFile = fileID
        region.mLoopCount = 0
        region.mStartFrame = currentFrame
        region.mFramesToPlay = UInt32.max

        AudioUnitSetProperty(filePlayerAU,
                             AudioUnitPropertyID(kAudioUnitProperty_ScheduledFileRegion),
                             AudioUnitScope(kAudioUnitScope_Global), 0, &region,
                             UInt32(sizeof(ScheduledAudioFileRegion)))

        // Step 3: Prime the file player
        var primeFrames: UInt32 = 0
        AudioUnitSetProperty(filePlayerAU,
                             AudioUnitPropertyID(kAudioUnitProperty_ScheduledFilePrime),
                             AudioUnitScope(kAudioUnitScope_Global), 0, &primeFrames,
                             UInt32(sizeof(UInt32)))

        // Step 4: Schedule the start time (-1 = now)
        timeStamp.mSampleTime = -1
        AudioUnitSetProperty(filePlayerAU,
                             AudioUnitPropertyID(kAudioUnitProperty_ScheduleStartTimeStamp),
                             AudioUnitScope(kAudioUnitScope_Global), 0, &timeStamp,
                             UInt32(sizeof(AudioTimeStamp)))
    }
}

If you are looking to play remote MP3 files from a server, I would look into using AVPlayer:
https://developer.apple.com/documentation/avfoundation/avplayer
Here is a snippet that might nudge you in the right direction. Of course, this is just a very basic example.
func playRemoteFile() {
    let fileUrl = "http://www.foo.com/bar.mp3"
    let url = NSURL(string: fileUrl)!
    let myPlayer = AVPlayer(URL: url)
    myPlayer.play()
}
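One caveat: because myPlayer above is a local variable, the AVPlayer can be deallocated as soon as playRemoteFile() returns, which stops playback, so keep a strong reference to it. A minimal sketch using the current URL/AVPlayer(url:) API; the class name MyAudioStreamer is just an illustration:
import AVFoundation

final class MyAudioStreamer {
    // Keep a strong reference so the player is not deallocated while playing.
    private var player: AVPlayer?

    func playRemoteFile() {
        // Same example URL as in the snippet above.
        guard let url = URL(string: "http://www.foo.com/bar.mp3") else { return }
        player = AVPlayer(url: url)
        player?.play()
    }
}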

Related

Is there a way to playback a sequence of notes using noteon/off?

I am making an app that requires playback of a sequence of musical notes. However, using the built-in MIDI support in Swift yielded disappointing results: for some reason the MIDIChannelMessages are being ignored and I am unable to set an instrument. The code for this attempt is provided below:
func playMusic() {
    var sequence: MusicSequence?
    var musicSequence = NewMusicSequence(&sequence)
    var track: MusicTrack?
    var musicTrack = MusicSequenceNewTrack(sequence!, &track)

    // Adding notes
    var time = MusicTimeStamp(0.0)
    for notee in TestOne().notes {
        var number = freqsScale[notee.frequency] ?? 0
        print(number)
        print(notee.distance)
        number += 11
        var note = MIDINoteMessage(channel: 1,
                                   note: UInt8(number),
                                   velocity: 64,
                                   releaseVelocity: 0,
                                   duration: notee.distance)
        musicTrack = MusicTrackNewMIDINoteEvent(track!, time, &note)
        time += Double(notee.distance)
    }

    var inMessage = MIDIChannelMessage(status: 0xB0, data1: 120, data2: 0, reserved: 0)
    musicTrack = MusicTrackNewMIDIChannelEvent(track!, 0, &inMessage)
    var inMessage1 = MIDIChannelMessage(status: 0xC0, data1: 48, data2: 0, reserved: 0)
    musicTrack = MusicTrackNewMIDIChannelEvent(track!, 1, &inMessage1)

    // Creating a player
    var musicPlayer: MusicPlayer? = nil
    var player = NewMusicPlayer(&musicPlayer)
    player = MusicPlayerSetSequence(musicPlayer!, sequence)
    player = MusicPlayerStart(musicPlayer!)
}
Another way to approach playback is to use note on/off in AudioKit, but I am not sure whether AudioKit has a way to play back a sequence of notes using those functions.
Is there a function that I have been unable to find?
Or, what is wrong with my current solution that the messages are not being sent?
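A note on the code above: the MIDINoteMessage events are created on channel 1, while the two MIDIChannelMessage status bytes (0xB0, 0xC0) address channel 0 (the low nibble of a channel-voice status byte is the channel), and the program change is added at timestamp 1, after the first notes have already been scheduled at timestamp 0. A minimal sketch of selecting an instrument before any note plays; the helper name setInstrument is purely illustrative, assuming the notes stay on channel 1:
import AudioToolbox

// Hypothetical helper: put the program change (instrument select) on the same
// channel as the notes, at timestamp 0, so it takes effect before the first note.
func setInstrument(_ program: UInt8, channel: UInt8, on track: MusicTrack) {
    // Status byte: high nibble 0xC = program change, low nibble = MIDI channel.
    var programChange = MIDIChannelMessage(status: 0xC0 | channel,
                                           data1: program,
                                           data2: 0,
                                           reserved: 0)
    MusicTrackNewMIDIChannelEvent(track, 0, &programChange)
}
In playMusic() this would be called as setInstrument(48, channel: 1, on: track!) before the note-adding loop; the same channel-nibble rule applies to the controller message (0xB1 for channel 1).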

Trying to add an AVAudioMixerNode with a custom format, but I keep crashing on installTap

I have a very simple app that taps the microphone and records the audio into multiple files. Working code is at the very bottom.
However, the files are large, and I would like to change the sample rate of the microphone buffer, which on my current Mac is 44100 Hz.
If I understand correctly:
The format of the incoming audio can differ depending on the hardware (e.g. a Bluetooth headset).
I can attach mixer nodes to the AVAudioEngine and change the audio format specific to a node.
So I make two changes:
After starting the AVAudioEngine, I add:
audioEngine.attach(downMixer)
audioEngine.connect(node, to: downMixer, format: format16KHzMono)
I change the installTap call to tap downMixer and use the lower-rate format:
downMixer.installTap(onBus: 0, bufferSize: 8192, format: format16KHzMono, block:
However, the error I get on .installTap is:
required condition is false: NULL != engine
The only two things I have noticed are:
My user-defined format16KHzMono has only 7 key values rather than 8; the missing key is AVChannelLayoutKey. However, the crash occurs even if I switch to the old format.
The line audioEngine.attach(downMixer) gives me nine "throwing -10878" errors. According to other posts I have found, these can supposedly be ignored.
I can only assume I am connecting the mixer incorrectly, or at the wrong time, etc. If anyone can figure out what I am doing wrong, I would appreciate it.
class MyAudio {

    let audioEngine = AVAudioEngine()
    var audioFile: AVAudioFile?
    var node: AVAudioInputNode
    var recordingFormat: AVAudioFormat
    var downMixer = AVAudioMixerNode()
    let format16KHzMono = AVAudioFormat.init(commonFormat: .pcmFormatInt16, sampleRate: 16000, channels: 1, interleaved: true)

    init() {
        node = audioEngine.inputNode
        recordingFormat = node.inputFormat(forBus: 0)
        let format16KHzMono = AVAudioFormat.init(commonFormat: .pcmFormatInt16, sampleRate: 16000, channels: 1, interleaved: true)
    }

    func startRecording() {
        audioBuffs = []
        x = -1
        node.installTap(onBus: 0, bufferSize: 8192, format: recordingFormat, block: {
            [self]
            (buffer, _) in
            x += 1
            audioBuffs.append(buffer)
            if x >= 100 {
                var cumSilence = 0
                var topRange = 0
                audioFile = makeFile(format: recordingFormat, index: fileCount)
                fileCount += 1
                for i in 50...100 {
                    let curVol = getVolume(from: audioBuffs[i], bufferSize: audioBuffs[i].frameLength)
                    if curVol < voxGate {
                        cumSilence += 1
                    } else if curVol >= voxGate && cumSilence < 4 {
                        cumSilence = 0
                    } else if curVol >= voxGate && cumSilence > 4 {
                        topRange = i
                        break
                    }
                }
                for i in 0...(topRange - Int(cumSilence/2)) {
                    do {
                        try audioFile!.write(from: audioBuffs[i])
                    } catch {
                        mainView?.setLabelText(tag: 4, text: "write error")
                        stopRecording()
                    }
                }
                for _ in 0...(topRange - Int(cumSilence/2)) {
                    audioBuffs.remove(at: 0)
                }
                x = 0
            }
        })
        audioEngine.prepare()
        do {
            try audioEngine.start()
        } catch let error { print("oh catch \(error)") }
    }
}
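For what it's worth, the "required condition is false: NULL != engine" exception is typically raised when installTap is called on a node that is not attached to the engine, so the attach/connect/tap order relative to starting the engine matters. Below is a minimal sketch of one ordering that avoids it, assuming a 16 kHz mono Float32 tap format (mixer nodes process float internally); the class name DownsampledRecorder and the zero-output-volume trick are illustrative, not taken from the question:
import AVFoundation

final class DownsampledRecorder {
    private let audioEngine = AVAudioEngine()
    private let downMixer = AVAudioMixerNode()
    // Assumed target format: 16 kHz, mono, Float32.
    private let format16KHzMono = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                                sampleRate: 16000,
                                                channels: 1,
                                                interleaved: false)

    func start() throws {
        let input = audioEngine.inputNode
        let inputFormat = input.outputFormat(forBus: 0)

        // 1. Attach and connect everything *before* starting the engine.
        audioEngine.attach(downMixer)
        audioEngine.connect(input, to: downMixer, format: inputFormat)
        // The mixer converts to the format of its output connection, i.e. downsamples here.
        audioEngine.connect(downMixer, to: audioEngine.mainMixerNode, format: format16KHzMono)
        downMixer.outputVolume = 0 // avoid monitoring the mic through the speakers

        // 2. Tap the mixer with the same reduced format.
        downMixer.installTap(onBus: 0, bufferSize: 8192, format: format16KHzMono) { buffer, _ in
            // buffer is now 16 kHz mono; hand it to the file-writing code.
        }

        // 3. Only now prepare and start.
        audioEngine.prepare()
        try audioEngine.start()
    }
}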

Change AudioQueueBuffer's mAudioData

I need to set the AudioQueueBufferRef's mAudioData. I tried copyMemory:
inBuffer.pointee.copyMemory(from: lastItemOfArray, byteCount: byteCount) // byteCount is 512
but it doesn't work.
The AudioQueueNewOutput() queue is properly set up for the Int16 PCM format.
Here is my code:
class CustomObject {
    var pcmInt16DataArray = [UnsafeMutableRawPointer]() // this contains pcmInt16 data
}

let callback: AudioQueueOutputCallback = { (
    inUserData: UnsafeMutableRawPointer?,
    inAQ: AudioQueueRef,
    inBuffer: AudioQueueBufferRef) in

    guard let aqp: CustomObject = inUserData?.bindMemory(to: CustomObject.self, capacity: 1).pointee else { return }
    var numBytes: UInt32 = inBuffer.pointee.mAudioDataBytesCapacity

    /// Set inBuffer.pointee.mAudioData to pcmInt16DataArray.popLast()
    /// How can I set the mAudioData here??

    inBuffer.pointee.mAudioDataByteSize = numBytes
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil)
}
From apple doc: https://developer.apple.com/documentation/audiotoolbox/audioqueuebuffer?language=objc
mAudioData:
The audio data owned the audio queue buffer. The buffer address cannot be changed.
So I guess the solution would be to write new values to the same address.
Does anybody know how to do that?
UPDATE:
The incoming audio is a PCM signal (little-endian) sampled at 48 kHz. Here are my settings:
var dataFormat = AudioStreamBasicDescription()
dataFormat.mSampleRate = 48000;
dataFormat.mFormatID = kAudioFormatLinearPCM
dataFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsNonInterleaved;
dataFormat.mChannelsPerFrame = 1
dataFormat.mFramesPerPacket = 1
dataFormat.mBitsPerChannel = 16
dataFormat.mBytesPerFrame = 2
dataFormat.mBytesPerPacket = 2
And I am collecting the incoming data to
var pcmData = [UnsafeMutableRawPointer]()
You're close!
Try this:
inBuffer.pointee.mAudioData.copyMemory(from: lastItemOfArray, byteCount: Int(numBytes))
or this:
memcpy(inBuffer.pointee.mAudioData, lastItemOfArray, Int(numBytes))
Audio Queue Services was tough enough to work with when it was pure C. Now that we have to do so much bridging to get the API to work with Swift, it's a real pain. If you have the option, try out AVAudioEngine.
A few other things to check:
Make sure your AudioQueue has the same format that you've defined in your AudioStreamBasicDescription.
var queue: AudioQueueRef?
// assumes userData has already been initialized and configured
AudioQueueNewOutput(&dataFormat, callBack, &userData, nil, nil, 0, &queue)
Confirm you have allocated and primed the queue's buffers.
let numBuffers = 3
// using forced optionals here for brevity
for _ in 0..<numBuffers {
    var buffer: AudioQueueBufferRef?
    if AudioQueueAllocateBuffer(queue!, userData.bufferByteSize, &buffer) == noErr {
        userData.mBuffers.append(buffer!)
        callBack(inUserData: &userData, inAQ: queue!, inBuffer: buffer!)
    }
}
Consider making your callback a function.
func callBack(inUserData: UnsafeMutableRawPointer?, inAQ: AudioQueueRef, inBuffer: AudioQueueBufferRef) {
    let numBytes: UInt32 = inBuffer.pointee.mAudioDataBytesCapacity
    memcpy(inBuffer.pointee.mAudioData, pcmData, Int(numBytes))
    inBuffer.pointee.mAudioDataByteSize = numBytes
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil)
}
Also, see if you can get some basic PCM data to play through your audio queue before attempting to bring in the server side data.
var pcmData: [Int16] = []
for _ in 0..<frameCount {
    let element = Int16.random(in: Int16.min...Int16.max) // noise
    pcmData.append(element)
}
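Once the buffers are allocated and primed, playback is started with AudioQueueStart; a minimal sketch, reusing the queue variable from the snippet above (the cleanup calls are only there for completeness):
// Start the queue; a nil start time means "as soon as possible".
AudioQueueStart(queue!, nil)

// ... and when playback is finished:
AudioQueueStop(queue!, true)      // true = stop immediately rather than draining
AudioQueueDispose(queue!, true)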

Get all sound frequencies of a WAV-file using Swift and AVFoundation

I would like to capture all frequencies within given time spans in a WAV file. The intent is to do some audio analysis in a later step. As a test, I've used the application SoX to generate a one-second WAV file containing only a single tone at 13000 Hz. I want to read the file and find that frequency.
I'm using AVFoundation (which is important) to read the file. Since the input data is in PCM, I need to use an FFT to get the actual frequencies, which I do using the Accelerate framework. However, I don't get the expected result (13000 Hz), but rather a lot of values I don't understand. I'm new to audio development, so any hint about where my code is failing is appreciated. The code includes a few comments where the issue occurs.
Thanks in advance!
Code:
import AVFoundation
import Accelerate
class Analyzer {
    // This function is implemented using the code from the following tutorial:
    // https://developer.apple.com/documentation/accelerate/vdsp/fast_fourier_transforms/finding_the_component_frequencies_in_a_composite_sine_wave
    func fftTransform(signal: [Float], n: vDSP_Length) -> [Int] {
        let observed: [DSPComplex] = stride(from: 0, to: Int(n), by: 2).map {
            return DSPComplex(real: signal[$0],
                              imag: signal[$0.advanced(by: 1)])
        }

        let halfN = Int(n / 2)
        var forwardInputReal = [Float](repeating: 0, count: halfN)
        var forwardInputImag = [Float](repeating: 0, count: halfN)
        var forwardInput = DSPSplitComplex(realp: &forwardInputReal,
                                           imagp: &forwardInputImag)
        vDSP_ctoz(observed, 2,
                  &forwardInput, 1,
                  vDSP_Length(halfN))

        let log2n = vDSP_Length(log2(Float(n)))
        guard let fftSetUp = vDSP_create_fftsetup(
            log2n,
            FFTRadix(kFFTRadix2)) else {
                fatalError("Can't create FFT setup.")
        }
        defer {
            vDSP_destroy_fftsetup(fftSetUp)
        }

        var forwardOutputReal = [Float](repeating: 0, count: halfN)
        var forwardOutputImag = [Float](repeating: 0, count: halfN)
        var forwardOutput = DSPSplitComplex(realp: &forwardOutputReal,
                                            imagp: &forwardOutputImag)
        vDSP_fft_zrop(fftSetUp,
                      &forwardInput, 1,
                      &forwardOutput, 1,
                      log2n,
                      FFTDirection(kFFTDirection_Forward))

        let componentFrequencies = forwardOutputImag.enumerated().filter {
            $0.element < -1
        }.map {
            return $0.offset
        }

        return componentFrequencies
    }

    func run() {
        // The frequencies array is an array of frequencies which is then converted to points on sine curves (signal)
        let n = vDSP_Length(4 * 4096)
        let frequencies: [Float] = [1, 5, 25, 30, 75, 100, 300, 500, 512, 1023]

        let tau: Float = .pi * 2
        let signal: [Float] = (0 ... n).map { index in
            frequencies.reduce(0) { accumulator, frequency in
                let normalizedIndex = Float(index) / Float(n)
                return accumulator + sin(normalizedIndex * frequency * tau)
            }
        }

        // These signals are then restored using the fftTransform function above, giving the exact same values as in the "frequencies" variable
        let frequenciesRestored = fftTransform(signal: signal, n: n).map({ Float($0) })
        assert(frequenciesRestored == frequencies)

        // Now I want to do the same thing, but reading the frequencies from a file (which includes a constant tone at 13000 Hz)
        let file = { PATH TO A WAV-FILE WITH A SINGLE TONE AT 13000Hz RUNNING FOR 1 SECOND }
        let asset = AVURLAsset(url: URL(fileURLWithPath: file))
        let track = asset.tracks[0]

        do {
            let reader = try AVAssetReader(asset: asset)

            let sampleRate = 48000.0
            let outputSettingsDict: [String: Any] = [
                AVFormatIDKey: kAudioFormatLinearPCM,
                AVSampleRateKey: Int(sampleRate),
                AVLinearPCMIsNonInterleaved: false,
                AVLinearPCMBitDepthKey: 16,
                AVLinearPCMIsFloatKey: false,
                AVLinearPCMIsBigEndianKey: false,
            ]

            let output = AVAssetReaderTrackOutput(track: track, outputSettings: outputSettingsDict)
            output.alwaysCopiesSampleData = false
            reader.add(output)
            reader.startReading()

            typealias audioBuffertType = Int16

            autoreleasepool {
                while (reader.status == .reading) {
                    if let sampleBuffer = output.copyNextSampleBuffer() {
                        var audioBufferList = AudioBufferList(mNumberBuffers: 1, mBuffers: AudioBuffer(mNumberChannels: 0, mDataByteSize: 0, mData: nil))
                        var blockBuffer: CMBlockBuffer?

                        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
                            sampleBuffer,
                            bufferListSizeNeededOut: nil,
                            bufferListOut: &audioBufferList,
                            bufferListSize: MemoryLayout<AudioBufferList>.size,
                            blockBufferAllocator: nil,
                            blockBufferMemoryAllocator: nil,
                            flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                            blockBufferOut: &blockBuffer
                        )

                        let buffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))

                        for buffer in buffers {
                            let samplesCount = Int(buffer.mDataByteSize) / MemoryLayout<audioBuffertType>.size
                            let samplesPointer = audioBufferList.mBuffers.mData!.bindMemory(to: audioBuffertType.self, capacity: samplesCount)
                            let samples = UnsafeMutableBufferPointer<audioBuffertType>(start: samplesPointer, count: samplesCount)

                            let myValues: [Float] = samples.map {
                                let value = Float($0)
                                return value
                            }

                            // Here I would expect my array to include multiple "13000" which is the frequency of the tone in my file
                            // I'm not sure what the variable 'n' does in this case, but changing it seems to change the result.
                            // The value should be twice as high as the highest measurable frequency (Nyquist frequency) (13000),
                            // but this crashes the application:
                            let mySignals = fftTransform(signal: myValues, n: vDSP_Length(2 * 13000))

                            assert(mySignals[0] == 13000)
                        }
                    }
                }
            }
        }
        catch {
            print("error!")
        }
    }
}
The test clip can be generated using:
sox -G -n -r 48000 ~/outputfile.wav synth 1.0 sine 13000
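A note on interpreting the FFT output, since that is where the code above goes wrong: the tutorial's synthetic example only lines up because its frequencies are expressed in cycles per whole buffer, so bin index and frequency coincide. For real audio, bin k of an n-point FFT corresponds to k * sampleRate / n Hz, and n must be a power of two no larger than the number of samples actually passed in (vDSP_Length(2 * 13000) is neither, which is the likely cause of the crash). A minimal sketch of the conversion, assuming the 48 kHz sample rate used above; binIndicesToHz is just an illustrative helper:
// Convert FFT bin indices to frequencies in Hz.
// Assumes a real FFT of size n over samples taken at sampleRate.
func binIndicesToHz(binIndices: [Int], n: Int, sampleRate: Double) -> [Double] {
    // Each bin spans sampleRate / n Hz; bin k is centred at k * sampleRate / n.
    let hzPerBin = sampleRate / Double(n)
    return binIndices.map { Double($0) * hzPerBin }
}

// Example: with n = 4096 and a 48 kHz file, a 13000 Hz tone
// should peak around bin 13000 / (48000 / 4096) ≈ 1109.
let peaks = binIndicesToHz(binIndices: [1109], n: 4096, sampleRate: 48000)
// peaks ≈ [12996.1]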

Access element given structure UnsafeMutablePointer

In C, I have the following code to allocate an AudioBufferList with the appropriate size and then populate it with the relevant data.
AudioObjectPropertyScope scope = mIsInput ? kAudioDevicePropertyScopeInput : kAudioDevicePropertyScopeOutput;
AudioObjectPropertyAddress address = { kAudioDevicePropertyStreamConfiguration, scope, 0 };
UInt32 propertySize;
__Verify_noErr(
AudioObjectGetPropertyDataSize(mID, &address, 0, NULL, &propertySize)
);
AudioBufferList *bufferList = (AudioBufferList *) malloc(propertySize);
__Verify_noErr(
AudioObjectGetPropertyData(mID, &address, 0, NULL, &propertySize, bufferList)
);
Then, I can access the struct elements:
UInt32 result { 0 };
for (UInt32 i = 0; i < bufferList->mNumberBuffers; ++i)
{
    result += bufferList->mBuffers[i].mNumberChannels;
}
free(bufferList);
How can I replicate this behavior in Swift, given the fact that I use the same framework, i.e. AudioToolbox?
I have tried the following but I can't access the mNumberBuffers
let scope: AudioObjectPropertyScope = scope ? kAudioDevicePropertyScopeInput : kAudioDevicePropertyScopeOutput
var address: AudioObjectPropertyAddress = AudioObjectPropertyAddress(mSelector: kAudioDevicePropertyStreamConfiguration, mScope: scope, mElement: 0)
var size: UInt32 = 0
CheckError(
AudioObjectGetPropertyDataSize(mID, &address, 0, nil, &size),
"Couldn't get stream configuration data size."
)
var bufferList = UnsafeMutableRawPointer.allocate(bytes: Int(size), alignedTo: MemoryLayout<AudioBufferList>.alignment).assumingMemoryBound(to: AudioBufferList.self)
CheckError(
AudioObjectGetPropertyData(mID, &address, 0, nil, &size, bufferList),
"Couldn't get device's stream configuration"
)
You can create an AudioBufferList like this:
import AudioUnit
import AVFoundation
var myBufferList = AudioBufferList(
    mNumberBuffers: 2,
    mBuffers: AudioBuffer(
        mNumberChannels: UInt32(2),
        mDataByteSize: 2048,
        mData: nil))
When handed a bufferList with an unknown number of buffers, you can get the number of buffers and the sample data like this:
let myBufferListPtr = UnsafeMutableAudioBufferListPointer(&myBufferList)
let numBuffers = myBufferListPtr.count
if numBuffers > 0 {
    let buffer: AudioBuffer = myBufferListPtr[0]
    let bufferDataPointer = UnsafeMutableRawPointer(buffer.mData)
    if let dataPtr = bufferDataPointer {
        dataPtr.assumingMemoryBound(to: Float.self)[ i ] = x
        ...
The rest of my source code example is on GitHub: https://gist.github.com/hotpaw2/ba815fc23b5d642705f2b1dedfaf0107
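To replicate the channel-counting loop from the C version on a buffer list obtained from AudioObjectGetPropertyData, the same wrapper type can be pointed at that memory. A minimal sketch; totalChannels is just an illustrative helper name:
import AudioToolbox

// Hypothetical helper: given the AudioBufferList memory filled in by
// AudioObjectGetPropertyData (as in the question), sum the channel counts,
// mirroring the C loop above.
func totalChannels(in listPointer: UnsafeMutablePointer<AudioBufferList>) -> UInt32 {
    let bufferList = UnsafeMutableAudioBufferListPointer(listPointer)
    var result: UInt32 = 0
    for buffer in bufferList {   // the wrapper is a Collection of AudioBuffer
        result += buffer.mNumberChannels
    }
    return result
}
With the bufferList pointer from the question (already bound to AudioBufferList), this would be called as totalChannels(in: bufferList).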