AudioKit - How do you tap an AKMicrophone's data to an array of doubles? - swift

I need to get the data from AKMicrophone in raw form so I can make a custom plot. All of the examples from AudioKit use their built-in plots, but I need to use a plot I made. The input my plot expects is an array of Doubles, but I'm not very worried about the type since I can change that. I just can't get a tap that accesses the data working correctly. I have already looked at these:
https://groups.google.com/forum/#!searchin/audiokit/tap%7Csort:date/audiokit/V16xV7zZzPM/wuJjmhG8BwAJ
AudioKit - How to get Real Time floatChannelData from Microphone?
but those answers really just point back to the AudioKit examples, which aren't helpful for what I need.
Here is my attempt, which instantly crashes with "required condition is false: [AVAEGraphNode.mm:851:CreateRecordingTap: (nullptr == Tap())]":
mic.avAudioNode.installTap(onBus: 0,
                           bufferSize: AVAudioFrameCount(bufferSize),
                           format: nil) { [weak self] (buffer, _) in
    guard let strongSelf = self else {
        AKLog("Unable to create strong reference to self")
        return
    }
    buffer.frameLength = AVAudioFrameCount(strongSelf.bufferSize)
    let offset = Int(buffer.frameCapacity - buffer.frameLength)
    if let tail = buffer.floatChannelData?[0] {
        print(tail)
    }
}

Related

Cannot Query Feature Attributes from ArcGIS Online Feature Service

I have created a feature service on ArcGIS online which has approximately 2000 features. Each feature has four fields: name, latitude, longitude and a boolean validation field (true/false). Two custom symbols are used - one for validated features and one for non-validated features.
I have successfully connected to the feature service from my native (xcode/swift) iOS application and the features are displayed properly on top of the basemap.
I have implemented a touch delegate and successfully detect when a feature symbol is tapped. The issue I am having is trying to query (read) the "name" field attribute associated with the symbol that was tapped. I have tried using the code below but have not been able to read the attribute:
func geoView(_ geoView: AGSGeoView, didTapAtScreenPoint screenPoint: CGPoint, mapPoint: AGSPoint) {
    if let activeSelectionQuery = activeSelectionQuery {
        activeSelectionQuery.cancel()
    }
    guard let featureLayer = featureLayer else {
        return
    }
    //tolerance level
    let toleranceInPoints: Double = 12
    //use tolerance to compute the envelope for query
    let toleranceInMapUnits = toleranceInPoints * viewMap.unitsPerPoint
    let envelope = AGSEnvelope(xMin: mapPoint.x - toleranceInMapUnits,
                               yMin: mapPoint.y - toleranceInMapUnits,
                               xMax: mapPoint.x + toleranceInMapUnits,
                               yMax: mapPoint.y + toleranceInMapUnits,
                               spatialReference: viewMap.map?.spatialReference)
    //create query parameters object
    let queryParams = AGSQueryParameters()
    queryParams.geometry = envelope
    //run the selection query
    activeSelectionQuery = featureLayer.selectFeatures(withQuery: queryParams, mode: .new) { [weak self] (queryResult: AGSFeatureQueryResult?, error: Error?) in
        if let error = error {
            print("error: ", error)
        }
        if let result = queryResult {
            print("\(result.featureEnumerator().allObjects.count) feature(s) selected")
            print("name: ", result.fields)
        }
    }
}
I am using the ArcGIS iOS 100.6 SDK.
Any help would be appreciated in solving this issue.
The featureLayer selection methods merely update the map view display to visually highlight the features.
From the featureLayer, you should get the featureTable and then call query() on that. Note that there are two methods: a simple query() that gets minimal attributes back, and an override on AGSServiceFeatureTable that lets you specify that you want all fields back. You might need to specify .loadAll on that override to get the name field back. We do it this way to avoid downloading too much information (by default we download enough to symbolize and label the feature).
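For example, something along these lines might work (a rough, untested sketch; it assumes featureLayer.featureTable is an AGSServiceFeatureTable, reuses the queryParams built from the tap envelope above, and assumes the field is literally named "name"):
if let serviceTable = featureLayer.featureTable as? AGSServiceFeatureTable {
    serviceTable.queryFeatures(with: queryParams, queryFeatureFields: .loadAll) { result, error in
        if let error = error {
            print("query error: ", error)
            return
        }
        guard let features = result?.featureEnumerator().allObjects else { return }
        for feature in features {
            // Attribute keys mirror the service's field names; "name" is assumed here.
            print("name: ", feature.attributes["name"] ?? "<missing>")
        }
    }
}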

AudioKit Creating Sinewave Tone When Returning from Background

I'm using AudioKit to run an AKSequencer() that plays both mp3 and wav files using AKMIDISampler(). Everything works great, except when the app has been in the background for 30+ minutes and is then brought back up for use. It seems to then lose all of its audio connections and plays the "missing file" sinewave tone mentioned in other threads. The app can happily enter the background momentarily, the user can quit, etc. without the tone. It seems to only happen when the app is left in the background for a long period of time and then brought up again.
I've tried changing the order of AudioKit.start() and file loading, but nothing seems to completely eliminate the issue.
My current workaround is simply to prevent the user's display from timing out, however that does not address many use-cases of the issue occurring.
Is there a way to handle whatever error I'm setting up that creates this tone? Here is a representative example of what I'm doing with ~40 audio files.
//viewController
override func viewDidLoad() {
    super.viewDidLoad()
    sequencer.setupSequencer()
}

class SamplerWav {
    let audioWav = AKMIDISampler()

    func loadWavFile() {
        try? audioWav.loadWav("some_wav_audio_file")
    }
}

class SamplerMp3 {
    let audioMp3 = AKMIDISampler()
    let audioMp3_akAudioFile = try! AKAudioFile(readFileName: "some_other_audio_file.mp3")

    func loadMp3File() {
        try? audioMp3.loadAudioFile(audioMp3_akAudioFile)
    }
}

class Sequencer {
    let sequencer = AKSequencer()
    let mixer = AKMixer()
    let subMix = AKMixer()
    let samplerWav = SamplerWav()
    let samplerMp3 = SamplerMp3()
    var callbackTrack: AKMusicTrack!
    let callbackInstr = AKMIDICallbackInstrument()

    func setupSequencer() {
        AudioKit.output = mixer
        try! AudioKit.start()
        callbackTrack = sequencer.newTrack()
        callbackTrack?.setMIDIOutput(callbackInstr.midiIn)
        samplerWav.loadWavFile()
        samplerMp3.loadMp3File()
        samplerWav.audioWav >>> subMix
        samplerMp3.audioMp3 >>> subMix
        subMix >>> mixer
    }

    //Typically run from a callback track
    func playbackSomeSound() {
        try? samplerWav.audioWav.play(noteNumber: 60, velocity: 100, channel: 1)
    }
}
Thanks! I'm a big fan of AudioKit.
After some trial and error, here's a workflow that seems to address the issue for my circumstance (a rough sketch follows the list):
- create my callback track(s) once, from viewDidLoad
- stop AudioKit, and call .detach() on all my AKMIDISampler tracks and any routing, in willResignActive
- start AudioKit (again), and reload and reroute all of the audio files/tracks, from didBecomeActive
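Wired into the app lifecycle, that might look roughly like this (an untested sketch: the notification wiring is standard UIKit, but detachAll() and reloadAndReroute() are hypothetical helpers standing in for the detach/reload steps above):
//viewController
override func viewDidLoad() {
    super.viewDidLoad()
    sequencer.setupSequencer()   // callback track(s) are created once, in here

    NotificationCenter.default.addObserver(self, selector: #selector(appWillResignActive),
                                           name: UIApplication.willResignActiveNotification, object: nil)
    NotificationCenter.default.addObserver(self, selector: #selector(appDidBecomeActive),
                                           name: UIApplication.didBecomeActiveNotification, object: nil)
}

@objc func appWillResignActive() {
    try? AudioKit.stop()
    sequencer.detachAll()          // hypothetical helper: calls .detach() on each AKMIDISampler and routing node
}

@objc func appDidBecomeActive() {
    try? AudioKit.start()
    sequencer.reloadAndReroute()   // hypothetical helper: reloads the audio files and rebuilds the >>> routing
}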

Append multiple VNCoreMLModel ARKit and CoreML

I'm a noob and I don't really know how I can append multiple CoreML models to the VNCoreMLRequest.
The code below uses just one model, but I want to also append another model (visionModel2 in the example below). Can anyone help me? Thank you!
private func performVisionRequest(pixelBuffer: CVPixelBuffer) {
    let visionModel = try! VNCoreMLModel(for: self.iFaceModel.model)
    let visionModel2 = try! VNCoreMLModel(for: self.ageModel.model)
    let request = VNCoreMLRequest(model: visionModel) { request, error in
        if error != nil {
            return
        }
        guard let observations = request.results else {
            return
        }
        let observation = observations.first as! VNClassificationObservation
        print("Name \(observation.identifier) and confidence is \(observation.confidence)")
        DispatchQueue.main.async {
            if observation.confidence.isLess(than: 0.04) {
                self.displayPredictions(text: "Not recognized")
                print("Hidden")
            } else {
                self.displayPredictions(text: observation.identifier)
            }
        }
    }
}
To evaluate an image using multiple ML models, you’ll need to perform multiple requests. For example:
let faceModelRequest = VNCoreMLRequest(model: visionModel)
let ageModelRequest = VNCoreMLRequest(model: visionModel2)
let handler = VNImageRequestHandler( /* my image and options */ )
try handler.perform([faceModelRequest, ageModelRequest])
guard let faceResults = faceModelRequest.results as? [VNClassificationObservation],
      let ageResults = ageModelRequest.results as? [VNClassificationObservation]
    else { /* handle errors from each request */ return }
(Yes, you can run Vision requests without a completion handler and then collect the results from multiple requests. Might want to check prefersBackgroundProcessing on the requests and dispatch everything to a background queue yourself, though.)
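For instance, here's a rough sketch of handing that work off to a background queue yourself (the queue choice and error handling are my own additions, not from the original answer):
DispatchQueue.global(qos: .userInitiated).async {
    do {
        // perform(_:) runs both requests synchronously on this background queue
        try handler.perform([faceModelRequest, ageModelRequest])
    } catch {
        print("Vision request failed: \(error)")
        return
    }
    // Hop back to the main queue before touching any UI with the results.
    DispatchQueue.main.async {
        // read faceModelRequest.results / ageModelRequest.results here
    }
}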
After that, you probably want to iterate the results from both requests together. Here’s a handy way you could do that with Swift standard library sequence functions, but it assumes that both models return information about the same faces in the same order:
for (faceObservation, ageObservation) in zip(faceResults, ageResults) {
    print("face \(faceObservation.identifier) confidence \(faceObservation.confidence)")
    print("age \(ageObservation.identifier) confidence \(ageObservation.confidence)")
    // whatever else you want to do with results...
}
Disclaimer: Code written in StackExchange iOS app, not tested. But it’s at least a sketch of what you’re probably looking for — tweak as needed.

How to use AVAudioNodeTapBlock in a tap in AVAudioEngine.

I am trying to install a tap on an AVAudioEngine. I have the current code:
guard let engine = engine, let input = engine.inputNode else {
    print("error!")
    return
}
let format = input.inputFormat(forBus: 0)
let bufferSize = 4096
input.installTap(onBus: 0, bufferSize: AVAudioFrameCount(bufferSize), format: format, block: )
I am unsure on what goes in the block. There isn't much documentation on this. I have found this: https://developer.apple.com/reference/avfoundation/avaudionodetapblock?language=objc
Could someone explain how to use this?
Thanks,
Feras A.
You'd better check the Swift version of the reference, if you want to write it in Swift.
Declaration
typealias AVAudioNodeTapBlock = (AVAudioPCMBuffer, AVAudioTime) -> Void
You need to pass a closure taking two arguments and returning nothing, so you can write it as:
input.installTap(onBus: 0, bufferSize: AVAudioFrameCount(bufferSize), format: format, block: { buffer, when in
    //...
})
The types of the two arguments, buffer and when, are AVAudioPCMBuffer and AVAudioTime respectively.
So, if you want to record the tapped audio into an audio file, you can write something like this:
input.installTap(onBus: 0, bufferSize: AVAudioFrameCount(bufferSize), format: format, block: { buffer, when in
    do {
        try self.audioFile?.write(from: buffer)
    } catch {
        print(error)
    }
})
(Assume audioFile is an instance property of type AVAudioFile?.)
Anyway, you need to know how to use AVAudioPCMBuffer.
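For example, here is a minimal sketch of pulling the samples out of the AVAudioPCMBuffer as an array of Doubles (assuming a non-interleaved Float32 format, which is what inputFormat(forBus: 0) typically gives you; otherwise floatChannelData is nil):
input.installTap(onBus: 0, bufferSize: AVAudioFrameCount(bufferSize), format: format, block: { buffer, when in
    guard let channelData = buffer.floatChannelData else { return }
    let frameCount = Int(buffer.frameLength)
    // Copy channel 0 into a Swift array of Doubles.
    let samples = (0..<frameCount).map { Double(channelData[0][$0]) }
    // samples is now ready for plotting or further processing.
})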
I'm not sure if input.inputFormat(forBus: 0) can be an appropriate format in your case, but that may be another issue.

Unexpectedly unwrapping an optional to find a nil after an API call to Spotify

So I know this may be a bit specific, but I've been staring at my code and am unable to resolve this issue. Basically, I'm making a network call to Spotify to obtain a certain playlist and passing a number that will ultimately determine the number of songs I get back. The code is basically as follows:
// A network call is made just above to return somePlaylist
let playlist = somePlaylist as! SPTPartialPlaylist
var songs: [SPTPartialTrack] = []
// load in playlist to receive back songs
SPTPlaylistSnapshot.playlistWithURI(playlist.uri, session: someSession) { (error: NSError!, data: AnyObject!) in
// cast the data into a correct format
let playlistViewer = data as! SPTPlaylistSnapshot
let playlist = playlistViewer.firstTrackPage
// get the songs
for _ in 1...numberOfSongs {
let random = Int(arc4random_uniform(UInt32(playlist.items.count)))
songs.append(playlist.items[random] as! SPTPartialTrack)
}
}
The problem comes at the portion of code that initializes random. In maybe 1 in 20 calls to this function, for whatever reason, I unwrap a nil value for playlist.items.count and can't seem to figure out why. Maybe it's something I don't understand about API calls or something else I'm failing to see, but I can't seem to make sense of it.
Anyone have any recommendations on addressing this issue or how to go about debugging this?
Ok, after sleeping on it and working on it some more I seem to have resolved the issue. Here's the error handling I implemented into my code.
if let actualPlaylist = playlist, actualItems = actualPlaylist.items {
    if actualItems.count == 0 {
        SongScraper.playlistHasSongs = false
        print("Empty playlist, loading another playlist")
        return
    }
    for _ in 1...numberOfSongs {
        let random = Int(arc4random_uniform(UInt32(actualItems.count)))
        songs.append(actualPlaylist.items[random] as! SPTPartialTrack)
    }
    completionHandler(songs: songs)
}
else {
    print("Returned a nil playlist, loading another playlist")
    SongScraper.playlistHasSongs = false
    return
}