iOS Swift WebRTC insertDtmf issue - swift

I am building an app that works with the Janus gateway via WebSocket and WebRTC. Everything works fine: I can send and receive voice calls successfully, but the insertDtmf method doesn't send my DTMF tones to the other peer.
The same account and the same code work fine on Android.
Here is where I prepare WebRTC:
private func prepareWebRtc(callbacks: PluginHandleWebRTCCallbacksDelegate) {
    if pc != nil {
        if callbacks.getJsep() == nil {
            createSdpInternal(callbacks: callbacks, isOffer: isOffer)
        } else {
            let jsep = callbacks.getJsep()!
            let sdpString: String = jsep["sdp"] as! String
            let type: RTCSdpType = RTCSessionDescription.type(for: jsep["type"] as! String)
            let sdp = RTCSessionDescription(type: type, sdp: sdpString)
            pc.setRemoteDescription(sdp) { (err) in }
        }
    } else {
        trickle = callbacks.getTrickle() ?? false
        streamsDone(webRTCCallbacks: callbacks)
    }
}
private func streamsDone(webRTCCallbacks: PluginHandleWebRTCCallbacksDelegate) {
    let rtcConfig = RTCConfiguration()
    rtcConfig.iceServers = server.iceServers
    rtcConfig.bundlePolicy = RTCBundlePolicy.maxBundle
    rtcConfig.rtcpMuxPolicy = RTCRtcpMuxPolicy.require
    rtcConfig.continualGatheringPolicy = RTCContinualGatheringPolicy.gatherContinually
    rtcConfig.sdpSemantics = .planB
    let source: RTCAudioSource = sessionFactory.audioSource(with: audioConstraints)
    let audioTrack: RTCAudioTrack? = sessionFactory.audioTrack(with: source, trackId: AUDIO_TRACK_ID)
    let stream: RTCMediaStream? = sessionFactory.mediaStream(withStreamId: LOCAL_MEDIA_ID)
    if audioTrack != nil {
        stream!.addAudioTrack(audioTrack!)
        myStream = stream
    }
    if stream != nil {
        onLocalStream(stream: stream!)
    }
    // pc.addTrack(audioTrack, mediaStreamLabels); // (Android version)
    pc = sessionFactory.peerConnection(with: rtcConfig, constraints: audioConstraints, delegate: nil)
    if myStream != nil {
        pc.add(myStream)
    }
    if let obj: [String: Any] = webRTCCallbacks.getJsep() {
        let sdp: String = obj["sdp"] as! String
        let type: RTCSdpType = RTCSessionDescription.type(for: obj["type"] as! String)
        let sessionDescription = RTCSessionDescription(type: type, sdp: sdp)
        print("STREAMS DONE, JSEP NOT NULL")
        // pc.setRemoteDescription(WebRtcObserver(webRTCCallbacks), sessionDescription); // (Android version)
        pc.setRemoteDescription(sessionDescription) { (err) in }
    } else {
        createSdpInternal(callbacks: webRTCCallbacks, isOffer: isOffer)
        print("STREAMS DONE, JSEP NULL")
    }
}
And here is where I try to send DTMF:
public func insertDTMF(_ tone: String) {
    if pc != nil {
        if let dtmfSender = pc.senders.first?.dtmfSender {
            dtmfSender.insertDtmf(tone, duration: 200, interToneGap: 70)
        }
        // Here the timers are in ms
    }
}

In my case, this is how I handled the insert-DTMF functionality.
a - First, filter out the audio RTCRtpSender track:
var audioSender: RTCRtpSender?
for rtpSender in pc.senders {
    if rtpSender.track?.kind == "audio" {
        audioSender = rtpSender
    }
}
b - Then use the same filtered audioSender object to insert the tone, using an OperationQueue:
if let audioSender = audioSender {
    let queue = OperationQueue()
    queue.addOperation({
        audioSender.dtmfSender?.insertDtmf(dtmfTone, duration: TimeInterval(0.1), interToneGap: TimeInterval(0.5))
    })
}
Note: you can modify duration and interToneGap as per your requirement.
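Putting the two steps together, a minimal helper might look like this (a sketch assembled from the snippets above; pc and the tone come from the surrounding code, and the timing values in seconds are illustrative):
func insertDTMF(_ tone: String) {
    // Pick the sender that actually carries the audio track; the first
    // sender in pc.senders is not guaranteed to be the audio one.
    guard let audioSender = pc.senders.first(where: { $0.track?.kind == "audio" }) else { return }
    OperationQueue().addOperation {
        audioSender.dtmfSender?.insertDtmf(tone, duration: 0.1, interToneGap: 0.5)
    }
}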
Hope this solution works for you as well.
The original answer can be found here: https://stackoverflow.com/a/60148372/4515269

Related

Too many open files using NWListener on macOS

I'm trying to create an app that listens for incoming data on a UDP port using NWListener. This works fine until 248 messages have been received, at which point the app crashes with the error message nw_listener_inbox_accept_udp socket() failed [24: Too many open files].
This (I think) relates to the file-descriptor limit, so I tried resetting the NWListener after a safe count of 100 messages, but the problem persists. That approach seems clumsy anyway, and I'm not sure it's even possible from within a receive handler.
How can I truly reset it so that it releases any open files?
Full code below, which is instantiated within a SwiftUI content view using:
.onAppear() {
    udpListener.start(port: self.udpPort)
}
Full class:
import Foundation
import Network

class UdpListener: NSObject, ObservableObject {
    private var listener: NWListener?
    private var port: NWEndpoint.Port?
    @Published var incoming: String = ""
    @Published var messageCount: Int = 0

    func start(port: NWEndpoint.Port) {
        self.port = port
        do {
            let params = NWParameters.udp
            params.allowLocalEndpointReuse = true
            self.listener = try NWListener(using: params, on: port)
            self.listener?.stateUpdateHandler = { newState in
                switch newState {
                case .ready:
                    print("ready")
                default:
                    break
                }
            }
            self.listener?.newConnectionHandler = { newConnection in
                newConnection.stateUpdateHandler = { newState in
                    switch newState {
                    case .ready:
                        self.receive(on: newConnection)
                    default:
                        break
                    }
                }
                newConnection.start(queue: DispatchQueue(label: "new client"))
            }
        } catch {
            print("unable to create listener")
        }
        self.listener?.start(queue: .main)
    }

    func receive(on connection: NWConnection) {
        connection.receiveMessage { (data, context, isComplete, error) in
            if let error = error {
                print(error)
                return
            }
            guard let data = data, !data.isEmpty else {
                print("unable to receive data")
                return
            }
            let date = Date()
            let formatter = DateFormatter()
            formatter.dateFormat = "HH:mm:ss.SSSS"
            DispatchQueue.main.async {
                self.messageCount = self.messageCount + 1
                self.incoming = formatter.string(from: date) + " " + String(decoding: data, as: UTF8.self) + "\n" + self.incoming
            }
            if self.messageCount == 100 {
                print("Resetting")
                self.listener?.stateUpdateHandler = nil
                self.listener?.newConnectionHandler = nil
                self.listener?.cancel()
                self.listener = nil
                self.start(port: self.port!)
            }
        }
    }
}
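A plausible explanation, offered as an assumption rather than a confirmed fix: Network.framework opens a separate NWConnection (and socket) for each inbound UDP flow, and the handler above never cancels those connections, so their file descriptors accumulate until the process limit is hit. Cancelling each connection once its message has been handled should release the descriptors without resetting the listener. A minimal sketch of the changed handler:
self.listener?.newConnectionHandler = { newConnection in
    newConnection.stateUpdateHandler = { newState in
        if case .ready = newState {
            newConnection.receiveMessage { data, _, _, error in
                defer { newConnection.cancel() } // frees this flow's file descriptor
                guard error == nil, let data = data, !data.isEmpty else { return }
                // ...process `data` as in receive(on:) above...
            }
        }
    }
    newConnection.start(queue: DispatchQueue(label: "new client"))
}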

List all available audio devices

I want to list all available audio devices in Swift to provide a selection for input and output. My application should listen on one audio channel and "write" to another. I do not want the system default!
let devices = AVCaptureDevice.devices(for: .audio)
print(devices.count)
for device in devices {
    print(device.localizedName)
}
The code lists 0 devices, but I expect at least the internal output.
Some links to CoreAudio, AudioToolbox and AVFoundation that explain the audio source selection would be nice.
Here's some Swift 5 code that will enumerate all the audio devices.
You can use the uid with AVAudioPlayer's currentDevice property to output to a specific device.
import Cocoa
import AVFoundation

class AudioDevice {
    var audioDeviceID: AudioDeviceID

    init(deviceID: AudioDeviceID) {
        self.audioDeviceID = deviceID
    }

    var hasOutput: Bool {
        get {
            var address = AudioObjectPropertyAddress(
                mSelector: AudioObjectPropertySelector(kAudioDevicePropertyStreamConfiguration),
                mScope: AudioObjectPropertyScope(kAudioDevicePropertyScopeOutput),
                mElement: 0)
            var propsize = UInt32(MemoryLayout<CFString?>.size)
            var result: OSStatus = AudioObjectGetPropertyDataSize(self.audioDeviceID, &address, 0, nil, &propsize)
            if result != 0 {
                return false
            }
            let bufferList = UnsafeMutablePointer<AudioBufferList>.allocate(capacity: Int(propsize))
            defer { bufferList.deallocate() } // the original leaked this allocation
            result = AudioObjectGetPropertyData(self.audioDeviceID, &address, 0, nil, &propsize, bufferList)
            if result != 0 {
                return false
            }
            let buffers = UnsafeMutableAudioBufferListPointer(bufferList)
            for bufferNum in 0..<buffers.count {
                if buffers[bufferNum].mNumberChannels > 0 {
                    return true
                }
            }
            return false
        }
    }

    var uid: String? {
        get {
            var address = AudioObjectPropertyAddress(
                mSelector: AudioObjectPropertySelector(kAudioDevicePropertyDeviceUID),
                mScope: AudioObjectPropertyScope(kAudioObjectPropertyScopeGlobal),
                mElement: AudioObjectPropertyElement(kAudioObjectPropertyElementMaster))
            var name: CFString? = nil
            var propsize = UInt32(MemoryLayout<CFString?>.size)
            let result: OSStatus = AudioObjectGetPropertyData(self.audioDeviceID, &address, 0, nil, &propsize, &name)
            if result != 0 {
                return nil
            }
            return name as String?
        }
    }

    var name: String? {
        get {
            var address = AudioObjectPropertyAddress(
                mSelector: AudioObjectPropertySelector(kAudioDevicePropertyDeviceNameCFString),
                mScope: AudioObjectPropertyScope(kAudioObjectPropertyScopeGlobal),
                mElement: AudioObjectPropertyElement(kAudioObjectPropertyElementMaster))
            var name: CFString? = nil
            var propsize = UInt32(MemoryLayout<CFString?>.size)
            let result: OSStatus = AudioObjectGetPropertyData(self.audioDeviceID, &address, 0, nil, &propsize, &name)
            if result != 0 {
                return nil
            }
            return name as String?
        }
    }
}

class AudioDeviceFinder {
    static func findDevices() {
        var propsize: UInt32 = 0
        var address = AudioObjectPropertyAddress(
            mSelector: AudioObjectPropertySelector(kAudioHardwarePropertyDevices),
            mScope: AudioObjectPropertyScope(kAudioObjectPropertyScopeGlobal),
            mElement: AudioObjectPropertyElement(kAudioObjectPropertyElementMaster))
        var result: OSStatus = AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject), &address, UInt32(MemoryLayout<AudioObjectPropertyAddress>.size), nil, &propsize)
        if result != 0 {
            print("Error \(result) from AudioObjectGetPropertyDataSize")
            return
        }
        let numDevices = Int(propsize / UInt32(MemoryLayout<AudioDeviceID>.size))
        var devids = [AudioDeviceID](repeating: AudioDeviceID(), count: numDevices)
        result = AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject), &address, 0, nil, &propsize, &devids)
        if result != 0 {
            print("Error \(result) from AudioObjectGetPropertyData")
            return
        }
        for i in 0..<numDevices {
            let audioDevice = AudioDevice(deviceID: devids[i])
            if audioDevice.hasOutput {
                if let name = audioDevice.name,
                   let uid = audioDevice.uid {
                    print("Found device \"\(name)\", uid=\(uid)")
                }
            }
        }
    }
}
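Usage is a single call; and, as noted above, the printed uid can then be handed to AVAudioPlayer's macOS-only currentDevice property (the file URL and uid below are placeholders):
AudioDeviceFinder.findDevices()

// Route playback to a specific output device (macOS):
let player = try AVAudioPlayer(contentsOf: someAudioFileURL) // someAudioFileURL: your audio file
player.currentDevice = "SomeDeviceUID"                       // a uid printed by findDevices()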
The code you posted works perfectly fine for audio input devices when I paste it into an Xcode Playground.
Note, however, that the AVCaptureDevice API does not list audio output devices, as they are not capture devices but playback devices. If a device supports both input and output, you can still use the device's uniqueID in an output context, for example with AVPlayer's audioOutputDeviceUniqueID.
(Also note that if you want your code to work on iOS as well, devices(for:) has been deprecated since iOS 11 and you should move to AVCaptureDevice.DiscoverySession instead.)
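For completeness, a minimal sketch of that DiscoverySession variant (the available device types differ per platform; .builtInMicrophone is just one example):
import AVFoundation

let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInMicrophone],  // extend with platform-specific types as needed
    mediaType: .audio,
    position: .unspecified)
for device in discovery.devices {
    print("\(device.localizedName) - uniqueID: \(device.uniqueID)")
}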
Regarding your request for additional info on Core Audio and AudioToolbox, this SO question has some pretty comprehensive answers on the matter. The question asks for input devices, but the answers provide enough context to understand the output side as well. There's even an answer with some (dated) Swift code. On a personal note, I have to say that calling the Core Audio API from Swift is often more pain than gain. Because of that, it might be faster, though a bit less safe, to wrap those portions of code in Objective-C or plain C and expose them via the Swift bridging header, if your project allows it.
If you want something like an action sheet and need to switch between audio devices seamlessly, use this code.
Code
import Foundation
import AVFoundation
import UIKit

@objc class AudioDeviceHandler: NSObject {
    @objc static let shared = AudioDeviceHandler()

    /// Present audio device selection alert
    /// - Parameters:
    ///   - presenterViewController: viewController where the alert needs to be presented
    ///   - sourceView: alertController source view in case of iPad
    @objc func presentAudioOutput(_ presenterViewController: UIViewController, _ sourceView: UIView) {
        let speakerTitle = "Speaker"
        let headphoneTitle = "Headphones"
        let deviceTitle = (UIDevice.current.userInterfaceIdiom == .pad) ? "iPad" : "iPhone"
        let cancelTitle = "Cancel"
        var deviceAction = UIAlertAction()
        var headphonesExist = false
        let optionMenu = UIAlertController(title: nil, message: nil, preferredStyle: .actionSheet)
        guard let availableInputs = AVAudioSession.sharedInstance().availableInputs else {
            print("No inputs available")
            return
        }
        for audioPort in availableInputs {
            switch audioPort.portType {
            case .bluetoothA2DP, .bluetoothHFP, .bluetoothLE:
                let bluetoothAction = UIAlertAction(title: audioPort.portName, style: .default) { _ in
                    self.setPreferredInput(port: audioPort)
                }
                if isCurrentOutput(portType: audioPort.portType) {
                    bluetoothAction.setValue(true, forKey: "checked")
                }
                optionMenu.addAction(bluetoothAction)
            case .builtInMic, .builtInReceiver:
                deviceAction = UIAlertAction(title: deviceTitle, style: .default, handler: { _ in
                    self.setToDevice(port: audioPort)
                })
            case .headphones, .headsetMic:
                headphonesExist = true
                let headphoneAction = UIAlertAction(title: headphoneTitle, style: .default) { _ in
                    self.setPreferredInput(port: audioPort)
                }
                if isCurrentOutput(portType: .headphones) || isCurrentOutput(portType: .headsetMic) {
                    headphoneAction.setValue(true, forKey: "checked")
                }
                optionMenu.addAction(headphoneAction)
            case .carAudio:
                let carAction = UIAlertAction(title: audioPort.portName, style: .default) { _ in
                    self.setPreferredInput(port: audioPort)
                }
                if isCurrentOutput(portType: audioPort.portType) {
                    carAction.setValue(true, forKey: "checked")
                }
                optionMenu.addAction(carAction)
            default:
                break
            }
        }
        // device actions only required if no headphone available
        if !headphonesExist {
            if isCurrentOutput(portType: .builtInReceiver) ||
                isCurrentOutput(portType: .builtInMic) {
                deviceAction.setValue(true, forKey: "checked")
            }
            optionMenu.addAction(deviceAction)
        }
        // configure speaker action
        let speakerAction = UIAlertAction(title: speakerTitle, style: .default) { _ in
            self.setOutputToSpeaker()
        }
        if isCurrentOutput(portType: .builtInSpeaker) {
            speakerAction.setValue(true, forKey: "checked")
        }
        optionMenu.addAction(speakerAction)
        // configure cancel action
        let cancelAction = UIAlertAction(title: cancelTitle, style: .cancel)
        optionMenu.addAction(cancelAction)
        optionMenu.modalPresentationStyle = .popover
        if let presenter = optionMenu.popoverPresentationController {
            presenter.sourceView = sourceView
            presenter.sourceRect = sourceView.bounds
        }
        presenterViewController.present(optionMenu, animated: true, completion: nil)
        // auto dismiss after 5 seconds
        DispatchQueue.main.asyncAfter(deadline: .now() + 5.0) {
            optionMenu.dismiss(animated: true, completion: nil)
        }
    }

    @objc func setOutputToSpeaker() {
        do {
            try AVAudioSession.sharedInstance().overrideOutputAudioPort(AVAudioSession.PortOverride.speaker)
        } catch let error as NSError {
            print("audioSession error turning on speaker: \(error.localizedDescription)")
        }
    }

    fileprivate func setPreferredInput(port: AVAudioSessionPortDescription) {
        do {
            try AVAudioSession.sharedInstance().setPreferredInput(port)
        } catch let error as NSError {
            print("audioSession error changing to input: \(port.portName) with error: \(error.localizedDescription)")
        }
    }

    fileprivate func setToDevice(port: AVAudioSessionPortDescription) {
        do {
            // remove speaker if needed
            try AVAudioSession.sharedInstance().overrideOutputAudioPort(AVAudioSession.PortOverride.none)
            // set new input
            try AVAudioSession.sharedInstance().setPreferredInput(port)
        } catch let error as NSError {
            print("audioSession error changing to input: \(AVAudioSession.PortOverride.none.rawValue) with error: \(error.localizedDescription)")
        }
    }

    @objc func isCurrentOutput(portType: AVAudioSession.Port) -> Bool {
        AVAudioSession.sharedInstance().currentRoute.outputs.contains(where: { $0.portType == portType })
    }
}
How to use
class ViewController: UIViewController {
    @IBOutlet weak var audioButton: UIButton!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
    }

    @IBAction func selectAudio(_ sender: Any) {
        // present audio device selection action sheet
        AudioDeviceHandler.shared.presentAudioOutput(self, audioButton)
    }
}
It is possible to list input and output devices. This is a simplification of stevex's answer.
For output devices:
if audioDevice.hasOutput {
    if let name = audioDevice.name,
       let uid = audioDevice.uid {
        print("Found device \"\(name)\", uid=\(uid)")
    }
}
For input devices:
if !audioDevice.hasOutput {
    if let name = audioDevice.name,
       let uid = audioDevice.uid {
        print("Found device \"\(name)\", uid=\(uid)")
    }
}
(Notice the ! before audioDevice.hasOutput.)
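A caveat with that test: !hasOutput only means the device reports no output channels, not that it actually has inputs. A more direct check is a hasInput property added to the AudioDevice class above, mirroring hasOutput but querying the input scope; a sketch (my addition, not from the original answer):
var hasInput: Bool {
    var address = AudioObjectPropertyAddress(
        mSelector: AudioObjectPropertySelector(kAudioDevicePropertyStreamConfiguration),
        mScope: AudioObjectPropertyScope(kAudioDevicePropertyScopeInput),
        mElement: 0)
    var propsize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(self.audioDeviceID, &address, 0, nil, &propsize) == 0 else {
        return false
    }
    let bufferList = UnsafeMutablePointer<AudioBufferList>.allocate(capacity: Int(propsize))
    defer { bufferList.deallocate() }
    guard AudioObjectGetPropertyData(self.audioDeviceID, &address, 0, nil, &propsize, bufferList) == 0 else {
        return false
    }
    // A device is an input if any input-scope buffer carries channels
    return UnsafeMutableAudioBufferListPointer(bufferList).contains { $0.mNumberChannels > 0 }
}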

Swift AVFoundation in playground not outputting sound

My Morse code translator will not output the sound as it should. I have tested the speakers and my methods without this function and they work flawlessly, but not in context with the rest of the program. The compiler gives me no errors and the playground does not crash; it just doesn't play sound. Volume and ringer are at full.
func speakTheCode(message: String) {
    var speaker = AVAudioPlayer()
    let longBeep = #fileLiteral(resourceName: "beep_long.mp3")
    let shortBeep = #fileLiteral(resourceName: "beep_short.mp3")
    let dash = "-"
    let dot = "."
    for character in message.characters {
        if character == dash[dash.startIndex] {
            speaker = try! AVAudioPlayer(contentsOf: longBeep)
            speaker.prepareToPlay()
            print("-")
        }
        else if character == dot[dot.startIndex] {
            speaker = try! AVAudioPlayer(contentsOf: shortBeep)
            speaker.prepareToPlay()
            print(".")
        }
        speaker.play()
    }
}
I've been messing around with the code for hours now and nothing is working. What (if anything) am I doing wrong?
There seem to be some playground issues with playing audio. See this thread:
Playing a sound in a Swift Playground
However, I was able to make some changes to your code and get it to work. Here's my code:
import AVFoundation
import PlaygroundSupport

class Morse: NSObject, AVAudioPlayerDelegate {
    private var message = ""
    private var dotSound: AVAudioPlayer!
    private var dashSound: AVAudioPlayer!
    private let dash = Character("-")
    private let dot = Character(".")
    private var index: String.Index!

    init(message: String) {
        super.init()
        do {
            if let url = Bundle.main.url(forResource: "beep_short", withExtension: "mp3") {
                self.dotSound = try AVAudioPlayer(contentsOf: url)
                self.dotSound.delegate = self
                self.dotSound.prepareToPlay()
            }
        } catch {
            NSLog("Error loading dot audio!")
        }
        do {
            if let url = Bundle.main.url(forResource: "beep_long", withExtension: "mp3") {
                self.dashSound = try AVAudioPlayer(contentsOf: url)
                self.dashSound.delegate = self
                self.dashSound.prepareToPlay()
            }
        } catch {
            NSLog("Error loading dash audio!")
        }
        self.message = message
        self.index = message.startIndex
    }

    func playCharacter() {
        let character = message.characters[index]
        NSLog("Character: \(character)")
        if character == dash {
            dashSound.play()
        } else if character == dot {
            dotSound.play()
        }
    }

    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
        NSLog("Finished playing")
        // Advance first, then check, so we never subscript message at endIndex
        self.index = message.index(after: index)
        if index != message.endIndex {
            playCharacter()
        }
    }
}

let m = Morse(message: "...---")
m.playCharacter()
PlaygroundPage.current.needsIndefiniteExecution = true
I had to enable indefinite execution to get the code to execute at all. Also, I had some issues with the second audio file loading, but I didn't investigate whether it was an issue with my test file or something else, since it mostly worked.
@Fahim, it is still showing an error with the same code.

CloudKit shared record: how to make it editable by the user who received the link

I tried everything and read everywhere, but I'm unable to make a shared record editable by the user who received the link.
Everything works fine except edits performed by the invited user.
Here's the sharing code (as in the WWDC16 video):
let sharingController = UICloudSharingController { (controller, preparationCompletionHandler) in
    let share = CKShare(rootRecord: record)
    share.publicPermission = .readWrite
    share[CKShareTitleKey] = "Help me to improve data" as CKRecordValue
    share[CKShareTypeKey] = "com.company.AppName" as CKRecordValue
    let modifyRecordsOperation = CKModifyRecordsOperation(recordsToSave: [record, share], recordIDsToDelete: nil)
    modifyRecordsOperation.modifyRecordsCompletionBlock = { (records, recordIDs, error) in
        if let errorK = error {
            print(errorK.localizedDescription)
        }
        preparationCompletionHandler(share, CKContainer.default(), error)
    }
    CKContainer.default().privateCloudDatabase.add(modifyRecordsOperation)
}
sharingController.availablePermissions = [.allowPublic, .allowPrivate, .allowReadWrite]
sharingController.delegate = self
controller.present(sharingController, animated: true)
The console always prints:
PrivateDB can't be used to access another user's zone
Thank you.
When you retrieve the shared record, you need to add the operation to the shared database:
func fetchShare(_ metadata: CKShareMetadata) {
    debugPrint("fetchShare")
    let operation = CKFetchRecordsOperation(recordIDs: [metadata.rootRecordID])
    operation.perRecordCompletionBlock = { record, _, error in
        if let errore = error { debugPrint("Error fetching shared record \(errore.localizedDescription)") }
        if let recordOk = record {
            DispatchQueue.main.async {
                self.storage.append(recordOk)
            }
        }
    }
    operation.fetchRecordsCompletionBlock = { (recordsByRecordID, error) in
        if let errore = error { debugPrint("Error fetching shared record \(errore.localizedDescription)") }
    }
    CKContainer.default().sharedCloudDatabase.add(operation)
}
But now there is a problem when you try to update the record: you need to know who the owner is. If the owner is the one who shared the record, you save to the private database; if the owner is another person, you save to the shared database. So:
func updateOrSaveRecord(_ record: CKRecord, update: Bool) {
    var db: CKDatabase
    if update == true {
        guard let creatorUserID = record.creatorUserRecordID else { return }
        if record.share != nil && creatorUserID.recordName != CKCurrentUserDefaultName {
            debugPrint("record shared from another user")
            db = CKContainer.default().sharedCloudDatabase
        } else {
            debugPrint("private record")
            db = CKContainer.default().privateCloudDatabase
        }
    } else {
        db = CKContainer.default().privateCloudDatabase
    }
    db.save(record) { (savedRecord, error) in
        if let errorTest = error {
            print(errorTest.localizedDescription)
        } else {
            if let recordOK = savedRecord {
                DispatchQueue.main.async {
                    if update == false {
                        self.storage.append(recordOK)
                    } else {
                        self.dettCont?.updateScreen()
                    }
                    self.listCont?.tableView.reloadData()
                }
            }
        }
    }
}
To know whether a record was created by the current user, the trick is to compare the creator's recordName against CKCurrentUserDefaultName.

NSURLSession, Completion Block, Swift

I'm working with NSURLSession. I have an array of restaurants, and I'm requesting the dishes for every restaurant in the array from the API. The data task works; I'm just having a really hard time trying to call a method only when all the data tasks are finished.
self.findAllDishesOfRestaurants(self.restaurantsNearMe) { (result) -> Void in
    if result.count != 0 {
        self.updateDataSourceAndReloadTableView(result, term: "protein")
    } else {
        print("not ready yet")
    }
}
self.updateDataSourceAndReloadTableView never gets called, regardless of my completion block. Here is my findAllDishesOfRestaurants function:
func findAllDishesOfRestaurants(restaurants: NSArray, completion: (result: NSArray) -> Void) {
    let allDishesArray: NSMutableArray = NSMutableArray()
    for restaurant in restaurants as! [Resturant] {
        let currentRestaurant: Resturant? = restaurant
        if currentRestaurant == nil {
            print("restaurant is nil")
        } else {
            self.getDishesByRestaurantName(restaurant, completion: { (result) -> Void in
                if let dishesArray: NSArray = result {
                    restaurant.dishes = dishesArray
                    print(restaurant.dishes?.count)
                    allDishesArray.addObjectsFromArray(dishesArray as [AnyObject])
                    self.allDishes.addObjectsFromArray(dishesArray as [AnyObject])
                    print(self.allDishes.count)
                } else {
                    print("no dishes found")
                }
                // completion(result: allDishesArray)
            })
            completion(result: allDishesArray)
        }
    }
}
And here is the function where I perform the data tasks:
func getDishesByRestaurantName(restaurant: Resturant, completion: (result: NSArray) -> Void) {
    var restaurantNameFormatted = String()
    if let name = restaurant.name {
        for charachter in name.characters {
            var newString = String()
            var sameCharacter: Character!
            if charachter == " " {
                newString = "%20"
                restaurantNameFormatted = restaurantNameFormatted + newString
            } else {
                sameCharacter = charachter
                restaurantNameFormatted.append(sameCharacter)
            }
            // print(restaurantNameFormatted)
        }
    }
    var urlString: String!
    // note to self: when using String(format:), we need to escape all the % marks that aren't ours to replace with a value, otherwise they will expect to be replaced by a value
    urlString = String(format: "https://api.nutritionix.com/v1_1/search/%@?results=0%%3A20&cal_min=0&cal_max=50000&fields=*&appId=XXXXXXXXXappKey=XXXXXXXXXXXXXXXXXXXXXXXXXXXX", restaurantNameFormatted)
    let URL = NSURL(string: urlString)
    let restaurantDishesArray = NSMutableArray()
    let session = NSURLSession.sharedSession()
    let dataTask = session.dataTaskWithURL(URL!) { (data: NSData?, response: NSURLResponse?, error: NSError?) -> Void in
        do {
            let anyObjectFromResponse: AnyObject = try NSJSONSerialization.JSONObjectWithData(data!, options: NSJSONReadingOptions.AllowFragments)
            if let asNSDictionary = anyObjectFromResponse as? NSDictionary {
                let hitsArray = asNSDictionary.valueForKey("hits") as? [AnyObject]
                for newDictionary in hitsArray! as! [NSDictionary] {
                    let fieldsDictionary = newDictionary.valueForKey("fields") as? NSDictionary
                    let newDish = Dish.init(dictionary: fieldsDictionary!, restaurant: restaurant)
                    restaurantDishesArray.addObject(newDish)
                }
            }
            completion(result: restaurantDishesArray)
        } catch let error as NSError {
            print("failed to connect to the API")
            print(error.localizedDescription)
        }
    }
    dataTask.resume()
}
Like I said before, I need to wait until findAllDishesOfRestaurants is done. I tried writing my own completion blocks, but I'm not sure I'm doing it right. Any help is greatly appreciated. Thanks.
The problem is that you are calling the completion method in findAllDishesOfRestaurants before all tasks are complete. In fact, you are calling it once for each restaurant in the list, which is probably not what you want.
My recommendation would be for you to look into NSOperationQueue, for two reasons:
1. It will let you limit the number of concurrent requests to the server, so your server does not get flooded with requests.
2. It will let you easily control when all operations are complete.
However, if you are looking for a quick fix, what you need is GCD dispatch groups: dispatch_group_create, dispatch_group_enter, dispatch_group_leave, and dispatch_group_notify, as follows.
func findAllDishesOfRestaurants(restaurants: NSArray, completion: (result: NSArray) -> Void) {
    let group = dispatch_group_create() // Create GCD group
    let allDishesArray: NSMutableArray = NSMutableArray()
    for restaurant in restaurants as! [Resturant] {
        let currentRestaurant: Resturant? = restaurant
        if currentRestaurant == nil {
            print("restaurant is nil")
        } else {
            dispatch_group_enter(group) // Enter group for this restaurant
            self.getDishesByRestaurantName(restaurant, completion: { (result) -> Void in
                if let dishesArray: NSArray = result {
                    restaurant.dishes = dishesArray
                    print(restaurant.dishes?.count)
                    allDishesArray.addObjectsFromArray(dishesArray as [AnyObject])
                    // self.allDishes.addObjectsFromArray(dishesArray as [AnyObject]) <-- do not do this
                    // print(self.allDishes.count)
                } else {
                    print("no dishes found")
                }
                // completion(result: allDishesArray) <-- No need for this, remove
                dispatch_group_leave(group) // Leave group, marking this restaurant as complete
            })
            // completion(result: allDishesArray) <-- Do not call here either
        }
    }
    // Run the completion once every enter has been balanced by a leave
    dispatch_group_notify(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)) {
        completion(result: allDishesArray)
    }
}
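For reference, the same pattern reads more simply in current Swift with DispatchGroup. This is a sketch under the assumption that getDishesByRestaurantName is updated to deliver a Swift array ([Dish] here is illustrative); the lock serializes appends because NSURLSession callbacks can arrive on different queues:
func findAllDishes(of restaurants: [Resturant],
                   completion: @escaping ([Dish]) -> Void) {
    let group = DispatchGroup()
    let lock = NSLock()            // URLSession callbacks can land on different queues
    var allDishes: [Dish] = []
    for restaurant in restaurants {
        group.enter()
        getDishesByRestaurantName(restaurant) { dishes in
            lock.lock()
            allDishes.append(contentsOf: dishes)
            lock.unlock()
            group.leave()          // balance the enter() for this restaurant
        }
    }
    group.notify(queue: .main) {   // runs once every enter() has been balanced
        completion(allDishes)
    }
}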