I'm using the Amadeus API for flight search. The Amadeus API requires that requests be no more frequent than one per 100 ms. I use the following code to limit the request frequency:
for depDate in depDates {
    DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
        AmadeusHelper.sharedInstance().searchAFlight(depAirport: depAirport, arrAirport: arrAirport, depDate: depDate, airlineCode: airlineCode, flightNumber: flightNum) { result in
            switch result {
            case .failure(let err):
                print(err)
            case .success(let flightOffer):
                if let json = flightOffer.get() {
                    flightCandidate.offers?.append(json)
                }
            }
        }
    }
}
The for-loop above runs four iterations. The searchAFlight function is essentially a wrapper around an HTTP request. The asyncAfter call delays each request by 1 second, which I thought would be enough. However, I still get a tooManyRequests error. When I was debugging, I stepped through the code line by line, which should have taken much longer to send out the requests; they should have been spread across several minutes, easily satisfying the 100 ms interval. Did I do something wrong here? Thanks.
===========Update=========
Based on Eric's comments, I changed the code to the following
let timeNow = DispatchTime.now() // each request is delayed by an increasing interval
for (index, depDate) in depDates.enumerated() {
    DispatchQueue.main.asyncAfter(deadline: timeNow + Double(index)) {
        AmadeusHelper.sharedInstance().searchAFlight(depAirport: depAirport, arrAirport: arrAirport, depDate: depDate, airlineCode: airlineCode, flightNumber: flightNum) { result in
            switch result {
            case .failure(let err):
                print(err)
            case .success(let flightOffer):
                if let json = flightOffer.get() {
                    flightCandidate.offers?.append(json)
                }
            }
        }
    }
}
Unfortunately, I still get the tooManyRequests error.
Your first attempt was:
for depDate in depDates {
    DispatchQueue.main.asyncAfter(deadline: .now() + 1) { ... }
}
That will not work because you are scheduling all of those iterations to start one second from now, not one second from each other.
You then attempted:
for (index, depDate) in depDates.enumerated() {
    DispatchQueue.main.asyncAfter(deadline: .now() + Double(index)) { ... }
}
That is likely closer to what you want, but it is subject to “timer coalescing”, a feature where the OS starts grouping/coalescing dispatched blocks together. This is a great power-saving feature, but it will circumvent the desire to have delays between the requests. Also, you’re going to have trouble if you ever want to cancel some of these subsequent requests (without complicating the code a bit, at least).
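(As an aside, if you did keep the up-front scheduling, a DispatchWorkItem would at least give you a cancellation handle. A sketch, where pendingSearches is an illustrative name and the request itself is elided; this does not address the coalescing, though.)
var pendingSearches: [DispatchWorkItem] = []

for (index, depDate) in depDates.enumerated() {
    let item = DispatchWorkItem {
        // issue the request for depDate here
    }
    pendingSearches.append(item)
    DispatchQueue.main.asyncAfter(deadline: .now() + Double(index), execute: item)
}

// Later, to abandon any not-yet-started requests:
pendingSearches.forEach { $0.cancel() }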
The simplest solution is to adopt a recursive pattern, where you trigger the next request in the completion handler of the prior one.
func searchFlight(at index: Int = 0) {
    guard index < depDates.count else { return }

    let depDate = depDates[index]
    AmadeusHelper.sharedInstance().searchAFlight(depDate: depDate, ...) { result in
        defer {
            if index < (depDates.count - 1) {
                DispatchQueue.main.asyncAfter(deadline: .now() + 1) { [weak self] in
                    self?.searchFlight(at: index + 1)
                }
            }
        }

        switch result { ... }
    }
}
There’s no coalescing of these calls. Also, this has the virtue that the delay for a subsequent request will be based upon when the prior one finished, not after the prior one was issued (which can be an issue if scheduling a bunch of these requests up-front).
That having been said, I would advise that you direct this inquiry to Amadeus. I wouldn’t be surprised if this 100 ms limitation was introduced to prevent people from mining their database through their API. I also wouldn’t be surprised if they have other, undocumented techniques for identifying excessive requests (e.g., number of requests per hour, per 24 hours, etc.). The question of why you are receiving the tooManyRequests error is best directed to them.
Related
I’m testing code that uses an actor, and I’d like to test that I’m properly handling concurrent access and reentrancy. One of my usual approaches would be to use DispatchQueue.concurrentPerform to fire off a bunch of requests from different threads, and ensure my values resolve as expected. However, since the actor uses structured concurrency, I’m not sure how to actually wait for the tasks to complete.
What I’d like to do is something like:
let iterationCount = 100
let allTasksComplete = expectation(description: "allTasksComplete")
allTasksComplete.expectedFulfillmentCount = iterationCount

DispatchQueue.concurrentPerform(iterations: iterationCount) { _ in
    Task {
        // Do some async work here, and assert
        allTasksComplete.fulfill()
    }
}

wait(for: [allTasksComplete], timeout: 1.0)
However the timeout for the allTasksComplete expectation expires every time, regardless of whether the iteration count is 1 or 100, and regardless of the length of the timeout. I’m assuming this has something to do with the fact that mixing structured and DispatchQueue-style concurrency is a no-no?
How can I properly test concurrent access — specifically how can I guarantee that the actor is accessed from different threads, and wait for the test to complete until all expectations are fulfilled?
A few observations:
When testing Swift concurrency, we no longer need to rely upon expectations. We can just mark our tests as async methods. See Asynchronous Tests and Expectations. Here is an async test adapted from that example:
func testDownloadWebDataWithConcurrency() async throws {
    let url = try XCTUnwrap(URL(string: "https://apple.com"), "Expected valid URL.")
    let (_, response) = try await URLSession.shared.data(from: url)
    let httpResponse = try XCTUnwrap(response as? HTTPURLResponse, "Expected an HTTPURLResponse.")
    XCTAssertEqual(httpResponse.statusCode, 200, "Expected a 200 OK response.")
}
FWIW, while we can now use async tests when testing Swift concurrency, we still can use expectations:
func testWithExpectation() {
    let iterations = 100
    let experiment = ExperimentActor()
    let e = self.expectation(description: #function)
    e.expectedFulfillmentCount = iterations

    for i in 0 ..< iterations {
        Task.detached {
            let result = await experiment.reentrantCalculation(i)
            let success = await experiment.isAcceptable(result)
            XCTAssert(success, "Incorrect value")
            e.fulfill()
        }
    }

    wait(for: [e], timeout: 10)
}
You said:
However the timeout for the allTasksComplete expectation expires every time, regardless of whether the iteration count is 1 or 100, and regardless of the length of the timeout.
We cannot comment without seeing a reproducible example of the code you replaced with the comment “Do some async work here, and assert”. We do not need to see your actual implementation; rather, construct the simplest possible example that manifests the behavior you describe. See How to create a Minimal, Reproducible Example.
I personally suspect that you have some other, unrelated deadlock. E.g., given that concurrentPerform blocks the thread from which you call it, maybe you are doing something that requires the blocked thread. Also, be careful with Task { ... }, which runs the task on the current actor, so if you are doing something slow and synchronous inside there, that could cause problems. We might use detached tasks, instead.
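To illustrate that distinction, here is a minimal sketch; the slow function is a hypothetical stand-in for whatever synchronous work might be inside the task:
import Foundation

// Hypothetical stand-in for slow, synchronous work.
func slowSynchronousWork() {
    Thread.sleep(forTimeInterval: 1)
}

@MainActor func kickOffWork() {
    // Task { } inherits the current actor context: created here, on the main
    // actor, the slow synchronous work ties up the main actor while it runs.
    Task {
        slowSynchronousWork()
    }

    // Task.detached { } does not inherit the actor context; the closure runs
    // on the cooperative thread pool instead.
    Task.detached {
        slowSynchronousWork()
    }
}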
In short, we cannot diagnose the issue without a Minimal, Reproducible Example.
As a more general observation, one should be wary about mixing GCD (or semaphores or long-lived locks or whatever) with Swift concurrency, because the latter uses a cooperative thread pool, which relies upon assumptions about its threads being able to make forward progress. But if you have GCD API blocking threads, those assumptions may no longer be valid. It may not be the source of the problem here, but I mention it as a cautionary note.
As an aside, concurrentPerform (which constrains the degree of parallelism) only makes sense if the work being executed runs synchronously. Using concurrentPerform to launch a series of asynchronous tasks will not constrain the concurrency at all. (The cooperative thread pool may, but concurrentPerform will not.)
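For reference, a sketch of the case where concurrentPerform does constrain parallelism, i.e. synchronous work in the body; the arithmetic is an arbitrary stand-in:
import Foundation

// concurrentPerform constrains parallelism only when the body is synchronous:
// each iteration finishes before its worker thread picks up another, so at
// most roughly core-count iterations run at once. Note that concurrentPerform
// itself blocks the calling thread until all iterations are done.
DispatchQueue.concurrentPerform(iterations: 100) { i in
    let sum = (0 ..< 1_000_000).reduce(0, +)   // arbitrary synchronous work
    print("iteration \(i) done (\(sum))")
}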
So, for example, if we wanted to test a bunch of calculations in parallel, rather than concurrentPerform, we might use a TaskGroup:
func testWithStructuredConcurrency() async {
    let iterations = 100
    let experiment = ExperimentActor()

    await withTaskGroup(of: Void.self) { group in
        for i in 0 ..< iterations {
            group.addTask {
                let result = await experiment.reentrantCalculation(i)
                let success = await experiment.isAcceptable(result)
                XCTAssert(success, "Incorrect value")
            }
        }
    }

    let count = await experiment.count
    XCTAssertEqual(count, iterations)
}
Now if you wanted to verify concurrent execution within an app, normally I would just profile the app (not unit tests) with Instruments, and either watch intervals in the “Points of Interest” tool or look at the new “Swift Tasks” tool described in WWDC 2022’s Visualize and optimize Swift concurrency video. E.g., when I launched forty tasks, I could see in Instruments that my device ran six at a time.
See Alternative to DTSendSignalFlag to identify key events in Instruments? for references about the “Points of Interest” tool.
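If you go the “Points of Interest” route, a minimal sketch of emitting intervals might look like the following; the subsystem string and the function being measured are illustrative:
import os

// Mark intervals for the "Points of Interest" instrument in Instruments.
let poiLog = OSLog(subsystem: "com.example.app", category: .pointsOfInterest)

func measuredFetch(_ index: Int) async {
    let signpostID = OSSignpostID(log: poiLog)
    os_signpost(.begin, log: poiLog, name: "fetch", signpostID: signpostID)
    defer { os_signpost(.end, log: poiLog, name: "fetch", signpostID: signpostID) }

    // ... the actual asynchronous work would go here ...
}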
If you really wanted to write a unit test to confirm concurrency, you could theoretically keep track of your own counters, e.g.,
final class MyAppTests: XCTestCase {
    func testWithStructuredConcurrency() async {
        let iterations = 100
        let experiment = ExperimentActor()

        await withTaskGroup(of: Void.self) { group in
            for i in 0 ..< iterations {
                group.addTask {
                    let result = await experiment.reentrantCalculation(i)
                    let success = await experiment.isAcceptable(result)
                    XCTAssert(success, "Incorrect value")
                }
            }
        }

        let count = await experiment.count
        XCTAssertEqual(count, iterations, "Correct count")

        let degreeOfConcurrency = await experiment.maxDegreeOfConcurrency
        XCTAssertGreaterThan(degreeOfConcurrency, 1, "No concurrency")
    }
}
Where:
import os.log

let logger = Logger(subsystem: "com.example.app", category: "tests") // assumed; the original elides the logger definition

actor ExperimentActor {
    var degreeOfConcurrency = 0
    var maxDegreeOfConcurrency = 0
    var count = 0

    /// Calculate pi with the Leibniz series.
    ///
    /// Note: I am awaiting a detached task so that I can manifest actor reentrancy.
    func reentrantCalculation(_ index: Int, decimalPlaces: Int = 8) async -> Double {
        let task = Task.detached {
            logger.log("starting \(index)") // I wouldn’t generally log in a unit test, but it’s a quick visual confirmation that I’m enjoying parallel execution
            await self.increaseConcurrencyCount()

            let threshold = pow(0.1, Double(decimalPlaces))
            var isPositive = true
            var denominator: Double = 1
            var result: Double = 0
            var increment: Double
            repeat {
                increment = 4 / denominator
                if isPositive {
                    result += increment
                } else {
                    result -= increment
                }
                isPositive.toggle()
                denominator += 2
            } while increment >= threshold

            logger.log("finished \(index)")
            await self.decreaseConcurrencyCount()
            return result
        }
        count += 1
        return await task.value
    }

    func increaseConcurrencyCount() {
        degreeOfConcurrency += 1
        if degreeOfConcurrency > maxDegreeOfConcurrency { maxDegreeOfConcurrency = degreeOfConcurrency }
    }

    func decreaseConcurrencyCount() {
        degreeOfConcurrency -= 1
    }

    func isAcceptable(_ result: Double) -> Bool {
        return abs(.pi - result) < 0.0001
    }
}
Please note that if testing/running on the simulator, the cooperative thread pool is somewhat constrained, not exhibiting the same degree of concurrency as you will see on an actual device.
Also note that if you are testing whether a particular test exhibits parallel execution, you might want to disable the parallel execution of the tests themselves, so that other tests do not tie up your cores and prevent any given test from enjoying parallel execution.
I have a limit of 40 URL Session calls to my API per minute.
I have timed the number of calls in any 60 s window, and when 40 calls have been reached I introduce sleep(x), where x is 60 minus the seconds remaining before the new minute starts. This works fine, and the calls don’t exceed 40 in any given minute. However, the limit can still be exceeded, as there might be more calls towards the end of one minute and more at the beginning of the next 60 s window, resulting in an API error.
I could add a:
usleep(x)
Where x would be 60/40 s, i.e. 1.5 s expressed in microseconds (usleep takes microseconds, not milliseconds). However, as some large data returns take much longer than simple queries, which return almost instantly, this would increase the overall download time significantly.
Is there a way to track the actual rate to see by how much to slow the function down?
Might not be the neatest approach, but it works perfectly: simply store the time of each call and compare against it to see whether a new call can be made and, if not, what delay is required.
Using the previously suggested approach of a fixed delay before each API call of 60/40 = 1.5 s (minute / calls-per-minute), the total time taken to make 500 calls was 15 min 22 s, since each call takes a different time to produce a response. Using the approach below, the time taken was 11 min 52 s, as no unnecessary delay is introduced.
Call before each API Request:
API.calls.addCall()
Call in function before executing new API task:
let limit = API.calls.isOverLimit()
if limit.isOver {
    sleep(limit.waitTime)
}
Background Support Code:
var globalApiCalls: [Date] = []

public class API {
    let limitPerMinute = 40 // Set API limit per minute
    let margin = 2          // Margin in case you issue more than one request at a time

    static let calls = API()

    func addCall() {
        globalApiCalls.append(Date())
    }

    func isOverLimit() -> (isOver: Bool, waitTime: UInt32) {
        let callsInLast60s = globalApiCalls.filter { $0 > date60sAgo() }
        if callsInLast60s.count > limitPerMinute - margin {
            if let firstCallInSequence = callsInLast60s.sorted(by: { $0 > $1 }).dropLast(2).last {
                let seconds = Date().timeIntervalSince1970 - firstCallInSequence.timeIntervalSince1970
                if seconds < 60 { return (true, UInt32(60 + margin) - UInt32(seconds.rounded(.up))) }
            }
        }
        return (false, 0)
    }

    private func date60sAgo() -> Date {
        var dayComponent = DateComponents()
        dayComponent.second = -60
        return Calendar.current.date(byAdding: dayComponent, to: Date())!
    }
}
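Putting the two snippets above together, a hypothetical call site might look like this; the URL and response handling are placeholders:
func fetchResource(from url: URL, completion: @escaping (Data?) -> Void) {
    // Check the rolling 60 s window before issuing the request, pausing if needed.
    let limit = API.calls.isOverLimit()
    if limit.isOver {
        sleep(limit.waitTime)   // blocks the current thread; call this off the main thread
    }
    API.calls.addCall()
    URLSession.shared.dataTask(with: url) { data, _, _ in
        completion(data)
    }.resume()
}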
Instead of using sleep, have a counter. You can do this with a semaphore (it is a counter for threads, allowing only x threads through at a time).
So if you only allow 40 threads at a time, you will never have more; new threads will be blocked. This is much more efficient than calling sleep because it interactively accounts for long calls and short calls.
The trick here is that you would call a function like this every sixty seconds. That makes a new semaphore every minute, which only allows 40 calls; each semaphore does not affect the others, only its own threads.
func uploadImages() {
    let uploadQueue = DispatchQueue.global(qos: .userInitiated)
    let uploadGroup = DispatchGroup()
    let uploadSemaphore = DispatchSemaphore(value: 40)

    uploadQueue.async(group: uploadGroup) { [weak self] in
        guard let self = self else { return }
        for image in self.images {
            uploadGroup.enter()
            uploadSemaphore.wait()
            self.callAPIUploadImage(image: image) { (success, error) in
                uploadGroup.leave()
                uploadSemaphore.signal()
            }
        }
    }

    uploadGroup.notify(queue: .main) {
        // completion
    }
}
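And a sketch of the “call a function like this every sixty seconds” part, using a repeating Timer; it assumes self owns uploadImages() and that uploadTimer is a property that retains the timer:
uploadTimer = Timer.scheduledTimer(withTimeInterval: 60, repeats: true) { [weak self] _ in
    self?.uploadImages()   // each invocation creates a fresh 40-slot semaphore
}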
I have a func that gets a list of Players. When I fetch the players I need to show only those who belong to the current Team, so I am showing a subset of the original list by filtering it. I don't know in advance, before making the request, how many players belong to the Team selected by the user, so I may need to make additional requests until I can display at least 10 rows of Players in the TableView. By pulling up from the bottom of the TableView, the user can request more players to display. To do this I am calling a first async request func, which in turn calls, inside a while loop, another nested async request func. Here is some code to give you an idea of what I am trying to do:
let semaphore = DispatchSemaphore(value: 0)

func getTeamPlayersRequest() {
    service.getTeamPlayers(...) { (result) in
        switch result {
        case .success(let playersModel):
            if let validCurrentPage = currentPageTmp,
               let validTotalPages = totalPagesTmp,
               let validNextPage = self.getTeamPlayersListNextPage() {
                while self.playersToShowTemp.count < 10 && self.currentPage < validTotalPages {
                    self.currentPage = validNextPage // global var
                    self.fetchMorePlayers()
                    self.semaphore.wait() // global semaphore
                }
            }
        case .failure(let error):
            // some code...
            break
        }
    }
}
private func fetchMorePlayers() {
    // Completion handler of the following function is never called..
    service.getTeamPlayers(requestedPage: currentPage, completion: { (result) in
        switch result {
        case .success(let playersModel):
            if let validPlayerList = playersModel,
               let validPlayerListData = validPlayerList.data,
               let validTeamModel = self.teamPlayerModel,
               let validNextPage = self.getTeamPlayersListNextPage() {
                for player in validPlayerListData {
                    if validTeamModel.id == player.team?.id {
                        self.playersToShowTemp.append(player)
                    }
                }
                self.currentPage = validNextPage
            }
            self.semaphore.signal() // global semaphore
        case .failure(let error):
            // some code...
            break
        }
    })
}
I have tried both DispatchGroup and DispatchSemaphore, but I don't see what I am doing wrong. I debugged the code and saw that the first async call gets executed on a different queue (not the main queue) and a different thread. The nested async call gets executed on a different thread, but I don't know whether it is on the same concurrent queue as the first async call.
The completion handler of the nested call is never called. Does anyone know why? Is the self.semaphore.wait(), even though it gets executed after fetchMorePlayers() returns, blocking/preventing the nested async completion handler from being called?
I am noticing through the debugger that the completion() in the Xcode variables window has the note "swift partial apply forwarder for closure #1".
If we inline the function call in your loop, it looks something like this:
while self.playersToShowTemp.count < 10 && self.currentPage < validTotalPages {
    self.currentPage = validNextPage // global var
    nbaService.getTeamPlayers(requestedPage: currentPage, completion: { ... })
    self.semaphore.wait() // global semaphore
}
So nbaService.getTeamPlayers schedules a request, probably on the main DispatchQueue and immediately returns. Then you call wait on your semaphore, which blocks, probably before GCD even tries to run the task scheduled by nbaService.getTeamPlayers.
That's a problem on DispatchQueue.main, which is a serial queue. It has to be a serial queue for UI updates to work. What normally happens is that on some iteration of the run loop you make a request and return; that bubbles back up to the run loop, which checks for more events and queued tasks. In this case, when your completion handler in getTeamPlayersRequest is waiting to be run, the run loop (via GCD) executes it on that iteration. But here you block the main thread, so the run loop can't continue. If you do need to block, always do it on a different DispatchQueue, preferably a .concurrent one.
There is sometimes confusion about what .async does. It only means "run this later and right now return control back to the caller". That's all. It does not guarantee that your closure will run concurrently; it merely schedules it to be run later (possibly soon) on whatever DispatchQueue you called it on. If that queue is a serial queue, it will be queued to run in its turn in that dispatch queue's run loop. If it's a concurrent queue (i.e. one whose attributes you specifically set to include .concurrent), it will run possibly at the same time as other tasks on that same DispatchQueue.
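A quick sketch of that distinction; the queue labels are arbitrary:
import Foundation

let serialQueue = DispatchQueue(label: "demo.serial")
let concurrentQueue = DispatchQueue(label: "demo.concurrent", attributes: .concurrent)

// On a serial queue, async'd blocks run one at a time, in FIFO order.
serialQueue.async { print("serial 1") }
serialQueue.async { print("serial 2") }         // always runs after "serial 1" finishes

// On a concurrent queue, async'd blocks may overlap; completion order is not defined.
concurrentQueue.async { print("concurrent A") }
concurrentQueue.async { print("concurrent B") } // may run at the same time as A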
To avoid that, instead of using a loop, you can use async chaining.
private func fetchMorePlayers(while condition: @autoclosure @escaping () -> Bool) {
    guard condition() else { return }

    nbaService.getTeamPlayers(requestedPage: currentPage, completion: { (result) in
        switch result {
        case .success(let playersModel):
            if let validPlayerList = playersModel,
               let validPlayerListData = validPlayerList.data,
               let validTeamModel = self.teamPlayerModel,
               let validNextPage = self.getTeamPlayersListNextPage() {
                for player in validPlayerListData {
                    if validTeamModel.id == player.team?.id {
                        self.playersToShowTemp.append(player)
                    }
                }
                self.currentPage = validNextPage
            }
            // Chain to the next call
            self.fetchMorePlayers(while: condition())
        case .failure(let error):
            // some code...
            break
        }
    })
}
Then in getTeamPlayersRequest you can do this:
func getTeamPlayersRequest() {
    service.getTeamPlayers(...) { (result) in
        switch result {
        case .success(let playersModel):
            if let validCurrentPage = currentPageTmp,
               let validTotalPages = totalPagesTmp,
               let validNextPage = self.getTeamPlayersListNextPage() {
                self.currentPage = validNextPage // global var
                self.fetchMorePlayers(while: self.playersToShowTemp.count < 10 && self.currentPage < validTotalPages)
            }
        case .failure(let error):
            // some code...
            break
        }
    }
}
This avoids the need to block on a semaphore, because each subsequent request happens in the completion handler of the previously completed one. The only issue is if you need the completion handler in getTeamPlayersRequest to block while the fetchMorePlayers requests are being fetched; it now won't, so you can re-introduce the semaphore. In that case, the guard statement in fetchMorePlayers becomes:
guard condition() else {
    self.semaphore.signal()
    return
}
That way it only signals on the last completion handler in the chain. You may need to block on a different DispatchQueue, though. I think if you need to block at all, you probably have something about your design that needs to be reconsidered.
If you find yourself reaching for semaphores, it is almost always a mistake. Semaphores are inefficient at best, and introduce deadlock risks if misused. Semaphores should generally be avoided. (Don't get me wrong: Semaphores can be useful in some very narrow use cases, but this is not one of them.)
Use asynchronous patterns. One simple approach might be to recursively call the routine, calling the completion handler when done:
func startFetching(completion: @escaping () -> Void) {
    fetchPlayers(page: 0, completion: completion)
}

private func fetchPlayers(page: Int, completion: @escaping () -> Void) {
    // prepare request

    // now perform request
    performRequest(...) { ...
        if let error = error {
            completion()
            return
        }

        ...

        if doesNeedMorePlayers {
            self.fetchPlayers(page: page + 1, completion: completion)
        } else {
            completion()
        }
    }
}
Personally, I might add another closure to emit the players retrieved as we go along, e.g. like (if not actually) a Combine Publisher. Or, if you want to update the UI all at once at the very end, just pass the players retrieved thus far as an additional parameter in this recursive routine and pass the whole array back in the completion handler. But avoid globals or other state properties.
But the broader idea is to scrupulously avoid semaphores and instead embrace asynchronous patterns.
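For comparison, with modern Swift concurrency the same “fetch until we have enough” logic can be written as a plain loop, since each await suspends without blocking a thread. A sketch; all names here are hypothetical stand-ins for the question's types:
// Hypothetical stand-ins for the question's model and fetch.
struct Player {
    let teamID: Int
}

func fetchPage(_ page: Int) async throws -> [Player] {
    // ... perform the HTTP request and decode the page ...
    return []
}

func playersForTeam(_ teamID: Int, minimumCount: Int = 10) async throws -> [Player] {
    var matching: [Player] = []
    var page = 0

    // Each `await` suspends rather than blocking, so no semaphore is needed
    // to serialize the page requests.
    while matching.count < minimumCount {
        let players = try await fetchPage(page)
        if players.isEmpty { break }                        // ran out of pages
        matching += players.filter { $0.teamID == teamID }
        page += 1
    }
    return matching
}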
I am setting up an app that uses PromiseKit to order asynchronous tasks. I currently have a setup that ensures two async functions (referred to as promises) are done in order (let's call them 1 and 2) and that another set of functions (3 and 4) are done in order. Roughly:
import PromiseKit

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)

    firstly {
        self.promiseOne() // promise #1 happening first (in relation to #1 and #2)
    }.then { _ -> Promise<[String]> in
        self.promiseTwo() // promise #2 starting after #1 has completed
    }.catch { error in
        print(error)
    }

    firstly {
        self.promiseThree() // promise #3 happening first (in relation to #3 and #4)
    }.then { _ -> Promise<[String]> in
        self.promiseFour() // promise #4 starting after #3 has completed
    }.catch { error in
        print(error)
    }
}
Each firstly ensures the order of the functions within it, by making sure the first one completes before the second one can start. Using two separate firstlys ensures that 1 is done before 2, that 3 is done before 4, and (importantly) that 1 and 3 start at roughly the same time (at the onset of viewDidAppear()). This is done on purpose because 1 and 3 are not related to each other and can start at the same time without any issues (the same goes for 2 and 4). The issue is that there is a fifth promise, let's call it promiseFive, that must only run after both 2 and 4 have completed. I could just use one firstly that ensures the order 1, 2, 3, 4, 5, but since the relative order of 1/2 and 3/4 is not relevant, linking them in this fashion would waste time.
I am not sure how to set this up so that promiseFive is only run upon completion of both 2 and 4. I have thought of having boolean-checked function calls at the end of both 2 and 4, each making sure the other firstly has finished before calling promiseFive, but since they begin asynchronously (1/2 and 3/4), it is possible that promiseFive would be called by both at the exact same time with this approach, which would obviously create issues. What is the best way to go about this?
You can use when or join to start something after multiple other promises have completed. The difference is in how they handle failed promises. It sounds like you want join. Here is a concrete, though simple, example.
This first block of code is an example of how to create 2 promise chains and then wait for both of them to complete before starting the next task. The actual work being done is abstracted away into some functions. Focus on this block of code as it contains all the conceptual information you need.
Snippet
let chain1 = firstly(execute: { () -> (Promise<String>, Promise<String>) in
    let secondPieceOfInformation = "otherInfo" // This static data is for demonstration only

    // Pass 2 promises; now the next `then` block will be called when both are fulfilled.
    // Promises initialized with values are already fulfilled, so the effect is identical
    // to just returning the single promise. You can do a tuple of up to 5 promises/values.
    return (fetchUserData(), Promise(value: secondPieceOfInformation))
}).then { (result: String, secondResult: String) -> Promise<String> in
    self.fetchUpdatedUserImage()
}

let chain2 = firstly {
    fetchNewsFeed() // This promise returns a dictionary
}.then { (result: [String: Any]) -> Promise<String> in
    for (key, value) in result {
        print("\(key) \(value)")
    }

    // now `result` is a collection
    return self.fetchFeedItemHeroImages()
}

join(chain1, chain2).always {
    // You can use `always` if you don't care about the earlier values
    let methodFinish = Date()
    let executionTime = methodFinish.timeIntervalSince(self.methodStart)
    print(String(format: "All promises finished %.2f seconds later", executionTime))
}
PromiseKit uses closures to provide its API. Closures have a scope, just like an if statement: if you define a value within an if statement's scope, you won't be able to access it outside of that scope.
You have several options for passing multiple pieces of data to the next then block:
Use a variable that shares a scope with all of the promises (you'll likely want to avoid this as it works against you in managing the flow of asynchronous data propagation)
Use a custom data type to hold both (or more) values. This can be a tuple, struct, class, or enum.
Use a collection (such as a dictionary), example in chain2
Return a tuple of promises, example included in chain1
You'll need to use your best judgement when choosing your method.
Complete Code
import UIKit
import PromiseKit

class ViewController: UIViewController {

    let methodStart = Date()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        <<Insert The Other Code Snippet Here To Complete The Code>>

        // I'll also mention that `join` is being deprecated in PromiseKit.
        // It provides `when(resolved:)`, which acts just like `join`, and
        // `when(fulfilled:)`, which fails as soon as any of the promises fail.
        when(resolved: chain1, chain2).then { (results) -> Promise<String> in
            for case .fulfilled(let value) in results {
                // These promises succeeded, and the values will be what is returned from
                // the last promises in chain1 and chain2
                print("Promise value is: \(value)")
            }

            for case .rejected(let error) in results {
                // These promises failed
                print("Promise value is: \(error)")
            }

            return Promise(value: "finished")
        }.catch { error in
            // With the caveat that `when` never rejects
        }
    }

    func fetchUserData() -> Promise<String> {
        let promise = Promise<String> { (fulfill, reject) in
            // These dispatch queue delays are stand-ins for your long-running asynchronous
            // tasks. They might be network calls, or batch file processing, etc.
            // So, they're just here to provide a concise, illustrative, working example.
            DispatchQueue.global().asyncAfter(deadline: .now() + 2.0) {
                let methodFinish = Date()
                let executionTime = methodFinish.timeIntervalSince(self.methodStart)
                print(String(format: "promise1 %.2f seconds later", executionTime))
                fulfill("promise1")
            }
        }
        return promise
    }

    func fetchUpdatedUserImage() -> Promise<String> {
        let promise = Promise<String> { (fulfill, reject) in
            DispatchQueue.global().asyncAfter(deadline: .now() + 2.0) {
                let methodFinish = Date()
                let executionTime = methodFinish.timeIntervalSince(self.methodStart)
                print(String(format: "promise2 %.2f seconds later", executionTime))
                fulfill("promise2")
            }
        }
        return promise
    }

    func fetchNewsFeed() -> Promise<[String: Any]> {
        let promise = Promise<[String: Any]> { (fulfill, reject) in
            DispatchQueue.global().asyncAfter(deadline: .now() + 1.0) {
                let methodFinish = Date()
                let executionTime = methodFinish.timeIntervalSince(self.methodStart)
                print(String(format: "promise3 %.2f seconds later", executionTime))
                fulfill(["key1": Date(),
                         "array": ["my", "array"]])
            }
        }
        return promise
    }

    func fetchFeedItemHeroImages() -> Promise<String> {
        let promise = Promise<String> { (fulfill, reject) in
            DispatchQueue.global().asyncAfter(deadline: .now() + 2.0) {
                let methodFinish = Date()
                let executionTime = methodFinish.timeIntervalSince(self.methodStart)
                print(String(format: "promise4 %.2f seconds later", executionTime))
                fulfill("promise4")
            }
        }
        return promise
    }
}
Output
promise3 1.05 seconds later
array ["my", "array"]
key1 2017-07-18 13:52:06 +0000
promise1 2.04 seconds later
promise4 3.22 seconds later
promise2 4.04 seconds later
All promises finished 4.04 seconds later
Promise value is: promise2
Promise value is: promise4
The details depend a little upon the types of these various promises, but you can basically return the promise of 1 followed by 2 as one promise, return the promise of 3 followed by 4 as another, and then use when to run those two sequences of promises concurrently with respect to each other, while still enjoying the consecutive behavior within each sequence. For example:
let firstTwo = promiseOne().then { something1 in
    self.promiseTwo(something1)
}

let secondTwo = promiseThree().then { something2 in
    self.promiseFour(something2)
}

when(fulfilled: [firstTwo, secondTwo]).then { results in
    os_log("all done: %@", results)
}.catch { error in
    os_log("some error: %@", error.localizedDescription)
}
This might be a situation in which your attempt to keep the question fairly generic might make it harder to see how to apply this answer in your case. So, if you are stumbling, you might want to be more specific about what these four promises are doing and what they're passing to each other (because this passing of results from one to another is one of the elegant features of promises).
I have code of this form:
func myFunction(<...>, completionHandler: (ResponseType) -> Void) {
    <prepare parameters>
    mySessionManager.upload(multipartFormData: someClosure,
                            to: saveUrl, method: .post, headers: headers) { encodingResult in
        // encodingCompletion
        switch encodingResult {
        case .failure(let err):
            completionHandler(.error(err))
        case .success(let request, _, _):
            request.response(queue: self.asyncQueue) { response in
                // upload completion
                <extract result>
                completionHandler(.success(result))
            }
        }
    }
}
And testing code like this:
func testMyFunction() {
    <prepare parameters>
    var error: Error? = nil
    var result: MyResultType? = nil
    let sem = DispatchSemaphore(value: 0)
    var ran = false

    myFunction(<...>) { response in
        if ran {
            error = "ran twice"
            return
        }
        defer {
            ran = true
            sem.signal()
        }
        switch response {
        case .error(let err): error = err
        case .success(let res): result = res
        }
    }

    sem.wait()
    XCTAssertNil(error, "Did not want to see this error: \(error!)")
    <test response>
}
I use a semaphore to block the main thread until the request is processed asynchronously; this works fine for all my other Alamofire requests, but not this one: the test hangs.
(Nota bene: using active waiting does not change things.)
Using the debugger, I figured out that all the code that executes does so just fine, but encodingCompletion is never called.
Now my best guess is that DispatchQueue.main.async says, "execute this on the main thread when it has time", which it never will, since my test code is blocking there (and will run further tests, anyway).
I replaced it with self.queue.async and upload.delegate.queue.addOperation, two other queueing operations found in the same function. Then the test runs through but yields unexpected errors; my guess is that then, encodingCompletion is called too early.
There are several questions to ask here; an answer to any can solve my problem.
Can I test such code differently so that DispatchQueue.main can get to other tasks?
How can I use the debugger to find out which thread runs when?
How can I adapt Alamofire at the critical position so that it does not require the main queue?
As explained here, this is a bad "solution", as it introduces the possibility of deadlocks when requests are nested. I'm leaving this here for instructional purposes.
Changing
DispatchQueue.main.async {
    let encodingResult = MultipartFormDataEncodingResult.success(
        request: upload,
        streamingFromDisk: true,
        streamFileURL: fileURL
    )
    encodingCompletion?(encodingResult)
}
in SessionManager.swift to
self.queue.sync {
...
}
solves (read: works around) the problem.
I have no idea if this is a robust fix or anything; I have filed an issue.
We should not block the main thread. XCTest has its own solution for waiting on asynchronous computations:
let expectation = self.expectation(description: "Operation should finish.")

operation(...) { response in
    ...
    expectation.fulfill()
}

waitForExpectations(timeout: self.timeout)
From the documentation:
Runs the run loop while handling events until all expectations are fulfilled or the timeout is reached. Clients should not manipulate the run loop while using this API.
Outside of XCTest, we can use a similar mechanism as XCTestCase.waitForExpectations() does:
var done = false

operation(...) { response in
    ...
    done = true
}

repeat {
    RunLoop.current.run(until: Date(timeIntervalSinceNow: 0.1))
} while !done
Note: This assumes that operation sends its work to the same queue that it is itself executed on. If it uses another queue, this won't work; but then the approach using DispatchSemaphore (see the question) does not cause a deadlock and can be used.
The implementation in XCTest does a lot more (multiple expectations, timeout, configurable sleep interval, etc.) but this is the basic mechanism.
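If you need a timeout outside of XCTest as well, the same spin can be bounded. A sketch, where the 10-second deadline is arbitrary and operation is the placeholder from above:
var done = false

operation(...) { response in
    // ...
    done = true
}

let deadline = Date(timeIntervalSinceNow: 10)
while !done && Date() < deadline {
    RunLoop.current.run(until: Date(timeIntervalSinceNow: 0.1))
}
if !done {
    // handle the timeout, e.g. fail the surrounding check
}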