I'm trying to make multiple requests in the background to download several JSON files and check data from them, but I don't know how to use AFNetworking in that case.
I tried to do it the way the wiki explains, but when it goes to download the second file the app crashes. I want the whole process to run in the background.
Thanks
AFNetworking will definitely handle this. We use it for exchanging data with a RESTful set of services. The things to keep in mind:
An operation (e.g. AFHTTPRequestOperation) can only be used once.
An operation is asynchronous.
Put your operations in an NSOperationQueue, or use AFHTTPClient (suggested) to manage the operations for you.
When sending multiple requests, always assume that the responses will come back in a random sequence. There is no guarantee that you will get the responses in the same sequence as the requests.
Hope this helps to point you towards a solution to your problem. Without more detail in your question, it's difficult to give you a specific answer.
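To make that concrete, here is a minimal sketch of the AFHTTPClient route, assuming AFNetworking 1.x; the base URL and JSON paths are placeholders. The client manages its own operation queue, and each request gets its own success/failure blocks, which may fire in any order:

// Placeholder base URL and paths; AFHTTPClient manages the operation queue for you.
AFHTTPClient *client = [AFHTTPClient clientWithBaseURL:
                           [NSURL URLWithString:@"https://example.com/api/"]];
[client registerHTTPOperationClass:[AFJSONRequestOperation class]];
[client setDefaultHeader:@"Accept" value:@"application/json"];

for (NSString *path in @[ @"users.json", @"orders.json", @"products.json" ]) {
    [client getPath:path
         parameters:nil
            success:^(AFHTTPRequestOperation *operation, id responseObject) {
                // Called on the main queue; the order of these callbacks is not guaranteed.
                NSLog(@"Finished %@: %@", path, responseObject);
            }
            failure:^(AFHTTPRequestOperation *operation, NSError *error) {
                NSLog(@"Failed %@: %@", path, error);
            }];
}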
Check out AFHTTPClient's enqueueBatchOfHTTPRequestOperations:progressBlock:completionBlock:, which lets you enqueue multiple request operations at once, with the added bonus of a completion handler that is called when all of those requests have finished, as well as a block for tracking progress. Also note that every single operation can still have its own completion handler (useful if you have to process the result of a request, for example).
If you don't need to customize the request operations (and don't need individual completion blocks), you can also use enqueueBatchOfHTTPRequestOperationsWithRequests:progressBlock:completionBlock:, which lets you pass an array of NSURLRequest objects directly without having to build the operations yourself.
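As a rough sketch (again AFNetworking 1.x, reusing the client from the earlier example, with placeholder paths), the batch API can be used like this; each operation keeps its own completion blocks while the batch completion block fires once everything has finished:

NSMutableArray *operations = [NSMutableArray array];
for (NSString *path in @[ @"a.json", @"b.json", @"c.json" ]) {
    NSURLRequest *request = [client requestWithMethod:@"GET" path:path parameters:nil];
    AFHTTPRequestOperation *operation = [[AFHTTPRequestOperation alloc] initWithRequest:request];
    [operation setCompletionBlockWithSuccess:^(AFHTTPRequestOperation *op, id responseObject) {
        // Per-request processing, e.g. checking the downloaded JSON.
        NSLog(@"Got %@", op.request.URL);
    } failure:^(AFHTTPRequestOperation *op, NSError *error) {
        NSLog(@"Error for %@: %@", op.request.URL, error);
    }];
    [operations addObject:operation];
}

[client enqueueBatchOfHTTPRequestOperations:operations
                              progressBlock:^(NSUInteger numberOfFinishedOperations, NSUInteger totalNumberOfOperations) {
                                  NSLog(@"%lu of %lu requests done",
                                        (unsigned long)numberOfFinishedOperations,
                                        (unsigned long)totalNumberOfOperations);
                              }
                            completionBlock:^(NSArray *completedOperations) {
                                NSLog(@"All requests finished");
                            }];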
I have been looking at RESTful Web Services and was wondering about modelling an event queue in REST.
Assuming the event queue is accessible at the URL http://my.domain/events, it seems to me that a POST to this URL is okay because it adds the event to the end of the list that represents the queue. Likewise, a GET on this URL that returns the head of the queue also seems okay.
My question is - is it okay for the GET operation to also remove the head of the queue or should this be performed by a separate DELETE operation?
is it okay for the GET operation to also remove the head of the queue
No, it is not, from a REST perspective. GET requests should be safe according to REST best practices: making any number of GET requests to a URL should have the same effect as making no requests at all.
There's one more concern about your design. There are usually two common patterns to retrieve a queue head:
The first is to get the head, process it, and then notify the queue to remove the message if it was processed successfully; if not, the message goes back to the queue to be processed again later. This is the more robust approach.
The second is to get the queue head and remove it in the same step, just as you described in your question.
To support both patterns, I think you should only retrieve a message on GET and implement a DELETE method that returns the deleted message object in the response. This way you comply with the REST uniform interface, and your queue client will be able to implement both patterns.
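To make that interface concrete, here is a small client-side sketch in the Objective-C/AFNetworking style used elsewhere on this page; the events/head and events/{id} paths are hypothetical, and the point is simply that GET only reads the head while DELETE removes the message and returns it:

// Hypothetical endpoints: GET events/head reads the head without removing it,
// DELETE events/{id} removes the message and returns it in the response body.
AFHTTPClient *queueClient = [AFHTTPClient clientWithBaseURL:
                                [NSURL URLWithString:@"http://my.domain/"]];
[queueClient registerHTTPOperationClass:[AFJSONRequestOperation class]];

[queueClient getPath:@"events/head" parameters:nil
             success:^(AFHTTPRequestOperation *op, id event) {
                 // Process the event first; only acknowledge once processing succeeds.
                 NSString *deletePath = [NSString stringWithFormat:@"events/%@", event[@"id"]];
                 [queueClient deletePath:deletePath parameters:nil
                                 success:^(AFHTTPRequestOperation *deleteOp, id deletedEvent) {
                                     NSLog(@"Acknowledged event %@", deletedEvent);
                                 }
                                 failure:^(AFHTTPRequestOperation *deleteOp, NSError *error) {
                                     NSLog(@"Ack failed; the event stays queued: %@", error);
                                 }];
             }
             failure:^(AFHTTPRequestOperation *op, NSError *error) {
                 NSLog(@"No event available or request failed: %@", error);
             }];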
Hope it helps!
Do your integrity requirements allow GET + DELETE in one step?
Events normally should not get lost. What happens if the response retrieval fails after the delete was executed?
I would GET the head of the queue and then send an acknowledgement containing the ID of the event that was received and successfully processed. That way you guarantee at-least-once delivery.
Depending on the number of events you are processing, a message bus might be the more suitable option here.
Do not become an overzealous REST paradigm worshipper. REST is an architectural style, but it does not necessarily convey the contract of the service.
What you describe is perfectly fine as long as the contract between the consumer and the queue is clear and documented.
So I need to make 300+ GET API calls, and I don't want to send them all at once and strain the server. So I was thinking of doing about 5 asynchronous calls at a time.
From what I have read here, operation queues sound awesome and are very useful. In the answer to this question, the asynchronous example creates an operation queue and passes it into the call. I'm assuming that if I did something like this I could make 5 separate queues and funnel my calls into them (I'm assuming, since I haven't actually tried it, because I'd like to use Alamofire). Is something similar possible with Alamofire?
Network requests are automatically managed by the underlying URL Loading System, so Alamofire should be able to handle whatever you throw at it. There are ways to schedule Alamofire requests on a queue, but it shouldn't be necessary. Always better to try and measure results rather than speculate.
I've got a Backbone web application that talks to a RESTful PHP server. For PUT and POST it matters in which order the requests arrive at the server and for GET it matters in which order the responses arrive at the client.
The web application does not need to support concurrent use by multiple users, but what might happen is that a user changes their name twice in quick succession. Then the order in which the server processes PUT /name/Ann and PUT /name/Bea determines whether the name ends up as Ann or Bea.
Backbone.Safesync and Backbone.Sync.AjaxQueue are two libraries that try to solve this problem. Doesn't Safesync only solve the problem with GET? Sync.AjaxQueue is outdated, but might serve as inspiration to implement a custom queued sync function. Making sync synchronous would solve the problem. If a request is only sent after the previous response is received, then only one request is processed at a time.
Any advice on how to proceed?
BTW: I don't think using PATCH requests would solve anything, because in my example the same attribute is changed twice.
There are a few ways to solve this; here are two:
Add a timestamp to every request, store it in the DB as "modified", and have the server check whether the timestamp of the incoming request is later than the one in the DB before accepting it.
Use promises to delay the second request until the first one has received its response. There is a promise/deferred mechanism built into jQuery, but you can also use a third-party library, for instance Q or when.
If you can afford the delay, an easy approach is to set the async option to false when you call whatever method you're calling that results in the Backbone.sync. For example, in the appropriate model(s) simply override the default sync method to include the additional option.
HTTP requests made with NSURLConnection are event driven. This makes things a little awkward when you need to issue, say, three requests one after another, where each request uses information returned by the previous one.
I'm used to doing it like this:
response1 = request1();
response2 = request2(response1);
response3 = request3(response2);
But the only way I could find to do this with NSURLConnection is to have connectionDidFinishLoading: issue the next request, and as the number of sequential requests grows, this gets messy.
What's the idiomatic way to handle sequential HTTP requests in Cocoa?
You could wrap each request in an NSOperation and then define dependencies between the operations, so that each request waits for the operations it depends on before executing.
From the Apple Docs:
Dependencies are a convenient way to execute operations in a specific order. You can add and remove dependencies for an operation using the addDependency: and removeDependency: methods. By default, an operation object that has dependencies is not considered ready until all of its dependent operation objects have finished executing. Once the last dependent operation finishes, however, the operation object becomes ready and able to execute.
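A rough sketch of that approach, assuming it's acceptable to run each request synchronously inside its own operation (the operations execute on a background queue, so the UI isn't blocked); the URLs, and the way each response would feed the next request, are placeholders:

NSOperationQueue *queue = [[NSOperationQueue alloc] init];

__block NSData *response1 = nil;
__block NSData *response2 = nil;

NSBlockOperation *request1 = [NSBlockOperation blockOperationWithBlock:^{
    NSURLRequest *req = [NSURLRequest requestWithURL:[NSURL URLWithString:@"https://example.com/step1"]];
    response1 = [NSURLConnection sendSynchronousRequest:req returningResponse:NULL error:NULL];
}];

NSBlockOperation *request2 = [NSBlockOperation blockOperationWithBlock:^{
    // Build the second request from the first response (placeholder logic).
    NSLog(@"Step 1 returned %lu bytes", (unsigned long)response1.length);
    NSURLRequest *req = [NSURLRequest requestWithURL:[NSURL URLWithString:@"https://example.com/step2"]];
    response2 = [NSURLConnection sendSynchronousRequest:req returningResponse:NULL error:NULL];
}];

NSBlockOperation *request3 = [NSBlockOperation blockOperationWithBlock:^{
    // Would use response2 in the same way.
    NSLog(@"Step 2 returned %lu bytes", (unsigned long)response2.length);
}];

// Dependencies enforce the order 1 -> 2 -> 3 without nesting delegate callbacks.
[request2 addDependency:request1];
[request3 addDependency:request2];

[queue addOperations:@[ request1, request2, request3 ] waitUntilFinished:NO];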
I would advise using a third-party library called MKNetworkKit. It can handle the hard work for you, so you can focus on the key aspects of your application. You can find it here.
You can and should use NSOperation and NSOperationQueues.
A good tutorial can be found here: How To Use NSOperations And NSOperationQueues
I am getting confused: what is the difference between a synchronous NSURLConnection and an asynchronous NSURLConnection? Is there a synchronous and an asynchronous one? If we use detachNewThreadSelector in the connectionDidFinishLoading: method, is that an
asynchronous NSURLConnection? Which is the best way? Any tutorial ...
Synchronous means that you trigger your NSURLConnection request and wait for it to be done.
Asynchronous means that you can trigger the request and do other stuff while NSURLConnection downloads data.
Which is "best"?
Synchronous is very straightforward: you set it up, fire it, and wait for the data to come back. But your application sits there and does nothing until all the data is downloaded, some error occurs, or the request times out. If you're dealing with anything more than a small amount of data, your user will sit there waiting, which will not make for a good user experience.
Asynchronous requires just a little more work, but your user can do other stuff while the request does its thing, which is usually preferable. You set up some delegate methods that let you keep track of data as it comes in, which is useful for tracking download progress. This approach is probably better for most usage cases.
You can do both synchronous and asynchronous requests with NSURLConnection. Apple's documentation provides a clear explanation of the two approaches and delegate methods required for the latter approach.
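As a rough illustration of the asynchronous, delegate-based approach (the URL is a placeholder and error handling is reduced to logging):

@interface Downloader : NSObject <NSURLConnectionDataDelegate>
@property (nonatomic, strong) NSMutableData *receivedData;
@end

@implementation Downloader

- (void)start {
    NSURLRequest *request = [NSURLRequest requestWithURL:
                             [NSURL URLWithString:@"https://example.com/big-file.json"]];
    self.receivedData = [NSMutableData data];
    // Asynchronous: returns immediately and calls the delegate as data arrives.
    [NSURLConnection connectionWithRequest:request delegate:self];
}

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
    [self.receivedData appendData:data];   // track download progress here if needed
}

- (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
    NSLog(@"Download failed: %@", error);
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection {
    NSLog(@"Downloaded %lu bytes", (unsigned long)self.receivedData.length);
}

@end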
It seems that you're conflating synchronous/asynchronous connections and threading. In my app I used asynchronous connections as an alternative to threading.
Let's say you want to download a big file without causing the UI to freeze. You have two basic options:
Asynchronous connection. You start it with + connectionWithRequest:delegate: (or one of the init variants that aren't autoreleased) and it downloads bits of the file, calling your delegate when interesting things happen. The run loop keeps going, so your UI stays responsive. Of course, you have to be careful that your delegate doesn't go out of scope.
Synchronous connection. You start the connection with + sendSynchronousRequest:returningResponse:error:, but the code waits until the download is complete. You'll really need to spawn a new thread (or use one of the higher-level threading abstractions Cocoa supports) or the UI will block.
Which option is "best" or the least painful will depend on the architecture of your application and what you're trying to achieve. If you need to create a thread for a long running process anyway, you might go with the second option. In general I would say the first option is easiest.
It's all pretty well documented on Apple's Developer site.
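As a sketch of the second option, the synchronous call can be kept off the main thread with GCD so the UI doesn't block (again, the URL is a placeholder):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSURLRequest *request = [NSURLRequest requestWithURL:
                             [NSURL URLWithString:@"https://example.com/small-resource.json"]];
    NSURLResponse *response = nil;
    NSError *error = nil;

    // Blocks this background thread, not the main thread, until the download completes.
    NSData *data = [NSURLConnection sendSynchronousRequest:request
                                         returningResponse:&response
                                                     error:&error];

    dispatch_async(dispatch_get_main_queue(), ^{
        if (data) {
            NSLog(@"Downloaded %lu bytes", (unsigned long)data.length);
        } else {
            NSLog(@"Request failed: %@", error);
        }
    });
});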
Something that hasn't been mentioned in the other responses is the size of the download. If you're downloading a large file, for example, then an asynchronous connection is better: your delegate receives blocks of data as they arrive, whereas the synchronous method waits for all the data before making any of it available to you. The delegate can start processing the response sooner (better user experience), or save it to a file instead of memory (better resource usage). You also have the option of cancelling the connection without waiting for all the data.
Basically, the asynchronous method gives you more control over the connection but at the cost of complexity. The synchronous method is much simpler, but shouldn't be used on the main UI thread because it blocks.
In response to the other answers regarding file size: I think file size doesn't matter. If the server responds really slowly and you're loading data synchronously, your UI still freezes, even if you're only loading a small amount of data, like 3 KB.
So I'd go for the asynchronous option in every situation, because you never know what you're going to get in terms of file size, server responsiveness, or network speed.