I'm working on an application that has to send data to a web server constantly.
I will be sending text data.
Messages should be submitted to the web server as they become available.
Like a queue: first in, first out.
If a request fails to go through, the application should retry that request before moving on to the next one.
All of this should happen in the background, without interrupting the main application.
What is the best way to implement this?
Like a queue: first in, first out.
So use a queue. Add messages at the tail of the queue. Have a background thread remove messages from the front of the queue, send them, verify that the data was transferred successfully, and move on to the next message. You'll want to make sure that you access the queue in a thread-safe manner from all threads that use it.
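The question doesn't name a platform, so here is a minimal sketch of that pattern in Java: a thread-safe FIFO queue drained by a single background worker that retries each message before moving on. The UploadQueue class and sendToServer() are illustrative names, not an existing API.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Minimal sketch: a single background worker drains a thread-safe FIFO queue
    // and retries each message until it is sent before moving on to the next one.
    public class UploadQueue {

        private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        public UploadQueue() {
            Thread worker = new Thread(this::drain, "upload-worker");
            worker.setDaemon(true); // background only; don't keep the app alive
            worker.start();
        }

        // Called from any thread: add the message at the tail and return immediately.
        public void submit(String payload) {
            queue.add(payload);
        }

        private void drain() {
            while (true) {
                try {
                    String payload = queue.take();      // blocks until a message is available
                    while (!sendToServer(payload)) {    // retry the same message before the next one
                        Thread.sleep(1000);             // simple pause between attempts
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }

        // Placeholder: perform the HTTP request and return true once the server confirms receipt.
        private boolean sendToServer(String payload) {
            return true;
        }
    }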
Create a serial Grand Central Dispatch queue, and add a block to the queue for each message using dispatch_async. Each block can send its message synchronously and retry until it succeeds.
See the "Dispatch Queues" chapter of Apple's Concurrency Programming Guide.
There are two videos about GCD from WWDC 2010: "Introducing Blocks and Grand Central Dispatch on iPhone" and "Simplifying iPhone App Development with Grand Central Dispatch".
There's also a video from WWDC 2011: "Blocks and Grand Central Dispatch in Practice".
Is there any way to configure an MSMQ queue to send copies of all messages it receives to another MSMQ queue? I have a memory leak in a production application that services a queue. I have a test version (that hopefully fixes the memory leak) on a test server, which services a test queue. I want to deluge the test version with the production stream of messages to ensure that the memory leak has been fixed. After I am done testing, I would like to shut off this "message forwarding".
I had the same problem in my application and was faced with two solutions. The easier one, which I would recommend, is to make a very simple application that receives every message in the queue inside a transaction, sends a copy of the Message object to another queue, and then simply calls Abort() on the transaction. That way you can be sure the message is restored to the queue and waits for the production app to process it.
The other alternative would be to have the sending applications write the messages to yet another message queue as well; that way you don't have to mess around with peeking in production, and you'll have full access to your own queue in the test environment.
No. MSMQ is a protocol for delivering messages. You would need an application to read the delivered messages and send new copies to a different queue.
We have an application written against Mobicents SIP Servlets; currently it is using v2.1.547, but I have also tested against v3.1.633 and observed the same behavior.
Our application is working as a B2BUA, we have an incoming SIP call and we also have an outbound SIP call being placed to an MRF which is executing VXML. These two SIP calls are associated with a single SipApplicationSession - which is the concurrency model we have configured.
The scenario which recreates this 100% of the time is as follows:
inbound call placed to our application (call is not answered)
outbound call placed to MRF
inbound call hangs up
application attempts to terminate the SipSession associated with the outbound call
I am seeing this being logged:
2015-12-17 09:53:56,771 WARN [SipApplicationSessionImpl] (MSS-Executor-Thread-14) Failed to acquire session semaphore java.util.concurrent.Semaphore@55fcc0cb[Permits = 0] for 30 secs. We will unlock the semaphore no matter what because the transaction is about to timeout. THIS MIGHT ALSO BE CONCURRENCY CONTROL RISK. app Session is5faf5a3a-6a83-4f23-a30a-57d3eff3281c;SipController
I am willing to believe our application might somehow be triggering this behavior, but I can't see how at the moment. I would have thought that acquiring/releasing the semaphore was entirely internal to the implementation, so it should ensure that nothing acquires the semaphore and never releases it?
Any pointers on how to get to the bottom of this would be appreciated, as I said it is 100% repeatable so getting logs etc is all possible.
It's hard to tell without seeing any logs or the application code showing how you access the session and schedule messages to be sent. But if you use the same SipApplicationSession in an asynchronous manner, you may want to use our vendor-specific asynchronous API, https://mobicents.ci.cloudbees.com/job/MobicentsSipServlets-Release/lastSuccessfulBuild/artifact/documentation/jsr289-extensions-apidocs/org/mobicents/javax/servlet/sip/SipSessionsUtilExt.html#scheduleAsynchronousWork(java.lang.String,%20org.mobicents.javax.servlet.sip.SipApplicationSessionAsynchronousWork), which guarantees that access to the SipApplicationSession is serialized and avoids any concurrency issues.
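For reference, a rough sketch of what calling that API could look like, assuming the standard JSR 289 SipSessionsUtil is injected and that the work callback is doAsynchronousWork(SipApplicationSession) as described in the linked Javadoc; the servlet name, attribute key, and BYE-sending logic are purely illustrative:

    import javax.annotation.Resource;
    import javax.servlet.sip.SipApplicationSession;
    import javax.servlet.sip.SipServlet;
    import javax.servlet.sip.SipSession;
    import javax.servlet.sip.SipSessionsUtil;

    import org.mobicents.javax.servlet.sip.SipApplicationSessionAsynchronousWork;
    import org.mobicents.javax.servlet.sip.SipSessionsUtilExt;

    public class B2buaServlet extends SipServlet {

        @Resource
        private SipSessionsUtil sipSessionsUtil;

        // Called from a thread outside the normal SIP servlet callbacks.
        void terminateOutboundLegLater(String appSessionId) {
            SipSessionsUtilExt utilExt = (SipSessionsUtilExt) sipSessionsUtil;

            // The container serializes this work with any other processing on the same
            // SipApplicationSession, so the session semaphore is acquired and released
            // by the container instead of being held by our own thread.
            utilExt.scheduleAsynchronousWork(appSessionId, new SipApplicationSessionAsynchronousWork() {
                private static final long serialVersionUID = 1L;

                @Override
                public void doAsynchronousWork(SipApplicationSession appSession) {
                    // Illustrative: look up the outbound leg and send BYE from inside
                    // the serialized work, rather than from the external thread.
                    SipSession outbound = (SipSession) appSession.getAttribute("outboundSession");
                    if (outbound != null && SipSession.State.CONFIRMED.equals(outbound.getState())) {
                        try {
                            outbound.createRequest("BYE").send();
                        } catch (Exception e) {
                            // log and give up in this sketch
                        }
                    }
                }
            });
        }
    }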
I am building a WebSocket server with Netty 5.0. I came across the WebSocketServer example (https://github.com/netty/netty/tree/master/example/src/main/java/io/netty/example/http/websocketx/server).
But I can't understand how to send events to sockets from a separate thread. I have a thread which loads some data from an external resource every second. This is StockThread, which receives stock data. After receiving the data, the thread should send events to the sockets. What is the best practice for doing this?
I am using the following approach: inside StockThread I store a list of ChannelHandlerContext. After receiving data I just call the write() method of ChannelHandlerContext, so write() is called from StockThread. Is this okay, or is there a more appropriate way to do it?
Yes, ChannelHandlerContext is thread-safe and can be cached, so this way of usage is completely ok.
See this note from the "Netty in Action" book, which backs this up:
You can keep the ChannelHandlerContext for later use, such as triggering an event outside the handler methods, even from a different Thread.
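To make that concrete, here is a rough sketch of that approach against the Netty 4.x-style API. StockFeed, StockPushHandler and the JSON payload are illustrative names, and in a real server you would typically register the context only after the WebSocket handshake has completed rather than in channelActive.

    import java.util.Set;
    import java.util.concurrent.CopyOnWriteArraySet;

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.handler.codec.http.websocketx.TextWebSocketFrame;

    // Holds the cached contexts; StockThread calls broadcast() from its own thread.
    final class StockFeed {

        private static final Set<ChannelHandlerContext> clients = new CopyOnWriteArraySet<>();

        static void register(ChannelHandlerContext ctx) {
            clients.add(ctx);
        }

        static void unregister(ChannelHandlerContext ctx) {
            clients.remove(ctx);
        }

        // Safe to call from a non-EventLoop thread: writeAndFlush hands the write
        // over to each channel's own event loop.
        static void broadcast(String json) {
            for (ChannelHandlerContext ctx : clients) {
                ctx.writeAndFlush(new TextWebSocketFrame(json));
            }
        }
    }

    // Pipeline handler that caches its context while the channel is open.
    final class StockPushHandler extends ChannelInboundHandlerAdapter {

        @Override
        public void channelActive(ChannelHandlerContext ctx) {
            StockFeed.register(ctx);
            ctx.fireChannelActive();
        }

        @Override
        public void channelInactive(ChannelHandlerContext ctx) {
            StockFeed.unregister(ctx);
            ctx.fireChannelInactive();
        }
    }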
Is it possible to have a mutex on a RabbitMQ queue, i.e. if one client is reading from the queue, no other client should read from it. Is that possible?
Let me explain my scenario:
Two applications run on two different servers, reading the same queue. But if one application is running and reading the messages from the queue, the other application should not do anything. If the main application fails or is stopped, then the other application should start reading from this queue.
This is kind of a failover mechanism. Has anyone tried this before? Any help is much appreciated.
As far as I have searched, there is no built-in solution. A simple workaround is:
Create a queue and call it the lock queue.
Put only one message in it, and make the application read that message from the queue.
Whenever the application starts on another server, it will wait for the message in the lock queue; so, if the first one fails, the second one will read the message and start processing messages from the desired queue it should read from.
A mutex in a queue, that's it.
Note: this approach will work only if there is exactly one message in the lock queue, so make sure you handle that in your application.
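A rough sketch of that lock-queue idea with the Java RabbitMQ client is below. The queue names and startProcessingWorkQueue() are illustrative, and one detail the answer doesn't spell out (an assumption on my part): the lock message is consumed with autoAck=false and never acknowledged, so that if the active instance dies the broker requeues it and the standby instance receives it.

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    public class LockQueueFailover {

        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();

            // Durable lock queue that should contain exactly one message.
            channel.queueDeclare("lock.queue", true, false, false, null);
            // At most one unacknowledged message per consumer.
            channel.basicQos(1);

            DeliverCallback onLockAcquired = (consumerTag, delivery) -> {
                // This instance now "owns" the lock. Never ack the lock message:
                // if this process dies, the broker requeues it and the standby
                // instance gets it, taking over processing.
                System.out.println("Lock acquired, becoming the active consumer");
                startProcessingWorkQueue(channel);
            };

            // autoAck=false keeps the single lock message pinned to this consumer
            // for as long as its connection is alive.
            channel.basicConsume("lock.queue", false, onLockAcquired, consumerTag -> { });
        }

        private static void startProcessingWorkQueue(Channel channel) {
            // Illustrative placeholder: subscribe to the real work queue here.
        }
    }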
This talk explicitly explains why this is a bad idea:
http://www.youtube.com/watch?v=XiXZOF6dZuE&feature=share&t=29m55s
from ~ 29m 55s in
I have a queue of jobs that need to be processed. The queue is kicked periodically by a timer, but also by calling threads when a new job is added to the queue.
When the queue is kicked I want to initiate the processing of the queue on another thread as I don't want to block the calling thread (Which in a lot of cases will be the UI thread).
So to do this I run a Grand Central Dispatch operation on the high-priority concurrent queue; this creates an instance of my HTTP class and submits the job through it (a job is essentially an HTTP request).
The HTTP class executes requests asynchronously, using NSURLConnection internally.
My problem is that the GCD operation finishes (as it has submitted all the async HTTP requests), so I guess the thread it was running on is either cleaned up and reused, or exits, which wipes out my HTTP objects while they are still in the middle of their async web requests.
My question is: how do I make my HTTP objects hang around and finish processing their requests without making the GCD operation wait for them?
Cheers
The main purpose of asynchronous HTTP is to avoid blocking the main thread; since you are already running on a background queue, that is taken care of. What is missing is a defined end of processing where you can get your data back.
I'd say synchronous HTTP would be cleaner to design and easier to support. You could use NSURLConnection, which has a synchronous method:
[NSURLConnection sendSynchronousRequest:returningResponse:error:]
I also just found a library that provides a synchronous mode of operation and promises to make Cocoa (including Cocoa Touch) networking programming much easier: ASIHTTPRequest.