I've made a simple dummy server/dummy client program using IOCP for testing/profiling purposes. (I should also note that I'm new to asynchronous network programming.)
It looks like the server works well with the original client, but when the dummy client tries to connect to the server with the ConnectEx function, the IOCP worker thread stays blocked in GetQueuedCompletionStatus and never returns a result, even though the server succeeds in accepting the connection.
What is the problem (and/or its cause), and what should I do to solve it?
I think you answer your own question with your comment.
Your sequence of events is incorrect: you say that you bind, call ConnectEx, and then associate the socket with the IOCP.
You should bind, associate the socket with the IOCP, and only THEN call ConnectEx.
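A minimal Winsock sketch of that order (error handling trimmed; the iocp handle and the remote address are assumed to exist already):

#include <winsock2.h>
#include <mswsock.h>
#pragma comment(lib, "ws2_32.lib")

// Sketch: connect with ConnectEx in the order bind -> associate with IOCP -> ConnectEx.
// 'iocp' is an existing completion port and 'remote' is the server address (both assumed).
void StartConnect(HANDLE iocp, const sockaddr_in& remote)
{
    SOCKET s = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP, NULL, 0, WSA_FLAG_OVERLAPPED);

    // 1. ConnectEx requires a bound socket, so bind to any local address/port first.
    sockaddr_in local = {};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = INADDR_ANY;
    bind(s, (sockaddr*)&local, sizeof(local));

    // 2. Associate the socket with the completion port BEFORE starting the connect,
    //    so the connect completion has somewhere to be queued.
    CreateIoCompletionPort((HANDLE)s, iocp, (ULONG_PTR)s, 0);

    // 3. Fetch the ConnectEx pointer and issue the overlapped connect.
    LPFN_CONNECTEX pConnectEx = NULL;
    GUID guid = WSAID_CONNECTEX;
    DWORD bytes = 0;
    WSAIoctl(s, SIO_GET_EXTENSION_FUNCTION_POINTER, &guid, sizeof(guid),
             &pConnectEx, sizeof(pConnectEx), &bytes, NULL, NULL);

    static OVERLAPPED ov = {};   // must stay valid until the completion is dequeued
    if (!pConnectEx(s, (const sockaddr*)&remote, sizeof(remote), NULL, 0, NULL, &ov) &&
        WSAGetLastError() != WSA_IO_PENDING)
    {
        closesocket(s);          // immediate, real failure
    }
    // Success or pending: the connect completion is delivered to GetQueuedCompletionStatus.
}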
Even after you associate your accepted socket with the IOCP, your worker thread will remain blocked on GetQueuedCompletionStatus until you post an "unlocking" completion event.
Completion events for receive/write operations won't be sent by the system unless you "unlock" your new socket.
For details, check the source code of Push Framework (http://www.pushframework.com), a C++ network application framework using IOCP.
The "unlocking" trick exists in the "IOCPQueue" class.
The question is about the Play framework specifically, although the concept is generic. I guess that the blocked client is listening on a socket which is tracked on the server side and passed around with the Future[Result], so that when the Future finishes, the response is written to the socket and the socket is then closed.
Can someone share more concrete explanation with references?
Quoting from:
https://www.playframework.com/documentation/2.6.18/ScalaAsync
The web client will be blocked while waiting for the response, but nothing will be blocked on the server, and server resources can be used to serve other clients.
Note that Play does not manage how to address the client. This is managed by TCP. Basically (as a simple analogy) you can think of a client, like a web browser, as making a telephone call to the server. When the client makes a request, one of its sockets gets connected to a particular socket on the server - this is a persistent connection between the sockets for the duration of the request/response. Play's underlying server (Netty for older versions, or Akka HTTP for v2.6+) will accept the incoming request from the socket and assign it a thread. Play will do the work, and the resulting Response will get mapped back to the correct socket by the server. The TCP server manages the mapping between response and socket, not Play.
As others have noted, the reference to blocking is essentially to do with the way Play Actions are intended to work (non-blocking). They take the request, wrap whatever work you have coded in a Future, and hand this off to be completed at some point in the near future (it might be a different thread that completes the Future, or it could even end up being the same thread). The point is that the creation of the Future is quick, so the thread that handled the request is returned quickly to the pool and can pick up another request to work on. If you have heard about Reactive Programming, this is essentially the idea behind keeping an application responsive.
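As a rough analogy only (this is not Play's API; std::async here just stands in for Play's execution context, and slowQuery is a made-up placeholder for the action's work), the hand-off idea looks like this in C++ terms:

#include <chrono>
#include <future>
#include <string>
#include <thread>

// Placeholder for the expensive work an action has to do.
std::string slowQuery(const std::string& request)
{
    std::this_thread::sleep_for(std::chrono::seconds(1));
    return "result for " + request;
}

// The "request thread" only schedules the work and returns a future immediately;
// a different thread completes it later.
std::future<std::string> handleRequest(const std::string& request)
{
    return std::async(std::launch::async, [request] { return slowQuery(request); });
}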
The web client will be blocked while waiting for the response, but nothing will be blocked on the server, and server resources can be used to serve other clients.
So the client might be blocked on its end whilst waiting for the response to come back through its socket (unless it too is making async calls), but the idea is that the thread pool handling the requests in Play will not be blocked, because of the way they create a Future and hand its completion back to Play so they can go back to handling other requests.
There is a bit more to it but hopefully this gives a bit more context to that particular statement from Play's docs.
It's not clear how to use the JAIN SIP stack in a multi-threaded environment. I need to create multiple SIP sessions from different threads, e.g. each client should be processed in its own transaction. Below are a few options:
Use a single SipProvider for receiving and sending SIP requests and do the multiplexing on the application side. SipProvider is not thread-safe, hence sending requests requires proper locking.
Create a new SipProvider and a new ListeningPoint for each client. This is how it works for me now. However, I don't really like it, and it's not clear whether SipStack is thread-safe or not.
Create a new instance of SipStack for every client.
It's been a long time since I thought about JAIN-SIP (or even SIP for that matter, or even Java), but here goes:
Set the re-entrant listener flag when you create the stack (look up the Javadocs), and specify a thread pool size. When a SIP request or response comes along, the stack may create a new thread for you and invoke your listener.
Your critical section is the SipListener implementation. You should not block forever in it - otherwise new inbound requests and responses will not be routed to the SIP listener for the transaction being processed at the time you blocked.
Hope that answers your question. Happy hacking.
That's it.
Why don't you use SIP Servlets? They let you focus on your application logic and handle those details for you.
See http://code.google.com/p/sipservlets/
Let's say I have a server with many clients connected via TCP. I have a socket for every client, and a sending and a receiving thread for every client. Is it safe and possible to call the send function from these threads at the same time, given that no two of them call send on the same socket?
If it's safe and OK, can I stream data to clients simultaneously without one client's send call blocking the others?
Thank you very much for answers.
Yes, it is possible and thread-safe. You could have tested it, or worked out for yourself that IIS, SQL Server, etc. wouldn't work very well if it wasn't.
Assuming this is Windows from the tag of "Winsock".
This design (having a send/receive thread for every single connected client), overall, is not going to scale. Hopefully you are aware of that and you know that you have an extremely limited number of clients (even then, I wouldn't write it this way).
You don't need to have a thread pair for every single client.
You can serve tons of clients with a single thread using non-blocking IO and read/write ready notifications (either with select() or one of the varieties of Overlapped IO such as completion routines or completion ports). If you use completion ports you can set a pool of threads to handle socket IO and queue the work for your own worker thread or threads/threadpool.
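For illustration, here is a bare single-threaded readiness loop using select() (a sketch only; it assumes the client sockets are already connected, and select() itself is limited by FD_SETSIZE - the Overlapped/completion-port variants scale further):

#include <winsock2.h>
#include <vector>
#pragma comment(lib, "ws2_32.lib")

// One thread, many clients: wait until at least one socket is readable,
// then service only those. 'clients' holds already-connected sockets.
void pollClients(std::vector<SOCKET>& clients)
{
    fd_set readable;
    FD_ZERO(&readable);
    for (SOCKET s : clients)
        FD_SET(s, &readable);

    timeval timeout = { 1, 0 };                   // wake up at least once a second
    if (select(0, &readable, NULL, NULL, &timeout) <= 0)
        return;                                   // nothing ready (or an error)

    char buf[4096];
    for (SOCKET s : clients) {
        if (!FD_ISSET(s, &readable))
            continue;
        int n = recv(s, buf, sizeof(buf), 0);     // won't block: data is waiting
        if (n <= 0) {
            // connection closed or failed; drop this client
        } else {
            // handle 'n' bytes received from this client
        }
    }
}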
Yes, you can send and receive on many sockets at once from different threads; but you shouldn't need those extra threads, because you shouldn't be making blocking calls to send/recv at all. When you make a non-blocking call, the amount that can be written immediately is written and the function returns; you then note how much was sent and ask for notification when the socket is next writable.
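A sketch of what that looks like with a Winsock socket already switched to non-blocking mode via ioctlsocket(FIONBIO) (the progress-tracking variable is just an example of "noting how much was sent"):

#include <winsock2.h>

// Try to send as much as fits right now; whatever doesn't fit waits until the
// socket is reported writable again. Returns 1 when done, 0 to retry later, -1 on error.
int sendSome(SOCKET s, const char* data, int len, int& sentSoFar)
{
    int n = send(s, data + sentSoFar, len - sentSoFar, 0);
    if (n == SOCKET_ERROR) {
        if (WSAGetLastError() == WSAEWOULDBLOCK)
            return 0;            // send buffer full: ask for a "writable" notification
        return -1;               // real error: close the connection
    }
    sentSoFar += n;              // note how much went out; the rest is sent later
    return (sentSoFar == len) ? 1 : 0;
}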
I think you might want to consider a different approach, as this isn't simple stuff. If you're using .NET you might get by with building this on TcpListener or HttpListener (both of which use completion ports for you), though be aware that you can't easily disable Nagle's algorithm with those, so if you need interactivity (think of the auto-complete on Google's search page) then you probably won't get the performance you want.
I am writing a small HTTP server using the Microsoft Windows Winsock API.
Do I need to apply multithreaded logic when handling multiple users?
Currently Windows sends a message when there is a network event, and each message carries (in wParam) the socket to be used in either send() or recv().
When client A connects and requests a couple of files, usually a number of sockets are created by Winsock. My server then gets a message saying "send this file to socket 123" and later "send that file to socket 456".
When another client connects, it too gets a few sockets, say 789 and 654.
My server then responds to requests to send data using the supplied socket number. It does not have to know who wants the file, since the correct file just has to be sent to the right socket.
I do not know whether Windows itself uses multiple threads when accepting connections and sending the messages down to my program.
So my question is:
Do I need to apply multithreaded logic when handling multiple users? And if so, at what point should I create a thread?
You typically use a thread per socket, and if you are accepting connections, a thread that blocks in a loop waiting for an incoming connection. You then create a new thread and pass the accepted socket handle to it to handle. When that connection is closed and done with, simply let the thread terminate (or join). This is the basis of a threaded server.
In rough pseudo-code (written here as C++ with std::thread; listenSock and handleClient are assumed to be the listening socket and the per-connection handler):
for (;;) {
    SOCKET client = accept(listenSock, nullptr, nullptr);    // block until a client connects
    if (client == INVALID_SOCKET) break;
    std::thread(handleClient, client).detach();              // hand the connection to its own thread
}
Using a single thread to handle multiple sockets is tricky, mainly because the thread can block (stop, waiting) while it's writing to, or more often reading from, a socket. It's not for the faint-hearted.
For most applications, there is no point in using multiple threads to handle network connections. I've made a small writeup in an answer to this question.
Multiple threads become useful when handling the received data requires an unpredictable amount of CPU time, for example in database servers, or when the program structure does not allow for requests to be handled asynchronously.
There is also a third option, the "worker pool". A single thread handles all incoming connections and deserializes incoming requests, and then passes off work items to a pool of threads that handle one item at a time.
This way, simply opening a connection does not yet consume the resources needed for an entire thread, and system load is implicitly limited by the number of threads in the pool.
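A stripped-down sketch of that worker-pool shape in standard C++ (the std::string work item is a placeholder for whatever a deserialized request looks like in your protocol):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// The IO thread pushes deserialized requests here; pool threads pop and handle them.
struct WorkQueue {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::string> items;               // std::string stands in for a parsed request

    void push(std::string item) {
        { std::lock_guard<std::mutex> lock(m); items.push(std::move(item)); }
        cv.notify_one();
    }
    std::string pop() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return !items.empty(); });
        std::string item = std::move(items.front());
        items.pop();
        return item;
    }
};

// Fixed-size pool: system load is bounded by the number of worker threads.
void startPool(WorkQueue& q, int threads)
{
    for (int i = 0; i < threads; ++i) {
        std::thread([&q] {
            for (;;) {
                std::string request = q.pop();    // blocks until work arrives
                // handle exactly one request here
            }
        }).detach();
    }
}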
I have a script (I don't have the code example here at the moment, but I used IO::Async) which connects to a socket on a remote server and listens. The client usually just listens for new data.
The problem is that the client is not able to detect when network problems occur and the socket connection is gone.
I used IO::Async and I also tried it with IO::Socket. The handle is always "connected" after the initial connection is established.
If the network connection is established again, the socket connection is naturally still lost, because the script has no idea that it should reconnect.
I was thinking of creating some kind of "keepalive" which "pings" (syswrite) the socket every X seconds (if nothing new came through the socket) to check whether the connection is still there.
Is this the correct way to do it, or is there maybe another, more creative or cleaner solution?
You can set the SO_KEEPALIVE socket option which, for TCP, sends periodic keepalive messages, and may help detect this condition. If this is detected, you will be delivered an EOF condition (most likely causing the containing IO::Async::Stream to fire on_read_eof).
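For reference, SO_KEEPALIVE is an ordinary socket option; IO::Socket and IO::Async ultimately set the same flag that this C-level sketch sets directly:

#include <sys/socket.h>

// Enable TCP keepalive probes on an already-connected socket descriptor.
// Probe timing stays at the OS defaults unless tuned separately.
int enableKeepalive(int fd)
{
    int on = 1;
    return setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
}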
For a better solution you might consider some sort of application-level keepalive message, such as IRC's PING command.
The short answer is that there is no default way to automatically detect a dropped socket in Perl.
Your approach of pinging would probably work pretty well; you could run a continuous thread in the background that sends ping requests, and if it doesn't receive a response, the main thread can be notified and a reconnect issued.
If you want to get messy you can work with select() to detect keep alive messages; however this may require some OS configuration depending upon your platform.
See this thread for more details: http://www.perlmonks.org/?node_id=566568