C# How to speed up ThreadPool.QueueUserWorkItem?

I created a server that listens on a port for incoming connections, then calls ThreadPool.QueueUserWorkItem to handle each client request on a pool thread.
I then have 400 threads trying to connect to the server at the same time. I found that the server handles them one by one, at roughly 0.5 seconds each.
How do I tell the ThreadPool NOT to wait roughly 0.5 seconds before launching a new thread? This server is meant to be a high-throughput server.
Thanks.
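For reference, the delay described above matches the .NET thread pool's thread-injection throttle: once all existing pool threads are busy, new threads are added at roughly one per 0.5 seconds. A minimal sketch of the usual mitigation, raising the pool's minimum thread count at startup (the 400 simply mirrors the client count in the question):

using System.Threading;

// Pre-grow the pool so a burst of queued work items does not wait on the
// ~0.5 s per-thread injection throttle. 400 mirrors the client count in
// the question and is an assumption; tune it to your own workload.
ThreadPool.GetMinThreads(out _, out int completionPortThreads);
ThreadPool.SetMinThreads(400, completionPortThreads);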

Related

client/server socket reconnection

I developed a client/server application based on sockets.
The client side is in Delphi. The server side is on an IBM i (AS/400).
Sometimes the client and the server get disconnected. I'm not really sure why, but I think it's because a machine between them (a proxy, a router, a firewall) sends an RST packet.
Anyway, I'm trying to reconnect the client to the same process on the server (not another one; the same one, that's important).
To do that, I create a new connection from the client. So, I have two processes on the server. I'll call them the "LostProcess" and the "HelperProcess".
The LostProcess is waiting for data in a data queue.
The client tells the HelperProcess that it was connected to the LostProcess.
The HelperProcess sends data to the LostProcess (via the data queue).
The HelperProcess calls giveDescriptor, and the LostProcess calls takeDescriptor.
Then the HelperProcess stops and the LostProcess sends data to the client (to say “I'm back”).
So far it works, but when the client then sends data, the LostProcess (we can call it the RebornProcess now) never receives it (I tried not stopping the HelperProcess, and it is the HelperProcess that receives the data).
With Wireshark, I could see that the client sends data from a different local port, so I guess that's why the RebornProcess does not receive it.
I tried to force the local port of the new client socket to be the same as the first one, but then the new client socket cannot connect for a while, and if I wait long enough, I have the same problem as before.
Does somebody have an idea how to make the reconnection work?
What you are doing is generally not possible. Once a TCP connection has been lost, it is gone forever. Both apps must close their respective sockets for the lost connection, and the client app must create a new socket connection to continue exchanging data with the server.
If the client app wants to reuse the same local port via bind() (which is generally inadvisable), but does not want to wait for the OS to release the port first, then the client can enable the SO_REUSEADDR option via setsockopt() on the new socket before calling bind() and connect().
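For illustration, a minimal sketch of that setsockopt/bind/connect sequence (shown with the C# Socket API rather than Delphi; the port and host are placeholders):

using System.Net;
using System.Net.Sockets;

var sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
// Allow rebinding the local port the lost connection used, without
// waiting for the OS to release it first.
sock.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
sock.Bind(new IPEndPoint(IPAddress.Any, 50123));  // placeholder: the old local port
sock.Connect("server.example.com", 8076);         // placeholder host and port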
Pretty sure the answer is that you can't.
There would be all kinds of security issues if TCP/IP allowed a new connection to attach to an existing process's connection.
You should have the lost process terminate and just use the new process instead.

Do I need to `ping` connected websocket connections?

I would like to keep the Websocket connection alive for an undefined amount of time. The socket will ideally be sending data every so often but this is not assured, and I also would not like to make assumptions since a user can be in an idle state.
I have an object that stores references to all websocket connections. Would it be appropriate for me to schedule a function every x minutes (or seconds?) that iterates through all the connections, pings them, and then discards those that haven't responded with pongs? Or do I need to enable a flag that automatically keeps the connection alive?
I am using the ws library on my server, but create websocket connections natively on the client.
There's no good way for you, on the client end of things, to know how many proxies, firewalls, NATs, etc. occur in the network path from your client machine to the destination server. Any one of those could have its own separate idle timer. Using TCP keepalive may work, but only for the TCP session from your client to the next hop, which may or may not actually be the end server.
Given the above, I would recommend that yes, you should ping your connected WebSocket sessions periodically. Whether you receive the pong from the server is, from the point of view of keeping your connections alive through that (possibly convoluted) chain of network middleboxes, irrelevant; you simply want to make sure that everything along the path sees some traffic flowing in order to reset their idle timers.
Obviously you want to trade off how often you ping your connected WebSocket sessions with how much overhead is incurred; pinging every 1 second would be a bit much, for example. You may need some fine-tuning to determine, experimentally, just what a good ping interval is for your needs.
Hope this helps!
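To illustrate the periodic sweep recommended above: the question's server uses Node's ws library, which exposes a real ws.ping() and a 'pong' event; the C# sketch below models the same idea with an application-level heartbeat instead (System.Net.WebSockets exposes no protocol ping), and the 30-second interval, 90-second cutoff, and connection dictionary are all assumptions.

using System;
using System.Collections.Concurrent;
using System.Net.WebSockets;
using System.Text;
using System.Threading;

// Live connections, keyed by an id; the timestamp records when we last
// heard from each peer (updated elsewhere, whenever a message arrives).
var connections =
    new ConcurrentDictionary<string, (WebSocket Socket, DateTime LastSeen)>();

var sweep = new Timer(async _ =>
{
    DateTime cutoff = DateTime.UtcNow - TimeSpan.FromSeconds(90);
    byte[] beat = Encoding.UTF8.GetBytes("heartbeat");   // app-level ping
    foreach (var (id, entry) in connections)
    {
        if (entry.LastSeen < cutoff)
        {
            connections.TryRemove(id, out _);  // silent for 3 intervals: discard
            entry.Socket.Abort();
            continue;
        }
        await entry.Socket.SendAsync(new ArraySegment<byte>(beat),
            WebSocketMessageType.Text, true, CancellationToken.None);
    }
}, null, TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));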

ZeroMQ mixed PUB/SUB DEALER/ROUTER pattern

I need to do the following:
- multiple clients connecting to the SAME remote port
- each client opens 2 different sockets: one is a PUB/SUB, the other is a ROUTER/DEALER (the server can occasionally send heartbeats and other server-related information back to the client).
I am completely lost as to whether this can be done in ZeroMQ or not.
Obviously if I could use 2 remote ports it would not be an issue, but I fail to understand whether my setup can be achieved with some kind of envelope usage in ZeroMQ.
Can it be done?
Thanks,
Update:
To clarify what I wish to achieve:
- Multiple clients can communicate with the server.
- Clients operate mostly on a request-response basis (on one socket).
- Clients create a session socket; whenever this type of socket is created, a separate worker thread needs to be created, and from then on the client communicates with this worker thread for its request processing, i.e. the server thread must not block the connections of other clients while dealing with one client's request.
- However, clients can receive occasional messages from the worker thread regarding the worker's heartbeats.
Update2:
Actually, I managed to sort it out. What I did:
- clients need to be identified, so ROUTER/DEALER is used, i.e. clients are indeed dealers, hence async processing is provided
- clients send messages to the one and only local port, where the router sits
- the router peeks into messages (kind of like the Lazy Pirate example) and checks whether a new client has come in; if so, it offloads the client to a separate thread and connects that thread via an internal "inproc:" socket
- the router polls the frontend and all connected clients' backends and sends messages back and forth.
What bugs me is that this is overkill compared with a "regular" socket solution, where I could have connected the client to the worker thread DIRECTLY (i.e. the worker thread could recv from the socket opened by the client), sparing the routing completely.
What am I missing?
There was a discussion on the ZeroMQ mailing list recently about multiplexing multiple services on one TCP socket. The proposed solution is essentially what you implemented.
The discussion also mentions Malamute and its brokers, a framework built on ZeroMQ that also provides the functionality you need. I haven't had the time to look into it myself, but it looks promising.
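For illustration, here is the shape of that single-port multiplexer, sketched with the NetMQ binding for C# (the port, worker count, and echo-style handling are placeholders): a ROUTER socket owns the one TCP port, and a proxy shovels frames, routing envelopes included, to worker threads over inproc.

using System.Threading;
using NetMQ;
using NetMQ.Sockets;

using var frontend = new RouterSocket();
using var backend = new DealerSocket();
frontend.Bind("tcp://*:5555");      // the one and only public port (placeholder)
backend.Bind("inproc://workers");   // in-process pipe to the worker threads

for (int i = 0; i < 4; i++)         // placeholder pool size
{
    new Thread(() =>
    {
        using var worker = new ResponseSocket();
        worker.Connect("inproc://workers");
        while (true)
        {
            string request = worker.ReceiveFrameString();
            worker.SendFrame("reply to " + request);  // placeholder handling
        }
    }) { IsBackground = true }.Start();
}

// Blocks, forwarding frames (and their routing envelopes) both ways.
new Proxy(frontend, backend).Start();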

Does winsock api multithread automatically?

I am writing a small HTTP server using the Microsoft Windows WinSock API.
Do I need to apply multithreaded logic when handling multiple users?
Currently, Windows sends a message when there is a network event, and each message carries (in wParam) the socket to be used in either send() or recv().
When client A connects and requests a couple of files, a number of sockets are usually created by WinSock. My server then gets a message saying "send this file to socket 123" and later "send that file to socket 456".
When another client connects, it too gets a few sockets, say 789 and 654.
My server then responds to requests by sending data on the supplied socket number. It does not have to know who wants the file, since the correct file just has to be sent to the right socket.
I do not know whether Windows itself uses multiple threads when accepting connections and sending these messages down to my program.
So my question is:
Do I need to apply multithreaded logic when handling multiple users? And if so, at what point should I create a thread?
You typically use a thread per socket. If you are accepting connections, one thread loops, blocking while it waits for an incoming connection; it then creates a new thread and passes the accepted socket handle to it. When that connection is closed and done with, simply let its thread terminate (or join it). This is the basis of a threaded server.
In pseudocode:
loop {
    socket = accept();
    new ThreadHandler( socket );
}
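Fleshing the pseudocode out, a minimal runnable sketch of the thread-per-connection shape (the question is about raw WinSock, but the structure is the same; shown in C# with TcpListener, and the port and echo handling are placeholders):

using System.Net;
using System.Net.Sockets;
using System.Threading;

var listener = new TcpListener(IPAddress.Any, 8080);  // placeholder port
listener.Start();
while (true)
{
    TcpClient client = listener.AcceptTcpClient();    // blocks until a client connects
    new Thread(() =>                                  // one thread per connection
    {
        using (client)
        using (NetworkStream stream = client.GetStream())
        {
            var buffer = new byte[4096];
            int n;
            while ((n = stream.Read(buffer, 0, buffer.Length)) > 0)
                stream.Write(buffer, 0, n);           // placeholder: echo back
        }
    }) { IsBackground = true }.Start();
}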
Using a single thread to handle multiple sockets is tricky, mainly because the thread can block (stop, waiting) while it is writing to, or more often reading from, a socket. It's not for the faint-hearted.
For most applications, there is no point in using multiple threads to handle network connections. I've made a small writeup in an answer to this question.
Multiple threads become useful when handling the received data requires an unpredictable amount of CPU time, for example in database servers, or when the program structure does not allow for requests to be handled asynchronously.
There is also a third option, the "worker pool". A single thread handles all incoming connections and deserializes incoming requests, and then passes off work items to a pool of threads that handle one item at a time.
This way, simply opening a connection does not yet consume the resources needed for an entire thread, and system load is implicitly limited by the number of threads in the pool.
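A minimal sketch of that worker-pool shape (the pool size, queue bound, and request type are assumptions):

using System.Collections.Concurrent;
using System.Threading;

var queue = new BlockingCollection<string>(boundedCapacity: 1000);  // back-pressure cap

// Fixed pool: load is bounded by the pool size, not by the number of
// open connections.
for (int i = 0; i < 4; i++)
{
    new Thread(() =>
    {
        foreach (string request in queue.GetConsumingEnumerable())
            ProcessRequest(request);   // hypothetical handler
    }) { IsBackground = true }.Start();
}

// The single I/O thread deserializes each incoming request and enqueues it:
// queue.Add(deserializedRequest);

static void ProcessRequest(string request) { /* placeholder for the real work */ }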

Should I keep a socket open during a long running process?

I've got some programs that occasionally (anywhere from every few minutes to once an hour) need to send metrics to Graphite. Should I keep the socket to the graphite server open for the duration of my process or make a new connection every time I need to send some metrics? What are the considerations when doing one or the other?
Sounds like you need a TCP connection.
Whether you should keep the connection active or not depends on points like:
- Would you like to monitor the "connected" clients at the server at any given time?
- Is there a limit on the server side related to the previous point?
- How many such clients will be "connected" to the server?
- Is it a problem if connection creation takes some time?
If you keep the connection open, just make sure to send keep-alive messages from time to time (application-level preferred).
A large number of clients connected to the server, even when not active, may consume memory or other resources (for example, if there is one thread per connection).
On the other hand, keeping the connection open will allow the client to detect a connection problem with the server much faster (if that even matters).
It all depends on what is needed.
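To illustrate the trade-off, a sketch of the keep-it-open approach with a reconnect-on-failure fallback (shown in C#; Graphite's plaintext protocol is one "path value timestamp\n" line per metric, port 2003 by default; the host and single-retry policy are assumptions):

using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

class GraphiteSender : IDisposable
{
    private readonly string _host;
    private readonly int _port;
    private TcpClient _client;

    public GraphiteSender(string host, int port = 2003) { _host = host; _port = port; }

    public void Send(string path, double value)
    {
        long ts = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
        byte[] line = Encoding.ASCII.GetBytes($"{path} {value} {ts}\n");
        try
        {
            Connected().GetStream().Write(line, 0, line.Length);
        }
        catch (Exception ex) when (ex is SocketException || ex is IOException)
        {
            _client?.Close();   // the idle connection went stale: drop and retry once
            _client = null;
            Connected().GetStream().Write(line, 0, line.Length);
        }
    }

    private TcpClient Connected()
    {
        if (_client == null || !_client.Connected)
        {
            _client?.Close();
            _client = new TcpClient(_host, _port);   // blocking connect
        }
        return _client;
    }

    public void Dispose() => _client?.Close();
}

With sends minutes apart, an idle connection may be silently dropped by a middlebox along the path; the retry above covers that case, and disposing after every send turns the same class into the connection-per-send variant.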