I'm developing a small data processor in C++ over UDP sockets, and I have a single thread (separate from the sockets) that processes the information received from them.
My problem arises when I need to receive information from multiple clients on the socket at the same time.
How could I do something like:
Socket foo;
/* init socket vars and attribs */
while (serving) {
    thread_processing(foo_info);
}
for multiple clients (many concurrent accesses) in C++?
I'm using Winsock on Win32 at the moment, but I only have standard blocking UDP sockets working. There's no GUI; it's a console app.
I'd greatly appreciate an example or a pointer to one ;).
Thanks in advance.
A UDP socket can receive datagrams from multiple clients with the recvfrom() function. Just block on receive, read the request, process it, send the reply, and repeat. You don't even need a thread unless processing takes a very long time (in that case, a thread connected with two queues, one inbound and one outbound, would work).
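A minimal sketch of that blocking loop with Winsock might look like this (the port number and buffer size are arbitrary choices):

// Minimal blocking UDP request/reply loop (Winsock).
#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

int main() {
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    sockaddr_in local = {};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(9000);
    bind(s, (sockaddr*)&local, sizeof(local));

    char buf[1500];
    for (;;) {
        sockaddr_in client;
        int clientLen = sizeof(client);
        // recvfrom() tells us which client this datagram came from.
        int n = recvfrom(s, buf, sizeof(buf), 0, (sockaddr*)&client, &clientLen);
        if (n == SOCKET_ERROR) break;
        // ... process the request in buf[0..n) ...
        // Reply to whichever client sent the request.
        sendto(s, buf, n, 0, (sockaddr*)&client, clientLen);
    }
    closesocket(s);
    WSACleanup();
    return 0;
}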
I would suggest this is best tackled by putting the requests in a queue and letting the other thread work off the queue. This decouples the socket receive from the processing, and thus allows you to scale to more listeners and processing threads if your requirements change.
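A sketch of that decoupling with standard C++ threading primitives (Request here is a placeholder for whatever your protocol actually carries):

// The socket loop pushes requests into a queue; a worker thread pops
// and processes them. Request is a placeholder type.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <vector>

struct Request {
    std::vector<char> payload;  // raw datagram bytes
    // plus the client address, so the reply can be routed back
};

std::queue<Request> g_queue;
std::mutex g_mutex;
std::condition_variable g_cv;

void worker() {
    for (;;) {
        std::unique_lock<std::mutex> lock(g_mutex);
        g_cv.wait(lock, [] { return !g_queue.empty(); });
        Request req = std::move(g_queue.front());
        g_queue.pop();
        lock.unlock();
        // ... process req, then queue or send the reply ...
    }
}

void on_datagram(Request req) {  // called from the socket receive loop
    {
        std::lock_guard<std::mutex> lock(g_mutex);
        g_queue.push(std::move(req));
    }
    g_cv.notify_one();
}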
I am just starting to learn sockets and client/servers. I am not clear on the following concept. Assume non-blocking sockets.
Assume I have a server application and 1000 clients trying to talk to it; I think that is a very realistic scenario. Assume the client and server talk via sockets.
1- Does this mean that there is a separate socket connection for every client? (Do we have 1000 sockets, or one socket with 1000 connections?)
2- Does every socket connection belong to a separate thread? If yes, how can we limit the number of threads, since this can get out of control?
Assuming you're using TCP, then every connection is over a separate socket. The operating system allocates them using file descriptors.
When using a protocol like UDP, this need not be the case, and it won't be unless you write the code to make it happen.
Threading? It depends on how you build the server. You don't need threads to be a part of a server at all and you can (obviously) have multiple threads with just a single connection. One common way of doing things, however, is to hand the socket returned by accept() to a new thread, yes.
If you don't have an interest in threads--for example, if the server only performs very quick tasks and creating a thread is just wasting time--you can use select() to poll the sockets and determine which ones need attention. Some servers use a combination of threading and polling to try to maximize throughput.
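As a rough illustration, a single-threaded select() loop over a listening TCP socket could look like this (POSIX sockets; error handling mostly omitted):

// Poll a listening socket plus all connected clients with select().
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <algorithm>
#include <vector>

void serve(int listen_fd) {
    std::vector<int> clients;
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(listen_fd, &readfds);
        int maxfd = listen_fd;
        for (int fd : clients) {
            FD_SET(fd, &readfds);
            maxfd = std::max(maxfd, fd);
        }
        // Block until at least one socket needs attention.
        if (select(maxfd + 1, &readfds, nullptr, nullptr, nullptr) < 0) break;

        if (FD_ISSET(listen_fd, &readfds)) {
            int c = accept(listen_fd, nullptr, nullptr);
            if (c >= 0) clients.push_back(c);
        }
        for (auto it = clients.begin(); it != clients.end();) {
            if (FD_ISSET(*it, &readfds)) {
                char buf[4096];
                ssize_t n = recv(*it, buf, sizeof(buf), 0);
                if (n <= 0) {  // client closed the connection or errored
                    close(*it);
                    it = clients.erase(it);
                    continue;
                }
                // ... handle the n bytes just read ...
            }
            ++it;
        }
    }
}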
I'm writing a game server for a turn-based game. One criteria is that the game needs to be as fair for all players as possible.
So far it works like this:
Each client has a TCP connection. (If relevant, the connection is opened via WebSockets)
While running, continually check for incoming socket messages via epoll.
Iterate through clients with sockets ready to read:
Read all messages from the client.
Update the internal game state for each message.
Queue outgoing messages to affected clients.
At the end of each "window" (turn):
Iterate through clients and write all queued outgoing messages to their sockets
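In rough code, the loop above looks something like this (apply_message, turn_over and flush_queued are stand-ins for my game logic):

// Linux epoll, level-triggered; client sockets already registered on epfd.
#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/types.h>

void apply_message(int fd, const char* data, ssize_t len) { /* update game state, queue replies */ }
bool turn_over() { return true; /* has the current window ended? */ }
void flush_queued() { /* write all queued outgoing messages */ }

void run_turns(int epfd) {
    epoll_event events[64];
    for (;;) {
        int ready = epoll_wait(epfd, events, 64, 50 /* ms */);
        for (int i = 0; i < ready; ++i) {
            int fd = events[i].data.fd;
            char buf[4096];
            ssize_t got;
            // Drain all messages currently available from this client.
            while ((got = recv(fd, buf, sizeof(buf), MSG_DONTWAIT)) > 0)
                apply_message(fd, buf, got);
        }
        if (turn_over())     // end of the current "window" (turn)
            flush_queued();  // write queued messages to every client
    }
}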
My concern for fairness raises the following questions:
Does it matter in which order I send messages to the clients?
Calling write() on all the sockets takes only a fraction of a second for my program, but somewhere in the underlying OS or networking would it make a difference if I sorted the client list?
Perhaps I should be sending to the highest-latency clients first?
Does it matter how I write the outgoing messages to the sockets?
Currently I'm writing them as one large chunk. The size can exceed a single packet.
Would it be faster for the client to begin its processing if I sent messages in smaller chunks than 1 packet?
Would it be better to write 1 packet worth to each client at a time, and iterate over the clients multiple times?
Are there any Linux/networking configurations that would have an impact here?
Thanks in advance for your feedback and tips.
Does it matter in which order I send messages to the clients?
Yes, by fractions of milliseconds. If the network interface is available for sending, the OS will immediately start sending. Why would it wait?
Perhaps I should be sending to the highest-latency clients first?
I think you should send in random order: shuffle the list prior to sending. This makes it fair. I think your question is valid and this should be addressed.
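For example, the send phase could shuffle the client list like this:

// Shuffle the client list before each send phase (C++ <random>).
#include <algorithm>
#include <random>
#include <vector>

void send_in_random_order(std::vector<int>& client_fds) {
    static std::mt19937 rng{std::random_device{}()};
    std::shuffle(client_fds.begin(), client_fds.end(), rng);
    for (int fd : client_fds) {
        // ... write this client's queued messages to fd ...
        (void)fd;
    }
}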
Currently I'm writing them as one large chunk. [...]
First, realize that TCP is stream-based and that there are no packets/messages at the protocol level. On a physical level data is indeed packetized.
It is not necessary to manually split off packets because clients will read data as it arrives anyway. If a client issues a read, that read will complete immediately once the first packet has arrived. There is no artificial waiting in the OS.
Are there any linux/networking configurations that would bear impact here?
I don't know. Be sure to disable Nagle's algorithm, though (the TCP_NODELAY socket option).
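That is, set TCP_NODELAY on each connected socket, for example:

// Disable Nagle's algorithm on a connected TCP socket (POSIX).
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

void disable_nagle(int fd) {
    int one = 1;
    // With TCP_NODELAY set, small writes go out immediately instead of
    // being coalesced while waiting for outstanding ACKs.
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}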
I'm weighing up how to implement a TCP based server (in C) - the server will accept a connection from a client, receive commands from the client, and then send the response. Pretty simple stuff - but the processing of the command must be done by another thread in the system, which introduces a bit of concurrency to the mix.
So I'm trying to decide whether to handle all TCP comms in one thread, using non-blocking sockets and select(), or to use blocking sockets and two separate comms threads (one for sending, one for receiving).
My concern about the latter is handling socket synchronisation: if I close the socket in the send thread, what happens in the receive thread (or vice versa), and how do I deal with this and clean up in the correct manner?
Any advice would be much appreciated.
You do not need separate receive and send threads for a client. When the client is accepted, create one thread that handles all of the I/O for that client, both receiving and sending (especially since you are implementing a command/response protocol). But if you do choose to use separate threads, closing a socket in one thread will cause detectable errors in the other thread that is using the same socket. Simply terminate each thread when a socket error occurs, and then decide which thread is going to be responsible for closing the socket.
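A sketch of such a per-client thread (POSIX sockets; the echo reply here stands in for your real command/response handling):

// One thread per accepted client, handling both receive and send.
// Spawned with e.g.: std::thread(client_thread, accepted_fd).detach();
#include <sys/socket.h>
#include <unistd.h>

void client_thread(int fd) {
    char buf[4096];
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n <= 0) break;                   // peer closed or socket error
        // ... parse the command, hand it to the processing thread,
        //     wait for the result, then send the response ...
        if (send(fd, buf, n, 0) < 0) break;  // echo stands in for a real reply
    }
    close(fd);  // this thread owns the socket, so it does the cleanup
}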
However, if you need to handle a high number of concurrent clients then threading is not the best choice. Asynchronous I/O using non-blocking sockets (or on Windows, using I/O Completion Ports) is better, as it requires a smaller number of threads.
Let's say I have a server with many clients connected via TCP; I have a socket for every client, and a sending and a receiving thread for every client. Is it safe and possible to call the send function from these threads at the same time, given that no two threads call send on the same socket?
If it's safe and OK, can I stream data to clients simultaneously without one client's blocking send holding up the others?
Thank you very much for your answers.
Yes, it is possible and thread-safe. You could have tested it, or worked out for yourself that IIS, SQL Server etc. wouldn't work very well if it wasn't.
Assuming this is Windows from the tag of "Winsock".
This design (having a send/receive thread for every single connected client) is, overall, not going to scale. Hopefully you are aware of that and know that you have an extremely limited number of clients (even then, I wouldn't write it this way).
You don't need to have a thread pair for every single client.
You can serve tons of clients with a single thread using non-blocking IO and read/write ready notifications (either with select() or one of the varieties of Overlapped IO such as completion routines or completion ports). If you use completion ports you can set a pool of threads to handle socket IO and queue the work for your own worker thread or threads/threadpool.
Yes, you can send to and receive from many sockets at once from different threads; but you shouldn't need those extra threads, because you shouldn't be making blocking calls to send/recv at all. When you make a non-blocking call, the amount that can be written immediately is written and the function returns; you then note how much was sent and ask for notification when the socket is next writable.
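For illustration, handling a partial non-blocking send with Winsock might look like this (Outbuf and try_send are made-up names):

// Write what the stack accepts, keep the remainder, and wait for the
// next writable notification before retrying.
#include <winsock2.h>
#include <vector>
#pragma comment(lib, "ws2_32.lib")

struct Outbuf {
    std::vector<char> data;
    size_t offset = 0;  // bytes already handed to the socket
};

// Returns true while unsent data remains (re-request writable notification).
bool try_send(SOCKET s, Outbuf& out) {
    while (out.offset < out.data.size()) {
        int n = send(s, out.data.data() + out.offset,
                     static_cast<int>(out.data.size() - out.offset), 0);
        if (n != SOCKET_ERROR) { out.offset += n; continue; }
        if (WSAGetLastError() == WSAEWOULDBLOCK)
            return true;   // send buffer full: wait until the socket is writable
        return false;      // real error: caller should close the socket
    }
    return false;          // everything has been sent
}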
I think you might want to consider a different approach as this isn't simple stuff; if you're using .Net you might get by with building this with TcpListener or HttpListener (both of which use completion ports for you), though be aware that you can't easily disable Nagle's algorithm with those so if you need interactivity (think of the auto-complete on Google's search page) then you probably won't get the performance you want.
I am writing a small HTTP server which uses the Microsoft Windows WinSock API.
Do I need to apply multithreaded logic when handling multiple users?
Currently, Windows sends a message when there is a network event, and each message carries (in wParam) the socket to be used in either send() or recv().
When client A connects and requests a couple of files, a number of sockets are usually created by Winsock. My server then gets a message saying "send this file to socket 123" and later "send that file to socket 456".
When another client connects, it too gets a few sockets, say 789 and 654.
My server then responds to requests to send data using the supplied socket number. It does not have to know who wants the file, since the correct file just has to be sent to the right socket.
I do not know whether Windows itself uses multiple threads when accepting connections and sending the messages down to my program.
So my question is:
Do I need to apply multithreaded logic when handling multiple users? And if so, at what point should I create a thread?
You typically use a thread per socket. If you are accepting connections, one thread sits in a loop, blocking while it waits for an incoming connection. You then create a new thread and pass this socket handle to the new thread to handle. When that connection is closed and done with, simply let that thread terminate (or join it). This is the basis of a threaded server.
In pseudocode:
loop {
    socket = accept();
    new ThreadHandler( socket )
}
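A runnable C++ equivalent of that sketch (POSIX sockets; handle_client is a placeholder for the per-connection logic):

// One detached thread per accepted client.
#include <sys/socket.h>
#include <unistd.h>
#include <thread>

void handle_client(int fd) {
    // ... recv/send loop for this connection ...
    close(fd);
}

void accept_loop(int listen_fd) {
    for (;;) {
        int client = accept(listen_fd, nullptr, nullptr);  // blocks for a connection
        if (client < 0) continue;
        std::thread(handle_client, client).detach();       // thread exits when done
    }
}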
Using a single thread to handle multiple sockets is tricky, mainly because the thread can block (stop and wait) while it is writing to, or more often, reading from a socket. It's not for the faint-hearted.
For most applications, there is no point in using multiple threads to handle network connections. I've made a small writeup in an answer to this question.
Multiple threads become useful when handling the received data requires an unpredictable amount of CPU time, for example in database servers, or when the program structure does not allow for requests to be handled asynchronously.
There is also a third option, the "worker pool". A single thread handles all incoming connections and deserializes incoming requests, and then passes off work items to a pool of threads that handle one item at a time.
This way, simply opening a connection does not yet consume the resources needed for an entire thread, and system load is implicitly limited by the number of threads in the pool.
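A minimal sketch of such a pool with standard C++ threading (simplified: no shutdown path):

// One network thread calls post(); a fixed pool of threads works off
// the queue one item at a time.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class WorkerPool {
public:
    explicit WorkerPool(size_t n) {
        for (size_t i = 0; i < n; ++i)
            threads_.emplace_back([this] { run(); });
    }
    void post(std::function<void()> item) {  // called by the network thread
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(item));
        }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this] { return !queue_.empty(); });
            auto item = std::move(queue_.front());
            queue_.pop();
            lock.unlock();
            item();  // system load is bounded by the pool size
        }
    }
    std::queue<std::function<void()>> queue_;
    std::mutex mutex_;
    std::condition_variable cv_;
    std::vector<std::thread> threads_;
};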