How to attach a GIOChannel to a different context instead of the default context? - sockets

I am writing a simple server and client application that uses sockets for communication.
On the client side I have created a GIOChannel for listening to socket events such as read, write, exception, etc. The client provides some asynchronous APIs.
I have written a sample application to test my code, which creates a GMainLoop and one GIOChannel for keyboard events:
mainloop = g_main_loop_new(NULL, FALSE);     /* NULL = default main context */
channel = g_io_channel_unix_new(0);          /* fd 0 = stdin (keyboard) */
g_test_io_watch_id = g_io_add_watch(channel,
        (GIOCondition)(G_IO_IN | G_IO_ERR | G_IO_HUP | G_IO_NVAL),
        test_thread, NULL);                  /* watch goes to the default context */
g_main_loop_run(mainloop);
It works fine as long as I don't loop or block the main thread in the callback function test_thread. For example, when I call an asynchronous API of the client, I put a sleep in my sample program for some time, expecting the asynchronous message from the server to arrive by the time the main thread wakes up. But this is not happening: the client socket gets the read event for the asynchronous message from the server only after the main thread that called the API returns.
From this I learned that both the keyboard events and the socket events are registered with the same default context, so they cannot be dispatched by the main event dispatcher at the same time.
I have to structure my program so that socket reading on the client side does not depend on the default context of the GMainLoop, so that synchronous and asynchronous work happen on separate threads.
I found some APIs in the GNOME docs, but they only add a GIOChannel to the default context. I need to add the GIOChannel created for socket reading to a different context.
Can anybody suggest how to do this, or is there a better option for handling socket reading asynchronously using GLib?
Thank you.
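One way to do this with plain GLib (shown below as a minimal, untested sketch; socket_loop_thread, on_socket_ready, and the way the channel is passed in are illustrative assumptions) is to create the watch source by hand with g_io_create_watch() instead of g_io_add_watch(), attach it to a private GMainContext, and run that context's GMainLoop in its own thread:

#include <glib.h>

static gboolean on_socket_ready(GIOChannel *ch, GIOCondition cond, gpointer data)
{
    /* read the asynchronous message from the server here */
    return TRUE;                                 /* keep the watch installed */
}

static gpointer socket_loop_thread(gpointer data)
{
    GIOChannel   *channel = data;
    GMainContext *context = g_main_context_new();        /* private context */
    GMainLoop    *loop    = g_main_loop_new(context, FALSE);

    /* g_io_add_watch() always targets the default context, so build the
       source by hand and attach it to the private context instead. */
    GSource *source = g_io_create_watch(channel,
            G_IO_IN | G_IO_ERR | G_IO_HUP | G_IO_NVAL);
    g_source_set_callback(source, (GSourceFunc) on_socket_ready, NULL, NULL);
    g_source_attach(source, context);
    g_source_unref(source);

    g_main_loop_run(loop);                                /* runs until quit */

    g_main_loop_unref(loop);
    g_main_context_unref(context);
    return NULL;
}

/* in the client setup code:
   g_thread_new("socket-loop", socket_loop_thread, socket_channel); */

With this, the keyboard watch keeps running in the default context on the main thread, while the socket watch is dispatched by the dedicated thread, so a blocked main thread no longer delays the socket read events.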

Related

Netty send event to sockets

I am building a socket web server with Netty 5.0. I came across the WebSocketServer example (https://github.com/netty/netty/tree/master/example/src/main/java/io/netty/example/http/websocketx/server).
But I can't understand how to send events to sockets from a separate thread. I have a thread that loads some data from an external resource every second; this is StockThread, which receives stock data. After receiving the data, the thread should send events to the sockets. What is the best practice for doing this?
I am using the following approach: inside StockThread I store a list of ChannelHandlerContext objects. After receiving data I just call the write() method of each ChannelHandlerContext, so write() is called from StockThread. Is that okay, or is there a more appropriate way to do this?
Yes, ChannelHandlerContext is thread-safe and can be cached, so this usage is completely fine.
See this note from the "Netty in Action" book, which backs this up:
You can keep the ChannelHandlerContext for later use, such as triggering an event outside the handler methods, even from a different Thread.

Winsock: Can I call the send function at the same time for different sockets?

Let's say I have a server with many clients connected via TCP, a socket for every client, and a sending and a receiving thread for every client. Is it safe and possible to call the send function at the same time from different threads, given that no two threads call send on the same socket?
If it's safe and OK, can I stream data to clients simultaneously without one client's send call blocking the others?
Thank you very much for the answers.
Yes, it is possible and thread-safe. You could have tested it, or worked out for yourself that it is: IIS, SQL Server, etc. wouldn't work very well if it wasn't.
I'm assuming this is Windows, given the "Winsock" tag.
This design (having a send/receive thread for every single connected client) is, overall, not going to scale. Hopefully you are aware of that and know that you have an extremely limited number of clients (even then, I wouldn't write it this way).
You don't need a thread pair for every single client.
You can serve tons of clients with a single thread using non-blocking IO and read/write-ready notifications (either with select() or one of the varieties of overlapped IO such as completion routines or completion ports). If you use completion ports you can set up a pool of threads to handle socket IO and queue the work for your own worker thread or threads/thread pool.
Yes, you can send and receive on many sockets at once from different threads, but you shouldn't need those extra threads because you shouldn't be making blocking calls to send/recv at all. When you make a non-blocking call, the amount that can be written immediately is written and the function returns; you then note how much was sent and ask for notification when the socket is next writable.
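As a rough illustration of that single-threaded, non-blocking style (a sketch only; conn_t, the queued out buffers, and service_once are assumptions, and error handling is trimmed):

#include <winsock2.h>
#include <string.h>

typedef struct {
    SOCKET s;
    char   out[4096];     /* data queued for this client */
    int    out_len;       /* how much of out[] is still unsent */
} conn_t;

void service_once(conn_t *conns, int count)
{
    fd_set readable, writable;
    FD_ZERO(&readable);
    FD_ZERO(&writable);

    for (int i = 0; i < count; i++) {
        u_long nonblocking = 1;                           /* normally set once, at setup */
        ioctlsocket(conns[i].s, FIONBIO, &nonblocking);   /* send/recv never block */
        FD_SET(conns[i].s, &readable);
        if (conns[i].out_len > 0)                         /* only ask for writability */
            FD_SET(conns[i].s, &writable);                /* when data is queued */
    }

    if (select(0, &readable, &writable, NULL, NULL) <= 0)
        return;

    for (int i = 0; i < count; i++) {
        if (FD_ISSET(conns[i].s, &readable)) {
            char buf[4096];
            int n = recv(conns[i].s, buf, sizeof(buf), 0);
            /* n == 0: peer closed; SOCKET_ERROR with WSAEWOULDBLOCK: try later */
        }
        if (FD_ISSET(conns[i].s, &writable) && conns[i].out_len > 0) {
            int n = send(conns[i].s, conns[i].out, conns[i].out_len, 0);
            if (n > 0) {
                /* send() wrote what it could and returned at once: note how
                   much went out and keep the rest for the next notification */
                memmove(conns[i].out, conns[i].out + n, conns[i].out_len - n);
                conns[i].out_len -= n;
            }
        }
    }
}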
I think you might want to consider a different approach, as this isn't simple stuff. If you're using .NET you might get by with building this on TcpListener or HttpListener (both of which use completion ports for you), though be aware that you can't easily disable Nagle's algorithm with those, so if you need interactivity (think of the auto-complete on Google's search page) you probably won't get the performance you want.

Does the Winsock API multithread automatically?

I am writing a small HTTP server using the Microsoft Windows WinSock API.
Do I need to apply multithreaded logic when handling multiple users?
Currently, Windows sends a message when there is a network event, and each message carries (in wParam) the socket to be used in either send() or recv().
When client A connects and requests a couple of files, usually a number of sockets are created by Winsock. My server then gets a message to "send this file to socket 123" and later "send that file to socket 456". When another client connects, it too gets a few sockets, say 789 and 654.
My server then responds to requests to send data using the supplied socket number. It does not have to know who wants the file, since the correct file has to be sent to the right socket.
I do not know whether Windows itself uses multiple threads when accepting connections and sending the messages down to my program.
So my question is: do I need to apply multithreaded logic when handling multiple users? And if so, at what point should I create a thread?
You typically use a thread per socket, and if you are accepting connections, a thread that loops and blocks waiting for an incoming connection socket. You then create a new thread and pass the accepted socket handle to it to handle. When that connection is closed and done with, simply let that thread terminate (or join it). This is the basis of a threaded server.
In pseudo code:
loop {
    socket = accept();
    new ThreadHandler(socket);
}
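Fleshed out as a minimal Winsock sketch (handle_client and the echo loop inside it are placeholders; error handling, WSAStartup, and shutdown are omitted):

#include <winsock2.h>
#include <windows.h>

static DWORD WINAPI handle_client(LPVOID arg)
{
    SOCKET client = (SOCKET)(ULONG_PTR)arg;     /* socket handed to this thread */
    char buf[4096];
    int n;
    while ((n = recv(client, buf, sizeof(buf), 0)) > 0)
        send(client, buf, n, 0);                /* echo back as a stand-in for real work */
    closesocket(client);
    return 0;                                   /* thread simply terminates here */
}

void accept_loop(SOCKET listener)
{
    for (;;) {
        SOCKET client = accept(listener, NULL, NULL);   /* blocks waiting for a client */
        if (client == INVALID_SOCKET)
            break;
        HANDLE t = CreateThread(NULL, 0, handle_client,
                                (LPVOID)(ULONG_PTR)client, 0, NULL);
        if (t) CloseHandle(t);                  /* detach; the thread cleans up after itself */
    }
}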
Using a single thread to handle multiple sockets is tricky, mainly because the thread can block (stop, waiting) while it's writing, or more often, reading from a socket. It's not for the faint-hearted.
For most applications, there is no point in using multiple threads to handle network connections. I've made a small writeup in an answer to this question.
Multiple threads become useful when handling the received data requires an unpredictable amount of CPU time, for example in database servers, or when the program structure does not allow for requests to be handled asynchronously.
There is also a third option, the "worker pool". A single thread handles all incoming connections and deserializes incoming requests, and then passes off work items to a pool of threads that handle one item at a time.
This way, simply opening a connection does not yet consume the resources needed for an entire thread, and system load is implicitly limited by the number of threads in the pool.
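A compact sketch of that worker-pool idea (request_t, the queue functions, and the commented-out handle_request() are placeholders; a LIFO list is used purely for brevity, and error handling is omitted):

#include <windows.h>

typedef struct request { struct request *next; /* parsed request data */ } request_t;

static CRITICAL_SECTION   q_lock;
static CONDITION_VARIABLE q_nonempty;
static request_t         *q_head;

void queue_request(request_t *req)              /* called by the network thread */
{
    EnterCriticalSection(&q_lock);
    req->next = q_head;
    q_head = req;
    LeaveCriticalSection(&q_lock);
    WakeConditionVariable(&q_nonempty);         /* wake one sleeping worker */
}

static DWORD WINAPI worker(LPVOID unused)       /* one of N pool threads */
{
    for (;;) {
        EnterCriticalSection(&q_lock);
        while (q_head == NULL)
            SleepConditionVariableCS(&q_nonempty, &q_lock, INFINITE);
        request_t *req = q_head;
        q_head = req->next;
        LeaveCriticalSection(&q_lock);
        /* handle_request(req): the CPU-heavy work happens here, off the
           network thread, and at most N requests are processed at once */
    }
    return 0;
}

void start_pool(int n)
{
    InitializeCriticalSection(&q_lock);
    InitializeConditionVariable(&q_nonempty);
    for (int i = 0; i < n; i++)
        CloseHandle(CreateThread(NULL, 0, worker, NULL, 0, NULL));
}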

ConnectEx with IOCP problem

I've made a simple dummy server/dummy client program using IOCP for testing/profiling purposes. (I should also note that I'm new to asynchronous network programming.)
The server seems to work well with the original client, but when the dummy client tries to connect to the server with the ConnectEx function, the IOCP worker thread stays blocked in GetQueuedCompletionStatus and never returns a result, even though the server succeeds in accepting the connection.
What is the problem and/or the reason, and what should I do to solve it?
I think you answer your own question with your comment.
Your sequence of events is incorrect: you say that you bind, call ConnectEx, and then associate the socket with the IOCP.
You should bind, associate the socket with the IOCP, and THEN call ConnectEx.
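In code, that order looks roughly like this (a sketch only; g_iocp, start_connect, and the use of the socket as completion key are assumptions, and error handling is trimmed):

#include <winsock2.h>
#include <mswsock.h>

BOOL start_connect(HANDLE g_iocp, const struct sockaddr_in *remote, OVERLAPPED *ov)
{
    SOCKET s = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP,
                         NULL, 0, WSA_FLAG_OVERLAPPED);

    /* 1. ConnectEx requires the socket to be bound first */
    struct sockaddr_in local;
    ZeroMemory(&local, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = INADDR_ANY;
    local.sin_port = 0;
    bind(s, (struct sockaddr *)&local, sizeof(local));

    /* 2. Associate the socket with the completion port BEFORE calling
          ConnectEx, so the completion is queued to your worker threads */
    CreateIoCompletionPort((HANDLE)s, g_iocp, (ULONG_PTR)s, 0);

    /* 3. ConnectEx is an extension function: fetch its pointer via WSAIoctl */
    LPFN_CONNECTEX pfnConnectEx = NULL;
    GUID guid = WSAID_CONNECTEX;
    DWORD bytes = 0;
    WSAIoctl(s, SIO_GET_EXTENSION_FUNCTION_POINTER,
             &guid, sizeof(guid), &pfnConnectEx, sizeof(pfnConnectEx),
             &bytes, NULL, NULL);

    /* 4. The connect completion now arrives via GetQueuedCompletionStatus */
    ZeroMemory(ov, sizeof(*ov));
    BOOL ok = pfnConnectEx(s, (const struct sockaddr *)remote, sizeof(*remote),
                           NULL, 0, NULL, ov);
    return ok || WSAGetLastError() == ERROR_IO_PENDING;
}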
Even after you associate your accepted socket with the IOCP, your worker thread will remain blocked on GetQueuedCompletionStatus until you post an "unlocking" completion event.
Completion events for receive/write operations won't be sent by the system unless you "unlock" your new socket.
For details, check the source code of Push Framework (http://www.pushframework.com). It is a C++ network application framework using IOCP.
The "unlocking" trick exists in the "IOCPQueue" class.

What is a blocking and a non-blocking web server, and what is the difference between them?

I have seen that many web frameworks provide a non-blocking web server; I just want to know what that means.
A blocking web server is similar to a phone call: you need to wait on the line to get a response and then continue. A non-blocking web server is like an SMS service: you send your request, go do your own things, and react when you receive an SMS back!
Using a blocking socket, execution will wait (i.e. "block") until the full socket operation has taken place, so you can process any results/responses in your code immediately afterwards. These are also called synchronous sockets.
A non-blocking socket operation will allow execution to resume immediately and you can handle the server's response with a callback or event. These are called asynchronous sockets.
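In C on a POSIX system the difference looks roughly like this (a sketch; Winsock uses ioctlsocket(FIONBIO) and WSAEWOULDBLOCK for the same idea):

#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>

void example(int sock)
{
    char buf[1024];

    /* Blocking (the default): recv() does not return until data arrives,
       so the calling thread can do nothing else in the meantime. */
    ssize_t n = recv(sock, buf, sizeof(buf), 0);

    /* Non-blocking: recv() returns immediately. If nothing is available it
       fails with EWOULDBLOCK/EAGAIN, and you come back when select()/poll()
       reports the socket as readable (or when your framework fires a callback). */
    fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);
    n = recv(sock, buf, sizeof(buf), 0);
    if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN)) {
        /* no data yet: go do other work and handle the data later */
    }
}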
Non-blocking generally means event-driven: multiplexing all activity via an event-driven system in a single thread, as opposed to using multiple threads.