Using Sockets with NSXPCConnection - tcpserver

Running into an issue when using sockets with an NSXPCConnection.
Basically, there is a main process and a helper process, connected via NSXPCConnection. The helper process needs to act as a server and listen on a particular port (say 111) that receives outside connections.
The helper process opens a listening socket using the TCPServer helper class (wrapper around CFSocket) which is provided by Apple. Code found here:
https://code.google.com/p/iphone-remotepad/source/browse/trunk/RemotePad/TCPServer.h?r=238
The socket is opened successfully in - (BOOL)start:(NSError **)error.
Outside clients can establish a connection to port 111 (tested in Terminal via telnet localhost 111).
However, the helper process never receives the TCPServer callback TCPServerAcceptCallBack.
The helper process has com.apple.security.network.client entitlement enabled.
Also, when I run the TCPServer in the main app instead of the helper process, set up the server on port 111, and try to connect to port 111, I do get the callback.
Any ideas why the helper process does not receive the socket callback? Is this an XPC-related issue?

OK, figured out the issue.
An XPC service provides you with a default run loop of type dispatch_main.
You want to substitute that with an NSRunLoop, which is done by setting the RunLoopType key (to NSRunLoop) in the XPCService dictionary of the service's Info.plist, as described in the Apple documentation:
https://developer.apple.com/library/mac/documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/CreatingXPCServices.html
Once that is done, you need to manually run a run loop inside your XPC service, along the lines of:
do {
    @autoreleasepool {
        // Keep the run loop alive so CFSocket callbacks are delivered.
        [[NSRunLoop currentRunLoop] run];
    }
} while (YES);
With that in place, the TCPServer (which needs an active run loop) will deliver the callback and you'll be able to receive the incoming data.

Related

How to always gracefully disconnect sockets on server kill?

In our Flask-SocketIO app, we have a socket.on("disconnect") handler that is called whenever a socket client disconnects, to handle the DB state updates. However, when our server is killed due to a restart or a crash, this disconnect handler cannot be called (since the server is transiently nonexistent), and the event is discarded. When the server is back up, all of those socket disconnects from the frontends can never be processed properly, so the state is inconsistent.
Is there a way to "cache" these disconnect events to run when the server is back up? Ideally, all of the sockets would also reconnect automatically, but currently we do this disconnect-then-reconnect manually. Our setup is Gunicorn Flask threads load-balanced by Nginx, with a Redis message queue for Flask-SocketIO.
You should register a process signal handler:
import signal

def handler(signum, frame):
    # Loop over the connected clients and handle each socket disconnect here.
    pass

# Register the handler for the termination signal sent when the server is killed.
signal.signal(signal.SIGTERM, handler)
The best way to handle this is to not force-kill your server. Instead, handle a signal such as SIGINT and disconnect all clients in a corresponding signal handler.
Another alternative is to keep track of your connected clients in a database (redis, for example). If the application is killed and restarted, it can go through those users and perform any necessary cleanup before starting a new server instance.
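Not from the original answer, but here is a minimal sketch of that second approach, assuming a hypothetical Redis set named connected_sids and a placeholder cleanup_disconnect() helper standing in for the existing disconnect logic:

import redis

r = redis.Redis()

def cleanup_disconnect(sid):
    # Placeholder for the DB state update normally done in the
    # socket.on("disconnect") handler.
    pass

def on_connect(sid):
    # Record the client in Redis so the information survives a crash/restart.
    r.sadd("connected_sids", sid)

def on_disconnect(sid):
    cleanup_disconnect(sid)
    r.srem("connected_sids", sid)

def recover_after_restart():
    # At startup, every sid still recorded in Redis is a client that was
    # never cleanly disconnected; clean each one up now.
    for sid in r.smembers("connected_sids"):
        cleanup_disconnect(sid.decode())
        r.srem("connected_sids", sid)

Call recover_after_restart() once when the server process starts, before it begins accepting new Socket.IO connections.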

How can two Unicorn servers bind to the same Unix socket?

This (rather old) article seems to suggest that two Unicorn master processes
can bind to the same Unix socket path:
When the old master receives the QUIT, it starts gracefully shutting down its workers. Once
all the workers have finished serving requests, it dies. We now have a fresh version of our
app, fully loaded and ready to receive requests, without any downtime: the old and new workers
all share the Unix Domain Socket so nginx doesn’t have to even care about the transition.
Reading around, I don't understand how this is possible. From what I understand, to truly have zero
downtime you have to use SO_REUSEPORT to let the old and new servers temporarily be bound to the
same socket. But SO_REUSEPORT is not supported on Unix sockets.
(I tested this by binding to a Unix socket path that is already in use by another server, and I got
an EADDRINUSE.)
So how can the configuration that the article describes be achieved?
Nginx forwards HTTP requests to a Unix socket.
Normally a single Unicorn server accepts requests on this socket and handles them (fair enough).
During redeployment, a new Unicorn server begins to accept requests on this socket and handles them, while the old server is still running (how?)
My best guess is that the second server calls unlink on the socket file immediately before calling bind with the same socket file, so in fact there is a small window where no process is bound to the socket and a connection would be refused.
Interestingly, if I bind to a socket file and then immediately delete the file, the next connection to the socket actually gets accepted. The second and subsequent connections are refused with ENOENT as expected. So maybe the kernel covers for you somewhat while one process is taking control of a socket that was bound by another process. (This is on Linux BTW.)
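A minimal sketch of that experiment on Linux (the /tmp path is arbitrary): an already-listening Unix socket keeps its established connections after the path is unlinked, while a second server can immediately bind a fresh socket to the same path:

import os
import socket

PATH = "/tmp/unicorn_demo.sock"   # arbitrary path for this experiment

# Old server: bind and listen on the Unix socket path.
old = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
old.bind(PATH)
old.listen(5)

# Binding a second socket to the same path now fails with EADDRINUSE,
# but after unlinking the path a new server can bind it again while the
# old socket keeps serving the connections it has already accepted.
os.unlink(PATH)
new = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
new.bind(PATH)    # succeeds: the path is free again
new.listen(5)

# Any client that connects from now on reaches the new socket.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(PATH)

Between the unlink() and the new bind() there is still a brief window in which connect() fails with ENOENT, which matches the small gap described above.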

socket.py not creating listener on server

I set variables host and port instead of setting the 'address' variable tuple in socket.py. I was unable to get 'address' as a tuple to work. I do not believe this is the issue, but I thought I should state this up front.
FYI, my goal is an integrations project, and I believe I have isolated socket.py as the problematic code. socket.py is not creating a listener on the remote server. I run the Python script on my client, and my server address is 192.168.1.130, port 7879.
I think socket.py is the problem because I do not receive the expected print statements back through the console indicating that socket.py is attempting to create a socket. In addition, I can RDC to the server, disable ufw (yes, I know this is a bad idea), create a TCP listener, push data through the client socket to the server socket, and verify this with netcat.
Am I mistaken that I should be able to parameterize socket.py with nothing more than a host and port and be able to create a socket connection? I am happy to provide more detail from logs, but I thought I should start with a very high level overview.
Answer: More investigation was needed. I think socket.py does not create the remote connection with socket(), bind(), listen() statements; instead, it simply looks for a listener on the remote server with a connect() statement. This was entirely my misunderstanding, given that I did not dive into the details of the socket.py code. I figured this out because the service running on the remote server is what creates the listener, and that service on the remote server is what is not starting properly.
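For reference, a minimal sketch of the two roles (the 192.168.1.130:7879 address is taken from the question; everything else is illustrative): connect() only reaches out to an existing listener, while socket()/bind()/listen() is what actually creates one.

import socket

HOST, PORT = "192.168.1.130", 7879

def run_server():
    # This must run on the remote machine for a listener to exist at all.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    print("accepted connection from", addr)
    conn.close()
    srv.close()

def run_client():
    # The client never creates a listener on the remote side; it only
    # connects to one that is already there.
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect((HOST, PORT))
    cli.sendall(b"hello")
    cli.close()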

Sockets on a webhost

If you telnet to the ip address 192.43.244.18 port 13, you'll get the current time.
Well, if I'm not wrong, this is simply a server socket. But there's one strange thing: how is this socket always listening?
If I take a PHP page and program sockets in there, I still have to request the page first in order to activate the server socket, but this one isn't associated with any page, and even if I make a Perl script, I still have to request it in order to run the server socket!
My question is: how can I make such a thing - an always listening socket - on a webhost (any language will do)?
You can run the process that's listening on the socket as a daemon (Linux) or service (Windows), or just a regular program really (although that's less elegant).
A simple place to begin would be http://docs.oracle.com/javase/tutorial/networking/sockets/clientServer.html which teaches you how to make a simple server socket in Java that listens for a connection on a specific port. The program created will have to be running at all times to be able to accept the connections.
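For illustration only (not part of the linked tutorial), here is an equivalent always-listening server sketched in Python; run it as a daemon or service and it answers every connection with the current time, much like the port-13 example in the question (port 1313 here is arbitrary, since port 13 needs root):

import socket
import time

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("", 1313))   # listen on all interfaces, arbitrary port
srv.listen(5)

while True:
    conn, addr = srv.accept()
    # Answer every connection with the current time, like a daytime server.
    conn.sendall(time.ctime().encode() + b"\r\n")
    conn.close()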

Is there a way to wait for a listening socket on win32?

I have a server and client program on the same machine. The server is part of an application- it can start and stop arbitrarily. When the server is up, I want the client to connect to the server's listening socket. There are win32 functions to wait on file system changes (ReadDirectoryChangesW) and registry changes (RegNotifyChangeKeyValue)- is there anything similar for network changes? I'd rather not have the client constantly polling.
There is no such Win32 API; however, this can easily be accomplished by using an event. The client would wait on that event to be signaled, and the server would signal the event when it starts up.
The relevant APIs you will need are CreateEvent, OpenEvent, SetEvent, ResetEvent and WaitForSingleObject.
If your server will run as a service, then for Vista and up it will run in session 0 isolation. That means you will need to use an event with a name prefixed with "Global\".
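Not from the original answer, but a minimal sketch of the named-event idea, written in Python via ctypes for brevity (the event name "Global\MyServerReady" is hypothetical; the same calls exist in C/C++):

import ctypes
from ctypes import wintypes

kernel32 = ctypes.windll.kernel32
kernel32.CreateEventW.restype = wintypes.HANDLE
kernel32.OpenEventW.restype = wintypes.HANDLE

EVENT_NAME = r"Global\MyServerReady"   # hypothetical event name
EVENT_ALL_ACCESS = 0x1F0003
INFINITE = 0xFFFFFFFF

def server_signal_ready():
    # Server: create the named manual-reset event and signal it at startup.
    handle = kernel32.CreateEventW(None, True, False, EVENT_NAME)
    kernel32.SetEvent(handle)
    return handle

def client_wait_for_server():
    # Client: open (or create) the same named event and block until signaled.
    handle = kernel32.OpenEventW(EVENT_ALL_ACCESS, False, EVENT_NAME)
    if not handle:
        handle = kernel32.CreateEventW(None, True, False, EVENT_NAME)
    kernel32.WaitForSingleObject(handle, INFINITE)

Note that a manual-reset event stays signaled; the server should call ResetEvent on shutdown so a client does not see a stale signal.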
You probably do have a good reason for needing this, but before you implement this please consider:
Is there some reason you need to connect right away? I see this as a non-issue, because when you perform an action in the client, you can make a new server connection at that point.
Is the server starting and stopping more frequently than the client? You could switch the roles of who listens and who connects.
Consider using some form of Windows synchronization, such as a semaphore. The client can wait on the synchronization primitive and the server can signal it when it starts up.
Personally I'd use a UDP broadcast from the server and have the "client" listening for it. The server could broadcast a UDP packet every X period whilst running and when the client gets one, if it's not already connected, it could connect.
This has the advantage that you can move the client onto a different machine without any issues (and since the main connection from client to server is sockets already it would be a pity to tie the client and server to the same machine simply because you selected a local IPC method for the initial bootstrap).
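Again not from the answer itself, but a minimal sketch of the broadcast idea in Python, assuming an arbitrary discovery port of 50000:

import socket
import time

DISCOVERY_PORT = 50000   # arbitrary port for the announcements

def server_announce_loop(period=2.0):
    # Server: periodically broadcast a small "I'm alive" datagram.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        s.sendto(b"SERVER_UP", ("255.255.255.255", DISCOVERY_PORT))
        time.sleep(period)

def client_wait_for_announcement():
    # Client: block until an announcement arrives, then connect the real
    # TCP client to the sender's address.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", DISCOVERY_PORT))
    data, (host, _port) = s.recvfrom(1024)
    return host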