I'm writing a client-server application for OS X. The service needs to run forever, or at least as close as possible. :-)
In the past I have used "classic" Distributed Objects quite successfully with an Objective-C application, but this time I wanted to use Swift and the shiny new IPC technology, XPC!
So, here's my question:
When I create an XPC Mach service (it needs root privileges) and start it via launchd, the process appears to restart for every new incoming connection. I have written launchd-started services before and never had this problem. Is there something specific to XPC that causes this?
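For reference, here is roughly how my listener side is set up (a minimal sketch; the protocol, class, and service name below are placeholders for my real ones):

    import Foundation

    // Placeholder protocol and implementation for illustration.
    @objc protocol MyServiceProtocol {
        func ping(reply: @escaping (String) -> Void)
    }

    class MyService: NSObject, MyServiceProtocol {
        func ping(reply: @escaping (String) -> Void) { reply("pong") }
    }

    class ServiceDelegate: NSObject, NSXPCListenerDelegate {
        func listener(_ listener: NSXPCListener,
                      shouldAcceptNewConnection newConnection: NSXPCConnection) -> Bool {
            newConnection.exportedInterface = NSXPCInterface(with: MyServiceProtocol.self)
            newConnection.exportedObject = MyService()
            newConnection.resume()
            return true
        }
    }

    let delegate = ServiceDelegate()
    let listener = NSXPCListener(machServiceName: "com.example.myservice")
    listener.delegate = delegate
    listener.resume()
    RunLoop.main.run()   // keep the process alive between connections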
My preference is to use a high-level IPC mechanism instead of something more fundamental like Unix domain sockets, but I'm happy to drop down to that level if necessary.
Given the server-client model, would the OS initiate messages to applications, or is message passing always initiated by programs that want to use resources and thus must communicate with the OS?
OS is an overloaded term, and application is a vague term.
A pure message-passing OS might implement traditional (Unix) system calls as applications. For example, you might have an application called FileSystem which accepts messages like Read, Write, Open, Close, and so on. In such a scheme, this application would be considered a server, and the client would be any application that wanted to use the file services.
Pure message-passing systems typically have difficulty with asynchronous events. When you look at implementing a normal read system call in a message-passing system, it is natural that it will be an RPC: the client sends a read request, then suspends until the server has satisfied the read and sent a reply.
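As a toy sketch (in Swift, with hypothetical send/receive primitives standing in for whatever the kernel provides), such a read might look like:

    // Toy sketch of read(2) as an RPC in a message-passing system.
    struct ReadRequest { let fd: Int; let count: Int }
    struct ReadReply   { let data: [UInt8] }

    // Hypothetical transport primitives; a real system would provide these.
    func send(to server: String, _ request: ReadRequest) {
        // would enqueue the request on the server's message port
    }
    func receive(from server: String) -> ReadReply {
        // would block the caller until the server replies
        fatalError("stub")
    }

    func read(fd: Int, count: Int) -> [UInt8] {
        send(to: "FileSystem", ReadRequest(fd: fd, count: count))
        let reply = receive(from: "FileSystem")   // client suspends here
        return reply.data
    }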
When the client wants asynchronous notification, such as "send me a message whenever there are new mouse events available", the RPC model somewhat falls down. While purely asynchronous systems exist, they are cumbersome to use from plain old programming languages like C and C++. There is hope that message-based languages like Go can break the impasse, but that remains to be seen.
Higher-level OS-like services may employ a number of interaction methods quite distinct from client-server. Publish-subscribe, a more recent reimplementation of 1980s multicast, has been popular in the last decade. Clients subscribe to a set of channels they are interested in, and every event delivered to a channel is copied to every client subscribed to that channel before it is retired. Normal clients can generate events as well, so the mechanism serves as a dynamic interconnect between modules.
D-Bus and ZeroMQ are publish-subscribe systems of differing scales. Note that both can be implemented outside of a message-passing OS.
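To illustrate the pattern itself, independent of any cross-process transport, here is a minimal in-process sketch:

    // Minimal in-process publish-subscribe sketch: every event published
    // on a channel is copied to every subscriber of that channel.
    final class Bus {
        private var subscribers: [String: [(String) -> Void]] = [:]

        func subscribe(channel: String, handler: @escaping (String) -> Void) {
            subscribers[channel, default: []].append(handler)
        }

        func publish(channel: String, event: String) {
            subscribers[channel]?.forEach { $0(event) }
        }
    }

    let bus = Bus()
    bus.subscribe(channel: "mouse") { print("client A saw: \($0)") }
    bus.subscribe(channel: "mouse") { print("client B saw: \($0)") }
    bus.publish(channel: "mouse", event: "moved to (10, 20)")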
I would like to implement a server app in Swift with two listening (server) sockets exposing different sets of functionality. The app itself is a kind of smart proxy and transformer of data between the two sockets, with a little persistence thrown in. I have tried to prototype the app in Vapor, but multiple server sockets in a single app seem to be an unexpected use case. The docs suggest that the Application container is a singleton, it has a single runningServer property, and it takes a single Services struct where a Router is registered, whereas I would need practically two separate routers. I have some ideas, involving either multiple Vapor Applications inside "my application" or some use of subContainers, though I don't know exactly how the service-registry sharing works. Or maybe something around Providers, which I don't fully understand either. In any case, it feels like I'm fighting against a clean design decision of "1 app = 1 server". Anyone prove me wrong?
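For what it's worth, the "multiple Applications" idea I'm toying with would look roughly like this (a sketch against the Vapor 3 API as I understand it; the ports and routes are placeholders):

    import Vapor

    // Sketch only: build one Application per listening port.
    func makeApp(port: Int, routes: (Router) throws -> Void) throws -> Application {
        var services = Services.default()
        let router = EngineRouter.default()
        try routes(router)
        services.register(router, as: Router.self)
        services.register(NIOServerConfig.default(port: port))
        return try Application(config: .default(), environment: .detect(), services: services)
    }

    let controlApp = try makeApp(port: 8081) { router in
        router.get("status") { _ in "ok" }
    }
    let dataApp = try makeApp(port: 8080) { router in
        router.get("data") { _ in "hello" }
    }

    // app.run() blocks, so one of them has to live on another thread.
    Thread.detachNewThread { try! controlApp.run() }
    try dataApp.run()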
Is there a conventional way to write a program so that commands can be issued to it from the command line, without a REPL? For example, the way you can send commands to a running nginx server with sudo /etc/init.d/nginx restart (or any other valid command besides restart).
One idea I had was to have the long-running program create and monitor a Unix socket that other programs can write to in order to send it commands. Another was to run a local server with a REST interface and send commands that way, though that seems a bit gross.
What's the right way to do this?
Both ways are OK, and you could even consider using some RPC machinery, such as making your application serve JSONRPC on some unix(7) socket. Or use a fifo(7). Or use D-Bus.
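For instance, the unix socket route might look like this sketch in Swift (the socket path and the command vocabulary are made up; a real program would also want error handling and concurrency):

    import Foundation

    // Sketch of a line-oriented command listener on a unix(7) socket.
    let sockPath = "/tmp/myapp.sock"   // hypothetical path
    unlink(sockPath)                   // remove a stale socket from a previous run

    let fd = socket(AF_UNIX, SOCK_STREAM, 0)
    precondition(fd >= 0, "socket() failed")

    var addr = sockaddr_un()
    addr.sun_family = sa_family_t(AF_UNIX)
    _ = withUnsafeMutableBytes(of: &addr.sun_path) { dst in
        sockPath.utf8CString.withUnsafeBytes { src in
            memcpy(dst.baseAddress!, src.baseAddress!, min(dst.count - 1, src.count))
        }
    }
    let len = socklen_t(MemoryLayout<sockaddr_un>.size)
    let rc = withUnsafePointer(to: &addr) {
        $0.withMemoryRebound(to: sockaddr.self, capacity: 1) { bind(fd, $0, len) }
    }
    precondition(rc == 0 && listen(fd, 8) == 0, "bind()/listen() failed")

    while true {                       // one short-lived connection per command
        let client = accept(fd, nil, nil)
        guard client >= 0 else { continue }
        var buf = [UInt8](repeating: 0, count: 1024)
        let n = read(client, &buf, buf.count)
        if n > 0, let cmd = String(bytes: buf[0..<n], encoding: .utf8)?
            .trimmingCharacters(in: .whitespacesAndNewlines) {
            switch cmd {
            case "reload": print("reloading configuration")   // made-up commands
            case "stop":   close(client); close(fd); exit(0)
            default:       print("unknown command: \(cmd)")
            }
        }
        close(client)
    }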
A common habit on Unix is to have applications reload their configuration files on, e.g., the SIGHUP signal, and save some persistent state (before terminating) on SIGTERM. Read signal(7) (and notice that only async-signal-safe routines can be called from signal handlers; a good approach is to only set some volatile sig_atomic_t variable inside the handler and test it outside). See also the POSIX signal.h documentation.
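As an illustration, on Dispatch-based systems (macOS) the same idea can be written without raw handlers at all; a Swift sketch, with the reload/save logic reduced to prints:

    import Foundation

    // SIGHUP reloads configuration, SIGTERM saves state and exits.
    // Dispatch signal sources run as ordinary code on a queue, so the
    // async-signal-safety restrictions on raw handlers do not apply.
    signal(SIGHUP, SIG_IGN)    // let the dispatch source, not a handler, consume it
    signal(SIGTERM, SIG_IGN)

    let hup = DispatchSource.makeSignalSource(signal: SIGHUP, queue: .main)
    hup.setEventHandler { print("reloading configuration file") }
    hup.resume()

    let term = DispatchSource.makeSignalSource(signal: SIGTERM, queue: .main)
    term.setEventHandler {
        print("saving persistent state, then exiting")
        exit(0)
    }
    term.resume()

    dispatchMain()   // park the main thread and service the queue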
You might make your application a specialized HTTP server (e.g. using some HTTP server library like libonion) and give it a Web interface (or REST, or SOAP, ...); the user (or sysadmin) would then use a browser to interact with your application.
You could make your server systemd-compatible. (I don't know exactly what that requires; it is perhaps D-Bus related.)
You could embed some command interpreter (like Guile or Lua) in your app and run a limited kind of REPL over some IPC channel like a socket or a fifo. Beware of nasty code injection.
I had a similar issue: I have a plethora of services running on any number of machines, and each needs to communicate with several others.
My main problem was not so much the communication between the services. That can be done with a simple message sent over a connection (as Basile mentioned, it can be TCP, UDP, Unix sockets, FIFOs...). However, when you have over 20 services, many of which need to communicate with several others, getting all the connections right becomes a headache (I have such a system, and even with a relatively limited number of services, around 10, it is already very complicated).
So I created a process (yet another service) called Communicator. All services connect to the Communicator service, and when they need to send a message, they include the name of the service they want to reach. The Communicator service is in charge of delivering the message to the right place; that could be another Communicator service running on a different computer. Communicator has a graph of all the services available on your network and knows how to send messages to them without your service having to know anything about all of that. (Computing such a graph can be really complex.)
For this purpose I created the eventdispatcher project. It is in C++, which may not be what you're interested in, although you could use it from other languages that interface with C/C++. The structure of the messages is "proprietary" (specific to the Communicator), but you can create any message you want. A message includes a name and parameters (param-name=value). The first version had a simple one-line text communication system; the newer version accepts JSON as well (though each message must still be one line of text).
The system supports TCP, UDP, Unix sockets, and FIFOs, and between threads you can have thread-safe FIFOs. It also understands signals (like SIGHUP, SIGTERM, etc.) and has a specific connection type to listen for the death of a thread. It supports encryption over TCP via OpenSSL. Messages can be dispatched automatically (hence the current name of the library). Connections are assigned a timer. And there are CUI and GUI (Qt) extensions as well.
The one main point here is that all your connections can be polled (see poll()), so you can implement a system that reacts to events instead of one that sleeps, checks for events, sleeps again, and so on; or worse, one with a single blocking connection where everything has to happen on that one connection or your service gets stuck. (This is one reason Unix has relied on signals for so long: early versions of Unix had neither select() nor poll().)
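In code, the reactive shape of such a loop looks roughly like this (a Swift sketch with the accept/read details elided):

    import Foundation

    // Sketch of an event loop over many descriptors with poll(2):
    // the process sleeps until one of them has something to say.
    func eventLoop(listenFd: Int32, clientFds: [Int32]) {
        var fds = ([listenFd] + clientFds).map {
            pollfd(fd: $0, events: Int16(POLLIN), revents: 0)
        }
        while true {
            let ready = poll(&fds, nfds_t(fds.count), -1)   // -1: block indefinitely
            guard ready > 0 else { continue }
            for p in fds where p.revents & Int16(POLLIN) != 0 {
                if p.fd == listenFd {
                    // accept(2) the new connection and add it to fds
                } else {
                    // read(2) a message from this connection
                }
            }
        }
    }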
I am busy creating a system where various PCs communicate with each other over the internet. The way it works at the moment, each PC is a client and logs onto the server. Currently a normal Java program runs on my own server with listening sockets, handling incoming requests and relaying information between the connected PCs. My question is: is this a proper way of doing it? Should I rather change the app to a service, or should I use something like a web service? Also, is it fine to use TCP sockets for the communication?
If I don't want to run the program on my own server, what type of company can offer me such a service where I can run my own apps?
I want to expand the current setup to a larger scale, so I want to make sure I am using good practices and keep hackers out.
I'm looking for advice on the best way to implement some kind of bi-directional communication between a "server-side" application, written in Objective-C and running on a mac, and a client application running on an iPhone.
To cut a long story short, I'm adapting an existing library for use in a client-server environment. The library (which runs on the server) is basically a search engine which provides periodic results, and can additionally provide updates for any of those results at a later date. In an ideal world, therefore, I would be able to achieve the following with my hypothetical networking solution:
Start queries on the server.
Have the server "push" results to the client as they arrive.
Have the server "push" updates to individual results to the client as they arrive.
If I were writing this client to run on another Mac, I might well look at using Distributed Objects to mask the fact that the server was actually running remotely, but DO is not available on the iPhone.
If I were writing a more generic client-server application I would probably look at using HTTP to provide some kind of RESTful interface to searches, but this solution does not lend itself well to asynchronous updates, and what I am proposing does not fit well with the "stateless" tenet of REST: I would have to model my protocol so that I "created" a search resource whose state I could subsequently query, and I would have to poll it for updates.
One suggestion someone made was to use something like BLIP to provide a two-way pipe between the client and the server, and to implement my own "proxy" type objects for the server-side resources that know how to fetch data from the server and are additionally addressable, so that the server can push updates to them. Whilst BLIP provides the low-level messaging framework needed to communicate bi-directionally, it still leaves me with a few questions:
How will I manage the lifetime of the objects on the server? I can have a message type that "creates" a search object, but when should that object be destroyed? (I sketch one idea below.)
How well will this perform on an iPhone? If I keep a persistent connection to the server, will it drain the battery too fast? This question is also pertinent in the HTTP world: most async updates are done with a COMET-type hack, which again requires a persistent connection.
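To make the proxy idea concrete, this is roughly the message vocabulary I have in mind, sketched in Swift for brevity; it is entirely hypothetical, and the explicit destroy message is my current answer to the lifetime question:

    import Foundation

    // Hypothetical wire vocabulary for the proxy-object approach.
    enum ClientMessage {
        case createSearch(query: String)    // server replies with searchCreated
        case destroySearch(id: Int)         // explicit lifetime management
    }

    enum ServerMessage {
        case searchCreated(id: Int)
        case result(searchID: Int, payload: Data)             // pushed as results arrive
        case resultUpdated(searchID: Int, resultID: Int,
                           payload: Data)                     // pushed updates
    }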
So right now I'm still completely unsure what the best way to go is: I've done a lot of searching and reading but have not settled on any solution. I'm asking here on SO because I'm sure that there are many of you out there who have already solved this problem.
How have you gone about achieving real-time bidirectional networking between the iPhone and an Objective-C server-side app?