The base problem is as follows:
Two nodes communicating over a single socket;
Request-reply pattern;
Both nodes are client and server, i.e. node A makes requests to node B and node B makes requests to node A;
This would be easily solved with two sockets, but I have only one. You can think of the problem as having to create multiple virtual sockets/channels over a single socket. Do you know of a well-tested messaging library that would support such a use case?
In addition:
Support for C++ and Java;
The data serialization is handled by me using Google Protocol Buffers; and
If it is possible to achieve all this using RPC then great, otherwise I'll manage using protobuf.
I'd prefer not to write my own library and to use something that is well-tested and well-supported. I've looked into ZeroMQ, but it doesn't seem to support the third requirement (request-reply from A to B and from B to A simultaneously over a single socket). RabbitMQ is another possibility, but it may not support this requirement either. (I don't have experience with these libraries, so maybe I'm wrong...)
(I wonder if I'm asking for too much.)
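For illustration only (not a library recommendation), here is a minimal Java sketch of the kind of "virtual channel" framing the question describes: both peers share one socket, every frame says whether it is a request or a reply and carries a correlation id so replies can be matched to requests going in either direction, and the payload bytes would be a Protocol Buffers message serialized by the application. All names here are hypothetical.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    // Frame layout: 1 byte kind (0 = request, 1 = reply),
    // 8 byte correlation id, 4 byte payload length, payload bytes.
    // The payload would be a protobuf message serialized by the caller.
    final class Framing {
        static final byte REQUEST = 0;
        static final byte REPLY   = 1;

        static void write(DataOutputStream out, byte kind, long correlationId, byte[] payload) throws IOException {
            synchronized (out) {                // both directions share the single socket
                out.writeByte(kind);
                out.writeLong(correlationId);
                out.writeInt(payload.length);
                out.write(payload);
                out.flush();
            }
        }

        static void readOne(DataInputStream in) throws IOException {
            byte kind = in.readByte();
            long correlationId = in.readLong();
            byte[] payload = new byte[in.readInt()];
            in.readFully(payload);
            // dispatch: REQUEST -> invoke the local handler and write a REPLY with the same id;
            // REPLY   -> complete the pending outgoing request with this correlation id
        }
    }

With this kind of envelope the request-reply pattern works symmetrically over one socket, at the cost of writing the dispatch and correlation bookkeeping yourself.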
I don't have a complete answer for you, but I note that what you're asking for is emphatically supported by Cap'n Proto's RPC protocol: calls can be initiated in either direction.
It's not exactly what you want because:
Cap'n Proto has its own serialization format. The usage model is heavily inspired by Protobufs, but it isn't protobufs.
The Java implementation does not yet support RPC (planned, though).
If you care about Windows: the C++ implementation does not yet support RPC on MSVC, mostly due to MSVC being woefully behind on C++11 support. :/
But it might be worth checking out, for inspiration if nothing else.
(Disclosure: I'm the author of Cap'n Proto, and also the author of most of Google's open source Protobuf code.)
Related
Community,
I want to subscribe to a PUB socket on a server that uses ZeroMQ (https://zeromq.org/).
My final product will be a Flutter app. It must run on Android/iOS/Windows/macOS/Linux/Web, so I'm being careful with the plugin choice. I do not want to burden myself with a large amount of platform-specific code, nor do I want to depend on plugins that might break under certain conditions on each platform.
I know that there is a ZeroMQ plugin, but it has some unresolved issues in terms of operability on different platforms. I also tried to run it on different Windows machines, and it only worked in about 25% of the cases.
Here's the fundamental network communication between app and server (see the image below).
Is it possible to connect to a ZeroMQ publisher socket WITHOUT implementing or depending on the compiled C++ ZeroMQ library? I'm thinking of a socket or WebSocket, but I'm not even sure if it's technically possible (protocol, etc.), as I think that ZeroMQ uses its own protocol (please verify).
Can I subscribe to a ZeroMQ publisher socket with plain sockets or WebSockets in Flutter? If yes, how? Are there alternatives?
(The plugin in question is dartzmq; see its install instructions.)
Best regards
Q1: "Is it possible to connect to a ZeroMQ publisher socket WITHOUT implementing or depending on the compiled C++ ZeroMQ library?"
A1: Yes, it is. It is quite enough to re-implement the published ZeroMQ ZMTP RFCs relevant for the use case, and your code is guaranteed to be interoperable, irrespective of the implementation language / deployment ecosystem, as long as it meets all the ZMTP RFCs' mandatory requirements. So it is doable.
Q2: "... ZeroMQ uses its own protocol (please verify)."
A2: No, in the sense of the OSI/ISO L2/L3 stack. Yes, in the sense of higher-layer, application-driven protocols, where the ZMTP RFCs apply for most of the ZeroMQ Scalable Formal Communication Patterns' archetypes (you may read more in "ZeroMQ sockets are not sockets as you know them"), yet there are also tools to interface with O/S plain sockets' fds where needed. Still, A1 applies here.
Q3: "Can I subscribe to a ZeroMQ publisher socket with ...? If yes, how?"
A3: Yes, it is possible when your code follows the published ZMTP RFCs. Implement all the ZMTP RFCs' mandatory properties and you are guaranteed interoperability with any other ZMTP-compliant node.
Q4: "Are there alternatives?"
A4: Yes. If your design can extend the server side, adding another AccessPoint there using the ZMQ_STREAM Scalable Formal Communication Archetype may reduce the scope of the ZMTP RFCs needed on the Flutter side, as interfacing with a native plain socket will be the only thing left to handle. The resulting "functionality gap" can be closed on the server side of the link, which can easily take care of all the subscription management and message filtering that must meet the ZeroMQ ZMTP RFCs, so why not put it in tandem inside the server side before connecting the downstream to the Flutter app - smart, isn't it?
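To make A4 concrete, here is a rough server-side sketch, assuming a JVM server and the JeroMQ bindings (the endpoint, port and newline framing are all assumptions): the bridge subscribes to the PUB socket with a normal SUB socket and forwards every message over a plain TCP socket, so the Flutter app only ever touches a plain socket and never needs the ZeroMQ native library.

    import org.zeromq.SocketType;
    import org.zeromq.ZContext;
    import org.zeromq.ZMQ;

    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class PubToPlainSocketBridge {
        public static void main(String[] args) throws Exception {
            try (ZContext ctx = new ZContext();
                 ServerSocket plain = new ServerSocket(9000)) {              // plain TCP port for the app
                ZMQ.Socket sub = ctx.createSocket(SocketType.SUB);
                sub.connect("tcp://localhost:5556");                         // hypothetical publisher endpoint
                sub.subscribe("".getBytes(StandardCharsets.UTF_8));          // subscribe to everything

                Socket app = plain.accept();                                 // single downstream client, for brevity
                OutputStream out = app.getOutputStream();
                while (!Thread.currentThread().isInterrupted()) {
                    String msg = sub.recvStr();                              // blocking receive from the PUB side
                    out.write((msg + "\n").getBytes(StandardCharsets.UTF_8)); // newline-framed downstream
                    out.flush();
                }
            }
        }
    }

The same idea works with a ZMQ_STREAM access point instead of a plain ServerSocket if you prefer to stay entirely within ZeroMQ on the server side.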
Recently, I created a lightweight wrapper for the C++ boost asio library for some network communication. I used it to prototype some new functionality. We quickly moved over to a system that utilized Kafka to take advantage of an existing microservice framework when more internal funding came our way. No problem, I figured we would move to a different network model later on, and the internals were more important to my job than the network communication.
My question is, with the number of technologies that abstract away network interfaces (e.g. Kafka, gRPC, ActiveMQ, ZeroMQ, etc.), is the use of raw TCP/UDP sockets becoming more of a last resort, where software architects try to find an existing broker/stream processor/message-passing tool to fit their model? Or are there still many new production developments using base-level TCP/UDP sockets, not counting those who solely write network libraries such as the ones mentioned above?
Note that I don't work with Kafka, gRPC, etc. in my line of work, but I have used UDP/TCP sockets extensively in the past. So forgive any misunderstanding of those particular technologies.
What I want to do: I want to add communication capabilities to a couple of applications (soon to be jar libraries for Java) in Scala, and I want to do it in the most painless way, with no Tomcat, wars, paths for GET requests, RPC servers, etc.
What I have done: I've been checking a number of libraries, like Jetty, JAX-RS, Jackson, etc. But then I see the examples and they usually involve many different folders for configuration, WSDL files, etc. Most of the examples lack a main method, and I don't have a clear picture of how many additional requirements they may have (e.g. Tomcat).
What I am planning to do: I'm considering simply opening a socket on the "server" to listen, then connecting from the "client" and transferring some JSON, in both directions. This should be fairly standard, so that I can use other programming languages in compatible ways (e.g. Python).
What I am asking: I would like to know whether there is some library that makes this easier. Not necessarily using raw sockets, but setting up some process communication in just a few lines, maybe not as simple as Node.js, but something similar.
Bonus: It would be cool to
be able to use other programming languages (e.g. Python) by using open standards
have authentication
But I don't really need any of those at this point.
I think you need an RPC client/server system; I would suggest taking one of these:
Finagle - a super flexible and powerful RPC client/server library from Twitter. You can define your service with Thrift, and it will generate client/server stubs in Scala. With Thrift it should be straightforward to add Python support.
Spray - a much smaller library, focused on creating REST services. It's not as powerful as Finagle, but it's much easier, and REST allows you to use any other client.
Remotely - an elegant RPC system for reasonable people. An interesting and very promising project, though it may be difficult to start with because of its extensive Scalaz + Shapeless + macro usage.
Honestly if you want something that is cross-language compatible, simple, straightforward, and concise then you do not want to use plain old sockets!
Check out Dropwizard. It is amazing and I use it for small and large projects alike! It is usually configured by no more than a single configuration file. It supports authentication too!
Out of the box it gives you really great inter-process communication over JSON (using Jackson) and much, much more. There is also pretty decent Scala support for Dropwizard.
If you must roll your own then I'd recommend using Jackson for JSON parsing. It's super simple to use and also has great scala support.
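If you do go that route, the Jackson part really is only a few lines. A minimal round trip might look like this (Java shown here as a sketch; the names and values are placeholders, and jackson-module-scala gives you the equivalent in Scala):

    import com.fasterxml.jackson.databind.ObjectMapper;
    import java.util.Map;

    public class JsonRoundTrip {
        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();
            String json = mapper.writeValueAsString(Map.of("greeting", "hello", "count", 3)); // object -> JSON
            Map<?, ?> parsed = mapper.readValue(json, Map.class);                             // JSON -> object
            System.out.println(parsed.get("greeting"));
        }
    }

The JSON string can then be written to and read from whatever socket or HTTP transport you settle on.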
If you've got a "controlled" use case where the client and server are on the same LAN and deployed in tandem, I'd (controversially) recommend Java RMI; it's dumb and JVM-specific (and uses a Java-specific protocol), but it's very simple to use.
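To give a sense of how little RMI needs, here is a rough sketch (service name, host, and port are hypothetical; it requires a JVM on both ends):

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    interface EchoService extends Remote {
        String echo(String message) throws RemoteException;
    }

    class EchoServiceImpl implements EchoService {
        public String echo(String message) { return "echo: " + message; }
    }

    class Server {
        public static void main(String[] args) throws Exception {
            Registry registry = LocateRegistry.createRegistry(1099);                 // default RMI port
            EchoService stub = (EchoService) UnicastRemoteObject.exportObject(new EchoServiceImpl(), 0);
            registry.rebind("echo", stub);                                           // publish under a name
        }
    }

    class Client {
        public static void main(String[] args) throws Exception {
            Registry registry = LocateRegistry.getRegistry("server-host", 1099);     // hypothetical host
            EchoService echo = (EchoService) registry.lookup("echo");
            System.out.println(echo.echo("hello"));                                  // remote call
        }
    }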
If you need something more robust and cross-language, I'd recommend Apache Thrift. You write your interfaces in a platform-independent interface definition language, and it's very clear which changes are compatible and which are not; the thrift compiler generates skeleton interfaces for you to use, and then you just write an implementation of that interface and a couple of lines to start the server (as you can see from the example on the homepage). It's also got good support for async implementations if you need the performance. Thrift itself is reasonably standard and cross-platform, with its own binary protocol, or you can use JSON as a transport if you really want to (I'd recommend against that though).
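For a sense of scale, starting a Thrift server really is a couple of lines once the Thrift compiler has generated the service classes. A sketch following the pattern from the Java tutorial, assuming a generated service called EchoService with a single echo method (the service and handler names are hypothetical):

    import org.apache.thrift.server.TServer;
    import org.apache.thrift.server.TSimpleServer;
    import org.apache.thrift.transport.TServerSocket;
    import org.apache.thrift.transport.TServerTransport;

    public class ThriftServerSketch {
        // EchoHandler implements the generated EchoService.Iface interface
        static class EchoHandler implements EchoService.Iface {
            public String echo(String message) { return "echo: " + message; }
        }

        public static void main(String[] args) throws Exception {
            TServerTransport transport = new TServerSocket(9090);
            EchoService.Processor<EchoHandler> processor = new EchoService.Processor<>(new EchoHandler());
            TServer server = new TSimpleServer(new TServer.Args(transport).processor(processor));
            server.serve();   // blocks, serving one connection at a time
        }
    }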
RabbitMQ provides one easy way to do what you want without writing a server and implementing your own persistence, flow control, authentication, etc. You can brew or apt-get install it.
You start up a broker daemon process (i.e. a process that manages the message queues)
In the Scala producer, you can use the Maven-provided Java API to send JSON strings without any fuss (e.g. no definition languages) to specified queues
Then, in your other Scala program, connect to the broker and listen for messages on the queue, parsing the incoming JSON (see the sketch below)
Because it is so popular, there are many tutorials online for different patterns you may want to use to distribute the messages, e.g. pub/sub, one-to-one, exactly-once delivery, etc.
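Here is a rough sketch of the producer and consumer sides using the plain RabbitMQ Java client (the queue name and JSON body are just placeholders; the Scala code would call the same API, and in practice producer and consumer would live in separate programs):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;
    import java.nio.charset.StandardCharsets;

    public class RabbitJsonSketch {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");                                   // the broker you started
            try (Connection conn = factory.newConnection();
                 Channel channel = conn.createChannel()) {

                channel.queueDeclare("json-queue", false, false, false, null);

                // producer side: send a JSON string to the queue
                String json = "{\"greeting\":\"hello\"}";
                channel.basicPublish("", "json-queue", null, json.getBytes(StandardCharsets.UTF_8));

                // consumer side: print every JSON message that arrives
                DeliverCallback onMessage = (consumerTag, delivery) ->
                        System.out.println(new String(delivery.getBody(), StandardCharsets.UTF_8));
                channel.basicConsume("json-queue", true, onMessage, consumerTag -> { });

                Thread.sleep(1000);                                         // give the consumer a moment before closing
            }
        }
    }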
I need to implement a distributed XMPP MUC application along the lines of XEP-0289, minus some of the features; in essence, I want a bare-bones implementation of the plugin. My concern is to address fault tolerance, and for now I do not want to worry about the performance considerations specified in XEP-0289.
I have looked into SleekXMPP as a tool to develop server-side plugins, but I don't know how comfortable it would be to use for such an implementation. Other options I have looked at are Openfire and Tigase. I am comfortable with Python/Java; other key features to consider would be good documentation, ease of use, etc. Keeping that in mind, I would like to know what the preferred path would be for this development.
Any guidance will be appreciated.
You should be able to write a MUC component that includes FMUC (or similar). The general way to do this would be to use a library that supports XEP-0114 components (e.g. SleekXMPP (Python) or Swiften (C++)) and implement MUC+FMUC on top of that. You haven't said what your concerns with SleekXMPP are, but it's a fairly well-respected library in the XMPP community, so it seems a fair choice (I'd pick Swiften, but I'm biased as one of the authors).
Your second option (patching the server directly) isn't generally the XMPPish way of adding customisations (as it's vendor-specific), but should also work if you can find someone sufficiently familiar with the server code, or if you're willing to become so.
To achieve fault tolerance (assuming you mean resilience to server failures) you'd need to run your XMPP server clustered, and also cluster your FMUC implementation. With that done, the usual XMPP fail-over using SRV records in DNS should ensure other servers retry connections to another host.
On a side note, the next version of FMUC (XEP-0289) will have some of the features of the current revision stripped out, and a number of improvements made based on deployment experience, so if your work is not time-critical, it might be of benefit to you to read that when it's released. I also note that there exists at least one implementation of FMUC already (Isode's M-Link, on which I work), and there is interest from other vendors, so using the standard protocol might benefit you in terms of not re-inventing the wheel.
Has anybody done or seen a deployment of Apache Thrift in an iPhone app?
I am wondering if it is a reasonable solution for a high-volume, low(er)-latency network service for iPhones, compared to HTTP.
One noteworthy thing I found is a bug report about running Thrift on the iPhone, which seems to have been fixed. But that doesn't necessarily indicate that it's a done deal.
Thrift and HTTP aren't mutually exclusive. In fact, Thrift now ships with an HTTP transport implementation you can use. It's also a really nice way to auto-generate server/client code that avoids a lot of marshalling/unmarshalling boilerplate while still being really fast. Its internal representation is basically binary JSON, so it's very similar to a RESTful web service (except easier to code against and much, much faster).
So... anyone able to answer the original question? If not, I'll dive in myself with Thrift's included Cocoa support and see how it works on the iPhone.
Just my two cents..
The accepted answer to this question is an opinion not to use a technology, not an answer to whether it is possible.
Thrift is an interface definition language (IDL), like Protobuf and Cap'n Proto. IDLs permit the definition of a client/server protocol that is platform agnostic. JSON and plist don't provide the same level of type conformance.
Having previously led an iOS team with 10M+ MAU using Google Protobuf v2.5 on iOS, Android, Windows, and server teams, I can attest that IDLs are great on mobile. Apple uses them for syncing iWork content.
My current team uses Thrift for iOS and Android clients, with a mostly Scala backend. I much prefer it to Protobuf.
We send Thrift payloads over HTTPS and WebSockets. Once you have defined your wire communication protocol (i.e. frame structure) in Thrift, it's very easy to evolve your APIs.
However, on iOS in particular there are some implementation issues. The current version of the library is quite poorly packaged, and if you hope to make an Objective-C framework (e.g. for iOS 8+), then you will not be able to out of the box with v0.9.2. This is because the library headers use local imports (#import "TProtocol.h" instead of #import <Thrift/TProtocol.h>) with no umbrella headers. Worst of all, the Thrift compiler generates very messy Objective-C classes, which also use local imports from the Thrift library.
Some of these issues are pretty damning. It indicates to me that while use of an IDL is very much a good engineering decision, not many iOS teams are using Thrift, unless they're huge, with the resources to write their own library.
I've always disliked frameworks that use a common interface definition to build out both server and client code. It keeps both sides too much in lockstep, when in reality a server API must stay very flexible about the versions of the clients communicating with it.
There are helpful libraries that make JSON or plist communication over HTTP pretty easy, and there are decades of experience in debugging and understanding the HTTP protocol and how to use it well. Ignore that at your peril.
I have used Thrift's Objective-C bindings for a large iPhone app with a few million users. As one of the posters mentioned, we can use HTTP, which gets the best of both worlds. However, there is no asynchronous HTTP client for Thrift. We had to build an event-based wrapper to allow non-blocking I/O calls. The underlying layer still issues one call at a time, which hit us in a big way, because we have one server call that takes a long time but does not block the UI flow, and another really fast one that does block the UI flow. If the underlying layer is busy with the slow command, our fast command just has to wait. I am trying to build async HTTP in C++ which can then be used on the iPhone, but that is some way off from being ready.
Thrift as an external API doesn't make sense. Use it internally, and rock and roll.