I have to implement an IPC mechanism (sending short messages) between Java/C++/Python processes running on the same system. One way to implement this is with sockets over TCP, but that requires maintaining a connection and other associated activities.
Instead, I am thinking of using UDP, which does not require a connection, so I can just send messages.
My question is: does UDP on the same machine (for IPC) still have the same disadvantages as when communicating across machines (like unreliable packet delivery and out-of-order packets)?
Yes, it is still unreliable. For local communication, try named pipes or shared memory instead.
Edit:
I don't know the requirements of your application, but did you consider something like MPI (although Java is not well supported...) or Thrift? (http://thrift.apache.org/)
Local UDP is still unreliable, but the major advantage is UDP multicast. You can have one data publisher and many data subscribers. The kernel does the job of delivering a copy of the datagram to each subscriber for you.
Unix local datagram sockets, on the other hand, are required to be reliable but they do not support multicast.
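A minimal sketch of that pattern in Python (the group address and port are arbitrary choices for this illustration; start one or more subscriber processes first, then run the publisher):

```python
import socket
import struct
import sys

GROUP, PORT = "239.255.0.1", 5007   # arbitrary multicast group/port for this sketch

def publisher():
    # A plain UDP socket; sending to the group address is all it takes.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on the local link
    s.sendto(b"hello subscribers", (GROUP, PORT))

def subscriber():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # several subscribers can share the port
    s.bind(("", PORT))
    # Join the group: from here on the kernel delivers a copy of each datagram to us.
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    print(s.recvfrom(1500))

if __name__ == "__main__":
    subscriber() if sys.argv[1:] == ["sub"] else publisher()
```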
Local UDP is more unreliable than UDP on a network: think 50%+ packet drop unreliable. It is a terrible choice; kernel developers have attributed the poor quality to lack of demand.
I would recommend investigating message-based middleware, preferably with a BSD-socket-compatible interface for an easy learning curve. A suggestion would be ZeroMQ, which includes C++, Java and Python bindings.
Local UDP is not only still unreliable, it is also sometimes blocked by firewalls. We faced this in our MsgConnect product, which uses local UDP for inter-thread communication. BTW, MsgConnect can be an option for your task so that you don't need to deal with sockets. Unfortunately there's no Python binding, but "native" C++ and Java implementations exist.
What is the performance (I mean the latency of sending all messages, and the maximum fan-out rate for many messages to many receivers) of ZMQ in comparison to "simple" UDP and its multicast implementation?
Assume I have one static 'sender' which has to send messages to many, many 'receivers'. The PUB/SUB pattern with simple TCP transport seems very comfortable for such a task - ZMQ does many things without our effort, and one ZMQ socket is enough to handle even numerous connections.
But what I am afraid of is this: ZMQ could create many TCP sockets in the background, even if we don't "see" them, and that could create latency. However, if I create a "common" UDP socket and transmit all my messages with multicast, there would be only one socket (the multicast one), so I think the latency problem would be solved. To be honest, I would like to stay with ZMQ and PUB/SUB on TCP. Are my concerns valid?
I don't think you can really compare them in that way. It depends on what is important to you.
TCP provides reliability, and you as the sender can choose whether loss matters more than latency by setting your block/retry options on send.
mcast provides network bandwidth savings, especially if your network has multiple segments/routers.
Other options in ZeroMQ:
Use zmq_proxy instances to split/share the load of TCP connections.
Use PUB/SUB with pgm/epgm, which is just a layer over multicast (I use this; see the sketch after this list).
Use the new radio-dish pattern (with this you have limited subscription options): http://api.zeromq.org/4-2:zmq-udp
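For illustration, this is roughly what the pgm/epgm option looks like with the Python binding. The interface name and group address are placeholders, and this assumes libzmq was built with PGM support (--with-pgm):

```python
import time
import zmq

# Placeholder endpoint: "epgm://<interface>;<multicast group>:<port>"
ENDPOINT = "epgm://eth0;239.192.1.1:5555"

ctx = zmq.Context()

pub = ctx.socket(zmq.PUB)
pub.connect(ENDPOINT)               # with multicast transports both ends may connect()

sub = ctx.socket(zmq.SUB)
sub.connect(ENDPOINT)
sub.setsockopt(zmq.SUBSCRIBE, b"")  # no filter: receive everything

time.sleep(0.2)                     # give the group join time to settle
pub.send(b"tick")
print(sub.recv())
```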
Behind the scenes, a TCP "socket" is identified (simplified) by both the "source" and the "destination" - so there will be a separate socket for each peer you're communicating with (for a fuller description of how a socket is set up and identified, see here and here). This has nothing to do with ZMQ; ZMQ will set up exactly as many sockets as TCP requires. You can optimize this if you choose to use multicast, but ZMQ does not optimize this for you by using multicast behind the scenes, except for PUB/SUB, see here.
Per James Harvey's answer, ZMQ has added support for UDP, so you can use that if you do not need or want the overhead of TCP. But based on what you've said, you'd like to stay with TCP, which is likely the better choice for most applications (i.e. we often unconsciously expect the reliability of TCP when we design our applications, and should only choose UDP if we know what we're doing).
Given that, you should assume that ZMQ is as efficient as TCP allows it to be with regard to low-level socket management. In your case, with PUB/SUB, you're already using multicast. I think you're good.
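For reference, a minimal PUB/SUB pair over TCP with the Python binding looks like this (the port is an arbitrary choice); one PUB socket fans out to however many subscribers connect:

```python
import time
import zmq

ctx = zmq.Context()

# Publisher: a single socket, no matter how many subscribers connect.
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")            # 5556 is an arbitrary port for this sketch

# Subscriber (normally in another process).
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5556")
sub.setsockopt(zmq.SUBSCRIBE, b"")  # no topic filter: receive everything

time.sleep(0.2)                     # give the subscription time to propagate
pub.send(b"hello")
print(sub.recv())                   # b'hello'
```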
I'm writing a server that, among other things, needs to be constantly sending data to different multicast addresses. The packets being sent might be received by a client side (an app) which will be switching between the mentioned addresses.
I'm using Perfect (https://github.com/PerfectlySoft/Perfect) for writing the server side, but have had no luck using the Perfect-Net module or CocoaAsyncSocket. How could I implement both the sender and the receiver using Swift? Any code snippet would be really useful.
I've been reading about multicasting, and when it comes to the receiver, I've noticed that in most languages (i.e. Java or C#) the receiver often specifies a port number and a multicast IP address; but when is the connection with the server made? When does the socket bind to the real server IP address?
Thanks in advance
If we talk about the TCP/IP stack, only IP and UDP support broadcasts and multicasts. They're both connectionless, and this is why you see only sending and receiving to special multicast addresses, but no binds and connects. You see the same pattern in different languages because (a) the protocols are language-agnostic and (b) most implementations put reasonable effort into staying compatible with the BSD sockets interface.
If you want true multicast, you'll need to find a Swift implementation of sockets that allows setting socket options. The usual name for this operation is setsockopt. The sender side doesn't need anything beyond a basic UDP socket (I suggest using UDP, not raw IP), while the receiver needs to be added to a multicast group. This Python example pretty much describes it.
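To answer the "when is the connection made" part concretely: never. The receiver binds only to a local port and joins the group; it never connects to the sender's address. A sketch of the receiver side (the group address and port are placeholders):

```python
import socket
import struct

GROUP, PORT = "239.255.0.1", 4446   # placeholder group address/port

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("", PORT))                  # bind to the local port only - no server address involved
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)  # join the group
data, sender = s.recvfrom(1024)     # datagrams arrive from whoever sends to the group
```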
However, it's worth noting that routers don't route broadcasts and multicasts, hence you cannot use them over the internet. If your project needs to work over the internet, I'd advise you to use TCP - or WebSockets if your clients are browsers - and send messages to "groups" of them manually.
I guess you actually want Perfect-Kafka or Perfect-Mosquitto - message queues which allow a server to publish live streams to client-side subscribers. Low-level sockets will not easily fulfill your requirement.
I have some (very) old software written in C that was used for two devices communicating via serial cable (RS232) - both sending and receiving messages.
Now the old devices are to be replaced by new modern ones that do not have serial ports, but only Ethernet.
Hence, the request now is to convert the old serial communication to UDP communication (C++ is the language of choice for the moment).
So, I have some questions about this "conversion":
1) Suppose there are two peers A and B. Should I implement a server and a client for each peer, i.e.: serverA+clientA (for device A) and serverB+clientB (for device B)? Or is there some other/different approach?...
2) The old serial communication had a CRC, probably to ensure some reliability. Is a CRC also necessary (in my custom messages) over UDP, or not?
Thanks in advance for your time and patience.
1) UDP is a connectionless protocol, so there are no rigid client and server roles here. You simply have some code that handles receiving and some code that handles sending (see the sketch after these two points).
2) You don't need a CRC for UDP. First, there's an FCS (CRC32) in each Ethernet frame. Then, there's a header checksum in IP packets. And on top of that, a checksum is already included in each UDP datagram!
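To illustrate point 1, here is a sketch of two symmetric peers; the addresses and ports are placeholders, and each device would run the same code with its own values for MY_ADDR and PEER_ADDR:

```python
import socket

MY_ADDR   = ("192.168.0.10", 5000)  # placeholder: this peer's address/port
PEER_ADDR = ("192.168.0.11", 5000)  # placeholder: the other peer's address/port

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(MY_ADDR)                     # "receiving" side: just bind our own port

s.sendto(b"STATUS?", PEER_ADDR)     # "sending" side: fire a datagram at the peer
data, sender = s.recvfrom(2048)     # block until the peer sends us something
```

One socket per device handles both directions; neither side is "the server".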
Please also consider the following things:
In everyday life, COM ports are long gone from the physical world, but they're still with us in virtual form (even Android phones have COM ports). There are a lot of solutions for doing COM over USB/TCP/whatever. Some of them are PC apps, some are implemented in hardware (see Arduino's COM over USB).
When a UDP datagram fails the checksum test, it is (usually) silently dropped. So with UDP you don't have a built-in way to distinguish between "nothing was received" and "we received something, but it was not valid". Check out UDP-Lite if you want to handle these situations at the application level; I believe it should simplify the porting process (see the sketch after this list).
The default choice for transferring data is TCP, because it provides reliable delivery. UDP is recommended for users who care about being realtime, who can tolerate some data loss, or who care about resource usage.
Choose TCP if you are going to send large amounts of data, or be ready to handle packet congestion on ports. Choose TCP if you plan to go wireless in the future, or be ready to handle periodic significant packet loss.
If your devices are really tiny or already filled with other stuff, it is possible to operate directly on Layer 2 (Ethernet).
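A hedged sketch of the UDP-Lite idea mentioned above. This is Linux-specific, the UDPLITE constants were added to Python's socket module in 3.9, and the coverage length of 8 bytes is an arbitrary example:

```python
import socket

# UDP-Lite: a datagram socket whose checksum covers only part of each message.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDPLITE)

# Checksum only the first 8 bytes we send (e.g. our own message header);
# corruption in the rest of the payload is delivered instead of silently dropped.
s.setsockopt(socket.IPPROTO_UDPLITE, socket.UDPLITE_SEND_CSCOV, 8)

# Only accept datagrams whose checksum covers at least the first 8 bytes.
s.setsockopt(socket.IPPROTO_UDPLITE, socket.UDPLITE_RECV_CSCOV, 8)

s.sendto(b"HDR00001" + b"payload that may arrive damaged", ("127.0.0.1", 9999))
```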
I want to make sure: if I use UDP within a host, should I care about the packet loss issue?
Yes, you should care about reliability when using UDP. Even if you use it on localhost, there is no guarantee that packets are not lost, because the protocol specification does not ensure this. It also depends on the operating system's implementation of UDP; it may behave differently on different operating systems as far as reliability is concerned, because no such rule is defined in the UDP specification.
Also, the order of delivery in UDP is not ensured, so you should take care of that as well while using UDP for IPC (see the sketch below).
I hope it helps.
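A minimal sketch of handling both issues at the application level by prefixing each datagram with a sequence number (all names here are illustrative):

```python
import struct

seq_out = 0

def pack(payload: bytes) -> bytes:
    """Prefix the payload with a 4-byte big-endian sequence number."""
    global seq_out
    msg = struct.pack("!I", seq_out) + payload
    seq_out += 1
    return msg

expected = 0

def unpack(datagram: bytes):
    """Return (payload, gap_detected); a gap means loss or reordering."""
    global expected
    seq, = struct.unpack("!I", datagram[:4])
    gap = seq != expected
    expected = seq + 1
    return datagram[4:], gap
```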
I understand that 0MQ is supposed to be faster than TCP Sockets in a clustered environment and I can see where that would be the case (I think that's what they're referring to when they say "Faster than TCP, for clustered products and supercomputing" on the 0MQ website). However, will I see any kind of speedup using 0MQ instead of TCP sockets to communicate between two processes running on the same machine?
Well, the short version is give it a try.
The slightly longer version is that writing TCP sockets can be hard - there are a lot of things that are easy to get wrong - whereas 0MQ guarantees the message will be delivered in its entirety. It is also written by experts in network sockets, which, with the best will in the world, you probably are not, and they use a few advanced tricks to speed things along.
You are not actually running on one machine, because the VM is treated as a separate machine. This means that TCP sockets have to run through the whole network stack and cannot take the shortcuts they take when you communicate between processes on one machine.
However, you could try UDP multicast under ZeroMQ to see if that speeds up your application. UDP is less reliable over a wide area network, but in the closed environment of a VM talking to its host you can safely skip all the TCP reliability machinery.
I guess IPC should be faster than TCP. If you are willing to move to a single process, INPROC is definitely going to be much faster.
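In ZeroMQ the transport is just the endpoint string, so comparing them yourself is cheap. A sketch with the Python binding (the endpoint names are arbitrary; ipc:// requires a Unix-like system):

```python
import zmq

ctx = zmq.Context()

# Identical PAIR socket code, three transports to benchmark against each other:
for endpoint in ("tcp://127.0.0.1:5557",   # full network stack
                 "ipc:///tmp/zmq-bench",   # Unix domain socket, no TCP overhead
                 "inproc://bench"):        # in-process, no I/O at all
    a = ctx.socket(zmq.PAIR)
    b = ctx.socket(zmq.PAIR)
    a.bind(endpoint)                       # bind before connect (required for inproc)
    b.connect(endpoint)
    b.send(b"ping")                        # time many of these round trips
    a.recv()
    a.close()
    b.close()
```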
I think (I have not tested this) that the answer is no, as ZMQ likely uses the same standard C library and adds some message headers.
The same thing applies to UDP.
The same thing applies to IPC pipes.
ZMQ could be just as fast, but since it adds headers it is not likely.
Now, it could be a different story if you really need some sort of header and ZMQ has implemented it better than you would - for message size or type, say - but I digress.