Need to implement client-server applications in different languages that can communicate with each other - sockets

I want to build a client and server application using socket programming that can chat, but the client and server have to be implemented in different languages.
I want to use C# and Java for that purpose. I want to know if it's possible and, if it is, how? Thanks

It is possible and pretty easy, especially with C# and Java; their socket implementations are quite similar.
A few things to watch for:
Make sure you are serializing integers in network order. Java's DataOutputStream writes big-endian (network order), while .NET's BitConverter follows the host's byte order, which is little-endian on x86. See this post for some guidance.
Make sure you are encoding/decoding your strings consistently, e.g. using UTF-8 on both sides.
Don't try using unsigned integer types; Java only supports signed types (I'm sure there are libraries to deal with it if necessary).
I haven't verified how compatible float and double serialization is on both sides (both use IEEE 754, but the byte-order caveat applies there too), so if you need them, do some more investigation.
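To make the integer and string points concrete, here is a minimal sketch of the Java end of such a chat, framing each message as a four-byte big-endian length followed by UTF-8 bytes (the port and the framing scheme are arbitrary choices for the example, not anything the question prescribes):

    import java.io.*;
    import java.net.*;
    import java.nio.charset.StandardCharsets;

    public class ChatServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(5000);   // port chosen arbitrarily
                 Socket client = server.accept();
                 DataInputStream in = new DataInputStream(client.getInputStream());
                 DataOutputStream out = new DataOutputStream(client.getOutputStream())) {

                // DataInputStream reads integers in big-endian (network) order, so the
                // C# side must convert, e.g. with IPAddress.HostToNetworkOrder.
                int length = in.readInt();
                byte[] payload = new byte[length];
                in.readFully(payload);
                System.out.println("Client says: " + new String(payload, StandardCharsets.UTF_8));

                // Reply with the same framing: 4-byte length, then UTF-8 bytes.
                byte[] reply = "Hello from Java".getBytes(StandardCharsets.UTF_8);
                out.writeInt(reply.length);
                out.write(reply);
                out.flush();
            }
        }
    }

A C# client just has to mirror the framing over a NetworkStream, converting lengths with IPAddress.HostToNetworkOrder and encoding strings with Encoding.UTF8.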
Good luck and have fun!

Related

Haxe server-client exchange on all platforms

I'm developing a client-server application in Haxe. I want the client side to be compiled from Haxe to JavaScript, but there is a big chance that other languages will also be needed. I cannot find a universal way of doing it: I can make it work with WebSockets when it comes to JavaScript, and with Haxe sockets in every other language.
So the question is: is there a way to make a client-server application in Haxe without branching the code for different platforms?
What you are looking for is Haxe Remoting, which is part of the Standard Library
Haxe remoting is a way to communicate between different platforms. With Haxe remoting, applications can transmit data transparently, send data and call methods between server and client side.
Relevant sources:
http://haxe.org/manual/std-remoting.html
http://api.haxe.org/haxe/remoting/
As a side note, if you are looking into creating an actual website with client/server in one project, you might want to look into http://www.ufront.net

Easiest form of process communication in Scala

What I want to do: I want to add communication capabilities to a couple of applications (soon to be jar libraries for Java) in Scala, and I want to do it in the most painless way, with no Tomcat, wars, paths for GET requests, RPC servers, etc.
What I have done: I've been checking a number of libraries, like Jetty, JAX-RS, Jackson, etc. But then I see the examples and they usually involve many different folders for configuration, WSDL files, etc. Most of the examples lack a main method, and I don't have a clear picture of how many additional requirements they may have (e.g. Tomcat).
What I am planning to do: I'm considering simply opening a socket on the "server" to listen, then connecting from the "client" and transferring some JSON in both directions. This should be fairly standard, so that I can use other programming languages in compatible ways (e.g. Python).
What I am asking: I would like to know whether there is some library that makes this easier. Not necessarily using raw sockets, but setting up some process communication in just a few lines, maybe not as simple as Node.js, but something similar.
Bonus: It would be cool to
be able to use other programming languages (e.g. Python) by using open standards
have authentication
But I don't really need any of those at this point.
I think you need an RPC client/server system; I would suggest one of these:
Finagle - super flexible and powerful RPC client/server from Twitter. You can define your service with Thrift, and it will generate stubs for client/server in Scala. With Thrift it should be straightforward to add Python support.
Spray - a much smaller library, focused on creating REST services. It's not as powerful as Finagle, but much easier, and REST allows you to use any other clients.
Remotely - an elegant RPC system for reasonable people. An interesting and very promising project, though maybe difficult to start with because of its extensive Scalaz + Shapeless + macro usage.
Honestly if you want something that is cross-language compatible, simple, straightforward, and concise then you do not want to use plain old sockets!
Check out Dropwizard. It is amazing and I use it for small and large projects alike! It is usually configured with no more than a single configuration file. It supports authentication too!
Out of the box it gives you really great inter-process communication over JSON (using Jackson) and much, much more. There is also pretty decent Scala support for Dropwizard.
If you must roll your own, then I'd recommend using Jackson for JSON parsing. It's super simple to use and also has great Scala support.
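For a flavor of the roll-your-own route, here is a rough sketch (in Java, which Scala code can call directly) of reading one JSON message per line off a socket with Jackson; the Message class and port are invented for the example:

    import com.fasterxml.jackson.databind.ObjectMapper;
    import java.io.*;
    import java.net.*;
    import java.nio.charset.StandardCharsets;

    public class JsonSocketServer {
        // Hypothetical payload type for the example.
        public static class Message {
            public String sender;
            public String body;
        }

        public static void main(String[] args) throws IOException {
            ObjectMapper mapper = new ObjectMapper();
            try (ServerSocket server = new ServerSocket(9000);
                 Socket client = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(
                         client.getInputStream(), StandardCharsets.UTF_8))) {
                // One JSON document per line keeps the framing trivial, and a
                // Python client only needs socket + json to speak the same protocol.
                Message msg = mapper.readValue(in.readLine(), Message.class);
                System.out.println(msg.sender + ": " + msg.body);
            }
        }
    }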
If you've got a "controlled" use case where the client and server are on the same LAN and deployed in tandem, I'd (controversially) recommend Java RMI; it's dumb and JVM-specific (and uses a Java-specific protocol), but it's very simple to use.
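A minimal sketch of what that looks like (the interface, names, and port are invented for the example):

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // The remote interface is shared by client and server.
    interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    public class RmiServer implements Greeter {
        public String greet(String name) { return "Hello, " + name; }

        public static void main(String[] args) throws Exception {
            Greeter stub = (Greeter) UnicastRemoteObject.exportObject(new RmiServer(), 0);
            Registry registry = LocateRegistry.createRegistry(1099); // default RMI port
            registry.rebind("greeter", stub);
        }
    }

A client then calls LocateRegistry.getRegistry(host), looks up "greeter", and invokes greet() as if it were a local method.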
If you need something more robust and cross-language, I'd recommend Apache Thrift. You write your interfaces in a platform-independent interface definition language, and it's very clear which changes are compatible and which are not; the thrift compiler generates skeleton interfaces for you to use, and then you just write an implementation of that interface and a couple of lines to start the server (as you can see from the example on the homepage). It's also got good support for async implementations if you need the performance. Thrift itself is reasonably standard and cross-platform, with its own binary protocol, or you can use JSON as a transport if you really want to (I'd recommend against that though).
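As a hedged sketch of how little server code that ends up being: assuming a Thrift file declaring service Chat { string echo(1: string msg) }, from which the thrift compiler has generated a Chat class, a blocking Java server is roughly:

    import org.apache.thrift.server.TServer;
    import org.apache.thrift.server.TSimpleServer;
    import org.apache.thrift.transport.TServerSocket;
    import org.apache.thrift.transport.TServerTransport;

    public class ChatServer {
        // Implements the Iface interface generated from the .thrift file.
        static class Handler implements Chat.Iface {
            public String echo(String msg) { return msg; }
        }

        public static void main(String[] args) throws Exception {
            TServerTransport transport = new TServerSocket(9090);
            TServer server = new TSimpleServer(
                    new TServer.Args(transport).processor(new Chat.Processor<>(new Handler())));
            server.serve(); // blocks; TThreadPoolServer etc. exist for concurrency
        }
    }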
RabbitMQ provides one easy way to do what you want without writing a server and implementing your own persistence, flow control, authentication, etc. You can install it with brew or apt-get.
You start up a broker daemon process (i.e. one that manages message queues)
In the Scala producer, you can use the Java API (available via Maven) to send JSON strings without any fuss (e.g. no definition languages) to specified queues
Then in your other Scala program, connect to the broker, listen for messages on the queue, and parse the incoming JSON
Because it is so popular, there are many tutorials online for different patterns you may want to use to distribute the messages, e.g. pub/sub, one-to-one, exactly-once delivery, etc.
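A sketch of the producer side with the RabbitMQ Java client (callable as-is from Scala; the queue name and payload are placeholders):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import java.nio.charset.StandardCharsets;

    public class Producer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // the broker daemon from step 1
            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {
                channel.queueDeclare("messages", false, false, false, null);
                String json = "{\"task\": \"ping\"}";
                channel.basicPublish("", "messages", null, json.getBytes(StandardCharsets.UTF_8));
            }
        }
    }

The consumer mirrors this with channel.basicConsume on the same queue.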

Game data across network

I'm designing a game where players are programmed bots competing in a programming contest. The bots can be programmed in any language: Java, Ruby, Python, C#. I'm looking for some way to transmit game data across the network, or some way by which the game server can talk to the bots. What would be the better choice for this? Should I use XMPP or some other form of remote method invocation?
What you are describing is not an RMI problem but a messaging one. I am sure there are several solutions you could use, and based on the limited knowledge of your application, I would say that XMPP is one of them. It is language agnostic and has libraries (and servers) available in most well-supported languages.
Whether it is the best solution, I couldn't say, but I would think it is a viable one. It gives you the option of transmitting point to point, point to many points, and a means for your game server to broadcast to all clients.
A REST based webservice might be easier to use if you need lots of languages to be able to call it.
I always find reinventing the wheel to be tedious. Try and see if you can use OpenTNL.
The issue with many remoting infrastructures is that they are normally not portable between frameworks.
While XMPP might work for you, the main issue you might find is excessive data crossing the network due to all the header/presence overhead in the data being sent around. Also, as XMPP is XML-based, any binary data would have to be sent around as a Base64 string.
A better bet might be a lower-level socket interface; either way, having the freedom to do bit-packing to reduce the size of the data will likely be beneficial.
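To illustrate the difference, a position update that would cost a few hundred bytes as a Base64-wrapped XMPP stanza packs into six bytes with a hand-rolled binary layout (the field layout here is invented for the example):

    import java.nio.ByteBuffer;

    public class PositionPacket {
        // Pack a bot id and x/y coordinates into 6 bytes.
        static byte[] pack(short botId, short x, short y) {
            ByteBuffer buf = ByteBuffer.allocate(6); // big-endian by default
            buf.putShort(botId);
            buf.putShort(x);
            buf.putShort(y);
            return buf.array();
        }

        static void unpack(byte[] data) {
            ByteBuffer buf = ByteBuffer.wrap(data);
            System.out.printf("bot=%d x=%d y=%d%n",
                    buf.getShort(), buf.getShort(), buf.getShort());
        }
    }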

Any success using Apache Thrift on iPhone?

Has anybody done or seen a deployment of Apache Thrift in an iPhone app?
I am wondering if it is a reasonable solution for a high-volume, low(er)-latency network service for iPhones compared to HTTP.
One noteworthy thing I found is a bug report about running Thrift on the iPhone, which seems to have been fixed. But that doesn't necessarily indicate that it's a done deal.
Thrift and HTTP aren't mutually exclusive. In fact, Thrift now ships with an HTTP transport implementation to use. It's also a really nice way to auto-generate server/client code that avoids a lot of marshalling/unmarshalling boilerplate while still being really fast. Its internal representation is basically binary JSON, so it's very similar to a RESTful web service (except easier to code against and much, much faster).
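For example, on the JVM side, pointing a generated client at an HTTP endpoint is only a few lines (the Chat service and URL here are assumptions for illustration):

    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.THttpClient;

    public class HttpThriftDemo {
        public static void main(String[] args) throws Exception {
            // Thrift's HTTP transport POSTs serialized calls to an ordinary URL.
            THttpClient transport = new THttpClient("http://example.com/thrift");
            transport.open();
            Chat.Client client = new Chat.Client(new TBinaryProtocol(transport));
            System.out.println(client.echo("ping")); // echo() assumed from the IDL
            transport.close();
        }
    }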
So... anyone able to answer the original question? If not, I'll dive in myself with Thrift's included Cocoa support and see how it works on the iPhone.
Just my two cents..
The accepted answer to this question is an opinion not to use a technology, not an answer as to whether it is possible.
Thrift is an interface definition language (IDL), like Protobuf and Cap'n Proto. IDLs permit the definition of a platform-agnostic client/server protocol. JSON and plists don't provide the same level of type conformance.
Having previously led an iOS team with 10M+ MAU using Google Protobuf v2.5 on iOS, Android, Windows, and the server side, I can attest that IDLs are great on mobile. Apple uses them for syncing iWork content.
My current team uses Thrift for iOS and Android clients, with a mostly Scala backend. I much prefer it to Protobuf.
We send Thrift payloads over HTTPS and WebSockets. Once you have defined your wire communication protocol (i.e. frame structure) in Thrift, it's very easy to evolve your APIs.
However, on iOS in particular there are some implementation issues. The current version of the library is quite poorly packaged, and if you hope to make an Objective-C framework (e.g. for iOS 8+), then you will not be able to do so out of the box with v0.9.2. This is because the library headers use local imports (#import "TProtocol.h" instead of #import <Thrift/TProtocol.h>) with no umbrella headers. Worst of all, the Thrift compiler generates very messy Objective-C classes, also including local imports from the Thrift library.
Some of these issues are pretty damning. It indicates to me that while use of an IDL is very much a good engineering decision, not many iOS teams are using Thrift, unless they're huge, with the resources to write their own library.
I've always disliked frameworks that use a common interface definition to build out both server and client code. It keeps both sides too much in lockstep, where in reality server API changes must be very flexible about the versions of the clients communicating with them.
There are helpful libraries that make JSON or plist communication over HTTP pretty easy, and there are decades of debugging and understanding behind the HTTP protocol and how to use it well. Ignore that at your peril.
I have used Thrift's Objective-C bindings for a large iPhone app with a few million users. As one of the posters mentioned, we can use HTTP, which gets the best of both worlds. However, there is no asynchronous HTTP client for Thrift, so we had to build an event-based wrapper to allow non-blocking I/O calls. The underlying layer still issues one call at a time, which hit us in a big way, because we have one server call that takes a long time but does not block UI flow, and another really fast one that does block UI flow. If the underlying layer is busy with the slow command, our fast command just has to wait. I am trying to build async HTTP in C++ that can then be used on the iPhone, but that is some ways off from being ready.
Thrift as an external API doesn't make sense. Use it internally, and rock and roll.

Is a client-server setup a good way to move data between machines?

I need to move some data from one machine to another. Is it a good idea to write a client-server app using sockets in Perl to do the transfer? Will I have problems if one side is written in Java?
I mean, should I be aware of any issues I might face when I attempt the above?
Short answer: Using a Perl program as the client or server is just fine. Your only problem might be your personal skill and experience level, but after you do it you know how to do it. :) Most of the problem is choosing how you need to do it, not the technology involved. Perl isn't going to be the problem, but it doesn't have an advantage over other languages either.
As some have already noted, the socket portion of the problem is going to be the same in most languages since almost everything uses the BSD stuff. Perl doesn't have any roadblocks or special gotchas for that. To move data around you create one side to listen on a socket and the other to open a connection and send the data. Easy peasy. You might want to check out Lincoln Stein's Network Programming with Perl for that bit. That can get you the low-level bits.
For higher-level networking, POE is very useful and easy to work with once you get started. It's a framework for dealing with event-driven programming and has many plugins to easily communicate between processes. You might spend a little time learning it, but it gives a lot back too.
If you aren't inventing your own protocol, there's most likely already a Perl module that can format and parse the messages.
If you just want to transfer data, there are several things you can do. The easiest in concept might be just to write lines to the socket and read them as lines from the other end. A bit more complicated than that is using something like Data::Dumper, YAML, or JSON to serialize data to text and send that. For more complex things, such as sharing Perl objects, you might want to use Storable. You freeze your objects, send them as data over the network, then thaw them on the other side.
If you want to implement your client and server in different languages you have a bit more work to figure out how they'll talk to each other. The socket stuff is mostly the same, but a Java server won't understand the output of Perl's Storable (it's possible, but you'll have to parse it yourself and that's no good :). If you do everything right, neither side should care what you used on the other side.
I can only think of one gotcha off the top of my head: most text-based network protocols use CRLF for line endings, but Perl on UNIX-type machines assumes LF endings by default. This means you will need to change the input and output record separators if you want to use readline (aka <>) and print (also beware of printf, since it doesn't use the output record separator). Of course, if you are going to use a pre-existing protocol, there is probably already a Net::<PROTOCOL NAME> module on CPAN, so you won't need to worry about that. If you are designing your own protocol, I would keep the CRLF convention because it makes it easy to debug the server with telnet (which is really the last valid use for that program).
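Since the other side here may be Java, note that Java's BufferedReader.readLine() happily accepts LF, CR, or CRLF, but when writing you should emit CRLF explicitly rather than relying on println() (which uses the platform line separator). A sketch of the Java end of a CRLF line protocol (host and port invented):

    import java.io.*;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class LineClient {
        public static void main(String[] args) throws IOException {
            try (Socket socket = new Socket("localhost", 7000);
                 BufferedReader in = new BufferedReader(new InputStreamReader(
                         socket.getInputStream(), StandardCharsets.UTF_8));
                 Writer out = new OutputStreamWriter(
                         socket.getOutputStream(), StandardCharsets.UTF_8)) {
                out.write("HELLO\r\n");  // explicit CRLF, matching the Perl side's $/
                out.flush();
                System.out.println(in.readLine()); // line terminator is stripped
            }
        }
    }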
You don't say whether you need to implement your application to support any particular protocol or whether you need to implement a home grown protocol. The networking support in Perl is flexible enough to support either (or most places in between).
At the low-level socket end, your code is going to be fairly similar whatever language you are using; BSD socket APIs are pretty much the same everywhere they are supported. The support you need for this is built into Perl, but low-level socket programming can be frustrating: it's very low level.
However, Perl's standard library contains the Socket module which is rather easier to use (and well documented).
If you need to implement an existing protocol you may well find that it has already been implemented. For example Net::Telnet implements command/response protocols (like Telnet) making a client app trivial.
Searching CPAN may save you a lot of pain. Look at modules in the Net::* hierarchy
I don't think you're going to have any major issues that you wouldn't also have without Perl. Even performance will be comparable to other solutions, due to network latencies.
You might want to look at POE framework. It makes writing such components a breeze.
It probably depends on a few factors. Does speed or responsiveness matter? Are you moving data between the same types of machines (Unix to Unix, Windows to Windows)? What type of data are you trying to move (text or binary)? What knowledge of sockets, and experience with which languages, do you have?
I have sent and received binary data over Perl sockets between differing applications, but I don't have much experience with text processing over sockets between differing machines. If you are moving data between machines, you need to keep in mind the way the data is marshalled and whether it is packed or aligned on some byte boundary. I have not exchanged data with Java programs, but it should be similar.
It would probably help to have some experience with Perl, and I would recommend looking at the examples in the "Camel" book. I have used the ones in the book as a starting point and made modifications for what I needed to achieve. You may have to consult other areas of the book if you are dealing with binary data, or for help with translations when sending data.
Writing socket communication in Perl is relatively easy. Doing it right and reliably is a big pain; even CPAN modules are examples of error-prone code. It depends on your expectations.
You are basically asking two questions:
Is Perl a proper language for socket communication?
Is Perl a proper language for UI?
Referring to e5's answer, Perl is indeed a string-centric language with a focus on readable strings, less well equipped to handle binary data. Thus the answer probably lies in these questions: is your communication string-based? Is your UI string-based?
If you're doing binary interaction through a socket, you could probably do better than Perl (not talking about C, but maybe C-ish languages). If you want to do graphical user interaction, you'll probably reach results faster by choosing one of the higher-level languages that focus more on GUI interaction (something Java-ish might be the thing here).