What is the difference between RPC and RMI?

Currently reading about RPC and RMI, I'm a bit confused about the difference.
When implementing RMI and for example gRPC, the syntax is basically the same.
They both have interfaces that define the methods, parameters, and responses.
They can both send objects in parameters (Java RMI doing it natively, C# gRPC with proto).
They both send the request to the server through a method call on some object (based on the interface).
So what is the difference? Is it how the processes of data transfer happen between client and server?
By the looks of it, RMI is just the Java implementation of RPC and gRPC is the C# implementation.

RPC stands for Remote Procedure Call and supports procedural programming. It is essentially an IPC mechanism: it lets software share data between processes in an environment where different processes execute on separate systems and therefore need message-based communication.
[Diagram: the working steps of an RPC implementation]
RMI stands for Remote Method Invocation. It is similar to RPC but supports object-oriented programming, which is a feature of Java. A thread is allowed to invoke a method on a remote object. In RMI, objects can be passed as parameters rather than only ordinary data.
[Diagram: the client-server architecture of the RMI protocol]
RPC and RMI are similar, but the basic difference is that RPC supports procedural programming while RMI supports object-oriented programming.
In short: RPC is a language-neutral mechanism that passes ordinary data to remote procedures, while RMI is Java's object-oriented take on the same idea, invoking methods on remote objects and passing objects along the way.
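To make the RMI side concrete, here is a minimal sketch of a Java RMI remote interface. The AccountService and Balance names are made up for illustration; the pattern of extending Remote and declaring RemoteException is the real RMI contract.

import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;

// A hypothetical value object; RMI can pass it by value because it is Serializable.
class Balance implements Serializable {
    final String currency;
    final long cents;
    Balance(String currency, long cents) { this.currency = currency; this.cents = cents; }
}

// The remote interface: the client calls these methods on a stub and the call
// executes on the server. Every method must declare RemoteException because
// any call can fail over the network.
interface AccountService extends Remote {
    Balance getBalance(String accountId) throws RemoteException;
    void deposit(String accountId, Balance amount) throws RemoteException;
}

This is exactly the "interface defining methods, parameters, and responses" the question describes; the RPC equivalent would state the same contract in a language-neutral IDL (such as a .proto file) instead of a Java interface.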

Related

HTTP/REST API wrappers of COM/IDL interfaces?

I'm working on a project where we're maintaining object-oriented COM interfaces defined in Interface Definition Language (IDL) files. This allows for simple & efficient consumption from C++, C# and Python (using comtypes) when running on the same Windows computer as the service provider. However, communication is currently bound to a single computer.
My organization is currently encountering use-cases where it would be useful to call the same interfaces remotely from a different computer. I understand that HTTP/REST is currently the "standard" way of exposing remote APIs over a network. I'm therefore interested in any software that could streamline the process of generating an HTTP/REST (or similar) wrapper server for existing COM/IDL interfaces. I'm fully aware that the generated interfaces will probably not adhere to REST "best practice" guidelines; I just want an auto-generated solution to avoid having to manually maintain wrapper APIs myself.

Communication of two application in same device

There are 2 applications that need to communicate with each other. They are both running on the same PC.
Main Application (C#)
Helper Application (C#) -> launched from Main Application
Helper Application will modify some data used/contained by the Main Application. Can the helper application be a microservice? (not familiar with microservices, but I've seen this while checking on the net)
I found a helpful tutorial and was able to create a WCF Duplex Binding.
Now the Main Application and Helper Application can communicate.
I'm just wondering if this is a good solution (or a microservice is better??)
Can the helper application be a microservice? (not familiar with microservices...
Sure. "Microservices" is just the latest term that describes distributed component-based network computing. It goes back a long way to the days of (and possibly further) distributed COM (DCOM) and Corba; COM+ and finally service-orientated architecture (SOA). WCF used SOA as a best practice. In practice the only real difference between SOA and microservices is that the latter tend to adopt HTTP-REST-JSON as the transport/API/payload whereas the SOA generation is transport/payload neutral but generally using SOAP.
I found a helpful tutorial and was able to create a WCF Duplex Binding. Now the Main Application and Helper Application can communicate. I'm just wondering if this is a good solution (or a microservice is better??)
Well, technically you are already using a microservice/SOA approach.
I'm just wondering if this is a good solution
No. The problem with SOA/microservices on the same machine is that they are very chatty, have a high overhead, and their message payloads are quite verbose. Both SOAP and REST use text messages by default (XML and JSON respectively), which are large compared to binary.
If both client and server are on the same machine, you are best off just using straight-up named pipes and avoiding WCF/REST. Communication over named pipes is binary and so very compact; named pipes are serviced in kernel mode, which makes them very fast, and as an added bonus, when communicating locally they bypass the network layer entirely (as opposed to, say, TCP, which goes through it even for LOCALHOST).

Are both REST and SOAP an implementation of SOA?

I have a question around SOA.
Are SOAP and REST both considered approaches for implementing a service-oriented architecture?
I know that REST is a style, which is what leads me to this question.
Yes, they can both be considered approaches for implementing an SOA. I suppose you could say REST is a style, but then you'd have to say SOAP is one too. I would simply consider them different techniques to accomplish the same end. SOAP mimics a Remote Procedure Call, while REST is in line with how the web (HTTP) was designed.
When creating/adapting services to work in a SOA architecture the interfaces exposed can be whatever you desire as long as the consumers have the ability to process the response.
For the sake of giving a more concise answer, I will interpret REST as being a HTTP interface which can perform the CRUD operations, perhaps responding to requests with an XML or JSON object.
SOAP tends to lend itself to more complex operations on the service side; the libraries and XML machinery involved in SOAP introduce complexity to the system.
If all you require is the representation of resources that can be accessed through simple CRUD operations, it is worth considering a REST interface to reduce complexity, even if the service will run alongside services with SOAP interfaces. All that is required is that the consumer of the service be able to deal with the RESTful-style responses as well as act as a SOAP client.
There would be arguments for consistency across the service to improve maintainability and ease of development, but this isn't a necessity and should only be included in the decision process.
When including a messaging bus into the design, heterogeneous services can be dealt with even more effectively by inserting standard transforms (XSLT, custom) into the process which can translate the response from services into a standard format understood by the system as a whole.
If you are simply asking whether both of them can be used in a service-oriented architecture - yes, they can. They can even both be used at once in a single SOA-based project.
If you are asking whether SOAP or REST should be used - there is no answer without the project's specifications.
SOAP and REST are ways of building services.
SOAP is XML-based, in theory supports more than just HTTP, and has standards for interface definition (WSDL) and for things like security (WS-Security).
REST is a style for doing web services in a resource-oriented manner using a defined set of web operations (GET, POST, etc), but defines very little else.
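To make the contrast concrete, here is a hedged sketch of the same operation exposed both ways in Java, using the standard JAX-WS and JAX-RS annotations. The OrderSoapService/OrderResource names and the lookupStatus helper are invented for illustration.

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// SOAP style: the operation is a named remote procedure described by WSDL.
@WebService
class OrderSoapService {
    @WebMethod
    public String getOrderStatus(String orderId) {
        return Orders.lookupStatus(orderId);
    }
}

// REST style: the order is a resource addressed by a URI; HTTP GET retrieves it.
@Path("/orders")
class OrderResource {
    @GET
    @Path("{id}/status")
    @Produces("application/json")
    public String status(@PathParam("id") String orderId) {
        return "{\"status\":\"" + Orders.lookupStatus(orderId) + "\"}";
    }
}

// Hypothetical shared lookup used by both endpoints.
class Orders {
    static String lookupStatus(String orderId) { return "SHIPPED"; }
}

Either endpoint could sit behind the same service contract in an SOA; the choice is about the wire style, not about whether the architecture is service-oriented.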
However, SOA is about much more than just a bunch of services. Choosing REST or SOAP is the easy part.

When should I use RequestFactory vs GWT-RPC?

I am trying to figure out if I should migrate my GWT-RPC calls to the new GWT 2.1 RequestFactory calls.
Google documentation vaguely mentions that RequestFactory is a better client-server communication method for "data-oriented services"
What I can distill from the documentation is that there is a new Proxy class that simplifies the communication (you don't pass back and forth the actual entity, just the proxy, so it is lighter weight and easier to manage).
Is that the whole point or am I missing something else in the big picture?
The big difference between GWT RPC and RequestFactory is that the RPC system is "RPC-by-concrete-type" while RequestFactory is "RPC-by-interface".
RPC is more convenient to get started with, because you write fewer lines of code and use the same class on both the client and the server. You might create a Person class with a bunch of getters and setters and maybe some simple business logic for further slicing-and-dicing of the data in the Person object. This works quite well until you wind up wanting to have server-specific, non-GWT-compatible, code inside your class. Because the RPC system is based on having the same concrete type on both the client and the server, you can hit a complexity wall based on the capabilities of your GWT client.
To get around the use of incompatible code, many users wind up creating a peer PersonDTO that shadows the real Person object used on the server. The PersonDTO just has a subset of the getters and setters of the server-side, "domain", Person object. Now you have to write code that marshalls data between the Person and PersonDTO object and all other object types that you want to pass to the client.
RequestFactory starts off by assuming that your domain objects aren't going to be GWT-compatible. You simply declare the properties that should be read and written by the client code in a Proxy interface, and the RequestFactory server components take care of marshaling the data and invoking your service methods. For applications that have a well-defined concept of "Entities" or "Objects with identity and version", the EntityProxy type is used to expose the persistent identity semantics of your data to the client code. Simple objects are mapped using the ValueProxy type.
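To illustrate, here is a minimal RequestFactory sketch for the Person example above. ProxyFor, EntityProxy, RequestContext, Service, and RequestFactory are the actual RequestFactory types; the PersonService class and method names are assumptions.

import com.google.web.bindery.requestfactory.shared.EntityProxy;
import com.google.web.bindery.requestfactory.shared.ProxyFor;
import com.google.web.bindery.requestfactory.shared.Request;
import com.google.web.bindery.requestfactory.shared.RequestContext;
import com.google.web.bindery.requestfactory.shared.RequestFactory;
import com.google.web.bindery.requestfactory.shared.Service;

// Client-visible view of the server-side Person entity. Only the
// properties declared here are transferred; Person itself never has
// to be GWT-compatible.
@ProxyFor(Person.class)
interface PersonProxy extends EntityProxy {
    String getName();
    void setName(String name);
}

// Declares which server-side service methods the client may invoke.
@Service(PersonService.class)
interface PersonRequest extends RequestContext {
    Request<PersonProxy> findPerson(Long id);
}

// The client-side entry point tying the request contexts together.
interface MyRequestFactory extends RequestFactory {
    PersonRequest personRequest();
}

// Hypothetical server-side stand-ins for the domain class and service.
class Person {
    private String name;
    public String getName() { return name; }
    public void setName(String n) { name = n; }
}
class PersonService {
    public static Person findPerson(Long id) { return new Person(); }
}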
With RequestFactory, you pay an up-front startup cost to accommodate more complicated systems than GWT RPC easily supports. RequestFactory's ServiceLayer provides significantly more hooks to customize its behavior by adding ServiceLayerDecorator instances.
I went through a transition from RPC to RF. First I have to say my experience is limited in that I used exactly zero EntityProxies.
Advantages of GWT RPC:
It's very easy to set-up, understand and to LEARN!
Same class-based objects are used on the client and on the server.
This approach saves tons of code.
Ideal when the same model objects (POJOs) are used on both client and server; POJOs == MODEL OBJECTS == DTOs.
Easy to move stuff from the server to client.
Easy to share implementation of common logic between client and server (this can turn out as a critical disadvantage when you need a different logic).
Disadvantages of GWT RPC:
Impossible to have different implementations of some methods for server and client; e.g. you might need to use a different logging framework on client and server, or a different equals method.
REALLY BAD implementation that is not further extensible: most of the server functionality is implemented as static methods on a RPC class. THAT REALLY SUCKS.
e.g. it is impossible to add server-side error obfuscation
Some security XSS concerns that are not quite elegantly solvable, see docs (I am not sure whether this is more elegant for RequestFactory)
Disadvantages of RequestFactory:
REALLY HARD to understand from the official docs what the merit of it is! It starts right with the completely misleading term PROXIES - these are actually the DTOs of RF, and they are created by RF automatically. Proxies are defined by interfaces, e.g. @ProxyFor(Journal.class). The IDE checks whether corresponding methods exist on Journal. So much for the mapping.
RF will not do much for you in terms of commonalities between client and server, because
on the client you need to convert "PROXIES" to your client domain objects and vice-versa (see the sketch after this list). This is completely ridiculous. It could be done in a few lines of code declaratively, but there's NO SUPPORT FOR THAT! If only we could map our domain objects to proxies more elegantly; something like JavaScript's JSON.stringify(...) is MISSING from the RF toolbox.
Don't forget you are also responsible for copying the transferable properties of your domain objects to proxies, and so on, recursively.
POOR ERROR HANDLING on the server - stack traces are omitted by default on the server and you're getting empty, useless exceptions on the client. Even when I set a custom error handler, I was not able to get to the low-level stack traces! Terrible.
Some minor bugs in IDE support and elsewhere. I filed two bug reports that were accepted. It didn't take an Einstein to figure out that those were actually bugs.
THE DOCUMENTATION SUCKS. As I mentioned, proxies should be better explained; the term is MISLEADING. For the basic, common problems I was solving, THE DOCS ARE USELESS. Another example of misunderstanding in the docs is the connection of JPA annotations to RF. The succinct docs make it look like they kinda play together, and yes, there is a corresponding question on StackOverflow. I recommend forgetting any JPA 'connection' before trying to understand RF.
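As a hedged illustration of the conversion boilerplate complained about above: JournalProxy follows from the @ProxyFor(Journal.class) example, while JournalDto, the converter, and the properties are invented.

// Minimal stand-ins so the sketch is self-contained (in real code
// JournalProxy extends EntityProxy and JournalDto is your client model).
interface JournalProxy {
    String getTitle(); void setTitle(String title);
    String getIssn();  void setIssn(String issn);
}
class JournalDto {
    private String title, issn;
    String getTitle() { return title; }  void setTitle(String t) { title = t; }
    String getIssn()  { return issn; }   void setIssn(String i)  { issn = i; }
}

// Hand-written mapping between the RF proxy and a client-side domain
// object, one property at a time, repeated for every transferable
// field and, for nested objects, recursively.
class JournalConverter {

    static JournalDto fromProxy(JournalProxy proxy) {
        JournalDto dto = new JournalDto();
        dto.setTitle(proxy.getTitle());
        dto.setIssn(proxy.getIssn());
        return dto;
    }

    static void copyToProxy(JournalDto dto, JournalProxy proxy) {
        proxy.setTitle(dto.getTitle());
        proxy.setIssn(dto.getIssn());
    }
}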
Advantages of RequestFactory
Excellent forum support.
IDE support is pretty good (but this is not an advantage over RPC)
Flexibility of your client and server implementation (loose coupling)
Fancy stuff, connected to EntityProxies, beyond simple DTOs - caching, partial updates, very useful for mobile.
You can use ValueProxies as the simplest replacement for DTOs (but you have to do all not so fancy conversions yourself).
Support for Bean Validation (JSR-303).
Considering other disadvantages of GWT in general:
Impossible to run integration tests (GWT client code + remote server) with provided JUnit support <= all JSNI has to be mocked (e.g. localStorage), SOP is an issue.
No support for testing setup - headless browser + remote server <= no simple headless testing for GWT, SOP.
Yes, it is possible to run selenium integration tests (but that's not what I want)
JSNI is very powerful, but at those shiny conference talks they don't mention that writing JSNI code has rules of its own. Again, figuring out how to write a simple callback was a task worthy of a true researcher.
In summary, the transition from GWT RPC to RequestFactory is far from a WIN-WIN situation when RPC mostly fits your needs. You end up writing tons of conversions from client domain objects to proxies and vice-versa. But you do get some flexibility and robustness in your solution. And support on the forum is excellent, on Saturdays as well!
Considering all advantages and disadvantages I just mentioned, it pays really well to think in advance whether any of these approaches actually brings improvement to your solution and to your development set-up without big trade-offs.
I find the idea of creating Proxy classes for all my entities quite annoying. My Hibernate/JPA pojos are auto-generated from the database model. Why do I now need to create a second mirror of those for RPC? We have a nice "estivation" framework that takes care of "de-hibernating" the pojos.
Also, the idea of defining service interfaces that don't quite implement the server side service as a java contract but do implement the methods - sounds very J2EE 1.x/2.x to me.
Unlike RequestFactory, which has poor error-handling and testing capabilities (since it processes most of the stuff under the hood of GWT), RPC allows you to use a more service-oriented approach. RequestFactory implements a more modern, dependency-injection-styled approach that can be useful if you need to invoke complex polymorphic data structures. When using RPC your data structures will need to be flatter, as this allows your marshaling utilities to translate between your JSON/XML and Java models. Using RPC also allows you to implement more robust architectures, as quoted from the GWT dev section on Google's website.
"Simple Client/Server Deployment
The first and most straightforward way to think of service definitions is to treat them as your application's entire back end. From this perspective, client-side code is your "front end" and all service code that runs on the server is "back end." If you take this approach, your service implementations would tend to be more general-purpose APIs that are not tightly coupled to one specific application. Your service definitions would likely directly access databases through JDBC or Hibernate or even files in the server's file system. For many applications, this view is appropriate, and it can be very efficient because it reduces the number of tiers.
Multi-Tier Deployment
In more complex, multi-tiered architectures, your GWT service definitions could simply be lightweight gateways that call through to back-end server environments such as J2EE servers. From this perspective, your services can be viewed as the "server half" of your application's user interface. Instead of being general-purpose, services are created for the specific needs of your user interface. Your services become the "front end" to the "back end" classes that are written by stitching together calls to a more general-purpose back-end layer of services, implemented, for example, as a cluster of J2EE servers. This kind of architecture is appropriate if you require your back-end services to run on a physically separate computer from your HTTP server."
Also note that setting up a single RequestFactory service requires creating around six or so Java classes, whereas RPC only requires three. More code == more errors and complexity in my book.
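For comparison, these are the three pieces a single GWT-RPC service needs; this is the classic GreetingService shape from the GWT samples, trimmed for illustration.

import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;
import com.google.gwt.user.server.rpc.RemoteServiceServlet;

// 1. The synchronous interface, shared by client and server.
@RemoteServiceRelativePath("greet")
interface GreetingService extends RemoteService {
    String greet(String name);
}

// 2. The asynchronous counterpart the client code actually calls.
interface GreetingServiceAsync {
    void greet(String name, AsyncCallback<String> callback);
}

// 3. The server-side implementation, a servlet.
class GreetingServiceImpl extends RemoteServiceServlet implements GreetingService {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}

A RequestFactory setup for the same service typically adds the proxy interface, the request context, and the factory interface on top of the domain class and service implementation themselves.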
RequestFactory also has a little more overhead during request processing, as it has to marshal data between the proxies and the actual Java models. This added indirection costs extra processing cycles, which can really add up in an enterprise or production environment.
I also do not believe that RequestFactory services use serialization the way RPC services do.
All in all, after using both for some time now, I always go with RPC as it's more lightweight, easier to test and debug, and faster than RequestFactory. Although RequestFactory might be more elegant and extensible than its RPC counterpart, the added complexity does not necessarily make it a better tool.
My opinion is that the best architecture is to use two web apps, one client and one server. The server is a simple, lightweight, generic Java webapp that uses the servlet.jar library. The client is GWT. You make RESTful requests via GWT-RPC into the server side of the client web application. The server side of the client is just a pass-through to Apache HTTP Client, which uses a persistent tunnel into the request handler you have running as a single servlet in your server web application. The servlet web application should contain your database application layer (Hibernate, Cayenne, SQL, etc.). This allows you to fully divorce the database object models from the actual client, providing a much more extensible and robust way to develop and unit test your application. Granted, it requires a bit of initial setup time, but in the end it allows you to create a dynamic request factory sitting outside of GWT. This lets you leverage the best of both worlds, not to mention being able to test and make changes to your server side without having to have the GWT client compiled or built.
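A hedged sketch of the pass-through piece of that architecture: the class name, servlet mapping, and back-end URL are hypothetical, and plain HttpURLConnection stands in for Apache HTTP Client to keep the example self-contained.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Server side of the client web app: receives the GWT-RPC call and
// relays the raw payload to the request handler in the server web app.
public class PassThroughServlet extends HttpServlet {
    private static final String BACKEND = "http://backend-host:8080/server-app/handler";

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(BACKEND).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (InputStream in = req.getInputStream(); OutputStream out = conn.getOutputStream()) {
            in.transferTo(out); // forward the request body unchanged
        }
        resp.setStatus(conn.getResponseCode());
        try (InputStream in = conn.getInputStream(); OutputStream out = resp.getOutputStream()) {
            in.transferTo(out); // relay the back-end response to the GWT client
        }
    }
}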
I think it's really helpful if you have a heavy pojo on the client side, for example if you use Hibernate or JPA entities.
We adopted another solution, using a Django style persistence framework with very light entities.
The only caveat I would put in is that RequestFactory uses the binary data transport (deRPC maybe?) and not the normal GWT-RPC.
This only matters if you are doing heavy testing with SyncProxy, JMeter, Fiddler, or any similar tool that can read/evaluate the contents of the HTTP request/response; that is easy with GWT-RPC but more challenging with deRPC or RequestFactory.
We have a very large implementation of GWT-RPC in our project.
Actually we have 50 service interfaces with many methods each, and we have problems with the size of the TypeSerializers generated by the compiler, which make our JS code huge.
So we are analyzing a move towards RequestFactory.
I have been reading for a couple of days, digging around the web and trying to find what other people are doing.
The most important drawback I saw, and maybe I'm wrong, is that with RequestFactory you are no longer in control of the communication between your server domain objects and your client ones.
What we need is to apply the load/save pattern in a controlled way. I mean, for example, the client receives the whole graph of objects belonging to a specific transaction, makes its updates, and then sends the whole thing back to the server. The server is responsible for doing validation, comparing old with new values, and handling persistence. If two users from different sites get the same transaction and make updates, the resulting transaction shouldn't be the merged one. One of the updates should fail in my scenario.
I don't see that RequestFactory helps supporting this kind of processing.
Regards
Daniel
Is it fair to say that when considering a limited MIS application, say with 10-20 CRUD'able business objects, and each with ~1-10 properties, that really it's down to personal preference which route to go with?
If so, then perhaps projecting how your application is going to scale could be the key in choosing your route: GWT RPC or RequestFactory.
My application is expected to stay with that relatively limited number of entities but will massively increase in terms of their numbers. 10-20 objects * 100,000 records.
My application is going to increase significantly in the breadth of entities but the relative numbers involved of each will remain low. 5000 objects * 100 records.
My application is expected to stay with that relatively limited number of entities AND will stay in relatively low numbers of e.g. 10-20 objects * 100 records
In my case, I'm at the very starting point of trying to make this decision. Further complicated by having to change UI client side architecture as well as making the transport choice. My previous (significantly) large scale GWT UI used the Hmvc4Gwt library, which has been superseded by the GWT MVP facilities.

Why doesn’t Web Sockets use SOAP?

First off, I intend no hostility or negligence; I just want to know people's thoughts. I am looking into bi-directional communication between client and server, the client being a web application. At this point I have a few options: the MS-proprietary duplex binding (from what I hear, unreliable and unnatural), Comet, and WebSockets (for supported browsers).
I know this question has been asked in other ways here, but I have a more specific question to the approach. Considering web sockets are client-side, the client code sits in JavaScript. Is it really the intention to build a large chunk of an application directly in JavaScript? Why didn't W3C do this in web services? Wouldn't it be easier if we were to be able to use SOAP to provide a contract and define events along with the existing messaging involved? Just feels like the short end of the stick so far.
Why not make it simple and take advantage of JS dynamic nature and leave the bulk of code where it belongs....on the server?
Instead of
mysocket.send("AFunction|withparameters|segmented");
we could say
myServerObject.AFunction("that", "makessense");
and instead of
...
mysocket.onmessage = function() { alert("yay! an ambiguous message"); }
...
we could say
...
myServerObject.MeaningfulEvent = function(realData) { alert("Since I have realistic data...."); alert("Hello " + realData.FullName); }
...
HTML 5 took forever to take hold....did we waste a large amount of effort in the wrong direction? Thoughts?
Sounds to me like you've not yet fully grasped the concepts around Websockets. For example you say:
Considering web sockets are client-side
This is not the case: sockets have two sides, which you could think of as a server and a client, but once the connection is established the distinction blurs - you could then also think of the client and the server as "peers" - each can write to or read from the pipe that connects them (the socket connection) at any time. I suspect you'd benefit from learning a little more about how HTTP works on top of TCP - WebSockets is similar/analogous to HTTP in this way.
Regarding SOAP / WSDL, from the point of view of a conversation surrounding TCP / WebSocket / HTTP you can think of all SOAP / WSDL conversations as being identical to HTTP (i.e. normal web page traffic).
Finally, remember the stacked nature of network programming, for instance SOAP/WSDL looks like this:
SOAP/WSDL
--------- (sits atop)
HTTP
--------- (sits atop)
TCP
And WebSockets look like this
WebSocket
--------- (sits atop)
TCP
HTH.
JavaScript allows clients to communicate over HTTP with XMLHttpRequest. WebSockets extends this functionality to allow JavaScript to do arbitrary network I/O (not just HTTP), which is a logical extension and allows all sorts of applications that need to use TCP traffic (but might not be using the HTTP protocol) to be ported to JavaScript. I think it is rather logical that, as applications continue to move to the cloud, HTML and JavaScript should support everything that is available on the desktop.
While a server can do non-HTTP network I/O on behalf of a JavaScript client and make that communication available over HTTP, this is not always the most appropriate or efficient thing to do. For example, it would not make sense to add an additional round-trip cost when attempting to make an online SSH terminal. WebSockets makes it possible for JavaScript to talk directly to the SSH server.
As for the syntax, part of it is based on XMLHttpRequest. As has been pointed out in the other posting, WebSockets is a fairly low-level API that can be wrapped in a more understandable one. It is more important that WebSockets support all the necessary applications than that it have the most elegant syntax (sometimes focusing on the syntax can lead to more restrictive functionality). Library authors can always make this very general API more manageable to other application developers.
As you noted WebSockets has low overhead. The overhead is similar to normal TCP sockets: just two bytes more per frame compared to hundreds for AJAX/Comet.
Why low-level instead of some sort of built-in RPC functionality? Some thoughts:
It's not that hard to take an existing RPC protocol and layer it on a low-level socket protocol. You can't go the opposite direction and build a low-level connection if the RPC overhead is assumed.
WebSockets support is fairly trivial to add to multiple languages on the server side. The payload is just a UTF-8 string and pretty much every language has built-in efficient support for that. An RPC mechanism not so much. How do you handle data type conversions between Javascript and the target language? Do you need to add type hinting on the Javascript side? What about variable length arguments and/or argument lists? Do you build these mechanisms if the language doesn't have a good answer? Etc.
Which RPC mechanism would it be modeled after? Would you choose an existing one (SOAP, XML-RPC, JSON-RPC, Java RMI, AMF, RPyC, CORBA) or an entirely new one?
Once client support is fairly universal, then many services that have normal TCP socket will add WebSockets support (because it's fairly trivial to add). The same is not true if WebSockets was RPC based. Some existing services might add an RPC layer, but for the most part WebSockets services would be created from scratch.
For my noVNC project (VNC client using just Javascript, Canvas, WebSockets) the low-overhead nature of WebSockets is critical for achieving reasonable performance. Until VNC servers include WebSockets support, noVNC includes wsproxy which is a generic WebSockets to TCP socket proxy.
If you are thinking about implementing an interactive web application and you haven't decided on server-side language, then I suggest looking at Socket.IO which is a library for node (server-side Javascript using Google's V8 engine).
In addition to all the advantages of node (same language on both sides, very efficient, powerful libraries, etc.), Socket.IO gives you several things:
Provides both client and server framework library for handling connections.
Detects the best transport supported by both client and server. Transports include (from best to worst): native WebSockets, WebSockets using flash emulation, various AJAX models.
Consistent interface no matter what transport is used.
Automatic encode/decode of Javascript datatypes.
It wouldn't be that hard to create an RPC mechanism on top of Socket.IO since both sides are the same language with the same native types.
WebSocket makes Comet and all the other HTTP push-type techniques obsolete by allowing requests to originate from the server. It is kind of a sandboxed socket and gives us limited functionality.
However, the API is general enough for framework and library authors to improve on the interface in whichever way they desire. For example, you could write some RPC or RMI styled service on top of WebSockets that allows sending objects over the wire. Now internally they are being serialized in some unknown format, but the service user doesn't need to know and doesn't care.
So, thinking from a spec author's POV, going from
mysocket.send("AFunction|withparameters|segmented");
to
myServerObject.AFunction("that", "makessense");
is comparatively easy and requires writing a small wrapper around WebSockets so that serialization and deserialization happens opaquely to the application. But going in the reverse direction means the spec authors need to make a much more complex API which makes for a weaker foundation for writing code on top of it.
I ran into the same problem where I needed to do something like call('AFunction', 'foo', 'bar') rather than serialize/deserialize every interaction. My preference was also to leave the bulk of the code on the server and just use JavaScript to handle the view. WebSockets were a better fit because of their natural support for bi-directional communication. To simplify my app development, I built a layer on top of WebSockets to make remote method calls (like RPC).
I have published the RMI/RPC library at http://sourceforge.net/projects/rmiwebsocket/. Once the communication is setup between the web-page and the servlet, you can execute calls in either direction. The server uses reflection to call the appropriate method in the server-side object and client uses Javascript's 'call' method to call the appropriate function in the client-side object. The library uses Jackson to take care of the serialization/deserialization of various Java types to/from JSON.
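For flavor, here is a minimal reflection-based dispatcher in the same spirit, using the standard javax.websocket API and Jackson. The {"method": ..., "arg": ...} message format and the EchoService are invented for illustration; the actual library's wire format will differ.

import java.lang.reflect.Method;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Each incoming message names a method on the target object; the
// endpoint looks it up by reflection, invokes it, and sends back the
// JSON-serialized result.
@ServerEndpoint("/rpc")
public class RpcEndpoint {
    private static final ObjectMapper mapper = new ObjectMapper();
    private final EchoService target = new EchoService();

    @OnMessage
    public void onMessage(String message, Session session) throws java.io.IOException {
        try {
            JsonNode call = mapper.readTree(message);
            Method m = target.getClass().getMethod(call.get("method").asText(), String.class);
            Object result = m.invoke(target, call.get("arg").asText());
            session.getBasicRemote().sendText(mapper.writeValueAsString(result));
        } catch (ReflectiveOperationException e) {
            session.getBasicRemote().sendText("{\"error\":\"" + e.getMessage() + "\"}");
        }
    }
}

// Hypothetical service object exposed to the client.
class EchoService {
    public String echo(String s) { return "echo: " + s; }
}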
WebSocket JSR was negotiated by a number of parties (Oracle, Apache, Eclipse, etc) all with very different agendas. It's just as well they stopped at message transport level and left higher level constructs out. If what you need is a Java to JavaScript RMI, check out the FERMI Framework.