GWT-RPC vs HTTP Call - which is better?

I am evaluating whether there is a performance difference between calls made using GWT-RPC and plain HTTP calls.
My application services are hosted as Java servlets, and I am currently using HTTPProxy connections to fetch data from them. I am looking to convert them to GWT-RPC calls if that brings a performance improvement.
I would like to know the pros and cons of each...
Also, any suggestions on tools to measure the performance of async calls...
[A good article on various server communication strategies that can be employed with GWT.]

GWT-RPC is generally preferred when the backend is also written in Java because it means not having to encode and decode the object at each end -- you can just transmit a regular Java object to the client, and use it there.
JSON (using RequestBuilder) is generally used when the backend is written in some other language, and requires the server to JSON-encode the response object and the client to JSON-decode it into a JavaScriptObject for use in the GWT code.
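As a rough sketch of that RequestBuilder approach (the /services/contacts URL is made up for illustration):

    import com.google.gwt.core.client.JavaScriptObject;
    import com.google.gwt.core.client.JsonUtils;
    import com.google.gwt.http.client.Request;
    import com.google.gwt.http.client.RequestBuilder;
    import com.google.gwt.http.client.RequestCallback;
    import com.google.gwt.http.client.RequestException;
    import com.google.gwt.http.client.Response;

    // Fetch JSON over plain HTTP and decode it on the client.
    RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, "/services/contacts");
    try {
      builder.sendRequest(null, new RequestCallback() {
        public void onResponseReceived(Request request, Response response) {
          if (response.getStatusCode() == Response.SC_OK) {
            // safeEval parses the JSON text into a JavaScriptObject overlay.
            JavaScriptObject data = JsonUtils.safeEval(response.getText());
            // ... cast to an overlay type and use it ...
          }
        }
        public void onError(Request request, Throwable exception) {
          // transport-level failure
        }
      });
    } catch (RequestException e) {
      // the request could not even be initiated
    }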
If I had to guess I'd say that GWT-RPC also results in smaller transport objects because the GWT team optimizes for this case, but either will work, and JSON can still be pretty small. It just comes down to a matter of developer convenience in most cases.
As for tools to measure request time, you can either use Chrome/Webkit's developer tools, or Firefox's Firebug extension, or measure request time in your app and send that metrics data back to your server in a deferred request for collection and analysis.
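For the in-app measurement route, a minimal sketch using GWT's Duration class (the greetingService stub stands in for whatever async service you call):

    import com.google.gwt.core.client.Duration;
    import com.google.gwt.user.client.rpc.AsyncCallback;

    // Time an async call in-app; greetingService is the stock async stub
    // from the GWT starter project (any AsyncCallback-based call works).
    final Duration timer = new Duration();
    greetingService.greetServer("ping", new AsyncCallback<String>() {
      public void onSuccess(String result) {
        int millis = timer.elapsedMillis();
        // queue `millis` for a deferred metrics request back to the server
      }
      public void onFailure(Throwable caught) {
        // optionally record the failure together with timer.elapsedMillis()
      }
    });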

I wrote that article mentioned in the question (thanks for the link!).
As always, the answer is 'it depends'. I've used both GWT-RPC and JSON.
As outlined above, GWT-RPC allows for some serious productivity in shipping java objects (with some limits) over the wire. Some logic can be shared, and GWT takes care of marshalling/unmarshalling your object.
JSON allows for cross domain access and consumption by other, non GWT clients. You can get by with overlay types, but no behavior (like validation) can be shared. JSON can also be easily compressed and cached, unlike GWT-RPC (last time I looked).
Since we have no idea what the payload is, performance recommendations are hard to give. I'd recommend (again, as suggested above) testing it yourself.

Just an addition to the other answers, there's one point to consider which could influence your decision towards JSON, even if you're using Java on the back-end:
Maybe sometime in the future, you want to allow non-GWT clients to talk to your server. Many modern sites offer some kind of API access, and if you're using JSON, you basically already have a comparatively open API.

In general I agree with Jason - if your server side uses Java, go with GWT-RPC. You'll be able to reuse the POJOs, validation logic, etc. RPC also tends to "play" better with MVP and code-splitting.
However, if your server side uses anything else, use JSON - but don't fret, with JavaScript Overlay Types using JSON is a breeze. You won't be able to reuse the code from the client side on the server, though (YMMV).
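For reference, an overlay type is just a thin wrapper over the parsed JSON object, so no copying or decoding step is needed on the client; a minimal sketch (the Customer fields are illustrative):

    import com.google.gwt.core.client.JavaScriptObject;

    // A JavaScript overlay type: maps directly onto a parsed JSON object.
    class Customer extends JavaScriptObject {
      protected Customer() {}  // required protected constructor
      public final native String getName() /*-{ return this.name; }-*/;
      public final native String getEmail() /*-{ return this.email; }-*/;
    }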
From a performance point of view - I'd say that JSON has the edge here. Modern browsers have some seriously good methods for fast encoding/decoding of JSON. I'm not sure what GWT-RPC does "behind the scenes", but I doubt it can beat JSON when it comes to speed. As for the payload - that depends on the developer (the names of the objects in JSON, etc.), but I'd say that in general JSON is also (marginally) smaller. Enable compression on your server (for example, mod_deflate on Apache HTTP Server) to squeeze the bits even more ;)

Adapter Proxy for RESTful APIs

This is a general "what technologies are available" question.
My company provides a web application with a RESTful API. However, it is too slow for my needs and some of the results are in an awkward format.
I want to wrap their RESTful server with a proxy/adapter server, so when you connect to the proxy you get the RESTful API I wish the real one provided.
So it needs to do a few things:
passthrough most requests
cache some requests
do some extra requests on the original server to detect if a request is cacheable
for instance: there is a request for a field in a record: GET /records/id/field which might be slow, but there is a fingerprint request GET /records/id/fingerprint which is always fast. If there exists a cached copy of GET /records/1/field2 for the fingerprint feedbeef, then I need to check that the original server still has the fingerprint feedbeef before serving the cached version.
fix headers for some responses - e.g. content-type, based upon the path
do stream processing on some large content, for instance
GET /records/id/attachments/1234
returns a 100 MB log file in text format
remove null characters from files
optionally recode the log to filter out irrelevant lines, reducing the load on the client
cache the filtered version for later requests.
While I could modify the client to achieve this functionality, such code would not be re-usable for other clients (different languages), and complicates the client logic.
I had a look at whether Clojure/Ring could do it, and while there is a nice little proxy middleware for it, it doesn't handle streaming content as far as I can tell - the whole 100 MB would have to be downloaded. Also, it doesn't include any cache logic yet.
I took a look at whether Squid could do it, but I'm not familiar with the technology, and it seems mostly concerned with passing requests through rather than modifying them on the fly.
I'm looking for hints on where I might find the right technology to implement this. I'm mostly language-agnostic, if learning a new language gets me access to a really simple way to do it.
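To make the streaming requirement concrete, here is a minimal sketch of the pass-through-and-filter part as a plain Java servlet; the upstream host and the DEBUG filter are made up, and the caching and fingerprint checks are left out:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Proxy a large upstream response line by line, dropping NUL characters
    // and irrelevant lines, without buffering the whole body in memory.
    public class FilteringProxyServlet extends HttpServlet {
      private static final String UPSTREAM = "http://upstream.example.com"; // hypothetical

      @Override
      protected void doGet(HttpServletRequest req, HttpServletResponse resp)
          throws IOException {
        URL url = new URL(UPSTREAM + req.getRequestURI());
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        resp.setContentType("text/plain"); // fix the content type here if needed
        try (BufferedReader in = new BufferedReader(
                 new InputStreamReader(conn.getInputStream()));
             PrintWriter out = resp.getWriter()) {
          String line;
          while ((line = in.readLine()) != null) {
            line = line.replace("\u0000", "");  // strip null characters
            if (!line.contains("DEBUG")) {      // hypothetical "irrelevant line" filter
              out.println(line);
            }
          }
        }
      }
    }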
I believe you should choose a platform that is easier for you to implement your custom business logic on. The following web application frameworks provide easy connectivity with REST APIs, and allow you to create a web application that could work as a REST proxy:
Play framework (Java + Scala)
Express + Node.js (JavaScript)
Sinatra (Ruby)
I'm more familiar with Play, which I know provides caching utilities you could find useful, and it is also extensible via a number of plugins.
If you are familiar with Scala, you could also have a look at Finagle. It is a framework built by Twitter's infrastructure team to provide protocol-agnostic connectivity. It might be overkill for a REST-to-REST proxy, but it provides abstractions you might find useful.
You could also look at some 3rd-party services like Apitools, which allows you to create a proxy programmatically (in Lua). Apirise is a similar service (of which I'm a co-founder) that intends to provide similar functionality with a user-friendly UI.
Beeceptor does exactly what you want. It plugs in-between your web-app and original API to route requests.
For your use case of caching a few responses, you can create a rule. That way it won't hit the original endpoint.
The requests to the original APIs can be mocked, and you can inspect the responses.
You can simulate delays.
(Note: it is a shameless plug, I am the author of Beeceptor and thought it should help you and other developers.)
https://github.com/nodejitsu/node-http-proxy looks useful - although I don't yet know whether it can do stream processing for transcoding.

GWT non-Java backend application structure

Does anyone know where there is a good example of a GWT application with a non-Java backend? Something like the "Contacts" application on the official GWT pages. I'm interested in the following topics specifically (this is how I see the "back" part of the application, but it may be different, of course):
DTO serialization.
Communication layer (the very back one). Should it use generics and work with any abstract DTO? Is there any other proven approach?
Service layer, which uses the communication layer with specific DTOs.
Requests caching. Should it be implemented on the service or communication level?
Good abstraction. So we can easily substitute any parts for testing and other purposes, like using different serializers (e.g. XML, JSON) or different server behaviors when managing user sessions (the URL might change once the user is logged in).
I know, there are many similar topics here, but I haven't found one, which is focused on the structure of the client part.
Use REST on both sides. PHP/Python REST server-side. GWT client-side.
Don't use GWT-RPC.
The following gives a guideline for Java-to-Java REST, but there is leeway to move out of server-side Java. It explains why REST is appropriate.
http://h2g2java.blessedgeek.com/2011/11/gwt-with-jax-rs-aka-rpcrest-part-0.html
REST is an industry-established pattern (Google, Yahoo - in fact, every stable-minded establishment deploys REST services).
REST is abstracted as an HTTP-level data structure, for which Java, PHP, and Python have established libraries to ensure DTO integrity.
Communication layer (the very back one). Should it use generics ???
Don't understand the question or why the question exists. Just use the REST pattern to provide integrity to non-homogeneous language between server and client and to HTTP request/response.
If you are using Java at the back-end, there is no escape from using generics. Generics save code. But using generics extensively requires the programmer to have an equally extensive capacity to visualize them. If your back-end is on PHP or Python, are there generics for PHP? Python generics? Might as well stay in Java land or C# land and forget about a Java-free service provider.
Did you mean DTO polymorphism? Don't try polymorphism or decide on it until after you have established your service. Then, adaptively and with agility, introduce polymorphism into your DTOs if you really see the need. But try to avoid it, because with JSON data interchange it gets rather confusing between server and client, especially if they don't speak the same programming language.
If you are asking about HTTP-level generics: I don't know of any framework, not SOAP, not REST, where you could have generics carried by the XML or JSON. Is there?
Service layer? REST.
Requests caching? Cache at every appropriate opportunity. Have the service provider cache query results for items common and static to all sessions, like menus, drop-down box choices, labels, etc. Cache your history and places.
On the GWT side, cache records so that the forward/backward buttons will not trigger inadvertent queries. Use the MVP pattern and history to manage history traversal that might trigger a redisplay of info.
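A rough sketch of that back/forward caching idea with GWT's History API; cache, display, and fetchFromServer are hypothetical application pieces:

    import com.google.gwt.event.logical.shared.ValueChangeEvent;
    import com.google.gwt.event.logical.shared.ValueChangeHandler;
    import com.google.gwt.user.client.History;

    // On back/forward navigation, serve from a client-side cache when
    // possible instead of re-querying the server.
    History.addValueChangeHandler(new ValueChangeHandler<String>() {
      public void onValueChange(ValueChangeEvent<String> event) {
        String token = event.getValue();
        Record cached = cache.get(token);   // Record/cache are hypothetical
        if (cached != null) {
          display(cached);                  // no server round trip
        } else {
          fetchFromServer(token);           // query once, then cache the result
        }
      }
    });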
If you are talking about unified info abstraction, you should start your project with JAX-RS to define/test the API and perform data abstraction, without performing any business logic.
Then, once your HTTP-level APIs and DTOs are defined, convert the server side to the language of your choice and proceed to write more complex code.
BTW, I don't dig your terminology "Backend".
We normally use the terminology client-side for service consumer, server-side for service provider, backend for data repository/persistence access, mid-tier or middle-ware for intervening/auxiliary software required to provide mathematical/scientific/graphical analysis/synthesis.
If our terminologies did not coincide, I probably answered this question wrong.

When should I use RequestFactory vs GWT-RPC?

I am trying to figure out if I should migrate my GWT-RPC calls to the new GWT 2.1 RequestFactory calls.
Google's documentation vaguely mentions that RequestFactory is a better client-server communication method for "data-oriented services".
What I can distill from the documentation is that there is a new Proxy class that simplifies the communication (you don't pass the actual entity back and forth, just the proxy, so it is lighter weight and easier to manage).
Is that the whole point or am I missing something else in the big picture?
The big difference between GWT RPC and RequestFactory is that the RPC system is "RPC-by-concrete-type" while RequestFactory is "RPC-by-interface".
RPC is more convenient to get started with, because you write fewer lines of code and use the same class on both the client and the server. You might create a Person class with a bunch of getters and setters and maybe some simple business logic for further slicing-and-dicing of the data in the Person object. This works quite well until you wind up wanting to have server-specific, non-GWT-compatible, code inside your class. Because the RPC system is based on having the same concrete type on both the client and the server, you can hit a complexity wall based on the capabilities of your GWT client.
To get around the use of incompatible code, many users wind up creating a peer PersonDTO that shadows the real Person object used on the server. The PersonDTO just has a subset of the getters and setters of the server-side, "domain", Person object. Now you have to write code that marshalls data between the Person and PersonDTO object and all other object types that you want to pass to the client.
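A minimal sketch of that shadowing pattern; the server-side Person class and its getters are assumed:

    import java.io.Serializable;

    // Client-safe shadow of the server-side Person, exposing only what the
    // UI needs. GWT-RPC requires it to be serializable with a no-arg constructor.
    public class PersonDTO implements Serializable {
      private Long id;
      private String name;

      public PersonDTO() {}

      public Long getId() { return id; }
      public String getName() { return name; }

      // The marshalling boilerplate you end up writing for every transferred
      // type; assumes the server-side Person exposes matching getters.
      public static PersonDTO from(Person person) {
        PersonDTO dto = new PersonDTO();
        dto.id = person.getId();
        dto.name = person.getName();
        return dto;
      }
    }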
RequestFactory starts off by assuming that your domain objects aren't going to be GWT-compatible. You simply declare the properties that should be read and written by the client code in a Proxy interface, and the RequestFactory server components take care of marshaling the data and invoking your service methods. For applications that have a well-defined concept of "Entities" or "Objects with identity and version", the EntityProxy type is used to expose the persistent identity semantics of your data to the client code. Simple objects are mapped using the ValueProxy type.
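A minimal proxy declaration looks something like this (in GWT 2.1 the annotations live under com.google.gwt.requestfactory.shared; the imports below use the package name of later releases, and the Person entity is assumed):

    import com.google.web.bindery.requestfactory.shared.EntityProxy;
    import com.google.web.bindery.requestfactory.shared.ProxyFor;

    // The client never sees the server-side Person class; it only sees this
    // interface, and RequestFactory marshals the data behind the scenes.
    @ProxyFor(Person.class)
    public interface PersonProxy extends EntityProxy {
      String getName();
      void setName(String name);
    }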
With RequestFactory, you pay an up-front startup cost to accommodate more complicated systems than GWT RPC easily supports. RequestFactory's ServiceLayer provides significantly more hooks to customize its behavior by adding ServiceLayerDecorator instances.
I went through a transition from RPC to RF. First I have to say my experience is limited in that I used exactly zero EntityProxies.
Advantages of GWT RPC:
It's very easy to set up, understand, and LEARN!
Same class-based objects are used on the client and on the server.
This approach saves tons of code.
Ideal when the same model objects (and POJOs) are used on both client and server: POJOs == MODEL OBJECTS == DTOs.
Easy to move stuff from the server to client.
Easy to share implementation of common logic between client and server (this can turn out as a critical disadvantage when you need a different logic).
Disadvantages of GWT RPC:
Impossible to have different implementations of some methods for server and client; e.g. you might need to use a different logging framework on client and server, or a different equals method.
REALLY BAD implementation that is not further extensible: most of the server functionality is implemented as static methods on an RPC class. THAT REALLY SUCKS.
e.g. it is impossible to add server-side error obfuscation
Some security XSS concerns that are not quite elegantly solvable, see docs (I am not sure whether this is more elegant for RequestFactory)
Disadvantages of RequestFactory:
REALLY HARD to understand from the official doc what the merit of it is! It starts right at the completely misleading term PROXIES - these are actually the DTOs of RF, and they are created by RF automatically. Proxies are defined by interfaces, e.g. @ProxyFor(Journal.class). The IDE checks whether corresponding methods exist on Journal. So much for the mapping.
RF will not do much for you in terms of commonalities of client and server because
On the client you need to convert "PROXIES" to your client domain objects and vice versa. This is completely ridiculous. It could be done in a few lines of code declaratively, but there's NO SUPPORT FOR THAT! If only we could map our domain objects to proxies more elegantly; something like the JavaScript method JSON.stringify(...) is MISSING from the RF toolbox.
Don't forget you are also responsible for setting the transferable properties of your domain objects on the proxies, and so on recursively.
POOR ERROR HANDLING on the server - stack traces are omitted by default on the server, and you are getting empty, useless exceptions on the client. Even when I set a custom error handler, I was not able to get to low-level stack traces! Terrible.
Some minor bugs in IDE support and elsewhere. I filed two bug reports that were accepted. No Einstein was needed to figure out that those were actually bugs.
DOCUMENTATION SUCKS. As I mentioned, proxies should be better explained; the term is MISLEADING. For the basic, common problems I was solving, the DOCS ARE USELESS. Another example of misunderstanding from the docs is the connection of JPA annotations to RF. From the succinct docs it looks like they kinda play together, and yes, there is a corresponding question on StackOverflow. I recommend forgetting any JPA "connection" before understanding RF.
Advantages of RequestFactory
Excellent forum support.
IDE support is pretty good (but is not an advantage in contrast with RPC)
Flexibility of your client and server implementation (loose coupling)
Fancy stuff, connected to EntityProxies, beyond simple DTOs - caching, partial updates, very useful for mobile.
You can use ValueProxies as the simplest replacement for DTOs (but you have to do all the not-so-fancy conversions yourself).
Support for Bean Validation (JSR-303); see the sketch below.
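For illustration, a constraint declared once on a shared class can be validated on both client and server; the class and field here are made up:

    import javax.validation.constraints.NotNull;
    import javax.validation.constraints.Size;

    // JSR-303 constraints live on the shared type itself.
    public class PersonEntity {
      @NotNull
      @Size(min = 1, max = 80)
      private String name;
    }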
Considering other disadvantages of GWT in general:
Impossible to run integration tests (GWT client code + remote server) with provided JUnit support <= all JSNI has to be mocked (e.g. localStorage), SOP is an issue.
No support for testing setup - headless browser + remote server <= no simple headless testing for GWT, SOP.
Yes, it is possible to run Selenium integration tests (but that's not what I want)
JSNI is very powerful, but at those shiny talks they give at conferences they do not mention that writing JSNI code also has its rules. Again, figuring out how to write a simple callback was a task worthy of a true researcher.
In summary, the transition from GWT RPC to RequestFactory is far from a WIN-WIN situation
when RPC mostly fits your needs. You end up writing tons of conversions from client domain objects to proxies and vice versa. But you get some flexibility and robustness in your solution. And the support on the forum is excellent, on Saturdays as well!
Considering all advantages and disadvantages I just mentioned, it pays really well to think in advance whether any of these approaches actually brings improvement to your solution and to your development set-up without big trade-offs.
I find the idea of creating Proxy classes for all my entities quite annoying. My Hibernate/JPA POJOs are auto-generated from the database model. Why do I now need to create a second mirror of those for RPC? We have a nice "estivation" framework that takes care of "de-hibernating" the POJOs.
Also, the idea of defining service interfaces that don't quite implement the server-side service as a Java contract but do implement the methods - sounds very J2EE 1.x/2.x to me.
Unlike RequestFactory, which has poor error-handling and testing capabilities (since it processes most of the stuff under the hood of GWT), RPC allows you to use a more service-oriented approach. RequestFactory implements a more modern, dependency-injection-styled approach that can be useful if you need to invoke complex polymorphic data structures. When using RPC your data structures will need to be flatter, as this allows your marshaling utilities to translate between your JSON/XML and Java models. Using RPC also allows you to implement a more robust architecture, as quoted from the GWT dev section on Google's website.
"Simple Client/Server Deployment
The first and most straightforward way to think of service definitions is to treat them as your application's entire back end. From this perspective, client-side code is your "front end" and all service code that runs on the server is "back end." If you take this approach, your service implementations would tend to be more general-purpose APIs that are not tightly coupled to one specific application. Your service definitions would likely directly access databases through JDBC or Hibernate or even files in the server's file system. For many applications, this view is appropriate, and it can be very efficient because it reduces the number of tiers.
Multi-Tier Deployment
In more complex, multi-tiered architectures, your GWT service definitions could simply be lightweight gateways that call through to back-end server environments such as J2EE servers. From this perspective, your services can be viewed as the "server half" of your application's user interface. Instead of being general-purpose, services are created for the specific needs of your user interface. Your services become the "front end" to the "back end" classes that are written by stitching together calls to a more general-purpose back-end layer of services, implemented, for example, as a cluster of J2EE servers. This kind of architecture is appropriate if you require your back-end services to run on a physically separate computer from your HTTP server."
Also note that setting up a single RequestFactory service requires creating around 6 or so Java classes, whereas RPC only requires 3. More code == more errors and complexity in my book.
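For comparison, the three RPC pieces mentioned above are roughly these (PersonService and Person are illustrative; Person must be GWT-serializable):

    import com.google.gwt.user.client.rpc.AsyncCallback;
    import com.google.gwt.user.client.rpc.RemoteService;
    import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;
    import com.google.gwt.user.server.rpc.RemoteServiceServlet;

    // 1. The synchronous service interface (shared between client and server).
    @RemoteServiceRelativePath("person")
    interface PersonService extends RemoteService {
      Person getPerson(Long id);
    }

    // 2. The async counterpart the client actually calls (client side).
    interface PersonServiceAsync {
      void getPerson(Long id, AsyncCallback<Person> callback);
    }

    // 3. The server-side implementation.
    class PersonServiceImpl extends RemoteServiceServlet implements PersonService {
      public Person getPerson(Long id) {
        return loadFromDatabase(id);               // hypothetical data access
      }
      private Person loadFromDatabase(Long id) { return new Person(); }
    }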
RequestFactory also has a little more overhead during request processing, as it has to marshal data between the proxies and the actual Java models. This added indirection adds extra processing cycles, which can really add up in an enterprise or production environment.
I also do not believe that RequestFactory services are serializable in the way RPC services are.
All in all, after using both for some time now, I always go with RPC as it's more lightweight, easier to test and debug, and faster than using RequestFactory. Although RequestFactory might be more elegant and extensible than its RPC counterpart, the added complexity does not necessarily make it a better tool.
My opinion is that the best architecture is to use two web apps, one client and one server. The server is a simple, lightweight, generic Java webapp that uses the servlet.jar library. The client is GWT. You make RESTful requests via GWT-RPC into the server side of the client web application. The server side of the client is just a pass-through to Apache HttpClient, which uses a persistent tunnel into the request handler running as a single servlet in your server webapp. The server webapp should contain your database application layer (Hibernate, Cayenne, SQL, etc.). This allows you to fully divorce the database object models from the actual client, providing a much more extensible and robust way to develop and unit test your application. Granted, it requires a tad bit of initial setup time, but in the end it allows you to create a dynamic request factory sitting outside of GWT. This lets you leverage the best of both worlds, not to mention being able to test and make changes to your server side without having the GWT client compiled or built.
I think it's really helpful if you have heavy POJOs on the client side, for example if you use Hibernate or JPA entities.
We adopted another solution, using a Django-style persistence framework with very light entities.
The only caveat I would put in is that RequestFactory uses the binary data transport (deRPC maybe?) and not the normal GWT-RPC.
This only matters if you are doing heavy testing with SyncProxy, JMeter, Fiddler, or any similar tool that can read/evaluate the contents of the HTTP request/response (as they can with GWT-RPC); it would be more challenging with deRPC or RequestFactory.
We have a very large implementation of GWT-RPC in our project.
Actually, we have 50 service interfaces with many methods each, and we have problems with the size of the TypeSerializers generated by the compiler, which makes our JS code huge.
So we are analyzing a move towards RequestFactory.
I have been reading for a couple of days, digging into the web and trying to find out what other people are doing.
The most important drawback I saw, and maybe I could be wrong, is that with RequestFactory you are no longer in control of the communication between your server domain objects and your client ones.
What we need is to apply the load/save pattern in a controlled way. I mean, for example, the client receives the whole object graph of objects belonging to a specific transaction, does its updates, and sends the whole thing back to the server. The server is responsible for doing validation, comparing old with new values, and doing persistence. If two users from different sites get the same transaction and make some updates, the resulting transaction shouldn't be the merged one. One of the updates should fail in my scenario.
I don't see that RequestFactory helps support this kind of processing.
Regards
Daniel
Is it fair to say that, when considering a limited MIS application, say with 10-20 CRUD'able business objects, each with ~1-10 properties, it really comes down to personal preference which route to go with?
If so, then perhaps projecting how your application is going to scale could be the key in choosing between GWT RPC and RequestFactory:
My application is expected to stay with that relatively limited number of entities but will massively increase in terms of their numbers. 10-20 objects * 100,000 records.
My application is going to increase significantly in the breadth of entities but the relative numbers involved of each will remain low. 5000 objects * 100 records.
My application is expected to stay with that relatively limited number of entities AND will stay in relatively low numbers of e.g. 10-20 objects * 100 records
In my case, I'm at the very starting point of trying to make this decision. It is further complicated by having to change the UI client-side architecture as well as making the transport choice. My previous (significantly) large-scale GWT UI used the Hmvc4Gwt library, which has been superseded by the GWT MVP facilities.

SOAP, REST or just XML for Objective-C/iPhone vs. server solution

We are going to set up a solution where the iPhone requests data from the server. We have the option to decide what kind of solution to put in place, and we are not sure which way to go.
Regarding SOAP I think I have the answer: there is no really stable solution for doing this (I know there are solutions, but I want something stable).
How about REST?
Or is it better to just create our own XML? The request/response flow is not going to be that complicated.
Thanks in advance!
I've created an open source application for iPhone OS 3.0 that shows how to use REST & SOAP services in an iPhone application, using XML (via 8 different iPhone libraries), SOAP, JSON (using SBJSON and TouchJSON), YAML, Protocol Buffers (Google's serialization format), and even CSV, against a PHP sample app (included in the project).
http://github.com/akosma/iPhoneWebServicesClient
The project is modular enough to support many other formats and libraries in the future.
The following presentation in SlideShare shows my findings in terms of performance, ease of implementation and payload characteristics:
http://www.slideshare.net/akosma/web-services-3439269
Basically I've found, in my tests, that Binary Plists + REST + JSON and XML + the TBXML library are the "best" options (meaning: ease of implementation + speed of deserialization + lowest payload size).
In the Github project there's a "results" folder, with an Excel sheet summarizing the findings (and with all the raw data, too). You can launch the tests yourself, too, in 3G or wifi, and then have the results mailed to yourself for comparison and study.
Hope it helps!
REST is the way to go. There are SOAP solutions, but given that everything people end up doing with SOAP can be done with RESTful services anyway, there's simply no need for the overhead (SOAP calls wrap XML data inside an XML envelope, which must also be parsed).
The thing that makes REST as an approach great is that it makes full use of the HTTP protocol, not just for fetching data but also posting (creating) or deleting things too. HTTP has standard messages defined for problems with all those things, and a decent authentication model to boot.
Since REST is just HTTP calls, you can choose what method of data transfer best meets your needs. You could send/receive XML if you like, though JSON is easier to parse and is smaller to send. Plists are another popular format since you can send richer datatypes across and it's slightly more structured than JSON, although from the server side you generally have to find libraries to create it.
Many people use JSON, but beware that it's very finicky to parse - mess up a character at the start of a line, or accidentally include strings without escaping quote characters, and there can be issues.
XML Property-lists (plist) are also a common way to serialize data in Cocoa. It is also trivial to generate from other languages and some good libraries exist out there.
You are not saying how complex your data structures are, or whether you actually need state handling.
If you want to keep your network traffic to a minimum while still keeping some of the structured features of XML, you might have a look at JSON. It is a very lightweight data encapsulation format.
There are some implementations available for iPhone, for instance TouchJSON
Claus
I would go with simple HTTP. NSURLConnection in the Cocoa libraries makes this pretty simple. Google Toolbox for Mac also has several classes to help with parsing URL-encoded data.
I think it's obvious that REST is the new king of server communication; you should definitely use REST. The questions should be which REST methodology to use and which coding language; in my post I present a few very simple implementations of REST servers in C#, PHP, Java, and node.js.

Is it a good thing for a custom REST protocol to be binary-based instead of text-based like HTTP?

Have you ever seen a good reason to create a custom binary REST protocol instead of using the basic HTTP REST implementation?
I am currently working on a service oriented architecture framework in .Net responsible for hosting and consuming services. I don't want to be based on an existing framework like Remoting or WCF, because I want total flexibility and control for performing custom optimization.
So here I am, trying to find the best protocol for this SOA framework. I like the stateless request/response nature of REST and its URIs for defining resources, but I dislike the text-based nature of HTTP.
Here are my arguments for disliking HTTP, correct me if I'm wrong:
First, an obvious one: parsing text is less efficient than parsing binary.
I'd prefer a fixed-length binary header containing the content length, followed by the binary content.
Second, there is no notion of a sequence number for HTTP requests, so the only way of associating a response with its request is the socket connection used to send the request and receive the response.
This means there can only be one pending request at a time on a given socket, so if a service consumer wants to send multiple requests in parallel to a service, it needs to open multiple sockets to the server.
A custom REST protocol could define a sequence number for requests, so that requests and responses would be associated by sequence number instead of by socket, and multiple requests could be sent in parallel on the same socket.
I think there is no standard way to do this with HTTP. It could be done with a custom text-based protocol, but why not make it binary-based to gain performance?
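To illustrate the proposed framing (sketched in Java purely for illustration, since the poster's framework is .NET; nothing here is a real protocol):

    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    // Sketch of the proposed frame: a fixed-length binary header carrying a
    // sequence number (to match responses to requests) and the payload
    // length, followed by the opaque payload itself.
    final class Frame {
      static void write(OutputStream socketOut, int sequenceNumber, byte[] payload)
          throws IOException {
        DataOutputStream out = new DataOutputStream(socketOut);
        out.writeInt(sequenceNumber); // 4 bytes: lets requests share one socket
        out.writeInt(payload.length); // 4 bytes: fixed header, no text parsing
        out.write(payload);           // binary content
        out.flush();
      }
    }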
To add a little more context, my SOA framework does not need to be accessible from a non .Net consumer, so I have no restriction about using the .Net binary formatter or another custom binary formatter.
So, am I making any sense in wanting a custom binary REST protocol? If you think I'm wrong, please tell me your arguments.
Thanks.
REST is an architectural style for constructing web services. Roy Fielding articulated it based on his experience in designing HTTP, but it transcends HTTP. You can deploy a RESTful service over ordinary email exchange for instance.
The REST representations of your resources can be anything you like, though Roy really stresses that people should try to use very carefully designed, standard representations. There's nothing wrong with binary. In fact image representations like JPEG and PNG are binary. Google's Protocol Buffers gives you ways of creating compact binary representations of structured data too.
So the short answer is that you can certainly be RESTful and use binary representations and a home-grown binary substitute for HTTP.
I would actually very strongly recommend you use HTTP for efficiency, though. If you use your own protocol then you lose all the leverage provided by the wonderful HTTP caching infrastructure that stands between your server and its clients. The full client load falls right on your server instead of getting spread out over the intermediate caches.
10/4/2010: In our HTTP-based REST APIs we now support Java Serialized Object and Kryo binary representations in addition to XML, JSON, and XHTML. The Kryo serialization library performs significantly better than the others without requiring any special protocol. Another alternative to cut down on bandwidth is to use HTTP compression along with a textual representation.
The data of your packets can be whatever you like, be it XML, plain text, JSON, or just a binary format. I see no reason to force any specific one of these on yourself (use whatever is most fitting).
Though, when people say "binary format", most hear a fixed format of field lengths and other really annoying things. Generally, if you aren't desperate from a data-transfer point of view, I see no reason to go this way (though you may do a binary serialisation of .NET objects so they can be re-instantiated, or look at the Protocol Buffers library for .NET).
Summary: Seems okay to me. Whatever floats your boat.
Have you ever seen a good reason to create a custom binary rest protocol instead of using the basic http rest implementation?
I'm not sure I understand your terms. A REST representation can be completely binary, and still transported over pure HTTP. There's no need to invent a new protocol just to transmit binary data. Just reduce your requirements to a well-documented media type (which you're free to invent).
Regarding binary resource representations, Fielding himself argues that binary is not only acceptable, but may be required in some situations.
Regardless of whether you go straight binary, Base64 mixed with text, or text-only, don't forget the hypertext constraint if you plan on calling what you do "REST".
So am I making any sense for wanting a custom binary rest protocol?
If you mean you'd like to create a custom, binary-only, hypertext-driven media type - that's perfectly reasonable.
If you're talking about inventing some custom extension to HTTP, I would avoid that unless absolutely necessary (and what you describe doesn't sound to me like it rises to that level).
I want total flexibility and control
Based on this statement, I would suggest one of two choices: either straight TCP/IP sockets, or WCF. These options give you by far the most flexibility. WCF is a great solution if you want to be transport-agnostic and you want to start with a clean slate as far as an application protocol is concerned.
REST imposes a set of constraints that restrict how your distributed application behaves in order to gain certain beneficial characteristics. If you see your service end points as ways to invoke operations then I suggest that REST will not fit your needs well at all.
Somehow, I don't think whether you send binary or text payloads should be your biggest concern at this point.