JSON-RPC 2 returning remote reference - zend-framework

I'm exploring JSON-RPC 2 for a web service. I have some experience with Java RMI and liked it very much. To make things easy I'm using the Zend Framework, so I'd like to use its JSON-RPC library. There is, however, one thing I am missing: how do I make a procedure send back a reference to another object?
I understand this is not part of the protocol, because it is about procedures, but it would still be useful. With Java RMI I could choose whether objects were sent by value (serialized) or by reference (remote object proxy). So what is the best way to solve this? Are there any standards for this that most libraries use?
I spent a few hours on Google looking for this and can think of a solution (like returning a URL), but I would rather use a standard than design something new.
There is one other thing I would like your opinion on. I heard an architect rant about the protocol's feature of sending batches of calls. Are these considered nice or dirty? (He thinks they are ugly, but I can think of uses for them.)
Update
I think the nicest way is just to return a RemoteRef object with a URL to the object. That way it is only a small wrapper and a little documentation. Still, I would like to know if there is a common way to do this.
SMD possibilities
There might be some way to specify the return type in my SMD. Does anyone have ideas on how to give a reference to another page in my SMD return type? Or does anyone know a good explanation of the zend_json_smd class?

You can't return a reference of any kind via JSON-RPC.
There is no standard to do so (afaik) because RPC is stateless, and most developers prefer it that way. This simplicity is what makes JSON-RPC so desirable to client-side developers over SOAP (and other messes).
You could however adopt a convention in your return values that some JSON constructs should be treated as "cues" to manufacture remote "object" proxies. For example, you could create a modified JSON de-serialiser that turns:
{
    "__remote_object": {
        "class": "Some.Remote.Class",
        "remote_id": 54625143,
        "smd": "http://domain/path/to/further.smd",
        "init_with": { ... Initial state of object ... }
    }
}
Into a remote object proxy by:
creating a local object with a prototype named after class, initialised via init_with
downloading & processing the smd URL
creating a local proxy method (in the object proxy) for each procedure exposed via the API, such that they pass the remote_id to the server with each call (so that the server knows which remote object proxy maps to which server-side object.)
While this scheme would be workable, there are plenty of moving parts, and the code is much larger and more complicated on the client-side than for normal JSON-RPC.
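For illustration, a minimal client-side sketch of that convention might look like the following (TypeScript; the jsonRpcCall helper, the SMD field names and the overall shape are assumptions of this sketch, not part of any standard):

// Shape of the "cue" the server embeds in a result (assumed convention, not a standard).
interface RemoteObjectCue {
    __remote_object: {
        class: string;
        remote_id: number;
        smd: string;                         // URL of an SMD describing the object's procedures
        init_with?: Record<string, unknown>; // initial state of the object
    };
}

// Minimal JSON-RPC 2.0 call helper (assumed transport: plain HTTP POST via fetch).
async function jsonRpcCall(url: string, method: string, params: unknown[]): Promise<unknown> {
    const response = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ jsonrpc: "2.0", id: Date.now(), method, params }),
    });
    const envelope = await response.json();
    if (envelope.error) throw new Error(envelope.error.message);
    return envelope.result;
}

// Detects the agreed-upon cue in a decoded result.
function isRemoteObjectCue(value: unknown): value is RemoteObjectCue {
    return typeof value === "object" && value !== null && "__remote_object" in value;
}

// Turns a cue into a proxy whose methods forward to the server, passing remote_id along.
async function buildRemoteProxy(cue: RemoteObjectCue): Promise<Record<string, unknown>> {
    const { remote_id, smd, init_with } = cue.__remote_object;
    const proxy: Record<string, unknown> = { ...(init_with ?? {}) };

    // Download the SMD and create one forwarding method per advertised procedure.
    const descriptor = await (await fetch(smd)).json();
    for (const name of Object.keys(descriptor.services ?? {})) {
        proxy[name] = (...args: unknown[]) =>
            jsonRpcCall(descriptor.target, name, [remote_id, ...args]);
    }
    return proxy;
}

A modified de-serialiser would call isRemoteObjectCue on each result and hand matches to buildRemoteProxy; the server would then need a matching convention, for example treating the first parameter of every proxied call as the remote_id and using it to look up the server-side object.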
JSON-RPC itself is not well standardised, so most extensions (even SMD) are merely conventions around method-names and payloads.

Related

shared domain with scala.js, how?

Probably a basic question, but I'm confused by the various documentation and examples around Scala.js.
I have a domain model I would like to share between scala and scala.js, let's say:
class Estimator(val nickname: String)
... and of course I would like to send objects between the web-client (scala.js with angular via angulate) and the server (scala with spring-mvc on spring-boot).
Should the class extend js.Object? And be annotated with @ScalaJSDefined (not yet deprecated in v0.6.15)?
If yes, it would be an unwanted dependency that also ends up in the server part. Neither @ScalaJSDefined nor js.Object are in the dummy scalajs-stubs. Or am I missing something?
If no, how do I pass them through $http.post, which expects a js.Any? I also get some TypeErrors in other places. Should I pickle/unpickle everywhere, or is there an automatic way?
EDIT 2017-03-30:
Actually this relates to Angulate, the facade for AngularJS I chose. For two features (communicating with an HTTP server and displaying model fields in HTML), the domain classes have to be JavaScript classes. In Angulate's example, the domain model is duplicated.
There is also (sadly) no plan to include js.Object in scalajs-stubs to overcome this problem. Details are in https://github.com/scala-js/scala-js/issues/2564 . Perhaps js.Object doesn't hurt so much on the JVM...
So, which web frameworks and facades for Scala.js do or don't nicely support a shared domain? Not angulate1, probably Udash, perhaps React?
(Caveat: I don't know Angulate, which might affect some of this. Speaking generally, though...)
No, those shared objects shouldn't derive from js.Object or use @ScalaJSDefined -- those are only for objects that are designed to interface with JavaScript itself, and it doesn't sound like that's what you have in mind. Objects that are just for Scala don't need them.
But yes -- in general, you're usually going to need to pickle the communications in one way or another. Which pickling library you use is up to you (there are several), but remember that the communication is simply a stream of bytes -- you have to tell the system how to serialize and deserialize between your domain objects and those bytes.
There isn't anything automatic in Scala.js per se -- that's just a language, and doesn't dictate your library choices. You can use implicits to make the pickling semi-automatic, but I recommend being a bit careful with that. I don't see anything obvious in the Angulate docs that indicates it does the pickling automatically.

Creating Self-Documenting Actors in Scala

I'm looking at implementing a JSON-RPC based web service in Scala using finagle. I'm trying to work out how best to structure the RPC invocation code (ie. taking the deserialized request and invoking the appropriate method).
The service needs to be able to spit out a help page on all the possible requests accepted and their parameters. In Java, I would simply use annotations (to both expose and document functions) and then have the RPC service reflect on the appropriate classes, detect all exposed methods and then use the reflected MethodInfo's to invoke the functions where appropriate.
What is the idiomatic Scala way to achieve something similar? Should I use a message-passing approach (i.e. just pass a request object into an actor, have it determine if it can invoke it, etc.)?
We had success doing something similar to the approach suggested by @Jan above. More specifically, we defined a parent class for all request objects which takes the expected return type as a type parameter. Going one step further, we're generating our protocol IDL and serialization bindings by reflecting on API objects (little more than sets of requests).
In the future, the experimental typed channels feature in Akka may help with some of the mechanics.

CORBA: Can a CORBA IDL type be an attribute of another?

Before I start using CORBA I want to know something.
It would seem intuitive to me that you could use an IDL type as an attribute of another, which would then expose that attribute's methods to the client application (using ".") as well.
But is this possible?
For example (excuse my bad IDL):
interface Car {
    attribute BrakePedal brakePedal;
    //...
};

//then.. (place above)
interface BrakePedal {
    void press();
    //...
};
//...
Then in the client app, you could do: myCar.brakePedal.press();
CORBA would seem crappy if you couldn't do this kind of multi-level object interface. After all, real-world objects are multi-level, right? So can someone put my mind at ease and confirm (or try, if you already have CORBA set up) that this definitely works? None of the IDL documentation explicitly shows this in an example, which is why I'm concerned. Thanks!
Declaring an attribute is logically equivalent to declaring a pair of accessor functions, one to read the value of the attribute, and one to write it (you can also have readonly attributes, in which case you would only get the read function).
It does appear from the CORBA spec that you could use an interface name as an attribute type. I tried feeding such IDL to omniORB's IDL-to-C++ translator, and it didn't reject it. So I think it is allowed.
But, I'm really not sure you would want to do this in practice. Most CORBA experts recommend that if you are going to use attributes you only use readonly attributes. And for something like this, I would just declare my own function that returned an interface.
Note that you can't really do the syntax you want in the C++ mapping anyway; e.g.
server->brakePedal()->press(); // major resource leak here
brakePedal() is the attribute accessor function that returns a CORBA object reference. If you immediately call press() on it, you are going to leak the object reference.
To do this without a leak you would have to do something like this:
BrakePedal_var brakePedal(server->brakePedal());
brakePedal->press();
You simply can't obtain the notational convenience you want from attributes in this scenario with the C++ mapping (perhaps you could in the Python mapping). Because of this, and my general dislike for attributes, I'd just use a regular function to return the BrakePedal interface.
You don't understand something important about distributed objects: remote objects (whether implemented with CORBA, RMI, .NET remoting or web services) are not the same as local objects. Calls to CORBA objects are expensive, slow, and may fail due to network problems. The object.attribute.method() syntax would make it hard to see that two different remote calls are being executed on that one line, and make it hard to handle any failures that might occur.

Objective-C design advice for use of different data sources, swapping between test and live

I'm in the process of designing an application that is part of a larger piece of work, depending on other people to build an API that the app can make use of to retrieve data.
While I was thinking about how to setup this project and design the architecture around it, something occurred to me, and I'm sure many people have been in similar situations.
Since my work depends on other people completing their tasks, and on a test server, this slows down work at my end.
So the question is:
What's the best practice for creating test repositories and classes and implementing them, without having to alter several places in the code to swap between the test classes and the actual repositories / proper API calls?
Contemplate the following scenario:
GetDataFromApiCommand *getDataCommand = [[GetDataFromApiCommand alloc]init];
getDataCommand.delegate = self;
[getDataCommand getData];
Once the data is available via the API, "GetDataFromApiCommand" could use the actual API, but until then a set of mock data could be returned when [getDataCommand getData] is called.
There might be multiple instances of this, in various places in the code, so replacing all of them wherever they are, is a slow and painful process which inevitably leads to one or two being overlooked.
In strongly typed languages we could use dependency injection and just alter one place.
In Objective-C a factory pattern could be implemented, but is that the best route to go for this?
GetDataFromApiCommand *getDataCommand = [GetDataFromApiCommandFactory buildGetDataFromApiCommand];
getDataCommand.delegate = self;
[getDataCommand getData];
What are the best practices to achieve this result?
Since this would be useful even when the actual API is available (to run tests or to work offline), the ApiCommands would not necessarily have to be replaced permanently; rather, there should be an option to select "Do I want to use TestApiCommand or ApiCommand?".
It is more interesting to have the option to switch between "all commands are test" and "all commands use the live API", rather than selecting them one by one. However, selecting them individually would also be useful for testing one or two actual API commands while mixing them with test data.
EDIT
The way I have chosen to go with this is to use the factory pattern.
I set up the factory as follows:
@implementation ApiCommandFactory
+ (ApiCommand *)newApiCommand
{
    // return [[ApiCommand alloc] init];
    return [[ApiCommandMock alloc] init];
}
@end
And anywhere I want to use the ApiCommand class:
GetDataFromApiCommand *getDataCommand = [ApiCommandFactory newApiCommand];
When the actual API call is required, the comments can be removed and the mock can be commented out.
Using new in the message name implies that whoever uses the factory to get an object is responsible for releasing it (since we want to avoid autorelease on the iPhone).
If additional parameters are required, the factory needs to take these into consideration, e.g.:
[ApiCommandFactory newSecondApiCommand:@"param1"];
This will work quite well with repositories as well.
Using new in the message name implies that whoever uses the factory to get an object is responsible for releasing it (since we want to avoid autorelease on the iPhone).
You don't have to be obsessive about not autoreleasing. It really only applies if you happen to be creating lots of autoreleased objects within one iteration of the event loop. If this object is intended to live long enough that you would otherwise retain it outside of the factory method, giving back an autoreleased object will cost you only a reference in the autorelease pool.
Anyway, to answer your question, your choice is pretty much exactly what I would do. Consider also creating an Objective-C protocol that matches the API and that your APICommandMock and APICommand must conform to. That will document the API and provide some discipline for both you and the other team. For instance, it'll nail down things like method names and parameter types.

JSON or SOAP (XML)?

I'm developing a new application for the company.
The application has to exchange data to and from an iPhone.
The company's server side uses the .NET Framework.
For example: the class "Customer" (Name, Address, etc.) for a specific CustomerNumber should first be downloaded from the server to the iPhone, stored locally and then uploaded back to apply changes (and make them available to other people). Concurrency should not be a problem (at least at this time...).
In any case I have to develop both the server side (web service or whatever) and the iPhone app.
I'm free to identify the best way to do this (this is application "number ONE", so it will become the "standard" for the future).
So, what do you suggest?
Use SOAP web services (XML parsing etc.) or use JSON? (It seems lighter...)
It is clear to me how to "upload" data using SOAP (coding the XML SOAP envelope is long-winded... I would avoid it), but how can I do the same using JSON?
The application needs to use date values (for example last_visit_date, etc.). What about dates in JSON?
JSON has several advantages over XML. It's a lot smaller and less bloated, so you will be passing much less data over the network - which in the case of a mobile device makes a considerable difference.
It's also easier to use in JavaScript code, as you can simply pass the data packet directly into a JavaScript object without any parsing, extracting and converting, so it is much less CPU intensive too.
To code with it, instead of an XML library you will want a JSON library. Dates are handled as you would with XML - encode them to a standard, then let the library recognise them (e.g. here's a library with a sample with dates in it).
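For example, one common convention (a sketch of one approach, not something JSON itself mandates) is to encode dates as ISO 8601 strings and revive them when parsing:

// Encode dates as ISO 8601 strings when serialising.
const payload = JSON.stringify({
    customerNumber: 42,
    last_visit_date: new Date().toISOString(),   // e.g. "2010-05-21T14:30:00.000Z"
});

// Revive ISO 8601 strings back into Date objects when parsing.
const isoDate = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[+-]\d{2}:\d{2})$/;
const customer = JSON.parse(payload, (_key, value) =>
    typeof value === "string" && isoDate.test(value) ? new Date(value) : value
);

console.log(customer.last_visit_date instanceof Date);   // true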
Here's a primer.
Ah, the big question: JSON or XML?
In general, I would prefer XML only when I need to pass around a lot of text, since XML excels at wrapping and marking up text.
When passing around small data objects, where the only strings are small (ids, dates, etc.), I would tend to use JSON, as it is smaller, easier to parse, and more readable.
Also, note that even if you choose XML, that does not by any means mean you need to use SOAP. SOAP is a very heavy-weight protocol, designed for interoperability between partners. As you control both the client and server here, it doesn't necessarily make sense.
Consider how you'd be consuming the results on the iPhone. What mechanism would you use to read the web service response? NSXMLParser?
How you consume the data will have the biggest impact on how you serve it.
Are JSON and SOAP your only options? What about RESTful services?
Take a look at some big players on the web that have public APIs that are accessible by iPhone clients:
Twitter API
FriendFeed API
Also, review the following related articles:
How to parse nested JSON on iPhone
RESTful WCF service that can still use SOAP
Performance of REST vs SOAP
JSON has the following advantages:
it can encode boolean and numeric values ... in XML everything is a string
it has much clearer semantics ... in json you have {"key":"someValue"}, in XML you can have <data><key>someValue</key></data> or <data key="someValue" /> ... any XML node must have a name ... this does not always make sense ... and children may either represent properties of an object, or children, which when occurring multiple times actually represent an array ... to really understand the object structure of an XML message, you need its corresponding schema ... in JSON, you need the JSON only ...
smaller and thus uses less bandwidth and memory during parsing/generation ...
apart from that, I see NO difference between XML and JSON ... I mean, this is so interchangeable ... you can use JSON to capture the semantics of SOAP, if you want to ...
it's just that SOAP is so bloated ... if you do want to use SOAP, use a library and generators for that ... it's neither fun nor interesting to build it all by hand ...
using XML-RPC or JSON-RPC should work faster ... it is more lightweight, and you use JSON or XML at will ... but when creating client<->server apps, a very important thing in my eyes is to abstract the transport layer on both sides ... your whole business logic etc. should depend on no more than a tiny interface when it comes to communication, and then you can plug protocols into your app as needed ...
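As a rough illustration of that kind of transport abstraction (a sketch only; the interface, the JSON-RPC-over-HTTP transport and names like getCustomer are assumptions, not part of any particular framework):

// The business logic depends only on this tiny interface ...
interface Transport {
    call(method: string, params: unknown): Promise<unknown>;
}

// ... and concrete protocols (JSON-RPC, XML-RPC, SOAP, ...) are plugged in behind it.
class JsonRpcHttpTransport implements Transport {
    constructor(private endpoint: string) {}

    async call(method: string, params: unknown): Promise<unknown> {
        const response = await fetch(this.endpoint, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
        });
        const envelope = await response.json();
        if (envelope.error) throw new Error(envelope.error.message);
        return envelope.result;
    }
}

// Application code only ever sees the Transport interface.
async function loadCustomer(transport: Transport, customerNumber: number) {
    return transport.call("getCustomer", { customerNumber });
}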
There are more options than just SOAP vs JSON. You can do a REST-based protocol (Representational State Transfer) using XML. I think it's easier to use than SOAP and you get a much nicer XSD (that you design). It's rather easy for almost any client to access such services.
On the other hand, JSON parsers are available for almost any language and make it really easy to call from JavaScript if you'll use them via AJAX.
However, SOAP can be rather powerful with tons of standardized extensions that support enterprise features.
You could also use Hessian using HessianKit on the iPhone side, and HessianC# on the server side.
The big bonuses are:
1. Hessian is a binary serialization protocol, so data payloads are smaller - good for 3G and GSM.
2. You do not need to worry about the format on either end; transport is automated with proxies.
So on the server side you just define a C# interface, such as:
public interface IFruitService {
    int FruitCount();
    string GetFruit(int index);
}
Then you just subclass CHessianHandler and implement the IFruitService, and your web service is done.
On the iPhone just write the corresponding Objective-C protocol:
@protocol IFruitService
-(int)FruitCount;
-(NSString*)GetFruit:(int)index;
@end
That can then be accessed by proxy with a single line of code:
id<IFruitService> fruitService = [CWHessianConnection proxyWithURL:serviceURL
                                                           protocol:@protocol(IFruitService)];
Links:
HessianKit : hessiankit
I would certainly go with JSON, as others already noted - it's faster and data size is smaller. You can also use a data modelling framework like JSONModel to validate the JSON structure, and to autoconvert JSON objects to Obj-C objects.
JSONModel also includes classes for networking and working with APIs, including JSON-RPC methods.
Have a look at these links:
http://www.jsonmodel.com - the JSONModel framework
http://json-rpc.org - specification for JSON APIs implementation
http://www.charlesproxy.com - the best tool to debug JSON APIs
http://json-schema.org - tool to define validation schemas for JSON, useful along the way
Short example of using JSONModel:
http://www.touch-code-magazine.com/how-to-make-a-youtube-app-using-mgbox-and-jsonmodel/
Hope these are useful