CORBA: Can a CORBA IDL type be an attribute of another?

Before I start using CORBA I want to know something.
It would seem intuitive to me that you could use an IDL type as an attribute of another, which would then expose that attribute's methods to the client application (using ".") as well.
But is this possible?
For example (excuse my bad IDL):
interface Car {
    attribute BrakePedal brakePedal;
    //...
};
//then.. (place above)
interface BrakePedal {
    void press();
    //...
};
//...
Then in the client app, you could do: myCar.brakePedal.press();
CORBA would seem crappy if you couldn't do this kind of multi-level object interface. After all, real-world objects are multi-level, right? So can someone put my mind at ease and confirm (or try it, if you already have CORBA set up) that this definitely works? None of the IDL documentation explicitly shows this in an example, which is why I'm concerned. Thanks!

Declaring an attribute is logically equivalent to declaring a pair of accessor functions, one to read the value of the attribute, and one to write it (you can also have readonly attributes, in which case you would only get the read function).
It does appear from the CORBA spec that you can use an interface type as the type of an attribute. I tried feeding such IDL to omniORB's IDL-to-C++ translator, and it didn't reject it. So I think it is allowed.
But, I'm really not sure you would want to do this in practice. Most CORBA experts recommend that if you are going to use attributes you only use readonly attributes. And for something like this, I would just declare my own function that returned an interface.
Note that you can't really do the syntax you want in the C++ mapping anyway; e.g.
server->brakePedal()->press(); // major resource leak here
brakePedal() is the attribute accessor function that returns a CORBA object reference. If you immediately call press() on it, you are going to leak the object reference.
To do this without a leak you would have to do something like this:
BrakePedal_var brakePedal(server->brakePedal());
brakePedal->press();
You simply can't obtain the notational convenience you want from attributes in this scenario with the C++ mapping (perhaps you could in the Python mapping). Because of this, and my general dislike for attributes, I'd just use a regular function to return the BrakePedal interface.

You don't understand something important about distributed objects: remote objects (whether implemented with CORBA, RMI, .NET remoting or web services) are not the same as local objects. Calls to CORBA objects are expensive, slow, and may fail due to network problems. The object.attribute.method() syntax would make it hard to see that two different remote calls are being executed on that one line, and make it hard to handle any failures that might occur.

Related

jsonrpc2 returning remote reference

I'm exploring JSON-RPC 2 for a web service. I have some experience with Java RMI and liked it very much. To make things easy I'm using the Zend Framework, so I think I'd like to use that library. There is, however, one thing I am missing: how do I make a procedure send back a reference to another object?
I get that this is not within the protocol, because it's about procedures, but it would still be a useful thing. With Java RMI I could choose whether objects were sent by value (serialized) or by reference (remote object proxy). So what is the best way to solve this? Are there any standards for this that most libraries use?
I spent a few hours on Google looking for this and can think of a solution (like returning a URL), but I would rather use a standard than design something new.
There is one other thing I would like your opinion on. I heard an architect rant about the protocol's feature of sending batches of calls. Are these considered nice or dirty? (He thinks they're ugly, but I can think of uses for them.)
Update
I think the nicest way is just to return a remote-ref object with a URL to the object. That way it's only a small wrapper and a little documentation. Still, I would like to know if there is a common way to do this.
SMD possibilities
There might be some way to specify the return type in my SMD. Does anyone have ideas on how to give a reference to another page in my SMD return type? Or does anyone know of a good explanation of the zend_json_smd class?
You can't return a reference of any kind via JSON-RPC.
There is no standard to do so (afaik) because RPC is stateless, and most developers prefer it that way. This simplicity is what makes JSON-RPC so desirable to client-side developers over SOAP (and other messes).
You could however adopt a convention in your return values that some JSON constructs should be treated as "cues" to manufacture remote "object" proxies. For example, you could create a modified JSON de-serialiser that turns:
{
    "__remote_object": {
        "class": "Some.Remote.Class",
        "remote_id": 54625143,
        "smd": "http://domain/path/to/further.smd",
        "init_with": { ... Initial state of object ... }
    }
}
Into a remote object proxy by:
creating a local object with a prototype named after class, initialised via init_with
downloading & processing the smd URL
creating a local proxy method (in the object proxy) for each procedure exposed via the API, such that they pass the remote_id to the server with each call (so that the server knows which remote object proxy maps to which server-side object.)
While this scheme would be workable, there are plenty of moving parts, and the code is much larger and more complicated on the client-side than for normal JSON-RPC.
JSON-RPC itself is not well standardised, so most extensions (even SMD) are merely conventions around method-names and payloads.

In GWT, why shouldn't a method return an interface?

In this video from Google IO 2009, the presenter very quickly says that signatures of methods should return concrete types instead of interfaces.
From what I heard in the video, this has something to do with the GWT Java-to-Javascript compiler.
What's the reason behind this choice ?
What does the interface in the method signature do to the compiler ?
What methods can return interfaces instead of concrete types, and which are better off returning concrete instances ?
This has to do with the gwt-compiler, as you correctly say. EDIT: However, as Daniel noted in a comment below, this does not apply to the gwt-compiler in general but only when using GWT-RPC.
If you declare List instead of ArrayList as the return type, the gwt-compiler will include the complete List-hierarchy (i.e. all types implementing List) in your compiled code. If you use ArrayList, the compiler will only need to include the ArrayList hierarchy (i.e. all types implementing ArrayList -- which usually is just ArrayList itself). Using an interface instead of a concrete class you will pay a penalty in terms of compile time and in the size of your generated code (and thus the amount of code each user has to download when running your app).
You were also asking for the reason: If you use the interface (instead of a concrete class) the compiler does not know at compile time which implementations of these interfaces are going to be used. Thus, it includes all possible implementations.
Regarding your last question: all methods CAN be declared to return an interface (that is what you meant, right?). However, the above penalty applies.
And by the way: As I understand it, this problem is not restricted to methods. It applies to all type declarations: variables, parameters. Whenever you use an interface to declare something, the compiler will include the complete hierarchy of sub-interfaces and implementing classes. (So obviously if you declare your own interface with only one or two implementing classes then you are not incurring a big penalty. That is how I use interfaces in GWT.)
In short: use concrete classes whenever possible.
(Small suggestion: it would help if you gave the time stamp when you refer to a video.)
This and other performance tips were presented at Google IO 2011 - High-performance GWT.
At about the 7 minute point the speaker addresses 'RPC Type Explosion':
For some reason I thought the GWT compiler would optimize it away again but it appears I was mistaken.

What functions to put inside a class

If I have a function (say messUp) that does not need to access any private variables of a class (say room), should I write the function inside the class, like room.messUp(), or outside of it, like messUp(room)? It seems to me that the second version reads better.
There's a tradeoff involved here. Using a member function lets you:
Override the implementation in derived classes, so that messing up a kitchen could involve trashing the cupboards even if no cupboards are available in a generic room.
Decide that you need to access private variables later on, without having to refactor all the code that uses the function.
Make the function part of an interface, so that a piece of code may require that its argument be mess-up-able.
Using an external function lets you:
Make that function generic, so that you may apply it to rooms, warehouses and oil rigs equally (if they provide the member functions required for messing up).
Keep the class signature small, so that creating mock versions for unit testing (or different implementations) becomes easier.
Change the class implementation without having to examine the code for that function.
There's no real way to have your cake and eat it too, so you have to make choices. A common OO decision is to make everything a method (unless clearly idiotic) and sacrifice the latter three points, but that doesn't mean you should do it in all situations.
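As a rough sketch of the two shapes being compared (the class and method names here are invented purely for illustration):
public class Room
{
    // Member-function form: can be overridden in derived classes, can later
    // reach private state, and can sit behind an interface if needed.
    public virtual void MessUp()
    {
        // trash the room...
    }
}

public static class RoomActions
{
    // External (static) form: keeps Room's public surface small and could be
    // rewritten generically for anything exposing the members it needs.
    public static void MessUp(Room room)
    {
        // same work, expressed from outside the class
    }
}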
Any behaviour of a class of objects should be written as an instance method.
So room.messUp() is the OO way to do this.
Whether messUp has to access any private members of the class or not, is irrelevant, the fact that it's a behaviour of the room, suggests that it's an instance method, as would be cleanUp or paint, etc...
Ignoring the language, I think my first question is whether messUp is related to any other functions. If you have a group of related functions, I would tend to stick them in a class.
If they don't access any class variables then you can make them static. This way, they can be called without needing to create an instance of the class.
Beyond that, I would look to the language. In some languages, every function must be a method of some class.
In the end, I don't think it makes a big difference. OOP is simply a way to help organize your application's data and logic. If you embrace it, then you would choose room.messUp() over messUp(room).
I base myself on "C++ Coding Standards: 101 Rules, Guidelines, and Best Practices" by Sutter and Alexandrescu, and also on Bob Martin's SOLID principles. I agree with them on this point, of course ;-).
If the message/function doesn't interact much with your class, you should make it a standard, ordinary function taking your class object as an argument.
You should not pollute your class with behaviours that are not intimately related to it.
This is to respect the Single Responsibility Principle: your class should remain simple, aiming at the most precise goal.
However, if you think your message/function is intimately related to your object guts, then you should include it as a member function of your class.

Do Extension Methods Hide Dependencies?

All,
Wanted to get a few thoughts on this. Lately I am becoming more and more of a subscriber of "purist" DI/IOC principles when designing/developing. Part of this (a big part) involves making sure there is little coupling between my classes, and that their dependencies are resolved via the constructor (there are certainly other ways of managing this, but you get the idea).
My basic premise is that extension methods violate the principles of DI/IOC.
I created the following extension method that I use to ensure that the strings inserted into database tables are truncated to the right size:
public static class StringExtensions
{
    public static string TruncateToSize(this string input, int maxLength)
    {
        int lengthToUse = maxLength;
        if (input.Length < maxLength)
        {
            lengthToUse = input.Length;
        }
        return input.Substring(0, lengthToUse);
    }
}
I can then call this on a string from within another class like so:
string myString = "myValue.TruncateThisPartPlease.";
myString.TruncateToSize(8);
A fair translation of this without using an extension method would be:
string myString = "myValue.TruncateThisPartPlease.";
StaticStringUtil.TruncateToSize(myString, 8);
Any class that uses either of the above examples could not be tested independently of the class that contains the TruncateToSize method (TypeMock aside). If I were not using an extension method, and I did not want to create a static dependency, it would look more like:
string myString = "myValue.TruncateThisPartPlease.";
_stringUtil.TruncateToSize(myString, 8);
In the last example, the _stringUtil dependency would be resolved via the constructor and the class could be tested with no dependency on the actual TruncateToSize method's class (it could be easily mocked).
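A minimal sketch of that last variant, assuming a hypothetical IStringUtil interface and consumer class (these names are invented for illustration; only the injection pattern matters):
public interface IStringUtil
{
    string TruncateToSize(string input, int maxLength);
}

public class RecordWriter
{
    private readonly IStringUtil _stringUtil;

    // The dependency arrives through the constructor, so a test can hand in a mock.
    public RecordWriter(IStringUtil stringUtil)
    {
        _stringUtil = stringUtil;
    }

    public string PrepareValue(string value)
    {
        return _stringUtil.TruncateToSize(value, 8);
    }
}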
From my perspective, the first two examples rely on static dependencies (one explicit, one hidden), while the third inverts the dependency and provides reduced coupling and better testability.
So does the use of extension methods conflict with DI/IOC principles? If you're a subscriber of IOC methodology, do you avoid using extension methods?
I think it's fine - because it's not like TruncateToSize is a realistically replaceable component. It's a method which will only ever need to do a single thing.
You don't need to be able to mock out everything - just services which either disrupt unit testing (file access etc) or ones which you want to test in terms of genuine dependencies. If you were using it to perform authentication or something like that, it would be a very different matter... but just doing a straight string operation which has absolutely no configurability, different implementation options etc - there's no point in viewing that as a dependency in the normal sense.
To put it another way: if TruncateToSize were a genuine member of String, would you even think twice about using it? Do you try to mock out integer arithmetic as well, introducing IInt32Adder etc? Of course not. This is just the same, it's only that you happen to be supplying the implementation. Unit test the heck out of TruncateToSize and don't worry about it.
I see where you are coming from; however, if you are trying to mock out the functionality of an extension method, I believe you are using them incorrectly. Extension methods should be used to perform a task that would simply be inconvenient syntactically without them. Your TruncateToSize is a good example.
Testing TruncateToSize would not involve mocking it out; it would simply involve creating a few strings and testing that the method actually returns the proper value.
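For instance, a test along these lines (using NUnit here purely for illustration, with the TruncateToSize extension from the question in scope):
using NUnit.Framework;

[TestFixture]
public class StringExtensionsTests
{
    [Test]
    public void TruncateToSize_ShortensLongStrings()
    {
        // "myValue." is exactly 8 characters
        Assert.AreEqual("myValue.", "myValue.TruncateThisPartPlease.".TruncateToSize(8));
    }

    [Test]
    public void TruncateToSize_LeavesShortStringsAlone()
    {
        Assert.AreEqual("abc", "abc".TruncateToSize(8));
    }
}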
On the other hand, if you have code in your data layer contained in extension methods that is accessing your data store, then yes, you have a problem and testing is going to become an issue.
I typically only use extension methods in order to provide syntactic sugar for small, simple operations.
Extension methods, partial classes and dynamic objects: I really like them; however, you must tread carefully, for there be monsters here.
I would take a look at dynamic languages and see how they cope with these sorts of problems on a day-to-day basis; it's really enlightening, especially since they have nothing to stop them from doing stupid things apart from good design and discipline. Everything is dynamic at run time; the only thing to stop them is the computer throwing a major run-time error. "Duck typing" is the maddest thing I have ever seen. Good code comes down to good program design, respect for others on your team, and trust that every member, although they have the ability to do some wacky things, chooses not to, because good design leads to better results.
As for your test scenario with mock objects/IoC/DI: would you really put some heavy-duty work in an extension method, or just some simple static stuff that operates in a functional way? I tend to use them as you would in a functional programming style: input goes in, results come out, with no magic in the middle, just straight-up framework classes that you know the guys at MS have designed and tested :P and that you can rely on.
If you are doing some heavy-lifting stuff using extension methods, I would look at your program design again: check your CRC designs, class models, use cases, DFDs, action diagrams or whatever you like to use, and figure out where in this design you planned to put this stuff in an extension method instead of a proper class.
At the end of the day, you can only test against your system design, not code outside of your scope. If you're going to use extension classes, my advice would be to look at object composition models instead and use inheritance only when there is a very good reason.
Object composition always wins out with me, as it produces solid code. You can plug components in, take them out and do what you like with them. Mind you, this all depends on whether you use interfaces as part of your design. Also, if you use composition classes, the class hierarchy tree gets flattened into discrete classes and there are fewer places where your extension method will be picked up through inherited classes.
If you must use a class that acts upon another class, as is the case with extension methods, look at the visitor pattern first and decide if it's a better route.
It's a pain because they are hard to mock. I usually use one of these strategies:
Yep, scrap the extension; it's a PITA to mock out.
Use the extension and just test that it did the right thing, i.e. pass data into the truncate call and check that it got truncated.
If it's not some trivial thing, and I HAVE to mock it, I'll make my extension class have a setter for the service it uses, and set that in the test code.
i.e.
static class TruncateExtensions
{
    // The service is swappable: test code can set it, otherwise a default is created lazily.
    public static ITruncateService Service { private get; set; }

    public static string TruncateToSize(this string s, int size)
    {
        return (Service ?? (Service = new MyDefaultTranslationServiceImpl())).TruncateToSize(s, size);
    }
}
This is a bit scary because someone might set the service when they shouldn't, but I'm a little cavalier sometimes, and if it was really important, I could do something clever with #if TEST flags, or the ServiceLocator pattern to avoid the setter being used in production.

Encapsulation in the age of frameworks

At my old C++ job, we always took great care in encapsulating member variables, and only exposing them as properties when absolutely necessary. We'd have really specific constructors that made sure you fully constructed the object before using it.
These days, with ORM frameworks, dependency-injection, serialization, etc., it seems like you're better off just relying on the default constructor and exposing everything about your class in properties, so that you can inject things, or build and populate objects more dynamically.
In C#, it's been taken one step further with Object initializers, which give you the ability to basically define your own constructor. (I know object initializers are not really custom constructors, but I hope you get my point.)
Are there any general concerns with this direction? It seems like encapsulation is starting to become less important in favor of convenience.
EDIT: I know you can still carefully encapsulate members, but I just feel like when you're trying to crank out some classes, you either have to sit and carefully think about how to encapsulate each member, or just expose it as a property and worry about how it is initialized later. It just seems like the easiest approach these days is to expose things as properties and not be so careful. Maybe I'm just flat wrong, but that's just been my experience, especially with the new C# language features.
I disagree with your conclusion. There are many good ways of encapsulating in C# with all the above-mentioned technologies while still maintaining good software coding practices. I would also say that it depends on whose technology demo you're looking at, but in the end it comes down to reducing the state space of your objects so that you can make sure they hold their invariants at all times.
Take object-relational frameworks: most of them allow you to specify how they are going to hydrate the entities; NHibernate, for example, allows you to say access="property" or access="field.camelcase" and similar. This allows you to encapsulate your properties.
Dependency injection works on the other types you have, mostly those which are not entities, even though you can combine AOP+ORM+IOC in some very nice ways to improve the state of these things. IoC is often used from layers above your domain entities if you're building a data-driven application, which I guess you are, since you're talking about ORMs.
They ("they" being application and domain services and other intrinsic classes to the program) expose their dependencies but in fact can be encapsulated and tested in even better isolation than previously since the paradigms of design-by-contract/design-by-interface which you often use when mocking dependencies in mock-based testing (in conjunction with IoC), will move you towards class-as-component "semantics". I mean: every class, when built using the above, will be better encapsulated.
Updated for urig: This holds true both for exposing concrete dependencies and for exposing interfaces. First, about interfaces: what I was hinting at above was that services and other application classes which have dependencies can, with OOP, depend on contracts/interfaces rather than specific implementations. In C/C++ and older languages there was no interface construct, and abstract classes can only go so far. Interfaces allow you to tie different runtime instances to the same interface without having to worry about leaking internal state, which is what you're trying to get away from when abstracting and encapsulating. With abstract classes you can still provide a class implementation, just that you can't instantiate it, but inheritors still need to know about the invariants in your implementation, and that can mess up state.
Secondly, about concrete classes as properties: you have to be wary about what types of types ;) you expose as properties. Say you have a List in your instance; then don't expose IList as the property, because that will probably leak and you can't guarantee that consumers of the interface won't add or remove things which you depend on. Instead expose something like IEnumerable and return a copy of the List, or even better, do it like this:
public IEnumerable MyCollection { get { return _List.Enum(); } }
That way you can be 100% certain of getting both the performance and the encapsulation: no one can add to or remove from that IEnumerable, and you still don't have to perform a costly array copy. The corresponding helper method:
static class Ext
{
    public static IEnumerable<T> Enum<T>(this IEnumerable<T> inner)
    {
        foreach (var item in inner) yield return item;
    }
}
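For illustration, a class using this helper might look as follows (the Order class and its members are invented; the point is that the backing list itself is never handed out):
using System.Collections.Generic;

public class Order
{
    private readonly List<string> _lines = new List<string>();

    // Callers get a lazily streamed sequence produced by Enum(); casting it back
    // to List<string> fails, so the backing list cannot be mutated from outside.
    public IEnumerable<string> Lines
    {
        get { return _lines.Enum(); }
    }

    public void AddLine(string line)
    {
        _lines.Add(line);
    }
}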
So while you can't get 100% encapsulation in, say, creating overloaded equals operators/methods, you can get close with your public interfaces.
You can also use the new features of .Net 4.0 built on Spec# to verify the contracts I talked about above.
Serialization will always be there and has been for a long time. Previously, before the internet era, it was used for saving your object graph to disk for later retrieval; now it's used in web services, in copy semantics and when passing data to e.g. a browser. This doesn't necessarily break encapsulation if you put a few [NonSerialized] attributes or the equivalents on the correct fields.
Object initializers aren't the same as constructors; they are just a way of collapsing a few lines of code. Values/instances in the {} will not be assigned until all of your constructors have run, so in principle it's just the same as not using object initializers.
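A small sketch of what an initializer effectively expands to (the Person class here is made up for illustration):
public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class InitializerDemo
{
    public static void Main()
    {
        // An object initializer...
        var ada = new Person { Name = "Ada", Age = 36 };

        // ...behaves like running the constructor first and then
        // assigning the listed members:
        var also = new Person();
        also.Name = "Ada";
        also.Age = 36;

        System.Console.WriteLine(ada.Name == also.Name); // prints True
    }
}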
I guess, what you have to watch out for is deviating from the good principles you've learnt from your previous job and make sure you are keeping your domain objects filled with business logic encapsulated behind good interfaces and ditto for your service-layer.
Private members are still incredibly important. Controlling access to internal object data is always good, and shouldn't be ignored.
Many times private methods I've found to be overkill. Most of the time, if the work you're doing is important enough to break out, you can refactor it in such a way that either a) the private method is trivial, or b) is an integral part of other functions.
In addition, with unit testing, having many private methods makes things very hard to unit test. There are ways around that (making test objects friends, etc.), but they add difficulties.
I wouldn't discount private methods entirely though. Any time there's important, internal algorithms that really make no sense outside of the class there's no reason to expose those methods.
I think that encapsulation is still important; it helps more in libraries than anything else, IMHO. You can create a library that does X without everyone needing to know how X was created, and you may even want to go further and obfuscate the way you create X. The way I learned encapsulation, I also remember that you should always define your variables as private to protect them from a data attack, i.e. to protect against a hacker breaking your code and accessing variables that they are not supposed to use.