PIMPL Patterns in "high-level" languages - Possible/Applicable?

In the C++ world, there are two well-known strategies for maintaining binary compatibility with a library:
1. Interfaces: all public classes are "interface" classes (only pure virtual methods, no members), which are implemented by private subclasses in the library.
2. PIMPL pattern: all public classes hold a single member which is a pointer to a forward-declared class, whose definition is private to the library.
Both of these achieve binary stability, but #1 comes with some major disadvantages. The primary one, I believe, is that in the library, methods that accept instances of the public interface classes must almost always immediately force-downcast them to the private implementation classes. The use of interfaces incorrectly signals to clients that they are free to supply their own implementations of these interfaces; if they ever do, those implementations will immediately fail on one of these force-downcasts. Unless polymorphism is the goal, the use of interfaces is arguably the wrong design.
Now let's consider "high-level" languages like Java, Kotlin, C# and Swift (and maybe even Typescript and Ruby). We can certainly adopt strategy #1. However this strategy suffers from the same concerns mentioned above.
But what about the PIMPL pattern? There's no such thing as "forward declaration" in these languages, and we can't even separate the class definition and implementation into different files; the compiler does this for us when it creates the package. So does an analogous pattern exist in these languages that "hides" the private details, in the sense that it lets us freely modify those details without breaking binary compatibility?
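For instance, here is a minimal sketch of what I imagine such a pattern might look like in Java (all names hypothetical): a public class that does nothing but delegate to a package-private implementation class.

// Widget.java -- the public surface, which never changes
public final class Widget {
    private final WidgetImpl impl = new WidgetImpl();

    public void render() {
        impl.render(); // delegate everything to the hidden class
    }
}

// Package-private, so clients never see it; the library is free
// to reshape it between releases.
final class WidgetImpl {
    void render() { /* ... */ }
}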
Which leads to the next question...
Is it even necessary in the first place to "hide" class innards to achieve binary stability in those languages? This is necessary in C++ because of its on-stack value semantics, which make compiled client code sensitive to the memory size of the library's classes. But to my knowledge, class instances in the "high-level" languages aren't moved around on the call stack; they work more like pointers/references would in C++, which may render the concern moot. If that's true, we can simply write classes "naively" and be sure that binary compatibility remains stable as long as we don't mess with public methods/members. We could, however, do whatever we wish with private members, even if it entails changing the memory size of the public classes, and it wouldn't force client code to be recompiled.
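For example (a sketch of my understanding, not a claim about any particular VM): Java client bytecode refers to members by name and signature rather than by offset or object size, so a release that only reshuffles private internals should still link.

// Version 2 of a class: the private internals were replaced wholesale
// (v1 had, say, "private long id;"), but the public surface is
// untouched, so clients compiled against v1 keep working.
public class Session {
    private byte[] token = new byte[16]; // new private layout

    public String name() {                  // unchanged public method
        return "session-" + token.length;   // body is free to change too
    }
}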
So, in summary: are PIMPL patterns possible in these languages, or does the concept not even apply because there's no problem of private details "leaking" into the binary interface to begin with?

Related

Difference between an instance of a class and a class representing an instance already?

I use Java as an example, but this is more of a general OOP design question.
Let's take the IOExceptions in Java as an example. Why is there a class FileNotFoundException, for example? Shouldn't that be an instance of an IOException where the cause is FileNotFound? I would say FileNotFoundException is an instance of IOException. Where does this end? FileNotFoundButOnlyCheckedOnceException, FileNotFoundNoMatterHowHardITriedException..?
I have also seen code in projects I worked on where classes such as FirstLineReader and LastLineReader existed. To me, such classes actually represent instances, but I see this design in many places. Look at the Spring Framework source code, for example: it comes with hundreds of such classes, and every time I see one, I see an instance instead of a blueprint. Aren't classes meant to be blueprints?
What I am trying to ask is, how does one make the decision between these 2 very simple options:
Option 1:
enum DogBreed {
    Bulldog, Poodle;
}

class Dog {
    DogBreed dogBreed;

    public Dog(DogBreed dogBreed) {
        this.dogBreed = dogBreed;
    }
}
Option 2:
class Dog {}
class Bulldog extends Dog {}
class Poodle extends Dog {}
The first option requires the caller to configure the instance it is creating. In the second option, the class represents the instance itself already (as I see it, which might be totally wrong..).
If you agree that these classes represent instances instead of blueprints, would you say it is good practice to create classes that represent instances, or is the way I am looking at this totally wrong and my statement "classes representing instances" just a load of nonsense?
Edited
First of all: we all know the definition of inheritance, and we can find a lot of examples on SO and across the internet. But I think we should look at it in more depth and a little more scientifically.
Note 0:
Clarification about Inheritance and Instance terminology.
First, let me use Development Scope for the development life cycle, when we are modeling and programming our system, and Runtime Scope for when our system is running.
We have Classes, which we model and develop, in Development Scope, and Objects in Runtime Scope. There are no Objects in Development Scope.
And in object orientation, the definition of an Instance is: an Object created from a Class.
On the other hand, when we are talking about classes and objects, we should clarify our viewpoint: Development Scope or Runtime Scope.
So, with this introduction, I want to clarify Inheritance:
Inheritance is a relationship between Classes, NOT Objects.
Inheritance can exist in Development Scope, not in Runtime Scope. There is no Inheritance in Runtime Scope.
After running our project, there is no relationship between parent and child (if the only relationship between the child class and the parent class is Inheritance). So the question is: what are super.invokeMethod1() or super.attribute1? They are not a relationship between child and parent. All attributes and methods of a parent are transmitted to the child, and super is just a notation to access the parts that were transmitted from the parent.
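For example (a minimal illustration of that notation):

class Parent {
    void greet() { System.out.println("parent"); }
}

class Child extends Parent {
    @Override
    void greet() {
        super.greet(); // just accesses the part transmitted from Parent
        System.out.println("child");
    }
}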
Also, there are no Objects in Development Scope, so there are no Instances in Development Scope. There are only Is-A and Has-A relationships.
Therefore, when we said:
I would say FileNotFoundException is an instance of IOException
We should clarify which Scope we mean (Development or Runtime).
For example, if FileNotFoundException is an instance of IOException, then what is the relationship between a specific FileNotFoundException exception at runtime (the Object) and the FileNotFoundException class? Is it an instance of an instance?
Note 1:
Why do we use Inheritance? The goal of inheritance is to extend the parent class's functionality (based on the same type).
This extension can happen by adding new attributes or new methods,
or by overriding existing methods.
In addition, by extending a parent class, we achieve reusability too.
We cannot restrict the parent class's functionality (Liskov Substitution Principle).
We should be able to substitute the child for the parent in the system (Liskov Substitution Principle).
Etc.
Note 2:
The Width and Depth of Inheritance Hierarchies
The Width and Depth of Inheritance can be related to many factors:
The project: the complexity of the project (type complexity), its architecture and design, the size of the project, the number of classes, and so on.
The team: the expertise of the team in controlling the complexity of the project.
And so on.
However, we have some heuristics about it. (Object-Oriented Design Heuristics, Arthur J. Riel)
In theory, inheritance hierarchies should be deep—the deeper, the better.
In practice, inheritance hierarchies should be no deeper than
an average person can keep in his or her short-term memory. A popular
value for this depth is six.
Note that these are heuristics, based on the capacity of short-term memory (about 7). The expertise of the team may affect this number, and the same limit is applied to many other hierarchies, such as organizational charts.
Note 3:
When are we using Wrong Inheritance?
Based on:
Note 1: the goal of Inheritance (extending the parent class's functionality)
Note 2: the width and depth of Inheritance
we are using wrong inheritance under these conditions:
We have some classes in an inheritance hierarchy that do not extend the parent class's functionality. The extension should be reasonable and significant enough to justify a new class. "Reasonable" means from an observer's point of view; the observer can be the Project Architect or Designer (or other architects and designers).
We have a lot of classes in the inheritance hierarchy. This is called Over-Specialization. Some reasons that may cause it:
Maybe we did not consider Note 1 (extending the parent's functionality).
Maybe our modularization (packaging) is not correct, and we put many system use cases in one package; we should do a Design Refactoring.
There are other reasons, but they are not directly related to this answer.
Note 4:
What should we do when we are using Wrong Inheritance?
Solution 1: Perform a Design Refactoring to check the value of each class in terms of extending the parent's functionality. In this refactoring, many of the system's classes may be deleted.
Solution 2: Perform a Design Refactoring to fix the modularization. In this refactoring, some classes may move from our package to other packages.
Solution 3: Use Composition over Inheritance.
We can use this technique for many reasons; a dynamic hierarchy is one of the most popular reasons to prefer Composition over Inheritance.
see Tim Boudreau (of Sun) notes here:
Object hierarchies don't scale
Solution 4: Use instances over Subclasses.
This question is about this technique; let me name it instances over Subclasses.
When we can use it:
(Tip 1): Consider Note 1: when we do not really extend the parent class's functionality, or the extensions are not reasonable and significant enough.
(Tip 2): Consider Note 2: when we have a lot of subclasses (semi-identical or identical classes) that extend the parent class only a little, and we can control this extension without inheritance. Note that this is not easy to establish; we should show that it does not violate other object-oriented principles, like the Open-Closed Principle.
What should we do?
Martin Fowler recommends (Book 1, page 232, and Book 2, page 251):
Replace Subclass with Fields: change the methods to superclass fields and eliminate the subclasses.
We can use other techniques like enum as the question mentioned.
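A minimal before/after sketch of that refactoring, using the question's Dog example (names hypothetical):

// Before: the subclasses exist only to vary constant data.
abstract class Dog {
    abstract String breedName();
}
class Bulldog extends Dog {
    String breedName() { return "Bulldog"; }
}
class Poodle extends Dog {
    String breedName() { return "Poodle"; }
}

// After: the constant methods become a superclass field and the
// subclasses are eliminated.
class Dog {
    private final String breedName;

    private Dog(String breedName) { this.breedName = breedName; }

    static Dog bulldog() { return new Dog("Bulldog"); }
    static Dog poodle() { return new Dog("Poodle"); }

    String breedName() { return breedName; }
}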
First, by including the exceptions question along with a general system design issue, you're really asking two different questions.
Exceptions are just complicated values. Their behaviors are trivial: provide the message, provide the cause, etc. And they're naturally hierarchical. There's Throwable at the top, and other exceptions repeatedly specialize it. The hierarchy simplifies exception handling by providing a natural filter mechanism: when you say catch (IOException..., you know you'll get everything bad that happened regarding i/o. Can't get much clearer than that. Testing, which can be ugly for big object hierarchies, is no problem for exceptions: There's little or nothing to test in a value.
It follows that if you are designing similar complex values with trivial behaviors, a tall inheritance hierarchy is a reasonable choice: Different kinds of tree or graph nodes constitute a good example.
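For instance, a minimal illustration of that filtering:

import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;

class CatchDemo {
    static void read(String path) {
        try (FileReader reader = new FileReader(path)) {
            // ... read from reader ...
        } catch (FileNotFoundException e) {
            // handle just this specific failure
        } catch (IOException e) {
            // everything (else) bad that happened regarding i/o
        }
    }
}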
Your second example seems to be about objects with more complex behaviors. These have two aspects:
Behaviors need to be tested.
Objects with complex behaviors often change their relationships with each other as systems evolve.
These are the reasons for the often heard mantra "composition over inheritance." It's been well-understood since the mid-90s that big compositions of small objects are generally easier to test, maintain, and change than big inheritance hierarchies of necessarily big objects.
Having said that, the choices you've offered for implementation are missing the point. The question you need to answer is "What are the behaviors of dogs I'm interested in?" Then describe these with an interface, and program to the interface.
interface Dog {
    Breed getBreed();
    Set<Dog> getFavoritePlaymates(DayOfWeek dayOfWeek);
    void emitBarkingSound(double volume);
    Food getFavoriteFood(Instant asOfTime);
}
When you understand the behaviors, implementation decisions become much clearer.
Then a rule of thumb for implementation is to put simple, common behaviors in an abstract base class:
abstract class AbstractDog implements Dog {
    private final Breed breed;

    protected AbstractDog(Breed breed) { this.breed = breed; }

    @Override
    public Breed getBreed() { return breed; }
}
You should be able to test such base classes by creating minimal concrete versions that just throw UnsupportedOperationException for the unimplemented methods and verify the implemented ones. A need for any fancier kind of setup is a code smell: you've put too much into the base.
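A sketch of what such a minimal concrete version might look like (hypothetical, for tests only; imports elided as in the sketches above):

class TestDog extends AbstractDog {
    TestDog(Breed breed) { super(breed); }

    @Override
    public Set<Dog> getFavoritePlaymates(DayOfWeek dayOfWeek) {
        throw new UnsupportedOperationException();
    }

    @Override
    public void emitBarkingSound(double volume) {
        throw new UnsupportedOperationException();
    }

    @Override
    public Food getFavoriteFood(Instant asOfTime) {
        throw new UnsupportedOperationException();
    }
}

// e.g. assertEquals(Breed.POODLE, new TestDog(Breed.POODLE).getBreed());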
Implementation hierarchies like this can be helpful for reducing boilerplate, but more than 2 deep is a code smell. If you find yourself needing 3 or more levels, it's very likely you can and should wrap chunks of common behavior from the low-level classes in helper classes that will be easier to test and available for composition throughout the system. For example, rather than offering a protected void emitSound(Mp3Stream sound); method in the base class for inheritors to use, it would be far preferable to create a new class SoundEmitter {} and add a final member with this type in Dog.
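A sketch of that composition alternative (SoundEmitter as named above; Mp3Stream and the wiring are hypothetical):

// Small, separately testable, and reusable outside the Dog hierarchy.
class SoundEmitter {
    void emit(Mp3Stream sound, double volume) {
        // ... actually play the sound ...
    }
}

// Inside the dog implementation, hold it as a final member and delegate,
// instead of inheriting a protected helper:
//     private final SoundEmitter emitter = new SoundEmitter();
//     public void emitBarkingSound(double volume) { emitter.emit(bark, volume); }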
Then make concrete classes by filling in the rest of the behavior:
class Poodle extends AbstractDog {
    Poodle() { super(Breed.POODLE); }

    @Override
    public Set<Dog> getFavoritePlaymates(DayOfWeek dayOfWeek) { ... }

    @Override
    public Food getFavoriteFood(Instant asOfTime) { ... }
}
Observe: The need for a behavior - that the dog must be able to return its breed - and our decision to implement the "get breed" behavior in an abstract base class resulted in a stored enum value.
We ended up adopting something closer to your Option 1, but this wasn't an a priori choice. It flowed from thinking about behaviors and the cleanest way to implement them.
The following comments apply to the case where subclasses do not actually extend the functionality of their superclass.
From Oracle doc:
Signals that an I/O exception of some sort has occurred. This class is the general class of exceptions produced by failed or interrupted I/O operations.
It says IOException is a general exception. If we have a cause enum:
enum Cause {
    FileNotFound, CharacterCoding, ...;
}
We will not be able to throw an IOException if the cause in our custom code is not included in the enum. In other words, it makes IOException more specific instead of general.
Assuming we are not programming a library, and the functionality of class Dog below is specific in our business requirement:
enum DogBreed {
    Bulldog, Poodle;
}

class Dog {
    DogBreed dogBreed;

    public Dog(DogBreed dogBreed) {
        this.dogBreed = dogBreed;
    }
}
Personally I think it is good to use an enum here, because it simplifies the class structure (fewer classes).
The first code you cite involves exceptions.
Inheritance is a natural fit for exception types because the language-provided construct to differentiate exceptions of interest in the try-catch statement is through use of the type system. This means we can easily choose to handle just a more specific type (FileNotFound), or the more general type (IOException).
Testing a field's value to see whether to handle an exception means stepping out of the standard language construct and writing some boilerplate guard code (e.g. test the value(s) and rethrow if not interested).
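A sketch of that guard code, assuming a hypothetical flat exception type that carries its cause as a field:

class FlatIoException extends Exception {
    enum Cause { FILE_NOT_FOUND, CHARACTER_CODING }

    final Cause cause;

    FlatIoException(Cause cause) { this.cause = cause; }
}

class GuardDemo {
    static void handleMissingFileOnly() throws FlatIoException {
        try {
            doIo();
        } catch (FlatIoException e) {
            if (e.cause != FlatIoException.Cause.FILE_NOT_FOUND) {
                throw e; // boilerplate: rethrow what we don't handle
            }
            // handle just the file-not-found case
        }
    }

    static void doIo() throws FlatIoException { /* ... */ }
}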
(Further, exceptions need to be extensible across DLL (compilation) boundaries. When we use enums, we may have problems extending the design without modifying the source that introduces the enum (and the code that consumes it).)
When it comes to things other than exceptions, today's wisdom encourages composition over inheritance as this tends to result in less complex and more maintainable designs.
Your Option 1 is more of a composition example, whereas your Option 2 is clearly an inheritance example.
If you agree that these classes represent instances instead of blueprints, would you say it is good practice to create classes that represent instances, or is the way I am looking at this totally wrong and my statement "classes representing instances" just a load of nonsense?
I agree with you, and would not say this represents good practice. These classes as shown are not particularly customizable and don't represent added value.
A class that offers no overrides, no new state, and no new methods is not particularly differentiated from its base. So there is little merit in declaring such a class, unless we seek to do instance-of tests on it (as the exception-handling language construct does under the covers). We can't really tell from this example, which is contrived for the purposes of asking the question, whether there is any added value in these subclasses, but it doesn't appear so.
To be clear, though, there are lots of worse examples of inheritance, such as when a (pre)occupation like Teacher or Student inherits from Person. This means that a Teacher cannot be a Student at the same time unless we engage in adding even more classes, e.g. TeacherStudent, perhaps using multiple inheritance.
We might call this class explosion, as sometimes we end up needing a matrix of classes because of inappropriate is-a relationships. (Add one new class, and you need a whole new row or column of exploded classes.)
Working with a design that suffers class explosion actually creates more work for clients consuming these abstractions, so it is a lose-lose situation.
The issue here is our trust in natural language: when we say someone is-a Student, this is not, from a logical perspective, the same permanent "is-a"/instance-of relationship (of subclassing), but rather a potentially temporary role being played by the Person: one of many possible roles a Person might play concurrently, at that. In these cases composition is clearly superior to inheritance.
In your scenario, however, the Bulldog is unlikely ever to be anything other than a Bulldog, so the permanent is-a relationship of subclassing holds, and while it adds little value, at least this hierarchy does not risk class explosion.
Note that the main drawback of the enum approach is that the enum may not be extensible, depending on the language you're using. If you need arbitrary extensibility (e.g. by others and without altering your code), you have the choice of using something extensible but more weakly typed, like strings (typos aren't caught, duplicates aren't caught, etc.), or you can use inheritance, as it offers decent extensibility with stronger typing. Exceptions need this kind of extensibility by others, without modification and recompilation of the originals (and of other consumers), since they are used across DLL boundaries.
If you control the enum and can recompile the code as a unit as needed to handle new dog types, then you don't need this extensibility.
Option 1 has to list all known causes at declaration time.
Option 2 can be extended by creating new classes, without touching the original declaration.
This is important when the base/original declaration is done by the framework. If there were 100 known, fixed reasons for I/O problems, an enum or something similar could make sense, but if new ways to communicate can crop up that should also count as I/O exceptions, then a class hierarchy makes more sense. Any class library that you add to your application can extend it with more I/O exceptions without touching the original declaration.
This is basically the O in SOLID: open for extension, closed for modification.
But this is also why, as an example, DayOfWeek-type enumerations exist in many frameworks. It is extremely unlikely that the western world suddenly wakes up one day and decides to go for 14 unique days, or 8, or 6. So having classes for those is probably overkill. These things are more set in stone (knock on wood).
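By contrast, the extensibility case is easy to see with exceptions; a sketch (the subclass name is hypothetical):

import java.io.IOException;

// A client library introduces a brand-new kind of I/O failure without
// touching java.io.IOException's declaration...
class CloudStorageTimeoutException extends IOException {
    CloudStorageTimeoutException(String message) { super(message); }
}

// ...and every existing catch (IOException e) block already handles it.
// A fixed enum of causes could not be extended from the outside like this.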
The two options you present do not actually express what I think you're trying to get at. What you're trying to differentiate between is composition and inheritance.
Composition works like this:
class Poodle {
    Legs legs;
    Tail tail;
}

class Bulldog {
    Legs legs;
    Tail tail;
}
Both have a common set of characteristics that we can aggregate to 'compose' a class. We can specialize components where we need to, but can just expect that "Legs" mostly work like other legs.
Java has chosen inheritance instead of composition for IOException and FileNotFoundException.
That is, a FileNotFoundException is a kind of (i.e. extends) IOException and permits handling based on the identity of the superclass only (though you can specify special handling if you choose to).
The arguments for choosing composition over inheritance are well-rehearsed by others and can be easily found by searching for "composition vs. inheritance."

What does the 'I' in IObservable<T> or IObserver<T> mean?

I'm trying to learn/understand Rx, specifically RxJS, and keep seeing references to IObservable, IObserver, etc.
Can anyone tell me what the leading I means and/or where it comes from?
From my searching, it looks like the <T> is for the type. If this is wrong or naive, I'd appreciate some clarification on this as well.
Thanks!
In ye olden days of MFC for C++, Microsoft had Hungarian notation down to a very irritating artform, where all concrete classes were prefixed with C and their COM interfaces with I. This does help avoid the conflict where a COM interface and class might share the same name and so muddy your project.
Part of this notation carried over into .NET, except that only interfaces kept the I prefix, while classes and other types dropped their Cs. This does make non-interface-heavy code easier to look at, but can cause ambiguity if you begin a class name with a 2-letter acronym beginning with I (as two-letter acronyms must be completely capitalised according to the .NET style guidelines), though this is rare.
(I note that generic type name placeholders are prefixed with T too, e.g. TKey and TValue in Dictionary).
An example of why this is necessary is when dealing with collections in .NET: if you're building a reusable library and don't want to expose implementation details (e.g. whether you use List<T> or T[] as an underlying collection field type), you can use IList<T> or IReadOnlyList<T>, which are interfaces. If the interface were simply called List<T> it would conflict with the actual type List<T>, and ReadOnlyList<T> (an interface) might get confused with ReadOnlyCollection<T> (a class).
You might argue that this wouldn't be a problem if classes and interfaces had different namespaces. C does this: struct types and scalars exist in different namespaces, which unfortunately means that every time a struct type name is used, it must be prefixed with struct (e.g. a declaration: struct Foo foo). People work around this by using typedef with anonymous structs, but I feel the end result is messy (and the Linux kernel coding guidelines prohibit this too).
In Java, however, interfaces are not prefixed with I but instead have class-like names. Whether this is "correct" or "better" is entirely up for debate. C++ does not have interface types, just pure-abstract classes and multiple-inheritance, so the I prefix isn't typically seen at all outside of COM.

In GWT, why shouldn't a method return an interface?

In this video from Google IO 2009, the presenter very quickly says that signatures of methods should return concrete types instead of interfaces.
From what I heard in the video, this has something to do with the GWT Java-to-Javascript compiler.
What's the reason behind this choice?
What does the interface in the method signature do to the compiler?
What methods can return interfaces instead of concrete types, and which are better off returning concrete instances ?
This has to do with the gwt-compiler, as you say correctly. EDIT: However, as Daniel noted in a comment below, this does not apply to the gwt-compiler in general but only when using GWT-RPC.
If you declare List instead of ArrayList as the return type, the gwt-compiler will include the complete List hierarchy (i.e. all types implementing List) in your compiled code. If you use ArrayList, the compiler will only need to include the ArrayList hierarchy (i.e. all types extending ArrayList, which is usually just ArrayList itself). By using an interface instead of a concrete class, you pay a penalty in compile time and in the size of the generated code (and thus in the amount of code each user has to download when running your app).
You were also asking for the reason: If you use the interface (instead of a concrete class) the compiler does not know at compile time which implementations of these interfaces are going to be used. Thus, it includes all possible implementations.
Regarding your last question: all methods CAN be declared to return an interface (that is what you meant, right?). However, the above penalty applies.
And by the way: As I understand it, this problem is not restricted to methods. It applies to all type declarations: variables, parameters. Whenever you use an interface to declare something, the compiler will include the complete hierarchy of sub-interfaces and implementing classes. (So obviously if you declare your own interface with only one or two implementing classes then you are not incurring a big penalty. That is how I use interfaces in GWT.)
In short: use concrete classes whenever possible.
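A sketch of the difference for a GWT-RPC service (the service itself is hypothetical):

import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;
import java.util.ArrayList;

@RemoteServiceRelativePath("names")
interface NameService extends RemoteService {
    // Declared as List<String>, the compiler would have to generate
    // serializers for every serializable List implementation it knows:
    // List<String> getNames();

    // Declared concretely, only ArrayList's hierarchy is included:
    ArrayList<String> getNames();
}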
(Small suggestion: it would help if you gave the time stamp when you refer to a video.)
This and other performance tips were presented at Google IO 2011 - High-performance GWT.
At about the 7 minute point, the speaker addresses 'RPC Type Explosion':
For some reason I thought the GWT compiler would optimize it away again but it appears I was mistaken.

Interface in a dynamic language?

An interface (or an abstract class with all its methods abstract) is a powerful weapon in a statically-typed language such as C# or Java. It allows different derived types to be used in a uniform way. Design patterns encourage us to use interfaces as much as possible.
However, in a dynamically-typed language, objects are not checked for their type at compile time. They don't have to implement an interface to be used in a specific way. You just need to make sure they have certain methods (attributes) defined. This makes interfaces unnecessary, or at least not as useful as they are in a static language.
Does a typical dynamic language (e.g. Ruby) have interfaces? If it does, then what are the benefits of having them? If it doesn't, are we losing many of the beautiful design patterns that require an interface?
Thanks.
I guess there is no single answer for all dynamic languages. In Python, for instance, there are no interfaces, but there is multiple inheritance. Using interface-like classes is still useful:
Interface-like classes can provide default implementations of methods;
Duck-typing is good, but to an extent; sometimes it is useful to be able to write isinstance(x, SomeType), especially when SomeType contains many methods.
Interfaces in dynamic languages are useful as documentation of APIs that can be checked automatically, e.g. by development tools or asserts at runtime.
As an example, zope.interface is the de-facto standard for interfaces in Python. Projects such as Zope and Twisted that expose huge APIs for consumption find it useful, but as far as I know it's not used much outside this type of projects.
In Ruby, which is a dynamically-typed language and only allows single inheritance, you can mimic an "interface" via mixins, rather than polluting the class with the methods of the "interface".
Mixins partially mimic multiple inheritance, allowing an object to "inherit" from multiple sources, but without the ambiguity and complexity of actually having multiple parents. There is only one true parent.
To implement an interface (in the abstract sense, not an actual interface type as in statically-typed languages), you define a module as if it were an interface in a static language. You then include it in the class. Voila! You've gathered the duck type into what is essentially an interface.
Very simplified example:
module Equippable
  def weapon
    "broadsword"
  end
end

class Hero
  include Equippable

  def hero_method_1
  end

  def hero_method_2
  end
end

class Mount
  include Equippable

  def mount_method_1
  end
end

h = Hero.new
h.weapon # => "broadsword"

m = Mount.new
m.weapon # => "broadsword"
Equippable is the interface for Hero, Mount, and any other class or model that includes it.
(Obviously, the weapon will most likely be dynamically set by an initializer, which has been simplified away in this example.)

Encapsulation in the age of frameworks

At my old C++ job, we always took great care in encapsulating member variables, and only exposing them as properties when absolutely necessary. We'd have really specific constructors that made sure you fully constructed the object before using it.
These days, with ORM frameworks, dependency-injection, serialization, etc., it seems like you're better off just relying on the default constructor and exposing everything about your class in properties, so that you can inject things, or build and populate objects more dynamically.
In C#, it's been taken one step further with Object initializers, which give you the ability to basically define your own constructor. (I know object initializers are not really custom constructors, but I hope you get my point.)
Are there any general concerns with this direction? It seems like encapsulation is starting to become less important in favor of convenience.
EDIT: I know you can still carefully encapsulate members, but I just feel like when you're trying to crank out some classes, you either have to sit and carefully think about how to encapsulate each member, or just expose it as a property and worry about how it is initialized later. It just seems like the easiest approach these days is to expose things as properties and not be so careful. Maybe I'm just flat wrong, but that's just been my experience, especially with the new C# language features.
I disagree with your conclusion. There are many good ways of encapsulating in C# with all the above-mentioned technologies while maintaining good software coding practices. I would also say that it depends on whose technology demo you're looking at, but in the end it comes down to reducing the state-space of your objects so that you can make sure they hold their invariants at all times.
Take object-relational frameworks: most of them allow you to specify how they are going to hydrate the entities. NHibernate, for example, allows you to say access="property" or access="field.camelcase" and similar. This allows you to encapsulate your properties.
Dependency injection works on the other types you have, mostly those which are not entities, even though you can combine AOP+ORM+IOC in some very nice ways to improve the state of these things. IoC is often used from layers above your domain entities if you're building a data-driven application, which I guess you are, since you're talking about ORMs.
They ("they" being application and domain services and other intrinsic classes to the program) expose their dependencies but in fact can be encapsulated and tested in even better isolation than previously since the paradigms of design-by-contract/design-by-interface which you often use when mocking dependencies in mock-based testing (in conjunction with IoC), will move you towards class-as-component "semantics". I mean: every class, when built using the above, will be better encapsulated.
Updated for urig: This holds true for both exposing concrete dependencies and exposing interfaces. First, about interfaces: what I was hinting at above was that services and other application classes which have dependencies can, with OOP, depend on contracts/interfaces rather than specific implementations. In C/C++ and older languages there was no interface construct, and abstract classes can only go so far. Interfaces allow you to tie different runtime instances to the same interface without having to worry about leaking internal state, which is what you're trying to get away from when abstracting and encapsulating. With abstract classes you can still provide a class implementation, just that you can't instantiate it; but inheritors still need to know about the invariants in your implementation, and that can mess up state.
Secondly, about concrete classes as properties: you have to be wary about what types of types ;) you expose as properties. Say you have a List in your instance; then don't expose IList as the property; this will probably leak and you can't guarantee that consumers of the interface don't add things or remove things which you depend on; instead expose something like IEnumerable and return a copy of the List, or even better, do it as a method:
public IEnumerable MyCollection { get { return _List.Enum(); } }
and you can be 100% certain to get both the performance and the encapsulation. No one can add to or remove from that IEnumerable, and you still don't have to perform a costly array copy. The corresponding helper method:
static class Ext {
    public static IEnumerable<T> Enum<T>(this IEnumerable<T> inner) {
        foreach (var item in inner) yield return item;
    }
}
So while you can't get 100% encapsulation in, say, overloaded equals operators/methods, you can get close with your public interfaces.
You can also use the new features of .Net 4.0 built on Spec# to verify the contracts I talked about above.
Serialization will always be there and has been for a long time. Previously, before the internet era, it was used for saving your object graph to disk for later retrieval; now it's used in web services, in copy semantics, and when passing data to e.g. a browser. This doesn't necessarily break encapsulation if you put a few [NonSerialized] attributes or the equivalents on the correct fields.
Object initializers aren't the same as constructors, they are just a way of collapsing a few lines of code. Values/instances in the {} will not be assigned until all of your constructors have run, so in principle it's just the same as not using object initializers.
I guess what you have to watch out for is deviating from the good principles you learnt at your previous job; make sure you keep your domain objects filled with business logic, encapsulated behind good interfaces, and ditto for your service layer.
Private members are still incredibly important. Controlling access to internal object data is always good, and shouldn't be ignored.
Many times I've found private methods to be overkill. Most of the time, if the work you're doing is important enough to break out, you can refactor it in such a way that either a) the private method is trivial, or b) it is an integral part of other functions.
In addition, with unit testing, having many private methods makes it very hard to unit test. There are ways around that (making test objects friends, etc.), but they add difficulty.
I wouldn't discount private methods entirely though. Any time there are important internal algorithms that really make no sense outside of the class, there's no reason to expose those methods.
I think that encapsulation is still important; it helps more in libraries than anything else, IMHO. You can create a library that does X, but you don't need everyone to know how X was created, and you may even want to go further and deliberately obfuscate the way you create X. When I learned about encapsulation, I was also taught that you should always define your variables as private to protect them from a data attack, i.e. to protect against someone breaking your code and accessing variables they are not supposed to use.