Difference between an instance of a class and a class representing an instance?

I use Java as an example, but this is more of a general OOP design question.
Let's take the IOExceptions in Java as an example. Why is there a class FileNotFoundException, for instance? Shouldn't that just be an instance of IOException whose cause is FileNotFound? I would say FileNotFoundException is an instance of IOException. And where does this end? FileNotFoundButOnlyCheckedOnceException, FileNotFoundNoMatterHowHardITriedException..?
I have also seen code in projects I worked on where classes such as FirstLineReader and LastLineReader existed. To me, such classes actually represent instances, but I see this design in many places. Look at the Spring Framework source code, for example: it comes with hundreds of such classes, and every time I see one I see an instance instead of a blueprint. Aren't classes meant to be blueprints?
What I am trying to ask is: how does one decide between these two very simple options?
Option 1:
enum DogBreed {
    Bulldog, Poodle;
}

class Dog {
    DogBreed dogBreed;

    public Dog(DogBreed dogBreed) {
        this.dogBreed = dogBreed;
    }
}
Option 2:
class Dog {}
class Bulldog extends Dog {}
class Poodle extends Dog {}
The first option requires the caller to configure the instance it is creating. In the second option, the class already represents the instance itself (as I see it, which might be totally wrong..).
If you agree that these classes represent instances instead of blueprints, would you say it is good practice to create classes that represent instances, or is the way I am looking at this totally wrong and my statement "classes representing instances" just a load of nonsense?

Edited
First of all: we all know the definition of inheritance, and we can find a lot of examples on SO and around the internet. But I think we should look at it in more depth and a little more rigorously.
Note 0:
Clarification about Inheritance and Instance terminology.
First, let me use the term Development Scope for the development life cycle, when we are modeling and programming our system, and Runtime Scope for when our system is running.
We model and develop Classes in the Development Scope, and we have Objects in the Runtime Scope. There are no Objects in the Development Scope.
In object-oriented terms, the definition of an Instance is: an Object created from a Class.
So when we talk about classes and objects, we should clarify which viewpoint we are taking: Development Scope or Runtime Scope.
So, with this introduction, I want to clarify Inheritance:
Inheritance is a relationship between Classes, NOT Objects.
Inheritance exists in the Development Scope, not in the Runtime Scope. There is no Inheritance in the Runtime Scope.
Once our project is running, there is no relationship left between parent and child (assuming plain inheritance between a child class and a parent class). So what are super.invokeMethod1() and super.attribute1? They are not a relationship between a child object and a parent object. All attributes and methods of the parent are carried over into the child, and super is just a notation for accessing the parts that were inherited from the parent.
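A small Java illustration of that notation (Parent and Child are just example names): super.greet() simply reaches the implementation that Child inherited from Parent.
class Parent {
    void greet() { System.out.println("hello from Parent"); }
}

class Child extends Parent {
    @Override
    void greet() {
        super.greet();   // access the part inherited from Parent
        System.out.println("hello from Child");
    }
}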
Also, there are no Objects in the Development Scope, so there are no Instances in the Development Scope either. There are only Is-A and Has-A relationships.
Therefore, when we say:
I would say FileNotFoundException is an instance of IOException
we should clarify which scope we mean (Development or Runtime).
For example, if FileNotFoundException is an instance of IOException, then what is the relationship between a specific FileNotFoundException thrown at runtime (the Object) and the FileNotFoundException class? An instance of an instance?
Note 1:
Why do we use Inheritance? The goal of inheritance is to extend the parent class's functionality (while staying within the same type).
This extension can happen by adding new attributes or new methods,
or by overriding existing methods.
In addition, by extending a parent class we also gain reusability.
We cannot restrict the parent class's functionality (Liskov Substitution Principle).
We should be able to substitute the child wherever the parent is used in the system (Liskov Substitution Principle).
And so on.
Note 2:
The Width and Depth of Inheritance Hierarchies
The width and depth of an inheritance hierarchy can depend on many factors:
The project: the complexity of the project (type complexity), its architecture and design, the size of the project, the number of classes, and so on.
The team: the team's expertise in controlling the complexity of the project.
And so on.
However, we do have some heuristics for it (Object-Oriented Design Heuristics, Arthur J. Riel):
In theory, inheritance hierarchies should be deep—the deeper, the better.
In practice, inheritance hierarchies should be no deeper than
an average person can keep in his or her short-term memory. A popular
value for this depth is six.
Note that these are heuristics, based on the capacity of short-term memory (about seven items). The team's expertise may shift this number, but the same limit shows up in many hierarchies, such as organizational charts.
Note 3:
When are we using inheritance wrongly?
Based on:
Note 1: the goal of inheritance (extending the parent class's functionality)
Note 2: the width and depth of inheritance hierarchies
we are using inheritance wrongly under these conditions:
We have classes in an inheritance hierarchy that do not extend the parent class's functionality. The extension should be reasonable and substantial enough to justify a new class. "Reasonable" means from the observer's point of view; the observer can be the project's architect or designer (or other architects and designers).
We have a great many classes in the inheritance hierarchy. This is called Over-Specialization. Several things may cause it:
Maybe we did not consider Note 1 (extending the parent's functionality).
Maybe our modularization (packaging) is not right, and we have put many system use cases into one package, so we should do a design refactoring.
There are other reasons, but they are not directly related to this answer.
Note 4:
What should we do when we find we are using inheritance wrongly?
Solution 1: Perform a design refactoring that checks whether each class really extends the parent's functionality. In this refactoring, many classes of the system may be deleted.
Solution 2: Perform a design refactoring of the modularization. In this refactoring, some classes may be moved to other packages.
Solution 3: Use Composition over Inheritance.
We can use this technique for many reasons; a dynamic hierarchy is one of the popular reasons to prefer composition over inheritance.
See Tim Boudreau's (of Sun) notes here:
Object hierarchies don't scale
Solution 4: Use instances over subclasses.
This question is about this technique; let me call it instances over subclasses.
When can we use it?
(Tip 1): Consider Note 1: when we do not really extend the parent class's functionality, or the extension is not reasonable and substantial enough.
(Tip 2): Consider Note 2: when we have a lot of subclasses (near-identical classes) that each extend the parent only a little, and we can handle that variation without inheritance. Note that this is not easy to decide; we should make sure it does not violate other object-oriented principles such as the Open-Closed Principle.
What should we do?
Martin Fowler recommends (Book 1, page 232, and Book 2, page 251):
Replace Subclass with Fields: change the methods to superclass fields and eliminate the subclasses.
We can also use other techniques, such as an enum, as the question mentions.
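As a rough sketch of what that refactoring looks like (the temperament field and factory methods are purely illustrative, not from the question), the Bulldog and Poodle subclasses collapse into field values on a single Dog class:
enum DogBreed {
    BULLDOG, POODLE;
}

class Dog {
    private final DogBreed breed;
    private final String typicalTemperament;   // data that used to vary per subclass

    private Dog(DogBreed breed, String typicalTemperament) {
        this.breed = breed;
        this.typicalTemperament = typicalTemperament;
    }

    // factory methods replace the constructors of the deleted subclasses
    static Dog bulldog() { return new Dog(DogBreed.BULLDOG, "calm"); }
    static Dog poodle()  { return new Dog(DogBreed.POODLE, "lively"); }

    DogBreed breed() { return breed; }
    String typicalTemperament() { return typicalTemperament; }
}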

First, by including the exceptions question along with a general system design issue, you're really asking two different questions.
Exceptions are just complicated values. Their behaviors are trivial: provide the message, provide the cause, etc. And they're naturally hierarchical: there's Throwable at the top, and other exceptions repeatedly specialize it. The hierarchy simplifies exception handling by providing a natural filter mechanism: when you say catch (IOException..., you know you'll get everything bad that happened regarding I/O. Can't get much clearer than that. Testing, which can be ugly for big object hierarchies, is no problem for exceptions: there's little or nothing to test in a value.
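For example, a minimal sketch of that filtering in action (the path and messages are made up):
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;

class CatchFilterDemo {
    static void readConfig(String path) {
        try (FileReader reader = new FileReader(path)) {
            // ... read the configuration ...
        } catch (FileNotFoundException e) {
            // the specific case we care about
            System.err.println("No config at " + path + ", using defaults");
        } catch (IOException e) {
            // everything else bad that happened regarding I/O
            System.err.println("I/O trouble reading config: " + e.getMessage());
        }
    }
}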
It follows that if you are designing similar complex values with trivial behaviors, a tall inheritance hierarchy is a reasonable choice: Different kinds of tree or graph nodes constitute a good example.
Your second example seems to be about objects with more complex behaviors. These have two aspects:
Behaviors need to be tested.
Objects with complex behaviors often change their relationships with each other as systems evolve.
These are the reasons for the often heard mantra "composition over inheritance." It's been well-understood since the mid-90s that big compositions of small objects are generally easier to test, maintain, and change than big inheritance hierarchies of necessarily big objects.
Having said that, the choices you've offered for implementation are missing the point. The question you need to answer is "What are the behaviors of dogs I'm interested in?" Then describe these with an interface, and program to the interface.
interface Dog {
    Breed getBreed();
    Set<Dog> getFavoritePlaymates(DayOfWeek dayOfWeek);
    void emitBarkingSound(double volume);
    Food getFavoriteFood(Instant asOfTime);
}
When you understand the behaviors, implementation decisions become much clearer.
Then a rule of thumb for implementation is to put simple, common behaviors in an abstract base class:
abstract class AbstractDog implements Dog {
    private final Breed breed;

    protected AbstractDog(Breed breed) { this.breed = breed; }

    @Override public Breed getBreed() { return breed; }
}
You should be able to test such base classes by creating minimal concrete versions that just throw UnsupportedOperationException for the unimplemented methods and verify the implemented ones. A need for any fancier kind of setup is a code smell: you've put too much into the base.
Implementation hierarchies like this can be helpful for reducing boilerplate, but more than 2 deep is a code smell. If you find yourself needing 3 or more levels, it's very likely you can and should wrap chunks of common behavior from the low-level classes in helper classes that will be easier to test and available for composition throughout the system. For example, rather than offering a protected void emitSound(Mp3Stream sound); method in the base class for inheritors to use, it would be far preferable to create a new class SoundEmitter {} and add a final member of this type to Dog.
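A rough sketch of that alternative (the emit and barkRecording methods are placeholders; Dog and Breed are the interface and enum assumed from above): the behavior lives in a small helper class that can be tested on its own and reused outside the Dog hierarchy.
class Mp3Stream { /* ... */ }

class SoundEmitter {
    void emit(Mp3Stream sound, double volume) {
        // play the stream at the given volume; easy to test in isolation
    }
}

abstract class AbstractDog implements Dog {
    private final Breed breed;
    private final SoundEmitter emitter = new SoundEmitter();   // composed, not inherited

    protected AbstractDog(Breed breed) { this.breed = breed; }

    @Override public Breed getBreed() { return breed; }

    @Override public void emitBarkingSound(double volume) {
        emitter.emit(barkRecording(), volume);
    }

    protected abstract Mp3Stream barkRecording();   // each breed supplies its own sound
}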
Then make concrete classes by filling in the rest of the behavior:
class Poodle extends AbstractDog {
    Poodle() { super(Breed.POODLE); }

    @Override public Set<Dog> getFavoritePlaymates(DayOfWeek dayOfWeek) { ... }
    @Override public Food getFavoriteFood(Instant asOfTime) { ... }
}
Observe: The need for a behavior - that the dog must be able to return its breed - and our decision to implement the "get breed" behavior in an abstract base class resulted in a stored enum value.
We ended up adopting something closer to your Option 1, but this wasn't an a priori choice. It flowed from thinking about behaviors and the cleanest way to implement them.

The following comments apply to the case where subclasses do not actually extend the functionality of their superclass.
From Oracle doc:
Signals that an I/O exception of some sort has occurred. This class is the general class of exceptions produced by failed or interrupted I/O operations.
It says IOException is a general exception. If we have a cause enum:
enum Cause {
    FileNotFound, CharacterCoding, ...;
}
We would not be able to throw an IOException if the cause in our custom code is not included in the enum. In other words, it makes IOException more specific instead of general.
Assuming we are not programming a library, and the functionality of class Dog below is specific in our business requirement:
enum DogBreed {
    Bulldog, Poodle;
}

class Dog {
    DogBreed dogBreed;

    public Dog(DogBreed dogBreed) {
        this.dogBreed = dogBreed;
    }
}
Personally, I think it is good to use an enum here because it simplifies the class structure (fewer classes).

The first code you cite involves exceptions.
Inheritance is a natural fit for exception types because the language-provided construct to differentiate exceptions of interest in the try-catch statement is through use of the type system. This means we can easily choose to handle just a more specific type (FileNotFound), or the more general type (IOException).
Testing a field's value to see whether to handle an exception means stepping out of the standard language construct and writing some boilerplate guard code (e.g. test the value(s) and rethrow if not interested).
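A sketch of what that guard code tends to look like (IoFailure, its Kind enum and the helper methods are all hypothetical):
class GuardCodeDemo {
    static class IoFailure extends Exception {
        enum Kind { FILE_NOT_FOUND, CHARACTER_CODING, OTHER }
        private final Kind kind;
        IoFailure(Kind kind, String message) { super(message); this.kind = kind; }
        Kind kind() { return kind; }
    }

    static void loadConfig() throws IoFailure {
        try {
            openConfig();
        } catch (IoFailure e) {
            if (e.kind() == IoFailure.Kind.FILE_NOT_FOUND) {
                useDefaults();   // the one case we actually handle here
            } else {
                throw e;         // boilerplate: not interested, rethrow
            }
        }
    }

    static void openConfig() throws IoFailure {
        throw new IoFailure(IoFailure.Kind.FILE_NOT_FOUND, "config file missing");
    }

    static void useDefaults() { /* fall back to built-in defaults */ }
}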
(Further, exceptions need to be extensible across DLL (compilation) boundaries. When we use enums we may have problems extending the design without modifying the source that introduces the enum (and the other source that consumes it).)
When it comes to things other than exceptions, today's wisdom encourages composition over inheritance as this tends to result in less complex and more maintainable designs.
Your Option 1 is more of a composition example, whereas your Option 2 is clearly an inheritance example.
If you agree that these classes represent instances instead of blueprints, would you say it is a good practice to create classes that represents instances or is it totally wrong the way I am looking at this and my statement "classes representing instances" is just load of nonsense?
I agree with you, and would not say this represents good practice. These classes as shown are not particularly customizable and don't represent added value.
A class that offers no overrides, no new state, and no new methods is not particularly differentiated from its base. So there is little merit in declaring such a class, unless we intend to do instance-of tests on it (as the exception-handling language construct does under the covers). We can't really tell from this example, which is contrived for the purposes of asking the question, whether there is any added value in these subclasses, but it doesn't appear so.
To be clear, though, there are lots of worse examples of inheritance, such as when a (pre)occupation like Teacher or Student inherits from Person. This means that a Teacher cannot be a Student at the same time unless we add even more classes, e.g. TeacherStudent, perhaps using multiple inheritance.
We might call this class explosion, since we sometimes end up needing a whole matrix of classes because of inappropriate is-a relationships. (Add one new class, and you need a whole new row or column of exploded classes.)
Working with a design that suffers class explosion actually creates more work for the clients consuming these abstractions, so it is a lose-lose situation.
The issue here is our trust in natural language: when we say someone is-a Student, this is not, from a logical perspective, the same permanent is-a/instance-of relationship (of subclassing), but rather a potentially temporary role being played by the Person, and one of many possible roles a Person might play concurrently at that. In these cases composition is clearly superior to inheritance.
In your scenario, however, a Bulldog is unlikely to ever be anything other than a Bulldog, so the permanent is-a relationship of subclassing holds, and while it adds little value, at least this hierarchy does not risk class explosion.
Note that the main drawback of the enum approach is that the enum may not be extensible, depending on the language you're using. If you need arbitrary extensibility (e.g. by others and without altering your code), you have the choice of using something extensible but more weakly typed, like strings (typos aren't caught, duplicates aren't caught, etc.), or you can use inheritance, as it offers decent extensibility with stronger typing. Exceptions need this kind of extensibility by others, without modification and recompilation of the originals, since they are used across DLL boundaries.
If you control the enum and can recompile the code as a unit as needed to handle new dog types, then you don't need this extensibility.

Option 1 has to list all known causes at declaration time.
Option 2 can be extended by creating new classes, without touching the original declaration.
This is important when the base/original declaration is done by the framework. If there were 100 known, fixed reasons for I/O problems, an enum or something similar could make sense, but if new ways to communicate can crop up that should also count as I/O exceptions, then a class hierarchy makes more sense. Any class library that you add to your application can extend it with more I/O exceptions without touching the original declaration.
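A minimal sketch of that extension point (the exception class and its name are invented for illustration):
import java.io.IOException;

// a new kind of I/O failure added by a library, without touching java.io;
// existing catch (IOException e) handlers will still see it
class MessageQueueTimeoutException extends IOException {
    MessageQueueTimeoutException(String message) {
        super(message);
    }
}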
This is basically the O in SOLID: open for extension, closed for modification.
But this is also why, for example, DayOfWeek-style enumerations exist in many frameworks. It is extremely unlikely that the western world suddenly wakes up one day and decides to go for 14 unique days, or 8, or 6, so having classes for those is probably overkill. These things are more set in stone (knock on wood).

The two options you present do not actually express what I think you're trying to get at. What you're trying to differentiate between is composition and inheritance.
Composition works like this:
class Poodle {
    Legs legs;
    Tail tail;
}

class Bulldog {
    Legs legs;
    Tail tail;
}
Both have a common set of characteristics that we can aggregate to 'compose' a class. We can specialize components where we need to, but can just expect that "Legs" mostly work like other legs.
Java has chosen inheritance instead of composition for IOException and FileNotFoundException.
That is, a FileNotFoundException is a kind of (i.e. extends) IOException and permits handling based on the identity of the superclass only (though you can specify special handling if you choose to).
The arguments for choosing composition over inheritance are well-rehearsed by others and can be easily found by searching for "composition vs. inheritance."

Related

If you have Traits, do you stop using interfaces, Abstract base classes, and multiple inheritance?

It seems like Traits could completely replace interfaces, abstract base classes, mixins, and multiple inheritance, leaving you with just Traits and concrete inheritance.
Is this the intent?
If you have traits, which of the other code structuring constructs should you use?
(Roles are the Perl name for Traits.)
At least for Perl's Moose, there are no interfaces, so roles clearly subsume those, and generally mixins too. I'd say there still could be a case for abstract base classes. Roles can be considered what objects do, where classes are what they are.
By this line of reasoning, there still might be a valid use for an abstract base class. A URL is one example. There could easily be an abstract base class for a URL. An IO stream might be different, perhaps better as a role, as it defines how things behave rather than what they are.
When using roles, however, I have yet to see any clear need for true multiple inheritance from more than one class.
I have no use for interfaces or abstract classes at this point, but mixins and multiple inheritance are really enabled by traits so the usage of those paradigms is strongly encouraged here. Check the entire collection library to see the very rich classes you can build using these ideas.
Ah, my comments reflect Scala - I didn't realize you tagged this with multiple languages.
When you instantiate a trait, it consumes one class.
So regardless of expressivity, you may still use legacy constructs to prevent class explosion in your jar (and in startup time).
I'll let others answer about expressivity :)
I'm only talking about Scala here...
Read this.

Mixins vs composition in Scala

In the Java world (more precisely, if you have no multiple inheritance/mixins) the rule of thumb is quite simple: "Favor object composition over class inheritance."
I'd like to know if/how it is changed if you also consider mixins, especially in scala?
Are mixins considered a way of multiple inheritance, or more class composition?
Is there also a "Favor object composition over class composition" (or the other way around) guideline?
I've seen quite a few examples where people use (or abuse) mixins when object composition could also do the job, and I'm not always sure which one is better. It seems to me that you can achieve quite similar things with them, but there are also some differences. Some examples:
visibility - with mixins everything becomes part of the public api, which is not the case with composition.
verbosity - in most cases mixins are less verbose and a bit easier to use, but it's not always the case (e.g. if you also use self types in complex hierarchies)
I know the short answer is "It depends", but probably there are some typical situation when this or that is better.
Some examples of guidelines I could come up with so far (assuming I have two traits A and B and A wants to use some methods from B):
If you want to extend the API of A with the methods from B then mixins, otherwise composition. But it does not help if the class/instance that I'm creating is not part of a public API.
If you want to use some patterns that need mixins (e.g. Stackable Trait Pattern) then it's an easy decision.
If you have circular dependencies then mixins with self types can help. (I try to avoid this situation, but it's not always easy)
If you want some dynamic, runtime decisions how to do the composition then object composition.
In many cases mixins seem to be easier (and/or less verbose), but I'm quite sure they also have some pitfalls, like the "God class" and others described in two Artima articles: part 1, part 2. (BTW, it seems to me that most of the other problems are not relevant / not so serious for Scala.)
Do you have more hints like these?
A lot of the problems that people have with mix-ins can be averted in Scala if you only mix-in abstract traits into your class definitions, and then mix in the corresponding concrete traits at object instantiation time. For instance
trait Locking {
  // abstract locking trait, many possible definitions
  protected def lock[A](body: => A): A
}

class MyService {
  this: Locking =>
}

// For this time, we'll use a java.util.concurrent lock
val myService: MyService = new MyService with JDK15Locking
This construct has several things to recommend it. First, it prevents an explosion of classes as different combinations of trait functionalities are needed. Second, it allows for easy testing, as one can create and mix in "do-nothing" concrete traits, similar to mock objects. Finally, we've completely hidden which locking trait is used, and even the fact that locking is going on, from consumers of our service.
Since we've gotten past most of the claimed drawbacks of mix-ins, we're still left with a tradeoff
between mix-in and composition. For myself, I normally make the decision based on whether a hypothetical delegate object would be entirely encapsulated by the containing object, or whether it could potentially be shared and have a lifecycle of its own. Locking provides a good example of entirely encapsulated delegates. If your class uses a lock object to manage concurrent access to its internal state, that lock is entirely controlled by the containing object, and neither it nor its operations are advertised as part of the class's public interface. For entirely encapsulated functionality like this, I go with mix-ins. For something shared, like a datasource, use composition.
Other differences you haven't mentioned:
Trait classes do not have any independent existence:
(Programming Scala)
If you find that a particular trait is used most often as a parent of other classes, so that the child classes behave as the parent trait, then consider defining the trait as a class instead, to make this logical relationship more clear.
(We said behaves as, rather than is a, because the former is the more precise definition of inheritance, based on the Liskov Substitution Principle - see [Martin2003], for example.)
[Martin2003]: Robert C. Martin, Agile Software Development: Principles, Patterns, and Practices, Prentice-Hall, 2003
Mixins (traits) have no constructor parameters.
Hence the advice, still from Programming Scala:
Avoid concrete fields in traits that can’t be initialized to suitable default values.
Use abstract fields instead or convert the trait to a class with a constructor.
Of course, stateless traits don’t have any issues with initialization.
It’s a general principle of good object-oriented design that an instance should always be in a known valid state, starting from the moment the construction process finishes.
That last part, regarding the initial state of an object, has often helped decide between class (and class composition) and trait (and mixins) for a given concept.

What exactly is a Class Factory?

I see the word thrown around often, and I may have used it myself in code and libraries over time, but I never really got it. In most write-ups I came across, they just went on expecting you to figure it out.
What is a Class Factory? Can someone explain the concept?
Here's some supplemental information that may help better understand several of the other shorter, although technically correct, answers.
In the strictest sense a Class Factory is a function or method that creates or selects a class and returns it, based on some condition determined from input parameters or global context. This is required when the type of object needed can't be determined until runtime. Implementation can be done directly when classes are themselves objects in the language being used, such as Python.
Since the primary use of any class is to create instances of itself, in languages such as C++ where classes are not objects that can be passed around and manipulated, a similar result can often be achieved by simulating "virtual constructors", where you call a base-class constructor but get back an instance of some derived class. This must be simulated because constructors can't really be virtual✶ in C++, which is why such object—not class—factories are usually implemented as standalone functions or static methods.
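A minimal Java sketch of such an object factory (Shape, Circle and Square are made-up placeholder types), with the selection logic living in a static factory method on the base class, much like the make_object() mentioned below:
abstract class Shape {
    abstract double area();

    // simulated "virtual constructor": every supported subtype is listed by hand
    static Shape makeObject(String kind) {
        switch (kind) {
            case "circle": return new Circle(1.0);
            case "square": return new Square(1.0);
            default: throw new IllegalArgumentException("unknown kind: " + kind);
        }
    }
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius * radius; }
}

class Square extends Shape {
    private final double side;
    Square(double side) { this.side = side; }
    double area() { return side * side; }
}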
Although using object factories is a simple and straightforward scheme, they require the manual maintenance of a list of all supported types in the base class's make_object() function, which can be error-prone and labor-intensive (if not overlooked). It also violates encapsulation✶✶, since a member of the base class must know about all of the base's concrete descendant classes (now and in the future).
✶ Virtual functions are normally resolved "late" by the actual type of object referenced, but in the case of constructors, the object doesn't exist yet, so the type must be determined by some other means.
✶✶ Encapsulation is a property of the design of a set of classes and functions where the knowledge of the implementation details of a particular class or function are hidden within it—and is one of the hallmarks of object-oriented programming.
Therefore the best/ideal implementations are those that can handle new candidate classes automatically when they're added, rather than having only a certain finite set currently hardcoded into the factory (although the trade-off is often deemed acceptable since the factory is the only place requiring modification).
James Coplien's 1991 book Advanced C++: Programming Styles and Idioms has details on one way to implement such virtual generic constructors in C++. There are even better ways to do this using C++ templates, but that's not covered in the book which predates their addition to the standard language definition. In fact, C++ templates are themselves class factories since they instantiate a new class whenever they're invoked with different actual type arguments.
Update: I located a 1998 paper Coplien wrote for EuroPLoP titled C++ Idioms where, among other things, he revises and regroups the idioms in his book into design-pattern form à la the 1994 Design Patterns: Elements of Re-Usable Object-Oriented Software book. Note especially the Virtual Constructor section (which uses his Envelope/Letter pattern structure).
Also see the related answers here to the question Class factory in Python as well as the 2001 Dr. Dobb's article about implementing them with C++ Templates titled Abstract Factory, Template Style.
A class factory constructs instances of other classes. Typically, the classes they create share a common base class or interface, but derived classes are returned.
For example, you could have a class factory that took a database connection string and returned a class implementing IDbConnection such as SqlConnection (class and interface from .Net)
A class factory is a method which (according to some parameters, for example) returns a customised class (not instantiated!).
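In Java terms, a minimal sketch of that idea (all names are hypothetical) might hand back a Class object and leave instantiation to the caller:
import java.util.Map;

class Dog {}
class Poodle extends Dog {}
class Bulldog extends Dog {}

class DogClassFactory {
    private static final Map<String, Class<? extends Dog>> REGISTRY =
            Map.of("poodle", Poodle.class, "bulldog", Bulldog.class);

    // returns a class, not an instance; the caller decides when to instantiate
    static Class<? extends Dog> forBreed(String breed) {
        Class<? extends Dog> cls = REGISTRY.get(breed);
        if (cls == null) {
            throw new IllegalArgumentException("unknown breed: " + breed);
        }
        return cls;
    }
}
The caller could then create the instance later, e.g. via DogClassFactory.forBreed("poodle").getDeclaredConstructor().newInstance() (with the appropriate exception handling).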
The Wikipedia article gives a pretty good definition: http://en.wikipedia.org/wiki/Factory_pattern
But probably the most authoritative definition would be found in the Design Patterns book by Gamma et al. (commonly called the Gang of Four Book).
I felt that this explains it pretty well (for me, anyway). Class factories are used in the factory design pattern, I think.
Like other creational patterns, it [the factory design pattern] deals with the problem of creating objects (products) without specifying the exact class of object that will be created. The factory method design pattern handles this problem by defining a separate method for creating the objects, which subclasses can then override to specify the derived type of product that will be created. More generally, the term factory method is often used to refer to any method whose main purpose is creation of objects.
http://en.wikipedia.org/wiki/Factory_method_pattern
Apologies if you've already read this and found it to be insufficient.

Are static inner classes a good idea or poor design?

I find I have several places where having public static inner classes that extend "helper" classes makes my code a lot more type-safe and, in my opinion, readable. For example, imagine I have a SearchCriteria class. There are a lot of commonalities between the different things I search for (a search term and a group of search-term types, a date range, etc.). By extending it in a static inner class, I tightly couple the extension to the searchable class with the specific differences. This seems like a bad idea in theory (Tight Coupling Bad!), but the extension is specific to this searchable class (One Class, One Purpose).
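Roughly what I mean, with made-up names:
class SearchCriteria {
    String searchTerm;
    DateRange dateRange;            // commonalities shared by every search
}

class DateRange { /* from, to */ }

class InvoiceRepository {

    // the extension lives right next to the one class that uses it
    public static class InvoiceSearchCriteria extends SearchCriteria {
        String customerNumber;      // the invoice-specific differences
        boolean unpaidOnly;
    }

    public java.util.List<Invoice> find(InvoiceSearchCriteria criteria) {
        // ... build and run the query ...
        return java.util.List.of();
    }
}

class Invoice { /* ... */ }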
My question is: in your experience, has the use of static inner classes (or whatever your language's equivalent is) made your code more readable/maintainable, or has this ended up biting you in the EOF?
Also, I'm not sure if this is community wiki material or not.
Sounds perfectly reasonable to me. By making it an inner class, you're making it easy to find and an obvious candidate for review when the searchable class changes.
Tight coupling is only bad when you couple things that don't really belong together just because one of them happens to call the other one. For classes that collaborate closely, e.g. when, as in your case, one of them exists to support the other, then it's called "cohesion", and it's a good thing.
Note that the class is not the unit of reuse. Therefore, some coupling between classes is normal and expected. The unit of reuse is usually a collection of related classes.
In Python, we have a variety of structures.
Packages. They contain modules. These are essentially directories with a little bit of Python machinery thrown in.
Modules. They contain classes (and functions). These are files; and can contain any number of closely-related classes. Often, the "inner class" business is handled at this level.
Classes. These can contain inner class definitions as well as method functions. Sometimes (not very often) inner classes may actually be used. This is rare, since the module-level coupling among classes is usually perfectly clear.
The only caveat with using inner classes is making sure you're not repeating yourself all over the place - as in, make sure that when you define an inner class you're not going to need that functionality anywhere else, and that the functionality really is coupled to the outer class. You don't want to end up with a whole bunch of inner classes that all implement the exact same setOrderByNameDesc() method.
The point in "loose coupling" is to keep the two classes separate so that if there are code changes in your "SearchCriteria" class nothing would have to be change in the other classes. I think that the static inner classes you are talking about could potentially make maintaining code a nightmare. One change in SearchCriteria could send you searching through all of the static classes to figure out which ones are now broken because of the update. Personally, I would stay away from any such inner classes unless it is really needed for some reason.

Encapsulation in the age of frameworks

At my old C++ job, we always took great care in encapsulating member variables, and only exposing them as properties when absolutely necessary. We'd have really specific constructors that made sure you fully constructed the object before using it.
These days, with ORM frameworks, dependency-injection, serialization, etc., it seems like you're better off just relying on the default constructor and exposing everything about your class in properties, so that you can inject things, or build and populate objects more dynamically.
In C#, it's been taken one step further with Object initializers, which give you the ability to basically define your own constructor. (I know object initializers are not really custom constructors, but I hope you get my point.)
Are there any general concerns with this direction? It seems like encapsulation is starting to become less important in favor of convenience.
EDIT: I know you can still carefully encapsulate members, but I just feel like when you're trying to crank out some classes, you either have to sit and carefully think about how to encapsulate each member, or just expose it as a property and worry about how it is initialized later. It just seems like the easiest approach these days is to expose things as properties and not be so careful. Maybe I'm just flat wrong, but that has just been my experience, especially with the new C# language features.
I disagree with your conclusion. There are many good ways of encapsulating in C# with all the above-mentioned technologies while maintaining good coding practices. I would also say that it depends on whose technology demo you're looking at, but in the end it comes down to reducing the state space of your objects so that you can make sure they hold their invariants at all times.
Take object-relational frameworks; most of them allow you to specify how they are going to hydrate the entities. NHibernate, for example, allows you to say access="property" or access="field.camelcase" and similar. This allows you to encapsulate your properties.
Dependency injection works on the other types you have, mostly those which are not entities, even though you can combine AOP+ORM+IOC in some very nice ways to improve the state of these things. IoC is often used from layers above your domain entities if you're building a data-driven application, which I guess you are, since you're talking about ORMs.
They ("they" being application and domain services and other intrinsic classes to the program) expose their dependencies but in fact can be encapsulated and tested in even better isolation than previously since the paradigms of design-by-contract/design-by-interface which you often use when mocking dependencies in mock-based testing (in conjunction with IoC), will move you towards class-as-component "semantics". I mean: every class, when built using the above, will be better encapsulated.
Updated for urig: This holds true both for exposing concrete dependencies and for exposing interfaces. First, about interfaces: what I was hinting at above was that services and other application classes which have dependencies can, with OOP, depend on contracts/interfaces rather than specific implementations. In C/C++ and older languages there was no interface construct, and abstract classes can only go so far. Interfaces allow you to tie different runtime instances to the same interface without having to worry about leaking internal state, which is what you're trying to get away from when abstracting and encapsulating. With abstract classes you can still provide a class implementation, just that you can't instantiate it, but inheritors still need to know about the invariants in your implementation, and that can mess up state.
Secondly, about concrete classes as properties: you have to be wary about what types of types ;) you expose as properties. Say you have a List in your instance; then don't expose IList as the property; this will probably leak and you can't guarantee that consumers of the interface don't add things or remove things which you depend on; instead expose something like IEnumerable and return a copy of the List, or even better, do it as a method:
public IEnumerable MyCollection { get { return _List.Enum(); } } and you can be 100% certain to get both the performance and the encapsulation. No one can add to or remove from that IEnumerable, and you still don't have to perform a costly array copy. The corresponding helper method:
static class Ext {
    public static IEnumerable<T> Enum<T>(this IEnumerable<T> inner) {
        foreach (var item in inner) yield return item;
    }
}
So while you can't get 100% encapsulation in, say, creating overloaded equals operators/methods, you can get close with your public interfaces.
You can also use the new features of .Net 4.0 built on Spec# to verify the contracts I talked about above.
Serialization will always be there and has been for a long time. Previously, before the internet era, it was used for saving your object graph to disk for later retrieval; now it's used in web services, in copy semantics and when passing data to e.g. a browser. This doesn't necessarily break encapsulation if you put a few [NonSerialized] attributes (or the equivalents) on the correct fields.
Object initializers aren't the same as constructors, they are just a way of collapsing a few lines of code. Values/instances in the {} will not be assigned until all of your constructors have run, so in principle it's just the same as not using object initializers.
I guess, what you have to watch out for is deviating from the good principles you've learnt from your previous job and make sure you are keeping your domain objects filled with business logic encapsulated behind good interfaces and ditto for your service-layer.
Private members are still incredibly important. Controlling access to internal object data is always good, and shouldn't be ignored.
Many times I've found private methods to be overkill. Most of the time, if the work you're doing is important enough to break out, you can refactor it in such a way that either a) the private method is trivial, or b) it is an integral part of other functions.
In addition, with unit testing, having many private methods makes a class very hard to unit test. There are ways around that (making test objects friends, etc.), but it adds difficulties.
I wouldn't discount private methods entirely though. Any time there's important, internal algorithms that really make no sense outside of the class there's no reason to expose those methods.
I think that encapsulation is still important; it helps more in libraries than anything else, IMHO. You can create a library that does X, but you don't need everyone to know how X was created, and you may specifically want to obfuscate the way you create X. The way I learned about encapsulation, I also remember that you should always define your variables as private to protect them from a data attack, i.e. to protect against a hacker breaking your code and accessing variables that they are not supposed to use.