We are developing an extension (in a C# .NET environment) for a GIS application, which will have predefined types
for modeling real-world objects, starting from GenericObject and going down to more specific types like Pipe and Road, with their detailed properties and methods such as BottomOfPipe, Diameter, and so on.
There will certainly be an object model, interfaces, inheritance, and lots of other essential parts in the type library, and by now we have fixed some of them. But as you may know, designing an object model is very ambiguous work: as far as I know, it can be done in many different ways, with many different results and weaknesses.
Are there any distinct rules for designing an object model: the hierarchy, the way of defining interfaces, abstract classes and coclasses, enums?
Any suggestions, references, or practices?
A couple of good ones:
SOLID
Single responsibility principle
Open/closed principle
Liskov substitution principle
Interface segregation principle
Dependency inversion principle
More information and more principles here:
http://mmiika.wordpress.com/oo-design-principles/
Check out Domain-Driven Design: Tackling Complexity in the Heart of Software. I think it will answer your questions.
what they said, plus it looks like you are modeling real-world entities, so:
restrict your object model to exactly match the real-world entities.
You can use inheritance and components to reduce the code/model, but only in ways that make sense with the underlying domain.
For example, a Pipe class with a Diameter property would make sense, while a DiameterizedObject class (with a Diameter property) with a GeometryType property of GeometryType.Pipe would not. Both models could be made to work, but the former clearly corresponds to the problem domain, while the latter implements an artificial (non-real-world) perspective.
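To make that concrete, here is a minimal sketch of the domain-matching shape (in Java for illustration, since the idea is language-neutral; GenericObject, Pipe, and the units are assumptions based on the question, not any real GIS library):

    // Hypothetical base type from the question's type library.
    class GenericObject { }

    // The class corresponds directly to a real-world entity.
    class Pipe extends GenericObject {
        private final double diameter;      // pipe diameter, in metres (assumed unit)
        private final double bottomOfPipe;  // invert elevation, in metres (assumed unit)

        Pipe(double diameter, double bottomOfPipe) {
            this.diameter = diameter;
            this.bottomOfPipe = bottomOfPipe;
        }

        double getDiameter()     { return diameter; }
        double getBottomOfPipe() { return bottomOfPipe; }
    }

Contrast this with a DiameterizedObject carrying a GeometryType tag: the code would still run, but every consumer would have to switch on the tag instead of working with a type that mirrors the domain.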
One additional clue: you know you've got the model right when you find yourself discovering new features in the code that you didn't plan from the start - they just 'naturally' fall out of the model. For example, your model may have Pipe and Junction classes (as connectivity adapters) sufficient to solve the immediate problem of (say) joining different-diameter pipes to each other and calculating flow rates, maximum pressures, and structural integrity. You later realize that since you modeled the structural and connectivity properties of the Pipes and Junctions accurately (within the requirements of the domain) you can also create a JungleGym object from connected pipes and correctly calculate how much structural load it will bear.
This is an extreme example, but it should get the point across: correct object models support extension and often manifest beneficial unexpected properties and features (not bugs!).
The Liskov Substitution Principle, often expressed in terms of "is-a".
Many examples of OOP would be better off making use of "has-a" (in C++, private inheritance or explicit composition) rather than public inheritance ("is-a").
Getting inheritance right is hard. Doing so with interfaces (pure virtual classes) is often easier than with base/subclass hierarchies.
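A minimal sketch of the "has-a" alternative (hypothetical names, in Java, where explicit composition plays the role that private inheritance plays in C++):

    import java.util.ArrayList;
    import java.util.List;

    // "is-a" (extends ArrayList) would wrongly promise that a Stack can do
    // everything a List can; "has-a" keeps the List as a hidden detail.
    class Stack<T> {
        private final List<T> items = new ArrayList<>();  // has-a, not is-a

        void push(T item)  { items.add(item); }
        T pop()            { return items.remove(items.size() - 1); }
        boolean isEmpty()  { return items.isEmpty(); }
    }

Here Liskov substitution is never violated, because Stack never claims to be a List in the first place.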
Check out the "principles" of object-oriented design. These give guidelines for all the questions you ask.
References:
"Object oriented software construction" by Robert Martin
http://www.objectmentor.com/resources/publishedArticles.html
Check out the "Design Principles" articles at the above site. They are the best references available.
"BottomOfPipe"? Is that another way of saying the depth of the Pipe below the Road?
Any kind of design is difficult and can be done different ways. There are no guarantees that your design will work when you create it.
The advantage that people who design ball bearings and such have is many more years of experience and data to determine what works and what does not. Software doesn't have as much time or hard data.
Here's some advice:
Inheritance means IS-A. If that doesn't hold, don't use inheritance.
A deep hierarchy is probably a sign of trouble.
From Scott Meyers: Make non-leaf classes interfaces or abstract.
Prefer composition to inheritance.
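A small sketch of the last two points taken together (hypothetical names, in Java): non-leaf types stay abstract, leaves are concrete, and anything that fails the "is-a" test is composed in rather than inherited.

    // Non-leaf: abstract, never instantiated directly (Meyers' guideline).
    abstract class NetworkElement {
        abstract double flowCapacity();
    }

    // Leaf: the only concrete, instantiable level of the hierarchy.
    final class Pipe extends NetworkElement {
        private final double diameter;
        private final Material material;   // composed in, not inherited

        Pipe(double diameter, Material material) {
            this.diameter = diameter;
            this.material = material;
        }

        @Override
        double flowCapacity() {
            // Cross-sectional area as a stand-in for a real hydraulic formula.
            return Math.PI * diameter * diameter / 4.0;
        }
    }

    // A Pipe "has-a" Material; it is not a kind of Material.
    class Material {
        final double roughness;
        Material(double roughness) { this.roughness = roughness; }
    }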
In his post SOLID: the next step is Functional, Mark Seemann states:
If you keep driving your design towards smaller and smaller interfaces, you'll eventually arrive at the ultimate Role Interface: an interface with a single method [...] If you apply the SRP and ISP like that, you're likely to evolve a code base with many fine-grained classes that each have a single method. That has happened to me more than once.
My concern is about the cohesion of such classes. Does this approach lead to functional cohesion, or are those classes incohesive? Is there a bad influence on code coherence?
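For concreteness, the kind of single-method Role Interface Seemann describes looks like this (a hypothetical sketch; his examples are C#, rendered here in Java):

    import java.util.List;

    // Hypothetical domain type for the example.
    record Order(List<String> lines) { }

    // The "ultimate" Role Interface: one capability, one method.
    interface OrderValidator {
        boolean validate(Order order);
    }

    // A fine-grained class: single responsibility, single method.
    class NonEmptyOrderValidator implements OrderValidator {
        @Override
        public boolean validate(Order order) {
            return !order.lines().isEmpty();
        }
    }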
There's a great definition of cohesion presented in the book Growing Object-Oriented Software, Guided by Tests, which states the following:
An element’s cohesion is a measure of whether its responsibilities
form a meaningful unit. For example, a class that parses both dates
and URLs is not coherent, because they’re unrelated concepts. Think of
a machine that washes both clothes and dishes—it’s unlikely to do both
well. At the other extreme, a class that parses only the punctuation
in a URL is unlikely to be coherent, because it doesn’t represent a
whole concept. To get anything done, the programmer will have to find
other parsers for protocol, host, resource, and so on. Features with
“high” coherence are easier to maintain.
This probably gets into subjective territory pretty quickly, but I would argue that SRP and cohesion are directly related concepts that are nevertheless sometimes orthogonal. If you have classes with only one method, then sure, each is cohesive in the sense that it does one thing only. But you lose something as well, i.e. the class is now too fine-grained to be useful by itself.
In a functional style, having such classes makes a lot of sense: it's all about composing functions to achieve a result. C# makes this style possible, but also pretty verbose, so I completely agree with Seemann when he argues for F# in the case where you are designing your codebase in such a manner anyway.
The question of whether this is good or bad design is a subjective one, but I think we can say a few things objectively. One-method classes are by their nature almost guaranteed to respect the SRP (sure, you can still miss the point and make god classes with one all-powerful method). So code written in such a way should have all the benefits we would expect, i.e. it should be loosely coupled, composable, and maintainable. But there is something to be said about losing the big picture of such code as well.
I would argue that some combination of the two is required in most cases, leaning towards classes with a single method in most of the codebase. For example, you could write most of your low-level code as a reusable collection of libraries built from such classes. Once you get closer to the API level, you would compose those classes to get the logic you want, and then expose that logic to your clients as more cohesive chunks of functionality. Clients get the benefit of more cohesive, high-level code paths to follow, leading to more convenient use and greater discoverability of the functionality your library supports, while your low-level code keeps all the benefits of being maintainable and flexible to change.
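A sketch of that layering (hypothetical names, in Java): fine-grained single-method types at the bottom, composed into a more cohesive facade at the API surface.

    // Hypothetical document type for the example.
    class Document { }

    // Low level: fine-grained, single-method, easy to test and recombine.
    interface Parser    { Document parse(String raw); }
    interface Enricher  { Document enrich(Document doc); }
    interface Persister { void save(Document doc); }

    // API level: one cohesive, discoverable chunk of functionality.
    class ImportService {
        private final Parser parser;
        private final Enricher enricher;
        private final Persister persister;

        ImportService(Parser parser, Enricher enricher, Persister persister) {
            this.parser = parser;
            this.enricher = enricher;
            this.persister = persister;
        }

        void importDocument(String raw) {
            persister.save(enricher.enrich(parser.parse(raw)));
        }
    }

Clients call ImportService.importDocument and never need to know about the three tiny classes behind it.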
I am currently working on a GUI application using JavaFX 2 as the framework. I have used it in Java already and know the principles of data binding.
Since functional-style programming in Scala advocates the use of immutable values (vals), there is a gap.
Is there any other solution than having a mutable FX-property-based presentation model for the GUI and an immutable model for the application logic, with a conversion layer in between?
Regards,
Andreas
Since your question is a bit vague, please forgive me if this is largely based on personal opinion: there are, to my knowledge, no alternatives to the mutable property model. I would, however, argue that you don't want one.
First off, functional programming, at least from a purist point of view, attempts to avoid side effects. User interfaces, however, are exclusively about causing side effects. There is a slight philosophical mismatch to begin with.
One of the main benefits of immutable data is that you don't have to deal with control structures to avoid concurrent modification. However, JavaFX's event queue implements a very strict single-threaded approach with an implicit control of read and write access. On the other hand, user interface components fit the idea of mutable objects better than most other fields of programming. The node structure is, after all, an inherent hierarchy of stateful components.
Considering this, I think trying to force a functional and immutable paradigm on JavaFX is not going to work out. Instead, I would recommend building a translation layer based on keypath selections - e.g. binding a Label that displays an (immutable) Person's name to the Person itself, not to a name property, and having a resolver handle the access to the name attribute. Basically, this would mean a combination of Bindings#select and a JavaBeanStringProperty.
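A minimal sketch of that translation layer, under the assumption that the immutable Person lives in a single mutable ObjectProperty cell (note that Bindings.select resolves plain JavaBeans getters since JavaFX 8; on JavaFX 2 you would wrap the bean with JavaBeanStringPropertyBuilder instead):

    import javafx.beans.binding.Bindings;
    import javafx.beans.property.ObjectProperty;
    import javafx.beans.property.SimpleObjectProperty;
    import javafx.scene.control.Label;

    // Immutable domain object, as the functional Scala side would produce it.
    // (Classes used in select bindings must be public for reflection.)
    public final class Person {
        private final String name;
        public Person(String name) { this.name = name; }
        public String getName() { return name; }
    }

    class PersonView {
        // The only mutable cell: which Person is currently displayed.
        private final ObjectProperty<Person> person = new SimpleObjectProperty<>();
        private final Label nameLabel = new Label();

        PersonView() {
            // Bind to the Person, not to a (nonexistent) mutable name property.
            nameLabel.textProperty().bind(Bindings.selectString(person, "name"));
        }

        // Application logic swaps in a whole new immutable value;
        // the label updates because the keypath is re-resolved.
        void show(Person newPerson) { person.set(newPerson); }
    }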
As we know, in OOP an interface provides a set of operations without implementation, but a class is the opposite.
In object-oriented design with UML, the interface has a set of operations without implementation, and the class also has a set of operations without implementation (I know the class has attributes in addition to its operations).
So, what is the difference in UML?
As we know, in OOP an interface provides a set of operations without implementation, but a class is the opposite.
Not quite true - abstract classes are classes that have one or more methods declared but not defined (in C++ and Java these are abstract methods). You can have a class with all of its methods abstract - in which case there is a close similarity with an interface.
One key idea in UML, though, is that an interface is a set of methods exposed to other classes or components. The purpose is to define a set of operations.
However, moving to programming, a method may be made abstract to aid development (e.g. by ensuring all subclasses have an implementation). This method might be purely internal to the class.
One last observation: the terms interface and class in UML are not quite synonymous with interface and class in a language such as Java. For example, Java does not allow multiple class inheritance; instead, Java has the interface, which allows a class to implement multiple types (not classes - a subtle difference).
EDIT
A quick note on technical words:
Declare: Stating to the system that a variable or operation exists and its type or signature
Define: Same as declaring, but additionally providing a complete implementation of a variable or operation
Interface: A set of declarations of operations
Type: An object's interface(s) and other operations
Class: An object's class defines (not declares) how the object is implemented, including its internal state and the implementation of its operations
Define is to Declare as Class is to Type.
(see What is the difference between Type and Class?)
The purpose of the interface is to define a set of operations, but don't we do the same for a class, which also defines a set of operations?
So the purpose of the interface is to declare (not define) a set of public operations that other objects want to use. A class (in UML) is the complete set of operations (public and private). A class (in Java, C++, etc.) additionally defines all non-abstract operations.
So the key is the intent: When other components of the system want to use a set of operations, use interface. When you're using UML to describe an implementation (of a component, algorithm, etc.) use class.
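In code (Java, since the answer uses Java terms; the names are hypothetical), the declare/define split looks like this:

    // The interface declares the operations other components rely on.
    interface Shape {
        double area();    // declared: signature only, no body
    }

    // The class defines them: complete implementations plus internal state.
    class Circle implements Shape {
        private final double radius;   // internal state, hidden from clients

        Circle(double radius) { this.radius = radius; }

        @Override
        public double area() {         // defined: full implementation
            return Math.PI * radius * radius;
        }
    }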
When I go to the class that is assumed to implement those operations, I can't see any implementation of them - no diagram describes those operations, and nothing gives any sign of an implementation?
A UML tool is for modelling, and so deliberately avoids providing a place where you enter operation definitions - that is left for later. The idea is that you:
Define the model in UML
Use the UML tool to generate code in the target language
(And some allow you to import your code back into the tool to modify the model with any changes made during implementation. This is called "round-trip" modelling, something the old TogetherJ product excelled at.)
This deliberate gap (you might say deficiency) means that 'define' vs. 'declare' in UML is meaningless. Sorry.
Perhaps you've just seen models created to give an overview rather than to model the system fully, but you can model the behaviour of a class's operations in most UML tools, and some tools model the behaviour in sufficient detail that it can be executed.
The behaviour associated with an operation can be modelled using UML state machines, using UML action semantics or in several other ways. Quite often this is left out of the model - it is not always useful to go to that level of detail, so the implementations may just be hinted at in the documentation associated with the operation. But concrete classes in UML definitely have concrete behaviours associated with their operations, so the difference between UML and programming is that UML focuses on behaviour rather than implementation.
According to Wikipedia -
Unified Modeling Language (UML) is a standardized general-purpose
modeling language in the field of object-oriented software
engineering. The Unified Modeling Language includes a set of graphic
notation techniques to create visual models of object-oriented
software-intensive systems.
So, most important thing is UML is general-purpose and graphical. It is not only about classes and interfaces.
UML offers a standard way to visualize a system's architectural blueprints.
Software construction needs a plan. Structure diagrams, behaviour diagrams, and interaction diagrams help you visualise a system in multiple dimensions and levels of detail, which is appropriate for both new and legacy systems.
Being unified and universal, UML accommodates parallel development of large systems.
When I think of UML, one term that comes to mind is software quality. One thing that has plagued the software industry in recent years is poor software design. While the industry has done fairly well for the last decade, the impact of globalization is changing the ways in which software is designed.
Hoping a grandmaster can shed some light. The very high-level overview is that I am no beginner to coding, but still new to OOP. This set of message classes is at the heart of a large simulation application we're writing, and I don't want to design it stupidly - this interface cuts the application in half, from sequencer to executor and vice versa.
My question is whether or not it's a bad idea to have an inheritance hierarchy this deep (the diagram is not yet fleshed out; it might go 5 or 6 levels deep in the end), as opposed to having some of the child classes hold a directed association to their parent class instead of inheriting from it.
I've read that a deep inheritance hierarchy is not a good idea, and that if a child class inherits simply to have the parent's data, then you should instead include the parent as data in the child, but I'm having a hard time wrapping my head around why. What bad thing is going to happen to us if I decide to make an inheritance hierarchy 7 deep or something like that? Clearly there's a small performance hit, and changing things at the top of the hierarchy will have huge ripples throughout the app, but other than that I don't see an issue. As an aside, I care little about minor differences in performance.
(bonus question: Is there an off-the-shelf package that handles this kind of stuff? We have most of the low level physical simulations handled, but the sequencing program we're going to have to write. I just have this suspicion that what I've laid out is very similar to what about 10,000 simulation developers before me did.)
(bonus question #2: any masters of both simulation systems and OOP programming, that would not hate living in Los Angeles? We're hiring.)
that if a child class is inheriting simply to have the parent's data
This is a bad idea. There's this understanding that you define base classes as the most generic of contracts that a set of (concrete) classes are going to honor. This typically means that your contract is about behavior and not implementation.
What bad thing is going to happen to us if I decided to make an inheritance hierarchy 7-deep or something like that?
The major issues here are mundane:
Fragile base classes (changes to base are a nightmare for the derived)
Increased coupling (with too many base classes comes tight coupling)
Encapsulation weakens
Testing issues (leaf-level overridden methods can't always be tested in a way that reproduces end-user behavior correctly, due to chained calls up and down the hierarchy)
Maintenance burden (which follows from the strong coupling)
(You may want to peruse this paper on Why Ada isn't popular, particularly Item 6, paragraph 6.)
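A small sketch of the first two issues (hypothetical message types, in Java): a change near the root ripples through every level below it.

    // Level 1: every class below depends on this signature.
    abstract class Message {
        abstract byte[] encode();
    }

    // Level 2: couples to Message; deeper levels couple to both.
    abstract class TimestampedMessage extends Message {
        long timestampMillis;
    }

    // Level 3: a leaf that reaches through two levels of state.
    class PositionUpdate extends TimestampedMessage {
        double x, y, z;

        @Override
        byte[] encode() {
            // Depends on the layout of BOTH ancestors; if Message.encode()
            // ever gains a parameter, every leaf in a 7-deep tree must change.
            return new byte[0];  // placeholder body for the sketch
        }
    }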
Is there an off-the-shelf package that handles this kind of stuff?
I'm not sure what you are looking for, but if you're looking for an automated hierarchy simplifier then I don't know of any. Also if such a package exists it'll be highly dependent on your language of choice and you haven't mentioned one.
Note that most of the time such issues can be resolved by looking at alternatives like aggregation, traits, or dependency injection. These are design-time issues and are typically (IMO) best ironed out on a whiteboard rather than with a compiler and millions of LOC.
Seeing this question quite late, but I have had many thoughts on this and have been bitten by deep inheritance hierarchies. One reason they are bad is that you will inevitably get the classification wrong as you specialize the many subclasses. However, once you have the class structure in place, it will be hard to change, because doing so would break client code.
I blogged about this here.
Old question, but active issue in software development and wanted to add an opinion that may help.
Maintenance overhead can't be estimated when you touch base classes that are wired in through dependency injection. This is a major drawback that recently affected our three-level-deep inheritance structure.
Also, if your base is a template, expect to violate the I and D of SOLID if you have too many children hanging off just one derived parent across three levels, for example.
Generally, just to access data, I'd choose an adequate design pattern, or pass a pointer to the data if that doesn't violate SOLID. Depending on whether you just read or also write, I'd avoid getters and setters, to avoid quasi-classes. It's a rare case in which 'protected' inheritance is the right default for children, and I think a structure that relies on it is a candidate for being flawed by design.
I have a program I wrote in C++ that does not use classes - just functions and main().
I want to know what kinds of diagrams I can draw for it. Here are the ones I have in mind:
Activity diagram
Sequence diagram
class diagram - obviously I can't draw this because I didn't use classes
system architecture
state diagrams
use case diagrams
Out of these, which can I draw?
I'm guessing from the diagrams that you are aligned to a Unified Process-style approach.
IMHO:
Use Case - definitely - eliciting business and system level requirements is implementation agnostic
System Architecture - definitely - layers, processes, network, db and modules / packages.
Activity Diagrams - definitely - use this to describe process flows for key processes
State Diagrams - applicable, although usually associated with the state and lifespan of a single object instance, but it is still conceptually useful if you are maintaining state through other means
Sequence Diagram - applicable, although you will probably need to provide an arbitrary class name to attach your functions to (if you are using namespaces, then possibly aggregate to these instead?)
However, you might hit issues if you want to generate and round-trip code from your diagrams e.g. from a CASE tool such as Rational Rose - most will assume an OO implementation language (noting that the three Amigos are strongly associated with OO!)
I guess this begs the question as to why you would need to develop a procedural app using an OO language and document it with OO techniques?
HTH
As a generalization: you can use all the diagram types except class diagrams. If you could draw a class diagram, you should ask yourself why you did not use classes in this case.
For all other diagrams, you can use parts of your app as "actors" or "components". It seems like you have not yet researched what these diagrams are used for and what you express with them. If you do that, you should be able to determine which diagrams make sense for you.
Remember that each diagram should have a purpose. If you do it just for the sake of being there, then don't do it.