How can I know if class A is an interface, an abstract class or a concrete class (super class)?
According to answers, there are no direct instances of A so I assume that is an abstract class.
However, in this second image:
B should also be abstract if the first theory is right... but it can't be, because the last answer says there are direct instances of class B.
If A were abstract in image1, it would be shown with its name in italics and/or the string {abstract} next to it. That is not the case here, therefore A can have direct instances. I guess there is a mistake in image1.
Please note that even if B in image2 were abstract, it would still be meaningful to specify an instance of B. An instance specification is not an instance, and as such it can be incomplete and abstract; an object, in contrast, has complete features and a concrete class. For example, I could have a red Basketball. In the model I might have an instance specification classified by Ball {abstract} and without a slot for the color, because I don't care about its type and color. Any instance of Basketball or Handball will then fit this instance specification.
As per UML specification, Section 9.2.3.2:
The isAbstract property of Classifier, when true, specifies that the Classifier is abstract, i.e., has no direct instances: every instance of the abstract Classifier shall be an instance of one of its specializations.
The notation is described a bit further, in Section 9.2.4.1:
The name of an abstract Classifier is shown in italics, where permitted by the font in use. Alternatively or in addition, an abstract Classifier may be shown using the textual annotation {abstract} after or below its name.
Neither of the two is indicated in the first diagram, so the answer is simply erroneous.
Note that one more indirect indication of an abstract class (not directly mentioned, it just follows from the general description) could be the use of a Generalization Set. There are a couple of notations used here; you can read about them in Section 9.7.4 (the entire Section 9.7 is about Generalization Sets). This notation isn't used either, so there is still nothing to indicate that class A is abstract.
First diagram
You cannot deduce from the diagram whether A is an interface, an abstract class or a concrete class:
A could be a concrete class that is further specialized by B and C
A could be an abstract class, with B and C being abstract or concrete specializations. One would expect A to be in italics or followed by an {abstract} adornment, but these are not mandatory.
A could even be an interface under some circumstances. In that case, B and C would be specialized interfaces. This is however unlikely, because the «interface» keyword would be expected above A. This notation was not mandatory in earlier UML 2 versions, but the current UML 2.5 requires it (see Axel's comment).
So if the UML notation were used with all possible accuracy, A would be a concrete class, but you objectively cannot be 100% certain.
Important note: the provided answer claiming that "there is no instance for A" is hearsay. No element in the diagram allows one to draw this conclusion.
Second diagram
We have seen that the answers to the first question are flawed; likewise, B is not necessarily an abstract class.
Important revelation: you need to know that b : B is possible even if B were abstract, because in an object diagram you may arbitrarily choose to show membership of one class, even if the object is more specialized:
UML 2.5 - Section 9.8.3: An InstanceSpecification represents the possible or actual existence of instances in a modeled system and completely or partially describes those instances.
In case of doubt, a few lines later, you'll read:
The InstanceSpecification may represent: - Classification of the instance by one or more Classifiers, any of which may be abstract.
Keeping this in mind, the answers to the second diagram are all correct, whether B is abstract or not.
Related
In the UML specification there are plenty of occurrences of the word "redefine", but not a single mention of what redefinition means. Perhaps it's too simple? Anyway, if someone could explain it even more simply, that'd be just great.
Snapshot from UML 2.5.1, Intervals:
I found a clue to what it does using a modelling tool (Sparx Enterprise Architect). If I have an interface that is a sub-class of another interface, I get the option to "redefine" operations and attributes of that interface.
I made a wild guess at what it might be used for and redefined an operation with more parameters. The extra parameter represented the "number of output arguments" added by the Matlab compiler when compiling a Matlab function to a C# library. Then I went ahead and made another sub-class for CLI and redefined the arguments accordingly (int return value, all inputs are strings).
The UML 2.5.1 defines redefinition in section 9.2.3.3 (page 100):
Any member (that is a kind of RedefinableElement) of a generalization of a specializing Classifier may be redefined instead of being inherited. Redefinition is done in order to augment, constrain, or override the redefined member(s) in the context of instances of the specializing Classifier.
For a feature such as an attribute, a property, or an operation:
Feature redefinitions may either be explicitly notated with the use of a {redefines <x>} property string on the Feature or implicitly by having a Feature which cannot be distinguished using isDistinguishableFrom() from another Feature in one of the owning Classifier’s more general Classifiers.
Suppose for example that you have a class Node with two attributes: from: Node[*] and to: Node[*]. You could then have a specialization FamilyMember (a node in your genealogy) and redefine the members: parent : FamilyMember[*] {redefines from} and child : FamilyMember[*] {redefines to}
Another example: you have a polymorphic class Shape with an abstract operation draw(). You can specialize that class into a class Circle that will have its own draw(). You could leave the redefinition implicit (just mentioning the operation), or you could be very explicit with draw() {redefines draw()}.
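For illustration, both examples might be sketched in Scala, where UML redefinition roughly corresponds to overriding; the class and member names follow the text, but the mapping itself is my analogy, not the UML semantics:

```scala
// UML "redefines" sketched as Scala overriding (an analogy, not the UML semantics).
abstract class Shape {
  def draw(): String                 // abstract polymorphic operation
}

class Circle extends Shape {
  // implicit redefinition: same signature as the inherited operation
  override def draw(): String = "circle"
}

// Attribute redefinition with narrowed types, as in the Node example:
class Node(val from: Seq[Node], val to: Seq[Node])

class FamilyMember(val parent: Seq[FamilyMember], val child: Seq[FamilyMember])
  extends Node(parent, child)        // parent redefines from, child redefines to
```

Because Seq is covariant, a Seq[FamilyMember] is accepted where the superclass expects a Seq[Node], which is what makes the narrowing type-safe.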
The abstract syntax diagrams apply UML to the UML metamodel itself. Redefinition there has the same meaning, but it is explained more briefly in Section 6.
Let's take an example from your diagram: IntervalConstraint.
IntervalConstraint inherits from Constraint. A Constraint is composed of a property specification: ValueSpecification (page 36), so IntervalConstraint inherits this property.
Your diagram shows that IntervalConstraint is composed of a property specification: Interval that redefines the more general specification of Constraint. It is a redefinition because it narrows down the type (fortunately, Interval inherits from ValueSpecification, so there is no risk of inconsistency).
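This type-narrowing can be mimicked in Scala with a covariant override; a loose analogy, with the class names taken from the diagram:

```scala
// Narrowing a redefined property's type, as in IntervalConstraint::specification.
class ValueSpecification
class Interval extends ValueSpecification

abstract class Constraint {
  def specification: ValueSpecification    // the general property
}

class IntervalConstraint(iv: Interval) extends Constraint {
  // narrows the declared type from ValueSpecification down to Interval
  override def specification: Interval = iv
}
```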
In OOP it is good practice to talk to interfaces not to implementations. So, e.g., you write something like this (by Seq I mean scala.collection.immutable.Seq :)):
// talk to the interface - good OOP practice
def doSomething[A](xs: Seq[A]) = ???
not something like the following:
// talk to the implementation - bad OOP practice
def doSomething[A](xs: List[A]) = ???
However, in pure functional programming languages, such as Haskell, you don't have subtype polymorphism and use, instead, ad hoc polymorphism through type classes. So, for example, you have the list data type and a monadic instance for list. You don't need to worry about using an interface/abstract class because you don't have such a concept.
In hybrid languages, such as Scala, you have both type classes (through a pattern, actually, and not first-class citizens as in Haskell, but I digress) and subtype polymorphism. In scalaz, cats and so on you have monadic instances for concrete types, not for the abstract ones, of course.
Finally, the question: given this hybridism of Scala, do you still respect the OOP rule of talking to interfaces, or do you just talk to concrete types to take advantage of functors, monads and so on directly, without having to convert to a concrete type whenever you need them? Put differently, is it still good practice in Scala to talk to interfaces even if you want to embrace FP instead of OOP? If not, what if you chose to use List and, later on, realized that a Vector would have been a better choice?
P.S.: In my examples I used a simple method, but the same reasoning applies to user defined types. E.g.:
case class Foo(bars: Seq[Bar], ...)
What I would attack here is your "concrete vs. interface" concept. Look at it this way: every type has an interface, in the general sense of the term "interface." A "concrete" type is just a limiting case.
So let's look at Haskell lists from this angle. What's the interface of a list? Well, lists are an algebraic data type, and all such data types have the same general form of interface and contract:
You can construct instances of the type using its constructors according to their arities and argument types;
You can observe instances of the type by matching against their constructors according to their arities and argument types;
Construction and observation are inverses—when you pattern match against a value, what you get out is exactly what was put into it.
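The construct/observe round trip can be sketched with a tiny made-up list ADT (all names here are mine, for illustration):

```scala
// A minimal ADT: constructors build values, pattern matching observes them.
sealed trait MyList[+A]
case object MNil extends MyList[Nothing]
case class MCons[A](head: A, tail: MyList[A]) extends MyList[A]

// Observation is the inverse of construction: we get back exactly what we put in.
def uncons[A](xs: MyList[A]): Option[(A, MyList[A])] = xs match {
  case MNil        => None
  case MCons(h, t) => Some((h, t))
}
```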
If you look at it in these terms, I think the following rule works pretty well in either paradigm:
Choose types whose interfaces and contracts match exactly with your requirements.
If their contracts are weaker than your requirements, then they won't maintain invariants that you need;
If their contracts are stronger than your requirements, you may unintentionally couple yourself to the "extra" details and limit your ability to change the program later on.
So you no longer ask whether a type is "concrete" or "abstract"—just whether it fits your requirements.
These are my two cents on this subject. In Haskell you have algebraic data types (ADTs). You have both lists (linked lists) and vectors (int-indexed arrays), but they don't share a common supertype. If your function takes a list, you cannot pass it a vector.
In Scala, being a hybrid OOP-FP language, you have subtype polymorphism too, so you may not care whether the client code passes a List or a Vector: just require a Seq (possibly immutable) and you're done.
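A quick sketch of that flexibility (`total` is a made-up example function):

```scala
import scala.collection.immutable.Seq

// Written against the interface, so any immutable Seq fits.
def total(xs: Seq[Int]): Int = xs.sum
```

Both `total(List(1, 2, 3))` and `total(Vector(1, 2, 3))` compile and give the same result, because List and Vector are both subtypes of immutable Seq.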
I guess that to answer this question you have to ask yourself another one: "Do I want to embrace FP in toto?" If the answer is yes, then you shouldn't use Seq or any other abstract superclass in the OOP sense. Of course, the exception to this rule is the use of a trait/abstract class when defining ADTs in Scala. For example:
sealed trait Tree[+A]
case object Empty extends Tree[Nothing]
case class Node[A](value: A, left: Tree[A], right: Tree[A]) extends Tree[A]
In this case one would require Tree[A] as the type, of course, and then use, e.g., pattern matching to determine whether it's Empty or Node[A].
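For example, a function over this Tree by structural recursion (size is my example name):

```scala
sealed trait Tree[+A]
case object Empty extends Tree[Nothing]
case class Node[A](value: A, left: Tree[A], right: Tree[A]) extends Tree[A]

// Count the stored values by matching on the two constructors.
def size[A](t: Tree[A]): Int = t match {
  case Empty         => 0
  case Node(_, l, r) => 1 + size(l) + size(r)
}
```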
I guess my feeling about this subject is confirmed by the red book (Functional Programming in Scala). There they never use Seq, but List, Vector and so on. Also, Haskellers don't care about these problems: they use lists whenever they need linked-list semantics and vectors whenever they need int-indexed-array semantics.
If, on the other hand, you want to embrace OOP and use Scala as a better Java, then OK, you should follow the OOP best practice of talking to interfaces, not implementations.
If you're thinking: "I'd rather opt for mostly functional" then you should read Erik Meijer's The Curse of the Excluded Middle.
Case classes are supposed to be algebraic data types, therefore some people are against adding methods to a case class.
Can somebody please give an example for why it's a bad idea?
This is one of those questions that leads to more questions.
Following is my take on this.
Let's see what happens when a case class is defined. The Scala compiler does the following:
Creates a class and its companion object.
Implements the apply method that you can use as a factory. This lets you create instances of the class without the new keyword.
Prefixes all arguments in the parameter list with val, i.e., makes them immutable.
Adds implementations of hashCode, equals and toString
Implements the unapply method, so a case class supports pattern matching. This is important when you define an algebraic data type.
Generates accessors for fields. Note that it does not generate "mutators"
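Most of these points can be observed directly; Point is a made-up example (the generated copy method is shown too, although it is not listed above):

```scala
case class Point(x: Int, y: Int)

val p = Point(1, 2)            // apply: no `new` needed
val q = p.copy(y = 3)          // the generated copy method
assert(p == Point(1, 2))       // structural equals/hashCode
assert(p.toString == "Point(1,2)")

// unapply enables pattern matching:
val label = p match {
  case Point(0, 0) => "origin"
  case Point(a, _) => s"x=$a"
}
```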
Now, as we can see, case classes are not exact peers of Java Beans.
Case classes tend to represent a datatype more than they represent an entity.
I look at them as good friends of programmers, in that they cut down on the boilerplate of endless getters, equals and hashCode overrides, etc.
Now, coming to the question:
If you look at it from a functional programming standpoint, then case classes are the way to go, since you would be looking at immutability and equality, and you are sure that the case class represents a data structure. This is why people programming in FP often say to use them for ADTs.
If your case class has logic that works on the class's state, then that makes it a bad choice for functional programming.
I prefer to use case classes for scenarios where I am sure that I need a class to represent a data structure, because that's where I get the help of the auto-generated methods and the added advantage of pattern matching. When I program in an OO way, with side effects and mutable state, I use a class.
Having said that, there could still be scenarios where you have a case class with utility methods. I just think those chances are low.
I'm using Visual Paradigm for UML
I'm drawing a class diagram and I want to mark a struct (instead of a class). There is no such thing there, but instead I found something called <<primitive>>.
What is it?
Is it a dumb-data-holder?
Kind regards
The «primitive» stereotype means the type has no internal structure, and is defined externally to UML. An integer would be a primitive; its operations are defined by the implementation language.
The «datatype» stereotype is analogous to a C# struct or a value type - instances of the type may have internal structure, but do not have identity and are considered equal if the values of all their properties are equal. A complex number, with real and imaginary parts, would be a data type.
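In Scala terms (my analogy, not part of the UML spec), a «datatype» maps naturally onto a case class with structural equality, while an entity-like class keeps reference identity:

```scala
// Value semantics: equal iff all parts are equal, no identity of its own.
case class Complex(re: Double, im: Double)

// Reference semantics: two distinct instances are never equal by default.
class Account(val id: Int)
```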
In Martin Odersky's recent post about levels of programmer ability in Scala, in the Expert library designer section, he includes the term "early initializers".
These are not mentioned in Programming in Scala. What are they?
Early initializers are part of the constructor of a subclass that is intended to run before its superclass's constructor. For example:
abstract class X {
  val name: String
  val size = name.size
}

class Y extends {
  val name = "class Y"
} with X
If the code were written instead as
class Z extends X {
  val name = "class Z"
}
then a NullPointerException would occur when Z was initialized, because size is initialized before name in the normal initialization order (superclass before subclass).
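A common way to avoid the problem without early initializers is to defer the evaluation with a lazy val (or a def); this is a sketch using renamed classes, and note that early initializers were eventually dropped from the language in Scala 3:

```scala
abstract class X2 {
  val name: String
  lazy val size = name.size   // evaluated on first access, after name is set
}

class Z2 extends X2 {
  val name = "class Z2"
}
```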
As far as I can tell, the motivation (as given in the link above) is:
"Naturally when a val is overridden, it is not initialized more than once. So though x2 in the above example is seemingly defined at every point, this is not the case: an overridden val will appear to be null during the construction of superclasses, as will an abstract val."
I don't see why this is natural at all. It is completely possible that the r.h.s. of an assignment has a side effect. Note that such a code structure is completely impossible in either C++ or Java (and I would guess Smalltalk, although I can't speak for that language). In fact, you have to make such dual assignments implicit...ticilpmi...EXplicit in those languages via constructors. In light of the r.h.s. side-effect uncertainty, it really doesn't seem like much of a motivation at all: the ability to sidestep superclass side effects (thereby voiding superclass invariants) via ASSIGNMENT? Ick!
Are there other "killer" motivations for allowing such an unsafe code structure? Object-oriented languages have done without such a mechanism for about 40 years (30-odd years if you count from the creation of the language), so why include it now?
It...just...seems...dangerous.
On second thought, a year later...
This is just cake. Literally.
Not an early ANYTHING. Just cake (mixins).
Cake is a term/pattern coined by The Grand Pooh-bah himself, one that employs Scala's trait system, which is halfway between a class and an interface. It is far better than Java's decorator pattern.
The so-called "interface" is merely an unnamed base class, and what used to be the base class acts as a trait (which I frankly did not know could be done). It is unclear to me whether a "with'd" class can take arguments (traits can't); I will try it and report back.
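A minimal sketch of mixing a trait in at instantiation (all names made up):

```scala
trait Greeter {
  def name: String                      // abstract: supplied by the mixed-in class
  def greet: String = s"hello, $name"
}

class Person(val name: String)

// the trait is mixed in when the object is created:
val ada = new Person("Ada") with Greeter
```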
This question and its answers have stepped into one of Scala's coolest features. Read up on it and be in awe.