I have a class defined like:
trait BigService {
  def A
  def D
  def E
  /* etc */
}

class BigServiceImpl(...) extends BigService {
  def A = _
  private def B = _ // uses func A, and BigService's parameters
  private def C = _ // uses func B, and BigService's parameters
  def D = _ // uses func C, and BigService's parameters
  /* other members */
}
I'd like to move the private members into a separate file, but the problem is that they all depend on each other and the parameters/other members of the large class.
Is there any way to separate the class into multiple parts?
I'm sure it's possible to split and refactor any code, but the question is very abstract, so it's hard to offer specific advice. Here are a few principles you can apply.
Composition vs. inheritance. Let's say your BigService is the actual business service class that uses some Database, WebService/API, etc. You can use composition to pass these components to your service instead of inheriting them all from different classes.
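A minimal sketch of the composition approach, with hypothetical Database and ApiClient collaborators injected through the constructor (the trait and method names from your question are left out for brevity):

// hypothetical stand-ins for the real components
class Database { def query(sql: String): List[String] = Nil }
class ApiClient { def fetch(url: String): String = "" }

// the service receives its components instead of inheriting from them
class BigServiceImpl(db: Database, api: ApiClient) {
  def loadUsers(): List[String] = db.query("select name from users")
  def status(): String          = api.fetch("/status")
}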
Consider reversing function dependencies. Usually public methods depend on private methods, not the other way around. This will probably be the key for you: extract many small methods that each have a single purpose, and then you'll be able to see how to move them out, together with their dependencies, into a different class. Tangentially, keep in mind Inversion of Control.
Use higher-order functions to decouple behavior from implementation. If private def B uses function A and some class members, you can express it as a HOF: def B(functionA: (...) => ..., someArg: T) or similar.
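A minimal sketch of that idea (the names and signatures are made up for illustration): B no longer reaches into the class; the function and the parameter it needs are passed in explicitly.

object HofSketch {
  // stands in for the original def A
  def A(i: Int): String = s"value-$i"

  // B receives A and its argument instead of depending on class state
  def B(functionA: Int => String, someArg: Int): String =
    functionA(someArg).toUpperCase

  def main(args: Array[String]): Unit =
    println(B(A, 42)) // prints VALUE-42
}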
Scala gives you the power of both OOP and FP, so you can leverage both to refactor. The key is to understand the underlying function types: what depends on what. You reduce dependencies significantly if you make functions pure, single-purpose, and generic.
The Cake Pattern is a debatable approach, but it might fit your needs: you mix in implementations of all those functions and keep the dependencies abstract in the base traits.
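A minimal cake-pattern sketch, with hypothetical component names, showing how the helpers can live in their own trait (and file) while the service only declares that it needs them:

// helpers stay abstract in the base trait...
trait HelpersComponent {
  def B(s: String): String
  def C(s: String): String
}

// ...and get a concrete implementation in a separate trait (and file)
trait HelpersComponentImpl extends HelpersComponent {
  def B(s: String): String = s.trim
  def C(s: String): String = B(s).toLowerCase
}

// the service only requires that some HelpersComponent is mixed in
trait ServiceComponent { this: HelpersComponent =>
  def D(s: String): String = C(s) + "!"
}

object App extends ServiceComponent with HelpersComponentImpl {
  def main(args: Array[String]): Unit =
    println(D("  Hello ")) // prints hello!
}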
I could go on and on, but the main point is to try to structure your code in a more linear fashion, where dependencies only point one way without forming a cycle. I.e. a dependency graph A -> B -> C is better than A -> C, C -> B, B -> A.
Swift allows generic methods to be overloaded by the constraints placed upon the generic types passed in. If you call such a method with a concrete type, that type participates in the overload resolution and the constraints are checked against it.
As soon as a generic method delegates to another generic method, however, the constraints can no longer be inferred from the concrete type; overload resolution uses only the constraints already placed on the type parameter from above.
protocol Conformance {}
extension String : Conformance {}

// #1
func baseMethod<T>(_ value: T) {
    let isConforming = T.self is Conformance.Type
}

// #2
func baseMethod<T>(_ value: T) where T : Conformance {
    let isConforming = T.self is Conformance.Type
}

func delegatingMethod<T>(_ value: T) {
    baseMethod(value)
}

func run() {
    // Calls #2, isConforming = true
    baseMethod(String())

    // Calls #1, isConforming = true
    delegatingMethod(String())
}
I assume this is there so that you have sufficient type information from the call site about what constraints are applicable no matter where the generic type is used, but it seems to severely and artificially limit the utility of overloading by constraint.
Are there any known workarounds to this oddity? Something that emulates this would be extremely useful.
Swift allows generic methods to be overloaded by the constraints placed upon the generic types passed in.
Yes...but be very clear that this is a static overload, not a dynamic override. It is based on types that can be proven at compile-time.
func delegatingMethod<T>(_ value: T) {
    baseMethod(value)
}
We're compiling this now, and we need to write it as a concrete, static function call, possibly inlined, into the binary. What do we know about T? We know nothing about T, so any where clause will fail.
We don't even know how this function is called, because the call may come from another compilation unit or module. In principle it could have different semantics based on access level, such that one version is used when the function is private and all calls can be evaluated, and another when it's public; but that would be a really horrible source of bugs.
What you're asking for is that delegatingMethod defer its decision about what function call to make until runtime. That's not how generics work. Moreover, you're asking that all the where clauses be encoded somewhere in the binary so that they can be evaluated at runtime. Also not how generics work. That would require a much more dynamic dispatch system than Swift wants to implement. It's not impossible; it's just a completely different animal, and prevents lots of optimizations.
This feels like you're trying to reinvent class inheritance with protocols and generics. You can't. They're different solutions and have different features. Class inheritance is fundamentally dynamic. Protocols and generics are fundamentally static. If you want dynamic dispatch based on specific types, use classes.
I created a class extending scala.Immutable:
class SomeThing(var string: String) extends Immutable {
  override def toString: String = string
}
I expected the Scala compiler to help me prevent changing the state of SomeThing. But when I run this test:
"Test change state of immutable interface" should "not allow" in {
val someThing = new SomeThing("hello")
someThing.string = "hello 1"
println(someThing)
}
The result is hello 1, and the Scala compiler doesn't emit any warning or error.
Why did they add the Immutable trait if it doesn't help us prevent objects from being mutated?
There are several aspects to this question.
1. A simple one is that the Scala compiler can't really ensure immutability, for various reasons. For example, the main target platform, the JVM, allows modifying even final fields using reflection. Another reason it is not enforceable is code like this:
/////////////////////////////////////////
//// library v1
package library
class LibraryData(val value:Int)
/////////////////////////////////////////
//// code that uses the library
package app
class UserData(val data:LibraryData) extends Immutable
/////////////////////////////////////////
//// library v2
package library
class LibraryData(var value:Int) //now change it to var!
Since the "library" is compiled independently of the "app" and doesn't even know about existence of the "app" there is no point in time where compiler can catch the broken contract.
2. The more fundamental misunderstanding you seem to have is about what a trait does. In this context a trait (or "interface" in some other languages) represents a contract between the implementation and the user code about how the implementation can and should behave. However, not every kind of contract can be represented as a trait (at least without making the code super-complicated). For example, for a mutable collection there is a contract that size should return the number of times add (or +=) has been called, but there is no way to represent such a contract as a trait beyond declaring that there are methods size and += with the corresponding signatures. And for most contracts there is no way to force an implementation to actually follow them: an implementation of size that always returns 0 matches the type but clearly breaks the contract.
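To make that last point concrete, here is a minimal Scala sketch with a hypothetical Counter trait; the type checker accepts the implementation even though the informal contract is broken.

// the trait can only express the signatures, not the informal contract
trait Counter {
  def add(x: Int): Unit // contract (informal): remember the element
  def size: Int         // contract (informal): number of elements added so far
}

// compiles fine against Counter, yet clearly breaks the contract
class BrokenCounter extends Counter {
  def add(x: Int): Unit = () // silently drops the element
  def size: Int = 0          // always 0
}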
Similarly, the Immutable doc says:
A marker trait for all immutable data structures such as immutable collections.
So it is just a marker trait, which is one of the ways to work around contracts that can't really be represented as types. Whatever mixes in that trait claims to be an immutable object. Your code makes that claim but clearly breaks the contract, so technically it is your fault for not respecting it.
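If you want the compiler itself to reject the mutation, the field has to be a val rather than a var; the Immutable marker changes nothing here. A minimal sketch:

class SomeThing(val string: String) extends Immutable {
  override def toString: String = string
}

val someThing = new SomeThing("hello")
// someThing.string = "hello 1" // does not compile: reassignment to val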
Imagine this:
val myObject = if (someCondition) {
  new Whatever with Trait1
} else if (otherCondition) {
  new Whatever with Trait2 with Trait3 with Trait4
} else {
  new Whatever with Trait5
}
Is the myObject object "composed" at runtime, or is the Scala compiler smart enough to generate the appropriate code at compile time? What kind of performance impact will it have if traits are applied like this in multiple places?
It's composed at compile-time
The traits will be added as interfaces to the resulting type, and any concrete methods from those traits will (usually) be copied to the class in their entirety.
Occasionally, the compiler may have to provide concrete implementations via forwarders to static methods, but this isn't usually the case.
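A minimal sketch you can run to see that the composition happens at compile time (the synthetic class name shown in the comment is just an example; the exact name is compiler-generated):

trait Trait1 { def greet: String = "hi" }
class Whatever

object Demo extends App {
  val myObject = new Whatever with Trait1 // the compiler emits an anonymous subclass here
  println(myObject.getClass.getName)      // something like Demo$$anon$1
  println(myObject.isInstanceOf[Trait1])  // true: Trait1 is a real interface of that class
  println(myObject.greet)                 // the concrete trait method was mixed in statically
}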
Scala will create three anonymous classes, one for each branch of the if/else.
Note: these classes are named according to the order in which they are defined in their enclosing scope, e.g. OuterClass$$anon$1 through OuterClass$$anon$3. Avoid relying on these anonymous classes in any long-term Java serialization, as that locks down the order of anonymous classes in your code.
Other than that, it's an awesome convenience feature!
Can I specify interfaces when I declare a member?
After thinking about this question for a while, it occurred to me that a statically duck-typed language might actually work. Why can't predefined classes be bound to an interface at compile time? Example:
public interface IMyInterface
{
    public void MyMethod();
}

public class MyClass // Does not explicitly implement IMyInterface
{
    public void MyMethod() // But contains a compatible method definition
    {
        Console.WriteLine("Hello, world!");
    }
}

...

public void CallMyMethod(IMyInterface m)
{
    m.MyMethod();
}

...

MyClass obj = new MyClass();
CallMyMethod(obj); // Automatically recognize that MyClass "fits"
                   // IMyInterface, and force a type-cast.
Do you know of any languages that support such a feature? Would it be helpful in Java or C#? Is it fundamentally flawed in some way? I understand you could subclass MyClass and implement the interface or use the Adapter design pattern to accomplish the same thing, but those approaches just seem like unnecessary boilerplate code.
To give a brand new answer to this question: Go has exactly this feature. I think it's really cool and clever (though I'll be interested to see how it plays out in real life), and kudos for thinking of it.
As documented in the official documentation (as part of the Tour of Go, with example code):
Interfaces are implemented implicitly
A type implements an interface by implementing its methods. There is no explicit declaration of intent, no "implements" keyword.
Implicit interfaces decouple the definition of an interface from its implementation, which could then appear in any package without prearrangement.
How about using templates in C++?
#include <iostream>

class IMyInterface // Inheritance from this is optional
{
public:
    virtual void MyMethod() = 0;
};

class MyClass // Does not explicitly implement IMyInterface
{
public:
    void MyMethod() // But contains a compatible method definition
    {
        std::cout << "Hello, world!" << "\n";
    }
};

template<typename MyInterface>
void CallMyMethod(MyInterface& m)
{
    m.MyMethod(); // instantiation succeeds iff MyInterface has MyMethod
}

int main()
{
    MyClass obj;
    CallMyMethod(obj); // Automatically generates code with MyClass as MyInterface
    return 0;
}
I haven't actually compiled this code, but I believe it's workable and a pretty trivial C++-ization of the original proposed (but nonworking) code.
Statically-typed languages, by definition, check types at compile time, not run time. One of the obvious problems with the system described above is that the compiler is going to check types when the program is compiled, not at run time.
Now, you could build more intelligence into the compiler so it could derive types, rather than having the programmer explicitly declare types; the compiler might be able to see that MyClass implements a MyMethod() method, and handle this case accordingly, without the need to explicitly declare interfaces (as you suggest). Such a compiler could utilize type inference, such as Hindley-Milner.
Of course, some statically typed languages like Haskell already do something similar to what you suggest; the Haskell compiler is able to infer types (most of the time) without the need to explicitly declare them. But obviously, Java/C# don't have this ability.
I don't see the point. Why not be explicit that the class implements the interface and have done with it? Implementing the interface is what tells other programmers that this class is supposed to behave in the way that interface defines. Simply having the same name and signature on a method conveys no guarantees that the intent of the designer was to perform similar actions with the method. That may be, but why leave it up for interpretation (and misuse)?
The reason you can "get away" with this successfully in dynamic languages has more to do with TDD than with the language itself. In my opinion, if the language offers the facility to give this sort of guidance to others who use or read the code, you should use it. It actually improves clarity and is worth the few extra characters. In the case where you don't have access to do this, an Adapter serves the same purpose of explicitly declaring how the interface relates to the other class.
F# supports static duck typing, though with a catch: you have to use member constraints. Details are available in this blog entry.
Example from the cited blog:
let inline speak (a: ^a) =
    let x = (^a : (member speak: unit -> string) (a))
    printfn "It said: %s" x
    let y = (^a : (member talk: unit -> string) (a))
    printfn "Then it said %s" y

type duck() =
    member x.speak() = "quack"
    member x.talk() = "quackity quack"

type dog() =
    member x.speak() = "woof"
    member x.talk() = "arrrr"

let x = new duck()
let y = new dog()

speak x
speak y
TypeScript!
Well, OK... so it's a JavaScript superset and maybe doesn't constitute a "language", but this kind of static duck typing is vital in TypeScript.
Most of the languages in the ML family support structural types with inference and constrained type schemes, which is the geeky language-designer terminology for what you most likely mean by "static duck-typing" in the original question.
The more popular languages in this family that spring to mind include: Haskell, Objective Caml, F# and Scala. The one that most closely matches your example, of course, would be Objective Caml. Here's a translation of your example:
open Printf
class type iMyInterface = object
  method myMethod: unit
end

class myClass = object
  method myMethod = printf "Hello, world!"
end

let callMyMethod: #iMyInterface -> unit = fun m -> m#myMethod

let myClass = new myClass
let () = callMyMethod myClass
Note: some of the names you used have to be changed to comply with OCaml's notion of identifier case semantics, but otherwise, this is a pretty straightforward translation.
Also, worth noting, neither the type annotation in the callMyMethod function nor the definition of the iMyInterface class type is strictly necessary. Objective Caml can infer everything in your example without any type declarations at all.
Crystal is a statically duck-typed language. Example:
def add(x, y)
  x + y
end
add(true, false)
The call to add causes this compilation error:
Error in foo.cr:6: instantiating 'add(Bool, Bool)'
add(true, false)
^~~
in foo.cr:2: undefined method '+' for Bool
x + y
^
A pre-release design for Visual Basic 9 had support for static duck typing using dynamic interfaces, but they cut the feature in order to ship on time.
Boo is definitely a statically duck-typed language: http://boo.codehaus.org/Duck+Typing
An excerpt:
Boo is a statically typed language, like Java or C#. This means your boo applications will run about as fast as those coded in other statically typed languages for .NET or Mono. But using a statically typed language sometimes constrains you to an inflexible and verbose coding style, with the sometimes necessary type declarations (like "x as int", but this is not often necessary due to boo's Type Inference) and sometimes necessary type casts (see Casting Types). Boo's support for Type Inference and eventually generics help here, but...
Sometimes it is appropriate to give up the safety net provided by static typing. Maybe you just want to explore an API without worrying too much about method signatures or maybe you're creating code that talks to external components such as COM objects. Either way the choice should be yours not mine.
Along with the normal types like object, int, string... boo has a special type called "duck". The term is inspired by the ruby programming language's duck typing feature ("If it walks like a duck and quacks like a duck, it must be a duck").
New versions of C++ move in the direction of static duck typing. You can some day (today?) write something like this:
auto plus(auto x, auto y) {
    return x + y;
}
and it would fail to compile if there's no matching function call for x+y.
As for your criticism:
A new "CallMyMethod" is created for each different type you pass to it, so it's not really type inference.
But it IS type inference (you can say foo(bar) where foo is a templated function), and has the same effect, except it's more time-efficient and takes more space in the compiled code.
Otherwise, you would have to look up the method during runtime. You'd have to find a name, then check that the name has a method with the right parameters.
Or you would have to store all that information about matching interfaces, and look into every class that matches an interface, then automatically add that interface.
In either case, that allows you to implicitly and accidentally break the class hierarchy, which is bad for a new feature because it goes against what C#/Java programmers are used to. With C++ templates, you already know you're in a minefield (and they're also adding features ("concepts") to allow restrictions on template parameters).
Structural types in Scala do something like this.
See Statically Checked “Duck Typing” in Scala
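A minimal sketch of the Scala structural-type version of your example (structural calls go through reflection, hence the language import; the names are taken from your code):

import scala.language.reflectiveCalls

object StructuralDemo extends App {
  // anything with a parameterless myMethod(): Unit is accepted; no interface declaration needed
  type HasMyMethod = { def myMethod(): Unit }

  class MyClass { // does not implement any interface
    def myMethod(): Unit = println("Hello, world!")
  }

  def callMyMethod(m: HasMyMethod): Unit = m.myMethod()

  callMyMethod(new MyClass) // compiles because MyClass structurally conforms
}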
D (http://dlang.org) is a statically compiled language and provides duck-typing via wrap() and unwrap() (http://dlang.org/phobos-prerelease/std_typecons.html#.unwrap).
Sounds like Mixins or Traits:
http://en.wikipedia.org/wiki/Mixin
http://www.iam.unibe.ch/~scg/Archive/Papers/Scha03aTraits.pdf
The latest version of my programming language Heron supports something similar through a structural-subtyping coercion operator called as. So instead of:
MyClass obj = new MyClass();
CallMyMethod(obj);
You would write:
MyClass obj = new MyClass();
CallMyMethod(obj as IMyInterface);
Just like in your example, in this case MyClass does not have to explicitly implement IMyInterface, but if it did the cast could happen implicitly and the as operator could be omitted.
I wrote a bit more about the technique which I call explicit structural sub-typing in this article.