How is duck typing different from the old 'variant' type and/or interfaces?

I keep seeing the phrase "duck typing" bandied about, and even ran across a code example or two. I am way too lazy, er, busy to do my own research, so can someone tell me, briefly:
the difference between a 'duck type' and an old-skool 'variant type', and
provide an example of where I might prefer duck typing over variant typing, and
provide an example of something that I would have to use duck typing to accomplish?
I don't mean to seem fowl by doubting the power of this 'new' construct, and I'm not ducking the issue by refusing to do the research, but I am quacking up at all the flocking hype I've been seeing about it lately. It looks like no typing (aka dynamic typing) to me, so I'm not seeing the advantages right away.
ADDENDUM: Thanks for the examples so far. It seems to me that using something like 'O->can(Blah)' is equivalent to doing a reflection lookup (which is probably not cheap), and/or is about the same as saying (O is IBlah) which the compiler might be able to check for you, but the latter has the advantage of distinguishing my IBlah interface from your IBlah interface while the other two do not. Granted, having a lot of tiny interfaces floating around for every method would get messy, but then again so can checking for a lot of individual methods...
...so again I'm just not getting it. Is it a fantastic time-saver, or the same old thing in a brand new sack? Where is the example that requires duck typing?

In some of the answers here, I've seen some incorrect use of terminology, which has led people to provide wrong answers.
So, before I give my answer, I'm going to provide a few definitions:
Strongly typed
A language is strongly typed if it enforces the type safety of a program. That means that it guarantees two things: something called progress and something else called preservation. Progress basically means that all "validly typed" programs can in fact be run by the computer. They may crash, throw an exception, or run in an infinite loop, but they can actually be run. Preservation means that if a program is "validly typed" it will always remain "validly typed", and that no variable (or memory location) will contain a value that does not conform to its assigned type.
Most languages have the "progress" property. There are many, however, that don't satisfy the "preservation" property. A good example is C++ (and C too). For example, it is possible in C++ to coerce any memory address to behave as if it was any type. This basically allows programmers to violate the type system any time they want. Here is a simple example:
struct foo
{
    int x;
    int y;
    int z;
};

char * x = new char[100];
foo * pFoo = (foo *)x;  // reinterpret the raw bytes as a foo
foo aRealFoo;
*pFoo = aRealFoo;       // write a foo into the char array
This code allows someone to take an array of characters and write a "foo" instance to it. If C++ were strongly typed, this would not be possible. Type safe languages, like C#, Java, VB, Lisp, Ruby, Python, and many others, would throw an exception if you tried to cast an array of characters to a "foo" instance.
Weakly typed
Something is weakly typed if it is not strongly typed.
Statically typed
A language is statically typed if its type system is verified at compile time. A statically typed language can be either "weakly typed" like C or strongly typed like C#.
Dynamically typed
A dynamically typed language is a language where types are verified at runtime. Many languages have some mixture of static and dynamic typing. C#, for example, will verify many casts dynamically at runtime because it's not possible to check them at compile time. Other examples are languages like Java, VB, and Objective-C.
There are also some languages that are "completely" or "mostly" dynamically typed, like Lisp, Ruby, and Smalltalk.
Duck typing
Duck typing is something that is completely orthogonal to static, dynamic, weak, or strong typing. It is the practice of writing code that will work with an object regardless of its underlying type identity. For example, the following VB.NET code:
Function Foo(x As Object) As Object
    Return x.Quack()
End Function
Will work regardless of the type of the object passed into "Foo", provided that it defines a method called "Quack". That is, if the object looks like a duck, walks like a duck, and talks like a duck, then it's a duck. Duck typing comes in many forms. It's possible to have static duck typing, dynamic duck typing, strong duck typing, and weak duck typing. C++ template functions are a good example of "weak static duck typing". JaredPar's post shows an example of "strong static duck typing". Late binding in VB (or code in Ruby or Python) enables "strong dynamic duck typing".
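To make "strong dynamic duck typing" concrete, here is a minimal Python sketch (the class names are invented for the example):

class Mallard:
    def quack(self):
        return "Quack!"

class RobotDuck:              # unrelated to Mallard; no shared base class
    def quack(self):
        return "BEEP quack BEEP"

def foo(x):
    return x.quack()          # late-bound: looked up when foo is called

print(foo(Mallard()))         # Quack!
print(foo(RobotDuck()))       # BEEP quack BEEP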
Variant
A variant is a dynamically typed data structure that can hold a range of predefined data types, including strings, integer types, dates, and COM objects. It then defines a bunch of operations for assigning, converting, and manipulating data stored in variants. Whether or not a variant is strongly typed depends on the language in which it is used. For example, a variant in a VB 6 program is strongly typed. The VB runtime ensures that operations written in VB code will conform to the typing rules for variants. Trying to add a string to an IUnknown via the variant type in VB will result in a runtime error. In C++, however, variants are weakly typed because all C++ types are weakly typed.
OK, now that I have gotten the definitions out of the way, I can answer your question:
A variant, in VB 6, enables one form of duck typing. There are better ways of doing duck typing than variants (JaredPar's example is one of the best), but you can do duck typing with variants. That is, you can write one piece of code that will operate on an object regardless of its underlying type identity.
However, doing it with variants doesn't really give a lot of validation. A statically typed duck-typing mechanism, like the one JaredPar describes, gives the benefits of duck typing plus some extra validation from the compiler. That can be really helpful.

The simple answer is that a variant is weakly typed while duck typing is strongly typed.
Duck typing can be summed up nicely as "if it walks like a duck, looks like a duck, acts like a duck, then it's a duck." In computer science terms, consider a duck to be the following interface.
interface IDuck {
    void Quack();
}
Now let's examine Daffy
class Daffy {
    public void Quack() {
        Console.WriteLine("Thatsssss desspicable!!!!");
    }
}
Daffy is not actually an IDuck in this case. Yet it acts just like a duck. Why make Daffy implement IDuck when it's quite obvious that Daffy is, in fact, a duck?
This is where duck typing comes in. It allows a type safe conversion between any type that has all of the behaviors of an IDuck and an IDuck reference.
IDuck d = new Daffy();  // via a duck-typing mechanism; plain C# would reject this assignment
d.Quack();
The Quack method can now be called on "d" with complete type safety. There is no chance of a runtime type error in this assignment or method call.
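Python can express much the same idea with typing.Protocol: a static checker such as mypy accepts any class whose shape matches the protocol, with no explicit "implements" declaration. A minimal sketch of the same Daffy example:

from typing import Protocol

class IDuck(Protocol):
    def quack(self) -> None: ...

class Daffy:                    # no declared relationship to IDuck
    def quack(self) -> None:
        print("Thatsssss desspicable!!!!")

def make_it_quack(d: IDuck) -> None:
    d.quack()

make_it_quack(Daffy())          # accepted: Daffy matches IDuck structurally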

Duck typing is just another term for dynamic typing or late binding. A variant object that parses/compiles with any member access (e.g., obj.Anything) that may or may not actually be defined at runtime is duck typing.

Probably nothing requires duck-typing, but it can be convenient in certain situations.
Say you have a method that takes and uses an object of the sealed class Duck from some 3rd party library. And you want to make the method testable. And Duck has an awfully big API (kind of like ServletRequest) of which you only need to care about a small subset. How do you test it?
One way is to make the method take something that quacks. Then you can simply create a quacking mock object.
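In Python, for example, the standard library's unittest.mock makes such a stand-in a one-liner; handle() below is a hypothetical method under test:

from unittest.mock import Mock

def handle(duck):                      # only requires something that quacks
    return duck.quack()

mock_duck = Mock()
mock_duck.quack.return_value = "quack!"
assert handle(mock_duck) == "quack!"   # no real (sealed) Duck needed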

Try reading the very first paragraph of the Wikipedia article on duck typing.
Duck typing on Wikipedia
I can have an interface (IRunnable) that defines the method Run().
If I have another class with a method like this:
public void RunSomeRunnable(IRunnable rn) { ... }
In a duck-type-friendly language I could pass any class that has a Run() method into the RunSomeRunnable() method.
In a statically typed language the class being passed into RunSomeRunnable needs to explicitly implement the IRunnable interface.
"If it Run() like a duck"
A variant is more like object in .NET, at least.
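In a duck-typed language the same method needs no interface at all; a rough Python equivalent (names invented):

class Sprinter:                        # never declares any IRunnable
    def run(self):
        return "running"

def run_some_runnable(rn):
    return rn.run()                    # any object with run() will do

print(run_some_runnable(Sprinter()))   # running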

@Kent Fredric
Your example can most certainly be done without duck typing by using explicit interfaces... uglier, yes, but not impossible.
And personally, I find having well-defined contracts in interfaces much better for enforcing quality code than relying on duck typing... but that's just my opinion, so take it with a grain of salt.
public interface ICreature { }
public interface IFly { void Fly(); }
public interface IWalk { void Walk(); }
public interface IQuack { void Quack(); }
// etc.

// Animal classes
public class Duck : ICreature, IWalk, IFly, IQuack
{
    public void Fly() { }
    public void Walk() { }
    public void Quack() { }
}

public class Rhino : ICreature, IWalk
{
    public void Walk() { }
}

// In the method
List<ICreature> creatures = new List<ICreature>();
creatures.Add(new Duck());
creatures.Add(new Rhino());

foreach (ICreature creature in creatures)
{
    if (creature is IFly)
        (creature as IFly).Fly();
    if (creature is IWalk)
        (creature as IWalk).Walk();
}
// etc.

In regards to your request for an example of something you'd need to use duck typing to accomplish, I don't think such a thing exists. I think of it like I think about whether to use recursion or whether to use iteration. Sometimes one just works better than the other.
In my experience, duck typing makes code more readable and easier to grasp (both for the programmer and the reader). But I find that more traditional static typing eliminates a lot of needless typing errors. There's simply no way to objectively say one is better than the other, or even in what situations one is more effective than the other.
I say that if you're comfortable using static typing, then use it. But you should at least try duck typing out (and use it in a nontrivial project if possible).
To answer you more directly:
...so again i'm just not getting it. Is it a fantastic time-saver, or the same old thing in a brand new sack?
It's both. You're still attacking the same problems. You're just doing it a different way. Sometimes that's really all you need to do to save time (even if for no other reason to force yourself to think about doing something a different way).
Is it a panacea that will save all of mankind from extinction? No. And anyone who tells you otherwise is a zealot.

A variant (at least as I've used them in VB6) holds a variable of a single, well-defined, usually static type. E.g., it might hold an int, or a float, or a string, but variant ints are used as ints, variant floats are used as floats, and variant strings are used as strings.
Duck typing instead uses dynamic typing. Under duck typing, a variable might be usable as an int, or a float, or a string, if it happens to support the particular methods that an int or float or string supports in a particular context.
Example of variants versus duck typing:
For a web application, suppose I want my user information to come from LDAP instead of from a database, but I still want my user information to be useable by the rest of the web framework, which is based around a database and an ORM.
Using variants: No luck. I can create a variant that can contain a UserFromDbRecord object or a UserFromLdap object, but UserFromLdap objects won't be usable by routines that expect objects from the FromDbRecord hierarchy.
Using duck typing: I can take my UserFromLdap class and add a couple of methods that make it act like a UserFromDbRecord class. I don't need to replicate the entire FromDbRecord interface, just enough for the routines that I need to use. If I do this right, it's an extremely powerful and flexible technique. If I do it wrong, it produces very confusing and brittle code (subject to breakage if either the DB library or the LDAP library changes).
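A rough Python sketch of that idea, with invented names; UserFromLdap reproduces only the slice of the FromDbRecord surface that the framework routines actually call:

class UserFromLdap:
    def __init__(self, entry):
        self._entry = entry

    # Just enough of UserFromDbRecord's interface for our routines.
    def display_name(self):
        return self._entry["cn"]

    def email(self):
        return self._entry["mail"]

def greet(user):    # framework routine written against UserFromDbRecord
    return "Hello, %s <%s>" % (user.display_name(), user.email())

print(greet(UserFromLdap({"cn": "Ada", "mail": "ada@example.org"})))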

I think the core point of duck typing is how it is used. One uses method detection and introspection of the entity to decide what to do with it, instead of declaring in advance what it will be (in which case you already know what to do with it).
It's probably more practical in OO languages, where primitives are not primitives but objects.
I think the best way to sum it up: with a variant type, an entity can be anything, and what it is is uncertain; with duck typing, an entity merely looks like something, but you can work out what it can do by asking it.
Here's something I don't believe is plausible without duck typing:
sub dance {
    my $creature = shift;

    # Ask the creature what it can do, rather than checking its class.
    if ( $creature->can("fly") ) {
        $creature->fly("up");
        $creature->fly("right",   1);
        $creature->fly("forward", 1);
        $creature->fly("left",    1);
        $creature->fly("back",    1);
        $creature->fly("down");
    }
    elsif ( $creature->can("walk") ) {
        $creature->walk("left",    1);
        $creature->walk("right",   1);
        $creature->walk("forward", 1);
        $creature->walk("back",    1);
    }
    elsif ( $creature->can("splash") ) {
        $creature->splash("up") for ( 0 .. 4 );
    }

    if ( $creature->can("quack") ) {
        $creature->quack();
    }
}
my @x;
push @x, Rhinoceros->new;
push @x, Flamingo->new;
push @x, Hyena->new;
push @x, Dolphin->new;
push @x, Duck->new;

for my $creature (@x) {
    Thread->new( sub { dance($creature) } );
}
Any other way would require you to put type restrictions on the functions, which would cut out different species, requiring you to create different functions for different species and making the code really hellish to maintain.
And that really sucks in terms of just trying to perform good choreography.

Everything you can do with duck typing you can also do with interfaces. Duck typing is fast and comfortable, but some argue it can lead to errors (if two distinct methods/properties are named alike). Interfaces are safe and explicit, but people might say "why state the obvious?". The rest is a flame war. Everyone chooses what suits them, and no one is "right".

Related

Public read-only access to a private var in Scala

Preamble: I'm teaching a course in object-functional programming using Scala and one of the things we do is to take sample problems and compare how they might be implemented using object-functional programming and using state-based, object-oriented programming, which is the background most of the students have.
So I want to implement a simple class in Scala that has a private var with a public accessor method (a very common idiom in state-based, object-oriented programming). Looking at Alvin Alexander's "Scala Cookbook" the recommended code to do this is pretty ghastly:
class Person(private var _age: Int):
    def incrAge() = _age += 1
    def age = _age
I say "ghastly" because I'm having to invent two names that essentially represent the age field, one used in the constructor and another used in the class interface. I'm curious if people more familiar with Scala would know of a simpler syntax that would avoid this?
EDIT: It seems clear to me now that Scala combines the val/var declaration with the given visibility (public/private), so for a var either both the accessor and the mutator are public or both are private. Depending on perspective, you might find this inflexible, or feel it rightly punishes you for using var 🙂.
Yes, a better way of doing it is not using var
class Person(val age: Int) {
    def incrAge = new Person(age + 1)
}
If you are going to write idiomatic Scala code, you should start by pretending that certain parts of it simply do not exist: mostly vars, nulls, and returns, but also mutable structures or collections, arrays, and certain methods like .get on a Try or an Option, or the Await object. Oh, and also isInstanceOf and asInstanceOf.
You may ask "why do these things exist if they are not supposed to be used?". Well, because sometimes, in a very few very specific cases they are actually useful for achieving a very limited very specific purpose. But that would be probably fewer than 0.1% of the cases you will come across in your career, unless you are involved in some hard core low level library development (in which case, you would not be posting questions like this here).
So, until you acquire enough command of the language to be able to definitively distinguish those 0.1% of the cases from the other 99.9%, you are much better off simply ignoring those language features, and pretending they do not exist (if you can't figure out how to achieve a certain task without using one of those, post a question here, and people will gladly help you).
You said "Having to create two names to manage a single field is ugly." Indeed. But you know what's uglier? Using vars.
(Btw, the way you typically do this in Java is getAge and setAge, still two names. The ugliness is rooted in allowing the value labeled with a given name to be different at different points of program execution, not in what the mutation syntax specifically looks like.)

How can I allow the caller to call method of field of case class?

I am not sure of the keywords for this pattern; sorry if the question is not clear.
If you have:
case class MyFancyWrapper(
    somethingElse: Any,
    heavyComplexObject: CrazyThing
)
val w = MyFancyWrapper(???, complexThing)
I want to be able to call w.method with the method coming from complexThing. I tried to extend CrazyThing, but it is a trait and I don't want to implement all the methods; that would be very tedious. I also don't want to have to do:
def method1 = heavyComplexObject.method1
...
for all of them.
Any solution ?
Thanks.
You can do this with macros, but I agree with Luis that this is overkill. Macros are intended for repetitive boring things, not one-time boring things. Also, this is not as trivial as it sounds, because you probably don't want to pass through all the methods (you probably still want your own hashCode and equals). Finally, macros have bad IDE support, so most probably no auto-completion for all those methods. On the other hand, if you use a good IDE (like IDEA) there is most probably an action like "Delegate methods" that will generate most of the code for you. You will still have to change the return type from Unit to MyFancyWrapper and add a trailing this at the end of each method, but that can easily be done with mass replace operations (hint: replace "}" with "this }" and then automatically re-format the code).
(Screenshots of the "Delegate methods" process in JetBrains IDEA omitted.)
You can use an implicit conversion to make all the methods of heavyComplexThing directly available on MyFancyWrapper:
implicit def toHeavy(fancy: MyFancyWrapper): CrazyThing = fancy.heavyComplexObject
This needs to be in scope when the method is called.
In the comments you indicate that you want to return this so that you can chain multiple calls on the same object:
w.method1.method2.method3
Don't do this
While this is a common pattern in non-functional languages, it is bad practice in Scala for two reasons:
This pattern inherently relies on side-effects, which is the antithesis of functional programming.
It is confusing, because in Scala chaining calls in this way is used to implement a data pipeline, where the output of one function is passed as the input to the next.
It is much clearer to write separate statements so that it is obvious that the methods are being called on the same object:
w.method1()
w.method2()
w.method3()
(It is also conventional to use () when calling methods with side effects)

Everything's an object in Scala

I am new to Scala and heard a lot that everything is an object in Scala. What I don't get is what's the advantage of "everything's an object"? What are things that I cannot do if everything is not an object? Examples are welcome. Thanks
The advantage of having "everything" be an object is that you have far fewer cases where abstraction breaks.
For example, methods are not objects in Java. So if I have two strings, I can
String s1 = "one";
String s2 = "two";
static String caps(String s) { return s.toUpperCase(); }
caps(s1); // Works
caps(s2); // Also works
So we have abstracted away string identity in our operation of making something upper case. But what if we want to abstract away the identity of the operation--that is, we do something to a String that gives back another String but we want to abstract away what the details are? Now we're stuck, because methods aren't objects in Java.
In Scala, methods can be converted to functions, which are objects. For instance:
def stringop(s: String, f: String => String) = if (s.length > 0) f(s) else s
stringop(s1, _.toUpperCase)
stringop(s2, _.toLowerCase)
Now we have abstracted the idea of performing some string transformation on nonempty strings.
And we can make lists of the operations and such and pass them around, if that's what we need to do.
There are other less essential cases (object vs. class, primitive vs. not, value classes, etc.), but the big one is collapsing the distinction between method and object so that passing around and abstracting over functionality is just as easy as passing around and abstracting over data.
The advantage is that you don't have different operators that follow different rules within your language. For example, in Java to perform operations involving objects, you use the dot name technique of calling the code (static objects still use the dot name technique, but sometimes the this object or the static object is inferred) while built-in items (not objects) use a different method, that of built-in operator manipulation.
Number one = Integer.valueOf(1);
Number two = Integer.valueOf(2);
Number three = one.plus(two); // if only such methods existed.
int one = 1;
int two = 2;
int three = one + two;
The main difference is that the dot-name technique is subject to polymorphism, operator overloading, method hiding, and all the good stuff that you can do with Java objects. The + technique is predefined and completely inflexible.
Scala circumvents the inflexibility of the + method by basically handling it as a dot name operator, and defining a strong one-to-one mapping of such operators to object methods. Hence, in Scala everything is an object means that everything is an object, so the operation
5 + 7
results in two objects being created (a 5 object and a 7 object), the + method of the 5 object being called with the parameter 7 (if my Scala memory serves me correctly), and a 12 object being returned as the value of the 5 + 7 operation.
This "everything is an object" has a lot of benefits in a functional programming environment; for example, blocks of code are now objects too, making it possible to pass blocks of code (without names) back and forth as parameters, yet still be bound to strict type checking (the block of code only returns Long, or a subclass of String, or whatever).
When it boils down to it, it makes some kinds of solutions very easy to implement, and often the inefficiencies are mitigated by the lack of need to handle "move into primitives, manipulate, move out of primitives" marshalling code.
One specific advantage that comes to mind (since you asked for examples) is that what in Java are primitive types (int, boolean, ...) are in Scala objects that you can add functionality to with implicit conversions. For example, if you want to add a toRoman method to ints, you could write an implicit class like:
implicit class RomanInt(i: Int) {
    def toRoman = ??? // some algorithm to convert i to a Roman representation
}
Then you could call this method on any Int literal, like:
val romanFive = 5.toRoman // V
This way you can 'pimp' basic types to adapt them to your needs
In addition to the points made by others, I always emphasize that the uniform treatment of all values in Scala is in part an illusion. For the most part it is a very welcome illusion. And Scala is very smart to use real JVM primitives as much as possible and to perform automatic transformations (usually referred to as boxing and unboxing) only as much as necessary.
However, if the dynamic pattern of application of automatic boxing and unboxing is very high, there can be undesirable costs (both memory and CPU) associated with it. This can be partially mitigated with the use of specialization, which creates special versions of generic classes when particular type parameters are of (programmer-specified) primitive types. This avoids boxing and unboxing but comes at the cost of more .class files in your running application.
Not everything is an object in Scala, though more things are objects in Scala than their analogues in Java.
The advantage of objects is that they're bags of state which also have some behavior coupled with them. With the addition of polymorphism, objects give you ways of changing the implicit behavior and state. Enough with the poetry, let's go into some examples.
The if statement is not an object, in either Scala or Java. If it were, you would be able to subclass it, inject another dependency in its place, and use it to do stuff like logging to a file any time your code uses an if statement. Wouldn't that be magical? In some cases it would help you debug, and in other cases it would make your hair turn white before you found a bug caused by someone overriding the behavior of if.
Visiting an objectless, statementful world: imagine your favorite OOP programming language. Think of the standard library it provides. There are plenty of classes there, right? They offer ways for customization, right? They take parameters that are other objects, they create other objects. You can customize all of these. You have polymorphism. Now imagine that the entire standard library was simply keywords. You wouldn't be able to customize nearly as much, because you can't override keywords. You'd be stuck with whatever cases the language designers decided to implement, and you'd be helpless in customizing anything there. Such languages exist, and you know them well: they're the SQL-like languages. You can barely create functions there, and in order to customize the behavior of the SELECT statement, new versions of the language had to appear which included the most-desired features. This would be an extreme world, where you'd only be able to program by asking the language designers for new features (which you might not get, because someone else more important would require some feature incompatible with what you want).
In conclusion, NOT everything is an object in Scala: classes, expressions, keywords, and packages surely aren't. More things are, however, like functions.
What's IMHO a nice rule of thumb is that more objects equals more flexibility
P.S. In Python, for example, even more things are objects, like classes themselves and the analogous concepts to packages (that is, Python modules and packages). You'd see how black magic is easier to do there, and that brings both good and bad consequences.

Simulating aspects of static-typing in a duck-typed language

In my current job I'm building a suite of Perl scripts that depend heavily on objects. (using Perl's bless() on a Hash to get as close to OO as possible)
Now, for lack of a better way of putting this, most programmers at my company aren't very smart. Worse, they don't like reading documentation and seem to have a problem understanding other people's code. Cowboy coding is the game here. Whenever they encounter a problem and try to fix it, they come up with a horrendous solution that actually solves nothing and usually makes it worse.
This results in me, frankly, not trusting them with code written in a duck-typed language. As an example, I see too many problems with them not getting an explicit error for misusing objects. For instance, if type A has member foo, and they do something like instance->goo, they aren't going to see the problem immediately. It will return a null/undefined value, and they will probably waste an hour finding the cause. Then they end up changing something else because they didn't properly identify the original problem.
So I'm brainstorming for a way to keep my scripting language (its rapid development is an advantage) but give an explicit error message when an object isn't used properly. I realize that since there isn't a compile stage or static typing, the error will have to be at run time. I'm fine with this, so long as the user gets a very explicit notice saying "this object doesn't have X"
As part of my solution, I don't want it to be required that they check if a method/variable exists before trying to use it.
Even though my work is in Perl, I think this can be language agnostic.
If you have any shot of adding modules to use, try Moose. It provides pretty much all the features you'd want in a modern programming environment, and more. It does type checking, excellent inheritance, has introspection capabilities, and with MooseX::Declare, one of the nicest interfaces for Perl classes out there. Take a look:
use MooseX::Declare;

class BankAccount {
    has 'balance' => ( isa => 'Num', is => 'rw', default => 0 );

    method deposit (Num $amount) {
        $self->balance( $self->balance + $amount );
    }

    method withdraw (Num $amount) {
        my $current_balance = $self->balance();
        ( $current_balance >= $amount )
            || confess "Account overdrawn";
        $self->balance( $current_balance - $amount );
    }
}

class CheckingAccount extends BankAccount {
    has 'overdraft_account' => ( isa => 'BankAccount', is => 'rw' );

    before withdraw (Num $amount) {
        my $overdraft_amount = $amount - $self->balance();
        if ( $self->overdraft_account && $overdraft_amount > 0 ) {
            $self->overdraft_account->withdraw($overdraft_amount);
            $self->deposit($overdraft_amount);
        }
    }
}
I think it's pretty cool, myself. :) It's a layer over Perl's object system, so it works with stuff you already have (basically.)
With Moose, you can create subtypes really easily, so you can make sure your input is valid. Lazy programmers agree: with so little that has to be done to make subtypes work in Moose, it's easier to do them than not! (from Cookbook 4)
subtype 'USState'
    => as Str
    => where {
        ( exists $STATES->{code2state}{ uc($_) }
            || exists $STATES->{state2code}{ uc($_) } );
    };
And ta-da, USState is now a type you can use! No fuss, no muss, and just a small amount of code. It'll throw an error if it's not right, and all the consumers of your class have to do is pass a scalar with that string in it. If it's fine (which it should be... right? :) ), they use it like normal, and your class is protected from garbage. How nice is that!
Moose has tons of awesome stuff like this.
Trust me. Check it out. :)
In Perl, make it required that use strict and use warnings are on in 100% of the code.
You can try to make almost-private member variables by creating closures. A very good example is the "Private Member Variables, Sort of" section in http://www.usenix.org/publications/login/1998-10/perl.html . They are not 100% private, but it is fairly un-obvious how to access them unless you really know what you're doing (and finding out requires reading your code and doing research).
If you don't want to use closures, the following approach works somewhat well:
Make all of your object member variables (aka object hash keys in Perl) wrapped in accessors. There are ways to do this efficiently from coding standards POV. One of the least safe is Class::Accessor::Fast. I'm sure Moose has better ways but I'm not that familiar with Moose.
Make sure to "hide" actual member variables in private-convention names, e.g. $object->{'__private__var1'} would be the member variable, and $object->var1() would be a getter/setter accessor.
NOTE: For the last, Class::Accessor::Fast is bad since its member variables share names with accessors. But you can have very easy builders that work just like Class::Accessor::Fast and create key values such as $obj->{'__private__foo'} for "foo".
This won't prevent them shooting themselves in the foot, but WILL make it a lot harder to do so.
In your case, if they use $obj->goo or $obj->goo(), they WOULD get a runtime error, at least in Perl.
They could of course go out of their way to do $obj->{'__private__goo'}, but if they do the gonzo cowboy crap due to sheer laziness, the latter is a lot more work than doing the correct $obj->foo().
You can also have a scan of the code base which detects $object->{"_-style strings (direct pokes at private-convention keys), though from your description that might not work as much of a deterrent.
You can use Class::InsideOut or Object::InsideOut which give you true data privacy. Rather than storing data in a blessed hash reference, a blessed scalar reference is used as a key to lexical data hashes. Long story short, if your co-workers try $obj->{member} they'll get a run time error. There's nothing in $obj for them to grab at and no easy way to get at the data except through accessors.
Here is a discussion of the inside-out technique and various implementations.

What is the difference between a strongly typed language and a statically typed language?

Also, does one imply the other?
What is the difference between a strongly typed language and a statically typed language?
A statically typed language has a type system that is checked at compile time by the implementation (a compiler or interpreter). The type check rejects some programs, and programs that pass the check usually come with some guarantees; for example, the compiler guarantees not to use integer arithmetic instructions on floating-point numbers.
There is no real agreement on what "strongly typed" means, although the most widely used definition in the professional literature is that in a "strongly typed" language, it is not possible for the programmer to work around the restrictions imposed by the type system. This term is almost always used to describe statically typed languages.
Static vs dynamic
The opposite of statically typed is "dynamically typed", which means that
Values used at run time are classified into types.
There are restrictions on how such values can be used.
When those restrictions are violated, the violation is reported as a (dynamic) type error.
For example, Lua, a dynamically typed language, has a string type, a number type, and a Boolean type, among others. In Lua every value belongs to exactly one type, but this is not a requirement for all dynamically typed languages. In Lua, it is permissible to concatenate two strings, but it is not permissible to concatenate a string and a Boolean.
Strong vs weak
The opposite of "strongly typed" is "weakly typed", which means you can work around the type system. C is notoriously weakly typed because any pointer type is convertible to any other pointer type simply by casting. Pascal was intended to be strongly typed, but an oversight in the design (untagged variant records) introduced a loophole into the type system, so technically it is weakly typed.
Examples of truly strongly typed languages include CLU, Standard ML, and Haskell. Standard ML has in fact undergone several revisions to remove loopholes in the type system that were discovered after the language was widely deployed.
What's really going on here?
Overall, it turns out to be not that useful to talk about "strong" and "weak". Whether a type system has a loophole is less important than the exact number and nature of the loopholes, how likely they are to come up in practice, and what are the consequences of exploiting a loophole. In practice, it's best to avoid the terms "strong" and "weak" altogether, because
Amateurs often conflate them with "static" and "dynamic".
Apparently "weak typing" is used by some persons to talk about the relative prevalance or absence of implicit conversions.
Professionals can't agree on exactly what the terms mean.
Overall you are unlikely to inform or enlighten your audience.
The sad truth is that when it comes to type systems, "strong" and "weak" don't have a universally agreed on technical meaning. If you want to discuss the relative strength of type systems, it is better to discuss exactly what guarantees are and are not provided.
For example, a good question to ask is this: "is every value of a given type (or class) guaranteed to have been created by calling one of that type's constructors?" In C the answer is no. In CLU, F#, and Haskell it is yes. For C++ I am not sure—I would like to know.
By contrast, static typing means that programs are checked before being executed, and a program might be rejected before it starts. Dynamic typing means that the types of values are checked during execution, and a poorly typed operation might cause the program to halt or otherwise signal an error at run time. A primary reason for static typing is to rule out programs that might have such "dynamic type errors".
Does one imply the other?
On a pedantic level, no, because the word "strong" doesn't really mean anything. But in practice, people almost always do one of two things:
They (incorrectly) use "strong" and "weak" to mean "static" and "dynamic", in which case they (incorrectly) are using "strongly typed" and "statically typed" interchangeably.
They use "strong" and "weak" to compare properties of static type systems. It is very rare to hear someone talk about a "strong" or "weak" dynamic type system. Except for FORTH, which doesn't really have any sort of a type system, I can't think of a dynamically typed language where the type system can be subverted. Sort of by definition, those checks are bulit into the execution engine, and every operation gets checked for sanity before being executed.
Either way, if a person calls a language "strongly typed", that person is very likely to be talking about a statically typed language.
This is often misunderstood so let me clear it up.
Static/Dynamic Typing
Static typing is where the type is bound to the variable. Types are checked at compile time.
Dynamic typing is where the type is bound to the value. Types are checked at run time.
So in Java for example:
String s = "abcd";
s will "forever" be a String. During its life it may point to different Strings (since s is a reference in Java). It may have a null value but it will never refer to an Integer or a List. That's static typing.
In PHP:
$s = "abcd"; // $s is a string
$s = 123; // $s is now an integer
$s = array(1, 2, 3); // $s is now an array
$s = new DOMDocument; // $s is an instance of the DOMDocument class
That's dynamic typing.
Strong/Weak Typing
(Edit alert!)
Strong typing is a phrase with no widely agreed upon meaning. Most programmers who use this term to mean something other than static typing use it to imply that there is a type discipline that is enforced by the compiler. For example, CLU has a strong type system that does not allow client code to create a value of abstract type except by using the constructors provided by the type. C has a somewhat strong type system, but it can be "subverted" to a degree because a program can always cast a value of one pointer type to a value of another pointer type. So for example, in C you can take a value returned by malloc() and cheerfully cast it to FILE*, and the compiler won't try to stop you—or even warn you that you are doing anything dodgy.
(The original answer said something about a value "not changing type at run time". I have known many language designers and compiler writers and have not known one that talked about values changing type at run time, except possibly some very advanced research in type systems, where this is known as the "strong update problem".)
Weak typing implies that the compiler does not enforce a typing discipline, or perhaps that enforcement can easily be subverted.
The original version of this answer conflated weak typing with implicit conversion (sometimes also called "implicit promotion"). For example, in Java:
String s = "abc" + 123; // "abc123";
This code is an example of implicit promotion: 123 is implicitly converted to a string before being concatenated with "abc". It can be argued the Java compiler rewrites that code as:
String s = "abc" + new Integer(123).toString();
Consider a classic PHP "starts with" problem:
if (strpos('abcdef', 'abc') == false) {
// not found
}
The error here is that strpos() returns the index of the match, being 0. 0 is coerced into boolean false and thus the condition is actually true. The solution is to use === instead of == to avoid implicit conversion.
This example illustrates how a combination of implicit conversion and dynamic typing can lead programmers astray.
Compare that to Ruby:
val = "abc" + 123
which is a runtime error because in Ruby the object 123 is not implicitly converted just because it happens to be passed to a + method. In Ruby the programmer must make the conversion explicit:
val = "abc" + 123.to_s
Comparing PHP and Ruby is a good illustration here. Both are dynamically typed languages but PHP has lots of implicit conversions and Ruby (perhaps surprisingly if you're unfamiliar with it) doesn't.
Static/Dynamic vs Strong/Weak
The point here is that the static/dynamic axis is independent of the strong/weak axis. People probably confuse them in part because strong vs. weak typing is not only less clearly defined, there is also no real consensus on exactly what is meant by strong and weak. For this reason strong/weak typing is far more a shade of grey than black or white.
So to answer your question: another way to look at this that's mostly correct is to say that static typing is compile-time type safety and strong typing is runtime type safety.
The reason for this is that variables in a statically typed language have a type that must be declared and can be checked at compile time. A strongly-typed language has values that have a type at run time, and it's difficult for the programmer to subvert the type system without a dynamic check.
But it's important to understand that a language can be Static/Strong, Static/Weak, Dynamic/Strong or Dynamic/Weak.
Both are poles on two different axis:
strongly typed vs. weakly typed
statically typed vs. dynamically typed
Strongly typed means a variable will not be automatically converted from one type to another. Weakly typed is the opposite: Perl can use a string like "123" in a numeric context by automatically converting it into the int 123. A strongly typed language like Python will not do this.
Statically typed means the compiler figures out the type of each variable at compile time. Dynamically typed languages only figure out the types of variables at runtime.
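A quick Python sketch of the strong-typing half of that (Python refuses the conversion Perl performs):

try:
    "123" + 4                      # Perl would coerce; Python refuses
except TypeError as err:
    print(err)                     # str and int cannot be added implicitly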
Strongly typed means that there are restrictions on conversions between types.
Statically typed means that the types are not dynamic: you cannot change the type of a variable once it has been created.
The answer is already given above; this is an attempt to differentiate between the strong vs. weak and static vs. dynamic concepts.
What is Strongly typed VS Weakly typed?
Strongly typed: will not be automatically converted from one type to another.
In strongly typed languages like Go or Python, "2" + 8 will raise a type error, because they don't allow "type coercion".
Weakly (loosely) typed: will be automatically converted from one type to another.
Weakly typed languages like JavaScript or Perl won't throw an error; in this case JavaScript produces '28' and Perl produces 10.
Perl Example:
my $a = "2" + 8;
print $a,"\n";
Save it to main.pl, run perl main.pl, and you will get the output 10.
What is Static VS Dynamic type?
In programming, static typing and dynamic typing are defined with respect to the point at which variable types are checked. Statically typed languages are those in which type checking is done at compile time, whereas dynamically typed languages are those in which type checking is done at run time.
Static: Types checked before run-time
Dynamic: Types checked on the fly, during execution
What does this mean?
Go checks types before run time (a static check). It does not only translate and type-check the code it is executing; it scans through all the code, and a type error is thrown before the code is even run. For example,
package main

import "fmt"

func foo(a int) {
    if a > 0 {
        fmt.Println("I am feeling lucky (maybe).")
    } else {
        fmt.Println("2" + 8)
    }
}

func main() {
    foo(2)
}
Save this file as main.go and run it; you will get a compilation failure:
go run main.go
# command-line-arguments
./main.go:9:25: cannot convert "2" (type untyped string) to type int
./main.go:9:25: invalid operation: "2" + 8 (mismatched types string and int)
But this is not the case for Python. For example, the following block of code will execute for the first foo(2) call and will fail for the second foo(0) call. Because Python is dynamically typed, it only translates and type-checks the code it is executing. The else block never executes for foo(2), so "2" + 8 is never even looked at; for the foo(0) call it will try to execute that block and fail.
def foo(a):
    if a > 0:
        print 'I am feeling lucky.'
    else:
        print "2" + 8
foo(2)
foo(0)
You will see the following output:
python main.py
I am feeling lucky.
Traceback (most recent call last):
  File "main.py", line 7, in <module>
    foo(0)
  File "main.py", line 5, in foo
    print "2" + 8
TypeError: cannot concatenate 'str' and 'int' objects
Data coercion does not necessarily mean weakly typed, because sometimes it's syntactic sugar:
The example above of Java being weakly typed because of
String s = "abc" + 123;
is not an example of weak typing, because it's really doing:
String s = "abc" + new Integer(123).toString()
Data coercion is also not weakly typed if you are constructing a new object.
Java is a very bad example of weak typing (and any language that has good reflection will most likely not be weakly typed), because the runtime of the language always knows what the type is (the exception might be native types).
This is unlike C. C is one of the best examples of a weakly typed language. The runtime has no idea whether 4 bytes represent an integer, a struct, a pointer, or 4 characters.
The runtime of the language really defines whether or not it's weakly typed; otherwise it's really just opinion.
EDIT:
After further thought, this is not necessarily true, as the runtime does not have to have all the types reified in the runtime system for it to be a strongly typed system.
Haskell and ML have such complete static analysis that they can potentially omit type information from the runtime.
One does not imply the other. For a language to be statically typed it means that the types of all variables are known or inferred at compile time.
A strongly typed language does not allow you to use one type as another. C is a weakly typed language and is a good example of what strongly typed languages don't allow. In C you can pass a data element of the wrong type and it will not complain. In strongly typed languages you cannot.
Strong typing probably means that variables have a well-defined type and that there are strict rules about combining variables of different types in expressions. For example, if A is an integer and B is a float, then the strict rule about A+B might be that A is cast to a float and the result returned as a float. If A is an integer and B is a string, then the strict rule might be that A+B is not valid.
Static typing probably means that types are assigned at compile time (or its equivalent for non-compiled languages) and cannot change during program execution.
Note that these classifications are not mutually exclusive, indeed I would expect them to occur together frequently. Many strongly-typed languages are also statically-typed.
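Python happens to follow both of those "strict rules", so they are easy to check; a quick sketch:

print(1 + 2.5)         # 3.5: the int is promoted to float
try:
    1 + "abc"          # no rule allows int + string
except TypeError as err:
    print(err)         # unsupported operand type(s) for +: 'int' and 'str'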
And note that when I use the word 'probably' it is because there are no universally accepted definitions of these terms. As you will already have seen from the answers so far.
IMHO, it is better to avoid these definitions altogether. Not only is there no agreed-upon definition of the terms, but the definitions that do exist tend to focus on technical aspects, for example: are operations on mixed types allowed, and if not, is there a loophole that bypasses the restrictions, such as working around the type system using pointers?
Instead, and emphasizing again that it is an opinion, one should focus on the question: Does the type system make my application more reliable? A question which is application specific.
For example: if my application has a variable named acceleration, then clearly if the way the variable is declared and used allows the assignment of the value "Monday" to acceleration it is a problem, as clearly an acceleration cannot be a weekday (and a string).
Another example: in Ada one can define subtype Month_Day is Integer range 1..31;. The type Month_Day is weak in the sense that it is not a separate type from Integer (because it is a subtype), but it is restricted to the range 1..31. In contrast, type Month_Day is new Integer; will create a distinct type, which is strong in the sense that it cannot be mixed with integers without explicit casting; but it is not restricted, and can receive a senseless value like -17. So technically it is stronger, but less reliable.
Of course, one can declare type Month_Day is new Integer range 1..31; to create a type which is distinct and restricted.