Recreating the Functor type in Swift - swift

I want to make the Functor type in Swift as a protocol, but I'm having trouble with the type constraints.
protocol Functor {
    typealias A
    func fmap<
        B, C
        where
        C == Self
        C.A == B>
        (A -> B) -> C
}
The problem is that C == Self sort of implies that C is the same as Self, and therefore has the same A. Is there any way to define a generic that is the same type but with a different type parameter?
EDIT: Maybe I should clarify that the goal here is to work like fmap in other functional languages, except that self is the input container, instead of fmap being a free function that takes the container as a parameter.

No, not today. Swift lacks higher-kinded types; it can't even treat an unapplied generic type as a first-class thing. So as a first step, you'd want to be able to talk about let x: Array<_> where you didn't yet know the type of the contained element. Swift can't handle that. It can't handle generic closures, either. For example:
func firstT<T>(xs: [T]) -> T {
    return xs[0]
}

let ft = firstT // "Argument for generic parameter 'T' could not be inferred"
Those are the simple cases, and Swift's type system already starts falling apart (it can handle it as a function definition, but not as a type). Swift's type system completely falls apart when asked to represent higher-kinded types like Functor.
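To make the limitation concrete, here is a rough sketch of about as close as you can get today (MappableIsh and fmapped are purely illustrative names): a protocol with an associated element type whose mapping method is forced to commit to a concrete result container, because there is no way to say "the same kind of container, applied to B".
protocol MappableIsh {
    associatedtype Element
    // Forced to pick a concrete container (Array) for the result.
    func fmapped<B>(_ transform: (Element) -> B) -> [B]
}

extension Array: MappableIsh {
    func fmapped<B>(_ transform: (Element) -> B) -> [B] {
        return map(transform)
    }
}

extension Optional: MappableIsh {
    func fmapped<B>(_ transform: (Wrapped) -> B) -> [B] {
        // Funnelled into an Array rather than an Optional: exactly the
        // information a higher-kinded Functor would preserve.
        switch self {
        case .some(let value): return [transform(value)]
        case .none: return []
        }
    }
}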
There's a little deeper discussion of the issue here (though it mostly reiterates what I've said above): https://gist.github.com/rnapier/5a84f65886e1ee40f62e
Note that this isn't specifically a feature of "other functional languages." First, Swift isn't really a functional language; it just has some functional features. Second, not all functional languages have this feature (Haskell and Scala, yes; ML and Lisp, no). The feature you want is higher-kinded types, and it can exist with or without other functional features.
A nice intro to this is available in section 4.3, Higher-kinded types, of Scala for Generic Programmers. Swift has some very interesting similarities to Scala, so the history of Scala can provide hints about what might make sense for Swift (higher-kinded types didn't show up until Scala 2.5).

What does "existential type" mean in Swift?

I am reading Swift Evolution proposal 244 (Opaque Result Types) and don't understand what the following means:
... existential type ...
One could compose these transformations by
using the existential type Shape instead of generic arguments, but
doing so would imply more dynamism and runtime overhead than may be
desired.
An example of an existential type is given in the evolution proposal itself:
protocol Shape {
    func draw(to: Surface)
}
An example of using protocol Shape as an existential type would look like
func collides(with: any Shape) -> Bool
as opposed to using a generic argument Other:
func collides<Other: Shape>(with: Other) -> Bool
It's important to note here that the Shape protocol is not an existential type by itself; only using it in a "protocol as a type" context, as above, "creates" an existential type from it. See this post from a member of the Swift Core Team:
Also, protocols currently do double-duty as the spelling for existential types, but this relationship has been a common source of confusion.
Also, citing the Swift Generics Evolution article (I recommend reading the whole thing, which explains this in more detail):
The best way to distinguish a protocol type from an existential type is to look at the context. Ask yourself: when I see a reference to a protocol name like Shape, is it appearing at a type level, or at a value level? Revisiting some earlier examples, we see:
func addShape<T: Shape>() -> T
// Here, Shape appears at the type level, and so is referencing the protocol type
var shape: any Shape = Rectangle()
// Here, Shape appears at the value level, and so creates an existential type
Deeper dive
Why is it called an "existential"? I never saw an unambiguous confirmation of this, but I assume that the feature is inspired by languages with more advanced type systems, e.g. consider Haskell's existential types:
class Buffer -- declaration of type class `Buffer` follows here

data Worker x y = forall b. Buffer b => Worker {
    buffer :: b,
    input :: x,
    output :: y
}
which is roughly equivalent to this Swift snippet (if we assume that Swift's protocols more or less represent Haskell's type classes):
protocol Buffer {}

struct Worker<X, Y> {
    let buffer: any Buffer
    let input: X
    let output: Y
}
Note that the Haskell example uses the forall quantifier here. You could read this as: "for all types that conform to the Buffer type class ("protocol" in Swift), values of type Worker have exactly the same type as long as their X and Y type parameters are the same". Thus, given
extension String: Buffer {}
extension Data: Buffer {}
values Worker(buffer: "", input: 5, output: "five") and Worker(buffer: Data(), input: 5, output: "five") would have exactly the same type.
This is a powerful feature, which allows things such as heterogeneous collections, and it can be used in many more places where you need to "erase" the original type of a value and "hide" it behind an existential type. Like all powerful features it can be abused, and it can make code less type-safe and/or less performant, so it should be used with care.
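For instance, a small usage sketch (assuming the Buffer protocol and the String/Data conformances above, plus import Foundation for Data): the existential type any Buffer lets values of different concrete types share one collection.
import Foundation

let buffers: [any Buffer] = ["hello", Data(count: 4)]

for buffer in buffers {
    // Only members declared on Buffer are visible here; the concrete
    // type (String or Data) has been erased behind the existential.
    print(type(of: buffer)) // prints String, then Data
}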
If you want even a deeper dive, check out Protocols with Associated Types (PATs), which currently can't be used as existentials for various reasons. There are also a few Generalized Existentials proposals being pitched more or less regularly, but nothing concrete as of Swift 5.3. In fact, the original Opaque Result Types proposal linked by the OP can solve some of the problems caused by use of PATs and significantly alleviates lack of generalized existentials in Swift.
I'm sure you've already used existentials a lot before without noticing it.
A summarized version of Max's answer is that for:
var rec: Shape = Rectangle() // Example A
only Shape's properties can be accessed, while for:
func addShape<T: Shape>() -> T // Example B
any property of T can be accessed. Because T conforms to Shape, all of Shape's properties can be accessed as well.
The first example is an existential; the second is not.
Example of real code:
protocol Shape {
    var width: Double { get }
    var height: Double { get }
}

struct Rectangle: Shape {
    var width: Double
    var height: Double
    var area: Double
}
let rec1: Shape = Rectangle(width: 1, height: 2, area: 2)
rec1.area // ❌
However:
let rec2 = Rectangle(width: 1, height: 2, area: 2)

func addShape<T: Shape>(_ shape: T) -> T {
    print(type(of: shape)) // Rectangle
    return shape
}

let rec3 = addShape(rec2)
print(rec3.area) // ✅
I'd argue that most Swift users already understand the distinction between an abstract class and a concrete class; this extra jargon just makes things slightly confusing.
The trickiness is that with the 2nd example, to the compiler, the type you return isn't Shape, it's Rectangle, i.e. the function signature effectively becomes:
func addShape(_ shape: Rectangle) -> Rectangle {
This is only possible because of (constrained) generics.
Yet for rec: Shape = Whatever(), to the compiler the type is Shape regardless of the type being assigned. That is the "box" type.
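To see the "box" in action with the Rectangle type above (a small illustrative snippet): the existential only exposes Shape's members, and a conditional cast is needed to recover the concrete type.
let boxed: Shape = Rectangle(width: 1, height: 2, area: 2)
// boxed.area          // ❌ only Shape's members are visible through the box
if let rect = boxed as? Rectangle {
    print(rect.area)   // ✅ the runtime cast recovers the concrete Rectangle
}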
Why is it named Existential?
The term "existential" in computer science and programming is borrowed from philosophy, where it refers to the concept of existence and being. In the context of programming, "existential" is used to describe a type that represents the existence of any specific type, without specifying which type it is.
The term is used to reflect the idea that, by wrapping a value in an existential type, you are abstracting away its specific type and only acknowledging its existence.
In other words, an existential type provides a way to handle values of different types in a unified way, while ignoring their specific type information†. This allows you to work with values in a more generic and flexible manner, which can be useful in many situations, such as when creating collections of heterogeneous values, or when working with values of unknown or dynamic types.
The other day I took my kid to an ice-cream shop. She asked what are you having, and I didn't know the flavor I picked, so I didn't say it's strawberry flavored or chocolate, I just said "I'm having an ice-cream".
I just specified that it's an ice-cream without saying its flavor. My daughter could no longer determine if it was Red, or Brown. If it was having a fruit flavor or not. I gave her existential-like information.
Had I told her it was chocolate, then I would have given her specific information, which is not existential.
†: In Example B, we're not ignoring the specific type information.
Special thanks to a friend who helped me come up with this answer
I feel like it's worth adding something about why the phrase is important in Swift. And in particular, I think almost always, Swift is talking about "existential containers". They talk about "existential types", but only really with reference to "stuff that is stored in an existential container". So what is an "existential container"?
As I see it, the key thing is, if you have a variable that you're passing as a parameter or using locally, etc. and you define the type of the variable as Shape then Swift has to do some things under the hood to make it work and that's what they are (obliquely) referring to.
If you think about defining a function in a library/framework module that you're writing that is publicly available and takes for example the parameters public func myFunction(shape1: Shape, shape2: Shape, shape1Rotation: CGFloat?) -> Shape... imagine it (optionally) rotates shape1, "adds" it to shape2 somehow (I leave the details up to your imagination) then returns the result. Coming from other OO languages, we instinctively think that we understand how this works... the function must be implemented only with members available in the Shape protocol.
But the question is for the compiler, how are the parameters represented in memory? Instinctively, again, we think... it doesn't matter. When someone writes a new program that uses your function at some point in the future, they decide to pass their own shapes in and define them as class Dinosaur: Shape and class CupCake: Shape. As part of defining those new classes, they will have to write implementations of all the methods in protocol Shape, which might be something like func getPointsIterator() -> Iterator<CGPoint>. That works just fine for classes. The calling code defines those classes, instantiates objects from them, passes them into your function. Your function must have something like a vtable (I think Swift calls it a witness table) for the Shape protocol that says "if you give me an instance of a Shape object, I can tell you exactly where to find the address of the getPointsIterator function". The instance pointer will point to a block of memory on the stack, the start of which is a pointer to the class metadata (vtables, witness tables, etc.) So the compiler can reason about how to find any given method implementation.
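To make the discussion concrete, here is a hedged sketch of that setup; the names follow the prose above, with a simple Point struct standing in for CGPoint so the snippet stays self-contained.
struct Point { var x, y: Double }

protocol Shape {
    func getPointsIterator() -> AnyIterator<Point>
}

final class Dinosaur: Shape {
    func getPointsIterator() -> AnyIterator<Point> {
        return AnyIterator([Point(x: 0, y: 0), Point(x: 5, y: 9)].makeIterator())
    }
}

final class CupCake: Shape {
    func getPointsIterator() -> AnyIterator<Point> {
        return AnyIterator([Point(x: 1, y: 1)].makeIterator())
    }
}

// The body only knows about Shape; each call goes through the protocol
// witness table of whatever concrete class was passed in.
func describe(_ shape: Shape) {
    for point in IteratorSequence(shape.getPointsIterator()) {
        print(point)
    }
}

describe(Dinosaur())
describe(CupCake())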
But what about value types? Structs and enums can have just about any format in memory, from a one byte Bool to a 500 byte complex nested struct. These are usually passed on the stack or registers on function calls for efficiency. (When Swift exactly knows the type, all code can be compiled knowing the data format and passed on the stack or in registers, etc.)
Now you can see the problem. How can Swift compile the function myFunction so it will work with any possible future value type/struct defined in any code? As I understand it, this is where "existential containers" come in.
The simplest approach would be that any function that takes parameters of one of these "existential types" (types defined just by conforming to a Protocol) must insist that the calling code "box" the value type... that it store the value in a special reference counted "box" on the heap and pass a pointer to this (with all the usual ARC retain/release/autorelease/ownership rules) to your function when the function takes a parameter of type Shape.
Then when a new, weird and wonderful, type is written by some code author in the future, compiling the methods of Shape would have to include a way to accept "boxed" versions of the type. Your myFunction would always handle these "existential types" by handling the box and everything works. I would guess that C# and Java do something like this (boxing) if they have the same problem with non class types (Int, etc.)?
The thing is that for a lot of value types, this can be very inefficient. After all, we are compiling mostly for 64 bit architectures, where each register holds 8 bytes, so a couple of registers are enough for many simple structures. So Swift came up with a compromise (again I might be a bit inaccurate on this, I'm giving my idea of the mechanism... feel free to correct). They created "existential containers" of a fixed size: a three-word inline value buffer plus a pointer to the type metadata and a pointer to the protocol witness table, i.e. five words, or 40 bytes, on a "normal" 64 bit architecture (most CPUs that run Swift these days).
If you define a struct that conforms to a protocol and its value fits in the three-word buffer (24 bytes or less), then it is stored in the existential container directly. The metadata and witness table pointers are how myFunction can find an address for any function in the Shape protocol (just like in the classes case above). If your struct/enum is larger than three words, then the buffer instead holds a pointer to a boxed, heap-allocated copy of the value. Obviously this was considered an optimum compromise, and it seems reasonable... the container is small enough to be copied around cheaply, in registers or a few stack slots.
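A rough way to observe that fixed-size container (a sketch only; the sizes assume a 64-bit platform and are implementation details rather than guarantees):
protocol P {}

struct Small: P {                // two words: fits the inline buffer
    var a = 0.0, b = 0.0
}

struct Big: P {                  // five words: too big, so it gets boxed on the heap
    var a = 0.0, b = 0.0, c = 0.0, d = 0.0, e = 0.0
}

print(MemoryLayout<Small>.size)  // 16
print(MemoryLayout<Big>.size)    // 40
print(MemoryLayout<any P>.size)  // 40: the 3-word buffer plus metadata and
                                 // witness table pointers, regardless of contents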
I think the reason the Swift team ends up mentioning "existential containers" to the wider community is because it has implications for various ways of using Swift. One obvious implication is performance: there's a sudden performance drop when using functions in this way if the structs don't fit in the inline buffer (roughly, if they are larger than three words), because they then have to be boxed on the heap.
Another, more fundamental, implication I think is that protocols can be used as parameter types like this only if they don't have associated type or Self requirements... such parameters are not generic. Otherwise you're into generic function definitions, which is different. That's why we sometimes need to change things like func myFunction(shape: Shape, reflection: Bool) -> Shape into something like func myFunction<S: Shape>(shape: S, reflection: Bool) -> S. They are implemented in very different ways under the hood.

Swift Generics... Checking conformance to protocol with associated type

I'm trying to write a generic function in Swift that takes any number, Int, Float, Double, etc. by setting the generic type to <T: Numeric>. So,
func doSomething<T: Numeric>(with foo: T, and bar: T) -> T {
    // body
}
Most of the body will work for any Numeric type, but along the way, I need to find the remainder... which means I need a different approach for FloatingPoint types vs. Integer types.
When T is an Int, this means using the modulo operator:
let remainder = foo % bar
However, when T is a Double or Float, the modulo operator doesn't work and I need to use the truncatingRemainder(dividingBy:) method:
let remainder = foo.truncatingRemainder(dividingBy: bar)
Where I'm struggling is to find a way to sift these out. In theory, I should just be able to do something like this:
var remainder: T
if T.self as FloatingPoint { // <- This doesn't work
    remainder = foo.truncatingRemainder(dividingBy: bar)
} else {
    remainder = foo % bar
}
This, of course, leads to this error, since FloatingPoint has an associated type:
error: protocol 'FloatingPoint' can only be used as a generic constraint because it has Self or associated type requirements
I understand this error... essentially, FloatingPoint is generic with a still-generic associated type for the adopting type to define.
However, I would like to know the best way to conditionally run select blocks of code that only apply to some more narrowly defined protocol than the one given by the generic parameter (T).
Specifically, is there a way to define a single generic function to handle all Numeric types, and then differentiate between FloatingPoint and integer types within it?
There are a couple of issues at play here.
Numeric is the wrong protocol if you're looking to take remainders.
Unless I am misreading the documentation for Numeric, a matrix type could reasonably conform to Numeric. If a matrix were passed into your function, you would have no real way to compute remainders because that's not a well-defined notion. Consequently, your function shouldn't be defined on all Numeric types. The solution is to define a new protocol which describes types with well-defined remainders. Unfortunately, as Alexander notes in the comments...
Swift will not allow you to extend a protocol to conform to another protocol.
There are various technical reasons for this restriction, mostly centering around difficulties when a type would conform to a protocol in multiple ways.
I think you have two reasonable solutions.
A. Make two different overloads of doSomething, one with T: FloatingPoint and the other with T: BinaryInteger. If there is too much shared code between these implementations, you could refactor doSomething into multiple functions, most of which would be defined on all of Numeric.
B. Introduce a new protocol RemainderArithmetic: Numeric which describes the operations you want (a sketch follows at the end of this answer). Then, write explicit conformances for all the particular types you want to use, e.g. extension Double: RemainderArithmetic and extension UInt: RemainderArithmetic.
Neither of these solutions is particularly appealing, but I think they have one key advantage: both make clear the particular semantics of the types you are expecting. You are not really anticipating types other than BinaryInteger and FloatingPoint types, so you shouldn't accept other types. The semantics of remainders can be extremely tricky, as evidenced by the wide range of behaviors described on the Wikipedia page for mod. Therefore, it probably isn't very natural to define a function the same way across integers and floating-point numbers. If you are certain that is what you want to do, both of these solutions make your assumptions about what types you are expecting explicit.
If you aren't satisfied by either of these solutions and can provide more details about what exactly you're trying to do, we might be able to find something else.
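For concreteness, here is a minimal sketch of option B; RemainderArithmetic and remainder(modulo:) are illustrative names, not standard-library API.
protocol RemainderArithmetic: Numeric {
    func remainder(modulo other: Self) -> Self
}

extension Int: RemainderArithmetic {
    func remainder(modulo other: Int) -> Int { return self % other }
}

extension Double: RemainderArithmetic {
    func remainder(modulo other: Double) -> Double {
        return truncatingRemainder(dividingBy: other)
    }
}

func doSomething<T: RemainderArithmetic>(with foo: T, and bar: T) -> T {
    let remainder = foo.remainder(modulo: bar)
    // ... the rest of the body, which only needs Numeric operations ...
    return remainder
}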
This doesn't sound like a good use case for a generic. You'll notice that operators like + are not generic in Swift. They use overloading, and so should you.
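For the question's example, a sketch of that overloading approach (assuming the shared work can be factored into a helper; sharedWork is a hypothetical name):
// Two overloads with the same name; the compiler picks one based on T.
func doSomething<T: BinaryInteger>(with foo: T, and bar: T) -> T {
    return sharedWork(on: foo % bar)
}

func doSomething<T: FloatingPoint>(with foo: T, and bar: T) -> T {
    return sharedWork(on: foo.truncatingRemainder(dividingBy: bar))
}

// Whatever logic both overloads have in common; both BinaryInteger and
// FloatingPoint refine Numeric, so it can live at the Numeric level.
func sharedWork<T: Numeric>(on remainder: T) -> T {
    return remainder
}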

Can type constructors be considered as types in functional programming languages?

I am approaching the Haskell programming language, and I have a background of Scala and Java developer.
I was reading the theory behind type constructors, but I cannot understand if they can be considered types. I mean, in Scala, you use the keywords class or trait to define type constructors. Think about List[T], or Option[T]. Also in Haskell, you use the same keyword data, that is used for defining new types.
So, are type constructors also types?
Let's look at an analogy: functions. In some branches of mathematics, functions are called value constructors, because that's what they do: you put one or more values in, and they construct a new value out of those.
Type constructors are exactly the same thing, except on the type level: you put one or more types in, and they construct a new type out of those. They are, in some sense, functions on the type level.
Now, to our analogy: what is the analog of the question you are asking? Well, it is this: "Can value constructors (i.e. functions) be considered as values in functional programming languages?"
And the answer is: it depends on the programming language. Now, for functional programming languages, the answer is "Yes" for almost all (if not all) of them. It depends on your definition of what a "functional programming language" is. Some people define a functional programming language as a programming language which has functions as values, so the answer will be trivially "Yes" by definition. But, some people define a functional programming language as a programming language which does not allow side-effects, and in such a language, it is not necessarily true that functions are values.
The most famous example may be John Backus' FP, from his seminal paper Can Programming Be Liberated from the von Neumann Style? – a functional style and its algebra of programs. In FP, there is a hierarchy of "function-like" things. Functions can only deal with values, and functions themselves are not values. However, there is a concept of "functionals" which are "function constructors", i.e. they can take functions (and also values) as input and/or produce functions as output, but they cannot take functionals as input and/or produce them as output.
So, FP is arguably a functional programming language, but it does not have functions as values.
Note: functions as values is also called "first-class functions" and functions that take functions as input or return them as output are called "higher-order functions".
If we look at some types:
1 :: Int
[1] :: List Int
add :: Int → Int
map :: (a → b, List a) → List b
You can see that we can easily say: any value whose type has an arrow in it is a function. Any value whose type has more than one arrow in it is a higher-order function.
Again, the same applies to type constructors, since they are really the same thing except on the type level. In some languages, type constructors can be types, in some they can't. For example, in Java and C♯, type constructors are not types. You cannot have a List<List> in C♯, for example. You can write down the type List<List> in Java, but that is misleading, since the two Lists mean different things: the first List is the type constructor, the second List is the raw type, so this is in fact not using a type constructor as a type.
What is the equivalent to our types example above?
Int :: Type
List :: Type ⇒ Type
→ :: (Type, Type) ⇒ Type
Functor :: (Type ⇒ Type) ⇒ Type
(Note how we always end up with Type. Indeed, we are only dealing with types, so we normally don't write Type but instead simply write *, pronounced "Type"):
Int :: *
List :: * ⇒ *
→ :: (*, *) ⇒ *
Functor :: (* ⇒ *) ⇒ *
So, Int is a proper type, List is a type constructor that takes one type and produces a type, → (the function type constructor) takes two types and returns a type (assuming only unary functions, e.g. using currying or passing tuples), and Functor is a type constructor, which itself takes a type constructor and returns a type.
Theses "type-types" are called kinds. Like with functions, anything with an arrow is a type constructor, and anything with more than one arrow is a higher-kinded type constructor.
And like with functions, some languages allow higher-kinded type constructors and some don't. The two languages you mention in your question, Scala and Haskell, do, but as mentioned above, Java and C♯ don't.
However, there is a complication when we look at your question:
So, are type constructors also types?
Not really, no. At least not in any language I know about. See, while you can have higher-kinded type constructors that take type constructors as input and/or return them as output, you cannot have an expression or a value or a variable or a parameter which has a type constructor as its type. You cannot have a function that takes a List or returns a List. You cannot have a variable of type Monad. But, you can have a variable of type Int.
So, clearly, there is a difference between types and type constructors.
Well, types and type constructors have a calculus of their own, and they each have kinds. If you use :k (Maybe Int) in ghci, for example, you'll get *. Now this is a proper type and it (usually) has inhabitants, in this case Nothing, Just 42, etc. * now has a more descriptive alias, Type.
Now you can look at the kind of the constructor Maybe: :k Maybe will give you * -> *. With the alias, this is Type -> Type, which is what you would expect: it takes a Type and constructs a Type. Now if you see types as sets of values, one good question is: what set of values does Maybe have? Well, none, because it is not really a type. You might attempt something like Just, but that has type a -> Maybe a, whose kind is Type, rather than the kind of Maybe, which is Type -> Type.
At least in Haskell, there is a hierarchy that can roughly be described as follows.
Terms are things that exist at run-time, values like 1, 'a', and (+), for example.
Each term has a type, like Int or Char or Int -> Int -> Int.
Each type has a kind, and all types have the same kind, namely *.
A type constructor like [], though, has kind * -> *, so it is not a type. Instead, it is a mapping from a type to a type.
There are other kinds as well, including (in addition to * and * -> *, with an example of each):
* -> * -> * (Either)
* -> (* -> *) -> * -> * (ReaderT, a monad transformer)
Constraint (Num Int)
* -> Constraint (Num; this is the kind of a type class)

The purpose of type classes in Haskell vs the purpose of traits in Scala

I am trying to understand how to think about type classes in Haskell versus traits in Scala.
My understanding is that type classes are primarily important at compile time in Haskell and no longer matter at runtime, whereas traits in Scala are important both at compile time and at run time. I want to illustrate this idea with a simple example, and I want to know if this viewpoint of mine is correct or not.
First, let us consider type classes in Haskell:
Let's take a simple example. The type class Eq.
For example, Int and Char are both instances of Eq. So it is possible to create a polymorphic List that is also an instance of Eq and can either contain Ints or Chars but not both in the same List.
My question is : is this the only reason why type classes exist in Haskell?
The same question in other words:
Type classes enable to create polymorphic types ( in this example a polymorphic List) that support operations that are defined in a given type class ( in this example the operation == defined in the type class Eq) but that is their only reason for existence, according to my understanding. Is this understanding of mine correct?
Is there any other reason why type classes exist in ( standard ) Haskell?
Is there any other use case in which type classes are useful in standard Haskell ? I cannot seem to find any.
Since Haskell's Lists are homogeneous, it is not possible to put Char and Int into the same list. So the usefulness of type classes, according to my understanding, is exhausted at compile time. Is this understanding of mine correct?
Now, let's consider the analogous List example in Scala:
Lets define a trait Eq with an equals method on it.
Now let's make Char and Int implement the trait Eq.
Now it is possible to create a List[Eq] in Scala that accepts both Chars and Ints into the same List (note that putting different types of elements into the same List is not possible in Haskell, at least not in standard Haskell 98 without extensions)!
In the case of the Haskell's List, the existence of type classes is important/useful only for type checking at compile time, according to my understanding.
In contrast, the existence of traits in Scala is important both at compile time for type checking and at run time for polymorphic dispatch on the actual runtime type of the object in the List when comparing two Lists for equality.
So, based on this simple example, I came to the conclusion that in Haskell type classes are primarily important/used at compile time, whereas Scala's traits are important/used both at compile time and at run time.
Is this conclusion of mine correct?
If not, why not ?
EDIT:
Scala code in response to n.m.'s comments:
case class MyInt(i: Int) {
  override def equals(b: Any) = i == b.asInstanceOf[MyInt].i
}

case class MyChar(c: Char) {
  override def equals(a: Any) = c == a.asInstanceOf[MyChar].c
}

object Test {
  def main(args: Array[String]): Unit = {
    val l1 = List(MyInt(1), MyInt(2), MyChar('a'), MyChar('b'))
    val l2 = List(MyInt(1), MyInt(2), MyChar('a'), MyChar('b'))
    val l3 = List(MyInt(1), MyInt(2), MyChar('a'), MyChar('c'))
    println(l1 == l2)
    println(l1 == l3)
  }
}
This prints:
true
false
I will comment on the Haskell side.
Type classes bring restricted polymorphism in Haskell, wherein a type variable a can still be quantified universally, but ranges over only a subset of all the types -- namely, the types for which an instance of the type class is available.
Why is restricted polymorphism useful? A nice example would be the equality operator
(==) :: ?????
What should its type be? Intuitively, it takes two values of the same type and returns a boolean, so:
(==) :: a -> a -> Bool -- (1)
But the typing above is not entirely honest, since it allows one to apply == to any type a, including function types!
((\x -> x + x) :: Integer -> Integer) == ((\x -> 2 * x) :: Integer -> Integer)
The above would pass type checking if (1) were the typing for (==), since both arguments are of the same type a = (Integer -> Integer). However, we cannot effectively compare two functions: well-known computability results tell us that there is no algorithm to do that in general.
So, what could we do to implement (==)?
Option 1: at run time, if a function (or any other value involving functions, such as a list of functions) is found to be passed to (==), raise an exception. This is what e.g. OCaml does. Typed programs can now "go wrong", despite the types having been checked at compile time.
Option 2: introduce a new kind of polymorphism, restricting a to the function-free types. For instance, we could have (==) :: forall-non-fun a. a -> a -> Bool so that comparing functions yields a type error. Haskell exploits type classes to obtain exactly that: the actual type is (==) :: Eq a => a -> a -> Bool, where the Eq a constraint restricts a to types having an Eq instance.
So, Haskell's type classes allow one to type (==) "honestly", ensuring no error at run time, without being overly restrictive. Of course, the power of type classes goes far beyond that but, at least in my own view, their primary purpose is to allow restricted polymorphism, in a very general and flexible way. Indeed, with type classes the programmer can define their own restrictions on universal type quantifications.

Advantages of Scala's type system

I am exploring the Scala language. One claim I often hear is that Scala has a stronger type system than Java. By this I think what people mean is that:
scalac rejects certain buggy programs which javac will compile happily, only to cause a runtime error.
Certain invariants can be encoded in a Scala program such that the compiler won't let the programmer write code that violates the condition.
Am I right in thinking so?
The main advantage of the Scala Type system is not so much being stronger but rather being far richer (see "The Scala Type System").
(Java can define some of them, and implement others, but Scala has them built-in).
See also The Myth Makers 1: Scala's "Type Types", commenting Steve Yegge's blog post, where he "disses" Scala as "Frankenstein's Monster" because "there are type types, and type type types".
Value type classes (useful for reasonably small data structures that have value semantics), used instead of primitive types (Int, Double, ...), with implicit conversion to "Rich" classes for additional methods.
Nonnullable type
Monad types
Trait types (and the mixin composition that comes with it)
Singleton object types (just define an 'object' and you have one),
Compound types (intersections of object types, to express that the type of an object is a subtype of several other types),
Functional types ((type1, …)=>returnType syntax),
Case classes (regular classes which export their constructor parameters and which provide a recursive decomposition mechanism via pattern matching),
Path-dependent types (Languages that let you nest types provide ways to refer to those type paths),
Anonymous types (for defining anonymous functions),
Self types (can be used for instance in Trait),
Type aliases, along with:
package object (introduced in 2.8)
Generic types (like Java), with a type parameter annotation mechanism to control the subtyping behavior of generic types,
Covariant generic types: The annotation +T declares type T to be used only in covariant positions. Stack[T] is a subtype of Stack[S] if T is a subtype of S.
Contravariant generic types: -T would declare T to be used only in contravariant positions.
Bounded generic types (even though Java supports some part of it),
Higher kinded types, which allow one to express more advanced type relationships than is possible with Java Generics,
Abstract types (the alternative to generic type),
Existential types (used in Scala like the Java wildcard type),
Implicit types (see "The awesomeness of Scala is implicit",
View bounded types, and
Structural types, for specifying a type by describing the characteristics of the desired type (duck typing).
The main safety problem with Java relates to variance. Basically, a programmer can use incorrect variance declarations that may result in exceptions being thrown at run-time in Java, while Scala will not allow it.
In fact, the very fact that Java's Array is co-variant is already a problem, since it allows incorrect code to be generated. For instance, as exemplified by sepp2k:
String[] strings = {"foo"};
Object[] objects = strings;
objects[0] = new Object();
Then, of course, there are raw types in Java, which allow all sorts of things.
Also, though Scala has it as well, there's casting. The Java API is rich in type casts, and there's no idiom like Scala's case x: X => // x is now safely cast. Sure, one can use instanceof to accomplish that, but there's no incentive to do it. In fact, Scala's asInstanceOf is intentionally verbose.
These are the things that make Scala's type system stronger. It is also much richer, as VonC shows.