Sets, Functors and Eq confusion - scala

A discussion came up at work recently about Sets, which in Scala support the zip method and how this can lead to bugs, e.g.
scala> val words = Set("one", "two", "three")
scala> words zip (words map (_.length))
res1: Set[(java.lang.String, Int)] = Set((one,3), (two,5))
I think it's pretty clear that Sets shouldn't support a zip operation, since the elements are not ordered. However, it was suggested that the problem is that Set isn't really a functor, and shouldn't have a map method. Certainly, you can get yourself into trouble by mapping over a set. Switching to Haskell now,
data AlwaysEqual a = Wrap { unWrap :: a }
instance Eq (AlwaysEqual a) where
_ == _ = True
instance Ord (AlwaysEqual a) where
compare _ _ = EQ
and now in ghci
ghci> import Data.Set as Set
ghci> let nums = Set.fromList [1, 2, 3]
ghci> Set.map unWrap $ Set.map Wrap $ nums
fromList [3]
ghci> Set.map (unWrap . Wrap) nums
fromList [1, 2, 3]
So Set fails to satisfy the functor law
fmap f . fmap g = fmap (f . g)
It can be argued that this is not a failing of the map operation on Sets, but a failing of the Eq instance that we defined, because it doesn't respect the substitution law, namely that for two instances of Eq on A and B and a mapping f : A -> B then
if x == y (on A) then f x == f y (on B)
which doesn't hold for AlwaysEqual (e.g. consider f = unWrap).
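To make the law concrete, here it is as a checkable Haskell predicate (the name substitutionLaw is mine, not from any library):
substitutionLaw :: (Eq a, Eq b) => (a -> b) -> a -> a -> Bool
substitutionLaw f x y = x /= y || f x == f y
-- Fails for AlwaysEqual with f = unWrap:
-- Wrap 1 == Wrap 2, but unWrap (Wrap 1) /= unWrap (Wrap 2),
-- so substitutionLaw unWrap (Wrap 1) (Wrap 2) is False.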
Is the substitution law a sensible law for the Eq type that we should try to respect? Certainly, other equality laws are respected by our AlwaysEqual type (symmetry, transitivity and reflexivity are trivially satisfied), so substitution is the only place where we can get into trouble.
To me, substitution seems like a very desirable property for the Eq class. On the other hand, some comments on a recent Reddit discussion include
"Substitution seems stronger than necessary, and is basically quotienting the type, putting requirements on every function using the type."
-- godofpumpkins
"I also really don't want substitution/congruence since there are many legitimate uses for values which we want to equate but are somehow distinguishable."
-- sclv
"Substitution only holds for structural equality, but nothing insists Eq is structural."
-- edwardkmett
These three are all pretty well known in the Haskell community, so I'd be hesitant to go against them and insist on substitutability for my Eq types!
Another argument against Set being a Functor - it is widely accepted that being a Functor allows you to transform the "elements" of a "collection" while preserving the shape. For example, this quote on the Haskell wiki (note that Traversable is a generalization of Functor)
"Where Foldable gives you the ability to go through the structure processing the elements but throwing away the shape, Traversable allows you to do that whilst preserving the shape and, e.g., putting new values in."
"Traversable is about preserving the structure exactly as-is."
and in Real World Haskell
"...[A] functor must preserve shape. The structure of a collection should not be affected by a functor; only the values that it contains should change."
Clearly, any functor instance for Set has the possibility to change the shape, by reducing the number of elements in the set.
But it seems as though Sets really should be functors (ignoring the Ord requirement for the moment - I see that as an artificial restriction imposed by our desire to work efficiently with sets, not an absolute requirement for any set. For example, sets of functions are a perfectly sensible thing to consider. In any case, Oleg has shown how to write efficient Functor and Monad instances for Set that don't require an Ord constraint). There are just too many nice uses for them (the same is true for the non-existent Monad instance).
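For a rough flavour of how the Ord constraint can be deferred (this is a crude sketch of my own, not Oleg's actual construction, and it does no intermediate deduplication), one can deep-embed the operations and demand Ord only at observation time:
{-# LANGUAGE GADTs #-}
import qualified Data.Set as S
data SetM a where
  Prim :: S.Set a -> SetM a                  -- an actual set
  Bind :: SetM x -> (x -> SetM a) -> SetM a  -- a deferred bind
instance Functor SetM where
  fmap f m = Bind m (Prim . S.singleton . f)
instance Applicative SetM where
  pure = Prim . S.singleton
  mf <*> mx = Bind mf (\f -> fmap f mx)
instance Monad SetM where
  (>>=) = Bind
-- No Ord is needed to build a SetM; it is only required here, at the end:
elems :: SetM a -> [a]
elems (Prim s)   = S.toList s
elems (Bind m f) = concatMap (elems . f) (elems m)
run :: Ord a => SetM a -> S.Set a
run = S.fromList . elems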
Can anyone clear up this mess? Should Set be a Functor? If so, what does one do about the potential for breaking the Functor laws? What should the laws for Eq be, and how do they interact with the laws for Functor and the Set instance in particular?

Another argument against Set being a Functor - it is widely accepted that being a Functor allows you to transform the "elements" of a "collection" while preserving the shape. [...] Clearly, any functor instance for Set has the possibility to change the shape, by reducing the number of elements in the set.
I'm afraid that this is a case of taking the "shape" analogy as a defining condition when it is not. Mathematically speaking, there is such a thing as the power set functor. From Wikipedia:
Power sets: The power set functor P : Set → Set maps each set to its power set and each function f : X → Y to the map which sends U ⊆ X to its image f(U) ⊆ Y.
The function P(f) (fmap f in the power set functor) does not preserve the size of its argument set, yet this is nonetheless a functor.
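In Haskell terms, the arrow part of P is exactly Data.Set's map, which computes the image of a set under a function (the Ord constraint is an artifact of the balanced-tree representation, not of the functor itself):
import qualified Data.Set as Set
pmap :: Ord b => (a -> b) -> Set.Set a -> Set.Set b
pmap = Set.map
-- e.g. pmap (`div` 2) (Set.fromList [1,2,3]) == Set.fromList [0,1]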
If you want an ill-considered intuitive analogy, we could say this: in a structure like a list, each element "cares" about its relationship to the other elements, and would be "offended" if a false functor were to break that relationship. But a set is the limiting case: a structure whose elements are indifferent to each other, so there is very little you can do to "offend" them; the only offense would be for a false functor to map a set containing a given element to a result that doesn't include that element's "voice."
(Ok, I'll shut up now...)
EDIT: I truncated the following bits when I quoted you at the top of my answer:
For example, this quote on the Haskell wiki (note that Traversable is a generalization of Functor)
"Where Foldable gives you the ability to go through the structure processing the elements but throwing away the shape, Traversable allows you to do that whilst preserving the shape and, e.g., putting new values in."
"Traversable is about preserving the structure exactly as-is."
Here I'd remark that Traversable is a kind of specialized Functor, not a "generalization" of it. One of the key facts about any Traversable (or, actually, about Foldable, which Traversable extends) is that it requires that the elements of any structure have a linear order—you can turn any Traversable into a list of its elements (with Foldable.toList).
Another, less obvious fact about Traversable is that the following functions exist (adapted from Gibbons & Oliveira, "The Essence of the Iterator Pattern"):
-- | A "shape" is a Traversable structure with "no content,"
-- i.e., () at all locations.
type Shape t = t ()
-- | "Contents" without a shape are lists of elements.
type Contents a = [a]
shape :: Traversable t => t a -> Shape t
shape = fmap (const ())
contents :: Traversable t => t a -> Contents a
contents = Foldable.toList
-- | This function reconstructs any Traversable from its Shape and
-- Contents. Law:
--
-- > reassemble (shape xs) (contents xs) == Just xs
--
-- See Gibbons & Oliveira for implementation. Or do it as an exercise.
-- Hint: use the State monad...
--
reassemble :: Traversable t => Shape t -> Contents a -> Maybe (t a)
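For completeness, here is one possible solution to that exercise, following the State hint (a sketch of my own; Gibbons & Oliveira's presentation differs):
import Control.Monad.State
reassemble :: Traversable t => Shape t -> Contents a -> Maybe (t a)
reassemble sh xs
  | null leftover = sequenceA tma   -- Nothing if we ran out of contents
  | otherwise     = Nothing         -- too many contents left over
  where
    (tma, leftover) = runState (traverse refill sh) xs
    -- consume one element of the contents per () position, if any remain
    refill () = state $ \s -> case s of
      []     -> (Nothing, [])
      (y:ys) -> (Just y, ys)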
A Traversable instance for sets would violate the proposed law, because all non-empty sets would have the same Shape—the set whose Contents is [()]. From this it should be easy to prove that whenever you try to reassemble a set you would only ever get the empty set or a singleton back.
Lesson? Traversable "preserves shape" in a very specific, stronger sense than Functor does.

Set is "just" a functor (not a Functor) from the subcategory of Hask where Eq is "nice" (i.e. the subcategory where congruence, substitution, holds). If constraint kinds were around from way back then perhaps set would be a Functor of some kind.

Well, Set can be treated as a covariant functor and as a contravariant functor; usually it's treated as a covariant functor. And for it to behave well with respect to equality, one has to make sure that it does so whatever the implementation.
Regarding Set.zip - it is nonsense, as is Set.head (which Scala also has). It should not exist.

Related

Why is FunctionK not the same as a Natural Transformation

The cats documentation on FunctionK contains:
Thus natural transformation can be implemented in terms of FunctionK. This is why a parametric polymorphic function FunctionK[F, G] is sometimes referred as a natural transformation. However, they are two different concepts that are not isomorphic.
What's an example of a FunctionK which isn't a Natural Transformation (or vice versa)?
It's not clear to me whether this statement is implying that F and G need to have Functor instances for a FunctionK to be a Natural Transformation.
The Wikipedia article on Natural Transformations says that the Commutative Diagram can be written with a Contravariant Functor instead of a Covariant Functor, which to me implies that a Functor instance isn't required?
Alternatively, the statement could be referring to impure FunctionKs, although I'd kind of expect analogies to category theory breaking down in the presence of impurity to be a given, and not to need explicit stating?
When you have some objects (from CT) in one category and some objects in another category, and you are able to come up with a way to show that each object and arrow between objects has a correspondence in the latter, then you can say that there is a functor from one to the other. In less strict language: you can say that there is a functor from A to B if you can find a "subgraph" in B that has the same shape as A.
In such case you can "zoom out": draw a point, call it object representing category A, draw another, call it object representing B, and draw an arrow and call it functor.
If there are many ways you can do it, you can have multiple functors. And with more categories you can combine these functors the way you compose arrows; in this "zoomed out" world they look like normal objects and arrows, and you can form categories with them. If you can then find a mapping between these functors themselves, then this is a natural transformation.
When it comes to functional programming you don't work in such generic framework. Usually, you assume that:
object is a type
arrow is a function
to define a category you almost always would have to use a generic type, or else it would be too specific to be useful as a general-purpose library (an isomorphism between e.g. the possible transitions of one enum and the transition states of another enum could be a functor, but it wouldn't necessarily fit some generic interface)
since programming languages cannot let you define a generic mapping between two arbitrary types, the functors that you'll see will be almost exclusively Id ~> F: you can lift a function A => B into List[A] => List[B], Future[A] => Future[B] and so on easily (this proves the existence of an F[A] -> F[B] arrow for a given A -> B arrow, and if A and B are generic you have provided a proof for all arrows from Id), but try finding something more complex than "given A, add a wrapper around it to get F[A]" and it's a challenge
similarly, the only natural transformations you'll see will be from Id ~> F into Id ~> G, that is, "given A, change the wrapper type from F[A] to G[A]", because you have a guarantee that there is the same A hidden somehow in both F and G and you don't have to deal with modifying it (only with modifying the wrapper)
The latter is exactly a FunctionK, or just a polymorphic function in Scala 3: [A] => F[A] => G[A]. It is a concept from type theory rather than from CT (although in mathematics a lot of concepts map into each other, as here, where FunctionK maps to a natural transformation in the category whose objects represent types and whose arrows are the functions between them).
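For comparison, the Haskell rendering of the same idea is a rank-2 polymorphic function (the ~> synonym is a common convention, not a standard library export):
{-# LANGUAGE RankNTypes, TypeOperators #-}
type f ~> g = forall x. f x -> g x
-- e.g. rewrapping Maybe into a list without touching the A inside:
maybeToList' :: Maybe ~> []
maybeToList' Nothing  = []
maybeToList' (Just x) = [x]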
Category theory isn't so restrictive. As a matter of fact, it isn't even rooted in computer science and was not created with functional programmers in mind. Let's create a non-FunctionK natural transformation which models some operation on "state machine" implementations:
object is a state in state machine of sort (let's say enum value)
arrow is a legal transition from one state to another (let's say you can model it as a pair of enum values, and it somehow incorporates transitivity)
when you have 2 models of state machines with different enums, BUT you can take one model (one enum and its allowed pairs) and translate it to another model (another enum and its pairs), you have a functor - one that doesn't implement cats.Functor, but a functor nonetheless
let's say you would model it with some class Translate[Enum1, Enum2] {...} interface
let's say that this translation also extends the model with some functionality, so it's actually Translate[Enum1, Enum2 | ExtensionX]
now, build another extension Translate[Enum1, Enum2 | ExtensionY]
if you can somehow convert Translate[A, B | ExtensionX] into Translate[A, B | ExtensionY] for a whole category of enums (described as A and B) then this would be a natural transformation
notice that it would not fit FunctionK just like Translate doesn't fit Functor
It's a bit of a stretched example, and hardly anyone implementing an isomorphism between state machines would reach for a functor as a way to describe it, but it should show that natural transformation is not FunctionK - and why it's so rare to see functors and natural transformations other than Id ~> F lifts and (Id ~> F) ~> (Id ~> G) rewrappings.
PS. When it comes to Scala, you can also meet CT used like this: an object is a type, an arrow is a subtyping relationship; thus covariant functors and contravariant functors appear in type parameters. Since "A is a subtype of B" translates to "A can always be upcasted to B", these 2 interpretations of functors will often be mashed together - something along the lines of "don't define map if you cannot upcast and don't define contramap if you cannot downcast the parameter" (see also the narrow and widen operations in Cats).
PS2. The authors might be defending against hardcore CT fans: from the point of view of CT, Kleisli and ReaderT are 2 different things, yet in Cats they are combined into a single Kleisli type for convenience. I saw some people complaining about it, so maybe the authors of the documentation felt that they needed this disclaimer.
You can write down FunctionK-instances for things that aren't functors at all, neither covariant nor contravariant.
For example, given
type F[X] = X => X
you could implement a FunctionK[F, F] by
new FunctionK[F, F] {
  def apply[X](f: F[X]): F[X] = f andThen f
}
Here, the F cannot be considered to be a functor, because X appears with both variances. Thus, you get something that's certainly a FunctionK, but the question whether it's a natural transformation isn't even valid to begin with.
Note that this example does not depend on whether you take the general CT-definition or the narrow FP-definition of what a "functor" is, the mapping F is simply not functorial.
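The same counterexample transcribes directly into Haskell (the newtype name F is arbitrary):
newtype F x = F (x -> x)
-- the FunctionK[F, F] analogue: polymorphic in x, yet F is not a functor,
-- because x occurs both covariantly and contravariantly
twice :: F x -> F x
twice (F f) = F (f . f)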

What is the advantage of Option/Maybe Monad over Functor?

I understand the advantage of the IO monad and the List monad over Functor; however, I don't understand the advantage of the Option/Maybe monad over Functor.
Is that simply the integration of types of the language?
or
What is the advantage of Option/Maybe Monad over Functor in specific use?
PS. Asking about the advantage in specific use is not opinion-based, because if there is one, it can be pointed out without any subjective aspect.
PS.PS. Some members here are eager to insist, repeatedly, that
Option is both a functor and a monad?
should be the answer, or that this Q&A is a duplicate, but it actually isn't.
I already know the basics, such as
Every monad is an applicative functor and every applicative functor is a functor
from the accepted answer there, and that is not what I'm asking here.
There is an excellent answer here that is not included in the previous Q&A.
The aspect, detail, and resolution of each question are quite different, so please avoid "bundling" different things together in a rough manner here.
Let's look at the types.
fmap :: Functor f => (a -> b) -> (f a -> f b)
(<*>) :: Applicative f => f (a -> b) -> (f a -> f b)
flip (>>=) :: Monad f => (a -> f b) -> (f a -> f b)
For a functor, we can apply an ordinary function a -> b to a value of type f a to get a value of type f b. The function never gets a say in what happens to the f part, only the inside. Thinking of functors as sort of box-like things, the function in fmap never sees the box itself, just the inside, and the value gets taken out and put back into the exact same box it started in.
An applicative functor is slightly more powerful. Now, we have a function which is also in a box. So the function gets a say in the f part, in a limited sense. The function and the input each have f parts (which are independent of each other) which get combined into the result.
A monad is even more powerful. Now, the function does not, a priori, have an f part. The function takes a value and produces a new box. Whereas in the Applicative case, our function's box and our value's box were independent, in the Monad case, the function's box can depend on the input value.
Now, what's all this mean? You've asked me to focus on Maybe, so let's talk about Maybe in the concrete.
fmap :: (a -> b) -> (Maybe a -> Maybe b)
(<*>) :: Maybe (a -> b) -> (Maybe a -> Maybe b)
flip (>>=) :: (a -> Maybe b) -> (Maybe a -> Maybe b)
As a reminder, Maybe looks like this.
data Maybe a = Nothing | Just a
A Maybe a is a value which may or may not exist. From a functor perspective, we'll generally think of Nothing as some form of failure and Just a as a successful result of type a.
Starting with fmap, the Functor instance for Maybe allows us to apply a function to the inside of the Maybe, if one exists. The function gets no say over the success or failure of the operation: A failed Maybe (i.e. a Nothing) must remain failed, and a successful one must remain successful (obviously, we're glossing over undefined and other denotational semantic issues here; I'm assuming that the only way a function can fail is with Nothing).
Now (<*>), the applicative operator, takes a Maybe (a -> b) and a Maybe a. Either of those two might have failed. If either of them did, then the result is Nothing, and only if the two both succeeded do we get a Just as our result. This allows us to curry operations. Concretely, if I have a function of the form g :: a -> b -> c and I have values ma :: Maybe a and mb :: Maybe b, then we might want to apply g to ma and mb. But when we start to do that, we have a problem.
fmap g ma :: Maybe (b -> c)
Now we've got a function that may or may not exist. We can't fmap that over mb, because a nonexistent function (a Nothing) can't be an argument to fmap. The problem is that we have two independent Maybe values (ma and mb in our example) which are fighting, in some sense, for control. The result should only exist if both are Just. Otherwise, the result should be Nothing. It's sort of a Boolean "and" operation, in that if any of the intermediates fail, then the whole calculation fails. (Note: If you're looking for a Boolean "or", where any individual success can recover from prior failure, then you're looking for Alternative)
So we write
(fmap g ma) <*> mb :: Maybe c
or, using the more convenient synonym Haskell provides for this purpose,
g <$> ma <*> mb :: Maybe c
Now, the key word in the above situation is independent. ma and mb have no say over the other's success or failure. This is good in many cases, because code like this can often be parallelized (there are very efficient command line argument parsing libraries that exploit just this property of Applicative). But, obviously, it's not always what we want.
Enter Monad. In the Maybe monad, the provided function produces a value of type Maybe b based on the input a. The Maybe part of the a and of the b are no longer independent: the latter can depend directly on the former.
For example, take the classic example of Maybe: a square root function. We can't take the square root of a negative number (let's assume we're not working with complex numbers here), so our hypothetical square root looks like
import Prelude hiding (sqrt)
import qualified Prelude
sqrt :: Double -> Maybe Double
sqrt x | x < 0     = Nothing
       | otherwise = Just (Prelude.sqrt x)
Now, suppose we've got some number r. But r isn't just a number. It came from earlier in our computation, and our computation might have failed. Maybe it did a square root earlier, or tried to divide by zero, or something else entirely, but it did something that has some chance of producing a Nothing. So r is Maybe Double, and we want to take its square root.
Obviously, if r is already Nothing, then its square root is Nothing; we can't possibly take a square root if we've already failed to compute everything else. On the other hand, if r is a negative number, then sqrt is going to fail and produce Nothing despite the fact that r is itself Just. So what we really want is
case r of
  Nothing -> Nothing
  Just r' -> sqrt r'
And this is exactly what the Monad instance for Maybe does. That code is equivalent to
r >>= sqrt
The result of this entire computation (and, namely, whether or not it is Nothing or Just) depends not just on whether or not r is Nothing but also on r's actual value. Two different Just values of r can produce success or failure depending on what sqrt does. We can't do that with just a Functor; we can't even do that with Applicative. It takes a Monad.
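A quick GHCi session (assuming the sqrt defined above, with Prelude's sqrt hidden) makes that dependence visible:
ghci> Just 4.0 >>= sqrt
Just 2.0
ghci> Just (-4.0) >>= sqrt
Nothing
ghci> Nothing >>= sqrt
Nothing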

When should one use applicatives over monads?

I’ve been using Scala at work and to understand Functional Programming more deeply I picked Graham Hutton’s Programming in Haskell (love it :)
In the chapter on Monads I got my first look into the concept of Applicative Functors (AFs)
In my (limited) professional-Scala capacity I’ve never had to use AFs and have always written code that uses Monads. I’m trying to distill the understanding of “when to use AFs” and hence the question. Is this insight correct:
If all your computations are independent and parallelizable (i.e., the result of one doesn't determine the output of another), your needs would be better served by an AF if the output needs to be piped to a pure function without effects. If, however, you have even a single dependency, AFs won't help and you'll be forced to use Monads. If the output needs to be piped to a function with effects (e.g., returning Maybe) you'll need Monads.
For example, if you have “monadic” code like so:
val result = for {
  x <- callServiceX(...)
  y <- callServiceY(...) // not dependent on x
} yield f(x, y)
It’s better to do something like (pseudo-AF syntax for scala where |#| is like a separator between parallel/asynch calls).
val result = (callServiceX(...) |#| callServiceY(...)).f(_,_)
If f == pure and callService* are independent AFs will serve you better
If f has effects i.e., f(x,y): Option[Response] you’ll need Monads
If you have callServiceX(...), y <- callServiceY(...), callServiceZ(y), i.e., there is even a single dependency in the chain, use Monads.
Is my understanding correct? I know there’s a lot more to AFs/Monads and I believe I understand the advantages of one over the other (for the most part). What I want to know is the decision making process of deciding which one to use in a particular context.
There is not really a decision to be made here: always use the Applicative interface, unless it is too weak. [1]
It's the essential tension of abstraction strength: more computations can be expressed with Monad; computations expressed with Applicative can be used in more ways.
You seem to be mostly correct about the conditions where you need to use Monad. I'm not sure about this one:
If f has effects i.e. f(x,y) : Option[Response] you'll need Monads.
Not necessarily. What is the functor in question here? There is nothing stopping you from creating an F[Option[X]] if F is the applicative. But just as before, you won't be able to make further decisions in F depending on whether the Option succeeded or not -- the whole "call tree" of F actions must be knowable without computing any values.
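To illustrate in Haskell (the helper name tryBoth is mine): with Data.Functor.Compose, the outer applicative f keeps its fixed call tree while each result may still fail in Maybe:
import Data.Functor.Compose (Compose(..))
tryBoth :: Applicative f => f (Maybe a) -> f (Maybe b) -> f (Maybe (a, b))
tryBoth fa fb = getCompose ((,) <$> Compose fa <*> Compose fb)
Both f actions still run unconditionally; only a Monad would let the second be skipped based on the first's result.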
[1] Readability concerns aside, that is. Monadic code will probably be more approachable to people from traditional backgrounds because of its imperative look.
I think you'll need to be a little cautious about terms like "independent" or "parallelizable" or "dependency". For example, in the IO monad, consider the computation:
foo :: IO (String, String)
foo = do
  line1 <- getLine
  line2 <- getLine
  return (line1, line2)
The first and second lines are not independent or parallelizable in the usual sense. The second getLine's result is affected by the action of the first getLine through their shared external state (i.e., the first getLine reads a line, implying the second getLine will not read that same line but will rather read the next line). Nonetheless, this action is applicative:
foo = (,) <$> getLine <*> getLine
As a more realistic example, a monadic parser for the expression 3 + 4 might look like:
expr :: Parser Expr
expr = do
  x <- factor
  op <- operator
  y <- factor
  return $ x `op` y
The three actions here are interdependent. The success of the first factor parser determines whether or not the others will be run, and its behavior (e.g., how much of the input stream it absorbs) clearly affects the results of the other parsers. It would not be reasonable to consider these actions as operating "in parallel" or being "independent". Still, it's an applicative action:
expr = factor <**> operator <*> factor
Or, consider this State Int action:
bar :: Int -> Int -> State Int Int
bar x y = do
  put (x + y)
  z <- gets (2*)
  return z
Clearly, the result of the gets (2*) action depends on the computation performed in the put (x + y) action. But, again, this is an applicative action:
bar x y = put (x + y) *> gets (2*)
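Tracing it by hand (assuming the bar above is in scope):
check :: (Int, Int)
check = runState (bar 1 2) 0
-- put (1 + 2) sets the state to 3; gets (2*) then reads 3 and returns 6,
-- so check == (6, 3)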
I'm not sure that there's a really straightforward way of thinking about this intuitively. Roughly, if you think of a monadic action/computation m a as having "monadic structure" m as well as a "value structure" a, then applicatives keep the monadic and value structures separate. For example, the applicative computation:
λ> [(1+),(10+)] <*> [3,4,5]
[4,5,6,13,14,15]
has a monadic (list) structure whereby we always have:
[f,g] <*> [a,b,c] = [f a, f b, f c, g a, g b, g c]
regardless of the actual values involved. Therefore, the resulting list length is the product of the lengths of both "input" lists, the first element of the result involves the first elements of the "input" lists, etc. It also has a value structure whereby the value 4 in the result clearly depends on the value (1+) and the value 3 in the inputs.
A monadic computation, on the other hand, permits a dependency of the monadic structure on the value structure, so for example in:
quux :: [Int]
quux = do
  n <- [1,2,3]
  drop n [10..15]
we can't write down the structural list computation independent of the values. The list structure (e.g., the length of the final list) is dependent on the value level data (the actual values in the list [1,2,3]). This is the kind of dependency that requires a monad instead of an applicative.
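Expanding quux by hand shows the values steering the list structure:
quuxExpanded :: [Int]
quuxExpanded = drop 1 [10..15] ++ drop 2 [10..15] ++ drop 3 [10..15]
-- == [11,12,13,14,15] ++ [12,13,14,15] ++ [13,14,15], of length 5 + 4 + 3 = 12,
-- a length you cannot know without looking at the values 1, 2, 3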

monad bind operation on a list not intuitive to follow

I find the Option Monad to be intuitive to understand, while List is not.
Some(1) >>= { x => Some(x+1) }
M a -> (a -> M b) -> M b
if I extract the value from Some(1), I know it is 1
but in the list case
List(3,4,5) flatMap { x=> List(x,-x) }
if I extract the values from the List, what do I get? How do I make the understanding process intuitive?
The intuition behind an Option or Maybe is actually very similar to the List monad. The main difference is that List is non-deterministic - we don't know how many values we will get - whereas with Option it's always one value on success and zero on failure. An empty list is considered a failure.
I think this piece describes it quite well:
For lists, monadic binding involves joining together a set of calculations for each value in the list. When used with lists, the signature of >>= becomes:
(>>=) :: [a] -> (a -> [b]) -> [b]
That is, given a list of a's and a function that maps an a onto a list of b's, binding applies this function to each of the a's in the input and returns all of the generated b's concatenated into a list.
And an example of list implementation:
instance Monad [] where
  return x = [x]
  xs >>= f = concat (map f xs)
  fail _ = []
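With that implementation, the example from the question unfolds mechanically:
-- [3,4,5] >>= \x -> [x, -x]
--   = concat (map (\x -> [x, -x]) [3,4,5])
--   = concat [[3,-3], [4,-4], [5,-5]]
--   = [3,-3,4,-4,5,-5]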
Sorry for putting Haskell into Scala answer, but that was the resource I used to understand this stuff.
Scala's flatMap is not exactly Haskell's bind >>=, but it's quite close. So what does it all mean?
Imagine a practical situation where you have a list of clients, List[Client]; you can bind them to a single list of orders, List[Order], which will be automatically flattened for you with flatMap or >>=. If you used map instead, you would get a List[List[Order]]. In practice you provide the function for >>= to use, similarly to how you provide a function to fold - you decide how data has to be generated/aggregated/etc. What bind does for you is provide a general pattern for combining two monadic values, and for each type of monad the implementation will be unique.
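In Haskell terms (with hypothetical Client and Order types, made up for illustration):
newtype Order = Order String
data Client = Client { clientOrders :: [Order] } -- hypothetical
allOrders :: [Client] -> [Order]
allOrders clients = clients >>= clientOrders
-- fmap clientOrders clients :: [[Order]] -- the unflattened version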
You might prefer to look at it as multiple levels of abstraction (from more general to less):
Monad has bind operation that will combine two monadic values into one: (>>=) :: m a -> (a -> m b) -> m b.
List as a monad instance implements bind with something like this: xs >>= f = concat (map f xs) - map the function over all elements and concatenate results into a single list.
You provide an implementation of the function f for the bind function depending on your needs (the clients -> orders example).
Once you know how bind behaves for your monad instances (List, Option) you can think in terms of just using it and "forget" about actual implementation.

What's the difference between a lens and a partial lens?

A "lens" and a "partial lens" seem rather similar in name and in concept. How do they differ? In what circumstances do I need to use one or the other?
Tagging Scala and Haskell, but I'd welcome explanations related to any functional language that has a lens library.
To describe partial lenses—which I will henceforth call, according to the Haskell lens nomenclature, prisms (excepting that they're not! See the comment by Ørjan)—I'd like to begin by taking a different look at lenses themselves.
A lens Lens s a indicates that given an s we can "focus" on a subcomponent of s at type a, viewing it, replacing it, and (if we use the lens family variation Lens s t a b) even changing its type.
One way to look at this is that Lens s a witnesses an isomorphism, an equivalence, between s and the tuple type (r, a) for some unknown type r.
Lens s a ====== exists r . s ~ (r, a)
This gives us what we need, since we can pull the a out, replace it, and then run things back through the equivalence backward to get a new s with our updated a.
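We can transcribe that existential claim into a naive (non-library) Haskell sketch:
{-# LANGUAGE ExistentialQuantification #-}
data Lens s a = forall r. Lens (s -> (r, a)) ((r, a) -> s)
-- viewing discards the residue r; setting reuses it
view' :: Lens s a -> s -> a
view' (Lens split _) = snd . split
set' :: Lens s a -> a -> s -> s
set' (Lens split unsplit) a s = unsplit (fst (split s), a)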
Now let's take a minute to refresh our high school algebra via algebraic data types. Two key operations in ADTs are multiplication and summation. We write the type a * b when we have a type consisting of items which have both an a and a b and we write a + b when we have a type consisting of items which are either a or b.
In Haskell we write a * b as (a, b), the tuple type. We write a + b as Either a b, the either type.
Products represent bundling data together, sums represent bundling options together. Products can represent the idea of having many things only one of which you'd like to choose (at a time) whereas sums represent the idea of failure because you were hoping to take one option (on the left side, say) but instead had to settle for the other one (along the right).
Finally, sums and products are categorical duals. They fit together and having one without the other, as most PLs do, puts you in an awkward place.
So let's take a look at what happens when we dualize (part of) our lens formulation above.
exists r . s ~ (r + a)
This is a declaration that s is either a type a or some other thing r. We've got a lens-like thing that embodies the notion of option (and of failure) deep at its core.
This is exactly a prism (or partial lens)
Prism s a ====== exists r . s ~ (r + a)
exists r . s ~ Either r a
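And dually, the same naive transcription for prisms:
{-# LANGUAGE ExistentialQuantification #-}
data Prism s a = forall r. Prism (s -> Either r a) (Either r a -> s)
-- matching may fail with the residue r; building always succeeds
preview' :: Prism s a -> s -> Maybe a
preview' (Prism match _) = either (const Nothing) Just . match
review' :: Prism s a -> a -> s
review' (Prism _ build) = build . Right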
So how does this work concerning some simple examples?
Well, consider the prism which "unconses" a list:
uncons :: Prism [a] (a, [a])
it's equivalent to this
uncons :: exists r . [a] ~ (r + (a, [a]))
and it's relatively obvious what r entails here: total failure since we have an empty list!
To substantiate the type a ~ b we need to write a way to transform an a into a b and a b into an a such that they invert one another. Let's write that in order to describe our prism via the mythological function
prism :: (s ~ exists r . Either r a) -> Prism s a
uncons = prism (iso fwd bck) where
  fwd []     = Left () -- failure!
  fwd (a:as) = Right (a, as)
  bck (Left ())       = []
  bck (Right (a, as)) = a:as
This demonstrates how to use this equivalence (at least in principle) to create prisms and also suggests that they ought to feel really natural whenever we're working with sum-like types such as lists.
A lens is a "functional reference" that allows you to extract and/or update a generalized "field" in a larger value. For an ordinary, non-partial lens that field is always required to be there, for any value of the containing type. This presents a problem if you want to look at something like a "field" which might not always be there. For example, in the case of "the nth element of a list" (as listed in the Scalaz documentation #ChrisMartin pasted), the list might be too short.
Thus, a "partial lens" generalizes a lens to the case where a field may or may not always be present in a larger value.
There are at least three things in the Haskell lens library that you could think of as "partial lenses", none of which corresponds exactly to the Scala version:
An ordinary Lens whose "field" is a Maybe type.
A Prism, as described by @J. Abrahamson.
A Traversal.
They all have their uses, but the first two are too restricted to include all cases, while Traversals are "too general". Of the three, only Traversals support the "nth element of list" example.
For the "Lens giving a Maybe-wrapped value" version, what breaks is the lens laws: to have a proper lens, you should be able to set it to Nothing to remove the optional field, then set it back to what it was, and then get back the same value. This works fine for a Map say (and Control.Lens.At.at gives such a lens for Map-like containers), but not for a list, where deleting e.g. the 0th element cannot avoid disturbing the later ones.
A Prism is in a sense a generalization of a constructor (approximately case class in Scala) rather than a field. As such the "field" it gives when present should contain all the information to regenerate the whole structure (which you can do with the review function.)
A Traversal can do "nth element of a list" just fine, in fact there are at least two different functions ix and element that both work for this (but generalize slightly differently to other containers).
Thanks to the typeclass magic of lens, any Prism or Lens automatically works as a Traversal, while a Lens giving a Maybe-wrapped optional field can be turned into a Traversal of a plain optional field by composing with traverse.
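A small concrete example of that last point, using the lens library (the User record is made up for illustration):
import Control.Lens
import Data.Char (toUpper)
data User = User { _nickname :: Maybe String } deriving Show
-- a plain van Laarhoven lens onto the optional field
nickname :: Lens' User (Maybe String)
nickname f (User n) = User <$> f n
-- composing with traverse yields a Traversal of the plain field:
-- nickname . traverse :: Traversal' User String
shout :: User -> User
shout = over (nickname . traverse) (map toUpper)
-- shout (User (Just "bob")) == User (Just "BOB")
-- shout (User Nothing) == User Nothing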
However, a Traversal is in some sense too general, because it is not restricted to a single field: A Traversal can have any number of "target" fields. E.g.
elements odd
is a Traversal that will happily go through all the odd-indexed elements of a list, updating and/or extracting information from them all.
In theory, you could define a fourth variant (the "affine traversals" @J. Abrahamson mentions) that I think might correspond more closely to Scala's version, but due to a technical reason outside the lens library itself they would not fit well with the rest of the library - you would have to explicitly convert such a "partial lens" to use some of the Traversal operations with it.
Also, it would not buy you much over ordinary Traversals, since there's e.g. a simple operator (^?) to extract just the first element traversed.
(As far as I can see, the technical reason is that the Pointed typeclass which would be needed to define an "affine traversal" is not a superclass of Applicative, which ordinary Traversals use.)
Scalaz documentation
Below are the scaladocs for Scalaz's LensFamily and PLensFamily, with emphasis added on the diffs.
Lens:
A Lens Family, offering a purely functional means to access and retrieve a field transitioning from type B1 to type B2 in a record simultaneously transitioning from type A1 to type A2. scalaz.Lens is a convenient alias for when A1 =:= A2, and B1 =:= B2.
The term "field" should not be interpreted restrictively to mean a member of a class. For example, a lens family can address membership of a Set.
Partial lens:
Partial Lens Families, offering a purely functional means to access and retrieve an optional field transitioning from type B1 to type B2 in a record that is simultaneously transitioning from type A1 to type A2. scalaz.PLens is a convenient alias for when A1 =:= A2, and B1 =:= B2.
The term "field" should not be interpreted restrictively to mean a member of a class. For example, a partial lens family can address the nth element of a List.
Notation
For those unfamiliar with scalaz, we should point out the symbolic type aliases:
type #>[A, B] = Lens[A, B]
type #?>[A, B] = PLens[A, B]
In infix notation, this means the type of a lens that retrieves a field of type B from a record of type A is expressed as A #> B, and a partial lens as A #?> B.
Argonaut
Argonaut (a JSON library) provides a lot of examples of partial lenses, because the schemaless nature of JSON means that attempting to retrieve something from an arbitrary JSON value always has the possibility of failure. Here are a few examples of lens-constructing functions from Argonaut:
def jArrayPL: Json #?> JsonArray — Retrieves a value only if the JSON value is an array
def jStringPL: Json #?> JsonString — Retrieves a value only if the JSON value is a string
def jsonObjectPL(f: JsonField): JsonObject #?> Json — Retrieves a value only if the JSON object has the field f
def jsonArrayPL(n: Int): JsonArray #?> Json — Retrieves a value only if the JSON array has an element at index n