Clarifying some functional programming jargon in Scala [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 2 years ago.
Although at this point I am fairly comfortable with the Function1 monad and its transformer version, i.e. Kleisli composition, going back to the book I originally started with, I still can't really understand the jargon it uses to stress the difference between typical monadic behavior and Kleisli composition. I find that distinction non-intuitive and misleading.
I suspect the author wanted to stress the notion of deferred evaluation that comes with the Function1 monad, compared to other monads, but that's just working with the effect of the function.
Anyway, I wonder if someone can clarify the author's statements:
Power of function application over the additional structure that we call effects— You’ve seen how ordinary functions compose and how
they can be applied over an argument. But that may not be enough for
domain models, where you need to apply a function with custom behavior
at the point of application. Look at the function application that
flatMap offers: def flatMap[A, B](ma: F[A])(f: A => F[B]): F[B]. It
applies A => F[B] to F[A] and then flattens F[F[B]] to F[B]. Here
you flatten at the point of application so that you get back the
structure you supplied to flatMap. This enables you to sequence the
function application over the effectful structure of F[_].
.......
Power of function composition over the additional structure that we
call effects— If you have a function f: A => F[B] and another function
g: B => F[C], where F is a monad, then you can compose them to get A
=> F[C]. This isn’t ordinary function composition; it’s composition of monadic functions. It’s also called Kleisli composition;
.......
The jargon I don't get is "power of function application" vs. "power of function composition".
Especially when the author tries to make the point with the following:
It applies A => F[B] to F[A] and then flattens F[F[B]] to F[B]. Here you flatten at the point of application so that you get back the structure you supplied to flatMap.
The same thing happens whether you start with a monadic function or just an effect. To me, an effect can even be seen as the function Unit => F[A].
So I am not sure whether I am failing to understand what the author is trying to stress, or whether the author is not making the point properly and I have already got the whole point.
What is the "point of application" anyway? How is it not present in Function1?
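To make the contrast concrete, here is a minimal sketch in plain Scala, using Option as the effect F (the names parse and half are my own illustrations, not the author's):

```scala
// A => F[B] style "monadic functions", with F = Option.
def parse(s: String): Option[Int] =
  try Some(s.toInt) catch { case _: NumberFormatException => None }

def half(n: Int): Option[Int] =
  if (n % 2 == 0) Some(n / 2) else None

// "Function application over an effect": flatMap applies A => F[B]
// to a value already wrapped in F[A], flattening F[F[B]] to F[B].
val applied: Option[Int] = Some("42").flatMap(parse)            // Some(42)

// "Function composition over an effect" (Kleisli composition):
// build A => F[C] from A => F[B] and B => F[C], with no F[A] in hand yet.
def kleisli[A, B, C](f: A => Option[B], g: B => Option[C]): A => Option[C] =
  a => f(a).flatMap(g)

val parseThenHalf: String => Option[Int] = kleisli(parse, half)
parseThenHalf("42")                                             // Some(21)
```

On this reading, the only difference the author stresses is where flatMap fires: applied immediately to a value already inside F, versus baked into a new function that runs later.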

Related

What is DSL in Scala? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 4 years ago.
Going through various Scala-related material, the term DSL is used in many places.
A Google search tells me it stands for domain-specific language.
What exactly does it mean, and why doesn't this term come up while learning other languages like Java?
As others have pointed out, the first part of the question ("what is a DSL?") is essentially answered by What is a DSL and where should I use it?
I'll try to answer instead to the second part: why are DSLs so popular in Scala?
The reason is that Scala (as opposed to other languages like Java) offers many syntactic facilities to provide DSLs.
For example, Scala has infix method applications:
someObject.someMethod(someArgument)
// can be written as
someObject someMethod someArgument
This makes introducing custom "operators" very easy in the language. A notable example is the Akka DSL for sending messages to an actor:
actor ! message
which is a DSL mimicking the syntax of Erlang.
Another example of a syntactic facility in Scala is the "trailing block argument" (not sure it has a precise name):
def someMethod(x: Int)(y: String) = ???
// can be invoked as
someMethod(42)("foo")
// but also as
someMethod(42) { "foo" }
which is very interesting when the last parameter is a function:
def someOtherMethod[A, B](x: A)(f: A => B): B = ???
someOtherMethod(42) { a =>
  // ...a very long body
}
In other languages, blocks ({ ... }) are usually reserved to built-in control-flow structures (such as if, while, for, etc), but in Scala you can use this syntactic facility to build custom methods that resemble built-in control structures.
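For instance, a hypothetical repeat helper (my own example, not from any library) built this way reads like a native loop:

```scala
// An ordinary method whose trailing block argument makes call sites
// look like a built-in control structure.
def repeat(n: Int)(body: => Unit): Unit =
  (1 to n).foreach(_ => body)

repeat(3) {
  println("hello")
}
```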
Those two features alone are distinctive enough to explain why DSLs are so pervasive in the Scala community.
Digging a bit deeper, we can also mention implicit conversions, which allow you to add custom methods to any existing type. For example:
implicit class TimesOps(x: Int) {
  def times(y: Int): Int = x * y
}
// then use as
2 times 4 // 8
This example combines the use of infix method application and implicit conversions.

Why is nested syntax not allowed in Scala higher-kinded type parameters? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 5 years ago.
This is not allowed, a nested type parameter (a higher-kinded type, right?):
trait Abc[C[T]] {
  def f(): C[T]
}
cmd13.sc:2: not found: type T
def f(): C[T]
Whereas this is allowed:
trait Abc[C[_], T] {
  def f(): C[T]
}
Is there a reason that we need to write Abc[C[_],T] instead of Abc[C[T]]?
I mean that the latter is more intuitive.
The Abc[C[T]] syntax is not intuitive to me: it seems to have only one argument, when it has two. Worse, consider
def foo[C[T]](x: C[(String, T)]): T = ...
What would the foo[C[T]] syntax convey? It seems to imply that C[_] is being applied to T later on, when it is not. The syntax foo[C[_],T] does not have that implication.
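To spell out the two-argument reading, here is the working form from the question, instantiated (the listOfInts value is my own illustration):

```scala
trait Abc[C[_], T] {
  def f(): C[T]
}

// C (a type constructor) and T (a plain type) are supplied separately.
val listOfInts: Abc[List, Int] = new Abc[List, Int] {
  def f(): List[Int] = List(1, 2, 3)
}
```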
Since this question was tagged Haskell, let me add that in Haskell we write
foo :: forall c t. c (String, t) -> t
so a similar Scala syntax would be
def foo[C,T](x: C[(String, T)]): T = ...
This could be done, if Scala used kind inference to infer the kind of C as Haskell does. Of course, if C is actually unused in the type, then we would need some way to suggest its kind, e.g.
foo :: forall (c :: * -> *) t. ...
or rely on kind polymorphism (as in Haskell). This requires a more advanced typing engine, though.
In Haskell, the GHC devs like to experiment with bleeding edge research in the compiler, and the programmers enjoy that. In Scala, the devs are a bit more conservative, but still added pretty advanced things compared to most languages. After all, Scala has higher kinds, which are not widespread -- even if Scala lacks kind inference and kind polymorphism, I'd still regard Scala as quite advanced with respect to types.
I think the reason is that with the proposed syntax there are many reasonable things you can't do, or they would look very ugly.
How would you re-write something Monad-inspired like this:
def map[F[_],A,B](src: F[A], f: A=>B): F[B]
Where would you put restrictions on types? Imagine the clutter for something like this:
def sorted[S[_] <: Seq, A : Ordered](seq: S[A]): S[A]
So what you suggest seems like adding a syntax that simplifies the trivial case and makes more complex cases very hard or impossible. This doesn't look like a good trade-off.
P.S. I bet this question will be closed soon as mostly opinion-based because most probably this is what it is.

Scala: question marks in type parameters

I'm trying to understand the following piece of code (from the Scalaz library):
def kleisliIdApplicative[R]: Applicative[Kleisli[Id, R, ?]] = ...
I'm assuming that a type of the form T[P0, ?] is a type constructor that takes a parameter. However, I'm not able to find documentation that explains the usage of question marks in type parameters.
A related question is what is the difference between the question mark and an underscore?
Is there a place where all this is well-documented?
The question mark syntax comes from a compiler plugin called kind-projector.
You can see it being included in the scalaz build here: https://github.com/scalaz/scalaz/blob/series/7.3.x/project/build.scala#L310
The plugin translates
Kleisli[Id, R, ?]
into (roughly)
({type L[A] = Kleisli[Id, R, A]})#L
which is a rather convoluted way (but unfortunately the only way in Scala) of expressing a type lambda, i.e. a partially applied type constructor.
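As an illustration (a sketch of my own, assuming Scala 2.12+ and a hand-rolled Functor trait; with kind-projector enabled, the type below could be written Functor[Either[String, ?]]):

```scala
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

// The type lambda partially applies Either, fixing its left type to String,
// yielding the one-parameter type constructor that Functor expects.
val eitherFunctor: Functor[({ type L[A] = Either[String, A] })#L] =
  new Functor[({ type L[A] = Either[String, A] })#L] {
    def map[A, B](fa: Either[String, A])(f: A => B): Either[String, B] =
      fa.map(f) // Either is right-biased in Scala 2.12+
  }
```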

Can someone explain to me what the Shapeless library is for? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 7 years ago.
Can someone explain to me in simple terms what the Shapeless library is for?
Scala has generics and inheritance functionality so I'm a bit confused what Shapeless is for.
Maybe a use case to clarify things would be helpful.
It's a little hard to explain, as shapeless has a wide range of features; I'd probably find it easier to "explain, in simple terms, what variables are for". You definitely want to start with the feature overview.
Broadly speaking, shapeless is about programming with types. Doing things at compile-time that would more commonly be done at runtime, keeping precise track of the type of each element in a list, being able to translate from tuples to HLists to case classes, creating polymorphic functions (as opposed to methods), etc.
A typical usage scenario would go something like:
Read a bunch of values from somewhere into a List
perform a type-safe cast of that List into an HList
map over that HList with a polymorphic function that e.g. normalises values
convert the 3rd element (which is statically known to be an Int) into a 0-padded string
construct a case class using values from the HList
For reference, an HList will have a precise type, such as Int :: String :: Boolean :: HNil (yes, that really is a single type) where everything is pinned down and the size is fixed. So you either need to know at compile time exactly what will be going into your HList, or you need the type-safe cast.
If you take the tail of such an HList, you get a String :: Boolean :: HNil, and a compile-time guarantee that the head of this will be a String. Prepending a value to the head will similarly preserve all types involved.
Shapeless also comes with the Generic type class, allowing you to use HList operations on tuples and case classes as well.
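A small sketch of these pieces together (assumes a shapeless 2.x dependency on the classpath; Person is my own example type):

```scala
import shapeless._

case class Person(name: String, age: Int)

val gen = Generic[Person]
val hlist = gen.to(Person("Ada", 36))  // "Ada" :: 36 :: HNil, of type String :: Int :: HNil
val tail  = hlist.tail                 // 36 :: HNil; the head type is statically dropped
val back  = gen.from(hlist)            // Person("Ada", 36)
```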
The other features I tend to use are:
Coproducts, which allow you to statically type a value as being e.g. "a String, Double or Int, but nothing else" (much like Either, but not limited to just two possibilities)
Lenses, which simplify working with nested case classes.
Looking at an HList is something that might seem baffling until you try to work with types and delegate or switch on types. Take a look at the following:
val myList = 1 :: 2 :: "3" :: fred :: Nil
What is the type of myList here? If you were to inspect it, you'd see it is of type List[Any]. That's not very helpful. What's even less helpful is if I tried to map over it with the following partial function:
myList.map {
  case x: Int => x
  case x: String => x.toInt
}
At runtime, this might throw a MatchError because I haven't actually told you what type fred is. It could be of type Fred.
With an HList you can know right at compile time if you've failed to capture one of the types of that list. In the above, if I had defined myList = 1 :: 2 :: "3" :: fred :: HNil, then when I accessed the 3rd element, its type would be String and this would be known at compile time.
As @KevinWright states, there's more to Shapeless than that, but HList is one of the defining features of the library.
Everything in Shapeless has two things in common:
First, it isn't in the Scala standard library, but arguably should be. Therefore, asking what Shapeless is for is a bit like asking what the Scala standard library is for! It's for everything. It's a grab bag.
(but it isn't a totally arbitrary grab bag, because:)
Second, everything in Shapeless provides increased checking and safety at compile time. Nothing in Shapeless (that I can think of?) actually "does" anything at runtime. All of the interesting action happens when your code is compiled. The goal is always increased confidence that if your code compiles at all, it won't crash or do the wrong thing at runtime. (Hence this notable quip: https://twitter.com/mergeconflict/status/304090286659866624 "Does Miles Sabin even exist at runtime?")
There is a nice introduction to what type-level programming is all about, with links to further resources, at https://stackoverflow.com/a/4443972/86485.

Are there any documented anti-patterns for functional programming? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion.
Closed 9 years ago.
Next month I'm going to work on a new R&D project that will adopt a functional programming language (I voted for Haskell, but right now F# has more consensus). Now, I've played with such languages for a while and developed a few command-line tools with them, but this is a much bigger project and I'm trying to improve my functional programming knowledge and technique. I've also read a lot on the topic, but I can't find any books or resources that document anti-patterns in the functional programming world.
Now, learning about anti-patterns means learning from other smart people's failures: in OOP I know a few of them, and I'm experienced enough to choose wisely when something that is generally an anti-pattern perfectly fits my needs. But I can choose this because I know the lessons learned by other smart people.
Thus, my question is: are there any documented anti-patterns in functional programming? Until now, all of my colleagues have told me that they do not know any, but they can't state why.
If yes, please include one single link to an authoritative source (a catalogue, an essay, a book or equivalent).
If no, please support your answer by a proper theorem.
Please don't turn this question into a list: it is a boolean question that just requires a proof to evaluate the answer. For example, if you are Oleg Kiselyov, "Yes" is enough, since everybody will be able to find your essay on the topic. Still, please be generous.
Note that I am looking for formal anti-patterns, not simple bad habits or bad practices.
From the linked wikipedia article on Anti-Patterns:
... there must be at least two key elements present to formally distinguish an actual anti-pattern from a simple bad habit, bad practice, or bad idea:
some repeated pattern of action, process or structure that initially appears to be beneficial, but ultimately produces more bad consequences than beneficial results, and
an alternative solution exists that is clearly documented, proven in actual practice and repeatable.
Moreover by "documented" I mean something from authoritative authors or well known sources.
The languages that I'm used to are:
Haskell (where I'm really starting to think that if code compiles, it works!)
Scala
F#
but I can also adapt knowledge about anti-patterns documented in other functional languages.
I searched a lot on the web, but all the resources I've found are either related to OOP or to function layout (define variables at the beginning of the function, and the like...).
The only anti-pattern I've seen is over-monadization, and since monads can be incredibly useful this falls somewhere in between a bad practice and an anti-pattern.
Suppose you have some property P that you want to be true of some of your objects. You could decorate your objects with a P monad (here in Scala, use paste in the REPL to get the object and its companion to stick together):
class P[A](val value: A) {
  def flatMap[B](f: A => P[B]): P[B] = f(value) // AKA bind, >>=
  def map[B](f: A => B) = flatMap(f andThen P.pure) // (to keep `for` happy)
}
object P {
  def pure[A](a: A) = new P(a) // AKA unit, return
}
Okay, so far so good; we cheated a little bit by making value a val rather than making this a comonad (if that's what we wanted), but we now have a handy wrapper in which we can wrap anything. Now let's suppose we also have properties Q and R.
class Q[A](val value: A) {
  def flatMap[B](f: A => Q[B]): Q[B] = f(value)
  def map[B](f: A => B) = flatMap(f andThen Q.pure)
}
object Q {
  def pure[A](a: A) = new Q(a)
}
class R[A](val value: A) {
  def flatMap[B](f: A => R[B]): R[B] = f(value)
  def map[B](f: A => B) = flatMap(f andThen R.pure)
}
object R {
  def pure[A](a: A) = new R(a)
}
So we decorate our object:
class Foo { override def toString = "foo" }
val bippy = R.pure( Q.pure( P.pure( new Foo ) ) )
Now we are suddenly faced with a host of problems. If we have a method that requires property Q, how do we get to it?
def bar(qf: Q[Foo]) = qf.value.toString + "bar"
Well, clearly bar(bippy) isn't going to work. There are traverse or swap operations that effectively flip monads, so we could, if we'd defined swap in an appropriate way, do something like
bippy.map(_.swap).map(_.map(bar))
to get our string back (actually, an R[P[String]]). But we've now committed ourselves to doing something like this for every method that we call.
This is usually the wrong thing to do. When possible, you should use some other abstraction mechanism that is equally safe. For instance, in Scala you could also create marker traits
trait X
trait Y
trait Z
val tweel = new Foo with X with Y with Z
def baz(yf: Foo with Y) = yf.toString + "baz"
baz(tweel)
Whew! So much easier. Now it is very important to point out that not everything is easier. For example, with this method if you start manipulating Foo you will have to keep track of all the decorators yourself instead of letting the monadic map/flatMap do it for you. But very often you don't need to do a bunch of in-kind manipulations, and then the deeply nested monads are an anti-pattern.
(Note: monadic nesting has a stack structure while traits have a set structure; there is no inherent reason why a compiler could not allow set-like monads, but it's not a natural construct for typical formulations of type theory. The anti-pattern is a simple consequence of the fact that deep stacks are tricky to work with. They can be somewhat easier if you implement all the Forth stack operations for your monads (or the standard set of Monad transformers in Haskell).)