I read the wonderful blog post from John A De Goes on tagless final. In section 5, Fake Abstraction, he mentions:
Unfortunately, these operations satisfy no algebraic laws—none
whatsoever! This means when we are writing polymorphic code, we have
no way to reason generically about putStrLn and getStrLn.
For all we know, these operations could be launching threads, creating
or deleting files, running a large number of individual side-effects
in sequence, and so on.
He is referring to the following tagless algebra:
trait Console[F[_]] {
  def putStrLn(line: String): F[Unit]
  val getStrLn: F[String]
}
Does this mean that writing laws for tagless algebras is not possible, or do I misunderstand something?
A few things:
John A De Goes, while very knowledgeable, also has a lot of opinions and expresses them as if they were inferred from mathematics, without making a clear distinction. This post is part of a series where he basically pitches that tagless final is often a bad solution and ZIO is a good one.
The paragraph says that tagless final algebras often satisfy no algebraic laws, which means that we cannot, for example, reason about the effect as a monoid, semigroup, or similar. Which is true. But it doesn't mean that these constructs cannot obey some contracts (called laws), because they do, and that is the whole point of Cats Effect.
Nobody can force you to write laws for your algebras, because laws are basically a particular way of writing specifications/tests: you write a separate test for some class of interfaces, and then for every implementation you can instantiate this test to check whether the implementation fulfills the contract. And yes, nobody can force you to write tests for your code. However, that can be said about virtually everything we code, and tagless final gives you the benefit of making it easier to specify common behavior across widely different implementations, and then to write your code and tests carefully, sticking to the part of the contract that is vital for a particular piece of code while also making these dependencies on contracts explicit.
So yes, nobody can force you to write laws for your algebras, but people who implement them in libraries actually do this, and if you write your own algebras, you are encouraged to do so, so this argument is stretched and eristic.
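To make the "laws as a reusable specification" idea concrete, here is a rough sketch in the spirit of cats-laws/discipline, assuming the Console[F] trait from the question is in scope; the extended putStrLns operation and the coherence law are invented for illustration, not taken from any library.

import cats.{Applicative, Eq}
import cats.syntax.all._

// Hypothetical extension of the Console algebra with a bulk write operation.
trait RichConsole[F[_]] extends Console[F] {
  def putStrLns(lines: List[String]): F[Unit]
}

// The "laws" are just a reusable spec, parameterized by the implementation.
trait RichConsoleLaws[F[_]] {
  implicit def F: Applicative[F]
  def console: RichConsole[F]

  // Coherence law: the bulk operation must agree with repeated putStrLn.
  // Each implementation decides what "equal programs" means via Eq[F[Unit]].
  def putStrLnsConsistent(lines: List[String])(implicit eq: Eq[F[Unit]]): Boolean =
    eq.eqv(console.putStrLns(lines), lines.traverse_(console.putStrLn))
}

Every implementation (say, a State-based test console) instantiates this trait in its own test suite and checks the law, typically with ScalaCheck; this is exactly the kind of contract Cats Effect ships for its own type classes.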
I'm quite new to Scala and functional programming. I have read that we are not supposed to perform any side effects (e.g. DB and IO operations) in FP. I'm wondering how we can handle DB operations in Scala?
If you want to create a purely functional app, you can't perform any side effects, but without side effects how can we do anything useful (write text to the console, read data from the database, etc.)?
Basically, what we can do is "cheat" by wrapping all code that is not pure (i.e. performs side effects) in an effect type, usually called the IO monad. Impure actions wrapped in IO are not executed until explicitly started (usually by calling a method named something like unsafeRun). And since those wrapped actions are just values, you can return them from functions, assign them to variables, and do everything you would do with plain values:
import cats.effect.IO // you'd have to add the cats-effect dependency to make this import work

val printHelloToConsole = IO(println("Hello")) // nothing is happening yet
printHelloToConsole.unsafeRunSync() // now the effects are actually performed
The main purpose of this approach is to separate the pure, functional code from the impure parts of the application. A quote from Martin Odersky:
The IO monad does not make a function pure. It just makes it obvious that it’s impure.
There are several implementations of the IO monad for Scala: ZIO, Cats Effect, Monix. For purely functional database communication, you can use Doobie, which works with any of these.
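To connect this back to the database part of the question, here is a rough sketch of wrapping a plain JDBC query in IO; the connection URL and the users table are made up for illustration, and in a real project you would more likely let Doobie do this for you:

import java.sql.DriverManager
import cats.effect.IO

// Wrapping an impure JDBC call in IO: nothing touches the database
// until the effect is actually run.
def userCount(jdbcUrl: String): IO[Int] = IO {
  val conn = DriverManager.getConnection(jdbcUrl)
  try {
    val rs = conn.createStatement().executeQuery("SELECT count(*) FROM users")
    rs.next()
    rs.getInt(1)
  } finally conn.close()
}

// Still just a value; composing it runs nothing:
val report: IO[String] = userCount("jdbc:postgresql:mydb").map(n => s"$n users in total")

// Only an explicit unsafe run at the "end of the world" performs the effects:
// report.unsafeRunSync()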
I would recommend watching John De Goes's talk FP to the Max; it explains very well what the IO monad is and how to use it.
I have been searching for an alternative to Moose (modern object-oriented Perl), because Moose is slow. I have seen several posts relating to this issue, and I don't want that.
Example from the same creator: https://www.youtube.com/watch?v=ugEry1UWg84&feature=youtu.be&t=260
So I found this alternative from the same creator of Moose:
https://metacpan.org/pod/MOP#DESCRIPTION
MOP - A Meta Object Protocol for Perl 5
This module implements a Meta Object Protocol for Perl 5 with minimal overhead and no non-core dependencies (eventually).
Work with UNIVERSAL::Object:
https://metacpan.org/pod/UNIVERSAL::Object
Is this a good choice as an alternative to Moose? Has anyone tested this software?
Related post:
https://www.perlmonks.org/?node_id=1220917
Thanks.
Note: I forgot to mention that I know about Moo, Mouse, etc. Maybe something better exists?
MOP is very low level, Moxie is based on it; but it's still a proof of concept.
There are faster and lighter alternatives that have been tested in production: Moo and Mouse.
In which context do you use Moose and find it slow? There is of course an overhead involved, but most of it happens at startup time (compilation); then, at runtime, most features are cheap (as long as you make your classes immutable), as explained in the documentation. Over time, Moose has become the de facto standard for object-oriented programming, and it has a very, very wide ecosystem (a search on MooseX on MetaCPAN returns 820 results). Don't give up on it too early.
If you really need faster startup time (like in a vanilla CGI environment, for example), the most relevant alternative to Moose is Moo, Minimal Object Orientation. It is really lightweight and has no XS dependency, while implementing a significant subset of Moose (also, its syntax is fully compatible with Moose, so you can upgrade to Moose anytime later if you find some piece of functionality missing in Moo). It also has a rich ecosystem.
I attended the following keynote on the future of Scala by Martin Odersky:
https://skillsmatter.com/skillscasts/8866-from-dot-to-dotty
At 1:01:00 an answer to an audience question seems to say that future Scala will not be Turing complete.
Did I understand this correctly? Will Scala 3 no longer be Turing complete? If so, what practical impact will this have on someone like me who uses Scala daily at work to solve practical problems? In other words, what do industrial Scala programmers lose and what do they gain by removing Turing completeness?
At 1:01:00 an answer to an audience question seems to say that future Scala will not be Turing complete.
Did I understand this correctly? Will Scala 3 no longer be Turing complete?
No, that's neither the question being asked nor the answer being given.
Firstly: The question being asked is not whether Scala will not be Turing-complete, the question being asked is whether the Scala Type System will not be Turing-complete.
Secondly: The answer being given is not that the Type System of future Scala will not be Turing-complete. Martin Odersky clearly says that with implicits, the Type System will definitely be Turing-complete, and without implicits, he doesn't want to make a prediction as to whether or not it will be Turing-complete.
So, to answer your question:
Scala will definitely still be Turing-complete.
The question you linked to is not about Scala, it is about Scala's Type System.
Scala's Type System will also still be Turing-complete because of implicits.
Scala's Type System without implicits may or may not be Turing-complete, we don't know yet, and Martin Odersky doesn't want to make any predictions.
If so, what practical impact will this have on someone like me who uses Scala daily at work to solve practical problems?
None whatsoever. First off, again, the Type System will still be Turing-complete because of implicits. And secondly, even if it weren't, AFAIK, the Turing-completeness of Scala's Type System has not been used for anything pragmatically interesting. There are libraries which do perform sophisticated type-level computations, but those computations always terminate. Nobody has written a library which performs arbitrary Turing-complete computation at the type level. (And in fact, it isn't even possible, because even though Scala's type system is Turing-complete, all currently existing implementations of Scala (there is only one anyway) have a strict limit on the recursion depth of the type checker.)
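For a taste of what such type-level computation looks like, here is a minimal sketch (Scala 2 syntax) of addition on type-level naturals driven by implicit search; the names are made up for illustration, and this is only the simplified essence of what libraries like shapeless do:

sealed trait Nat
sealed trait Zero extends Nat
sealed trait Succ[N <: Nat] extends Nat

// Evidence that A + B = Out, computed by the compiler during implicit resolution.
trait Sum[A <: Nat, B <: Nat] { type Out <: Nat }

object Sum {
  type Aux[A <: Nat, B <: Nat, C <: Nat] = Sum[A, B] { type Out = C }

  implicit def sumZero[B <: Nat]: Aux[Zero, B, B] =
    new Sum[Zero, B] { type Out = B }

  implicit def sumSucc[A <: Nat, B <: Nat, C <: Nat](
      implicit rest: Aux[A, B, C]): Aux[Succ[A], B, Succ[C]] =
    new Sum[Succ[A], B] { type Out = Succ[C] }
}

object TypeLevelDemo {
  // 1 + 2 = 3, checked entirely at compile time. Such searches terminate (or hit
  // the compiler's recursion-depth limit) rather than looping forever.
  implicitly[Sum.Aux[Succ[Zero], Succ[Succ[Zero]], Succ[Succ[Succ[Zero]]]]]
}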
In other words, what do industrial Scala programmers lose and what do they gain by removing Turing completeness?
Let's first talk about the type system: they don't lose anything. What they gain is the fact that compilation is guaranteed to terminate, and the fact that this means that the compiler can prove stuff about the program it couldn't prove otherwise.
Let's also answer the hypothetical question: what if Scala weren't Turing-complete? Well, we could no longer write infinite loops. That's pretty much it. Note, however, that lots of things that are typically modeled as infinite loops (or infinite recursion) over data can still be modeled as finite co-recursion over co-data! (For example, an event loop in an Operating System, a Web Server, or a GUI.)
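As a small illustration of the co-data point, here is a sketch using Scala's LazyList (the names are made up): the stream is defined co-recursively and is conceptually infinite, yet any finite observation of it terminates.

object CoDataDemo {
  // An "infinite" stream of ticks, defined co-recursively; only the elements
  // actually demanded are ever computed.
  val ticks: LazyList[Long] = {
    def from(n: Long): LazyList[Long] = n #:: from(n + 1)
    from(0L)
  }

  val firstThree: List[Long] = ticks.take(3).toList // List(0, 1, 2)
}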
OTOH, lots of things compilers can't do are "because it's equivalent to solving the Halting Problem". Well, in a language that isn't Turing-complete, the Halting Problem doesn't exist! So, the compiler can prove many more things about programs than it could in a Turing-complete language.
But, to re-iterate: there are no plans of making Scala not Turing-complete. There are no plans of making implicits not Turing-complete. There are restrictions to the type system which may or may not make the type system not Turing-complete.
Is my understanding above actually correct, that is, will Scala 3 no longer be Turing complete?
No, Scala 3 is still a Turing-complete language. You can already experiment with it by trying out Dotty, the current prototype for what will become Scala 3.
If you can give a link to the particular slide of the particular talk you're referring to, we can help you figure out what it was actually trying to express.
While learning Scala, I found the concept of implicit difficult to rationalize. It allows one to pass values implicitly, without explicitly mentioning them.
What is its purpose, and what problem does it seek to solve?
At its heart, implicit is a way of extending the behavior of values of a type in a way that's fully controllable at a local level in your program, and external to the original code that defines those values. It's one approach to solving the expression problem.
It lets you keep your core classes focused on their most fundamental structure and behavior, and factor out higher-level behaviors. It's used to achieve ad hoc polymorphism, where two formally unrelated data types can be seamlessly adapted to the same interface, so that they can be treated as instances of the same type.
For example, rather than your data model classes containing JSON serialization behavior, you can store that behavior elsewhere and implicitly augment an object with the ability to serialize itself. This amounts to defining an implicit instance which specifies how your object can be viewed as "JSON serializable" rather than as its original type, and it's done without editing the object's actual class.
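As a rough sketch of that idea (the JsonWritable trait and User class here are invented for illustration, not from any library):

// The typeclass: how to view an A as "JSON serializable".
trait JsonWritable[A] {
  def toJson(a: A): String
}

// The data model stays free of serialization concerns.
final case class User(name: String, age: Int)

object JsonInstances {
  // The serialization behavior lives here, outside the data model class.
  implicit val userJson: JsonWritable[User] = new JsonWritable[User] {
    def toJson(u: User): String = s"""{"name":"${u.name}","age":${u.age}}"""
  }
}

object Json {
  // Any A with a JsonWritable instance in scope can be serialized.
  def write[A](a: A)(implicit w: JsonWritable[A]): String = w.toJson(a)
}

// Usage (e.g. in the REPL):
//   import JsonInstances._
//   Json.write(User("Ada", 36))  // {"name":"Ada","age":36}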
There are several forms of implicit, which are pretty thoroughly covered elsewhere. Use cases include the enhance-my-library pattern, the typeclass pattern, implicit conversions, and dependency injection.
What's really interesting to me, in the context of this question, is how this differs from approaches in other languages.
Enhance-my-library and typeclasses
In many other languages, you accomplish this by monkey patching (typically where there is no type checking) or extension methods. These approaches have the downside of composing unpredictably and applying globally. In statically typed languages without a way of opening classes, you usually have to make explicit adapters. This has the downside of a lot of boilerplate. In both static and dynamic languages, you may also be able to use reflection, but usually with a lot of ceremony and complexity.
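For comparison, this is roughly what the extension-method style looks like in Scala using an implicit class; StringSyntax and shout are made-up names for illustration:

object StringSyntax {
  // Adds a method to String without modifying String itself; the enrichment
  // applies only in scopes that import it.
  implicit class RichStr(val s: String) extends AnyVal {
    def shout: String = s.toUpperCase + "!"
  }
}

// Usage:
//   import StringSyntax._
//   "hello".shout  // "HELLO!"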
In Haskell, typeclasses exist as a first-class concept. They're global, though, so you don't get the local control over what typeclass is applied in a given situation. In Scala, you control what implicits are in scope locally, through the modules you import. And you can always opt out of implicit resolution entirely by passing parameters explicitly.
People advocate for global versus local resolution of typeclasses one way or the other, depending on who you ask.
Implicit conversions
A lot of other languages have no way to accomplish this. But it's become pretty frowned upon in Scala, so maybe this is for good reason.
There's a paper about type classes with older slides and discussion.
Being able to implicitly pass an object that encodes a type class cuts down the boilerplate.
Odersky just responded to a critique of implicits that
Scala would not be Scala if it did not have implicit parameters and
classes.
That suggests they solve a challenge that is central to the design of the language. In other words, supporting type classes is not an ancillary concern.
It's a deep question, really. Implicits are very powerful, and you can use them to write abstract code, e.g. typeclasses. I can recommend some tutorials that you may look into, and then we could have a chat sometime :)
It is all about providing sensible defaults in your code.
Also, the magic of invoking apparently non-existent methods on objects, which just seems to work! All that good stuff is done via implicits.
But for all its power it may cause people to write some really bad code as well.
Please do watch Nick Partridge's presentation here, and I am sure that if you code along with him you will understand why and how to approach implicits.
Watch it here
Dick Wall's excellent presentation with live coding
Watch both parts.
It is my opinion that every language was created for a specific purpose. What was Scala created for and what problems does it best solve?
One of the things mentioned in talks by Martin Odersky on Scala is that it is a language which scales well to tackle various problems. He wasn't talking about scaling in the sense of performance but in the sense that the language itself can seemingly be extended via libraries. So that:
import java.util.concurrent.locks.ReentrantReadWriteLock

val lock = new ReentrantReadWriteLock
lock withReadLock {
  // do stuff
}
It looks like there is some special syntactic sugar for dealing with j.u.c. locks, but this is not the case; it's just using the Scala language in such a way that it appears to be. The code is more readable, isn't it?
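For the curious, here is a rough sketch of the kind of library enrichment that makes the snippet above compile; this particular withReadLock helper is hypothetical, not something the JDK or the Scala standard library provides:

import java.util.concurrent.locks.ReadWriteLock

object LockSyntax {
  // Enriches any ReadWriteLock with a withReadLock method that acquires the
  // read lock, runs the body, and always releases the lock afterwards.
  implicit class RichReadWriteLock(val lock: ReadWriteLock) extends AnyVal {
    def withReadLock[A](body: => A): A = {
      lock.readLock().lock()
      try body
      finally lock.readLock().unlock()
    }
  }
}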
In particular, the various parsing rules of the Scala language make it very easy to create libraries which look like a domain-specific language (or DSL). Look at ScalaTest, for example:
describe("MyCoolClass") {
it("should do cool stuff") {
val c = new MyCoolClass
c.prop should be ("cool")
}
}
(There are lots more examples of this; I found out about this one yesterday.) There is much talk about which new features are going into the Java language in JDK 7 (Project Coin). Many of these features are special syntactic sugar to deal with some specific issue. Scala has been designed with a few simple rules that mean new keywords are not needed for every little annoyance.
Another goal of Scala was to bridge the gap between functional and object-oriented languages. It contains many constructs inspired by (i.e. copied from!) functional languages. I'm thinking of the incredibly powerful pattern matching, the actor-based concurrency framework and (of course) first-class and higher-order functions.
Of course, your question said that there was a specific purpose and I've just given 3 separate reasons; you'll probably have to ask Martin Odersky!
One more of the original design goals was of course to create a language which runs on the Java Virtual Machine and is fully interoperable with Java classes. This has (at least) two advantages:
you can take advantage of the ubiquity, stability, features and reputation of the JVM. (think management extensions, JIT compilation, advanced Garbage Collection etc)
you can still use all your favourite Java libraries, both 3rd party and your own. If this wasn't the case, it would be a significant obstacle to using Scala commercially in many cases (mine for example).
I agree with the previous answers but recommend the introduction to An Overview of the Scala Programming Language:
The work on Scala stems from a research effort to develop better language support for component software. There are two hypotheses that we would like to validate with the Scala experiment. First, we postulate that a programming language for component software needs to be scalable in the sense that the same concepts can describe small as well as large parts. Therefore, we concentrate on mechanisms for abstraction, composition, and decomposition rather than adding a large set of primitives which might be useful for components at some level of scale, but not at other levels. Second, we postulate that scalable support for components can be provided by a programming language which unifies and generalizes object-oriented and functional programming. For statically typed languages, of which Scala is an instance, these two paradigms were up to now largely separate. (Odersky)
I'd personally classify Scala alongside Python in terms of which problems it solves and how. The conspicuous difference, and occasional complaint, is type complexity. I agree that Scala's abstractions are complicated and at times seemingly convoluted, but note a few points:
They're also mostly optional.
Scala's compiler is like free testing and documentation as cyclomatic complexity and lines of code escalate.
When aptly implemented, Scala can perform otherwise all-but-impossible operations behind consistent and coherent APIs. From the Scala 2.8 collections documentation:
For instance, a String (or rather: its backing class RichString) can be seen as a sequence of Chars, yet it is not a generic collection type. Nevertheless, mapping a character to character map over a RichString should again yield a RichString, as in the following interaction with the Scala REPL:
scala> "abc" map (x => (x + 1).toChar)
res1: scala.runtime.RichString = bcd
But what happens if one applies a function from Char to Int to a string? In that case, we cannot produce a string as result, it has to be some sequence of Int elements instead. Indeed one gets:
"abc" map (x => (x + 1))
res2: scala.collection.immutable.Vector[Int] = Vector(98, 99, 100)
So it turns out that map yields different types depending on what the result type of the passed function argument is! (Odersky)
Since it's functional and uses actors (as I understand it, please comment if I've got this wrong) it makes it very easy to scale nearly anything up to any number of CPUs.
That said, I see Scala as kind of a test bed for new language features. Throw in the kitchen sink and see what happens.
My personal opinion is that for any app involving a team of more than 3 people, you are more productive with a language with a very simple and restrictive syntax, just because the entire job becomes more about how you interact with others, as opposed to just coding to make the computer do something.
The more people you add, the more time you are going to spend explaining what ?: means or the difference between | and || as applied to two booleans (in Java, you'll find very few people know).
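For what it's worth, the | versus || distinction translates directly to Scala; a tiny sketch (sideEffect is made up for illustration):

object ShortCircuitDemo {
  def sideEffect(b: Boolean): Boolean = { println("evaluated"); b }

  val shortCircuited = false && sideEffect(true) // prints nothing: && stops at false
  val eager          = false &  sideEffect(true) // prints "evaluated": & always evaluates both sides
}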