Why is Either expected in the following for comprehension? - scala

I am playing with tagless final in scala. I use pureconfig to load the configuration and then use the configuration values to set the server port and host.
Snippet
def create[F[_]: Async] =
  for {
    config <- ConfigSource.default.at("shopkart").load[AppConfig]
    httpApp = EndpointApp.make[F]
    server <- BlazeServerBuilder[F]
      .bindHttp(port = config.http.port, host = config.http.host)
      .withHttpApp(httpApp)
      .resource
  } yield server
The compilation error is confusing to me. This is the error:
type mismatch;
[error] found : cats.effect.kernel.Resource[F,Unit]
[error] required: scala.util.Either[?,?]
[error] server <- BlazeServerBuilder[F]
[error] ^
[error] one error found
I understand that ConfigSource.default.at("shopkart").load[AppConfig] returns Either[ConfigReaderFailures, AppConfig]. But within the for-comprehension, config is an instance of AppConfig. So why is an Either expected on the following line, where BlazeServerBuilder appears?
My understanding is that, within the context of the for-comprehension, these are two different instances. I also came across a similar example in the scala pet store: https://github.com/pauljamescleary/scala-pet-store/blob/master/src/main/scala/io/github/pauljamescleary/petstore/Server.scala#L28
How do I de-sugar the for-comprehension to understand this error better?

The code below is what you would have gotten if you had used flatMap/map instead of a for-comprehension:
ConfigSource.default.at("shopkart").load[AppConfig]   // Either[E, AppConfig]
  .flatMap { config =>        // flatMap keeps you in the same monad (Either here)
    BlazeServerBuilder[F]     // but this whole chain is a Resource, not an Either
      .bindHttp(port = config.http.port, host = config.http.host)
      .withHttpApp(EndpointApp.make[F])
      .resource
  }
The cause of your error is that you can't mix different monad types in one for-comprehension block. If you need to, you have to convert them to the same type. In your case the easiest way is to convert the Either into a Resource[F, AppConfig]. You also need an F that can absorb the Either's error type, for example via MonadError, so the failure can be raised inside F; after that you can wrap the F in Resource.eval, which expects an F. Since you already require Async, you can use Async[F].fromEither(config) for that.
def create[F[_]: Async] =
  for {
    config <- Resource.eval(
      Async[F].fromEither(ConfigSource.default.at("shopkart").load[AppConfig])
    )
    httpApp = EndpointApp.make[F]
    server <- BlazeServerBuilder[F]
      .bindHttp(port = config.http.port, host = config.http.host)
      .withHttpApp(httpApp)
      .resource
  } yield server
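One caveat, which is my own addition rather than part of the answer: pureconfig's ConfigReaderFailures does not extend Throwable, while Async[F].fromEither expects an Either[Throwable, A], so the left side may need to be wrapped first. A minimal sketch, assuming an implicit ConfigReader[AppConfig] is in scope as in the question (loadConfig is just an illustrative name):

import cats.effect.{Async, Resource}
import cats.syntax.all._
import pureconfig.ConfigSource
import pureconfig.error.ConfigReaderException

def loadConfig[F[_]: Async]: Resource[F, AppConfig] =
  Resource.eval(
    Async[F].fromEither(
      ConfigSource.default
        .at("shopkart")
        .load[AppConfig]
        // ConfigReaderFailures is not a Throwable, so wrap it in an exception first
        .leftMap(failures => ConfigReaderException[AppConfig](failures))
    )
  )

With that in place, the for-comprehension above can start with config <- loadConfig[F].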

Related

missing parameter type for expanded function for my function; not for another with same signature

Short form: I have a method with the same signature as Future.recover. Passing a partial function to Future's version works. Passing the same PF to my version results in a "missing parameter type for expanded function. The argument types of an anonymous function must be fully known. (SLS 8.5)" error. What's the difference?
Longer form:
I'm trying to implement the TracingFuture class discussed here in an attempt to trace errors across future boundaries. The basic technique is to wrap the Future in another class, TracingFuture, while adding a pseudo-stacktrace.
The code given in the blog post is missing the recover method from Future, so I've added it with the same signature:
class TracingFuture[+T](underlying: Future[T], val trace: Vector[FutureTraceElement]) extends Future[T] {
  def recover[U >: T](pf: PartialFunction[Throwable, U])
                     (implicit ec: ExecutionContext, enclosing: sourcecode.Enclosing,
                      file: sourcecode.File, line: sourcecode.Line): TracingFuture[U] = {
    val recovered = underlying.recover(pf)
    new TracingFuture[U](recovered,
      trace :+ FutureTraceElement(enclosing.value, "recover", file.value, line.value))
  }
}
For comparison, here's the equivalent chunk of code out of Future. Note that aside from the extra implicit parameters the signatures are the same.
trait Future[+T] extends Awaitable[T] {
  def recover[U >: T](pf: PartialFunction[Throwable, U])(implicit executor: ExecutionContext): Future[U] =
    transform { _ recover pf }
}
Finally, my code that produces the compile error:
val x: TracingFuture[Vector[Maintainer]] = ... // code producing a TracingFuture

val fMaintainers = x.recover {
  case err: Throwable ⇒
    logger.error("Failed to get list of user maintainers.", err)
    Vector.empty[Maintainer]
}
And the error message:
[error] /Users/bwbecker/oat/src/oat3/modules/wapp/app/oat/wapp/dao/CronJobDAO.scala:273: missing parameter type for expanded function
[error] The argument types of an anonymous function must be fully known. (SLS 8.5)
[error] Expected type was: ?
[error] val fMaintainers = x.recover {
[error] ^
Once again, this code works with the Future.recover but I get a compile error with TracingFuture.recover. I don't understand why.
This SO question explains that the compiler knows the argument to the partial function must be a supertype of T but can't guarantee that. But why doesn't it run into that issue with Future.recover?
And, of course, I'd like to know if there's anything I can do about it other than rewriting the anonymous partial function to make the types explicit.
The problem is that TracingFuture has two overloaded recover methods: the one you added and the one you inherited from Future. When you only have one, it provides the expected type which is crucial for type inference, but with overloaded methods it doesn't work, as you see from Expected type was: ?.
You may think the compiler should notice types of function parameters are the same and so can still provide the expected type. And you would be right, but it was only fixed in Scala 2.12.
Of course, then you'll run into trouble that the compiler has no way to tell which overload you want when only the implicit arguments are different.
Try to replace

val fMaintainers = x.recover {
  case err: Throwable ⇒
    logger.error("Failed to get list of user maintainers.", err)
    Vector.empty[Maintainer]
}

with

val fMaintainers = x.recover(PartialFunction[Throwable, Vector[Maintainer]] {
  case err: Throwable ⇒
    logger.error("Failed to get list of user maintainers.", err)
    Vector.empty[Maintainer]
})
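If you would rather not go through PartialFunction.apply, another workaround along the same lines, offered here as a sketch rather than part of the answer above, is to bind the partial function to a val with an explicit type (handleFailure is just an illustrative name), so that recover receives an argument whose type is already fully known:

val handleFailure: PartialFunction[Throwable, Vector[Maintainer]] = {
  case err: Throwable ⇒
    logger.error("Failed to get list of user maintainers.", err)
    Vector.empty[Maintainer]
}

val fMaintainers = x.recover(handleFailure)

This only addresses the "argument types must be fully known" part; if the two overloads still clash because they differ only in their implicit parameters, the simplest way out is to give the new method a different name.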

What is a proper signature of Executor.execute(????) in Sangria middleware to log slow GraphQL queries?

I am trying to integrate Sangria middleware to log slow GraphQL queries in my application, but I am getting the following compilation error:
type mismatch;
 found   : sangria.schema.Schema[models.UserRepo,Unit]
 required: sangria.schema.Schema[Any,Unit]
Note: models.UserRepo <: Any, but class Schema is invariant in type Ctx.
You may wish to define Ctx as +Ctx instead. (SLS 4.5)
Error occurred in an application involving default arguments.
Code snippet:
val Query = ObjectType("Query",
  List[Field[UserRepo, Unit]](Field("store", StoreType, resolve = _ ⇒ ())))

val schema = Schema(Query, Some(MutationType))

val logResult = Executor.execute(SchemaDefinition.schema,
  query.asInstanceOf[Document],
  middleware = SlowLog(newlogger, threshold = 10 seconds) :: Nil)
Here is the reference link: https://github.com/sangria-graphql/sangria-slowlog
Kindly help me understand what the proper signature of Executor.execute(????) is.
Thanks!
I think the main issue is that you've defined the schema in terms of UserRepo, but you haven't provided it at the execution time. I guess adding a userContext argument should fix the issue:
Executor.execute(SchemaDefinition.schema, query,
  userContext = new UserRepo,
  middleware = SlowLog(newlogger, threshold = 10 seconds) :: Nil)
I also made this test to check the types (these types are similar to your scenario), but it compiles just fine:
val schema: Schema[Repo, Unit] = ???
val md: Middleware[Any] = ???

Executor.execute(schema, query, new Repo, middleware = md :: Nil)
If it still does not compile, I would suggest you provide a complete, self-contained example that reproduces the issue (for instance, in your example you don't show the type of MutationType).

How to make method generic without getting "No matching Shape found"

I am not sure how to get past this "No matching Shape found" error, apart from writing lots of boilerplate.
The basic idea illustrated in the Gist is that I have a very basic version of a method (works, but is very specific), then a version that takes the mapper parameter and is more generic (works too, but is specific to one particular type), and then a third version which takes a type parameter and would be very useful, but doesn't compile because of this error.
Basic method:
def updatePD_FirstNames(id: ids.PersonalDetailsId, firstNames: StringLtd30): Future[Int] = {
Better method:
def updatePD_SL(id: ids.PersonalDetailsId, mapper: tables.PersonalDetails => tables.profile.api.Rep[StringLtd30], sl: StringLtd30): Future[Int] = {
Ideal method (but doesn't compile):
def updatePD_X[X](id: ids.PersonalDetailsId, mapper: tables.PersonalDetails => tables.profile.api.Rep[X], sl: X): Future[Int] = {
```
[server] $ compile
[info] Compiling 1 Scala source to ... target\scala-2.12\classes...
[error] ...schema\DbProxy.scala:688: No matching Shape found.
[error] Slick does not know how to map the given types.
[error] Possible causes: T in Table[T] does not match your * projection,
[error] you use an unsupported type in a Query (e.g. scala List),
[error] or you forgot to import a driver api into scope.
[error] Required level: slick.lifted.FlatShapeLevel
[error] Source type: slick.lifted.Rep[X]
[error] Unpacked type: T
[error] Packed type: G
[error] val q2: Query[tables.profile.api.Rep[X], X, Seq] = q1.map(mapper)
[error] ^
[error] one error found
[error] (server/compile:compileIncremental) Compilation failed
[error] Total time: 4 s, completed 23-Mar-2017 11:15:47
```
Full code at https://gist.github.com/aholland/0845bf29d836d672d006ab58f5f1c73c
The only obvious problem I can see in the code you've posted is that X is unconstrained. It could be any type, including ones that Slick doesn't know how to process.
What you can do is add a context bound on X. The bound you probably want is BaseTypedType, which is a "typed type" Slick uses to identify types it can work with. It's described from 11:30 in https://www.youtube.com/watch?v=tS6N5AaZTLA
You'd use it like this:
import slick.ast.BaseTypedType

def updatePD[X: BaseTypedType](
  id: Long,
  selector: PersonTable => Rep[X],
  newValue: X
): DBIO[Int] =
  people.filter(_.id === id).map(selector).update(newValue)
What that means is that when you use the method...
updatePD(anId, _.name, "Alice")
...the compiler has to prove to itself that whatever X you use, there is an appropriate type representation in Slick.
This is also from Richard, but the exchange took place on gitter.
The only trouble with the first answer is that by demanding an implicit of type BaseTypedType[X] the context bound forces client code for optional columns to provide an implicit of type BaseTypedType[Option[X]] even when BaseTypedType[X] is already available.
This is unnecessary. Slick handles optional columns for you and if you provide an implicit for BaseTypedType[X] you are providing enough for it to handle columns of type Option[X].
So the context bound, while it works, is more demanding than necessary and results in having to write implicits in the client-code that involve directly referencing null and replicating logic already built into Slick. Not good.
The answer is to declare the implicit parameter as a named implicit parameter (called shape below) in its own parameter list, i.e. in long-form, not using the context bound short-hand :BaseTypedType. Then you can specify the more complicated but less demanding constraint used below.
So the solution is:
def updatePD[X](id: Long, selector: PersonTable => Rep[X], newValue: X)
               (implicit shape: Shape[_ <: FlatShapeLevel, Rep[X], X, _]): DBIO[Int] = {
  people.filter(_.id === id).map(selector).update(newValue)
}
Understanding why shape has the exact type Shape[_ <: FlatShapeLevel, Rep[X], X, _] depends on an intimate understanding of Slick's types and implicit mechanisms. Richard may yet write a blog post on that!
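To make the difference concrete, here is a small usage sketch of my own (it assumes a hypothetical optional column, nickname: Rep[Option[String]], on PersonTable, which is not part of the original gist). With the Shape-based signature both calls compile without any client-side implicits, whereas, per the discussion above, the BaseTypedType bound would push the second call to supply a BaseTypedType[Option[String]]:

// plain column: X is inferred as String
val renamed: DBIO[Int] = updatePD(anId, _.name, "Alice")

// hypothetical optional column: X is inferred as Option[String],
// and Slick's Shape machinery handles the Option wrapping itself
val nicknamed: DBIO[Int] = updatePD(anId, _.nickname, Option("Al"))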

akka-http has mixed java and scala dsl definitions, preventing compilation

The following error happens when trying to compile the line of code shown in the error message. Removing withStatus makes the code compile.
[error] /home/anton/code/flow-mobile/server/src/main/scala/in/flow/server/FlowServerStack.scala:108: type mismatch;
[error] found : akka.http.javadsl.model.HttpResponse
[error] required: akka.http.scaladsl.model.HttpResponse
[error] r mapEntity {_ transformDataBytes errorFlow(ermsg) } withStatus code
For some reason the method's signature is the following (even though it is defined in the scala dsl package):

override def withStatus(statusCode: Int): akka.http.javadsl.model.HttpResponse =
  copy(status = statusCode)

override def withStatus(statusCode: akka.http.javadsl.model.StatusCode): akka.http.javadsl.model.HttpResponse =
  copy(status = statusCode.asInstanceOf[StatusCode])
What's going on?
The withStatus method is probably intended as a builder pattern helper to be used with Java.
If you want to alter an HttpResponse from Scala, I reckon it would be more idiomatic to use .copy(status = StatusCodes.OK).
The thing is that you are supposed to use the copy method to change the status code, headers, etc. with the Scala DSL's HttpResponse. The other withXYZ methods are more for the internal workings of the Java API.
val originalResponse = ...
val newResponse = originalResponse.copy(status = StatusCodes.OK)
// or
val newResponse = originalResponse.copy(status = StatusCodes.NotFound)
You can look at defined StatusCodes here - http://doc.akka.io/api/akka-http/current/akka/http/scaladsl/model/StatusCodes$.html

Akka/Scala: onSuccess for multiple asks to different actor

I'm trying to make an async call to different sub-actors, such as:
A ---> B ---> C
\---> D
Actor A sends a request message to actor B, and B sends two task messages to C and D. When C and D send their results back, B merges the results and sends the combined result back to A.
I was trying to use the ask pattern and onSuccess to solve this:
class B(fakeInterface: String, subInterface: Array[ActorRef]) extends Actor {
  val subCount = subInterface.length
  var finishCount = 0

  def receive = {
    case reqMsg(msg) =>
      if (subInterface.length == 0) {
        sender ! DoneMessage(msg)
      } else {
        implicit val timeout = Timeout(5 minutes)
        val composedFutures = subInterface map { x =>
          (x ? DoItMessage(msg)).mapTo[DoneMessage]
        }
        val allResult = Future.sequence(composedFutures)
        allResult.onSuccess {
          case _ => sender ! DoneMessage(msg)
        }
      }
  }
}
But the code above does not compile at all; I got three errors:
[error] inferred type arguments [dummy.DoneMessage,Array] do not conform to method sequence's type parameter bounds [A,M[_] <: TraversableOnce[_]]
[error] val allResult = Future.sequence(composedFutures)
[error] ^
[error] type mismatch;
[error] found : Array[scala.concurrent.Future[dummy.DoneMessage]]
[error] required: M[scala.concurrent.Future[A]]
[error] val allResult = Future.sequence(composedFutures)
[error] ^
[error] Cannot construct a collection of type M[A] with elements of type A based on a collection of type M[scala.concurrent.Future[A]].
[error] val allResult = Future.sequence(composedFutures)
[error] ^
[error] three errors found
How can I fix this? Or is there a more proper way to solve this scenario?
Array is not a TraversableOnce. There is an implicit conversion from Array to WrappedArray, which is a TraversableOnce, but by the time the type parameter has been inferred to be Array it is too late for the implicit conversion. If you replace Array with one of the classes from the collections library, e.g. List, you can get past these compiler errors.
The ask pattern is also inefficient because it has to create a temporary actor for every request, which adds considerable overhead. It is useful for letting non-actor code communicate with actors, but actors themselves should message each other directly. Actor B will have to be stateful so it can keep track of the replies from C and D as they come back, but that approach makes for a better solution, as sketched below.
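For what that could look like, here is a minimal sketch of my own (classic, untyped Akka; the message types reqMsg, DoItMessage and DoneMessage are borrowed from the question, and the constructor is simplified to just the list of workers). B fires the tasks off, remembers who asked, and counts replies with context.become instead of asking:

import akka.actor.{Actor, ActorRef}

class B(subInterface: List[ActorRef]) extends Actor {
  def receive: Receive = idle

  def idle: Receive = {
    case reqMsg(msg) =>
      if (subInterface.isEmpty) sender() ! DoneMessage(msg)
      else {
        subInterface.foreach(_ ! DoItMessage(msg))
        // remember who asked and how many replies are still outstanding
        context.become(collecting(sender(), subInterface.size, DoneMessage(msg)))
      }
  }

  def collecting(requester: ActorRef, pending: Int, reply: DoneMessage): Receive = {
    case DoneMessage(_) =>
      if (pending == 1) {
        requester ! reply // all workers have answered
        context.become(idle)
      } else {
        context.become(collecting(requester, pending - 1, reply))
      }
  }
}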
scala.Array does not inherit from scala.collection.TraversableOnce so it cannot be used with Future.sequence. Try using a List or Seq instead.
http://www.scala-lang.org/api/current/index.html#scala.Array
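Concretely, the smallest change that gets past the compiler errors while keeping the original structure might look like this sketch of the else branch (it also captures sender() up front, since sender is not stable inside a future callback; it assumes import akka.pattern.ask and import context.dispatcher are in scope, as the original code already implies for ? and onSuccess):

implicit val timeout = Timeout(5 minutes)
val replyTo = sender() // capture the sender; it may point elsewhere by the time the future completes

val composedFutures: List[Future[DoneMessage]] =
  subInterface.toList.map(x => (x ? DoItMessage(msg)).mapTo[DoneMessage])

Future.sequence(composedFutures).onSuccess {
  case _ => replyTo ! DoneMessage(msg)
}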