akka-http has mixed java and scala dsl definitions, preventing compilation - scala

The following error occurs when compiling the line of code shown in the error message. Removing withStatus makes the code compile.
[error] /home/anton/code/flow-mobile/server/src/main/scala/in/flow/server/FlowServerStack.scala:108: type mismatch;
[error] found : akka.http.javadsl.model.HttpResponse
[error] required: akka.http.scaladsl.model.HttpResponse
[error] r mapEntity {_ transformDataBytes errorFlow(ermsg) } withStatus code
For some reason the function signature is this (even though it is found in the scala dsl package)
override def withStatus(statusCode: Int): akka.http.javadsl.model.HttpResponse =
  copy(status = statusCode)
override def withStatus(statusCode: akka.http.javadsl.model.StatusCode): akka.http.javadsl.model.HttpResponse =
  copy(status = statusCode.asInstanceOf[StatusCode])
What's going on?

The withStatus method is probably intended as a builder-pattern helper for use from Java.
If you want to alter an HttpResponse from Scala, I reckon it is more idiomatic to use .copy(status = StatusCodes.OK).

The thing is that with the Scala DSL's HttpResponse you are supposed to use the copy method to change the status code, headers, and so on. The withXYZ methods exist mainly for the internal workings of the Java API.
val originalResponse = ...
val okResponse = originalResponse.copy(status = StatusCodes.OK)
// or
val notFoundResponse = originalResponse.copy(status = StatusCodes.NotFound)
You can find the defined StatusCodes here: http://doc.akka.io/api/akka-http/current/akka/http/scaladsl/model/StatusCodes$.html
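The pitfall can be reproduced without akka-http at all. Below is a minimal sketch with hypothetical JavaResponse and Response types standing in for the javadsl/scaladsl pair: a Java-style withXYZ builder that widens its return type loses the precise Scala type, while copy preserves it.

```scala
// Hypothetical stand-ins for the javadsl/scaladsl pair; not real akka-http types.
trait JavaResponse { def status: Int }

final case class Response(status: Int, body: String) extends JavaResponse {
  // Java-style builder: declared to return the widened type, losing `Response`.
  def withStatus(code: Int): JavaResponse = copy(status = code)
}

val r = Response(200, "ok")

// r.withStatus(404).body          // does not compile: JavaResponse has no `body`
val notFound = r.copy(status = 404) // copy keeps the precise Response type
```

This mirrors the compile error above: withStatus hands you back the javadsl type, so anything expecting the scaladsl type (here, the `body` field) no longer type-checks.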

Why is Either expected in the following for comprehension?

I am playing with tagless final in Scala. I use pureconfig to load the configuration and then use the configuration values to set the server port and host.
Snippet
def create[F[_]: Async] =
  for {
    config <- ConfigSource.default.at("shopkart").load[AppConfig]
    httpApp = EndpointApp.make[F]
    server <- BlazeServerBuilder[F]
      .bindHttp(port = config.http.port, host = config.http.host)
      .withHttpApp(httpApp)
      .resource
  } yield server
The compilation error is ambiguous to me:
type mismatch;
[error] found : cats.effect.kernel.Resource[F,Unit]
[error] required: scala.util.Either[?,?]
[error] server <- BlazeServerBuilder[F]
[error] ^
[error] one error found
I understand that ConfigSource.default.at("shopkart").load[AppConfig] returns Either[ConfigReaderFailures, AppConfig]. But within the for-comprehension, config is an instance of AppConfig. So why is an Either expected on the following line, where BlazeServerBuilder is used?
My understanding is that, within the context of a for-comprehension, these are two different instances. Also, I came across a similar example in the scala pet store: https://github.com/pauljamescleary/scala-pet-store/blob/master/src/main/scala/io/github/pauljamescleary/petstore/Server.scala#L28
How can I de-sugar this for-comprehension to understand the error better?
This is the code you would have written if you had used flatMap/map instead of the for-comprehension:
ConfigSource.default.at("shopkart").load[AppConfig] // Either[E, AppConfig]
  .flatMap { config => // in flatMap you should have the same type of monad
    BlazeServerBuilder[F] // Resource[F, BlazeServerBuilder[F]]
      .bindHttp(port = config.http.port, host = config.http.host)
      .withHttpApp(EndpointApp.make[F])
      .resource
  }
The cause of your error is that you can't mix different monad types in one for-comprehension block. If you need to, you have to convert them to the same type. In your case the easiest way is to convert your Either to Resource[F, AppConfig]. But you should consider using an F that can represent the error type of the Either, like MonadError, to handle the error from the Either and lift it into F. Then you can use Resource.eval, which expects an F. Since you are already using Async, you can use Async[F].fromEither(config) for that.
def create[F[_]: Async] =
  for {
    config <- Resource.eval(
      Async[F].fromEither(ConfigSource.default.at("shopkart").load[AppConfig])
    )
    httpApp = EndpointApp.make[F]
    server <- BlazeServerBuilder[F]
      .bindHttp(port = config.http.port, host = config.http.host)
      .withHttpApp(httpApp)
      .resource
  } yield server
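The same mismatch can be reproduced with plain standard-library types, with no cats-effect or blaze involved: once the first generator of a for-comprehension fixes the monad, every subsequent <- must produce that same type. A small sketch (the values and the "missing port" message are made up for illustration):

```scala
val config: Either[String, Int]  = Right(8080)
val maybePort: Option[Int]       = Some(8080)

// Desugars to config.flatMap(port => ...): the next generator must also be an Either.
val ok: Either[String, Int] = for {
  port  <- config
  twice <- (Right(port * 2): Either[String, Int])
} yield twice

// This would not compile: the second generator is an Option, not an Either.
// val bad = for {
//   port  <- config
//   twice <- maybePort
// } yield port + twice

// Fix: convert to a common monad first, just like Resource.eval(...fromEither...)
val fixed: Either[String, Int] = for {
  port  <- config
  twice <- maybePort.toRight("missing port")
} yield port + twice
```

The commented-out block fails exactly like the question's snippet: the compiler wants the second generator to be an Either because the first one was.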

Exclude a specific implicit from a Scala project

How can I prevent the usage of a specific implicit in my scala code?
For example, I was recently bitten by the default Codec provided by https://github.com/scala/scala/blob/68bad81726d15d03a843dc476d52cbbaf52fb168/src/library/scala/io/Codec.scala#L76.
Is there a way to ensure that any code that calls for an implicit codec: Codec never uses the one provided by fallbackSystemCodec?
Alternatively, is it possible to block all implicit Codecs?
Is this something that should be doable using scalafix?
Scalafix can inspect implicit arguments using SemanticTree. Here is an example solution that defines a custom scalafix rule.
Given
import scala.io.Codec

object Hello {
  def foo(implicit codec: Codec) = 3
  foo
}
we can define a custom rule
class ExcludedImplicitsRule(config: ExcludedImplicitsRuleConfig)
    extends SemanticRule("ExcludedImplicitsRule") {
  ...
  override def fix(implicit doc: SemanticDocument): Patch = {
    doc.tree.collect {
      case term: Term if term.synthetic.isDefined => // TODO: Use ApplyTree(func, args)
        val struct = term.synthetic.structure
        val isImplicit = struct.contains("implicit")
        val excludedImplicit = config.blacklist.find(struct.contains)
        if (isImplicit && excludedImplicit.isDefined)
          Patch.lint(ExcludedImplicitsDiagnostic(term, excludedImplicit.getOrElse(config.blacklist.mkString(","))))
        else
          Patch.empty
    }.asPatch
  }
}
and corresponding .scalafix.conf
rule = ExcludedImplicitsRule
ExcludedImplicitsRuleConfig.blacklist = [
fallbackSystemCodec
]
should enable sbt scalafix to raise the diagnostic
[error] /Users/mario/IdeaProjects/scalafix-exclude-implicits/example-project/scalafix-exclude-implicits-example/src/main/scala/example/Hello.scala:7:3: error: [ExcludedImplicitsRule] Attempting to pass excluded implicit fallbackSystemCodec to foo'
[error] foo
[error] ^^^
[error] (Compile / scalafix) scalafix.sbt.ScalafixFailed: LinterError
Note the output of println(term.synthetic.structure)
Some(ApplyTree(
  OriginalTree(Term.Name("foo")),
  List(
    IdTree(SymbolInformation(scala/io/LowPriorityCodecImplicits#fallbackSystemCodec. => implicit lazy val method fallbackSystemCodec: Codec))
  )
))
Clearly the above solution is not efficient, since it searches strings, but it should give some direction. Matching on ApplyTree(func, args) would probably be better.
scalafix-exclude-implicits-example shows how to configure the project to use ExcludedImplicitsRule.
You can do this by using a new type altogether; this way, nobody in your dependencies will be able to override it. It's essentially the answer I posted to "create an ambiguous low priority implicit".
It may not be practical though, if for example you can't change the type.
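A minimal sketch of the new-type approach (the AppCodec name and its API are hypothetical, not from any library): wrap Codec in your own type for which you control every implicit instance, so fallbackSystemCodec can never be picked up.

```scala
import scala.io.Codec

// Hypothetical wrapper: the only implicit AppCodec values are the ones you define.
final case class AppCodec(underlying: Codec)

object AppCodec {
  // Deliberately the single instance; there is no low-priority fallback to leak in.
  implicit val utf8: AppCodec = AppCodec(Codec.UTF8)
}

// APIs now demand AppCodec instead of Codec, so scala.io's implicit Codecs don't apply.
def describe(implicit codec: AppCodec): String = codec.underlying.name
```

Because scala.io.LowPriorityCodecImplicits only provides Codec instances, nothing in the standard library can satisfy an AppCodec parameter behind your back.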

How to use akka-http-circe on client side calls?

I'm trying to do a simple REST API call using akka-http, circe and akka-http-json (akka-http-circe in particular).
import io.circe.generic.auto._
object Blah extends FailFastCirceSupport {
  //...
  val fut: Future[Json] = Http().singleRequest(HttpRequest(uri = uri))
I'm expecting akka-http-circe to figure out how to unmarshal an HttpResponse to my wanted type (here, just Json), but it doesn't compile.
So I looked at some documentation and samples around, and tried this:
val fut: Future[Json] = Http().singleRequest(HttpRequest(uri = uri)).flatMap(Unmarshal(_).to)
Gives: type mismatch; found 'Future[HttpResponse]', required 'Future[Json]'. How should I expose an unmarshaller for the fetch?
Scala 2.12.4, akka-http 10.0.10, akka-http-circe 1.18.0
The working code seems to be:
val fut: Future[Json] = Http().singleRequest(HttpRequest(uri = uri)).flatMap(Unmarshal(_).to[Json])
When the target type is explicitly given, the compiler finds the necessary unmarshallers and this works. It works equally with a custom type in place of Json.
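Why the explicit type parameter matters can be shown with a tiny stand-in type class (the Decode trait below is hypothetical, playing the role of akka-http's Unmarshaller): the compiler can only pick an implicit instance once the target type is fixed.

```scala
// Hypothetical stand-in for an unmarshaller type class.
trait Decode[T] { def apply(s: String): T }

object Decode {
  implicit val decodeInt: Decode[Int] = new Decode[Int] {
    def apply(s: String): Int = s.toInt
  }
}

// Counterpart of Unmarshal(_).to[T]: T must be known for implicit lookup to fire.
def to[T](s: String)(implicit d: Decode[T]): T = d(s)

// to("42")            // won't compile: T is not inferred, so no instance is found
val n = to[Int]("42")  // explicit target type: the compiler finds Decode.decodeInt
```

With .to (no type argument), nothing in the expression pins down T, so implicit resolution has no type to search for; .to[Json] supplies it.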
Further question: Is this the best way to do it?

"recursive value needs type" with no recursion or forward reference involved

I have a code fragment in a class like this:
protected val reservedWords =
  this.getClass.getMethods
    .filter(_.getReturnType == classOf[Keyword])
    .map(_.invoke(this).asInstanceOf[Keyword].str)

override val lexical = {
  new SqlLexical(reservedWords)
}
This works fine, but reservedWords was only used once. So I decided to make it a local variable:
override val lexical = {
  val keywords =
    this.getClass.getMethods
      .filter(_.getReturnType == classOf[Keyword])
      .map(_.invoke(this).asInstanceOf[Keyword].str)
  new SqlLexical(keywords)
}
(SqlLexical's constructor takes a Seq[String].) Somehow this gives two errors:
[error] /home/aromanov/IdeaProjects/scalan-lite/meta/src/main/scala/scalan/meta/SqlParser.scala:186: recursive value keywords needs type
[error] val keywords =
[error] ^
[error] /home/aromanov/IdeaProjects/scalan-lite/meta/src/main/scala/scalan/meta/SqlParser.scala:191: not found: value keywords
[error] new SqlLexical(keywords)
[error] ^
Replacing the definition with val keywords = Nil makes both errors go away; giving keywords an explicit type instead fixes only the first error. What is going on here?
I don't have a minimized example yet, because I hope I am missing something obvious and one won't be needed. The complete class can be seen at https://github.com/scalan/scalan/blob/a84253df1b04bf98ab7aba43be9e661f5c9e1423/meta/src/main/scala/scalan/meta/SqlParser.scala, and the problem should be reproducible with:
git clone https://github.com/scalan/scalan.git
git checkout a84253df1b04bf98ab7aba43be9e661f5c9e1423
sbt scalan-meta/compile
UPDATE: if I keep the original protected val reservedWords in place,
override val lexical = {
  val keywords =
    this.getClass.getMethods
      .filter(_.getReturnType == classOf[Keyword])
      .map(_.invoke(this).asInstanceOf[Keyword].str)
  new SqlLexical(keywords)
}
compiles without problems.

Scala Pickling and type parameters

I'm using Scala Pickling, an automatic serialization framework for Scala.
According to the author's slides, any type T can be pickled as long as there is an implicit Pickler[T] in scope.
Here, I'm assuming she means scala.tools.nsc.io.Pickler.
However, the following does not compile:
import scala.pickling._
import scala.pickling.binary._
import scala.tools.nsc.io.Pickler
object Foo {
  def bar[T: Pickler](t: T) = t.pickle
}
The error is:
[error] exception during macro expansion:
[error] scala.ScalaReflectionException: type T is not a class
[error] at scala.reflect.api.Symbols$SymbolApi$class.asClass(Symbols.scala:323)
[error] at scala.reflect.internal.Symbols$SymbolContextApiImpl.asClass(Symbols.scala:73)
[error] at scala.pickling.PickleMacros$class.pickleInto(Macros.scala:381)
[error] at scala.pickling.Compat$$anon$17.pickleInto(Compat.scala:33)
[error] at scala.pickling.Compat$.PickleMacros_pickleInto(Compat.scala:34)
I'm using Scala 2.10.2 with scala-pickling 0.8-SNAPSHOT.
Is this a bug or user error?
EDIT 1: The same error arises with both scala.pickling.SPickler and scala.pickling.DPickler.
EDIT 2: It looks like this is a bug: https://github.com/scala/pickling/issues/31
Yep, as Andy pointed out:
you need either a scala.pickling.SPickler or a scala.pickling.DPickler (static and dynamic, respectively) in order to pickle a particular type.
Those both already come in the scala.pickling package, so it's enough to just use them in your generic method signature.
You're absolutely correct that you can add an SPickler context-bound to your generic method. The only additional thing which you need (admittedly it's a bit ugly, and we're thinking about removing it) is to add a FastTypeTag context bound as well. (This is necessary for the pickling framework to know what type it's trying to pickle, as it handles primitives differently, for example.)
This is what you'd need to do to provide generic pickling/unpickling methods. Note that for the unbar method, you need to provide an Unpickler context bound rather than an SPickler context bound:
import scala.pickling._
import binary._

object Foo {
  def bar[T: SPickler: FastTypeTag](t: T) = t.pickle
  def unbar[T: Unpickler: FastTypeTag](bytes: Array[Byte]) = bytes.unpickle[T]
}
Testing this in the REPL, you get:
scala> Foo.bar(42)
res0: scala.pickling.binary.BinaryPickle =
BinaryPickle([0,0,0,9,115,99,97,108,97,46,73,110,116,0,0,0,42])
scala> Foo.unbar[Int](res0.value)
res1: Int = 42
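Context bounds like [T: SPickler: FastTypeTag] are just sugar for extra implicit parameters. The same mechanics can be seen with the standard library's Ordering (a sketch unrelated to pickling itself; maxOf and maxOfDesugared are names made up here):

```scala
// [T: Ordering] desugars to an extra implicit parameter list.
def maxOf[T: Ordering](a: T, b: T): T = implicitly[Ordering[T]].max(a, b)

// Fully desugared equivalent: the context bound becomes an explicit implicit param.
def maxOfDesugared[T](a: T, b: T)(implicit ord: Ordering[T]): T = ord.max(a, b)
```

So bar[T: SPickler: FastTypeTag] simply asks the compiler to supply both an SPickler[T] and a FastTypeTag[T] at each call site, which is why the concrete type must be known there.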
Looking at the project, it seems you need either a scala.pickling.SPickler or a scala.pickling.DPickler (static and dynamic, respectively) in order to pickle a particular type.
The pickle methods are macros. I suspect that if you pickle with an SPickler, the macro will require the compile time type of your class to be known.
Thus, you may need to do something similar to:
object Foo {
  def bar(t: SomeClass1) = t.pickle
  def bar(t: SomeClass2) = t.pickle
  def bar(t: SomeClass3) = t.pickle
  // etc
}
Alternatively, a DPickler may do the trick. I suspect that you'll still have to write some custom pickling logic for your specific types.