Using Scala and Play 2.5.10 (according to plugin.sbt) I have this code:
import akka.stream.scaladsl.Source
import play.api.libs.streams.Streams
import play.http._
val source = Source.fromPublisher(Streams.enumeratorToPublisher(enumerator))
Ok.sendEntity(HttpEntity.Streamed(source, None, Some("application/zip")))
The imports are left over from experimenting, because no matter what I try I can't get the framework to accept HttpEntity.Streamed. With this setup the error is the one in the title; taken from the console:
Looking at the documentation here I can't really figure out why it doesn't work: https://www.playframework.com/documentation/2.5.10/api/java/play/http/HttpEntity.Streamed.html
This is also what the official examples use: https://www.playframework.com/documentation/2.5.x/ScalaStream
Does anyone at least have some pointers on where to start looking? I've never used Scala or Play before so any hints are welcome.
You should import the Scala API version of HttpEntity:
import play.api.http.HttpEntity
import play.api.libs.streams.Streams
val entity: HttpEntity = HttpEntity.Streamed(fileContent, None, None)
Result(ResponseHeader(200), entity).as("application/zip") // or whatever MIME type fits the file
The error means that HttpEntity.Streamed is not a value: play.http._ is the Java API, where Streamed is an inner class that cannot be applied like a function in Scala. With the Scala API (play.api.http.HttpEntity) it is a case class, so you can construct it and wrap it in a Result() with a ResponseHeader and a content type.
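Putting it together for the original zip-streaming case, a minimal sketch (assuming enumerator is an Enumerator[ByteString] already in scope; the controller and action names are made up):
import akka.stream.scaladsl.Source
import akka.util.ByteString
import play.api.http.HttpEntity
import play.api.libs.iteratee.Enumerator
import play.api.libs.streams.Streams
import play.api.mvc._
class Download extends Controller {
  def zip(enumerator: Enumerator[ByteString]) = Action {
    // Bridge the iteratee-based Enumerator to an Akka Streams Source
    val source = Source.fromPublisher(Streams.enumeratorToPublisher(enumerator))
    Result(
      header = ResponseHeader(200),
      body = HttpEntity.Streamed(source, contentLength = None, contentType = Some("application/zip"))
    )
  }
}
The question's Ok.sendEntity(HttpEntity.Streamed(source, None, Some("application/zip"))) should work the same way once the Scala-API HttpEntity is the one in scope.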
Related
I'm a Scala newbie, using PySpark extensively (on Databricks, FWIW). I'm finding that Protobuf deserialization is too slow for me in Python, so I'm porting my deserialization UDF to Scala.
I've compiled my .proto files to Scala and then into a JAR using ScalaPB, as described here.
When I try to use these instructions to create a UDF like this:
import gnmi.gnmi._
import org.apache.spark.sql.{Dataset, DataFrame, functions => F}
import spark.implicits.StringToColumn
import scalapb.spark.ProtoSQL
// import scalapb.spark.ProtoSQL.implicits._
import scalapb.spark.Implicits._
val deserialize_proto_udf = ProtoSQL.udf { bytes: Array[Byte] => SubscribeResponse.parseFrom(bytes) }
I get the following error:
command-4409173194576223:9: error: could not find implicit value for evidence parameter of type frameless.TypedEncoder[Array[Byte]]
val deserialize_proto_udf = ProtoSQL.udf { bytes: Array[Byte] => SubscribeResponse.parseFrom(bytes) }
I've double-checked that I'm importing the correct implicits, to no avail. I'm pretty fuzzy on implicits, evidence parameters, and Scala in general.
I would really appreciate it if someone could point me in the right direction; I don't even know how to start diagnosing!
Update
It seems like frameless doesn't include an implicit encoder for Array[Byte]?
This works:
frameless.TypedEncoder[Byte]
this does not:
frameless.TypedEncoder[Array[Byte]]
The code for frameless.TypedEncoder seems to include a generic Array encoder, but I'm not sure I'm reading it correctly.
@Dmytro, thanks for the suggestion. That helped.
Does anyone have ideas about what is going on here?
Update
OK, progress: this looks like a Databricks issue. I think the notebook does something like the following on startup:
import spark.implicits._
I'm using ScalaPB, which requires that you don't do that.
I'm now hunting for a way to disable that automatic import, or to "unimport" or shadow those implicits after they get imported.
If spark.implicits._ is already imported, then one way to "unimport" it (hide or shadow those implicits) is to create a duplicate object and import it too; having two identical implicit names in scope makes resolution ambiguous, so the compiler can use neither:
import org.apache.spark.sql.{SQLContext, SQLImplicits}

// Re-declares the same implicit names as spark.implicits._; importing this
// makes each name ambiguous, so none of the implicits can be resolved.
object implicitShadowing extends SQLImplicits with Serializable {
  protected override def _sqlContext: SQLContext = ??? // never invoked; only the names matter
}
import implicitShadowing._
Testing with case class Person(id: Long, name: String):
// no import
List(Person(1, "a")).toDS() // doesn't compile, value toDS is not a member of List[Person]
import spark.implicits._
List(Person(1, "a")).toDS() // compiles
import spark.implicits._
import implicitShadowing._
List(Person(1, "a")).toDS() // doesn't compile, value toDS is not a member of List[Person]
See also:
How to override an implicit value?
Wildcard Import, then Hide Particular Implicit?
How to override an implicit value, that is imported?
How can an implicit be unimported from the Scala repl?
Not able to hide Scala Class from Import
NullPointerException on implicit resolution
Constructing an overridable implicit
Caching the circe implicitly resolved Encoder/Decoder instances
Scala implicit def do not work if the def name is toString
Is there a workaround for this format parameter in Scala?
Please check whether this helps.
One possible problem: you don't want to just unimport spark.implicits._ (scalapb.spark.Implicits._); you probably want to import scalapb.spark.ProtoSQL.implicits._ too. And I don't know whether implicitShadowing._ shadows some of those as well.
Another possible workaround is to resolve implicits manually and use them explicitly.
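For illustration, the generic shape of "resolve manually, use explicitly" looks like this (Person and the session setup are assumptions for the sketch, not from the question):
import org.apache.spark.sql.{Dataset, Encoder, Encoders, SparkSession}

case class Person(id: Long, name: String)

val spark = SparkSession.builder().getOrCreate()

// Summon the encoder from a concrete object instead of a wildcard import,
// then pass it explicitly, so ambiguous implicits in scope play no part.
val personEncoder: Encoder[Person] = Encoders.product[Person]
val ds: Dataset[Person] = spark.createDataset(List(Person(1, "a")))(personEncoder)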
I'm facing the problem below with the Spark shell. In a shell session:
I imported the following: import scala.collection.immutable.HashMap
Then I realized my mistake and imported the correct class: import java.util.HashMap
But now I get the following error when running my code:
<console>:34: error: reference to HashMap is ambiguous;
it is imported twice in the same scope by
import java.util.HashMap
and import scala.collection.immutable.HashMap
val colMap = new HashMap[String, HashMap[String, String]]()
Please assist: I have a long-running Spark shell session, i.e. I do not want to close and reopen it. Is there a way I can clear the previous import and use the correct class?
I know that I can also specify the fully qualified name, like val colMap = new java.util.HashMap[String, java.util.HashMap[String, String]]().
But I'm looking for a way to clear the incorrectly loaded class.
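For reference, a rename on import also sidesteps the ambiguity without restarting the shell, at the cost of using a different name:
// Bind java.util.HashMap to a fresh, unambiguous name
import java.util.{HashMap => JHashMap}

val colMap = new JHashMap[String, JHashMap[String, String]]()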
Thanks
I am using scalaz7 in a project and sometimes I run into issues with imports. The simplest way to get started is
import scalaz._
import Scalaz._
but sometimes this can lead to conflicts. What I have been doing until now is the following, slightly painful, process:
work out a minimal example that needs the same imports as my actual code
copy that example in a separate project
compile it with the option -Xprint:typer to find out how the code looks after implicit resolution
import the needed implicits in the original project.
Although this works, I would like to streamline it. I see that scalaz7 has much more fine-grained imports, but I do not fully understand how they are organized. For instance, I see one can do
import scalaz.std.option._
import scalaz.std.AllInstances._
import scalaz.std.AllFunctions._
import scalaz.syntax.monad._
import scalaz.syntax.all._
import scalaz.syntax.std.boolean._
import scalaz.syntax.std.all._
and so on.
How are these sub-imports organized?
As an example, say I want to work with validations. What would I need to import to inject the validation implicits and make the following compile?
3.fail[String]
What about making ValidationNEL[A, B] an instance of Applicative?
This blog post explains the package structure and imports a la carte in scalaz7 in detail: http://eed3si9n.com/learning-scalaz-day13
For your specific examples: for the first one (fail is now called failure in scalaz7, so 3.failure[String]) you'd need:
import scalaz.syntax.validation._
Validation already has a method ap:
scala> "hello".successNel[Int] ap ((s: String) => "x"+s).successNel[Int]
res1: scalaz.Validation[scalaz.NonEmptyList[Int],java.lang.String] = Success(xhello)
To get the <*> operator, you need this import:
import scalaz.syntax.applicative._
Then you can do:
"hello".successNel[Int] <*> ((s: String) => "x"+s).successNel[Int]
I want to save an object (an instance of a class) to a file, but I haven't found any useful examples of this. Do I need to use serialization for it?
How do I do that?
UPDATE:
Here is how I tried to do it:
import scala.util.Marshal
import java.io._

object Example {
  class Foo(val message: String) extends scala.Serializable

  val foo = new Foo("qweqwe")
  val out = new FileOutputStream("out123.txt")
  out.write(Marshal.dump(foo))
  out.close()
}
First of all, out123.txt contains a lot of extra data and it is not human-readable text. My gut tells me there should be a proper way to do this.
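For reference, a minimal sketch of the plain JDK route; Java serialization is a binary format, which would explain the unreadable output:
import java.io._

class Foo(val message: String) extends Serializable

val oos = new ObjectOutputStream(new FileOutputStream("foo.bin"))
try oos.writeObject(new Foo("qweqwe")) finally oos.close()

val ois = new ObjectInputStream(new FileInputStream("foo.bin"))
val restored = try ois.readObject().asInstanceOf[Foo] finally ois.close()
println(restored.message) // qweqwe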
At the last ScalaDays, Heather Miller introduced a new library that provides a new mechanism for serialization: pickling. I think it would be an idiomatic way to do serialization in Scala, and just what you want.
Check out a paper on this topic, as well as the slides and talk from ScalaDays '13.
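A minimal sketch of pickling to JSON (the pickling API changed between versions, so treat the exact imports as an assumption):
import scala.pickling.Defaults._
import scala.pickling.json._

case class Foo(message: String)

val pickled  = Foo("qweqwe").pickle     // a human-readable JSON pickle
val restored = pickled.unpickle[Foo]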
It is also possible to serialize to and deserialize from JSON using Jackson.
A nice wrapper that makes it Scala-friendly is Jacks.
JSON has the following advantages:
it is simple, human-readable text
it is a rather efficient format byte-wise
it can be used directly by JavaScript
it can even be stored and queried natively using a DB like MongoDB
(Edit) Example Usage
Serializing to JSON:
val json = JacksMapper.writeValueAsString[MyClass](instance)
... and deserializing:
val obj = JacksMapper.readValue[MyClass](json)
Take a look at Twitter Chill to handle your serialization: https://github.com/twitter/chill. It's a Scala helper for the Kryo serialization library. The documentation and examples on the GitHub page should be sufficient for your needs.
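A small sketch of what that looks like, based on Chill's README (treat the exact pool API as an assumption):
import com.twitter.chill.ScalaKryoInstantiator

case class Foo(message: String)

val pool = ScalaKryoInstantiator.defaultPool          // thread-safe KryoPool
val bytes: Array[Byte] = pool.toBytesWithClass(Foo("qweqwe"))
val restored = pool.fromBytes(bytes).asInstanceOf[Foo]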
Just adding my answer here for the convenience of someone like me.
The pickling library mentioned by @4lex1v only supports Scala 2.10/2.11, but I'm using Scala 2.12, so I'm not able to use it in my project.
Then I found BooPickle, which supports Scala 2.11 as well as 2.12!
Here's the example:
import boopickle.Default._
val data = Seq("Hello", "World!")
val buf = Pickle.intoBytes(data)
val helloWorld = Unpickle[Seq[String]].fromBytes(buf)
For more details, please check here.