zio-http (ZIO 2.x) application not starting with Scala 3

I have this simple application:
import zhttp.http.*
import zhttp.http.Method.GET
import zhttp.service.Server
import zio.*
object HexAppApplication extends ZIOAppDefault {
  // Create HTTP route
  val app: HttpApp[Any, Nothing] = Http.collect[Request] {
    case GET -> !! / "text" => Response.text("Hello World!")
    case GET -> !! / "json" => Response.json("""{"greetings": "Hello World!"}""")
  }

  val program: URIO[Any, ExitCode] = Server.start(8090, app).exitCode

  override def run: URIO[Any, ExitCode] = program
}
The server starts and stops right away. Why doesn't it stay started?
build.sbt
val zioVersion = "2.0.0-RC5"
val zioHttpVersion = "2.0.0-RC6"
libraryDependencies += "dev.zio" %% "zio" % zioVersion
libraryDependencies ++= Seq(
  "dev.zio" %% "zio-test"          % zioVersion % "test",
  "dev.zio" %% "zio-test-sbt"      % zioVersion % "test",
  "dev.zio" %% "zio-test-magnolia" % zioVersion % "test" // optional
)
testFrameworks += new TestFramework("zio.test.sbt.ZTestFramework")
libraryDependencies += "io.d11" %% "zhttp" % zioHttpVersion
libraryDependencies += "io.d11" %% "zhttp-test" % zioHttpVersion % Test
The sample from the documentation (for Scala 2.x) looks similar:
import zio._
import zhttp.http._
import zhttp.service.Server
object HelloWorld extends App {
  val app = Http.collect[Request] {
    case Method.GET -> !! / "text" => Response.text("Hello World!")
  }

  override def run(args: List[String]): URIO[zio.ZEnv, ExitCode] =
    Server.start(8090, app).exitCode
}

Try updating zioHttpVersion to "2.0.0-RC7". That fixed it for me.
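For reference, a build.sbt sketch of that change (a sketch only; the zio version shown is an assumption, pick whichever zio release the zhttp version you use was built against, as explained in the next answer):
val zioVersion = "2.0.0-RC6" // assumption: match the zio release your zhttp version targets
val zioHttpVersion = "2.0.0-RC7"

libraryDependencies += "dev.zio" %% "zio" % zioVersion
libraryDependencies += "io.d11" %% "zhttp" % zioHttpVersion
libraryDependencies += "io.d11" %% "zhttp-test" % zioHttpVersion % Test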

It seems the problem was that zio-http was built against zio v2.0.0-RC6, so it was not usable with the final 2.0.0 release, nor with anything below v2.0.0-RC6 (https://discordapp.com/channels/629491597070827530/819703129267372113/995509195015209000). Support for the final zio release has just been merged, so no one should run into this again (link to GitHub PR).

Related

Trying to read a file from S3 with Flink using the IDE: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found

I am trying to read a file from S3 using Flink from IntelliJ and getting the following exception:
Caused by: java.lang.ClassNotFoundException: Class
org.apache.hadoop.fs.s3a.S3AFileSystem not found
This is how my code looks:
import org.apache.flink.api.scala.createTypeInformation
import org.apache.flink.streaming.api.functions.source.SourceFunction
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.parquet.column.page.PageReadStore
import org.apache.parquet.example.data.simple.convert.GroupRecordConverter
import org.apache.parquet.hadoop.ParquetFileReader
import org.apache.parquet.hadoop.util.HadoopInputFile
import org.apache.parquet.io.ColumnIOFactory
class ParquetSourceFunction extends SourceFunction[String] {
  override def run(ctx: SourceFunction.SourceContext[String]): Unit = {
    val inputPath = "s3a://path-to-bucket/"
    val outputPath = "s3a://path-to-output-bucket/"
    val conf = new Configuration()
    conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
    val readFooter = ParquetFileReader.open(HadoopInputFile.fromPath(new Path(inputPath), conf))
    val metadata = readFooter.getFileMetaData
    val schema = metadata.getSchema
    val parquetFileReader = new ParquetFileReader(conf, metadata, new Path(inputPath), readFooter.getRowGroups, schema.getColumns)
    // val parquetFileReader2 = new ParquetFileReader(new Path(inputPath), ParquetReadOptions)
    var pages: PageReadStore = null
    try {
      while ({ pages = parquetFileReader.readNextRowGroup; pages != null }) {
        val rows = pages.getRowCount
        val columnIO = new ColumnIOFactory().getColumnIO(schema)
        val recordReader = columnIO.getRecordReader(pages, new GroupRecordConverter(schema))
        (0L until rows).foreach { _ =>
          val group = recordReader.read()
          val myString = group.getString("field_name", 0)
          ctx.collect(myString)
        }
      }
    }
  }

  override def cancel(): Unit = ???
}

object Job {
  def main(args: Array[String]): Unit = {
    // set up the execution environment
    lazy val env = StreamExecutionEnvironment.getExecutionEnvironment
    lazy val stream = env.addSource(new ParquetSourceFunction)
    stream.print()
    env.execute()
  }
}
sbt dependencies:
val flinkVersion = "1.12.1"

val flinkDependencies = Seq(
  "org.apache.flink" %% "flink-clients" % flinkVersion, // % "provided",
  "org.apache.flink" %% "flink-scala" % flinkVersion, // % "provided",
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion, // % "provided"
  "org.apache.flink" %% "flink-parquet" % flinkVersion)

lazy val root = (project in file(".")).
  settings(
    libraryDependencies ++= flinkDependencies,
    libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "3.3.0",
    libraryDependencies += "org.apache.parquet" % "parquet-hadoop" % "1.11.1",
    libraryDependencies += "org.apache.flink" %% "flink-table-planner-blink" % "1.12.1" // % "provided"
  )
S3 is only supported by adding the respective flink-s3-fs-hadoop jar to your plugins folder, as described in the plugins docs. For a local IDE setup, the root that should contain the plugins dir is the working directory by default. You can override it with the env var FLINK_PLUGINS_DIR.
To get flink-s3-fs-hadoop into the plugins folder, I'm guessing some sbt glue is necessary (or you do it once manually); a rough sketch is below. In Gradle, I'd define a plugin scope and copy the jars to the plugins dir in a custom task.
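A minimal sketch of that sbt glue (untested; the task name and plugins layout are my own, and it assumes you declare flink-s3-fs-hadoop as a provided dependency purely so sbt can resolve the jar):
// build.sbt (sketch)
libraryDependencies += "org.apache.flink" % "flink-s3-fs-hadoop" % flinkVersion % "provided"

// Hypothetical task: copy the resolved flink-s3-fs-hadoop jar into ./plugins/s3-fs-hadoop.
// Then either run with the project root as the working directory, or point
// FLINK_PLUGINS_DIR at the plugins folder.
lazy val copyFlinkS3Plugin = taskKey[Unit]("Copy flink-s3-fs-hadoop into the local plugins dir")

copyFlinkS3Plugin := {
  val pluginDir = baseDirectory.value / "plugins" / "s3-fs-hadoop"
  IO.createDirectory(pluginDir)
  (Compile / dependencyClasspath).value
    .map(_.data)
    .filter(_.getName.startsWith("flink-s3-fs-hadoop"))
    .foreach(jar => IO.copyFile(jar, pluginDir / jar.getName))
}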

PlaySpec not found in IntelliJ

Below is a Scala test of a websocket:
import java.util.function.Consumer
import play.shaded.ahc.org.asynchttpclient.AsyncHttpClient
import play.api.inject.guice.GuiceApplicationBuilder
import play.api.test.{Helpers, TestServer, WsTestClient}
import scala.compat.java8.FutureConverters
import scala.concurrent.Await
import scala.concurrent.duration._
import org.scalatestplus.play._
class SocketTest extends PlaySpec with ScalaFutures {
  "HomeController" should {
    "reject a websocket flow if the origin is set incorrectly" in WsTestClient.withClient { client =>
      // Pick a non standard port that will fail the (somewhat contrived) origin check...
      lazy val port: Int = 31337
      val app = new GuiceApplicationBuilder().build()
      Helpers.running(TestServer(port, app)) {
        val myPublicAddress = s"localhost:$port"
        val serverURL = s"ws://$myPublicAddress/ws"
        val asyncHttpClient: AsyncHttpClient = client.underlying[AsyncHttpClient]
        val webSocketClient = new WebSocketClient(asyncHttpClient)
        try {
          val origin = "ws://example.com/ws"
          val consumer: Consumer[String] = new Consumer[String] {
            override def accept(message: String): Unit = println(message)
          }
          val listener = new WebSocketClient.LoggingListener(consumer)
          val completionStage = webSocketClient.call(serverURL, origin, listener)
          val f = FutureConverters.toScala(completionStage)
          Await.result(f, atMost = 1000.millis)
          listener.getThrowable mustBe a[IllegalStateException]
        } catch {
          case e: IllegalStateException =>
            e mustBe an[IllegalStateException]
          case e: java.util.concurrent.ExecutionException =>
            val foo = e.getCause
            foo mustBe an[IllegalStateException]
        }
      }
    }
  }
}
But compilation fails on the line import org.scalatestplus.play._ with the error:
Cannot resolve symbol scalatestplus
Following https://www.playframework.com/documentation/2.8.x/ScalaTestingWithScalaTest, I have added scalatest and play to the build:
build.sbt:
name := "testproject"
version := "1.0"
lazy val `testproject` = (project in file(".")).enablePlugins(PlayScala)
resolvers += "scalaz-bintray" at "https://dl.bintray.com/scalaz/releases"
resolvers += "Akka Snapshot Repository" at "https://repo.akka.io/snapshots/"
scalaVersion := "2.12.2"
libraryDependencies ++= Seq( jdbc , ehcache , ws , guice , specs2 % Test)
// https://mvnrepository.com/artifact/com.typesafe.scala-logging/scala-logging
libraryDependencies += "com.typesafe.scala-logging" %% "scala-logging" % "3.9.2"
libraryDependencies ++= Seq(
  "org.scalatestplus.play" %% "scalatestplus-play" % "3.0.0" % "test"
)
unmanagedResourceDirectories in Test <+= baseDirectory ( _ /"target/web/public/test" )
I've tried rebuilding the project and module via IntelliJ's "Build" option, and via the "Build" option shown when right-clicking build.sbt, but the import is still not found.
Running sbt dist from IntelliJ's "sbt shell", then File -> "Invalidate Caches" and a restart of IntelliJ, seems to fix the issue.
(screenshot: Invalidate Caches)

Scala Akka Streams project fails to compile, "value ~> is not a member of akka.stream.Outlet[In]"

I am using the Akka Streams API in a Scala project, working in IntelliJ IDEA with the sbt plugin. I have a worker-pool Flow as described here: https://doc.akka.io/docs/akka/current/scala/stream/stream-cookbook.html. Here is my code:
package streams
import akka.NotUsed
import akka.stream.scaladsl.{Balance, Flow, GraphDSL, Merge}
import akka.stream.{FlowShape, Graph}
object WorkerPoolFlow {
  def apply[In, Out](
      worker: Flow[In, Out, Any],
      workerCount: Int): Graph[FlowShape[In, Out], NotUsed] = {
    GraphDSL.create() { implicit b =>
      val balance = b.add(Balance[In](workerCount, waitForAllDownstreams = true))
      val merge = b.add(Merge[Out](workerCount))

      for (i <- 0 until workerCount)
        balance.out(i) ~> worker.async ~> merge.in(i)

      FlowShape(
        balance.in,
        merge.out)
    }
  }
}
For some reason the project is now failing to compile, giving this error: value ~> is not a member of akka.stream.Outlet[In].
It compiled fine until today. The only changes I am aware of making are installing the scalafmt plugin and importing a few new libraries in build.sbt. Here is my build.sbt:
name := "myProject"
version := "0.1"
scalaVersion := "2.11.11"
unmanagedJars in Compile += file("localDep1.jar")
unmanagedJars in Compile += file("localDep2.jar")
libraryDependencies += "io.spray" %% "spray-json" % "1.3.3"
libraryDependencies += "com.typesafe.akka" %% "akka-actor" % "2.5.6"
libraryDependencies += "com.typesafe.akka" %% "akka-stream" % "2.5.6"
libraryDependencies += "com.typesafe.akka" %% "akka-testkit" % "2.5.6" % Test
libraryDependencies += "com.typesafe.akka" %% "akka-stream-testkit" % "2.5.6" % Test
libraryDependencies += "com.47deg" %% "fetch" % "0.6.0"
libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.2.3"
I have tried reloading SBT, building from SBT outside of IDEA, removing and re-adding dependencies, and cleaning the project, with no luck.
Import GraphDSL.Implicits._ inside the builder; that is what brings the ~> operator into scope (a full version is sketched after this snippet):
object WorkerPoolFlow {
  def apply[In, Out](
      worker: Flow[In, Out, Any],
      workerCount: Int): Graph[FlowShape[In, Out], NotUsed] = {
    import GraphDSL.Implicits._
    GraphDSL.create() { implicit b =>
      ...
    }
  }
}
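For completeness, here is a sketch of the question's WorkerPoolFlow with only that import added (no other changes):
package streams

import akka.NotUsed
import akka.stream.scaladsl.{Balance, Flow, GraphDSL, Merge}
import akka.stream.{FlowShape, Graph}

object WorkerPoolFlow {
  def apply[In, Out](
      worker: Flow[In, Out, Any],
      workerCount: Int): Graph[FlowShape[In, Out], NotUsed] = {
    // Brings the ~> edge operator for outlets, inlets and flows into scope.
    import GraphDSL.Implicits._

    GraphDSL.create() { implicit b =>
      val balance = b.add(Balance[In](workerCount, waitForAllDownstreams = true))
      val merge = b.add(Merge[Out](workerCount))

      for (i <- 0 until workerCount)
        balance.out(i) ~> worker.async ~> merge.in(i)

      FlowShape(balance.in, merge.out)
    }
  }
}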

Expect a specific instance for mock in Scalamock

Why can I not tell a mock that it should expect an instance of a class without explicitly giving the type? Here is what I mean:
val myClass = new MyClass(...)
val traitMock = mock[MyTrait]
(traitMock.mymethod _).expects(myClass).returning(12.3)
does not work, while
val myClass: MyClass = new MyClass(...)
val traitMock = mock[MyTrait]
(traitMock.mymethod _).expects(myClass).returning(12.3)
does work. How come the type cannot be inferred?
My testing part in build.sbt is
libraryDependencies ++= Seq(
  "org.scalatest" %% "scalatest" % "3.0.0" % "test"
    exclude("org.scala-lang", "scala-reflect")
    exclude("org.scala-lang.modules", "scala-xml")
)
libraryDependencies += "org.scalamock" %% "scalamock-scalatest-support" % "3.3.0" % "test"
Since I was asked for MyClass (it is SpacePoint here):
trait SpacePoint {
  val location: SpaceLocation
}

val sp = new SpacePoint {
  override val location: SpaceLocation = new SpaceLocation(DenseVector(1.0, 1.0))
}
So it actually works for me. Let me mention that type inference in the code:
val myClass = new MyClass(...)
has nothing to do with ScalaMock; it is guaranteed by Scala itself. Below is a working sample with library versions and the sources of the classes.
Testing libraries:
"org.scalatest" %% "scalatest" % "2.2.4" % "test",
"org.scalamock" %% "scalamock-scalatest-support" % "3.2.2" % "test"
Source code of classes:
class MyClass(val something: String)

trait MyTrait {
  def mymethod(smth: MyClass): Double
}
Source code of test:
import org.scalamock.scalatest.MockFactory
import org.scalatest.{Matchers, WordSpec}
class ScalamockTest extends WordSpec with MockFactory with Matchers {
  "ScalaMock" should {
    "infers type" in {
      val myClass = new MyClass("Hello")
      val traitMock = mock[MyTrait]
      (traitMock.mymethod _).expects(myClass).returning(12.3)
      traitMock.mymethod(myClass) shouldBe 12.3
    }
  }
}
Hope this helps. I will be ready to update the answer once you provide more details.

Analyze Twitter data with Spark

Can anyone help me with how to analyze Twitter data based on whatever 'keys' I specify? I found this code, but it gives me an error.
import java.io.File
import com.google.gson.Gson
import org.apache.spark.streaming.twitter.TwitterUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}
/**
 * Collect at least the specified number of tweets into json text files.
 */
object Collect {
  private var numTweetsCollected = 0L
  private var partNum = 0
  private var gson = new Gson()

  def main(args: Array[String]) {
    // Process program arguments and set properties
    if (args.length < 3) {
      System.err.println("Usage: " + this.getClass.getSimpleName +
        "<outputDirectory> <numTweetsToCollect> <intervalInSeconds> <partitionsEachInterval>")
      System.exit(1)
    }

    val Array(outputDirectory, Utils.IntParam(numTweetsToCollect), Utils.IntParam(intervalSecs), Utils.IntParam(partitionsEachInterval)) =
      Utils.parseCommandLineWithTwitterCredentials(args)
    val outputDir = new File(outputDirectory.toString)
    if (outputDir.exists()) {
      System.err.println("ERROR - %s already exists: delete or specify another directory".format(
        outputDirectory))
      System.exit(1)
    }
    outputDir.mkdirs()

    println("Initializing Streaming Spark Context...")
    val conf = new SparkConf().setAppName(this.getClass.getSimpleName)
    val sc = new SparkContext(conf)
    val ssc = new StreamingContext(sc, Seconds(intervalSecs))

    val tweetStream = TwitterUtils.createStream(ssc, Utils.getAuth)
      .map(gson.toJson(_))

    tweetStream.foreachRDD((rdd, time) => {
      val count = rdd.count()
      if (count > 0) {
        val outputRDD = rdd.repartition(partitionsEachInterval)
        outputRDD.saveAsTextFile(outputDirectory + "/tweets_" + time.milliseconds.toString)
        numTweetsCollected += count
        if (numTweetsCollected > numTweetsToCollect) {
          System.exit(0)
        }
      }
    })

    ssc.start()
    ssc.awaitTermination()
  }
}
The error is:
object gson is not a member of package com.google
If you know of any link about this, or how to fix the problem, could you share it with me? I want to analyze Twitter data with Spark.
Thanks. :)
As Peter pointed out, you are missing the gson dependency, so you'll need to add the following dependency to your build.sbt:
libraryDependencies += "com.google.code.gson" % "gson" % "2.4"
You can also define all the dependencies in one sequence:
libraryDependencies ++= Seq(
  "com.google.code.gson" % "gson" % "2.4",
  "org.apache.spark" %% "spark-core" % "1.2.0",
  "org.apache.spark" %% "spark-streaming" % "1.2.0",
  "org.apache.spark" %% "spark-streaming-twitter" % "1.2.0"
)
Bonus: in case of other missing dependencies, you can search for the dependency on http://mvnrepository.com/, and if you need to find the jar/dependency associated with a given class, you can also use the findjar website.