I wait for a Future to complete and print the content on the console. Even when everything is finished, the main application doesn't exit and I have to kill it manually.
def main(args: Array[String]): Unit = {
  val req = HttpRequest(GET, myURL)
  val res = Http().singleRequest(req)
  val resultsFutures = Future {
    val resultString = Await.result(HttpRequests.unpackResponse(res), Duration.Inf)
    JsonMethods.parse(resultString).extract[List[Results]]
  }
  val results = Await.result(resultsFutures, Duration.Inf)
  println(results)
}
So results gets printed to the console with the expected content, but the application still doesn't exit.
Is there something I can do to exit the application? Is there still something running that the main is waiting for?
I'm using:
scala 2.12.10
akka 2.5.26
akkaHttp 10.1.11
As you are using Akka, you likely have an ActorSystem instantiated somewhere under the hood that keeps the process running.
Either you can get a handle on it and call its actorSystem.terminate() method, or you can use an explicit sys.exit(0) at the end of your main method (0 being the exit code you want).
Edit: you should also wrap the Awaits in Try and make sure to call sys.exit in case of failures as well.
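For illustration, a minimal sketch of that approach, assuming akka-http 10.1.x on Akka 2.5; the URL is a placeholder and the question's response unpacking and JSON parsing are left out:
import scala.concurrent.Await
import scala.concurrent.duration.Duration
import scala.util.Try
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpRequest
import akka.stream.ActorMaterializer

object Main {
  def main(args: Array[String]): Unit = {
    implicit val system: ActorSystem = ActorSystem("http-client")
    implicit val materializer: ActorMaterializer = ActorMaterializer()

    // Wrap the blocking wait in Try so a failed request still reaches the shutdown code below.
    val result = Try(Await.result(Http().singleRequest(HttpRequest(uri = "https://example.com")), Duration.Inf))
    println(result)

    // Shut down the actor system (or call sys.exit(0)) so the JVM can exit.
    system.terminate()
  }
}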
Using a Future in the REPL:
scala> val a=Future{1}
a: scala.concurrent.Future[Int] = Future(<not completed>)
scala> a.value
res0: Option[scala.util.Try[Int]] = Some(Success(1))
It returns Some(Success(1)).
Using it in IDEA:
object A extends App {
  val a = Future { 1 }
  println(a.value)
}
It returns None:
"C:\Program Files\Java\jdk1.8.0_201\bin\java.exe"...
None
Why? There is nothing like Thread.sleep here, so I would think the Future completes immediately in either case and gives me Some(Success(1)).
Thanks!
The Future is executed asynchronously: it is submitted to the thread pool's queue, where one of the available threads eventually picks it up and executes it.
When you run it in the REPL, at some point (probably during I/O) the current thread loses control, the context switches, and another thread gets a chance to pick the task up from the queue and complete it.
When you run it as a program, a.value is executed immediately after a = Future { 1 }, on the same thread, while the asynchronous task is still sitting in the queue.
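A small sketch of the same program with an explicit wait added, just to illustrate the point (the timeout value is arbitrary):
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object A extends App {
  val a = Future { 1 }

  // Read straight away: the task is most likely still in the queue, so this usually prints None.
  println(a.value)

  // Wait for the future to complete, then read again: now it prints Some(Success(1)).
  Await.ready(a, 1.second)
  println(a.value)
}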
I've got a simple cats-effect app which downloads a site from the URL given as an argument. While downloading, the app is supposed to display a "loading bar" by writing dots (.) to the console. I implemented it as a race of two IOs: one for downloading, the other for displaying dots.
The whole app is on Scastie.
The most important part is here:
def loader(): IO[Unit] = for {
  _ <- console.putStr(".")
  _ <- timer.sleep(Duration(50, MILLISECONDS)) *> loader()
} yield {}

def download(url: String): IO[String] = IO.delay(Source.fromURL(url)).map(_.mkString)

def run(args: List[String]): IO[Unit] = {
  args.headOption match {
    case Some(url) =>
      for {
        content <- IO.race(download(url), loader()).map(_.left.get)
        _ <- console.putStrLn() *> console.putStrLn(s"Downloaded site from $url. Size of downloaded content is ${content.length}.")
      } yield {}
    case None => console.putStrLn("Pass url as argument.")
  }
}
Everything works as I expected; when I run it, I get:
..............
Downloaded site from https://www.scala-lang.org. Size of downloaded content is 47738.
The only problem is that the app never exits.
As far as I can tell, the loader IO gets cancelled correctly. I can even add something like this:
urlLoader.run(args) *> console.putStrLn("???") *> IO(ExitCode.Success)
And ??? gets displayed.
Also, when I remove the race, the app exits correctly.
So my question is: how can I fix this and make my app exit at the end?
To follow up on my comment above: the problem is that your ScheduledExecutorService has threads running that prevent the JVM from exiting, even though your timer's tasks have been cancelled. There are several ways you could resolve this:
Add an IO(ses.shutdown()) before IO(ExitCode.Success).
Call newScheduledThreadPool with a thread factory that daemonizes its threads.
Use the timer: Timer that you get for free inside IOApp.
The last of these is almost certainly the right choice: using the timer (and ContextShift) provided by IOApp will give you reasonable defaults for this and other behaviors.
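A rough sketch of that third option, assuming cats-effect 2.x; this is not the asker's exact code, and the real download is replaced by a placeholder sleep:
import cats.effect.{ExitCode, IO, IOApp}
import scala.concurrent.duration._

object LoaderApp extends IOApp {

  // Print a dot, sleep, repeat; the recursive call sits inside flatMap, so it is built lazily.
  def loader: IO[Unit] =
    for {
      _ <- IO(print("."))
      _ <- IO.sleep(50.millis) // uses the Timer[IO] supplied by IOApp
      _ <- loader
    } yield ()

  def run(args: List[String]): IO[ExitCode] =
    for {
      // Stand-in for the real download; the race cancels the loader once it completes.
      result <- IO.race(IO.sleep(1.second).map(_ => "downloaded"), loader)
      _ <- IO(println("\n" + result.fold(identity, _ => "")))
    } yield ExitCode.Success
}
Because IOApp's scheduler threads are daemonized, the JVM can exit as soon as run finishes.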
You can also prevent an early JVM exit with Scala futures, using code like this:
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
Await.ready(Future.never, Duration.Inf)
Basically I mean:
for(v <- Future(long time operation)) yield v*someOtherValue
This expression returns another Future, but the question is: is the v * someOtherValue operation lazy or not? Will this expression block on getting the value of Future(long time operation)?
Or it is like a chain of callbacks?
A short experiment can answer this question.
import scala.concurrent._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object TheFuture {
  def main(args: Array[String]): Unit = {
    val fut = for (v <- Future { Thread.sleep(2000); 10 }) yield v * 10
    println("For loop is finished...")
    println(Await.ready(fut, Duration.Inf).value.get)
  }
}
If we run this, we see For loop is finished... almost immediately, and then two seconds later, we see the result. So the act of performing map or similar operations on a future is not blocking.
A map (or, equivalently, your for comprehension) on a Future is not lazy: it will be executed as soon as possible on another thread. However, since it runs on another thread, it isn't blocking, either.
If you want to separate defining the computation from executing it, you have to use something like a Monix Task.
https://monix.io/api/3.0/monix/eval/Task.html
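For example, a rough sketch with Monix 3.x: the Task only describes the computation, and nothing runs until it is explicitly executed.
import monix.eval.Task
import monix.execution.Scheduler.Implicits.global
import scala.concurrent.Await
import scala.concurrent.duration._

object LazyTask {
  def main(args: Array[String]): Unit = {
    // Nothing runs yet: a Task is just a lazy description of the computation.
    val task = Task { Thread.sleep(2000); 10 }.map(_ * 10)

    // Execution starts only when the Task is explicitly run.
    val future = task.runToFuture
    println(Await.result(future, Duration.Inf)) // prints 100 after ~2 seconds
  }
}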
At the end of my ScalaTest suite I need to do some DB clean up.
The cleanup itself is a Future. The suite never reaches super.afterAll(), which leaves some resources used by the suite (like a web browser and DB connections) pending.
Here are the relevant pieces of code:
override def afterAll(): Unit = {
  var cleanUpsInProgress = true
  DB.cleanUpDeletedSegments(db).onComplete { case _ =>
    cleanUpsInProgress = false
  }
  while (cleanUpsInProgress) {}
  db.close()
  aggregatesDB.close()
  super.afterAll()
}
and
def cleanUpDeletedSegments(implicit db: ADMPDB): Future[Int] = {
  db.run {
    segments.filter(_.deleted === 1).delete
  }
}
I've debugged and scratched my head for a while and came to the conclusion that the code in the future's onComplete callback is never even executed. Even when I substitute the Slick DB action with a stub Future.successful(1), everything is still pending and super.afterAll() does NOT get invoked.
Am I doing something stupidly wrong? Could you help?
Note: I also think I need this ugly var and while loop because otherwise the main thread completes and the framework that runs the suite just closes the JVM. Maybe I'm wrong here, so it would be great to hear some comments.
--------------------------UPDATE----------------------
The solution by Tyler works. But when I flatMap one more async cleanup (which I actually need to do), the problem is the same again. The code below freezes and does not call super.afterAll:
override def afterAll(): Unit = {
  val cleanUp = DB.cleanUpDeletedSegments(db).flatMap(_ => DB.cleanUpDeletedSegmentGroups(db))
  Await.result(cleanUp, 6 seconds)
  db.close()
  aggregatesDB.close()
  super.afterAll()
}
Await.result does not throw a TimeoutException either, and from what I can see it doesn't complete normally. Any ideas?
It only works if I use Await.result sequentially for each future, like below:
override def afterAll(): Unit = {
  val cleanUpSegments = DB.cleanUpDeletedSegments(db)
  Await.result(cleanUpSegments, 3 seconds)
  val cleanUpSegmentGroups = DB.cleanUpDeletedSegmentGroups(db)
  Await.result(cleanUpSegmentGroups, 3 seconds)
  db.close()
  aggregatesDB.close()
  super.afterAll()
}
It's probably easier just to await your Future cleanup:
import scala.concurrent.Await
import scala.concurrent.duration._

override def afterAll() = {
  val future = DB.cleanUpDeletedSegments(db)
  Await.result(future, 2 minutes)
  aggregatesDB.close()
  super.afterAll()
}
You can set the timeout to whatever is reasonable.
Use the solution by Tyler. Your original approach didn't work because you read and wrote the non-volatile variable cleanUpsInProgress from multiple threads.
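For completeness, a sketch of how the busy-wait variant could be made correct with @volatile, though the Await-based version above remains the cleaner fix (DB, db and the other names are from the question):
import scala.concurrent.ExecutionContext.Implicits.global

override def afterAll(): Unit = {
  // @volatile makes the write from the callback thread visible to the spinning main thread.
  @volatile var cleanUpsInProgress = true
  DB.cleanUpDeletedSegments(db).onComplete { _ =>
    cleanUpsInProgress = false
  }
  while (cleanUpsInProgress) {} // busy-wait; Await.result is the nicer alternative
  db.close()
  aggregatesDB.close()
  super.afterAll()
}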
(I've not included the imports so as not to clutter this question)
(This is the simplest possible Scala app, created using the scala-minimal template on Typesafe Activator.)
I'm trying to run a query against an Elasticsearch Server.
I've run the same code on sbt console and I can see the results alright.
However, when I run the following code, I see "END" (the code after the callbacks) being printed, but neither the Success callback nor the Failure callback gets run.
I'm a Scala noob, so maybe I'm doing something wrong here? The code compiles (just to let you know, all the imports are there).
object Hello {
  def main(args: Array[String]): Unit = {
    val client = ElasticClient.remote("vm-3bsa", 9300)
    val res: Future[SearchResponse] = client.execute { search in "vulnerabilities/3bsa" query "css" }
    res onComplete {
      case Success(s) => println(s)
      case Failure(t) => println("An error has occured: " + t)
    }
    println("END")
    //EDIT start
    Await.result(res, 10.seconds)
    //EDIT end
  }
}
FINAL EDIT
Instead of using onComplete, it works if I print the result of the call to Await.result:
val await=Await.result(res,10.seconds)
println(await)
// results shown
The main thread will register your onComplete callback, execute println("END"), and then exit; this terminates the program, so you never see your onComplete callback run.
You can use Await.result(future, timeout) to block the main thread and keep it alive until the answer arrives. In a server context that would be a big no-no, but in a small app like this, blocking one thread is not a problem.
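A sketch of the blocking variant, keeping the question's res and timeout: the Success/Failure handling moves from the callback onto the main thread, so it is guaranteed to run before main returns.
import scala.concurrent.Await
import scala.concurrent.duration._
import scala.util.{Failure, Success, Try}

// Block until the response arrives (or the timeout expires), then handle both outcomes inline.
Try(Await.result(res, 10.seconds)) match {
  case Success(s) => println(s)
  case Failure(t) => println("An error has occurred: " + t)
}
println("END")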