I don't know how to reproduce my problem in a simple way.
I have an actor that executes an external command via the 'sys.process' package.
import java.io.File
import java.net.URL
import scala.sys.process._

object FileHelper {
  def downloadFile(url: String, filename: String): Either[String, Unit] = {
    println(s"MyThread: ${Thread.currentThread().getName}")
    util.Try {
      import scala.language.postfixOps
      new URL(url) #> new File(filename) !
    } match {
      case util.Failure(err)  => Left(s"Download error: $err")
      case util.Success(code) => if (code != 0) Left("Can't download file") else Right(())
    }
  }
}
So when I call downloadFile within an actor, the Try doesn't work!
router MyThread: app-akka.actor.default-dispatcher-3
router[ERROR] Exception in thread "Thread-10" java.io.FileNotFoundException: /home/alex/dumpss/456.tar.bz2 (No such file or directory)
router[ERROR] at java.io.FileOutputStream.open0(Native Method)
router[ERROR] at java.io.FileOutputStream.open(FileOutputStream.java:270)
router[ERROR] at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
router[ERROR] at scala.sys.process.ProcessBuilderImpl$FileOutput$$anonfun$$lessinit$greater$3.apply(ProcessBuilderImpl.scala:33)
router[ERROR] at scala.sys.process.ProcessBuilderImpl$FileOutput$$anonfun$$lessinit$greater$3.apply(ProcessBuilderImpl.scala:33)
router[ERROR] at scala.sys.process.ProcessBuilderImpl$OStreamBuilder$$anonfun$$lessinit$greater$4.apply(ProcessBuilderImpl.scala:38)
router[ERROR] at scala.sys.process.ProcessBuilderImpl$OStreamBuilder$$anonfun$$lessinit$greater$4.apply(ProcessBuilderImpl.scala:38)
router[ERROR] at scala.sys.process.ProcessBuilderImpl$ThreadBuilder$$anonfun$1.apply$mcV$sp(ProcessBuilderImpl.scala:58)
router[ERROR] at scala.sys.process.ProcessImpl$Spawn$$anon$1.run(ProcessImpl.scala:23)
As you can see, the external command was executed in thread 'Thread-10', but Try can only catch exceptions on 'app-akka.actor.default-dispatcher-3'.
With the Scala process API, URL downloading and file redirection are implemented with threads instead of real processes: https://github.com/scala/scala/blob/2.12.x/src/library/scala/sys/process/ProcessBuilderImpl.scala#L31-L64
So, when this line gets executed,
new URL(url) #> new File(filename) !
two more threads are spawned: one downloads the URL and writes the result to the pipe; the other reads from the pipe and writes whatever it reads to the file. The parent thread (in which the actor is running) waits for their exit values and returns one of them accordingly: https://github.com/scala/scala/blob/2.12.x/src/library/scala/sys/process/ProcessImpl.scala#L151
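This is why the Try in downloadFile never sees the FileNotFoundException: a Try only catches exceptions thrown on the thread that executes its body. A minimal illustration outside of sys.process (a sketch, not from the original code):

import scala.util.Try

// Starting a thread that throws does not make the surrounding Try fail:
// the exception is raised on the new thread, not on the one running the Try.
val result = Try {
  new Thread(() => throw new RuntimeException("boom")).start()
}

println(result) // prints Success(()); the "boom" stack trace is printed by the
                // other thread's default uncaught-exception handler, much like
                // the router[ERROR] lines above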
Unfortunately, the exit value for file redirection is always ignored, so you cannot tell whether the operation succeeded by checking the return code of the pipe: https://github.com/scala/scala/blob/2.12.x/src/library/scala/sys/process/ProcessBuilderImpl.scala#L39
Instead of using the Scala process API, you can do the work with the help of the commons-io library:
import java.io.FileOutputStream
import org.apache.commons.io.IOUtils
import scala.util.{Failure, Success, Try}

Try {
  IOUtils.copy(url.openStream, new FileOutputStream(file))
} match {
  case Success(_) => ...
  case Failure(ex) => ...
}
Related
I'm using the AsyncHttpClient in http4s-0.19.0-M2 to make a client-call:
for {
  resp <- http.expectOr[String](GET(url)) { error =>
            error.as[String].map(body => throw new Exception(...))
          }
  _ <- doSomethingWithResponse(resp)
} yield ()
Occasionally the remote end times out, and I see the following in the log:
java.util.concurrent.TimeoutException: Request timeout to remote.server.com after 60000 ms
at org.asynchttpclient.netty.timeout.TimeoutTimerTask.expire(TimeoutTimerTask.java:43)
at org.asynchttpclient.netty.timeout.RequestTimeoutTimerTask.run(RequestTimeoutTimerTask.java:50)
at shade.cda.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:670)
at shade.cda.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:745)
at shade.cda.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:473)
at shade.cda.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
However, it looks like doSomethingWithResponse() is still invoked, but with a partial resp string. Is there a way to change this behavior so that the http.expectOr call fails if it can't retrieve the entire payload?
I'm trying to limit the lifetime of an observable with a timeout:
import rx.lang.scala.{Observable, Observer, Subscription}
import scala.concurrent.duration._
import scala.language.postfixOps

def doLongOperation() = {
  Thread.sleep(duration) // duration is defined elsewhere
  "OK"
}

def firstStep = Observable.create(
  (observer: Observer[String]) => {
    observer.onNext(doLongOperation())
    observer.onCompleted()
    Subscription()
  }
)

firstStep
  .timeout(1 second)
  .subscribe(
    item => println(item),
    throwable => throw throwable,
    () => println("complete")
  )
I would like to distinguish between the following results:
1. Observable finished by timeout, no result obtained
2. Exception thrown during execution
3. Execution finished successfully, return value
I can handle cases 2 and 3 with no problem in the onNext and onError handlers, but how do I detect that the observable finished by timeout?
One more thing: I never get into the onComplete block, though there is a call to observer.onCompleted() in my code. Why?
If a timeout happens, the TimeoutException is emitted on the computation thread, where throw throwable ends up being ignored, and your main thread won't and can't see it. You can add toBlocking after the timeout so any exception ends up on the same thread:
firstStep
  .timeout(1 second)
  .toBlocking()
  .subscribe(
    item => println(item),
    throwable => println(throwable),
    () => println("complete")
  )
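To distinguish case 1 (timeout) from case 2 (an ordinary failure), one option is to match on the exception type in the error callback; a minimal sketch built on the code above (assuming the same RxScala API):

import java.util.concurrent.TimeoutException

firstStep
  .timeout(1 second)
  .toBlocking()
  .subscribe(
    item => println(s"result: $item"), // case 3: finished successfully
    {
      case _: TimeoutException => println("timed out, no result obtained") // case 1
      case other               => println(s"failed: $other")               // case 2
    },
    () => println("complete")
  )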
A TimeoutException does indeed get thrown. The problem was caused by using the wrong library: I had "com.netflix.rxjava" in my dependencies instead of "io.reactivex".
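For reference, the dependency swap described above would look roughly like this in build.sbt (artifact names and version numbers are assumptions, not taken from the thread):

// before: the retired Netflix coordinates
// libraryDependencies += "com.netflix.rxjava" % "rxjava-scala" % "0.20.7"

// after: the io.reactivex coordinates
libraryDependencies += "io.reactivex" %% "rxscala" % "0.26.5"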
I am working with an asyncio.Protocol server where the purpose is for the client to call the server, but wait until the server has responded and data is returned before stopping the client loop.
Based on the asyncio docs' Echo Client and Server here: https://docs.python.org/3/library/asyncio-protocol.html#protocol-example-tcp-echo-server-and-client , transport.write(...) returns immediately when called.
From experience, calling loop.run_until_complete(coroutine) fails with RuntimeError: Event loop is running.
Running asyncio.sleep(n) in the data_received() method of the server doesn't have any effect either.
yield from asyncio.sleep(n) and yield from asyncio.async(asyncio.sleep(n)) in data_received() both hang the server.
My question is, how do I get my client to wait for the server to write a response before giving back control?
My advice is to never use the transport/protocol pair directly.
asyncio has the Streams API for high-level programming.
The client code can look like this:
@asyncio.coroutine
def communicate():
    reader, writer = yield from asyncio.open_connection(HOST, PORT)
    writer.write(b'data')
    yield from writer.drain()
    answer = yield from reader.read()
    # process answer, maybe send new data back to server and wait for answer again
    writer.close()
You don't have to change the client code.
echo-client.py
#!/usr/bin/env python3.4
import asyncio

class EchoClient(asyncio.Protocol):
    message = 'Client Echo'

    def connection_made(self, transport):
        transport.write(self.message.encode())
        print('data sent: {}'.format(self.message))

    def data_received(self, data):
        print('data received: {}'.format(data.decode()))

    def connection_lost(self, exc):
        print('server closed the connection')
        asyncio.get_event_loop().stop()

loop = asyncio.get_event_loop()
coro = loop.create_connection(EchoClient, '127.0.0.1', 8888)
loop.run_until_complete(coro)
loop.run_forever()
loop.close()
The trick is to place your code (including self.transport methods) into a coroutine and use the wait_for() method, with the yield from statement in front of the statements that require their values returned, or ones which take a while to complete:
echo-server.py
#!/usr/bin/env python3.4
import asyncio

class EchoServer(asyncio.Protocol):
    def connection_made(self, transport):
        peername = transport.get_extra_info('peername')
        print('connection from {}'.format(peername))
        self.transport = transport

    def data_received(self, data):
        print('data received: {}'.format(data.decode()))
        fut = asyncio.async(self.sleeper())
        result = asyncio.wait_for(fut, 60)

    @asyncio.coroutine
    def sleeper(self):
        yield from asyncio.sleep(2)
        self.transport.write("Hello World".encode())
        self.transport.close()

loop = asyncio.get_event_loop()
coro = loop.create_server(EchoServer, '127.0.0.1', 8888)
server = loop.run_until_complete(coro)
print('serving on {}'.format(server.sockets[0].getsockname()))

try:
    loop.run_forever()
except KeyboardInterrupt:
    print("exit")
finally:
    server.close()
    loop.close()
Run echo-server.py and then echo-client.py; the client will wait 2 seconds, as determined by asyncio.sleep, and then stop.
I have been working on this issue for quite a while now and I cannot find a solution...
A web app built with Play Framework 2.2.1, using an H2 db (for dev) and a simple Model package.
I am trying to implement a REST JSON endpoint and the code works... but only once per server instance.
def createOtherModel() = Action(parse.json) { request =>
  request.body \ "name" match {
    case _: JsUndefined =>
      BadRequest(Json.obj("error" -> true,
        "message" -> "Could not match name =(")).as("application/json")
    case name: JsValue =>
      // check reconstructed from the "Name was not content" message below
      if (name.as[String] == "content") {
        request.body \ "value" match {
          case _: JsUndefined =>
            BadRequest(Json.obj("error" -> true,
              "message" -> "Could not match value =(")).as("application/json")
          case value: JsValue =>
            // this breaks the second time
            val session = ThinkingSession.dummy
            val json = Json.obj(
              "content" -> value,
              "thinkingSession" -> session.id
            )
            Ok(Json.obj("content" -> json)).as("application/json")
        }
      } else {
        BadRequest(Json.obj("error" -> true,
          "message" -> "Name was not content =(")).as("application/json")
      }
  }
}
So basically I read the JSON, echo the "value" value, create a model object and send its id.
The ThinkingSession.dummy function does this:
def all(): List[ThinkingSession] = {
  // Tried explicitly closing connection, no difference
  //val conn = DB.getConnection()
  //try {
  //  DB.withConnection { implicit conn =>
  //    SQL("select * from thinking_session").as(ThinkingSession.DBParser *)
  //  }
  //} finally {
  //  conn.close()
  //}
  DB.withConnection { implicit conn =>
    SQL("select * from thinking_session").as(ThinkingSession.DBParser *)
  }
}

def dummy: ThinkingSession = {
  all().head
}
So this should do a SELECT * FROM thinking_session, build a list of model objects from the result, and return the first element of the list.
This works fine the first time after server start, but the second time I get a
play.api.Application$$anon$1: Execution exception[[SQLException: Timed out waiting for a free available connection.]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10.jar:2.2.1]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2$$anonfun$applyOrElse$3.apply(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2$$anonfun$applyOrElse$3.apply(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
at scala.Option.map(Option.scala:145) [scala-library.jar:na]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2.applyOrElse(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
Caused by: java.sql.SQLException: Timed out waiting for a free available connection.
at com.jolbox.bonecp.DefaultConnectionStrategy.getConnectionInternal(DefaultConnectionStrategy.java:88) ~[bonecp.jar:na]
at com.jolbox.bonecp.AbstractConnectionStrategy.getConnection(AbstractConnectionStrategy.java:90) ~[bonecp.jar:na]
at com.jolbox.bonecp.BoneCP.getConnection(BoneCP.java:553) ~[bonecp.jar:na]
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:131) ~[bonecp.jar:na]
at play.api.db.DBApi$class.getConnection(DB.scala:67) ~[play-jdbc_2.10.jar:2.2.1]
at play.api.db.BoneCPApi.getConnection(DB.scala:276) ~[play-jdbc_2.10.jar:2.2.1]
My application.conf (db section)
db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:file:database/[my_db]"
db.default.logStatements=true
db.default.idleConnectionTestPeriod=5 minutes
db.default.connectionTestStatement="SELECT 1"
db.default.maxConnectionAge=0
db.default.connectionTimeout=10000
Initially the only thing set in my config was the connection, and the error already occurred; I added all the other settings while reading up on the issue on the web.
What is interesting is that when I use the H2 in-memory db, it works once after server start and then fails. When I use the H2 file-based db, it only works once, regardless of the server instance.
Can anyone give me some insight into this issue? I have found some material on BoneCP problems and tried upgrading to 0.8.0-RC1, but nothing changed... I am at a loss =(
Try setting a maxConnectionAge and an idle timeout.
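For example, in application.conf (a sketch; the values are placeholders, using BoneCP keys that Play 2.2 reads):
db.default.maxConnectionAge=30 minutes
db.default.idleMaxAge=10 minutes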
It turns out the error was somewhere else entirely... it was a good ol' stack overflow; I have not seen one in a long time. I tried down-voting my question, but that's not possible ^^
I'm trying to configure an Akka SSL connection to use my keystore and truststore files, and I want it to be able to find them on the classpath.
I tried to set application.conf to:
...
remote.netty.ssl = {
enable = on
key-store = "keystore"
key-store-password = "passwd"
trust-store = "truststore"
trust-store-password = "passwd"
protocol = "TLSv1"
random-number-generator = "AES128CounterSecureRNG"
enabled-algorithms = ["TLS_RSA_WITH_AES_128_CBC_SHA"]
}
...
This works fine if the keystore and truststore files are in the current directory. In my application these files get packaged into WAR and JAR archives, and because of that I'd like to read them from the classpath.
I tried to use getResource("keystore") in application.conf, as described here, without any luck: Config reads it literally as a string.
I also tried to parse the config from a String and force it to read the value:
val conf: Config = ConfigFactory parseString (s"""
...
"${getClass.getClassLoader.getResource("keystore").getPath}"
...""")
In this case it finds the proper path on the classpath, file://some_dir/target/scala-2.10/server_2.10-1.1-one-jar.jar!/main/server_2.10-1.1.jar!/keystore, which is exactly where it's located (inside the jar). However, the underlying Netty SSL transport can't open the file given this path, and I get:
Oct 03, 2013 1:02:48 PM org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink
WARNING: Failed to initialize an accepted socket.
45a13eb9-6cb1-46a7-a789-e48da9997f0fakka.remote.RemoteTransportException: Server SSL connection could not be established because key store could not be loaded
at akka.remote.netty.NettySSLSupport$.constructServerContext$1(NettySSLSupport.scala:113)
at akka.remote.netty.NettySSLSupport$.initializeServerSSL(NettySSLSupport.scala:130)
at akka.remote.netty.NettySSLSupport$.apply(NettySSLSupport.scala:27)
at akka.remote.netty.NettyRemoteTransport$PipelineFactory$.defaultStack(NettyRemoteSupport.scala:74)
at akka.remote.netty.NettyRemoteTransport$PipelineFactory$$anon$1.getPipeline(NettyRemoteSupport.scala:67)
at akka.remote.netty.NettyRemoteTransport$PipelineFactory$$anon$1.getPipeline(NettyRemoteSupport.scala:67)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.registerAcceptedChannel(NioServerSocketPipelineSink.java:277)
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:242)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.FileNotFoundException: file:/some_dir/server/target/scala-2.10/server_2.10-1.1-one-jar.jar!/main/server_2.10-1.1.jar!/keystore (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:97)
at akka.remote.netty.NettySSLSupport$.constructServerContext$1(NettySSLSupport.scala:118)
... 10 more
I wonder if there is any way to configure this in Akka without implementing a custom SSL transport. Maybe I should configure Netty in code?
Obviously I can hardcode the path or read it from an environment variable, but I would prefer a more flexible classpath solution.
I decided to look at akka.remote.netty.NettySSLSupport, at the code the exception is thrown from; here it is:
def initializeServerSSL(settings: NettySettings, log: LoggingAdapter): SslHandler = {
  log.debug("Server SSL is enabled, initialising ...")

  def constructServerContext(settings: NettySettings, log: LoggingAdapter, keyStorePath: String, keyStorePassword: String, protocol: String): Option[SSLContext] =
    try {
      val rng = initializeCustomSecureRandom(settings.SSLRandomNumberGenerator, settings.SSLRandomSource, log)
      val factory = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm)
      factory.init({
        val keyStore = KeyStore.getInstance(KeyStore.getDefaultType)
        val fin = new FileInputStream(keyStorePath)
        try keyStore.load(fin, keyStorePassword.toCharArray) finally fin.close()
        keyStore
      }, keyStorePassword.toCharArray)
      Option(SSLContext.getInstance(protocol)) map { ctx ⇒ ctx.init(factory.getKeyManagers, null, rng); ctx }
    } catch {
      case e: FileNotFoundException ⇒ throw new RemoteTransportException("Server SSL connection could not be established because key store could not be loaded", e)
      case e: IOException ⇒ throw new RemoteTransportException("Server SSL connection could not be established because: " + e.getMessage, e)
      case e: GeneralSecurityException ⇒ throw new RemoteTransportException("Server SSL connection could not be established because SSL context could not be constructed", e)
    }
It looks like it must be a plain file path (a String), because that's what FileInputStream takes.
Any suggestions would be welcome!
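Since the loader ultimately needs a real file path, one possible workaround (a sketch, not from the original thread; the resource names and config keys are assumptions to adjust to your setup) is to copy the classpath resources to temporary files at startup and point the SSL settings at those paths:

import java.nio.file.{Files, StandardCopyOption}
import com.typesafe.config.{Config, ConfigFactory, ConfigValueFactory}

// Extract a classpath resource (e.g. "keystore") into a temp file and return its absolute path.
def extractToTempFile(resourceName: String): String = {
  val in = getClass.getClassLoader.getResourceAsStream(resourceName)
  require(in != null, s"resource $resourceName not found on the classpath")
  val tmp = Files.createTempFile(resourceName, ".tmp")
  tmp.toFile.deleteOnExit()
  try Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING) finally in.close()
  tmp.toAbsolutePath.toString
}

// Override the file-based settings before creating the ActorSystem
// (adjust the key paths to match your remote.netty.ssl block).
val conf: Config = ConfigFactory.load()
  .withValue("akka.remote.netty.ssl.key-store",
    ConfigValueFactory.fromAnyRef(extractToTempFile("keystore")))
  .withValue("akka.remote.netty.ssl.trust-store",
    ConfigValueFactory.fromAnyRef(extractToTempFile("truststore")))
// val system = ActorSystem("MySystem", conf)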
I also got stuck on a similar issue and was getting similar errors. In my case I was trying to hit an HTTPS server with self-signed certificates using akka-http; with the following code I was able to get through:
val trustStoreConfig = TrustStoreConfig(None, Some("/etc/Project/keystore/my.cer")).withStoreType("PEM")
val trustManagerConfig = TrustManagerConfig().withTrustStoreConfigs(List(trustStoreConfig))
val badSslConfig = AkkaSSLConfig().mapSettings(s => s.withLoose(s.loose
    .withAcceptAnyCertificate(true)
    .withDisableHostnameVerification(true)
  ).withTrustManagerConfig(trustManagerConfig))
val badCtx = Http().createClientHttpsContext(badSslConfig)
Http().superPool[RequestTracker](badCtx)(httpMat)
At the time of writing this question there was no way to do it, AFAIK. I'm closing this question, but I welcome updates if newer versions provide such functionality or if there are other ways to do it.