I am trying to force my Redis client to time out for testing purposes, but I am failing to achieve this. I specify a timeout of 2 ms in my config, and the set operation I perform takes more than 2 ms, so why does it not time out? Are these settings soft settings rather than a hard enforcement? I am using Jedis 2.6 and Scala 2.10 with Play 2.2.3.
import javax.inject.Singleton
import redis.clients.jedis.{JedisPool, JedisPoolConfig}
import scala.compat.Platform

@Singleton
class RedisClient extends Cache {
// timeout in milliseconds (Jedis uses it for both connect and socket read)
val TIMEOUT = 2
private val pool = new JedisPool(new JedisPoolConfig(), getStringFromConfig("redis.url"), getIntFromConfig("redis.port"), TIMEOUT);
def isOpen = pool.getNumActive()
def set(key: String, value: String) = {
isOpen match {
case -1 => throw new Exception("Redis server is not running")
case _ => {
val jedis = pool.getResource()
val before = Platform.currentTime
jedis.set(key, value)
println("TIME TAKEN " + (Platform.currentTime - before))
pool.returnResource(jedis)
}
}
}
}
Actually, you do not need to test it; you can find the answer in the Jedis sources. The timeout value is used:
as the Java socket connection timeout
as the SO_TIMEOUT value (more info about it here)
To achieve your goal in the test:
For a connection timeout, the Redis server should be heavily loaded during your test, so that it cannot accept the connection.
For a read timeout (the SO_TIMEOUT value), try using a proxy to degrade connection performance.
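You can also reproduce the connection timeout without loading the server by pointing the client at a non-routable address. A minimal sketch (the 10.255.255.1 address and the key are illustrative, assuming the Jedis 2.x constructor that takes a timeout in milliseconds):

import redis.clients.jedis.Jedis
import redis.clients.jedis.exceptions.JedisConnectionException

object TimeoutDemo extends App {
  // 10.255.255.1 is a non-routable address, so the TCP connect cannot
  // complete and the 2 ms connection timeout fires on the first command.
  val jedis = new Jedis("10.255.255.1", 6379, 2)
  try {
    jedis.set("key", "value") // Jedis connects lazily, so the timeout surfaces here
  } catch {
    case e: JedisConnectionException =>
      println(s"Timed out as expected: ${e.getMessage}")
  }
}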
In the current version of the Play Framework, there is no built-in way to have the WebSocket connection be persistent.
https://www.playframework.com/documentation/2.8.x/ScalaWebSockets#Keeping-a-WebSocket-Alive
I have the following piece of code, and I need this WebSocket connection to be persistent.
class ProvisioningActor(sink: ActorRef) extends Actor {
private[this] val source = Observable.interval(appConfig.pingInterval).map(elem => elem.toInt)
private[this] val ping = Consumer.foreach[Int](x => self ! x)
private[this] val task = source.consumeWith(ping).runToFuture
override def receive: Receive = {
case jsValue: JsValue =>
logger.debug(s"Received OCPPCallRequest: \n ${Json.prettyPrint(jsValue)}")
jsValue.validate[OCPPCallRequest].asEither match {
case Right(ocppCall) => handleOCPPCallRequest(ocppCall).materialize.map {
case Failure(fail) => sink ! JsError(s"${fail.getMessage}")
case Success(succ) => sink ! Json.toJson(succ)
}
case Left(errors) =>
logger.error(s"Errors occurred when validating OCPPCallRequest: \n $errors")
sink ! Json.toJson(s"error -> ${errors.head._2}") // TODO: Work on this issue here on how we want to propagate errors
}
case x: Int =>
logger.debug(s"Elem: $x")
handleHeartBeatRequest(2, "HeartbeatRequest").materialize.map {
case Failure(fail) => sink ! JsError(s"${fail.getMessage}")
case Success(succ) => sink ! Json.toJson(succ)
}
case msg: Any =>
logger.warn(s"Received unknown message ${msg.getClass.getTypeName} that cannot be handled, " +
s"eagerly closing websocket connection")
task.cancel()
self ! PoisonPill
}
}
It kind of works by sending a heartbeat message back to the client. My questions are:
Is this good enough for an implementation?
By default all WebSocket connections will be persistent, and this may not be desired, so this has to be done on a per-connection basis. Correct?
Is there any other way that is advisable?
We use Play Framework WebSockets for long-running sessions; a busy server supports more than 1000 concurrent WebSocket connections. A lack of ping-pong packets causes idle WebSocket connections to be terminated by intermediate firewalls, proxies, etc., and also by the Play Framework idle timeout itself, play.server.https.idleTimeout.
Since Play Framework v2.1 (now with v2.8) we have been using the SockJS protocol via play2-sockjs (https://github.com/fdimuccio/play2-sockjs), which uses an application-layer heartbeat:
https://github.com/fdimuccio/play2-sockjs/wiki/API-reference-for-0.5.x#configuring-sockjs-handler
package controllers
import scala.concurrent.duration._
import play.api.mvc._
import play.sockjs.api._
// mixin SockJSRouter trait with your controller
class SockJSController extends Controller with SockJSRouter {
// override this method to specify custom SockJSSettings
override protected val settings = SockJSSettings(websocket = false, heartbeat = 55 seconds)
// here goes the request handler
def sockjs = SockJS.accept[String, String] { request =>
...
}
}
We use a 20s heartbeat in production, which has proven very safe. Each connection has the same heartbeat setting, which works well for our use case.
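If you prefer plain Play WebSockets over SockJS, a comparable application-level heartbeat can be sketched with the Akka Streams keepAlive stage (the payload, interval, and wrapper name below are illustrative, not part of the original answer):

import scala.concurrent.duration._
import akka.stream.scaladsl.Flow
import play.api.libs.json.{JsString, JsValue}

// Wrap an existing message-handling flow so that, whenever nothing has been
// sent for 20 seconds, a heartbeat frame is injected toward the client.
def withKeepAlive(handler: Flow[JsValue, JsValue, _]): Flow[JsValue, JsValue, _] =
  handler.keepAlive(20.seconds, () => JsString("heartbeat"))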
This topic may be helpful: Play2.5 Java WebSockets
I want to implement a high-throughput server that accepts multiple clients. Every request should query a database, so I need some kind of async behavior.
I followed the ROUTER-to-REQ pattern from the documentation and combined it with Futures, ending up with this "architecture":
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, ExecutionContextExecutor, Future}
import scala.util.{Failure, Success}
import org.zeromq.ZMQ

trait ZmqProtocol extends Protocol {
private val pool = Executors.newCachedThreadPool()
private implicit val ec: ExecutionContextExecutor = ExecutionContext.fromExecutor(pool)
val context: ZMQ.Context = ZMQ.context(1)
val socket: ZMQ.Socket = context.socket(ZMQ.ROUTER)
socket.bind("tcp://*:5555")
override def receiveMessages(): String = {
while (true) {
val address = socket.recv(0)
val empty = socket.recv(0)
val request = socket.recv(0)
Future {
val message = new String(request)
getResponseFromDb(message)
} onComplete {
case Success(response) =>
// Send reply back to client
socket.send(address, ZMQ.SNDMORE)
socket.send("".getBytes, ZMQ.SNDMORE)
socket.send(response.getBytes(), 0)
case Failure(ex) => println(ex)
}
}
"DONE"
}
}
I understand this won't work because I'm sharing the socket across Futures, so I need a better model. I know ZeroMQ sockets are fast and creating several worker threads would be enough on the input side, but if the bottleneck is on the database side and I need to do other work while waiting for the DB, I presume all my threads would soon be exhausted.
Would it be too much overhead to create a new socket bound to the ROUTER in every Future, or is there some better solution?
Also, for Scala developers: is there a way to force onComplete to be executed on the main thread (I suppose that would solve the issue)? Thanks!
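One common way to keep the socket single-threaded (a sketch of the idea, reusing socket, getResponseFromDb, and the implicit execution context from the snippet above) is to have the futures hand their results back to the socket's own loop through a concurrent queue, so that only the loop thread ever touches the ROUTER socket:

import java.util.concurrent.ConcurrentLinkedQueue

case class Reply(address: Array[Byte], payload: String)
private val replies = new ConcurrentLinkedQueue[Reply]()

def loop(): Unit = while (true) {
  // Non-blocking receive, so the loop can also flush finished replies.
  val address = socket.recv(ZMQ.DONTWAIT)
  if (address != null) {
    socket.recv(0) // empty delimiter frame
    val request = new String(socket.recv(0))
    Future(getResponseFromDb(request)).onComplete {
      case Success(response) => replies.add(Reply(address, response))
      case Failure(ex)       => replies.add(Reply(address, ex.getMessage))
    }
  }
  // Only this thread uses the socket, so sending here is thread-safe.
  var reply = replies.poll()
  while (reply != null) {
    socket.send(reply.address, ZMQ.SNDMORE)
    socket.send("".getBytes, ZMQ.SNDMORE)
    socket.send(reply.payload.getBytes, 0)
    reply = replies.poll()
  }
}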
I have an HTTP Connection Pool that hangs after a couple of hours of running:
private def createHttpPool(host: String): SourceQueue[(HttpRequest, Promise[HttpResponse])] = {
val pool = Http().cachedHostConnectionPoolHttps[Promise[HttpResponse]](host)
Source.queue[(HttpRequest, Promise[HttpResponse])](config.poolBuffer, OverflowStrategy.dropNew)
.via(pool).toMat(Sink.foreach {
case ((Success(res), p)) => p.success(res)
case ((Failure(e), p)) => p.failure(e)
})(Keep.left).run
}
I enqueue items with:
private def enqueue(uri: Uri): Future[HttpResponse] = {
val promise = Promise[HttpResponse]
val request = HttpRequest(uri = uri) -> promise
queue.offer(request).flatMap {
case Enqueued => promise.future
case _ => Future.failed(ConnectionPoolDroppedRequest)
}
}
And resolve the response like this:
private def request(uri: Uri): Future[HttpResponse] = {
def retry = {
Thread.sleep(config.dispatcherRetryInterval)
logger.info(s"retrying")
request(uri)
}
logger.info("req-start")
for {
response <- enqueue(uri)
_ = logger.info("req-end")
finalResponse <- response.status match {
case TooManyRequests => retry
case OK => Future.successful(response)
case _ => response.entity.toStrict(10.seconds).map(s => throw Error(s.toString, uri.toString))
}
} yield finalResponse
}
The result of this function is then always transformed if the Future is successful:
def get(uri: Uri): Future[Try[JValue]] = {
for {
response <- request(uri)
json <- Unmarshal(response.entity).to[Try[JValue]]
} yield json
}
Everything works fine for a while, and then all I see in the logs is req-start with no req-end.
My akka configuration is like this:
akka {
actor.deployment.default {
dispatcher = "my-dispatcher"
}
}
my-dispatcher {
type = Dispatcher
executor = "fork-join-executor"
fork-join-executor {
parallelism-min = 256
parallelism-factor = 128.0
parallelism-max = 1024
}
}
akka.http {
host-connection-pool {
max-connections = 512
max-retries = 5
max-open-requests = 16384
pipelining-limit = 1
}
}
I'm not sure whether this is a configuration problem or a code problem. I have set my parallelism and connection numbers so high because otherwise I get a very poor req/s rate (I want to issue requests as fast as possible; I have other rate-limiting code to protect the server).
You are not consuming the entity of the responses you get back from the server. Citing the docs below:
Consuming (or discarding) the Entity of a request is mandatory! If accidentally left neither consumed or discarded Akka HTTP will assume the incoming data should remain back-pressured, and will stall the incoming data via TCP back-pressure mechanisms. A client should consume the Entity regardless of the status of the HttpResponse.
The entity comes in the form of a Source[ByteString, _] which needs to be run to avoid resource starvation.
If you don't need to read the entity, the simplest way to consume the entity bytes is to discard them, by using
res.discardEntityBytes()
(you can attach a callback by adding, e.g., .future.map(...)).
This page in the docs describes all the alternatives to this, including how to read the bytes if needed.
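Applied to the code in the question, the 429 branch of the retry logic also produces a response whose entity is never read, so it should be discarded before retrying. A sketch of the adjusted branch inside the request method:

finalResponse <- response.status match {
  case TooManyRequests =>
    // This response body is never read; discard it so the pool
    // slot is freed instead of staying back-pressured.
    response.discardEntityBytes()
    retry
  case OK => Future.successful(response)
  case _  => response.entity.toStrict(10.seconds).map(s => throw Error(s.toString, uri.toString))
}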
--- EDIT
After more code/info was provided, it is clear that the resource consumption is not the problem. There is another big red flag in this implementation, namely the Thread.sleep in the retry method.
This is a blocking call that is very likely to starve the threading infrastructure of your underlying actor system.
A full blown explanation of why this is dangerous was provided in the docs.
Try changing that and using akka.pattern.after (docs). Example below:
def retry = akka.pattern.after(200 millis, using = system.scheduler)(request(uri))
I post some data to the server using the following code:
def post(endpoint: String, entity: Strict) = {
Http().singleRequest(HttpRequest(uri = Notifier.notificationUrl + endpoint, method = HttpMethods.POST,
entity = entity)) onComplete {
case Success(response) => response match {
case HttpResponse(StatusCodes.OK, _, _, _) =>
log.info("communicated successfully with Server")
}
case Failure(response) =>
log.error("communicated failed with Server: {}", response)
}
}
This is called every 10 seconds, when the Notifier actor receives a message, as follows:
case ecMonitorInformation: ECMonitorInformation =>
post("monitor", httpEntityFromJson(ecMonitorInformation.toJson))
Problem?
Initially I see around 5 requests going to the server, but then it hangs up: I do not see any logging, and the server does not receive any data. After a while, on the client side, I see the following:
ERROR c.s.e.notification.Notifier - communicated failed with Server: java.lang.RuntimeException: Exceeded configured max-open-requests value of [32]
What is going on? How do I fix this issue?
I went through the docs and tried the following
val connectionFlow: Flow[HttpRequest, HttpResponse,
Future[Http.OutgoingConnection]] =
Http().outgoingConnection(host = "localhost", port = 8080)
and then
def httpPost(uri: String, httpEntity:Strict) {
val responseFuture: Future[HttpResponse] =
Source.single(HttpRequest(uri = "/monitor", method = HttpMethods.POST, entity=httpEntity))
.via(connectionFlow)
.runWith(Sink.head)
responseFuture onComplete {
case Success(response) => log.info("Communicated with Server: {}", response)
case Failure(failure) => log.error("Communication failed with Server: {}", failure)
  }
}
and this worked for me
You can also overcome this error by increasing the max-open-requests property of Akka HTTP, which is 32 by default.
The property to change will be:
akka.http.host-connection-pool.max-open-requests = 64
The only caveat is that this will still fail when the client has more open requests than the new value of that parameter; in this example, if the open requests exceed 64, you will get the same error.
If you are going to be repeatedly calling your method, you might want to consider using one of the connection pool based client methods as described here:
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/scala/http/client-side/index.html
You can also set the connection pool settings in the akka-http client configuration:
http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/scala/http/configuration.html#akka-http-core
Search for host-connection-pool.
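For example, a cached host connection pool keeps connections open and queues requests internally. A minimal sketch (the host, port, and the Int correlation key are illustrative):

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpRequest
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}
import scala.util.{Failure, Success}

implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()

// Requests travel through the pool paired with a correlation key (an Int here),
// because responses can complete in a different order than they were submitted.
val poolFlow = Http().cachedHostConnectionPool[Int](host = "localhost", port = 8080)

Source(List(HttpRequest(uri = "/monitor") -> 1, HttpRequest(uri = "/monitor") -> 2))
  .via(poolFlow)
  .runWith(Sink.foreach {
    case (Success(response), id) =>
      response.discardEntityBytes() // consume or discard every response entity
      println(s"request $id -> ${response.status}")
    case (Failure(ex), id) =>
      println(s"request $id failed: $ex")
  })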
You could use Source.queue instead of Source.single to provide buffering and overflow strategy. See more details at https://stackoverflow.com/a/35115314/1699837
I have a problem with futures executing SQL queries.
While the database connection is good, futures asynchronously execute SQL queries within sessions from the connection pool. Typesafe Slick helps me get sessions from the pool.
When the database connection breaks down, newly arriving futures can't execute their queries and just wait; I don't see any onComplete callbacks.
When the database connection is good again, all the previous futures are still waiting; only new futures created after the reconnect can do their work.
Please advise how to tell the already started, waiting futures about the DB reconnection so they can continue their work, or at least fail with an onComplete Failure callback.
My configuration for c3p0 ComboPooledDataSource:
import com.mchange.v2.c3p0.ComboPooledDataSource

val ds = new ComboPooledDataSource
ds.setDriverClass("oracle.jdbc.OracleDriver")
ds.setJdbcUrl(jdbcUrl)
ds.setInitialPoolSize(20)
ds.setMinPoolSize(1)
ds.setMaxPoolSize(40)
ds.setAcquireIncrement(5)
ds.setMaxIdleTime(3600)
//Connection testing
ds.setTestConnectionOnCheckout(false)
ds.setTestConnectionOnCheckin(false)
ds.setIdleConnectionTestPeriod(10)
//Connection recovery
ds.setBreakAfterAcquireFailure(false)
ds.setAcquireRetryAttempts(30)
ds.setAcquireRetryDelay(10000)
val databasePool = Database.forDataSource(ds)
// Typesafe Slick session handling
def withClient[T](body: Session => T) = {
databasePool.withSession(body)
}
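For illustration, a caller would use withClient like this (the query is hypothetical, assuming Slick 2.x's plain SQL API against the Oracle connection configured above):

import scala.slick.jdbc.{StaticQuery => Q}

// The session is checked out of the pool on entry and
// returned when the block exits.
val one: Int = withClient { implicit session =>
  Q.queryNA[Int]("select 1 from dual").first
}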
Futures are created here:
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}
import scala.util.{Failure, Success}

class RewardActivatorHelper {
private implicit val ec = new ExecutionContext {
val threadPool = Executors.newFixedThreadPool(1000)
def execute(runnable: Runnable) {threadPool.submit(runnable)}
def reportFailure(t: Throwable) {throw t}
}
case class FutureResult(spStart:Long, spFinish:Long)
def activateReward(msg:Msg, time:Long):Unit = {
msg.users.foreach {
user =>
val future:Future[FutureResult] = Future {
val (spStart, spFinish) = OracleClient.rewardActivate(user)
FutureResult(spStart, spFinish)
}
future.onComplete {
case Success(futureResult:FutureResult) =>
//do something
case Failure(e:Throwable) => log.error(e.getMessage())
}
}
  }
}
The problem was solved by setTestConnectionOnCheckout(true).
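That is, in the c3p0 configuration above, flip the checkout test flag:

// Validate each connection as it is checked out of the pool, so a broken
// connection is detected and replaced before a future tries to use it
// (at the cost of one test query per checkout).
ds.setTestConnectionOnCheckout(true)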