Play WS request timeout - Scala

I am receiving an error:
java.util.concurrent.TimeoutException: Read timeout to {url} after 120000 ms
In my settings I have set:
play.ws.timeout.request = 5 minutes
play.ws.timeout.idle = 5 minutes
But these still aren't working.

Rather than relying on the global play.ws.timeout settings, I set the timeout per request. In my application.conf I define a timeout like this:
someRequestTimeout = "45 seconds"
and then in my code I'd write
wsClient
  .url("http://example.com")
  .withRequestTimeout(config.getDuration("someRequestTimeout"))
  .post(somePayload)
  .map { response => ??? }
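If config here is Play's play.api.Configuration (Play 2.6+), it can read the "45 seconds" string directly as a Scala duration, which is the type withRequestTimeout expects. A minimal sketch of the whole wiring — the class name, endpoint, and payload type are illustrative, not from the original:

import javax.inject.Inject
import scala.concurrent.{ExecutionContext, Future}
import scala.concurrent.duration.FiniteDuration
import play.api.Configuration
import play.api.libs.json.JsValue
import play.api.libs.ws.{WSClient, WSResponse}
import play.api.libs.ws.JsonBodyWritables._

class ReportClient @Inject()(wsClient: WSClient, config: Configuration)(implicit ec: ExecutionContext) {

  // Reads "someRequestTimeout = 45 seconds" from application.conf as a FiniteDuration
  private val requestTimeout: FiniteDuration = config.get[FiniteDuration]("someRequestTimeout")

  def send(somePayload: JsValue): Future[WSResponse] =
    wsClient
      .url("http://example.com")          // placeholder URL
      .withRequestTimeout(requestTimeout) // overrides play.ws.timeout.request for this call only
      .post(somePayload)
}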

Related

Gatling request is not getting executed

val makeReport =
  feed(randomNumberFeeder)
    .exec(session => {
      http("post_report")
        .post("/api/path/reports")
        .body(StringBody(JsonFactory.report(id = 1, number = session("number").asOption[String].get))).asJson
        .check(jsonPath("$.reportId").saveAs("reportId"))
      session
    })

val scn = scenario("ReportCreation").exec(makeReport)
But when I run the Gatling test, the request is not sent and the whole HTTP block is ignored. Where am I going wrong?
I end up with the following exception:
Exception in thread "main" java.lang.UnsupportedOperationException: There were no requests sent during the simulation, reports won't be generated
at io.gatling.charts.report.ReportsGenerator.generateFor(ReportsGenerator.scala:50)
at io.gatling.app.RunResultProcessor.generateReports(RunResultProcessor.scala:65)
at io.gatling.app.RunResultProcessor.processRunResult(RunResultProcessor.scala:40)
at io.gatling.app.Gatling$.start(Gatling.scala:89)
at io.gatling.app.Gatling$.fromArgs(Gatling.scala:45)
at io.gatling.app.Gatling$.main(Gatling.scala:37)
at io.gatling.app.Gatling.main(Gatling.scala)
I fixed it myself. The problem was that the http(...) request was only built inside the session => ... function and the unchanged session was returned, so Gatling never chained the request as an action. Passing the request directly to exec fixes it:
val makeReport =
  feed(randomNumberFeeder)
    .exec(
      http("post_report")
        .post("/api/path/reports")
        .body(StringBody(session => JsonFactory.report(id = 1, number = session("number").as[String]))).asJson
        .check(jsonPath("$.reportId").saveAs("reportId"))
    )
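For completeness, here is a minimal sketch of how the feeder and simulation wiring could look around that fix. The feeder contents, base URL, and injection profile are made-up examples, JsonFactory is the helper from the question, and the Gatling 3 Scala DSL is assumed:

import scala.util.Random

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class ReportSimulation extends Simulation {

  // Hypothetical feeder: supplies a fresh "number" attribute to each virtual user
  val randomNumberFeeder = Iterator.continually(Map("number" -> Random.nextInt(100000).toString))

  // Placeholder base URL
  val httpProtocol = http.baseUrl("http://localhost:8080")

  val makeReport =
    feed(randomNumberFeeder)
      .exec(
        http("post_report")
          .post("/api/path/reports")
          .body(StringBody(session => JsonFactory.report(id = 1, number = session("number").as[String]))).asJson
          .check(jsonPath("$.reportId").saveAs("reportId"))
      )

  val scn = scenario("ReportCreation").exec(makeReport)

  // Example injection profile: a single user
  setUp(scn.inject(atOnceUsers(1))).protocols(httpProtocol)
}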

akka.http.scaladsl.model.ParsingException: Unexpected end of multipart entity while uploading a large file to S3 using Akka HTTP

I am trying to upload a large file (90 MB for now) to S3 using Akka HTTP with the Alpakka S3 connector. It works fine for small files (25 MB), but when I try to upload a large file (90 MB), I get the following error:
akka.http.scaladsl.model.ParsingException: Unexpected end of multipart entity
at akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$anonfun$1.applyOrElse(MultipartUnmarshallers.scala:108)
at akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$anonfun$1.applyOrElse(MultipartUnmarshallers.scala:103)
at akka.stream.impl.fusing.Collect$$anon$6.$anonfun$wrappedPf$1(Ops.scala:227)
at akka.stream.impl.fusing.SupervisedGraphStageLogic.withSupervision(Ops.scala:186)
at akka.stream.impl.fusing.Collect$$anon$6.onPush(Ops.scala:229)
at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:523)
at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:510)
at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:376)
at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:606)
at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:485)
at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:581)
at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:749)
at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$shortCircuitBatch(ActorGraphInterpreter.scala:739)
at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:765)
at akka.actor.Actor.aroundReceive(Actor.scala:539)
at akka.actor.Actor.aroundReceive$(Actor.scala:537)
at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:671)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:614)
at akka.actor.ActorCell.invoke(ActorCell.scala:583)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:268)
at akka.dispatch.Mailbox.run(Mailbox.scala:229)
at akka.dispatch.Mailbox.exec(Mailbox.scala:241)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Although I get the success message at the end, the file is not uploaded completely; only about 45-50 MB of it is uploaded.
I am using the code below:
S3Utility.scala
class S3Utility(implicit as: ActorSystem, m: Materializer) {
  private val bucketName = "test"

  def sink(fileInfo: FileInfo): Sink[ByteString, Future[MultipartUploadResult]] = {
    val fileName = fileInfo.fileName
    S3.multipartUpload(bucketName, fileName)
  }
}
Routes:
def uploadLargeFile: Route =
  post {
    path("import" / "file") {
      extractMaterializer { implicit materializer =>
        withoutSizeLimit {
          fileUpload("file") {
            case (metadata, byteSource) =>
              logger.info(s"Request received to import large file: ${metadata.fileName}")
              val uploadFuture = byteSource.runWith(s3Utility.sink(metadata))
              onComplete(uploadFuture) {
                case Success(result) =>
                  logger.info(s"Successfully uploaded file")
                  complete(StatusCodes.OK)
                case Failure(ex) =>
                  println(ex, "Error in uploading file")
                  complete(StatusCodes.FailedDependency, ex.getMessage)
              }
          }
        }
      }
    }
  }
Any help would be appreciated. Thanks.
Strategy 1
You can break the file into smaller chunks and retry; here is sample code using the AWS SDK for Java:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        // placeholder endpoint and signing region – substitute your own
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("some-kind-of-endpoint", "us-east-1"))
        .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("user", "pass")))
        .disableChunkedEncoding()
        .withPathStyleAccessEnabled(true)
        .build();

// Create a list of PartETag objects. You get one of these for each part upload.
List<PartETag> partETags = new ArrayList<PartETag>();

// Step 1: Initialize.
InitiateMultipartUploadRequest initRequest =
        new InitiateMultipartUploadRequest("bucket", "key");
InitiateMultipartUploadResult initResponse =
        s3Client.initiateMultipartUpload(initRequest);

File file = new File("filepath");
long contentLength = file.length();
long partSize = 5242880; // Set part size to 5 MB.

try {
    // Step 2: Upload parts.
    long filePosition = 0;
    for (int i = 1; filePosition < contentLength; i++) {
        // Last part can be less than 5 MB. Adjust part size.
        partSize = Math.min(partSize, (contentLength - filePosition));

        // Create a request to upload a part.
        UploadPartRequest uploadRequest = new UploadPartRequest()
                .withBucketName("bucket").withKey("key")
                .withUploadId(initResponse.getUploadId()).withPartNumber(i)
                .withFileOffset(filePosition)
                .withFile(file)
                .withPartSize(partSize);

        // Upload the part and add its ETag to our list.
        partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());
        filePosition += partSize;
    }

    // Step 3: Complete.
    CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(
            "bucket", "key", initResponse.getUploadId(), partETags);
    s3Client.completeMultipartUpload(compRequest);
} catch (Exception e) {
    // Abort the upload so partial parts don't linger in the bucket
    s3Client.abortMultipartUpload(new AbortMultipartUploadRequest(
            "bucket", "key", initResponse.getUploadId()));
}
Strategy 2
Increase the idle-timeout of the Akka HTTP server (just set it to infinite), like the following:
akka.http.server.idle-timeout=infinite
This increases the period for which the server allows a connection to stay idle. By default it is 60 seconds; if the upload stalls for longer than that, the server closes the connection and you see the "Unexpected end of multipart entity" error.
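If you would rather not hard-code this in application.conf, the same setting can be layered onto the ActorSystem configuration in code. A minimal sketch, with the system name chosen only for illustration:

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// Overrides the server idle-timeout on top of whatever application.conf provides;
// a large finite value (e.g. "10 minutes") works as well as "infinite".
val config = ConfigFactory
  .parseString("akka.http.server.idle-timeout = infinite")
  .withFallback(ConfigFactory.load())

implicit val system: ActorSystem = ActorSystem("upload-service", config)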

http4s client returns partial payload

I'm using the AsyncHttpClient in http4s-0.19.0-M2 to make a client call:
for {
  resp <- http.expectOr[String](GET(url)) { error =>
            error.as[String].map(body => throw new Exception(...))
          }
  _ <- doSomethingWithResponse(resp)
} yield ()
Occasionally the remote end times out, and I see the following in the log:
java.util.concurrent.TimeoutException: Request timeout to remote.server.com after 60000 ms
at org.asynchttpclient.netty.timeout.TimeoutTimerTask.expire(TimeoutTimerTask.java:43)
at org.asynchttpclient.netty.timeout.RequestTimeoutTimerTask.run(RequestTimeoutTimerTask.java:50)
at shade.cda.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:670)
at shade.cda.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:745)
at shade.cda.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:473)
at shade.cda.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
However, it looks like doSomethingWithResponse() is still invoked, but with a partial resp string. Is there a way to change this behavior so that the http.expectOr call fails if it can't retrieve the entire payload?

Akka Streams Hikari Connection Pool for MySQL Streaming

I am streaming data from mysql using Slick 3 and Akka Streams.
This is how I build my source
import slick.jdbc.MySQLProfile.api._

val enableJdbcStreaming: (java.sql.Statement) => Unit = { statement =>
  if (statement.isWrapperFor(classOf[com.mysql.cj.jdbc.StatementImpl])) {
    statement.unwrap(classOf[com.mysql.cj.jdbc.StatementImpl]).enableStreamingResults()
  }
}

val query = Tables.Foo.filter(r => r.isActive === true)
  .map(r => r.id).result.withStatementParameters(statementInit = enableJdbcStreaming)

Source.fromPublisher(db.stream(query))
My application runs for about 20 minutes and then shuts down with the following error:
[error] Exception in thread "abhipool network timeout executor" java.lang.NullPointerException
[info] 15:31:46 INFO [HikariPool] - abhipool - Close initiated...
[error] at com.mysql.cj.mysqla.io.MysqlaProtocol.setSocketTimeout(MysqlaProtocol.java:1397)
[error] at com.mysql.cj.mysqla.MysqlaSession$1.run(MysqlaSession.java:401)
[error] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[error] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[error] at java.lang.Thread.run(Thread.java:745)
I have a feeling that because my query runs for a very long time, some kind of timeout occurs and initiates this shutdown.
My connection configuration:
mysql {
  profile = "slick.jdbc.MySQLProfile$"
  dataSourceClass = "slick.jdbc.DatabaseUrlDataSource"
  properties {
    driver = "com.mysql.cj.jdbc.Driver"
    url = "jdbc:mysql://foo:3306/bar?useLegacyDatetimeCode=false&serverTimezone=America/Chicago"
    user = "foo"
    password = "bar"
  }
  connectionTimeout = 0
  idleTimeout = 0
  maxLifetime = 0
  maxConnections = 40
  minConnections = 10
  poolName = "abhipool"
  numThreads = 10
}
Dependencies
"com.typesafe.slick" %% "slick" % "3.2.1",
"com.typesafe.slick" %% "slick-hikaricp" % "3.2.1",
"mysql" % "mysql-connector-java" % "6.0.6",
How can I configure my application's database connections so that even if the streaming application streams data for several days... it keeps running?
There is an extremely lengthy conversation about this same issue here, but it doesn't tell me how to really fix it. This issue makes it practically impossible to write long-running streaming tasks that use MySQL as a source.
You can configure the MySQL driver by adding parameters to the URL:
url = "jdbc:mysql://foo:3306/bar?useLegacyDatetimeCode=false&serverTimezone=America/Chicago&socketTimeout=30000"
I put 30000 for the sake of the example; use the value that fits your needs.

Session Configuration/Timeout with Slick 3

Is there a way to handle sessions explicitly in Slick 3? I currently have some code that looks like this:
def findUserByEmail(email: String): Option[User] = {
  val users = TableQuery[Users]
  val action = users.filter(_.email === email).result.headOption
  val result = db.run(action.transactionally)
  Await.result(result, Duration.Inf)
}
It works fine the first few times I run it, but then I start running into issues where it looks like connections/sessions are being left open (see below). This code runs inside AWS Lambda functions, and I'm thinking I need to handle sessions more explicitly. How would I do this in Slick 3?
"errorMessage": "Timeout after 5000ms of waiting for a connection.",
"errorType": "java.sql.SQLTimeoutException",
"stackTrace": [
"com.zaxxer.hikari.pool.BaseHikariPool.getConnection(BaseHikariPool.java:233)",
"com.zaxxer.hikari.pool.BaseHikariPool.getConnection(BaseHikariPool.java:183)",
"com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:93)",
"slick.jdbc.hikaricp.HikariCPJdbcDataSource.createConnection(HikariCPJdbcDataSource.scala:18)",
"slick.jdbc.JdbcBackend$BaseSession.<init>(JdbcBackend.scala:424)",
"slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:47)",
"slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:38)",
"slick.basic.BasicBackend$DatabaseDef.acquireSession(BasicBackend.scala:218)",
"slick.basic.BasicBackend$DatabaseDef.acquireSession$(BasicBackend.scala:217)",
"slick.jdbc.JdbcBackend$DatabaseDef.acquireSession(JdbcBackend.scala:38)",
"slick.basic.BasicBackend$DatabaseDef$$anon$2.run(BasicBackend.scala:239)",
"java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)",
"java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)",
"java.lang.Thread.run(Thread.java:745)"
],
"cause": {
"errorMessage": "FATAL: remaining connection slots are reserved for non-replication superuser connections",
"errorType": "org.postgresql.util.PSQLException",
"stackTrace": [
"org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2455)",
"org.postgresql.core.v3.QueryExecutorImpl.readStartupMessages(QueryExecutorImpl.java:2586)",
"org.postgresql.core.v3.QueryExecutorImpl.<init>(QueryExecutorImpl.java:113)",
"org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222)",
"org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:52)",
"org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:216)",
"org.postgresql.Driver.makeConnection(Driver.java:404)",
"org.postgresql.Driver.connect(Driver.java:272)",
You could try setting a query timeout, like this:
db.run(action.transactionally.withStatementParameters(statementInit = st => st.setQueryTimeout(100)))
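Dropped into the method from the question, that would look roughly like this (100 seconds is just an example value):

def findUserByEmail(email: String): Option[User] = {
  val users = TableQuery[Users]
  val action = users.filter(_.email === email).result.headOption
  // Fail the statement after 100 seconds instead of holding the connection indefinitely
  val withTimeout = action.transactionally.withStatementParameters(statementInit = _.setQueryTimeout(100))
  Await.result(db.run(withTimeout), Duration.Inf)
}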
You can also set different properties on the HikariCP connection pool, as below:
slick {
  // https://github.com/slick/slick/blob/master/slick-hikaricp/src/main/scala/slick/jdbc/hikaricp/HikariCPJdbcDataSource.scala
  dataSourceClass = "slick.jdbc.DriverDataSource"
  user = ${database.user}
  password = ${database.password}
  url = ${database.url}
  connectionPool = HikariCP
  maxConnections = 50
  numThreads = 10
  queueSize = 5000
  connectionInitSql = "SELECT 1;"
  connectionTestQuery = "SELECT 1;"
  registerMbeans = true
  properties = {
    driver = ${database.driver}
    url = ${database.url}
  }
}
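With a block like that in application.conf, the Database can then be built from it by path. A minimal sketch, assuming the Postgres profile implied by the stack trace:

import slick.jdbc.PostgresProfile.api._

// Builds the HikariCP-backed Database from the "slick" block above
val db = Database.forConfig("slick")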