http4s client returns partial payload - scala

I'm using the AsyncHttpClient in http4s-0.19.0-M2 to make a client call:
for {
  resp <- http.expectOr[String](GET(url)) { error =>
    error.as[String].map(body => throw new Exception(...))
  }
  _ <- doSomethingWithResponse(resp)
} yield ()
Occasionally the remote end times out, and I see the following in the log:
java.util.concurrent.TimeoutException: Request timeout to remote.server.com after 60000 ms
at org.asynchttpclient.netty.timeout.TimeoutTimerTask.expire(TimeoutTimerTask.java:43)
at org.asynchttpclient.netty.timeout.RequestTimeoutTimerTask.run(RequestTimeoutTimerTask.java:50)
at shade.cda.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:670)
at shade.cda.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:745)
at shade.cda.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:473)
at shade.cda.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
However, it looks like doSomethingWithResponse() is still invoked, but with a partial resp string. Is there a way to change this behavior so that the http.expectOr call fails if it can't retrieve the entire payload?
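One possible direction (a sketch only, not something confirmed by the thread): read the full body yourself and compare it against the Content-Length header, failing the effect when the two disagree. This assumes http is an org.http4s.client.Client[IO]; expectComplete is a made-up helper name.

import cats.effect.IO
import org.http4s.Request
import org.http4s.client.Client

// Hypothetical helper: decode the body as a String and fail if fewer bytes
// arrived than the server's Content-Length header promised.
// (A rough check; it assumes a UTF-8 body.)
def expectComplete(http: Client[IO], req: Request[IO]): IO[String] =
  http.fetch(req) { resp =>
    resp.as[String].flatMap { body =>
      resp.contentLength match {
        case Some(expected) if body.getBytes("UTF-8").length.toLong != expected =>
          IO.raiseError(new Exception(s"Truncated payload: expected $expected bytes"))
        case _ =>
          IO.pure(body)
      }
    }
  }

This only helps when the server sends a Content-Length; for chunked responses the truncation has to be detected another way (for example by raising the AsyncHttpClient request timeout so the remote end has time to finish).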

Related

Response entity was not subscribed after 100 seconds. Make sure to read the response entity body or call `discardBytes()` on it

I am building a Scala application. Within the application, we make a call to an external service and fetch the data.
When I hit the endpoint of this external service using Postman, I get the complete data, around 9000 lines of JSON, within 9 seconds.
But when I hit the same endpoint through my Scala application, I get a 200 OK response along with the error below:
[WARN] [06/09/2022 18:05:45.765] [default-akka.actor.default-dispatcher-9] [default/Pool(shared->http://ad-manager-api-production.ap-south-1.elasticbeanstalk.com:80)] [4 (WaitingForResponseEntitySubscription)] Response entity was not subscribed after 100 seconds. Make sure to read the response entity body or call `discardBytes()` on it. GET /admin/campaigns Empty -> 200 OK Chunked
I read about this and found that we can set the response-entity-subscription-timeout property to a higher value. I set it to about 100 seconds, but that does not seem to help.
My Code:
private val sendAndReceive = customSendAndReceive.getOrElse(HttpClientUtils.singleRequest)
.
.
.
def getActiveCampaigns: GetActiveCampaigns = () => {
  val request = HttpRequest(
    uri = s"$endpoint/admin/campaigns?status=PUBLISHED", // includes both PUBLISHED_READY and PUBLISHED_PAUSED
    method = HttpMethods.GET,
    headers = heathers
  )
  sendAndReceive(request).timed(getActiveCampaignsTimer).flatMap {
    case HttpResponse(StatusCodes.OK, _, entity, _) =>
      Unmarshal(entity).to[List[CampaignListDetailsDto]]
    case response @ HttpResponse(_, _, _, _) =>
      response.discardEntityBytes()
      Future.failed(new RuntimeException(s"Ad manager service exception: $response"))
    case response =>
      log.error(s"Error calling ad manager service: $response")
      response.discardEntityBytes()
      Future.failed(new RuntimeException(s"Ad manager service exception: $response"))
  }
}
.
.
.
def getCampaignSpendData(getActiveCampaigns: GetActiveCampaigns, getCampaignTotalSpend: GetCampaignTotalSpend)(implicit ec: ExecutionContext): GetCampaignsSpendData = () => {
  getActiveCampaigns()
    .andThen {
      case Failure(t) => log.error("Failed to fetch ads from ad manager", t)
    }
    .flatMap { campaignList =>
      Future.sequence(campaignList.map(campaign => budgetSpendPercentage(getCampaignTotalSpend)(campaign)))
    }
}
Questions
What exactly does this error mean? Is it that the client is able to connect to the endpoint but not able to get the complete data before the connection is closed/reset?
How can we address this issue?
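For context, the warning generally means the response entity's byte stream was not subscribed to (read or discarded) within the pool's subscription timeout (the akka.http.host-connection-pool.response-entity-subscription-timeout setting the question refers to), so the pool tore the connection down. A minimal sketch of one mitigation, assuming the question's unmarshalling setup: eagerly buffer the entity with toStrict so it is subscribed immediately, then unmarshal the in-memory bytes. readStrict is a made-up helper name and the 30.seconds limit is illustrative.

import scala.concurrent.{ExecutionContext, Future}
import scala.concurrent.duration._
import akka.http.scaladsl.model.HttpEntity
import akka.http.scaladsl.unmarshalling.{Unmarshal, Unmarshaller}
import akka.stream.Materializer

// Hypothetical helper: subscribe to the entity right away by buffering it,
// then unmarshal the strict (fully in-memory) entity.
def readStrict[T](entity: HttpEntity)(
    implicit mat: Materializer,
    ec: ExecutionContext,
    um: Unmarshaller[HttpEntity, T]
): Future[T] =
  entity.toStrict(30.seconds).flatMap(strict => Unmarshal(strict: HttpEntity).to[T])

In the code above this would take the place of the Unmarshal(entity).to[...] call inside the OK case; the trade-off is that the whole (possibly large) response is held in memory.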

Can I get Akka HTTP to backpressure when it encounters an error?

I'm using Akka HTTP in a project and for certain flows, we have external infrastructure that can easily become overloaded, throwing back HTTP 500-level errors when we do POST requests (which in our case are idempotent). In these instances, we retry the request like so:
RetryFlow.withBackoff(minBackoff = retryMin.seconds, maxBackoff = retryMax.seconds, randomFactor = 0d, maxRetries = numRetries, httpReqRespFlow)(
  decideRetry = { (request, response) =>
    response.httpResponse match {
      case Success(httpResponse: HttpResponse) =>
        if (httpResponse.status.isSuccess()) {
          None
        } else {
          processHttpResponseFailure(request, httpResponse)
        }
      case Failure(exception: Exception) =>
        logger.error(s"Retrying HTTP request from ${response.httpResponse} future failure", exception)
        response.httpResponse.map(_.discardEntityBytes())
        Some(request)
    }
  }
)
}.log("Backoff HTTP request")
I understand that Akka HTTP will backpressure if the entityBytes are not consumed or discarded but is there any way for me to force the flow to backpressure when we get these load-related errors? There will be several streams of execution, and ideally, I'd like the whole system to "throttle back" if it starts to overload the downstream system.
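One option (a sketch only, under the assumption that all requests to the fragile system go through a single shared flow): RetryFlow.withBackoff already backpressures its upstream while a retry is in flight, so combining it with an explicit throttle makes every stream feeding that flow slow down together. Req and Resp stand in for the question's request/response context types; the rate numbers and backoff values are illustrative.

import scala.concurrent.duration._
import akka.NotUsed
import akka.stream.scaladsl.{Flow, RetryFlow}

// Hypothetical wrapper: a global rate cap in front of the retrying HTTP flow.
// While a request is being retried with backoff, upstream elements queue
// behind the throttle instead of piling more load onto the downstream system.
def throttledRetrying[Req, Resp, Mat](
    httpReqRespFlow: Flow[Req, Resp, Mat],
    decideRetry: (Req, Resp) => Option[Req]
): Flow[Req, Resp, NotUsed] =
  Flow[Req]
    .throttle(10, 1.second) // illustrative: at most 10 requests per second
    .via(
      RetryFlow.withBackoff(
        minBackoff = 1.second,
        maxBackoff = 30.seconds,
        randomFactor = 0d,
        maxRetries = 5,
        flow = httpReqRespFlow
      )(decideRetry)
    )

If each stream of execution builds its own retrying flow, it will only slow itself down; to "throttle back" the whole system, the streams would need to share this one flow (or some other shared rate-limiting mechanism).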

Grpc parallel Stream communication leads to error: AkkaNettyGrpcClientGraphStage

I have two services: one sends stream data and the second one receives it, using akka-grpc for communication. When source data is provided, service one is called to process it and send it to service two via the gRPC client. It's possible that multiple instances of service one run at the same time when multiple pieces of source data arrive simultaneously. In a long-running test of my application, I see the error below in service one:
ERROR i.a.g.application.actors.DbActor - GraphStage [akka.grpc.internal.AkkaNettyGrpcClientGraphStage$$anon$1#59d40805] terminated abruptly, caused by for example materializer or act
akka.stream.AbruptStageTerminationException: GraphStage [akka.grpc.internal.AkkaNettyGrpcClientGraphStage$$anon$1#59d40805] terminated abruptly, caused by for example materializer or actor system termination.
I never shut down the actor system; I only stop actors after they have done their job. I'm also using proto3 and HTTP/2 for the request binding. Here is a piece of my code in service one:
//////////////////// server http binding /////////
val service: HttpRequest => Future[HttpResponse] =
  ServiceOneServiceHandler(new ServiceOneServiceImpl(system))

val bound = Http().bindAndHandleAsync(
  service,
  interface = config.getString("akka.grpc.server.interface"),
  port = config.getString("akka.grpc.server.default-http-port").toInt,
  connectionContext = HttpConnectionContext(http2 = Always))

bound.foreach { binding =>
  logger.info(s"gRPC server bound to: ${binding.localAddress}")
}

//////////////////// client /////////
def send2Server[A](data: ListBuffer[A]): Future[ResponseDTO] = {
  val reply = {
    val thisClient = interface.initialize()
    interface.call(client = thisClient, req = data.asInstanceOf[ListBuffer[StoreRequest]].toList)
  }
  reply
}
///////////////// grpc communication //////////
def send2GrpcServer[A](data: ListBuffer[A]): Unit = {
  val reply = send2Server(data)
  Await.ready(reply, Duration.Inf) onComplete {
    case util.Success(response: ResponseDTO) =>
      logger.info(s"got reply message: ${response.description}")
      ////// check response content and stop the application if the desired result is not found in the response
    case util.Failure(exp) =>
      ////// stop the application
      throw exp.getCause
  }
}
The error occurs exactly after waiting for service two's response:
Await.ready(reply, Duration.Inf)
I can't find the cause of the error.
UPDATE
I found that some streams go missing: service one sends a stream and waits indefinitely for the response, while service two never receives anything to reply to, but I still don't know why the stream is lost.
I also updated the akka-grpc plugin, but it made no difference:
addSbtPlugin("com.lightbend.akka.grpc" % "sbt-akka-grpc" % "0.6.1")
addSbtPlugin("com.lightbend.sbt" % "sbt-javaagent" % "0.1.4")

How do I get my asyncio client to call a socket server and wait for a response

I am working with an asyncio.Protocol server where the purpose is for the client to call the server, but wait until the server has responded and data is returned before stopping the client loop.
Based on the asyncio docs' Echo Client and Server example here: https://docs.python.org/3/library/asyncio-protocol.html#protocol-example-tcp-echo-server-and-client , transport.write(...) returns immediately when called.
From experience, calling loop.run_until_complete(coroutine) fails with RuntimeError: Event loop is running.
Running asyncio.sleep(n) in the data_received() method of the server doesn't have any effect either.
yield from asyncio.sleep(n) and yield from asyncio.async(asyncio.sleep(n)) in data_received() both hang the server.
My question is, how do I get my client to wait for the server to write a response before giving back control?
My recommendation is to never use the transport/protocol pair directly; asyncio has a Streams API for high-level programming.
Client code can look like:
@asyncio.coroutine
def communicate():
    reader, writer = yield from asyncio.open_connection(HOST, PORT)
    writer.write(b'data')
    yield from writer.drain()
    answer = yield from reader.read()
    # process answer, maybe send new data back to server and wait for answer again
    writer.close()
You don't have to change the client code.
echo-client.py
#!/usr/bin/env python3.4
import asyncio


class EchoClient(asyncio.Protocol):
    message = 'Client Echo'

    def connection_made(self, transport):
        transport.write(self.message.encode())
        print('data sent: {}'.format(self.message))

    def data_received(self, data):
        print('data received: {}'.format(data.decode()))

    def connection_lost(self, exc):
        print('server closed the connection')
        asyncio.get_event_loop().stop()


loop = asyncio.get_event_loop()
coro = loop.create_connection(EchoClient, '127.0.0.1', 8888)
loop.run_until_complete(coro)
loop.run_forever()
loop.close()
The trick is to place your code (including self.transport methods) into a coroutine and use the wait_for() method, with the yield from statement in front of the statements that require their values returned, or ones which take a while to complete:
echo-server.py
#!/usr/bin/env python3.4
import asyncio


class EchoServer(asyncio.Protocol):
    def connection_made(self, transport):
        peername = transport.get_extra_info('peername')
        print('connection from {}'.format(peername))
        self.transport = transport

    def data_received(self, data):
        print('data received: {}'.format(data.decode()))
        fut = asyncio.async(self.sleeper())
        result = asyncio.wait_for(fut, 60)

    @asyncio.coroutine
    def sleeper(self):
        yield from asyncio.sleep(2)
        self.transport.write("Hello World".encode())
        self.transport.close()


loop = asyncio.get_event_loop()
coro = loop.create_server(EchoServer, '127.0.0.1', 8888)
server = loop.run_until_complete(coro)
print('serving on {}'.format(server.sockets[0].getsockname()))

try:
    loop.run_forever()
except KeyboardInterrupt:
    print("exit")
finally:
    server.close()
    loop.close()
Run echo-server.py and then echo-client.py; the client will wait 2 seconds, as determined by asyncio.sleep, then stop.

Play 2.1 Scala SQLException Connection Timed out waiting for a free available connection

I have been working on this issue for quite a while now and I cannot find a solution...
It's a web app built with Play Framework 2.2.1, using an H2 db (for dev) and a simple Model package.
I am trying to implement a REST JSON endpoint and the code works... but only once per server instance.
def createOtherModel() = Action(parse.json) { request =>
  request.body \ "name" match {
    case _: JsUndefined =>
      BadRequest(Json.obj("error" -> true,
        "message" -> "Could not match name =(")).as("application/json")
    case name: JsValue =>
      request.body \ "value" match {
        case _: JsUndefined =>
          BadRequest(Json.obj("error" -> true,
            "message" -> "Could not match value =(")).as("application/json")
        case value: JsValue =>
          // this breaks the second time
          val session = ThinkingSession.dummy
          val json = Json.obj(
            "content" -> value,
            "thinkingSession" -> session.id
          )
          Ok(Json.obj("content" -> json)).as("application/json")
      }
  } else {
    BadRequest(Json.obj("error" -> true,
      "message" -> "Name was not content =(")).as("application/json")
  }
}
So basically I read the JSON, echo the "value" value, create a model object and send back its id.
The ThinkingSession.dummy function does this:
def all(): List[ThinkingSession] = {
  // Tried explicitly closing the connection, no difference
  //val conn = DB.getConnection()
  //try {
  //  DB.withConnection { implicit conn =>
  //    SQL("select * from thinking_session").as(ThinkingSession.DBParser *)
  //  }
  //} finally {
  //  conn.close()
  //}
  DB.withConnection { implicit conn =>
    SQL("select * from thinking_session").as(ThinkingSession.DBParser *)
  }
}

def dummy: ThinkingSession = {
  all().head
}
So this should do a SELECT * FROM thinking_session, create a list of model objects from the result, and return the first element of the list.
This works fine the first time after server start but the second time I get a
play.api.Application$$anon$1: Execution exception[[SQLException: Timed out waiting for a free available connection.]]
at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10.jar:2.2.1]
at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2$$anonfun$applyOrElse$3.apply(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2$$anonfun$applyOrElse$3.apply(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
at scala.Option.map(Option.scala:145) [scala-library.jar:na]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$2.applyOrElse(PlayDefaultUpstreamHandler.scala:261) [play_2.10.jar:2.2.1]
Caused by: java.sql.SQLException: Timed out waiting for a free available connection.
at com.jolbox.bonecp.DefaultConnectionStrategy.getConnectionInternal(DefaultConnectionStrategy.java:88) ~[bonecp.jar:na]
at com.jolbox.bonecp.AbstractConnectionStrategy.getConnection(AbstractConnectionStrategy.java:90) ~[bonecp.jar:na]
at com.jolbox.bonecp.BoneCP.getConnection(BoneCP.java:553) ~[bonecp.jar:na]
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:131) ~[bonecp.jar:na]
at play.api.db.DBApi$class.getConnection(DB.scala:67) ~[play-jdbc_2.10.jar:2.2.1]
at play.api.db.BoneCPApi.getConnection(DB.scala:276) ~[play-jdbc_2.10.jar:2.2.1]
My application.conf (db section)
db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:file:database/[my_db]"
db.default.logStatements=true
db.default.idleConnectionTestPeriod=5 minutes
db.default.connectionTestStatement="SELECT 1"
db.default.maxConnectionAge=0
db.default.connectionTimeout=10000
Initially the only thing set in my config was the connection (driver and URL), and the error occurred. I added all the other settings while reading up on the issue on the web.
What is interesting is that when I use the H2 in-memory db, it works once after server start and fails after that. When I use the H2 file-based db, it only works once, regardless of the server instance.
Can anyone give me some insight into this issue? I have found some reports of BoneCP problems and tried upgrading to 0.8.0-rc1, but nothing changed... I am at a loss =(
Try setting a maxConnectionAge and an idle timeout.
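For example, in the BoneCP-backed settings Play 2.2 reads from application.conf, that would look something like the following (the values are illustrative; idleMaxAge is the idle-timeout setting):
db.default.maxConnectionAge=30 minutes
db.default.idleMaxAge=10 minutes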
It turns out the error was somewhere else entirely... it was a good ol' stack overflow; I have not seen one in a long time. I tried down-voting my own question, but that's not possible ^^