How to pass correlated value in next request in Gatling script - scala

How can I replace "details1" in "request_2" with the correlated value "SynchToken" captured in "request_1"? I tried replacing it with ${SynchToken}, but the placeholder does not resolve to the correlated value.
val Transaction_Name_1 = group("Transaction_Name_1") {
  exec(http("request_1")
    .get(session => "/abc/details1?_=" + System.currentTimeMillis())
    .check(regex("""name="SYNCHRONIZER_TOKEN" value="(.*?)"""").saveAs("SynchToken")))
  .pause(5)
  .exec(http("request_2")
    .get(session => "/abc/details1?_=" + System.currentTimeMillis()))
}

You should really spend some time reading the documentation.
Here, you need to use the Session API: because you build the URL inside a function, Gatling Expression Language placeholders such as ${SynchToken} are not resolved in that string, so you have to read the attribute from the Session yourself.
exec(http("request_2")
.get(session => "/abc/" + session("SynchToken").as[String] + "?_=" + System.currentTimeMillis()))

Related

How can I parameterise information in Gatling scenarios

I need to send specific parameters to a scenario that is being reused multiple times with different payloads depending on the workflows. The following is the code that is to be reused:
var reqName = ""
var payloadName = ""
lazy val sendInfo: ScenarioBuilder = scenario("Send info")
.exec(session => {
reqName = session("localReqName").as[String]
payloadName = session("localPayloadName").as[String]
session}
)
.exec(jms(s"$reqName")
.send
.queue(simQueue)
.textMessage(ElFileBody(s"$payloadName.json"))
)
.exec(session => {
val filePath = s"$payloadName"
val body = new String(Files.readAllBytes(Paths.get(ClassLoader.getSystemResource(filePath).toURI)))
logger.info("timestamp value: " + session("timestamp").as[String])
logger.debug("Template body:\n " + body)
session
})
I know that you can chain scenarios in Scala/Gatling, but how can I pass information like reqName and payloadName down the chain? Here reqName names the request that sends the info and payloadName names the JSON payload for that request:
lazy val randomInfoSend: ScenarioBuilder = scenario("Send random info payloads")
  .feed(csv("IDfile.csv").circular)
  .randomSwitch(
    (randomInfoType1prob) -> exec(
      feed(timeFeeder)
        .exec(session => {
          payloadName = "Info1payload.json"
          reqName = "Info1Send"
          session.set("localReqName", "Info1Send")
          session.set("localPayloadName", "Info1payload.json")
          session
        })
        .exec(sendInfo)
    ),
    (100.0 - randomInfoType1prob) -> exec(
      feed(timeFeeder)
        .exec(session => {
          payloadName = "Info2Payload.json"
          reqName = "Info2Send"
          session.set("localReqName", "Info2Send")
          session.set("localPayloadName", "Info2Payload.json")
          session
        })
        .exec(sendInfo)
    )
I attempted the above, but the values of those two parameters were not passed through correctly (the IDs and timestamps were fed through correctly, though). Any suggestions?
Please properly read the Session API documentation. Session is immutable, so Session#set returns a new instance; you have to return that new instance from your session function instead of the original session.
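A minimal sketch of the corrected per-branch session function (keys taken from the question): the chained set calls build a new Session, and that Session is what the function returns.

.exec { session =>
  // each set returns a copy of the session; return the final copy
  session
    .set("localReqName", "Info1Send")
    .set("localPayloadName", "Info1payload.json")
}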

Flink table sink doesn't work with debezium-avro-confluent source

I'm using Flink SQL to read Debezium Avro data from Kafka and store it as Parquet files in S3. Here is my code:
import os
from pyflink.datastream import StreamExecutionEnvironment, FsStateBackend
from pyflink.table import TableConfig, DataTypes, BatchTableEnvironment, StreamTableEnvironment, \
    ScalarFunction
exec_env = StreamExecutionEnvironment.get_execution_environment()
exec_env.set_parallelism(1)
# start a checkpoint every 12 s
exec_env.enable_checkpointing(12000)
t_config = TableConfig()
t_env = StreamTableEnvironment.create(exec_env, t_config)
INPUT_TABLE = 'source'
KAFKA_TOPIC = os.environ['KAFKA_TOPIC']
KAFKA_BOOTSTRAP_SERVER = os.environ['KAFKA_BOOTSTRAP_SERVER']
OUTPUT_TABLE = 'sink'
S3_BUCKET = os.environ['S3_BUCKET']
OUTPUT_S3_LOCATION = os.environ['OUTPUT_S3_LOCATION']
ddl_source = f"""
CREATE TABLE {INPUT_TABLE} (
`event_time` TIMESTAMP(3) METADATA FROM 'timestamp' VIRTUAL,
`id` BIGINT,
`price` DOUBLE,
`type` INT,
`is_reinvite` INT
) WITH (
'connector' = 'kafka',
'topic' = '{KAFKA_TOPIC}',
'properties.bootstrap.servers' = '{KAFKA_BOOTSTRAP_SERVER}',
'scan.startup.mode' = 'earliest-offset',
'format' = 'debezium-avro-confluent',
'debezium-avro-confluent.schema-registry.url' = 'http://kafka-production-schema-registry:8081'
)
"""
ddl_sink = f"""
CREATE TABLE {OUTPUT_TABLE} (
`event_time` TIMESTAMP,
`id` BIGINT,
`price` DOUBLE,
`type` INT,
`is_reinvite` INT
) WITH (
'connector' = 'filesystem',
'path' = 's3://{S3_BUCKET}/{OUTPUT_S3_LOCATION}',
'format' = 'parquet'
)
"""
t_env.sql_update(ddl_source)
t_env.sql_update(ddl_sink)
t_env.execute_sql(f"""
INSERT INTO {OUTPUT_TABLE}
SELECT *
FROM {INPUT_TABLE}
""")
When I submit the job, I get the following error message,
pyflink.util.exceptions.TableException: Table sink 'default_catalog.default_database.sink' doesn't support consuming update and delete changes which is produced by node TableSourceScan(table=[[default_catalog, default_database, source]], fields=[id, price, type, is_reinvite, timestamp])
I'm using Flink 1.12.1. The source works properly; I have tested it with a 'print' connector in the sink. Here is a sample of the data, extracted from the task manager logs when using the 'print' connector as the table sink:
-D(2021-02-20T17:07:27.298,14091764,26.0,9,0)
-D(2021-02-20T17:07:27.298,14099765,26.0,9,0)
-D(2021-02-20T17:07:27.299,14189806,16.0,9,0)
-D(2021-02-20T17:07:27.299,14189838,37.0,9,0)
-D(2021-02-20T17:07:27.299,14089840,26.0,9,0)
-D(2021-02-20T17:07:27.299,14089847,26.0,9,0)
-D(2021-02-20T17:07:27.300,14189859,26.0,9,0)
-D(2021-02-20T17:07:27.301,14091808,37.0,9,0)
-D(2021-02-20T17:07:27.301,14089911,37.0,9,0)
-D(2021-02-20T17:07:27.301,14099937,26.0,9,0)
-D(2021-02-20T17:07:27.302,14091851,37.0,9,0)
How can I make my table sink work with the filesystem connector?
What happens is that:
when receiving the Debezium records, Flink updates a logical table by adding, updating and removing rows based on their primary key;
the only sinks that can handle that kind of changelog are sinks with a notion of update by key. JDBC is a typical example: it is straightforward for Flink to translate "the row with key foo was updated to bar" into "update the JDBC row with key foo to bar". The filesystem sink does not support that kind of operation, since files are append-only.
See also the Flink documentation on append and update queries.
In practice, to do the conversion we first have to decide what exactly we want to end up with in the append-only file.
If what we want is the latest version of each item every time an id is updated, then to my knowledge the way to go is to convert the table to a retract stream first and then write that stream out with a FileSink. Note that in that case each element carries a boolean saying whether the row was upserted or deleted, and we have to decide how that information should be visible in the resulting file.
Note: I used this other CDC example from the Flink SQL cookbook to reproduce a similar setup:
// assuming a Flink retract table of claims build from a CDC stream:
tableEnv.executeSql("" +
" CREATE TABLE accident_claims (\n" +
" claim_id INT,\n" +
" claim_total FLOAT,\n" +
" claim_total_receipt VARCHAR(50),\n" +
" claim_currency VARCHAR(3),\n" +
" member_id INT,\n" +
" accident_date VARCHAR(20),\n" +
" accident_type VARCHAR(20),\n" +
" accident_detail VARCHAR(20),\n" +
" claim_date VARCHAR(20),\n" +
" claim_status VARCHAR(10),\n" +
" ts_created VARCHAR(20),\n" +
" ts_updated VARCHAR(20)" +
") WITH (\n" +
" 'connector' = 'postgres-cdc',\n" +
" 'hostname' = 'localhost',\n" +
" 'port' = '5432',\n" +
" 'username' = 'postgres',\n" +
" 'password' = 'postgres',\n" +
" 'database-name' = 'postgres',\n" +
" 'schema-name' = 'claims',\n" +
" 'table-name' = 'accident_claims'\n" +
" )"
);
// convert it to a stream
Table accidentClaims = tableEnv.from("accident_claims");
DataStream<Tuple2<Boolean, Row>> accidentClaimsStream = tableEnv
.toRetractStream(accidentClaims, Row.class);
// and write to file
final FileSink<Tuple2<Boolean, Row>> sink = FileSink
// TODO: adapt the output format here:
.forRowFormat(new Path("/tmp/flink-demo"),
(Encoder<Tuple2<Boolean, Row>>) (element, stream) -> stream.write((element.toString() + "\n").getBytes(StandardCharsets.UTF_8)))
.build();
accidentClaimsStream.sinkTo(sink);
streamEnv.execute();
Note that during the conversion you obtain a boolean telling you whether that row is a new value for that accident claim or a deletion of such a claim. My basic FileSink configuration there simply includes that boolean in the output; how to handle deletions has to be decided case by case.
The result in the file then looks like this:
head /tmp/flink-demo/2021-03-09--09/.part-c7cdb74e-893c-4b0e-8f69-1e8f02505199-0.inprogress.f0f7263e-ec24-4474-b953-4d8ef4641998
(true,1,4153.92,null,AUD,412,2020-06-18 18:49:19,Permanent Injury,Saltwater Crocodile,2020-06-06 03:42:25,IN REVIEW,2021-03-09 06:39:28,2021-03-09 06:39:28)
(true,2,8940.53,IpsumPrimis.tiff,AUD,323,2019-03-18 15:48:16,Collision,Blue Ringed Octopus,2020-05-26 14:59:19,IN REVIEW,2021-03-09 06:39:28,2021-03-09 06:39:28)
(true,3,9406.86,null,USD,39,2019-04-28 21:15:09,Death,Great White Shark,2020-03-06 11:20:54,INITIAL,2021-03-09 06:39:28,2021-03-09 06:39:28)
(true,4,3997.9,null,AUD,315,2019-10-26 21:24:04,Permanent Injury,Saltwater Crocodile,2020-06-25 20:43:32,IN REVIEW,2021-03-09 06:39:28,2021-03-09 06:39:28)
(true,5,2647.35,null,AUD,74,2019-12-07 04:21:37,Light Injury,Cassowary,2020-07-30 10:28:53,REIMBURSED,2021-03-09 06:39:28,2021-03-09 06:39:28)

WSClient - Too Many Open Files

I'm working with Play Framework 2.4 on CentOS 6 and my application is throwing this exception:
java.net.SocketException: Too many open files
I've searched a lot of topics on Stack Overflow and tried the suggested solutions:
Increased the number of open files to 65535;
Changed the hard and soft limits in /etc/security/limits.conf;
Changed the value of fs.file-max in /etc/sysctl.conf;
Reduced the timeout in /proc/sys/net/ipv4/tcp_fin_timeout.
The error keeps happening. On other sites I've found people facing the same problem because they weren't calling close() on WSClient, but in my case I'm working with dependency injection:
@Singleton
class RabbitService @Inject()(ws: WSClient) {
  def myFunction() {
    ws.url("url").withHeaders(
      "Content-type" -> "application/json",
      "Authorization" -> ("Bearer " + authorization))
      .post(message)
      .map(r => {
        r.status match {
          case 201 => Logger.debug("It Rocks")
          case _ => Logger.error(s"It sucks")
        }
      })
  }
}
If I change my implementation to await the result, it works like a charm, but the performance is very poor - I would like to use map instead of waiting for the result:
@Singleton
class RabbitService @Inject()(ws: WSClient) {
  def myFunction() {
    val response = ws.url("url")
      .withHeaders(
        "Content-type" -> "application/json",
        "Authorization" -> ("Bearer " + authorization))
      .post(message)
    Try(Await.result(response, 1 seconds)) match {
      case Success(r) =>
        if (r.status == 201) {
          Logger.debug(s"It rocks")
        } else {
          Logger.error(s"It sucks")
        }
      case Failure(e) => Logger.error(e.getMessage, e)
    }
  }
}
Does anyone have an idea how I can fix this error? I've tried everything without success.
If anyone is facing the same problem: you need to configure WSClient in application.conf and set maxConnectionsTotal and maxConnectionsPerHost.
This is how I solved this problem.
https://www.playframework.com/documentation/2.5.x/ScalaWS
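For reference, a minimal application.conf sketch of that change, assuming the AHC-backed WS client described at the link above (Play 2.5 uses the play.ws.ahc prefix; on Play 2.4 the equivalent settings lived under play.ws.ning; the numbers below are only illustrative):

# cap the connection pool so sockets are reused instead of piling up
play.ws.ahc.maxConnectionsPerHost = 100
play.ws.ahc.maxConnectionsTotal = 200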

OutOfMemoryError: Java heap space and memory variables in Spark

I have been trying to execute a Scala program and the output always ends up looking something like this:
15/08/17 14:13:14 ERROR util.Utils: uncaught error in thread SparkListenerBus, stopping SparkContext
java.lang.OutOfMemoryError: Java heap space
at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:64)
at java.lang.StringBuilder.<init>(StringBuilder.java:97)
at com.fasterxml.jackson.core.util.TextBuffer.contentsAsString(TextBuffer.java:339)
at com.fasterxml.jackson.core.io.SegmentedStringWriter.getAndClear(SegmentedStringWriter.java:83)
at com.fasterxml.jackson.databind.ObjectMapper.writeValueAsString(ObjectMapper.java:2344)
at org.json4s.jackson.JsonMethods$class.compact(JsonMethods.scala:32)
at org.json4s.jackson.JsonMethods$.compact(JsonMethods.scala:44)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$1.apply(EventLoggingListener.scala:143)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$1.apply(EventLoggingListener.scala:143)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:143)
at org.apache.spark.scheduler.EventLoggingListener.onJobStart(EventLoggingListener.scala:169)
at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:34)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:56)
at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:37)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:79)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1215)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
or like this
15/08/19 11:45:11 ERROR util.Utils: uncaught error in thread SparkListenerBus, stopping SparkContext
java.lang.OutOfMemoryError: GC overhead limit exceeded
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider$Impl.createInstance(DefaultSerializerProvider.java:526)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider$Impl.createInstance(DefaultSerializerProvider.java:505)
at com.fasterxml.jackson.databind.ObjectMapper._serializerProvider(ObjectMapper.java:2846)
at com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:1902)
at com.fasterxml.jackson.core.base.GeneratorBase.writeObject(GeneratorBase.java:280)
at com.fasterxml.jackson.core.JsonGenerator.writeObjectField(JsonGenerator.java:1255)
at org.json4s.jackson.JValueSerializer.serialize(JValueSerializer.scala:22)
at org.json4s.jackson.JValueSerializer.serialize(JValueSerializer.scala:7)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:128)
at com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:1902)
at com.fasterxml.jackson.core.base.GeneratorBase.writeObject(GeneratorBase.java:280)
at org.json4s.jackson.JValueSerializer.serialize(JValueSerializer.scala:17)
at org.json4s.jackson.JValueSerializer.serialize(JValueSerializer.scala:7)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:128)
at com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:1902)
at com.fasterxml.jackson.core.base.GeneratorBase.writeObject(GeneratorBase.java:280)
at com.fasterxml.jackson.core.JsonGenerator.writeObjectField(JsonGenerator.java:1255)
at org.json4s.jackson.JValueSerializer.serialize(JValueSerializer.scala:22)
at org.json4s.jackson.JValueSerializer.serialize(JValueSerializer.scala:7)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:128)
at com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:1902)
at com.fasterxml.jackson.core.base.GeneratorBase.writeObject(GeneratorBase.java:280)
at org.json4s.jackson.JValueSerializer.serialize(JValueSerializer.scala:17)
at org.json4s.jackson.JValueSerializer.serialize(JValueSerializer.scala:7)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:128)
at com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:1902)
at com.fasterxml.jackson.core.base.GeneratorBase.writeObject(GeneratorBase.java:280)
at com.fasterxml.jackson.core.JsonGenerator.writeObjectField(JsonGenerator.java:1255)
at org.json4s.jackson.JValueSerializer.serialize(JValueSerializer.scala:22)
at org.json4s.jackson.JValueSerializer.serialize(JValueSerializer.scala:7)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:128)
at com.fasterxml.jackson.databind.ObjectMapper._configAndWriteValue(ObjectMapper.java:2881)
Are these errors on the driver or executor side?
I am a bit confused with the memory variables that Spark uses. My current settings are
spark-env.sh
export SPARK_WORKER_MEMORY=6G
export SPARK_DRIVER_MEMORY=6G
export SPARK_EXECUTOR_MEMORY=4G
spark-defaults.conf
# spark.driver.memory 6G
# spark.executor.memory 4G
# spark.executor.extraJavaOptions ' -Xms5G -Xmx5G '
# spark.driver.extraJavaOptions ' -Xms5G -Xmx5G '
Do I need to uncomment any of the variables contained in spark-defaults.conf, or are they redundant?
Is for example setting SPARK_WORKER_MEMORY equivalent to setting the spark.executor.memory?
Part of my Scala code, where it stops after a few iterations:
val filteredNodesGroups = connCompGraph.vertices.map { case (_, array) => array(pagerankIndex) }.distinct.collect
for (id <- filteredNodesGroups) {
  val clusterGraph = connCompGraph.subgraph(vpred = (_, attr) => attr(pagerankIndex) == id)
  val pagerankGraph = clusterGraph.pageRank(0.15)
  val completeClusterPagerankGraph = clusterGraph.outerJoinVertices(pagerankGraph.vertices) {
    case (uid, attrList, Some(pr)) =>
      attrList :+ ("inClusterPagerank:" + pr)
    case (uid, attrList, None) =>
      attrList :+ ""
  }
  val sortedClusterNodes = completeClusterPagerankGraph.vertices.toArray.sortBy(_._2(pagerankIndex + 1))
  println(sortedClusterNodes(0)._2(1) + " with rank: " + sortedClusterNodes(0)._2(pagerankIndex + 1))
}
Many questions disguised as one. Thank you in advance!
I'm not a Spark expert, but there is a line that seems suspicious to me:
val filteredNodesGroups = connCompGraph.vertices.map{ case(_, array) => array(pagerankIndex) }.distinct.collect
Basically, by using the collect method you are bringing all the data from your executors back to the driver before even processing it. Do you have any idea of the size of this data?
In order to fix this, you should proceed in a more functional way. To extract the distinct values, you could for example use a groupBy and map:
val pairs = connCompGraph.vertices.map{ case(_, array) => array(pagerankIndex) }
pairs.groupBy(_./* the property to group on */)
.map { case (_, arrays) => /* map function */ }
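To make that pattern concrete, here is a small self-contained sketch; the grouping key reuses the question's pagerankIndex field, but the per-group aggregate (a simple count) is made up for illustration.

val pairs = connCompGraph.vertices.map { case (_, array) => (array(pagerankIndex), array) }
val perGroup = pairs
  .groupByKey()                       // one entry per distinct group id
  .mapValues(_.size)                  // e.g. count the vertices in each group
perGroup.take(10).foreach(println)    // only a small sample reaches the driver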
Regarding the collect, there should be a way to sort each partition and then return only the (processed) result to the driver. I would like to help more, but I need more information about what you are trying to do.
UPDATE
After digging a little bit, it seems you could sort your data using shuffling, as described here.
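As a hedged, generic illustration of that idea (not code from the thread): Spark pair RDDs offer repartitionAndSortWithinPartitions, which sorts during the shuffle instead of on the driver.

import org.apache.spark.HashPartitioner

// toy (rank, vertexId) pairs; in the real job they would come from the graph
val ranks = sc.parallelize(Seq(0.3 -> 1L, 0.7 -> 2L, 0.1 -> 3L))
val sortedWithinPartitions = ranks.repartitionAndSortWithinPartitions(new HashPartitioner(4))
sortedWithinPartitions.take(3).foreach(println)   // only a small sample reaches the driver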
UPDATE
So far, I've tried to avoid the collect and to keep the data on the executors as long as possible, but I have no idea how to solve this:
val filteredNodesGroups = connCompGraph.vertices.map{ case(_, array) => array(pagerankIndex) }.distinct()
val clusterGraphs = filteredNodesGroups.map { id => connCompGraph.subgraph(vpred = (_, attr) => attr(pagerankIndex) == id) }
val pageRankGraphs = clusterGraphs.map(_.pageRank(0.15))
Basically, you would need to join two RDD[Graph[Array[String], String]], but I don't know what key to use; besides, this would necessarily return an RDD of RDDs (I don't know if you can even do that). I'll try to find something later today.

Why does calling error or done in a BodyParser's Iteratee make the request hang in Play Framework 2.0?

I am trying to understand the reactive I/O concepts of Play 2.0 framework. In order to get a better understanding from the start I decided to skip the framework's helpers to construct iteratees of different kinds and to write a custom Iteratee from scratch to be used by a BodyParser to parse a request body.
Starting with the information available in the Iteratees and ScalaBodyParser docs and two presentations about Play's reactive I/O, this is what I came up with:
import play.api.mvc._
import play.api.mvc.Results._
import play.api.libs.iteratee.{Iteratee, Input}
import play.api.libs.concurrent.Promise
import play.api.libs.iteratee.Input.{El, EOF, Empty}
01 object Upload extends Controller {
02 def send = Action(BodyParser(rh => new SomeIteratee)) { request =>
03 Ok("Done")
04 }
05 }
06
07 case class SomeIteratee(state: Symbol = 'Cont, input: Input[Array[Byte]] = Empty, received: Int = 0) extends Iteratee[Array[Byte], Either[Result, Int]] {
08 println(state + " " + input + " " + received)
09
10 def fold[B](
11 done: (Either[Result, Int], Input[Array[Byte]]) => Promise[B],
12 cont: (Input[Array[Byte]] => Iteratee[Array[Byte], Either[Result, Int]]) => Promise[B],
13 error: (String, Input[Array[Byte]]) => Promise[B]
14 ): Promise[B] = state match {
15 case 'Done => { println("Done"); done(Right(received), Input.Empty) }
16 case 'Cont => cont(in => in match {
17 case in: El[Array[Byte]] => copy(input = in, received = received + in.e.length)
18 case Empty => copy(input = in)
19 case EOF => copy(state = 'Done, input = in)
20 case _ => copy(state = 'Error, input = in)
21 })
22 case _ => { println("Error"); error("Some error.", input) }
23 }
24 }
(Remark: All these things are new to me, so please forgive if something about this is total crap.)
The Iteratee is pretty dumb, it just reads all chunks, sums up the number of received bytes and prints out some messages. Everything works as expected when I call the controller action with some data - I can observe all chunks are received by the Iteratee and when all data is read it switches to state done and the request ends.
Now I started to play around with the code because I wanted to see the behaviour for these two cases:
Switching into state error before all input is read.
Switching into state done before all input is read and returning a Result instead of the Int.
My understanding of the documentation mentioned above is that both should be possible but actually I am not able to understand the observed behaviour. To test the first case I changed line 17 of the above code to:
17 case in: El[Array[Byte]] => copy(state = if(received + in.e.length > 10000) 'Error else 'Cont, input = in, received = received + in.e.length)
So I just added a condition to switch into the error state if more than 10000 bytes were received. The output I get is this:
'Cont Empty 0
'Cont El([B#38ecece6) 8192
'Error El([B#4ab50d3c) 16384
Error
Error
Error
Error
Error
Error
Error
Error
Error
Error
Error
Then the request hangs forever and never ends. My expectation from the above-mentioned docs was that when I call the error function inside fold of an Iteratee, processing should stop. What happens instead is that the Iteratee's fold method is called several more times after error has been called, and then the request hangs.
When I switch into the done state before reading all input, the behaviour is quite similar. Changing line 15 to:
15 case 'Done => { println("Done with " + input); done(if (input == EOF) Right(received) else Left(BadRequest), Input.Empty) }
and line 17 to:
17 case in: El[Array[Byte]] => copy(state = if(received + in.e.length > 10000) 'Done else 'Cont, input = in, received = received + in.e.length)
produces the following output:
'Cont Empty 0
'Cont El([B#16ce00a8) 8192
'Done El([B#2e8d214a) 16384
Done with El([B#2e8d214a)
Done with El([B#2e8d214a)
Done with El([B#2e8d214a)
Done with El([B#2e8d214a)
and again the request hangs forever.
My main question is why the request is hanging in the above mentioned cases. If anybody could shed light on this I would greatly appreciate it!
Your understanding is perfectly right, and I have just pushed a fix to master:
https://github.com/playframework/Play20/commit/ef70e641d9114ff8225332bf18b4dd995bd39bcc
It fixes both cases, plus exceptions thrown inside Iteratees.
Nice use of copy on a case class for implementing an Iteratee, BTW.
Things must have changed with Play 2.1 - Promise is no longer parametric, and this example no longer compiles.