Gatling request with random number of body parts - scala

I want to test a HTTP upload API that accepts a list of files in a single request.
I want to write a Gatling script that generates a request with a random number of body parts each time.
This is what I have:
feed(feeder)
  .exec {
    var req = http("My request")
      .post("/${id}")
      .header("Content-Type", "multipart/mixed")
    1 to Random.nextInt(10) foreach { i =>
      req = req.bodyPart(
        ByteArrayBodyPart("file-put", session => randomByteArray(10 * 1024 + Random.nextInt(10 * 1024 * 1024)))
          .contentType("application/pdf")
          .fileName(session => s"/$i-UPLOAD-TEST.pdf")
      )
    }
    req
  }

private def randomByteArray(size: Int): Array[Byte] = {
  val bytes = new Array[Byte](size)
  Random.nextBytes(bytes)
  bytes
}
With every request the file sizes and contents are randomized, so the randomByteArray works fine. But each time I get the same number of body parts. I assume it's because the request "template" is generated at the start of the simulation, so the foreach loop runs only once and configures the number of body parts for all the future requests.
How can I make the number of body parts random each time?

You'd have to build each branch beforehand (one chain with one body part, one with two, and so on) and then switch between them randomly.
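For example, with Gatling's randomSwitch you could pre-build one request per part count and distribute iterations across them. A minimal sketch reusing the question's ByteArrayBodyPart setup and randomByteArray helper; the even weights and the upper bound of 10 parts are assumptions:

// Sketch: pre-build one request per possible part count and pick one at random
// per iteration with randomSwitch (even weights assumed).
def requestWithParts(n: Int) = {
  val base = http("My request")
    .post("/${id}")
    .header("Content-Type", "multipart/mixed")
  (1 to n).foldLeft(base) { (req, i) =>
    req.bodyPart(
      ByteArrayBodyPart("file-put", session => randomByteArray(10 * 1024 + Random.nextInt(10 * 1024 * 1024)))
        .contentType("application/pdf")
        .fileName(session => s"/$i-UPLOAD-TEST.pdf")
    )
  }
}

feed(feeder)
  .randomSwitch(
    10.0 -> exec(requestWithParts(1)),
    10.0 -> exec(requestWithParts(2)),
    10.0 -> exec(requestWithParts(3))
    // ...and so on, one weighted entry per possible part count
  )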

Related

Gatling Feeders - creating new instances

The following code works as expected: for each iteration, the next value from valueFeed is popped and written to the output.csv file.
class TestSimulation extends Simulation {
  val valueFeed = csv("input.csv")
  val writer = {
    val fos = new java.io.FileOutputStream("output.csv")
    new java.io.PrintWriter(fos, true)
  }
  val scn = scenario("Test Sim")
    .repeat(2) {
      feed(valueFeed)
        .exec(session => {
          writer.println(session("value").as[String])
          session
        })
    }
  setUp(scn.inject(constantUsersPerSec(1) during (10 seconds)))
}
When the feeder creation is inlined in the feed call, the behaviour is still exactly the same:
class TestSimulation extends Simulation {
  val writer = {
    val fos = new java.io.FileOutputStream("output.csv")
    new java.io.PrintWriter(fos, true)
  }
  val scn = scenario("Test Sim")
    .repeat(2) {
      feed(csv("input.csv"))
        .exec(session => {
          writer.println(session("value").as[String])
          session
        })
    }
  setUp(scn.inject(constantUsersPerSec(1) during (10 seconds)))
}
Since the feeder creation is not extracted, I would not expect each iteration to use the same feeder, but rather to create its own feeder instance.
Why, then, does the behaviour imply that the same feeder is being used, with the first value from the input file not always the one written to the output?
Example input file (data truncated, tested with more lines to prevent empty feeder exception):
value
1
2
3
4
5
Because csv(...) is in fact a FeederBuilder, which is called once to produce the feeder used within the scenario.
The Gatling DSL defines builders; these are executed only once, at startup, so even when you inline the csv(...) call you get a feeder shared between all users, because the same (and only) builder is used to create all of them.
If you want each user to have its own copy of the data, you can't use the .feed method, but you can get all the records and use other looping constructs to iterate through them:
val records = csv("foo.csv").records

foreach(records, "record") {
  exec(flattenMapIntoAttributes("${record}"))
}
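For reference, a sketch of how that could be dropped into the asker's scenario (input.csv and the writer as in the question; this wiring is an assumption, not tested code):

// Sketch: each user iterates over all records itself instead of sharing one feeder.
val records = csv("input.csv").records

val scn = scenario("Test Sim")
  .foreach(records, "record") {
    exec(flattenMapIntoAttributes("${record}"))
      .exec(session => {
        writer.println(session("value").as[String])
        session
      })
  }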

akka-http chunked response concatenation

I'm using akka-http to make a request to an HTTP service which sends back a chunked response. This is what the relevant bit of code looks like:
val httpRequest: HttpRequest = // build the request
val request = Http().singleRequest(httpRequest)

request.flatMap { response =>
  response.entity.dataBytes.runForeach { chunk =>
    println("-----")
    println(chunk.utf8String)
  }
}
and the output produced in the command line looks something like this:
-----
{"data":
-----
"some text"}
-----
{"data":
-----
"this is a longer
-----
text"}
-----
{"data": "txt"}
-----
...
The logical piece of data (a JSON document in this case) ends with an end-of-line sequence \r\n, but the problem is that the JSON doesn't always fit into a single HTTP response chunk, as is clearly visible in the example above.
My question is: how do I concatenate the incoming chunked data into complete JSON documents so that the resulting container type still remains either Source[Out, M1] or Flow[In, Out, M2]? I'd like to stay within the akka-stream idiom.
UPDATE: It's also worth mentioning that the response is endless and the aggregation must be done in real time.
Found a solution:
val httpRequest: HttpRequest = // build the request
val request = Http().singleRequest(httpRequest)

request.flatMap { response =>
  response.entity.dataBytes
    .scan("")((acc, curr) => if (acc.contains("\r\n")) curr.utf8String else acc + curr.utf8String)
    .filter(_.contains("\r\n"))
    .runForeach { json =>
      println("-----")
      println(json)
    }
}
The akka-stream documentation has an entry in the cookbook for this very problem: "Parsing lines from a stream of ByteString". Their solution is quite verbose but can also handle the situation where a single chunk contains multiple lines. This seems more robust, since the chunk size could change to be big enough to hold multiple JSON messages.
response.entity.dataBytes
  .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 8096))
  .mapAsyncUnordered(Runtime.getRuntime.availableProcessors()) { data =>
    if (response.status == OK) {
      val event: Future[Event] = Unmarshal(data).to[Event]
      event.foreach(x => log.debug("Received event: {}.", x))
      event.map(Right(_))
    } else {
      Future.successful(data.utf8String)
        .map(Left(_))
    }
  }
The only requirement is that you know the maximum size of one record. If you start with something small, the default behavior is to fail when a record is larger than the limit. You can set it to truncate instead of failing, but a truncated piece of JSON makes no sense.
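Applied to the question's \r\n-delimited JSON, a minimal sketch could look like this (the 64 KB maximumFrameLength is an assumed upper bound for one record):

// Sketch: split the endless byte stream on \r\n and emit one complete JSON string per frame.
import akka.stream.scaladsl.Framing
import akka.util.ByteString

response.entity.dataBytes
  .via(Framing.delimiter(ByteString("\r\n"), maximumFrameLength = 64 * 1024))
  .map(_.utf8String)
  .runForeach { json =>
    println("-----")
    println(json)
  }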

Send attachment to email Gateway using Akka Actors

I am new to Akka and Scala.
I have to build a service which sends emails with an attachment to the given email IDs. I am using SendGrid as a gateway.
For the attachment I have a 28KB file uploaded in S3.
I have a REST service to which I can pass a document ID and fetch the document as an InputStream. This InputStream then has to be sent to many email IDs. Downloading the file is handled by an actor called "attachmentActor", which I create below.
Now let's say I have two email IDs that I need to send the attachment to. The problem I am facing is that the complete file is not sent to both; in fact, the 28KB file gets split into 16KB and 12KB pieces, which are what finally get sent to the email IDs:
so email ID 1 would receive 16KB // it should actually be 28KB
email ID 2 would receive 12KB // it should actually be 28KB
Following is the code:
class SendgridConsumer {
  def receive(request: EmailRequest) = {
    val service = Sendgrid(username, password)
    val logData = request.logData
    var errorMessage = new String
    val attachmentRef = system.actorOf(Props[AttachmentRequestConsumer], "attachmentActor")
    val future = attachmentRef ? AttachmentRequest(request.documentId.get)
    var targetStream = Await.result(future, timeout.duration).asInstanceOf[InputStream]
    val results = request.emailContacts.par.map { emailContact =>
      val email = postData(new Email(), request, emailContact, targetStream, request.documentName.get)
      val sendGridResponse = service.send(email)
    }
  }
}
// postData() creates an Email Object
// This is my Attachment Actor
class AttachmentRequestConsumer extends Actor with ActorLogging {
  def receive = {
    case request: AttachmentRequest => {
      log.info(" inside Attachment RequestConsumer with document Id:" + request.documentId)
      val req: HttpRequest = Http(url)
      val response = req.asBytes
      val targetStream = ByteSource.wrap(response.body).openStream()
      log.info("response body :" + response.body)
      sender ! targetStream
      targetStream.close()
    }
  }
}
One of the things you should know about actors is that you should not be sending mutable objects (such as InputStream) as messages (technically you can as long as you won't mutate them). Another thing is that sending of messages is asynchronous. This means that the targetStream.close() is called before the other actor receives the message. That is probably the reason why you are getting truncated attachments.
One thing that you could do is send the data instead of an InputStream. Something like
def receive = {
  case request: AttachmentRequest => {
    log.info(" inside Attachment RequestConsumer with document Id:" + request.documentId)
    val req: HttpRequest = Http(url)
    val response = req.asBytes
    val data = ByteSource.wrap(response.body).read.toVector
    log.info("response body :" + response.body)
    sender ! data
  }
}
That is if you can comfortably fit the contents of the attachment into memory. If that is not the case, you can try to break it into chunks or something.
On a side note, you should not be blocking in receive (the Await.result). A better approach would be to just send a message to AttachmentRequestConsumer and then expect a message of type Seq[Byte] (or even better some wrapper like AttachmentResponse) back in SendgridConsumer's receive.
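A rough sketch of that shape, assuming a hypothetical AttachmentResponse(data: Array[Byte]) reply message and a postData that accepts raw bytes instead of an InputStream:

// Sketch (AttachmentResponse is a hypothetical wrapper): map over the ask future
// instead of blocking on it with Await.result.
import akka.pattern.ask
import scala.concurrent.ExecutionContext.Implicits.global

(attachmentRef ? AttachmentRequest(request.documentId.get))
  .mapTo[AttachmentResponse]
  .map { attachment =>
    request.emailContacts.map { emailContact =>
      val email = postData(new Email(), request, emailContact, attachment.data, request.documentName.get)
      service.send(email)
    }
  }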

waiting for ws future response in play framework

I am trying to build a service that grabs some pages from another web service, processes the content, and returns the results to users. I am using Play 2.2.3 with Scala.
val aas = WS.url("http://localhost/")
  .withRequestTimeout(1000)
  .withQueryString(("mid", mid), ("t", txt))
  .get

val result = aas.map { response =>
  (response.json \ "status").asOpt[Int].map { st =>
    status = st
  }
  (response.json \ "msg").asOpt[String].map { txt =>
    msg = txt
  }
}

val rs1 = Await.result(result, 5 seconds)
if (rs1.isDefined) {
  Ok("good")
}
The problem is that the service will wait 5 seconds before returning "good" even if the WS request takes 100 ms. I also cannot set the Await time to 100 ms, because the other web service I am calling may take anywhere between 100 ms and 1 second to respond.
My question is: is there a way to process and serve the results as soon as they are ready, instead of waiting a fixed amount of time?
#wingedsubmariner already provided the answer. Since there is no code example, I will just post what it should be:
def wb = Action.async { request =>
  val aas = WS.url("http://localhost/").withRequestTimeout(1000).get
  aas.map { response =>
    Ok("responded")
  }
}
Now you don't need to wait for the WS call to respond and then decide what to do; you just tell Play what to do when the response arrives.
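If you still need the status/msg extraction from the question, the same non-blocking shape applies. A sketch, with the query string and JSON field names taken from the question; the error result is an assumption:

// Sketch: decide the HTTP result inside the mapped future instead of Await-ing it.
def wb = Action.async { request =>
  WS.url("http://localhost/")
    .withRequestTimeout(1000)
    .withQueryString(("mid", mid), ("t", txt))
    .get
    .map { response =>
      val status = (response.json \ "status").asOpt[Int]
      val msg = (response.json \ "msg").asOpt[String]
      if (status.isDefined) Ok("good") else BadGateway(msg.getOrElse("upstream error"))
    }
}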

Play Framework WebSocket Async

I'm using a WebSocket endpoint exposed by my Play Framework controller. My client will, however, send a large byte array, and I'm a bit confused about how to handle this in my Iteratee. Here is what I have:
def myWSEndPoint(f: String => String) = WebSocket.async[Array[Byte]] { request =>
  Akka.future {
    val (out, chan) = Concurrent.broadcast[Array[Byte]]
    val in: Iteratee[Array[Byte], Unit] = Iteratee.foreach[Array[Byte]] {
      // How do I get the entire file?
    }
    (null, null)
  }
}
As can be seen in the code above, I'm stuck on how to handle the byte array as one request and send the response back as a String. My confusion is with the Iteratee.foreach call: is this a foreach over the byte array itself, or over the entire content of the request that I send as a byte array from my client? It is confusing!
Any suggestions?
Well... It depends. Is your client sending all the binary data at once, or explicitly chunk by chunk?
-> If it's all at once, then everything will be in the first chunk (in which case, why a WebSocket? Why an Iteratee? An Action with a BodyParser will probably be more efficient for that).
-> If it's chunk by chunk, you have to keep every chunk you receive and concatenate them on close (on close, unless you have another way for the client to say: "Hey, I'm done!"). A sketch of this case follows below.
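A minimal sketch of the chunk-by-chunk case, assuming the client signals completion by closing its side of the socket; pushing one transformed String back through the broadcast channel is an assumption about what you want to do with the whole payload:

// Sketch: accumulate every chunk with Iteratee.fold; the folded result (the whole
// byte array) only becomes available once the client finishes sending.
def myWSEndPoint(f: String => String) = WebSocket.async[Array[Byte]] { request =>
  Akka.future {
    val (out, chan) = Concurrent.broadcast[Array[Byte]]
    val in: Iteratee[Array[Byte], Unit] =
      Iteratee.fold(Array[Byte]())((acc, chunk: Array[Byte]) => acc ++ chunk)
        .map(whole => chan.push(f(new String(whole, "UTF-8")).getBytes("UTF-8")))
    (in, out)
  }
}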