Gatling ConnectException: connection timed out (Scala)

I'm new to Gatling and am currently experimenting with it against its demo site, computer-database.gatling.io.
Everything works fine, and I'm going to run a load test against my project at work in a few days, but there is one problem.
I need 2,000 users per second for my website (which doesn't seem to be a big deal for Gatling, judging by other resources I've read).
I have this simple simulation, in which I want to inject 250 users/admins per second for 300 seconds. Now for the issues and questions (please help me figure this out).
When I start this script:
package load

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import io.gatling.core.structure.ChainBuilder
import io.gatling.jdbc.Predef.jdbcFeeder
import scala.util.Random
import scala.concurrent.duration._

class TestOvo extends Simulation {

  val httpProtocol = http
    .baseURL("http://computer-database.gatling.io")
    .acceptHeader("text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8")
    .acceptEncodingHeader("gzip, deflate")
    .acceptLanguageHeader("en-US,en;q=0.8")
    .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36")

  val headers_0 = Map("Upgrade-Insecure-Requests" -> "1")

  val headers_10 = Map(
    "Origin" -> "http://computer-database.gatling.io",
    "Upgrade-Insecure-Requests" -> "1")

  object Search {
    val search = exec(http("request_0")
      .get("/")
      .headers(headers_0))
      .pause(1)
      .exec(http("request_1")
        .get("/computers?f=macbook")
        .headers(headers_0))
      .pause(2)
      .exec(http("request_2")
        .get("/computers/89")
        .headers(headers_0))
      .pause(2)
      .exec(http("request_3")
        .get("/")
        .headers(headers_0))
      .pause(2)
  }

  object Browse {
    val browse = exec(http("request_4")
      .get("/computers?p=1")
      .headers(headers_0))
      .pause(1)
      .exec(http("request_5")
        .get("/computers?p=2")
        .headers(headers_0))
      .pause(1)
      .exec(http("request_6")
        .get("/computers?p=3")
        .headers(headers_0))
      .pause(2)
      .exec(http("request_7")
        .get("/computers?p=4")
        .headers(headers_0)
        .resources(http("request_8")
          .get("/computers?p=5")
          .headers(headers_0)))
      .pause(1)
  }

  object Edit {
    val edit = exec(http("request_9")
      .get("/computers/new")
      .headers(headers_0))
      .pause(1)
      .exec(http("request_10")
        .post("/computers")
        .headers(headers_10)
        .formParam("name", "VoxooBox")
        .formParam("introduced", "12.11.2017")
        .formParam("discontinued", "")
        .formParam("company", "16")
        .check(status.is(400)))
      .pause(1)
      .exec(http("request_11")
        .post("/computers")
        .headers(headers_10)
        .formParam("name", "VoxooBox")
        .formParam("introduced", "2017.08.17")
        .formParam("discontinued", "")
        .formParam("company", "16")
        .check(status.is(400)))
      .pause(1)
      .exec(http("request_12")
        .post("/computers")
        .headers(headers_10)
        .formParam("name", "VoxooBox")
        .formParam("introduced", "2017-08-17")
        .formParam("discontinued", "")
        .formParam("company", "16"))
  }

  val users = scenario("Users").exec(Search.search, Browse.browse)
  val admins = scenario("Admins").exec(Search.search, Browse.browse, Edit.edit)

  setUp(
    users.inject(constantUsersPerSec(200) during (300 seconds)),
    admins.inject(constantUsersPerSec(50) during (300 seconds))
  ).protocols(httpProtocol)
}
the users indicator gets no further than this:
================================================================================
2017-10-17 14:47:20 25s elapsed
---- Requests ------------------------------------------------------------------
> Global (OK=15950 KO=0 )
> request_0 (OK=5046 KO=0 )
> request_0 Redirect 1 (OK=3829 KO=0 )
> request_1 (OK=2582 KO=0 )
> request_2 (OK=1589 KO=0 )
> request_3 (OK=1249 KO=0 )
> request_3 Redirect 1 (OK=897 KO=0 )
> request_4 (OK=423 KO=0 )
> request_5 (OK=224 KO=0 )
> request_6 (OK=88 KO=0 )
> request_7 (OK=17 KO=0 )
> request_8 (OK=6 KO=0 )
---- Users ---------------------------------------------------------------------
[------ ] 0%
waiting: 55156 / active: 4842 / done:2
---- Admins --------------------------------------------------------------------
[------ ] 0%
waiting: 13788 / active: 1212 / done:0
================================================================================
14:47:21.107 [WARN ] i.g.h.a.ResponseProcessor - Request 'request_5' failed: j.n.ConnectException: connection timed out: computer-database.gatling.io/35.158.229.206:80
And then these kinds of errors start appearing:
[WARN ] i.g.h.a.ResponseProcessor - Request 'request_4' failed: j.n.ConnectException: connection timed out: computer-database.gatling.io/35.158.229.206:80
13:31:12.407 [WARN ] i.g.h.a.ResponseProcessor - Request 'request_4' failed: j.n.ConnectException: Failed to open a socket.
In gatling.conf I found these lines:
ahc {
#keepAlive = true # Allow pooling HTTP connections (keep-alive header automatically added)
#connectTimeout = 10000 # Timeout when establishing a connection
#handshakeTimeout = 10000 # Timeout when performing TLS hashshake
#pooledConnectionIdleTimeout = 60000 # Timeout when a connection stays unused in the pool
#readTimeout = 60000 # Timeout when a used connection stays idle
#maxRetry = 2 # Number of times that a request should be tried again
#requestTimeout = 60000 # Timeout of the requests
#acceptAnyCertificate = true # When set to true, doesn't validate SSL certificates
#httpClientCodecMaxInitialLineLength = 4096 # Maximum length of the initial line of the response (e.g. "HTTP/1.0 200 OK")
#httpClientCodecMaxHeaderSize = 8192 # Maximum size, in bytes, of each request's headers
#httpClientCodecMaxChunkSize = 8192 # Maximum length of the content or each chunk
#webSocketMaxFrameSize = 10240000 # Maximum frame payload size
I guess this may somehow be related to my issue, or am I doing something wrong? Is the problem on my side, or is it that their website simply can't take this load?
And why, when I launch
users.inject(constantUsersPerSec(200) during (300 seconds)),
admins.inject(constantUsersPerSec(50) during (300 seconds))
does the terminal show
waiting: 55156 / active: 4842 / done:2
waiting: 13788 / active: 1212 / done:0
Why are there so many? How is this calculated? Could someone please explain this to me?
Thanks.
My computer:
macOS 10.12.6
Processor: 2.6 GHz Intel Core i5
Memory: 8 GB 1600 MHz DDR3

constantUsersPerSec() controls how many users you inject per second, not the total number of users.
constantUsersPerSec(200) during (300 seconds) means 200 × 300 = 60,000 users. That matches: waiting: 55156 + active: 4842 + done: 2 = 60,000 users.
I think what you actually want is to inject users using a gentler strategy, for example: rampUsers(10) over (300 seconds)
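For illustration, a minimal sketch of a corrected setUp in the same Gatling 2.x style DSL used in the question (the counts are only examples, not a recommendation from the original answer):

// Gentler, hypothetical injection profile for the simulation above (Gatling 2.x DSL).
// rampUsers(n) starts n virtual users in total, spread evenly over the window,
// whereas constantUsersPerSec(n) starts n new users every single second.
setUp(
  users.inject(rampUsers(10) over (300 seconds)),   // 10 users total over 5 minutes
  admins.inject(rampUsers(10) over (300 seconds))
).protocols(httpProtocol)

// For comparison, the original profile creates:
//   constantUsersPerSec(200) during (300 seconds) -> 200 * 300 = 60,000 Users
//   constantUsersPerSec(50)  during (300 seconds) ->  50 * 300 = 15,000 Admins
// which is exactly what the waiting/active/done counters add up to.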

Related

Play ws request timeout

I am receiving an error
java.util.concurrent.TimeoutException: Read timeout to {url} after 120000 ms
In my settings I have set:
play.ws.timeout.request = 5 minutes
play.ws.timeout.idle = 5 minutes
But these still aren't working.
In my application.conf I define a timeout like this:
someRequestTimeout = "45 seconds"
and then in my code I'd write
wsClient
  .url("http://example.com")
  .withRequestTimeout(config.getDuration("someRequestTimeout"))
  .post(somePayload)
  .map { response => ??? }
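For context, here is a more self-contained sketch of the same idea, assuming Play 2.6+ with dependency injection; the class name, method, and config key are illustrative, not part of the original answer. Typesafe Config's getDuration returns a java.time.Duration while withRequestTimeout expects a Scala Duration, hence the explicit conversion, and .get() is used only to keep the sketch minimal (.post(somePayload) works the same way):

import javax.inject.Inject
import scala.concurrent.duration.{Duration, FiniteDuration}
import scala.concurrent.{ExecutionContext, Future}
import com.typesafe.config.Config
import play.api.libs.ws.WSClient

// Hypothetical wrapper class, for illustration only.
class TimeoutAwareClient @Inject()(wsClient: WSClient, config: Config)(implicit ec: ExecutionContext) {

  // Reads "someRequestTimeout = 45 seconds" from application.conf, as in the answer above.
  private val requestTimeout: FiniteDuration =
    Duration.fromNanos(config.getDuration("someRequestTimeout").toNanos)

  def callExample(): Future[Int] =
    wsClient
      .url("http://example.com")
      .withRequestTimeout(requestTimeout) // per-request timeout, independent of play.ws.timeout.*
      .get()
      .map(_.status)
}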

Performance issues with vert.x

When running tests with ApacheBench against a Vert.x application, we see that the response time increases as we increase the number of concurrent users.
D:\httpd-2.2.34-win64\Apache2\bin>ab -n 500 -c 1 -H "Authorization: 5" -H "Span_Id: 4" -H "Trace_Id: 1" -H "X-test-API-KEY: 6" http://localhost:8443/product_catalog/products/legacy/001~001~5ZP/terms
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests
Server Software:
Server Hostname: localhost
Server Port: 8443
Document Path: /product_catalog/products/legacy/001~001~5ZP/terms
Document Length: 319 bytes
Concurrency Level: 1
Time taken for tests: 12.366 seconds
Complete requests: 500
Failed requests: 0
Write errors: 0
Total transferred: 295094 bytes
HTML transferred: 159500 bytes
Requests per second: 40.43 [#/sec] (mean)
Time per request: 24.733 [ms] (mean)
Time per request: 24.733 [ms] (mean, across all concurrent requests)
Transfer rate: 23.30 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.5 1 3
Processing: 5 24 83.9 8 1293
Waiting: 5 23 83.9 8 1293
Total: 6 24 83.9 9 1293
Percentage of the requests served within a certain time (ms)
50% 9
66% 11
75% 13
80% 15
90% 29
95% 57
98% 238
99% 332
100% 1293 (longest request)
D:\httpd-2.2.34-win64\Apache2\bin>ab -n 500 -c 2 -H "Authorization: 5" -H "Span_Id: 4" -H "Trace_Id: 1" -H "X-test-API-KEY: 6" http://localhost:8443/product_catalog/products/legacy/001~001~5ZP/terms
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests
Server Software:
Server Hostname: localhost
Server Port: 8443
Document Path: /product_catalog/products/legacy/001~001~5ZP/terms
Document Length: 319 bytes
Concurrency Level: 2
Time taken for tests: 7.985 seconds
Complete requests: 500
Failed requests: 0
Write errors: 0
Total transferred: 295151 bytes
HTML transferred: 159500 bytes
Requests per second: 62.61 [#/sec] (mean)
Time per request: 31.941 [ms] (mean)
Time per request: 15.971 [ms] (mean, across all concurrent requests)
Transfer rate: 36.10 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.6 1 9
Processing: 6 30 71.5 12 720
Waiting: 5 30 71.4 12 720
Total: 7 31 71.5 13 721
Percentage of the requests served within a certain time (ms)
50% 13
66% 16
75% 21
80% 24
90% 53
95% 113
98% 246
99% 444
100% 721 (longest request)
D:\httpd-2.2.34-win64\Apache2\bin>ab -n 500 -c 3 -H "Authorization: 5" -H "Span_Id: 4" -H "Trace_Id: 1" -H "X-test-API-KEY: 6" http://localhost:8443/product_catalog/products/legacy/001~001~5ZP/terms
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests
Server Software:
Server Hostname: localhost
Server Port: 8443
Document Path: /product_catalog/products/legacy/001~001~5ZP/terms
Document Length: 319 bytes
Concurrency Level: 3
Time taken for tests: 7.148 seconds
Complete requests: 500
Failed requests: 0
Write errors: 0
Total transferred: 295335 bytes
HTML transferred: 159500 bytes
Requests per second: 69.95 [#/sec] (mean)
Time per request: 42.888 [ms] (mean)
Time per request: 14.296 [ms] (mean, across all concurrent requests)
Transfer rate: 40.35 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.6 1 5
Processing: 6 42 66.2 22 516
Waiting: 6 41 66.3 22 515
Total: 7 42 66.3 23 516
Percentage of the requests served within a certain time (ms)
50% 23
66% 31
75% 43
80% 51
90% 76
95% 128
98% 259
99% 430
100% 516 (longest request)
D:\httpd-2.2.34-win64\Apache2\bin>ab -n 500 -c 4 -H "Authorization: 5" -H "Span_Id: 4" -H "Trace_Id: 1" -H "X-test-API-KEY: 6" http://localhost:8443/product_catalog/products/legacy/001~001~5ZP/terms
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests
Server Software:
Server Hostname: localhost
Server Port: 8443
Document Path: /product_catalog/products/legacy/001~001~5ZP/terms
Document Length: 319 bytes
Concurrency Level: 4
Time taken for tests: 7.078 seconds
Complete requests: 500
Failed requests: 0
Write errors: 0
Total transferred: 295389 bytes
HTML transferred: 159500 bytes
Requests per second: 70.64 [#/sec] (mean)
Time per request: 56.623 [ms] (mean)
Time per request: 14.156 [ms] (mean, across all concurrent requests)
Transfer rate: 40.76 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.6 1 4
Processing: 8 55 112.8 22 1112
Waiting: 8 55 112.7 21 1111
Total: 9 56 112.8 22 1112
Percentage of the requests served within a certain time (ms)
50% 22
66% 31
75% 43
80% 59
90% 120
95% 213
98% 294
99% 387
100% 1112 (longest request)
Also, while the test is running, if we hit the API from another console, we see that its response time increases as well.
We have used the following code:
Router code:
router.route().handler(LoggerHandler.create(LoggerFormat.SHORT));
router.route().handler(ResponseTimeHandler.create());
router.route().handler(CorsHandler.create("*")
    .allowedMethod(io.vertx.core.http.HttpMethod.GET)
    .allowedMethod(io.vertx.core.http.HttpMethod.POST)
    .allowedMethod(io.vertx.core.http.HttpMethod.PUT)
    .allowedMethod(io.vertx.core.http.HttpMethod.DELETE)
    .allowedMethod(io.vertx.core.http.HttpMethod.OPTIONS)
    .allowedHeader("Access-Control-Request-Method")
    .allowedHeader("Access-Control-Allow-Credentials")
    .allowedHeader("Access-Control-Allow-Origin")
    .allowedHeader("Access-Control-Allow-Headers")
    .allowedHeader("Content-Type")
    .allowedHeader("Trace_Id")
    .allowedHeader("Span_Id")
    .allowedHeader("Authorization")
    .allowedHeader("X-test-API-KEY")
    .allowedHeader("Accept"));
router.get("/product_catalog/products/legacy/:id/terms").handler(this::getByProductCode);
Server Creation:
vertx
    .createHttpServer()
    .exceptionHandler(event -> {
        logger.error("{}", new GPCLogEvent(className, "global_exception_handler", event, new GPCEntry("global_exception", event.getMessage() != null ? event.getMessage() : "")));
    })
    .connectionHandler(handler -> {
        handler.exceptionHandler(event -> {
            logger.error("{}", new GPCLogEvent(className, "global_exception_connection_handler", event, new GPCEntry("global_exception", event.getMessage() != null ? event.getMessage() : "")));
        });
    })
    .websocketHandler(handler -> {
        handler.exceptionHandler(event -> {
            logger.error("{}", new GPCLogEvent(className, "global_exception_websocket_handler", event, new GPCEntry("global_exception", event.getMessage() != null ? event.getMessage() : "")));
        });
    })
    .requestHandler(handler -> {
        handler.exceptionHandler(event -> {
            logger.error("{}", new GPCLogEvent(className, "global_exception_request_handler", event, new GPCEntry("global_exception", event.getMessage() != null ? event.getMessage() : "")));
        });
    })
    .requestHandler(router)
    .listen(serverPort, result -> {
        if (result.succeeded()) {
            logger.info("{}",
                new GPCLogEvent(className, "start", new GPCEntry<>("Server started at port", serverPort)));
        } else {
            logger.error("{}", new GPCLogEvent(className, "start", result.cause(),
                new GPCEntry<>("Server failed to start at port", serverPort)));
        }
    });
Handler Method:
private void getByProductCode(RoutingContext routingContext) {
    LocalDateTime requestReceivedTime = LocalDateTime.now();
    ZonedDateTime nowUTC = ZonedDateTime.now(ZoneOffset.UTC);

    JsonObject jsonObject = commonUtilities.populateJsonObject(routingContext.request().headers());
    jsonObject.put("requestReceivedTime", requestReceivedTime.format(commonUtilities.getDateTimeFormatter()));
    jsonObject.put("path_param_id", routingContext.pathParam("id"));
    jsonObject.put("TRACING_ID", UUID.randomUUID().toString());

    long timeTakenInGettingRequestVar = 0;
    if (jsonObject.getString("requestSentTime") != null) {
        ZonedDateTime requestSentTime = LocalDateTime.parse(jsonObject.getString("requestSentTime"), commonUtilities.getDateTimeFormatter()).atZone(ZoneId.of("UTC"));
        timeTakenInGettingRequestVar = ChronoUnit.MILLIS.between(requestSentTime.toLocalDateTime(), nowUTC.toLocalDateTime());
    }
    final long timeTakenInGettingRequest = timeTakenInGettingRequestVar;

    vertx.eventBus().send(TermsVerticleGet.GET_BY_PRODUCT_CODE,
        jsonObject,
        result -> {
            if (result.succeeded()) {
                routingContext.response()
                    .putHeader("content-type", jsonObject.getString("Accept"))
                    .putHeader("TRACING_ID", jsonObject.getString("TRACING_ID"))
                    .putHeader("TRACE_ID", jsonObject.getString("TRACE_ID"))
                    .putHeader("SPAN_ID", jsonObject.getString("SPAN_ID"))
                    .putHeader("responseSentTime", LocalDateTime.now().format(commonUtilities.getDateTimeFormatter()))
                    .putHeader("timeTakenInGettingRequest", Long.toString(timeTakenInGettingRequest))
                    .putHeader("requestReceivedTime", nowUTC.toLocalDateTime().format(commonUtilities.getDateTimeFormatter()))
                    .putHeader("requestSentTime", jsonObject.getString("requestSentTime") != null ? jsonObject.getString("requestSentTime") : "")
                    .setStatusCode(200)
                    .end(result.result().body().toString());
                logger.info("OUT");
            } else {
                ReplyException cause = (ReplyException) result.cause();
                routingContext.response()
                    .putHeader("content-type", jsonObject.getString("Accept"))
                    .putHeader("TRACING_ID", jsonObject.getString("TRACING_ID"))
                    .putHeader("TRACE_ID", jsonObject.getString("TRACE_ID"))
                    .putHeader("SPAN_ID", jsonObject.getString("SPAN_ID"))
                    .putHeader("responseSentTime", LocalDateTime.now().format(commonUtilities.getDateTimeFormatter()))
                    .putHeader("timeTakenInGettingRequest", Long.toString(timeTakenInGettingRequest))
                    .putHeader("requestReceivedTime", nowUTC.toLocalDateTime().format(commonUtilities.getDateTimeFormatter()))
                    .putHeader("requestSentTime", jsonObject.getString("requestSentTime") != null ? jsonObject.getString("requestSentTime") : "")
                    .setStatusCode(cause.failureCode())
                    .end(cause.getMessage());
                logger.info("OUT");
            }
        });
}
Worker Verticle:
private void getByProductCode(Message<JsonObject> messageConsumer) {
LocalDateTime requestReceivedTime_handler = LocalDateTime.now();
ZonedDateTime nowUTC_handler = ZonedDateTime.now(ZoneOffset.UTC);
final String TRACING_ID = messageConsumer.body().getString("TRACING_ID");
final String TRACE_ID = !commonUtilities.validateNullEmpty(messageConsumer.body().getString("Trace_Id")) ? UUID.randomUUID().toString() : messageConsumer.body().getString("Trace_Id");
final String SPAN_ID = !commonUtilities.validateNullEmpty(messageConsumer.body().getString("Span_Id")) ? UUID.randomUUID().toString() : messageConsumer.body().getString("Span_Id");
logger.info("{}", new GPCLogEvent(className, "getByProductCode", new GPCEntry<>("IN", System.currentTimeMillis()), new GPCEntry<>("TRACING_ID", TRACING_ID), new GPCEntry<>("TRACE_ID", TRACE_ID), new GPCEntry<>("SPAN_ID", SPAN_ID)));
// Run the validation
logger.info("{}", new GPCLogEvent(className, "getByProductCode", new GPCEntry<>("validateCommonRequestHeader", true), new GPCEntry<>("IN", System.currentTimeMillis()), new GPCEntry<>("TRACING_ID", TRACING_ID), new GPCEntry<>("TRACE_ID", TRACE_ID), new GPCEntry<>("SPAN_ID", SPAN_ID)));
commonUtilities.validateCommonRequestHeader(messageConsumer.body());
logger.info("{}", new GPCLogEvent(className, "getByProductCode", new GPCEntry<>("validateCommonRequestHeader", true), new GPCEntry<>("OUT", System.currentTimeMillis()), new GPCEntry<>("TRACING_ID", TRACING_ID), new GPCEntry<>("TRACE_ID", TRACE_ID), new GPCEntry<>("SPAN_ID", SPAN_ID)));
// Product code - Mandatory
messageConsumer.body().put("product_codes", messageConsumer.body().getString("path_param_id"));
// Product code validation
if (commonUtilities.validateNullEmpty(messageConsumer.body().getString("product_codes"))) {
commonUtilities.checkProductIAORGLOGOPCTCode(messageConsumer.body(), TRACING_ID, TRACE_ID, SPAN_ID, false);
} else {
messageConsumer.body().getJsonArray("errors").add("id (path parameter) is mandatory field");
}
// There are validation errors
if (messageConsumer.body().getJsonArray("errors").size() > 0) {
messageConsumer.body().put("error_message", "Validation errors");
messageConsumer.body().put("developer_message", messageConsumer.body().getJsonArray("errors").toString());
messageConsumer.body().put("error_code", "400");
messageConsumer.body().put("more_information", "There are " + messageConsumer.body().getJsonArray("errors").size() + " validation errors");
messageConsumer.fail(400, Json.encode(commonUtilities.errors(messageConsumer.body(), TRACING_ID, TRACE_ID, SPAN_ID)));
logger.info("{}", new GPCLogEvent(className, "getByProductCode", new GPCEntry<>("OUT", System.currentTimeMillis()), new GPCEntry<>("TRACING_ID", TRACING_ID), new GPCEntry<>("TRACE_ID", TRACE_ID), new GPCEntry<>("SPAN_ID", SPAN_ID), new GPCEntry<>("TIME_TAKEN", ChronoUnit.MILLIS.between(requestReceivedTime_handler, LocalDateTime.now()))));
return;
}
Handler<AsyncResult<CreditCardTerms>> dataHandler = data -> {
if (data.succeeded()) {
logger.info("{}", new GPCLogEvent(className, "getByProductCode", new GPCEntry<>("Success", true), new GPCEntry<>("IN", System.currentTimeMillis()), new GPCEntry<>("TRACING_ID", TRACING_ID), new GPCEntry<>("TRACE_ID", TRACE_ID), new GPCEntry<>("SPAN_ID", SPAN_ID)));
/*routingContext.response()
.putHeader("content-type", messageConsumer.body().getString("Accept"))
.putHeader("TRACING_ID", TRACING_ID)
.putHeader("TRACE_ID", TRACE_ID)
.putHeader("SPAN_ID", SPAN_ID)
.putHeader("responseSentTime", ZonedDateTime.now(ZoneOffset.UTC).toLocalDateTime().format(commonUtilities.getDateTimeFormatter()))
.putHeader("timeTakenInGettingRequest", Long.toString(timeTakenInGettingRequest))
.putHeader("requestReceivedTime", nowUTC_handler.toLocalDateTime().format(commonUtilities.getDateTimeFormatter()))
.putHeader("requestSentTime", messageConsumer.body().getString("requestSentTime") != null ? messageConsumer.body().getString("requestSentTime") : "")
.setStatusCode(200)
.end(Json.encode(data.result()));
*/
messageConsumer.reply(Json.encode(data.result()));
logger.info("{}", new GPCLogEvent(className, "getByProductCode", new GPCEntry<>("OUT", System.currentTimeMillis()), new GPCEntry<>("TRACING_ID", TRACING_ID), new GPCEntry<>("TRACE_ID", TRACE_ID), new GPCEntry<>("SPAN_ID", SPAN_ID), new GPCEntry<>("TIME_TAKEN", ChronoUnit.MILLIS.between(requestReceivedTime_handler, LocalDateTime.now()))));
} else {
logger.info("{}", new GPCLogEvent(className, "getByProductCode", new GPCEntry<>("onError", true), new GPCEntry<>("TRACING_ID", TRACING_ID), new GPCEntry<>("TRACE_ID", TRACE_ID), new GPCEntry<>("SPAN_ID", SPAN_ID)));
if (data.cause() instanceof NoDocumentFoundException) {
messageConsumer.body().put("error_message", "Issue while fetching the details of the product");
messageConsumer.body().put("developer_message", messageConsumer.body().getJsonArray("errors").add(commonUtilities.getStackTrace(data.cause())).toString());
messageConsumer.body().put("error_code", "404");
messageConsumer.body().put("more_information", "Issue while fetching the details of the product");
//commonUtilities.errors(routingContext, messageConsumer.body(), TRACING_ID, TRACE_ID, SPAN_ID);
messageConsumer.fail(404, Json.encode(commonUtilities.errors(messageConsumer.body(), TRACING_ID, TRACE_ID, SPAN_ID)));
} else {
messageConsumer.body().put("error_message", "Internal Server Error: Issue while fetching the details of the product");
messageConsumer.body().put("developer_message", messageConsumer.body().getJsonArray("errors").add(commonUtilities.getStackTrace(data.cause())).toString());
messageConsumer.body().put("error_code", "500");
messageConsumer.body().put("more_information", "Internal Server Error: Issue while fetching the details of the product");
//commonUtilities.errors(routingContext, messageConsumer.body(), TRACING_ID, TRACE_ID, SPAN_ID);
messageConsumer.fail(500, Json.encode(commonUtilities.errors(messageConsumer.body(), TRACING_ID, TRACE_ID, SPAN_ID)));
}
logger.error("{}", new GPCLogEvent(className, "getByProductCode", data.cause(), new GPCEntry<>("OUT", System.currentTimeMillis()), new GPCEntry<>("TRACING_ID", TRACING_ID), new GPCEntry<>("TRACE_ID", TRACE_ID), new GPCEntry<>("SPAN_ID", SPAN_ID), new GPCEntry<>("TIME_TAKEN", ChronoUnit.MILLIS.between(requestReceivedTime_handler, LocalDateTime.now()))));
}
};
// Search based on product codes
gpcFlowable.getByGPID(TRACING_ID,
TRACE_ID,
SPAN_ID,
TermsConstant.DOCUMENT_KEY,
gpcFlowable.getByProductCode(TRACING_ID, TRACE_ID, SPAN_ID, messageConsumer.body().getString("product_codes"),
TermsConstant.API_VERSION_V1), // Get the GPID for the given IA or ORG~LOGO~PCT code
TermsConstant.API_VERSION_V1,
CreditCardTerms.class)
.subscribe(doc -> dataHandler.handle(Future.succeededFuture(doc)),
error -> dataHandler.handle(Future.failedFuture(error)));
}

Response 411 Length Required in Gatling

I'm getting started with Gatling. I'm getting a 411 status and don't understand why.
Response DefaultHttpResponse(decodeResult: success, version: HTTP/1.1)
HTTP/1.1 411 Length Required
Connection: close
Date: Tue, 13 Feb 2018 16:07:51 GMT
Server: Kestrel
Content-Length: 0
19:07:53.083 [gatling-http-thread-1-2] DEBUG org.asynchttpclient.netty.channel.ChannelManager - Closing Channel [id: 0x5f14313e, L:/10.8.1.89:52767 - R:blabla.com:5000]
19:07:53.107 [gatling-http-thread-1-2] INFO io.gatling.commons.validation.package$ - Boon failed to parse into a valid AST: -1
java.lang.ArrayIndexOutOfBoundsException: -1
...
19:07:53.111 [gatling-http-thread-1-2] WARN io.gatling.http.ahc.ResponseProcessor - Request 'HTTP Request createCompany' failed: status.find.is(200), but actually found 411
19:07:53.116 [gatling-http-thread-1-2] DEBUG io.gatling.http.ahc.ResponseProcessor -
My code:
package load

import io.gatling.core.scenario.Simulation
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class LoadScript extends Simulation {

  val httpConf = http
    .baseURL("http://blabla.com:5000")
    .authorizationHeader("Bearer 35dfd7a3c46f3f0bc7a2f06929399756029f47b9cc6d193ed638aeca1306d")
    .acceptHeader("application/json, text/plain,")
    .acceptEncodingHeader("gzip, deflate, br")
    .acceptLanguageHeader("ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7")
    .userAgentHeader("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36")

  val basicLoad = scenario("BASIC_LOAD").exec(BasicLoad.start)

  setUp(
    basicLoad.inject(rampUsers(1) over (1 minutes))
      .protocols(httpConf))
}

object BasicLoad {
  val start =
    exec(
      http("HTTP Request createCompany")
        .post("/Companies/CreateCompanyAndStartTransaction")
        .queryParam("inn", "7733897761")
        .queryParam("ogrn", "5147746205041")
        .check(status is 200, jsonPath("$.id").saveAs("idCompany"))
    )
}
When you are not sending a message body, you need to add
.header("Content-Length", "0") as a workaround.
I had a similar issue. I run my tests in two environments, and the difference is in the application infrastructure.
The tests pass on Amazon AWS but get HTTP 411 on Azure, so it looks like the issue is not in Gatling itself.
This issue has also been answered well by the Gatling team at the end of this thread:
https://groups.google.com/forum/#!topic/gatling/mAGzjzoMr1I
I've just upgraded Gatling from 2.3 to 3.0.2. They wrote their own HTTP client, and it now sends Content-Length: 0, except in the one case described in this bug:
https://github.com/gatling/gatling/issues/3648
So if you avoid using httpRequest() with the method type passed as a string, e.g.:
exec(http("empty POST test").httpRequest("POST","https://gatling.io/"))
and instead use post() as you do:
exec(
  http("HTTP Request createCompany")
    .post("/Companies/CreateCompanyAndStartTransaction")...
or
exec(
  http("HTTP Request createCompany")
    .httpRequest(HttpMethod.POST, "/Companies/CreateCompanyAndStartTransaction")
then upgrading to Gatling 3.0.2 is enough. Otherwise you need to wait for Gatling 3.0.3.

spray-routing with spray-can - Hand off to actor only works sometimes in my application

Sorry, this is kind of long, because I need to include various files.
Problem
I am not sure what is going on in my setup (using spray 1.3.3). I am trying to do file uploads using chunked requests, which seemed to work as expected once or twice, but for some reason, most of the time the actor never receives the chunks after the initial registration of the chunk handler. The request just disappears into oblivion, and my logs keep waiting the whole time. The two times out of fifty that it did work were when I ran it through the debugger, but even with the debugger it mostly doesn't work.
Based on various examples and discussions related to DemoService and FileUploadHandler, I am using spray-can to check the received HttpMessage for chunks, and at that point spawn off a separate route. I use curl with chunked encoding to test my output.
Please help! I have spent too many hours trying to get chunked requests mixed with routes working for my use case.
Code
Here is the code I have:
TestApp.scala
object TestApp extends App with GlobalConfig {
  implicit val system = ActorSystem("TestApp")
  implicit val ec = system.dispatcher

  val healthActor = system.actorOf(Props[HealthStateActor])

  val routes = new HealthcheckController(healthActor).route ~
    new ResourceController().route

  val requestRouter = system.actorOf(Props(new HttpRequestCustomHandler(routes)))

  IO(Http) ! Http.Bind(requestRouter, "0.0.0.0", HttpBindPort)
}
FileUploadActor.scala
class FileUploadActor(client: ActorRef, requestMetadata: RequestMetadata, request: HttpRequest, ctx: RequestContext)
  extends Actor with ActorLogging with GlobalConfig {

  import request._

  var bytesWritten = 0L
  var bytes: Array[Byte] = "".getBytes

  // client ! CommandWrapper(SetRequestTimeout(Duration.Inf)) // cancel timeout

  def receive = {
    case c: MessageChunk =>
      log.info(s"Got ${c.data.length} bytes of chunked request $method $uri")
      bytes ++= c.data.toByteArray
      bytesWritten += c.data.length

    case e: ChunkedMessageEnd =>
      log.info(s"Got end of chunked request $method $uri. Writing $bytesWritten bytes for upload: $requestMetadata")
      Try(saveFile(requestMetadata)) match {
        case Success(_) => ctx.complete(HttpResponse(StatusCodes.Created, entity = "success"))
        case Failure(f) => f.printStackTrace(); ctx.complete(HttpResponse(StatusCodes.InternalServerError, entity = "failure"))
      }
      // client ! CommandWrapper(SetRequestTimeout(UploadRequestTimeout.seconds)) // reset timeout to original value
      context.stop(self)
  }
}
FileUploadService.scala
The RegisterChunkHandler message is the last step where I see the debugger stop at breakpoints and where the logs go quiet. When it does work, I can see the MessageChunk messages being received by FileUploadActor.
trait FileUploadService extends Directives {
  this: Actor with ActorLogging with GlobalConfig =>

  def chunkedRoute() = {
    path(resourceAPI / "upload" / "resource" / Segment) { resourceId =>
      put {
        detach() {
          ctx => {
            val request = ctx.request
            val client = sender()
            val handler = context.actorOf(Props(new FileUploadActor(client,
              RequestMetadata(....),
              request, ctx)))
            sender ! RegisterChunkHandler(handler)
          }
        }
      }
    }
  }
}
HttpRequestCustomHandler.scala
class HttpRequestCustomHandler(routes: Route, resourceProviderRef: ResourceProvider)
  extends HttpServiceActor
  with FileUploadService
  with ActorLogging
  with GlobalConfig {

  val normal = routes
  val chunked = chunkedRoute()

  def resourceProvider = resourceProviderRef

  val customReceive: Receive = {
    // clients get connected to self (singleton handler)
    case _: Http.Connected => sender ! Http.Register(self)
    case r: HttpRequest =>
      normal(RequestContext(r, sender(), r.uri.path).withDefaultSender(sender()))
    case s @ ChunkedRequestStart(HttpRequest(PUT, path, _, _, _)) =>
      chunked(RequestContext(s.request, sender(), s.request.uri.path).withDefaultSender(sender()))
  }

  override def receive: Receive = customReceive
}
HttpRequestHandler.scala
abstract class HttpRequestHandler(routes: Route) extends HttpServiceActor {
  override def receive: Receive = runRoute(routes)
}
application.conf:
spray.can.server {
  request-timeout = 20 s
  pipelining-limit = disabled
  reaping-cycle = infinite
  stats-support = off
  request-chunk-aggregation-limit = 0
  parsing.max-content-length = 100000000
  parsing.incoming-auto-chunking-threshold-size = 15000000
  chunkless-streaming = on
  verbose-error-messages = on
  verbose-error-logging = on
}
curl Success:
curl -vvv -X PUT -H "Content-Type: multipart/form-data" -d
'@/Users/abc/Documents/test.json'
'http://localhost:8180/upload/resource/02521081'
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8100 (#0)
> PUT /upload/resource/02521081 HTTP/1.1
> User-Agent: curl/7.37.1
> Host: localhost:8100
> Accept: */*
> Content-Type: multipart/form-data
> Content-Length: 82129103
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
< HTTP/1.1 201 Created
* Server spray-can/1.3.3 is not blacklisted
< Server: spray-can/1.3.3
< Date: Mon, 17 Aug 2015 07:45:58 GMT
< Content-Type: text/plain; charset=UTF-8
< Content-Length: 7
<
* Connection #0 to host localhost left intact
success
Failure with same curl (waits forever):
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8100 (#0)
> PUT /upload/resource/02521081 HTTP/1.1
> User-Agent: curl/7.37.1
> Host: localhost:8100
> Accept: */*
> Transfer-Encoding: chunked
> Content-Type: multipart/form-data
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
^C
Failure (waits forever):
resource 2015-08-17 01:33:09.374 [Resource] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
resource 2015-08-17 01:33:09.396 08:33:09.382UTC [Resource] DEBUG akka.event.EventStream main EventStream(akka://resource) - logger log1-Slf4jLogger started
resource 2015-08-17 01:33:09.404 08:33:09.383UTC [Resource] DEBUG akka.event.EventStream main EventStream(akka://resource) - Default Loggers started
resource 2015-08-17 01:33:10.160 08:33:10.159UTC [Resource] INFO spray.can.server.HttpListener Resource-akka.actor.default-dispatcher-4 akka://resource/user/IO-HTTP/listener-0 - Bound to /0.0.0.0:8100
Success (logs edited for clarity):
resource 2015-08-17 00:42:12.283 [Resource] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
resource 2015-08-17 00:42:12.295 07:42:12.290UTC [Resource] DEBUG akka.event.EventStream main EventStream(akka://resource) - logger log1-Slf4jLogger started
resource 2015-08-17 00:42:12.308 07:42:12.291UTC [Resource] DEBUG akka.event.EventStream main EventStream(akka://resource) - Default Loggers started
resource 2015-08-17 00:42:13.007 07:42:13.005UTC [Resource] INFO spray.can.server.HttpListener Resource-akka.actor.default-dispatcher-4 akka://resource/user/IO-HTTP/listener-0 - Bound to /0.0.0.0:8100
resource 2015-08-17 00:43:47.615 07:43:47.529UTC [Resource] DEBUG c.l.resource.actor.FileUploadActor Resource-akka.actor.default-dispatcher-7 akka://resource/user/$b/$b - Got 131072 bytes of chunked request PUT http://localhost:8100/resourcesvc/0.2/api/upload/resource/02521081-20e5-483a-929f-712a9e11d117/content/5adfb5-561d-4577-b6ad-c6f42eef98
resource 2015-08-17 00:43:49.220 07:43:49.204UTC [Resource] DEBUG c.l.resource.actor.FileUploadActor Resource-akka.actor.default-dispatcher-7 akka://resource/user/$b/$b - Got 131072 bytes of chunked request PUT http://localhost:8100/resourcesvc/0.2/api/upload/resource/02521081-20e5-483a-929f-712a9e11d117/content/5adfb5-561d-4577-b6ad-c6f42eef98
.
.
.
resource 2015-08-17 00:44:05.605 07:44:05.605UTC [Resource] DEBUG c.l.resource.actor.FileUploadActor Resource-akka.actor.default-dispatcher-7 akka://resource/user/$b/$b - Got 45263 bytes of chunked request PUT http://localhost:8100/resourcesvc/0.2/api/upload/resource/02521081-20e5-483a-929f-712a9e11d117/content/5adfb5-561d-4577-b6ad-c6f42eef98
resource 2015-08-17 00:44:05.633 07:44:05.633UTC [Resource] INFO c.l.resource.actor.FileUploadActor Resource-akka.actor.default-dispatcher-7 akka://resource/user/$b/$b - Got end of chunked request PUT http://localhost:8100/resourcesvc/0.2/api/upload/resource/02521081-20e5-483a-929f-712a9e11d117/content/5adfb5-561d-4577-b6ad-c6f42eef98. Writing 82129103 bytes for upload: RequestMetadata(...,multipart/form-data)
resource 2015-08-17 00:44:05.634 07:44:05.633UTC [Resource] DEBUG c.l.resource.actor.FileUploadActor Resource-akka.actor.default-dispatcher-7 akka://resource/user/$b/$b - actor is akka://resource/user/$b/$b, sender is Actor[akka://resource/temp/$a], client is Actor[akka://resource/temp/$a]
resource 2015-08-17 00:45:58.445 [Resource] DEBUG com.abc.resource.io.FileClient$ - UploadResult#109a69fb
resource 2015-08-17 00:45:58.445 [Resource] DEBUG com.abc.resource.io.FileClient$ - upload is done: true
Please let me know if you see anything weird. What is usually the reason an actor would vanish like this? Thanks in advance for your help!
UPDATE:
I added further logging and can see that the sending and receiving actors apparently both turn into deadLetters, even though that is clearly not the case in the log line just above. This happens when the RegisterChunkHandler message is sent to sender in FileUploadService.scala.
sender ! RegisterChunkHandler(handler)
Related log:
resource 2015-08-17 21:14:32.173 20:14:32.173UTC [Resource] DEBUG c.l.a.io.HttpRequestCustomHandler Resource-akka.actor.default-dispatcher-3 akka://Resource/user/httpcustomactor - sender is Actor[akka://Resource/temp/$a]
resource 2015-08-17 21:14:32.175 20:14:32.175UTC [Resource] INFO akka.actor.DeadLetterActorRef A4Resource-akka.actor.default-dispatcher-6 akka://A4Resource/deadLetters - Message [spray.can.Http$RegisterChunkHandler] from Actor[akka://A4Resource/user/httpcustomactor#-1286373908] to Actor[akka://A4Resource/deadLetters] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
resource 2015-08-17 21:14:32.176 20:14:32.176UTC [Resource] DEBUG c.l.resource.actor.FileUploadActor Resource-akka.actor.default-dispatcher-7 akka://Resource/user/httpcustomactor/$a - pre-start
Any idea how this can be avoided?

Cherrypy _cp_dispatch strange behaviour with url without trailing slash: POST then GET

I am testing CherryPy with _cp_dispatch.
However, when I send a single POST, _cp_dispatch is called twice, not once: first for the expected POST, then a second time with a GET. Why?
The code:
import os
import cherrypy


class WebServerApp:
    def __init__(self):
        self.index_count = 0
        self.cpdispatch_count = 0

    def __del__(self):
        self.exit()

    def _cp_dispatch(self, vpath):
        self.cpdispatch_count += 1
        cherrypy.log.error('_cp_dispatch: ' + str(vpath) + ' - index count:' + str(self.cpdispatch_count))
        if len(vpath) == 0:
            return self
        if len(vpath) == 2:
            vpath.pop(0)
            cherrypy.request.params['id'] = vpath.pop(0)
            return self
        return vpath

    @cherrypy.expose
    def index(self, **params):
        try:
            self.index_count += 1
            cherrypy.log.error('Index: received params' + str(params) + ' - index count:' + str(self.index_count))
        except Exception as e:
            cherrypy.log.error(e.message)

    def exit(self):
        cherrypy.log.error('Exiting')
    exit.exposed = True


ws_conf = os.path.join(os.path.dirname(__file__), 'verybasicwebserver.conf')

if __name__ == '__main__':
    cherrypy.quickstart(WebServerApp(), config=ws_conf)
The config file:
[global]
server.socket_host = "127.0.0.1"
server.socket_port = 1025
server.thread_pool = 10
log.screen = True
log.access_file = "/Users/antoinebrunel/src/Rankings/log/cherrypy_access.log"
log.error_file = "/Users/antoinebrunel/src/Rankings/log/cherrypy_error.log"
The POST, sent with requests:
r = requests.post("http://127.0.0.1:1025/id/12345")
The logs show that _cp_dispatch is called three times: once at startup and twice for the POST:
pydev debugger: starting (pid: 5744)
[30/Sep/2014:19:16:29] ENGINE Listening for SIGUSR1.
[30/Sep/2014:19:16:29] ENGINE Listening for SIGHUP.
[30/Sep/2014:19:16:29] ENGINE Listening for SIGTERM.
[30/Sep/2014:19:16:29] ENGINE Bus STARTING
[30/Sep/2014:19:16:29] _cp_dispatch: ['global', 'dummy.html'] - _cp_dispatch count:1
[30/Sep/2014:19:16:29] ENGINE Started monitor thread '_TimeoutMonitor'.
[30/Sep/2014:19:16:29] ENGINE Started monitor thread 'Autoreloader'.
[30/Sep/2014:19:16:29] ENGINE Serving on http://127.0.0.1:1025
[30/Sep/2014:19:16:29] ENGINE Bus STARTED
[30/Sep/2014:19:16:34] _cp_dispatch: ['id', '12345'] - _cp_dispatch count:2
127.0.0.1 - - [30/Sep/2014:19:16:34] "POST /id/12345 HTTP/1.1" 301 117 "" "python-requests/2.4.0 CPython/3.4.1 Darwin/13.3.0"
[30/Sep/2014:19:16:34] _cp_dispatch: ['id', '12345'] - _cp_dispatch count:3
[30/Sep/2014:19:16:34] Index: received params{'id': '12345'} - index count:1
127.0.0.1 - - [30/Sep/2014:19:16:34] "GET /id/12345/ HTTP/1.1" 200 - "" "python-requests/2.4.0 CPython/3.4.1 Darwin/13.3.0"
Any idea why _cp_dispatch is called twice for a single POST?
-- EDIT
I suspect some internal 301 redirect is going on, since one appears in the log. In CherryPy, an internal redirect occurs when the URL does not end with a trailing slash:
https://cherrypy.readthedocs.org/en/3.3.0/refman/_cprequest.html#cherrypy._cprequest.Request.is_index
There are two ways to resolve the "problem":
The first is obviously posting to http://example.com/id/12345/
The second is adding the following to the configuration file (see the config sketch below):
tools.trailing_slash.on = False
https://cherrypy.readthedocs.org/en/3.2.6/concepts/config.html
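For illustration, the asker's verybasicwebserver.conf with that second option applied might look like this (a sketch only; the setting can live in [global] as shown, or in a per-path section such as [/]):

[global]
server.socket_host = "127.0.0.1"
server.socket_port = 1025
server.thread_pool = 10
tools.trailing_slash.on = False
log.screen = True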