GlusterFS rebalance - CentOS

I have a GlusterFS setup with a distributed-dispersed volume, Gluster version 9.5, with 180 bricks across 15 servers (12 bricks per server).
Number of Bricks: 30 x (4 + 2) = 180.
The problem: after I added the last 4 servers, no data was transferred to them, and when I issue:
gluster volume rebalance [Volume-Name] start
or
gluster volume rebalance [Volume-Name] start force
nothing happens: there is no error and no data is transferred to the new bricks. The rebalance just completes with nothing having happened.
Output of the gluster volume info command:
Volume Name: [vol-name]
Type: Distributed-Disperse
Volume ID: id
Status: Started
Snapshot Count: 0
Number of Bricks: 30 x (4 + 2) = 180
Transport-type: tcp
Bricks:
Brick1: gfs12:/1/brick_a
Brick2: gfs8:/a1/brick
Brick3: gfs2:/1/brick
Brick4: gfs12:/2/brick_b
Brick5: gfs8:/a2/brick
Brick6: gfs2:/2/brick
Brick7: gfs12:/3/brick_c
Brick8: gfs8:/a3/brick
Brick9: gfs2:/3/brick
Brick10: gfs12:/4/brick_d
Brick11: gfs8:/a4/brick
Brick12: gfs2:/4/brick
Brick13: gfs12:/5/brick_e
Brick14: gfs8:/a5/brick
Brick15: gfs2:/5/brick
Brick16: gfs12:/6/brick_f
Brick17: gfs8:/a6/brick
Brick18: gfs2:/6/brick
Brick19: gfs12:/7/brick_g
Brick20: gfs8:/a7/brick
Brick21: gfs2:/7/brick
Brick22: gfs12:/8/brick_h
Brick23: gfs8:/a8/brick
Brick24: gfs2:/8/brick
Brick25: gfs12:/9/brick_i
Brick26: gfs8:/a9/brick
Brick27: gfs2:/9/brick
Brick28: gfs12:/10/brick_j
Brick29: gfs8:/a10/brick
Brick30: gfs2:/10/brick
Brick31: gfs12:/11/brick_k
Brick32: gfs8:/a11/brick
Brick33: gfs2:/11/brick
Brick34: gfs12:/12/brick_l
Brick35: gfs8:/a12/brick
Brick36: gfs2:/12/brick
Brick37: gfs3:/1/brick
Brick38: gfs4:/1/brick
Brick39: gfs11:/1/brick
Brick40: gfs3:/2/brick
Brick41: gfs4:/2/brick
Brick42: gfs11:/2/brick
Brick43: gfs3:/3/brick
Brick44: gfs4:/3/brick
Brick45: gfs11:/3/brick
Brick46: gfs3:/4/brick
Brick47: gfs4:/4/brick
Brick48: gfs11:/4/brick
Brick49: gfs3:/5/brick
Brick50: gfs4:/5/brick
Brick51: gfs11:/5/brick
Brick52: gfs3:/6/brick
Brick53: gfs4:/6/brick
Brick54: gfs11:/6/brick
Brick55: gfs3:/7/brick
Brick56: gfs4:/7/brick
Brick57: gfs11:/7/brick
Brick58: gfs3:/8/brick
Brick59: gfs4:/8/brick
Brick60: gfs11:/8/brick
Brick61: gfs3:/9/brick
Brick62: gfs4:/9/brick
Brick63: gfs11:/9/brick
Brick64: gfs3:/10/brick
Brick65: gfs4:/10/brick
Brick66: gfs11:/10/brick
Brick67: gfs3:/11/brick
Brick68: gfs4:/11/brick
Brick69: gfs11:/11/brick
Brick70: gfs3:/12/brick
Brick71: gfs4:/12/brick
Brick72: gfs11:/12/brick
Brick73: gfs5:/1/brick_a
Brick74: gfs5:/2/brick_b
Brick75: gfs10:/1/brick_a
Brick76: gfs10:/2/brick_b
Brick77: gfs6:/1/brick_a
Brick78: gfs6:/2/brick_b
Brick79: gfs5:/3/brick_c
Brick80: gfs5:/4/brick_d
Brick81: gfs10:/3/brick_c
Brick82: gfs10:/4/brick_d
Brick83: gfs6:/3/brick_c
Brick84: gfs6:/4/brick_d
Brick85: gfs5:/5/brick_e
Brick86: gfs5:/6/brick_f
Brick87: gfs10:/5/brick_e
Brick88: gfs10:/6/brick_f
Brick89: gfs6:/5/brick_e
Brick90: gfs6:/6/brick_f
Brick91: gfs5:/7/brick_g
Brick92: gfs5:/8/brick_h
Brick93: gfs10:/7/brick_g
Brick94: gfs10:/8/brick_h
Brick95: gfs6:/7/brick_g
Brick96: gfs6:/8/brick_h
Brick97: gfs5:/9/brick_i
Brick98: gfs5:/10/brick_j
Brick99: gfs10:/9/brick_i
Brick100: gfs10:/10/brick_j
Brick101: gfs6:/9/brick_i
Brick102: gfs6:/10/brick_j
Brick103: gfs5:/11/brick_k
Brick104: gfs5:/12/brick_l
Brick105: gfs10:/11/brick_k
Brick106: gfs10:/12/brick_l
Brick107: gfs6:/11/brick_k
Brick108: gfs6:/12/brick_l
Brick109: gfs1:/1/brick
Brick110: gfs7:/1/brick
Brick111: gfs9:/1/brick
Brick112: gfs1:/2/brick
Brick113: gfs7:/2/brick
Brick114: gfs9:/2/brick
Brick115: gfs1:/3/brick
Brick116: gfs7:/3/brick
Brick117: gfs9:/3/brick
Brick118: gfs1:/4/brick
Brick119: gfs7:/4/brick
Brick120: gfs9:/4/brick
Brick121: gfs1:/5/brick
Brick122: gfs7:/5/brick
Brick123: gfs9:/5/brick
Brick124: gfs1:/6/brick
Brick125: gfs7:/6/brick
Brick126: gfs9:/6/brick
Brick127: gfs1:/7/brick
Brick128: gfs7:/7/brick
Brick129: gfs9:/7/brick
Brick130: gfs1:/8/brick
Brick131: gfs7:/8/brick
Brick132: gfs9:/8/brick
Brick133: gfs1:/9/brick
Brick134: gfs7:/9/brick
Brick135: gfs9:/9/brick
Brick136: gfs1:/10/brick
Brick137: gfs7:/10/brick
Brick138: gfs9:/10/brick
Brick139: gfs1:/11/brick
Brick140: gfs7:/11/brick
Brick141: gfs9:/11/brick
Brick142: gfs1:/12/brick
Brick143: gfs7:/12/brick
Brick144: gfs9:/12/brick
Brick145: gfs13:/1/brick
Brick146: gfs14:/1/brick
Brick147: gfs15:/1/brick
Brick148: gfs13:/2/brick
Brick149: gfs14:/2/brick
Brick150: gfs15:/2/brick
Brick151: gfs13:/3/brick
Brick152: gfs14:/3/brick
Brick153: gfs15:/3/brick
Brick154: gfs13:/4/brick
Brick155: gfs14:/4/brick
Brick156: gfs15:/4/brick
Brick157: gfs13:/5/brick
Brick158: gfs14:/5/brick
Brick159: gfs15:/5/brick
Brick160: gfs13:/6/brick
Brick161: gfs14:/6/brick
Brick162: gfs15:/6/brick
Brick163: gfs13:/7/brick
Brick164: gfs14:/7/brick
Brick165: gfs15:/7/brick
Brick166: gfs13:/8/brick
Brick167: gfs14:/8/brick
Brick168: gfs15:/8/brick
Brick169: gfs13:/9/brick
Brick170: gfs14:/9/brick
Brick171: gfs15:/9/brick
Brick172: gfs13:/10/brick
Brick173: gfs14:/10/brick
Brick174: gfs15:/10/brick
Brick175: gfs13:/11/brick
Brick176: gfs14:/11/brick
Brick177: gfs15:/11/brick
Brick178: gfs13:/12/brick
Brick179: gfs14:/12/brick
Brick180: gfs15:/12/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
server.event-threads: 12
client.event-threads: 12
performance.parallel-readdir: on
performance.cache-size: 10GB
performance.cache-max-file-size: 1024MB
performance.io-thread-count: 64
storage.build-pgfid: on
features.bitrot: on
features.scrub: Active
performance.stat-prefetch: on
features.scrub-throttle: aggressive
performance.client-io-threads: on
If anyone has an idea of how to troubleshoot and solve this issue, I would appreciate it.
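For reference, the standard first checks (a sketch, not a definitive fix; [Volume-Name] is the placeholder used above and the log path may vary by distribution) would be:
gluster volume rebalance [Volume-Name] status
gluster volume status [Volume-Name]
gluster peer status
gluster volume rebalance [Volume-Name] fix-layout start
less /var/log/glusterfs/[Volume-Name]-rebalance.log
The status output shows per-node scanned/rebalanced/failed/skipped counts, fix-layout only recalculates the DHT layout so that new files can land on the new bricks, and the per-node rebalance log is usually where a silently skipped rebalance explains itself.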

Related

Use a GPIO as chip select for SPI via ACPI overlay

I want to use a GPIO pin as a new chip select for SPI on an Up Squared board. The Up Squared uses an Intel Pentium N4200, so it's an x86 machine. I have managed to do this on a Raspberry Pi by using Device Tree overlays, but as this is an x86 machine I may have to use ACPI overlays.
The Up Squared has two SPI buses available, and they propose here to use ACPI overlays (this repo), which actually works very well. Below is one of the ASL files they use:
/*
* This ASL can be used to declare a spidev device on SPI0 CS0
*/
DefinitionBlock ("", "SSDT", 5, "INTEL", "SPIDEV0", 1)
{
External (_SB_.PCI0.SPI1, DeviceObj)
Scope (\_SB.PCI0.SPI1)
{
Device (TP0) {
Name (_HID, "SPT0001")
Name (_DDN, "SPI test device connected to CS0")
Name (_CRS, ResourceTemplate () {
SpiSerialBus (
0, // Chip select
PolarityLow, // Chip select is active low
FourWireMode, // Full duplex
8, // Bits per word is 8 (byte)
ControllerInitiated, // Don't care
1000000, // 1 MHz
ClockPolarityLow, // SPI mode 0
ClockPhaseFirst, // SPI mode 0
"\\_SB.PCI0.SPI1", // SPI host controller
0 // Must be 0
)
})
}
}
}
I compiled this file using
$ sudo iasl spidev1.0.asl > /dev/null
$ sudo mv spidev1.0.asl /lib/firmware/acpi-upgrades
$ sudo update-initramfs -u -k all
Then I reboot and I can see a device and can communicate through it.
up@up:~$ ls /dev/spi*
/dev/spidev1.0
Thus, I decided to write my own overlay based on the meta-acpi samples from Intel, and I wrote this:
/*
* This ASL can be used to declare a spidev device on SPI0 CS2
*/
DefinitionBlock ("", "SSDT", 5, "INTEL", "SPIDEV2", 1)
{
External (_SB_.PCI0.SPI1, DeviceObj)
External (_SB_.PCI0.GIP0.GPO, DeviceObj)
Scope (\_SB.PCI0.SPI1)
{
Name (_CRS, ResourceTemplate () {
GpioIo (Exclusive, PullUp, 0, 0, IoRestrictionOutputOnly,
"\\_SB.PCI0.GIP0.GPO", 0) {
22 // pin 22 is BCM25 or 402 in linux
}
})
Name (_DSD, Package() {
ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"),
Package () {
Package () { "compatible", "spidev" }, // not sure if this is needed
Package () {
"cs-gpios", Package () {
0,
0,
^SPI1, 0, 0, 0, // index 0 in _CRS -> pin 22
}
},
}
})
Device (TP2) {
Name (_HID, "SPT0001")
Name (_DDN, "SPI test device connected to CS2")
Name (_CRS, ResourceTemplate () {
SpiSerialBus (
2, // Chip select
PolarityLow, // Chip select is active low
FourWireMode, // Full duplex
8, // Bits per word is 8 (byte)
ControllerInitiated, // Don't care
1000000, // 1 MHz
ClockPolarityLow, // SPI mode 0
ClockPhaseFirst, // SPI mode 0
"\\_SB.PCI0.SPI1", // SPI host controller
0 // Must be 0
)
})
}
}
}
But I cannot see the new device. What am I missing?
Edit:
I have modified the code above with a version that actually works. I can now see a device at /dev/spidev1.2.
However, the CS on pin 22 is low all the time, which shouldn't be the case. Is the pin number correct? I'm using the pin numbering from here.
Edit 2:
Here is my kernel version:
Linux up 5.4.65-rt38+ #1 SMP PREEMPT_RT Mon Feb 28 13:42:31 CET 2022 x86_64 x86_64 x86_64 GNU/Linux
I compiled this UP Linux repository with the RT patch for the matching kernel version.
I also installed the upboard-extras package, and I'm actually able to communicate over SPI with the devices /dev/spidev1.0 and /dev/spidev1.1, so I think I have configured the Up Squared correctly.
There is no ngpio file under /sys/class/gpio:
up@up:~/aru$ ls /sys/class/gpio
export gpiochip0 gpiochip267 gpiochip310 gpiochip357 gpiochip434 unexport
I can set the GPIO to 1 or 0 and I can see the output on a multimeter, so I think I have the right permissions for the GPIO.
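For completeness, this is roughly how that toggle is done through sysfs (a sketch; 402 is the Linux GPIO number noted in the ASL comment above):
echo 402 | sudo tee /sys/class/gpio/export
echo out | sudo tee /sys/class/gpio/gpio402/direction
echo 1 | sudo tee /sys/class/gpio/gpio402/value
echo 0 | sudo tee /sys/class/gpio/gpio402/value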
Edit 3:
Please find at this link the .dat result from acpidump -o up2-tables.dat.
I assume that you are using this board. To be able to use the I/O pins (I2C, SPI, etc.), you need to enable them first. An easy way to check whether you have already enabled them is to type in a terminal:
uname -a
The output will look like:
Linux upxtreme-UP-WHL01 5.4.0-1-generic #2~upboard2-Ubuntu SMP Thu Jul 25 13:35:27 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Here, the #2~upboard2-Ubuntu part can differ according to your board type. If you don't see a similar result, then you haven't configured your board yet. Another way to check is to go to the folder /sys/class/gpio and look at the ngpio file; it should contain 28.
To use any of the I/O pins (I2C, SPI, etc.), you don't need to change anything on the BIOS side, because they come enabled by default.
Please check the UP wiki page and update your board's kernel after the Linux installation; then your I/O configuration will be enabled. See the UP wiki main page.
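As a side note on that check: ngpio actually sits under each gpiochip directory rather than directly in /sys/class/gpio, so one way to inspect all of them at once (a suggestion, not from the original answer) is:
grep . /sys/class/gpio/gpiochip*/ngpio
On a correctly configured board, one of the chips should report the 28 mentioned above.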

Performance issues with vert.x

When running a test with ApacheBench against a Vert.x application, we are seeing that the response time increases as we increase the number of concurrent users.
D:\httpd-2.2.34-win64\Apache2\bin>ab -n 500 -c 1 -H "Authorization: 5" -H "Span_Id: 4" -H "Trace_Id: 1" -H "X-test-API-KEY: 6" http://localhost:8443/product_catalog/products/legacy/001~001~5ZP/terms
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests
Server Software:
Server Hostname: localhost
Server Port: 8443
Document Path: /product_catalog/products/legacy/001~001~5ZP/terms
Document Length: 319 bytes
Concurrency Level: 1
Time taken for tests: 12.366 seconds
Complete requests: 500
Failed requests: 0
Write errors: 0
Total transferred: 295094 bytes
HTML transferred: 159500 bytes
Requests per second: 40.43 [#/sec] (mean)
Time per request: 24.733 [ms] (mean)
Time per request: 24.733 [ms] (mean, across all concurrent requests)
Transfer rate: 23.30 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.5 1 3
Processing: 5 24 83.9 8 1293
Waiting: 5 23 83.9 8 1293
Total: 6 24 83.9 9 1293
Percentage of the requests served within a certain time (ms)
50% 9
66% 11
75% 13
80% 15
90% 29
95% 57
98% 238
99% 332
100% 1293 (longest request)
D:\httpd-2.2.34-win64\Apache2\bin>ab -n 500 -c 2 -H "Authorization: 5" -H "Span_Id: 4" -H "Trace_Id: 1" -H "X-test-API-KEY: 6" http://localhost:8443/product_catalog/products/legacy/001~001~5ZP/terms
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests
Server Software:
Server Hostname: localhost
Server Port: 8443
Document Path: /product_catalog/products/legacy/001~001~5ZP/terms
Document Length: 319 bytes
Concurrency Level: 2
Time taken for tests: 7.985 seconds
Complete requests: 500
Failed requests: 0
Write errors: 0
Total transferred: 295151 bytes
HTML transferred: 159500 bytes
Requests per second: 62.61 [#/sec] (mean)
Time per request: 31.941 [ms] (mean)
Time per request: 15.971 [ms] (mean, across all concurrent requests)
Transfer rate: 36.10 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.6 1 9
Processing: 6 30 71.5 12 720
Waiting: 5 30 71.4 12 720
Total: 7 31 71.5 13 721
Percentage of the requests served within a certain time (ms)
50% 13
66% 16
75% 21
80% 24
90% 53
95% 113
98% 246
99% 444
100% 721 (longest request)
D:\httpd-2.2.34-win64\Apache2\bin>ab -n 500 -c 3 -H "Authorization: 5" -H "Span_Id: 4" -H "Trace_Id: 1" -H "X-test-API-KEY: 6" http://localhost:8443/product_catalog/products/legacy/001~001~5ZP/terms
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests
Server Software:
Server Hostname: localhost
Server Port: 8443
Document Path: /product_catalog/products/legacy/001~001~5ZP/terms
Document Length: 319 bytes
Concurrency Level: 3
Time taken for tests: 7.148 seconds
Complete requests: 500
Failed requests: 0
Write errors: 0
Total transferred: 295335 bytes
HTML transferred: 159500 bytes
Requests per second: 69.95 [#/sec] (mean)
Time per request: 42.888 [ms] (mean)
Time per request: 14.296 [ms] (mean, across all concurrent requests)
Transfer rate: 40.35 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.6 1 5
Processing: 6 42 66.2 22 516
Waiting: 6 41 66.3 22 515
Total: 7 42 66.3 23 516
Percentage of the requests served within a certain time (ms)
50% 23
66% 31
75% 43
80% 51
90% 76
95% 128
98% 259
99% 430
100% 516 (longest request)
D:\httpd-2.2.34-win64\Apache2\bin>ab -n 500 -c 4 -H "Authorization: 5" -H "Span_Id: 4" -H "Trace_Id: 1" -H "X-test-API-KEY: 6" http://localhost:8443/product_catalog/products/legacy/001~001~5ZP/terms
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests
Server Software:
Server Hostname: localhost
Server Port: 8443
Document Path: /product_catalog/products/legacy/001~001~5ZP/terms
Document Length: 319 bytes
Concurrency Level: 4
Time taken for tests: 7.078 seconds
Complete requests: 500
Failed requests: 0
Write errors: 0
Total transferred: 295389 bytes
HTML transferred: 159500 bytes
Requests per second: 70.64 [#/sec] (mean)
Time per request: 56.623 [ms] (mean)
Time per request: 14.156 [ms] (mean, across all concurrent requests)
Transfer rate: 40.76 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.6 1 4
Processing: 8 55 112.8 22 1112
Waiting: 8 55 112.7 21 1111
Total: 9 56 112.8 22 1112
Percentage of the requests served within a certain time (ms)
50% 22
66% 31
75% 43
80% 59
90% 120
95% 213
98% 294
99% 387
100% 1112 (longest request)
Also, while the test is running, if we hit the API from another console, we see that the response time increases for that request as well.
We have used the following code:
Router code:
router.route().handler(LoggerHandler.create(LoggerFormat.SHORT));
router.route().handler(ResponseTimeHandler.create());
router.route().handler(CorsHandler.create("*")
.allowedMethod(io.vertx.core.http.HttpMethod.GET)
.allowedMethod(io.vertx.core.http.HttpMethod.POST)
.allowedMethod(io.vertx.core.http.HttpMethod.PUT)
.allowedMethod(io.vertx.core.http.HttpMethod.DELETE)
.allowedMethod(io.vertx.core.http.HttpMethod.OPTIONS)
.allowedHeader("Access-Control-Request-Method")
.allowedHeader("Access-Control-Allow-Credentials")
.allowedHeader("Access-Control-Allow-Origin")
.allowedHeader("Access-Control-Allow-Headers")
.allowedHeader("Content-Type")
.allowedHeader("Trace_Id")
.allowedHeader("Span_Id")
.allowedHeader("Authorization")
.allowedHeader("X-test-API-KEY")
.allowedHeader("Accept"));
router.get("/product_catalog/products/legacy/:id/terms").handler(this::getByProductCode);
Server Creation:
vertx
.createHttpServer()
.exceptionHandler(event -> {
logger.error("{}", new GPCLogEvent(className, "global_exception_handler", event, new GPCEntry("global_exception", event.getMessage() != null ? event.getMessage() : "")));
})
.connectionHandler(handler -> {
handler.exceptionHandler(event -> {
logger.error("{}", new GPCLogEvent(className, "global_exception_connection_handler", event, new GPCEntry("global_exception", event.getMessage() != null ? event.getMessage() : "")));
});
})
.websocketHandler(handler -> {
handler.exceptionHandler(event -> {
logger.error("{}", new GPCLogEvent(className, "global_exception_websocket_handler", event, new GPCEntry("global_exception", event.getMessage() != null ? event.getMessage() : "")));
});
})
.requestHandler(handler -> {
handler.exceptionHandler(event -> {
logger.error("{}", new GPCLogEvent(className, "global_exception_request_handler", event, new GPCEntry("global_exception", event.getMessage() != null ? event.getMessage() : "")));
});
})
.requestHandler(router)
.listen(serverPort, result -> {
if (result.succeeded()) {
logger.info("{}",
new GPCLogEvent(className, "start", new GPCEntry<>("Server started at port", serverPort)));
} else {
logger.error("{}", new GPCLogEvent(className, "start", result.cause(),
new GPCEntry<>("Server failed to start at port", serverPort)));
}
});
Handler Method:
private void getByProductCode(RoutingContext routingContext) {
LocalDateTime requestReceivedTime = LocalDateTime.now();
ZonedDateTime nowUTC = ZonedDateTime.now(ZoneOffset.UTC);
JsonObject jsonObject = commonUtilities.populateJsonObject(routingContext.request().headers());
jsonObject.put("requestReceivedTime", requestReceivedTime.format(commonUtilities.getDateTimeFormatter()));
jsonObject.put("path_param_id", routingContext.pathParam("id"));
jsonObject.put("TRACING_ID",UUID.randomUUID().toString());
long timeTakenInGettingRequestVar = 0;
if (jsonObject.getString("requestSentTime") != null) {
ZonedDateTime requestSentTime = LocalDateTime.parse(jsonObject.getString("requestSentTime"), commonUtilities.getDateTimeFormatter()).atZone(ZoneId.of("UTC"));
timeTakenInGettingRequestVar = ChronoUnit.MILLIS.between(requestSentTime.toLocalDateTime(), nowUTC.toLocalDateTime());
}
final long timeTakenInGettingRequest = timeTakenInGettingRequestVar;
vertx.eventBus().send(TermsVerticleGet.GET_BY_PRODUCT_CODE,
jsonObject,
result -> {
if (result.succeeded()) {
routingContext.response()
.putHeader("content-type", jsonObject.getString("Accept"))
.putHeader("TRACING_ID", jsonObject.getString("TRACING_ID"))
.putHeader("TRACE_ID", jsonObject.getString("TRACE_ID"))
.putHeader("SPAN_ID", jsonObject.getString("SPAN_ID"))
.putHeader("responseSentTime", LocalDateTime.now().format(commonUtilities.getDateTimeFormatter()))
.putHeader("timeTakenInGettingRequest", Long.toString(timeTakenInGettingRequest))
.putHeader("requestReceivedTime", nowUTC.toLocalDateTime().format(commonUtilities.getDateTimeFormatter()))
.putHeader("requestSentTime", jsonObject.getString("requestSentTime") != null ? jsonObject.getString("requestSentTime") : "")
.setStatusCode(200)
.end(result.result().body().toString())
;
logger.info("OUT");
} else {
ReplyException cause = (ReplyException) result.cause();
routingContext.response()
.putHeader("content-type", jsonObject.getString("Accept"))
.putHeader("TRACING_ID", jsonObject.getString("TRACING_ID"))
.putHeader("TRACE_ID", jsonObject.getString("TRACE_ID"))
.putHeader("SPAN_ID", jsonObject.getString("SPAN_ID"))
.putHeader("responseSentTime", LocalDateTime.now().format(commonUtilities.getDateTimeFormatter()))
.putHeader("timeTakenInGettingRequest", Long.toString(timeTakenInGettingRequest))
.putHeader("requestReceivedTime", nowUTC.toLocalDateTime().format(commonUtilities.getDateTimeFormatter()))
.putHeader("requestSentTime", jsonObject.getString("requestSentTime") != null ? jsonObject.getString("requestSentTime") : "")
.setStatusCode(cause.failureCode())
.end(cause.getMessage());
logger.info("OUT");
}
});
}
Worker Verticle:
private void getByProductCode(Message<JsonObject> messageConsumer) {
LocalDateTime requestReceivedTime_handler = LocalDateTime.now();
ZonedDateTime nowUTC_handler = ZonedDateTime.now(ZoneOffset.UTC);
final String TRACING_ID = messageConsumer.body().getString("TRACING_ID");
final String TRACE_ID = !commonUtilities.validateNullEmpty(messageConsumer.body().getString("Trace_Id")) ? UUID.randomUUID().toString() : messageConsumer.body().getString("Trace_Id");
final String SPAN_ID = !commonUtilities.validateNullEmpty(messageConsumer.body().getString("Span_Id")) ? UUID.randomUUID().toString() : messageConsumer.body().getString("Span_Id");
logger.info("{}", new GPCLogEvent(className, "getByProductCode", new GPCEntry<>("IN", System.currentTimeMillis()), new GPCEntry<>("TRACING_ID", TRACING_ID), new GPCEntry<>("TRACE_ID", TRACE_ID), new GPCEntry<>("SPAN_ID", SPAN_ID)));
// Run the validation
logger.info("{}", new GPCLogEvent(className, "getByProductCode", new GPCEntry<>("validateCommonRequestHeader", true), new GPCEntry<>("IN", System.currentTimeMillis()), new GPCEntry<>("TRACING_ID", TRACING_ID), new GPCEntry<>("TRACE_ID", TRACE_ID), new GPCEntry<>("SPAN_ID", SPAN_ID)));
commonUtilities.validateCommonRequestHeader(messageConsumer.body());
logger.info("{}", new GPCLogEvent(className, "getByProductCode", new GPCEntry<>("validateCommonRequestHeader", true), new GPCEntry<>("OUT", System.currentTimeMillis()), new GPCEntry<>("TRACING_ID", TRACING_ID), new GPCEntry<>("TRACE_ID", TRACE_ID), new GPCEntry<>("SPAN_ID", SPAN_ID)));
// Product code - Mandatory
messageConsumer.body().put("product_codes", messageConsumer.body().getString("path_param_id"));
// Product code validation
if (commonUtilities.validateNullEmpty(messageConsumer.body().getString("product_codes"))) {
commonUtilities.checkProductIAORGLOGOPCTCode(messageConsumer.body(), TRACING_ID, TRACE_ID, SPAN_ID, false);
} else {
messageConsumer.body().getJsonArray("errors").add("id (path parameter) is mandatory field");
}
// There are validation errors
if (messageConsumer.body().getJsonArray("errors").size() > 0) {
messageConsumer.body().put("error_message", "Validation errors");
messageConsumer.body().put("developer_message", messageConsumer.body().getJsonArray("errors").toString());
messageConsumer.body().put("error_code", "400");
messageConsumer.body().put("more_information", "There are " + messageConsumer.body().getJsonArray("errors").size() + " validation errors");
messageConsumer.fail(400, Json.encode(commonUtilities.errors(messageConsumer.body(), TRACING_ID, TRACE_ID, SPAN_ID)));
logger.info("{}", new GPCLogEvent(className, "getByProductCode", new GPCEntry<>("OUT", System.currentTimeMillis()), new GPCEntry<>("TRACING_ID", TRACING_ID), new GPCEntry<>("TRACE_ID", TRACE_ID), new GPCEntry<>("SPAN_ID", SPAN_ID), new GPCEntry<>("TIME_TAKEN", ChronoUnit.MILLIS.between(requestReceivedTime_handler, LocalDateTime.now()))));
return;
}
Handler<AsyncResult<CreditCardTerms>> dataHandler = data -> {
if (data.succeeded()) {
logger.info("{}", new GPCLogEvent(className, "getByProductCode", new GPCEntry<>("Success", true), new GPCEntry<>("IN", System.currentTimeMillis()), new GPCEntry<>("TRACING_ID", TRACING_ID), new GPCEntry<>("TRACE_ID", TRACE_ID), new GPCEntry<>("SPAN_ID", SPAN_ID)));
/*routingContext.response()
.putHeader("content-type", messageConsumer.body().getString("Accept"))
.putHeader("TRACING_ID", TRACING_ID)
.putHeader("TRACE_ID", TRACE_ID)
.putHeader("SPAN_ID", SPAN_ID)
.putHeader("responseSentTime", ZonedDateTime.now(ZoneOffset.UTC).toLocalDateTime().format(commonUtilities.getDateTimeFormatter()))
.putHeader("timeTakenInGettingRequest", Long.toString(timeTakenInGettingRequest))
.putHeader("requestReceivedTime", nowUTC_handler.toLocalDateTime().format(commonUtilities.getDateTimeFormatter()))
.putHeader("requestSentTime", messageConsumer.body().getString("requestSentTime") != null ? messageConsumer.body().getString("requestSentTime") : "")
.setStatusCode(200)
.end(Json.encode(data.result()));
*/
messageConsumer.reply(Json.encode(data.result()));
logger.info("{}", new GPCLogEvent(className, "getByProductCode", new GPCEntry<>("OUT", System.currentTimeMillis()), new GPCEntry<>("TRACING_ID", TRACING_ID), new GPCEntry<>("TRACE_ID", TRACE_ID), new GPCEntry<>("SPAN_ID", SPAN_ID), new GPCEntry<>("TIME_TAKEN", ChronoUnit.MILLIS.between(requestReceivedTime_handler, LocalDateTime.now()))));
} else {
logger.info("{}", new GPCLogEvent(className, "getByProductCode", new GPCEntry<>("onError", true), new GPCEntry<>("TRACING_ID", TRACING_ID), new GPCEntry<>("TRACE_ID", TRACE_ID), new GPCEntry<>("SPAN_ID", SPAN_ID)));
if (data.cause() instanceof NoDocumentFoundException) {
messageConsumer.body().put("error_message", "Issue while fetching the details of the product");
messageConsumer.body().put("developer_message", messageConsumer.body().getJsonArray("errors").add(commonUtilities.getStackTrace(data.cause())).toString());
messageConsumer.body().put("error_code", "404");
messageConsumer.body().put("more_information", "Issue while fetching the details of the product");
//commonUtilities.errors(routingContext, messageConsumer.body(), TRACING_ID, TRACE_ID, SPAN_ID);
messageConsumer.fail(404, Json.encode(commonUtilities.errors(messageConsumer.body(), TRACING_ID, TRACE_ID, SPAN_ID)));
} else {
messageConsumer.body().put("error_message", "Internal Server Error: Issue while fetching the details of the product");
messageConsumer.body().put("developer_message", messageConsumer.body().getJsonArray("errors").add(commonUtilities.getStackTrace(data.cause())).toString());
messageConsumer.body().put("error_code", "500");
messageConsumer.body().put("more_information", "Internal Server Error: Issue while fetching the details of the product");
//commonUtilities.errors(routingContext, messageConsumer.body(), TRACING_ID, TRACE_ID, SPAN_ID);
messageConsumer.fail(500, Json.encode(commonUtilities.errors(messageConsumer.body(), TRACING_ID, TRACE_ID, SPAN_ID)));
}
logger.error("{}", new GPCLogEvent(className, "getByProductCode", data.cause(), new GPCEntry<>("OUT", System.currentTimeMillis()), new GPCEntry<>("TRACING_ID", TRACING_ID), new GPCEntry<>("TRACE_ID", TRACE_ID), new GPCEntry<>("SPAN_ID", SPAN_ID), new GPCEntry<>("TIME_TAKEN", ChronoUnit.MILLIS.between(requestReceivedTime_handler, LocalDateTime.now()))));
}
};
// Search based on product codes
gpcFlowable.getByGPID(TRACING_ID,
TRACE_ID,
SPAN_ID,
TermsConstant.DOCUMENT_KEY,
gpcFlowable.getByProductCode(TRACING_ID, TRACE_ID, SPAN_ID, messageConsumer.body().getString("product_codes"),
TermsConstant.API_VERSION_V1), // Get the GPID for the given IA or ORG~LOGO~PCT code
TermsConstant.API_VERSION_V1,
CreditCardTerms.class)
.subscribe(doc -> dataHandler.handle(Future.succeededFuture(doc)),
error -> dataHandler.handle(Future.failedFuture(error)));
}

TCP data sometimes not received by java (or python) server

I'm developing a system that consists of an Arduino MKR1000 that should send data via WiFi to a Java server program running on my local network.
Everything works except the main part: data sent by the Arduino is sometimes not received by the server...
I'm using the Arduino WiFi101 library to connect to my WiFi, get a WiFiClient, and send data.
The following code is just an example to demonstrate the problem:
for (int i = 0; i < 3; ++i) {
Serial.println(F("Connecting to wifi"));
const auto status = WiFi.begin("...", "...");
if (status != WL_CONNECTED) {
Serial.print(F("Could not connect to WiFi: "));
switch (status) {
case WL_CONNECT_FAILED:
Serial.println(F("WL_CONNECT_FAILED"));
break;
case WL_DISCONNECTED:
Serial.println(F("WL_DISCONNECTED"));
break;
default:
Serial.print(F("Code "));
Serial.println(status, DEC);
break;
}
} else {
Serial.println(F("WiFi status: WL_CONNECTED"));
WiFiClient client;
if (client.connect("192.168.0.102", 1234)) {
delay(500);
client.print(F("Test "));
client.println(i, DEC);
client.flush();
Serial.println(F("Data written"));
delay(5000);
client.stop();
} else {
Serial.println(F("Could not connect"));
}
WiFi.end();
}
delay(2000);
}
The Java server is based on Netty, but doing the same thing by manually creating and reading from a Socket yields the same result.
The testing code is pretty standard, with only simple output (note: it's in Kotlin):
val bossGroup = NioEventLoopGroup(1)
val workerGroup = NioEventLoopGroup(6)
val serverFuture = ServerBootstrap().run {
group(bossGroup, workerGroup)
channel(NioServerSocketChannel::class.java)
childHandler(object : ChannelInitializer<NioSocketChannel>() {
override fun initChannel(ch: NioSocketChannel) {
ch.pipeline()
.addLast(LineBasedFrameDecoder(Int.MAX_VALUE))
.addLast(StringDecoder())
.addLast(object : ChannelInboundHandlerAdapter() {
override fun channelRead(ctx: ChannelHandlerContext, msg: Any) {
println("msg = $msg")
ctx.close()
}
})
}
})
bind(port).sync()
}
The Arduino reports that everything is OK (i.e. it writes "Data written" to the serial console for each iteration), but the server sometimes misses individual messages.
Adding the LoggingHandler from Netty tells me in these cases:
11:28:48.576 [nioEventLoopGroup-3-1] WARN i.n.handler.logging.LoggingHandler - [id: 0x9991c251, L:/192.168.0.20:1234 - R:/192.168.0.105:63845] REGISTERED
11:28:48.577 [nioEventLoopGroup-3-1] WARN i.n.handler.logging.LoggingHandler - [id: 0x9991c251, L:/192.168.0.20:1234 - R:/192.168.0.105:63845] ACTIVE
In the cases where the message is received it tells me:
11:30:01.392 [nioEventLoopGroup-3-6] WARN i.n.handler.logging.LoggingHandler - [id: 0xd51b7bc3, L:/192.168.0.20:1234 - R:/192.168.0.105:59927] REGISTERED
11:30:01.394 [nioEventLoopGroup-3-6] WARN i.n.handler.logging.LoggingHandler - [id: 0xd51b7bc3, L:/192.168.0.20:1234 - R:/192.168.0.105:59927] ACTIVE
11:30:01.439 [nioEventLoopGroup-3-6] WARN i.n.handler.logging.LoggingHandler - [id: 0xd51b7bc3, L:/192.168.0.20:1234 - R:/192.168.0.105:59927] READ: 8B
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 54 65 73 74 20 32 0d 0a |Test 2.. |
+--------+-------------------------------------------------+----------------+
11:30:01.449 [nioEventLoopGroup-3-6] WARN i.n.handler.logging.LoggingHandler - [id: 0xd51b7bc3, L:/192.168.0.20:1234 - R:/192.168.0.105:59927] CLOSE
11:30:01.451 [nioEventLoopGroup-3-6] WARN i.n.handler.logging.LoggingHandler - [id: 0xd51b7bc3, L:/192.168.0.20:1234 ! R:/192.168.0.105:59927] READ COMPLETE
11:30:01.453 [nioEventLoopGroup-3-6] WARN i.n.handler.logging.LoggingHandler - [id: 0xd51b7bc3, L:/192.168.0.20:1234 ! R:/192.168.0.105:59927] INACTIVE
11:30:01.464 [nioEventLoopGroup-3-6] WARN i.n.handler.logging.LoggingHandler - [id: 0xd51b7bc3, L:/192.168.0.20:1234 ! R:/192.168.0.105:59927] UNREGISTERED
From my understanding, this means that the TCP packets are indeed received, but in the faulty cases the Netty IO thread that is waiting to read the TCP data never continues...
The same problem exists when trying a rudimentary Python server (just waiting for a connection and printing the received data).
I confirmed the data is sent by using tcpflow on Arch Linux with the arguments -i any -C -g port 1234.
I even tried the server on a Windows 7 machine with the same results (TCP packets confirmed with SmartSniff).
Strangely, when a Java program is used to send the data, it is always and reproducibly received...
Does anybody have an idea how to solve the problem, or at least how to diagnose it further?
PS: Maybe it is important to note that with tcpflow (i.e. on Linux) I could watch the TCP packets being resent to the server.
Does this mean the server is receiving the packets but not sending an ACK?
SmartSniff didn't show the same behavior (but maybe I used the wrong options to display the resent packets).
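To check whether ACKs are actually going back to the board, a plain packet capture on the server side should show them; a minimal sketch (interface and port as in the tcpflow command above) would be:
sudo tcpdump -i any -n 'tcp port 1234'
tcpdump prints the TCP flags for each segment, so both the retransmissions and any ACKs sent by the server should be visible.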
In the meantime, I send messages to acknowledge the receipt of each message; if the acknowledgement is not received, the message is sent again.
For anyone with the same problem:
While testing something different, I updated the WiFi firmware of the board to the latest version, 19.5.2. Since then I haven't noticed any lost data, so maybe this was the problem.
See Check WiFi101 Firmware Version and Firmware and certificates Updater.
Note: I couldn't get the sketches to run with the Arduino IDE, but they do run with PlatformIO.

Akka streams with gilt aws kinesis exception: Stream is terminated. SourceQueue is detached

I'm using the Gilt AWS Kinesis stream consumer library (linked there) to connect to a single-shard Kinesis stream.
Specifically:
...
val streamConfig = KinesisStreamConsumerConfig[String](
streamName = queueName
, applicationName = kinesisConsumerApp
, regionName = Some(awsRegion)
, checkPointInterval = 5.minutes
, retryConfig = RetryConfig(initialDelay = 1.second, retryDelay = 1.second, maxRetries = 3)
, initialPositionInStream = InitialPositionInStream.LATEST
)
implicit val mat = ActorMaterializer()
val flow = Source.queue[String](0, OverflowStrategy.backpressure)
.to(Sink.foreach {
msgBody => {
log.info(s"Flow got message: $msgBody")
try {
val workAsJson = parse(msgBody)
frontEnd ! workAsJson
} catch {
case th: Throwable => log.error(s"Exception thrown trying to parse message from Kinesis stream, e.cause: ${th.getCause}, e.message: ${th.getMessage}")
}
}
})
.run()
val consumer = new KinesisStreamConsumer[String](
streamConfig,
KinesisStreamHandler(
KinesisStreamSource.pumpKinesisStreamTo(flow, 10.second)
)
)
val ec = Executors.newSingleThreadExecutor()
ec.submit(new Runnable {
override def run(): Unit = consumer.run()
})
The application runs fine for about 24 hours (I verify occasionally by pushing records with the aws kinesis put-record command line and watching them get consumed by my application), but then the application suddenly starts receiving exceptions each time a new record is pushed to the stream.
Here is the console logging when that happens:
INFO: Sleeping ... [863/1962]
DEBUG[RecordProcessor-0000] KCLRecordProcessorFactory$IRecordProcessorFactoryImpl - Processing 1 records from shard shardId-000000000000
WARN [RecordProcessor-0000] KCLRecordProcessorFactory$IRecordProcessorFactoryImpl - Kinesis shard: shardId-000000000000 :: Stream is terminated. SourceQueue is detached
WARN [RecordProcessor-0000] KCLRecordProcessorFactory$IRecordProcessorFactoryImpl - Kinesis shard: shardId-000000000000 :: Stream is terminated. SourceQueue is detached
WARN [RecordProcessor-0000] KCLRecordProcessorFactory$IRecordProcessorFactoryImpl - Kinesis shard: shardId-000000000000 :: Stream is terminated. SourceQueue is detached
ERROR[RecordProcessor-0000] KCLRecordProcessorFactory$IRecordProcessorFactoryImpl - SKIPPING 1 records from shard shardId-000000000000 :: Kinesis shard: shardId-000000000000 :: Stream is terminated. SourceQueue is detached
com.gilt.gfc.aws.kinesis.client.KCLRecordProcessorFactory$KCLProcessorException: Kinesis shard: shardId-000000000000 :: Stream is terminated. SourceQueue is detached
at com.gilt.gfc.aws.kinesis.client.KCLRecordProcessorFactory$IRecordProcessorFactoryImpl$$anon$1.$anonfun$doRetry$2(KCLRecordProcessorFactory.scala:156)
at com.gilt.gfc.util.Retry$.retryWithExponentialDelay(Retry.scala:67)
at com.gilt.gfc.aws.kinesis.client.KCLRecordProcessorFactory$IRecordProcessorFactoryImpl$$anon$1.doRetry(KCLRecordProcessorFactory.scala:151)
at com.gilt.gfc.aws.kinesis.client.KCLRecordProcessorFactory$IRecordProcessorFactoryImpl$$anon$1.processRecords(KCLRecordProcessorFactory.scala:120)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.V1ToV2RecordProcessorAdapter.processRecords(V1ToV2RecordProcessorAdapter.java:42)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ProcessTask.call(ProcessTask.java:176)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.MetricsCollectingTaskDecorator.call(MetricsCollectingTaskDecorator.java:49)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.MetricsCollectingTaskDecorator.call(MetricsCollectingTaskDecorator.java:24)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Stream is terminated. SourceQueue is detached
at akka.stream.impl.QueueSource$$anon$1.$anonfun$postStop$1(Sources.scala:57)
at akka.stream.impl.QueueSource$$anon$1.$anonfun$postStop$1$adapted(Sources.scala:56)
at akka.stream.stage.CallbackWrapper.$anonfun$invoke$1(GraphStage.scala:1373)
at akka.stream.stage.CallbackWrapper.locked(GraphStage.scala:1379)
at akka.stream.stage.CallbackWrapper.invoke(GraphStage.scala:1370)
at akka.stream.stage.CallbackWrapper.invoke$(GraphStage.scala:1369)
at akka.stream.impl.QueueSource$$anon$1.invoke(Sources.scala:47)
at akka.stream.impl.QueueSource$$anon$2.offer(Sources.scala:180)
at com.gilt.gfc.aws.kinesis.akka.KinesisStreamSource$.$anonfun$pumpKinesisStreamTo$1(KinesisStreamSource.scala:20)
at com.gilt.gfc.aws.kinesis.akka.KinesisStreamSource$.$anonfun$pumpKinesisStreamTo$1$adapted(KinesisStreamSource.scala:20)
at com.gilt.gfc.aws.kinesis.akka.KinesisStreamHandler$$anon$1.onRecord(KinesisStreamHandler.scala:29)
at com.gilt.gfc.aws.kinesis.akka.KinesisStreamConsumer.$anonfun$run$1(KinesisStreamConsumer.scala:40)
at com.gilt.gfc.aws.kinesis.akka.KinesisStreamConsumer.$anonfun$run$1$adapted(KinesisStreamConsumer.scala:40)
at com.gilt.gfc.aws.kinesis.client.KCLWorkerRunner.$anonfun$runSingleRecordProcessor$2(KCLWorkerRunner.scala:159)
at com.gilt.gfc.aws.kinesis.client.KCLWorkerRunner.$anonfun$runSingleRecordProcessor$2$adapted(KCLWorkerRunner.scala:159)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at com.gilt.gfc.aws.kinesis.client.KCLWorkerRunner.$anonfun$runSingleRecordProcessor$1(KCLWorkerRunner.scala:159)
at com.gilt.gfc.aws.kinesis.client.KCLWorkerRunner.$anonfun$runSingleRecordProcessor$1$adapted(KCLWorkerRunner.scala:159)
at com.gilt.gfc.aws.kinesis.client.KCLWorkerRunner.$anonfun$runBatchProcessor$1(KCLWorkerRunner.scala:121)
at com.gilt.gfc.aws.kinesis.client.KCLWorkerRunner.$anonfun$runBatchProcessor$1$adapted(KCLWorkerRunner.scala:116)
at com.gilt.gfc.aws.kinesis.client.KCLRecordProcessorFactory$IRecordProcessorFactoryImpl$$anon$1.$anonfun$processRecords$2(KCLRecordProcessorFactory.scala:120)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at com.gilt.gfc.aws.kinesis.client.KCLRecordProcessorFactory$IRecordProcessorFactoryImpl$$anon$1.$anonfun$doRetry$2(KCLRecordProcessorFactory.scala:153)
... 11 common frames omitted
I'm wondering if that answer might be related. If so, I'd appreciate a simpler explanation / how-to-fix that suits a newbie like myself.
Notes:
This is still in the testing/staging phase, so there is no real load on the stream except for the occasional manual pushes I'm making.
The 24h duration during which the application runs fine was not measured accurately; it was an observation.
I'm running the test a third time (started at 8:42 UTC), but with the difference of increasing the Source.queue buffer size to 100.
If 24h turns out to be accurate, could that be related to Kinesis' default 24h retention period for stream records?
Update:
Application still working fine after 24+ hours of operation.
Update2:
So the application has been running fine for the past 48+ hours; again, the only difference is increasing the stream's Source.queue buffer size to 100.
Could that be the proper fix for the issue?
Will I face a similar issue with increased load once we go to production?
Is 100 enough / too much / too few?
Can someone please explain how this change fixed/suppressed/mitigated the error?
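For reference, the manual test pushes mentioned above are plain AWS CLI calls along these lines (the stream name is a placeholder; AWS CLI v2 may additionally need --cli-binary-format raw-in-base64-out when passing literal --data):
aws kinesis put-record --stream-name my-test-stream --partition-key test --data '{"some":"payload"}'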

CherryPy _cp_dispatch strange behaviour with a URL without trailing slash: POST then GET

I am testing CherryPy with _cp_dispatch.
However, when I send one single POST, _cp_dispatch is called twice, not once: first for the expected POST, then a second time with a GET. Why?
The code:
import os
import cherrypy
class WebServerApp:
def __init__(self):
self.index_count = 0
self.cpdispatch_count = 0
def __del__(self):
self.exit()
def _cp_dispatch(self, vpath):
self.cpdispatch_count += 1
cherrypy.log.error('_cp_dispatch: ' + str(vpath) + ' - index count:' + str(self.cpdispatch_count))
if len(vpath) == 0:
return self
if len(vpath) == 2:
vpath.pop(0)
cherrypy.request.params['id'] = vpath.pop(0)
return self
return vpath
@cherrypy.expose
def index(self, **params):
try:
self.index_count += 1
cherrypy.log.error('Index: received params' + str(params) + ' - index count:' + str(self.index_count))
except Exception as e:
cherrypy.log.error(e.message)
def exit(self):
cherrypy.log.error('Exiting')
exit.exposed = True
ws_conf = os.path.join(os.path.dirname(__file__), 'verybasicwebserver.conf')
if __name__ == '__main__':
cherrypy.quickstart(WebServerApp(), config=ws_conf)
The config file:
[global]
server.socket_host = "127.0.0.1"
server.socket_port = 1025
server.thread_pool = 10
log.screen = True
log.access_file = "/Users/antoinebrunel/src/Rankings/log/cherrypy_access.log"
log.error_file = "/Users/antoinebrunel/src/Rankings/log/cherrypy_error.log"
The POST made with requests:
r = requests.post("http://127.0.0.1:1025/id/12345")
The log shows that _cp_dispatch is called 3 times: once at startup and twice for the POST:
pydev debugger: starting (pid: 5744)
[30/Sep/2014:19:16:29] ENGINE Listening for SIGUSR1.
[30/Sep/2014:19:16:29] ENGINE Listening for SIGHUP.
[30/Sep/2014:19:16:29] ENGINE Listening for SIGTERM.
[30/Sep/2014:19:16:29] ENGINE Bus STARTING
[30/Sep/2014:19:16:29] _cp_dispatch: ['global', 'dummy.html'] - _cp_dispatch count:1
[30/Sep/2014:19:16:29] ENGINE Started monitor thread '_TimeoutMonitor'.
[30/Sep/2014:19:16:29] ENGINE Started monitor thread 'Autoreloader'.
[30/Sep/2014:19:16:29] ENGINE Serving on http://127.0.0.1:1025
[30/Sep/2014:19:16:29] ENGINE Bus STARTED
[30/Sep/2014:19:16:34] _cp_dispatch: ['id', '12345'] - _cp_dispatch count:2
127.0.0.1 - - [30/Sep/2014:19:16:34] "POST /id/12345 HTTP/1.1" 301 117 "" "python-requests/2.4.0 CPython/3.4.1 Darwin/13.3.0"
[30/Sep/2014:19:16:34] _cp_dispatch: ['id', '12345'] - _cp_dispatch count:3
[30/Sep/2014:19:16:34] Index: received params{'id': '12345'} - index count:1
127.0.0.1 - - [30/Sep/2014:19:16:34] "GET /id/12345/ HTTP/1.1" 200 - "" "python-requests/2.4.0 CPython/3.4.1 Darwin/13.3.0"
Any idea why _cp_dispatch is called twice for a single POST?
-- EDIT
I suspect some 301 redirection is going on internally, since it appears in the log.
In CherryPy, an internal redirection occurs when the URL does not end with a slash.
https://cherrypy.readthedocs.org/en/3.3.0/refman/_cprequest.html#cherrypy._cprequest.Request.is_index
There are 2 ways to resolve the "problem":
The first is obviously posting to http://example.com/id/12345/ (with the trailing slash).
The second is adding the following to the configuration file:
tools.trailing_slash.on = False
https://cherrypy.readthedocs.org/en/3.2.6/concepts/config.html
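The redirect is also easy to observe directly, for example with curl (port as in the config above): the slash-less form answers with a 301 whose Location ends in /id/12345/ (as in the log above), while the form with the trailing slash is dispatched once with no redirect:
curl -i -X POST http://127.0.0.1:1025/id/12345
curl -i -X POST http://127.0.0.1:1025/id/12345/
This also explains why the second hit in the log is a GET: the requests library follows the 301 and re-issues the redirected request as a GET.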