Vert.x delayed response during send - vert.x

We are running a Vert.x HTTPS server in the cloud using a ubi8 OpenJDK 17 image. The resources assigned to the container are 2 CPU cores and 8 GB RAM. These are the startup parameters for the Vert.x 4.2.7 application:
-Xms954m -Xmx3815m -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:+ExitOnOutOfMemoryError -Dvertx.disableDnsResolver=true
We have enabled epoll and BoringSSL, set the blocked thread checker to flag any thread blocked for more than 100 ms, and are starting the server with the following options:
.setSendBufferSize(30 * 1024) // 30 KB
.setReceiveBufferSize(1 * 1024) // 1 KB
.setTcpFastOpen(true)
.setTcpNoDelay(true)
.setTcpQuickAck(true)
.setTcpCork(false)
.setReusePort(true)
.setReuseAddress(true)
.setTcpKeepAlive(true)
.setCompressionSupported(true)
.setUseAlpn(true)
.setHandle100ContinueAutomatically(true)
.setTracingPolicy(TracingPolicy.IGNORE)
.setSsl(true)
.setSslEngineOptions(new OpenSSLEngineOptions().setSessionCacheEnabled(false))
.setKeyStoreOptions(new JksOptions()
    .setPath(config.getJsonObject(ConfigConstants.SSL).getString(ConfigConstants.KEY_STORE))
    .setPassword(config.getJsonObject(ConfigConstants.SSL).getString(ConfigConstants.CERTIFICATE_PASSWORD)));
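For completeness, this is roughly how the epoll (native transport) preference and the 100 ms blocked thread checker are wired up. This is a trimmed-down sketch, not our actual bootstrap; the class name, port, and the abbreviated option chain are illustrative only.

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.http.HttpServerOptions;
import java.util.concurrent.TimeUnit;

public class ServerBootstrap {
    public static void main(String[] args) {
        // Prefer the native (epoll) transport and have the blocked thread checker
        // flag event-loop threads that stay busy for more than 100 ms.
        VertxOptions vertxOptions = new VertxOptions()
            .setPreferNativeTransport(true)
            .setMaxEventLoopExecuteTime(100)
            .setMaxEventLoopExecuteTimeUnit(TimeUnit.MILLISECONDS);

        Vertx vertx = Vertx.vertx(vertxOptions);

        // The full HttpServerOptions chain shown above goes here; abbreviated
        // to keep the sketch short (SSL/keystore options omitted).
        HttpServerOptions serverOptions = new HttpServerOptions()
            .setTcpNoDelay(true)
            .setCompressionSupported(true);

        vertx.createHttpServer(serverOptions)
            .requestHandler(req -> req.response().end("ok"))
            .listen(8080);
    }
}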
We also have the LoggingHandler as the first Handler in the chain.
This is how we are sending the response:
routingContext.response()
.setChunked(true)
.putHeader(HttpHeaders.CONTENT_TYPE.toString(), HttpConstants.APPLICATION_HAL_JSON)
.putHeader(HttpConstants.IDENTIFIER, requestObject.getPathParams().get(UrlConstants.IDENTIFIER))
.end(response.encode());
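For what it's worth, end() returns a Future in Vert.x 4, so the moment the response write actually completes on the connection can be logged and compared against the LoggingHandler number. This is only a sketch of what we could add to narrow things down, not what we currently run; the log message is illustrative.

long start = System.nanoTime();
routingContext.response()
    .setChunked(true)
    .putHeader(HttpHeaders.CONTENT_TYPE.toString(), HttpConstants.APPLICATION_HAL_JSON)
    .putHeader(HttpConstants.IDENTIFIER, requestObject.getPathParams().get(UrlConstants.IDENTIFIER))
    .end(response.encode())
    .onComplete(ar -> {
        // Fires once Vert.x has finished writing the response to the connection,
        // which can be later than the moment the handler itself returned.
        long micros = (System.nanoTime() - start) / 1_000;
        System.out.println("response write completed after " + micros + " us, succeeded=" + ar.succeeded());
    });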
We are seeing that the LoggingHandler reports the server took 1 ms to process the request, whereas at the client end (for about 0.1% of requests) the server-reported time and the client-reported time do not match. We created a JMH test with OkHttp as the client, and for a request the server says took 1 ms we see the following in the client logs (times are elapsed since callStart):
Req # Time Operation
67997 0.000 callStart
67997 0.000 connectionAcquired
67997 0.000 requestHeadersStart
67997 0.000 requestHeadersEnd
67997 0.423 responseHeadersStart
67997 0.423 responseHeadersEnd
67997 0.427 responseBodyStart
67997 0.427 responseBodyEnd
67997 0.427 connectionReleased
67997 0.427 callEnd
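For reference, the client-side timeline above is produced with an OkHttp EventListener, roughly like the sketch below. This is a simplification of our JMH client instrumentation; the class name and the logged request identifier are illustrative, and the Time column is assumed to be seconds elapsed since callStart.

import okhttp3.Call;
import okhttp3.Connection;
import okhttp3.EventListener;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

// Logs the time elapsed since callStart for each OkHttp event on a call.
public class TimingEventListener extends EventListener {
    private long startNanos;

    private void log(Call call, String event) {
        double elapsedSeconds = (System.nanoTime() - startNanos) / 1_000_000_000.0;
        System.out.printf("%s %.3f %s%n", call.request().url().encodedPath(), elapsedSeconds, event);
    }

    @Override public void callStart(Call call) { startNanos = System.nanoTime(); log(call, "callStart"); }
    @Override public void connectionAcquired(Call call, Connection connection) { log(call, "connectionAcquired"); }
    @Override public void requestHeadersStart(Call call) { log(call, "requestHeadersStart"); }
    @Override public void requestHeadersEnd(Call call, Request request) { log(call, "requestHeadersEnd"); }
    @Override public void responseHeadersStart(Call call) { log(call, "responseHeadersStart"); }
    @Override public void responseHeadersEnd(Call call, Response response) { log(call, "responseHeadersEnd"); }
    @Override public void responseBodyStart(Call call) { log(call, "responseBodyStart"); }
    @Override public void responseBodyEnd(Call call, long byteCount) { log(call, "responseBodyEnd"); }
    @Override public void connectionReleased(Call call, Connection connection) { log(call, "connectionReleased"); }
    @Override public void callEnd(Call call) { log(call, "callEnd"); }
}

// One listener instance per call, so timings of concurrent requests do not mix:
// OkHttpClient client = new OkHttpClient.Builder()
//     .eventListenerFactory(call -> new TimingEventListener())
//     .build();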
Any clues to narrow down the problem and find the root cause?

Related

checkpoint_completion_target being ignored

I'm testing checkpoint_completion_target in RDS PostgreSQL and I see that the checkpoint takes a total of 28.5 seconds. However, I have configured:
checkpoint_completion_target = 0.9
checkpoint_timeout = 300
According to this, shouldn't the checkpoint be spread over 300 * 0.9 = 270 seconds?
PostgreSQL version 11.10
Log:
2021-03-19 16:06:47 UTC::#:[25023]:LOG: checkpoint starting: time
2021-03-19 16:07:16 UTC::#:[25023]:LOG: checkpoint complete: wrote 283 buffers (0.2%); 0 WAL file(s) added, 0 removed, 1 recycled; write=28.500 s, sync=0.006 s, total=28.533 s; sync files=56, longest=0.006 s, average=0.000 s; distance=64990 kB, estimate=68721 kB
https://www.postgresql.org/docs/10/runtime-config-wal.html
https://www.postgresql.org/docs/11/wal-configuration.html
The checkpointer implements its throttling by napping in 0.1 second chunks. And there is no provision for taking more than one nap per buffer needing to be written. So if there is very little work to be done, it will finish early despite the setting of checkpoint_completion_target.
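For the log above, that limit lines up almost exactly: 283 buffers at one 0.1 s nap each allows at most about 28.3 s of throttling, which matches the reported write=28.500 s.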

WFLYDC0082: ConcurrentServerGroupUpdateTask timed out after 305000 ms awaiting server prepared response(s) -- cancelling updates for servers

The system property is set to 600 seconds:
jboss.as.management.blocking.timeout=600
But we are getting a timeout after about 300 seconds (305000 ms) while deploying a WAR.
[Host Controller] 10:05:14,951 INFO [org.jboss.as.host.controller] (Host Controller Service Threads - 51) WFLYDC0082: ConcurrentServerGroupUpdateTask timed out after 305000 ms awaiting server prepared response(s) -- cancelling updates for servers

Error Nmap NSE http-form-brute

I'm trying to use the http-form-brute script, but every time it says that the path is wrong. I have already checked the path, and I also checked the syntax and it looks correct... Please point out where I'm going wrong.
Starting Nmap 7.25BETA1 ( https://nmap.org ) at 2017-01-12 19:48 UTC
--------------- Timing report ---------------
hostgroups: min 1, max 100000
rtt-timeouts: init 1000, min 100, max 10000
max-scan-delay: TCP 1000, UDP 1000, SCTP 1000
parallelism: min 0, max 0
max-retries: 10, host-timeout: 0
min-rate: 0, max-rate: 0
---------------------------------------------
NSE: Using Lua 5.2.
NSE: Arguments from CLI: userdb=d.dic,passdb=d.dic,http-form-brute.uservar=usuario,http-form-brute.passvar=senha,http-form-brute.onfailure=invalido!,http-form-brute.path=/admin/validar.php
NSE: Arguments parsed: userdb=d.dic,passdb=d.dic,http-form-brute.uservar=usuario,http-form-brute.passvar=senha,http-form-brute.onfailure=invalido!,http-form-brute.path=/admin/validar.php
NSE: Loaded 1 scripts for scanning.
NSE: Script Pre-scanning.
NSE: Starting runlevel 1 (of 1) scan.
Initiating NSE at 19:48
Completed NSE at 19:48, 0.00s elapsed
Initiating Ping Scan at 19:48
Scanning www.laboratoriohacker.com.br (31.170.164.209) [4 ports]
Packet capture filter (device wlan0): dst host 192.168.0.102 and (icmp or icmp6 or ((tcp or udp or sctp) and (src host 31.170.164.209)))
We got a ping packet back from 31.170.164.209: id = 1632 seq = 0 checksum = 63903
Completed Ping Scan at 19:48, 0.52s elapsed (1 total hosts)
Overall sending rates: 7.76 packets / s, 294.96 bytes / s.
mass_rdns: Using DNS server 192.168.0.1
Initiating Parallel DNS resolution of 1 host. at 19:48
mass_rdns: 0.01s 0/1 [#: 1, OK: 0, NX: 0, DR: 0, SF: 0, TR: 1]
Completed Parallel DNS resolution of 1 host. at 19:48, 0.01s elapsed
DNS resolution of 1 IPs took 0.01s. Mode: Async [#: 1, OK: 0, NX: 1, DR: 0, SF: 0, TR: 1, CN: 0]
Initiating SYN Stealth Scan at 19:48
Scanning www.laboratoriohacker.com.br (31.170.164.209) [1 port]
Packet capture filter (device wlan0): dst host 192.168.0.102 and (icmp or icmp6 or ((tcp or udp or sctp) and (src host 31.170.164.209)))
Discovered open port 80/tcp on 31.170.164.209
Completed SYN Stealth Scan at 19:48, 0.31s elapsed (1 total ports)
Overall sending rates: 3.24 packets / s, 142.60 bytes / s.
NSE: Script scanning 31.170.164.209.
NSE: Starting runlevel 1 (of 1) scan.
Initiating NSE at 19:48
NSE: Starting http-form-brute against www.laboratoriohacker.com.br (31.170.164.209:80).
NSE: [http-form-brute 31.170.164.209:80] Form submission path: /admin/validar.php
NSE: [http-form-brute 31.170.164.209:80] HTTP method: POST
NSE: [http-form-brute 31.170.164.209:80] Username field: usuario
NSE: [http-form-brute 31.170.164.209:80] Password field: senha
NSE: [http-form-brute 31.170.164.209:80] Failed to get new session cookies: Unable to retrieve a login form from path "/admin/validar.php"
NSE: Finished http-form-brute against www.laboratoriohacker.com.br (31.170.164.209:80).
Completed NSE at 19:48, 1.35s elapsed
Nmap scan report for www.laboratoriohacker.com.br (31.170.164.209)
Host is up, received echo-reply ttl 52 (0.46s latency).
Scanned at 2017-01-12 19:48:02 UTC for 2s
PORT STATE SERVICE REASON
80/tcp open http syn-ack ttl 52
| http-form-brute:
|_ ERROR: Failed to submit the form to path "/admin/validar.php"
Final times for host: srtt: 457110 rttvar: 414875 to: 2116610
NSE: Script Post-scanning.
NSE: Starting runlevel 1 (of 1) scan.
Initiating NSE at 19:48
Completed NSE at 19:48, 0.00s elapsed
Read from /usr/bin/../share/nmap: nmap-payloads nmap-services.
Nmap done: 1 IP address (1 host up) scanned in 3.02 seconds
Raw packets sent: 5 (196B) | Rcvd: 2 (72B)
You have provided the path to the HTML form as /admin/validar.php, but the script is unable to GET a response containing a form from that page. Most likely, this is the path that the form POSTs to, not the page that the form exists on. The path provided should be the URI path that a user sees in his browser when filling out the form. Alternatively, you can try setting sessioncookies to 0 (false) to avoid the form detection, but if the form requires new cookies for each submission, then brute forcing will not be possible.

OpenShift application stopped and restarted automatically with cartridge of type DIY

My OpenShift application with a cartridge of type DIY stops and restarts automatically, so my application has continuous downtime. I am running a Spring Boot application with a PostgreSQL database. The server starts and I can see the application running, but after a while the server goes down, then it starts again automatically and shuts down again. Why? I also see only a few logs in the logs directory. These are some of the logs for the application:
rhc tail tiworld
==> app-root/logs/diy.log <==
[2016-07-22 08:55:41] INFO WEBrick 1.3.1
[2016-07-22 08:55:41] INFO ruby 1.8.7 (2013-06-27) [x86_64-linux]
[2016-07-22 08:55:41] INFO WEBrick::HTTPServer#start: pid=380495 port=8080
127.3.82.129 - - [22/Jul/2016:09:10:32 EDT] "HEAD / HTTP/1.1" 200 0
- -> /
127.3.82.129 - - [22/Jul/2016:09:10:32 EDT] "HEAD / HTTP/1.1" 200 0
- -> /
[2016-07-22 09:21:58] INFO going to shutdown ...
[2016-07-22 09:21:58] INFO WEBrick::HTTPServer#start done.
==> app-root/logs/postgresql.log <==
2016-07-27 12:51:12 GMT LOG: could not bind socket for statistics collector: Cannot assign requested address
2016-07-27 12:51:12 GMT LOG: disabling statistics collector for lack of working socket
2016-07-27 12:51:12 GMT WARNING: autovacuum not started because of misconfiguration
2016-07-27 12:51:12 GMT HINT: Enable the "track_counts" option.
2016-07-27 12:51:12 GMT LOG: database system was interrupted; last known up at 2016-07-27 12:45:45 GMT
2016-07-27 12:51:12 GMT FATAL: the database system is starting up
2016-07-27 12:51:12 GMT LOG: database system was not properly shut down; automatic recovery in progress
2016-07-27 12:51:12 GMT LOG: record with zero length at 0/198F218
2016-07-27 12:51:12 GMT LOG: redo is not required
2016-07-27 12:51:12 GMT LOG: database system is ready to accept connections
You can tail this application directly with:
ssh -t 579217552d5271eaa80000c0@programmers-pvb.rhcloud.com 'tail */log*/*'
/var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/ruby_compat.rb:30:in `select': closed stream (IOError)
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/ruby_compat.rb:30:in `io_select'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/transport/packet_stream.rb:75:in `available_for_read?'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/transport/packet_stream.rb:87:in `next_packet'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/transport/session.rb:183:in `block in poll_message'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/transport/session.rb:178:in `loop'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/transport/session.rb:178:in `poll_message'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/connection/session.rb:461:in `dispatch_incoming_packets'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/connection/session.rb:222:in `preprocess'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/connection/session.rb:206:in `process'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/connection/session.rb:170:in `block in loop'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/connection/session.rb:170:in `loop'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh/connection/session.rb:170:in `loop'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/lib/rhc/ssh_helpers.rb:198:in `block in ssh_ruby'
from /var/lib/gems/2.3.0/gems/net-ssh-2.9.2/lib/net/ssh.rb:215:in `start'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/lib/rhc/ssh_helpers.rb:173:in `ssh_ruby'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/lib/rhc/commands/tail.rb:40:in `tail'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/lib/rhc/commands/tail.rb:21:in `run'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/lib/rhc/commands.rb:294:in `execute'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/lib/rhc/commands.rb:285:in `block (3 levels) in to_commander'
from /var/lib/gems/2.3.0/gems/commander-4.2.1/lib/commander/command.rb:180:in `call'
from /var/lib/gems/2.3.0/gems/commander-4.2.1/lib/commander/command.rb:155:in `run'
from /var/lib/gems/2.3.0/gems/commander-4.2.1/lib/commander/runner.rb:421:in `run_active_command'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/lib/rhc/command_runner.rb:72:in `run!'
from /var/lib/gems/2.3.0/gems/commander-4.2.1/lib/commander/delegates.rb:8:in `run!'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/lib/rhc/cli.rb:37:in `start'
from /var/lib/gems/2.3.0/gems/rhc-1.38.4/bin/rhc:20:in `<top (required)>'
from /usr/local/bin/rhc:23:in `load'
from /usr/local/bin/rhc:23:in `<main>'
If you are running this on a small gear, especially if you have Java and a database on the same gear, chances are that you are running out of resources and the gear is restarting (after a while it will not restart automatically anymore).
You can check out this article for more information on checking your memory utilization: https://developers.openshift.com/faq/troubleshooting.html#_why_is_my_application_restarting_automatically_or_having_memory_issues

Facebook OpenGraph API calls do not work at all times

Every now and then we are unable to connect to the Facebook OpenGraph API.
Our .NET client returns the below error:
System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 66.220.146.100:443
   at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
   at System.Net.Sockets.Socket.InternalConnect(EndPoint remoteEP)
   at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Int32 timeout, Exception& exception)
Please note that this is not specific to any particular API call.
Even pinging graph.facebook.com times out randomly.
Below is some trace route info:
1 <1 ms <1 ms <1 ms ip-97-74-87-252.ip.secureserver.net [97.74.87.252]
2 <1 ms <1 ms <1 ms ip-208-109-113-173.ip.secureserver.net [208.109.113.173]
3 1 ms <1 ms <1 ms ip-97-74-252-21.ip.secureserver.net [97.74.252.21]
4 1 ms <1 ms <1 ms ip-97-74-252-21.ip.secureserver.net [97.74.252.21]
5 9 ms 1 ms 1 ms 206-15-85-9.static.twtelecom.net [206.15.85.9]
6 13 ms 13 ms 44 ms lax2-pr2-xe-1-3-0-0.us.twtelecom.net [66.192.241.218]
7 26 ms 28 ms 27 ms cr1.la2ca.ip.att.net [12.122.104.14]
8 26 ms 27 ms 27 ms cr82.sj2ca.ip.att.net [12.122.1.146]
9 23 ms 23 ms 23 ms 12.122.128.201
10 22 ms 22 ms 22 ms 12.249.231.26
11 22 ms 22 ms 22 ms ae1.bb02.sjc1.tfbnw.net [204.15.21.164]
12 23 ms 23 ms 23 ms ae5.dr02.snc4.tfbnw.net [204.15.21.169]
13 24 ms 23 ms 23 ms eth-17-2.csw02b.snc4.tfbnw.net [74.119.76.105]
14 * * * Request timed out.
15 * * * Request timed out.
16 * * * Request timed out.
17 * * * Request timed out.
18 * * * Request timed out.
Steps to reproduce:
Try pinging graph.facebook.com.
The ping request will time out for certain IPs.