stunnel: SSL alert (read): warning: close notify

We have a stunnel proxy server running on the same box. Basically, our app connects to the stunnel proxy unencrypted, and the stunnel proxy then connects to a remote server (belonging to a different company) over TLS.
The relevant configuration is:
client=yes
accept=9998
connect=x.y.z.w:nnnn
verify=level 1
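(For context, a minimal sketch of the application side of this setup, assuming the app talks plain TCP to the local accept port from the config above; the loopback address, timeout, and payload are assumptions, not from the original post:)

import socket

# The app side of the chain: plain TCP to the local stunnel "accept" port
# (9998 per the config above); stunnel then wraps the traffic in TLS and
# forwards it to the remote x.y.z.w:nnnn peer.
with socket.create_connection(("127.0.0.1", 9998), timeout=10) as s:
    s.sendall(b"hello via stunnel\n")   # hypothetical payload
    print(s.recv(1024))                 # whatever the remote peer sends back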
Occasionally (once every 2-3 weeks), the connection goes down after running for a few hours, with the following stunnel log:
2018.04.02 10:05:37 LOG7[197784:140586247849728]: SSL alert (read): warning: close notify
2018.04.02 10:05:37 LOG7[197784:140586247849728]: SSL closed on SSL_read
2018.04.02 10:05:37 LOG7[197784:140586247849728]: Socket write shutdown
2018.04.02 10:05:37 LOG7[197784:140586247849728]: Socket closed on read
2018.04.02 10:05:37 LOG7[197784:140586247849728]: SSL write shutdown
2018.04.02 10:05:37 LOG7[197784:140586247849728]: SSL alert (write): warning: close notify
2018.04.02 10:05:37 LOG6[197784:140586247849728]: SSL_shutdown successfully sent close_notify
2018.04.02 10:05:37 LOG5[197784:140586247849728]: Connection closed: 25757163 bytes sent to SSL, 937186 bytes sent to socket
2018.04.02 10:05:37 LOG7[197784:140586247849728]: FIX session XXXXXX-XXX finished (1 left)
It is not clear to me whether this is caused by the remote server (belonging to the other company) terminating the connection first.
Any advice?
Thanks in advance,
Frank

Related

Kafka stretched cluster stopped when the second DC went down

My Kafka version:
/opt/kafka/bin/kafka-topics.sh --version
2.4.1 (Commit:c57222ae8cd7866b)
My Kafka cluster configuration looks like this:
6-node Kafka cluster
6 x ZooKeeper, i.e. one is installed on each node/broker
2 DCs, with 3 nodes in each DC
the rack-awareness feature is enabled on each node:
node1 DC1:
broker.id=1
broker.rack=dc1
node2 DC1:
broker.id=2
broker.rack=dc1
node3 DC1:
broker.id=3
broker.rack=dc1
node1 DC2:
broker.id=4
broker.rack=dc2
node2 DC2:
broker.id=5
broker.rack=dc2
node3 DC2:
broker.id=6
broker.rack=dc2
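(For context, with broker.rack set like this, Kafka's rack-aware replica assignment spreads each partition's replicas across the two racks whenever a topic is created with replication factor >= 2; a hypothetical example, where the bootstrap address and topic name are assumptions:)
/opt/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --topic dc-test --replication-factor 2 --partitions 6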
When the whole of DC2 went down, the Kafka cluster stopped and node1 in DC1 showed errors like this:
[2022-03-16 07:38:45,422] INFO Unable to read additional data from server sessionid 0x40000004f930002, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:45,549] INFO Unable to read additional data from server sessionid 0x200ab15af610000, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:45,787] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2022-03-16 07:38:45,787] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-03-16 07:38:45,787] INFO Opening socket connection to server dc2kafkabr2/A.B.C.72:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:45,788] INFO Socket error occurred: dc2kafkabr2/A.B.C.72:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,503] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2022-03-16 07:38:46,503] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-03-16 07:38:46,503] INFO Opening socket connection to server dc1kafkabr1/A.B.C.68:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,504] INFO Socket connection established, initiating session, client: /A.B.C.68:35796, server: dc1kafkabr1/A.B.C.68:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,505] INFO Unable to read additional data from server sessionid 0x40000004f930002, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,616] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2022-03-16 07:38:46,617] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-03-16 07:38:46,617] INFO Opening socket connection to server dc1kafkabr2/A.B.C.69:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,617] INFO Socket connection established, initiating session, client: /A.B.C.68:38936, server: dc1kafkabr2/A.B.C.69:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,619] INFO Unable to read additional data from server sessionid 0x200ab15af610000, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2022-03-16 07:38:46,896] INFO Client successfully logged in. (org.apache.zookeeper.Login)
However, when the Kafka nodes in DC2 are stopped normally (via systemctl), the Kafka cluster keeps working properly on the nodes in DC1.
The question is: why does the Kafka cluster stop working when DC2 is turned off? How can I prevent it? Any ideas?
Best Regards,
Dan
Dear all,
After further tests I know the problem is on the ZooKeeper side: when I turn off two brokers in DC2, the Kafka cluster still works, and after turning off kafka.service on the last broker in DC2 it still works. But when I turn off zookeeper.service on the last broker in DC2, the cluster becomes unresponsive.
This is my zookeeper's configuration:
cat zookeeper.properties
tickTime=2000
dataDir=/opt/zookeeper/data
#dataLogDir=/var/log/zookeeper
clientPort=2181
initLimit=5
syncLimit=3
############## HARDENING #################
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
###########################################
server.1=A.B.C.68:2888:3888
server.2=A.B.C.69:2888:3888
server.3=A.B.C.70:2888:3888
server.4=A.B.C.71:2888:3888
server.5=A.B.C.72:2888:3888
server.6=A.B.C.73:2888:3888
Any idea what is wrong with this configuration?
Best Regards,
Dan
ZooKeeper quorum is not ensured, and that is the reason: losing DC2 leaves the remaining ensemble below a majority, so it stops serving requests.
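A quick way to see the arithmetic behind that (a minimal sketch, not part of the original answer):

ensemble_size = 6                    # total ZooKeeper servers (3 per DC)
quorum = ensemble_size // 2 + 1      # majority needed -> 4
remaining_after_dc2_loss = 3         # only the DC1 servers survive
print(quorum, remaining_after_dc2_loss >= quorum)   # prints: 4 False

Because 3 < 4, the surviving DC1 servers cannot elect a leader or acknowledge writes, so the whole cluster appears to stop. A common mitigation is to place a tie-breaking ZooKeeper node in a third location, so that losing either DC still leaves a majority.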

UNIX domain SOCK_DGRAM Client - Connection refused

A Unix domain SOCK_DGRAM client returns a "connection refused" error. As we know, datagram sockets are connectionless, so I expected the client not to return any error on sendto() if the server has not started yet. How can the client return a connection refused error?
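(For what it's worth, on Linux the kernel delivers AF_UNIX datagrams directly into the receiving socket's buffer, so it already knows at sendto() time whether anything is bound to the destination path and can fail immediately instead of silently dropping the packet, as UDP over IP would. A minimal sketch that reproduces the error; the socket path is hypothetical:)

import os, socket

PATH = "/tmp/dgram_demo.sock"          # hypothetical socket path

# Create and immediately close the "server" socket, so the path still
# exists on the filesystem but no process is bound to it any more.
srv = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
if os.path.exists(PATH):
    os.unlink(PATH)
srv.bind(PATH)
srv.close()                            # socket gone, path remains

cli = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
try:
    cli.sendto(b"hello", PATH)         # fails with ECONNREFUSED
except ConnectionRefusedError as e:
    print("sendto failed:", e)
finally:
    cli.close()
    os.unlink(PATH)

(If the path does not exist at all, sendto() fails with ENOENT instead.)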

Kafka broker unable to recover

I am running a 5-node Kafka cluster with a 5-node ZooKeeper ensemble.
On the Kafka nodes I keep getting this error:
[2016-11-14 19:05:11,345] INFO Opening socket connection to server 10.105.23.188/10.105.23.188:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2016-11-14 19:05:11,345] INFO Socket connection established to 10.105.23.188/10.105.23.188:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2016-11-14 19:05:11,346] INFO Unable to read additional data from server sessionid 0x55861d8ea6f0000, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2016-11-14 19:05:11,721] INFO Opening socket connection to server 10.105.25.4/10.105.25.4:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2016-11-14 19:05:11,721] INFO Socket connection established to 10.105.25.4/10.105.25.4:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2016-11-14 19:05:11,722] INFO Unable to read additional data from server sessionid 0x55861d8ea6f0000, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2016-11-14 19:05:11,841] INFO Opening socket connection to server 10.105.24.4/10.105.24.4:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2016-11-14 19:05:11,841] WARN Session 0x55861d8ea6f0000 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
[2016-11-14 19:05:12,319] INFO Opening socket connection to server 10.105.27.23/10.105.27.23:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2016-11-14 19:05:12,319] INFO Socket connection established to 10.105.27.23/10.105.27.23:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2016-11-14 19:05:12,320] INFO Unable to read additional data from server sessionid 0x55861d8ea6f0000, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2016-11-14 19:05:12,395] WARN [ReplicaFetcherThread-0-5], Error in fetch kafka.server.ReplicaFetcherThread$FetchRequest#3fbe518f. Possible cause: org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'responses': Error reading array of size 1204063, only 920 bytes available (kafka.server.ReplicaFetcherThread)
On the ZooKeeper nodes, I get the following on the leader:
2016-11-14 19:14:47,502 [myid:4] - ERROR [LearnerHandler-/10.105.25.4:50099:LearnerHandler#631] - Unexpected exception causing shutdown while sock still open
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99)
at org.apache.zookeeper.server.quorum.LearnerHandler.run(LearnerHandler.java:513)
When a consumer tries to read from this cluster, it gets this error repeatedly:
2016-11-14 19:17:36,847 WARN {main} [kafka.clients.NetworkClient$DefaultMetadataUpdater:handleResponse:629] Error while fetching metadata with correlation id 1603 : {=UNKNOWN_TOPIC_OR_PARTITION}
Please help me debug this. The consumers have auto commit enabled. If you need me to provide more configuration please let me know.
Thanks!

ZooKeeper not able to connect between two servers

I have two ZooKeeper/Kafka servers, say:
10.10.1.9
10.10.1.10
When I start my ZooKeeper on 10.10.1.9 with this configuration:
dataDir=/var/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0
initLimit=5
syncLimit=2
server.1=10.10.1.9:2888:3888
server.2=10.10.1.10:2888:3888
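(For reference, with server.1/server.2 entries like these, ZooKeeper also expects a myid file in each server's dataDir containing that server's own id; this is the standard requirement rather than something shown in the post. Using the dataDir above, /var/zookeeper/myid would contain 1 on 10.10.1.9 and 2 on 10.10.1.10.)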
I get the following warning on 10.10.1.9:
[2016-05-23 07:26:56,047] WARN Cannot open channel to 2 at election address /10.10.1.10:3888 (org.apache.zookeeper.server.quorum.QuorumCnxManager)
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
at java.lang.Thread.run(Thread.java:745)
This means that ZooKeeper is not able to connect to the 10.10.1.10 server.
In the ZooKeeper logs on 10.10.1.10, I get the following:
[2016-05-23 06:01:48,513] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-05-23 07:23:00,400] INFO Accepted socket connection from /10.10.1.9:44808 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-05-23 07:23:05,537] WARN Exception causing close of session 0x0 due to java.io.IOException: Len error -720899 (org.apache.zookeeper.server.NIOServerCnxn)
[2016-05-23 07:23:05,538] INFO Closed socket connection for client /10.10.1.9:44808 (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
[2016-05-23 07:24:43,180] INFO Accepted socket connection from /10.10.1.9:44821 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-05-23 07:24:45,189] WARN Exception causing close of session 0x0 due to java.io.IOException: Len error -1179651 (org.apache.zookeeper.server.NIOServerCnxn)
[2016-05-23 07:24:45,189] INFO Closed socket connection for client /10.10.1.9:44821 (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
[2016-05-23 07:47:47,166] INFO Accepted socket connection from /10.10.1.9:45015 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-05-23 07:47:51,080] WARN caught end of stream exception (org.apache.zookeeper.server.NIOServerCnxn)
EndOfStreamException: Unable to read additional data from client sessionid 0x0, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)
[2016-05-23 07:47:51,091] INFO Closed socket connection for client /10.10.1.9:45015 (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
[2016-05-23 07:56:05,693] INFO Accepted socket connection from /10.10.1.9:45085 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-05-23 07:56:11,359] WARN caught end of stream exception (org.apache.zookeeper.server.NIOServerCnxn)
EndOfStreamException: Unable to read additional data from client sessionid 0x0, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)
[2016-05-23 07:56:11,360] INFO Closed socket connection for client /10.10.1.9:45085 (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
[2016-05-23 08:01:22,066] INFO Accepted socket connection from /10.10.1.9:45127 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-05-23 08:01:26,015] WARN caught end of stream exception (org.apache.zookeeper.server.NIOServerCnxn)
EndOfStreamException: Unable to read additional data from client sessionid 0x0, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)
Has anybody encountered such an issue? Any ideas?
Can you list all the port permissions, or anything else that needs to be done, for the two ZooKeeper servers to be able to communicate?
Thanks in advance, guys!

WHM Email Stuck in Manager Queue after Cloudflare Setup

My WHM server seems to be storing all its email in the queue manager. I get the following errors from Exim:
LOG: MAIN
cwd=/usr/local/cpanel/whostmgr/docroot 4 args: /usr/sbin/exim -v -M 1ZHBnT-0003rU-0v
delivering 1ZHBnT-0003rU-0v
LOG: MAIN
SMTP connection identification H=localhost A=::1 P=60184 M=1ZHBnT-0003rU-0v U=root ID=0 S=root B=authenticated_local_user
Connecting to gmail-smtp-in.l.google.com [74.125.70.27]:25 ... failed: Connection timed out (timeout=5m)
LOG: MAIN
H=gmail-smtp-in.l.google.com [74.125.70.27] Connection timed out
Connecting to alt1.gmail-smtp-in.l.google.com [173.194.204.27]:25 ... failed: Connection timed out (timeout=5m)
LOG: MAIN
H=alt1.gmail-smtp-in.l.google.com [173.194.204.27] Connection timed out
Connecting to alt2.gmail-smtp-in.l.google.com [74.125.141.27]:25 ... failed: Connection timed out (timeout=5m)
LOG: MAIN
H=alt2.gmail-smtp-in.l.google.com [74.125.141.27] Connection timed out
Connecting to alt3.gmail-smtp-in.l.google.com [64.233.190.27]:25 ... failed: Connection timed out (timeout=5m)
LOG: MAIN
H=alt3.gmail-smtp-in.l.google.com [64.233.190.27] Connection timed out
Port 25 is open.
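(For context, a minimal way to double-check outbound port 25 from the server itself, using the first MX host from the Exim log above; the timeout value is an assumption, not from the original post:)

import socket

# Hypothetical connectivity probe: open an outbound TCP connection to one
# of the Gmail MX hosts Exim was trying to reach and read the SMTP banner.
try:
    with socket.create_connection(("gmail-smtp-in.l.google.com", 25), timeout=10) as s:
        print("connected, banner:", s.recv(200))
except OSError as e:
    print("outbound port 25 looks blocked or filtered:", e)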
I think your mail server's IP is blocked by the Gmail servers, and that is why you are getting these issues. Maybe one of your users is sending spam from your server, and that is the reason your server IP is blocked. You can check your mail server's IP status via http://mxtoolbox.com/blacklists.aspx.