Getting OpenSSL "close notify" without an "SSL_read", is there a way?

I've recently upgraded a BLE packet capture application running on a Raspberry Pi Zero W (Raspbian Stretch) to use OpenSSL when forwarding packets to a cloud server. The cloud server just receives the packets and stores them in a database; the app on the Pi just sends the packets to it. So it's a one-way conversation from the Pi to the cloud server.
The problem is that the cloud server will shut down a connection after a period of inactivity. It calls SSL_shutdown(), but on the Pi the SSL_RECEIVED_SHUTDOWN flag never gets set unless I call SSL_read() (or SSL_peek()).
Additionally, polling on the socket doesn't show SSL_ERROR_WANT_READ on the Pi, even after the server has made its call.
So, bottom line, I'm having to constantly call SSL_read() just to catch the shutdown. Is that normal? Is there another approach that makes more sense?
I'm using non-blocking sockets, so it doesn't seem to hurt anything, but it just seems hokey since the cloud server is never going to be sending any data (so there's never going to be anything to actually read).
Anyway, I've spent hours researching and trying various approaches, but this is the only way I've found to get it to work.
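The kind of check I mean, as a minimal sketch (the function name and buffer size are arbitrary; it assumes an established SSL * on a non-blocking socket):

```c
/* Drain the connection with SSL_read() solely to notice a close_notify.
 * On this link the peer never sends application data, so any record we
 * process should be the shutdown alert. */
#include <openssl/ssl.h>

/* Returns 1 if the peer has sent close_notify, 0 otherwise. */
int peer_has_shut_down(SSL *ssl)
{
    char buf[256];
    int n = SSL_read(ssl, buf, sizeof(buf));
    if (n <= 0) {
        int err = SSL_get_error(ssl, n);
        if (err == SSL_ERROR_ZERO_RETURN)   /* clean close_notify received */
            return 1;
        /* SSL_ERROR_WANT_READ just means no record has arrived yet. */
    }
    return (SSL_get_shutdown(ssl) & SSL_RECEIVED_SHUTDOWN) ? 1 : 0;
}
```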

Related

TCP retransmission on RST - Different socket behaviour on Windows and Linux?

Summary:
I am guessing that the issue here has something to do with how Windows and Linux handle TCP connections or sockets, but I have no idea what it is. I'm initiating a TCP connection to a piece of custom hardware that someone else has developed and I am trying to understand its behaviour. In doing so, I've created a .NET Core 2.2 application; run on a Windows system, I can initiate the connection successfully, but on Linux (latest Raspbian), I cannot.
It appears that it may be because Linux systems do not retry/retransmit a SYN after a RST, whereas Windows ones do, and this behaviour seems key to how this peculiar piece of hardware works.
Background:
We have a black box piece of hardware that can be controlled and queried over a network, by using a manufacturer-supplied Windows application. Data is unencrypted and requires no authentication to connect to it and the application has some other issues. Ultimately, we want to be able to relay data from it to another system, so we decided to make our own application.
I've spent quite a long time trying to understand the packet format and have created a library, targeting .NET Core 2.2, that can be used to communicate successfully with this kit. In doing so, I discovered that the device seems to require a kind of "request to connect" command to be sent via UDP. Straight afterwards, I am able to initiate a TCP connection on port 16000, although the first TCP attempt always results in a RST,ACK being returned, so a second attempt needs to be made.
What I've developed works absolutely fine on both Windows (x86) and Linux (Raspberry Pi/ARM) systems and I can send and receive data. However, when run on the Raspbian system, there seem to be problems when initiating the TCP connection. I could have sworn that we had it working absolutely fine on a previous build, but none of the previous commits seem to work, so it may well be a system/kernel update that has changed something.
The issue:
When initiating a TCP connection to this device, it will, straight away, reset the connection. It does this even with the manufacturer-supplied software, which itself then immediately re-attempts the connection and succeeds; so this kind of reset-once-then-it-works-the-second-time behaviour isn't in itself a "problem" that I have any control over.
What I am trying to understand is why a Windows system immediately re-attempts the connection through a retransmission...
...but the Linux system just gives up after one attempt (this is the end of the packet capture).
To prove it is not an application-specific issue, I've tried using ncat/netcat on both the Windows system and the Raspbian system, as well as a Kali system on a separate laptop to prove it isn't an ARM/Raspberry issue. Since the UDP "request" hasn't been sent, the connection will never succeed anyway, but this simply demonstrates different behaviour between the OSes.
Linux versions look pretty much the same as above, whereby they send a single packet that gets reset, whereas the Windows attempt demonstrates the multiple retransmissions.
So, does anyone have any answer for this behaviour difference? I am guessing it isn't a .NET Core specific issue, but is there any way I can set socket options to attempt a retransmission? Or can it be set at the OS level with sysctl settings or something? I did try to see if there are any SocketOptionNames in .NET that look like they'd control attempts/retries, as this answer had me wondering, but no luck so far.
If anyone has any suggestions as to how to better align this behaviour across platforms, or can explain the reason for this difference at all, I would very much appreciate it!
Nice find! According to this, Windows' TCP will retry a connection if it receives a RST/ACK from the remote host after sending a SYN:
... Upon receiving the ACK/RST from the target host, the client determines that there is indeed no service listening there. In the Microsoft Winsock implementation of TCP, a pending connection will keep attempting to issue SYN packets until a maximum retry value is reached (set in the registry, this value defaults to 3 extra times)...
The value used to limit those retries is set in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpMaxConnectRetransmissions according to the same article. At least in Win10 Pro it doesn't seem to be present by default.
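If you wanted to pin the retry count down explicitly rather than rely on the default, the value can be created from an elevated prompt, along these lines (the 3 here mirrors the default the article mentions; TCP/IP parameter changes generally need a reboot to take effect):

reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpMaxConnectRetransmissions /t REG_DWORD /d 3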
Although this is a convenience on Windows machines, an application should still determine its own criteria for handling a failed connect attempt, IMO (i.e. number of attempts, timeouts, etc.).
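To illustrate, here is a rough sketch of that application-level retry, in C for brevity (the same shape works in any socket API; the address and retry parameters are placeholders, and on Linux the device's first RST surfaces as ECONNREFUSED):

```c
/* Retry connect() in the application, since Linux reports the RST as
 * ECONNREFUSED instead of retransmitting the SYN like Winsock does. */
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int connect_with_retry(const char *ip, int port, int max_attempts)
{
    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            return fd;                 /* connected */

        close(fd);
        if (errno != ECONNREFUSED)     /* only retry the RST case */
            return -1;
        usleep(100 * 1000);            /* short pause between attempts */
    }
    return -1;
}
```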
Anyhow, as I said, surprising fact! Living and learning I guess ...
Cristian.

Running a Sensu handler on the client instead of the server

I have the following problem: I am using Sensu to monitor some Raspberry Pis. I'm using standalone checks, which work just fine. Now sometimes it might happen that one of the Pis loses its wifi connection, or just gets restarted manually and DHCP fails, or for some other reason has no internet connection. The idea is to let the Pi check itself for an internet connection, and if the check fails it should solve the problem by itself, like restarting wifi or rebooting the Pi.
Of course a simple bash script with a cronjob could do the job, but I want to do the check with Sensu. The problem is obvious: if the check fails, I don't have an internet connection and therefore can't send the check result to the Sensu server.
Long story short ;) is it possible to implement something like the remediation feature just on the client? So that a handler on the client itself starts the script which should resolve the problem.
I don't think this is possible. Standalone checks are scheduled by the client, but the check result is still published to the server. The result is then handled by the handler, which resides on the server.
You could write a standalone "check" plugin which monitors the wifi and, if it is down, turns it back on. It isn't using a handler, though.
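As a rough illustration of that idea (Sensu only cares about the exit code: 0 is OK, 2 is critical; the probe target, interface name, and restart command below are all assumptions to adapt):

```c
/* Self-remediating standalone check: probe connectivity, and if it is
 * gone, restart the wifi interface locally before reporting critical. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

static int can_reach(const char *ip, int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);
    int ok = (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0);
    close(fd);
    return ok;
}

int main(void)
{
    if (can_reach("8.8.8.8", 53)) {            /* any well-known endpoint */
        puts("CheckWifi OK: internet reachable");
        return 0;
    }
    system("sudo ifdown wlan0 && sudo ifup wlan0");  /* local remediation */
    puts("CheckWifi CRITICAL: internet unreachable, restarted wlan0");
    return 2;
}
```

Scheduling this as a standalone check keeps the remediation entirely on the client, which is the part you can't get from a server-side handler.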

socket opening on Windows 2012 extremely slow

I'm working on a legacy VB6 app that uses sockets to communicate to various devices.
On a 2012 system, we are noticing that the time between calling winSock.Connect() and the connection event being fired is holding at about 9 seconds, across multiple systems on different domains.
On a 2008 R2 or lower system, it's taking 1-3 milliseconds between the call and the event being fired.
Has anyone run into this before, or has any ideas on what could be causing this?
I've done some snooping with Wireshark and found that the first few TCP transmissions are not connecting and are being retransmitted; not sure if that will help.
I ended up finding the answer to this after some extensive digging.
Starting in Windows Server 2012, Microsoft has enabled an extension of TCP called Explicit Congestion Notification (ECN). This allows end-to-end notification of network congestion without dropping packets. It is signalled on a TCP packet via flags defined in the ECN specification (RFC 3168, 2001).
What was happening for me was that the devices my application talks to are older and don't support the ECN flags. When they received packets with those flags set, they wouldn't acknowledge the transmission, leading to a timeout. After two failed transmissions, it looks like Windows turns off the ECN flags, after which the device acknowledged the packets.
I disabled ECN running the following command from an Administrator Command Prompt:
netsh interface tcp set global ecncapability=disabled
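If you want to confirm the current state before (or after) changing it, the companion show command lists "ECN Capability" among the global TCP parameters:
netsh interface tcp show global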
There is nothing particularly "special" about the Winsock control, which is just a thin wrapper on top of the Winsock API. The only thing of note, really, is that it is 32-bit and must run inside WOW64.
You're probably doing something unusual; otherwise, all 32-bit programs using the Winsock API the same way would see the same issue.
Perhaps you have a name resolution issue on this server?

Session getting disconnected in the middle of working

Sessions are getting disconnected automatically (in the middle of work).
Disconnections happen for users while they are working over a telnet connection to a Linux server via the PuTTY telnet client.
During the disconnections, the network bandwidth utilization is high, and there is no limit on the total number of users on the network.
Error "Hangup signal received (562)"
Any idea about this?
The network connection was interrupted or a hangup signal was sent via "kill".
You mention network utilization being "high" when disconnects happen. How do you know that? What measurement are you looking at that tells you it is "high"? That might be a symptom of a networking issue that is at the root of the problem.
There are a few directions:
OpenEdge has published this article with links on implementing keep-alive packets (see the socket-level sketch after this list):
https://knowledgebase.progress.com/articles/Article/Telnet-connection-times-out-after-15-minutes
Increase the number of "instances" in xinetd.conf, and then restart the service.
Make sure that the database watchdog is up and running: https://documentation.progress.com/output/ua/OpenEdge_latest/index.html#page/dmadm/prowdog-command.html
Check the database log file, to find out what happened just before the hangup (https://documentation.progress.com/output/ua/OpenEdge_latest/index.html#page/gsins/openedge-database-log-file.html)
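On the keep-alive point, if you control the server-side application, the socket-level version looks roughly like this (the values are illustrative and the TCP_KEEP* options shown are Linux-specific; system-wide equivalents exist as sysctls such as net.ipv4.tcp_keepalive_time if you can't change the application):

```c
/* Enable TCP keepalive on an existing socket so that idle sessions are
 * probed rather than silently dropped by intermediate devices. */
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

int enable_keepalive(int fd)
{
    int on = 1, idle = 300, intvl = 60, cnt = 5;
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
    /* Start probing after 300 s idle, probe every 60 s, declare the
     * peer dead after 5 unanswered probes. */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));
    return 0;
}
```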

Is there a way to wait for a listening socket on win32?

I have a server and client program on the same machine. The server is part of an application- it can start and stop arbitrarily. When the server is up, I want the client to connect to the server's listening socket. There are win32 functions to wait on file system changes (ReadDirectoryChangesW) and registry changes (RegNotifyChangeKeyValue)- is there anything similar for network changes? I'd rather not have the client constantly polling.
There is no such Win32 API; however, this can easily be accomplished by using an event. The client would wait for that event to be signaled, and the server would signal the event when it starts up.
The related APIs you will need are CreateEvent, OpenEvent, SetEvent, ResetEvent, and WaitForSingleObject.
If your server will run as a service, then for Vista and up it will run in session 0 isolation. That means you will need to use an event with a name prefixed with "Global\".
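A minimal sketch of both sides (the event name is made up, error handling is trimmed, and note that creating objects in the Global namespace may require elevated rights outside of services):

```c
/* Server signals a named manual-reset event once its listener is up;
 * the client waits on the same event before connecting. */
#include <windows.h>

#define READY_EVENT_NAME L"Global\\MyServerListening"

/* Server side: call after the listening socket is bound. */
void signal_server_ready(void)
{
    HANDLE h = CreateEventW(NULL, TRUE /* manual reset */, FALSE,
                            READY_EVENT_NAME);
    if (h)
        SetEvent(h);  /* keep the handle open for the server's lifetime */
}

/* Client side: block until the server reports it is listening. */
BOOL wait_for_server(DWORD timeout_ms)
{
    /* CreateEventW (not OpenEventW) so this works whichever side runs
     * first; if the event already exists, the existing one is opened. */
    HANDLE h = CreateEventW(NULL, TRUE, FALSE, READY_EVENT_NAME);
    if (!h)
        return FALSE;
    DWORD rc = WaitForSingleObject(h, timeout_ms);
    CloseHandle(h);
    return rc == WAIT_OBJECT_0;
}
```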
You probably do have a good reason for needing this, but before you implement this please consider:
Is there some reason you need to connect right away? I see this as a non-issue, because if you perform an action in the client, you can make a new server connection at that point.
Is the server starting and stopping more frequently than the client? You could switch the roles of who listens and who connects.
Consider using some form of Windows synchronization, such as a semaphore. The client can wait on the synchronization primitive and the server can signal it when it starts up.
Personally, I'd use a UDP broadcast from the server and have the "client" listen for it. The server could broadcast a UDP packet periodically while running, and when the client receives one, it could connect if it isn't connected already.
This has the advantage that you can move the client onto a different machine without any issues (and since the main connection from client to server is already sockets, it would be a pity to tie the client and server to the same machine simply because you selected a local IPC method for the initial bootstrap).
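A rough sketch of the server-side beacon (POSIX sockets shown for brevity; the Winsock version differs mainly in setup boilerplate like WSAStartup, and the port and payload here are made up):

```c
/* Broadcast a small "I'm alive" datagram every few seconds; clients
 * listening on the same port connect when they hear it. */
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

void broadcast_presence(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int yes = 1;
    setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(17000);                    /* beacon port */
    dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);  /* 255.255.255.255 */

    const char msg[] = "SERVER_UP";
    for (;;) {
        sendto(fd, msg, sizeof(msg), 0,
               (struct sockaddr *)&dst, sizeof(dst));
        sleep(5);                                   /* beacon period */
    }
}
```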