How does Postgres negotiate TLS usage?

I am a bit puzzled about the Postgres option sslmode=prefer. It implies that the client negotiates with the server to figure out whether the server supports TLS.
I am curious how this is done. Does it try TLS first and, if that fails, retry without TLS, or am I missing something in TLS (or Postgres) that allows them to truly negotiate this?

Does it try TLS first and, if that fails, retry without TLS
Yes. And when both attempts fail, this might be visible, as two different error messages might be produced.

Some additional info on top of the answer above:
https://www.postgresql.org/docs/current/protocol-flow.html
To initiate an SSL-encrypted connection, the frontend initially sends
an SSLRequest message rather than a StartupMessage. The server then
responds with a single byte containing S or N, indicating that it is
willing or unwilling to perform SSL, respectively. The frontend might
close the connection at this point if it is dissatisfied with the
response. To continue after S, perform an SSL startup handshake (not
described here, part of the SSL specification) with the server. If
this is successful, continue with sending the usual StartupMessage. In
this case the StartupMessage and all subsequent data will be
SSL-encrypted. To continue after N, send the usual StartupMessage and
proceed without encryption.
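To make the quoted flow concrete, here is a minimal Python sketch of what a client with sslmode=prefer roughly does. The host, port, and the decision to skip certificate verification are assumptions for illustration; libpq's real implementation also handles retries, fallbacks for older servers, and the StartupMessage itself.

import socket
import ssl
import struct

HOST, PORT = "localhost", 5432  # assumed; adjust for your server

sock = socket.create_connection((HOST, PORT))

# SSLRequest: Int32 length (8) followed by the request code 80877103.
sock.sendall(struct.pack("!ii", 8, 80877103))

answer = sock.recv(1)
if answer == b"S":
    # Server is willing to do SSL: perform the TLS handshake, then send the
    # usual StartupMessage over the encrypted channel.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False      # sslmode=prefer does not verify the server
    ctx.verify_mode = ssl.CERT_NONE
    sock = ctx.wrap_socket(sock, server_hostname=HOST)
elif answer == b"N":
    # Server is unwilling: with sslmode=prefer, carry on without encryption.
    pass

# ...send the StartupMessage on `sock` either way...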

Related

Looking for debugging advice on SSL errors from EKS using varnish

I know this is a somewhat specific question, but I'm having a problem I can't seem to track down. I have a single pod deployed to EKS - the pod contains a Python app and a Varnish reverse caching proxy. I'm serving chunked JSON (that is, streaming lines of JSON, a la http://jsonlines.org/), and it can be multiple GB of data.
The first time I make a request and it hits the Python server, everything works correctly. It takes (much) longer than the cached version, but the entire set of JSON lines is downloaded. However, now that it's cached in Varnish, if I use curl, I get:
curl: (56) GnuTLS recv error (-110): The TLS connection was non-properly terminated.
or
curl: (56) GnuTLS recv error (-9): A TLS packet with unexpected length was received.
The SSL is terminated at the ELB, and when I use curl from the proxy container itself (using curl http://localhost?....), there is no problem.
The hard part of this is that the problem is somewhat intermittent.
If there is any advice in terms of clever varnishlog usage, or anything of the same ilk on AWS, I'd be much obliged.
Thanks!
Because TLS is terminated on your ELB load balancers, the connection between the ELB and Varnish should be plain HTTP.
The error is probably not coming from Varnish, because Varnish currently doesn't handle TLS natively. I'm not sure if varnishlog can give you better insight into what is actually happening.
Checklist
The only checklist I can give you is the following:
Make sure the certificate you're using is valid
Make sure you're connecting to your target group over HTTP, not HTTPS
If you enable the PROXY protocol on your ELB, make sure Varnish has a -a listener that listens for PROXY protocol requests, on top of regular HTTP requests.
Debugging
Perform top-down debugging:
Increase the verbosity of your cURL calls and try to get more information about the error (see the sketch after this list)
Try accessing the logs of your ELB and get more details there
Get more information from your EKS logs
And finally, run varnishlog -g request -q "ReqUrl eq '/your-url'" to get a full varnishlog for a specific URL
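As a complement to the first step (curl verbosity), here is a rough Python sketch that streams the same URL and reports how many bytes arrive before the connection drops; the host name and path are placeholders. A truncated TLS stream from the ELB typically surfaces as an exception mid-read, which mirrors curl's error 56.

import http.client

# Placeholders: replace with your ELB endpoint and the streamed URL.
conn = http.client.HTTPSConnection("your-elb.example.com")
conn.request("GET", "/your-url")
resp = conn.getresponse()

received = 0
try:
    while True:
        chunk = resp.read(64 * 1024)
        if not chunk:
            break
        received += len(chunk)
except Exception as exc:
    # An improperly terminated stream usually raises somewhere in here.
    print(f"connection dropped after {received} bytes: {exc}")
else:
    print(f"completed cleanly after {received} bytes")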

HAProxy: Prevent stickiness to a backup server

I'm facing a configuration issue with HAProxy (1.8).
Context:
In an HAProxy config, I have several servers in a backend, and an additional backup server in case the other servers are down.
Once a client gets an answer from a server, it must stick to this server for its next queries.
For good reasons, I can't use a cookie for this, so I had to use a stick-table instead.
Problem:
When every "normal" server is down, clients are redirected to the backup server, as expected.
BUT the stick-table is then filled with an association between the client and the id of the backup server.
AND when the "normal" servers come back, the clients which are present in the stick table and associated with the id of the backup server keep getting redirected to the backup server instead of the normal ones!
This is really upsetting me...
So my question is: how do I prevent HAProxy from sticking clients to a backup server in a backend?
Please find below a configuration sample:
defaults
    option redispatch

frontend fe_test
    bind 127.0.0.1:8081
    stick-table type ip size 1m expire 1h
    acl acl_test hdr(host) -i whatever.domain.com
    ...
    use_backend be_test if acl_test
    ...

backend be_test
    mode http
    balance roundrobin
    stick on hdr(X-Real-IP) table fe_test
    option httpchk GET /check
    server test-01 server-01.lan:8080 check
    server test-02 server-02.lan:8080 check
    server maintenance 127.0.0.1:8085 backup
(I've already tried to add a lower weight to the backup server, but it didn't solve this issue.)
I read in the documentation that the "stick-on" keyword has some "if/unless" options, and maybe I can use it to write a condition based on the backend server names, but I have no clue about the syntax to use, or even if it is possible.
Any idea is welcome!
So silly of me! I was so obsessed by the stick table configuration that I didn't think to look in the server options...
There is a simple keyword that perfectly solves my problem: non-stick
Never add connections allocated to this server to a stick-table. This
may be used in conjunction with backup to ensure that stick-table
persistence is disabled for backup servers.
So the last line of my configuration sample simply becomes:
server maintenance 127.0.0.1:8085 backup non-stick
...and everything is now working as I expected.
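For completeness, a small Python sketch that could be used to confirm the behaviour: it sends a few requests with a fixed X-Real-IP and prints a response header identifying the server. The URL, the header value, and the assumption that each backend server sets an X-Served-By header are all hypothetical; adapt them to whatever identifies your servers.

import urllib.request

URL = "http://127.0.0.1:8081/"          # hypothetical frontend address
HEADERS = {"Host": "whatever.domain.com", "X-Real-IP": "203.0.113.10"}

for i in range(5):
    req = urllib.request.Request(URL, headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        # Assumes each backend server adds an identifying response header.
        print(i, resp.headers.get("X-Served-By", "unknown"))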

SMTP protocol synchronization error (input sent without waiting for greeting)

I configured an Exim mail server on CentOS. It works with no encryption, but not with SSL/TLS. I didn't find a correct solution for this type of error. Can anyone tell me the solution, and why this error message appears in the Exim main.log file?
The error message in the Exim main.log file looks like this:
2015-03-17 10:34:16 SMTP protocol synchronization error (input sent without waiting for greeting): rejected connection from H=acp-node [10.7.2.137] input="\026\003\001"
(input sent without waiting for greeting) ... input="\026\003\001"
In short: You are trying to use implicit TLS on a port where explicit TLS is needed.
In detail: There are two ways to use TLS with SMTP:
implicit TLS, that is, TLS from the start. This is used on port 465 (smtps). This mode is in some SMTP stacks simply called "SSL".
explicit TLS, that is, start with plain SMTP and upgrade to TLS with the STARTTLS command. This is used on ports 25 (smtp) and 587 (submission). This mode is in some SMTP stacks simply called "TLS".
If you look around at the questions regarding use of SMTP with TLS you will find lots of confusion about how to use these modes with the various setups. And you will find lots of bad code which tries to use implicit TLS where explicit TLS is needed.
What you see is the result of the client trying to use implicit TLS on a port not suitable for this. \026\003\001 (or hex 16 03 01) is the start of a TLS 1.0 handshake, and input sent without waiting for greeting refers to the fact that the client is sending data first, without waiting for the server to send the (plain-text) SMTP greeting.
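In Python's smtplib the two modes described above map onto two different entry points; a minimal sketch, with a hypothetical host name:

import smtplib
import ssl

ctx = ssl.create_default_context()

# Explicit TLS: plain SMTP on port 25 or 587, upgraded with STARTTLS.
with smtplib.SMTP("mail.example.com", 587) as smtp:
    smtp.ehlo()
    smtp.starttls(context=ctx)
    smtp.ehlo()

# Implicit TLS: TLS from the first byte, traditionally on port 465 (smtps).
with smtplib.SMTP_SSL("mail.example.com", 465, context=ctx) as smtps:
    smtps.ehlo()

A client configured for implicit TLS ("SSL") but pointed at a plain port sends the \026\003\001 handshake bytes straight into the plain-text greeting, which is exactly what the log entry above shows.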
Judging from the error log entry, your mail client 10.7.2.137 is trying to establish a secure (TLS) connection but your Exim server is not expecting it.
Most probably, TLS is not configured properly in your Exim configuration file. You can refer to http://www.exim.org/exim-html-current/doc/html/spec_html/ch-encrypted_smtp_connections_using_tlsssl.html for a tutorial.
The solution is, therefore, to edit your Exim configuration file, making sure TLS certificates are defined and tls_advertise_hosts is set; and then restart Exim.

When connecting to SMTP servers should I try SSL or TCP/STARTTLS first?

SMTP allows unencrypted communication over port 25. For some servers (like Google's MX servers) I'm able to switch to a TLS connection using STARTTLS after making the initial unencrypted connection.
S:220 mx.google.com ESMTP l1si352658een.133
C:EHLO mail.example.com
S:250-mx.google.com at your service
S:250-SIZE 35882577
S:250-8BITMIME
S:250-STARTTLS
S:250-ENHANCEDSTATUSCODES
S:250 PIPELINING
C:STARTTLS
S:220 2.0.0 Ready to start TLS
[socket switches to TLS here]
C:EHLO mail.example.com
...
However, I would also like to support straight SSL connections and I'm wondering whether most mail servers prefer starting with SSL or starting with TCP and then moving to TLS after a connection is made.
Unless you have a prior arrangement with the administrator of a server, don't try to connect using SSL. Port 465 was used for SSMTP or SMTPS (SMTP over SSL). Connections to this port were expected to start the connection with SSL. Use of this port and protocol has been abandoned now that StartTLS is available.
There are two ports which may support SMTP with StartTLS. Neither is expected to support SSL without StartTLS, and both will likely drop the connection if you try. Both the SMTP (25) and submission (587) ports may support StartTLS. If it is supported, it will be listed in the response to an EHLO message. You can then initiate the StartTLS process. See RFC 3207 for more details.
It appears from your comments, that your real concern is how to verify the certificate. That is a different but related question. It also assumes that mail servers are not using self-signed certificates. In my case, I use a self-signed certificate. This works well for me as StartTLS is rarely, if ever, used for SMTP (port 25) connections. I have reasonable control over the clients connecting for message Submission (port 587 or port 25) that must authenticate before sending messages. In my experience, StartTLS is mainly used to secure the connection for clients that must authenticate before sending email.
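A hedged Python sketch of that flow: check the EHLO response for the STARTTLS capability, then upgrade, optionally trusting a self-signed certificate that has been exported to a local file beforehand (the host name and file path are placeholders).

import smtplib
import ssl

# Trust a specific (possibly self-signed) certificate exported beforehand.
ctx = ssl.create_default_context(cafile="/path/to/server-cert.pem")

with smtplib.SMTP("mx.example.com", 25) as smtp:
    smtp.ehlo("mail.example.com")
    if smtp.has_extn("starttls"):       # advertised in the EHLO response
        smtp.starttls(context=ctx)
        smtp.ehlo("mail.example.com")   # re-issue EHLO over the encrypted channel
    else:
        pass  # continue in plain text, or abort if TLS is mandatory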
The support for SSL/TLS on connect (SMTPS) or SSL/TLS after STARTTLS really varies from one server to another, depending on the software and how they've been configured.
The main advantage of SSL/TLS on connect is that it doesn't require any changes in the application protocol. In fact, you could wrap the connection using something like stunnel on each side.
The main advantage of SSL/TLS after STARTTLS is that it can be done on the same port. Another advantage could be to be able to host multiple host names (replacing the need for Server Name Indication at the TLS level), but I'm not sure this has ever been used for SMTP servers.
SMTPS (SSL/TLS on connect) doesn't have an official specification and uses a port number for which it is not registered (465). It's also deprecated, in theory. Yet, a number of servers (e.g. Exim) can support it, and will support both modes if they are able to: it is up to the hosting service to choose what to configure.
If you're writing a client and already have support for STARTTLS, it should be fairly cheap to support SSL/TLS upon connect too. It's certainly a good idea to support both, since it will be usable by a wider number of users (if I remember correctly, Gmail used to support only SMTPS at some point, and it can also be useful in case of a firewall that would block one of the ports only).
Both can offer similar levels of security, as long as SSL/TLS is used, one way or another (and that proper certificate verification, including host name, is performed).
There is generally some confusion regarding the difference between SSL and TLS. For some reason, it seems that a number of e-mail software implementations failed to realise that the most important word in "STARTTLS" is "START", not TLS (in terms of connection mode and protocol choice). This confusion has unfortunately propagated to some software configuration options (even in popular mail clients) and thus in ISP documentations. Expect your users to be confused.
Whichever mode you want to support, make sure it doesn't have a "Use TLS, if available" option, which would fall back to a plain exchange if SSL/TLS wasn't available: this opens the connection to MITM attacks.

How do online port checkers work?

For example http://www.utorrent.com/testport?port=12345
How does this work? Can the server side script attempt to open a socket?
There are many ways of accomplishing this through server-side scripting. As @Oded mentioned, most server-side handlers are capable of initiating socket connections on arbitrary ports, and most of those even have dedicated port-scanning packages/libraries (PHP has one in the PEAR repository, Python's socket module makes this type of task a breeze, etc...)
Keep in mind that on shared host platforms, socket connections are typically disabled for security purposes.
Another way that is also very easy to accomplish is to use a command-line port scanner such as nmap from your server-side script, e.g. in PHP you could do echo `nmap -p $port $ip`;
The server side script will try to open a connection on the specified port to the originating IP.
If there is no response (the attempt will timeout), this would be an indication that the port is not open.
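As a concrete example of that logic, here is a minimal Python check the server-side script could run against the originating IP. The host and port are placeholders, and a short timeout stands in for "no response".

import socket

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Timed out, connection refused, host unreachable, etc.
        return False

print(port_is_open("203.0.113.10", 12345))  # hypothetical client IP and port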
The server can try, as @Oded said. But that doesn't ensure the receiver will respond.
Typically, something like this happens:
1. The URL request contains instructions about which port to access. The headers that your browser sends include information about where the request is originating from.
2. Before responding to the request, the server tries to open a connection on that port and checks if this is successful. It waits a while before timing out.
3. The webpage is rendered dynamically based on the results of this test.
4. The response is returned to you containing the results.
Sometimes steps (2) and (3) will be replaced with an AJAX callback, which allows the response to return sooner.