Best practice for securing an existing socket connection, without SSL - sockets

In Best practice for secure socket connection, the OP wants to secure the connection between two sockets, without SSL.
Thomas Pornin suggests SSH is the answer.
Is this answer based on SSH port forwarding of existing sockets, or just switching to SSH in general?
If not, and the question was how to make existing sockets more secure without SSL, what is the best way to do that?
If a client on port 10 connects to a server on port 20, how can the server restrict access so that only the client on port 10 can connect? And how can it verify that it really is the client on port 10 (not an imposter)? (Availability only for an authenticated client.)

The answer there is any form of the SSH protocol, which is based on channels. You can use those channels to transmit fairly arbitrary information, including port-forwarded data or terminal sessions, or anything you can turn into a byte stream. That said, TLS is generally much easier to implement in code because the libraries are ubiquitous and designed to be used this way. SSH is easier to implement in scripts on Unix-like systems because it has a powerful command-line API.
In most cases, TLS is the better choice. So the question here is: what problem do you have that TLS doesn't solve? If the answer is "I hate TLS," then sure, use SSH. But unless you have a very specialized problem, TLS is almost always the better choice.
TLS authenticates using client certificates. SSH authenticates using your private key. In either case, the cert/key is stored in a file that the client reads and uses to authenticate to the server.
It's not clear from your question what you mean by "client" or "imposter" here. Anything that has access to the cert/key will be authorized (possibly requiring a user-provided password), so those must be protected. If by "client" you mean "my application," that is not a solvable problem. You can authenticate people. You can to some extent authenticate machines (particularly if you have an HSM or similar piece of security hardware available). You can weakly verify that the client claims to be connecting from port 10, but this is generally useless and extremely fragile, so I wouldn't pursue it. You cannot authenticate software over the network in any meaningful way.
Short answer, though, is to use TLS unless you have a very specialized problem and a good security expert to help you design another solution (and your security expert will almost certainly say "use TLS").
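To make the client-certificate route concrete, here is a minimal Java (JSSE) sketch of a mutually authenticated TLS client. The keystore path, password, host name, and port are placeholders, and it assumes the server has been configured to require client certificates.

import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class MutualTlsClient {
    public static void main(String[] args) throws Exception {
        // Placeholder path/password: the client's private key and certificate live in a PKCS#12 file.
        char[] password = "changeit".toCharArray();
        KeyStore keyStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("client-identity.p12")) {
            keyStore.load(in, password);
        }

        // The key manager presents the client certificate during the TLS handshake.
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, password);

        // Default trust managers are used to validate the server's certificate.
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), null, null);

        SSLSocketFactory factory = ctx.getSocketFactory();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("server.example", 20)) {
            socket.startHandshake(); // fails if the server rejects the client certificate
            socket.getOutputStream().write("hello\n".getBytes("UTF-8"));
        }
    }
}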

Related

Is accepting all client certificates considered insecure for a public OPC UA server?

I am aware of certificate chains when validating a client certificate. Still, this either puts a lot of burden on the server administrator or restricts clients, which can be unfavorable when implementing a public OPC UA server.
An implementation of the client certificate validator that accepts all certificates for message encryption/signing is certainly possible. But would such an implementation be considered insecure in that regard?
If yes, how?
Yes, it is considered insecure.
Aside from the (hopefully) obvious use case, where certificates ensure you know exactly what client applications are allowed to connect to the server, certificates are also the first line of defense against malicious clients and are part of a "defense in depth" strategy.
A malicious actor that can't establish a secure channel with the server doesn't have much to work with. A malicious actor that can establish a secure channel can, e.g., open many connections, create many sessions (without activating them, potentially causing a DoS as you use up resources), attempt to guess credentials, re-use default credentials that an application may ship with, etc.
Further... in the face of the recent CISA alert re: ICS/SCADA devices + OPC UA servers, you'd be a bit of a fool to willingly ship a less secure product for the sake of convenience.
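To illustrate the difference in plain Java terms (this is a generic JSSE sketch, not the API of any particular OPC UA SDK): the first validator below accepts everything and therefore keeps nobody out, while the second only accepts certificates that chain to an explicit trust store.

import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;

public class CertificateValidation {

    // Anti-pattern: an "accept everything" validator. Any client that can speak TLS
    // gets a secure channel, so the certificate check no longer filters anyone out.
    static final X509TrustManager ACCEPT_ALL = new X509TrustManager() {
        @Override public void checkClientTrusted(X509Certificate[] chain, String authType) { /* no checks */ }
        @Override public void checkServerTrusted(X509Certificate[] chain, String authType) { /* no checks */ }
        @Override public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
    };

    // Safer: only certificates that chain to an explicitly trusted store are accepted.
    static X509TrustManager trustStoreBased(String trustStorePath, char[] password) throws Exception {
        KeyStore trustStore = KeyStore.getInstance("PKCS12"); // placeholder store type/path
        try (FileInputStream in = new FileInputStream(trustStorePath)) {
            trustStore.load(in, password);
        }
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        return (X509TrustManager) tmf.getTrustManagers()[0];
    }
}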

Apache Thrift - How to provide secure communication

I want to secure the communication between Thrift server and client instances. To achieve that, firstly I enabled SSL communication using a keystore on the server side and a truststore on the client side, as explained in this post: https://chamibuddhika.wordpress.com/2011/10/03/securing-a-thrift-service/
Afterwards, I wrapped my transport instances on both client and server with the TEncryptedFramedTransport.java class provided in the following SO post: Symmetric encryption (AES) in Apache Thrift. This enabled symmetric encryption of messages transferred through the socket connection.
My question is: does applying both of these make my communication more secure? Or is it unnecessary to apply both, and should I go with only one of them?
There is a concept called "defense in depth". The idea is that you still have one more defense in place even if another one gets broken. The downside is, as always, that you have to pay for it with performance.
The real question here is this: Do I trust SSL/TLS alone, or do I absolutely want to add another (application-level) layer of security that serves as an extra hurdle if some man-in-the-middle manages to get inside my SSL/TLS channel, even if that will cost me some performance?
Another aspect could be that you might be forced to communicate across insecure channels, i.e., when there is no TLS available. Remember, Thrift allows you to switch transports as needed, and the SSL/TLS infrastructure is only available in certain cases.
If the answer is yes, do it. It would be the same answer with REST, SOAP, XMLRPC, Avro, gRPC or the well-known avian carriers.
So the final, decisive answer as to whether you should do that depends on your priorities.
Also be aware that there could be other attack vectors in your solution that might need to be addressed.
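For reference, the TLS layer described in the question is typically set up with Thrift's TSSLTransportFactory, roughly as sketched below (keystore/truststore paths, passwords, host, and port are placeholders); the symmetric TEncryptedFramedTransport wrapper from the linked post would then be layered on top of these transports.

import org.apache.thrift.transport.TSSLTransportFactory;
import org.apache.thrift.transport.TSSLTransportFactory.TSSLTransportParameters;
import org.apache.thrift.transport.TServerSocket;
import org.apache.thrift.transport.TSocket;

public class ThriftTlsTransports {

    // Server side: present the certificate from the keystore.
    static TServerSocket secureServerTransport() throws Exception {
        TSSLTransportParameters params = new TSSLTransportParameters();
        params.setKeyStore("server-keystore.jks", "keystore-password"); // placeholder path/password
        return TSSLTransportFactory.getServerSocket(9090, 30000, null, params); // null interface address, as in the Thrift tutorial
    }

    // Client side: trust the server certificate via the truststore.
    static TSocket secureClientTransport() throws Exception {
        TSSLTransportParameters params = new TSSLTransportParameters();
        params.setTrustStore("client-truststore.jks", "truststore-password"); // placeholder path/password
        return TSSLTransportFactory.getClientSocket("server.example", 9090, 30000, params);
    }
}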

Secure RESTful API via HTTP(S): How to deal with the certificate host check without host name (only IP address)?

For implementing a RESTful API via HTTP I need a way to secure communication (encryption of communication, prevention of man-in-the-middle and replay attacks).
The API is supposed to be used for communication between software PC clients (Windows, Linux), smart phones, hardware clients on the one hand and an embedded device (the server) on the other hand.
If I use HTTPS with a single self-signed certificate (all embedded devices ever manufactured can use the same one, I think) that clients have embedded or stored somewhere, I get all the benefits I want.
Now I have got one issue:
As the embedded devices are always accessed by IP address, the client side host check for the certificate is going to fail. Whatever is written in the certificate is NOT going to be the host that answers.
E.g. with libcurl I have to disable the check via
curl_easy_setopt(curlEasyHandle, CURLOPT_SSL_VERIFYHOST, 0L);
This doesn't hurt too badly for self-written clients - but clients are also supposed to be written by 3rd-party developers. What I find awkward is that 3rd parties have to know that they have to disable the host check (and actually do it...).
Also, I am not sure if disabling this check is always possible with whatever HTTP/TLS library 3rd parties are using.
A certificate issued for an IP address (if even possible?!) is not an option, as the IP address can be changed by the user of the device.
Is there a way for a certificate to be "host neutral"? Or is a part of my approach incorrect and I should do something differently? Or is there nothing that can be done about it and everybody implements it like this?
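One common way to handle this on the client side (offered here as a hedged sketch, not something from the original thread) is certificate pinning: the client trusts exactly the self-signed device certificate it already ships with, and the host-name check is replaced, because the pinned trust store is what actually authenticates the device. A minimal Java example, where the certificate resource, IP address, and URL path are placeholders:

import java.io.InputStream;
import java.net.URL;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class PinnedSelfSignedClient {
    public static void main(String[] args) throws Exception {
        // Load the single self-signed device certificate that ships with the client.
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        Certificate deviceCert;
        try (InputStream in = PinnedSelfSignedClient.class.getResourceAsStream("/device-cert.pem")) {
            deviceCert = cf.generateCertificate(in);
        }

        // A trust store containing only the pinned certificate - nothing else is accepted.
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        trustStore.load(null, null);
        trustStore.setCertificateEntry("device", deviceCert);

        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);

        URL url = new URL("https://192.0.2.10/api/status"); // device is reached by IP address
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        conn.setSSLSocketFactory(ctx.getSocketFactory());
        // The certificate carries no name matching the IP, so the name check is replaced;
        // the pinned trust store above is what actually authenticates the device.
        conn.setHostnameVerifier((hostname, session) -> true);
        System.out.println("HTTP " + conn.getResponseCode());
    }
}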

What's the purpose of running VCS over SSH?

I'm not very familiar with SSH and *nix systems in general, so please forgive me for a possibly stupid question.
What are the benefits of, and what is the exact purpose behind, having one's VCS be tunneled (hope this is an appropriate term here) over an SSH connection? Is it speed? Or security? Or something else?
Security, and the fact that SSH is a standard transport protocol. Also, key authentication is commonly used with SSH to provide password-less interaction with the VCS. Speed is not a benefit, as SSH encrypts transmissions and so time is spent on encryption and decryption.
Why pick a standard transport protocol? Getting firewall clearance is more straightforward, the VCS doesn't have to re-invent the wheel, etc.
This is a subjective answer, but here are three reasons that I would tunnel any application protocol over SSH, in order of importance:
Authentication and Authorization
I don't have to maintain my own database of users, don't have to think about password encryption, don't have to give the sysadmins yet another thing to manage.
Connection management
I can focus on my application-level communications, without worrying that I've created an exploitable security hole.
Admins are more likely to open well-known ports

Does Openfire Support TLS Over Http-Bind? If not, what's the alternative?

I apologize in advance for the somewhat long-windedness of this question, but I feel that I need to provide some additional information in order to properly qualify my current predicament.
Background
Okay, so in many ways this question is a follow-up to a previous question I asked regarding TLS/SSL encryption for XMPP communication and which libraries were the best. At first I resigned myself to using only .NET libraries that used TLS/SSL, but have since expanded to include Java libraries as a suitable alternative and have attempted a simple implementation of the Smack API as well. After exhaustive (and largely misguided) research regarding TLS/SSL encryption, I realized that when Openfire is properly configured to block non-secure connections, most XMPP clients connecting to Openfire will simply auto-negotiate TLS-encrypted communications, and that as long as I controlled the user roster on the server side (i.e., disable users' ability to create new accounts from any client) I could more or less create secure end-to-end XMPP collaboration through Openfire.
The New Problem
Once I got the previous issues settled, I attempted to use this method for secure communication over HTTP binding via Openfire's HTTP-binding functionality and ports. The reason for this is that our implementation will require users to connect to our Openfire server from additional networks. Additionally, and perhaps obviously, we will have no control over how these users' firewalls will be configured to allow outgoing socket connections over port 5222, and what's more, due to the nature of the system we are implementing, it is highly unlikely that any of our clients will be willing/allowed to open their firewall to establish a socket connection to our XMPP server.
The issue is that Openfire's HTTP-bind does not appear to support auto TLS and instead only supports (as Openfire puts it) the 'Old SSL' method of encryption. This and other Openfire socket vs. HTTP-bind differences are discussed in another question here, although not yet at great length.
The Question (Finally)
First, can anyone confirm that HTTP-bind through Openfire actually does not support auto TLS?
Second, does the Smack API support HTTP-bind? There is an existing ticket on Ignite Realtime's website that seems to state that it is not supported; however, the ticket was created in 2007, and its last comment from June 2011, asking whether any update has been made on this feature, has as of yet gone unanswered.
Third, it seems as though my last resort to achieve secure communication using Openfire and HTTP-bind would be to use the 'Old SSL' method; however, this does not seem like a good long-term solution. Also, the Openfire forums and various other rumor mills have indicated that SSL functionality will be deprecated in future Openfire releases (can anyone give credence to this rumor?). All that being said, is SSL my only real alternative for a secure connection using HTTP-bind?
By default, Openfire opens two ports for HTTP-Binding (BOSH) based connectivity. One is a plain-text port (7070), the other is a TLS/SSL-encrypted port (7443). This is much like the two ports used for regular socket connections (5222, 5223).
Clients connecting over the non-HTTP, regular socket port (5222) can elevate the initially plain-text channel to an encrypted channel (using STARTTLS). When STARTTLS was introduced (back in ... well, I didn't have kids then) the pre-existing, TLS/SSL-encrypted port (5223) got referred to as the 'old' way of doing encryption. Somewhat overzealously perhaps, some suggestions were made to drop the 'old' technique in favor of the 'new' one.
STARTTLS has not been explicitly added to the 'plain text' port (7070) of the HTTP Binding (BOSH) implementation. This is by design. Unlike the 'plain socket' connectivity on port 5222, BOSH makes use of a transport protocol: HTTP. Channel encryption for BOSH should be done at the HTTP (transport) layer (port 7443), not the XMPP (application) layer (which translates back to the 'old' way of doing things in the non-HTTP world). This is not Openfire-specific, by the way: it's specified by the BOSH protocol.
As for the deprecation of the 'old' SSL-based ports: the general consensus (between Openfire developers) is that there's no point in removing the 'old' SSL port: although the technique is somewhat outdated, it's not less secure than the more modern (STARTTLS) technique. On top of that, the discussion to drop 'old' SSL ports is oriented towards the non-HTTP based connectivity (socket connections for clients, server-to-server, external components, etc.). And finally, the discussion is somewhat distorted by a similar yet distinct discussion on whether to change the default port numbering for BOSH (Openfire's usage of 7070/7443 predates the definition of standard BOSH port numbering).
As, by design, the BOSH implementation is intended to utilize HTTP-provided encryption, its encrypted port will continue to exist.
As for the Smack-supporting-BOSH question: Smack supports that: https://www.igniterealtime.org/builds/smack/docs/latest/javadoc/org/jivesoftware/smack/bosh/XMPPBOSHConnection.html
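To make the "encrypt at the HTTP layer" point concrete, here is a rough sketch (plain Java, standard library only) of what a BOSH client does on the TLS-encrypted port: it simply POSTs an XEP-0124-style session-creation body to the HTTPS endpoint, and all channel encryption comes from TLS on that HTTP connection. The host name is a placeholder, and the port and path follow the usual Openfire defaults.

import java.io.OutputStream;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class BoshOverHttps {
    public static void main(String[] args) throws Exception {
        // Openfire's TLS/SSL-encrypted BOSH endpoint (default port and path assumed).
        URL url = new URL("https://xmpp.example.org:7443/http-bind/");
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        conn.setDoOutput(true);

        // Minimal XEP-0124 session-creation body; the XMPP stream is carried inside HTTP,
        // so the channel is protected by TLS at the transport layer, not by STARTTLS.
        String body = "<body rid='1573741820' to='xmpp.example.org' wait='60' hold='1'"
                + " ver='1.6' xmlns='http://jabber.org/protocol/httpbind'/>";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes("UTF-8"));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}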