XMPP Server-to-Server for Gmail.com/Jabber.org - certificate

I'm in the phase of implementing the server-to-server communication part of an XMPP server.
I'm testing my implementation against Gmail.com and Jabber.org, but both seem to use the dialback protocol. Does anyone know more about this protocol as it relates to these servers?
The protocol seems to be split into several federation types, and I can't seem to find out which one these servers use, or what the implications are (root certificate, self-signed certificate, ...).

XEP-0220 specifies the dialback protocol.
Keep in mind in your implementation that you may be asked for dialback on a socket that has already been fully set up and has had traffic come across it.
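For illustration, here is a minimal sketch of dialback key generation as recommended by XEP-0185 (which XEP-0220 builds on): the key is an HMAC-SHA256 over "receiving domain, space, originating domain, space, stream ID", keyed with the hex-encoded SHA-256 of a server-side secret. The domain names, stream ID, and secret below are placeholders, not values Gmail.com or Jabber.org actually use.

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class DialbackKey {

        // Lower-case hex encoding of a byte array.
        static String hex(byte[] bytes) {
            StringBuilder sb = new StringBuilder();
            for (byte b : bytes) sb.append(String.format("%02x", b));
            return sb.toString();
        }

        // key = HMAC-SHA256(key = HEX(SHA-256(secret)),
        //                   text = receiving + ' ' + originating + ' ' + streamId)
        static String dialbackKey(String secret, String receiving, String originating, String streamId)
                throws Exception {
            String hashedSecret =
                    hex(MessageDigest.getInstance("SHA-256").digest(secret.getBytes(StandardCharsets.UTF_8)));
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(hashedSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            String text = receiving + ' ' + originating + ' ' + streamId;
            return hex(mac.doFinal(text.getBytes(StandardCharsets.UTF_8)));
        }

        public static void main(String[] args) throws Exception {
            // Placeholder values; in a real S2S implementation these come from the stream headers.
            System.out.println(dialbackKey("s3cr3tfortheages", "xmpp.example.com", "example.org", "D60000229F"));
        }
    }

The same function is used when you later have to verify a key that a remote server sends you, including on a socket that is already established, as noted above.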


Is accepting all client certificates considered insecure for a public OPC UA server?

I am aware of certificate chains when validating a client certificate. Still, this either puts a lot of burden on the server administrator or restricts clients, which can be unfavorable when implementing a public OPC UA server.
An implementation of the client certificate validator that accepts all certificates for message encryption/signing is certainly possible. But would such an implementation be considered insecure?
If yes, how?
Yes, it is considered insecure.
Aside from the (hopefully) obvious use case, where certificates ensure you know exactly what client applications are allowed to connect to the server, certificates are also the first line of defense against malicious clients and are part of a "defense in depth" strategy.
A malicious actor that can't establish a secure channel with the server doesn't have much to work with. A malicious actor that can establish a secure channel can, e.g., open many connections, create many sessions (without activating them, potentially causing a DoS as resources are consumed), attempt to guess credentials, re-use default credentials that an application may ship with, etc.
Further... in the face of the recent CISA alert re: ICS/SCADA devices + OPC UA servers, you'd be a bit of a fool to willingly ship a less secure product for the sake of convenience.
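As a rough illustration of the stricter alternative, here is a sketch of the kind of check a real validator would delegate to, using only the standard Java certificate APIs rather than any particular OPC UA stack's validator interface; the trust store path and password are placeholders.

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import java.security.cert.CertPath;
    import java.security.cert.CertPathValidator;
    import java.security.cert.CertificateFactory;
    import java.security.cert.PKIXParameters;
    import java.security.cert.X509Certificate;
    import java.util.Arrays;

    public class ClientCertCheck {

        // Validates a client certificate chain against the CAs in a trust store,
        // instead of accepting every certificate unconditionally.
        static void validate(X509Certificate[] chain, String trustStorePath, char[] trustStorePass)
                throws Exception {
            KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
            try (FileInputStream in = new FileInputStream(trustStorePath)) {
                trustStore.load(in, trustStorePass);
            }
            CertPath certPath = CertificateFactory.getInstance("X.509")
                    .generateCertPath(Arrays.asList(chain));
            PKIXParameters params = new PKIXParameters(trustStore);
            params.setRevocationEnabled(false); // enable and configure CRL/OCSP checking in production
            CertPathValidator.getInstance("PKIX").validate(certPath, params);
        }
    }

An OPC UA SDK will usually expose a hook or callback for this kind of validation, so where exactly you plug it in depends on the stack you use.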

Apache Thrift - How to provide secure communication

I want to secure the communication between Thrift server and client instances. To achieve that, I first enabled SSL communication using a keystore on the server side and a truststore on the client side, as explained in this post: https://chamibuddhika.wordpress.com/2011/10/03/securing-a-thrift-service/
Afterwards, I wrapped my transport instances on both client and server with TEncryptedFramedTransport.java class provided in the following SO post: Symmetric encryption (AES) in Apache Thrift. This enabled symmetric encryption of messages transferred through socket connection.
My question is: does applying both of these make my communication more secure? Or is it unnecessary to apply both, and should I go with only one of them?
There is a concept called "defense in depth". The idea is that you still have one more defense in place even when one gets broken. The downside is, as always, that you have to pay for it with performance.
The real question here is this: do I trust SSL/TLS alone, or do I absolutely want to add another (application-)level of security that serves as another hurdle if some man-in-the-middle manages to get inside my SSL/TLS channel, even if that will cost me some performance?
Another aspect could be that you might be forced to communicate across insecure channels, i.e. when there is no TLS available. Remember, Thrift allows you to switch transports as needed, and the SSL/TLS infrastructure is only available in certain cases.
If the answer is yes, do it. It would be the same answer with REST, SOAP, XMLRPC, Avro, gRPC or the well-known avian carriers.
So the final, decisive answer on whether you should do that depends on your priorities.
Also be aware that there could be other attack vectors in your solution that might need to be addressed.
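For reference, a minimal sketch of the TLS-only layer on the client side, using Thrift's TSSLTransportFactory as in the linked blog post; the host, port, and truststore details are placeholders, and the TEncryptedFramedTransport wrapper from the other answer would sit on top of this transport if you decide to keep both layers.

    import org.apache.thrift.transport.TSSLTransportFactory;
    import org.apache.thrift.transport.TSSLTransportFactory.TSSLTransportParameters;
    import org.apache.thrift.transport.TTransport;

    public class SecureThriftClient {
        public static void main(String[] args) throws Exception {
            // Trust store holding the server's certificate (or its CA).
            TSSLTransportParameters params = new TSSLTransportParameters();
            params.setTrustStore("/path/to/truststore.jks", "truststore-password");

            // Opens a TLS-protected socket to the Thrift server; wrap it with your
            // protocol and generated service client as usual.
            TTransport transport = TSSLTransportFactory.getClientSocket("thrift.example.com", 9091, 10000, params);
            // ... new TBinaryProtocol(transport), new YourService.Client(protocol), etc.
            transport.close();
        }
    }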

Simplest server to server authentication

I have a microservice on a new server/VPS that will only ever be called via REST by a monolith app to perform some heavy lifting and then post the operation results back to the monolith a few minutes later.
How should I protect these two endpoints? I think my main goal, for now, is just preventing someone who found the server's address from being able to do anything.
Almost every solution I google seems like overkill/premature optimization.
Is it sufficient that I generate a long random token once on each machine and then just pass it in the headers and check its presence on the other end?
Do I even need to SSL this? As far as I understand, we need SSL encryption for clients that are trying to send sensitive data over wireless or unsafe shared networks.
What are the chances (is it even possible?) that somebody will eavesdrop between two DigitalOcean VPSs sending data via HTTP? Has it ever happened before?
Q: Is it sufficient that I generate a long random token once on each machine and then just pass it in the headers and check its presence on the other end?
A: Generally microservices sit behind a gatekeeper/gateway (nginx, HAProxy), so you can expose only the endpoints you want. In your case I would recommend creating a private network between the two VPSs and exposing your microservice on that internal IP.
Q: Do I even need to SSL this? As far as I understand, we need SSL encryption for clients that are trying to send sensitive data over wireless or unsafe shared networks.
A: No. If you use internal networks and don't expose the service to the public, then there is no need for SSL/TLS. If you were operating at Tier 3/4, you would need encryption for cross-datacenter communication.
Q: What are the chances (is it even possible?) that somebody will eavesdrop between two DigitalOcean VPSs sending data via HTTP? Has it ever happened before?
A: There are bots that scan for open ports on servers/computers and try to penetrate them with exploits. In any case, always use a firewall like UFW/firewalld.
So let's say you have two servers with these microservices using the internal private network from your favorite provider:
VPS1 (ip = 10.0.1.50)
FooBarService:1337
BarFooService:7331
VPS2 (ip = 10.0.1.51)
AnotherMicroService:9999
Now both VPSs can access each other's services by simply calling the IP + port.
Good luck.
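If you still want the long random token from the question as an extra check on top of the private network, a minimal sketch of the receiving side could look like this; the SERVICE_TOKEN environment variable and the idea of an X-Service-Token request header are assumptions, not a fixed convention.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class TokenCheck {

        // The long random token generated once per machine and shared out-of-band.
        private static final String EXPECTED_TOKEN = System.getenv("SERVICE_TOKEN");

        // presentedToken is the value taken from e.g. an X-Service-Token request header.
        // Compare in constant time so the check does not leak how many leading
        // characters of a guessed token were correct.
        static boolean isAuthorized(String presentedToken) {
            if (presentedToken == null || EXPECTED_TOKEN == null) return false;
            return MessageDigest.isEqual(
                    presentedToken.getBytes(StandardCharsets.UTF_8),
                    EXPECTED_TOKEN.getBytes(StandardCharsets.UTF_8));
        }
    }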
There are a few simple solutions you could use to authenticate both servers back and forth. The one I would recommend if you want to keep it simple, as you say, is Basic Auth. As long as you're using it over an SSL/HTTPS connection, it suffices as a super simple way to authenticate each end.
You state that your main goal is to protect these endpoints, but then ask if SSL/HTTPS is even needed. If these servers are exposed to the web in any way, then I would say yes, your endpoints need to be protected, and if you're transmitting sensitive data, then you need to be sending it through a secure stream.
If you believe the data you're sending is not very sensitive, and it is likely that no one who knows these two endpoints will even know how to properly manipulate your data by sending fake requests, then sure, you don't need any of this; but then you assume the risk and responsibility for if and when it is ever exposed. Basic Auth is super easy, and with LetsEncrypt it's incredibly easy to obtain an SSL certificate for free. It's good experience, so you may as well try it out, protect these endpoints, and ensure that they're safe.
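For example, a sketch of the monolith calling the microservice with Basic Auth over HTTPS, using the JDK's built-in HTTP client (Java 11+); the worker.example.com URL, path, and credentials are placeholders.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class MonolithCaller {
        public static void main(String[] args) throws Exception {
            // Placeholder credentials shared between monolith and microservice.
            String credentials = Base64.getEncoder()
                    .encodeToString("monolith:long-random-password".getBytes(StandardCharsets.UTF_8));

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://worker.example.com/jobs")) // HTTPS, e.g. with a LetsEncrypt certificate
                    .header("Authorization", "Basic " + credentials)
                    .POST(HttpRequest.BodyPublishers.ofString("{\"job\":\"heavy-lifting\"}"))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        }
    }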

Best practice for securing an existing socket connection, without SSL

In Best practice for secure socket connection, the OP wants to secure the connection between two sockets, without SSL.
Thomas Pornin suggests SSH is the answer.
Is this answer based on SSH port forwarding of existing sockets, or on just switching to SSH in general?
If not, and the question was how to make existing sockets more secure without SSL, what is the best way to do that?
If a client on port 10 connects to a server on port 20, how can the server restrict access so that only the client on port 10 can connect? And how does it know that it really is the client on port 10 (not an imposter)? (Availability only for an authenticated client.)
The answer there is any form of the SSH protocol, which is based on channels. You can use those channels to transmit fairly arbitrary information, including port-forwarded data or terminal sessions, or anything you can turn into a byte stream. That said, TLS is generally much easier to implement in code because the libraries are ubiquitous and designed to be used this way. SSH is easier to implement in scripts on Unix-like systems because it has a powerful command-line API.
Unless you have a very specialized problem, TLS is almost always the better choice. So the question here is: what problem do you have that TLS doesn't work for? If it's "I hate TLS", then sure, use SSH. But TLS is better in most cases.
TLS authenticates clients using client certificates. SSH authenticates using your private key. In either case, the cert/key is stored in a file that the client reads and uses to authenticate to the server.
It's not clear from your question what you mean by "client" or "imposter" here. Anything that has access to the cert/key will be authorized (possibly after supplying a user-provided password), so those files must be protected. If by "client" you mean "my application", that is not a solvable problem. You can authenticate people. You can to some extent authenticate machines (particularly if you have an HSM or similar piece of security hardware available). You can weakly authenticate that the client is connecting from port 10, but this is generally useless and extremely fragile, so I wouldn't pursue it. You cannot authenticate software over the network in any meaningful way.
The short answer, though, is to use TLS unless you have a very specialized problem and a good security expert to help you design another solution (and your security expert will almost certainly say "use TLS").
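As a concrete illustration of the TLS route, here is a sketch of layering TLS over an already-connected socket with the JDK's JSSE APIs, reusing the question's example host/port as placeholders; it assumes the server's certificate chains to something in the default trust store, and mutual (client-certificate) authentication would additionally require an SSLContext configured with a client key store.

    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;
    import java.net.Socket;

    public class UpgradeToTls {
        public static void main(String[] args) throws Exception {
            // An already-connected plain socket, i.e. the "existing socket" from the question.
            Socket plain = new Socket("server.example.com", 20);

            // Layer TLS on top of the existing connection; the server certificate is
            // verified against the default trust store during the handshake.
            SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
            SSLSocket tls = (SSLSocket) factory.createSocket(plain, "server.example.com", 20, true);
            tls.startHandshake();

            // From here on, read/write via tls.getInputStream()/getOutputStream() as usual.
            tls.close();
        }
    }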

Does Openfire Support TLS Over Http-Bind? If not, what's the alternative?

I apologize in advance for the somewhat long windedness of this question but I feel that I need to provide some additional information in order to properly qualify my current predicament.
Background
Okay, so in many ways this question is a follow-up to a previous question I asked regarding TLS/SSL encryption for XMPP communication and which libraries were the best. At first I resigned myself to using only .NET libraries that used TLS/SSL, but I have since expanded my scope to include Java libraries as a suitable alternative and have attempted a simple implementation of the Smack API as well. After exhaustive (and largely misguided) research regarding TLS/SSL encryption, I realized that when Openfire is properly configured to block non-secure connections, most XMPP clients connecting to Openfire will simply auto-negotiate TLS-encrypted communications, and that as long as I controlled the user roster on the server side (i.e. disabled users' ability to create new accounts from any client), I could more or less create secure end-to-end XMPP collaboration through Openfire.
The New Problem
Once I got the previous issues settled, I attempted to use this method for secure communication over HTTP binding via Openfire's HTTP-binding functionality and ports. The reason for this is that our implementation will require users to connect to our Openfire server from additional networks. Additionally, and perhaps obviously, we will have no control over how these users' firewalls will be configured to allow outgoing socket connections over port 5222; what's more, due to the nature of the system we are implementing, it is highly unlikely that any of our clients will be willing/allowed to open their firewall to establish a socket connection to our XMPP server.
The issue is that Openfire's Http-Bind does not appear to support auto TLS and instead only supports (as Openfire puts it) the 'Old SSL' method of encryption. This and other Openfire socket vs. HTTP questions are discussed in another question here, although not yet at great length.
The Question (Finally)
First, can anyone confirm that Http-Bind through Openfire actually does not support auto TLS?
Second, does the Smack API support Http-Bind? There is an existing ticket on Ignite Realtime's website that seems to state that it is not supported; however, the ticket was created in 2007, and its last comment from June 2011, asking whether any update has been made on this feature, has as of yet gone unanswered.
Third, it seems as though my last resort for achieving secure communication using Openfire and Http-Bind would be to use the 'Old SSL' method; however, this does not seem like a good long-term solution. Also, the Openfire forums and various other rumor mills have indicated that SSL functionality will be deprecated in future Openfire releases (can anyone give credence to this rumor?). All that being said, is SSL my only real alternative for a secure connection using Http-Bind?
By default, Openfire opens two ports for HTTP-Binding (BOSH) based connectivity. One is a plain-text port (7080), one is a TLS/SSL-encrypted port (7483). This is much like the two ports used for regular socket connections (5222,5223).
Clients connecting over the non-HTTP, regular socket port (5222) can elevate the initially plain-text channel to an encrypted channel (using STARTTLS). When STARTTLS was introduced (back in ... well, I didn't have kids then), the pre-existing, TLS/SSL-encrypted port (5223) got referred to as the 'old' way of doing encryption. Somewhat overzealously perhaps, some suggestions were made to drop the 'old' technique in favor of the 'new' one.
STARTTLS has not been explicitly added to the 'plain text' port (7080) of the HTTP-Binding (BOSH) implementation. This is by design. Unlike the 'plain socket' connectivity on port 5222, BOSH makes use of a transport protocol: HTTP. Channel encryption for BOSH should be done at the HTTP (transport) layer (port 7483), not at the XMPP (application) layer (which translates back to the 'old' way of doing things in the non-HTTP-based world). This is not Openfire-specific, by the way: it's specified by the BOSH protocol.
As for the deprecation of the 'old' SSL-based ports: the general consensus (among Openfire developers) is that there's no point in removing the 'old' SSL port: although the technique is somewhat outdated, it is no less secure than the more modern (STARTTLS) technique. On top of that, the discussion about dropping 'old' SSL ports is oriented towards non-HTTP-based connectivity (socket connections for clients, server-to-server, external components, etc.). And finally, the discussion is somewhat distorted by a similar yet distinct discussion on whether to change the default port numbering for BOSH (Openfire's usage of 7080/7483 predates the definition of standard BOSH port numbering).
As, by design, the BOSH implementation is intended to utilize HTTP-provided encryption, its encrypted port will continue to exist.
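To make the "encrypt at the HTTP layer" point concrete, here is a sketch of sending a XEP-0124 session-creation request over HTTPS to the encrypted BOSH port mentioned above, using the JDK's HTTP client (Java 11+); the host name, the /http-bind/ path, and the rid value are placeholders, and all channel encryption comes from HTTPS rather than from STARTTLS.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class BoshSessionRequest {
        public static void main(String[] args) throws Exception {
            // XEP-0124 session-creation request body, sent over HTTPS to the encrypted BOSH port.
            String body = "<body content='text/xml; charset=utf-8' to='example.com'"
                    + " hold='1' wait='60' rid='1573741820' xml:lang='en'"
                    + " xmpp:version='1.0' xmlns='http://jabber.org/protocol/httpbind'"
                    + " xmlns:xmpp='urn:xmpp:xbosh'/>";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://xmpp.example.com:7483/http-bind/"))
                    .header("Content-Type", "text/xml; charset=utf-8")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }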
As for the Smack-supporting-BOSH question: Smack supports that: https://www.igniterealtime.org/builds/smack/docs/latest/javadoc/org/jivesoftware/smack/bosh/XMPPBOSHConnection.html