I need to set up a cron job to run a SOAP client. The customer insists that I connect to their web service (on an https address) from an https address. They insist that if I don't, their response to me can't be encrypted.
My first question is, is that true? I thought that as long as I'm connecting to their SOAP service over https, the response back would automatically be encrypted.
If that's true, how can I make a cron job run "as https"? My site is on a LAMP setup with cPanel access.
Thank you in advance for your help!
Your customer's statement seems to be a little unclear in what he/she specifically means by "... connecting from an https address", as there isn't any notion of the term "https address" in the specs, and https URLs only seem to make sense in the context of Request-URIs given in an https request.
Given this lack of clarity I'm only guessing. Nevertheless, it seems to me your client's requirement is most probably connected not to the HTTP protocol itself but to how your TLS connection is established.
If your client is very sensitive with respect to the security of his system - which, if he intends to offer RPC requests, might be a very good idea - he might not want the whole world to be able to establish an encrypted connection to his machines and then rely on some secondary authentication mechanism once the connection has been established.
Besides server authentication, the TLS handshake protocol also provides a means of client authentication via client certificates (the relevant part being section 7 in RFC 5246; see https://www.rfc-editor.org/rfc/rfc5246#section-7). As most users of the public internet don't have any certificates signed by a trusted authority, this feature isn't used much out in the open wild.
While, in the absence of widely used client certificates, web services usually rely on first establishing an encrypted connection and then authenticating users by some kind of challenge-response test like querying for username and password, your client might want to additionally secure access to his machines by requiring a valid client certificate, or even - probably not the best idea - replace a secondary authorization like the one mentioned above with it.
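For illustration, here is a minimal Java sketch of what presenting a client certificate during the TLS handshake looks like; the hostname, keystore file and passwords are placeholders for whatever your customer would actually issue to you, and such a program could then be run from your cron job:

    import java.io.FileInputStream;
    import java.net.URL;
    import java.security.KeyStore;
    import javax.net.ssl.HttpsURLConnection;
    import javax.net.ssl.KeyManagerFactory;
    import javax.net.ssl.SSLContext;

    public class ClientCertExample {
        public static void main(String[] args) throws Exception {
            // Load the client certificate and private key from a PKCS#12
            // keystore; "client.p12" and its password are placeholders.
            KeyStore keyStore = KeyStore.getInstance("PKCS12");
            try (FileInputStream in = new FileInputStream("client.p12")) {
                keyStore.load(in, "changeit".toCharArray());
            }

            KeyManagerFactory kmf = KeyManagerFactory
                .getInstance(KeyManagerFactory.getDefaultAlgorithm());
            kmf.init(keyStore, "changeit".toCharArray());

            // Build an SSLContext that presents the client certificate
            // during the TLS handshake (RFC 5246, section 7).
            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(kmf.getKeyManagers(), null, null);

            HttpsURLConnection conn = (HttpsURLConnection)
                new URL("https://soap.example.com/service").openConnection();
            conn.setSSLSocketFactory(ctx.getSocketFactory());
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }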
Nevertheless, all this is nothing but some ideas I came up with given the riddle in your question.
Most probably the best idea is to just ask your client what he/she meant when saying "... connecting from an https address".
We're currently rewriting various services to use OpenID Connect (via Keycloak).
This works great for any modern browser-based clients, but in our case we also need to support legacy IoT devices, which:
cannot receive a firmware update (and thus are stuck in their current modes of authentication/communication)
are not aware of Keycloak and are not configured to participate in OpenID Connect (and are also only aware of the application's URL, not the Keycloak URL)
authenticate directly with the application using either Basic Authentication or SSL Client Authentication with a certificate.
From the documentation we gathered that mapping each device to a Keycloak user and using the Resource Owner Password Credentials flow would be the way to go in such cases.
We were thinking that it'd be nice to add centralized support for such devices by exposing a reverse proxy that sits in front of all services and performs the following steps:
Receive the IoT device requests (and optionally terminate SSL)
Extract the credentials from the request (either basic auth / client certificate)
Perform the Resource Owner Password Credentials flow against Keycloak to exchange the credentials for an access token (where the IoT device acts as the OAuth Resource Owner and the reverse proxy acts as the OAuth Client)
If successful, enrich the original request with the retrieved access token and forward it to the proxied service
In that way, the entire OpenID Connect authentication is transparent to any legacy devices.
This design could be further improved/optimized by caching the access tokens for the duration they are valid for (using the credentials as the cache key) and refreshing them when they expire.
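For what it's worth, here is a minimal Java sketch of step 3, the token exchange. The realm name, client id/secret and device credentials are all placeholders, and the token endpoint path shown is Keycloak's standard one (older Keycloak versions prefix it with /auth):

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class RopcTokenExchange {
        public static void main(String[] args) throws Exception {
            // Placeholder values: realm, proxy client and device credentials
            // all come from your own deployment.
            String tokenEndpoint =
                "https://keycloak.example.com/realms/myrealm/protocol/openid-connect/token";
            String form = "grant_type=password"
                + "&client_id=" + enc("legacy-proxy")
                + "&client_secret=" + enc("proxy-secret")
                + "&username=" + enc("device-0001")
                + "&password=" + enc("device-password");

            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(tokenEndpoint))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

            // On success the JSON body contains access_token and expires_in;
            // the proxy would cache the token keyed by the device credentials
            // and forward the original request with
            // "Authorization: Bearer <access_token>".
            System.out.println(response.body());
        }

        private static String enc(String s) {
            return URLEncoder.encode(s, StandardCharsets.UTF_8);
        }
    }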
Now, this idea seems like such a no-brainer that we were surprised we couldn't find any existing gateways, reverse proxies or plugins that do this.
So I guess we're in need of a sanity check on:
Is this something that can work as described or are there any obvious flaws with the idea?
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
UPDATE 1: (responding to question) The described legacy IoT devices are (physically) Arduino microcontrollers with baked-in unique credentials. In the context of Keycloak, each such Arduino microcontroller is mapped to a Keycloak user. We're open to suggestions if this is not the most adequate mapping for this use case.
UPDATE 2: (responding to question) Agreed that the Client Credentials Flow would be semantically more correct for such a device-to-device authentication, and any future devices we produce will use it. However, we can't use it for the existing legacy devices for two reasons: 1) the devices only know the server's URL and can't authenticate directly against Keycloak, and 2) we also want to support SSL Client Authentication using an X.509 certificate, and from our understanding Keycloak only supports X.509 client certificate authentication for users, not for clients.
Is this something that can work as described or are there any obvious flaws with the idea?
It works fine, so long as your OP (OpenID Provider) supports the Resource Owner Password Credentials flow, which is deprecated and has been removed from modern OAuth 2.
Why isn't anyone doing this already? (assuming that supporting legacy devices is a major pain point when switching to OpenID Connect)
Lots of reverse proxies do this, just not with resource owner credentials. The ROPC flow was never a good idea, exists for legacy reasons, and has been removed from OAuth 2.1.
I suspect that most people move away from storing and transmitting resource owner credentials as they modernize their architecture.
Our system only has an HTTP Smart Service feature to connect to external systems.
If the target external system we are trying to connect to is “https” then does that make it secure?
We are just putting an authentication token in the header.
Absolutely not.
There are many factors that make a system secure vs. insecure; HTTPS is just one piece of the puzzle. If HTTPS is used, it might mean that the connection is encrypted and no one can eavesdrop, but being 'secure' can mean a lot more than that.
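To make that concrete, here is a hedged Java sketch (the URL and token are placeholders): with HttpsURLConnection, the certificate chain and hostname are validated by default during connect(), so your header token is only sent once the server's identity checks out - but HTTPS still says nothing about how that token is issued, scoped, stored or revoked:

    import java.net.URL;
    import java.security.cert.Certificate;
    import java.security.cert.X509Certificate;
    import javax.net.ssl.HttpsURLConnection;

    public class HttpsInspect {
        public static void main(String[] args) throws Exception {
            // "api.example.com" and the token are placeholders. connect()
            // performs the TLS handshake and, by default, validates the
            // certificate chain and hostname; a failure throws before the
            // Authorization header is ever transmitted.
            HttpsURLConnection conn = (HttpsURLConnection)
                new URL("https://api.example.com/endpoint").openConnection();
            conn.setRequestProperty("Authorization", "Bearer placeholder-token");
            conn.connect();

            System.out.println("Cipher suite: " + conn.getCipherSuite());
            for (Certificate cert : conn.getServerCertificates()) {
                System.out.println("Server cert: "
                    + ((X509Certificate) cert).getSubjectX500Principal());
            }
            conn.disconnect();
        }
    }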
I can't work out a definitive answer on this, but from searching I found two links which seem to indicate that a server (in this case MS Exchange, as per the links) can have different certificates in place for https than for secure SMTP/TLS.
http://technet.microsoft.com/en-GB/library/bb851505(v=exchg.80).aspx
https://www.sslshopper.com/article-how-to-use-ssl-certificates-with-exchange-2007.html
I have an issue which no one has been able to help with here, and this question is a follow-on: I am coming to the suspicion that my first problem is that my machine trusts the https certificate, but not the one being used for SMTP/TLS. But what I'm asking now is: is that even possible?
Going through the diagnostic steps here shows me that the certificates in use when I access my mail server's web interface through https are fully trusted. However, when I look at the debug output of my C# process, it is showing a completely different certificate, issued by one of our servers to itself (the server on which Exchange is installed).
So... anyone know if I'm thinking along the right lines... is it possible that when I make an https connection I get one certificate, and when I use the .NET SMTP client I get a completely different certificate (from exactly the same address, but I assume a different port)?
Is it possible that when I do an https connection I get one certificate and when I use the .net SMTP client I get a completely different certificate (from exactly the same address, but I assume a different port)?
Yes, you can have a different certificate for each listening socket on the machine; that is, SMTP and HTTPS can use different certificates. On a machine with multiple hostnames you could even have multiple different certificates on a single socket, distinguished by the hostname (using SNI).
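You can check this yourself by pulling the leaf certificate from each port. A small Java sketch (the hostname is a placeholder; note that startHandshake() throws if the certificate isn't trusted, which in your self-signed case is itself a useful diagnostic):

    import java.security.cert.X509Certificate;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class PortCertCompare {
        // Prints the subject of the leaf certificate presented on a port.
        static void printCert(String host, int port) throws Exception {
            SSLSocketFactory factory =
                (SSLSocketFactory) SSLSocketFactory.getDefault();
            try (SSLSocket socket = (SSLSocket) factory.createSocket(host, port)) {
                socket.startHandshake();
                X509Certificate leaf = (X509Certificate)
                    socket.getSession().getPeerCertificates()[0];
                System.out.println(port + ": " + leaf.getSubjectX500Principal());
            }
        }

        public static void main(String[] args) throws Exception {
            String host = "mail.example.com"; // placeholder hostname
            printCert(host, 443); // HTTPS
            printCert(host, 465); // implicit-TLS SMTP, if the server offers it
            // Ports 25/587 usually use STARTTLS, which requires speaking SMTP
            // before the handshake, so they are omitted from this sketch.
        }
    }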
I hope I am in the right place asking this question; it's regarding understanding SNI.
According to https://devcentral.f5.com/articles/ssl-profiles-part-7-server-name-indication#.U5wEnfmSzOz
"With the introduction of SNI, the client can indicate the name of the server to which he is attempting to connect as part of the "Client Hello" message in the handshake process"
My question is: how does a client like a browser or any HTTP client (say, java.net) send this server name in the Client Hello? Does the client do it by itself, or do you have to add it programmatically to the https request (e.g., in java.net's HttpsURLConnection)?
Reading from http://www.ietf.org/rfc/rfc4366.txt
"Currently, the only server names supported are DNS hostnames"
So is the hostname the server_name sent by an SNI-compliant client, or can any other name be sent by the client?
I hope I am clear; do improve the question/wording if it's unclear, or let me know.
Thanks.
If you are using an https library to which you can give a URL and have the library fetch the contents of that URL for you, then the clean way to add SNI support is to perform it entirely within the library.
It is the library which parses the URL to find the hostname; the caller never needs to know which part of the URL is the hostname, so the caller couldn't tell the library which hostname to send in the SNI request. If the caller had to somehow figure out the hostname in order to tell the library, that would be a poorly designed library.
You might look a level deeper in the software stack and find that an https library might be building on top of an SSL library. In such a case even the https library does not need to know about SNI. The https library would simply tell the SSL library that it wants a connection to a particular hostname. The SSL library would resolve the hostname to get the IP address to connect to, and the SSL library would also perform the SSL handshake, during which the client may send a hostname as part of SNI and the server sends a hostname as part of a certificate for the client to verify.
During connection setup, the SSL client library needs to use the hostname for three different purposes (DNS resolution, SNI, and certificate verification). It would be trivial to support the use of three different hostnames for those three purposes; the https library already knows the hostname, and passing that hostname three times to the SSL library rather than just once wouldn't be any significant amount of additional work. But it would hardly make sense to support this anyway.
In fact, SNI could be entirely transparent to the https library. It would make sense to extend the SSL library with SNI support without changing the API exposed to the https library. There is little reason to turn off SNI support in a client which supports it, so defaulting to having SNI enabled makes sense.
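To make this concrete in Java: the JDK sends SNI by default (since Java 7) whenever an SSLSocket is created with a hostname, which is why HttpsURLConnection users never have to think about it. A small sketch at the socket level, setting the name explicitly only to show where it lives (the hostname is a placeholder):

    import java.util.List;
    import javax.net.ssl.SNIHostName;
    import javax.net.ssl.SSLParameters;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class SniExample {
        public static void main(String[] args) throws Exception {
            String host = "www.example.com"; // placeholder

            SSLSocketFactory factory =
                (SSLSocketFactory) SSLSocketFactory.getDefault();
            try (SSLSocket socket = (SSLSocket) factory.createSocket(host, 443)) {
                // A socket created with a hostname already sends that name
                // as SNI by default; setting it explicitly just makes the
                // behaviour visible.
                SSLParameters params = socket.getSSLParameters();
                params.setServerNames(List.of(new SNIHostName(host)));
                socket.setSSLParameters(params);

                socket.startHandshake();
                System.out.println("Negotiated: "
                    + socket.getSession().getProtocol());
            }
        }
    }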
Currently I am using Net::LDAP::Server to set up my server, but it is not secure enough.
Is there any module or method so that I can set up an LDAP server over TLS or another secure connection?
I have found plenty of information about how to connect to a secure LDAP server, but I can't find how to set up a secure LDAP server.
Can anyone give some advice?
How does an LDAPS connection work
LDAPS is an unofficial protocol. It is to LDAP what HTTPS is to HTTP, namely the exact same protocol (but in this case LDAPv2 or LDAPv3) running over a secured SSL ("Secure Socket Layer") connection to port 636 (by default).
Not all servers will be configured to listen for LDAPS connections, but if they do, it will commonly be on a different port from the normal plain text LDAP port.
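As an illustration outside of Perl, a minimal Java/JNDI sketch of an LDAPS connection; the hostname is a placeholder:

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;

    public class LdapsClient {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "com.sun.jndi.ldap.LdapCtxFactory");
            // An ldaps:// URL makes JNDI wrap the connection in SSL and, by
            // default, verify the server certificate against the JVM's
            // trust store.
            env.put(Context.PROVIDER_URL, "ldaps://ldap.example.com:636");
            DirContext ctx = new InitialDirContext(env);
            System.out.println("Connected over LDAPS");
            ctx.close();
        }
    }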
Using LDAPS can potentially solve the vulnerabilities described above, but you should be aware that simply "using" SSL is not a magic bullet that automatically makes your system "secure".
First of all, LDAPS can solve the problem of verifying that you are connected to the correct server. When the client and server connect, they perform a special SSL 'handshake', part of which involves the server and client exchanging cryptographic keys, which are described using X.509 certificates. If the client wishes to confirm that it is connected to the correct server, all it needs to do is verify the server's certificate which is sent in the handshake. This is done in two ways:
check that the certificate is signed (trusted) by someone that you trust, and that the certificate hasn't been revoked. For instance, the server's certificate may have been signed by Verisign (www.verisign.com), and you decide that you want to trust Verisign to sign legitimate certificates.
check that the least-significant cn RDN in the server's certificate's DN is the fully-qualified hostname that you connected to when creating the LDAPS object. For example, if the server is ldap.example.com, then the RDN to check is cn=ldap.example.com.
You can do this by using the cafile and capath options when creating a Net::LDAPS object, and by setting the verify option to 'require'.
To prevent hackers 'sniffing' passwords and other information on your connection, you also have to make sure the encryption algorithm used by the SSL connection is good enough. This is also something that gets decided by the SSL handshake - if the client and server cannot agree on an acceptable algorithm the connection is not made.
Net::LDAPS will by default use all the algorithms built into your copy of OpenSSL, except for ones considered to use "low" strength encryption, and those using export strength encryption. You can override this when you create the Net::LDAPS object using the 'ciphers' option.
Once you've made the secure connection, you should also check that the encryption algorithm that is actually being used is one that you find acceptable. Broken servers have been observed in the field which 'fail over' and give you an unencrypted connection, so you ought to check for that.
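Net::LDAPS exposes this via the 'ciphers' option and the established connection; at the raw socket level the same post-connection check looks roughly like this Java sketch (the hostname is a placeholder):

    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class LdapsCipherCheck {
        public static void main(String[] args) throws Exception {
            // 636 is the default LDAPS port; the hostname is a placeholder.
            SSLSocketFactory factory =
                (SSLSocketFactory) SSLSocketFactory.getDefault();
            try (SSLSocket socket =
                     (SSLSocket) factory.createSocket("ldap.example.com", 636)) {
                socket.startHandshake();
                String suite = socket.getSession().getCipherSuite();
                System.out.println("Negotiated cipher suite: " + suite);
                // Abort if the negotiated suite is not on an allow-list of
                // algorithms you consider strong enough.
            }
        }
    }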
How do LDAP and TLS work
SSL is a good solution to many network security problems, but it is not a standard. The IETF corrected some defects in the SSL mechanism and published a standard called RFC 2246 which describes TLS ("Transport Layer Security"), which is simply a cleaned up and standardized version of SSL.
You can only use TLS with an LDAPv3 server. That is because the standard (RFC 2830) for LDAP and TLS requires that the normal LDAP connection (i.e., on port 389) can be switched on demand from plain text into a TLS connection. The switching mechanism uses a special extended LDAP operation, and since these are not legal in LDAPv2, you can only switch to TLS on an LDAPv3 connection.
So the way you use TLS with LDAPv3 is that you create your normal LDAPv3 connection using Net::LDAP::new(), and then you perform the switch using Net::LDAP::start_tls(). The start_tls() method takes pretty much the same arguments as Net::LDAPS::new(), so check above for details.
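For comparison, the Java equivalent of that switch is JNDI's StartTLS extended operation (the hostname is a placeholder):

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.ldap.InitialLdapContext;
    import javax.naming.ldap.LdapContext;
    import javax.naming.ldap.StartTlsRequest;
    import javax.naming.ldap.StartTlsResponse;

    public class StartTlsExample {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "com.sun.jndi.ldap.LdapCtxFactory");
            // Plain LDAP port 389; the connection starts in clear text.
            env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");
            LdapContext ctx = new InitialLdapContext(env, null);

            // Issue the StartTLS extended operation
            // (OID 1.3.6.1.4.1.1466.20037) to upgrade the existing
            // connection, mirroring Net::LDAP::start_tls().
            StartTlsResponse tls = (StartTlsResponse)
                ctx.extendedOperation(new StartTlsRequest());
            tls.negotiate(); // TLS handshake; verifies the server certificate

            // ... binds and searches now travel over the encrypted channel ...

            tls.close();
            ctx.close();
        }
    }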
Well, perhaps LDAPS is not an RFC, but to say it is not a standard or secure is certainly a stretch.
LDAPS is supported by ALL LDAP Server Vendors.
LDAPS is at least as secure as HTTPS.
As with ALL SSL (or TLS) the security weak points are how the certificates are handled.
Certainly LDAPS is more widely supported by LDAP server vendors and clients than is StartTLS. Active Directory, as one example, does not support it. Querying the rootDSE for the supportedExtension 1.3.6.1.4.1.1466.20037 will (should) show whether StartTLS is supported on any particular LDAP server.
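For example, a minimal Java/JNDI sketch of that rootDSE query (the hostname is a placeholder):

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.directory.Attributes;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;

    public class RootDseCheck {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");

            // The rootDSE is read with an empty DN; most servers allow this
            // anonymously.
            DirContext ctx = new InitialDirContext(env);
            Attributes attrs =
                ctx.getAttributes("", new String[] { "supportedExtension" });

            // If 1.3.6.1.4.1.1466.20037 is listed, the server advertises
            // the StartTLS extended operation.
            System.out.println(attrs.get("supportedExtension"));
            ctx.close();
        }
    }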
We have some examples at:
http://ldapwiki.willeke.com/wiki/Perl%20LDAP%20Samples.