How do I safeguard against the OpenSSL vulnerability until I can patch it? - server-side-attacks

The X509_verify_cert function in crypto/x509/x509_vfy.c in OpenSSL 1.0.1n, 1.0.1o, 1.0.2b, and 1.0.2c does not properly process X.509 Basic Constraints cA values during identification of alternative certificate chains, which allows remote attackers to spoof a Certification Authority role and trigger unintended certificate verifications via a valid leaf certificate. I am interested in mitigating this threat until I can install the patch. Has anybody implemented a viable mitigation technique for this vulnerability while the patch clears testing?

Related

How to check Google Certificate Transparency logs to detect malicious SSL certificates for my domain

I would like to use the Google Certificate Transparency API to check for malicious SSL certificates (if any) issued for my domain. I am able to get all the certificates, but how do I check whether a certificate is legitimate or not?
I found this repository (https://github.com/ProtonMail/ct-monitor), but it simply searches for certificates and stores them. What is the use of storing these certificates unless we validate them first?
Can anyone suggest how I can identify malicious SSL certificates using the Google Certificate Transparency API?
Certificate Transparency logs are, as explained on the CT site, "simple network services that maintain cryptographically assured, publicly auditable, append-only records of certificates. Anyone can submit certificates to a log, although certificate authorities will likely be the foremost submitters."
Logging certificates in this fashion allows interested parties (e.g. domain owners) to monitor the logs for malicious/erroneous entries.
But a certificate being logged in a CT log doesn't mean it isn't a bad certificate. As explained on the CT site: "Certificate Transparency relies on existing mitigation mechanisms to address harmful certificates and CAs--for example, certificate revocation--the shortened detection time will speed up the overall mitigation process when harmful certificates or CAs are discovered."
So the CT API won't help you work out whether a certificate is malicious - you need to check using other methods, such as consulting certificate revocation lists (CRLs) or using the Online Certificate Status Protocol (OCSP). See this related question on how to check certs. There are sites that allow checking of certificates, e.g. revocationcheck.com. Modern browsers seem to be converging on the use of compressed lists of CRLs - Mozilla is now using CRLite, whilst Chrome uses CRLSets.
The CT API does allow you to verify that a certificate has been logged in the CT logs, which means domain owners can monitor them and promptly have any malicious/erroneous certificates revoked (added to the relevant CRLs) so they can no longer be used.
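As a rough illustration of the revocation-checking route (not the CT API itself), here is a minimal C# sketch that builds a chain for a certificate with online CRL/OCSP checking enabled; the file name "suspect.cer" is a placeholder for a certificate you pulled out of a CT log entry.

using System;
using System.Security.Cryptography.X509Certificates;

// Sketch: check a certificate's revocation status via the platform chain engine.
// "suspect.cer" is a placeholder for a certificate obtained from a CT log entry.
var cert = new X509Certificate2("suspect.cer");

using var chain = new X509Chain();
chain.ChainPolicy.RevocationMode = X509RevocationMode.Online;    // consult CRL/OCSP
chain.ChainPolicy.RevocationFlag = X509RevocationFlag.EntireChain;

bool ok = chain.Build(cert);
foreach (var status in chain.ChainStatus)
    Console.WriteLine($"{status.Status}: {status.StatusInformation}");
Console.WriteLine(ok ? "Chain builds and no revocation found."
                     : "Chain failed (untrusted, expired or revoked).");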

Certificate-based authentication on an internet-facing secure site

I have to develop a web application that is both secured over HTTPS and uses client authentication certificates. The clients connect by invitation, so it is not intended for users who stumble upon the application by googling around.
The ideal would be to get an intermediate CA certificate from a public root authority and use it both to sign the SSL certificate and to issue client authentication certificates. I think that won't work, as, simply put, I will never qualify for such an intermediate CA (as far as I know, but maybe I am wrong about that).
Second guess: create my own root CA and an intermediate CA and use them. Because of what I wrote about the users, I can embed the necessary certificate chain in the issued certificates. This technically works.
What I would prefer is to get an SSL certificate from a public authority and use my own chain to issue authentication certificates and verify the users. According to this it is possible. But I haven't found anything about how to configure IIS (or Kestrel) to request client certificates issued by a specific CA, let alone a standard specification where this flow is described.
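For the Kestrel case, here is a minimal sketch of one way this could look, assuming ASP.NET Core with the Microsoft.AspNetCore.Authentication.Certificate package; the CA file name is a placeholder. TLS itself only lets the server request a client certificate, so restricting acceptance to a specific issuer is done by validating the presented certificate's chain against your own CA.

using Microsoft.AspNetCore.Authentication.Certificate;
using Microsoft.AspNetCore.Server.Kestrel.Https;
using System.Security.Cryptography.X509Certificates;

// Sketch: require a client certificate at the TLS layer, then accept only
// certificates chaining to our own private client CA. "my-client-ca.cer"
// is a placeholder for the CA certificate you issue client certs from.
var builder = WebApplication.CreateBuilder(args);

builder.WebHost.ConfigureKestrel(kestrel =>
{
    kestrel.ConfigureHttpsDefaults(https =>
    {
        https.ClientCertificateMode = ClientCertificateMode.RequireCertificate;
        https.AllowAnyClientCertificate();   // defer chain validation to the auth handler below
    });
});

builder.Services
    .AddAuthentication(CertificateAuthenticationDefaults.AuthenticationScheme)
    .AddCertificate(options =>
    {
        // Trust only our private CA instead of the machine's root store.
        options.ChainTrustValidationMode = X509ChainTrustMode.CustomRootTrust;
        options.CustomTrustStore.Add(new X509Certificate2("my-client-ca.cer"));
        options.RevocationMode = X509RevocationMode.NoCheck; // assumption: private CA without a CRL endpoint
    });

var app = builder.Build();
app.UseAuthentication();
app.MapGet("/", () => "hello, authenticated client");
app.Run();

This is only a sketch under those assumptions; IIS has its own client-certificate settings, and I have not seen a standard that mandates how a server advertises the acceptable issuing CA.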

Using self-signed X509 certs to secure a production SF Cluster

I'm going down the path of figuring out the details of securing our SF clusters. I'm finding that the docs note in a number of places not to use self-signed certs for production workloads, but nowhere do they explain why.
Can anyone from the SF team explain why a self-signed X509 cert is not as secure as one issued by a known CA? I thought the only real difference is that self-signed certs do not chain to a trusted root authority, which would mean clients might not see the cert as valid. But with node-to-node security, why would this matter?
So what risk am I taking if I use self-signed certs for node-to-node or even client-to-node security of my production SF clusters?
For client to node: as anyone can spoof your self-signed certificate, you won't be able to assert from the client that you're actually talking to the correct server. Also, there's no way to revoke a self-signed cert. Finally, end users will see that nasty security warning in the address bar.
For node to node: the same thing applies, but since it's in a VNet behind the load balancer, the risk of tampering is lower.
Encryption of the data itself will work with either type of certificate, but a MITM attack is made easier.
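To make the client-to-node point concrete, here is a hedged C# sketch (not SF-specific) of what a client has to fall back on with a self-signed server certificate: chain validation to a public root cannot succeed, so the only anchor left is something pinned out of band, such as the certificate thumbprint. The thumbprint and endpoint below are hypothetical placeholders.

using System;
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;

// Sketch: accept a self-signed server certificate only if its thumbprint matches
// a value distributed out of band. "0123ABCD..." is a hypothetical placeholder.
const string pinnedThumbprint = "0123ABCD...";

var handler = new HttpClientHandler
{
    ServerCertificateCustomValidationCallback = (request, cert, chain, sslErrors) =>
    {
        // With a CA-issued certificate you would normally require sslErrors == None
        // and let ordinary chain building do the work. With a self-signed certificate
        // chain validation fails, so trust rests entirely on the pinned thumbprint.
        return cert is not null &&
               string.Equals(cert.Thumbprint, pinnedThumbprint, StringComparison.OrdinalIgnoreCase);
    }
};

using var client = new HttpClient(handler);
// client.GetAsync("https://my-cluster-endpoint:19080/") ...  (placeholder endpoint)

Configuring cluster certificates by thumbprint is effectively this kind of pinning, which is part of why node-to-node security still functions with self-signed certs; the lack of revocation and the rotation overhead remain.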

Dispute over the efficacy of using public CA certs to secure SAML assertions

Here's the question:
Is there any benefit to securing a SAML assertion with a CA cert? I understand how using a CA cert is of benefit when establishing the SSL connection over which the SAML assertion is transported, but what about a CA cert for the PKI handshake that occurs when the SP accepts the SAML assertion itself? I have one side contending that within the SAML exchange there's no way for the SP to iterate through the chain of trust to the root CA cert, while on the other side I have someone saying that it can.
Bonus points if you can point me to an authoritative source that supports your answer.
If I understand you correctly, you wonder if there is any point in using a certificate signed by a CA when signing the SAML assertion.
In my opinion you should not need this. When you establish the initial trust and exchange metadata, you can include the public key of the entity in the metadata.
If you can trust that the exchange of metadata is secure, you can just verify the signature against the public key in the metadata.
I cannot see how a CA would add any value in this situation.
I agree. Although in the standard Shibboleth metadata-sharing mechanism (federation), the whole published metadata block is signed by the federation certificate, so PKI may be (and probably is) used to distribute service and IdP metadata between security partners. But as Stefan wrote, there is no point in signing the assertion with a certificate signed by a trusted CA.
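A minimal C# sketch of the "verify against the key from metadata" idea, assuming you already hold the IdP's signing certificate (here loaded from a local file, a placeholder for whatever your metadata exchange provides):

using System;
using System.Security.Cryptography.X509Certificates;
using System.Security.Cryptography.Xml;   // System.Security.Cryptography.Xml package
using System.Xml;

// Sketch: verify the XML signature on a SAML assertion against a known certificate,
// rather than walking a chain up to a public root CA. "idp-signing.cer" and
// "assertion.xml" are placeholders.
var idpCert = new X509Certificate2("idp-signing.cer");

var doc = new XmlDocument { PreserveWhitespace = true };
doc.Load("assertion.xml");

var signatureElement = (XmlElement)doc.GetElementsByTagName("Signature", SignedXml.XmlDsigNamespaceUrl)[0];
var signedXml = new SignedXml(doc);
signedXml.LoadXml(signatureElement);

// verifySignatureOnly: true places trust directly in idpCert (metadata trust),
// not in a chain built up to a root CA.
bool valid = signedXml.CheckSignature(idpCert, verifySignatureOnly: true);
Console.WriteLine(valid ? "Assertion signature valid." : "Assertion signature INVALID.");

In practice you would also confirm that the verified signature actually covers the assertion you consume (to avoid signature-wrapping issues), which is one of the things a proper SAML library handles for you.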

PKI: the procedure of checking certificate revocation status and its setup

Good day!
In my ASP.NET web application I need to check incoming digital signatures of files. I do this by calling:
SignedCms.CheckSignature(false) or SignerInfo.CheckSignature(false) (C#).
I want to ensure that the signers' certificates are correctly checked for revocation during these calls, that the system settings are correct, and to understand the process clearly.
The incoming signers' certificates may come from a large number of CAs, so a signer's certificate may or may not include references to the CA's OCSP service or the CA's CRL service.
I want the system to check revocation as follows:
IF the certificate has a reference to the CA's OCSP web service, THEN the system makes an OCSP request to the CA,
ELSE
IF the certificate has a reference to the CA's online CRL service, the system downloads the CRL and uses it,
ELSE
the system uses the local CRL.
Could you please answer my questions:
How (and where) can I find the system settings that describe the needed behavior? (Is this behavior changeable or fixed?)
If the certificate has a reference to the CA's CRL web service, must I periodically download and install the CA's CRL by script, or can I rely on the system to download and use it automatically when it needs the CRL for checking?
Thank you.
The procedure is described in RFC 5280 and is very complicated. In brief, you do the following:
1. For the certificate in question, check its signature, validity period, and key usage.
2. Check the certificate against the CRL.
3. Check the certificate against the OCSP responder.
4. For each certificate encountered during CRL and/or OCSP checking, perform steps 1-3 (this in turn can involve multiple CRL and OCSP checks).
I am not mentioning policy checks, which are very complicated, here.
It took me about a month to implement the certificate validator for our SecureBlackbox library (but we have everything of our own, from CRL and OCSP clients to management of those CRLs). If you want to perform certificate checking using OS means, you should look for an existing function that does the job for you.
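For the OS-based route in the asker's C# scenario, a hedged sketch of what that could look like: let SignedCms verify the signature itself, then build an X509Chain over each signer certificate with online revocation checking, so that OCSP vs. CRL selection and CRL caching are handled by the platform underneath. The file name is a placeholder.

using System;
using System.IO;
using System.Security.Cryptography.Pkcs;
using System.Security.Cryptography.X509Certificates;

// Sketch: verify a CMS/PKCS#7 signature, then check each signer's certificate
// for revocation using the OS chain engine. "signed.p7s" is a placeholder.
var cms = new SignedCms();
cms.Decode(File.ReadAllBytes("signed.p7s"));
cms.CheckSignature(verifySignatureOnly: true);   // signature only; chain/revocation done below

foreach (SignerInfo signer in cms.SignerInfos)
{
    if (signer.Certificate is null) continue;

    using var chain = new X509Chain();
    chain.ChainPolicy.RevocationMode = X509RevocationMode.Online;      // OCSP/CRL handled by the platform
    chain.ChainPolicy.RevocationFlag = X509RevocationFlag.EntireChain;
    chain.ChainPolicy.ExtraStore.AddRange(cms.Certificates);           // intermediates shipped in the message

    bool valid = chain.Build(signer.Certificate);
    Console.WriteLine(valid
        ? "Signer certificate chain OK, no revocation found."
        : string.Join("; ", Array.ConvertAll(chain.ChainStatus, s => s.Status.ToString())));
}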
You can find some useful information on the internals of CRL handling, more specifically on CRL caching: crl-caching-in-windows-and-little-bit
There are two approaches I am thinking of here. In the first, Windows CryptoAPI takes care of handling revocation by itself automatically, including caching. The issue is that CryptoAPI will contact the CA server only when the current CRL in the cache expires, so there is the difficulty of maintaining the freshness of the CRL. But if your CA publishes its CRL on a daily basis, you may get away with this approach of calling the Windows CertVerifyRevocation function with appropriate configuration for some applications.
The second approach is to download and install the CRL from the CA server and use CertFindCertificateInCRL for validation. Your CRL downloader application can be configured to update the CRL at a predefined time interval. This will work if the CA publishes only a base CRL, as you get the entire list of revoked certificates every time you download the CRL. I do not know what happens if the CA publishes delta CRLs and the base CRL at less frequent intervals.