Is there a version of iTextSharp that is FIPS compliant? - itext

I am trying to use iTextSharp to generate PDF documents in my ASP.NET WebForms application using version 4.1.6, but it is throwing an exception on a staging server that has FIPS compliance turned on.
Does anyone know of a version of iTextSharp that is FIPS-compliant?

I recently needed to update some older iTextSharp code to be FIPS compliant -- I used iText 7 (essentially the newest version of iTextSharp), which is FIPS compliant and generated PDFs fine on a FIPS-enabled server.
Porting from iTextSharp to iText 7 wasn't easy, mostly due to a lack of decent documentation, but the update should get past any FIPS compliance issues.
As far as I can tell, the primary FIPS issue with iTextSharp is that it uses MD5, throwing exceptions (particularly on pdf.Close() events) since MD5 is not an approved FIPS hashing algorithm.
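For what it's worth, the same failure mode is easy to reproduce outside .NET: on a server enforcing FIPS, constructing an MD5 digest for security purposes is rejected by the crypto library. A sketch of the principle in Python (illustrative only -- the iTextSharp exception itself comes from .NET's crypto classes, and `safe_digest` is a hypothetical helper, not an iText API):

```python
import hashlib

def safe_digest(data: bytes) -> str:
    """Hash bytes, surviving a FIPS-enforcing crypto library.

    On a FIPS-enforcing system, constructing an MD5 object for
    security purposes raises ValueError; SHA-256 is always allowed.
    """
    try:
        # usedforsecurity=False (Python 3.9+) marks the digest as
        # non-cryptographic, which FIPS builds of OpenSSL permit.
        return hashlib.md5(data, usedforsecurity=False).hexdigest()
    except (ValueError, TypeError):
        # Older Pythons or stricter policies: fall back to a
        # FIPS-approved algorithm instead.
        return hashlib.sha256(data).hexdigest()

print(safe_digest(b"abc"))
```

The point is the shape of the failure: the library refuses to even instantiate MD5, which is why iTextSharp blows up at `pdf.Close()` rather than producing a weak-but-working PDF.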

This is actually more of a big comment rather than an answer. Sorry about that...
"throwing an exception on a staging server that has FIPS compliance turned on" -- more precisely, a staging server that has FIPS validated cryptography enabled.
So, they probably have HKLM\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy (Windows XP and Server 2003) or HKLM\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy\Enabled (Vista and Server 2008) in effect.
Or, they may have done it by hand via How to restrict the use of certain cryptographic algorithms and protocols in Schannel.dll.
throwing an exception ...
Do you know what the exception is? If you know the exception, you might be able to hunt down its use in iTextSharp.
Generally speaking, all the FIPS approved algorithms and implementations are in System.Security.Cryptography and are non-managed. (More correctly, some System.Security.Cryptography classes are wrappers for CAPI calls because CAPI modules hold the validation).
So you might try finding cryptography not within System.Security.Cryptography, or within System.Security.Cryptography but using managed classes. For example, RijndaelManaged will get you in trouble here, and it will cause an exception.
EDIT: according to KB 811833, "System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing" security setting effects in Windows XP and in later versions of Windows:
Microsoft .NET Framework applications such as Microsoft ASP.NET only
allow for using algorithm implementations that are certified by NIST
to be FIPS 140 compliant. Specifically, the only cryptographic
algorithm classes that can be instantiated are those that implement
FIPS-compliant algorithms. The names of these classes end in
"CryptoServiceProvider" or "Cng." Any attempt to create an instance of
other cryptographic algorithm classes, such as classes with names
ending in "Managed," causes an InvalidOperationException exception to
occur.
I think you might simply be between a rock and a hard place:
$ grep -R MD5 * | grep -v "\.svn"
src/core/iTextSharp/text/ImgJBIG2.cs: this.globalHash = DigestAlgorithms.Digest("MD5", this.global);
src/core/iTextSharp/text/pdf/PdfSignatureAppearance.cs: reference.Put(new PdfName("DigestMethod"), new PdfName("MD5"));
src/core/iTextSharp/text/pdf/PdfSignatureAppearance.cs: reference.Put(new PdfName("DigestMethod"), new PdfName("MD5"));
src/core/iTextSharp/text/pdf/PdfEncryption.cs: /** The message digest algorithm MD5 */
src/core/iTextSharp/text/pdf/PdfEncryption.cs: md5 = DigestUtilities.GetDigest("MD5");
...
$ grep -R MD5 * | grep -v "\.svn" | wc -l
128
And:
$ grep -R SHA1 * | grep -v "\.svn"
src/core/iTextSharp/text/error_messages/nl.lng:support.only.sha1.hash.algorithm=Enkel ondersteuning voor SHA1 hash algoritme.
src/core/iTextSharp/text/error_messages/en.lng:support.only.sha1.hash.algorithm=Support only SHA1 hash algorithm.
src/core/iTextSharp/text/pdf/PdfName.cs: public static readonly PdfName ADBE_PKCS7_SHA1 = new PdfName("adbe.pkcs7.sha1");
src/core/iTextSharp/text/pdf/PdfName.cs: public static readonly PdfName ADBE_X509_RSA_SHA1 = new PdfName("adbe.x509.rsa_sha1");
src/core/iTextSharp/text/pdf/AcroFields.cs: if (sub.Equals(PdfName.ADBE_X509_RSA_SHA1)) {
...
$ grep -R SHA1 * | grep -v "\.svn" | wc -l
188
MD5 shows up in 128 places and SHA-1 shows up in 188 places. Those algorithms are buried in that code, and it's probably difficult or impossible to remove them.
You might have to build that on a server that allows weak/wounded ciphers because it appears MD5 and SHA1 might be part of the PDF specification (perhaps a PDF expert can help out here).
FIPS compliance turned on
A quick note about this. You either use validated cryptography, or you don't use validated cryptography. NIST and the DHS auditors are very precise about their use of these terms.
FIPS compliance, FIPS compliant, FIPS approved, FIPS enabled, FIPS <favorite word here> mean nothing. I'm aware that NIST and DHS pulled one vendor's network switches out of US Federal because the vendor's marketing department stated they were FIPS Compliant rather than stating they provided FIPS Validated cryptography.

Related

Why getting SSLCertVerificationError ... self signed certificate in certificate chain - from one machine but not another?

I am trying to test an API on my site. The tests work just fine from one machine, but running the code from a different machine results in the SSLCertVerificationError - which is odd because the site has an SSL cert and is NOT self signed.
Here is the core of my code:
import asyncio
import aiohttp

async def device_connect(basename, start, end):
    url = SERVER_URL
    async with aiohttp.ClientSession() as session:
        post_tasks = []
        # prepare the coroutines that post
        for x in range(start, end):
            myDevice = {'test': 'this'}
            post_tasks.append(do_post(session, url, myDevice))
        # now execute them all at once
        await asyncio.gather(*post_tasks)

async def do_post(session, url, data):
    async with session.post(url, data=data) as response:
        x = await response.text()
I tried (just for testing) to set verify=False or trust_env=True, but I continue to get the same error. On the other computer, this code runs fine with no trust issue.
That error text is somewhat misleading. OpenSSL, which python uses, has dozens of error codes that indicate different ways certificate validation can fail, including
X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN -- the peer's cert can't be chained to a root cert in the local truststore; the chain received from the peer includes a root cert, which is self-signed (because root certs must be self-signed), but that root is not locally trusted
Note this is not talking about the peer/leaf cert; if that is self signed and not trusted, there is a different error X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT which displays as just 'self signed certificate' without the part about 'in certificate chain'.
X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY (displays in text as 'unable to get local issuer certificate') -- the received chain does not contain a self-signed root and the peer's cert can't be chained to a locally trusted root
In both these cases the important info is the peer's cert doesn't chain to a trusted root; whether the received chain includes a self-signed root is less important. It's kind of like if you go to your doctor and after examination in one case s/he tells you "you have cancer, and the weather forecast for tomorrow is a bad storm" or in another case "you have cancer, but the weather forecast for tomorrow is sunny and pleasant". While these are in fact slightly different situations, and you might conceivably want to distinguish them, you need to focus on the part about "you have cancer", not tomorrow's weather.
So, why doesn't it chain to a trusted root? There are several possibilities:
the server is sending a cert chain with a root that SHOULD be trusted, but machine F is using a truststore that does not contain it. Depending on the situation, it might be appropriate to add that root cert to the default truststore (affecting at least all python apps unless specifically coded otherwise, and often other types of programs like C/C++ and Java also) or it might be better to customize the truststore for your application(s) only; or it might be that F is already customized wrongly and just needs to be fixed.
the server is sending a cert chain that actually uses a bad CA, but machine W's truststore has been wrongly configured (again either as a default or customized) to trust it.
machine F is not actually getting the real server's cert chain, because its connection is 'transparently' intercepted by something. This might be something authorized by an admin of the network (like an IDS/IPS/DLP or captive portal) or machine F (like antivirus or other 'endpoint security'), or it might be something very bad like malware or a thief or spy; or it might be in a gray area like some ISPs (try to) intercept connections and insert advertisements (at least in data likely to be displayed to a person like web pages and emails, but these can't always be distinguished).
the (legit) server is sending different cert chains to F (bad) and W (good). This could be intentional, e.g. because W is on a business' internal network while F is coming in from the public net; however you describe this as 'my site' and I assume you would know if it intended to make distinctions like this. OTOH it could be accidental; one fairly common cause is that many servers today use SNI (Server Name Indication) to select among several 'certs' (really cert chains and associated keys); if F is too old it might not be sending SNI, causing the server to send a bad cert chain. Or, some servers use different configurations for IPv4 vs IPv6; F could be connecting over one of these and W the other.
To distinguish these, and determine what (if anything) to fix, you need to look at what certs are actually being received by both machines.
If you have (or can get) OpenSSL on both, do openssl s_client -connect host:port -showcerts. For OpenSSL 1.1.1 up (now common) to omit SNI add -noservername; for older versions to include SNI add -servername host. Add -4 or -6 to control the IP version, if needed. This will show subject and issuer names (s: and i:) for each received cert; if any are different, and especially the last, look at #3 or #4. If the names are the same compare the whole base64 blobs to make sure they are entirely the same (it could be a well-camouflaged attacker). If they are the same, look at #1 or #2.
Alternatively, if policy and permissions allow, get network-level traces with Wireshark or a more basic tool like tcpdump or snoop. In a development environment this is usually easy; if either or both machine(s) is production, or in a supplier, customer/client, or partner environment, maybe not. Check SNI in ClientHello, and in TLS1.2 (or lower, but nowadays lower is usually discouraged or prohibited) look at the Certificate message received; in wireshark you can drill down to any desired level of detail. If both your client(s) and server are new enough to support TLS1.3 (and you can't configure it/them to downgrade) the Certificate message is encrypted and wireshark won't be able to show you the contents unless you can get at least one of your endpoints to export the session secrets in SSLKEYLOGFILE format.
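If the diagnosis turns out to be case #1 (a root missing from machine F's truststore), the usual Python-side fix is to hand the client an SSL context that trusts the right CA instead of disabling verification. A minimal sketch using only the standard-library ssl module (the extra_ca_file path is a placeholder you would point at your root cert in PEM form):

```python
import ssl

def make_ssl_context(extra_ca_file=None):
    """Build a verifying client context, optionally trusting an extra CA.

    Starts from the platform's default truststore, so public CAs keep
    working; extra_ca_file adds (not replaces) trust anchors.
    """
    ctx = ssl.create_default_context()
    if extra_ca_file is not None:
        ctx.load_verify_locations(cafile=extra_ca_file)
    return ctx

ctx = make_ssl_context()
# A default context verifies both the peer certificate and its hostname.
print(ctx.verify_mode == ssl.CERT_REQUIRED, ctx.check_hostname)
```

With aiohttp, a context like this is passed per request or per connector, e.g. session.post(url, data=data, ssl=ctx). Note that aiohttp takes ssl= (or ssl=False to skip verification while testing), not the verify= keyword from the requests library, which is likely why setting verify=False in the question changed nothing.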

How to add changes for NO ENCRYPT DB2 option to db2RestoreStruct

I am trying to restore an encrypted DB to a non-encrypted DB. I made changes by setting piDbEncOpts to SQL_ENCRYPT_DB_NO, but the restore still fails. Is there DB2 sample code where I can check how to set the "NO Encrypt" option in DB2? I am adding the code snippet below.
db2RestoreStruct->piDbEncOpts->encryptDb = SQL_ENCRYPT_DB_NO
The 'C' API named db2Restore will restore an encrypted image to an unencrypted database, when used correctly.
You can use a modified version of IBM's sample files, dbrestore.sqc and related files, to see how to do it.
Depending on your 'C' compiler version and settings you might get a lot of warnings from IBM's code, because IBM does not appear to maintain the code of their samples as the years pass. However, you do not need to run IBM's sample code, you can study it to understand how to fix your own C code.
If installed, the samples component must match your Db2-server version+fixpack, and you must use the C include files that come with your Db2-server version+fixpack to get the relevant definitions.
The modifications to IBM's sample code include:
When using the db2Restore API, ensure its first argument has a value that is compatible with your server Db2-version-and-fixpack to access the required functionality. If you specify the wrong version number for the first argument, for example a version of Db2 that did not support this functionality, then the API will fail. For example, on my Db2-LUW v11.1.4.6, I used the predefined db2Version1113, like this:
db2Restore(db2Version1113, &restoreStruct, &sqlca);
When setting the restore iOptions field: enable the flag DB2RESTORE_NOENCRYPT, for example, in IBM's example, include the additional flag: restoreStruct.iOptions = DB2RESTORE_OFFLINE | DB2RESTORE_DB | DB2RESTORE_NODATALINK | DB2RESTORE_NOROLLFWD | DB2RESTORE_NOENCRYPT;
Ensure the restoredDbAlias differs from the encrypted-backup alias name.
I tested with Db2 v11.1.4.6 (db2Version1113 in the API) with gcc 9.3.
I also tested with Db2 v11.5 (db2Version11500 in the API) with gcc 9.3.

Postgresql : SSL certificate error unable to get local issuer certificate

In PostgreSQL, whenever I execute an API URL with secure connection with query
like below
select *
from http_get('https://url......');
I get an error
SSL certificate problem: unable to get local issuer certificate
For this I have already placed an SSL folder in my Azure database installation at the following path
C:\Program Files\PostgreSQL\9.6\ssl\certs
What should I do to get rid of this? Is there any SSL extension available, or do I require configuration changes or any other effort?
Please let me know the possible solutions for it.
A few questions...
First, are you using this contrib module: https://github.com/pramsey/pgsql-http ?
Is the server that serves https://url....... using a self-signed (or invalid) certificate?
If the answer to those two questions is "yes" then you may not be able to use that contrib module without some modification. I'm not sure how limited your access is to PostgreSQL in Azure, but if you can install your own C-based contrib modules there is some hope...
pgsql-http only exposes certain CURLOPTs (see: https://github.com/pramsey/pgsql-http#curl-options) values which are settable with http_set_curlopt()
For endpoints using self-signed certificates, the CURLOPT you'll want patched in (so that SSL errors can be ignored) is CURLOPT_SSL_VERIFYPEER.
If there are other issues like SSL/TLS protocol or cipher mismatches, there are other CURLOPTs that can be patched-in, but those also are not available without customization of the contrib module.
I don't think anything in your
C:\Program Files\PostgreSQL\9.6\ssl\certs
folder has any effect on the http_get() functionality.
If you don't want to get your hands dirty compiling and installing custom contrib modules, you can create an issue on the github page of the maintainer and see if it gets picked up.
You might also take a peek at https://github.com/pramsey/pgsql-http#why-this-is-a-bad-idea because the author of the module makes several very good points to consider.

Perl Net::LDAPS SSL version negotiation

Platform
OS: RHEL 5.4
Perl: 5.8.8
OpenSSL: 0.9.8e
Net::LDAP: 0.33
Details
We have a perl script which monitors LDAP servers used by our application and sends alerts if these are not reachable or functional in some way. On the whole this works well except for a particular LDAP server which is only accepting SSLv2. The LDAP object creation looks as follows
my $current_server = Net::LDAP->new('ldaps://10.1.1.1:636/',
                                    verify => 'none',
                                    sslversion => 'sslv23',
                                    timeout => 30);
Notice sslv23, which according to the documentation, allows SSLv2 or 3 to be used. The relevant entry in the docs is
sslversion => 'sslv2' | 'sslv3' | 'sslv23' | 'tlsv1'
However, the above does not work. When I change the "sslversion" to be "sslv2" then the script does work.
Conclusion
How can I get Net::LDAPS to retry with sslv2 if sslv3 does not work?
Thanks,
Fred.
To answer my own question - I could not find a way to make the library fall back to SSLv2 if SSLv3 fails so I did that in my own code by first trying SSLv3 and, if that fails, SSLv2. This works for my specific situation.
I encourage anyone who has more information to comment.
Which version of IO::Socket::SSL are you using?
Which version of Net::SSLeay are you using?
If you look at the source code for IO::Socket::SSL, you will see that sslv23 means it will use Net::SSLeay::CTX_new() which is equivalent to Net::SSLeay::CTX_v23_new(). SSLv2 may have been dropped from your library's CTX_v23_new implementation.
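For anyone hitting the modern equivalent of this problem: the sslv23-versus-sslv2 distinction survives today as "negotiate the highest mutually supported version" versus "pin a single version". A loose analogy using Python's ssl module (SSLv2/SSLv3 are gone from current OpenSSL builds, so TLS versions stand in here):

```python
import ssl

# A default client context negotiates, like OpenSSL's SSLv23_method:
# the client offers a range of versions and the server picks one
# both sides support.
flexible = ssl.create_default_context()

# Pinning, like forcing sslversion => 'sslv2', narrows the range
# to exactly one version -- here TLS 1.2 only.
pinned = ssl.create_default_context()
pinned.minimum_version = ssl.TLSVersion.TLSv1_2
pinned.maximum_version = ssl.TLSVersion.TLSv1_2

# The flexible context accepts at least as low a version as the pinned one.
print(flexible.minimum_version <= pinned.minimum_version)
```

The fallback loop described in the accepted answer is the same idea at the application level: try the flexible (or newest) configuration first, then retry with the pinned legacy version only if the handshake fails.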

Enable FIPS on PostgreSQL database

Can someone please specify the steps to enable FIPS on Postgres Database? I have googled but was not able to find anything concrete.
Can someone please specify the steps to enable FIPS on Postgres Database?
I don't believe you can run Postgres in "FIPS mode" because of its use of non-approved cryptography. From a past audit, I know it makes extensive use of MD5 (see, for example, Postgres Mailing List: Use of MD5). So lots of stuff is going to break in practice.
Notwithstanding, here are the steps to try and do it via OpenSSL. There are three parts because Postgres is not FIPS-aware, and you need to make some modifications to Postgres.
Step One
You have to build OpenSSL for this configuration. This is a two-step process: first you build the FIPS Object Module; second, you build the FIPS Capable Library.
To build the FIPS Object Module, first you download openssl-fips-2.n.n.tar.gz. After unpacking, you perform:
./config
make
sudo make install
After you run the above commands, the fipscanister will be located in /usr/local/ssl/fips-2.0. The FIPS Capable Library will use it to provide the FIPS Validated Cryptography.
Second, you download openssl-1.n.n.tar.gz. After unpacking, you perform:
./config fips shared <other options>
make all
sudo make install
The critical part is the fips option during config.
After you run the above commands, you will have a FIPS Capable Library. The library will be located in /usr/local/ssl/lib. Use libcrypto.so and libssl.so as always.
The FIPS Capable Library uses the fipscanister, so you don't need to worry about what's in /usr/local/ssl/fips-2.0. It's just an artifact from building the FIPS Object Module (some hand waving).
Step Two
Find where Postgres calls SSL_library_init:
$ grep -R SSL_library_init *
...
src/backend/libpq/be-secure.c: SSL_library_init();
src/interfaces/libpq/fe-secure.c: SSL_library_init();
Open be-secure.c and fe-secure.c, and add a call to FIPS_mode_set.
/* be-secure.c, near line 725 */
static void
initialize_SSL(void)
{
    struct stat buf;
    STACK_OF(X509_NAME) *root_cert_list = NULL;

#if defined(OPENSSL_FIPS)
    int rc;
    rc = FIPS_mode();
    if (rc == 0)
    {
        rc = FIPS_mode_set(1);
        assert(1 == rc);
    }
#endif

    if (!SSL_context)
    {
#if SSLEAY_VERSION_NUMBER >= 0x0907000L
        OPENSSL_config(NULL);
#endif
        SSL_library_init();
        SSL_load_error_strings();
        ...
    }
    ...
}
If the call to FIPS_mode_set succeeds, then you will be using FIPS Validated cryptography. If it fails, you will still be using OpenSSL's cryptography, but it will not be FIPS Validated cryptography.
You will also need to add the following headers to be-secure.c and fe-secure.c:
#include <openssl/opensslconf.h>
#include <openssl/fips.h>
Step Three
The final step is to ensure you are using the FIPS Capable Library from step one. Do that via CFLAGS and LDFLAGS:
cd postgres-9.3.2
export CFLAGS="-I/usr/local/ssl/include"
export LDFLAGS="-L/usr/local/ssl/lib"
./configure --with-openssl <other options>
...
For PostgreSQL on Red Hat Linux, the https://public.cyber.mil/stigs/downloads/ web site has a Security Technical Implementation Guide for PostgreSQL 9.x which has this check.
Rule Title: PostgreSQL must implement NIST FIPS 140-2 validated
cryptographic modules to protect unclassified information requiring
confidentiality and cryptographic protection, in accordance with the data
owners requirements.
STIG ID: PGS9-00-008200
Rule ID: SV-87645r1_rule
Vuln ID: V-72993
The "Fix Text" reads
Configure OpenSSL to be FIPS compliant.
PostgreSQL uses OpenSSL for cryptographic modules. To configure OpenSSL to
be FIPS 140-2 compliant, see the official RHEL Documentation:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security_Guide/sect-Security_Guide-Federal_Standards_And_Regulations-Federal_Information_Processing_Standard.html
For more information on configuring PostgreSQL to use SSL, see supplementary
content APPENDIX-G.
Joseph Conway pointed out "the Appendix G the STIG refers to is in the PostgreSQL STIG supplement, not the [postgresql.org] docs. You can get the supplement (and the rest of the STIG) here: https://dl.dod.cyber.mil/wp-content/uploads/stigs/zip/U_PGS_SQL_9-x_V2R1_STIG.zip".
As I understand your question you are looking at trying to ensure that you can encrypt data transferred to and from PostgreSQL using AES algorithms. While FIPS goes well beyond that, and well beyond what can be asked in Q&A, that question at least is easily answerable.
The simple solution is to use SSL with a certificate authority of your choice (if you are using Active Directory, you could use Certificate Server, and if not you could use OpenSSL to run your own certificate authority). You could then specify which encryption standards to use (see official docs). From there encryption will be used and your server will be authenticated to your clients. You can also set up client certs and require those to be authenticated as well.