FIPS policy blocks access to RDS-Postgres on AWS - postgresql

I am facing an access problem when connecting to a Postgres database instance on AWS/RDS. The connection is made using the Npgsql library. By debugging and logging, I found that the problem occurs during authentication: the pg_hba.conf configuration that RDS sets up behind the scenes requires MD5-hashed passwords, but MD5 is not a FIPS-compliant algorithm, so I get an exception.
I cannot bypass FIPS compliance because of a company domain rule. If I try to set the registry value
Computer\HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Lsa\FipsAlgorithmPolicy
to 0, it is set back to 1 after a while by that policy.
Is there a way to change the encryption method for the connection password in RDS/Postgres?
Is there a way to overcome this problem some other way, for example by editing the application's web.config file?
Thank you.

I have solved it by bypassing the FIPS compliance check, adding the following setting to the .NET machine.config file:
<configuration>
  <runtime>
    <enforceFIPSPolicy enabled="false"/>
  </runtime>
</configuration>
The solution was inspired by this post: https://blogs.msdn.microsoft.com/shawnfa/2008/03/14/disabling-the-fips-algorithm-check/
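For completeness, the same runtime element can also be applied per application rather than machine-wide, which matches the web.config idea from the question. A minimal sketch for the application's own web.config (or app.config); the element is documented to work at application level, but I have only verified the machine.config route myself:
<configuration>
  <runtime>
    <!-- Disables the .NET FIPS algorithm check for this application only -->
    <enforceFIPSPolicy enabled="false"/>
  </runtime>
</configuration>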

Related

PostgreSQL connection requires a valid client certificate

I am trying to connect via SSL to a PostgreSQL server using FireDAC in Delphi. I have followed the instructions at the following site:
https://www.howtoforge.com/postgresql-ssl-certificates
I have created all the certificates, configured postgresql.conf as specified so it points to the required files, copied the specified files to the client machine, and installed the root.crt certificate.
Via FireDAC's connection params I have specified the following:
Params.Values['SSL_ca']:=sslCertsPath+'root.crt';
Params.Values['SSL_cert']:=sslCertsPath+'postgresql.crt';
Params.Values['SSL_key']:=sslCertsPath+'postgresql.key';
I am getting a connection error about an invalid client certificate. I am not sure which certificate it is referring to or why it is invalid. Am I specifying the correct client certificates by way of the connection's params? If so, any suggestions as to why I may be getting the error?
Verifying postgresql.crt against root.crt with OpenSSL confirms the certificate is OK.
After over 3 weeks of frustration trying to set up PostgreSQL with SSL using FireDAC, I have finally figured out what the problem is and what the solution is.
For anyone wishing to connect using FireDAC, the howtoforge guide (see link in original post) works fine.
However, do not use the FireDAC parameters in my original post. PostgreSQL does not use them. You need to use the PGAdvanced parameter.
But even after figuring this out, I still could not get it to work for weeks, until a test produced an error message that finally made clear what I was doing wrong: on Windows, PostgreSQL strips out path delimiters unless you escape them (as far as I can see, this is not mentioned in the PostgreSQL or FireDAC help files).
Below is an example of the correct way to connect using FireDAC parameters for SSL:
Params.values['PGAdvanced']:='sslmode=verify-ca sslrootcert=C:\\ProgramData\\MWC\\Viewer\\Certs\\root.crt sslcert=C:\\ProgramData\\MWC\\Viewer\\Certs\\postgresql.crt sslkey=C:\\ProgramData\\MWC\\Viewer\\Certs\\postgresql.key';
If you don't wish to use a root certificate, set sslmode to require.
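For reference, a sketch of that require variant, reusing the escaped paths from the example above and dropping the root certificate:
Params.Values['PGAdvanced']:='sslmode=require sslcert=C:\\ProgramData\\MWC\\Viewer\\Certs\\postgresql.crt sslkey=C:\\ProgramData\\MWC\\Viewer\\Certs\\postgresql.key';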

Bitvise SSH Client command line (stnlc.exe) gets an error while the GUI client connects successfully

I'm integrating the Bitvise client into my WinForms app, using the Bitvise SSH Client command line (stnlc.exe in the app's directory). My app needs to hold multiple connections at the same time.
It works well with some addresses, but not with others. This is the command I'm using:
"C:\Program Files (x86)\Bitvise SSH Client\stnlc.exe" -profile="C:\Users\AutoOffer\AutoOffer\bin\Debug\data\sshprofile.bscp" -host=<myhost> -port=22 -user=<username> -pw=<password> -ka=y -proxyFwding=y -proxyListIntf=127.0.0.1 -proxyListPort=<port>
And this is the error I got:
Bitvise SSH Client 6.45 - stnlc - free for individual use only, see EULA
Copyright (C) 2000-2015 by Bitvise Limited.
Connecting to SSH2 server XX.XX.XX.XX:22.
Connection established.
Server version: SSH-2.0-dropbear_0.46
First key exchange started.
ERROR: The SSH2 session has terminated with error.
Reason: Error class: LocalSshDisconn, code: KeyExchangeFailed, message: FlowSshTransport: no mutually supported key exchange algorithm.
Local list: "ecdh-sha2-1.3.132.0.10,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group14-sha1".
Remote list: "diffie-hellman-group1-sha1".
I tried to connect manually with the Bitvise GUI app, and it connected successfully!
I also updated Bitvise to the latest version (6.45).
Local list: "ecdh-sha2-1.3.132.0.10,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group14-sha1".
Remote list: "diffie-hellman-group1-sha1".
So it looks like the remote side just supports diffie-hellman-group1-sha1, which is not supported on your side.
On Bitvise SSH Server Version History I read:
The 1024-bit fixed prime Diffie Hellman key exchange methods, diffie-hellman-group1-sha1 and gssapi-group1-sha1 with Kerberos 5, are now disabled by default, due to doubts about continuing security of Diffie Hellman with a 1024-bit fixed prime. Compatibility with most older clients should be retained via the diffie-hellman-group14-sha1 method, which uses a 2048-bit fixed prime. We recommend migrating older SSH clients to new versions supporting ECDH and ECDSA.
So it looks like you have to modify the client settings and allow the 1024-bit fixed prime Diffie-Hellman key exchange method; otherwise you will not be able to connect. As explained, it is of course better to change the SSH server instead.
Also, please note that running stnlc as a service is a possibility. That way the tunnel can be started without a user having to log on, and can be restarted if it drops.
Be aware that wrapping and running stnlc as a service (using e.g. nssm or winsw) absolutely requires adding the unat=y option to prevent the service from going interactive and failing.
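As a sketch only (the service name BitviseTunnel is a placeholder; the profile path and options are taken from the question, with unat=y added as required), registering stnlc with nssm could look like:
nssm install BitviseTunnel "C:\Program Files (x86)\Bitvise SSH Client\stnlc.exe" -profile="C:\Users\AutoOffer\AutoOffer\bin\Debug\data\sshprofile.bscp" -unat=y -ka=y -proxyFwding=y -proxyListIntf=127.0.0.1
nssm start BitviseTunnel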

Making the mdf location accessible in Management Studio and easy to create when distributing the application

I am using Entity Framework Code First to create my database.
Here is the current connection string
"Integrated Security=SSPI;MultipleActiveResultSets=True;Pooling=false;Data Source=(localdb)\\v11.0;Initial Catalog=Inventory"
However this database is not visible when I try to attach it inside SQL Server Management Studio.
This is because the account that runs the SQL Server service would need to have access to my user folder in order to see it.
I tried giving this account access but had problems due to permissions of other things in my user folder.
Thus I thought I should perhaps specify a folder for the database to be created in, but I am unsure how to do this and what other problems this approach may bring.
[Update]
I am now investigating setting AttachDbFilename in app.config.
This link is helpful; however, I'm not clear on how to set up |DataDirectory| for a WinForms app.
[Update]
The following connection string works
<add name="ConnectionString" connectionString="Integrated Security=SSPI;MultipleActiveResultSets=True;Pooling=false;Data Source=(localdb)\v11.0;AttachDbFilename=c:\databases\MyDatabase.mdf;"/>
It would be helpful to know how to configure the path to be the same as the exe file location.
You may first use SQL Server Management Studio (SSMS) to connect to your LocalDB instance (server name: (localdb)\v11.0, Windows authentication),
make a backup of your LocalDB database (right-click the db -> Tasks -> Back Up),
then share the db backup file with the other system.
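Equivalently, the backup step can be scripted in T-SQL; a sketch with the database name taken from the Initial Catalog in the question and a placeholder target path:
BACKUP DATABASE [Inventory]
TO DISK = 'C:\backups\Inventory.bak'
WITH INIT; -- overwrite any existing backup set in the file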
I wound up placing the following in Main()
AppDomain.CurrentDomain.SetData("DataDirectory", AppDomain.CurrentDomain.BaseDirectory);
and the following in app.config
<add name="ConnectionString" connectionString="Integrated Security=SSPI;MultipleActiveResultSets=True;Pooling=false;Data Source=(localdb)\v11.0;AttachDbFilename=|DataDirectory|\MyDatabase.mdf;"/>
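To show how the two pieces fit together, here is a minimal WinForms bootstrap sketch; Program and MainForm are placeholder names, and the SetData call must run before the first database access:
using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Point |DataDirectory| at the folder containing the .exe,
        // so AttachDbFilename resolves next to the application.
        AppDomain.CurrentDomain.SetData("DataDirectory",
            AppDomain.CurrentDomain.BaseDirectory);

        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);
        Application.Run(new MainForm()); // placeholder form
    }
}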

__RequestVerificationToken always the same when reloading form in MVC4 application

On a development server, a standard MVC4 / EF4.5 login form with Html.AntiForgeryToken() refreshes its value on every page load. When the same code is deployed on IIS, the hidden __RequestVerificationToken value is always the same (at least within one browser session). Other similar applications on the same server do not seem to have this behavior.
Which web.config/IIS parameter might be responsible for this?
I have already tried setting the machine key (single server), but this only seems to shorten the token; the refresh problem remains the same.
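For reference, the static machine key I tested goes under system.web in web.config; a sketch with the key material deliberately elided (generate your own values rather than copying any from the web):
<system.web>
  <!-- validationKey and decryptionKey elided; use generated values -->
  <machineKey validationKey="..." decryptionKey="..." validation="SHA1" decryption="AES"/>
</system.web>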
Also tested in different browsers. Here are some demo values for IIS and development server:
IIS:
Without machine key in web.config:
BGxqV7DjpHomi22By0r70WebHiWMV2OcsrCMN-dNDkRElZrv6BMQH23_zK9abmRsty_n1NImH2-gEsi3nBrWIQ2
With machine key in web.config:
dXBdht7mn2plT2rPvv0HzWtFvn-N9MT6xzW_xc8dVqnLdofzrL5v0SZFMAFPTANR0
Cassini / development:
Without machine key in web.config:
Yedkrxms9oYmHGzhV93qsrryVuNKZSWKBwCkP-RzK-tAZGgQ6J5g6Yp0LsCQPehucVwDcUs5lfRUf6Y6FxYUqY0olkE3-PmtF0ZnrCcbXD6XuA1PgPoFchreTPnCCSCwsh3E3FPmdKPlabyOfqiykkVqocxzYBMqd7A3bCZIxU01
With machine key in web.config:
iFjqi1OYplYfhCYdflAw1LSncVwK3b1yfDaJRgfrqVamucJ992D3-pFD__RolMZ_edp6muXQWLkxGOQp5Wn2ObTKXltO2J9tq32-JUMGu7cXdYZMkty3MRwuE-SuIFt7zo7TvQ2
Try the fix mentioned in the KB article below. It solved the issue in our environment.
http://support.microsoft.com/kb/2656351

Does Greenplum support Kerberos Authentication between its nodes?

I need to "kerberize" our Greenplum cluster. One aspect of this is that I should kerberize the interface between the GP master and its segment hosts. I have been unable to determine whether this is supported.
I have seen the parameters in the postgresql.conf file (krb_server_keyfile and krb_srvname) and have tried to set them, but it does not seem to work (Greenplum still works, it just does not appear that the connection is kerberized).
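For reference, a sketch of how those two parameters are normally set in stock PostgreSQL of that vintage; the keytab path is a placeholder, and pg_hba.conf must also select the krb5 method before any connection is actually kerberized:
# postgresql.conf
krb_server_keyfile = '/path/to/postgres.keytab'
krb_srvname = 'postgres'
# pg_hba.conf
host    all    all    0.0.0.0/0    krb5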
I did this with Hadoop and it was pretty straightforward, but, again, I cannot figure out how to do it in GP, or whether it is even possible. Any ideas?
Thanks
Refer to the Greenplum HD Manager 1.2 Installation and User Guide for instructions on how to deploy Kerberos. The document is related to Hadoop, but should serve for a general Greenplum install.
So... the answer, as near as I can tell, is this:
First, for clarification, there are two places where I am required to "kerberize" GP. The first is master/segment connectivity. This turned out to be easy enough after I learned that this communication is SSH-based: I simply replaced the generated passwordless RSA/DSA authorization with Kerberos SSH. I am not sure this is really any more or less secure, but it is a requirement nonetheless.
The second is locking down administrative/JDBC access. This should be easy; after all, GP is based upon Postgres, and I have secured Postgres with Kerberos in the past. Unfortunately, GP is based upon Postgres 8.2, which predates the addition of GSS support for Kerberos to Postgres, and I cannot get it to work. I am not positive that it can. Maybe GP will upgrade to 8.4 (at a minimum) soon and I can try that.
First, for clarification, there are two places where I am required to "kerberize" GP. The first in master/slave connectivity. This turned out the be easy enough after I learned this communication is ssh based. I just switched the rsa/dsa generated passwordless authorization with Kerberos SSH. I am not sure this is really any more or less secure, but a requirement none-the-less. The second is locking down the administrative/jdbc access. This should easy, after all GP is based upon Postgres, I have have secured Postgres with Kerberos in the past. Unfortuntaly, GP is based upon Postgres 8.2. This was before GSS support for Kerberos was added to Postgres, and I cannot get this to work. I am not positive that it can. Maybe GP will upgrade to 8.4 (at a minimum) soon and I can try that.