Does anyone know if, in DB2, you can run any SQL that shows whether you are connected using a Kerberos connection?
The following might work; it doesn't say explicitly that the connection was authenticated via Kerberos, but it implies authentication via Kerberos.
SELECT 1 FROM SYSIBMADM.DBMCFG
WHERE NAME='authentication'
AND VALUE IN('KERBEROS','KRB_SERVER_ENCRYPT') FETCH FIRST ROW ONLY
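As a quick sanity check from the command line (a sketch only; SAMPLE below is a placeholder database name), you can also look at the configured value itself rather than just testing for it:
db2 connect to SAMPLE
db2 "SELECT NAME, VALUE FROM SYSIBMADM.DBMCFG WHERE NAME = 'authentication'"
db2 connect reset
Keep in mind this reflects the instance-level authentication setting, not the mechanism negotiated by your particular session.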
Following the instructions here, I'm having a problem connecting to the DB from Azure Data Studio using the token I generate. It connects to the DB successfully, but as soon as I run a simple query (I already gave my user read access there), it gives me this connection error and I need to connect using the token again; the disconnection then keeps happening randomly after a short while:
FATAL: Cloud SQL IAM user authentication failed for user
"user#company.com" FATAL: pg_hba.conf rejects connection for host
"...", user "user#company.com", database "db-name",
SSL off
I did some searching and found there is also a way of logging in with IAM database authentication using the Cloud SQL Auth proxy, but the documentation is limited to the Postgres command line and not a GUI database tool like Azure Data Studio. Can anyone shed some light on what's needed if you want to connect with a GUI tool in this case?
And about changing the pg_hba.conf file: since I work with a Cloud SQL instance, I'm not sure how to turn sslmode off on the cloud instance. I checked the connection tab of my instance and SSL encryption wasn't checked there (not sure if that's the same thing), and I changed the sslmode to disable in Azure Data Studio for the connection, but it won't allow me to connect after this change:
FATAL: pg_hba.conf rejects connection for host "*.*.*.*", user "user#company.com", database "database", SSL off
Help, anyone?
I've found the answer: we can connect using IAM database authentication via the Cloud SQL Auth proxy. The only step left to do from the GUI DB tool (mine is Azure Data Studio) is to connect to the IP the Cloud SQL Auth proxy listens on (127.0.0.1 in my case, which is also the default) after starting the Cloud SQL Auth proxy using:
./cloud_sql_proxy -instances=<GCPproject:Region:DBname>=tcp:127.0.0.1:5432
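To sanity-check the proxy before pointing Azure Data Studio at it, a quick psql attempt can help. This is only a sketch, assuming psql is installed locally and reusing placeholder database and IAM user names like the ones above, with the IAM access token supplied as the password; sslmode can stay disabled locally because the proxy itself encrypts the traffic to Cloud SQL.
psql "host=127.0.0.1 port=5432 dbname=db-name user=user@company.com sslmode=disable"
The same host, port, database, and user values then go into the GUI tool's connection dialog.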
I have two systems, system A and system B, and both are DB2 servers. I want to be able to access system B's database from system A. Both have a database called TESTDB. I am trying to run the following commands to create a wrapper and a server.
CREATE WRAPPER "drdawrapper"
LIBRARY 'libdb2drda.so'
OPTIONS (DB2_FENCED 'Y');
db2 "CREATE SERVER "PRD_SERVER_SSL_FLEX" TYPE DB2/UDB VERSION '11' WRAPPER "drdawrapper" AUTHORIZATION "xyz" PASSWORD "xyz" OPTIONS (DB2_CONCAT_NULL_NULL 'Y',DB2_VARCHAR_BLANKPADDED_COMPARISON 'Y',DBNAME 'TESTDB',HOST '169.62.253.230',NO_EMPTY_STRING 'N',PORT '50001',SECURITY 'SSL',STRING_UNITS 'S');"
But I keep getting:
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL1101N Remote database "TESTDB" on node "<unknown>" could not be accessed
with the specified authorization id and password. SQLSTATE=08004
Node directory:
db2 list node directory
Node Directory
Number of entries in the directory = 1
Node 1 entry:
Node name = TESTNODE
Comment =
Directory entry type = LOCAL
Protocol = TCPIP
Hostname = 123.21.23.12
Service name = 50001
The credentials are correct. I am not sure what node it is looking for. Any pointers?
Your question is more about configuration than programming.
As you appear to be encrypting the federated connection it can be wise to first verify that the encrypted connection works at the command-line, separately from federation. This irons out a lot of the detail and is easier to troubleshoot. After you get that working, you can then begin on encrypting the federated connection.
Please follow the detailed instructions here (choose the correct Db2 version):
You have to know in advance which kind of SSL/TLS trust verification you want (i.e. either a single cert (the client trusts the server - simplest and easiest) or multiple certs (both sides trust the other - more setup, arguably more secure)), because this determines the configuration.
Ensure both of your Db2 instances and databases are properly configured for SSL.
Catalog the remote node locally with security SSL (db2 catalog tcpip node ... remote ... server ... security ssl).
Catalog the remote database locally on the new node name (db2 catalog database ... at node ...), followed by db2 terminate.
Verify a command-line connect to the remote database using the federated credentials (db2 connect to remotedb user ... using ...), either with the configured db2dsdriver.cfg if using the SSLSERVERCERTIFICATE method, or with the keystore/stash configuration. Use the same userid/password that you will use later in the create server command.
Once that command-line connect works, you can proceed with the encrypted federation link, via db2 create wrapper... and db2 create server....
There's no need to use quotes around the wrapper name; just let it fold. Otherwise the quotes are just redundant noise, although they are not a mistake.
Inside the script, in the create server command options, instead of AUTHORIZATION "xyz" PASSWORD "xyz" use AUTHORIZATION \"xyz\" PASSWORD \"xyz\" (i.e. escape the quotes so the shell does not strip them), as in the sketch after this list.
For one-sided trust, use SSL_SERVERCERTIFICATE in the create server options clause and ensure the value is accurate (fully qualified path to the remote-db2instance-certificate-file), and that the file/directory permissions are valid.
For mutual trusts, use both SSL_KEYSTORE and SSL_KEYSTASH keywords with correct values, in the create server options clause (having previously ensured your keystores are properly populated, as verified by a command-line connect above).
You may also want to consider create user mapping depending on the requirements.
Finally you can create your nicknames, and test out the federated link by querying those nicknames.
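As a rough sketch for the one-sided trust case (the host, port, database name, and credentials are taken from your question; the node name SSLNODE, the database alias RMTDB, and the certificate path are placeholders, and the client-side keystore or db2dsdriver.cfg is assumed to be configured already), the sequence could look like:
db2 catalog tcpip node SSLNODE remote 169.62.253.230 server 50001 security ssl
db2 catalog database TESTDB as RMTDB at node SSLNODE
db2 terminate
db2 connect to RMTDB user xyz using xyz
db2 "CREATE SERVER PRD_SERVER_SSL_FLEX TYPE DB2/UDB VERSION '11' WRAPPER \"drdawrapper\" AUTHORIZATION \"xyz\" PASSWORD \"xyz\" OPTIONS (DBNAME 'TESTDB', HOST '169.62.253.230', PORT '50001', SECURITY 'SSL', SSL_SERVERCERTIFICATE '/path/to/remote_server_cert.arm')"
Only proceed to the create server statement once the plain command-line connect on the fourth line succeeds.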
Right now I have the following, which works:
host all all all ldap ldapserver=ldap.server.name ldapprefix="DOMAIN\"
but to my understanding the connection between the LDAP server and the PG DB isn't encrypted, and I need it to be. So I changed it to:
host all all all ldap ldapserver=ldap.server.name ldapprefix="DOMAIN\" ldaptls=1
This gives me an error saying "could not start ldap tls session connect error".
What are the steps that I'm missing in order to get this working? I have a feeling I need to be dropping certs somewhere on my LDAP instance or PG instance (or both), but I don't really have any experience configuring any of this.
If you are looking to use ldaptls=1, then please make sure the certs used to connect to the LDAP server are correct, i.e. the LDAP server's CA certificate is trusted by the LDAP client library on the Postgres host. Also, depending on how LDAP is set up, you may need to add ldapport=389, since ldaptls=1 negotiates StartTLS on the standard LDAP port rather than using LDAPS on 636.
More information at https://richyen.com/postgres/2018/02/09/making_postgres_talk_to_ldap_with_starttls.html
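For illustration only (the file paths and CA bundle name below are assumptions about a typical OpenLDAP client setup, not details from your environment), the two pieces that usually have to line up are the LDAP client library's trust store on the Postgres host and the pg_hba.conf entry:
# /etc/openldap/ldap.conf (on Debian/Ubuntu: /etc/ldap/ldap.conf), read by libldap on the Postgres server;
# it must point at the CA that signed the LDAP server's certificate
TLS_CACERT /etc/openldap/certs/ldap-ca.pem

# pg_hba.conf entry: StartTLS on the standard LDAP port
host all all all ldap ldapserver=ldap.server.name ldapport=389 ldaptls=1 ldapprefix="DOMAIN\"
A reload of Postgres is enough to pick up pg_hba.conf changes.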
DB2 is our Application DB and we are connecting from our application using the DB2 libraries.
However, we store the credentials in an encrypted format and use that for connecting.
If DB2 has an option to connect as a trusted user (like Informix does), we could remove the stored password, even though it is encrypted.
Does anyone know if this is possible with DB2?
Any help is much appreciated.
Thanks
You may try the Client authentication method or Kerberos Authentication.
Authentication methods for servers.
Kerberos authentication.
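A minimal sketch of the server-side change (TESTDB and the choice between CLIENT and KERBEROS are placeholders/assumptions, and this affects the whole instance, so treat it purely as an illustration):
# on the Db2 server: switch instance authentication (CLIENT or KERBEROS)
db2 update dbm cfg using AUTHENTICATION KERBEROS
db2stop
db2start
# from the client: no user/password supplied; the already-authenticated OS/Kerberos identity is used
db2 connect to TESTDB
Note that AUTHENTICATION CLIENT trusts the client machine entirely, so Kerberos is generally the safer way to drop stored passwords.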
I am trying to set up PostgreSQL and allow only certain Windows users to access the data in the database. Setting up Windows Authentication is quite easy with MS SQL, but I can't figure out how to set it up in PostgreSQL.
I have gone through the documentation at http://www.postgresql.org/docs/current/static/auth-methods.html
and edited the pg_hba.conf file. But after doing so, the PostgreSQL service fails to start.
If the PostgreSQL server is running on Windows, as well as the clients, then you might test with this to see if it works:
host all all 0.0.0.0/0 sspi
Magnus Hagander, a PostgreSQL developer, elaborates on this:
"All users connecting from the local machine, your domain, or a trusted domain will be automatically authenticated using the SSPI configured authentication (you can enable/disable things like NTLMv2 or LM using Group Policy - it's a Windows configuration, not a PostgreSQL one). You still need to create the login role in PostgreSQL, but that's it. Note that the domain is not verified at all, only the username. So the user Administrator in your primary and a trusted domain will be considered the same user if they try to connect to PostgreSQL. Note that this method is not compatible with Unix clients."
If you mix Unix and Windows then you have to resort to Kerberos using GSSAPI, which means you have to do some configuration. This article on deploying Pg in Windows environments may lead you down the right path.
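As the quoted passage notes, you still need to create a matching login role; a minimal sketch (the role name is a placeholder and must match the Windows username):
CREATE ROLE "WindowsUser" LOGIN;
Also remember to reload PostgreSQL after editing pg_hba.conf; if the service refuses to start at all, that usually points to a syntax error in the file.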
If anyone else encounters this like I did: starting from 9.5 you will need to add an optional parameter to both the IPv4 and IPv6 entries in order for this to work:
include_realm=0
so the whole thing will look like
host all your_username 127.0.0.1/32 sspi include_realm=0