PostgreSQL - LDAP authentication against Active Directory (AD) - trouble from Linux server while OK from Windows server

EDIT
I put the same pg_hba rule on the PostgreSQL installed on my WINDOWS laptop, and it works... so I changed the title: how can I make my Linux server authenticate users with AD, like the Windows PG server does?
/ EDIT
I need to authenticate PostgreSQL database users against our Active Directory servers. I've tested lots of configurations, but so far I couldn't find why PostgreSQL users can't be authenticated with this authentication method.
LDAP: Active Directory / PostgreSQL: PostgreSQL 9.4
Here is the pg_hba rule I use:
host myDB myUser localhost ldap ldapserver="192.168.10.1" ldapbasedn="DC=companygroup,DC=priv" ldapbinddn="cn=LDAP - Lecture,ou=Users,ou=Specials Objects,dc=companygroup,dc=priv" ldapbindpasswd="ldapPassWord" ldapsearchattribute="sAMAccountName"
When logging in as 'myUser' with the correct password for this user, I get the following lines in the PostgreSQL log file:
2015-11-18 10:01:50 CET [25991-1] [unknown]@[unknown] LOG: 00000: connection received: host=127.0.0.1 port=39074
2015-11-18 10:01:50 CET [25991-2] [unknown]@[unknown] LOCATION: BackendInitialize, postmaster.c:4003
2015-11-18 10:01:50 CET [25991-3] myUser@myDB LOG: 00000: could not search LDAP for filter "(sAMAccountName=myUser)" on server "192.168.10.1": Operations error
2015-11-18 10:01:50 CET [25991-4] myUser@myDB LOCATION: CheckLDAPAuth, auth.c:2030
2015-11-18 10:01:50 CET [25991-5] myUser@myDB FATAL: 28000: LDAP authentication failed for user "myUser"
2015-11-18 10:01:50 CET [25991-6] myUser@myDB DETAIL: Connection matched pg_hba.conf line 104: "host myDB myUser localhost ldap ldapserver="192.168.10.1" ldapbasedn="DC=companygroup,DC=priv" ldapbinddn="cn=LDAP - Lecture,ou=Users,ou=Specials Objects,dc=companygroup,dc=priv" ldapbindpasswd="ldapPassWord" ldapsearchattribute="sAMAccountName"
If I change ldapbinddn or ldapbindpasswd in some way, I get another error like "could not perform initial LDAP bind for ldapbinddn "..."", so these parameters should be OK.
"Operations error" was not very detailed, so I captured the authentication exchange with tcpdump, and here is what I found. Postgres seems to perform two queries:
First, it searches for the user via the search attribute. This step seems OK, because in the Active Directory response I can see information tied to my user.
Then another query is performed. On this one, the real message from the Active Directory LDAP server is:
LdapErr: DSID-0C0906E8, comment: In order to perform this operation a successful bind must be completed on the connection., data 0 , v1db1
On this second query, I can see that PG slightly changes the search base to
"DC=ForestDnsZones,DC=companygroup,dc=priv" instead of "DC=companygroup,DC=priv"
(I saw it in the tcp trace :
LDAPMessage searchRequest(3) "DC=ForestDnsZones,DC=companygroup,dc=priv" wholeSubtree ...
)
When I tried the search with the Windows tool "ldapbrowser", I was able to find my account with the simple filter (sAMAccountName=myUser) and the search DN DC=companygroup,DC=priv.
Is my understanding correct? Is it possible that the search fails just because the base DN gets changed? Or am I missing something else?
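For comparison, PostgreSQL's simple-bind LDAP mode skips the search step entirely; a sketch of such a rule (the @companygroup.priv UPN suffix is an assumption based on the base DN above):
host myDB myUser localhost ldap ldapserver="192.168.10.1" ldapsuffix="@companygroup.priv"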

Based on your log, your configuration ldapsearchattribute="sAMAccountName" doesn't work.
You can use an LDAP tool such as LDAPAdmin or OpenLDAP's ldapsearch to test your filter. Make sure the search returns a result when you filter on the attribute below:
(sAMAccountName=myUser)
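For example, with OpenLDAP's ldapsearch you can test the same bind DN and filter from the Linux server itself (a sketch; -W prompts for the bind password):
ldapsearch -H ldap://192.168.10.1 -x -D "cn=LDAP - Lecture,ou=Users,ou=Specials Objects,dc=companygroup,dc=priv" -W -b "DC=companygroup,DC=priv" "(sAMAccountName=myUser)"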

Related

Origin of a postgres timeout impossible to determine

I have a mobile application (C#) which calls an API built with Phoenix and Ecto.
This API makes several calls to the Postgres database. Each call lasts approximately 60 ms and we make approximately 25 calls to the database.
But at some point, I get a timeout from the database.
Here is the Postgres error:
2020-06-04 09:40:03.503 CEST [24455] postgres@view_models ERROR: canceling statement due to user request
2020-06-04 09:40:03.503 CEST [24455] postgres@view_models STATEMENT: SELECT DISTINCT ON (i0."intervention_id") i0."intervention_id" FROM "interventions" AS i0 LEFT OUTER JOIN "appointments" AS a1 ON a1."intervention_id" = i0."intervention_id" WHERE ((i0."account_id" = $1) AND ((i0."updated_at" > $2) OR (a1."updated_at" > $3))) LIMIT 1
2020-06-04 09:40:03.504 CEST [24455] postgres@view_models LOG: could not send data to client: Broken pipe
2020-06-04 09:40:03.504 CEST [24455] postgres@view_models FATAL: connection to client lost
Here is the ecto error:
DBConnection.ConnectionError: tcp recv: closed (the connection was closed by the pool, possibly due to a timeout or because the pool has been terminated)
The Postgres statement_timeout parameter is 0.
Here is the ecto config:
config :query_backend, QueryBackend.V1.Repo,
username: System.get_env("POSTGRES_USERNAME"),
password: System.get_env("POSTGRES_PASSWORD"),
database: System.get_env("VIEW_POSTGRES_DB_NAME"),
hostname: System.get_env("POSTGRES_HOST"),
pool_size: 10,
queue_target: 3_000,
queue_interval: 15_000,
connect_timeout: 20_000,
timeout: 30_000
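For reference, the repo-level :timeout above can also be overridden per call; a minimal sketch (the query variable is illustrative):
# `query` is any Ecto query; :timeout overrides the repo-level 30_000 ms for this call only
QueryBackend.V1.Repo.all(query, timeout: 10_000)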
On the mobile side, the HTTP client timeout is 15 s.
This problem is reproduced only by our customers; our technical team cannot reproduce it.
Can you tell me whether my configuration is valid? What is the origin of the timeout: the mobile HTTP client, Ecto, or Postgres?
Thank you in advance for your help
Best regards.
Julien.

How to pull mongodb logs with Wazuh agent?

I put the following settings in /var/ossec/etc/ossec.conf and restarted the agent, but it's not showing logs on the Kibana dashboard:
<localfile>
<log_format>syslog</log_format>
<location>/var/log/mongodb/mongod.log</location>
I performed a basic installation of Wazuh + MongoDB on the agent side, with the following results:
MongoDB by default writes to the syslog file located at /var/log/syslog.
Inside /var/log/mongodb/mongod.log there are internal mongo daemon logs that are more specific.
We can monitor the syslog file on the Wazuh agent with:
<localfile>
<log_format>syslog</log_format>
<location>/var/log/syslog</location>
</localfile>
This rule is included by default on the agent, but it is good to remember anyway.
The other one, as you pointed out:
<localfile>
<log_format>syslog</log_format>
<location>/var/log/mongodb/mongod.log</location>
</localfile>
I only see that you didn't copy the closing </localfile> tag, but it could be a copy/paste mistake. In any case, it is worth taking a look at /var/ossec/logs/ossec.log to find any errors.
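For instance (a sketch, assuming the default install paths):
grep -iE "error|warn" /var/ossec/logs/ossec.log
tail -f /var/ossec/logs/ossec.log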
With that configuration we can receive alerts like this:
** Alert 1595929148.661787: - syslog,access_control,authentication_failed,pci_dss_10.2.4,pci_dss_10.2.5,gpg13_7.8,gdpr_IV_35.7.d,gdpr_IV_32.2,hipaa_164.312.b,nist_800_53_AU.14,nist_800_53_AC.7,tsc_CC6.1,tsc_CC6.8,tsc_CC7.2,tsc_CC7.3,
2020 Jul 28 09:39:08 (ubuntu-bionic) any->/var/log/mongodb/mongod.log
Rule: 2501 (level 5) -> 'syslog: User authentication failure.'
2020-07-28T09:39:07.431+0000 I ACCESS [conn38] SASL SCRAM-SHA-1 authentication failed for root on admin from client 127.0.0.1:52244 ; UserNotFound: Could not find user "root" for db "admin"
This alert appears if we run mongo -u root (with a bad password) on the agent side.
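For example (a sketch; the authentication database is assumed to be admin, as in the alert above):
mongo -u root -p wrong_password --authenticationDatabase admin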

Postgres: log_statement = 'none' is ignored when query comes from C++

I am blocked by the following problem.
log_statement on the Postgres server is set to ddl (or all). The database is created by my application, written in C++. Queries are sent to the DB using libpq's PQexec.
Every query is logged twice, and I don't know why (I am not a C++ programmer nor a Postgres expert):
Apr 3 02:26:44 xxx postgres[12345]: [8-1] [2020-04-03 02:26:44.487 CDT] [s:xxx.694d] [u:user] [a:[unknown]] [db:postgres] [p:12345] [clnt:[local]] LOG: statement: CREATE USER "gingillo" WITH PASSWORD 'giggio';
Apr 3 02:26:44 xxx postgres[12345]: [9-1] [2020-04-03 02:26:44.487 CDT] [s:xxx.694d] [u:user] [a:[unknown]] [db:postgres] [p:12345] [clnt:[local]] LOG: AUDIT: SESSION,1,1,ROLE,CREATE ROLE,,,"CREATE USER ""gingillo"" WITH PASSWORD <REDACTED>",<not logged>
As you have already guessed, I don't want to log passwords, so I changed the original query (1):
CREATE USER "gingillo" WITH PASSWORD 'giggio'
to the following (2):
BEGIN;SET LOCAL log_statement = 'none';CREATE USER "gingillo" WITH PASSWORD 'giggio';COMMIT;
If I run query (2) manually, I get the result I want: just one line is logged, containing <REDACTED> instead of the password:
Apr 3 02:26:44 xxx postgres[12345]: [9-1] [2020-04-03 02:26:44.487 CDT] [s:xxx.694d] [u:user] [a:[unknown]] [db:postgres] [p:12345] [clnt:[local]] LOG: AUDIT: SESSION,1,1,ROLE,CREATE ROLE,,,"CREATE USER ""gingillo"" WITH PASSWORD <REDACTED>",<not logged>
But when the same query is run from C++, I get the double log, which even shows the log_statement change:
Apr 8 06:44:24 xxx postgres[27171]: [8-1] [2020-04-08 06:44:24.489 CDT] [s:xxx.6a23] [u:user] [a:[unknown]] [db:postgres] [p:27171] [clnt:[local]] LOG: statement: BEGIN;SET LOCAL log_statement = 'none';CREATE USER "gingillo" WITH PASSWORD 'giggio';COMMIT;
Apr 8 06:44:24 xxx postgres[27171]: [9-1] [2020-04-08 06:44:24.490 CDT] [s:xxx.6a23] [u:user] [a:[unknown]] [db:postgres] [p:27171] [clnt:[local]] LOG: AUDIT: SESSION,1,1,ROLE,CREATE ROLE,,,"BEGIN;SET LOCAL log_statement = 'none';CREATE USER ""gingillo"" WITH PASSWORD <REDACTED>",<not logged>
Does anybody have any idea how to disable the first log? What am I doing wrong?
The scope of SET LOCAL is only the current transaction.
Try:
Either run SET log_statement = 'none', which persists for the database session unless a ROLLBACK is issued,
or run SET LOCAL log_statement = 'none' in each transaction.
It looks like the C++ code is setting the log_statement parameter; the best solution would be to remove that, since another logging mechanism is already in use.
You are sending PostgreSQL a single "statement" which actually contains four statements. By the time the SET LOCAL log_statement = 'none' is processed, the damage has already been done, as the entire multi-statement string has already been logged. You need to send each statement separately if you want part of it to affect the settings for the rest.
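A minimal libpq sketch of that approach (error handling omitted; conn is assumed to be an already-established PGconn *):
/* Each PQexec call is a separate statement, so the SET LOCAL is already in
   effect when the CREATE USER arrives and log_statement does not log it. */
PGresult *res;
res = PQexec(conn, "BEGIN"); PQclear(res);
res = PQexec(conn, "SET LOCAL log_statement = 'none'"); PQclear(res);
res = PQexec(conn, "CREATE USER \"gingillo\" WITH PASSWORD 'giggio'"); PQclear(res);
res = PQexec(conn, "COMMIT"); PQclear(res);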
Alternatively, just avoid setting log_statement to 'all' in the first place, as it seems to be unneeded.

sphinxsearch: ERROR: index 'ad1_offers': sql_connect: Access denied for user

The MySQL client has access and the password is correct. The MySQL user has all privileges on all databases:
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS,
REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES,
LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW,
SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT,
TRIGGER ON *.* TO 'ad1'@'%'
I have errors:
/usr/bin/indexer --all
Sphinx 2.2.10-id64-release (2c212e0)
Copyright (c) 2001-2015, Andrew Aksyonoff
Copyright (c) 2008-2015, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file '/etc/sphinxsearch/sphinx.conf'...
indexing index 'ad1_offers'...
ERROR: index 'ad1_offers': sql_connect: Access denied for user 'ad1'@'192.168.0.177' (using password: YES) (DSN=mysql://ad1:***@192.168.0.177:3306/ad1).
or
/etc/init.d/sphinxsearch start
Starting sphinxsearch: Sphinx 2.2.10-id64-release (2c212e0)
Copyright (c) 2001-2015, Andrew Aksyonoff
Copyright (c) 2008-2015, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file '/etc/sphinxsearch/sphinx.conf'...
listening on 192.168.0.177:9312
listening on 192.168.0.177:9306
precaching index 'ad1_offers'
WARNING: index 'ad1_offers': preload: failed to open /var/lib/sphinxsearch/data/ad1_offers.sph: No such file or directory; NOT SERVING
FATAL: no valid indexes to serve ERROR.
sphinx.conf:
type = mysql
sql_host = 192.168.0.177
sql_user = ad1
sql_pass = ....
sql_db = ad1
sql_port = 3306 # optional, default is 3306
ERROR: index 'ad1_offers': sql_connect: Access denied for user 'ad1'@'192.168.0.177' (using password: YES) (DSN=mysql://ad1:***@192.168.0.177:3306/ad1).
That means the indexer can't even CONNECT to MySQL. It's more about the user existing and the password being right than about the actual permissions the user has.
Can you use the mysql command line client to connect to your database?
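For example (a sketch; run it from the host where the indexer runs, so MySQL sees the same client address):
mysql -h 192.168.0.177 -P 3306 -u ad1 -p ad1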
In my case, when I had such an error, it was just a wrong password for MySQL.
My user's password contained characters like '#' or '!'.
After I removed these characters from the password, everything started working!

MongoDB: Cloning database error?

When I try to clone my Mongo database from another machine, I see the following on the client:
db.cloneDatabase('10.10.124.110')
{ "errmsg" : "query failed staging.system.namespaces", "ok" : 0 }
and on the server I see:
Thu Nov 10 11:29:01 [conn10] assertion 10057 unauthorized db:staging lock type:-1 client:10.10.124.110 ns:staging.system.namespaces query:{}
How can I resolve this issue?
That error seems a lot like this one, https://jira.mongodb.org/browse/SERVER-2846, where an error is thrown because copyDatabase(), which cloneDatabase() uses, requires admin privileges. In that case the user was on a hosted MongoDB instance where they didn't have admin privileges.
You can see some more about how to use the copyDatabase() command here and here.
So, for example, if you are using -auth with a username/password, you'll need to run the copyDatabase() command like this:
> db.copyDatabase(from_db, to_db, from_host, username, password);
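For example, against the setup in the question it would look something like this (the admin credentials are placeholders):
> db.copyDatabase("staging", "staging", "10.10.124.110", "adminUser", "adminPassword")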
I was able to just resolve this error by querying the PRIMARY host in a replicaSet, rather than the SECONDARY.