"Docbase could not connect to the database" error in Documentum Content Server installation (PostgreSQL)

While installing Documentum Content Server on AWS EKS, I am receiving this error.
The PostgreSQL database is installed on an EC2 VM.
14:20:47,013 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerModifyDocbaseDirectory - The installer will create the folder structure for repository postgres.
14:20:47,021 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerPasswordFileGenerator - The installer is generating database password file...
14:20:47,111 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerIniGenerator - The installer will create server.ini file for repository postgres.
14:20:47,152 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCreateTableSpaceScriptGenerator - The installer will create scripts for Postgresql Database.
14:20:47,152 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCreateTableSpaceScriptGenerator - The URL is jar:file:/tmp/install.dir.208/InstallerData/installer.zip!/dm_CreateTableSpace.sql
14:20:47,209 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCopyDeleteTableSpaceScript - The installer will move file /opt/dctm/dba/config/postgres/dm_DeleteTableSpace.sql to a new location /opt/dctm/server_uninstall/delete_db/postgres/dm_DeleteTableSpace.sql.
14:20:47,214 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCreateTableSpace - The installer is executing the: Creating the database script.
14:20:47,355 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerWebCacheIniGenerator - The installer will create webcache.ini file for the repository.
14:20:47,394 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerTestServerIni - The installer is testing the database connection information
14:20:47,395 INFO [main] com.documentum.install.server.common.services.db.DiServerPostgresqlServer - The installer is validating the database connection information in the server.ini file.
14:20:47,563 ERROR [main] com.documentum.install.server.installanywhere.actions.DiWAServerTestServerIni - Docbase could not connect to the database. Please check output file for more information: /tmp/291406.tmp/DBTestResult18051870723865753931.tmp
com.documentum.install.shared.common.error.DiException: Docbase could not connect to the database. Please check output file for more information: /tmp/291406.tmp/DBTestResult18051870723865753931.tmp
This is the log from dm_CreateTableSpace.out:
psql:/opt/dctm/dba/config/postgres/dm_CreateTableSpace.sql:1: ERROR: role "postgres" already exists
psql:/opt/dctm/dba/config/postgres/dm_CreateTableSpace.sql:3: ERROR: zero-length delimited identifier at or near """"
LINE 1: GRANT "postgres" TO "";
^
psql:/opt/dctm/dba/config/postgres/dm_CreateTableSpace.sql:6: ERROR: database "dm_postgres_docbase" already exists
ALTER DATABASE
GRANT
psql:/opt/dctm/dba/config/postgres/dm_CreateTableSpace.sql:9: ERROR: zero-length delimited identifier at or near """"
LINE 1: REVOKE "postgres" FROM "";
^
You are now connected to database "dm_postgres_docbase" as user "postgres".
CREATE SCHEMA
SET
GRANT
GRANT
GRANT
I don't understand how to make it work.
I am facing the same issue even with a Postgres RDS instance, i.e.
GRANT "postgres" TO ""
The following log is generated on the PostgreSQL EC2 instance:
2021-07-20 11:53:46.434 UTC [7854] dctm@dm_dctm_docbase FATAL: password authentication failed for user "dctm"
2021-07-20 11:53:46.434 UTC [7854] dctm@dm_dctm_docbase DETAIL: Role "dctm" does not exist.
Connection matched pg_hba.conf line 99: "host all all 172.16.0.0/16 md5"
2021-07-20 11:53:46.436 UTC [7855] dctm@dm_dctm_docbase FATAL: password authentication failed for user "dctm"
2021-07-20 11:53:46.436 UTC [7855] dctm@dm_dctm_docbase DETAIL: Role "dctm" does not exist.
Connection matched pg_hba.conf line 99: "host all all 172.16.0.0/16 md5"
2021-07-20 11:53:49.056 UTC [7857] postgres@postgres ERROR: zero-length delimited identifier at or near """" at character 17
2021-07-20 11:53:49.056 UTC [7857] postgres@postgres STATEMENT: GRANT "dctm" TO "";
2021-07-20 11:53:49.145 UTC [7857] postgres@postgres ERROR: zero-length delimited identifier at or near """" at character 20
2021-07-20 11:53:49.145 UTC [7857] postgres@postgres STATEMENT: REVOKE "dctm" FROM "";
I am using the PostgreSQL superuser login, and it works fine from the CLI.
I have updated the values in the Helm chart (values.yaml) for the documentum content-server.

Every time you re-run the install you should completely delete everything first. It looks like Postgres already has the tablespace created, and that's why the test is failing.
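For instance, a minimal cleanup sketch run as the PostgreSQL superuser before re-running the installer (the database name is taken from the logs above; <db-host> is a placeholder for your EC2 instance, and any leftover tablespaces or repository roles still need to be checked by hand):
$ psql -U postgres -h <db-host> -c 'DROP DATABASE IF EXISTS dm_postgres_docbase;'
$ psql -U postgres -h <db-host> -c '\db'    # list tablespaces left over from the failed install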

Related

"error: pq: role "root" does not exist" when running pq with Postgres for Docker [closed]

I am setting up a local Postgres database on Docker with the postgres:14-alpine image and running database migrations on it with golang-migrate, and I got the following error message after running the migrate tool:
error: pq: role "root" does not exist
I was running the following commands:
$ docker run --name postgres14 -p 5432:5432 -e POSTGRES_USER=root -e POSTGRES_PASSWORD=pass -d postgres:14-alpine
$ docker exec -it postgres14 createdb --user=root --owner=root demodb
$ migrate -path db/migrations -database postgresql://root:pass@localhost:5432/demodb?sslmode=disable --verbose up
These commands can also be viewed in this Makefile, and the full codebase can be found in this repository.
Here are the logs from the Postgres container:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgresql/data -l logfile start
waiting for server to start....2022-10-15 09:56:41.209 UTC [36] LOG: starting PostgreSQL 14.5 on x86_64-pc-linux-musl, compiled by gcc (Alpine 11.2.1_git20220219) 11.2.1 20220219, 64-bit
2022-10-15 09:56:41.211 UTC [36] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2022-10-15 09:56:41.217 UTC [37] LOG: database system was shut down at 2022-10-15 09:56:41 UTC
2022-10-15 09:56:41.220 UTC [36] LOG: database system is ready to accept connections
done
server started
CREATE DATABASE
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
waiting for server to shut down...2022-10-15 09:56:41.422 UTC [36] LOG: received fast shutdown request
.2022-10-15 09:56:41.423 UTC [36] LOG: aborting any active transactions
2022-10-15 09:56:41.423 UTC [36] LOG: background worker "logical replication launcher" (PID 43) exited with exit code 1
2022-10-15 09:56:41.424 UTC [38] LOG: shutting down
2022-10-15 09:56:41.434 UTC [36] LOG: database system is shut down
done
server stopped
PostgreSQL init process complete; ready for start up.
What should I do to configure the root role correctly?
The Docker image docs specify that the POSTGRES_USER environment variable defaults to postgres if not set. Try using that instead of root, or drop the container and build it again using the correct environment variable.
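For example, dropping the container and recreating it with the environment variables from the question (container name and credentials are the ones used in the commands above):
$ docker rm -f postgres14
$ docker run --name postgres14 -p 5432:5432 -e POSTGRES_USER=root -e POSTGRES_PASSWORD=pass -d postgres:14-alpine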
Once you are inside the psql shell you can create a user with:
CREATE USER username WITH PASSWORD 'your_password';
then, to grant the user access to a specific database:
GRANT ALL PRIVILEGES ON DATABASE demodb TO username;
Once that is done, you can use that user in the connection string in the Makefile.
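With the names from the statements above, the migrate command from the question would then become (username and your_password are the placeholders defined above):
$ migrate -path db/migrations -database postgresql://username:your_password@localhost:5432/demodb?sslmode=disable --verbose up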
Turns out the Postgres server that was installed and set up on my OS by Homebrew was using the same port, which clashed with requests made to the containerized database under the same port number.
This issue can be solved either by using a different port number for the containerized database, or by shutting down the database on the OS.
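A quick sketch of both options (the Homebrew formula name postgresql@14 is an assumption; check brew services list for yours):
$ lsof -i :5432                       # see which process owns the port
$ brew services stop postgresql@14    # option 1: stop the host's Postgres
$ docker run --name postgres14 -p 5433:5432 -e POSTGRES_USER=root -e POSTGRES_PASSWORD=pass -d postgres:14-alpine    # option 2: use a free host port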

Bitnami jupyterhub charts on k8s failing due to password authentication failed for user "bn_jupyterhub"

I successfully deployed JupyterHub packaged by Bitnami on our GKE k8s cluster, under one of our namespaces, with the same default values defined in their repo:
https://github.com/bitnami/charts/tree/master/bitnami/jupyterhub/#installing-the-chart
However, I noticed the jupyterhub pod is crashing with an Init:CrashLoopBackOff error, and the logs show it could not connect to the database server.
kubectl logs pod/jupyterhub-hub-59cc99bdfb-d4vjx wait-for-db
04:24:03.35 INFO  ==> Connecting to the PostgreSQL instance jupyterhub-postgresql:5432
04:25:03.57 ERROR ==> Could not connect to the database server
/bin/bash: line 18: return: can only `return' from a function or sourced script
It seems the postgresql-0 pod is running into an authentication error, which seems to be the cause here. I have not changed any of the values in the values.yaml file provided in https://github.com/bitnami/charts/blob/master/bitnami/jupyterhub/values.yaml.
kubectl logs jupyterhub-postgresql-0 postgresql
postgresql 03:13:30.41
postgresql 03:13:30.41 Welcome to the Bitnami postgresql container
postgresql 03:13:30.41 Subscribe to project updates by watching https://github.com/bitnami/containers
postgresql 03:13:30.41 Submit issues and feature requests at https://github.com/bitnami/containers/issues
postgresql 03:13:30.42
postgresql 03:13:30.45 INFO  ==> ** Starting PostgreSQL setup **
postgresql 03:13:30.46 INFO  ==> Validating settings in POSTGRESQL_* env vars..
postgresql 03:13:30.47 INFO  ==> Loading custom pre-init scripts...
postgresql 03:13:30.48 INFO  ==> Initializing PostgreSQL database...
postgresql 03:13:30.50 INFO  ==> pg_hba.conf file not detected. Generating it...
postgresql 03:13:30.50 INFO  ==> Generating local authentication configuration
postgresql 03:13:30.53 INFO  ==> Deploying PostgreSQL with persisted data...
postgresql 03:13:30.57 INFO  ==> Configuring replication parameters
postgresql 03:13:30.61 INFO  ==> Configuring fsync
postgresql 03:13:30.62 INFO  ==> Configuring synchronous_replication
postgresql 03:13:30.66 INFO  ==> Loading custom scripts...
postgresql 03:13:30.66 INFO  ==> Enabling remote connections
postgresql 03:13:30.67 INFO  ==> ** PostgreSQL setup finished! **
postgresql 03:13:30.69 INFO  ==> ** Starting PostgreSQL **
2022-09-01 03:13:30.760 GMT [1] LOG: pgaudit extension initialized
2022-09-01 03:13:30.765 GMT [1] LOG: starting PostgreSQL 14.5 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-09-01 03:13:30.765 GMT [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-09-01 03:13:30.766 GMT [1] LOG: listening on IPv6 address "::", port 5432
2022-09-01 03:13:30.769 GMT [1] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2022-09-01 03:13:30.775 GMT [91] LOG: database system was shut down at 2022-09-01 02:36:41 GMT
2022-09-01 03:13:30.813 GMT [1] LOG: database system is ready to accept connections
2022-09-01 03:13:57.645 GMT [116] FATAL: password authentication failed for user "bn_jupyterhub"
2022-09-01 03:13:57.645 GMT [116] DETAIL: Connection matched pg_hba.conf line 1: "host all all 0.0.0.0/0 md5"
2022-09-01 03:14:02.663 GMT [118] FATAL: password authentication failed for user "bn_jupyterhub"
2022-09-01 03:14:02.663 GMT [118] DETAIL: Connection matched pg_hba.conf line 1: "host all all 0.0.0.0/0 md5"
2022-09-01 03:14:07.681 GMT [131] FATAL: password authentication failed for user "bn_jupyterhub"
This was deployed with the same command, helm install jupyterhub bitnami/jupyterhub. There were no special instructions, and no password was supplied in the values.yaml file.
What am I missing here? Do I have to specify password values to make it work? Restarting the pod and redeploying the charts does not seem to help.
Any further advice is highly appreciated!
Thank you
Thought I'd chip in here...
This is caused by deploying/redeploying via a PV/PVC that doesn't necessarily get deleted. When you redeploy without explicitly defining a password, the previous PVC gets re-used: the chart recognises there is already a bn_jupyterhub user on the database, and so never updates its password.
Workarounds (see the sketch after the reference below):
Specify some override values in your JupyterHub values.yaml to explicitly define the bn_jupyterhub password (functional, but clearly insecure).
Define a secret prior to deploying, and then override the same values.yaml to use that existing secret. That way the secret doesn't get re-created and you can reuse it forevermore.
Define a separate Postgres database and configure JupyterHub to use that external database.
Reference: https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#persistence-volumes-pvs-retained-from-previous-releases
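A minimal sketch of those workarounds (the PVC name data-jupyterhub-postgresql-0 and the value path postgresql.auth.password are assumptions; check your release's actual PVC names and chart version):
$ kubectl delete pvc data-jupyterhub-postgresql-0    # drop the stale PVC so the password gets re-initialized
$ helm install jupyterhub bitnami/jupyterhub --set postgresql.auth.password=<your-password>    # or pin the password explicitly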

Postgresql shutdown by itself

2021-11-03 07:15:23.704 UTC [354507] postgres@postgres FATAL: password authentication failed for user "postgres"
2021-11-03 07:15:23.704 UTC [354507] postgres@postgres DETAIL: Password does not match for user "postgres".
Connection matched pg_hba.conf line 105: "host all all 0.0.0.0/0 md5"
2021-11-03 07:33:29.904 UTC [354788] pgsql@postgres FATAL: password authentication failed for user "pgsql"
2021-11-03 07:33:29.904 UTC [354788] pgsql@postgres DETAIL: Role "pgsql" does not exist.
Connection matched pg_hba.conf line 105: "host all all 0.0.0.0/0 md5"
2021-11-03 07:52:40.628 UTC [355083] pgsql@postgres FATAL: password authentication failed for user "pgsql"
2021-11-03 07:52:40.628 UTC [355083] pgsql@postgres DETAIL: Role "pgsql" does not exist.
Connection matched pg_hba.conf line 105: "host all all 0.0.0.0/0 md5"
2021-11-03 07:53:02.963 UTC [327839] LOG: received smart shutdown request
2021-11-03 07:53:02.976 UTC [327839] LOG: background worker "logical replication launcher" (PID 327846) exited with exit code 1
2021-11-03 07:53:02.980 UTC [327841] LOG: shutting down
2021-11-03 07:53:03.011 UTC [327839] LOG: database system is shut down
I am hosting PostgreSQL on a DigitalOcean droplet, and since this server is just for my toy project I have all the ports open. I understand this is bad practice, but my understanding is that unless a hacker somehow gains access to my username and password, the DB will be safe.
But last month, and again yesterday, my Postgres shut down by itself, and according to the log it was shut down after a shutdown request?
I am using "postgres" as my user name, and from the log I can see someone kept trying to log in with the username "pgsql".
So I want to know: am I being hacked, or did I do something stupid and somehow shut down the server myself?
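Those repeated failures for pgsql look like automated password guessing against the open port. If the droplet really must expose Postgres, a minimal pg_hba.conf sketch that admits only your own client address instead of 0.0.0.0/0 (203.0.113.7/32 is a placeholder for your IP):
host    all    postgres    203.0.113.7/32    md5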

PgPool-II backend authentication failed

I'm trying to configure pgpool as the load balancer for my Postgres cluster.
I have two postgres nodes, 1 master and 1 slave.
My pg_hba.conf looks like
hostssl user mydb 1.1.1.1/32 md5
hostssl user postgres 1.1.1.1/32 md5
host user mydb 1.1.1.1/32 md5
host user postgres 1.1.1.1/32 md5
where 1.1.1.1/32 is my actual pgpool server IP.
If I try to establish a connection to either the master or the slave using psql right from the pgpool container, I can do it without any problems.
But when I start pgpool I get this error message:
2021-10-26 13:50:13: pid 753: ERROR: backend authentication failed
2021-10-26 13:50:13: pid 753: DETAIL: backend response with kind 'E' when expecting 'R'
2021-10-26 13:50:13: pid 753: HINT: This issue can be caused by version mismatch (current version 3)
2021-10-26 13:50:13: pid 736: ERROR: backend authentication failed
2021-10-26 13:50:13: pid 736: DETAIL: backend response with kind 'E' when expecting 'R'
2021-10-26 13:50:13: pid 736: HINT: This issue can be caused by version mismatch (current version 2)
If I edit the pool_passwd file and set some invalid password, I get a proper error:
2021-10-26 13:59:03: pid 736: ERROR: md5 authentication failed
2021-10-26 13:59:03: pid 736: DETAIL: password does not match
So I guess that's not a problem with my postgres credentials.
Any ideas?
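One thing worth checking: PgPool-II authenticates to the backends for its health and streaming-replication checks separately from client sessions, and a check user that pg_hba.conf rejects gets an error response during authentication, which matches the 'E' where 'R' was expected. A sketch of the relevant pgpool.conf parameters (the parameter names are from the PgPool-II docs; the user and password values here are assumptions):
sr_check_user = 'user'
sr_check_password = 'secret'
health_check_user = 'user'
health_check_password = 'secret'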

Password authentication failed for user when using new docker-compose

I am using the django-cookiecutter template, for the second time for a new project, and it fails to connect to Postgres with the following error:
postgres_1 | 2018-04-30 14:54:09.747 UTC [1] LOG: database system is ready to accept connections
postgres_1 | 2018-04-30 14:54:10.029 UTC [28] FATAL: password authentication failed for user "IViLGLIEWLBDGBnsAuoOEhtFaKrqKxfX"
postgres_1 | 2018-04-30 14:54:10.029 UTC [28] DETAIL: Role "IViLGLIEWLBDGBnsAuoOEhtFaKrqKxfX" does not exist.
postgres_1 | Connection matched pg_hba.conf line 95: "host all all all md5"
django_1 | PostgreSQL is unavailable (sleeping)...
One of the maintainers explained this:
the thing is, every time you bootstrap the project, POSTGRES_USER and POSTGRES_PASSWORD get reset to newly-generated random values
I tried to remove all Docker containers, but no success. Any idea how I can solve this? I don't have the old credentials to replace them.
With the help of https://github.com/webyneter (a contributor to django-cookiecutter), the solution is the following:
To see existing volumes: docker volume ls
To remove the respective volumes: docker volume rm <your project_slug>_postgres_backup_local <your project_slug>_postgres_data_local
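End to end, that looks something like the following (myproject stands in for your actual project_slug, and stopping the stack first with docker-compose down is an assumption; cookiecutter-django's compose file name may differ):
$ docker-compose down
$ docker volume ls
$ docker volume rm myproject_postgres_backup_local myproject_postgres_data_local
$ docker-compose up    # Postgres re-initializes with the currently generated credentials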