This is my first post on Stack Overflow. I am trying to implement read replicas for my DHIS2 instance. I am using DHIS2 version 2.37.9 and Postgres 13. The DB replication itself works, but read queries are not directed to the read replicas.
I tried this config from the official DHIS2 documentation:
# Read replica 1 configuration
# Database connection URL, username and password
read1.connection.url = jdbc:postgresql://127.0.0.11/dbread1
read1.connection.username = dhis
read1.connection.password = xxxx
# Read replica 2 configuration
# Database connection URL, username and password
read2.connection.url = jdbc:postgresql://127.0.0.12/dbread2
read2.connection.username = dhis
read2.connection.password = xxxx
# Read replica 3 configuration
# Database connection URL, fallback to master for username and password
read3.connection.url = jdbc:postgresql://127.0.0.13/dbread3
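For completeness, the main (write) connection is configured in the same dhis.conf with the usual connection.* keys; the host and database name below are placeholders:
# Main database connection (still required alongside the read replicas)
connection.url = jdbc:postgresql://127.0.0.10/dhis2
connection.username = dhis
connection.password = xxxx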
But it did not work as expected.
From the PostgreSQL JDBC documentation I found that I can specify the master and the replicas in the JDBC connection URL:
jdbc:postgresql://node1,node2,node3/database?targetServerType=master
jdbc:postgresql://node1,node2,node3/database?targetServerType=preferSlave&loadBalanceHosts=true
But that also did not work.
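One way I can check whether any read traffic reaches a replica at all is to query the replica directly (a sketch; dbread1 is the replica database name from the config above):
-- run on a read replica via psql
SELECT pg_is_in_recovery();   -- true confirms this node is a standby
SELECT client_addr, state, query FROM pg_stat_activity WHERE datname = 'dbread1';   -- shows whether the application ever connects here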
Thanks in advance for your kind help.
Related
I'm getting this error trying to replicate a Postgres database (not RDS) to another Postgres database (also not RDS). I get this connection error, but the endpoints (source and target) test successfully. Any ideas?
Error:
Last Error: Unable to use plugins to establish logical replication on source PostgreSQL instance. Follow all prerequisites for 'PostgreSQL as a source in DMS' from https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.PostgreSQL.html
Task error notification received from subtask 0, thread 0 [reptask/replicationtask.c:2880] [1020490]
Remaining 1 error messages are truncated. Check the task log to find details
Stop Reason: FATAL_ERROR
Error Level: FATAL
I used DMS to reduce over-provisioned RDS storage size.
Set the following values in the DB parameter group for both the source and destination endpoints and restart. Maybe this will help for non-RDS endpoints if you add the same settings in the Postgres configuration.
logical_replication = 1
max_wal_senders = 10
max_replication_slots = 10
wal_sender_timeout = 0
max_worker_processes = 8
max_logical_replication_workers = 8
max_parallel_workers = 8
At a minimum, you need to set logical_replication = 1 in your source database configuration.
Then set max_replication_slots = N, with N greater than or equal to the number of replication processes you plan to run.
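For a self-managed (non-RDS) source, the equivalent settings live in postgresql.conf; a minimal sketch (the values mirror the RDS settings above and should be adjusted for your workload):
# postgresql.conf on the self-managed source (restart required)
wal_level = logical              # self-managed equivalent of the RDS logical replication flag
max_wal_senders = 10
max_replication_slots = 10       # >= number of replication slots/tasks you plan to run
wal_sender_timeout = 0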
I had this problem when setting up ongoing replication in an AWS DMS migration task.
I changed the following settings in the source and target endpoints' parameter groups:
session_replication_role = replica
rds.logical_replication = 1
wal_sender_timeout = 0
and kept the remaining settings at their defaults:
max_replication_slots = 20
max_worker_processes = GREATEST(${DBInstanceVCPU*2},8)
max_logical_replication_workers = null
autovacuum_max_workers = GREATEST({DBInstanceClassMemory/64371566592},3)
max_parallel_workers = GREATEST(${DBInstanceVCPU/2},8)
max_connections = LEAST({DBInstanceClassMemory/9531392},5000)
You need to follow all the steps in the guide linked in the error. You need to update pg_hba.conf to allow the DMS instance access, e.g. if the private IP of the DMS instance is in the 10.6.x.x range:
host your_db dms 10.6.0.0/16 md5
host replication all 10.6.0.0/16 md5
Then you'll need to create a dms user and role with superuser privileges.
Then follow the guide to update postgresql.conf with the wal settings, e.g. wal_level = logical and so on.
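A sketch of the role creation mentioned above; the dms name matches the pg_hba.conf entries and the password is a placeholder:
-- run as a superuser on the source database
CREATE ROLE dms WITH LOGIN SUPERUSER PASSWORD 'choose-a-strong-password';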
Master server: there is a PostgreSQL user named 'test' with password 'mytest-password-formaster'.
Slave server: there is a PostgreSQL user named 'test' with password 'mytest-password-forslave'.
Now, I have created an HA setup as follows:
On the master: modified postgresql.conf and pg_hba.conf as required for it to act as the master.
On the slave server: cleaned up the data_dir, modified postgresql.conf, took a data_dir backup from the master using "pg_basebackup", and created recovery.conf to start PostgreSQL in hot-standby mode.
The above steps work fine and my HA setup is working. But when I promote the slave server using 'pg_promote', it becomes write-enabled and the password for the 'test' user is the master's, i.e. 'mytest-password-formaster'.
What I want is: when I promote the standby node using 'pg_promote', it should still have the old password for its user, i.e. 'mytest-password-forslave'.
How do I achieve this? Is it happening because I cleaned up the data_dir on the slave when setting up the standby configuration?
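For context, role passwords are stored inside the cluster itself (the pg_authid catalog), so pg_basebackup copies the master's password along with the rest of the data directory. One sketch, assuming superuser access on the promoted node, is to restore the standby-specific password right after promotion:
-- run on the promoted (now writable) node
ALTER ROLE test WITH PASSWORD 'mytest-password-forslave';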
I have a specific situation where I need to connect Kamailio to a PostgreSQL DB rather than MySQL. Can someone please provide the steps for that? I have tried multiple approaches from the forum but they failed.
Problem faced: whenever Kamailio creates the database in PostgreSQL, it keeps asking for the password and ultimately fails.
Ubuntu version: 16.04 LTS
Kamailio: 5.0
I have done the following things so far:
1. Included the Postgres modules
2. Modified kamailio.cfg and added the following lines:
#!ifdef WITH_PGSQL
# - database URL - used to connect to database server by modules such
# as: auth_db, acc, usrloc, a.s.o.
#!ifndef DBURL
#!define DBURL "postgres://kamailio:password@localhost/kamailio"
#!endif
#!endif
This is my kamctlrc file:
# The Kamailio configuration file for the control tools.
#
# Here you can set variables used in the kamctl and kamdbctl setup
# scripts. Per default all variables here are commented out, the control tools
# will use their internal default values.
## your SIP domain
SIP_DOMAIN=sip.<DOMAIN>.net
## chrooted directory
# $CHROOT_DIR="/path/to/chrooted/directory"
## database type: MYSQL, PGSQL, ORACLE, DB_BERKELEY, DBTEXT, or SQLITE
# by default none is loaded
#
# If you want to setup a database with kamdbctl, you must at least specify
# this parameter.
DBENGINE=PGSQL
## database host
DBHOST=localhost
## database port
# DBPORT=3306
## database name (for ORACLE this is TNS name)
DBNAME=kamailio
# database path used by dbtext, db_berkeley or sqlite
# DB_PATH="/usr/local/etc/kamailio/dbtext"
## database read/write user
DBRWUSER="kamailio"
## password for database read/write user
DBRWPW="password"
## database read only user
DBROUSER="kamailioro"
Thanks in advance !!
Finally, we figured out the issue. It was a small mistake in the .pgpass file, which was causing the authentication problem.
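For reference, a sketch of the expected .pgpass layout (the file must belong to the user running kamdbctl and have 0600 permissions; the entry below is an example for the kamailio role from the config above):
# ~/.pgpass format: hostname:port:database:username:password
localhost:5432:*:kamailio:password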
Today I set up pgpool-II on one of my company's servers for database replication purposes, and I'm trying to connect to it from my application located on another server. Previously the application connected without problems to the database on the same server where pgpool is now installed; I just needed to provide a link like this in the config file:
database: postgresql://user:password@host:port/db_name
I changed the port to the one on which pgpool listens for connections and provided the user and password from pcp.conf, but after starting the app I get a list of errors, all of the same type:
OperationalError: (OperationalError) unable to open database file None None
It doesn't matter whether I authenticate as the postgresql or pgpool user, or whether I provide an md5-encrypted or plaintext password; the errors are the same. How can I properly connect to my database?
Problem solved days ago: I needed to change the administrative database in the pgpool config from "template1" to "postgres". I don't know why it doesn't default to that.
I am trying to set up an ejabberd server on my Amazon EC2 Ubuntu instance.
With the default DB provided by ejabberd, I can easily set up my connection. But I need to replace the Mnesia DB with MySQL. I found some tutorials on the internet and pieced together a solution from them. I will explain it step by step.
I am using ejabberd 2.1.11. I made the following changes in the ejabberd.cfg file:
Commented the following line :
{auth_method, internal}
Uncommented this:
{auth_method, odbc}
Configured my MySQL DB
{odbc_server, {mysql, "localhost", "students", "root", ""}}. % no password set
Changed mod_last to mod_last_odbc
Changed mod_offline to mod_offline_odbc
Changed mod_roster to mod_roster_odbc
Changed mod_private to mod_private_odbc
Changed mod_privacy to mod_privacy_odbc
Changed mod_pubsub to mod_pubsub_odbc
Changed mod_vcard to mod_vcard_odbc
Then I installed the ejabberd-mysql driver from the following link:
http://stefan-strigler.de/2009/01/14/ejabberd-mysql-drivers-for-debian-and-ubuntu/
After making all these changes I restarted my ejabberd server.
Then I tried to login to my ejabberd server. It shows me the login prompt.
After entering the credentials, it takes a long time and then displays 'authentication failed'.
Any help on the topic is appreciated.
Let's dig into the problem.
Your setup is working, which means your config file is fine. But then:
Why does auth fail?
What schema do you have in your students database?
If you have the proper schema installed, is the user present in your DB's users table? (See the quick checks sketched below.)
Have you also updated conf/odbc.ini with the proper MySQL details?
Even if both conditions are met, I'd advise you to set a MySQL password and try again.
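A couple of quick checks against the students database (table names follow the standard ejabberd MySQL schema; adjust if yours differs):
-- inside the mysql client, connected to the students database
SHOW TABLES;                          -- should list users, rosterusers, last, ...
SELECT username FROM users LIMIT 5;   -- the account you log in with should appear here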
Let me know if that helps or not.
Update:
Update your config with {loglevel, 5}, then hit the login and tail all the log files.
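For example (the log locations below are the Debian/Ubuntu package defaults and may differ on your install):
tail -f /var/log/ejabberd/ejabberd.log /var/log/ejabberd/sasl.log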
odbc.ini
[ejabberd]
Driver = MySQL
DATABASE = students
PWD =
SERVER = localhost
SOCKET = /tmp/mysql.sock
UID = root
One basic part that is easy to miss: data that was previously stored in the Mnesia database will no longer be available with your new configuration, so you have to register an admin user again, like this, to access your admin account:
./ejabberdctl register admin "password"