I am trying to configure a docker-compose file to create master and slave Postgres servers locally, using WAL to replicate databases, but it is not working because of some problems in the configuration.
I am receiving the error shown in the screenshot below:
[error screenshot]
All my code is here:
https://github.com/Oracy/postgres
To run it, just execute docker-compose up -d
Thanks in advance.
I'm trying to get RabbitMQ to monitor a postgresql database to create a message queue when database rows are updated. The eventual plan is to feed this message queue into an AWS EKS (Elastic Kubernetes Service) cluster as a job.
I've read many approaches to this, but they are still confusing to a newcomer to RabbitMQ, and many seem to have been written more than five years ago, so I'm not sure whether they'll still work with current versions of Postgres and RabbitMQ.
I've followed this guide about installing the area51/notify-rabbit Docker container, which can connect the two via a Node app, but when I ran the container it immediately stopped and didn't seem to do anything.
There is also this guide, which uses a Go app to connect the two, but I'd rather not use Go outside of a Docker container.
Additionally, there is this method of installing the pg_amqp extension, from a repository which hasn't been updated in years, which allows a direct connection from PostgreSQL to RabbitMQ. However, when I followed it and attempted to install pg_amqp on my Postgres database (PostgreSQL 12), I was unable to connect to the database using psql, getting the classic error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
My current set-up: I have a RabbitMQ server installed in a Docker container on an AWS EC2 instance, which I can access via the internet. I ran the following to install and run it:
docker pull rabbitmq:3-management
docker run --rm -p 15672:15672 -p 5672:5672 rabbitmq:3-management
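(Port 5672 is the AMQP port clients publish to; 15672 serves the RabbitMQ management web UI.)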
The postgresql database is running on a separate EC2 instance and both instances have the required ports open for accessing data from each server.
I have also looked into using Amazon SQS for this, but there didn't seem to be any info on linking PostgreSQL up to it. I haven't really seen any guides or Stack Overflow questions on this since 2017/18, so I'm wondering whether this is still the best way to create a message broker for a Kubernetes system. Any help/pointers much appreciated.
In the end, I decided the best thing to do was to create some simple Python scripts to do the LISTEN/NOTIFY steps and route traffic from PostgreSQL to RabbitMQ, based on the following code: https://gist.github.com/kissgyorgy/beccba1291de962702ea9c237a900c79
I set the scripts up inside Docker containers running in my Kubernetes cluster, so they are automatically restarted if they fail.
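For anyone taking the same route, here's a minimal sketch of that approach. The connection strings, the NOTIFY channel, and the queue name below are placeholders, and it assumes a trigger on the watched table calls pg_notify('row_updated', ...):

import select
import psycopg2
import pika

PG_DSN = "dbname=mydb user=postgres host=localhost"  # placeholder connection string
AMQP_URL = "amqp://guest:guest@localhost:5672/"      # placeholder RabbitMQ URL

# Listen for NOTIFY events on the 'row_updated' channel.
pg = psycopg2.connect(PG_DSN)
pg.autocommit = True  # LISTEN only takes effect once committed
cur = pg.cursor()
cur.execute("LISTEN row_updated;")

# Publish each notification payload to a durable RabbitMQ queue.
mq = pika.BlockingConnection(pika.URLParameters(AMQP_URL))
channel = mq.channel()
channel.queue_declare(queue="row_updates", durable=True)

while True:
    # Wait until the Postgres socket is readable, then drain notifications.
    if select.select([pg], [], [], 60) == ([], [], []):
        continue  # timed out with nothing to read; wait again
    pg.poll()
    while pg.notifies:
        notify = pg.notifies.pop(0)
        channel.basic_publish(exchange="", routing_key="row_updates",
                              body=notify.payload)

The script is deliberately simple: autocommit is needed so the LISTEN takes effect immediately, and select.select() avoids busy-polling the database connection.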
I've just started 'kicking the tires' on Concourse-CI, using the quickstart tutorial as my starting point. That much works fine.
I've created a super basic pipeline with a single task, just like the quickstart tutorial. But instead of pulling the busybox image and executing the echo command, I'm pulling another image, and running a command that would try to update a local postgres db.
When I run the pipeline, my task (a Docker image writing to the local Postgres DB) fails because a connection can't be made to the local DB. I've searched far and wide and can't seem to figure out how to do this. In the docker-compose file from the quickstart tutorial, I've tried adding CONCOURSE_CONTAINERD_ALLOW_HOST_ACCESS: "true", to no avail.
Any suggestions on how I may be able to achieve this?
Turns out my issue had nothing to do with Concourse.
The local Postgres instance I was attempting to write to was only accepting connections from localhost, which won't allow connections from Docker containers. I updated the Postgres settings to allow remote connections, and all is well.
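For anyone hitting the same thing, the usual pair of changes looks like this (file locations vary by distribution, and the Docker bridge subnet shown is an assumption):

listen_addresses = '*'                # postgresql.conf: accept connections on all interfaces, not just localhost
host  all  all  172.17.0.0/16  md5    # pg_hba.conf: allow password auth from the Docker bridge network

A pg_hba.conf change only needs a reload, but changing listen_addresses requires a full restart of Postgres.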
I set up a master-slave load testing environment using JMeter. I am using 3 CentOS machines with the following IPs:
xxx.xxx.xxx.1 (Master)
xxx.xxx.xxx.2 (Slave1)
xxx.xxx.xxx.3 (Slave2)
Here are the steps I took.
1) Added the following to the slaves' jmeter.properties file:
remote_hosts=xxx.xxx.xxx.1
2) Added the following to the master's jmeter-server file:
#RMI_HOST_DEF=-Djava.rmi.server.hostname=xxx.xxx.xxx.2
Then I executed the following command from the /apache-jmeter-2.13/bin folder of the xxx.xxx.xxx.2 slave machine (I don't have root user access, only sudo):
sudo ./jmeter-server
I'm getting the error
./jmeter-server: line 32: ./jmeter: Permission denied
Is my master-slave setup correct? Am I doing something wrong here?
Do I need to do anything else to setup master-slave?
Add the following to the client's (master's) jmeter.properties file:
remote_hosts=xxx.xxx.xxx.2,xxx.xxx.xxx.3
Add the following to the jmeter-server file on each slave machine:
RMI_HOST_DEF=-Djava.rmi.server.hostname=xxx.xxx.xxx.2 (on Slave1)
RMI_HOST_DEF=-Djava.rmi.server.hostname=xxx.xxx.xxx.3 (on Slave2)
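(The RMI_HOST_DEF line ships commented out in the jmeter-server script; uncomment it so each slave binds RMI to an address the master can actually reach.)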
Then start jmeter-server.sh on both slave machines (xxx.xxx.xxx.2 and xxx.xxx.xxx.3) using this command:
./jmeter-server
Then run the following command from the client machine (xxx.xxx.xxx.1) to remotely start all the slaves:
./jmeter -n -t <testscript.jmx> -r
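Here -r tells JMeter to remotely start every host listed in remote_hosts; alternatively, -R xxx.xxx.xxx.2,xxx.xxx.xxx.3 starts only the hosts named on the command line.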
See this thread.
I am using PostgreSQL 9.3.5. For lack of better instructions anywhere else, I've configured a master-slave replication mechanism by following the instructions here.
The missing steps to make it work were:
On both master and slave: set wal_keep_segments = 8
Restart master before proceeding with the slave
Do not use rsync to transfer files from master to slave. Use a command like this from the slave:
pg_basebackup -h masterHostName -D /var/lib/postgresql/9.3/main --username=rep --password
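(wal_keep_segments = 8 makes the master retain at least eight past WAL segment files so the slave can catch up after short gaps, and pg_basebackup streams a consistent copy of the entire data directory over a regular replication connection, which is why it's preferable to rsync here.)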
Other than that, the process works. I am able to write into master, and I see changes reflected in the slave.
Now, my question is this: suppose the master experiences a hardware failure and I am forced to turn the read-only slave into a read-write master. A few days later I get a replacement machine for the one that failed. How do I recover the failed machine? Do I turn it into a new slave?
Advice is greatly appreciated.
You're correct: you'll need to completely set up the former master as a new slave, just like you did for the first slave.
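A sketch of that re-seeding, reusing the paths and rep user from the question (the host name and service commands are assumptions for a Debian-style install):

# on the repaired machine, wipe the stale cluster and re-clone from the promoted master
sudo service postgresql stop
sudo mv /var/lib/postgresql/9.3/main /var/lib/postgresql/9.3/main.old
sudo -u postgres pg_basebackup -h newMasterHostName -D /var/lib/postgresql/9.3/main --username=rep --password
# then create recovery.conf inside the new data directory with:
#   standby_mode = 'on'
#   primary_conninfo = 'host=newMasterHostName port=5432 user=rep password=...'
sudo service postgresql start

Once the old master is streaming from the new one, you can either leave the roles swapped or schedule a controlled switchover later.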
Our postgres data folder was installed on a drive with very limited space. I'm now trying to move it over to a newly mounted drive (more space). I've followed several blog posts and they all say...
stop service
copy data cluster
update postgresql-9.1 file (PGDATA=)
restart service
The service starts, but when I go to connect, it gives me "could not connect to server: Connection refused".
I tried telnet-ing to port 5432 and got nothing.
Here is the link to what I've been trying:
http://www-01.ibm.com/support/docview.wss?uid=swg21324272
Thanks everyone for your help. It looks like the problem was with permissions.
Instead of doing
cp -R fromfolder tofolder
I did
cp -a fromfolder tofolder
And that solved it. Thanks all.
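For context, cp -R copies files as the invoking user, so the copied cluster likely ended up owned by root rather than postgres, which is what made the server refuse connections; cp -a (archive mode, equivalent to cp -dR --preserve=all) keeps ownership, permissions, and symlinks intact. An equivalent fix after a plain copy would be something like this (the path is an assumption):

chown -R postgres:postgres /mnt/newdrive/postgresql/data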