How do I get pcp to automatically attach nodes to postgres pgpool?

I'm using postgres 9.4.9, pgpool 3.5.4 on centos 6.8.
I'm having a hard time getting pgpool to automatically detect when nodes are up (it often detects the first node but rarely the secondary), but if I use pcp_attach_node to tell it which nodes are up, then everything is hunky-dory.
So I figured that until I can properly sort the issue out, I would write a little script to check the status of the nodes and attach them as appropriate, but I'm having trouble with the password prompt. According to the documentation, I should be able to issue commands like
pcp_attach_node 10 localhost 9898 pgpool mypass 1
but that just complains
pcp_attach_node: Warning: extra command-line argument "localhost" ignored
pcp_attach_node: Warning: extra command-line argument "9898" ignored
pcp_attach_node: Warning: extra command-line argument "pgpool" ignored
pcp_attach_node: Warning: extra command-line argument "mypass" ignored
pcp_attach_node: Warning: extra command-line argument "1" ignored
It only works when I use the option flags:
pcp_attach_node -U pgpool -h localhost -p 9898 -n 1
and there's no option for the password; I have to enter it manually at the prompt.
Any suggestions for sorting this other than using Expect?

You have to create a PCPPASSFILE. See the pgpool documentation on PCP commands for more info.
Example 1:
Create a PCPPASSFILE for the logged-in user (vi ~/.pcppass). The file content is 127.0.0.1:9897:user:pass (hostname:port:username:password). Set the file permissions to 0600 (chmod 0600 ~/.pcppass).
The command should now run without asking for a password:
pcp_attach_node -h 127.0.0.1 -U user -p 9897 -w -n 1
Example 2:
Create a PCPPASSFILE (vi /usr/local/etc/.pcppass) with content 127.0.0.1:9897:user:pass (hostname:port:username:password). Set the file permissions to 0600 (chmod 0600 /usr/local/etc/.pcppass) and point the PCPPASSFILE variable at it (export PCPPASSFILE=/usr/local/etc/.pcppass).
The command should now run without asking for a password:
pcp_attach_node -h 127.0.0.1 -U user -p 9897 -w -n 1
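The steps above can be sketched as a short provisioning script. The path, host, port, user, and password are placeholders; substitute your own values:

```shell
# Sketch: provision a PCP password file non-interactively so pcp tools
# can run with -w (never prompt). All connection values are placeholders.
export PCPPASSFILE="$HOME/.pcppass_example"
echo "127.0.0.1:9897:user:pass" > "$PCPPASSFILE"   # hostname:port:username:password
chmod 0600 "$PCPPASSFILE"                          # pcp tools reject looser permissions
# With the file in place, -w suppresses the password prompt:
#   pcp_attach_node -h 127.0.0.1 -U user -p 9897 -w -n 1
```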
Script to auto-attach the nodes
You can schedule this script with, for example, crontab.
#!/bin/bash
#pgpool status
#0 - This state is only used during the initialization. PCP will never display it.
#1 - Node is up. No connections yet.
#2 - Node is up. Connections are pooled.
#3 - Node is down.
source $HOME/.bash_profile
export PCPPASSFILE=/appl/scripts/.pcppass
STATUS_0=$(/usr/local/bin/pcp_node_info -h 127.0.0.1 -U postgres -p 9897 -n 0 -w | cut -d " " -f 3)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] NODE 0 status "$STATUS_0;
if (( $STATUS_0 == 3 ))
then
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [WARN] NODE 0 is down - attaching node"
    TMP=$(/usr/local/bin/pcp_attach_node -h 127.0.0.1 -U postgres -p 9897 -n 0 -w -v)
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] "$TMP
fi
STATUS_1=$(/usr/local/bin/pcp_node_info -h 127.0.0.1 -U postgres -p 9897 -n 1 -w | cut -d " " -f 3)
echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] NODE 1 status "$STATUS_1;
if (( $STATUS_1 == 3 ))
then
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [WARN] NODE 1 is down - attaching node"
    TMP=$(/usr/local/bin/pcp_attach_node -h 127.0.0.1 -U postgres -p 9897 -n 1 -w -v)
    echo $(date +%Y.%m.%d-%H:%M:%S.%3N)" [INFO] "$TMP
fi
exit 0
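As a sketch, the crontab entry to run the script above once a minute might look like this (the script and log paths are hypothetical; adjust them to wherever you saved the script):

```
# crontab -e (as the user that owns the .pcppass file)
* * * * * /appl/scripts/attach_nodes.sh >> /appl/scripts/attach_nodes.log 2>&1
```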

Yes, you can trigger execution of this command using a customised failover_command (e.g. failover.sh in your /etc/pgpool).

An automated way to bring pgpool's down nodes back up:
Copy this script into a file with execute permission, owned by postgres, at your desired location on all nodes.
Run the crontab -e command as the postgres user.
Finally, set that script to run every minute in crontab. To execute it every second, you may create your own
service instead.
#!/bin/bash
# This script will bring up all pgpool down nodes
#************************
#******NODE STATUS*******
#************************
# 0 - This state is only used during the initialization.
# 1 - Node is up. No connection yet.
# 2 - Node is up and connection is pooled.
# 3 - Node is down
#************************
#******SCRIPT*******
#************************
server_node_list=(0 1 2)
source $HOME/.bash_profile
export PCPPASSFILE=/var/lib/pgsql/.pcppass
for server_node in "${server_node_list[@]}"
do
    node_status=$(pcp_node_info -p 9898 -h localhost -U pgpool -n "$server_node" -w | cut -d ' ' -f 3)
    if [[ $node_status == 3 ]]
    then
        pcp_attach_node -n "$server_node" -U pgpool -p 9898 -w -v
    fi
done
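For the "every second" case mentioned above, one option is a systemd service driven by a timer. This is a sketch with hypothetical unit names and script path, not tested against your setup:

```
# /etc/systemd/system/pgpool-attach.service
[Unit]
Description=Re-attach downed pgpool backend nodes

[Service]
Type=oneshot
User=postgres
ExecStart=/var/lib/pgsql/attach_nodes.sh

# /etc/systemd/system/pgpool-attach.timer
[Unit]
Description=Run pgpool-attach every 5 seconds

[Timer]
OnBootSec=5
OnUnitActiveSec=5
AccuracySec=1s

[Install]
WantedBy=timers.target
```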

Related

run conditional migration sh script via docker compose yml

I've been googling around for this but can't find an exact answer.
I am building a mobile app, and I want to run additional migration scripts when the environment is "local".
I have a docker-compose-local.yml which builds the DB:
database:
  build:
    context: ./src/Database/
    dockerfile: Dockerfile
  container_name: database
  ports:
    - 1401:1433
  environment:
    ACCEPT_EULA: 'Y'
    SA_PASSWORD: 'password'
    ENVIRONMENT: 'local'
  networks:
    - my-network
and then a Dockerfile with an entrypoint
ENTRYPOINT ["/usr/src/app/entry-point.sh"]
And then a script that runs migrations.
#!/bin/bash
# wait for MSSQL server to start
export STATUS=1
i=0
MIGRATIONS=$(ls migrations/*.sql | sort -V)
SEEDS=$(ls seed/*.sql)
while [[ $STATUS -ne 0 ]] && [[ $i -lt 30 ]]; do
    i=$((i+1))
    /opt/mssql-tools/bin/sqlcmd -S localhost -t 1 -U sa -P $SA_PASSWORD -Q "select 1" >> /dev/null
    STATUS=$?
done
if [ $STATUS -ne 0 ]; then
echo "Error: MSSQL SERVER took more than thirty seconds to start up."
exit 1
fi
echo "======= MSSQL SERVER STARTED ========" | tee -a ./config.log
# Run the setup script to create the DB and the schema in the DB
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i create-database.sql
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i create-database-user.sql
for f in $MIGRATIONS
do
    echo "Processing migration $f"
    /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i $f
done
# RUN THIS ONLY FOR ENVIRONMENT = local
for s in $SEEDS
do
    echo "Seeding $s"
    /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i $s
done
Currently everything works perfectly fine, except the seeds are added for all environments.
I only want to run the seed scripts if environment = local.
How can this condition be written into this script?
Alternatively, is there a cleaner way to do this?
Thanks
There are multiple ways to achieve your goal. Three that come to mind quickly are:
Insert and check the environment variable in the script (What you are trying to do now)
Have two versions of the script in the Container and change the entrypoint in the docker-compose file (either with environment variables or by using multiple compose files)
Build two different versions of the docker image for local and production environment
With your current setup the first alternative is the easiest:
# RUN THIS ONLY FOR ENVIRONMENT = local
if [[ "$ENVIRONMENT" == "local" ]]; then
    for s in $SEEDS
    do
        echo "Seeding $s"
        /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i $s
    done
fi
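If you prefer the second alternative, the local compose file could simply point at a different entrypoint. A sketch, where entry-point-local.sh is a hypothetical second script that also runs the seeds:

```yaml
database:
  build:
    context: ./src/Database/
    dockerfile: Dockerfile
  entrypoint: ["/usr/src/app/entry-point-local.sh"]
```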

How to check a Postgres backup has been successful (without manual effort)?

We have 100+ databases with daily backups.
How can we check for failed backups in our PostgreSQL backup schedules?
pg_dump -h localhost -p 5432 -U postgres -d db1 -v -f "path/file.backup"
pg_dump -h localhost -p 5432 -U postgres -d db2 -v -f "path/file.backup"
pg_dump -h localhost -p 5432 -U postgres -d db3 -v -f "path/file.backup"
pg_dump -h localhost -p 5432 -U postgres -d db4 -v -f "path/file.backup"
pg_dump -h localhost -p 5432 -U postgres -d db5 -v -f "path/file.backup"
...
Like this, I have 100 backup schedules.
Try doing it in a for loop, for example:
#!/bin/bash
# create an indexed array with all your databases listed
database_names=( "1" "2" "3" )
# Declare an associative array to store dbname and dump status codes
declare -A all_dbdump_states
for db in "${database_names[@]}"; do
    echo "Executing $db dump.."
    pg_dump -h localhost -p 5432 -U postgres -d $db -v -f "path/file.backup"
    dump_rc=$? # Save exit code of pg_dump process into variable
    # After each iteration, store the status code keyed by database name
    all_dbdump_states[$db]=$dump_rc
done
echo -e "\nListing status codes of all dumps:"
for db in "${!all_dbdump_states[@]}"; do
    echo "Database [$db] status: ${all_dbdump_states[$db]}"
    sleep 1
done
Here I'm echoing the pg_dump lines to make testing easier, and I introduced a deliberate typo ("ech" instead of "echo") so that the second command fails with exit code 127:
#!/bin/bash
# create an indexed array with all your databases listed
database_names=( "1" "2" "3" )
# Declare an associative array to store dbname and dump status codes
declare -A all_dbdump_states
for db in "${database_names[@]}"; do
    echo "Executing $db dump.."
    if [[ $db -eq 2 ]]; then
        # Deliberate typo ("ech") so this dump fails with exit code 127
        ech "pg_dump -h localhost -p 5432 -U postgres -d 2 -v -f 'path/file.backup'" &>/dev/null
        dump_rc=$? # Save exit code of the failed command into variable
        all_dbdump_states[$db]=$dump_rc
        continue
    fi
    echo "pg_dump -h localhost -p 5432 -U postgres -d $db -v -f 'path/file.backup'" &>/dev/null
    dump_rc=$? # Save exit code of the (echoed) pg_dump into variable
    # After each iteration, store the status code keyed by database name
    all_dbdump_states[$db]=$dump_rc
done
echo -e "\nListing status codes of all dumps:"
for db in "${!all_dbdump_states[@]}"; do
    echo "Database [$db] status: ${all_dbdump_states[$db]}"
    sleep 1
done
Sample output:
$ ./test.sh
Executing 1 dump..
Executing 2 dump..
Executing 3 dump..
Listing status codes of all dumps:
Database [1] status: 0
Database [2] status: 127
Database [3] status: 0
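To act on those codes automatically, you could reduce the array to a list of failed databases and alert whenever that list is non-empty. A sketch, using example data in place of real pg_dump exit codes:

```shell
#!/bin/bash
# Example data standing in for the all_dbdump_states array built above.
declare -A all_dbdump_states=( [db1]=0 [db2]=127 [db3]=0 )

# Collect the names of databases whose dump exited non-zero.
failed_dbs=()
for db in "${!all_dbdump_states[@]}"; do
    if [[ ${all_dbdump_states[$db]} -ne 0 ]]; then
        failed_dbs+=("$db")
    fi
done

if [[ ${#failed_dbs[@]} -gt 0 ]]; then
    echo "Backups FAILED for: ${failed_dbs[*]}"
else
    echo "All backups succeeded"
fi
```

A cron job running this could, for example, pipe the failure line to mail instead of echoing it.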

Expect script exiting too fast when trying to load data from DB

I have an expect script that loads data from a Heroku postgres database into a local .csv file. I need to use expect because the process must be automated and a password has to be entered. So far my script looks like the following:
#!/usr/bin/expect
spawn psql -h <host> -p <port> -U <username> -W <db name> -t -A -F "," -f sql.sql -o output.csv
expect "Password for user <db name>: "
send "<password>\r"
sleep 10
The sql.sql is a sql query, for example select * from my_table.
Notice that I need to add a sleep at the end of my expect script to allow the data to be written to the .csv file; otherwise nothing gets written. However, if the data I am trying to load is too big, then I have to keep adjusting the sleep time every single time. How do I avoid this?
To avoid adjusting the sleep for different queries, just add a global timeout in the expect script:
set timeout -1
so your script would be something like this :-
#!/usr/bin/expect
set timeout -1
spawn psql -h <host> -p <port> -U <username> -W <db name> -t -A -F "," -f sql.sql -o output.csv
expect "Password for user <db name>: "
send "<password>\r"
expect eof
Hope it helps.
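Worth noting as an alternative: psql can read the password from a pgpass file, which removes the need for expect altogether. A sketch with placeholder connection values:

```shell
# Write a pgpass entry (hostname:port:database:username:password) and lock
# down its permissions - psql silently ignores a world-readable file.
export PGPASSFILE="$HOME/.pgpass_example"
echo "myhost:5432:mydb:myuser:mypassword" > "$PGPASSFILE"
chmod 0600 "$PGPASSFILE"
# psql now runs unattended, and the shell naturally waits for it to finish:
#   psql -h myhost -p 5432 -U myuser mydb -t -A -F "," -f sql.sql -o output.csv
```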

Why isn't my postgres docker container mounting the database?

So I have two docker images which are used to host postgres servers. One is for first-time setup and puts a .sh and a .sql script into the appropriate directory for postgres to load them (which works as expected). The second image is for any other run, where the database has already been created. The Dockerfiles for these look like this:
FROM postgres:9.3
RUN mkdir -p /etc/postgresql/9.3/main/
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
ADD Setup.sh /docker-entrypoint-initdb.d/1.sh
ADD schema.sql /docker-entrypoint-initdb.d/2.sql
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
and
FROM postgres:9.3
MAINTAINER Andrew Broadbent <andrew.broadbent@manchester.ac.uk>
RUN mkdir -p /etc/postgresql/9.3/main/
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
I run the first from the following shell script:
# Create the volumes for the data backend database.
docker volume create --name psql-data-etc
docker volume create --name psql-data-log
docker volume create --name psql-data-lib
# Create data store database
echo -e "\n${TITLE}[Data Store Database]${NC}"
read -p " User name: " user
read -s -p " Password: " password
echo ""
read -p " Database name: " dbname
docker run -v psql-data-etc:/etc/postgresql -v psql-data-log:/var/log/postgresql -v psql-data-lib:/var/lib/postgresql -e "NEW_USER=$user" -e "NEW_PASSWORD=$password" -e "POSTGRES_DB=$dbname" -p 9001:5432 -P -d --name psql-data postgres-setup-datastore
and I run the second from the following:
echo -e "\n${TITLE}[Data Store Database]${NC}"
read -p " User name: " user
read -s -p " Password: " password
docker run -v psql-data-etc:/etc/postgresql -v psql-data-log:/var/log/postgresql -v psql-data-lib:/var/lib/postgresql -e "POSTGRES_USER=$user" -e "POSTGRES_PASSWORD=$password" -p 9001:5432 -P --name psql-data postgres-runtime
If I run the first script, connect to the database, make some changes, remove the container, and then run the second script, the changes I made to the database aren't persisted. How do I persist them? I managed to get it working earlier, but I'm not sure what I've changed and can't seem to get it working any more.
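One thing to check (a sketch, assuming the stock postgres:9.3 image): that image keeps its data under /var/lib/postgresql/data and declares that exact path as a VOLUME, so a mount at the parent /var/lib/postgresql leaves the actual data directory on an anonymous volume that is discarded with the container. Mounting a named volume at the data path itself should persist it:

```
# Hypothetical adjustment to the second run script: bind the named volume
# to the image's PGDATA path instead of its parent directory.
docker volume create --name psql-data-lib
docker run -v psql-data-lib:/var/lib/postgresql/data \
    -e "POSTGRES_USER=$user" -e "POSTGRES_PASSWORD=$password" \
    -p 9001:5432 -P --name psql-data postgres-runtime
```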

How to start Phoenix by using PostgreSQL through container?

I tried:
$ alias psql="docker exec -ti pg-hello-phoenix sh -c 'exec psql -h localhost -p 5432 -U postgres'"
$ mix ecto.create
but got:
** (RuntimeError) could not find executable psql in path, please guarantee it is available before running ecto commands
lib/ecto/adapters/postgres.ex:106: Ecto.Adapters.Postgres.run_with_psql/2
lib/ecto/adapters/postgres.ex:83: Ecto.Adapters.Postgres.storage_up/1
lib/mix/tasks/ecto.create.ex:34: anonymous fn/2 in Mix.Tasks.Ecto.Create.run/1
(elixir) lib/enum.ex:604: Enum."-each/2-lists^foreach/1-0-"/2
(elixir) lib/enum.ex:604: Enum.each/2
(mix) lib/mix/cli.ex:58: Mix.CLI.run_task/2
(elixir) lib/code.ex:363: Code.require_file/2
I also tried creating a wrapper script at /usr/local/bin/psql:
#!/usr/bin/env bash
docker exec -ti pg-hello-phoenix sh -c "exec psql -h localhost -p 5432 -U postgres $@"
and then:
$ sudo chmod +x /usr/local/bin/psql
check:
$ which psql
/usr/local/bin/psql
$ psql --version
psql (PostgreSQL) 9.5.1
run again:
$ mix ecto.create
** (Mix) The database for HelloPhoenix.Repo couldn't be created, reason given: cannot enable tty mode on non tty input.
Container with PostgreSQL is launched:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
013464d7227e postgres "/docker-entrypoint.s" 47 minutes ago Up 47 minutes 5432/tcp pg-hello-phoenix
I was able to do this by going into the /config/{environment}.exs file. In my case it was development, so /config/dev.exs. I left the hostname as localhost but added another setting, port: 32768, because that's the port that Docker exposed.
Make sure to put a space between port: and the number (a number, not a string); otherwise it won't work.
Worked as usual after that. The natural assumption is that the username / password matches on the container as well.
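For illustration, the resulting Repo config might look like this (the app name, credentials, and database name are hypothetical; only the port setting is the point):

```
# config/dev.exs
config :hello_phoenix, HelloPhoenix.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "postgres",
  password: "postgres",
  database: "hello_phoenix_dev",
  hostname: "localhost",
  port: 32768
```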
For me, the following worked:
sudo docker exec -it postgres-db bash
Once I had the interactive shell:
psql -h localhost -p 5432 -U postgres
Then I created my db manually:
CREATE DATABASE cars_dev;
Then finally:
mix ecto.migrate
Everything worked fine after that :) Hope it helps.