PgSQL - How to import a database dump only when the database is completely empty?

The use case is to automate this with Ansible. I want to import a database dump only when the database is completely empty (no tables inside). Of course there's always the option of executing a SQL statement to check, but that's a last resort; I believe there should be a more elegant solution for this.
The pg_restore manual doesn't provide such an option as far as I can see.
Here's how I'm planning to do this with Ansible:
- name: db_restore | Receive latest DB backup
  shell: s3cmd --skip-existing get `s3cmd ls s3://{{ aws_bucket }}/ | grep sentry | tail -1 | awk '{print $4}'` sql.latest.tgz
  args:
    chdir: /root/
    creates: sql.latest.tgz

- name: db_restore | Check if file exists
  stat: path=/root/sql.latest.tgz
  register: sql_latest

- name: db_restore | Restore latest DB backup if backup file found
  shell: PGPASSWORD={{ dbpassword }} tar -xzOf /root/sentry*.tgz db.sql | psql -U{{ dbuser }} -h{{ pgsql_server }} --set ON_ERROR_STOP=on {{ dbname }}
  when: sql_latest.stat.exists
  ignore_errors: True
Ideally this should check whether the DB is empty. No Ansible module exists for this purpose, and Google is silent on it as well. The current solution does actually work: the import fails with an error when the data is already there, and I can just ignore that error, but it's a bit painful to see a false alarm.
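For what it's worth, here is a minimal sketch of how the ignore_errors step could be replaced with an explicit pre-check. The table name sentry_project is an assumption standing in for any table the dump is known to create:

- name: db_restore | Check whether a table from the dump already exists
  # Hypothetical pre-check: "sentry_project" is an assumed table name; use one your dump creates.
  shell: PGPASSWORD={{ dbpassword }} psql -qAt -U{{ dbuser }} -h{{ pgsql_server }} {{ dbname }} -c "SELECT 1 FROM information_schema.tables WHERE table_schema = 'public' AND table_name = 'sentry_project';"
  register: marker_table
  changed_when: false

- name: db_restore | Restore latest DB backup only when that table is absent
  shell: PGPASSWORD={{ dbpassword }} tar -xzOf /root/sentry*.tgz db.sql | psql -U{{ dbuser }} -h{{ pgsql_server }} --set ON_ERROR_STOP=on {{ dbname }}
  when: sql_latest.stat.exists and marker_table.stdout == ""

The restore then only runs when the pre-check returns no rows, so a populated database is skipped silently instead of producing an ignored error.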

There's not really any such thing as "empty"; a database generally has the built-in types, the default PL/PgSQL language, etc., even if you create it from template0. If you create it from a different template there could be a whole lot more in there.
PostgreSQL doesn't keep a record of the first non-template write to a DB, so you can't say "changed since created" either.
That's why there's no --if-empty option to pg_restore. It doesn't really make sense.
By far the best option is to execute psql to query information_schema and determine whether there are any tables in the public schema. Or, even better, query for the presence of specific tables and types you know will be created by the dump.
e.g.
psql -qAt mydbname -c "select 1 from information_schema.tables where table_schema = 'public' and table_name = 'testtable';"
You can then test for zero/nonzero rows returned on stdout. Or wrap it in SELECT EXISTS(...) to get a boolean from psql. Or, if you need a zero/nonzero exit status from psql, use a DO block that raises an error when the table exists.
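For illustration, rough sketches of those two variants, reusing the assumed table name testtable from the query above:

# Boolean on stdout: prints "t" if the table exists, "f" otherwise.
psql -qAt mydbname -c "SELECT EXISTS (SELECT 1 FROM information_schema.tables WHERE table_schema = 'public' AND table_name = 'testtable');"

# Nonzero exit status when the table exists: the DO block raises an error, so psql exits nonzero.
psql mydbname -c "DO \$\$ BEGIN IF EXISTS (SELECT 1 FROM information_schema.tables WHERE table_schema = 'public' AND table_name = 'testtable') THEN RAISE 'table already exists'; END IF; END \$\$;"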

To regard the database as empty, we must know that nothing has been added since the point of creation. As Postgres does not track this (as already mentioned by @Craig Ringer), I recommend a different approach with regard to Ansible.
So, just use a handler mechanism like:
- name: Create zabbix postgres DB
  postgresql_db: name="{{ zabbix_db_name }}"
  notify:
    - Init zabbix database
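The handler itself is not shown here; a minimal sketch of what it could look like, with an assumed dump path and login:

handlers:
  - name: Init zabbix database
    # Hypothetical handler: /path/to/zabbix_dump.sql and the postgres login are assumptions.
    become: yes
    become_user: postgres
    shell: psql -d {{ zabbix_db_name }} -f /path/to/zabbix_dump.sql

Because handlers only fire when the notifying task reports a change, the import runs exactly once: on the run where postgresql_db actually creates the database.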

Since it is hard to tell whether a database is "empty", as explained by others, it is much easier to check whether the database exists, then create and restore it in one step. I'm doing it like this:
- name: Check my_database database already exists
  become: yes
  become_user: postgres
  shell: psql -l | grep my_database
  ignore_errors: true
  register: my_database_db_existence

- debug: var=my_database_db_existence

- name: Copy backup of the my-database database
  shell: your-s3-command here
  when: my_database_db_existence is failed

- name: Restore my_database database on first run
  become_user: postgres
  shell: createdb -O my_user my_database && psql -d my_database -f /path/to/my_dump.sql
  when: my_database_db_existence is failed
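One caveat with the grep-based check: psql -l | grep my_database also matches databases whose names merely contain my_database. A stricter variant of that first task, as a sketch, queries pg_database directly:

- name: Check my_database database already exists
  become: yes
  become_user: postgres
  # grep -q exits nonzero when no row comes back, so the "is failed" conditions in the later tasks still work.
  shell: psql -qAt -c "SELECT 1 FROM pg_database WHERE datname = 'my_database';" | grep -q 1
  ignore_errors: true
  register: my_database_db_existence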
P.S. I've also written a detailed blog post explaining each Ansible task in the implementation.

In my Ansible continuous deployment I prefer not to check whether the DB is empty or not. I run the container with default properties and create the DB if it does not exist; after that I restore the DB (creating schemas, tables, etc.):
- hosts: all
  vars:
    database_name: "maindb"
    pg_admin_name: "postgres"
    pg_admin_password: "postgres"
    pghost: "localhost"
    pg_user_name: "vr_user"
    pg_user_password: "ChanGeMe2021"
  tasks:
    - name: Check if database exists
      community.postgresql.postgresql_info:
        login_host: "{{ pghost }}"
        login_user: "{{ pg_admin_name }}"
        login_password: "{{ pg_admin_password }}"
        filter:
          - "databases"
      register: pg_info

    - name: Create database if it does not exist
      block:
        - name: Say status
          ansible.builtin.debug:
            msg: "Database does not exist!"

        - name: Copy database schema
          ansible.builtin.copy:
            src: "./files/maindb.sql"
            dest: "/tmp/maindb.sql"

        - name: Create database
          community.postgresql.postgresql_db:
            login_host: "{{ pghost }}"
            login_user: "{{ pg_admin_name }}"
            login_password: "{{ pg_admin_password }}"
            name: "{{ database_name }}"
            encoding: UTF-8
            # lc_collate: ru_RU.utf8
            # lc_ctype: ru_RU.utf8

        - name: Create role
          community.postgresql.postgresql_user:
            login_host: "{{ pghost }}"
            login_user: "{{ pg_admin_name }}"
            login_password: "{{ pg_admin_password }}"
            name: "{{ pg_user_name }}"
            password: "{{ pg_user_password }}"

        - name: Restore database
          community.postgresql.postgresql_db:
            login_host: "{{ pghost }}"
            login_user: "{{ pg_admin_name }}"
            login_password: "{{ pg_admin_password }}"
            name: "{{ database_name }}"
            state: restore
            target: "/tmp/maindb.sql"
          register: pg_restore_result
          failed_when: "'ERROR' in pg_restore_result.stderr"

        - name: Print restore result
          ansible.builtin.debug:
            msg: "{{ pg_restore_result }}"
      rescue:
        - name: Rollback database
          community.postgresql.postgresql_db:
            login_host: "{{ pghost }}"
            login_user: "{{ pg_admin_name }}"
            login_password: "{{ pg_admin_password }}"
            name: "{{ database_name }}"
            state: absent

        - name: Print when errors
          ansible.builtin.debug:
            msg: "Restore failed, because: {{ pg_restore_result.stderr_lines[1] }}"
      when: pg_info.databases[database_name] is not defined
You can find this code here.
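A brief usage note: these tasks rely on the community.postgresql collection, so something along these lines is needed before the play is run (the inventory and playbook names are just placeholders):

ansible-galaxy collection install community.postgresql
ansible-playbook -i inventory restore_maindb.yml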

Related

Vagrant - Ansible - Install PostgreSQL and Postgis [closed]

I'm trying to install PostgreSQL and Postgis with Ansible on a Vagrant VM, but I'm running into issues installing and accessing PostgreSQL (I haven't reached the Postgis step yet).
My Vagrant VM is an ubuntu/jammy64.
First, I installed PHP on the VM.
Then I try to install PostgreSQL. Below is my psql task file for Ansible:
---
- name: Install
  apt:
    update_cache: true
    name:
      - bash
      - openssl
      - libssl-dev
      - libssl-doc
      - postgresql
      - postgresql-contrib
      - libpq-dev
      - python3-psycopg2
    state: present

- name: Check if initialized
  stat:
    path: "{{ postgresql_data_dir }}/pg_hba.conf"
  register: postgres_data

- name: Empty data dir
  file:
    path: "{{ postgresql_data_dir }}"
    state: absent
  when: not postgres_data.stat.exists

- name: Initialize
  shell: "{{ postgresql_bin_path }}/initdb -D {{ postgresql_data_dir }}"
  become: true
  become_user: postgres
  when: not postgres_data.stat.exists

- name: Start and enable service
  service:
    name: postgresql
    state: started
    enabled: true

- name: Update pg_ident.conf - allow user to auth with postgres
  lineinfile:
    dest: "/etc/postgresql/{{ postgresql_version }}/main/pg_ident.conf"
    insertafter: "# MAPNAME SYSTEM-USERNAME PG-USERNAME"
    line: "user_{{ user }} {{ user }} postgres"

- name: Update pg_hba.conf - disable peer for postgres user
  lineinfile:
    dest: "/etc/postgresql/{{ postgresql_version }}/main/pg_hba.conf"
    regexp: "local all postgres peer"
    line: "#local all postgres peer"

- name: Update pg_hba.conf - trust all connection
  lineinfile:
    dest: "/etc/postgresql/{{ postgresql_version }}/main/pg_hba.conf"
    regexp: "local all all peer"
    line: "local all all trust"

- name: Restart
  service:
    name: postgresql
    state: restarted
    enabled: true

- name: "Create database {{ postgresql_db }}"
  become: true
  become_user: "{{ postgresql_user }}"
  postgresql_db:
    name: "{{ postgresql_db }}"
    state: present

- name: "Create user {{ user }}"
  become: yes
  become_user: "{{ postgresql_user }}"
  postgresql_user:
    name: "{{ user }}"
    password: "{{ user }}"
    state: present

- name: "Grant user {{ user }}"
  become: yes
  become_user: "{{ postgresql_user }}"
  postgresql_privs:
    type: database
    database: "{{ postgresql_db }}"
    roles: "{{ user }}"
    grant_option: no
    privs: all
  notify: psql restart
My vars:
---
postgresql_version: 14
postgresql_bin_path: "/usr/lib/postgresql/{{ postgresql_version }}/bin"
postgresql_data_dir: "/var/lib/postgresql/{{ postgresql_version }}/main"
postgresql_host: localhost
postgresql_port: 5432
postgresql_db: "db_{{ user }}"
postgresql_user: "{{ user }}"
postgresql_password: "{{ user }}"
ansible_ssh_pipelining: true
But when I run the Ansible playbook, I get the following feedback:
TASK [include_role : psql] *****************************************************
TASK [psql : Install] **********************************************************
ok: [192.168.50.50]
TASK [psql : Check if initialized] *********************************************
ok: [192.168.50.50]
TASK [psql : Empty data dir] ***************************************************
skipping: [192.168.50.50]
TASK [psql : Initialize] *******************************************************
skipping: [192.168.50.50]
TASK [psql : Start and enable service] *****************************************
ok: [192.168.50.50]
TASK [psql : Create database db_ojirai] ****************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Is the server running locally and accepting connections on that socket?
fatal: [192.168.50.50]: FAILED! => {"changed": false, "msg": "unable to connect to database: connection to server on socket \"/var/run/postgresql/.s.PGSQL.5432\" failed: Connection refused\n\tIs the server running locally and accepting connections on that socket?\n"}
PLAY RECAP *********************************************************************
192.168.50.50 : ok=14 changed=0 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0
Can you explain to me where my mistake is, please? Is it my PostgreSQL installation that is wrong?
Thanks for your feedback!
Edit:
I tried the solution suggested by β.εηοιτ.βε but the message persists. I tried the following sequences:
vagrant destroy > export vars (suggested in the post) > vagrant up > ansible deploy
export vars (suggested in the post) > vagrant reload > ansible deploy
export vars (suggested in the post) > vagrant destroy > vagrant up > ansible deploy
vagrant destroy > vagrant up > export vars (suggested in the post) > ansible deploy

How to use Ansible to label the nodes of K8S

At present, I can only get the hostname of a node by storing it in a file locally on that node; I then send the stored file to the master node and register the file's contents as a variable in order to set the corresponding node label in k8s. With more nodes, this configuration becomes difficult. Is there a good way to achieve this?
  gather_facts: yes
  become: yes
  tags: gets
  tasks:
    - name: install the latest version of rsync
      yum:
        name: rsync
        state: latest

    - name: Ansible gets the current hostname
      debug: var=hostvars[inventory_hostname]['ansible_hostname']
      register: result

    - local_action:
        module: copy
        content: "{{ result }}"
        dest: /tmp/Current_hostname.yml

    - debug: msg="{{ item }}"
      with_lines: cat /tmp/Current_hostname.yml |tr -s ' '|cut -d '"' -f6 |sort -o /tmp/Modified_hostname

- hosts: "{{ hosts | default('k8s-master') }}"
  gather_facts: no
  become: yes
  tags: gets
  tasks:
    - name: install the latest version of rsync
      yum:
        name: rsync
        state: latest

    - name: Transfer file from ServerA to ServerB
      synchronize:
        src: /tmp/Modified_hostname
        dest: /tmp/Modified_hostname
        mode: pull
      delegate_to: "{{ item }}"
      with_items: "{{ groups['mysql'][0] }}"

- hosts: "{{ hosts | default('k8s-master') }}"
  gather_facts: no
  tags: gets
  become: yes
  vars:
    k8s_mariadb: "{{ lookup('file', '/tmp/Modified_hostname') }}"
  tasks:
    - name: Gets the node host information
      debug: msg="the value of is {{ k8s_mariadb }}"

    - name: Tag MySQL fixed node
      shell: kubectl label node {{ k8s_mariadb }} harbor-master1=mariadb
      ignore_errors: yes

Postgresql - unable to connect to database: could not connect to server: No such file or directory

So I'm trying to create a Postgres database on my remote server with Ansible, but unfortunately I'm getting this error message:
TASK [postgresql : Create database with name sola] *****************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
fatal: [some-remote-server]: FAILED! => {
"changed": false
}
MSG:
unable to connect to database: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
This is my playbook:
- name: enable the PostgreSQL package repository
  copy:
    src: pgdg-96-redhat.repo
    dest: /etc/yum.repos.d/pgdg-96-redhat.repo

- name: install additional packages
  yum:
    name: "{{ item }}"
    state: present
  with_items:
    - "{{ packages }}"

- name: Ensure bash and OpenSSL are the latest version
  yum:
    name: "{{ item }}"
    update_cache: true
    state: latest
  with_items:
    - bash
    - openssl
  tags: packages

- name: install system packages
  yum:
    name: "{{ item }}"
    state: installed
  with_items:
    - "{{ packages }}"
  become: yes

- name: Install PostgreSQL
  yum:
    name: "{{ item }}"
    update_cache: true
    state: installed
  with_items:
    - postgresql
    - postgresql-contrib
    - python-psycopg2
  tags: packages
  become: yes

- name: enabling postgresql services
  service:
    name: postgresql
    state: started
    enabled: yes

- name: Create database with name sola
  postgresql_db:
    name: sola
    encoding: 'UTF-8'
    lc_collate: 'en_US.UTF-8'
    lc_ctype: 'en_US.UTF-8'
    template: 'template0'

- name: Ensure database is created
  sudo_user: postgres
  postgresql_db:
    name: dbname
    encoding: 'UTF-8'
    lc_collate: 'en_US.UTF-8'
    lc_ctype: 'en_US.UTF-8'
    template: 'template0'
    state: present
My suspicion is that either something went wrong with the installation process, so that Postgres hasn't been properly installed on the remote server at all, or that I'm not properly enabling and starting the Postgres service. Any help is appreciated!

Enabling mongo authentication by ansible playbook

I'm trying to install MongoDB on my server and enable authentication, but I'm stuck on adding a user for auth. When I execute the playbook, it fails on the Add user task with this output:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: pymongo.errors.OperationFailure: there are no users authenticated
fatal: [***]: FAILED! => {"changed": false, "msg": "unable to connect to database: there are no users authenticated"}
How can I fix it?
playbook.yml
- name: Install mongodb
  apt:
    name: mongodb-org
    update_cache: yes
    state: present

- name: Set config
  template:
    src: templates/mongodb.yml
    dest: /etc/mongod.conf
  notify: restart mongodb

- name: Install pymongo
  pip:
    name: pymongo
    state: present

- name: Add user
  mongodb_user:
    database: "{{ mongodb_name }}"
    name: "{{ mongodb_user }}"
    password: "{{ mongodb_password }}"
    login_host: "{{ mongodb_bind_ip }}"
    login_port: "{{ mongodb_port }}"
    state: present
mongodb.yml
net:
  port: {{ mongodb_port }}
  bindIp: {{ mongodb_bind_ip }}
  unixDomainSocket:
    enabled: false
security:
  authorization: enabled
If you don't have an admin user in the database, you need to start MongoDB with security.authorization disabled, add the admin user, and then restart MongoDB with security.authorization enabled: https://docs.mongodb.com/manual/tutorial/enable-authentication/#procedure
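A minimal Ansible sketch of that bootstrap sequence; the mongodb_noauth.yml template, the mongod service name and the admin_login / admin_password variables are assumptions, not part of the original playbook:

- name: Set config without authorization
  template:
    src: templates/mongodb_noauth.yml   # same config as mongodb.yml, but without the security section
    dest: /etc/mongod.conf

- name: Restart mongodb without authorization
  service:
    name: mongod
    state: restarted

- name: Add admin user
  mongodb_user:
    database: admin
    name: "{{ admin_login }}"
    password: "{{ admin_password }}"
    roles: root
    login_host: "{{ mongodb_bind_ip }}"
    login_port: "{{ mongodb_port }}"
    state: present

- name: Set config with authorization enabled
  template:
    src: templates/mongodb.yml
    dest: /etc/mongod.conf
  notify: restart mongodb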
After that you can add more users using the admin's credentials:
- name: Add user
  mongodb_user:
    database: "{{ mongodb_name }}"
    name: "{{ mongodb_user }}"
    password: "{{ mongodb_password }}"
    login_host: "{{ mongodb_bind_ip }}"
    login_port: "{{ mongodb_port }}"
    login_user: "{{ admin_login }}"
    login_password: "{{ admin_password }}"
    state: present
https://docs.ansible.com/ansible/2.4/mongodb_user_module.html

Unable to execute postgres sql script from ansible

I want to execute a PostgreSQL SQL script which is fetched from my git repository.
I want to first fetch all the scripts from the git repository and then execute the scripts from a particular directory, which I take as an input to the Ansible script. This is my complete Ansible playbook:
- hosts: "{{ HOST }}"
become: true
become_user: "admin"
become_method: su
gather_facts: true
vars:
app_path: "{{ app_path }}"
app_dir: "{{ app_dir }}"
tasks:
- stat:
path: "{{ app_path }}/{{ app_dir }}"
register: exist
- name: create "{{ app_path }}/{{ app_dir }}" directory
file: dest="{{ app_path }}/{{ app_dir }}" state=directory
when: exist.stat.exists != True
- git:
repo: http://git-url/postgres-demo.git
dest: "{{ app_path }}/{{ app_dir }}"
version: "{{ GIT_TAG }}"
refspec: '+refs/heads/{{ GIT_TAG }}:refs/remotes/origin/{{ GIT_TAG }}'
update: yes
force: true
register: cloned
- name: Execute postgres dump files
shell: "/usr/bin/psql -qAtw -f {{ item }}"
with_fileglob:
- "{{ app_path }}/{{ app_dir }}/{{ scripts_dir }}/*"
register: postgres_sql
become: true
become_user: "postgres"
become_method: su
The above script executes successfully, but the postgres step throws the following warning:
[WARNING]: Unable to find '/home/admin/postgres-dev/test' in expected paths.
When I check my PostgreSQL DB, I don't find the tables these scripts are supposed to create.
All lookups in Ansible are executed on the local Ansible host.
So:
with_fileglob:
  - "{{ app_path }}/{{ app_dir }}/{{ scripts_dir }}/*"
uses the fileglob lookup, which searches for files on localhost!
You should refactor this code to use the find module first to locate all the required scripts, register its output, and then loop with with_items over the registered variable to execute the scripts.
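A rough sketch of that refactor, using the same variables and become settings as the playbook above (the *.sql pattern is an assumption):

# find runs on the remote host, so the glob is evaluated where the repository was actually cloned.
- name: Find postgres dump files on the remote host
  find:
    paths: "{{ app_path }}/{{ app_dir }}/{{ scripts_dir }}"
    patterns: "*.sql"
  register: sql_files

- name: Execute postgres dump files
  shell: "/usr/bin/psql -qAtw -f {{ item.path }}"
  with_items: "{{ sql_files.files }}"
  become: true
  become_user: "postgres"
  become_method: su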