I have code like this:
- name: Check if postgres is running
  community.postgresql.postgresql_ping:
    db: "{{ stl_database }}"
    port: "{{ stl_postgres_port }}"
    login_host: "{{ db_host }}"
    login_password: "{{ postgres_password }}"
  register: postgres_availabe

- name: Check the database versions
  postgresql_query:
    db: "{{ stl_database }}"
    port: "{{ stl_postgres_port }}"
    login_host: "{{ db_host }}"
    login_user: postgres
    login_password: "{{ postgres_password }}"
    query: "{{ get_db_version }}"
  become: yes
  become_user: postgres
  register: db_version_return
  when: postgres_availabe.is_available == true
It uses two community modules, which I installed with ansible-galaxy collection install community.postgresql.
The first module checks whether PostgreSQL is running on the remote server {{ db_host }}; the second runs a query (defined by {{ get_db_version }}) to get the code version from the PostgreSQL DB on that server. When I run the code, I get the output below:
TASK [Check if postgres is running] *****************************************************************************************************************************************************************************************************
Wednesday 08 June 2022 16:13:25 +0000 (0:00:00.030) 0:00:01.676 ********
ok: [localhost]
TASK [Check the database versions] ******************************************************************************************************************************************************************************************************
Wednesday 08 June 2022 16:13:25 +0000 (0:00:00.419) 0:00:02.095 ********
fatal: [localhost]: FAILED! => {"msg": "Failed to set permissions on the temporary files Ansible needs to create when becoming an unprivileged user (rc: 1, err: chmod: invalid mode: ‘A+user:postgres:rx:allow’\nTry 'chmod --help' for more information.\n}). For information on working around this, see https://docs.ansible.com/ansible-core/2.12/user_guide/become.html#risks-of-becoming-an-unprivileged-user"}
The first module works, but the second one errors. When I run with -vvv on the CLI, I get details like this:
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: philip.shangguan
<127.0.0.1> EXEC /bin/sh -c 'echo ~philip.shangguan && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /var/tmp `"&& mkdir "` echo /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213 `" && echo ansible-tmp-1654700298.6125453-341391-58979494615213="` echo /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213 `" ) && sleep 0'
redirecting (type: modules) ansible.builtin.postgresql_query to community.postgresql.postgresql_query
Using module file /home/philip.shangguan/.ansible/collections/ansible_collections/community/postgresql/plugins/modules/postgresql_query.py
<127.0.0.1> PUT /home/philip.shangguan/.ansible/tmp/ansible-local-341261szx105mk/tmppy72gy9u TO /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/AnsiballZ_postgresql_query.py
<127.0.0.1> EXEC /bin/sh -c 'setfacl -m u:postgres:r-x /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/ /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/AnsiballZ_postgresql_query.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/ /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/AnsiballZ_postgresql_query.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'chown postgres /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/ /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/AnsiballZ_postgresql_query.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'chmod +a '"'"'postgres allow read,execute'"'"' /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/ /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/AnsiballZ_postgresql_query.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'chmod A+user:postgres:rx:allow /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/ /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/AnsiballZ_postgresql_query.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /var/tmp/ansible-tmp-1654700298.6125453-341391-58979494615213/ > /dev/null 2>&1 && sleep 0'
It looks like the module is trying to run chmod +a and chmod A+user:postgres:rx:allow. If I try the commands manually, I get:
chmod A+user:postgres:rx:allow rename_appdb.sql
chmod: invalid mode: ‘A+user:postgres:rx:allow’
Try 'chmod --help' for more information.
Any idea why the module is doing that? I have the same code running on another Ansible server that I used before, and it has been working (still does today). But when I run it on a new Ansible server where I installed the community collection yesterday, I get the errors above.
Thanks!
As β.εηοιτ.βε suggested, I removed become and become_user from the task and it works; since the task already connects over TCP as login_user: postgres, becoming the local postgres user was unnecessary.
The new code:
- name: Check the database versions
  postgresql_query:
    db: "{{ stl_database }}"
    port: "{{ stl_postgres_port }}"
    login_host: "{{ db_host }}"
    login_user: postgres
    login_password: "{{ postgres_password }}"
    query: "{{ get_db_version }}"
  register: db_version_return
  when: postgres_availabe.is_available == true
and the result:
TASK [debug] ************************************************************************************
Wednesday 08 June 2022 20:16:19 +0000 (0:00:00.039)       0:00:02.880 ********
ok: [localhost] => {
    "msg": "8-8-3"
}
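For anyone who still needs become/become_user on a task like this: the -vvv output above shows why it fails. When becoming an unprivileged user, Ansible first tries setfacl on its temporary files, then falls back to chmod +a (macOS ACLs) and chmod A+... (Solaris NFSv4 ACLs); GNU chmod on Linux understands neither fallback, so when the setfacl attempt fails the whole chain errors out. A hedged sketch of the two usual remedies from the docs page linked in the error message:

# Remedy 1: make sure POSIX ACL tooling is present on the target so the
# initial setfacl call succeeds (apt here is an assumption; use whichever
# package manager matches your distribution):
- name: Install acl for unprivileged become
  apt:
    name: acl
    state: present
  become: yes

# Remedy 2: enable pipelining so module code is piped to the interpreter
# and no temporary file needs re-permissioning, e.g. as an
# inventory/group_vars variable:
#   ansible_pipelining: true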
I'm trying to install PostgreSQL and Postgis with Ansible on a Vagrant VM.
But I'm running into issues installing and accessing PostgreSQL (I haven't reached the Postgis step yet).
My Vagrant VM is an ubuntu/jammy64.
Firstly, I installed PHP on the VM.
Then I tried to install PostgreSQL. Below is my psql task file for Ansible:
---
- name: Install
  apt:
    update_cache: true
    name:
      - bash
      - openssl
      - libssl-dev
      - libssl-doc
      - postgresql
      - postgresql-contrib
      - libpq-dev
      - python3-psycopg2
    state: present

- name: Check if initialized
  stat:
    path: "{{ postgresql_data_dir }}/pg_hba.conf"
  register: postgres_data

- name: Empty data dir
  file:
    path: "{{ postgresql_data_dir }}"
    state: absent
  when: not postgres_data.stat.exists

- name: Initialize
  shell: "{{ postgresql_bin_path }}/initdb -D {{ postgresql_data_dir }}"
  become: true
  become_user: postgres
  when: not postgres_data.stat.exists

- name: Start and enable service
  service:
    name: postgresql
    state: started
    enabled: true

- name: Update pg_ident.conf - allow user to auth with postgres
  lineinfile:
    dest: "/etc/postgresql/{{ postgresql_version }}/main/pg_ident.conf"
    insertafter: "# MAPNAME SYSTEM-USERNAME PG-USERNAME"
    line: "user_{{ user }} {{ user }} postgres"

- name: Update pg_hba.conf - disable peer for postgres user
  lineinfile:
    dest: "/etc/postgresql/{{ postgresql_version }}/main/pg_hba.conf"
    regexp: "local all postgres peer"
    line: "#local all postgres peer"

- name: Update pg_hba.conf - trust all connection
  lineinfile:
    dest: "/etc/postgresql/{{ postgresql_version }}/main/pg_hba.conf"
    regexp: "local all all peer"
    line: "local all all trust"

- name: Restart
  service:
    name: postgresql
    state: restarted
    enabled: true

- name: "Create database {{ postgresql_db }}"
  become: true
  become_user: "{{ postgresql_user }}"
  postgresql_db:
    name: "{{ postgresql_db }}"
    state: present

- name: "Create user {{ user }}"
  become: yes
  become_user: "{{ postgresql_user }}"
  postgresql_user:
    name: "{{ user }}"
    password: "{{ user }}"
    state: present

- name: "Grant user {{ user }}"
  become: yes
  become_user: "{{ postgresql_user }}"
  postgresql_privs:
    type: database
    database: "{{ postgresql_db }}"
    roles: "{{ user }}"
    grant_option: no
    privs: all
  notify: psql restart
My vars:
---
postgresql_version: 14
postgresql_bin_path: "/usr/lib/postgresql/{{ postgresql_version }}/bin"
postgresql_data_dir: "/var/lib/postgresql/{{ postgresql_version }}/main"
postgresql_host: localhost
postgresql_port: 5432
postgresql_db: "db_{{ user }}"
postgresql_user: "{{ user }}"
postgresql_password: "{{ user }}"
ansible_ssh_pipelining: true
But when I run the Ansible playbook I get the following output:
TASK [include_role : psql] *****************************************************
TASK [psql : Install] **********************************************************
ok: [192.168.50.50]
TASK [psql : Check if initialized] *********************************************
ok: [192.168.50.50]
TASK [psql : Empty data dir] ***************************************************
skipping: [192.168.50.50]
TASK [psql : Initialize] *******************************************************
skipping: [192.168.50.50]
TASK [psql : Start and enable service] *****************************************
ok: [192.168.50.50]
TASK [psql : Create database db_ojirai] ****************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Is the server running locally and accepting connections on that socket?
fatal: [192.168.50.50]: FAILED! => {"changed": false, "msg": "unable to connect to database: connection to server on socket \"/var/run/postgresql/.s.PGSQL.5432\" failed: Connection refused\n\tIs the server running locally and accepting connections on that socket?\n"}
PLAY RECAP *********************************************************************
192.168.50.50 : ok=14 changed=0 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0
Can you explain to me where my mistake is, please? Is my PostgreSQL installation wrong?
Thanks for your feedback!
Edit:
I tried the solution suggested by β.εηοιτ.βε but the message persists. I tried the following sequences:
vagrant destroy > export vars (suggested in the post) > vagrant up > ansible deploy
export vars (suggested in the post) > vagrant reload > ansible deploy
export vars (suggested in the post) > vagrant destroy > vagrant up > ansible deploy
vagrant destroy > vagrant up > export vars (suggested in the post) > ansible deploy
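While debugging this, a hedged diagnostic sketch may help, inserted just before the create-database task; pg_lsclusters ships with Debian/Ubuntu's postgresql-common package, and the socket path below is the one from the error message:

- name: Show cluster status as seen by the Debian tooling
  command: pg_lsclusters
  register: cluster_status
  changed_when: false

- name: Check for the socket psycopg2 is trying to reach
  stat:
    path: /var/run/postgresql/.s.PGSQL.5432
  register: pg_socket

- debug:
    msg: "{{ cluster_status.stdout_lines + ['socket exists: ' ~ pg_socket.stat.exists] }}"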
I am using the Helm chart for Apache Airflow and creating a user using the createUserJob section:
airflow:
  createUserJob:
    command: ~
    args:
      - "bash"
      - "-c"
      - |-
        exec \
        airflow {{ semverCompare ">=2.0.0" .Values.airflowVersion | ternary "users create" "create_user" }} "$@"
      - --
      - "-r"
      - "{{ .Values.webserver.defaultUser.role }}"
      - "-u"
      - "{{ .Values.webserver.defaultUser.username }}"
      - "-e"
      - "{{ .Values.webserver.defaultUser.email }}"
      - "-f"
      - "{{ .Values.webserver.defaultUser.firstName }}"
      - "-l"
      - "{{ .Values.webserver.defaultUser.lastName }}"
      - "-p"
      - "{{ .Values.webserver.defaultUser.password }}"
However, if it exists, I would like to delete the user before creating it, but the following does not work:
airflow:
  createUserJob:
    command: ~
    args:
      - "bash"
      - "-c"
      - |-
        exec \
        airflow users delete --username admin ;
        airflow {{ semverCompare ">=2.0.0" .Values.airflowVersion | ternary "users create" "create_user" }} "$@"
      - --
      - "-r"
      - "{{ .Values.webserver.defaultUser.role }}"
      - "-u"
      - "{{ .Values.webserver.defaultUser.username }}"
      - "-e"
      - "{{ .Values.webserver.defaultUser.email }}"
      - "-f"
      - "{{ .Values.webserver.defaultUser.firstName }}"
      - "-l"
      - "{{ .Values.webserver.defaultUser.lastName }}"
      - "-p"
      - "{{ .Values.webserver.defaultUser.password }}"
Is there a way to run more than one command here?
Remove the word exec.
The syntax you show here runs bash -c 'command' args as the main container command, where args is a list of arguments that are expanded inside the command as $0, $1, $2, and so on. Inside the command string "$@" expands to those positional parameters starting at $1. This is set up correctly.
Inside the command string, exec is a special built-in utility: exec some_command replaces the current shell with some_command, and in effect ends the current script. In a container context this is useful for controlling which process is the main container process, but in this short-lived script it's not especially necessary. In particular if you're going to exec a command it must be the last command.
I might structure this as:
command:
  - /bin/sh
  - -c
  - |-
    airflow users delete --username "{{ .Values.webserver.defaultUser.username }}";
    airflow users create "$@"
    # (no `exec` in this command string)
  - --
args:
  - "-r"
  - "{{ .Values.webserver.defaultUser.role }}"
  - "-et"
  - cetera
Here the command:/args: split is kind of artificial but it makes it slightly clearer how the command words are split up.
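To see the mechanics, it may help to look at what the container runs once Helm renders the template; the concrete values below are hypothetical stand-ins for the .Values.webserver.defaultUser.* entries. The shell invoked as sh -c 'script' -- -r Admin -u admin receives -- as $0 and the remaining words as $1, $2, ..., which is exactly what "$@" forwards to airflow users create:

command:
  - /bin/sh
  - -c
  - |-
    airflow users delete --username "admin";
    airflow users create "$@"
  - --          # becomes $0 inside the shell
args:
  - "-r"        # $1
  - "Admin"     # $2 (hypothetical role)
  - "-u"        # $3
  - "admin"     # $4 (hypothetical username)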
Hello, I'm trying to start the Docker Compose file (docker-compose_mysql.yml up) but Ansible says there is no such file in the directory. I've already looked at other solutions on GitHub and Stack Overflow, but nothing has allowed me to solve my problem.
Thank you :)
my playbook
---
- name: Set up Redmine - mySQL
  connection: localhost
  hosts: localhost
  become_method: sudo
  tasks:
    - name: install docker-py
      pip: name=docker-py

    - name: Install docker compose
      command: sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
      register: command1

    - debug: var=command1.stdout_lines

    - name: Install docker compose
      command: pip install docker-compose

    - name: download docker compose
      command: wget https://raw.githubusercontent.com/sameersbn/docker-redmine/master/docker-compose-mysql.yml
      register: command2

    - debug: var=command2.stdout_lines

    - name: docker compose run
      command: docker-compose-mysql.yml up-d
      register: command3

    - debug: var=command3.stdout_lines
my error
FAILED! => {"changed": false, "cmd": "docker-compose-mysql.yml up-d", "msg": "[Errno 2] No such file or directory: b'docker-compose-mysql.yml'", "rc": 2}
(the file docker-compose-mysql.yml is present in the directory)
First, check which directory the play is actually running in:

- name: copy sql schema
  hosts: test-mysql
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ playbook_dir }}"

    - name: Docker compose
      command: docker-compose -f {{ name }}_compose.yml up -d
Then either move {{ name }}_compose.yml into that directory, or provide an absolute path: command: docker-compose -f [abs_path]{{ name }}_compose.yml up -d
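Applied to the original playbook, a hedged sketch of the failing task; the /root path is an assumption (use whatever the debug of playbook_dir, or the wget task's working directory, actually reports), and the command must invoke docker-compose with -f rather than executing the YAML file itself:

- name: docker compose run
  command: docker-compose -f /root/docker-compose-mysql.yml up -d   # absolute path is an assumption
  register: command3

- debug: var=command3.stdout_lines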
I'm deploying an Azure ARM template via an Ansible playbook, which seems to work fine; however, I wish to run two PowerShell scripts after the machine has been deployed. I already have a custom script extension running when the machine is deployed via the ARM template, but I also wish to run two more PowerShell scripts afterwards.
My Playbook:
---
- name: Deploy Azure ARM template.
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - ./vars/vault.yml
    - ./vars/vars.yml
  tasks:
    - include_vars: vault.yml

    - name: Create Azure Deploy
      azure_rm_deployment:
        client_id: "{{ client_id }}"
        secret: "{{ secret }}"
        subscription_id: "{{ subscription_id }}"
        tenant: "{{ tenant }}"
        state: present
        resource_group_name: AnsibleTest1
        location: UK South
        template: "{{ lookup('file', 'WindowsVirtualMachine.json') }}"
        parameters: "{{ (lookup('file', 'WindowsVirtualMachine.parameters.json') | from_json).parameters }}"

    - name: Run powershell script
      script: files/helloworld1.ps1

    - name: Run powershell script
      script: files/helloworld2.ps1
And the error after successfully deploying the template:
TASK [Run powershell script] ***************************************************************************************************************************************************************************
task path: /home/beefcake/.ansible/azure-json-deploy.yml:25
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: beefcake
<127.0.0.1> EXEC /bin/sh -c 'echo ~ && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230 `" && echo ansible-tmp-1507219682.48-53342098196230="` echo /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230 `" ) && sleep 0'
<127.0.0.1> PUT /home/beefcake/.ansible/files/helloworld1.ps1 TO /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/ /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1 && sleep 0'
<127.0.0.1> EXEC /bin/sh -c ' /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1 && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": true,
"failed": true,
"msg": "non-zero return code",
"rc": 127,
"stderr": "/home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1: 1: /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1: =: not found\n/home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1: 2: /home/beefcake/.ansible/tmp/ansible-tmp-1507219682.48-53342098196230/helloworld1.ps1: Set-Content: not found\n",
"stdout": "",
"stdout_lines": []
}
to retry, use: --limit #/home/beefcake/.ansible/azure-json-deploy.retry
PLAY RECAP *********************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=1
As far as I can tell, the playbook script option should send the script to the machine and run it locally, but for some reason it cannot find the script I have in a subfolder of the playbook.
Folder structure:
.ansible (folder)
- ansible.cfg
- azure-json-deploy.yml
- azure_rm.ini
- azure_rm.py
- WindowsVirtualMachine.json
- WindowsVirtualMachine.parameters.json
- vars (folder)
- vars.yml
- vault.yml
- files (folder)
- helloworld1.ps1
- helloworld2.ps1
Am I missing something?
Edit:
This is the second playbook I created, which 4c74356b41 advised me to do.
---
# This playbook tests the script module on Windows hosts
- name: Run powershell script 1
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Run powershell script
      script: files/helloworld1.ps1

# This playbook tests the script module on Windows hosts
- name: Run powershell script 2
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Run powershell script
      script: files/helloworld2.ps1
Which still generates the same error:
fatal: [localhost]: FAILED! => {
"changed": true,
"failed": true,
"msg": "non-zero return code",
"rc": 127,
"stderr": "/home/beefcake/.ansible/tmp/ansible-tmp-1507288326.16-187870805725578/helloworld1.ps1: 1: /home/beefcake/.ansible/tmp/ansible-tmp-1507288326.16-187870805725578/helloworld1.ps1: =: not found\n/home/beefcake/.ansible/tmp/ansible-tmp-1507288326.16-187870805725578/helloworld1.ps1: 2: /home/beefcake/.ansible/tmp/ansible-tmp-1507288326.16-187870805725578/helloworld1.ps1: Set-Content: not found\n",
"stdout": "",
"stdout_lines": []
}
to retry, use: --limit #/home/beefcake/.ansible/azure-json-deploy.retry
What Ansible is trying to do is copy the file from localhost to localhost, because the play is scoped to localhost. I would imagine you don't have that host in the hosts file when you launch the playbook.
You need to add the host to ansible and scope script tasks to that host.
You can either create another playbook to do that, or add an add_host step in the current one.
- add_host:
    name: name
To scope tasks to the new hosts, I'm using the import_playbook directive, which imports another playbook that is scoped to the host(s) in question. There might be a better way.
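For completeness, a hedged sketch of the add_host approach in a single playbook; the IP and credential variables are hypothetical (they would come from the deployment output or your vars files), and the second play assumes WinRM is reachable on the new VM:

- name: Deploy and register the new VM
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    # ... azure_rm_deployment task as above ...
    - name: Add the deployed VM to the in-memory inventory
      add_host:
        name: deployed_vm
        ansible_host: "{{ vm_public_ip }}"            # hypothetical
        ansible_user: "{{ vm_admin_user }}"           # hypothetical
        ansible_password: "{{ vm_admin_password }}"   # hypothetical
        ansible_connection: winrm
        ansible_winrm_server_cert_validation: ignore

- name: Run the PowerShell scripts on the VM itself
  hosts: deployed_vm
  gather_facts: false
  tasks:
    - script: files/helloworld1.ps1
    - script: files/helloworld2.ps1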
The use case, actually, is to automate this with Ansible. I want to import a database dump only when the database is completely empty (no tables inside). Of course there's always the option of executing a SQL statement, but that's a last resort; I believe there should be a more elegant solution for this.
The pg_restore manual doesn't provide such an option, as far as I can see.
Here's how I'm planning to do this with ansible:
- name: db_restore | Receive latest DB backup
  shell: s3cmd --skip-existing get `s3cmd ls s3://{{ aws_bucket }}/ | grep sentry | tail -1 | awk '{print $4}'` sql.latest.tgz
  args:
    chdir: /root/
    creates: sql.latest.tgz

- name: db_restore | Check if file exists
  stat: path=/root/sql.latest.tgz
  register: sql_latest

- name: db_restore | Restore latest DB backup if backup file found
  shell: PGPASSWORD={{ dbpassword }} tar -xzOf /root/sentry*.tgz db.sql | psql -U{{ dbuser }} -h{{ pgsql_server }} --set ON_ERROR_STOP=on {{ dbname }}
  when: sql_latest.stat.exists
  ignore_errors: True
Ideally this should check whether the DB is empty. No Ansible module exists for this purpose, and Google is silent too. The current solution does work: it raises an error when the import fails, and I can just ignore the error, but it's a bit painful to see a false alarm.
There's not really any such thing as "empty" as such; a database generally has the built-in types, the default PL/PgSQL language, etc., even if you create it from template0. If you create it from a different template there could be a whole lot more in there.
PostgreSQL doesn't keep a record of the first non-template write to a DB, so you can't say "changed since created" either.
That's why there's no --if-empty option to pg_restore. It doesn't really make sense.
By far the best option is to execute psql to query information_schema and determine whether there are any tables in the public schema. Or, even better, query for the presence of specific tables and types you know will be created by the dump.
e.g.
psql -qAt mydbname -c "select 1 from information_schema.tables where table_schema = 'public' and table_name = 'testtable';"
You can then test for zero/nonzero rows returned on stdout. Or wrap it in SELECT EXISTS(...) to get a boolean from psql. Or use a DO block that ERRORs if the table exists if you need a zero/nonzero exit status from psql.
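Translated into the question's Ansible terms, a sketch of that check using the connection variables already defined above; the sentinel table name testtable is the placeholder from the psql example:

- name: db_restore | Check whether the dump's sentinel table already exists
  shell: >
    psql -qAt -h {{ pgsql_server }} -U {{ dbuser }} {{ dbname }} -c
    "SELECT EXISTS (SELECT 1 FROM information_schema.tables
     WHERE table_schema = 'public' AND table_name = 'testtable');"
  environment:
    PGPASSWORD: "{{ dbpassword }}"
  register: sentinel_check
  changed_when: false

# then gate the restore on both conditions instead of ignore_errors:
#   when: sql_latest.stat.exists and (sentinel_check.stdout | trim) == 'f'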
To regard the database as empty, we must know that nothing has been added since the point of creation. As Postgres does not track this (as already mentioned by @Craig Ringer), I recommend a different approach with regard to Ansible.
So, just use a handler mechanism like:

- name: Create zabbix postgres DB
  postgresql_db: name="{{zabbix_db_name}}"
  notify:
    - Init zabbix database
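The handler itself isn't shown above; a minimal sketch of what it might look like, where the dump path and connection details are assumptions to adapt. Because notify only fires when postgresql_db reports changed, the import runs exactly once, on the run that created the database:

handlers:
  - name: Init zabbix database
    # hypothetical dump location and connection; adapt to your setup
    shell: psql -U postgres -d "{{ zabbix_db_name }}" -f /tmp/zabbix_dump.sql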
Since it is hard to tell whether a database is "empty", as explained by others, it is much easier to check whether the database exists, and then create and restore it in one step. I'm doing it like this:
- name: Check my_database database already exists
  become: yes
  become_user: postgres
  shell: psql -l | grep my_database
  ignore_errors: true
  register: my_database_db_existence

- debug: var=my_database_db_existence

- name: Copy backup of the my-database database
  shell: your-s3-command here
  when: my_database_db_existence is failed

- name: Restore my_database database on first run
  become: yes
  become_user: postgres
  shell: createdb -O my_user my_database && psql -d my_database -f /path/to/my_dump.sql
  when: my_database_db_existence is failed
P.S. I have also written a detailed blog post explaining each Ansible task in the implementation.
In my Ansible continuous deployment I prefer not to check whether the DB is empty. I run the container with default properties and create the DB if it does not exist; after that I restore the DB (creating schemas, tables, etc.):
- hosts: all
  vars:
    database_name: "maindb"
    pg_admin_name: "postgres"
    pg_admin_password: "postgres"
    pghost: "localhost"
    pg_user_name: "vr_user"
    pg_user_password: "ChanGeMe2021"
  tasks:
    - name: Check if database exists
      community.postgresql.postgresql_info:
        login_host: "{{ pghost }}"
        login_user: "{{ pg_admin_name }}"
        login_password: "{{ pg_admin_password }}"
        filter:
          - "databases"
      register: pg_info

    - name: Create database if it does not exist
      block:
        - name: Say status
          ansible.builtin.debug:
            msg: "Database does not exist!"

        - name: Copy database schema
          ansible.builtin.copy:
            src: "./files/maindb.sql"
            dest: "/tmp/maindb.sql"

        - name: Create database
          community.postgresql.postgresql_db:
            login_host: "{{ pghost }}"
            login_user: "{{ pg_admin_name }}"
            login_password: "{{ pg_admin_password }}"
            name: "{{ database_name }}"
            encoding: UTF-8
            # lc_collate: ru_RU.utf8
            # lc_ctype: ru_RU.utf8

        - name: Create role
          community.postgresql.postgresql_user:
            login_host: "{{ pghost }}"
            login_user: "{{ pg_admin_name }}"
            login_password: "{{ pg_admin_password }}"
            name: "{{ pg_user_name }}"
            password: "{{ pg_user_password }}"

        - name: Restore database
          community.postgresql.postgresql_db:
            login_host: "{{ pghost }}"
            login_user: "{{ pg_admin_name }}"
            login_password: "{{ pg_admin_password }}"
            name: "{{ database_name }}"
            state: restore
            target: "/tmp/maindb.sql"
          register: pg_restore_result
          failed_when: "'ERROR' in pg_restore_result.stderr"

        - name: Print restore result
          ansible.builtin.debug:
            msg: "{{ pg_restore_result }}"

      rescue:
        - name: Rollback database
          community.postgresql.postgresql_db:
            login_host: "{{ pghost }}"
            login_user: "{{ pg_admin_name }}"
            login_password: "{{ pg_admin_password }}"
            name: "{{ database_name }}"
            state: absent

        - name: Print when errors
          ansible.builtin.debug:
            msg: "Restore failed, because: {{ pg_restore_result.stderr_lines[1] }}"

      when: pg_info.databases[database_name] is not defined
You can find this code here.