Unable to execute postgres sql script from ansible - postgresql

I want to execute a PostgreSQL SQL script which is fetched from my Git repository.
I want to first fetch all the scripts from the Git repository and then execute the scripts from a particular directory, which I take as an input to the Ansible script. This is my complete Ansible script:
- hosts: "{{ HOST }}"
  become: true
  become_user: "admin"
  become_method: su
  gather_facts: true
  vars:
    app_path: "{{ app_path }}"
    app_dir: "{{ app_dir }}"
  tasks:
    - stat:
        path: "{{ app_path }}/{{ app_dir }}"
      register: exist
    - name: create "{{ app_path }}/{{ app_dir }}" directory
      file: dest="{{ app_path }}/{{ app_dir }}" state=directory
      when: exist.stat.exists != True
    - git:
        repo: http://git-url/postgres-demo.git
        dest: "{{ app_path }}/{{ app_dir }}"
        version: "{{ GIT_TAG }}"
        refspec: '+refs/heads/{{ GIT_TAG }}:refs/remotes/origin/{{ GIT_TAG }}'
        update: yes
        force: true
      register: cloned
    - name: Execute postgres dump files
      shell: "/usr/bin/psql -qAtw -f {{ item }}"
      with_fileglob:
        - "{{ app_path }}/{{ app_dir }}/{{ scripts_dir }}/*"
      register: postgres_sql
      become: true
      become_user: "postgres"
      become_method: su
The above script executes successfully, but the postgres step throws the following warning:
[WARNING]: Unable to find '/home/admin/postgres-dev/test' in expected paths.
When I check my PostgreSQL DB, I don't find the tables that I wanted to create with these scripts.

All lookups in Ansible are executed on the local Ansible host (the controller).
So:
with_fileglob:
  - "{{ app_path }}/{{ app_dir }}/{{ scripts_dir }}/*"
uses the fileglob lookup, which searches for files on localhost!
You should refactor this code to use the find module first to locate the required scripts on the remote host, register its output, and then loop over the registered variable with with_items to execute the scripts.
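A minimal sketch of that refactoring, reusing the variables from the playbook above (the *.sql pattern and the sort are assumptions; adjust them to your repository layout):
- name: Find SQL scripts on the remote host
  find:
    paths: "{{ app_path }}/{{ app_dir }}/{{ scripts_dir }}"
    patterns: "*.sql"
  register: sql_files
- name: Execute postgres dump files
  shell: "/usr/bin/psql -qAtw -f {{ item.path }}"
  with_items: "{{ sql_files.files | sort(attribute='path') }}"
  become: true
  become_user: "postgres"
  become_method: su
Because find runs on the target host, the registered list contains remote paths, which is exactly what psql needs there.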

Related

How to use Ansible to label the nodes of K8S

At present, I can only get the hostname of each node by going to the corresponding node and storing it in a local file; I then send the stored file to the master node and use it as a registered variable to set the corresponding node label in k8s. With more nodes this configuration becomes difficult. Is there a good way to achieve this?
gather_facts: yes
become: yes
tags: gets
tasks:
  - name: install the latest version of rsync
    yum:
      name: rsync
      state: latest
  - name: Ansible gets the current hostname
    debug: var=hostvars[inventory_hostname]['ansible_hostname']
    register: result
  - local_action:
      module: copy
      content: "{{ result }}"
      dest: /tmp/Current_hostname.yml
  - debug: msg="{{ item }}"
    with_lines: cat /tmp/Current_hostname.yml | tr -s ' ' | cut -d '"' -f6 | sort -o /tmp/Modified_hostname
- hosts: "{{ hosts | default('k8s-master') }}"
  gather_facts: no
  become: yes
  tags: gets
  tasks:
    - name: install the latest version of rsync
      yum:
        name: rsync
        state: latest
    - name: Transfer file from ServerA to ServerB
      synchronize:
        src: /tmp/Modified_hostname
        dest: /tmp/Modified_hostname
        mode: pull
      delegate_to: "{{ item }}"
      with_items: "{{ groups['mysql'][0] }}"
- hosts: "{{ hosts | default('k8s-master') }}"
  gather_facts: no
  tags: gets
  become: yes
  vars:
    k8s_mariadb: "{{ lookup('file', '/tmp/Modified_hostname') }}"
  tasks:
    - name: Gets the node host information
      debug: msg="the value of is {{ k8s_mariadb }}"
    - name: Tag MySQL fixed node
      shell: kubectl label node {{ k8s_mariadb }} harbor-master1=mariadb
      ignore_errors: yes
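A possible simplification, sketched only under assumptions about the inventory (the group name k8s-master and the group mysql are taken from the playbook above, but your layout may differ): each node's own ansible_hostname can be used directly and the kubectl call delegated to the master, which avoids writing and copying the temporary files:
- hosts: mysql
  gather_facts: yes
  become: yes
  tasks:
    - name: Label the MySQL node from its own hostname
      shell: kubectl label node {{ hostvars[inventory_hostname]['ansible_hostname'] }} harbor-master1=mariadb --overwrite
      delegate_to: "{{ groups['k8s-master'][0] }}"
With delegate_to, the labeling command runs on the master for each host in the play, so no intermediate file is needed.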

How to create Google Kubernetes (GKE) cluster in Ansible with custom image?

I've used this pattern in the past to create a GKE cluster and it's worked great, but now I need to define a custom image type to use.
Here's the Ansible playbook I'm working with.
- name: GCE
  hosts: localhost
  gather_facts: no
  vars_files:
    - vars/default.yml
  tasks:
    - name: create cluster
      gcp_container_cluster:
        name: "{{ cluster_name }}"
        initial_node_count: "{{ node_count }}"
        initial_cluster_version: "{{ cluster_kubernetes_version }}"
        master_auth:
          username: admin
          password: "{{ cloud_admin }}"
        node_config:
          machine_type: e2-medium
          disk_size_gb: "{{ disk_size_gb }}"
        location: "{{ cluster_zone }}"
        project: "{{ project }}"
        auth_kind: "{{ auth_kind }}"
        service_account_file: "{{ service_account_file }}"
        state: present
        scopes: "{{ scopes }}"
      register: cluster
    - name: create a node pool
      google.cloud.gcp_container_node_pool:
        name: default-pool
        autoscaling:
          enabled: yes
          min_node_count: "{{ node_count }}"
          max_node_count: "{{ max_node_count }}"
        initial_node_count: "{{ node_count }}"
        cluster: "{{ cluster }}"
        location: "{{ cluster_zone }}"
        config:
          machine_type: e2-medium
          disk_size_gb: "{{ disk_size_gb }}"
        project: "{{ gce_project }}"
        auth_kind: serviceaccount
        service_account_file: "{{ service_account_file }}"
        state: present
I'm trying to use an E2 based image with 16 cores and 70 GB of RAM. The specs don't matter as much as the fact that I can't specify a machine type that's already preconfigured.
Is it possible to still use Ansible to create the cluster? Do I need to create a custom image type to reference?
Just to clarify, there are no errors being thrown. Defining the machine_type as e2-medium doesn't allow me to allocate the resources I need and define an instance with the required resources. I'm asking how to use e2-medium as a base and increase the RAM allocation to 70 GB, or whether that is feasible.
IIUC, you should be able to reference your machine type as e2-custom-16-71680. Custom machine type names follow the pattern <family>-custom-<vCPUs>-<memory in MB>, and 70 GB is 70 × 1024 = 71680 MB.
i.e.:
- name: your-cluster
  google.cloud.gcp_container_cluster:
    ...
    node_config:
      machine_type: e2-custom-16-71680
      disk_size_gb: "{{ disk_size_gb }}"
    ...
The (hidden) documentation for specifying custom machine types:
https://cloud.google.com/compute/docs/instances/creating-instance-with-custom-machine-type#gcloud

Why doesn't AWX see a pip-installed module?

I use AWX 8.0.0.0. I have a job in my SCM; that job connects to GCP and creates an instance. When I run it from the console, like ansible-playbook job.yml, it works fine. But when I run it from the web UI I get this error:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Please install the google-auth library"}
So it obviously means that I don't have this library. But I installed it with
pip install google-auth and it works fine when I run it from the console. This is my playbook:
- name: Create jenkins vm
  hosts: localhost
  connection: local
  gather_facts: no
  vars:
    service_account_email: ansible@secret-app.iam.gserviceaccount.com
    credentials_file: /etc/conf/awx/awx.json
    project_id: geocitizen-app
    machine_type: f1-micro
    machine_name: jenkins-node-1
    image: https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-7-v20191014
    zone: europe-north1-a
  tasks:
    - name: Launch instances
      gcp_compute_instance:
        auth_kind: serviceaccount
        name: "{{ machine_name }}"
        machine_type: "{{ machine_type }}"
        #service_account_email: "{{ service_account_email }}"
        service_account_file: "{{ credentials_file }}"
        project: "{{ project_id }}"
        zone: "{{ zone }}"
        network_interfaces:
          - network:
            access_configs:
              - name: External NAT
                type: ONE_TO_ONE_NAT
        disks:
          - auto_delete: 'true'
            boot: 'true'
            initialize_params:
              source_image: "{{ image }}"
What am I doing wrong?
So the problem was that I was looking at my host machine. I installed AWX via Docker, so I needed to look inside (and install the library in) my Docker container instead.

Enabling mongo authentication by ansible playbook

I'm trying to install MongoDB on my server and enable authentication, but I'm stuck on adding a user for auth. When I try to execute the playbook, it fails on the Add user task with this output:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: pymongo.errors.OperationFailure: there are no users authenticated
fatal: [***]: FAILED! => {"changed": false, "msg": "unable to connect to database: there are no users authenticated"}
How can I fix it?
playbook.yml
- name: Install mongodb
  apt:
    name: mongodb-org
    update_cache: yes
    state: present
- name: Set config
  template:
    src: templates/mongodb.yml
    dest: /etc/mongod.conf
  notify: restart mongodb
- name: Install pymongo
  pip:
    name: pymongo
    state: present
- name: Add user
  mongodb_user:
    database: "{{ mongodb_name }}"
    name: "{{ mongodb_user }}"
    password: "{{ mongodb_password }}"
    login_host: "{{ mongodb_bind_ip }}"
    login_port: "{{ mongodb_port }}"
    state: present
mongodb.yml
net:
  port: {{ mongodb_port }}
  bindIp: {{ mongodb_bind_ip }}
  unixDomainSocket:
    enabled: false
security:
  authorization: enabled
If you don't have an admin user in the database, you need to start it with security.authorization disabled, add the admin user, and then restart mongodb with security.authorization enabled: https://docs.mongodb.com/manual/tutorial/enable-authentication/#procedure
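A hedged sketch of that bootstrap step, run while security.authorization is still disabled (the root role and the admin database are a common choice here, but adapt the roles to your needs):
- name: Add admin user
  mongodb_user:
    database: admin
    name: "{{ admin_login }}"
    password: "{{ admin_password }}"
    roles: root
    login_host: "{{ mongodb_bind_ip }}"
    login_port: "{{ mongodb_port }}"
    state: present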
After that, you can add more users using the admin's credentials:
- name: Add user
  mongodb_user:
    database: "{{ mongodb_name }}"
    name: "{{ mongodb_user }}"
    password: "{{ mongodb_password }}"
    login_host: "{{ mongodb_bind_ip }}"
    login_port: "{{ mongodb_port }}"
    login_user: "{{ admin_login }}"
    login_password: "{{ admin_password }}"
    state: present
https://docs.ansible.com/ansible/2.4/mongodb_user_module.html

PgSQL - How to import database dump only when database completely empty?

The use-case is actually to automate this with Ansible. I want to import a database dump only when the database is completely empty (no tables inside). Of course there's always the option of executing an SQL statement, but that's a last resort; I believe there should be a more elegant solution for this.
The pg_restore manual doesn't provide such an option as far as I can see.
Here's how I'm planning to do this with Ansible:
- name: db_restore | Receive latest DB backup
  shell: s3cmd --skip-existing get `s3cmd ls s3://{{ aws_bucket }}/ | grep sentry | tail -1 | awk '{print $4}'` sql.latest.tgz
  args:
    chdir: /root/
    creates: sql.latest.tgz
- name: db_restore | Check if file exists
  stat: path=/root/sql.latest.tgz
  register: sql_latest
- name: db_restore | Restore latest DB backup if backup file found
  shell: PGPASSWORD={{ dbpassword }} tar -xzOf /root/sentry*.tgz db.sql | psql -U{{ dbuser }} -h{{ pgsql_server }} --set ON_ERROR_STOP=on {{ dbname }}
  when: sql_latest.stat.exists
  ignore_errors: True
Ideally this should check whether the DB is empty. No Ansible module exists for this purpose, and Google is silent on it as well. The current solution actually works: it gives an error when the import fails, and I can just ignore the error, but it's a bit painful to see a false alarm.
There's not really any such thing as "empty" as such; a database generally has the built-in types, the default PL/PgSQL language, etc., even if you create it from template0. If you create it from a different template there could be a whole lot more in there.
PostgreSQL doesn't keep a record of the first non-template write to a DB, so you can't say "changed since created" either.
That's why there's no --if-empty option to pg_restore. It doesn't really make sense.
By far and away the best option is to execute psql to query the information_schema and determine if there are any tables in the public schema. Or, even better, query for the presence of specific tables and types you know will be created by the dump.
e.g.
psql -qAt mydbname -c "select 1 from information_schema.tables where table_schema = 'public' and table_name = 'testtable';"
You can then test for zero/nonzero rows returned on stdout. Or wrap it in SELECT EXISTS(...) to get a boolean from psql. Or use a DO block that ERRORs if the table exists if you need a zero/nonzero exit status from psql.
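A minimal sketch of wiring that check into the playbook from the question (the marker table testtable is an assumption; use a table you know the dump creates, and reuse the same connection variables):
- name: db_restore | Check whether the marker table already exists
  shell: psql -qAt -U{{ dbuser }} -h{{ pgsql_server }} {{ dbname }} -c "select 1 from information_schema.tables where table_schema = 'public' and table_name = 'testtable';"
  environment:
    PGPASSWORD: "{{ dbpassword }}"
  register: table_check
  changed_when: false
- name: db_restore | Restore latest DB backup only when the marker table is missing
  shell: PGPASSWORD={{ dbpassword }} tar -xzOf /root/sentry*.tgz db.sql | psql -U{{ dbuser }} -h{{ pgsql_server }} --set ON_ERROR_STOP=on {{ dbname }}
  when: sql_latest.stat.exists and table_check.stdout == ""
The restore then only runs when the query returns no rows, so re-running the play no longer produces the false alarm.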
To regard the database as empty, we must know that nothing has been added since the point of creation. As Postgres does not track this (as already mentioned by @Craig Ringer), I recommend a different approach with regard to Ansible.
So, just use a handler mechanism like:
- name: Create zabbix postgres DB
  postgresql_db: name="{{ zabbix_db_name }}"
  notify:
    - Init zabbix database
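The handler then does the actual import and only fires when postgresql_db reports a change, i.e. when the database has just been created. A hedged sketch of what it might look like (the dump path /path/to/zabbix_schema.sql and the postgres user are assumptions):
handlers:
  - name: Init zabbix database
    become: yes
    become_user: postgres
    shell: psql -d {{ zabbix_db_name }} -f /path/to/zabbix_schema.sql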
Since it is hard to tell whether a database is "empty", as explained by others, it is much easier to check whether the database exists, and then create and restore it in one step. I'm doing it like this:
- name: Check my_database database already exists
  become: yes
  become_user: postgres
  shell: psql -l | grep my_database
  ignore_errors: true
  register: my_database_db_existence
- debug: var=my_database_db_existence
- name: Copy backup of the my-database database
  shell: your-s3-command here
  when: my_database_db_existence | failed
- name: Restore my_database database on first run
  become_user: postgres
  shell: createdb -O my_user my_database && psql -d my_database -f /path/to/my_dump.sql
  when: my_database_db_existence | failed
P.S. I've also written a detailed blog post explaining each Ansible task in the implementation.
In my Ansible continuous deployment I prefer not to check whether the DB is empty. I run the container with default properties and create the DB if it does not exist; after that I restore the DB (create schemas, tables, etc.):
- hosts: all
  vars:
    database_name: "maindb"
    pg_admin_name: "postgres"
    pg_admin_password: "postgres"
    pghost: "localhost"
    pg_user_name: "vr_user"
    pg_user_password: "ChanGeMe2021"
  tasks:
    - name: Check if database exists
      community.postgresql.postgresql_info:
        login_host: "{{ pghost }}"
        login_user: "{{ pg_admin_name }}"
        login_password: "{{ pg_admin_password }}"
        filter:
          - "databases"
      register: pg_info
    - name: Create database if it does not exist
      block:
        - name: Say status
          ansible.builtin.debug:
            msg: "Database does not exist!"
        - name: Copy database schema
          ansible.builtin.copy:
            src: "./files/maindb.sql"
            dest: "/tmp/maindb.sql"
        - name: Create database
          community.postgresql.postgresql_db:
            login_host: "{{ pghost }}"
            login_user: "{{ pg_admin_name }}"
            login_password: "{{ pg_admin_password }}"
            name: "{{ database_name }}"
            encoding: UTF-8
            # lc_collate: ru_RU.utf8
            # lc_ctype: ru_RU.utf8
        - name: Create role
          community.postgresql.postgresql_user:
            login_host: "{{ pghost }}"
            login_user: "{{ pg_admin_name }}"
            login_password: "{{ pg_admin_password }}"
            name: "{{ pg_user_name }}"
            password: "{{ pg_user_password }}"
        - name: Restore database
          community.postgresql.postgresql_db:
            login_host: "{{ pghost }}"
            login_user: "{{ pg_admin_name }}"
            login_password: "{{ pg_admin_password }}"
            name: "{{ database_name }}"
            state: restore
            target: "/tmp/maindb.sql"
          register: pg_restore_result
          failed_when: "'ERROR' in pg_restore_result.stderr"
        - name: Print restore result
          ansible.builtin.debug:
            msg: "{{ pg_restore_result }}"
      rescue:
        - name: Rollback database
          community.postgresql.postgresql_db:
            login_host: "{{ pghost }}"
            login_user: "{{ pg_admin_name }}"
            login_password: "{{ pg_admin_password }}"
            name: "{{ database_name }}"
            state: absent
        - name: Print when errors
          ansible.builtin.debug:
            msg: "Restore failed, because: {{ pg_restore_result.stderr_lines[1] }}"
      when: pg_info.databases[database_name] is not defined
You can find this code here.