Why doesn't AWX see the pip module? - CentOS

I use AWX 8.0.0.0 and have a job in my SCM that connects to GCP and creates an instance. When I run this job from the console with ansible-playbook job.yml it finishes fine. But when I run it from the web UI I get this error:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Please install the google-auth library"}
So it obviously means that I don't have this library. But I installed it with
pip install google-auth and it works fine when I run the playbook from the console. This is my playbook:
- name: Create jenkins vm
  hosts: localhost
  connection: local
  gather_facts: no
  vars:
    service_account_email: ansible#secret-app.iam.gserviceaccount.com
    credentials_file: /etc/conf/awx/awx.json
    project_id: geocitizen-app
    machine_type: f1-micro
    machine_name: jenkins-node-1
    image: https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-7-v20191014
    zone: europe-north1-a
  tasks:
    - name: Launch instances
      gcp_compute_instance:
        auth_kind: serviceaccount
        name: "{{ machine_name }}"
        machine_type: "{{ machine_type }}"
        #service_account_email: "{{ service_account_email }}"
        service_account_file: "{{ credentials_file }}"
        project: "{{ project_id }}"
        zone: "{{ zone }}"
        network_interfaces:
          - network:
            access_configs:
              - name: External NAT
                type: ONE_TO_ONE_NAT
        disks:
          - auto_delete: 'true'
            boot: 'true'
            initialize_params:
              source_image: "{{ image }}"
What am I doing wrong?

So the problem was that I was looking at my host machine. I installed AWX via Docker, so I needed to look inside my Docker container.
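For anyone hitting the same issue: with a Docker-based AWX install, the Python libraries have to be present inside the container that runs the jobs, not on the host. A hedged example (the container name awx_task is an assumption based on a typical docker-compose install; check docker ps for the actual name, and note that AWX may expect the package inside its Ansible virtualenv, e.g. /var/lib/awx/venv/ansible/bin/pip, rather than the system pip):

docker exec -it awx_task pip3 install google-auth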

Related

Vagrant - Ansible - Install PostgreSQL and Postgis [closed]

I'm trying to install PostgreSQL and PostGIS with Ansible on a Vagrant VM.
But I'm running into issues installing and accessing PostgreSQL (I haven't reached the PostGIS step yet).
My Vagrant VM is an ubuntu/jammy64.
First, I installed PHP on the VM.
Then I tried to install PostgreSQL. Below is my psql Ansible task file:
---
- name: Install
  apt:
    update_cache: true
    name:
      - bash
      - openssl
      - libssl-dev
      - libssl-doc
      - postgresql
      - postgresql-contrib
      - libpq-dev
      - python3-psycopg2
    state: present

- name: Check if initialized
  stat:
    path: "{{ postgresql_data_dir }}/pg_hba.conf"
  register: postgres_data

- name: Empty data dir
  file:
    path: "{{ postgresql_data_dir }}"
    state: absent
  when: not postgres_data.stat.exists

- name: Initialize
  shell: "{{ postgresql_bin_path }}/initdb -D {{ postgresql_data_dir }}"
  become: true
  become_user: postgres
  when: not postgres_data.stat.exists

- name: Start and enable service
  service:
    name: postgresql
    state: started
    enabled: true

- name: Update pg_ident.conf - allow user to auth with postgres
  lineinfile:
    dest: "/etc/postgresql/{{ postgresql_version }}/main/pg_ident.conf"
    insertafter: "# MAPNAME SYSTEM-USERNAME PG-USERNAME"
    line: "user_{{ user }} {{ user }} postgres"

- name: Update pg_hba.conf - disable peer for postgres user
  lineinfile:
    dest: "/etc/postgresql/{{ postgresql_version }}/main/pg_hba.conf"
    regexp: "local all postgres peer"
    line: "#local all postgres peer"

- name: Update pg_hba.conf - trust all connection
  lineinfile:
    dest: "/etc/postgresql/{{ postgresql_version }}/main/pg_hba.conf"
    regexp: "local all all peer"
    line: "local all all trust"

- name: Restart
  service:
    name: postgresql
    state: restarted
    enabled: true

- name: "Create database {{ postgresql_db }}"
  become: true
  become_user: "{{ postgresql_user }}"
  postgresql_db:
    name: "{{ postgresql_db }}"
    state: present

- name: "Create user {{ user }}"
  become: yes
  become_user: "{{ postgresql_user }}"
  postgresql_user:
    name: "{{ user }}"
    password: "{{ user }}"
    state: present

- name: "Grant user {{ user }}"
  become: yes
  become_user: "{{ postgresql_user }}"
  postgresql_privs:
    type: database
    database: "{{ postgresql_db }}"
    roles: "{{ user }}"
    grant_option: no
    privs: all
  notify: psql restart
My vars:
---
postgresql_version: 14
postgresql_bin_path: "/usr/lib/postgresql/{{ postgresql_version }}/bin"
postgresql_data_dir: "/var/lib/postgresql/{{ postgresql_version }}/main"
postgresql_host: localhost
postgresql_port: 5432
postgresql_db: "db_{{ user }}"
postgresql_user: "{{ user }}"
postgresql_password: "{{ user }}"
ansible_ssh_pipelining: true
But when I run the Ansible playbook I get the following output:
TASK [include_role : psql] *****************************************************
TASK [psql : Install] **********************************************************
ok: [192.168.50.50]
TASK [psql : Check if initialized] *********************************************
ok: [192.168.50.50]
TASK [psql : Empty data dir] ***************************************************
skipping: [192.168.50.50]
TASK [psql : Initialize] *******************************************************
skipping: [192.168.50.50]
TASK [psql : Start and enable service] *****************************************
ok: [192.168.50.50]
TASK [psql : Create database db_ojirai] ****************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Is the server running locally and accepting connections on that socket?
fatal: [192.168.50.50]: FAILED! => {"changed": false, "msg": "unable to connect to database: connection to server on socket \"/var/run/postgresql/.s.PGSQL.5432\" failed: Connection refused\n\tIs the server running locally and accepting connections on that socket?\n"}
PLAY RECAP *********************************************************************
192.168.50.50 : ok=14 changed=0 unreachable=0 failed=1 skipped=2 rescued=0 ignored=0
Can you explain to me where my mistake is, please? Is my PostgreSQL installation wrong?
Thanks for your feedback!
Edit:
I tried the solution suggested by β.εηοιτ.βε but the message persists. I tried the following sequences:
vagrant destroy > export vars (suggested in the post) > vagrant up > ansible deploy
export vars (suggested in the post) > vagrant reload > ansible deploy
export vars (suggested in the post) > vagrant destroy > vagrant up > ansible deploy
vagrant destroy > vagrant up > export vars (suggested in the post) > ansible deploy
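Not part of the original post, but a quick way to see whether the cluster actually came up on the VM is to ask the Debian/Ubuntu cluster tooling directly; a hedged debugging sketch (assumes postgresql-common, which the postgresql package pulls in, so pg_lsclusters is available):

- name: Check PostgreSQL cluster status (debugging aid)
  command: pg_lsclusters
  register: clusters
  changed_when: false

- name: Show cluster status
  debug:
    var: clusters.stdout_lines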

How to use Ansible to label the nodes of K8S

At present, I can only get the hostname of each node by storing it in a file locally on that node, sending the stored file to the master node, and registering its contents as a variable to set the corresponding node label in k8s. But configuring this for more nodes is difficult. Is there any good way to achieve it?
  gather_facts: yes
  become: yes
  tags: gets
  tasks:
    - name: install the latest version of rsync
      yum:
        name: rsync
        state: latest

    - name: Ansible gets the current hostname
      debug: var=hostvars[inventory_hostname]['ansible_hostname']
      register: result

    - local_action:
        module: copy
        content: "{{ result }}"
        dest: /tmp/Current_hostname.yml

    - debug: msg="{{ item }}"
      with_lines: cat /tmp/Current_hostname.yml |tr -s ' '|cut -d '"' -f6 |sort -o /tmp/Modified_hostname

- hosts: "{{ hosts | default('k8s-master') }}"
  gather_facts: no
  become: yes
  tags: gets
  tasks:
    - name: install the latest version of rsync
      yum:
        name: rsync
        state: latest

    - name: Transfer file from ServerA to ServerB
      synchronize:
        src: /tmp/Modified_hostname
        dest: /tmp/Modified_hostname
        mode: pull
      delegate_to: "{{ item }}"
      with_items: "{{ groups['mysql'][0] }}"

- hosts: "{{ hosts | default('k8s-master') }}"
  gather_facts: no
  tags: gets
  become: yes
  vars:
    k8s_mariadb: "{{ lookup('file', '/tmp/Modified_hostname') }}"
  tasks:
    - name: Gets the node host information
      debug: msg="the value of is {{ k8s_mariadb }}"

    - name: Tag MySQL fixed node
      shell: kubectl label node {{ k8s_mariadb }} harbor-master1=mariadb
      ignore_errors: yes
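Not an answer from the thread, just a rough sketch of one way to avoid the intermediate files: if facts have been gathered for the worker hosts in an earlier play, the master can label every node straight from hostvars. The group name workers and the label are placeholders, and this assumes the kubectl node names match ansible_hostname:

- hosts: "{{ hosts | default('k8s-master') }}"
  gather_facts: no
  become: yes
  tasks:
    - name: Label each node using its hostname from hostvars
      shell: kubectl label node {{ hostvars[item]['ansible_hostname'] }} harbor-master1=mariadb --overwrite
      with_items: "{{ groups['workers'] }}"
      ignore_errors: yes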

How to create Google Kubernetes (GKE) cluster in Ansible with custom image?

I've used this pattern in the past to create a GKE cluster and it's worked great, but now I need to define a custom machine type to use.
Here's the Ansible playbook I'm working with.
- name: GCE
  hosts: localhost
  gather_facts: no
  vars_files:
    - vars/default.yml
  tasks:
    - name: create cluster
      gcp_container_cluster:
        name: "{{ cluster_name }}"
        initial_node_count: "{{ node_count }}"
        initial_cluster_version: "{{ cluster_kubernetes_version }}"
        master_auth:
          username: admin
          password: "{{ cloud_admin }}"
        node_config:
          machine_type: e2-medium
          disk_size_gb: "{{ disk_size_gb }}"
        location: "{{ cluster_zone }}"
        project: "{{ project }}"
        auth_kind: "{{ auth_kind }}"
        service_account_file: "{{ service_account_file }}"
        state: present
        scopes: "{{ scopes }}"
      register: cluster

    - name: create a node pool
      google.cloud.gcp_container_node_pool:
        name: default-pool
        autoscaling:
          enabled: yes
          min_node_count: "{{ node_count }}"
          max_node_count: "{{ max_node_count }}"
        initial_node_count: "{{ node_count }}"
        cluster: "{{ cluster }}"
        location: "{{ cluster_zone }}"
        config:
          machine_type: e2-medium
          disk_size_gb: "{{ disk_size_gb }}"
        project: "{{ gce_project }}"
        auth_kind: serviceaccount
        service_account_file: "{{ service_account_file }}"
        state: present
I'm trying to use an E2-based machine with 16 cores and 70 GB of RAM. The exact spec doesn't matter as much as the fact that I can't find a preconfigured 'machine type' that fits.
Is it still possible to use Ansible to create the cluster? Do I need to create a custom machine type to reference?
Just to clarify, no errors are being thrown. Defining the machine_type as e2-medium doesn't let me allocate the resources I need. I'm asking how to, say, use e2-medium as a base and increase the RAM allocation to 70 GB, or whether that is feasible.
IIUC, you should be able to reference your machine type as e2-custom-16-71680
i.e.:
- name: your-cluster
  google.cloud.gcp_container_cluster:
    ...
    node_config:
      machine_type: e2-custom-16-71680
      disk_size_gb: "{{ disk_size_gb }}"
    ...
The (somewhat hidden) documentation for specifying custom machine types:
https://cloud.google.com/compute/docs/instances/creating-instance-with-custom-machine-type#gcloud
The name format is <family>-custom-<vCPU count>-<memory in MB>, so 16 vCPUs with 70 GB of RAM becomes e2-custom-16-71680 (70 × 1024 = 71680).

Postgresql - unable to connect to database: could not connect to server: No such file or directory

So I'm trying to create a PostgreSQL database on my remote server with Ansible, but unfortunately I'm getting this error message:
TASK [postgresql : Create database with name sola] *****************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
fatal: [some-remote-server]: FAILED! => {
"changed": false
}
MSG:
unable to connect to database: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
This is my playbook:
- name: enable the PostgreSQL package repository
  copy:
    src: pgdg-96-redhat.repo
    dest: /etc/yum.repos.d/pgdg-96-redhat.repo

- name: install additional packages
  yum:
    name: "{{ item }}"
    state: present
  with_items:
    - "{{ packages }}"

- name: Ensure bash and OpenSSL are the latest version
  yum:
    name: "{{ item }}"
    update_cache: true
    state: latest
  with_items:
    - bash
    - openssl
  tags: packages

- name: install system packages
  yum:
    name: "{{ item }}"
    state: installed
  with_items:
    - "{{ packages }}"
  become: yes

- name: Install PostgreSQL
  yum:
    name: "{{ item }}"
    update_cache: true
    state: installed
  with_items:
    - postgresql
    - postgresql-contrib
    - python-psycopg2
  tags: packages
  become: yes

- name: enabling postgresql services
  service:
    name: postgresql
    state: started
    enabled: yes

- name: Create database with name sola
  postgresql_db:
    name: sola
    encoding: 'UTF-8'
    lc_collate: 'en_US.UTF-8'
    lc_ctype: 'en_US.UTF-8'
    template: 'template0'

- name: Ensure database is created
  sudo_user: postgres
  postgresql_db:
    name: dbname
    encoding: 'UTF-8'
    lc_collate: 'en_US.UTF-8'
    lc_ctype: 'en_US.UTF-8'
    template: 'template0'
    state: present
My suspicion is that either something went wrong with the installation process, so that Postgres hasn't been properly installed on the remote server, or that I'm not properly enabling and starting the Postgres service. Any help is appreciated!
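Not part of the original post, but two things worth checking on CentOS/RHEL (both assumptions, not confirmed by the question): the yum package postgresql is only the client, the server comes from postgresql-server (or a version-specific PGDG package such as postgresql96-server), and the data directory must be initialized before the service can start. A rough sketch for the distribution package:

- name: Install the PostgreSQL server package (assumes the distro package, not PGDG)
  yum:
    name: postgresql-server
    state: installed
  become: yes

- name: Initialize the data directory (runs once)
  command: postgresql-setup initdb
  args:
    creates: /var/lib/pgsql/data/PG_VERSION
  become: yes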

Enabling mongo authentication by ansible playbook

I'm trying to install MongoDB on my server and enable authentication. But I'm stuck on adding a user for auth. When I try to execute the playbook it fails on the Add user task with this output:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: pymongo.errors.OperationFailure: there are no users authenticated
fatal: [***]: FAILED! => {"changed": false, "msg": "unable to connect to database: there are no users authenticated"}
How can I fix it?
playbook.yml
- name: Install mongodb
  apt:
    name: mongodb-org
    update_cache: yes
    state: present

- name: Set config
  template:
    src: templates/mongodb.yml
    dest: /etc/mongod.conf
  notify: restart mongodb

- name: Install pymongo
  pip:
    name: pymongo
    state: present

- name: Add user
  mongodb_user:
    database: "{{ mongodb_name }}"
    name: "{{ mongodb_user }}"
    password: "{{ mongodb_password }}"
    login_host: "{{ mongodb_bind_ip }}"
    login_port: "{{ mongodb_port }}"
    state: present
mongodb.yml
net:
  port: {{ mongodb_port }}
  bindIp: {{ mongodb_bind_ip }}
  unixDomainSocket:
    enabled: false
security:
  authorization: enabled
If you don't have an admin user in the database, you need to start MongoDB with security.authorization disabled, add the admin user, then restart MongoDB with security.authorization enabled: https://docs.mongodb.com/manual/tutorial/enable-authentication/#procedure
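A minimal sketch of that admin-creation step (not from the original answer; the variable names and the root role are assumptions), to be run while security.authorization is still disabled:

- name: Add admin user (while authorization is disabled)
  mongodb_user:
    database: admin
    name: "{{ admin_login }}"
    password: "{{ admin_password }}"
    roles: root
    login_host: "{{ mongodb_bind_ip }}"
    login_port: "{{ mongodb_port }}"
    state: present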
After that you can add more users using the admin's credentials:
- name: Add user
  mongodb_user:
    database: "{{ mongodb_name }}"
    name: "{{ mongodb_user }}"
    password: "{{ mongodb_password }}"
    login_host: "{{ mongodb_bind_ip }}"
    login_port: "{{ mongodb_port }}"
    login_user: "{{ admin_login }}"
    login_password: "{{ admin_password }}"
    state: present
https://docs.ansible.com/ansible/2.4/mongodb_user_module.html