/bin/sh: apt: command not found", "stderr_lines": ["/bin/sh: apt: command not found"], "stdout": "", "stdout_lines in awx - ansible-awx

I'm trying to install packages from the AWX console using an Ansible playbook, which AWX pulls from Git.
But it gives the error below on my local Ubuntu machine:
/bin/sh: apt: command not found", "stderr_lines": ["/bin/sh: apt: command not found"], "stdout": "", "stdout_lines
or sometimes:
changed": false, "cmd": "apt-get update", "msg": "[Errno 2] No such file or directory", "rc
It works with the yum module but not with the apt module. What might be the reason? Please help with this.
Here is the playbook:
- hosts: all
  become: yes
  become_method: sudo
  tasks:
    - name: ensure apache is at the latest version
      apt: name={{ item }} update_cache=yes
      with_items:
        - apache2
/bin/sh: apt: command not found", "stderr_lines": ["/bin/sh: apt: command not found"], "stdout": "", "stdout_lines

Change your code to:
---
- hosts: all
  become: True
  tasks:
    - name: ensure apache is at the latest version
      yum:
        name: "{{ item }}"
        update_cache: yes
      with_items:
        # on CentOS/RHEL the Apache package is called httpd, not apache2
        - httpd
Your distribution does not have APT installed. You're most likely on CentOS, which uses YUM as its package manager.
You should not use the apt module; use yum instead.
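If the same playbook needs to run against both Ubuntu and CentOS hosts, a minimal sketch using Ansible's generic package module and the ansible_os_family fact could look like this (the package names per family are assumptions):
---
- hosts: all
  become: True
  tasks:
    - name: ensure apache is at the latest version on either distribution
      package:
        name: "{{ 'httpd' if ansible_os_family == 'RedHat' else 'apache2' }}"
        state: latest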
--EDIT--
There could be a couple of issues going on. First, let's verify that you have targeted the correct machine. Can you run this Ansible playbook:
---
- hosts: all
  become: True
  tasks:
    - name: test
      shell: touch /tmp/file.txt
    - name: ip address of targeted nodes
      debug: var=hostvars[inventory_hostname]['ansible_default_ipv4']['address']
Now connect to your AWS node and verify that the file exists at /tmp/file.txt.
On the node itself, what happens when you run the commands yum and apt?
Also, run ip addr on your Ubuntu node and verify that the IP addresses match.
If APT is really missing, then you should reinstall your machine.
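As a quicker check, an ad-hoc run of the setup module will print which distribution each host actually reports; a sketch, assuming the same inventory is reachable from a shell:
ansible all -i inventory -m setup -a 'filter=ansible_distribution*'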

Related

How to connect postgres to ansible?

Objective:
My objective is to connect to Postgres 9.3 using Ansible 2.8.3 and to perform Postgres operations with Ansible.
I have created a YAML file that installs Postgres and also creates a database.
I tried resolving this error by changing the contents of the sudoers file, but that damaged the file and forced me to reinstall Ubuntu and Ansible.
Ansible Code:
- hosts: localhost
  become: yes
  gather_facts: no
  tasks:
    - name: ensure apt cache is up to date
      apt: update_cache=yes
    - name: ensure packages are installed
      apt: name={{ item }}
      with_items:
        - postgresql
        - libpq-dev
        - python-psycopg2

- hosts: localhost
  become: yes
  become_user: emgda
  gather_facts: no
  vars:
    dbname: myapp
    dbuser: emgda
    dbpassword: Entrib!23
  tasks:
    - name: ensure database is created
      postgresql_db: name={{ dbname }}
    - name: ensure user has access to database
      postgresql_user: db={{ dbname }} name={{ dbuser }} password={{ dbpassword }} priv=ALL
    - name: ensure user does not have unnecessary privilege
      postgresql_user: name={{ dbuser }} role_attr_flags=NOSUPERUSER,NOCREATEDB
    - name: ensure no other user can access the database
      postgresql_privs: db={{ dbname }} role=PUBLIC type=database priv=ALL state=absent
...
After running this file I get the error below:
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "sudo: a password is required\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
NOTE: Can anyone kindly help me resolve this issue? I am new to Ansible. I am following this link to practice with an already-written Ansible script.
You've set become: yes in your playbook, so Ansible is trying to switch to the root user. According to the error message - sudo: a password is required - you didn't set the --ask-become-pass option when running the playbook and you didn't set up passwordless sudo for your Ansible user.
So you need to run your playbook with the --ask-become-pass option, or set up the ability to use sudo without a password for the user that you use for Ansible.
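For example, the first option is simply (assuming the playbook file is called playbook.yml):
ansible-playbook playbook.yml --ask-become-pass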
Escalation works fine in the first play
- hosts: localhost
  become: yes
The default become_user is root. This means that the user who is running the playbook (see ansible_user) must be able to escalate privileges, i.e. run sudo su -.
The second play escalates to the user emgda. This means that the user who is running the playbook must escalate privileges with sudo su emgda:
- hosts: localhost
  become: yes
  become_user: emgda
This requires a password which is missing, resulting in the error
sudo: a password is required
The solutions are:
1) Provide the password on the command line with --ask-become-pass, or
2) Provide the password in the variable ansible_become_password, or
3) Configure sudoers to escalate the privilege without a password:
<user-running-playbook> ALL=(ALL) NOPASSWD: ALL

Deploying Flask app with psycopg2 dependency to Elastic Beanstalk: EC2 instance won't install yum packages

I'm having problems deploying my flask app to EB.
I'm using eb-cli. I've created a .ebextensions folder in the root folder of my application. The folder contains two files:
00dependencies.config
packages:
  yum:
    libffi-devel: []
    postgresql95-devel: []
01setup.config
container_commands:
  00_wsgi_pass_headers:
    command: 'echo "WSGIPassAuthorization On" >> ../wsgi.conf'
option_settings:
  "aws:elasticbeanstalk:container:python":
    WSGIPath: "api-siifra/manage.py"
  "aws:elasticbeanstalk:container:python:staticfiles":
    "/static/": "api-siifra/app/static/"
But when I run eb deploy I get the error:
ERROR: Update environment operation is complete, but with errors. For more information, see troubleshooting documentation.
Looking in the eb web interface under Health I see the error:
/opt/python/run/venv/bin/pip install -r /opt/python/ondeck/app/requirements.txt' returned non-zero exit status 1
The command fails on Error: pg_config executable not found. while installing psycopg2.
If I ssh into the EC2 instance in question and install postgresql95-devel manually, the eb deploy command returns without errors.
I thought packages: yum: .... in a *.config file ran before the pip command?
Any help would be appreciated.
Thank you.

Vagrant with Ansible stops when it meets console questions

I'm installing the mongo extension for PHP in my Vagrant box with these tasks:
---
- name: Install MongoDb PHP extension
  sudo: yes
  command: "pecl install mongo"
- name: Copy mongo extension INI to mods-available folder
  template: >
    src=mongodb_extension.ini.j2
    dest={{ php_conf_dir }}/mongodb.ini
    owner=root group=root mode=644
- name: Enabling mongo config in PHP cli conf
  sudo: yes
  file: src={{ php_conf_dir }}/xhprof.ini dest=/etc/php5/cli/conf.d/20-mongodb.ini state=link
- name: Enabling xhprof config in PHP fpm conf
  sudo: yes
  file: src={{ php_conf_dir }}/xhprof.ini dest=/etc/php5/fpm/conf.d/20-mongodb.ini state=link
  notify: reload php-fpm
The problem is that it gets stuck at the Install MongoDb PHP extension task.
I tried installing the mongo extension manually and saw that the console asks this question: Build with Cyrus SASL (MongoDB Enterprise Authentication) support? [no] :
I think the problem is this question.
Does anybody know how to answer this question in Ansible, so the provisioning can run?
Thank you very much.
You can answer the prompt with the yes command: yes or yes yes (if you want to reply 'yes'), or yes no (if you want to reply 'no').
So, you can make your task like so:
- name: Install MongoDb PHP extension
  sudo: yes
  shell: "yes {{ php_install_mongo }} | pecl install mongo"
And set php_install_mongo somewhere (or remove the var, and set it to a fixed value).
Note that it will reply the same answer to all questions (but it is not relevant in this case AFAIK).
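For instance, a minimal sketch of setting that variable at the play level (the value here is just an assumption; use whichever answer the prompt should receive):
vars:
  php_install_mongo: "no"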
EDIT: There is a better-looking alternative using the three-less-than (<<<, here-string) operator:
- name: Install MongoDb PHP extension
  sudo: yes
  shell: "pecl install mongo <<< '{{ php_install_mongo }}'"
  args:
    executable: /bin/bash   # here-strings are a bash feature, not available in plain /bin/sh

Postgres unix socket directory not persisting

I'm provisioning an Ubuntu virtual machine using Vagrant + Ansible. Postgres installs correctly, but after each machine restart or reload the directory /var/run/postgresql is missing and the postgresql service fails to start up. To get Postgres running, I need to create the unix socket directory and manually start the service.
If I try to start the service without the socket directory I get the following error:
Error: Cannot stat /var/run/postgresql
* No PostgreSQL clusters exist; see "man pg_createcluster"
How can I get around this?
EDIT
Here are my postgresql playbook tasks:
---
- name: Install Postgres + PostGIS
  apt: pkg={{ item }} state=installed update_cache=yes
  with_items:
    - libgeos-dev
    - postgresql-9.3
    - postgresql-contrib-9.3
    - postgresql-client-9.3
    - postgresql-server-dev-9.3
    - postgresql-9.3-postgis-2.1
    - postgresql-9.3-postgis-2.1-scripts
  sudo: yes
  notify:
    - restart postgresql

- name: PostgreSQL on startup
  service: name=postgresql enabled=yes state=started
  sudo: yes
  notify:
    - restart postgresql

- name: Install PostgreSQL config file
  sudo: yes
  template: >
    src=postgresql.conf
    dest=/etc/postgresql/9.3/main/postgresql.conf
    owner={{ postgresql.user }}
    group={{ postgresql.group }}
  notify:
    - restart postgresql

- name: Install PostgreSQL Host-Based-Authentication file
  template: >
    src=pg_hba.conf.j2
    dest=/etc/postgresql/9.3/main/pg_hba.conf
    owner={{ postgresql.user }} group={{ postgresql.group }}
  notify: restart postgresql
  sudo: yes
/var/run/postgresql is not supposed to persist across reboots. Typically /var/run is a symlink into a temporary filesystem anyway.
The start sequence of postgresql as packaged by Ubuntu is done by this function, in /usr/share/postgresql-common/init.d-functions:
# start all clusters of version $1
# output according to Debian Policy for init scripts
start() {
    # create socket directory
    if [ -d /var/run/postgresql ]; then
        chmod 2775 /var/run/postgresql
    else
        install -d -m 2775 -o postgres -g postgres /var/run/postgresql
    fi
    do_ctl_all start "$1" "Starting PostgreSQL $1 database server"
}
This function creates the directory, so when it's missing, it's presumably because PostgreSQL was not started, as opposed to the other way round, PostgreSQL not starting because the directory is missing.
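Since the error message complains about a missing cluster rather than the missing directory, a quick check on the VM could be (a sketch, assuming the Debian/Ubuntu postgresql-common tools are installed):
pg_lsclusters                             # lists existing clusters; empty output means none exist
sudo pg_createcluster 9.3 main --start    # creates and starts a 9.3 cluster if none exists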

Mongos Install/Setup in Elastic Beanstalk

Looking down the road at sharding, we would like to be able to have multiple mongos instances. The recommendation seems to be to put mongos on each application server. I was thinking I'd just load balance them on their own servers, but this article http://craiggwilson.com/2013/10/21/load-balanced-mongos/ indicates that there are issue with this.
So I'm back to having it on the application servers. However, we are using Elastic Beanstalk. I could install Mongo on this as a package install. But, this creates an issue with Mongos. I have not been able to find out how to get a mongos startup going using the mongodb.conf file. For replicated servers, or config servers, additional entries in the conf file can cause it to start up the way I want. But I can't do that with Mongos. If I install Mongo, it actually starts up as mongodb. I need to kill that behaviour, and get it to start as Mongos, pointed at my config servers.
All I can think of is:
Kill the mongodb startup script that autostarts the database in 'normal' mode.
Create a new upstart script that starts up mongos, pointed at the config servers.
Any thoughts on this? Or does anyone know if I'm just being obtuse, and I can copy a new mongodb.conf file into place on beanstalk that will start up the server as mongos?
We are not planning on doing this right off the bat, but we need to prepare somewhat, as if I don't have the pieces in place, I'll need to completely rebuild my beanstalk servers after the fact. I'd rather deploy ready to go, with all the software installed.
I created a folder called ".ebextensions" and a file called "aws.config". The contents of this file are as follows:
files:
  "/etc/yum.repos.d/mongodb.repo":
    mode: "000644"
    content: |
      [MongoDB]
      name=MongoDB Repository
      baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
      gpgcheck=0
      enabled=1

container_commands:
  01_enable_rootaccess:
    command: echo Defaults:root \!requiretty >> /etc/sudoers
  02_install_mongo:
    command: yum install -y mongo-10gen-server
    ignoreErrors: true
  03_turn_mongod_off:
    command: sudo chkconfig mongod off
  04_create_mongos_startup_script:
    command: sudo sh -c "echo '/usr/bin/mongos -configdb $MONGO_CONFIG_IPS -fork -logpath /var/log/mongo/mongos.log --logappend' > /etc/init.d/mongos.sh"
  05_update_mongos_startup_permissions:
    command: sudo chmod +x /etc/init.d/mongos.sh
  06_start_mongos:
    command: sudo bash /etc/init.d/mongos.sh
What this file does is:
Creates a "mongodb.repo" file (see http://docs.mongodb.org/manual/tutorial/install-mongodb-on-red-hat-centos-or-fedora-linux/).
Runs 6 container commands (these are run after the server is created but before the WAR is deployed). These are:
Enable root access - this is required for "sudo" commands afaik.
Install Mongo - install mongo as a service using the yum command. We only need "mongos" but this has not been separated yet from the mongo server. This may change in future.
Turn mongod off - this means the mongod program isn't run if the server restarts.
Create a script to run mongos. Note the $MONGO_CONFIG_IPS in step 4; you can pass these in using the configuration page in Elastic Beanstalk (a sketch of this follows below). This will run on a server reboot.
Set permissions to execute. The reason I did 4/5 as opposed to putting it into a files: section is that the files: section did not substitute the IP addresses from the environment variable.
Run the script created in step 4.
This works for me. My WAR file simply connects to localhost and all the traffic goes through the router. I stumbled about for a couple of days on this as the documentation is fairly slim in both Amazon AWS and MongoDB.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
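As a sketch of one way to pass $MONGO_CONFIG_IPS without using the console (the IP addresses below are placeholders), an option_settings block in the same .ebextensions file can define the environment variable:
option_settings:
  aws:elasticbeanstalk:application:environment:
    MONGO_CONFIG_IPS: "10.0.1.10:27019,10.0.1.11:27019,10.0.1.12:27019"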
UPDATE: If you are having problems with my old answer, please try the following - it works for version 3 of Mongo and is currently being used in our production MongoDB cluster.
This version is more advanced in that it uses internal DNS (via AWS Route53) - note the mongo-cfg1.internal .... This is recommended best practice, and it is well worth setting up your private zone using Route53. It means that if there's an issue with one of the MongoDB config instances, you can replace the broken instance and update the private IP address in Route53 - no updates are required in each Elastic Beanstalk environment, which is really cool. However, if you don't want to create a zone, you can simply insert the IP addresses in the configDB attribute (as in my first example).
files:
  "/etc/yum.repos.d/mongodb.repo":
    mode: "000644"
    content: |
      [mongodb-org-3.0]
      name=MongoDB Repository
      baseurl=http://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/3.0/x86_64/
      gpgcheck=0
      enabled=1
  "/opt/mongos.conf":
    mode: "000755"
    content: |
      net:
        port: 27017
      operationProfiling: {}
      processManagement:
        fork: "true"
      sharding:
        configDB: mongo-cfg1.internal.company.com:27019,mongo-cfg2.internal.company.com:27019,mongo-cfg3.internal.company.com:27019
      systemLog:
        destination: file
        path: /var/log/mongos.log

container_commands:
  01_install_mongo:
    command: yum install -y mongodb-org-mongos-3.0.2
    ignoreErrors: true
  02_start_mongos:
    command: "/usr/bin/mongos -f /opt/mongos.conf > /dev/null 2>&1 &"
I couldn't get #bobmarksie's solution to work, but thanks to anowak and avinci here for this .ebextensions/aws.config file:
files:
  "/home/ec2-user/install_mongo.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      echo "[MongoDB]
      name=MongoDB Repository
      baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
      gpgcheck=0
      enabled=1" | tee -a /etc/yum.repos.d/mongodb.repo
      yum -y update
      yum -y install mongodb-org-server mongodb-org-shell mongodb-org-tools

commands:
  01install_mongo:
    command: ./install_mongo.sh
    cwd: /home/ec2-user
    test: '[ ! -f /usr/bin/mongo ] && echo "MongoDB not installed"'

services:
  sysvinit:
    mongod:
      enabled: true
      ensureRunning: true
      commands: ['01install_mongo']