Unable to install ansible-awx Ubuntu 18.04 - ansible-awx

I am trying to install AWX on Ubuntu 18.04 and I am getting the error below.
I have checked out the latest version of AWX from GitHub and tried running the install using:
ansible-playbook -i inventory install.yml -vvvv
TASK [local_docker : Start the containers] ************************************************************************************************************************************************************************
task path: /temp/awx/installer/roles/local_docker/tasks/compose.yml:25
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/cloud/docker/docker_service.py
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: sateesh
<localhost> EXEC /bin/sh -c 'echo ~sateesh && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/sateesh/.ansible/tmp/ansible-tmp-1555964996.64-166348838404173 `" && echo ansible-tmp-1555964996.64-166348838404173="` echo /home/sateesh/.ansible/tmp/ansible-tmp-1555964996.64-166348838404173 `" ) && sleep 0'
<localhost> PUT /home/sateesh/.ansible/tmp/ansible-local-18120SkKEmm/tmpaVUC61 TO /home/sateesh/.ansible/tmp/ansible-tmp-1555964996.64-166348838404173/docker_service.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/sateesh/.ansible/tmp/ansible-tmp-1555964996.64-166348838404173/ /home/sateesh/.ansible/tmp/ansible-tmp-1555964996.64-166348838404173/docker_service.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/env python /home/sateesh/.ansible/tmp/ansible-tmp-1555964996.64-166348838404173/docker_service.py && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/sateesh/.ansible/tmp/ansible-tmp-1555964996.64-166348838404173/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
File "/tmp/ansible_oWaqla/ansible_module_docker_service.py", line 745, in cmd_up
timeout=self.timeout)
File "/usr/local/lib/python2.7/dist-packages/compose/project.py", line 559, in up
'Encountered errors while bringing up the project.'
fatal: [localhost]: FAILED! => {
"changed": false,
"errors": [],
"invocation": {
"module_args": {
"api_version": null,
"build": false,
"cacert_path": null,
"cert_path": null,
"debug": false,
"definition": null,
"dependencies": true,
"docker_host": null,
"files": null,
"filter_logger": false,
"hostname_check": false,
"key_path": null,
"nocache": false,
"project_name": null,
"project_src": "/tmp/awxcompose",
"pull": false,
"recreate": "smart",
"remove_images": null,
"remove_orphans": false,
"remove_volumes": false,
"restarted": false,
"scale": null,
"services": null,
"ssl_version": null,
"state": "present",
"stopped": false,
"timeout": 10,
"tls": null,
"tls_hostname": null,
"tls_verify": null
}
},
"module_stderr": "Creating awx_web ... \r\n\r\u001b[1B",
"module_stdout": "",
"msg": "Error starting project unknown cause"
}
to retry, use: --limit @/temp/awx/installer/install.retry
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=8 changed=0 unreachable=0 failed=1
Not sure why it is failing.
I have the following versions of Ansible, Python, Docker and pip:
ansible 2.5.4
python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]
Docker version 18.03.1-ce, build 9ee9f40
pip 19.0.3
Thanks
Sateesh

I've tried your solution and ran into the same issue.
My problem was that my host was running apache2, so port 80 was already taken. After stopping and removing apache2, the build went through.
Thanks.

I've been following this question from the start, as I encountered the exact same message you did, but I didn't have a possible solution for you until now.
I just managed to install the latest version of AWX on my Ubuntu 18.04 server. What I did to solve my issue (after having tried this many times before) was:
Getting the latest AWX version from github
Edit the inventory file located in awx/installer, keeping the path to postgres_data_dir the same as before
Use a command to kill all my running Docker containers:
docker container kill $(docker container ls -q)
Note!: I don't have any containers running except those used for AWX
Removing all containers on my system:
docker container rm <container>
Note!: Again, I don't have any containers except those used for AWX
I've used the TAB key to let bash suggest the container names
Used the ansible playbook for AWX:
ansible-playbook -i inventory install.yml
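The container kill/remove steps above can be sketched together as one cleanup, safe here only because, as noted, nothing but the AWX containers is running on the host:

```shell
# Kill every running container, then remove all containers (running or
# stopped). Only do this on a host where Docker runs nothing but AWX.
docker container kill $(docker container ls -q)
docker container rm $(docker container ls -aq)
```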
And that's it! This time the upgrade to the latest version of AWX went through. I don't know whether you were updating or installing it "for the first time", but this is how I managed to do it, so maybe it works for you as well.
Good luck solving your issue if you haven't already.
P.S. Make sure project_src is not /tmp/awxcompose, as I learned this will cause issues: it'll work, but if you reboot Ubuntu, AWX will run into a problem. See this link.
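One hedged way to move it: in the AWX installer inventories I've seen, the variable that ends up as project_src is docker_compose_dir (the name may differ by AWX release, so verify against your checkout). A sketch of the relevant inventory line, with an example path:

```
# inventory (excerpt) -- keep the compose files out of /tmp so they
# survive a reboot; the target path is just an example.
docker_compose_dir=/var/lib/awx/awxcompose
```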

Related

How to clone a git repository in a remote instance using terraform?

I am creating an instance(virtual machine) in Oracle cloud infrastructure using terraform.
I want to clone a git repository (especially, from azure devops) to that newly created instance.
Is there any terraform module to achieve this?
Or any shell/ansible scripts that can be used in provisioner to do this?
You can make use of a Terraform null_resource. With the remote-exec provisioner it is just like SSH-ing into the box: you provide credentials and list the shell commands to run. The best part is the dependency model: you can instruct the null_resource to be fired off only after other dependent resources are available. This example specifies a jump host (bastion) in the connection section; that part is optional.
resource "null_resource" "demo_webserver1_httpd" {
  depends_on = [oci_core_instance.demo_webserver1, oci_core_instance.demo_bastionserver, null_resource.demo_webserver1_shared_filesystem]
  provisioner "remote-exec" {
    connection {
      type                = "ssh"
      user                = "opc"
      host                = data.oci_core_vnic.demo_webserver1_vnic1.private_ip_address
      private_key         = file(var.private_key_oci)
      script_path         = "/home/opc/myhttpd.sh"
      agent               = false
      timeout             = "10m"
      bastion_host        = data.oci_core_vnic.demo_bastionserver_vnic1.public_ip_address
      bastion_port        = "22"
      bastion_user        = "opc"
      bastion_private_key = file(var.private_key_oci)
    }
    inline = [
      "echo '== 1. Installing HTTPD package with yum'",
      "sudo -u root yum -y -q install httpd",
      "echo '== 2. Creating /sharedfs/index.html'",
      "sudo -u root touch /sharedfs/index.html",
      "sudo /bin/su -c \"echo 'Welcome to demo.com! These are both WEBSERVERS under LB umbrella with shared index.html ...' > /sharedfs/index.html\"",
      "echo '== 3. Adding Alias and Directory sharedfs to /etc/httpd/conf/httpd.conf'",
      "sudo /bin/su -c \"echo 'Alias /shared/ /sharedfs/' >> /etc/httpd/conf/httpd.conf\"",
      "sudo /bin/su -c \"echo '<Directory /sharedfs>' >> /etc/httpd/conf/httpd.conf\"",
      "sudo /bin/su -c \"echo 'AllowOverride All' >> /etc/httpd/conf/httpd.conf\"",
      "sudo /bin/su -c \"echo 'Require all granted' >> /etc/httpd/conf/httpd.conf\"",
      "sudo /bin/su -c \"echo '</Directory>' >> /etc/httpd/conf/httpd.conf\"",
      "echo '== 4. Disabling SELinux'",
      "sudo -u root setenforce 0",
      "echo '== 5. Disabling firewall and starting HTTPD service'",
      "sudo -u root service firewalld stop",
      "sudo -u root service httpd start"
    ]
  }
}
You will find great OCI and terraform examples by visiting this resource: https://github.com/mlinxfeld/foggykitchen_tf_oci_course
Best of luck!
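To tie this back to the original question (cloning from Azure DevOps), the same null_resource/remote-exec pattern can carry the clone itself. The sketch below reuses the connection wiring from the example above; var.azdo_pat (a personal access token) and the org/project/repo names are hypothetical placeholders:

```terraform
resource "null_resource" "demo_webserver1_clone" {
  depends_on = [oci_core_instance.demo_webserver1]

  provisioner "remote-exec" {
    connection {
      type        = "ssh"
      user        = "opc"
      host        = data.oci_core_vnic.demo_webserver1_vnic1.private_ip_address
      private_key = file(var.private_key_oci)
    }

    inline = [
      "sudo -u root yum -y -q install git",
      # HTTPS clone from Azure DevOps authenticated with a PAT; the token
      # variable and repo URL below are placeholders, not real values.
      "git clone https://anything:${var.azdo_pat}@dev.azure.com/myorg/myproject/_git/myrepo /home/opc/myrepo"
    ]
  }
}
```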

How to run multiple commands with gosu in Kubernetes job

I am defining a Kubernetes job to run a rake task but am stuck on how to write the command...
I am new to K8s and trying to run a Rails application in K8s.
In my Rails app Dockerfile, I created a user, copied the code to /home/abc, installed rvm and rails as this user, and also specified an entrypoint and command:
ENTRYPOINT ["/home/abc/docker-entrypoint.sh"]
CMD bash -l -c "cd /home/abc && rvm use 2.2.10 --default && rake db:migrate && exec /usr/bin/supervisord -c config/supervisord.conf"
In docker-entrypoint.sh, the last command is:
exec gosu abc "$@"
The goal is, at the end, to gosu to user abc and then run the db migration and start the server through supervisord. It works, although I don't know whether it is good practice or not...
Now I would like to run another rake task for some purpose.
Firstly, I tried to run it using kubectl exec command:
kubectl exec my-app-deployment-xxxx -- gosu abc bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'
It worked, but it requires knowing the pod id, which is dynamic. So I tried to create a K8s job and specify the command:
containers:
  - name: my-app
    image: my-app:v0.2
    command:
      - "gosu"
      - "abc"
      - "bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'"
I expect the job can be completed successfully, but it failed, and the error info when kubectl logs job_pod is like:
error: exec: "bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'": stat bash -l -c cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task': no such file or directory
I think it must be something about how the 'command' part is written when running multiple commands with gosu...
Thanks for your help!
Since gosu takes the user name and the Bash shell as arguments, I'd say that this is one command rather than three separate ones.
Given that there can be only one entrypoint in each container, you can try running it as follows:
containers:
  - name: my-app
    image: my-app:v0.2
    command: ["/bin/sh", "-c", "gosu username bash -l -c 'cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task'"]
Notice that you have to spawn a new shell in order to run the command, as the image's entrypoint is replaced when you specify command in the container spec in Kubernetes.
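The underlying reason the original attempt failed is that each element of the command list becomes exactly one argv entry and is never shell-parsed, so the whole quoted string was looked up as a single (nonexistent) executable. If you'd rather avoid the extra /bin/sh layer, an equivalent split using args (a sketch with the same names as above) is:

```yaml
containers:
  - name: my-app
    image: my-app:v0.2
    # command and args are concatenated into one argv:
    # gosu abc bash -l -c '<script>'
    command: ["gosu", "abc", "bash", "-l", "-c"]
    args: ["cd /home/abc && rvm use 2.2.10 --default && rake app:init:test_task"]
```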

OWASP/ZAP dangling when trying to scan

I am trying out OWASP ZAP to see if it is something we can use for our project, but I cannot make it work, and I don't know what I am doing wrong; the documentation really does not help. What I am trying is to run a scan on my API, which runs in a Docker container locally on my Windows machine, so I run the command:
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable zap-baseline.py -t http://172.21.0.2:8080/swagger.json -g gen.conf -r testreport.html
The IP 172.21.0.2 is the IP address of my API container (I even tried with localhost and 127.0.0.1),
but it just hangs in the following log message:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 1:43:31 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Nothing happens, my ZAP docker container is in an unhealthy state, and after some time it just crashes and ends up with a bunch of NullPointerExceptions. Does the ZAP docker image only work on Linux? Is there something specific I need to do when running it on a Windows machine? I don't get why this is not working, even though I am specifically following the guide at https://github.com/zaproxy/zaproxy/wiki/Docker
Edit 1
My latest try where I am trying to target my host ip address directly and the port that I am exposing my api to gives me the following error:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 2:12:07 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Total of 3 URLs
ERROR Permission denied
2019-02-14 14:12:57,116 I/O error(13): Permission denied
Traceback (most recent call last):
File "/zap/zap-baseline.py", line 347, in main
with open(base_dir + generate, 'w') as f:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
Found Java version 1.8.0_151
Available memory: 3928 MB
Setting jvm heap size: -Xmx982m
213 [main] INFO org.zaproxy.zap.DaemonBootstrap
When you run docker with docker run -v $(pwd):/zap/wrk/:rw ...,
you are mapping the /zap/wrk/ directory in the docker image to the current working directory (cwd) of the machine on which you are running docker.
I think the problem is that your current user doesn't have write access to the cwd.
Try the command below; hopefully it resolves the issue:
docker run --user $(id -u):$(id -g) -v $(pwd):/zap/wrk/:rw --rm -t owasp/zap2docker-stable zap-baseline.py -t https://your_url -g gen.conf -r testreport.html
The key error here is:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
This means that the script cannot write the gen.conf file to the directory you have mounted on /zap/wrk.
Do you have write access to the cwd when it's not mounted?
The reason is that if you use the -r parameter, zap will attempt to generate the report file under /zap/wrk/. In order to make this work, we have to mount a directory to /zap/wrk.
But when you do so, it is important that the zap container is able to perform the write operations on the mounted directory.
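A quick way to sanity-check this before involving Docker at all (a sketch; "zap-reports" is just an example of whatever directory you plan to mount):

```shell
# Sanity check: can the current user write to the directory you are
# about to mount on /zap/wrk? If not, zap-baseline.py will hit EACCES.
dir="$(pwd)/zap-reports"
mkdir -p "$dir"
if touch "$dir/.write-test" 2>/dev/null; then
  rm -f "$dir/.write-test"
  echo "writable: $dir"
else
  echo "NOT writable: $dir - zap-baseline.py will fail with 'Permission denied'"
fi
```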
So, below is a working solution using GitLab CI YAML. I started with the approach of using image: owasp/zap2docker-stable, but then had to fall back to vanilla docker commands to execute it.
test_site:
  stage: test
  image: docker:latest
  script:
    # The folder zap-reports created locally will be mounted into the
    # owasp/zap2docker container; on execution it will generate the reports
    # in this folder. The current user is passed so reports can be generated.
    - mkdir zap-reports
    - cd zap-reports
    - docker pull owasp/zap2docker-stable:latest || echo
    - docker run --name zap-container --rm -v $(pwd):/zap/wrk -u $(id -u ${USER}):$(id -g ${USER}) owasp/zap2docker-stable zap-baseline.py -t "https://example.com" -r report.html
  artifacts:
    when: always
    paths:
      - zap-reports
  allow_failure: true
So the tricks in the above code are:
Mount the local directory zap-reports to /zap/wrk, as in $(pwd):/zap/wrk
Pass the current user and group on the host machine to the docker container, so the process runs as the same user/group. This allows writing the reports to the directory mounted from the local host, and is done by -u $(id -u ${USER}):$(id -g ${USER})
Below is the working code with image: owasp/zap2docker-stable
test_site:
  variables:
    GIT_STRATEGY: none
  stage: test
  image:
    name: owasp/zap2docker-stable:latest
  before_script:
    - mkdir -p /zap/wrk
  script:
    - zap-baseline.py -t "https://example.com" -g gen.conf -I -r testreport.html
    - cp /zap/wrk/testreport.html testreport.html
  artifacts:
    when: always
    paths:
      - zap.out
      - testreport.html

Executing wait-for-it.sh in python Docker container

I have a Python docker container that needs to wait until another container (a postgres server) finishes setup. I tried the standard wait-for-it.sh, but several commands it relies on weren't included in the image. I tried a basic sleep (again in an sh file), but now it reports exec: 300: not found when it finally tries to execute the command I'm waiting on.
How do I get around this (preferably without changing the image or having to extend one)?
I know I could also just run a Python script, but ideally I'd like to use wait-for-it.sh to wait for the server to come up rather than just sleep.
Dockerfile (for stuffer):
FROM python:2.7.13
ADD ./stuff/bin /usr/local/bin/
ADD ./stuff /usr/local/stuff
WORKDIR /usr/local/bin
COPY requirements.txt /opt/updater/requirements.txt
COPY internal_requirements.txt /opt/stuff/internal_requirements.txt
RUN pip install -r /opt/stuff/requirements.txt
RUN pip install -r /opt/stuff/other_requirements.txt
docker-compose.yml:
version: '3'
services:
  local_db:
    build: ./local_db
    ports:
      - "localhost:5432:5432"
  stuffer:
    build: ./
    depends_on:
      - local_db
    command: ["./wait-for-postgres.sh", "-t", "300", "localhost:5432", "--", "python", "./stuffing.py", "--file", "./afile"]
Script I want to use (but can't because no psql or exec):
#!/bin/bash
# wait-for-postgres.sh
set -e

host="$1"
shift
cmd="$@"

until psql -h "$host" -U "postgres" -c '\l'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec $cmd
As per Sergey's comment: I had the argument order wrong. This issue had nothing to do with Docker and everything to do with my inability to read.
I made an example so you can see it working:
https://github.com/nitzap/wait-for-postgres
On the other hand, you can also get errors inside the script when validating that the service is up. You should not refer to the database as localhost, because inside a container that resolves to the container itself; if you want to point to another container, you have to use the name of its service.

Run Python 2.7 by default in a Dotcloud custom service

I need to make Python 2.7 the default version of Python for running a Jenkins build server. I'm trying to use python_version to do this, but Python 2.6 remains the default version. I'm probably missing something really simple. Any suggestions?
dotcloud.yml
jenkins:
  type: custom
  buildscript: jenkins/builder
  ports:
    www: http
  config:
    python_version: v2.7
  processes:
    sshagent: ssh-agent /bin/bash
    jenkins: ~/run
db:
  type: postgresql
builder
#!/bin/bash
if [ -f ~/jenkins.war ]; then
    echo 'Found jenkins installation.'
else
    echo 'Installing jenkins.'
    wget -O ~/jenkins.war http://mirrors.jenkins-ci.org/war/latest/jenkins.war
fi
echo 'Installing dotCloud scaffolding.'
cp -a jenkins/. ~
echo 'Setting up SSH.'
mkdir -p ~/.ssh
cp jenkins_id ~/.ssh/id_rsa
chmod 0600 ~/.ssh/id_rsa
ssh-keygen -R bitbucket.org
ssh-keyscan -H bitbucket.org >> ~/.ssh/known_hosts
I'm still not sure why my build file didn't solve the problem, but I was able to work around it by using the --python=/usr/bin/python2.7 option for virtualenv in my Jenkins build script.
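For reference, that workaround can be sketched as a build-script step like the following; the env directory name and requirements file are assumptions:

```shell
# Create a virtualenv pinned to the 2.7 interpreter instead of the
# system default 2.6, then use it for the rest of the build.
virtualenv --python=/usr/bin/python2.7 ./build-env
. ./build-env/bin/activate
python --version        # now reports the 2.7.x interpreter of the env
pip install -r requirements.txt
```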