Ansible: Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory - kubernetes

I'm setting up a Kubernetes cluster with Ansible. I get the following error when trying to enable kernel IP routing:
Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
Is this a bug in Ansible, or is there something wrong with my playbook?
---
# file: site.yml
# description: Installs and starts a Kubernetes cluster and its dependencies
#
# resources:
# - https://kubernetes.io/docs/setup/independent/install-kubeadm/
# - http://michele.sciabarra.com/2018/02/12/devops/Kubernetes-with-KubeAdm-Ansible-Vagrant/
# - https://docs.ansible.com/ansible/latest/modules/
# - https://github.com/geerlingguy/ansible-role-kubernetes/blob/master/tasks/setup-RedHat.yml
# - https://docs.docker.com/install/linux/docker-ce/centos/
#
# author: Tuomas Toivonen
# date: 30.12.2018

- name: Asenna docker ja kubernetes   # Install docker and kubernetes
  hosts: k8s-machines
  become: true
  become_method: sudo
  roles:
    - common
  vars:
    ip_modules:
      - ip_vs
      - ip_vs_rr
      - ip_vs_wrr
      - ip_vs_sh
      - nf_conntrack_ipv4
  tasks:
    - name: Poista swapfile   # Remove the swap file
      tags:
        - os-settings
      mount:
        name: swap
        fstype: swap
        state: absent
    - name: Disabloi swap-muisti   # Disable swap memory
      tags:
        - os-settings
      command: swapoff -a
      when: ansible_swaptotal_mb > 0
    - name: Konfiguroi verkkoasetukset   # Configure network settings
      tags:
        - os-settings
      command: modprobe {{ item }}
      loop: "{{ ip_modules }}"
    - name: Modprobe
      tags:
        - os-settings
      lineinfile:
        path: "/etc/modules"
        line: "{{ item }}"
        create: yes
        state: present
      loop: "{{ ip_modules }}"
    - name: Iptables
      tags:
        - os-settings
      sysctl:
        name: "{{ item }}"
        value: 1
        sysctl_set: yes
        state: present
        reload: yes
      loop:
        - 'net.bridge.bridge-nf-call-iptables'
        - 'net.bridge.bridge-nf-call-ip6tables'
    - name: Salli IP-reititys   # Allow IP routing
      sysctl:
        name: net.ipv4.ip_forward
        value: 1
        state: present
        reload: yes
        sysctl_set: yes
    - name: Lisaa docker-ce -repositorio   # Add the docker-ce repository
      tags:
        - repos
      yum_repository:
        name: docker-ce
        description: docker-ce
        baseurl: https://download.docker.com/linux/centos/7/x86_64/stable/
        enabled: true
        gpgcheck: true
        repo_gpgcheck: true
        gpgkey:
          - https://download.docker.com/linux/centos/gpg
        state: present
    - name: Lisaa kubernetes -repositorio   # Add the kubernetes repository
      tags:
        - repos
      yum_repository:
        name: kubernetes
        description: kubernetes
        baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
        enabled: true
        gpgcheck: true
        repo_gpgcheck: true
        gpgkey:
          - https://packages.cloud.google.com/yum/doc/yum-key.gpg
          - https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
        state: present
    - name: Asenna docker-ce -paketti   # Install the docker-ce package
      tags:
        - packages
      yum:
        name: docker-ce
        state: present
    - name: Asenna NTP -paketti   # Install the NTP package
      tags:
        - packages
      yum:
        name: ntp
        state: present
    - name: Asenna kubernetes -paketit   # Install the kubernetes packages
      tags:
        - packages
      yum:
        name: "{{ item }}"
        state: present
      loop:
        - kubelet
        - kubeadm
        - kubectl
    - name: Kaynnista palvelut   # Start the services
      tags:
        - services
      service: name={{ item }} state=started enabled=yes
      loop:
        - docker
        - ntpd
        - kubelet

- name: Alusta kubernetes masterit   # Initialize the kubernetes masters
  become: true
  become_method: sudo
  hosts: k8s-masters
  tags:
    - cluster
  tasks:
    - name: kubeadm reset
      shell: "kubeadm reset -f"
    - name: kubeadm init
      shell: "kubeadm init --token-ttl=0 --apiserver-advertise-address=10.0.0.101 --pod-network-cidr=20.0.0.0/8" # TODO
      register: kubeadm_out
    - set_fact:
        kubeadm_join: "{{ kubeadm_out.stdout_lines[-1] }}"
      when: kubeadm_out.stdout.find("kubeadm join") != -1
    - debug:
        var: kubeadm_join
    - name: Aseta ymparistomuuttujat   # Set environment variables
      shell: >
        cp /etc/kubernetes/admin.conf /home/vagrant/ &&
        chown vagrant:vagrant /home/vagrant/admin.conf &&
        export KUBECONFIG=/home/vagrant/admin.conf &&
        echo export KUBECONFIG=$KUBECONFIG >> /home/vagrant/.bashrc

- name: Konfiguroi CNI-verkko   # Configure the CNI network
  become: true
  become_method: sudo
  hosts: k8s-masters
  tags:
    - cluster-network
  tasks:
    - sysctl: name=net.bridge.bridge-nf-call-iptables value=1 state=present reload=yes sysctl_set=yes
    - sysctl: name=net.bridge.bridge-nf-call-ip6tables value=1 state=present reload=yes sysctl_set=yes
    - name: Asenna Flannel-plugin   # Install the Flannel plugin
      shell: >
        export KUBECONFIG=/home/vagrant/admin.conf ;
        kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    - shell: sleep 10

- name: Alusta kubernetes workerit   # Initialize the kubernetes workers
  become: true
  become_method: sudo
  hosts: k8s-workers
  tags:
    - cluster
  tasks:
    - name: kubeadm reset
      shell: "kubeadm reset -f"
    - name: kubeadm join
      tags:
        - cluster
      shell: "{{ hostvars['k8s-n1'].kubeadm_join }}" # TODO
Here is the full Ansible log:
ansible-controller: Running ansible-playbook...
cd /vagrant && PYTHONUNBUFFERED=1 ANSIBLE_NOCOLOR=true ANSIBLE_CONFIG='ansible/ansible.cfg' ansible-playbook --limit="all" --inventory-file=ansible/hosts -v ansible/site.yml
Using /vagrant/ansible/ansible.cfg as config file
/vagrant/ansible/hosts did not meet host_list requirements, check plugin documentation if this is unexpected
/vagrant/ansible/hosts did not meet script requirements, check plugin documentation if this is unexpected
PLAY [Asenna docker ja kubernetes] *********************************************
TASK [Gathering Facts] *********************************************************
ok: [k8s-n1]
ok: [k8s-n3]
ok: [k8s-n2]
TASK [common : Testaa] *********************************************************
changed: [k8s-n3] => {"changed": true, "checksum": "6920e1826e439962050ec0ab4221719b3a045f04", "dest": "/template.test", "gid": 0, "group": "root", "md5sum": "a4f61c365318c3e23d466914fbd02687", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_runtime_t:s0", "size": 14, "src": "/home/vagrant/.ansible/tmp/ansible-tmp-1546760756.54-124542112178019/source", "state": "file", "uid": 0}
changed: [k8s-n2] => {"changed": true, "checksum": "6920e1826e439962050ec0ab4221719b3a045f04", "dest": "/template.test", "gid": 0, "group": "root", "md5sum": "a4f61c365318c3e23d466914fbd02687", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_runtime_t:s0", "size": 14, "src": "/home/vagrant/.ansible/tmp/ansible-tmp-1546760756.51-240329169302936/source", "state": "file", "uid": 0}
changed: [k8s-n1] => {"changed": true, "checksum": "6920e1826e439962050ec0ab4221719b3a045f04", "dest": "/template.test", "gid": 0, "group": "root", "md5sum": "a4f61c365318c3e23d466914fbd02687", "mode": "0644", "owner": "root", "secontext": "system_u:object_r:etc_runtime_t:s0", "size": 14, "src": "/home/vagrant/.ansible/tmp/ansible-tmp-1546760756.57-121244542660821/source", "state": "file", "uid": 0}
TASK [common : Asenna telnet] **************************************************
changed: [k8s-n2] => {"changed": true, "msg": "", "rc": 0, "results": ["Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: ftp.funet.fi\n * extras: ftp.funet.fi\n * updates: ftp.funet.fi\nResolving Dependencies\n--> Running transaction check\n---> Package telnet.x86_64 1:0.17-64.el7 will be installed\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n telnet x86_64 1:0.17-64.el7 base 64 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package\n\nTotal download size: 64 k\nInstalled size: 113 k\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : 1:telnet-0.17-64.el7.x86_64 1/1 \n Verifying : 1:telnet-0.17-64.el7.x86_64 1/1 \n\nInstalled:\n telnet.x86_64 1:0.17-64.el7 \n\nComplete!\n"]}
changed: [k8s-n1] => {"changed": true, "msg": "", "rc": 0, "results": ["Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: centos.mirror.gnu.fi\n * extras: centos.mirror.gnu.fi\n * updates: centos.mirror.gnu.fi\nResolving Dependencies\n--> Running transaction check\n---> Package telnet.x86_64 1:0.17-64.el7 will be installed\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n telnet x86_64 1:0.17-64.el7 base 64 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package\n\nTotal download size: 64 k\nInstalled size: 113 k\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : 1:telnet-0.17-64.el7.x86_64 1/1 \n Verifying : 1:telnet-0.17-64.el7.x86_64 1/1 \n\nInstalled:\n telnet.x86_64 1:0.17-64.el7 \n\nComplete!\n"]}
changed: [k8s-n3] => {"changed": true, "msg": "", "rc": 0, "results": ["Loaded plugins: fastestmirror\nLoading mirror speeds from cached hostfile\n * base: ftp.funet.fi\n * extras: ftp.funet.fi\n * updates: ftp.funet.fi\nResolving Dependencies\n--> Running transaction check\n---> Package telnet.x86_64 1:0.17-64.el7 will be installed\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n telnet x86_64 1:0.17-64.el7 base 64 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package\n\nTotal download size: 64 k\nInstalled size: 113 k\nDownloading packages:\nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : 1:telnet-0.17-64.el7.x86_64 1/1 \n Verifying : 1:telnet-0.17-64.el7.x86_64 1/1 \n\nInstalled:\n telnet.x86_64 1:0.17-64.el7 \n\nComplete!\n"]}
TASK [Poista swapfile] *********************************************************
ok: [k8s-n1] => {"changed": false, "dump": "0", "fstab": "/etc/fstab", "fstype": "swap", "name": "swap", "opts": "defaults", "passno": "0"}
ok: [k8s-n2] => {"changed": false, "dump": "0", "fstab": "/etc/fstab", "fstype": "swap", "name": "swap", "opts": "defaults", "passno": "0"}
ok: [k8s-n3] => {"changed": false, "dump": "0", "fstab": "/etc/fstab", "fstype": "swap", "name": "swap", "opts": "defaults", "passno": "0"}
TASK [Disabloi swap-muisti] ****************************************************
changed: [k8s-n3] => {"changed": true, "cmd": ["swapoff", "-a"], "delta": "0:00:00.009581", "end": "2019-01-06 07:46:08.414842", "rc": 0, "start": "2019-01-06 07:46:08.405261", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n1] => {"changed": true, "cmd": ["swapoff", "-a"], "delta": "0:00:00.119638", "end": "2019-01-06 07:46:08.484265", "rc": 0, "start": "2019-01-06 07:46:08.364627", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n2] => {"changed": true, "cmd": ["swapoff", "-a"], "delta": "0:00:00.133924", "end": "2019-01-06 07:46:08.519646", "rc": 0, "start": "2019-01-06 07:46:08.385722", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
TASK [Konfiguroi verkkoasetukset] **********************************************
changed: [k8s-n2] => (item=ip_vs) => {"changed": true, "cmd": ["modprobe", "ip_vs"], "delta": "0:00:00.036881", "end": "2019-01-06 07:46:10.606797", "item": "ip_vs", "rc": 0, "start": "2019-01-06 07:46:10.569916", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n3] => (item=ip_vs) => {"changed": true, "cmd": ["modprobe", "ip_vs"], "delta": "0:00:00.036141", "end": "2019-01-06 07:46:10.815043", "item": "ip_vs", "rc": 0, "start": "2019-01-06 07:46:10.778902", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n1] => (item=ip_vs) => {"changed": true, "cmd": ["modprobe", "ip_vs"], "delta": "0:00:00.035888", "end": "2019-01-06 07:46:10.768267", "item": "ip_vs", "rc": 0, "start": "2019-01-06 07:46:10.732379", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n2] => (item=ip_vs_rr) => {"changed": true, "cmd": ["modprobe", "ip_vs_rr"], "delta": "0:00:00.005942", "end": "2019-01-06 07:46:12.763004", "item": "ip_vs_rr", "rc": 0, "start": "2019-01-06 07:46:12.757062", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n1] => (item=ip_vs_rr) => {"changed": true, "cmd": ["modprobe", "ip_vs_rr"], "delta": "0:00:00.006084", "end": "2019-01-06 07:46:12.896763", "item": "ip_vs_rr", "rc": 0, "start": "2019-01-06 07:46:12.890679", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n3] => (item=ip_vs_rr) => {"changed": true, "cmd": ["modprobe", "ip_vs_rr"], "delta": "0:00:00.006325", "end": "2019-01-06 07:46:12.899750", "item": "ip_vs_rr", "rc": 0, "start": "2019-01-06 07:46:12.893425", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n2] => (item=ip_vs_wrr) => {"changed": true, "cmd": ["modprobe", "ip_vs_wrr"], "delta": "0:00:00.006195", "end": "2019-01-06 07:46:14.795507", "item": "ip_vs_wrr", "rc": 0, "start": "2019-01-06 07:46:14.789312", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n1] => (item=ip_vs_wrr) => {"changed": true, "cmd": ["modprobe", "ip_vs_wrr"], "delta": "0:00:00.007328", "end": "2019-01-06 07:46:14.819072", "item": "ip_vs_wrr", "rc": 0, "start": "2019-01-06 07:46:14.811744", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n3] => (item=ip_vs_wrr) => {"changed": true, "cmd": ["modprobe", "ip_vs_wrr"], "delta": "0:00:00.007251", "end": "2019-01-06 07:46:14.863192", "item": "ip_vs_wrr", "rc": 0, "start": "2019-01-06 07:46:14.855941", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n3] => (item=ip_vs_sh) => {"changed": true, "cmd": ["modprobe", "ip_vs_sh"], "delta": "0:00:00.007590", "end": "2019-01-06 07:46:16.815226", "item": "ip_vs_sh", "rc": 0, "start": "2019-01-06 07:46:16.807636", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n1] => (item=ip_vs_sh) => {"changed": true, "cmd": ["modprobe", "ip_vs_sh"], "delta": "0:00:00.006380", "end": "2019-01-06 07:46:16.941470", "item": "ip_vs_sh", "rc": 0, "start": "2019-01-06 07:46:16.935090", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n2] => (item=ip_vs_sh) => {"changed": true, "cmd": ["modprobe", "ip_vs_sh"], "delta": "0:00:00.006619", "end": "2019-01-06 07:46:16.808432", "item": "ip_vs_sh", "rc": 0, "start": "2019-01-06 07:46:16.801813", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n3] => (item=nf_conntrack_ipv4) => {"changed": true, "cmd": ["modprobe", "nf_conntrack_ipv4"], "delta": "0:00:00.007618", "end": "2019-01-06 07:46:18.825593", "item": "nf_conntrack_ipv4", "rc": 0, "start": "2019-01-06 07:46:18.817975", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n1] => (item=nf_conntrack_ipv4) => {"changed": true, "cmd": ["modprobe", "nf_conntrack_ipv4"], "delta": "0:00:00.008181", "end": "2019-01-06 07:46:18.910050", "item": "nf_conntrack_ipv4", "rc": 0, "start": "2019-01-06 07:46:18.901869", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [k8s-n2] => (item=nf_conntrack_ipv4) => {"changed": true, "cmd": ["modprobe", "nf_conntrack_ipv4"], "delta": "0:00:00.007427", "end": "2019-01-06 07:46:18.962850", "item": "nf_conntrack_ipv4", "rc": 0, "start": "2019-01-06 07:46:18.955423", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
TASK [Modprobe] ****************************************************************
changed: [k8s-n2] => (item=ip_vs) => {"backup": "", "changed": true, "item": "ip_vs", "msg": "line added"}
changed: [k8s-n1] => (item=ip_vs) => {"backup": "", "changed": true, "item": "ip_vs", "msg": "line added"}
changed: [k8s-n3] => (item=ip_vs) => {"backup": "", "changed": true, "item": "ip_vs", "msg": "line added"}
changed: [k8s-n2] => (item=ip_vs_rr) => {"backup": "", "changed": true, "item": "ip_vs_rr", "msg": "line added"}
changed: [k8s-n1] => (item=ip_vs_rr) => {"backup": "", "changed": true, "item": "ip_vs_rr", "msg": "line added"}
changed: [k8s-n3] => (item=ip_vs_rr) => {"backup": "", "changed": true, "item": "ip_vs_rr", "msg": "line added"}
changed: [k8s-n2] => (item=ip_vs_wrr) => {"backup": "", "changed": true, "item": "ip_vs_wrr", "msg": "line added"}
changed: [k8s-n1] => (item=ip_vs_wrr) => {"backup": "", "changed": true, "item": "ip_vs_wrr", "msg": "line added"}
changed: [k8s-n3] => (item=ip_vs_wrr) => {"backup": "", "changed": true, "item": "ip_vs_wrr", "msg": "line added"}
changed: [k8s-n2] => (item=ip_vs_sh) => {"backup": "", "changed": true, "item": "ip_vs_sh", "msg": "line added"}
changed: [k8s-n1] => (item=ip_vs_sh) => {"backup": "", "changed": true, "item": "ip_vs_sh", "msg": "line added"}
changed: [k8s-n3] => (item=ip_vs_sh) => {"backup": "", "changed": true, "item": "ip_vs_sh", "msg": "line added"}
changed: [k8s-n2] => (item=nf_conntrack_ipv4) => {"backup": "", "changed": true, "item": "nf_conntrack_ipv4", "msg": "line added"}
changed: [k8s-n1] => (item=nf_conntrack_ipv4) => {"backup": "", "changed": true, "item": "nf_conntrack_ipv4", "msg": "line added"}
changed: [k8s-n3] => (item=nf_conntrack_ipv4) => {"backup": "", "changed": true, "item": "nf_conntrack_ipv4", "msg": "line added"}
TASK [Iptables] ****************************************************************
failed: [k8s-n3] (item=net.bridge.bridge-nf-call-iptables) => {"changed": false, "item": "net.bridge.bridge-nf-call-iptables", "msg": "Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\n"}
failed: [k8s-n1] (item=net.bridge.bridge-nf-call-iptables) => {"changed": false, "item": "net.bridge.bridge-nf-call-iptables", "msg": "Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\n"}
failed: [k8s-n2] (item=net.bridge.bridge-nf-call-iptables) => {"changed": false, "item": "net.bridge.bridge-nf-call-iptables", "msg": "Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\n"}
failed: [k8s-n3] (item=net.bridge.bridge-nf-call-ip6tables) => {"changed": false, "item": "net.bridge.bridge-nf-call-ip6tables", "msg": "Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\nsysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory\n"}
failed: [k8s-n2] (item=net.bridge.bridge-nf-call-ip6tables) => {"changed": false, "item": "net.bridge.bridge-nf-call-ip6tables", "msg": "Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\nsysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory\n"}
failed: [k8s-n1] (item=net.bridge.bridge-nf-call-ip6tables) => {"changed": false, "item": "net.bridge.bridge-nf-call-ip6tables", "msg": "Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\nsysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory\n"}
to retry, use: --limit @/vagrant/ansible/site.retry
PLAY RECAP *********************************************************************
k8s-n1 : ok=7 changed=5 unreachable=0 failed=1
k8s-n2 : ok=7 changed=5 unreachable=0 failed=1
k8s-n3 : ok=7 changed=5 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

In the playbook, add the following task to load the br_netfilter module, which provides the /proc/sys/net/bridge/bridge-nf-call-* entries that the sysctl task is trying to set:
- name: Ensure br_netfilter is enabled.
  modprobe:
    name: br_netfilter
    state: present

Loading the br_netfilter kernel module fixed the problem. I simply appended it to the ip_modules list in the playbook's vars declaration. I'm using CentOS 7.
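For reference, a minimal sketch of that amended vars block (the rest of the playbook unchanged):

vars:
  ip_modules:
    - ip_vs
    - ip_vs_rr
    - ip_vs_wrr
    - ip_vs_sh
    - nf_conntrack_ipv4
    - br_netfilter   # loaded by the existing modprobe loop and persisted to /etc/modules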

Related

CentOS 8 podman exiting all containers (139)

Whatever image I try to run, the behavior is always the same: "Exited (139)".
OS: CentOS 8 with Podman, running inside an Azure VM. The CentOS image is the one provided by Azure when creating a VM.
VM: Azure B2S Gen 2 | 2 vCPU(s) | 4 GiB RAM | 8 GiB SSD
Below is the exact extract from the terminal:
pull
$ podman pull fedora
Trying to pull registry.access.redhat.com/fedora...
name unknown: Repo not found
Trying to pull registry.redhat.io/fedora...
unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Trying to pull docker.io/library/fedora...
Getting image source signatures
Copying blob ae7b613df528 done
Copying config b3048463dc done
Writing manifest to image destination
Storing signatures
b3048463dcefbe4920ef2ae1af43171c9695e2077f315b2bc12ed0f6f67c86c7
run
$ podman run --rm fedora /bin/echo "Hello Geeks! Welcome to Podman"
ps
$ podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
feb43e01e777 docker.io/library/ubuntu:latest bash 3 minutes ago Exited (139) 3 minutes ago magical_carson
inspect
$ podman inspect feb43e01e777
[
{
"Id": "feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac",
"Created": "2020-12-10T11:35:16.863809294Z",
"Path": "bash",
"Args": [
"bash"
],
"State": {
"OciVersion": "1.0.2-dev",
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 139,
"Error": "",
"StartedAt": "2020-12-10T11:35:17.280743295Z",
"FinishedAt": "2020-12-10T11:35:17.280874897Z",
"Healthcheck": {
"Status": "",
"FailingStreak": 0,
"Log": null
}
},
"Image": "f643c72bc25212974c16f3348b3a898b1ec1eb13ec1539e10a103e6e217eb2f1",
"ImageName": "docker.io/library/ubuntu:latest",
"Rootfs": "",
"Pod": "",
"ResolvConfPath": "/run/user/1000/containers/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/resolv.conf",
"HostnamePath": "/run/user/1000/containers/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/hostname",
"HostsPath": "/run/user/1000/containers/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/hosts",
"StaticDir": "/home/brais/.local/share/containers/storage/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata",
"OCIConfigPath": "/home/brais/.local/share/containers/storage/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/config.json",
"OCIRuntime": "runc",
"LogPath": "/home/brais/.local/share/containers/storage/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/ctr.log",
"LogTag": "",
"ConmonPidFile": "/run/user/1000/containers/overlay-containers/feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac/userdata/conmon.pid",
"Name": "magical_carson",
"RestartCount": 0,
"Driver": "overlay",
"MountLabel": "system_u:object_r:container_file_t:s0:c375,c701",
"ProcessLabel": "system_u:system_r:container_t:s0:c375,c701",
"AppArmorProfile": "",
"EffectiveCaps": [
"CAP_AUDIT_WRITE",
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_MKNOD",
"CAP_NET_BIND_SERVICE",
"CAP_NET_RAW",
"CAP_SETFCAP",
"CAP_SETGID",
"CAP_SETPCAP",
"CAP_SETUID",
"CAP_SYS_CHROOT"
],
"BoundingCaps": [
"CAP_AUDIT_WRITE",
"CAP_CHOWN",
"CAP_DAC_OVERRIDE",
"CAP_FOWNER",
"CAP_FSETID",
"CAP_KILL",
"CAP_MKNOD",
"CAP_NET_BIND_SERVICE",
"CAP_NET_RAW",
"CAP_SETFCAP",
"CAP_SETGID",
"CAP_SETPCAP",
"CAP_SETUID",
"CAP_SYS_CHROOT"
],
"ExecIDs": [],
"GraphDriver": {
"Name": "overlay",
"Data": {
"LowerDir": "/home/brais/.local/share/containers/storage/overlay/6581dd55e4fe0935a32a688d74513db86632efb162fd41431e7d69318802dfae/diff:/home/brais/.local/share/containers/storage/overlay/1bd27dc7c1c2e7a36c599becda69d0cd905f4f1a122f2b7a95c81a78abc452ec/diff:/home/brais/.local/share/containers/storage/overlay/bacd3af13903e13a43fe87b6944acd1ff21024132aad6e74b4452d984fb1a99a/diff",
"UpperDir": "/home/brais/.local/share/containers/storage/overlay/ccc5801aaacb05d0ed1e64cee2e38f7b4dd8a29890e6fdf780887d296a1c9696/diff",
"WorkDir": "/home/brais/.local/share/containers/storage/overlay/ccc5801aaacb05d0ed1e64cee2e38f7b4dd8a29890e6fdf780887d296a1c9696/work"
}
},
"Mounts": [],
"Dependencies": [],
"NetworkSettings": {
"EndpointID": "",
"Gateway": "",
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "",
"Bridge": "",
"SandboxID": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": ""
},
"ExitCommand": [
"/usr/bin/podman",
"--root",
"/home/brais/.local/share/containers/storage",
"--runroot",
"/run/user/1000/containers",
"--log-level",
"error",
"--cgroup-manager",
"cgroupfs",
"--tmpdir",
"/run/user/1000/libpod/tmp",
"--runtime",
"runc",
"--storage-driver",
"overlay",
"--storage-opt",
"overlay.mount_program=/usr/bin/fuse-overlayfs",
"--events-backend",
"file",
"container",
"cleanup",
"feb43e01e7771ca0a5a1b4cdf5a7b2587341493f1ecd7b2723d1ad5a45076aac"
],
"Namespace": "",
"IsInfra": false,
"Config": {
"Hostname": "feb43e01e777",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": true,
"OpenStdin": true,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"TERM=xterm",
"container=podman",
"HOSTNAME=feb43e01e777",
"HOME=/root"
],
"Cmd": [
"bash"
],
"Image": "docker.io/library/ubuntu:latest",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": "",
"OnBuild": null,
"Labels": null,
"Annotations": {
"io.container.manager": "libpod",
"io.kubernetes.cri-o.Created": "2020-12-10T11:35:16.863809294Z",
"io.kubernetes.cri-o.TTY": "true",
"io.podman.annotations.autoremove": "FALSE",
"io.podman.annotations.init": "FALSE",
"io.podman.annotations.privileged": "FALSE",
"io.podman.annotations.publish-all": "FALSE",
"org.opencontainers.image.stopSignal": "15"
},
"StopSignal": 15,
"CreateCommand": [
"podman",
"run",
"-it",
"ubuntu",
"bash"
]
},
"HostConfig": {
"Binds": [],
"CgroupMode": "host",
"ContainerIDFile": "",
"LogConfig": {
"Type": "k8s-file",
"Config": null
},
"NetworkMode": "slirp4netns",
"PortBindings": {},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": [],
"CapDrop": [],
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": [],
"GroupAdd": [],
"IpcMode": "private",
"Cgroup": "",
"Cgroups": "default",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "private",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [],
"Tmpfs": {},
"UTSMode": "private",
"UsernsMode": "",
"ShmSize": 65536000,
"Runtime": "oci",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": 0,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
}
}
]
podman info
$ podman info
host:
arch: amd64
buildahVersion: 1.15.1
cgroupVersion: v1
conmon:
package: conmon-2.0.20-2.module_el8.3.0+475+c50ce30b.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.0.20, commit: 1019ecdeda3936be22162bb1cca308192145de53'
cpus: 2
distribution:
distribution: '"centos"'
version: "8"
eventLogger: file
hostname: vm-test1
idMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
kernel: 4.18.0-193.28.1.el8_2.x86_64
linkmode: dynamic
memFree: 247398400
memTotal: 4129382400
ociRuntime:
name: runc
package: runc-1.0.0-68.rc92.module_el8.3.0+475+c50ce30b.x86_64
path: /usr/bin/runc
version: 'runc version spec: 1.0.2-dev'
os: linux
remoteSocket:
path: /run/user/1000/podman/podman.sock
rootless: true
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.1.4-2.module_el8.3.0+475+c50ce30b.x86_64
version: |-
slirp4netns version 1.1.4
commit: b66ffa8e262507e37fca689822d23430f3357fe8
libslirp: 4.3.1
SLIRP_CONFIG_VERSION_MAX: 3
swapFree: 0
swapTotal: 0
uptime: 17h 48m 18.07s (Approximately 0.71 days)
registries:
search:
- registry.access.redhat.com
- registry.redhat.io
- docker.io
store:
configFile: /home/brais/.config/containers/storage.conf
containerStore:
number: 1
paused: 0
running: 0
stopped: 1
graphDriverName: overlay
graphOptions:
overlay.mount_program:
Executable: /usr/bin/fuse-overlayfs
Package: fuse-overlayfs-1.1.2-3.module_el8.3.0+507+aa0970ae.x86_64
Version: |-
fuse-overlayfs: version 1.1.0
FUSE library version 3.2.1
using FUSE kernel interface version 7.26
graphRoot: /home/brais/.local/share/containers/storage
graphStatus:
Backing Filesystem: xfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "false"
imageStore:
number: 8
runRoot: /run/user/1000/containers
volumePath: /home/brais/.local/share/containers/storage/volumes
version:
APIVersion: 1
Built: 1600970293
BuiltTime: Thu Sep 24 17:58:13 2020
GitCommit: ""
GoVersion: go1.14.7
OsArch: linux/amd64
Version: 2.0.5

Azure DevOps - minimal code coverage pull request

I have an Angular 9 project and I am trying to get minimal code coverage working on a pull request in Azure DevOps, following the documentation. However, the minimal code coverage check isn't working; I'm probably missing some step.
Steps to reproduce:
Create a new Angular 9 project: "ng new DefaultWebsite"
Create a build pipeline and edit the Karma and Protractor config as described in Microsoft's "Build, test, and deploy JavaScript and Node.js apps" documentation
Add an "azurepipelines-coverage.yml" in the root of the project to enable the code coverage check on a pull request, as described in Microsoft's "Code coverage for pull requests" documentation
Disable some tests in the app.spec.ts file so the code coverage is no longer 100% (it is now 77%). Change the minimal code coverage in the yml file to 95% so that the pull request cannot be completed; in theory this should give a "Coverage status check failed" error according to the Microsoft documentation.
However, when the pull request is started there is a code coverage check below the 'Status' part. When the build (with unit and e2e tests) is done, there is no code coverage error, which I expect to see below the 'Status' part.
(Screenshots: pull request with code coverage check; pull request build completed)
When I look at the build there are test results and code coverage results.
(Screenshots: build test result; build code coverage result)
When I look at the code coverage result I see a line coverage of 75%, which should be a minimum of 90% according to the yml file.
Karma config file
// Karma configuration file, see link for more information
// https://karma-runner.github.io/1.0/config/configuration-file.html
module.exports = function (config) {
  const process = require('process');
  process.env.CHROME_BIN = require('puppeteer').executablePath();

  config.set({
    basePath: '',
    frameworks: ['jasmine', '@angular-devkit/build-angular'],
    plugins: [
      require('karma-jasmine'),
      require('karma-chrome-launcher'),
      require('karma-jasmine-html-reporter'),
      require('karma-coverage-istanbul-reporter'),
      require('@angular-devkit/build-angular/plugins/karma'),
      require('karma-junit-reporter')
    ],
    client: {
      clearContext: false // leave Jasmine Spec Runner output visible in browser
    },
    coverageIstanbulReporter: {
      dir: require('path').join(__dirname, './coverage'),
      reports: ['html', 'lcovonly', 'text-summary', 'cobertura'],
      fixWebpackSourcePaths: true
    },
    coverageReporter: {
      type: 'html',
      dir: 'coverage/'
    },
    junitReporter: {
      outputDir: './coverage', // results will be saved as $outputDir/$browserName.xml
      outputFile: 'junit.xml', // if included, results will be saved as $outputDir/$browserName/$outputFile
      suite: '', // suite will become the package name attribute in xml testsuite element
      useBrowserName: true, // add browser name to report and classes names
      nameFormatter: undefined, // function (browser, result) to customize the name attribute in xml testcase element
      classNameFormatter: undefined, // function (browser, result) to customize the classname attribute in xml testcase element
      properties: {}, // key value pair of properties to add to the <properties> section of the report
      xmlVersion: null // use '1' if reporting to be per SonarQube 6.2 XML format
    },
    reporters: ['progress', 'kjhtml', 'junit'],
    port: 9876,
    colors: true,
    logLevel: config.LOG_INFO,
    autoWatch: true,
    browsers: ['ChromeHeadless'],
    singleRun: false,
    restartOnFileChange: true
  });
};
Protractor config file:
// Protractor configuration file, see link for more information
// https://github.com/angular/protractor/blob/master/lib/config.ts
const { SpecReporter } = require('jasmine-spec-reporter');
const { JUnitXmlReporter } = require('jasmine-reporters');

process.env.CHROME_BIN = process.env.CHROME_BIN || require("puppeteer").executablePath();

exports.config = {
  allScriptsTimeout: 11000,
  specs: [
    './src/**/*.e2e-spec.ts'
  ],
  capabilities: {
    'browserName': 'chrome',
    chromeOptions: {
      args: ["--headless", "--disable-gpu", "--window-size=1200,900"],
      binary: process.env.CHROME_BIN
    }
  },
  directConnect: true,
  baseUrl: 'http://localhost:4200/',
  framework: 'jasmine',
  jasmineNodeOpts: {
    showColors: true,
    defaultTimeoutInterval: 30000,
    print: function () { }
  },
  onPrepare() {
    require('ts-node').register({
      project: require('path').join(__dirname, './tsconfig.json')
    });
    jasmine.getEnv().addReporter(new SpecReporter({ spec: { displayStacktrace: true } }));
    var junitReporter = new JUnitXmlReporter({
      savePath: require('path').join(__dirname, './junit'),
      consolidateAll: true
    });
    jasmine.getEnv().addReporter(junitReporter);
  }
};
azurepipelines-coverage.yml
coverage:
  status:          # Code coverage status will be posted to pull requests based on targets defined below.
    diff:          # diff coverage is code coverage only for the lines changed in a pull request.
      target: 95%  # set this to a desired %. Default is 70%.
Azure Build pipeline steps:
"steps": [
{
"environment": {},
"enabled": true,
"continueOnError": false,
"alwaysRun": false,
"displayName": "npm install",
"timeoutInMinutes": 0,
"condition": "succeeded()",
"task": {
"id": "fe47e961-9fa8-4106-8639-368c022d43ad",
"versionSpec": "1.*",
"definitionType": "task"
},
"inputs": {
"command": "install",
"workingDir": "Project\\Frontend\\DefaultWebsite",
"verbose": "false",
"customCommand": "",
"customRegistry": "useNpmrc",
"customFeed": "",
"customEndpoint": "",
"publishRegistry": "useExternalRegistry",
"publishFeed": "",
"publishPackageMetadata": "true",
"publishEndpoint": ""
}
},
{
"environment": {},
"enabled": true,
"continueOnError": false,
"alwaysRun": false,
"displayName": "npm custom - test ",
"timeoutInMinutes": 0,
"condition": "succeeded()",
"task": {
"id": "fe47e961-9fa8-4106-8639-368c022d43ad",
"versionSpec": "1.*",
"definitionType": "task"
},
"inputs": {
"command": "custom",
"workingDir": "Project\\Frontend\\DefaultWebsite",
"verbose": "false",
"customCommand": "run test",
"customRegistry": "useNpmrc",
"customFeed": "",
"customEndpoint": "",
"publishRegistry": "useExternalRegistry",
"publishFeed": "",
"publishPackageMetadata": "true",
"publishEndpoint": ""
}
},
{
"environment": {},
"enabled": true,
"continueOnError": false,
"alwaysRun": false,
"displayName": "Publish code coverage from Project\\Frontend\\DefaultWebsite\\coverage\\cobertura-coverage.xml",
"timeoutInMinutes": 0,
"condition": "succeeded()",
"task": {
"id": "2a7ebc54-c13e-490e-81a5-d7561ab7cd97",
"versionSpec": "1.*",
"definitionType": "task"
},
"inputs": {
"codeCoverageTool": "Cobertura",
"summaryFileLocation": "Project\\Frontend\\DefaultWebsite\\coverage\\cobertura-coverage.xml",
"pathToSources": "",
"reportDirectory": "",
"additionalCodeCoverageFiles": "",
"failIfCoverageEmpty": "false"
}
},
{
"environment": {},
"enabled": true,
"continueOnError": false,
"alwaysRun": false,
"displayName": "Publish Test Results Project\\Frontend\\DefaultWebsite\\**\\junit.xml copy",
"timeoutInMinutes": 0,
"condition": "succeeded()",
"task": {
"id": "0b0f01ed-7dde-43ff-9cbb-e48954daf9b1",
"versionSpec": "2.*",
"definitionType": "task"
},
"inputs": {
"testRunner": "JUnit",
"testResultsFiles": "Project\\Frontend\\DefaultWebsite\\**\\junit.xml",
"searchFolder": "$(System.DefaultWorkingDirectory)",
"mergeTestResults": "false",
"failTaskOnFailedTests": "false",
"testRunTitle": "",
"platform": "",
"configuration": "",
"publishRunAttachments": "true"
}
},
{
"environment": {},
"enabled": true,
"continueOnError": false,
"alwaysRun": false,
"displayName": "npm custom - e2e",
"timeoutInMinutes": 0,
"condition": "succeeded()",
"task": {
"id": "fe47e961-9fa8-4106-8639-368c022d43ad",
"versionSpec": "1.*",
"definitionType": "task"
},
"inputs": {
"command": "custom",
"workingDir": "Project\\Frontend\\DefaultWebsite",
"verbose": "false",
"customCommand": "run e2e",
"customRegistry": "useNpmrc",
"customFeed": "",
"customEndpoint": "",
"publishRegistry": "useExternalRegistry",
"publishFeed": "",
"publishPackageMetadata": "true",
"publishEndpoint": ""
}
},
{
"environment": {},
"enabled": true,
"continueOnError": false,
"alwaysRun": false,
"displayName": "Publish Test Results Project\\Frontend\\DefaultWebsite\\e2e\\**\\junitresults.xml",
"timeoutInMinutes": 0,
"condition": "succeeded()",
"task": {
"id": "0b0f01ed-7dde-43ff-9cbb-e48954daf9b1",
"versionSpec": "2.*",
"definitionType": "task"
},
"inputs": {
"testRunner": "JUnit",
"testResultsFiles": "Project\\Frontend\\DefaultWebsite\\e2e\\**\\junitresults.xml",
"searchFolder": "$(System.DefaultWorkingDirectory)",
"mergeTestResults": "false",
"failTaskOnFailedTests": "false",
"testRunTitle": "",
"platform": "",
"configuration": "",
"publishRunAttachments": "true"
}
},
{
"environment": {},
"enabled": true,
"continueOnError": false,
"alwaysRun": false,
"displayName": "npm custom - prodBuild",
"timeoutInMinutes": 0,
"condition": "succeeded()",
"task": {
"id": "fe47e961-9fa8-4106-8639-368c022d43ad",
"versionSpec": "1.*",
"definitionType": "task"
},
"inputs": {
"command": "custom",
"workingDir": "Project\\Frontend\\DefaultWebsite",
"verbose": "false",
"customCommand": "run prodBuild",
"customRegistry": "useNpmrc",
"customFeed": "",
"customEndpoint": "",
"publishRegistry": "useExternalRegistry",
"publishFeed": "",
"publishPackageMetadata": "true",
"publishEndpoint": ""
}
},
{
"environment": {},
"enabled": true,
"continueOnError": false,
"alwaysRun": false,
"displayName": "Publish Artifact: app",
"timeoutInMinutes": 0,
"condition": "succeeded()",
"task": {
"id": "2ff763a7-ce83-4e1f-bc89-0ae63477cebe",
"versionSpec": "1.*",
"definitionType": "task"
},
"inputs": {
"PathtoPublish": "Project\\Frontend\\DefaultWebsite\\dist",
"ArtifactName": "app",
"ArtifactType": "Container",
"TargetPath": "",
"Parallel": "false",
"ParallelCount": "8",
"FileCopyOptions": ""
}
}
],

Ansible fails with runuser: command not found

I am trying to provision a PostgreSQL server using the role galaxyproject.postgresql. With the Vagrant box generic/centos7, this role fails with the message:
TASK [galaxyproject.postgresql : Initialize database (RedHat >= 7)] ************
fatal: [postgresql]: FAILED! => {"changed": true, "cmd": ["/usr/pgsql-9.6/bin/postgresql96-setup", "initdb"], "delta": "0:00:00.181409", "end": "2019-10-16 01:45:59.495713", "msg": "non-zero return code", "rc": 1, "start": "2019-10-16 01:45:59.314304", "stderr": "", "stderr_lines": [], "stdout": "Initializing database ... failed, see /var/lib/pgsql/9.6/initdb.log", "stdout_lines": ["Initializing database ... failed, see /var/lib/pgsql/9.6/initdb.log"]}
The file /var/lib/pgsql/9.6/initdb.log has the following message
/usr/pgsql-9.6/bin/postgresql96-setup: line 143: runuser: command not found
On the target node, runuser is available:
[root@postgresql ~]# which runuser
/sbin/runuser
So the problem seems to be that /sbin is not on the PATH when Ansible runs on the target nodes.
How can I make the runuser command available to Ansible? I don't want to change the external role galaxyproject.postgresql, of course.
When I output PATH using an Ansible debug task, it shows that PATH does not include sbin.
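(Roughly, the output below was produced with a pair of tasks like this sketch; the register variable is named to match the output:)

- name: Capture PATH as seen by Ansible tasks
  shell: echo $PATH
  register: PATH

- debug:
    var: PATH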
TASK [galaxyproject.postgresql : debug] ****************************************
ok: [postgresql] => {
"PATH": {
"changed": true,
"cmd": "echo $PATH",
"delta": "0:00:00.010478",
"end": "2019-10-16 02:12:14.882341",
"failed": false,
"rc": 0,
"start": "2019-10-16 02:12:14.871863",
"stderr": "",
"stderr_lines": [],
"stdout": "/usr/local/bin:/usr/bin",
"stdout_lines": [
"/usr/local/bin:/usr/bin"
]
}
}
If you need to add something to PATH, you can try setting environment for this module. I haven't tried this with the postgres modules, but it should work:
- postgres_user:   # or other module name you are using
    user: foo      # and other normal arguments
  environment:
    PATH: '$PATH:/sbin'
See also: https://docs.ansible.com/ansible/latest/user_guide/playbooks_environment.html
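Since the failing task lives inside the external role, another option is a play-level environment block, which applies to every task in the play, including the role's tasks, without modifying the role itself. A minimal sketch (the host pattern and PATH value are illustrative):

- hosts: postgresql
  become: true
  environment:
    PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  roles:
    - galaxyproject.postgresql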

Why is a Jelastic environment not working when using postgres9 in a JPS?

I have created a JPS file using the documentation at https://docs.jelastic.com/application-manifest,
but there is no clear documentation on how to use PostgreSQL.
Jelastic JPS Node:
{
  "nodeType": "postgres9",
  "restart": false,
  "database": {
    "name": "xxxx",
    "user": "xxx",
    "dump": "xxx.sql"
  }
}
Error while configuring the environment:
"data": {
"result": 11005,
"source": "marketplace",
"error": "database query error: java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=10.101.3.225)(port=3306)(type=master) : Connection refused (Connection refused)"
}
I have provided the whole JPS file content here. The error occurs when importing the database; the other entries in the configs object work fine.
{
"jpsVersion": "0.1",
"jpsType": "install",
"application": {
"id": "xxx",
"name": "xxx",
"version": "0.0.1",
"logo": "http://example.com/img/logo.png",
"type": "php",
"homepage": "http://example.com/",
"description": {
"en": "xxx"
},
"env": {
"topology": {
"ha": false,
"engine": "php7.2",
"ssl": false,
"nodes": [
{
"extip": false,
"count": 1,
"cloudlets": 16,
"nodeType": "nginxphp"
},
{
"extip": false,
"count": 1,
"cloudlets": 16,
"nodeType": "postgres9"
}
]
},
"upload": [
{
               "nodeType": "nginxphp",
               "sourcePath": "https://example.com/xxx.conf",
               "destPath": "${SERVER_CONF_D}/xxx.conf"
}
],
"deployments": [
{
"archive": "https://example.com/xxx.zip",
"name": "xxx.zip",
"context": "ROOT"
}
],
"configs": [
{
"nodeType": "nginxphp",
"restart": true,
"path": "${SERVER_CONF_D}/xxx.conf",
"replacements": [
                    {
                       "pattern":"/usr/share/nginx/html",
                       "replacement":"${SERVER_WEBROOT}"
                    }
                    ]
},
{
"nodeType": "postgres9",
"restart": false,
"database": {
"name": "xxx",
"user": "xxx",
"dump": "https://example.com/xxx.sql"
}
}, {
"restart": false,
"nodeType": "nginxphp",
"path": "${SERVER_WEBROOT}/ROOT/server/php/config.inc.php",
"replacements": [{
"replacement": "${nodes.postgres9.address}",
"pattern": "localhost"
}, {
"replacement": "${nodes.postgres9.database.password}",
"pattern": "xxx"
}
]
}
]
},
"success": {
"text": "Installation completed. username: admin and password: xxx"
}
}
}
Since actions are disabled for Postgres so far (the database action is executed only for mysql5, mariadb, and mariadb10 containers), we've improved your manifest based on the recent updates. YAML was used because it's clearer to read and understand:
jpsVersion: 0.1
jpsType: install
name: xxx
version: 0.0.1
logo: http://example.com/img/logo.png
engine: php7.2
nodes:
  - cloudlets: 16
    nodeType: nginxphp
  - cloudlets: 16
    nodeType: postgres9
onInstall:
  - upload [nginxphp]:
      sourcePath: https://example.com/xxx.conf
      destPath: ${SERVER_CONF_D}/xxx.conf
  - deploy:
      archive: https://example.com/xxx.zip
      name: xxx.zip
      context: ROOT
  - replaceInFile [nginxphp]:
      path: ${SERVER_CONF_D}/xxx.conf
      replacements:
        - pattern: /usr/share/nginx/html
          replacement: ${SERVER_WEBROOT}
  - restartNodes [nginxphp]
  - replaceInFile [nginxphp]:
      path: ${SERVER_WEBROOT}/ROOT/server/php/config.inc.php
      replacements:
        - pattern: localhost
          replacement: ${nodes.postgres9.address}
        - pattern: xxx
          replacement: ${nodes.postgres9.password}
  - cmd [postgres9]: printf "PGPASSWORD=${nodes.postgres9.password};\nexport PGPASSWORD;\npsql postgres webadmin -c \"CREATE DATABASE Jelastic;\"\n" > /tmp/createDb
  - cmd [postgres9]: chmod +x /tmp/createDb && /tmp/createDb
success: Installation completed. username admin and password xxx
Please note that you can debug every action in the /console tab.

Error running postgresql96-setup initdb with Ansible

I am trying to automate the installation of a PostgreSQL database using Ansible.
However, the following task:
- name: Initialize Postgres
  command: /usr/pgsql-9.6/bin/postgresql96-setup initdb
  become: true
Results in this error:
fatal: [nexus-staging.chop.edu]: FAILED! => {
"changed": true,
"cmd": "/usr/pgsql-9.6/bin/postgresql96-setup initdb",
"delta": "0:00:00.043311",
"end": "2017-02-16 23:39:12.512727",
"failed": true,
"invocation": {
"module_args": {
"_raw_params": "/usr/pgsql-9.6/bin/postgresql96-setup initdb",
"_uses_shell": true,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"warn": true
},
"module_name": "command"
},
"rc": 1,
"start": "2017-02-16 23:39:12.469416",
"stderr": "",
"stdout": "Initializing database ... failed, see /var/lib/pgsql/9.6/initdb.log",
"stdout_lines": [
"Initializing database ... failed, see /var/lib/pgsql/9.6/initdb.log"
],
"warnings": []
}
The error in /var/lib/pgsql/9.6/initdb.log is:
/usr/pgsql-9.6/bin/postgresql96-setup: line 140: runuser: command not found
What is interesting is that if I run sudo /usr/pgsql-9.6/bin/postgresql96-setup initdb on the host, it runs successfully...
Any help would be appreciated.
Try with the PATH environment variable defined explicitly in the task:
- name: Initialize Postgres
  command: /usr/pgsql-9.6/bin/postgresql96-setup initdb
  environment:
    PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  become: true
Most likely the value of the path is set differently for interactive and non-interactive shell sessions.
Or locate the runuser executable and add the path before running the script:
command: PATH=/runuser/location:${PATH} /usr/pgsql-9.6/bin/postgresql96-setup initdb
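Note that the command module does not pass its arguments through a shell, so an inline PATH=... prefix is only interpreted when the shell module is used. A minimal sketch of that variant (the extra sbin paths are illustrative):

- name: Initialize Postgres
  shell: PATH=/sbin:/usr/sbin:$PATH /usr/pgsql-9.6/bin/postgresql96-setup initdb
  become: true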