Some tasks in Ansible aren't executed - deployment

I'm using Ansible for some deployment tasks.
I want to do the following:
Install virtualenv
Activate the installed virtual environment
Check that I'm in the virtual environment
For this purpose I have the following playbook:
---
- hosts: servers
  tasks:
  - name: update repository
    apt: update_cache=yes
    sudo: true
  tasks:
  - name: install git
    apt: name=git state=latest
    sudo: true
  tasks:
  - name: install pip
    apt: name=python-pip state=latest
    sudo: true
  tasks:
  - name: installing postgres
    sudo: true
    apt: name=postgresql state=latest
  tasks:
  - name: installing libpd-dev
    sudo: true
    apt: name=libpq-dev state=latest
  tasks:
  - name: installing psycopg
    sudo: true
    apt: name=python-psycopg2 state=latest
  tasks:
  - name: configuration of virtual env
    sudo: true
    pip: name=virtualenvwrapper state=latest
  tasks:
  - name: create virtualenv
    command: virtualenv venv
  tasks:
  - name: virtualenv activate
    shell: . ~/venv/bin/activate
  tasks:
  - name: "Guard code, so we are more certain we are in a virtualenv"
    shell: echo $VIRTUAL_ENV
    register: command_result
    failed_when: command_result.stdout == ""
The problem is that sometimes some tasks are not executed, even though they should be. For instance, in my case the task:
tasks:
- name: create virtualenv
  command: virtualenv venv
is not executed.
But if I comment out the last 2 tasks:
tasks:
- name: virtualenv activate
  shell: . ~/venv/bin/activate
tasks:
- name: "Guard code, so we are more certain we are in a virtualenv"
  shell: echo $VIRTUAL_ENV
  register: command_result
  failed_when: command_result.stdout == ""
then the previous one works...
I can't figure out what I'm doing wrong. Can somebody give me a hint?

Assuming hosts: servers covers the correct servers, you should only have one tasks entry. Here's an optimized and simplified playbook.
---
- hosts: servers
  sudo: yes
  tasks:
  - name: update repository daily
    apt: update_cache=yes cache_valid_time=86400
  - name: install development dependencies
    apt: name={{item}} state=latest
    with_items:
    - git
    - python-pip
    - postgresql
    - libpq-dev
    - python-psycopg2
  - name: configuration of virtual env
    pip: name=virtualenvwrapper state=present
  - name: create virtualenv
    command: virtualenv venv
  - name: virtualenv activate
    shell: . ~/venv/bin/activate
  - name: "Guard code, so we are more certain we are in a virtualenv"
    shell: echo $VIRTUAL_ENV
    register: command_result
    failed_when: command_result.stdout == ""
Note that I've cached the apt call and also changed state to present. You likely want to install specific versions rather than re-checking for the latest on every run of Ansible.
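For example, a minimal sketch of what pinning could look like in the same key=value style (the version numbers here are hypothetical placeholders, not recommendations):
  - name: install git at a pinned version
    # "1:2.7.4-0ubuntu1" is a made-up example for the apt module's name=pkg=version form.
    apt: name=git=1:2.7.4-0ubuntu1 state=present
  - name: configuration of virtual env at a pinned version
    # The pip module accepts an explicit version instead of state=latest; "4.8.4" is a placeholder.
    pip: name=virtualenvwrapper version=4.8.4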

Related

Add Kubernetes worker node with Ansible but it doesn't join

I'm trying to set up a Kubernetes cluster and I'm using Ansible.
These are my playbooks:
hosts:
[masters]
master ansible_host=157.90.96.140 ansible_user=root
[workers]
worker1 ansible_host=157.90.96.138 ansible_user=root
worker2 ansible_host=157.90.96.139 ansible_user=root
[all:vars]
ansible_user=ubuntu
ansible_python_interpreter=/usr/bin/python3
kubelet_cgroup_driver=cgroupfs
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
initial
become: yes
tasks:
- name: create the 'ubuntu' user
  user: name=ubuntu append=yes state=present createhome=yes shell=/bin/bash
- name: allow 'ubuntu' to have passwordless sudo
  lineinfile:
    dest: /etc/sudoers
    line: 'ubuntu ALL=(ALL) NOPASSWD: ALL'
    validate: 'visudo -cf %s'
- name: set up authorized keys for the ubuntu user
  authorized_key: user=ubuntu key="{{item}}"
  with_file:
  - ~/.ssh/id_rsa.pub
kube-dependencies
- hosts: all
  become: yes
  tasks:
  - name: install Docker
    apt:
      name: docker.io
      state: present
      update_cache: true
  - name: install APT Transport HTTPS
    apt:
      name: apt-transport-https
      state: present
  - name: add Kubernetes apt-key
    apt_key:
      url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
      state: present
  - name: add Kubernetes' APT repository
    apt_repository:
      repo: deb http://apt.kubernetes.io/ kubernetes-xenial main
      state: present
      filename: 'kubernetes'
  - name: install kubernetes-cni
    apt:
      name: kubernetes-cni=0.7.5-00
      state: present
      force: yes
      update_cache: true
  - name: install kubelet
    apt:
      name: kubelet=1.14.0-00
      state: present
      update_cache: true
  - name: install kubeadm
    apt:
      name: kubeadm=1.14.0-00
      state: present
- hosts: master
  become: yes
  tasks:
  - name: install kubectl
    apt:
      name: kubectl=1.14.0-00
      state: present
      force: yes
master
- hosts: master
  become: yes
  tasks:
  - name: initialize the cluster
    shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
    args:
      chdir: $HOME
      creates: cluster_initialized.txt
  - name: create .kube directory
    become: yes
    become_user: ubuntu
    file:
      path: $HOME/.kube
      state: directory
      mode: 0755
  - name: copy admin.conf to user's kube config
    copy:
      src: /etc/kubernetes/admin.conf
      dest: /home/ubuntu/.kube/config
      remote_src: yes
      owner: ubuntu
  - name: install Pod network
    become: yes
    become_user: ubuntu
    shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml >> pod_network_setup.txt
    args:
      chdir: $HOME
      creates: pod_network_setup.txt
worker
- hosts: master
  become: yes
  gather_facts: false
  tasks:
  - name: get join command
    shell: kubeadm token create --print-join-command
    register: join_command_raw
  - name: set join command
    set_fact:
      join_command: "{{ join_command_raw.stdout_lines[0] }}"
- hosts: workers
  become: yes
  tasks:
  - name: join cluster
    shell: "{{ hostvars['master'].join_command }} >> node_joined.txt"
    args:
      chdir: $HOME
      creates: node_joined.txt
The manual I followed for the installation was
https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-ansible-on-ubuntu-16-04
Now I have several questions:
1. Are these configs right?
2. In the master playbook I'm using a manually chosen pod network and I didn't change it; is that correct?
3. My main problem is that my workers don't join the cluster. What's the problem? (A quick check is sketched below.)
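As a minimal sketch of how to check this (assuming the inventory above, and that admin.conf was copied to the ubuntu user's kubeconfig as in the master playbook), you could list the nodes the master actually knows about:
- hosts: master
  become: yes
  tasks:
  - name: list nodes known to the cluster
    become_user: ubuntu
    command: kubectl get nodes
    register: nodes_out
  - name: show the node list
    debug:
      var: nodes_out.stdout_lines
If the workers are missing from that list, the join command either never ran on them or failed; the contents of node_joined.txt on a worker would show the kubeadm output.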

Github Action: [!] Error: Cannot find module 'rollup-plugin-commonjs'

In my package.json there are rollup and rollup-plugin-commonjs,
but inside GitHub Actions it cannot find those packages!
If I do not add rollup to the global package installation step of the GitHub Action, it shows that rollup is not found. But after adding both rollup and rollup-plugin-commonjs globally, I get [!] Error: Cannot find module 'rollup-plugin-commonjs'.
This is my workflow file:
name: Github Action
on:
  push:
    branches:
      - fix/auto-test
jobs:
  test:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v1
      - name: Bootstrap app on Ubuntu
        uses: actions/setup-node@v1
        with:
          node-version: '11.x.x'
      - name: Install global packages
        run: npm install -g prisma rollup rollup-plugin-commonjs
      - name: Get yarn cache directory path
        id: yarn-cache-dir-path
        run: echo "::set-output name=dir::$(yarn cache dir)"
      - name: Cache Project dependencies test
        uses: actions/cache@v1
        id: yarn-cache
        with:
          path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
          key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
          restore-keys: |
            ${{ runner.os }}-yarn-
      - name: Install project deps
        if: steps.yarn-cache.outputs.cache-hit != 'true'
        run: yarn
      - name: Run docker
        run: docker-compose -f docker-compose.test.prisma.yml up --build -d
      - name: Sleep
        uses: jakejarvis/wait-action@master
        with:
          time: '30s'
      - name: Reset the database for safety
        run: yarn reset:backend
      - name: Deploy
        run: yarn deploy:backend
      - name: Build this great app
        run: yarn build
      - name: start app and worker concurrently and create some instances
        run: |
          yarn start &
          yarn start:worker &
          xvfb-run --auto-servernum yarn test:minimal:runner
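One direction worth checking (an assumption on my part, not a confirmed fix): rollup resolves plugins from the project's node_modules, so installing rollup and rollup-plugin-commonjs as project devDependencies and invoking the local binary, rather than relying on the global install, may behave differently. A sketch of what such steps could look like:
      # Hypothetical replacement steps: keep the build tools in the project's node_modules
      # so the rollup config can resolve rollup-plugin-commonjs locally.
      - name: Install build tools locally
        run: yarn add --dev rollup rollup-plugin-commonjs
      - name: Build this great app
        run: yarn rollup -c
If the existing yarn build script already calls rollup, having the packages in devDependencies (and installed by the yarn step) should be enough.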

Can github actions include postgresql with uuid support?

I have a working GitHub action which installs PostgreSQL 11. But now I use UUIDs and those are not supported.
I need to run CREATE EXTENSION IF NOT EXISTS "uuid-ossp"; to install UUIDs but it is not clear how to do that with GitHub Actions.
I've thrashed around and tried several other Docker images which have UUID support enabled, but they are old, throw-away user images and do not support Actions.
My Rust.yml is below:
name: Rust
on: [push]
jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest]
        rust: [stable, beta]
    services:
      postgres:
        image: postgres:11.6
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    env:
      RUSTFLAGS: -D warnings
      CARGO_INCREMENTAL: 0
      RUN_SLOW_TESTS: 1
      RUSTUP_MAX_RETRIES: 10
      CARGO_NET_RETRY: 10
    steps:
      - uses: hecrj/setup-rust-action@v1
        with:
          rust-version: ${{ matrix.rust }}
          components: rustfmt
          targets: wasm32-unknown-unknown
      - uses: actions/checkout@master
      - name: Install Dependencies
        if: matrix.os == 'ubuntu-latest'
        run: sudo apt-get update && sudo apt-get install libudev-dev zlib1g-dev alsa libasound2-dev
      - name: Build
        run: cargo build --verbose
      - name: Install Diesel CLI
        run: cargo install diesel_cli --no-default-features --features postgres
      - name: Setup Diesel
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost/nof1_time_series
        run: diesel setup
      - name: Run tests
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost/nof1_time_series
        run: cargo test --verbose
The solution was to include the following /migrations/00000000000010_install_uuid_feature/Up.sql as its own migration early in the migration set and run it as the superuser postgres:
-- Install the UUID extension to this database
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
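If you would rather do it from the workflow itself, here is a sketch of a step that runs the same statement against the service container with psql (this assumes the psql client is available on the runner, and reuses the timeseries database name and postgres/postgres credentials from the workflow below; it would have to run before any migration that needs the extension, which is why the in-migration approach above is simpler):
      # Hypothetical extra step, placed after the database exists but before uuid-dependent migrations run:
      - name: Enable uuid-ossp as superuser
        env:
          PGPASSWORD: postgres
        run: psql -h localhost -U postgres -d timeseries -c 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";'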
Other UUID tips if you are using Rust Diesel as your ORM:
Make sure to include the Diesel features you need:
[dependencies]
diesel = { version = "1.4", features = ["postgres", "chrono", "uuidv07", "serde_json"] }
uuid = {version = "0.8", features = ["serde", "v4"]}
chrono = { version = "0.4", features = ["serde"] }
Do not check formatting in the GitHub Action - the generated Diesel schema will cause that check to fail the build.
My updated Github Action for Rust Diesel with UUID support is below:
name: Rust
on: [push]
jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest]
        rust: [stable, beta]
    services:
      postgres:
        image: postgres:11.6
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    env:
      RUSTFLAGS: -D warnings
      CARGO_INCREMENTAL: 0
      RUN_SLOW_TESTS: 1
      RUSTUP_MAX_RETRIES: 10
      CARGO_NET_RETRY: 10
    steps:
      - uses: hecrj/setup-rust-action@v1
        with:
          rust-version: ${{ matrix.rust }}
          components: rustfmt
          targets: wasm32-unknown-unknown
      - uses: actions/checkout@master
      - name: Install Dependencies
        if: matrix.os == 'ubuntu-latest'
        run: sudo apt-get update && sudo apt-get install libudev-dev zlib1g-dev alsa libasound2-dev
      - name: Build
        run: cargo build --verbose
      - name: Install Diesel CLI
        run: cargo install diesel_cli --no-default-features --features postgres
      - name: Setup Diesel
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost/timeseries
        run: diesel setup
      - name: Run tests
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost/timeseries
        run: cargo test --verbose
Complete example time-series-database

Destination /etc/default/kubelet does not exist

I am trying to install a Kubernetes cluster with Vagrant and Ansible and it does not work.
This is the error message I got:
TASK [Configure node ip] *******************************************************
fatal: [k8s-node-3]: FAILED! => {"changed": false, "msg": "Destination /etc/default/kubelet does not exist !", "rc": 257}
RUNNING HANDLER [docker status] ************************************************
PLAY RECAP *********************************************************************
k8s-node-3 : ok=10 changed=8 unreachable=0 failed=1 skipped=1 rescued=0 ignored=0
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
The Vagrantfile:
IMAGE_NAME = "ubuntu/bionic64"
Nodes = 3
Vagrant.configure("2") do |config|
  config.ssh.insert_key = false
  config.vm.provider "virtualbox" do |v|
    v.memory = 1024
    v.cpus = 2
  end
  config.vm.define "k8s-master" do |master|
    master.vm.box = IMAGE_NAME
    master.vm.network "private_network", ip: "192.168.99.100", name: "vboxnet0", adapter: 2
    master.vm.hostname = "k8s-master"
    master.vm.provision "ansible" do |ansible|
      ansible.playbook = "k8s-setup/master-playbook.yml"
      ansible.extra_vars = {
        node_ip: "192.168.99.100",
      }
    end
  end
  (1..Nodes).each do |i|
    config.vm.define "k8s-node-#{i}" do |node|
      node.vm.box = IMAGE_NAME
      node.vm.network "private_network", ip: "192.168.99.#{100 + i}", name: "vboxnet0", adapter: 2
      node.vm.hostname = "k8s-node-#{i}"
      node.vm.provision "ansible" do |ansible|
        ansible.playbook = "k8s-setup/node-playbook.yml"
        ansible.extra_vars = {
          node_ip: "192.168.99.#{100 + i}",
        }
      end
    end
  end
end
and the master-playbook.yml file
---
- hosts: all
  become: true
  tasks:
  - name: Install packages that allow apt to be used over HTTPS
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg-agent
      - software-properties-common
  - name: Add an apt signing key for Docker
    apt_key:
      url: https://download.docker.com/linux/ubuntu/gpg
      state: present
  - name: Add apt repository for stable version
    apt_repository:
      repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
      state: present
  - name: Install docker and its dependecies
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - docker-ce
      - docker-ce-cli
      - containerd.io
    notify:
      - docker status
  - name: Add vagrant user to docker group
    user:
      name: vagrant
      group: docker
  - name: Remove swapfile from /etc/fstab
    mount:
      name: "{{ item }}"
      fstype: swap
      state: absent
    with_items:
      - swap
      - none
  - name: Disable swap
    command: swapoff -a
    when: ansible_swaptotal_mb > 0
  - name: Add an apt signing key for Kubernetes
    apt_key:
      url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
      state: present
  - name: Adding apt repository for Kubernetes
    apt_repository:
      repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
      state: present
      filename: kubernetes.list
  - name: Install Kubernetes binaries
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - kubelet
      - kubeadm
      - kubectl
  - name: Configure node ip
    lineinfile:
      path: /etc/default/kubelet
      line: KUBELET_EXTRA_ARGS=--node-ip={{ node_ip }}
  - name: Restart kubelet
    service:
      name: kubelet
      daemon_reload: yes
      state: restarted
  - name: Initialize the Kubernetes cluster using kubeadm
    command: kubeadm init --apiserver-advertise-address="192.168.99.100" --apiserver-cert-extra-sans="192.168.99.100" --node-name k8s-master --pod-network-cidr=192.168.0.0/16
  - name: Setup kubeconfig for vagrant user
    command: "{{ item }}"
    with_items:
      - mkdir -p /home/vagrant/.kube
      - cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
      - chown vagrant:vagrant /home/vagrant/.kube/config
  - name: Install calico pod network
    become: false
    command: kubectl create -f https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml
  - name: Generate join command
    command: kubeadm token create --print-join-command
    register: join_command
  - name: Copy join command to local file
    local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="./join-command"
  handlers:
  - name: docker status
    service: name=docker state=started
and the node-playbook.yml
---
- hosts: all
  become: true
  tasks:
  - name: Install packages that allow apt to be used over HTTPS
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg-agent
      - software-properties-common
  - name: Add an apt signing key for Docker
    apt_key:
      url: https://download.docker.com/linux/ubuntu/gpg
      state: present
  - name: Add apt repository for stable version
    apt_repository:
      repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
      state: present
  - name: Install docker and its dependecies
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - docker-ce
      - docker-ce-cli
      - containerd.io
    notify:
      - docker status
  - name: Add vagrant user to docker group
    user:
      name: vagrant
      group: docker
  - name: Remove swapfile from /etc/fstab
    mount:
      name: "{{ item }}"
      fstype: swap
      state: absent
    with_items:
      - swap
      - none
  - name: Disable swap
    command: swapoff -a
    when: ansible_swaptotal_mb > 0
  - name: Add an apt signing key for Kubernetes
    apt_key:
      url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
      state: present
  - name: Adding apt repository for Kubernetes
    apt_repository:
      repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
      state: present
      filename: kubernetes.list
  - name: Install Kubernetes binaries
    apt:
      name: "{{ packages }}"
      state: present
      update_cache: yes
    vars:
      packages:
      - kubelet
      - kubeadm
      - kubectl
  - name: Configure node ip
    lineinfile:
      path: /etc/default/kubelet
      line: KUBELET_EXTRA_ARGS=--node-ip={{ node_ip }}
  - name: Restart kubelet
    service:
      name: kubelet
      daemon_reload: yes
      state: restarted
  - name: Copy the join command to server location
    copy: src=join-command dest=/tmp/join-command.sh mode=0777
  - name: Join the node to cluster
    command: sh /tmp/join-command.sh
  handlers:
  - name: docker status
    service: name=docker state=started
What is wrong? Why can't the kubelet file be found?
The error occurs because /etc/default/kubelet does not exist on the VMs. Add create: yes to the "Configure node ip" tasks in master-playbook.yml and node-playbook.yml, so that they look like this:
- name: Configure node ip
  lineinfile:
    path: /etc/default/kubelet
    line: KUBELET_EXTRA_ARGS=--node-ip={{ node_ip }}
    create: yes
This way, the file will be created if it does not exist.
I found this generic ansible-playbook on Git that generally follows the official manual. Initially, it was created (half a year ago?) for Ubuntu 16.04. I tried to run it (following the instructions from the official manual) against Ubuntu 18 (since you are using bionic), but I should say there is no /etc/default/kubelet installed (after apt install ...).
Update:
And here is why...
P.S.
I would suggest using Kubespray for a local Vagrant/Kubernetes setup, simply because it works out of the box.
You are following the tutorial on kubernetes.io.
I got the same error as you:
"Destination /etc/default/kubelet does not exist".
Just look at the instructions here.
You need to adjust the playbook slightly to match the other instructions:
Change the line kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master --pod-network-cidr=192.168.0.0/16 according to the other instructions to kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address="192.168.50.10"
The result will be a join command that you need to register and re-use to join the two worker nodes, as sketched below.
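A minimal sketch of that register-and-reuse pattern (assuming an inventory with a host literally named master and a group called workers; node_joined.txt is just a marker file):
- hosts: master
  become: yes
  tasks:
  - name: generate the join command on the master
    command: kubeadm token create --print-join-command
    register: join_command_raw
  - name: keep the join command as a fact
    set_fact:
      join_command: "{{ join_command_raw.stdout_lines[0] }}"
- hosts: workers
  become: yes
  tasks:
  - name: run the registered join command on each worker
    shell: "{{ hostvars['master'].join_command }} >> node_joined.txt"
    args:
      chdir: $HOME
      creates: node_joined.txt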
I had the same error last time:
TASK [Configure node ip] *******************************************************
fatal: [k8s-master]: FAILED! => {"changed": false, "msg": "Destination /etc/default/kubelet does not exist !", "rc": 257}
So please check your Ansible playbook and verify that /etc/default/kubelet will be installed. If not, please add the create parameter:
create: yes
So in your case, it should look like this:
- name: Configure node ip
  lineinfile:
    path: /etc/default/kubelet
    line: KUBELET_EXTRA_ARGS=--node-ip={{ node_ip }}
    create: yes

Ansible and postgresql error with psycopg2

I'm trying to configure PostgreSQL with Ansible on a VPS.
Looking for a solution, I tried changing peer to md5 and also to trust in the Postgres config.
My role:
- name: Install o Postgresql
  become: yes
  apt:
    name: ['libpq-dev', 'python3-dev', 'postgresql', 'postgresql-contrib']
- name: Install o psycopg2
  become: yes
  pip:
    name: psycopg2-binary
    executable: pip3
- name: ensure postgresql is running
  service:
    name: postgresql
    state: started
    enabled: yes
- name: ensure database is created
  become: true
  become_user: postgres
  postgresql_db:
    name: "{{ db_name }}"
Tasks 1, 2 and 3 are OK. But on task 4, "ensure database is created", I receive this error:
psycopg2.OperationalError: FATAL: role "postgresql" does not exist
My playbook:
- hosts: dev
  remote_user: develop
  roles:
    - update_apt
    - nginx
    - webapp
    - postgresql
    - git
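One pattern that is sometimes used (an assumption on my part, not a confirmed fix for this exact error) is to be explicit about the PostgreSQL login and to create the application role before the database; db_user and db_password are hypothetical variables, not part of the role above:
- name: ensure application role exists
  become: true
  become_user: postgres
  postgresql_user:
    name: "{{ db_user }}"         # hypothetical variable
    password: "{{ db_password }}" # hypothetical variable
  no_log: true
- name: ensure database is created
  become: true
  become_user: postgres
  postgresql_db:
    name: "{{ db_name }}"
    owner: "{{ db_user }}"
    login_user: postgres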