Ansible command from inside virtualenv?

This seems like it should be really simple:
tasks:
  - name: install python packages
    pip: name=${item} virtualenv=~/buildbot-env
    with_items: [ buildbot ]
  - name: create buildbot master
    command: buildbot create-master ~/buildbot creates=~/buildbot/buildbot.tac
However, the command will not succeed unless the virtualenv's activate script is sourced first, and there doesn't seem to be any provision for that in the Ansible command module.
I've experimented with sourcing the activate script in various combinations of .profile, .bashrc, .bash_login, etc., with no luck. Alternatively, there's the shell module, but it seems like kind of an awkward hack:
- name: create buildbot master
  shell: source ~/buildbot-env/bin/activate && \
         buildbot create-master ~/buildbot \
         creates=~/buildbot/buildbot.tac executable=/bin/bash
Is there a better way?

The better way is to use the full path to the installed script - it will run in its virtualenv automatically:
tasks:
  - name: install python packages
    pip: name={{ item }} virtualenv={{ venv }}
    with_items: [ buildbot ]
  - name: create buildbot master
    command: "{{ venv }}/bin/buildbot create-master ~/buildbot
              creates=~/buildbot/buildbot.tac"
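Another option in the same spirit (my own addition, not part of this answer) is to keep the bare command name and prepend the virtualenv's bin directory to PATH with the task-level environment keyword; a minimal sketch, assuming venv is defined and facts have been gathered so ansible_env.PATH is available:
- name: create buildbot master
  command: buildbot create-master ~/buildbot
  args:
    creates: ~/buildbot/buildbot.tac
  environment:
    # bare command names now resolve inside the venv first
    PATH: "{{ venv }}/bin:{{ ansible_env.PATH }}"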

This is a genericized version of the wrapper method.
venv_exec.j2:
#!/bin/bash
# Activate the venv, then run whatever command line was passed to this wrapper
source {{ venv }}/bin/activate
"$@"
And then the playbook:
tasks:
  - pip: name={{ item }} virtualenv={{ venv }}
    with_items:
      - buildbot
  - template: src=venv_exec.j2 dest={{ venv }}/exec mode=755
  - command: "{{ venv }}/exec buildbot create-master {{ buildbot_master }}"
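Because the wrapper is generic, any further command can be routed through it the same way; a hypothetical follow-up task (upgrade-master is only an illustrative buildbot subcommand, not from the original answer):
  - command: "{{ venv }}/exec buildbot upgrade-master {{ buildbot_master }}"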

Here's a way to enable the virtualenv for an entire play; this example builds the virtualenv in one play, then starts using it in the next.
Not sure how clean it is, but it works. I'm just building a bit on what mikepurvis mentioned here.
---
# Build virtualenv
- hosts: all
  vars:
    PROJECT_HOME: "/tmp/my_test_home"
    ansible_python_interpreter: "/usr/local/bin/python"
  tasks:
    - name: "Create virtualenv"
      shell: virtualenv "{{ PROJECT_HOME }}/venv"
             creates="{{ PROJECT_HOME }}/venv/bin/activate"
    - name: "Copy virtualenv wrapper file"
      synchronize: src=pyvenv
                   dest="{{ PROJECT_HOME }}/venv/bin/pyvenv"

# Use virtualenv
- hosts: all
  vars:
    PROJECT_HOME: "/tmp/my_test_home"
    ansible_python_interpreter: "/tmp/my_test_home/venv/bin/pyvenv"
  tasks:
    - name: "Guard code, so we are more certain we are in a virtualenv"
      shell: echo $VIRTUAL_ENV
      register: command_result
      failed_when: command_result.stdout == ""
pyvenv wrapper file:
#!/bin/bash
# Activate the venv that lives next to this wrapper, then run its python with the given arguments
source "$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/activate"
python "$@"

Just run the virtualenv's pip in a shell:
shell: ~/buildbot-env/bin/pip install ${item}
Works like a charm. I have no idea what the pip module does with virtualenvs, but it seems pretty useless.
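For comparison (and as the accepted answer above already shows), the pip module's virtualenv parameter handles this without shelling out; a minimal equivalent of the shell line, assuming the same ~/buildbot-env path:
- name: install python packages
  pip:
    name: "{{ item }}"
    virtualenv: ~/buildbot-env
  with_items: [ buildbot ]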

As I commented above, I create a script, say it is called buildbot.sh:
source ~/buildbot-env/bin/activate
buildbot create-master [and more stuff]
Then run it on the remote with a task like this:
- name: Create buildbot master
script: buildbot.sh
To me this still seems unnecessary, but it may be cleaner than running it in a shell command. Your playbook looks cleaner at the cost of not seeing immediately what the script does.
At least some modules do seem to support virtualenvs, as both django_manage and rax_clb already have a built-in virtualenv parameter. It may not be such a big step for Ansible to include a command-in-virtualenv sort of module.
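For illustration, this is roughly what django_manage's built-in virtualenv parameter looks like in use (the app path here is hypothetical):
- name: Run migrations inside the virtualenv
  django_manage:
    command: migrate
    app_path: /srv/myapp
    virtualenv: "{{ venv }}"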

Related

TYPO3: How to publish an extension to TER with GitHub Actions and tailor, and add a third-party library on the fly

I would like to publish an extension automatically to TER by using GitHub Actions and tailor, the CLI tool for maintaining public TYPO3 extensions. This works perfectly fine with the following workflow configuration:
name: TYPO3 Extension TER Release
on:
  push:
    tags:
      - '*'
jobs:
  publish:
    name: Publish new version to TER
    if: startsWith(github.ref, 'refs/tags/')
    runs-on: ubuntu-20.04
    env:
      TYPO3_EXTENSION_KEY: ${{ secrets.TYPO3_EXTENSION_KEY }}
      TYPO3_API_TOKEN: ${{ secrets.TYPO3_API_TOKEN }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      - name: Check tag
        run: |
          if ! [[ ${{ github.ref }} =~ ^refs/tags/[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}$ ]]; then
            exit 1
          fi
      - name: Get version
        id: get-version
        run: echo ::set-output name=version::${GITHUB_REF/refs\/tags\//}
      - name: Get comment
        id: get-comment
        run: |
          readonly local comment=$(git tag -n10 -l ${{ steps.get-version.outputs.version }} | sed "s/^[0-9.]*[ ]*//g")
          if [[ -z "${comment// }" ]]; then
            echo ::set-output name=comment::Released version ${{ steps.get-version.outputs.version }} of ${{ env.TYPO3_EXTENSION_KEY }}
          else
            echo ::set-output name=comment::$comment
          fi
      - name: Setup PHP
        uses: shivammathur/setup-php@v2
        with:
          php-version: 7.4
          extensions: intl, mbstring, json, zip, curl
          tools: composer:v2
      - name: Install tailor
        run: composer global require typo3/tailor --prefer-dist --no-progress --no-suggest
      - name: Publish to TER
        run: php ~/.composer/vendor/bin/tailor ter:publish --comment "${{ steps.get-comment.outputs.comment }}" ${{ steps.get-version.outputs.version }}
Since my extension depends on a third-party PHP library (which is loaded if the extension is installed with composer), I need to add this library on the fly when the extension gets deployed to the TER. Therefore I added a composer.json and a composer.lock in Resources/Private/PHP/.
Now I would like to tell tailor to execute composer install in Resources/Private/PHP/ and package the extension including the external library. Is this possible? If so, how?
Generally it is useful to put everything into Composer scripts so that you can execute these actions without depending on GitHub Actions or any CI at all.
For example, you could add a few commands like this:
{
  "scripts": {
    "build:cleanup": [
      "git reset --hard",
      "git clean -xfd"
    ],
    "deploy:ter:setup": [
      "@composer global require clue/phar-composer typo3/tailor"
    ],
    "build:ter:vendors": [
      "(mkdir -p /tmp/vendors && cd /tmp/vendors && composer require acme/foo:^1.0 acme/bar:^2.0 && composer global exec phar-composer build -v)",
      "cp /tmp/vendors/vendors.phar ./Resources/Private/libraries.phar",
      "echo \"require 'phar://' . \\TYPO3\\CMS\\Core\\Utility\\ExtensionManagementUtility::extPath('$(composer config extra.typo3/cms.extension-key)') . 'Resources/Private/libraries.phar/vendor/autoload.php';\" >> ext_localconf.php"
    ],
    "deploy:ter:upload": [
      "composer global exec -v -- tailor ter:publish --comment \"$(git tag -l --format='%(contents)' $TAG)\" $TAG"
    ],
    "deploy:ter": [
      "@build:cleanup",
      "@deploy:ter:setup",
      "@build:ter:vendors",
      "@deploy:ter:upload"
    ]
  }
}
The various scripts explained:
build:cleanup drops all pending files to ensure no undesired files are uploaded to the TER
deploy:ter:setup installs typo3/tailor for the TER upload and clue/phar-composer for building a Phar of your vendor dependencies
build:ter:vendors installs a manually maintained list of dependencies in a temporary directory and builds a Phar from that; it then copies that Phar to the current directory and adds a require call to your ext_localconf.php
deploy:ter:upload finally invokes Tailor to upload the current directory including the Phar and the ext_localconf.php adjustment and fetches the comment of the specified Git tag; notice that tagging should be done with --message here to have an annotated Git tag (e.g. git tag -a 1.2.3 -m "Bugfix release")
Now assuming you export/provide the environment variables TYPO3_API_USERNAME, TYPO3_API_PASSWORD and TAG you can instantly deploy the latest release like this:
# Provide username and password for TER, if not done yet
export TYPO3_API_USERNAME=YourName
export TYPO3_API_PASSWORD=YourSecretPassword
# Provide the tag to deploy
TAG=1.2.3 composer deploy:ter
Subsequently the related Github action becomes very simple:
jobs:
  build:
    # ...
  release-ter:
    name: TYPO3 TER release
    if: startsWith(github.ref, 'refs/tags/')
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Deploy to TER
        env:
          TYPO3_API_USERNAME: ${{ secrets.TYPO3_API_USERNAME }}
          TYPO3_API_PASSWORD: ${{ secrets.TYPO3_API_PASSWORD }}
          TAG: ${{ github.ref_name }}
        run: composer deploy:ter
Here is a live example:
.github/workflows/ci.yml
composer.json

Permission denied error in github actions

I have written a GitHub Action to retrieve the changed .sql files and lint them using sqlfluff.
Here is my GitHub Action code:
name: files_lint
on:
  - pull_request
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v2
      - name: Install Python
        uses: "actions/setup-python@v2"
        with:
          python-version: "3.7"
      - name: install sqlfluff
        run: "pip install sqlfluff"
      - name: Get changed .sql files
        id: linting
        run: some code to get the changed files
      - name: Linting files started
        id: sql_linting
        if: steps.linting.outputs.lintees != ''
        shell: bash -l {0}
        run: ${{ steps.linting.outputs.lintees }} > sqlfluff fix --force
But when I run ${{ steps.linting.outputs.lintees }} > sqlfluff fix --force on the changed .sql files in the above GitHub Action, I get an error:
/home/runner/work/_temp/a41i1c89a4.sh: line 1: test.sql: Permission denied
Error: Process completed with exit code 126.
You can’t redirect files like this:
run: ${{ steps.linting.outputs.lintees }} > sqlfluff fix --force
This is attempting to write the output of whatever that command is - but I’d guess it’s a list of files rather than a command?
You should pass as parameters (assuming it’s a list of files):
run: sqlfluff fix --force ${{ steps.linting.outputs.lintees }}
Also, I presume you're going to do something with the result afterwards? If not, the fixed files will not do anything. If you just want to check the files, sqlfluff lint would be better than sqlfluff fix (and it catches more issues, as sqlfluff fix only looks at rules it can fix).
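With that in mind, here is a minimal sketch of the lint-only variant of the step (reusing the hypothetical lintees output from the question):
      - name: Lint changed .sql files
        id: sql_linting
        if: steps.linting.outputs.lintees != ''
        run: sqlfluff lint ${{ steps.linting.outputs.lintees }}
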
For all developers who created a shell script (.sh) locally on Windows or in the Windows Subsystem for Linux (WSL), or cloned the Git repository without knowing on which file system the script was created: make sure the shell script is executable on Linux!
Linux
chmod +x script.sh
Windows
git update-index --chmod=+x script.sh
Finally, don't forget to push your changes.
git add script.sh
git commit -m'Making script.sh executable'
git push
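If changing the file mode in Git isn't an option, a workaround (my assumption, not from the answer above) is to make the script executable inside the workflow itself, right before it runs:
      - name: Make script executable (hypothetical script name)
        run: chmod +x ./script.sh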

CircleCI "Could not ensure that workspace directory exists"

I am using CircleCI with a GameCI docker image in order to build a Unity project. The build works, but I am trying to make use of the h-matsuo/github-release orb in order to create a release on GitHub for the build. I have created a new separate job for this, so I needed to share data between the jobs. I am using persist_to_workspace in order to do that, as specified in the documentation, but the solution doesn't seem to work. I get the following error:
Could not ensure that workspace directory /root/project/Zipped exists
For the workspace persist logic, I've added the following lines of code in my config.yml file:
working_directory: /root/project - Inside the executor of the main job
persist_to_workspace - As the last command inside my main job's steps
attach_workspace - As the first command inside my second job's steps
Here's my full config.yml file:
version: 2.1

orbs:
  github-release: h-matsuo/github-release@0.1.3

executors:
  unity_exec:
    docker:
      - image: unityci/editor:ubuntu-2019.4.19f1-windows-mono-0.9.0
    environment:
      BUILD_NAME: speedrun-circleci-build
    working_directory: /root/project

.build: &build
  executor: unity_exec
  steps:
    - checkout
    - run: mkdir -p /root/project/Zipped
    - run:
        name: Git submodule recursive
        command: git submodule update --init --recursive
    - run:
        name: Remove editor folder in shared project
        command: rm -rf ./Assets/Shared/Movement/Generic/Attributes/Editor/
    - run:
        name: Converting Unity license
        command: chmod +x ./ci/unity_license.sh && ./ci/unity_license.sh
    - run:
        name: Building game binaries
        command: chmod +x ./ci/build.sh && ./ci/build.sh
    - run:
        name: Zipping build
        command: apt update && apt -y install zip && zip -r "/root/project/Zipped/build.zip" ./Builds/
    - store_artifacts:
        path: /root/project/Zipped/build.zip
    - run:
        name: Show all files
        command: find "$(pwd)"
    - persist_to_workspace:
        root: Zipped
        paths:
          - build.zip

jobs:
  build_windows:
    <<: *build
    environment:
      BUILD_TARGET: StandaloneWindows64
  release:
    description: Build project and publish a new release tagged `v1.1.1`.
    executor: github-release/default
    steps:
      - attach_workspace:
          at: /root/project/Zipped
      - run:
          name: Show all files
          command: sudo find "/root/project"
      - github-release/create:
          tag: v1.1.1
          title: Version v1.1.1
          description: This release is version v1.1.1.
          file-path: ./build.zip

workflows:
  version: 2
  build:
    jobs:
      - build_windows
      - release:
          requires:
            - build_windows
Can somebody help me with this please?
If somebody ever encounters the same issue, try to avoid using the /root path. I stored the artifacts somewhere inside /tmp/, and before storing them I manually created the folder with chmod 777 permissions, using mkdir with the -m flag.
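A minimal sketch of that adjustment against the config above, using an assumed /tmp/workspace directory (the paths are illustrative, not from the original answer):
    # in the build steps, replacing the /root/project/Zipped usage
    - run: mkdir -m 777 -p /tmp/workspace
    - run:
        name: Zipping build
        command: apt update && apt -y install zip && zip -r /tmp/workspace/build.zip ./Builds/
    - persist_to_workspace:
        root: /tmp/workspace
        paths:
          - build.zip

    # in the release job
    - attach_workspace:
        at: /tmp/workspace
    - github-release/create:
        tag: v1.1.1
        title: Version v1.1.1
        description: This release is version v1.1.1.
        file-path: /tmp/workspace/build.zip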

How can I cache Android NDK in my Github Actions workflow?

I want to cache the Android NDK in my Github Actions workflow. The reason is that I require a specific version of the NDK and CMake which aren't pre-installed on MacOS runners.
I tried to use the following workflow job to achieve this:
jobs:
  build:
    runs-on: macos-latest
    steps:
      - name: Cache NDK
        id: cache-primes
        uses: actions/cache@v1
        with:
          path: ${{ env.ANDROID_NDK_HOME }}
          key: ${{ runner.os }}-ndk-${{ hashFiles(env.ANDROID_NDK_HOME) }}
      - name: Install NDK
        run: echo "y" | $ANDROID_HOME/tools/bin/sdkmanager "ndk;21.0.6113669" "cmake;3.10.2.4988404"
The problem with this is that the env context doesn't contain the ANDROID_NDK_HOME variable, so build.steps.with.path evaluates to an empty string.
The regular environment variable is present and prints the correct path if I debug using the following step:
jobs:
  build:
    steps:
      - name: Debug print ANDROID_NDK_HOME
        run: echo $ANDROID_NDK_HOME
But the regular environment variable can only be used in shell scripts and not in build.steps.with as far as I understand.
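One way around that limitation (my own workaround sketch, not from the answers below) is to copy the shell variable into the workflow's env context via $GITHUB_ENV in an earlier step; after that, ${{ env.ANDROID_NDK_HOME }} can be referenced from with: in later steps:
      - name: Export ANDROID_NDK_HOME to the env context
        run: echo "ANDROID_NDK_HOME=$ANDROID_NDK_HOME" >> "$GITHUB_ENV"
      - name: Cache NDK
        uses: actions/cache@v2
        with:
          path: ${{ env.ANDROID_NDK_HOME }}
          key: ${{ runner.os }}-ndk-21.0.6113669
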
- name: Prepare NDK dir for caching ( workaround for https://github.com/actions/virtual-environments/issues/1337 )
  run: |
    sudo mkdir -p /usr/local/lib/android/sdk/ndk
    sudo chmod -R 777 /usr/local/lib/android/sdk/ndk
    sudo chown -R $USER:$USER /usr/local/lib/android/sdk/ndk
- name: NDK Cache
  id: ndk-cache
  uses: actions/cache@v2
  with:
    path: /usr/local/lib/android/sdk/ndk
    key: ndk-cache-21.0.6113669-v2
- name: Install NDK
  if: steps.ndk-cache.outputs.cache-hit != 'true'
  run: echo "y" | sudo /usr/local/lib/android/sdk/tools/bin/sdkmanager --install "ndk;21.0.6113669"
Here is a config I use in my project. A couple of notes:
You need to create the ndk directory and change its permissions to work around https://github.com/actions/virtual-environments/issues/1337
Make sure you use the proper id (ndk-cache in the example above) in the if statement, so the cache is actually used.
You can easily specify the NDK installation directory to be cached.
- name: Cache (NDK)
  uses: actions/cache@v2
  with:
    path: ${ANDROID_HOME}/ndk/21.0.6113669
    key: ndk-cache
- name: Install NDK
  run: echo "y" | sudo ${ANDROID_HOME}/tools/bin/sdkmanager --install 'ndk;21.0.6113669'
As long as you're using an NDK version that is pre-installed on the GitHub Actions runners, you no longer need to worry about caching your NDK :)
Find the runners list here:
https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-software
Example runner list:
https://github.com/actions/runner-images/blob/main/images/linux/Ubuntu2204-Readme.md
The Ubuntu 22.04 runner, for example, comes with three NDKs pre-installed:
23.2.8568313
24.0.8215888
25.2.9519653 (default)
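A quick sanity check I'd add (not from the answer) to confirm which NDK versions are actually present on a given runner is to list the SDK's ndk directory:
      - name: List pre-installed NDK versions
        run: ls "$ANDROID_HOME/ndk"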

How to add a callback-plugin to AWX docker

I installed the AWX Docker setup from here - https://github.com/ansible/awx. I am trying to add a callback plugin for a specific project as described here - https://docs.ansible.com/ansible-tower/latest/html/administration/tipsandtricks.html#using-callback-plugins-with-tower. It does not work. I added the following lines to Template -> EXTRA VARIABLES:
---
bin_ansible_callbacks: true
callback_plugins: /callback_plugins
stdout_callback: selective
That does not work.
I also added the directory /var/lib/awx/projects/test/callback_plugins/ to SETTINGS -> JOBS -> ANSIBLE CALLBACK PLUGINS - it doesn't work either.
Please tell me how to do this correctly, so that a custom plugin is picked up and actually works.
I'm having the same problem; after some debugging I opened an issue on the AWX project: https://github.com/ansible/awx/issues/4149
In the meantime I've applied a workaround that consists of creating a symlink for each callback plugin you want to use in the callback_plugins folder of your roles project.
For example, if you are using the ara project:
- name: Research for callbacks in virtualenv libs
  find:
    path: '{{ ansible_playbook_python|dirname|dirname }}/{{ item }}'
    file_type: file
    depth: 1
    patterns: '*.py'
    excludes: '__init__*'
  register: _internal__callbacks
  with_items:
    - lib/python3.6/site-packages/ara/plugins/callbacks
# TODO : prevent existing callbacks to be overwritten
- name: Create symlinks from virtualenv lib directory to local callback_plugins/
  file:
    src: '{{ item }}'
    dest: '{{ playbook_dir }}/callback_plugins/{{ item|basename }}'
    state: link
  with_items: "{{ _internal__callbacks.results|map(attribute='files')|flatten|map(attribute='path')|list }}"
It seems you have to use callbacks_enabled (the newer name for callback_whitelist) instead of callback_plugins. Put this example configuration in the /var/lib/awx/ansible.cfg file:
[defaults]
callback_whitelist = profile_tasks
This works on AWX 17.x.