How to remove an attribute from a YAML file using yq? - docker-compose

Because of a recent change in the ubuntu-latest image that introduced a buggy version of docker-compose, I had to lock down the version of docker-compose on my pipelines.
However, there was a task that I used to help clean up my deploy scripts, namely DockerCompose@0. What I am trying to do is implement the equivalent of
- task: DockerCompose@0
  displayName: 'Remove build options'
  inputs:
    action: 'Combine configuration'
    removeBuildOptions: true
So basically I was thinking of using yq, which will parse the YAML file and remove the build options that are not applicable to the stack deployment. However, I am not exactly sure how to do it, since I need to remove the option from every service that MAY include it.
So given the following input
services:
  def:
    build: ./def
    image: trajano/def
  ghi:
    image: trajano/ghi
version: '3.7'
I want to get
services:
  def:
    image: trajano/def
  ghi:
    image: trajano/ghi
version: '3.7'

For newer yq versions (v4+; see the docs):
yq eval 'del(.services.[].build)' foo.yml
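To edit the file in place rather than print the result to stdout, newer yq also supports an -i flag (a minimal sketch, reusing the example file name foo.yml from above):
yq eval -i 'del(.services.[].build)' foo.yml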

For older yq versions (v3):
yq d foo.yml 'services.*.build'
To do this in Azure pipelines
steps:
  - bash: |
      URL="https://github.com/docker/compose/releases/download/1.26.2/docker-compose-Linux-x86_64"
      sudo curl -sL $URL -o /usr/local/bin/docker-compose
      # ensure the downloaded binary is executable
      sudo chmod +x /usr/local/bin/docker-compose
      sudo snap install yq
    displayName: Install additional software
  - bash: |
      docker-compose config | yq d - 'services.*.build' > $(Build.ArtifactStagingDirectory)/docker-compose.yml
    displayName: Clean up docker-compose.yml
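If a later stage or release consumes the cleaned file, you will likely also want to publish it as a build artifact. A rough sketch of an additional step (the artifact name docker-compose is an assumption, not part of the original pipeline):
- task: PublishBuildArtifacts@1
  displayName: Publish cleaned docker-compose.yml
  inputs:
    PathtoPublish: $(Build.ArtifactStagingDirectory)
    ArtifactName: docker-compose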

Related

npm run prod not actually running in github action despite showing successful

I don't believe that npm run prod is actually running(?) in my GitHub action despite not throwing any kind of error. The reasons why I believe that are:
If I delete my public/js/app.js file locally and push the change, it doesn't get rebuilt and my production site breaks as there's no app.js file.
If I leave the file in place and push my code to production, it's not minified, and one of the keys I need to reference still contains the dev value.
If I replace the aforementioned key with a different value and run npm run prod locally, then app.js is minified and contains my updated value.
Why would the npm run prod command not work within a github action, and also indicate that it ran successfully?
Here's my entire workflow file:
name: Prod
on:
  push:
    branches: [ main ]
jobs:
  laravel_tests:
    runs-on: ubuntu-20.04
    env:
      DB_CONNECTION: mysql
      DB_HOST: localhost
      DB_PORT: 3306
      DB_DATABASE: testdb
      DB_USERNAME: root
      DB_PASSWORD: root
    steps:
      - name: Set up MySQL
        run: |
          sudo systemctl start mysql
          mysql -e 'CREATE DATABASE testdb;' -uroot -proot
          mysql -e 'SHOW DATABASES;' -uroot -proot
      - uses: actions/checkout@main
      - name: Copy .env
        run: php -r "file_exists('.env') || copy('.env.example', '.env');"
      - name: Install Dependencies
        run: composer install -q --no-ansi --no-interaction --no-scripts --no-progress
      - name: Generate key
        run: php artisan key:generate
      - name: Directory Permissions
        run: chmod -R 777 storage bootstrap/cache
      - name: Clean Install
        run: npm ci
      - name: Compile assets
        run: npm run prod
      - name: Execute tests (Unit and Feature tests) via PHPUnit
        run: vendor/bin/phpunit
  forge_deploy:
    runs-on: ubuntu-20.04
    needs: laravel_tests
    steps:
      - name: Make Get Request
        uses: satak/webrequest-action@master
        with:
          url: ${{ secrets.PROD_DEPLOY_URL }}
          method: GET
UPDATE:
My suspicion is that running the build process in the action isn't actually updating the repo (I'm fairly certain it isn't, as that would likely not be the desired behavior). So the deploy URL that I'm using to push the code is likely just grabbing the repo as-is and deploying it.
I need a way to update only the public folder in the repo with the output of the npm run prod command. Not sure if this is possible, or advisable, but I'm nearly positive that's what's going on.
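If that is indeed what is happening, one possibility (a minimal sketch only, not necessarily advisable for every setup; the step name and commit message are placeholders, and it assumes the checkout step's credentials allow pushing back to the branch) is to have the workflow commit the compiled assets back to the repository after "Compile assets", so the deploy URL picks them up:
- name: Commit compiled assets
  run: |
    git config user.name "github-actions[bot]"
    git config user.email "github-actions[bot]@users.noreply.github.com"
    git add public
    git diff --cached --quiet || git commit -m "Compile production assets"
    git push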

Using podman instead of docker for the Docker@2 task in Azure DevOps

Our build agent is running Podman 3.4.2 and there is a global alias in place for each terminal session that simply replaces docker with podman, so the command docker --version yields podman version 3.4.2 as a result.
The goal is to use podman for the Docker@2 task in an Azure DevOps pipeline:
steps:
  - task: Docker@2
    displayName: Build and push an image to container registry
    inputs:
      command: buildAndPush
      repository: aspnet-web-mhi
      dockerfile: $(dockerfilePath)
      containerRegistry: $(dockerRegistryServiceConnection)
      tags: |
        $(tag)
Turns out I was a bit naive in assuming that this would work, as the ADO agent is having none of it:
##[error]Unhandled: Unable to locate executable file: 'docker'. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.
Is there a way to make that replacement work without too much fuss? I'd like to avoid scripting everything myself to use podman instead of docker and push it to a registry, if I can help it.
Since I needed to make progress on this, I've decided to go down the bash route and built, pushed, pulled and ran the images manually. This is the gist of it:
steps:
  - task: Bash@3
    displayName: Build Docker Image for DemoWeb
    inputs:
      targetType: inline
      script: |
        podman build -f $(dockerfilePath) -t demoweb:$(tag) .
  - task: Bash@3
    displayName: Login and Push to ACR
    inputs:
      targetType: inline
      script: |
        podman login -u $(acrServicePrincipal) -p $(acrPassword) $(acrName)
        podman push demoweb-mhi:$(tag) $(acrName)/demoweb:$(tag)
  - task: Bash@3
    displayName: Pull image from ACR
    inputs:
      targetType: inline
      script: |
        podman pull $(acrName)/demoweb:$(tag) --creds=$(acrServicePrincipal):$(acrPassword)
  - task: Bash@3
    displayName: Run container
    inputs:
      targetType: inline
      script: |
        podman run -p 8080:80 --restart unless-stopped $(acrName)/demoweb:$(tag)
If you decide to go down that route, please make sure to not expose your service principal and password as variables in your yml file, but create them as secrets.
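As a rough sketch of that (the env variable names here are placeholders; Azure DevOps does not expose secret variables to scripts automatically, so they have to be mapped explicitly), the login step could consume the secrets through the task's env block:
- task: Bash@3
  displayName: Login and Push to ACR
  inputs:
    targetType: inline
    script: |
      podman login -u "$ACR_USER" -p "$ACR_PASSWORD" $(acrName)
      podman push demoweb-mhi:$(tag) $(acrName)/demoweb:$(tag)
  env:
    ACR_USER: $(acrServicePrincipal)
    ACR_PASSWORD: $(acrPassword)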
I'll keep this question open - maybe someone with more expertise in handling GNU/Linux finds a more elegant way.
You could install the package podman-docker as well. It installs a wrapper in /usr/bin/docker that points to /usr/bin/podman, so tasks that originally use the docker binary (or even the docker socket) can run transparently with podman, like the Docker@2 build and push.
cat /usr/bin/docker
#!/bin/sh
[ -e /etc/containers/nodocker ] || \
echo "Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg." >&2
exec /usr/bin/podman "$@"
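Installing it on the agent could look roughly like this (a sketch assuming a Debian/Ubuntu-based agent; the package may not be available on older releases, and on RPM-based systems you would use that distribution's package manager instead):
sudo apt-get update
sudo apt-get install -y podman-docker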

Gitlab CI not able to use pg_prove

I'm struggling to get a GitLab CI pipeline up and running that uses the correct version of postgres (13) and has PGTap installed.
I deploy my project locally using a Dockerfile which uses postgres:13.3-alpine and then installs PGTap too. However, I'm not sure if I can use this Dockerfile to help with my CI issues.
In my gitlab-ci.yml file, I currently have:
variables:
  GIT_SUBMODULE_STRATEGY: recursive
pgtap:
  only:
    refs:
      - merge_request
      - master
    changes:
      - ddl/**/*
  image: postgres:13.1-alpine
  services:
    - name: postgres:13.1-alpine
      alias: db
  variables:
    POSTGRES_DB: postgres
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: ""
    POSTGRES_HOST_AUTH_METHOD: trust
  script:
    - psql postgres://postgres@db/postgres -c 'create extension pgtap;'
    - psql postgres://postgres@db/postgres -f ddl/01.sql
    - cd ddl/
    - psql postgres://postgres@db/postgres -f 02.sql
    - psql postgres://postgres@db/postgres -f 03.sql
    - pg_prove -d postgres://postgres@db/postgres --recurse test_*
The above works until it gets to the pg_prove command at the bottom, where I get the below error:
pg_prove: command not found
Is there a way I can install pg_prove using the script commands? Or is there a better way to do this?
There is an old (now closed) issue about this.
To summarize, either you build your own image based on postgres:13.1-alpine and install PGTap in it, or you use a non-official image where PGTap is already installed, such as 1maa/postgres:13-alpine:
docker run -it 1maa/postgres:13-alpine sh
/ # which pg_prove
/usr/local/bin/pg_prove
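Building your own image could look roughly like this (a sketch only, mirroring the apk/cpan commands below; it has not been tested against your project):
FROM postgres:13.1-alpine
RUN apk add --no-cache --update build-base make perl perl-dev git openssl-dev \
 && cpan TAP::Parser::SourceHandler::pgTAP
You would then use that image for the job (and, if needed, the service) instead of postgres:13.1-alpine.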
Since your step image is alpine based, you can try:
script:
  - apk add --no-cache --update build-base make perl perl-dev git openssl-dev
  - cpan TAP::Parser::SourceHandler::pgTAP
  - psql ... etc
You can probably omit some of the packages...

How can I trigger a single deployment hook from a matrix in github?

So I have this matrix
name: test
on: [create, push]
jobs:
  build:
    strategy:
      matrix:
        context: [test, pgtest]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: docker compose
        run: |
          docker-compose -f docker-compose.${{ matrix.context }}.yml up -d
          docker ps
      - name: install liquibase
        run: |
          wget --quiet https://github.com/liquibase/liquibase/releases/download/v3.8.4/liquibase-3.8.4.tar.gz
          wget --quiet https://jdbc.postgresql.org/download/postgresql-42.2.9.jar
          mkdir -p liquibase
          tar --extract --file liquibase-*.tar.gz --directory liquibase
      - name: wait for dbs
        run: |
          set -x
          wget --quiet https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh
          chmod +x wait-for-it.sh
          ./wait-for-it.sh localhost:5432
          docker pull postgres:alpine
          isReady() {
            docker run --network host --rm postgres:alpine pg_isready \
              --host localhost --dbname test --username postgres --timeout 30
          }
          until isReady
          do
            sleep 1
          done
      - name: db migration
        run: |
          ./liquibase/liquibase --defaultsFile=liquibase-${{ matrix.context }}.properties update \
            || ( docker-compose logs && exit 1 )
The matrix's only point is to test different contexts for liquibase. I don't actually want to create different binaries for each matrix leg or anything like that. I see the matrix as kind of a thread fork, but I don't know how to join at the end so I can kick off a single deployment event.
I think that running on check_run.completed should allow me to do this, but... that event doesn't seem to trigger either.
How can I kick off a single deployment event after the entire matrix has run?
If I'm understanding your requirement correctly, you can just add another job that depends on the build job (the one containing the matrix) using needs. It will wait for all the matrix jobs to finish before running deploy.
on: push
jobs:
  build:
    strategy:
      matrix:
        context: [test, pgtest]
    runs-on: ubuntu-latest
    steps:
      - name: Tests
        run: echo "Testing ${{ matrix.context }}"
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Deploy
        run: echo "Deploying"
Perhaps this answer completes the solution for you?
I see two ways of accomplishing this:
Split the current matrix into two jobs and have the deployment hook depend on the test job. This way test and pgtest run in parallel, and when test finishes the deployment will start. The problem with this approach is code readability and maintenance, as you'll have the code completely duplicated unless you encapsulate it into an action itself, which is extreme overkill.
Run the deployment hook as a conditional last step of test. This seems the best option given the question you've asked, but there might be situations where this itself is not optimal.
The last step for solution (2) would look something like this
- name: Deployment
  if: matrix.context == 'test'
  run: echo "Do something"
Hope this helps.

How to build/run one Dockerfile per build job in Travis-CI?

I've got a repository with multiple Dockerfiles which take ~20min each to build: https://github.com/fredrikaverpil/pyside2-wheels
I'd like to efficiently divide these Dockerfiles so that each one is built in its own job.
Right now, this is my .travis.yml:
language: python
sudo: required
dist: trusty
python:
  - 2.7
  - 3.5
services:
  - docker
install:
  - docker build -f Dockerfile-Ubuntu16.04-py${TRAVIS_PYTHON_VERSION} -t fredrikaverpil/pyside2-ubuntu16.04-py${TRAVIS_PYTHON_VERSION} .
  - docker run --rm -v $(pwd):/pyside-setup/dist fredrikaverpil/pyside2-ubuntu16.04-py${TRAVIS_PYTHON_VERSION}
script:
  - ls -al *.whl /
This creates two jobs, one per Python version. However, I'd rather have one job per Dockerfile, as I'm about to add more such files.
How can this be achieved?
Managed to solve it, I think.
language: python
sudo: required
dist: trusty
services:
  - docker
matrix:
  include:
    - env: DOCKER_OS=ubuntu16.04
      python: 2.7
    - env: DOCKER_OS=ubuntu16.04
      python: 3.5
    - env: DOCKER_OS=centos7
      python: 2.7
install:
  - docker build -f Dockerfile-$DOCKER_OS-py${TRAVIS_PYTHON_VERSION} -t fredrikaverpil/pyside2-$DOCKER_OS-py${TRAVIS_PYTHON_VERSION} .
  - docker run --rm -v $(pwd):/pyside-setup/dist fredrikaverpil/pyside2-$DOCKER_OS-py${TRAVIS_PYTHON_VERSION}
script:
  - ls -al *.whl /
This results in three job builds.
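Adding more Dockerfiles later should then just be a matter of appending entries to matrix.include, for example (a hypothetical combination, assuming a matching Dockerfile-centos7-py3.5 exists in the repository):
    - env: DOCKER_OS=centos7
      python: 3.5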