MariaDB fails to start in GitHub workflow

I'm building an app that needs MariaDB 10.3 for testing in a CI pipeline. I'm trying to set this up with GitHub Actions, but I'm having trouble getting MariaDB to start. The Ubuntu runner includes MySQL 8.0 by default, so my workflow removes that first and then installs MariaDB, but MariaDB fails to start. At first I saw this error:
2022-09-17T01:24:30.8867144Z + sudo cat /var/log/mysql/error.log
2022-09-17T01:24:30.8926847Z 2022-09-17 1:24:20 0 [ERROR] InnoDB: Invalid flags 0x4800 in ./ibdata1
2022-09-17T01:24:30.8927599Z 2022-09-17 1:24:20 0 [ERROR] InnoDB: Plugin initialization aborted with error Data structure corruption
2022-09-17T01:24:30.8928456Z 2022-09-17 1:24:21 0 [ERROR] Plugin 'InnoDB' init function returned error.
2022-09-17T01:24:30.8929215Z 2022-09-17 1:24:21 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2022-09-17T01:24:30.8930300Z 2022-09-17 1:24:21 0 [ERROR] Unknown/unsupported storage engine: InnoDB
2022-09-17T01:24:30.8930738Z 2022-09-17 1:24:21 0 [ERROR] Aborting
I think this is because MySQL 8.0 leaves behind some old files, so I added a step to remove /var/lib/mysql, but now the action stalls after the MariaDB installation.
I made a copy in a new public repo to show the issue here: https://github.com/llamafilm/mariadb_test/actions/runs/3071684353/jobs/4962586417
The workflow is like this:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install dependencies
        run: |
          set -x
          sudo apt update
          sudo apt autoremove mysql*
      - name: Look around
        run: |
          set -x
          sudo ls -l /var/lib/mysql
          sudo rm -rf /var/lib/mysql
      - name: Install MariaDB
        run: |
          sudo apt install -y mariadb-server-10.3
      - name: Validate DB
        run: |
          set -x
          sudo cat /var/log/mysql/error.log
          sudo ls -l /var/lib/mysql
          mysql -e "SHOW STATUS"
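(Editor's note, not part of the original question: an alternative that avoids touching the preinstalled MySQL entirely is to run MariaDB as a Docker service container alongside the job. A minimal sketch, with the service name and credentials being illustrative assumptions; a health-check option may also be needed so the step waits for the server:)

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      mariadb:
        image: mariadb:10.3
        env:
          MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
        ports:
          - 3306:3306
    steps:
      - uses: actions/checkout@v3
      - name: Validate DB
        # connect over TCP to the service container, not a host socket
        run: mysql -h 127.0.0.1 -P 3306 -u root -e "SHOW STATUS"
```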

Related

Run Pytest in Gitlab-CI as User

I had the following gitlab-ci.yml in my python-package repository:
image: python:latest

unit-test:
  stage: test
  tags:
    - docker
  script:
    - pip install tox
    - tox

formatting-check:
  stage: test
  tags:
    - docker
  script:
    - pip install black
    - black --check .
using this tox.ini file:
[tox]
envlist = my_env

[testenv]
deps =
    -rrequirements.txt
commands =
    python -m pytest tests -s
This did work as I wanted it to.
However, then I added tests that run my code against a local PostgreSQL database using https://pypi.org/project/pytest-postgresql/. For this, I had to install PostgreSQL (apt -y install postgresql postgresql-contrib libpq5).
When I added this to my gitlab-ci.yml:
image: python:latest

unit-test:
  stage: test
  tags:
    - docker
  script:
    - apt -y install postgresql postgresql-contrib libpq5
    - pip install tox
    - tox

formatting-check:
  stage: test
  tags:
    - docker
  script:
    - pip install black
    - black --check .
I got an error from tox that a Postgres utility (pg_ctl) refuses to run as root. Log here: https://pastebin.com/fMu1JY5L
So, I must execute tox as a regular user, not as root.
My first idea was to create a new user (useradd) and then switch to that user, but su requires inputting a password.
From a quick Google search I found that the easiest way to run as a new user is to build a new Docker image and use Docker-in-Docker.
So, as of now I have this configuration:
gitlab-ci.yml:
image: docker:19.03.12

services:
  - docker:19.03.12-dind

stages:
  - build
  - test

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  - docker info

docker-build:
  stage: build
  script:
    - docker build --pull -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE

formatting-check:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE black --check .

unit-test:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE tox
Dockerfile:
FROM python:latest
RUN apt update
RUN apt -y install postgresql postgresql-contrib libpq5
RUN useradd -m exec_user
USER exec_user
ENV PATH "$PATH:/home/exec_user/.local/bin"
RUN pip install black tox
(I had to add ENV PATH "$PATH:/home/exec_user/.local/bin" because pip would cry about it not being in the Path)
tox.ini:
[tox]
envlist = my_env

[testenv]
deps =
    -rrequirements.txt
commands =
    python -m pytest tests -s
The docker-build job completes; the other two fail.
As for formatting-check:
$ docker run $CONTAINER_TEST_IMAGE black --check .
ERROR: Job failed: execution took longer than 1h0m0s seconds
The black command usually executes extremely fast (<1s).
As for unit-test:
/bin/sh: eval: line 120: tox: not found
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 127
I have also found that replacing docker run $CONTAINER_TEST_IMAGE tox with docker run $CONTAINER_TEST_IMAGE python3 -m tox doesn't work either: there, python3 isn't found (which seems odd, given that the base image is python:latest).
If you have any idea how to solve this issue, let me know :D
My first idea was to create a new user (useradd) and then switch to that user, but su requires inputting a password.
This should work for your use case. Running su as root will not require a password. You could also use sudo -u postgres tox (you must apt install sudo first).
Here is a basic working example using su with the postgres user, which is created automatically when PostgreSQL is installed:
myjob:
  image: python:3.9-slim
  script:
    - apt update && apt install -y --no-install-recommends libpq-dev postgresql-client postgresql build-essential
    - pip install psycopg2 psycopg pytest pytest-postgresql
    - su postgres -c pytest
    # or in your case, you might use: su postgres -c tox
Alternatively, you might consider just using GitLab's services feature to run your postgres server if that's the only obstacle in your way. You can pass --postgresql-host and --postgresql-password to pytest to tell the extension to use the services.
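Sketching that services-based alternative (the Postgres image tag and password here are illustrative assumptions, and this assumes your tox.ini forwards {posargs} to pytest so the extra flags reach it):

```yaml
unit-test:
  stage: test
  image: python:latest
  services:
    - postgres:14
  variables:
    POSTGRES_PASSWORD: ci-password
  script:
    - pip install tox
    # pytest-postgresql connects to the service instead of spawning its own server
    - tox -- --postgresql-host=postgres --postgresql-password=ci-password
```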

Update PostgreSQL version to 13 in TravisCI

I have some issues with the TravisCI platform when testing the code in a pull request. It worked fine so far, but now I get this warning in the build log when the travis_apt_get_update command runs:
The PostgreSQL version 9.4 is obsolete, but the server or client packages
are still installed. Please install the latest packages (postgresql-13 and
postgresql-client-13) and upgrade the existing clusters with
pg_upgradecluster (see manpage).
I have added the postgresql-13 and postgresql-client-13 packages, here is the .travis.yml file:
language: perl
perl:
  - "5.30"
dist: xenial
env:
  - HOST_URL="localhost"
cache:
  directories:
    - $HOME/path/to/local
services:
  - postgresql
addons:
  postgresql: 13
  apt:
    packages:
      - postgresql-13
      - postgresql-client-13
      - libpq-dev
      - build-essential
      - libssl-dev
      - zlib1g-dev
      - clang-tidy
env:
  global:
    - PGPORT=5433
before_install:
  - wget http://mirrors.kernel.org/ubuntu/pool/universe/a/astyle/astyle_3.1-1ubuntu2_amd64.deb
  - sudo dpkg -i astyle_3.1-1ubuntu2_amd64.deb
  - sudo chmod -R 777 /var/log/
Now, in the log it says that I have to upgrade the clusters with pg_upgradecluster, but I really don't know what that means.
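(For reference, not from the original thread: pg_upgradecluster is part of Debian/Ubuntu's postgresql-common tooling and migrates a cluster's data from one major version to another. A rough sketch of what the warning is asking for; "main" is the Debian default cluster name, so verify yours with pg_lsclusters first:)

```shell
# Show existing clusters: version, name, port, status, data directory
pg_lsclusters

# Migrate the old 9.4 cluster's data to the newly installed PostgreSQL 13
sudo pg_upgradecluster 9.4 main

# After verifying the new cluster works, remove the old one
sudo pg_dropcluster 9.4 main --stop
```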
This example .travis.yml shows how to run Postgres 13 in your Ubuntu jobs. You can add further configuration to match your setup and see if that helps.
---
dist: focal
language: ruby

addons:
  postgresql: '13'
  apt:
    packages:
      - postgresql-13

env:
  global:
    - PGUSER=postgres
    - PGPORT=5432
    - PGHOST=localhost

before_install:
  - sudo sed -i -e '/local.*peer/s/postgres/all/' -e 's/peer\|md5/trust/g' /etc/postgresql/*/main/pg_hba.conf
  - sudo service postgresql restart
  - sleep 1
  - postgres --version

script:
  - psql -c 'create database travis_ci_test;' -U postgres
Let me know if you have further questions.
Thanks.

How to repackage a Visual Studio Code extension into a Che-Theia plug-in with its own set of dependencies

I am trying to repackage a Visual Studio Code extension as a Che-Theia plug-in for Eclipse Che.
The plug-in extracts source code metrics from Ansible files.
It does so by executing a command-line tool written in Python, namely ansiblemetrics, that must be installed in the user's environment.
Therefore, I cannot add that dependency to the VS Code extension's package.json; the user would have to install it in the Eclipse Che workspace. Nevertheless, I want Eclipse Che users to be able to use the extension without installing its dependencies themselves. A container looks like the way to go.
I have the following Eclipse Che DevFile
Eclipse Che DevFile
apiVersion: 1.0.0
metadata:
  name: python-bd3zh
attributes:
  persistVolumes: 'false'
projects:
  - name: python-hello-world
    source:
      location: 'https://github.com/che-samples/python-hello-world.git'
      type: git
      branch: master
components:
  - type: chePlugin
    reference: 'https://raw.githubusercontent.com/radon-h2020/radon-plugin-registry/master/radon/radon-defect-predictor/latest/meta.yaml'
    alias: radon-dpt
The Eclipse docs say: "To repackage a VS Code extension as a Che-Theia plug-in with its own set of dependencies, package the dependencies into a container."
Containers can be added in the chePlugin reference's metadata under the spec keyword:
spec:
  containers:
    - image:
      memoryLimit:
      memoryRequest:
      cpuLimit:
      cpuRequest:
Therefore, my plugin's metadata (meta.yaml) is as follows:
meta.yaml
apiVersion: v2
publisher: radon
name: radon-defect-predictor
version: 0.0.5
type: VS Code extension
displayName: RADON-h2020 Defect Predictor
title: A Defect Predictor for Infrastructure-as-Code by RADON
description: A customized extension for analyzing the defectiveness of IaC blueprints
icon: https://www.eclipse.org/che/images/logo-eclipseche.svg
repository: https://github.com/radon-h2020/radon-defect-prediction-plugin
category: Other
spec:
  containers:
    - image: stefadp/radon-dpt-plugin
  extensions:
    - https://raw.githubusercontent.com/radon-h2020/radon-defect-prediction-plugin/master/radon-defect-predictor-0.0.5.vsix
where the image stefadp/radon-dpt-plugin was built upon the following Dockerfile:
Dockerfile
FROM ubuntu:latest

RUN apt-get update \
    && apt-get install -y python3-pip python3-dev \
    && cd /usr/local/bin \
    && ln -s /usr/bin/python3 python \
    && pip3 install --upgrade pip
RUN pip3 install ansiblemetrics
However, when I run the workspace in Eclipse Che, I observe the following error:
pulling image "quay.io/eclipse/che-plugin-metadata-broker:v3.4.0"
Successfully pulled image "quay.io/eclipse/che-plugin-metadata-broker:v3.4.0"
Created container
Started container
Starting plugin metadata broker
List of plugins and editors to install
- radon/radon-defect-predictor/0.0.6 - A customized extension for analyzing the defectiveness of IaC blueprints
- eclipse/che-workspace-telemetry-woopra-plugin/0.0.1 - Telemetry plugin to send information to Woopra
- eclipse/che-machine-exec-plugin/7.24.2 - Che Plug-in with che-machine-exec service to provide creation terminal or tasks for Eclipse Che workspace containers.
- eclipse/che-theia/7.24.2 - Eclipse Theia
All plugin metadata has been successfully processed
pulling image "quay.io/eclipse/che-theia-endpoint-runtime-binary:7.24.2"
Successfully pulled image "quay.io/eclipse/che-theia-endpoint-runtime-binary:7.24.2"
Created container
Started container
pulling image "quay.io/eclipse/che-plugin-artifacts-broker:v3.4.0"
Successfully pulled image "quay.io/eclipse/che-plugin-artifacts-broker:v3.4.0"
Created container
Started container
Starting plugin artifacts broker
Cleaning /plugins dir
Processing plugin radon/radon-defect-predictor/0.0.6
Installing plugin extension 1/1
Downloading plugin from https://raw.githubusercontent.com/radon-h2020/radon-plugin-registry/master/radon/radon-defect-predictor/0.0.6/radon-defect-predictor-0.0.6.vsix
Saving log of installed plugins
All plugin artifacts have been successfully downloaded
pulling image "quay.io/eclipse/che-jwtproxy:0.10.0"
Successfully pulled image "quay.io/eclipse/che-jwtproxy:0.10.0"
Created container
Started container
pulling image "stefadp/radon-dpt-plugin"
Successfully pulled image "stefadp/radon-dpt-plugin"
Created container
Started container
pulling image "quay.io/eclipse/che-workspace-telemetry-woopra-plugin:latest"
Successfully pulled image "quay.io/eclipse/che-workspace-telemetry-woopra-plugin:latest"
Created container
Started container
pulling image "quay.io/eclipse/che-machine-exec:7.24.2"
Successfully pulled image "quay.io/eclipse/che-machine-exec:7.24.2"
Created container
Started container
pulling image "quay.io/eclipse/che-theia:7.24.2"
Error: Failed to run the workspace: "The following containers have terminated:
nt0: reason = 'Completed', exit code = 0, message = 'null'"
Do you have any hint?
You have to customize your Docker image to work in the sidecar container. As an example, take a look at the images already used for sidecars in Che: https://github.com/eclipse/che-plugin-registry/blob/master/CONTRIBUTE.md#sidecars
Try creating the following structure:
radon/
  etc/
    entrypoint.sh
  Dockerfile
The content of entrypoint.sh is:
#!/bin/sh
set -e
set -x

USER_ID=$(id -u)
export USER_ID
GROUP_ID=$(id -g)
export GROUP_ID

# Add a passwd entry if the container runs as an arbitrary user ID
if ! whoami >/dev/null 2>&1; then
  echo "${USER_NAME:-user}:x:${USER_ID}:0:${USER_NAME:-user} user:${HOME}:/bin/sh" >> /etc/passwd
fi

# Grant access to projects volume in case of non root user with sudo rights
if [ "${USER_ID}" -ne 0 ] && command -v sudo >/dev/null 2>&1 && sudo -n true > /dev/null 2>&1; then
  sudo chown "${USER_ID}:${GROUP_ID}" /projects
fi

exec "$@"
and the Dockerfile is:
FROM ubuntu:latest

ENV HOME=/home/theia

RUN mkdir /projects ${HOME} && \
    # Change permissions to let any arbitrary user
    for f in "${HOME}" "/etc/passwd" "/projects"; do \
      echo "Changing permissions on ${f}" && chgrp -R 0 ${f} && \
      chmod -R g+rwX ${f}; \
    done

RUN apt-get update \
    && apt-get install -y python3-pip python3-dev \
    && cd /usr/local/bin \
    && ln -s /usr/bin/python3 python \
    && pip3 install --upgrade pip
RUN pip3 install ansiblemetrics

ADD etc/entrypoint.sh /entrypoint.sh
ENTRYPOINT [ "/entrypoint.sh" ]
CMD ${PLUGIN_REMOTE_ENDPOINT_EXECUTABLE}
Then build this Dockerfile and use the resulting image in your plugin's meta.yaml.

Gitlab-runner failed to remove permission denied

I'm setting up a CI/CD pipeline with GitLab. I've installed gitlab-runner on a DigitalOcean Ubuntu 18.04 droplet and granted the gitlab-runner user permissions in /etc/sudoers:
gitlab-runner ALL=(ALL:ALL) ALL
The first commit to the associated repository builds the docker-compose stack correctly (the app itself is Django + Postgres), but subsequent commits are unable to clean up the previous build and fail:
Running with gitlab-runner 12.8.0 (1b659122)
on ubuntu-s-4vcpu-8gb-fra1-01 52WypZsE
Using Shell executor...
00:00
Running on ubuntu-s-4vcpu-8gb-fra1-01...
00:00
Fetching changes with git depth set to 50...
00:01
Reinitialized existing Git repository in /home/gitlab-runner/builds/52WypZsE/0/lorePieri/djangocicd/.git/
From https://gitlab.com/lorePieri/djangocicd
* [new ref] refs/pipelines/120533457 -> refs/pipelines/120533457
0072002..bd28ba4 develop -> origin/develop
Checking out bd28ba46 as develop...
warning: failed to remove app/staticfiles/admin/img/selector-icons.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/search.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/icon-alert.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/tooltag-arrowright.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/icon-unknown-alt.svg: Permission denied
This is the relevant portion of the .gitlab-ci.yml file:
image: docker:latest
services:
- docker:dind
stages:
- test
- deploy_staging
- deploy_production
step-test:
stage: test
before_script:
- export DYNAMIC_ENV_VAR=DEVELOP
only:
- develop
tags:
- develop
script:
- echo running tests in $DYNAMIC_ENV_VAR
- sudo apt-get install -y python-pip
- sudo pip install docker-compose
- sudo docker image prune -f
- sudo docker-compose -f docker-compose.yml build --no-cache
- sudo docker-compose -f docker-compose.yml up -d
- echo do tests now
- sudo docker-compose exec -T web python3 -m coverage run --source='.' manage.py test
...
What I've tried:
usermod -aG docker gitlab-runner
sudo service docker restart
The best solution for me was adding
pre_clone_script = "sudo chown -R gitlab-runner:gitlab-runner ."
to /etc/gitlab-runner/config.toml.
Even if permissions are left broken by a previous job, this sets the correct ownership before the runner cleans up the workdir and clones the repo.
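In context, that line goes in the runner's [[runners]] section of config.toml; a sketch where the runner name, URL, and token are placeholders:

```toml
concurrent = 1

[[runners]]
  name = "my-shell-runner"   # placeholder
  url = "https://gitlab.com/"
  token = "REDACTED"         # placeholder
  executor = "shell"
  # runs before each clean-up/clone, fixing ownership left by earlier jobs
  pre_clone_script = "sudo chown -R gitlab-runner:gitlab-runner ."
```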
I would recommend setting a GIT_STRATEGY to none in the afflicted job.
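For illustration, GIT_STRATEGY is a per-job (or global) GitLab CI variable; setting it to none makes the runner skip fetching and cleaning the checkout entirely, which sidesteps the failing removals. A sketch (job name taken from the question's pipeline):

```yaml
step-test:
  stage: test
  variables:
    GIT_STRATEGY: none   # skip clone/fetch and the workdir cleanup that was failing
  script:
    - echo running tests
```

Note that the job then has no repository contents, so this only fits jobs that don't need the source tree.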
I have had the exact same problem, so I will explain how it was resolved in detail.
Find your config.toml file and run the gitlab-runner command with root privileges, since "permission denied" is a very common error on UNIX-like operating systems.
After finding the location of config.toml, pass it explicitly:
sudo gitlab-runner run --config <absolute_location_of_config_toml>
P.S. You can find every config.toml file easily using the locate config.toml command. Make sure mlocate is installed first: sudo apt-get install mlocate.
After running into the permission denied error, I tried using sudo gitlab-runner run instead of gitlab-runner, but that has its own problem:
ERROR: Failed to load config stat /etc/gitlab-runner/config.toml: no such
file or directory builds=0
while executing gitlab-runner without root permissions doesn't have any config file problem.
I tried the approaches @Grumbanks and @vlad-Mazurkov mentioned, but they didn't work properly.
It may be because a previous job wrote files into the cloned codebase. What I do is simply create another directory outside of the gitlab-runner directory:

WORKSPACE_DIR="/home/abcd_USER/a/b"
rm -rf "$WORKSPACE_DIR"
mkdir -p "$WORKSPACE_DIR"
cd "$WORKSPACE_DIR"
ls -la
git clone ..................

...and do whatever you need there. I never faced the issue again.

Cannot cache Github action with docker compose

I'm trying to create a cache for the following Github action:
name: dockercompose
on: push

jobs:
  test:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v1
      - name: Cache Docker Compose
        id: cache-docker
        uses: actions/cache@v1
        with:
          path: fhe_app/
          key: cache-docker
      - name: Build the stack
        run: docker-compose up -d
        working-directory: fhe_app/
with the following Dockerfile:
FROM tensorflow/tensorflow:nightly-py3

# set work directory
WORKDIR /usr/src/app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# install dependencies
RUN python3 -m pip install --upgrade pip
COPY local_requirements.txt /usr/src/app/local_requirements.txt
RUN \
    apt-get update && \
    apt-get -y install python3 postgresql-server-dev-10 gcc python3-dev musl-dev netcat
RUN python3 -m pip install -r local_requirements.txt

# copy entrypoint.sh
COPY entrypoint.sh /usr/src/app/entrypoint.sh
RUN chmod +x entrypoint.sh

# copy project
COPY . /usr/src/app/

# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
When pushing to Github, instead of a success message, I get:
Cache Docker Compose
Cache not found for input keys: cache-docker.
And:
Post Cache Docker Compose
Post job cleanup.
/bin/tar -cz -f /home/runner/work/_temp/e13e2694-e020-476d-888e-cb29cb9184b6/cache.tgz -C /home/runner/work/fhe_server/fhe_server/fhe_app .
/bin/tar: ./app: file changed as we read it
##[warning]The process '/bin/tar' failed with exit code 1
I have other yml files not using Docker that cache properly, so the overall structure of the yml should be fine. Is this the right way to cache docker-compose?
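(Editor's note, not from the original thread: actions/cache only archives files on disk, and the image layers built by docker-compose live in Docker's own storage, not in fhe_app/, so this cache key will never restore a layer cache. One commonly used workaround is to save and load the built image explicitly; the cache path and image name below are illustrative assumptions:)

```yaml
- name: Cache Docker image
  uses: actions/cache@v1
  with:
    path: /tmp/docker-cache
    key: docker-cache-${{ hashFiles('fhe_app/Dockerfile') }}
- name: Load cached image if present
  run: docker load -i /tmp/docker-cache/image.tar || true
- name: Build the stack
  run: docker-compose up -d --build
  working-directory: fhe_app/
- name: Save image for next run
  run: |
    mkdir -p /tmp/docker-cache
    docker save -o /tmp/docker-cache/image.tar fhe_app_web   # illustrative image name
```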