Continuous integration using Bitbucket Pipelines fails for Open edX (docker-compose)

I am trying to set up a CI process for my Open edX site using Bitbucket Pipelines; the script from my bitbucket-pipelines.yml file is given below. For now I only want the build to cover the LMS (themes), so that whenever someone changes the site's front end, the build runs paver and recompiles the assets. The problem is that it fails on paver update_assets.
I have tried copying the devstack code into my Bitbucket repo instead of cloning it from Git, but devstack has since been updated to Ironwood while my site is on the Hawthorn version. To keep devstack Hawthorn-compatible I am using the open-release/hawthorn.master branch. I have also increased the pipeline memory as much as I possibly could.
I also saw that cloning was not working well, so I set up the origin remote inside the Docker environment; it then fetches all the required files but still fails with a "Subprocess return code: 1" error. The script in my bitbucket-pipelines.yml is:
image: python:3.5.6

definitions:
  services:
    docker:
      memory: 7168

options:
  size: 2x  # all steps in this repo get 8GB memory

pipelines:
  default:
    - step:
        services:
          - docker
        script:
          # Upgrade Docker Compose to the latest version
          - python --version
          - export DOCKER_COMPOSE_VERSION=1.13.0
          - curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
          - chmod +x docker-compose
          - mv docker-compose /usr/local/bin
          - export OPENEDX_RELEASE=hawthorn.master
          - git clone https://github.com/sanjaysample/devstack.git
          - cd devstack
          - git checkout open-release/hawthorn.master
          - make dev.checkout
          - make requirements
          - make dev.clone
          - ls
          - make pull
          - make dev.up
          - sleep 60  # LMS needs about 60 seconds to come up
          - docker cp ../metronic edx.devstack.lms:/edx/app/edxapp/edx-platform/themes
          - docker cp ../pavelib edx.devstack.lms:/edx/app/edxapp/edx-platform
          - wget https://raw.githubusercontent.com/sumbul03/edx-theme/master/lms.env.json
          - docker cp lms.env.json edx.devstack.lms:/edx/app/edxapp/lms.env.json
          - rm lms.env.json
          - docker cp edx.devstack.lms:/edx/app/edxapp/lms.env.json .
          - cat lms.env.json
          - docker ps
          - docker-compose restart lms
          - docker-compose exec -T lms bash -c 'source /edx/app/edxapp/edxapp_env && cd /edx/app/edxapp/edx-platform && git init && git remote add origin https://github.com/edx/edx-platform.git && git fetch origin open-release/hawthorn.master && git checkout -f open-release/hawthorn.master && paver install_prereqs && paver update_assets lms --settings=devstack_docker --debug'
The build fails with this error:
python manage.py lms --settings=devstack_docker print_setting STATIC_ROOT 2>/dev/null
Build failed running pavelib.assets.update_assets: Subprocess return code: 1
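(Note that the failing print_setting command redirects stderr to /dev/null, so the underlying traceback never reaches the build log. Re-running it inside the LMS container without the redirect should reveal the real error; a sketch, assuming the same devstack paths as in the script above:)

docker-compose exec -T lms bash -c 'source /edx/app/edxapp/edxapp_env && cd /edx/app/edxapp/edx-platform && python manage.py lms --settings=devstack_docker print_setting STATIC_ROOT'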
Does anyone know the solution to this problem? Please suggest.

Related

GitLab - deploy only changed files using FTP

We're running an older project and we need to deploy pushes to our main branch using LFTP from GitLab.
The problem we're having is that each push uploads all of the files instead of only the changes. Currently our pipeline looks like this:
image: ubuntu:18.04

before_script:
  - apt-get update -qy
  - apt-get install -y lftp

build:
  script:
    # Sync to FTP
    - lftp -e "set ftp:ssl-allow no; open $FTP_IP; user $FTP_USERNAME $FTP_PASSWORD; mirror -X .* -X .*/ --reverse --verbose -n localDir/ remoteDir/; bye"
I’ve googled what to do, but didn’t find a clear answer. Can anyone help me with this situation?
Thanks
The problem is that the mirror command is doing a full sync instead of transferring just the changes. To solve this, you can use the --only-newer option with the mirror command.
The --only-newer option alone didn't work, because the timestamps were not correct: a fresh CI clone sets every file's mtime to the clone time. There is a Git tool that restores the committed timestamps, so I just chucked that in and it now works as it should.
Looks like this now:
build:
  image: ubuntu:latest
  stage: build
  script:
    - apt update
    - apt install git-restore-mtime -y
    # - ls -la
    # This command restores the modified timestamps from commits
    - /usr/lib/git-core/git-restore-mtime
    # - ls -la
    - apt-get update -qy
    - apt-get install -y lftp
    # Sync to FTP
    - lftp -e "set ftp:ssl-allow no; open $FTP_IP; user $FTP_USERNAME $FTP_PASSWORD; mirror -X .* -X .*/ --reverse --verbose --only-newer -n localDir/ remoteDir/; bye"

Gitlab CI not able to use pg_prove

I'm struggling to get a GitLab CI pipeline up and running that uses the correct version of Postgres (13) and has pgTAP installed.
Locally I deploy my project using a Dockerfile based on postgres:13.3-alpine that installs pgTAP as well, but I'm not sure whether that Dockerfile can help with my CI issues.
In my gitlab-ci.yml file, I currently have:
variables:
  GIT_SUBMODULE_STRATEGY: recursive

pgtap:
  only:
    refs:
      - merge_request
      - master
    changes:
      - ddl/**/*
  image: postgres:13.1-alpine
  services:
    - name: postgres:13.1-alpine
      alias: db
  variables:
    POSTGRES_DB: postgres
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: ""
    POSTGRES_HOST_AUTH_METHOD: trust
  script:
    - psql postgres://postgres@db/postgres -c 'create extension pgtap;'
    - psql postgres://postgres@db/postgres -f ddl/01.sql
    - cd ddl/
    - psql postgres://postgres@db/postgres -f 02.sql
    - psql postgres://postgres@db/postgres -f 03.sql
    - pg_prove -d postgres://postgres@db/postgres --recurse test_*
The above works until it gets to the pg_prove command at the bottom, where I get the error below:
pg_prove: command not found
Is there a way I can install pg_prove using the script commands? Or is there a better way to do this?
There is an old, closed issue about this. To summarize, either you build your own image based on postgres:13.1-alpine that installs pgTAP, or you use an unofficial image where pgTAP is already installed, such as 1maa/postgres:13-alpine:
docker run -it 1maa/postgres:13-alpine sh
/ # which pg_prove
/usr/local/bin/pg_prove
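If you prefer the first option, a minimal (untested) Dockerfile sketch, reusing the same apk/cpan route as the script approach shown below, might look like:

FROM postgres:13.1-alpine
# Build tools and Perl are needed so cpan can install pg_prove
RUN apk add --no-cache --update build-base make perl perl-dev git openssl-dev \
 && cpan TAP::Parser::SourceHandler::pgTAP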
Since your step image is Alpine-based, you can try:
script:
  - apk add --no-cache --update build-base make perl perl-dev git openssl-dev
  - cpan TAP::Parser::SourceHandler::pgTAP
  - psql ... # etc.
You can probably omit some of the packages...

Gitlab-runner: "failed to remove ...: Permission denied"

I'm setting up a CI/CD pipeline with GitLab. I've installed gitlab-runner on a DigitalOcean Ubuntu 18.04 droplet and granted the gitlab-runner user permissions in /etc/sudoers:
gitlab-runner ALL=(ALL:ALL) ALL
The first commit to the associated repository builds the docker-compose stack correctly (the app itself is Django + Postgres), but subsequent commits are not able to clean up the previous build and fail:
Running with gitlab-runner 12.8.0 (1b659122)
on ubuntu-s-4vcpu-8gb-fra1-01 52WypZsE
Using Shell executor...
00:00
Running on ubuntu-s-4vcpu-8gb-fra1-01...
00:00
Fetching changes with git depth set to 50...
00:01
Reinitialized existing Git repository in /home/gitlab-runner/builds/52WypZsE/0/lorePieri/djangocicd/.git/
From https://gitlab.com/lorePieri/djangocicd
* [new ref] refs/pipelines/120533457 -> refs/pipelines/120533457
0072002..bd28ba4 develop -> origin/develop
Checking out bd28ba46 as develop...
warning: failed to remove app/staticfiles/admin/img/selector-icons.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/search.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/icon-alert.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/tooltag-arrowright.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/icon-unknown-alt.svg: Permission denied
This is the relevant portion of the .gitlab-ci.yml file:
image: docker:latest

services:
  - docker:dind

stages:
  - test
  - deploy_staging
  - deploy_production

step-test:
  stage: test
  before_script:
    - export DYNAMIC_ENV_VAR=DEVELOP
  only:
    - develop
  tags:
    - develop
  script:
    - echo running tests in $DYNAMIC_ENV_VAR
    - sudo apt-get install -y python-pip
    - sudo pip install docker-compose
    - sudo docker image prune -f
    - sudo docker-compose -f docker-compose.yml build --no-cache
    - sudo docker-compose -f docker-compose.yml up -d
    - echo do tests now
    - sudo docker-compose exec -T web python3 -m coverage run --source='.' manage.py test
...
What I've tried:
usermod -aG docker gitlab-runner
sudo service docker restart
The best solution for me was adding
pre_clone_script = "sudo chown -R gitlab-runner:gitlab-runner ."
to /etc/gitlab-runner/config.toml.
Even if a previous job leaves files with the wrong permissions behind, this sets correct permissions before the runner cleans up the workdir and clones the repo.
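For reference, a sketch of where that line lives in /etc/gitlab-runner/config.toml (name, url, and token below are placeholders):

[[runners]]
  name = "my-shell-runner"   # placeholder
  url = "https://gitlab.com/"
  token = "REDACTED"         # placeholder
  executor = "shell"
  pre_clone_script = "sudo chown -R gitlab-runner:gitlab-runner ."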
I would recommend setting GIT_STRATEGY to none in the afflicted job.
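In .gitlab-ci.yml that looks like this (syntax only; note that with none the runner reuses the existing working copy and skips fetching the repository entirely for that job):

step-test:
  variables:
    GIT_STRATEGY: none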
I have had the exact same problem, so I will explain how it was resolved in detail.
Try finding your config.toml file and running the gitlab-runner command with root privileges, since "permission denied" is a very common error on UNIX-based operating systems.
After finding the location of config.toml, pass it explicitly:
sudo gitlab-runner run --config <absolute_location_of_config_toml>
P.S. You can find every config.toml file easily using the locate config.toml command. Make sure mlocate is installed first by executing sudo apt-get install mlocate.
After facing the permission denied error, I tried using sudo gitlab-runner run instead of gitlab-runner, but it has its own problem:
ERROR: Failed to load config stat /etc/gitlab-runner/config.toml: no such
file or directory builds=0
whereas executing gitlab-runner without root permissions has no config file problem.
I tried the ways and solutions that @Grumbanks and @vlad-Mazurkov mentioned, but they didn't work properly.
It may be because you write files into the cloned-out codebase. What I do is simply create another directory outside of the gitlab-runner directory:
WORKSPACE_DIR="/home/abcd_USER/a/b"
rm -rf $WORKSPACE_DIR
mkdir -p $WORKSPACE_DIR
cd $WORKSPACE_DIR
ls -la
git clone ..................
...and do whatever I need there.
I never faced the issue again.

How to deploy with .gitlab-ci.yml on a runner that uses Docker?

I installed Docker and GitLab plus a runner using this tutorial: https://frenchco.de/article/Add-un-Runner-Gitlab-CE-Docker
The problem is that when I modify the .gitlab-ci.yml to deploy to my host machine, I cannot get it to work.
My .yml:
stages:
  - deploy

deploy_develop:
  stage: deploy
  before_script:
    - apk update && apk add bash && apk add openssh && apk add rsync
    - apk add --no-cache bash
  script:
    - mkdir -p ~/.ssh
    - ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    - cat ~/.ssh/id_rsa.pub
    - rsync -hrvz ~/ root@172.16.1.97:~/web_dev/www/test/
  environment:
    name: develop
The problem is that with both ssh and rsync I always get the same error message in my job:
$ rsync -hrvz ~/ root@172.16.1.97:~/web_dev/www/test/
Host key verification failed.
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(226) [sender=3.1.3]
I tried copying the ssh id_rsa and id_rsa.pub to the host; the result is the same. Could the problem be that my runner is inside Docker? It is strange because I can ping my host (172.16.1.97) during the execution of the .yml. Any idea about my problem?
Looks like you did not add the public key into the authorized_keys of the deploy user on the host server?
For example, I use gitlab-ci to deploy my webapp; for that I added a user gitlab on my host machine and added the public key to its authorized_keys, and then I can connect with ssh gitlab@IP -i PRIVATE_KEY to that server.
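The host-side setup is roughly this (a sketch; the user name and key file names are examples, not from the question):

# On the host: create the deploy user and authorize the CI public key
sudo adduser gitlab
sudo mkdir -p /home/gitlab/.ssh
cat id_rsa.pub | sudo tee -a /home/gitlab/.ssh/authorized_keys
sudo chmod 700 /home/gitlab/.ssh
sudo chmod 600 /home/gitlab/.ssh/authorized_keys
sudo chown -R gitlab:gitlab /home/gitlab/.ssh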
My gitlab-ci.yml looks like this:
deploy-app:
  stage: deploy
  image: ubuntu
  before_script:
    - apt-get update -qq
    - 'which ssh-agent || ( apt-get install -qq openssh-client )'
    - eval $(ssh-agent -s)
    - ssh-add <(cat "$DEPLOY_SERVER_PRIVATE_KEY")
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - chmod 755 ./deploy.sh
  script:
    - ./deploy.sh
where I added the private key's content as a variable to my gitlab-instance. (see https://docs.gitlab.com/ee/ci/variables/)
The deploy.sh looks like this:
#!/bin/bash
set -eo pipefail

scp app/docker-compose.yml gitlab@"${DEPLOY_SERVER_IP}":~/apps/${NGINX_SERVER_NAME}/
ssh gitlab@$DEPLOY_SERVER_IP "apps/${NGINX_SERVER_NAME}/app.sh update" # this just does docker-compose pull && docker-compose up in the app's directory
Maybe this helps? It's working fine for me, and scp/ssh give more intuitive error messages than what you got from rsync in this particular case.

CircleCI 2.0 testing with docker-compose and code checkout

This is my circle.yml:
version: 2
jobs:
  build:
    working_directory: /app
    docker:
      - image: docker:stable-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache py-pip bash
            pip install docker-compose
      - run:
          name: Start service containers and run tests
          command: |
            docker-compose -f docker-compose.test.yml up -d db es redis
            docker-compose run web bash -c "cd myDir && ./manage.py test"
This works fine in that it brings up my service containers (db, es, redis) and builds a new image for my web container. However, my working code is not inside the freshly built image (so "cd myDir" always fails).
I figured the following lines in my Dockerfile would make my code available when the image is built, but apparently it doesn't work like that:
ENV APPLICATION_ROOT /app/
RUN mkdir -p $APPLICATION_ROOT
WORKDIR $APPLICATION_ROOT
ADD . $APPLICATION_ROOT
What am I doing wrong and how can I make my code available inside my test container?
Thanks,
Use COPY. Your Dockerfile should look something like this (the base image and entrypoint below are placeholders for your own):
# Replace python:3.6 with your actual base image
FROM python:3.6
COPY . /opt/app
WORKDIR /opt/app
# ...install dependencies and any other build steps here...
# Replace with your actual entrypoint
ENTRYPOINT ["./manage.py", "runserver"]
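One caveat worth adding: with CircleCI's setup_remote_docker, containers run on a separate remote Docker host, so volume mounts from the job's checkout directory are not available; the code really does have to be baked into the image with COPY/ADD at build time. A sketch of a matching docker-compose.test.yml (service names taken from the question; the images are placeholders to pin yourself):

version: "3"
services:
  web:
    build: .   # the image carries the code via COPY; no volume mount
  db:
    image: postgres   # placeholder; pin your own versions
  es:
    image: elasticsearch
  redis:
    image: redis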