We’re running an older-generation project and we need to deploy pushes to our main branch to an FTP server using LFTP from GitLab CI.
The problem we’re having is that each push uploads all of the files instead of only the changes. Currently our pipeline looks like this:
image: ubuntu:18.04

before_script:
  - apt-get update -qy
  - apt-get install -y lftp

build:
  script:
    # Sync to FTP
    - lftp -e "set ftp:ssl-allow no;open $FTP_IP; user $FTP_USERNAME $FTP_PASSWORD; mirror -X .* -X .*/ --reverse --verbose -n localDir/ remoteDir/; bye"
I’ve googled what to do, but didn’t find a clear answer. Can anyone help me with this situation?
Thanks
The problem is that the mirror command does a full sync instead of transferring only the changes. To solve this, you can use the --only-newer option with the mirror command.
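For example, the mirror call from the pipeline above becomes something like this (a sketch reusing the question's own CI variables; note that the -n flag already present in the original command is lftp's short form of --only-newer):

lftp -e "set ftp:ssl-allow no; open $FTP_IP; user $FTP_USERNAME $FTP_PASSWORD; mirror -X .* -X .*/ --reverse --verbose --only-newer localDir/ remoteDir/; bye"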
So the --only-newer option didn't work at first because the timestamps weren't correct: CI runners do a fresh checkout, so every file's modification time is the clone time and everything looks newer than the remote copy. There's a tool, git-restore-mtime, that restores modification times from the commit dates, so I just chucked that in and it now works as it should.
Looks like this now:
build:
  image: ubuntu:latest
  stage: build
  script:
    - apt update
    - apt install git-restore-mtime -y
    # - ls -la
    # This command restores the modified timestamps from the commits
    - /usr/lib/git-core/git-restore-mtime
    # - ls -la
    - apt-get update -qy
    - apt-get install -y lftp
    # Sync to FTP
    - lftp -e "set ftp:ssl-allow no;open $FTP_IP; user $FTP_USERNAME $FTP_PASSWORD; mirror -X .* -X .*/ --reverse --verbose --only-newer -n localDir/ remoteDir/; bye"
Could somebody help me get my JMeter script running from our GitHub? FYI, my JMeter setup uses several plugins. Your response is highly appreciated. Thank you so much.
This is how I install my JMeter machine on a Linux box/playground:
# Install Java and download JMeter
sudo apt-get update
sudo apt install curl -y
sudo apt install -y default-jdk
sudo curl -O https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.3.tgz
sudo tar -xvf apache-jmeter-5.3.tgz
# Fetch the Plugins Manager and its command-line runner
cd apache-jmeter-5.3/lib
sudo curl -O https://repo1.maven.org/maven2/kg/apc/cmdrunner/2.2.1/cmdrunner-2.2.1.jar
cd ext/
sudo curl -O https://repo1.maven.org/maven2/kg/apc/jmeter-plugins-manager/1.6/jmeter-plugins-manager-1.6.jar
cd ..
# Install every plugin except the listed exclusions
sudo java -jar cmdrunner-2.2.1.jar --tool org.jmeterplugins.repository.PluginManagerCMD install-all-except jpgc-hadoop,jpgc-oauth,ulp-jmeter-autocorrelator-plugin,ulp-jmeter-videostreaming-plugin,ulp-jmeter-gwt-plugin,tilln-iso8583
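To confirm the plugins actually installed, the same cmdrunner can print the plugin status (a sketch reusing the paths above; status is a documented PluginManagerCMD command):

cd apache-jmeter-5.3/lib
sudo java -jar cmdrunner-2.2.1.jar --tool org.jmeterplugins.repository.PluginManagerCMD status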
Output: Jmeter script able to run on Github.
What do you mean by "Jmeter script able to run on Github"? GitHub is one (of many) implementations of a Git repository; it only stores files and their version history, so you cannot "run" anything there.
If you're talking about GitHub Actions, then just use the run keyword and put your commands there.
An example workflow definition would be something like:
name: CI
on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: setup-jmeter
        run: |
          sudo apt-get update
          sudo apt install curl -y
          sudo apt install -y default-jdk
          sudo curl -O https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.3.tgz
          sudo tar -xvf apache-jmeter-5.3.tgz
          cd $GITHUB_WORKSPACE/apache-jmeter-5.3/lib && sudo curl -O https://repo1.maven.org/maven2/kg/apc/cmdrunner/2.2.1/cmdrunner-2.2.1.jar
          cd $GITHUB_WORKSPACE/apache-jmeter-5.3/lib/ext && sudo curl -O https://repo1.maven.org/maven2/kg/apc/jmeter-plugins-manager/1.6/jmeter-plugins-manager-1.6.jar
          cd $GITHUB_WORKSPACE/apache-jmeter-5.3/lib && sudo java -jar cmdrunner-2.2.1.jar --tool org.jmeterplugins.repository.PluginManagerCMD install-all-except jpgc-hadoop,jpgc-oauth,ulp-jmeter-autocorrelator-plugin,ulp-jmeter-videostreaming-plugin,ulp-jmeter-gwt-plugin,tilln-iso8583
      - name: run-jmeter-test
        run: |
          $GITHUB_WORKSPACE/apache-jmeter-5.3/bin/jmeter.sh -n -t test.jmx -l result.jtl
Also be aware that, according to JMeter Best Practices, you should be using the latest version of JMeter, so consider upgrading to JMeter 5.5 or whatever the latest stable version is on the JMeter Downloads page.
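If you do upgrade, one way to avoid touching every step is to hold the version in a workflow-level variable; a sketch (JMETER_VERSION is a name invented here, not anything built in):

env:
  JMETER_VERSION: "5.5"

# then reference it in the steps instead of hard-coding 5.3, e.g.:
#   sudo curl -O https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz
#   $GITHUB_WORKSPACE/apache-jmeter-${JMETER_VERSION}/bin/jmeter.sh -n -t test.jmx -l result.jtl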
I had the following gitlab-ci.yml in my python-package repository:
image: python:latest

unit-test:
  stage: test
  tags:
    - docker
  script:
    - pip install tox
    - tox

formatting-check:
  stage: test
  tags:
    - docker
  script:
    - pip install black
    - black --check .
using this tox.ini file:
[tox]
envlist = my_env

[testenv]
deps =
    -rrequirements.txt
commands =
    python -m pytest tests -s
This did work as I wanted it to.
However, I then added tests that run against a local PostgreSQL database using https://pypi.org/project/pytest-postgresql/. For this, I had to install PostgreSQL (apt -y install postgresql postgresql-contrib libpq5).
When I added this to my gitlab-ci.yml:
image: python:latest

unit-test:
  stage: test
  tags:
    - docker
  script:
    - apt -y install postgresql postgresql-contrib libpq5
    - pip install tox
    - tox

formatting-check:
  stage: test
  tags:
    - docker
  script:
    - pip install black
    - black --check .
I got an error from tox that a PostgreSQL binary (pg_ctl) refuses to be run as root. Log here: https://pastebin.com/fMu1JY5L
So I must execute tox as a regular user, not as root.
My first idea was to create a new user (useradd) and then switch to that user, but su requires inputting a password.
From a quick Google search, I found that the easiest way to run as a newly created user is to build a custom Docker image, using Docker-in-Docker.
So, as of now I have this configuration:
gitlab-ci.yml:
image: docker:19.03.12

services:
  - docker:19.03.12-dind

stages:
  - build
  - test

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_TLS_CERTDIR: ""
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG

before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  - docker info

docker-build:
  stage: build
  script:
    - docker build --pull -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE

formatting-check:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE black --check .

unit-test:
  stage: test
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE tox
Dockerfile:
FROM python:latest
RUN apt update
RUN apt -y install postgresql postgresql-contrib libpq5
# Create an unprivileged user so pg_ctl and the tests don't run as root
RUN useradd -m exec_user
USER exec_user
# pip installs user-level scripts into ~/.local/bin once we're not root
ENV PATH "$PATH:/home/exec_user/.local/bin"
RUN pip install black tox
(I had to add ENV PATH "$PATH:/home/exec_user/.local/bin" because pip complained about its user-level scripts directory not being on the PATH.)
tox.ini:
[tox]
envlist = my_env

[testenv]
deps =
    -rrequirements.txt
commands =
    python -m pytest tests -s
The job docker-build completes; the other two fail.
As for formatting-check:
$ docker run $CONTAINER_TEST_IMAGE black --check .
ERROR: Job failed: execution took longer than 1h0m0s seconds
The black command usually executes extremely fast (<1s).
As for unit-test:
/bin/sh: eval: line 120: tox: not found
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 127
I have also found that replacing docker run $CONTAINER_TEST_IMAGE tox with docker run $CONTAINER_TEST_IMAGE python3 -m tox doesn't work either: there, python3 isn't found (which seems odd, given that the base image is python:latest).
If you have any idea how to solve this issue, let me know :D
My first idea was to create a new user (useradd) and then switch to that user, but su requires inputting a password.
This should work for your use case. Running su as root does not require a password. You could also use sudo -u postgres tox (you must apt install sudo first).
Here is a basic working example using su with the postgres user, which is created automatically when postgres is installed:
myjob:
  image: python:3.9-slim
  script:
    - apt update && apt install -y --no-install-recommends libpq-dev postgresql-client postgresql build-essential
    - pip install psycopg2 psycopg pytest pytest-postgresql
    - su postgres -c pytest
    # or in your case, you might use: su postgres -c tox
Alternatively, you might consider just using GitLab's services feature to run your postgres server, if that's the only obstacle in your way. You can pass --postgresql-host and --postgresql-password to pytest to tell the extension to use the service.
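A minimal sketch of that services approach (my own illustration, not from the thread: the postgres image tag and password are placeholders, and it assumes your tox.ini passes {posargs} through to pytest):

unit-test:
  image: python:3.9-slim
  services:
    - postgres:14.1            # reachable from the job under the hostname "postgres"
  variables:
    POSTGRES_PASSWORD: mypass  # read by the official postgres image on startup
  script:
    - pip install tox
    # forward the pytest-postgresql flags through tox's positional args
    - tox -- --postgresql-host=postgres --postgresql-password=mypass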
I am trying to set up a CI process using Bitbucket Pipelines for my Open edX site. The script I am using in my bitbucket-pipelines.yml file is given below. I am trying to set up just the build process for the LMS (themes), so that whenever someone makes a change to the front end of the site, the build updates paver assets and recompiles them. The problem is that it is failing on paver update_assets.
I have tried copying the devstack code into my Bitbucket repo instead of cloning it from Git; the problem is that devstack has been updated to Ironwood, but my site is using the Hawthorn version. I am trying to make the devstack repo Hawthorn-compatible, which is why I am using the "hawthorn.master" branch. I have also increased the memory as much as I possibly could.
Also, I saw that cloning was not working well, so I set up the origin inside the Docker environment, and then it fetched all the required files, but then it gives the "Subprocess return code: 1" error. The script in my bitbucket-pipelines.yml is:
image: python:3.5.6

definitions:
  services:
    docker:
      memory: 7168

options:
  size: 2x # all steps in this repo get 8GB memory

pipelines:
  default:
    - step:
        services:
          - docker
        script:
          # Upgrade Docker Compose to the latest version
          - python --version
          - export DOCKER_COMPOSE_VERSION=1.13.0
          - curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
          - chmod +x docker-compose
          - mv docker-compose /usr/local/bin
          - export OPENEDX_RELEASE=hawthorn.master
          - git clone https://github.com/sanjaysample/devstack.git
          - cd devstack
          - git checkout open-release/hawthorn.master
          - make dev.checkout
          - make requirements
          - make dev.clone
          - ls
          - make pull
          - make dev.up
          - sleep 60 # LMS needs about 60 seconds to come up
          - docker cp ../metronic edx.devstack.lms:/edx/app/edxapp/edx-platform/themes
          - docker cp ../pavelib edx.devstack.lms:/edx/app/edxapp/edx-platform
          - wget https://raw.githubusercontent.com/sumbul03/edx-theme/master/lms.env.json
          - docker cp lms.env.json edx.devstack.lms:/edx/app/edxapp/lms.env.json
          - rm lms.env.json
          - docker cp edx.devstack.lms:/edx/app/edxapp/lms.env.json .
          - cat lms.env.json
          - docker ps
          - docker-compose restart lms
          - docker-compose exec -T lms bash -c 'source /edx/app/edxapp/edxapp_env && cd /edx/app/edxapp/edx-platform && git init && git remote add origin https://github.com/edx/edx-platform.git && git fetch origin open-release/hawthorn.master && git checkout -f open-release/hawthorn.master && paver install_prereqs && paver update_assets lms --settings=devstack_docker --debug'
The build fails at this error:
python manage.py lms --settings=devstack_docker print_setting STATIC_ROOT 2>/dev/null
Build failed running pavelib.assets.update_assets: Subprocess return code: 1
Does anyone know the solution to this problem? Please suggest.
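One hypothetical debugging step, not from the thread: the failing command shown above discards stderr with 2>/dev/null, so running it inside the LMS container without the redirect should surface the real traceback behind the "Subprocess return code: 1":

docker-compose exec -T lms bash -c 'source /edx/app/edxapp/edxapp_env && cd /edx/app/edxapp/edx-platform && python manage.py lms --settings=devstack_docker print_setting STATIC_ROOT'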
I want to deploy my test app from a local repo to a GitLab repo and, with GitLab CI, push it to my remote server. The SSH connection is working and GitLab CI shows that the job passed, but the code on the remote server is not updated.
I made a bare repo in: /home/repos/testDeploy.git
And the folder for the files is in: /home/example.com/web/testDeploy
I added the SSH key (id_rsa_gitlab) for the connection. My .gitlab-ci.yml file:
stages:
  - deploy

deployment:
  stage: deploy
  environment:
    name: production
    url: http://www.example.com/testDeploy
  only:
    - master
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - chmod 600 ~/.ssh/id_rsa_gitlab && chmod 700 ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - git remote add production ssh://user@server:port/home/repos/testDeploy.git
    - git push -f production master
    - echo "Deployed to production!"
Also, I have a post-receive hook:
#!/bin/sh
git --git-dir=/home/repos/testDeploy.git --work-tree=/home/example.com/web/testDeploy checkout -f
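One thing worth double-checking (an assumption on my part, not something shown in the question): the hook only fires if it is executable on the server:

chmod +x /home/repos/testDeploy.git/hooks/post-receive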
I make changes in my local repo, commit, and push to origin master on GitLab. The job passes but, as I mentioned above, the files on the remote server are not updated.
Output from the GitLab job is:
Fetching changes...
HEAD is now at 595db67 as
Checking out 595db67b as master...
Skipping Git submodules setup
$ which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )
/usr/bin/ssh-agent
$ eval $(ssh-agent -s)
Agent pid 40589
$ chmod 600 ~/.ssh/id_rsa_gitlab && chmod 700 ~/.ssh
$ [[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
$ git branch
* (HEAD detached at 595db67)
master
production
$ git push -f production master
Everything up-to-date
$ echo "Deployed to production!"
Deployed to production!
Job succeeded
What am I doing wrong? Can someone please help me figure it out? Thank you for all your answers.
I have a project on GitHub that contains a .travis.yml file with a before_install hook for things that require sudo. To move the project to container-based infrastructure, I have to remove the project's dependency on sudo. The question is: how?
On this page of the Travis CI documentation, the before_install section provides scripts to run in this hook:
http://docs.travis-ci.com/user/installing-dependencies/
However, those scripts depend on sudo, which I'm trying to get rid of. What are possible workarounds? I still need the scripts to run, but they won't without sudo.
Thanks.
Edit:
I had to replace most of the data with Xs, but you can still get the idea of what's happening in the code:
- "sudo apt-key adv --keyserver hkp://xxxxxxxx.ubuntu.com:XX --recv XXXXXX"
- "echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist XXgen' | sudo tee /etc/apt/sources.list.x/xxx.list"
- "sudo apt-get update"
- "sudo apt-get install mongodb-org-server"
- curl -O https://download.xxxxxx.org/xxxxxx/xxxxxx/xxxxxx-X.X.X.deb && sudo dpkg -i --force-confnew xxxxxx-X.X.X.deb
- sudo service xxxxxx start && sleep 10
As you can see, there are multiple sudo calls that need to be cleared up.
Edit:
I need to install ElasticSearch 1.7 and MongoDB 2.6
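For what it's worth, the usual sudo-less pattern on Travis's container-based infrastructure is to lean on the services and apt addon keys instead of before_install scripting. A minimal sketch, with the caveat that the preinstalled service versions may not match MongoDB 2.6 / ElasticSearch 1.7 exactly:

sudo: false        # opt in to the container-based (sudo-less) infrastructure
services:
  - mongodb        # Travis starts the service for you, no sudo needed
  - elasticsearch  # likewise; the shipped version may differ from 1.7
addons:
  apt:
    packages:
      - some-whitelisted-package  # hypothetical name; only packages on Travis's whitelist install without sudo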