I installed Docker and GitLab plus a runner using this tutorial: https://frenchco.de/article/Add-un-Runner-Gitlab-CE-Docker
The problem is that when I modify the .gitlab-ci.yml to deploy to my host machine, it fails.
My .gitlab-ci.yml:
stages:
  - deploy

deploy_develop:
  stage: deploy
  before_script:
    - apk update && apk add bash && apk add openssh && apk add rsync
    - apk add --no-cache bash
  script:
    - mkdir -p ~/.ssh
    - ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    - cat ~/.ssh/id_rsa.pub
    - rsync -hrvz ~/ root@172.16.1.97:~/web_dev/www/test/
  environment:
    name: develop
Whether I use ssh or rsync, I always get the same error message in my job:
$ rsync -hrvz ~/ root@172.16.1.97:~/web_dev/www/test/
Host key verification failed.
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(226) [sender=3.1.3]
I tried copying the SSH id_rsa and id_rsa.pub to the host; the result is the same.
Could the problem be that my runner runs inside a Docker container? It is strange, because I can ping my host (172.16.1.97) from within the job. Any idea what my problem is?
Looks like you did not add the public key to authorized_keys on the host server for the deploy user.
For example, I use gitlab-ci to deploy my webapp, so I added the user gitlab on my host machine and added the public key to its authorized_keys; then I can connect to that server with ssh gitlab@IP -i PRIVATE_KEY.
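For completeness, a minimal sketch of that host-side setup (the user name gitlab and the key path ~/.ssh/gitlab_deploy are assumptions, not from the question):

# on your machine: generate a deploy key pair without a passphrase
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/gitlab_deploy

# on the host server: create the deploy user
sudo adduser gitlab

# back on your machine: install the public key into the user's authorized_keys
ssh-copy-id -i ~/.ssh/gitlab_deploy.pub gitlab@IP

# verify the key-based login works
ssh -i ~/.ssh/gitlab_deploy gitlab@IP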
My gitlab-ci.yml looks like this:
deploy-app:
  stage: deploy
  image: ubuntu
  before_script:
    - apt-get update -qq
    - 'which ssh-agent || ( apt-get install -qq openssh-client )'
    - eval $(ssh-agent -s)
    - ssh-add <(cat "$DEPLOY_SERVER_PRIVATE_KEY")
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - chmod 755 ./deploy.sh
  script:
    - ./deploy.sh
where I added the private key's content as a CI/CD variable in my GitLab instance (see https://docs.gitlab.com/ee/ci/variables/).
The deploy.sh looks like this:
#!/bin/bash
set -eo pipefail
scp app/docker-compose.yml gitlab@"${DEPLOY_SERVER_IP}":~/apps/${NGINX_SERVER_NAME}/
ssh gitlab@$DEPLOY_SERVER_IP "apps/${NGINX_SERVER_NAME}/app.sh update" # this is just doing docker-compose pull && docker-compose up in the app's directory.
Maybe this helps? It works fine for me, and scp/ssh give more intuitive error messages than what you got from rsync in this particular case.
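As a footnote on the exact error in the question: "Host key verification failed" is about the client-side known_hosts file, not about authorized_keys. (Note also that the original job generates a brand-new key pair on every run, so the host could never have its public key pre-authorized.) A minimal sketch of pre-seeding known_hosts inside the job, with the host IP taken from the question:

mkdir -p ~/.ssh
ssh-keyscan 172.16.1.97 >> ~/.ssh/known_hosts
rsync -hrvz -e ssh ./ root@172.16.1.97:~/web_dev/www/test/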
Hi, I have a problem configuring a Bitbucket pipeline with SSH login on my remote server.
The error output is:
ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory
Host key verification failed
These are the steps I follow:
generate private and public keys (without a password) on my server using this command: ssh-keygen -t rsa -b 4096
add the base64-encoded private key under Repository Settings -> Pipelines -> Deployments -> Staging environments
push the file my_known_hosts, created with ssh-keyscan -t rsa myserverip > my_known_hosts, to the repository
I also tried another test:
generate keys from Repository Settings
copy the public key to the authorized_keys file on my remote server
type the IP of my remote server in "Known hosts", click fetch and add
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
This is how I configure the pipeline SSH connection:
image: atlassian/default-image:latest

pipelines:
  default:
    - step:
        name: Deploy to staging
        deployment: staging
        script:
          - echo "Deploying to staging environment"
          - mkdir -p ~/.ssh
          - cat ./my_known_hosts >> ~/.ssh/known_hosts
          - (umask 077 ; echo $SSH_KEY | base64 --decode > ~/.ssh/id_rsa)
          - ssh $USER@$SERVER -p$PORT 'echo "connected to remote host as $USER"'
I've tried everything I can think of but still can't connect.
Can anyone help me?
This happens when you try to SSH to the server for the first time. You can disable host key checking with the option StrictHostKeyChecking=no; below is the complete command for your reference.
ssh -o StrictHostKeyChecking=no $USER@$SERVER -p$PORT 'echo "connected to remote host as $USER"'
PS: disabling host checking is not a secure way to do this. Instead, you can add the server's key to your ~/.ssh/known_hosts: run ssh-keyscan host1, replacing host1 with the host you want to connect to.
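For example, using the variables from the pipeline above (a sketch; note that a non-default SSH port needs -p for ssh-keyscan as well):

ssh-keyscan -p $PORT -t rsa $SERVER >> ~/.ssh/known_hosts
ssh $USER@$SERVER -p$PORT 'echo "connected to remote host as $USER"'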
I'm setting up a CI/CD pipeline with GitLab. I've installed gitlab-runner on a DigitalOcean Ubuntu 18.04 droplet and gave the gitlab-runner user permissions in /etc/sudoers:
gitlab-runner ALL=(ALL:ALL) ALL
The first commit to the associated repository correctly builds the docker-compose setup (the app itself is Django + Postgres), but subsequent commits are not able to clean up the previous build and fail:
Running with gitlab-runner 12.8.0 (1b659122)
on ubuntu-s-4vcpu-8gb-fra1-01 52WypZsE
Using Shell executor...
00:00
Running on ubuntu-s-4vcpu-8gb-fra1-01...
00:00
Fetching changes with git depth set to 50...
00:01
Reinitialized existing Git repository in /home/gitlab-runner/builds/52WypZsE/0/lorePieri/djangocicd/.git/
From https://gitlab.com/lorePieri/djangocicd
* [new ref] refs/pipelines/120533457 -> refs/pipelines/120533457
0072002..bd28ba4 develop -> origin/develop
Checking out bd28ba46 as develop...
warning: failed to remove app/staticfiles/admin/img/selector-icons.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/search.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/icon-alert.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/tooltag-arrowright.svg: Permission denied
warning: failed to remove app/staticfiles/admin/img/icon-unknown-alt.svg: Permission denied
This is the relevant portion of the .gitlab-ci.yml file:
image: docker:latest

services:
  - docker:dind

stages:
  - test
  - deploy_staging
  - deploy_production

step-test:
  stage: test
  before_script:
    - export DYNAMIC_ENV_VAR=DEVELOP
  only:
    - develop
  tags:
    - develop
  script:
    - echo running tests in $DYNAMIC_ENV_VAR
    - sudo apt-get install -y python-pip
    - sudo pip install docker-compose
    - sudo docker image prune -f
    - sudo docker-compose -f docker-compose.yml build --no-cache
    - sudo docker-compose -f docker-compose.yml up -d
    - echo do tests now
    - sudo docker-compose exec -T web python3 -m coverage run --source='.' manage.py test
...
What I've tried:
usermod -aG docker gitlab-runner
sudo service docker restart
The best solution for me was adding
pre_clone_script = "sudo chown -R gitlab-runner:gitlab-runner ."
to /etc/gitlab-runner/config.toml.
Even if you don't have permissions left over from a previous job, it will set the correct permissions before cleaning up the workdir and cloning the repo.
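For reference, a sketch of where that line sits in config.toml (the runner name, url, and token here are placeholders, not taken from the question):

[[runners]]
  name = "my-shell-runner"
  url = "https://gitlab.com/"
  token = "RUNNER_TOKEN"
  executor = "shell"
  pre_clone_script = "sudo chown -R gitlab-runner:gitlab-runner ."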
I would recommend setting GIT_STRATEGY to none in the afflicted job.
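In .gitlab-ci.yml that would look something like this (a minimal sketch, reusing the step-test job name from the question):

step-test:
  variables:
    GIT_STRATEGY: none

With GIT_STRATEGY: none the runner skips the fetch/clone and the clean-up that is failing here, though the job then has to obtain any files it needs by itself.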
I have had the exact same problem, so I will explain how it was resolved in detail.
Find your config.toml file and run the gitlab-runner command with root privileges, since "permission denied" is a very common error on UNIX-based operating systems.
After finding the location of config.toml, pass it:
sudo gitlab-runner run --config <absolute_location_of_config_toml>
P.S. You can find all config.toml files easily using the locate config.toml command. Make sure mlocate is installed first: sudo apt-get install mlocate.
After facing the permission denied error, I tried using sudo gitlab-runner run instead of gitlab-runner, but that has its own problem:
ERROR: Failed to load config stat /etc/gitlab-runner/config.toml: no such file or directory builds=0
while executing gitlab-runner without root permissions doesn't have any config file problem.
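The likely reason (my reading, not stated in the original answer): when run as root, gitlab-runner defaults to /etc/gitlab-runner/config.toml, while a non-root run defaults to ~/.gitlab-runner/config.toml. Pointing sudo at the user-level file reconciles the two (the path below is an assumption; use whatever locate config.toml reports):

sudo gitlab-runner run --config /home/gitlab-runner/.gitlab-runner/config.toml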
I also tried the approaches and solutions that @Grumbanks and @vlad-Mazurkov mentioned, but they didn't work properly.
It MAY be because you write a file into the cloned codebase. What I do is simply create another directory outside of the gitlab-runner directory:
WORKSPACE_DIR="/home/abcd_USER/a/b"
rm -rf $WORKSPACE_DIR
mkdir -p $WORKSPACE_DIR
cd $WORKSPACE_DIR
ls -la
git clone ..................
AND DO whatever you need there.
I never faced the issue again.
I am trying out OWASP ZAP to see if it is something we can use for our project, but I cannot make it work and I don't know what I am doing wrong; the documentation really does not help. What I am trying is to run a scan on my API, which runs in a Docker container locally on my Windows machine, so I run the command:
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable zap-baseline.py -t http://172.21.0.2:8080/swagger.json -g gen.conf -r testreport.html
The IP 172.21.0.2 is the IP address of my API container; I even tried with localhost and 127.0.0.1,
but it just hangs at the following log message:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 1:43:31 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Nothing happens, and my ZAP Docker container is in an unhealthy state; after some time it just crashes and ends up with a bunch of NullPointerExceptions. Does the ZAP Docker image only work on Linux, or is there something specific I need to do when running it on a Windows machine? I don't get why this is not working even though I am specifically following the guide at https://github.com/zaproxy/zaproxy/wiki/Docker.
Edit 1
In my latest try, where I target my host IP address directly and the port my API is exposed on, I get the following error:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 2:12:07 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Total of 3 URLs
ERROR Permission denied
2019-02-14 14:12:57,116 I/O error(13): Permission denied
Traceback (most recent call last):
File "/zap/zap-baseline.py", line 347, in main
with open(base_dir + generate, 'w') as f:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
Found Java version 1.8.0_151
Available memory: 3928 MB
Setting jvm heap size: -Xmx982m
213 [main] INFO org.zaproxy.zap.DaemonBootstrap
When you run docker with docker run -v $(pwd):/zap/wrk/:rw ...,
you are mapping the /zap/wrk/ directory in the Docker image to the current working directory (cwd) of the machine on which you are running docker.
I think the problem is that your current user doesn't have write access to the cwd.
Try the command below; hopefully it resolves the issue.
docker run --user $(id -u):$(id -g) -v $(pwd):/zap/wrk/:rw --rm -t owasp/zap2docker-stable zap-baseline.py -t https://your_url -g gen.conf -r testreport.html
The key error here is:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
This means that the script cannot write to the gen.conf file in the directory you have mounted at /zap/wrk.
Do you have write access to the cwd when it's not mounted?
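A quick way to check along those lines (a sketch):

ls -ld "$(pwd)"   # owner and permissions of the directory being mounted
id -u; id -g      # your UID/GID, which docker run --user can pass through to the container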
The reason for that is: if you use the -r parameter, ZAP will attempt to generate the file report.html at /zap/wrk/. To make this work, we have to mount a directory at /zap/wrk, and when you do so, it is important that the ZAP container can perform write operations on the mounted directory.
So, below is a working solution using a GitLab CI yml. I started with the approach of using image: owasp/zap2docker-stable, but then had to switch to vanilla docker commands to execute it.
test_site:
  stage: test
  image: docker:latest
  script:
    # The folder zap-reports created locally will be mounted into the owasp/zap2docker container;
    # on execution it will generate the reports in this folder. The current user is passed so reports can be generated.
    - mkdir zap-reports
    - cd zap-reports
    - docker pull owasp/zap2docker-stable:latest || echo
    - docker run --name zap-container --rm -v $(pwd):/zap/wrk -u $(id -u ${USER}):$(id -g ${USER}) owasp/zap2docker-stable zap-baseline.py -t "https://example.com" -r report.html
  artifacts:
    when: always
    paths:
      - zap-reports
  allow_failure: true
So the tricks in the above code are:
Mount the local directory zap-reports to /zap/wrk, as in $(pwd):/zap/wrk.
Pass the current user and group on the host machine to the Docker container so the process runs as the same user/group; this allows writing the reports to the directory mounted from the host. This is done by -u $(id -u ${USER}):$(id -g ${USER}).
Below is the working code with image: owasp/zap2docker-stable
test_site:
  variables:
    GIT_STRATEGY: none
  stage: test
  image:
    name: owasp/zap2docker-stable:latest
  before_script:
    - mkdir -p /zap/wrk
  script:
    - zap-baseline.py -t "https://example.com" -g gen.conf -I -r testreport.html
    - cp /zap/wrk/testreport.html testreport.html
  artifacts:
    when: always
    paths:
      - zap.out
      - testreport.html
I want to deploy my test app from a local repo to a GitLab repo and, with GitLab CI, push it to my remote server. The SSH connection is working and GitLab CI shows the job as passed, but the code on the remote server is not updated.
I made bare repo in: /home/repos/testDeploy.git
And folder for files is in: /home/example.com/web/testDeploy
I added my .gitlab-ci.yml file:
stages:
  - deploy

deployment:
  stage: deploy
  environment:
    name: production
    url: http://www.example.com/testDeploy
  only:
    - master
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - chmod 600 ~/.ssh/id_rsa_gitlab && chmod 700 ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - git remote add production ssh://user@server:port/home/repos/testDeploy.git
    - git push -f production master
    - echo "Deployed to production!"
Also, I have a post-receive hook:
#!/bin/sh
git --git-dir=/home/repos/testDeploy.git --work-tree=/home/example.com/web/testDeploy checkout -f
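One thing worth double-checking here (a general Git requirement, not something shown in the question): the hook file must be executable, otherwise Git silently skips it:

chmod +x /home/repos/testDeploy.git/hooks/post-receive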
I make changes in my local repo, commit, and push to origin master on GitLab. The job passes, but as I mention above, the files on the remote server are not updated.
The output from the GitLab job is:
Fetching changes...
HEAD is now at 595db67 as
Checking out 595db67b as master...
Skipping Git submodules setup
$ which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )
/usr/bin/ssh-agent
$ eval $(ssh-agent -s)
Agent pid 40589
$ chmod 600 ~/.ssh/id_rsa_gitlab && chmod 700 ~/.ssh
$ [[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
$ git branch
* (HEAD detached at 595db67)
master
production
$ git push -f production master
Everything up-to-date
$ echo "Deployed to production!"
Deployed to production!
Job succeeded
What am I doing wrong? Can someone please help me figure it out? Thank you for all your answers.
I need to make Python 2.7 the default version of Python for running a Jenkins build server. I'm trying to use python_version to do this, but Python 2.6 remains the default version. I'm probably missing something really simple. Any suggestions?
dotcloud.yml
jenkins:
  type: custom
  buildscript: jenkins/builder
  ports:
    www: http
  config:
    python_version: v2.7
  processes:
    sshagent: ssh-agent /bin/bash
    jenkins: ~/run
db:
  type: postgresql
builder
#!/bin/bash
if [ -f ~/jenkins.war ]
then
echo 'Found jenkins installation.'
else
echo 'Installing jenkins.'
wget -O ~/jenkins.war http://mirrors.jenkins-ci.org/war/latest/jenkins.war
fi
echo 'Installing dotCloud scaffolding.'
cp -a jenkins/. ~
echo 'Setting up SSH.'
mkdir -p ~/.ssh
cp jenkins_id ~/.ssh/id_rsa
chmod 0600 ~/.ssh/id_rsa
ssh-keygen -R bitbucket.org
ssh-keyscan -H bitbucket.org >> ~/.ssh/known_hosts
I'm still not sure why my build file didn't solve the problem, but I was able to work around it by using the --python=/usr/bin/python2.7 option for virtualenv in my Jenkins build script.
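Concretely, the workaround amounts to something like this in the Jenkins build script (a sketch; the virtualenv path ~/venv is an assumption):

# create a virtualenv pinned to the 2.7 interpreter instead of the system default
virtualenv --python=/usr/bin/python2.7 ~/venv
. ~/venv/bin/activate
python --version   # should now report Python 2.7.x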