Run Python 2.7 by default in a Dotcloud custom service - dotcloud

I need to make Python 2.7 the default version of Python for running a Jenkins build server. I'm trying to use python_version to do this, but Python 2.6 remains the default version. I'm probably missing something really simple. Any suggestions?
dotcloud.yml
jenkins:
  type: custom
  buildscript: jenkins/builder
  ports:
    www: http
  config:
    python_version: v2.7
  processes:
    sshagent: ssh-agent /bin/bash
    jenkins: ~/run
db:
  type: postgresql
builder
#!/bin/bash
if [ -f ~/jenkins.war ]
then
    echo 'Found jenkins installation.'
else
    echo 'Installing jenkins.'
    wget -O ~/jenkins.war http://mirrors.jenkins-ci.org/war/latest/jenkins.war
fi
echo 'Installing dotCloud scaffolding.'
cp -a jenkins/. ~
echo 'Setting up SSH.'
mkdir -p ~/.ssh
cp jenkins_id ~/.ssh/id_rsa
chmod 0600 ~/.ssh/id_rsa
ssh-keygen -R bitbucket.org
ssh-keyscan -H bitbucket.org >> ~/.ssh/known_hosts

I'm still not sure why my configuration didn't take effect, but I was able to work around it by passing the --python=/usr/bin/python2.7 option to virtualenv in my Jenkins build script.
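For anyone hitting the same wall, a minimal sketch of that workaround (assuming the interpreter lives at /usr/bin/python2.7 on the service, and using ~/env as an arbitrary virtualenv path):
# In the Jenkins job's shell build step -- point virtualenv at the 2.7
# interpreter explicitly instead of relying on the service default:
virtualenv --python=/usr/bin/python2.7 ~/env
. ~/env/bin/activate
python --version   # should now report Python 2.7.x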

Related

I have a bash script for deploying my app to my production server. How can I use GitHub to run it automatically when someone pushes to master?

The script is in ./bin/deploy.sh and uses an SSH key to connect to the server.
My package.json defines a deploy:prod script, invoked via npm run deploy:prod, which runs this bash script.
What do I need to get GitHub to run this script automatically when someone pushes or merges a PR to master?
Here is my deploy.sh script:
#!/usr/bin/env bash
. $HOME/.bashrc
. .env
. .env.local
args=(-azvP --delete --exclude=node_modules --exclude=.idea --exclude=.git)
hosts=($HOST_DOMAIN) # tornado lightning thunder tundra jefferson
dry=() #add --dry-run to enable testing
user=$HOST_USER
name=$HOST_PATH
project=$HOST_PROJECT
for host in "${hosts[@]}"
do
    echo ""
    date
    echo "---------------------"
    echo "syncing ${host}"
    echo "---------------------"
    rsync ${dry[@]} ${args[@]} ./ ${user}@${host}:www/${name}/${project}
    ssh -t ${user}@${host} \$HOME/www/${name}/${project}/bin/post-deploy.sh
done
version=$(jq -r .version package.json)
say "$HOST_PROJECT is live!"
exit
Here is my post-deploy.sh script that gets executed on the server:
#!/usr/bin/env bash
cd "$(dirname "$0")/.."
. $HOME/.bashrc
. .env
. .env.local
host=$HOST_DOMAIN
name=$HOST_PATH
project=$HOST_PROJECT
echo "current name: $name"
cd $HOME/www/${name}/${project}
nvm install v18
node -v
npm -v
npm i
sudo /etc/init.d/nginx reload
sudo systemctl daemon-reload
sudo systemctl restart ${META_SERVICE}
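One standard way to wire this up (my sketch, not from the thread) is a GitHub Actions workflow that runs the npm script on every push to master; a merged PR lands as a push to master, so one trigger covers both cases. The secret names (DEPLOY_SSH_KEY, HOST_DOMAIN) are assumptions, as is keeping the .env files available to the runner:
# .github/workflows/deploy.yml -- hypothetical; adjust secret names to your repo.
name: deploy
on:
  push:
    branches: [master]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install SSH key and trust the host
        env:
          DEPLOY_SSH_KEY: ${{ secrets.DEPLOY_SSH_KEY }}
          HOST_DOMAIN: ${{ secrets.HOST_DOMAIN }}
        run: |
          mkdir -p ~/.ssh
          echo "$DEPLOY_SSH_KEY" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan -H "$HOST_DOMAIN" >> ~/.ssh/known_hosts
      - name: Deploy
        run: npm run deploy:prod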

Scaleway scw init Inside Docker Container

I am trying to get the Scaleway CLI installed as part of a Docker image I'm building to run Azure Pipelines.
My Dockerfile looks like this:
FROM ubuntu:18.04
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        ca-certificates \
        curl \
        jq \
        git \
        iputils-ping \
        libcurl4 \
        libicu60 \
        libunwind8 \
        netcat \
        docker.io \
        s3cmd
# Install Scaleway CLI
RUN curl -o /usr/local/bin/scw -L "https://github.com/scaleway/scaleway-cli/releases/download/v2.1.0/scw-2-1-0-linux-x86_64"
RUN chmod +x /usr/local/bin/scw
# Add config for Scaleway CLI
RUN mkdir -p ./config
RUN mkdir -p ./config/scw
COPY ./config/config.yaml $HOME/.config/scw/config.yaml
RUN scw init
# Add private key for SSH connections
COPY ./config/id_rsa $HOME/.ssh/id_rsa
# Config s3cmd
COPY ./config/.s3cfg $HOME/.s3cfg
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh"]
The key section being:
# Install Scaleway CLI
RUN curl -o /usr/local/bin/scw -L "https://github.com/scaleway/scaleway-cli/releases/download/v2.1.0/scw-2-1-0-linux-x86_64"
RUN chmod +x /usr/local/bin/scw
# Add config for Scaleway CLI
RUN mkdir -p ./config
RUN mkdir -p ./config/scw
COPY ./config/config.yaml $HOME/.config/scw/config.yaml
RUN scw init
The config.yaml file referenced above looks like the following (minus the real values of course):
access_key: <key>
secret_key: <secret>
default_organization_id: <orgId>
default_project_id: <projectId>
default_region: nl-ams
default_zone: nl-ams-1
However, when it executes RUN scw init, the output is Invalid email or secret-key: ''
I have tried without running scw init at all, but then calls to scw fail, saying
Access key is required
Details: Access_key can be initialised using the command "scw init".
After initialisation, there are three ways to provide access_key:
with the Scaleway config file, in the access_key key: /root/.config/scw/config.yaml;
with the SCW_ACCESS_KEY environment variable;
Note that the last method has the highest priority.
More info:
https://github.com/scaleway/scaleway-sdk-go/tree/master/scw#scaleway-config
Hint: You can get your credentials here:
https://console.scaleway.com/account/credentials
Which admittedly is one of the better error messages I've seen, but nonetheless has not helped me. I am going to try the Environment Variable approach, which I suspect may do the trick, but I'd still like to know what I'm doing wrong with this config.yaml file.
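For what it's worth, a sketch of that environment-variable approach (the SCW_* variable names are the ones the CLI documents; wiring them through build args and skipping scw init entirely is my assumption, not a verified recipe). One possible culprit with the original file: $HOME is not set during a Docker build unless declared with ENV, so COPY ./config/config.yaml $HOME/.config/scw/config.yaml may be landing at /.config/scw/config.yaml rather than /root/.config/scw/config.yaml where the CLI looks.
# Hypothetical Dockerfile fragment -- build args keep keys out of the build
# context, though they remain visible in image history; prefer CI secrets.
ARG SCW_ACCESS_KEY
ARG SCW_SECRET_KEY
ENV SCW_ACCESS_KEY=${SCW_ACCESS_KEY} \
    SCW_SECRET_KEY=${SCW_SECRET_KEY}
# With the variables set, scw should authenticate without a config file:
RUN scw version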
Lastly... someone with more rep than me needs to create the tag "scaleway". Hard to reference the actual technology in question when the tag doesn't exist.

How to deploy with .gitlab-ci.yml in runner used by docker?

I installed Docker and GitLab plus a runner using this tutorial: https://frenchco.de/article/Add-un-Runner-Gitlab-CE-Docker
The problem is that when I modify the .gitlab-ci.yml to deploy to my host machine, it fails.
My .yml :
stages:
  - deploy

deploy_develop:
  stage: deploy
  before_script:
    - apk update && apk add bash && apk add openssh && apk add rsync
    - apk add --no-cache bash
  script:
    - mkdir -p ~/.ssh
    - ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    - cat ~/.ssh/id_rsa.pub
    - rsync -hrvz ~/ root@172.16.1.97:~/web_dev/www/test/
  environment:
    name: develop
And the problem is that in ssh or rsync I always have the same error message in my job:
$ rsync -hrvz ~/ root@172.16.1.97:~/web_dev/www/test/
Host key verification failed.
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(226) [sender=3.1.3]
I tried copying the SSH id_rsa and id_rsa.pub to the host; the result is the same.
Could the problem be that my runner is running inside Docker? It is strange because I can ping my host (172.16.1.97) from within the job. Any idea what's wrong?
Looks like you did not add the public key to authorized_keys for the deploy user on the host server?
For example, I use gitlab-ci to deploy my webapp: I added a gitlab user on my host machine, appended the public key to its authorized_keys, and then I can connect to that server with ssh gitlab@IP -i PRIVATE_KEY.
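Concretely, the host-side setup described above might look like this (a sketch; standard OpenSSH paths, with gitlab_id_rsa.pub standing in for whatever your public key file is called):
# On the host server, as root: create the deploy user and authorize the key.
useradd -m gitlab
mkdir -p /home/gitlab/.ssh
cat gitlab_id_rsa.pub >> /home/gitlab/.ssh/authorized_keys
chmod 700 /home/gitlab/.ssh
chmod 600 /home/gitlab/.ssh/authorized_keys
chown -R gitlab:gitlab /home/gitlab/.ssh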
My gitlab-ci.yml looks like this:
deploy-app:
  stage: deploy
  image: ubuntu
  before_script:
    - apt-get update -qq
    - 'which ssh-agent || ( apt-get install -qq openssh-client )'
    - eval $(ssh-agent -s)
    - ssh-add <(cat "$DEPLOY_SERVER_PRIVATE_KEY")
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - chmod 755 ./deploy.sh
  script:
    - ./deploy.sh
where I added the private key's content as a variable to my gitlab-instance. (see https://docs.gitlab.com/ee/ci/variables/)
The deploy.sh looks like this:
#!/bin/bash
set -eo pipefail
scp app/docker-compose.yml gitlab#"${DEPLOY_SERVER_IP}":~/apps/${NGINX_SERVER_NAME}/
ssh gitlab#$DEPLOY_SERVER_IP "apps/${NGINX_SERVER_NAME}/app.sh update" # this is just doing docker-compose pull && docker-compose up in the app's directory.
Maybe this helps? It's working fine for me and scp/ssh are giving more intuitive error messages than what you got with rsync in this particular case.
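As a side note (my addition, not part of the original answer): rather than disabling StrictHostKeyChecking wholesale, the host key can be pinned in before_script with ssh-keyscan, which keeps host verification on:
# Alternative to the "StrictHostKeyChecking no" line -- a sketch:
- mkdir -p ~/.ssh
- ssh-keyscan -H "$DEPLOY_SERVER_IP" >> ~/.ssh/known_hosts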

Install MongoDB on Debian Buster

How do I install the latest MongoDB 3.4, or even 3.6?
MongoDB provides packages for Ubuntu, but my server runs Debian Buster, where I am stuck with MongoDB 3.2.
I don't know yet if this is a good idea, but I just installed it by adding the sid repos and installing the mongodb-server package. For me this installs version 3.4.18.
I created /etc/apt/sources.list.d/sid.list with:
deb http://deb.debian.org/debian/ sid main
deb-src http://deb.debian.org/debian/ sid main
then did
apt update
apt install mongodb-server
and verified that it's working by connecting with mongo.
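One caveat (my note, not the answerer's): leaving sid enabled at full priority can pull newer core libraries onto a stable system during upgrades. An apt pin can keep sid at low priority while still allowing the mongodb packages from it; the priority values below are conventional choices, not tested on this exact setup:
# /etc/apt/preferences.d/sid-mongodb -- keep sid packages at low priority...
Package: *
Pin: release a=unstable
Pin-Priority: 100

# ...but prefer the mongodb packages from sid over stable's 3.2.
Package: mongodb-server mongodb-clients
Pin: release a=unstable
Pin-Priority: 990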
I have found a solution based on a build script, described here:
https://github.com/patrikx3/docker-debian-testing-mongodb-stable
The description:
A builder script for stable MongoDB and MongoDB Tools builds on Debian Stretch / Buster / Bullseye / Testing. What it does, exactly:
It is basically a build of the latest MongoDB for Debian; the current version it builds is the r4.0.x release.
Warning: it removes all mongodb* apt packages in ./scripts/build-server.sh, and /etc/systemd/system/mongodb-server.service is replaced.
It installs the required apt dependencies, generates the systemd service, and enables it.
Check that the build works (building is described below). It runs all the tests, so if they pass, it really works. If there is an error, you simply don't deploy to your server. If building and testing succeed, it installs the binaries and you are done.
The server build script, build-server.sh:
#!/usr/bin/env bash
# based on https://github.com/mongodb/mongo/wiki/Build-Mongodb-From-Source
# the current directory
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# if an error exit right away, don't continue the build
set -e
# some info
echo
#echo "Works like command, use a tag: sudo ./scripts/build-server.sh r4.2.0"
echo "Works like command, use a tag: sudo ./scripts/build-server.sh r4.0.12"
echo
# check if we are root
if [[ $EUID -ne 0 ]]; then
    echo "This script must be run as root, via the 'sudo' command or from 'sudo -i'."
    exit 1
fi
# require mongo branch
#if [ -z "${1}" ]; then
# echo "First argument must be the MONGODB_BRANCH for example 'v4.1'"
# exit 1
#fi
#MONGODB_BRANCH="${1}"
# require mongo release
#if [ -z "${2}" ]; then
# echo "The second argument must be the MONGODB_RELEASE for example 'r4.1.0'"
# exit 1
#fi
#MONGODB_RELEASE="${2}"
# require mongo release
if [ -z "${1}" ]; then
    echo "The first argument must be the MONGODB_RELEASE for example 'r4.0.12'"
    exit 1
fi
MONGODB_RELEASE="${1}"
# delete all mongo other programs, we self compile
apt remove --purge mongo*
# the required packages for debian
apt -y install libboost-filesystem-dev libboost-program-options-dev libboost-system-dev libboost-thread-dev build-essential gcc python scons git glibc-source libssl-dev python-pip libffi-dev python-dev libcurl4-openssl-dev #libcurl-dev
pip install -U pip pyyaml typing
# generate build directory variable
BUILD=$DIR/../build
# delete previous build directory
rm -rf $BUILD/mongo
# generate new build directory
mkdir -p $BUILD
# the mongodb.conf and systemd services files in a directory variable
ROOT_FS=$DIR/../artifacts/root-filesystem
# find out how many cores we have and we use that many
if [ -z "$CORES" ]; then
    CORES=$(grep -c ^processor /proc/cpuinfo)
fi
echo Using $CORES cores
# go to the build directory
pushd $BUILD
# clone the mongo by branch
#git clone -b ${MONGODB_BRANCH} https://github.com/mongodb/mongo.git
# clone the mongo by branch
git clone https://github.com/mongodb/mongo.git
# the mongo directory is a variables
MONGO=$BUILD/mongo
# go to the mongo directory
pushd $MONGO
# checkout the mongo release
git checkout tags/${MONGODB_RELEASE}
# hack to old version python pip cryptography from 1.7.2 to use the latest
sed -i 's#cryptography == 1.7.2#\#cryptography == 1.7.2#g' buildscripts/requirements.txt
# this is only because 4.0.12 uses 1.7.2 and
# https://github.com/pyca/cryptography/issues/4193#issuecomment-381236459
# support minimum latest (2.2)
pip install cryptography
# install the python requirements
#pip install -r etc/pip/dev-requirements.txt
pip install -r buildscripts/requirements.txt
# somewhere in the build it says if we install this, it is faster to build
pip2 install --user regex
# build everything
scons all --disable-warnings-as-errors -j $CORES --ssl
# install the mongo programs all
scons install --disable-warnings-as-errors -j $CORES --prefix /usr
# create a copy of the old config
#TIMESTAMP=$(($(date +%s%N)/1000000))
#cp /etc/mongodb.conf /etc/mongodb.conf.$TIMESTAMP.save
# copy the mongodb.conf configured and the systemd service file
# dangerous!!! removed
# cp -avr $ROOT_FS/. /
MONGODB_SERVICE=etc/systemd/system/mongodb-server.service
cp $ROOT_FS/$MONGODB_SERVICE /$MONGODB_SERVICE
chown root:root /$MONGODB_SERVICE
chmod o-rwx /$MONGODB_SERVICE
# generate mongodb user and group
useradd mongodb -d /var/lib/mongodb -s /bin/false || true
# create the required mongodb database directory and add safety
mkdir -p /var/lib/mongodb
chmod o-rwx -R /var/lib/mongodb
chown -R mongodb:mongodb /var/lib/mongodb
# create the required mongodb log directory and add safety
mkdir -p /var/log/mongodb
chmod o-rwx -R /var/log/mongodb
chown -R mongodb:mongodb /var/log/mongodb
# create the required run socket directory and add safety
mkdir -p /run/mongodb
chmod o-rwx -R /run/mongodb
chown -R mongodb:mongodb /run/mongodb
# add safety to the mongodb config file
chmod o-rwx /etc/mongodb.conf || true
chown mongodb:mongodb /etc/mongodb.conf || true
# reload systemd services
systemctl daemon-reload
# enable the mongodb-server
systemctl enable mongodb-server
# start the mongodb-server
#service mongodb-server start
# exit of the mongo directory
popd
# exit the build directory
popd
# delete current build directory
rm -rf $BUILD/mongo
The tools build script, build-tools.sh:
#!/usr/bin/env bash
# based on https://github.com/mongodb/mongo-tools
# the current directory
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# if an error exit right away, don't continue the build
set -e
# some info
echo
echo "Works like command: sudo ./scripts/build-tools.sh r4.0.12"
echo
# check if we are root
if [[ $EUID -ne 0 ]]; then
    echo "This script must be run as root, via the 'sudo' command or from 'sudo -i'."
    exit 1
fi
# require mongo release
if [ -z "${1}" ]; then
    echo "The first argument must be the MONGODB_RELEASE for example 'r4.0.12'"
    exit 1
fi
MONGODB_RELEASE="${1}"
## delete all mongo other programs, we self compile
##apt remove --purge mongo*
## the required packages for debian
##apt -y install gcc python scons git glibc-source libssl-dev python-pip
apt -y install golang libpcap-dev
export GOROOT=$(go env GOROOT)
# generate build directory variable
BUILD=$DIR/../build/src/github.com/mongodb/
# delete previous build directory
rm -rf $BUILD/mongo-tools
# generate new build directory
mkdir -p $BUILD
# find out how many cores we have and we use that many
CORES=$(grep -c ^processor /proc/cpuinfo)
# go to the build directory
pushd $BUILD
# clone the mongo by branch
git clone https://github.com/mongodb/mongo-tools
# the mongo directory is a variables
MONGO_TOOLS=$BUILD/mongo-tools
# go to the mongo directory
pushd $MONGO_TOOLS
# checkout the mongo release
git checkout tags/${MONGODB_RELEASE}
bash ./build.sh
chown root:adm -R ./bin
chmod o-rwx -R ./bin
chmod ug+rx ./bin/*
cp -r ./bin/. /usr/bin
# for PROGRAM in bsondump mongodump mongoexport mongofiles mongoimport mongoreplay mongorestore mongostat mongotop
# do
# go build -o bin/${PROGRAM} -tags "ssl sasl" ${PROGRAM}/main/${PROGRAM}.go
# done
# exit of the mongo directory
popd
# exit the build directory
popd
# delete current build directory
rm -rf $BUILD/mongo-tools

Call Perl Script using Ansible

I have the below .sh script which needs to be converted into Ansible tasks.
#!/bin/sh
echo "Installing Sonar"
SONAR_HOME=/tui/hybris/sonar
if [ ! -d "$SONAR_HOME" ]; then
    mkdir -p $SONAR_HOME
fi
cd $SONAR_HOME
wget https://s3-eu-west-1.amazonaws.com/tuiuk/source/sonarqube/sonarqube-5.4.zip
unzip sonarqube-5.4.zip
echo "Modifying Sonar config file"
cd sonarqube-5.4/conf
perl -p -i -e 's/#sonar.jdbc.username=/sonar.jdbc.username=sonar/g' sonar.properties
perl -p -i -e 's/#sonar.jdbc.password=/sonar.jdbc.password=sonar/g' sonar.properties
perl -p -i -e 's/#sonar.jdbc.url=jdbc:mysql/sonar.jdbc.url=jdbc:mysql/g' sonar.properties
cd $SONAR_HOME
echo "downloading and copying plugins"
wget https://s3-eu-west-1.amazonaws.com/tuiuk/source/sonarqube/sonarqube5.4_plugins.zip
unzip sonarqube5.4_plugins.zip
cp plugins/* sonarqube-5.4/extensions/plugins/
cd sonarqube-5.4/bin/linux-x86-64
echo "Starting Sonar"
./sonar.sh start
Below is my playbook so far. I got stuck where I need to execute the Perl one-liners. Could anyone help me proceed?
- hosts: docker_test
  tasks:
    - name: Creates directory
      file: path=/tui/hybris/sonar state=directory mode=0777
      sudo: yes
    - name: Installing Sonar
      get_url:
        url: "https://s3-eu-west-1.amazonaws.com/tuiuk/source/sonarqube/sonarqube-5.4.zip"
        dest: "/tui/hybris/sonar/sonarqube-5.4.zip"
      register: get_solr
    - debug:
        msg: "solr was downloaded"
      when: get_solr|changed
    - name: Unzip SonarQube
      unarchive: src=/tui/hybris/sonar/sonarqube-5.4.zip dest=/tui/hybris/sonar copy=no
I bet you don't need Perl here: use lineinfile with the regexp option (to modify a single line in the file) or the replace module (to modify all occurrences).
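For instance, the first Perl substitution could become something like this (a sketch, untested; the path assumes the unarchive destination above, and the other two substitutions follow the same pattern):
- name: Set sonar.jdbc.username
  lineinfile:
    dest: /tui/hybris/sonar/sonarqube-5.4/conf/sonar.properties
    regexp: '^#?sonar\.jdbc\.username='
    line: 'sonar.jdbc.username=sonar'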
Or just call perl with the command or shell module:
- name: Modifying Sonar config file
  shell: cd sonarqube-5.4/conf && perl -p -i -e ...