kubectl not found error in jenkins pipeline - kubernetes

stage('Deploy Test chart') {
    steps {
        container('ubuntu-kubectl-helm') {
            script {
                kubeconfig(credentialsId: 'kubeconfig-test') {
                    sh "which kubectl"
                    sh "kubectl config view"
                    sh "kubectl get nodes"
                    sh "kubectl get pods -n jenkins-new"
                    sh "kubectl get pods -n test"
                }
            }
        }
    }
}
We have three containers to run the whole pipeline, and we created an Ubuntu container to run the Kubernetes commands on (we have installed kubectl on it). When we run this stage as part of the whole pipeline, it gives us errors:
ERROR: Failed to run "kubectl version". Returned status code 127.
stdout:
sh: 47: kubectl: not found
But when we run this stage separately as its own pipeline, it works and we get the results. Now we are stuck and unable to figure out how to proceed.
We also checked the environment PATH and verified whether the step was running on the Ubuntu container or not.
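A quick way to confirm what a step actually sees is to print the environment from inside the same container block; a minimal diagnostic sketch (reusing the container name from the question, stage name made up):
stage('Debug kubectl PATH') {
    steps {
        container('ubuntu-kubectl-helm') {
            sh 'hostname; head -n 2 /etc/os-release'   // confirm which container the step runs in
            sh 'echo $PATH'                            // the directory holding kubectl must appear here
            sh 'command -v kubectl || echo "kubectl not on PATH"'
        }
    }
}
If kubectl shows up here but not in the full pipeline, the failing step is most likely executing in a different container (for example the default jnlp container) rather than ubuntu-kubectl-helm.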

You need to install kubectl inside the Jenkins image, the same as if you want to use docker in your pipeline.
Here is my Dockerfile, which does it:
FROM jenkins/jenkins
ARG HOST_UID=1004
ARG HOST_GID=999
USER root
RUN apt-get -y update && \
    apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable" && \
    apt-get update && \
    apt-get -y install docker-ce docker-ce-cli containerd.io
RUN curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose \
    && chmod +x /usr/local/bin/docker-compose \
    && ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose \
    && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list \
    && apt-get -y update \
    && apt-get install -y kubectl
RUN usermod -u $HOST_UID jenkins
RUN groupmod -g $HOST_GID docker
RUN usermod -aG docker jenkins
USER jenkins
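To sanity-check the image after building it, something like this should print a client version (the image tag is made up; --entrypoint is needed because the Jenkins image's entrypoint starts Jenkins itself):
docker build --build-arg HOST_UID=1004 --build-arg HOST_GID=999 -t jenkins-with-kubectl .
docker run --rm --entrypoint kubectl jenkins-with-kubectl version --client
Note this bakes kubectl into the Jenkins controller image; if your stages run in a separate agent container, as in the question above, kubectl has to be installed in that agent image instead.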

Related

Deploying Jenkins using skaffold via GitHub Action Runner

I am deploying Jenkins using a GitHub Actions runner with Skaffold, with Skaffold installed on top of the default GitHub runner image.
The pod keeps crashing with a CrashLoopBackOff error, and I am not sure why it is happening.
When I deploy the runner on Google Kubernetes Engine, it fails with the following error:
A runner exists with the same name
√ Successfully replaced the runner
√ Runner connection is good
# Runner settings
√ Settings Saved.
√ Connected to GitHub
Current runner version: '2.294.0'
2022-12-01 06:03:57Z: Listening for Jobs
Runner update in progress, do not shutdown runner.
Downloading 2.299.1 runner
Waiting for current job finish running.
Generate and execute update script.
Runner will exit shortly for update, should be back online within 10 seconds.
Runner update process finished.
Runner listener exit because of updating, re-launch runner in 5 seconds
Restarting runner...
/home/docker/actions-runner/run-helper.sh: line 20: /home/docker/actions-runner/bin/Runner.Listener: No such file or directory
Exiting with unknown error code: 127
Exiting runner...
Following is the Dockerfile used for the runner:
FROM ubuntu:22.04
# installing skaffold
RUN apt-get update -y && apt-get upgrade -y sudo
RUN apt-get install -y curl
RUN curl -LO https://storage.googleapis.com/skaffold/releases/v2.0.2/skaffold-linux-amd64 \
    && sudo chmod +x skaffold-linux-amd64 \
    && sudo mv skaffold-linux-amd64 /usr/local/bin/skaffold
# install base packages
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update -y && apt-get upgrade -y && useradd -m docker
RUN apt-get install -y curl jq build-essential libssl-dev libffi-dev python3 python3-venv python3-dev ca-certificates gnupg2 iputils-ping software-properties-common apt-transport-https lsb-release git zip unzip postgresql-client python3-pip npm
RUN ln -sf /usr/bin/python3 /usr/bin/python
# set the github runner version
ARG RUNNER_VERSION="2.294.0"
# cd into the user directory, download and unzip the github actions runner
RUN cd /home/docker && mkdir actions-runner && cd actions-runner \
    && curl -O -L https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz \
    && tar xzf ./actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz
# install some additional dependencies
RUN chown -R docker ~docker && /home/docker/actions-runner/bin/installdependencies.sh
# Docker
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" && \
    apt-get update && \
    apt-get -y install docker-ce
# Downloading gcloud package
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
# Installing the package
RUN mkdir -p /usr/local/gcloud \
    && tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
    && /usr/local/gcloud/google-cloud-sdk/install.sh
# Adding the package path to local
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
# Install kubectl
RUN curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" \
    && chmod +x ./kubectl \
    && mv ./kubectl /usr/local/bin/kubectl
# copy over the start.sh script
COPY start.sh start.sh
# make the script executable
RUN chmod +x start.sh && mv start.sh /home/docker
# since the config and run script for actions are not allowed to be run by root,
# set the user to "docker" so all subsequent commands are run as the docker user
USER docker
# set the entrypoint to the start.sh script
ENTRYPOINT ["/home/docker/start.sh"]
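For reference, building and running this image locally looks roughly like this (the image tag and the org/repo values are made up; the startup script below expects the token file mounted at /etc/pat/pat):
docker build --build-arg RUNNER_VERSION=2.294.0 -t gh-runner .
docker run --rm \
    -e ORGANIZATION=my-org -e GH_PROJECT=my-repo \
    -v /path/to/pat-dir:/etc/pat \
    gh-runner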
Below is the startup script:
#!/bin/bash
SNAPTIME=`date '+%Y%m%d%H%M%S'`
echo "Started $SNAPTIME"
ORGANIZATION=$ORGANIZATION
ACCESS_TOKEN=`cat /etc/pat/pat`
GH_PROJECT=$GH_PROJECT
RUNNER_NAME="${RUNNER_NAME:-RUN$SNAPTIME}"
RUNNER_LABELS="${RUNNER_LABELS:-simple}"
# exchange the personal access token for a short-lived registration token
REG_TOKEN=$(curl -sX POST -H "Authorization: token ${ACCESS_TOKEN}" https://api.github.com/repos/${ORGANIZATION}/$GH_PROJECT/actions/runners/registration-token | jq .token --raw-output)
# gcloud auth activate-service-account --key-file=${GOOGLE_APPLICATION_CREDENTIALS}
cd /home/docker/actions-runner
./config.sh --name $RUNNER_NAME --labels ${RUNNER_LABELS} --url https://github.com/${ORGANIZATION}/${GH_PROJECT} --unattended --replace --token ${REG_TOKEN}
# deregister the runner when the container is stopped
cleanup() {
    echo "Removing runner..."
    ./config.sh remove --unattended --token ${REG_TOKEN}
}
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM
# run the listener in the background and wait on it, so the traps above can fire on signals
./run.sh & wait $!
The pod restarts whenever load comes into it:
runner-automation-dev-docker-595f48c7dc-k2wbz 1/2 CrashLoopBackOff 7 (67s ago) 18m
I am not sure what exactly is causing this issue.

Running a script with Terraform

For learning purposes, I'm trying to install and set up my own Kubernetes cluster on GCP.
I want to provision my instances on GCP with a bootstrap script.
Here is my google_compute_instance config:
resource "google_compute_instance" "default" {
name = var.vm_name
machine_type = "f1-micro"
zone = "europe-west1-b"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = var.network
access_config {
// Include this section to give the VM an external IP address
}
}
provisioner "remote-exec" {
script = var.script_path
connection {
type = "ssh"
host = var.ip_address
user = "root"
}
}
tags = ["node"]
}
I get this error when I run terraform apply:
Error: Failed to open script 'sudo apt-get update
sudo apt-get install
    apt-transport-https
    ca-certificates
    curl
    gnupg-agent
    software-properties-common
    zsh
    vim
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
sudo apt-get update && sudo apt-get install docker-ce docker-ce-cli containerd.io
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl': open (the same script contents repeated): no such file or directory
All my instances are created in the cloud; it seems to find the bootstrap script, but it shows this error.
What did I miss? Is there a better way to do it?
Here is the script:
#!/bin/bash
sudo apt-get update
sudo apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common \
zsh \
vim
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
sudo apt-get update && sudo apt-get install -y docker-ce docker-ce-cli containerd.io
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
You should provide the private_key argument in the connection block of remote-exec.
private_key - The contents of an SSH key to use for the connection. These can be loaded from a file on disk using the file function. This takes preference over the password if provided.
A sample block could be like this:
provisioner "remote-exec" {
script = var.script_path
connection {
host = var.ip_address
type = "ssh"
user = "root"
private_key = fileexists("/temp/private_key") ? file("/temp/private_key") : file("C:/private_key")
}
}
For those who are interested, I found an easier solution: instead of using SSH, use the Google metadata available at creation of the resource:
metadata_startup_script = file("./scripts/bootstrap.sh")
resource "google_compute_instance" "default" {
name = var.vm_name
machine_type = "e2-standard-2"
zone = "europe-west1-b"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
network = var.network
access_config {
// Include this section to give the VM an external IP address
}
}
metadata_startup_script = file("./scripts/bootstrap.sh")
tags = ["node"]
}
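If the bootstrap script still misbehaves with this approach, its output can be inspected on the VM itself. On GCP's Debian images the startup script runs under a systemd unit, so a sketch of checking its log would be:
sudo journalctl -u google-startup-scripts.service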

ERROR Dockerfile returned a non-zero code: 127

ERROR: Service 'remote_host' failed to build: The command '/bin/sh -c echo "1234" | passwd remote_user --stdin' returned a non-zero code: 127
FROM centos
RUN yum -y install openssh-server
RUN useradd remote_user
RUN echo "1234" | passwd remote_user --stdin
RUN mkdir /home/remote_user/.ssh
RUN chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user && \
    chmod 600 /home/remote_user/.ssh/authorized_keys
RUN /usr/sbin/sshd-keygen > /dev/null 2>&1
RUN yum -y install mysql
RUN yum -y install epel-release && \
    yum -y install python-pip && \
    pip install --upgrade pip && \
    pip install awscli
CMD /usr/sbin/sshd -D
Status code 127 means the shell could not find the command; here that most likely means the passwd binary is not present in the centos base image (centos:8 no longer ships it by default). To set the password for remote_user, we can use
RUN echo remote_user:1234 | chpasswd
Alternatively, to set a password for the user remote_user, you can update the RUN statement as follows, using the full path:
RUN echo remote_user:1234 | /usr/sbin/chpasswd
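A quick build-time sanity check that the password hash was actually written (the image tag is made up):
docker build -t remote-host .
docker run --rm remote-host grep remote_user /etc/shadow
A non-empty second field in the output confirms chpasswd set the password.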

mongodb container start failed with error:IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating

The mongodb docker container fails to start with the error: IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating
I use the official mongodb Dockerfile to build the container, and docker-compose to start it.
Here is my Dockerfile:
FROM ubuntu:bionic
# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r mongodb && useradd -r -g mongodb mongodb
RUN echo "deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse" > /etc/apt/sources.list \
&& echo "deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse" >> /etc/apt/sources.list \
&& echo "deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse" >> /etc/apt/sources.list \
&& echo "deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse" >> /etc/apt/sources.list
RUN export all_proxy=http:192.168.1.177:1080
RUN set -eux; \
    apt-get update; \
    apt-get install -y --no-install-recommends \
        ca-certificates \
        jq \
        numactl \
    ; \
    if ! command -v ps > /dev/null; then \
        apt-get install -y --no-install-recommends procps; \
    fi; \
    rm -rf /var/lib/apt/lists/*
# grab gosu for easy step-down from root (https://github.com/tianon/gosu/releases)
ENV GOSU_VERSION 1.11
# grab "js-yaml" for parsing mongod's YAML config files (https://github.com/nodeca/js-yaml/releases)
ENV JSYAML_VERSION 3.13.0
RUN mkdir ~/.gnupg && echo "disable-ipv6" >> ~/.gnupg/dirmngr.conf
RUN set -ex; \
    \
    apt-get update; \
    apt-get install -y --no-install-recommends \
        wget \
    ; \
    if ! command -v gpg > /dev/null; then \
        apt-get install -y --no-install-recommends gnupg dirmngr; \
    fi; \
    rm -rf /var/lib/apt/lists/*; \
    \
    dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
    wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
    wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
    export GNUPGHOME="$(mktemp -d)"; \
    gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
    gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
    command -v gpgconf && gpgconf --kill all || :; \
    rm -r "$GNUPGHOME" /usr/local/bin/gosu.asc; \
    chmod +x /usr/local/bin/gosu; \
    gosu --version; \
    gosu nobody true; \
    \
    wget -O /js-yaml.js "https://github.com/nodeca/js-yaml/raw/${JSYAML_VERSION}/dist/js-yaml.js"; \
# TODO some sort of download verification here
    \
    apt-get purge -y --auto-remove wget
RUN mkdir /docker-entrypoint-initdb.d
ENV GPG_KEYS E162F504A20CDF15827F718D4B7C549A058F8B6B
RUN set -ex; \
    export GNUPGHOME="$(mktemp -d)"; \
    for key in $GPG_KEYS; do \
        gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
    done; \
    gpg --batch --export $GPG_KEYS > /etc/apt/trusted.gpg.d/mongodb.gpg; \
    command -v gpgconf && gpgconf --kill all || :; \
    rm -r "$GNUPGHOME"; \
    apt-key list
# Allow build-time overrides (eg. to build image with MongoDB Enterprise version)
# Options for MONGO_PACKAGE: mongodb-org OR mongodb-enterprise
# Options for MONGO_REPO: repo.mongodb.org OR repo.mongodb.com
# Example: docker build --build-arg MONGO_PACKAGE=mongodb-enterprise --build-arg MONGO_REPO=repo.mongodb.com .
ARG MONGO_PACKAGE=mongodb-org-unstable
ARG MONGO_REPO=repo.mongodb.org
ENV MONGO_PACKAGE=${MONGO_PACKAGE} MONGO_REPO=${MONGO_REPO}
ENV MONGO_MAJOR 4.1
ENV MONGO_VERSION 4.1.10
# bashbrew-architectures:amd64 arm64v8 s390x
RUN echo "deb http://$MONGO_REPO/apt/ubuntu bionic/${MONGO_PACKAGE%-unstable}/$MONGO_MAJOR multiverse" | tee "/etc/apt/sources.list.d/${MONGO_PACKAGE%-unstable}.list"
RUN set -x \
    && apt-get update \
    && apt-get install -y \
        ${MONGO_PACKAGE}=$MONGO_VERSION \
        ${MONGO_PACKAGE}-server=$MONGO_VERSION \
        ${MONGO_PACKAGE}-shell=$MONGO_VERSION \
        ${MONGO_PACKAGE}-mongos=$MONGO_VERSION \
        ${MONGO_PACKAGE}-tools=$MONGO_VERSION \
    && rm -rf /var/lib/apt/lists/* \
    && rm -rf /var/lib/mongodb \
    && mv /etc/mongod.conf /etc/mongod.conf.orig
RUN mkdir -p /data/db /data/configdb \
    && chown -R mongodb:mongodb /data/db /data/configdb \
    && chmod g+w -R /data/db \
    && chmod g+w -R /data/configdb
VOLUME /data/db /data/configdb
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 27017
CMD ["mongod"]
and docker-compose below:
mongodb:
  build: ./dockerfiles/mongodb
  volumes:
    - ./data/mongodb/db:/data/db
    - ./data/mongodb/configdb:/data/configdb
  ports:
    - 7017:27017
  environment:
    - MONGO_INITDB_ROOT_USERNAME=super
    - MONGO_INITDB_ROOT_PASSWORD=123456
  restart: always
I expected the container to start successfully, but it failed with: IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating
If I remove the volumes section, it works!
The host is Ubuntu 18.04, and /data is writeable!
I have solved my problem, folks!
As I use VMware to run an Ubuntu server with a folder shared from Windows 7, the container volumes ended up on the shared folder.
Check the docker inspect output of the mongodb container:
"Mounts": [
{
"Type": "bind",
"Source": "/mnt/hgfs/ubuntu/dockers/data/mongodb/configdb",
"Destination": "/data/configdb",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/mnt/hgfs/ubuntu/dockers/data/mongodb/db",
"Destination": "/data/db",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
The Source paths contain the string "/mnt/hgfs/ubuntu", which tells us they are on the VMware shared folder; hgfs mounts generally do not support the file locking mongod needs, so /data/db behaves as read-only inside the container.
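A minimal fix sketch, assuming the paths below are free to use on the VM: keep the data directories on the guest's native filesystem instead of the hgfs share, for example in the compose file:
mongodb:
  volumes:
    - /var/lib/mongodb-data/db:/data/db
    - /var/lib/mongodb-data/configdb:/data/configdb
Any path outside /mnt/hgfs should work.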

How to create docker image for postgis that will enable extension at build time or before container fully running?

What I mean is that I want to create a docker image for postgis that is completely usable right after build, so that if a user runs
docker run -e POSTGRES_USER=user somepostgis
the user database is created and the extensions are already installed.
The official postgres image can't be used for that, AFAIK.
Basically I need to write a script and make it the entrypoint. This script should create the database and the extensions with the postgres server running on a different port, and then restart it on port 5432.
But I don't know sh and Docker well enough to do that. Right now it's saying that there is no pg_ctl command.
If you want to help, you can fork:
FROM ubuntu:15.04
#ENV RELEASE_NAME lsb_release -sc
#RUN apt-get update && apt-get install wget
#RUN echo "deb http://apt.postgresql.org/pub/repos/apt ${RELEASE_NAME}-pgdg main" >> /etc/apt/sources.list
#RUN cat /etc/apt/sources.list
#RUN wget --quiet -O - http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc | sudo apt-key add -
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        postgresql-9.4-postgis-2.1 \
        curl \
    && curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture)" \
    && curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture).asc" \
    && gpg --verify /usr/local/bin/gosu.asc \
    && rm /usr/local/bin/gosu.asc \
    && chmod +x /usr/local/bin/gosu \
    && apt-get purge -y --auto-remove curl
RUN mkdir /docker-entrypoint-initdb.d
COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
RUN chmod +x /docker-entrypoint.sh
RUN ls -l /docker-entrypoint.sh
EXPOSE 5432
CMD ["postgres"]
So I'm trying to do something like the following, but it doesn't work:
#!/bin/bash
# pg_ctl is not on PATH on Debian/Ubuntu; it lives in the versioned postgres bin dir,
# which is why the build reports "no pg_ctl command"
PATH="/usr/lib/postgresql/9.4/bin:$PATH"
# default the database name to the user name (the bare expansion in the original would
# try to execute the result; the leading `:` makes it a no-op assignment)
: "${POSTGRES_DB:=$POSTGRES_USER}"
# start postgres on a side port while we set things up (the flag is -o, not -0)
gosu postgres pg_ctl start -w -D "${PGDATA}" -o "-p 5433"
# -s (superuser) belongs on createuser, not createdb
gosu postgres createuser -p 5433 -s "${POSTGRES_USER}"
gosu postgres createdb -p 5433 -E UTF8 -O "${POSTGRES_USER}" "${POSTGRES_DB}"
gosu postgres psql -p 5433 -d "${POSTGRES_DB}" -c "create extension if not exists postgis;"
gosu postgres psql -p 5433 -d "${POSTGRES_DB}" -c "create extension if not exists postgis_topology;"
# restart on the default port 5432, again as the postgres user
gosu postgres pg_ctl -w -D "${PGDATA}" restart
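With those fixes in the entrypoint, the goal from the question would look roughly like this (assuming PGDATA is set in the image and points at an initialized cluster; on a fresh volume initdb still has to run before pg_ctl start):
docker build -t somepostgis .
docker run -e POSTGRES_USER=user -p 5432:5432 somepostgis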