Deploying Jenkins using skaffold via GitHub Action Runner - kubernetes

I am deploying Jenkins with Skaffold from a self-hosted GitHub Actions runner on Kubernetes.
Skaffold is installed on top of the default GitHub runner image, but the runner pod keeps crashing with a CrashLoopBackOff error and restarting.
I am not sure why this is happening.
When I deploy the runner on Google Kubernetes Engine, it fails with the following error:
'''
A runner exists with the same name
√ Successfully replaced the runner
√ Runner connection is good
# Runner settings
√ Settings Saved.
√ Connected to GitHub
Current runner version: '2.294.0'
2022-12-01 06:03:57Z: Listening for Jobs
Runner update in progress, do not shutdown runner.
Downloading 2.299.1 runner
Waiting for current job finish running.
Generate and execute update script.
Runner will exit shortly for update, should be back online within 10 seconds.
Runner update process finished.
Runner listener exit because of updating, re-launch runner in 5 seconds
Restarting runner...
/home/docker/actions-runner/run-helper.sh: line 20: /home/docker/actions-runner/bin/Runner.Listener: No such file or directory
Exiting with unknown error code: 127
Exiting runner...
'''
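From the log, the crash starts right after the runner self-updates from 2.294.0 to 2.299.1 and relaunches: run-helper.sh can no longer find bin/Runner.Listener. For reference, recent runner releases let you opt out of the self-update at registration time; a minimal sketch of the config.sh call used in the startup script below (the --disableupdate flag exists only in newer runner versions):
'''
./config.sh --name $RUNNER_NAME --labels ${RUNNER_LABELS} \
    --url https://github.com/${ORGANIZATION}/${GH_PROJECT} \
    --unattended --replace --disableupdate --token ${REG_TOKEN}
'''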
Following is the Dockerfile used for the runner:
'''
FROM ubuntu:22.04
# Installing Skaffold
RUN apt-get update -y && apt-get upgrade -y && apt-get install -y sudo
RUN apt-get install -y curl
RUN curl -LO https://storage.googleapis.com/skaffold/releases/v2.0.2/skaffold-linux-amd64 \
&& sudo chmod +x skaffold-linux-amd64 \
&& sudo mv skaffold-linux-amd64 /usr/local/bin/skaffold
# Install base packages and create the docker user
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update -y && apt-get upgrade -y && useradd -m docker
RUN apt-get install -y curl jq build-essential libssl-dev libffi-dev python3 python3-venv python3-dev ca-certificates gnupg2 iputils-ping software-properties-common apt-transport-https lsb-release git zip unzip postgresql-client python3-pip npm
RUN ln -sf /usr/bin/python3 /usr/bin/python
# set the github runner version
ARG RUNNER_VERSION="2.294.0"
# cd into the user directory, download and unzip the github actions runner
RUN cd /home/docker && mkdir actions-runner && cd actions-runner \
&& curl -O -L https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz \
&& tar xzf ./actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz
# install some additional dependencies
RUN chown -R docker ~docker && /home/docker/actions-runner/bin/installdependencies.sh
#Docker
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" && \
apt-get update && \
apt-get -y install docker-ce
# Downloading gcloud package
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
# Installing the package
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh
# Adding the package path to local
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
#Install Kubectl
RUN curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" \
&& chmod +x ./kubectl \
&& mv ./kubectl /usr/local/bin/kubectl
# copy over the start.sh script
COPY start.sh start.sh
# make the script executable
RUN chmod +x start.sh && mv start.sh /home/docker
# since the config and run script for actions are not allowed to be run by root,
# set the user to "docker" so all subsequent commands are run as the docker user
USER docker
# set the entrypoint to the start.sh script
ENTRYPOINT ["/home/docker/start.sh"]
'''
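For context, RUNNER_VERSION above is a build argument, so the image can be rebuilt against whatever version GitHub is currently pushing without editing the Dockerfile; a usage sketch (the image tag is illustrative):
'''
docker build --build-arg RUNNER_VERSION=2.299.1 -t actions-runner:2.299.1 .
'''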
Below is the startup script:
'''
#!/bin/bash
SNAPTIME=`date '+%Y%m%d%H%M%S'`
echo "Started $SNAPTIME"
ORGANIZATION=$ORGANIZATION
ACCESS_TOKEN=`cat /etc/pat/pat`
GH_PROJECT=$GH_PROJECT
RUNNER_NAME="${RUNNER_NAME:-RUN$SNAPTIME}"
RUNNER_LABELS="${RUNNER_LABELS:-simple}"
REG_TOKEN=$(curl -sX POST -H "Authorization: token ${ACCESS_TOKEN}" https://api.github.com/repos/${ORGANIZATION}/$GH_PROJECT/actions/runners/registration-token | jq .token --raw-output)
# gcloud auth activate-service-account --key-file=${GOOGLE_APPLICATION_CREDENTIALS}
cd /home/docker/actions-runner
./config.sh --name $RUNNER_NAME --labels ${RUNNER_LABELS} --url https://github.com/${ORGANIZATION}/${GH_PROJECT} --unattended --replace --token ${REG_TOKEN}
cleanup() {
    echo "Removing runner..."
    ./config.sh remove --unattended --token ${REG_TOKEN}
}
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM
./run.sh & wait $!
'''
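One caveat in the cleanup path above: REG_TOKEN is the registration token fetched at pod start, and GitHub registration tokens expire after roughly an hour, so removal can fail on long-lived pods. A hedged sketch that instead fetches a fresh token from the separate remove-token endpoint at shutdown:
'''
cleanup() {
    echo "Removing runner..."
    # registration tokens expire; request a removal token just-in-time
    REMOVE_TOKEN=$(curl -sX POST -H "Authorization: token ${ACCESS_TOKEN}" \
        https://api.github.com/repos/${ORGANIZATION}/${GH_PROJECT}/actions/runners/remove-token | jq -r .token)
    ./config.sh remove --unattended --token ${REMOVE_TOKEN}
}
'''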
The pod restarts whenever load comes into it:
runner-automation-dev-docker-595f48c7dc-k2wbz 1/2 CrashLoopBackOff 7 (67s ago) 18m
I am not sure what exactly is causing this issue.
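For anyone debugging the same symptom, the previous container's logs and the pod events are the first things to pull (pod name taken from the output above; the container name runner is an assumption, since the pod runs two containers):
'''
kubectl logs runner-automation-dev-docker-595f48c7dc-k2wbz -c runner --previous
kubectl describe pod runner-automation-dev-docker-595f48c7dc-k2wbz
'''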

Related

node-poppler installation of poppler-utils and poppler-data

https://github.com/Fdawgs/node-poppler
When following the readme, one thing I ran into is that poppler-utils and poppler-data didn't get installed into /usr/bin. Based on the readme, I expected them to be installed there by default.
After running a find inside the container, I found the files in /usr/share/doc. That doesn't seem right based on the readme.
How do I ensure poppler-utils and poppler-data end up in /usr/bin as the readme expects?
Ultimately this is the code I'm instantiating:
const poppler = new Poppler('/usr/bin');
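As a sanity check, dpkg can list exactly where each package put its files; poppler-utils should place its binaries under /usr/bin, while poppler-data only ships encoding data, not executables:
dpkg -L poppler-utils | grep /usr/bin
which pdftotext pdftoppm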
Dockerfile:
FROM docker.registry.sfg.corp.local/devops/nodejs-build-docker:16.16.60850 as build
ARG NAME
ARG IMAGE
ARG SNYK_TOKEN
ARG SNYK_ORGANIZATION
ARG SNYK_PROJECT
COPY . .
RUN apt-get update
RUN apt-get install poppler-utils -y
RUN apt-get install poppler-data -y
RUN chmod +x install-puppeteer.sh
RUN ./install-puppeteer.sh
RUN ./build.sh -n $NAME -i $IMAGE -t $SNYK_TOKEN -o $SNYK_ORGANIZATION -p $SNYK_PROJECT
FROM docker.registry.sfg.corp.local/node:16-buster
RUN apt-get update
RUN apt-get install poppler-utils -y
RUN apt-get install poppler-data -y
COPY ./install-puppeteer.sh .
RUN chmod +x install-puppeteer.sh
RUN ./install-puppeteer.sh
# add the chrome folder to the PATH
ENV PATH "$PATH:/opt/google/chrome"
COPY --from=build package.json package.json
COPY --from=build src src/
COPY --from=build node_modules node_modules/
COPY --from=build artifacts artifacts/
# Add tini to help prevent zombie chrome processes.
ADD https://github.com/krallin/tini/releases/download/v0.19.0/tini /tini
RUN chmod +x /tini
# Add user so we don't need --no-sandbox.
RUN groupadd -r pptruser && useradd -r -g pptruser -G audio,video pptruser \
&& mkdir -p /home/pptruser/Downloads \
&& mkdir -p /home/pptruser/.cache \
&& mkdir -p /home/pptruser/.cache/puppeteer \
&& chown -R pptruser:pptruser /home/pptruser \
&& chown -R pptruser:pptruser /node_modules
USER pptruser
RUN chmod +x node_modules/riley/bin/riley.sh
ENTRYPOINT ["/tini", "--"]
CMD ["node_modules/riley/bin/riley.sh"]

kubectl not found error in jenkins pipeline

stage('Deploy Test chart') {
    steps {
        container('ubuntu-kubectl-helm') {
            script {
                kubeconfig(credentialsId: 'kubeconfig-test') {
                    sh "which kubectl"
                    sh "kubectl config view"
                    sh "kubectl get nodes"
                    sh "kubectl get pods -n jenkins-new"
                    sh "kubectl get pods -n test"
                }
            }
        }
    }
}
We have three containers to run the whole pipeline, and we created an Ubuntu container to run the Kubernetes tooling (we installed kubectl on it). When we run this step as part of the whole pipeline, it gives us errors.
ERROR: Failed to run "kubectl version". Returned status code 127.
stdout:
sh: 47: kubectl: not found
But when we run this step separately as its own pipeline, it works and we get the results. Now we are stuck and unable to figure out how to proceed.
We also checked the environment PATH and verified whether the step was actually running in the Ubuntu container.
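For example, temporary sh steps like these inside the container block show what the step actually sees (illustrative):
sh 'command -v kubectl || echo kubectl not on PATH'
sh 'echo $PATH; cat /etc/os-release'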
You need to install kubectl inside the Jenkins image, the same as if you want to use Docker in your pipeline.
Here is my Dockerfile which does it:
FROM jenkins/jenkins
ARG HOST_UID=1004
ARG HOST_GID=999
USER root
RUN apt-get -y update && \
    apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable" && \
    apt-get update && \
    apt-get -y install docker-ce docker-ce-cli containerd.io
RUN curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose \
    && chmod +x /usr/local/bin/docker-compose \
    && ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose \
    && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list \
    && apt-get -y update \
    && apt-get install -y kubectl
RUN usermod -u $HOST_UID jenkins
RUN groupmod -g $HOST_GID docker
RUN usermod -aG docker jenkins
USER jenkins
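A usage sketch for building it; matching HOST_UID/HOST_GID to the host is what lets the jenkins user reach the host's Docker socket (the group lookup is an assumption about how the host's docker group is resolved):
docker build -t jenkins-docker-kubectl \
    --build-arg HOST_UID=$(id -u) \
    --build-arg HOST_GID=$(getent group docker | cut -d: -f3) .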

ECS container exit code 2

I am creating an ECS cluster with the Docker image library/wordpress:latest and I get the desired task in a running state. But when I build the image with the following Dockerfile, push it to my Docker Hub repo, and then try to create the cluster using my new image, the containers fail with exit code 2.
Could you please suggest what I am doing wrong here?
#Base image
FROM wordpress:latest
LABEL version="latest" maintainer="xxxxxxx <xxxxxx>"
# Update apt
RUN apt-get update
# Add a user for running applications.
RUN useradd apps
RUN mkdir -p /home/apps && chown apps:apps /home/apps
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
# Install all necessary packages
RUN apt-get -y install build-essential libpoppler-cpp-dev pkg-config x11vnc xvfb fluxbox wget wmctrl gnupg2 unzip zip
# Set the Chrome repo.
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list
# Install Chrome.
RUN apt-get update && apt-get -y install google-chrome-stable
# Install Chrome driver
RUN wget https://chromedriver.storage.googleapis.com/94.0.4606.61/chromedriver_linux64.zip \
&& unzip chromedriver_linux64.zip \
&& mv chromedriver /usr/bin/chromedriver \
&& chown root:root /usr/bin/chromedriver \
&& chmod +x /usr/bin/chromedriver
# create folder to store requirements.txt file
RUN mkdir /home/automation
RUN mkdir /home/automation/FrontEnd
WORKDIR /home/automation
# Copy config and scripts
COPY requirements.txt ./requirements.txt
COPY TestSuites /home/automation/FrontEnd/TestSuites
COPY Resources /home/automation/FrontEnd/Resources
COPY TestRunner.py /home/automation/FrontEnd
COPY TestRail/ /home/automation/TestRail
COPY run-frontend-tests.sh /home/automation/run-tests.sh
COPY FrontEndResultParser.py /home/automation/FrontEnd/FrontEndResultParser.py
# Install python 3.9 and pip3
RUN apt-get -y install python3-dev python3.9 python3-pip
# Install dependencies
RUN pip install "setuptools==58.0.0"
RUN pip install -r requirements.txt
CMD ["sh", "run-tests.sh"]
I am basically just trying to run a script in the container.
I used a WordPress image and built my own image out of it, thinking it would keep the container up and my script would be executed, but that didn't happen. My ECS cluster didn't have any running task; all I saw in the events was service stage-fe-auto has started 1 tasks: task e83587e734c94f77. When I opened the task details, it showed Exit Code 2 and working directory /home/app, but in my Dockerfile my working directory is different. Not sure what I did wrong.
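When an ECS task stops like this, the stopped task record usually carries a per-container reason alongside the exit code; a sketch with the AWS CLI (the cluster name and task ID are placeholders):
aws ecs list-tasks --cluster my-cluster --desired-status STOPPED
aws ecs describe-tasks --cluster my-cluster --tasks <task-id> \
    --query 'tasks[].containers[].[name,exitCode,reason]'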

How to mount devcontainer to $HOME

I'm trying to set up a devcontainer that mounts the VS Code workspace to /home/node and puts everything in the project into $HOME inside Docker.
The reason I want this is so that when the project mounts into Docker, everything is contained within the individual project I'm working on.
I get an error when running with this configuration.
Does anyone have suggestions on how to achieve this?
This is my devcontainer.json:
// For format details, see https://aka.ms/vscode-remote/devcontainer.json or the definition README at
// https://github.com/microsoft/vscode-dev-containers/tree/master/containers/typescript-node-lts
{
    "name": "Node.js (latest LTS) & TypeScript",
    "dockerFile": "Dockerfile",
    // Use 'settings' to set *default* container specific settings.json values on container create.
    // You can edit these settings after create using File > Preferences > Settings > Remote.
    "settings": {
        "terminal.integrated.shell.linux": "/bin/bash"
    },
    // Uncomment the next line if you want to publish any ports.
    "appPort": [3000, 8000, 6006],
    // Uncomment the next line to run commands after the container is created.
    "postCreateCommand": "mkdir ~/.ssh && cp .ssh/* ~/.ssh && chmod 600 ~/.ssh/id_rsa",
    // Uncomment the next line to use a non-root user. On Linux, this will prevent
    // new files getting created as root, but you may need to update the USER_UID
    // and USER_GID in .devcontainer/Dockerfile to match your user if not 1000.
    "runArgs": ["-u", "node"],
    // Add the IDs of extensions you want installed when the container is created in the array below.
    "extensions": [
        "dbaeumer.vscode-eslint",
        "esbenp.prettier-vscode",
        "Orta.vscode-jest"
    ],
    "workspaceMount": "src=/home/node,dst=/home/node,type=volume,volume-driver=local",
    "workspaceFolder": "/home/node"
}
and my Dockerfile:
#-------------------------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See https://go.microsoft.com/fwlink/?linkid=2090316 for license information.
#-------------------------------------------------------------------------------------------------------------
FROM node:lts
# Avoid warnings by switching to noninteractive
ENV DEBIAN_FRONTEND=noninteractive
# The node image comes with a base non-root 'node' user which this Dockerfile
# gives sudo access. However, for Linux, this user's GID/UID must match your local
# user UID/GID to avoid permission issues with bind mounts. Update USER_UID / USER_GID
# if yours is not 1000. See https://aka.ms/vscode-remote/containers/non-root-user.
ARG USER_UID=1000
ARG USER_GID=$USER_UID
# Configure apt and install packages
RUN apt-get update \
    && apt-get -y install --no-install-recommends apt-utils dialog 2>&1 \
    #
    # Verify git and needed tools are installed
    && apt-get -y install git iproute2 procps \
    #
    # Remove outdated yarn from /opt and install via package
    # so it can be easily updated via apt-get upgrade yarn
    && rm -rf /opt/yarn-* \
    && rm -f /usr/local/bin/yarn \
    && rm -f /usr/local/bin/yarnpkg \
    && apt-get install -y curl apt-transport-https lsb-release \
    && curl -sS https://dl.yarnpkg.com/$(lsb_release -is | tr '[:upper:]' '[:lower:]')/pubkey.gpg | apt-key add - 2>/dev/null \
    && echo "deb https://dl.yarnpkg.com/$(lsb_release -is | tr '[:upper:]' '[:lower:]')/ stable main" | tee /etc/apt/sources.list.d/yarn.list \
    && apt-get update \
    && apt-get -y install --no-install-recommends yarn \
    #
    # Install tools globally
    && npm install -g jest prettier eslint typescript localtunnel \
    #
    # [Optional] Update a non-root user to match UID/GID - see https://aka.ms/vscode-remote/containers/non-root-user.
    && if [ "$USER_GID" != "1000" ]; then groupmod node --gid $USER_GID; fi \
    && if [ "$USER_UID" != "1000" ]; then usermod --uid $USER_UID node; fi \
    # [Optional] Add sudo support for non-root user
    && apt-get install -y sudo \
    && echo node ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/node \
    && chmod 0440 /etc/sudoers.d/node \
    #
    # Clean up
    && apt-get autoremove -y \
    && apt-get clean -y \
    && rm -rf /var/lib/apt/lists/*
# Switch back to dialog for any ad-hoc use of apt-get
ENV DEBIAN_FRONTEND=
Not 100% sure, as you did not share the error you encountered, but it might be because you use:
"workspaceMount": "src=/home/node,dst=/home/node,type=volume,volume-driver=local",
when it's not a volume in your case. The type should be bind here.
See here for more information.
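For reference, a bind-style mount of the local project into /home/node would look something like this in devcontainer.json (${localWorkspaceFolder} is the devcontainer variable for the folder opened in VS Code):
"workspaceMount": "source=${localWorkspaceFolder},target=/home/node,type=bind",
"workspaceFolder": "/home/node"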

Installation of debian file failing in Grafana

I've got a custom-built Grafana Docker image that I build using
go run build.go build package
This all works fine, and I get a deb package from the process (grafana_4.3.0-1490275845pre1_amd64.deb), as well as a .tar.gz file and an rpm package.
When using the Dockerfile (essentially copied from grafana/grafana-docker):
FROM debian:jessie
COPY ./grafana.deb /tmp/grafana.deb
RUN apt-get update && \
    apt-get -y --no-install-recommends install libfontconfig curl ca-certificates && \
    apt-get clean && \
    dpkg -i --debug=3773 /tmp/grafana.deb && \
    rm /tmp/grafana.deb && \
    curl -L https://github.com/tianon/gosu/releases/download/1.7/gosu-amd64 > /usr/sbin/gosu && \
    chmod +x /usr/sbin/gosu && \
    apt-get remove -y curl && \
    apt-get autoremove -y && \
    rm -rf /var/lib/apt/lists/*
I get the following error:
dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/grafana.postinst): No such file or directory
dpkg: error processing package grafana (--install):
subprocess installed post-installation script returned error exit status 2
D000001: ensure_diversions: same, skipping
D000002: fork/exec /var/lib/dpkg/info/systemd.postinst ( triggered /etc/init.d )
D000001: ensure_diversions: same, skipping
Errors were encountered while processing:
grafana
Setting up grafana (4.3.0-1490275845pre1) ...
Processing triggers for systemd (215-17+deb8u6) ...
The command '/bin/sh -c apt-get update && apt-get -y --no-install-recommends install libfontconfig curl ca-certificates && apt-get clean && dpkg -i -- debug=3773 --force-all /tmp/grafana.deb && rm /tmp/grafana.deb && curl -L https://github.com/tianon/gosu/releases/download/1.7/gosu-amd64 > /usr/sbin/gosu && chmod +x /usr/sbin/gosu && apt-get remove -y curl && apt-get autoremove -y && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 1
The obvious issue is (/var/lib/dpkg/info/grafana.postinst): No such file or directory, but not knowing anything about dpkg, I don't really know where to start trying to debug it. As far as I'm aware, I've not altered the deployment scripts, so I'm at a loss as to where the issue has arisen.
As I was developing Grafana in a shared Windows folder, with Grafana running in a Docker container on VirtualBox, it seems that (despite my not editing the files) SourceTree or something else touched the source and added Windows line endings, which broke the packaging step. I just used dos2unix to strip the Windows line endings and everything started working as expected.
The particular error message was related to line endings in the postinst file, which I debugged manually with bash on the VM.
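For anyone hitting the same symptom, the fix boils down to normalizing line endings on the deb maintainer scripts before packaging; a sketch (the packaging/deb path is an assumption about the Grafana source layout):
# convert CRLF -> LF on the packaging scripts, then rebuild
find packaging/deb -type f -exec dos2unix {} +
go run build.go build package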