Trigger Jenkins job with i/o timeout error, using GitHub Actions

Could anyone help me understand the reason for this error:
Post http://192.168.100.5:8080/job/pipeline-selenium-tests/build: dial tcp 192.168.100.5:8080: i/o timeout
##[debug]Docker Action run completed with exit code 1
##[debug]Finishing: trigger single Job
Here is my YAML file, created under the .github/workflows path:
name: trigger pipeline on merged code changes
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
    types: [ opened, reopened ]
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - name: trigger single Job
        uses: appleboy/jenkins-action@master
        with:
          url: "http://192.168.100.5:8080"
          user: "ivan"
          token: ${{ secrets.TOKEN }}
          job: "pipeline-selenium-tests"
Jenkins is running from the Dockerfile below:
FROM jenkins/jenkins:2.359
USER root

# install maven inside container
ARG MAVEN_VERSION=3.6.3
ARG USER_HOME_DIR="/root"
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries
RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
    && curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
    && tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
    && rm -f /tmp/apache-maven.tar.gz \
    && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn

# install docker inside container
RUN mkdir -p /tmp/download && \
    curl -L https://download.docker.com/linux/static/stable/x86_64/docker-18.03.1-ce.tgz | tar -xz -C /tmp/download && \
    rm -rf /tmp/download/docker/dockerd && \
    mv /tmp/download/docker/docker* /usr/local/bin/ && \
    rm -rf /tmp/download && \
    groupadd -g 999 docker && \
    usermod -aG staff,docker jenkins

RUN curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose && \
    chmod +x /usr/local/bin/docker-compose && \
    ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
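For reference, a minimal way to build and run this image might look like the sketch below; the image name, volume name, and port mappings are assumptions, and the docker.sock mount is only relevant because the image installs a Docker client:
docker build -t jenkins-maven .   # image name is a placeholder
docker run -d -p 8080:8080 -p 50000:50000 \
    -v jenkins_home:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    jenkins-maven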
I also generated a token for the current Jenkins user and added the required configuration for the job.
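As a sanity check (a sketch; $JENKINS_TOKEN stands in for the generated API token), the same trigger can be reproduced with curl against the endpoint from the error message. If this only works from inside your network, a GitHub-hosted runner cannot reach 192.168.100.5 either, since that is a private address:
curl -X POST -u "ivan:$JENKINS_TOKEN" \
    "http://192.168.100.5:8080/job/pipeline-selenium-tests/build"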
I also see this in the Jenkins logs:
WARNING hudson.security.csrf.CrumbFilter#doFilter: No valid crumb was included in request for /manage/configureSecurity/configure by ivan. Returning 403.

Related

Deploying Jenkins using skaffold via GitHub Action Runner

I am deploying Jenkins with Skaffold using a GitHub Actions runner. Skaffold is installed on top of the default GitHub runner image.
The pod keeps restarting due to a CrashLoopBackOff error.
I am not sure why it is happening.
When I deploy the runner on Google Kubernetes Engine, it fails with the following error:
A runner exists with the same name
√ Successfully replaced the runner
√ Runner connection is good
# Runner settings
√ Settings Saved.
√ Connected to GitHub
Current runner version: '2.294.0'
2022-12-01 06:03:57Z: Listening for Jobs
Runner update in progress, do not shutdown runner.
Downloading 2.299.1 runner
Waiting for current job finish running.
Generate and execute update script.
Runner will exit shortly for update, should be back online within 10 seconds.
Runner update process finished.
Runner listener exit because of updating, re-launch runner in 5 seconds
Restarting runner...
/home/docker/actions-runner/run-helper.sh: line 20: /home/docker/actions-runner/bin/Runner.Listener: No such file or directory
Exiting with unknown error code: 127
Exiting runner...
Following is the Dockerfile used for the runner:
FROM ubuntu:22.04

# installing skaffold
RUN apt-get update -y && apt-get upgrade -y sudo
RUN apt-get install -y curl
RUN curl -LO https://storage.googleapis.com/skaffold/releases/v2.0.2/skaffold-linux-amd64 \
    && sudo chmod +x skaffold-linux-amd64 \
    && sudo mv skaffold-linux-amd64 /usr/local/bin/skaffold

# install
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update -y && apt-get upgrade -y && useradd -m docker
RUN apt-get install -y curl jq build-essential libssl-dev libffi-dev python3 python3-venv python3-dev ca-certificates gnupg2 iputils-ping software-properties-common apt-transport-https lsb-release git zip unzip postgresql-client python3-pip npm
RUN ln -sf /usr/bin/python3 /usr/bin/python

# set the github runner version
ARG RUNNER_VERSION="2.294.0"

# cd into the user directory, download and unzip the github actions runner
RUN cd /home/docker && mkdir actions-runner && cd actions-runner \
    && curl -O -L https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz \
    && tar xzf ./actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz

# install some additional dependencies
RUN chown -R docker ~docker && /home/docker/actions-runner/bin/installdependencies.sh

# Docker
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" && \
    apt-get update && \
    apt-get -y install docker-ce

# Downloading gcloud package
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz

# Installing the package
RUN mkdir -p /usr/local/gcloud \
    && tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
    && /usr/local/gcloud/google-cloud-sdk/install.sh

# Adding the package path to local
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin

# Install kubectl
RUN curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" \
    && chmod +x ./kubectl \
    && mv ./kubectl /usr/local/bin/kubectl

# copy over the start.sh script
COPY start.sh start.sh

# make the script executable
RUN chmod +x start.sh && mv start.sh /home/docker

# since the config and run script for actions are not allowed to be run by root,
# set the user to "docker" so all subsequent commands are run as the docker user
USER docker

# set the entrypoint to the start.sh script
ENTRYPOINT ["/home/docker/start.sh"]
Below is the startup script:
#!/bin/bash
SNAPTIME=`date '+%Y%m%d%H%M%S'`
echo "Started $SNAPTIME"
ORGANIZATION=$ORGANIZATION
ACCESS_TOKEN=`cat /etc/pat/pat`
GH_PROJECT=$GH_PROJECT
RUNNER_NAME="${RUNNER_NAME:-RUN$SNAPTIME}"
RUNNER_LABELS="${RUNNER_LABELS:-simple}"
REG_TOKEN=$(curl -sX POST -H "Authorization: token ${ACCESS_TOKEN}" https://api.github.com/repos/${ORGANIZATION}/$GH_PROJECT/actions/runners/registration-token | jq .token --raw-output)
# gcloud auth activate-service-account --key-file=${GOOGLE_APPLICATION_CREDENTIALS}
cd /home/docker/actions-runner
./config.sh --name $RUNNER_NAME --labels ${RUNNER_LABELS} --url https://github.com/${ORGANIZATION}/${GH_PROJECT} --unattended --replace --token ${REG_TOKEN}
cleanup() {
    echo "Removing runner..."
    ./config.sh remove --unattended --token ${REG_TOKEN}
}
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM
./run.sh & wait $!
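For reference, the script expects ORGANIZATION and GH_PROJECT in the environment and a PAT mounted at /etc/pat/pat, so a hypothetical local run could look like this (the image name and host path are placeholders):
docker run -e ORGANIZATION=my-org -e GH_PROJECT=my-repo \
    -v /path/to/pat-file:/etc/pat/pat:ro my-runner-image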
The pod restarts whenever load comes into it:
runner-automation-dev-docker-595f48c7dc-k2wbz 1/2 CrashLoopBackOff 7 (67s ago) 18m
I am not sure what exactly is causing this issue.
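One detail worth noting from the log: the crash happens right after the runner self-updates from 2.294.0 to 2.299.1 and then cannot find Runner.Listener. As a hedged suggestion rather than a confirmed fix, pinning RUNNER_VERSION to a current release and registering with --disableupdate should prevent the in-container self-update, e.g.:
./config.sh --name $RUNNER_NAME --labels ${RUNNER_LABELS} \
    --url https://github.com/${ORGANIZATION}/${GH_PROJECT} \
    --unattended --replace --disableupdate --token ${REG_TOKEN}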

kubectl not found error in jenkins pipeline

stage('Deploy Test chart') {
    steps {
        container('ubuntu-kubectl-helm') {
            script {
                kubeconfig(credentialsId: 'kubeconfig-test') {
                    sh "which kubectl"
                    sh "kubectl config view"
                    sh "kubectl get nodes"
                    sh "kubectl get pods -n jenkins-new"
                    sh "kubectl get pods -n test"
                }
            }
        }
    }
}
We have 3 containers to run the whole pipeline, and we created an Ubuntu container to run kubectl commands in (we have installed kubectl on it). When we run this step as part of the whole pipeline, it gives us errors.
ERROR: Failed to run "kubectl version". Returned status code 127.
stdout:
sh: 47: kubectl: not found
But when we run this step separately as its own pipeline, it works and we get the results. Now we are stuck and unable to figure out how to proceed.
We also checked the environment PATH and verified whether the step was actually running in the Ubuntu container or not.
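For reference, a quick way to confirm what the step actually sees (a sketch to paste into the existing sh steps inside the container('ubuntu-kubectl-helm') block above):
echo "PATH=$PATH"
command -v kubectl || echo "kubectl is not on PATH in this container"
head -n 2 /etc/os-release   # confirms which container image the step is running in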
Inside Jenkins you need to install kubectl; the same applies if you want to use Docker in your pipeline.
Here is my Dockerfile which does it:
FROM jenkins/jenkins
ARG HOST_UID=1004
ARG HOST_GID=999
USER root
RUN apt-get -y update && \
    apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable" && \
    apt-get update && \
    apt-get -y install docker-ce docker-ce-cli containerd.io
RUN curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose \
    && chmod +x /usr/local/bin/docker-compose \
    && ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose \
    && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list \
    && apt-get -y update \
    && apt install -y kubectl
RUN usermod -u $HOST_UID jenkins
RUN groupmod -g $HOST_GID docker
RUN usermod -aG docker jenkins
USER jenkins
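A hypothetical build and run for this image, wiring the UID/GID build args to your host's values (the image name and port mapping are placeholders):
docker build -t jenkins-kubectl --build-arg HOST_UID=1004 --build-arg HOST_GID=999 .
docker run -d -p 8080:8080 \
    -v /var/run/docker.sock:/var/run/docker.sock jenkins-kubectl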

Error deploying a custom image with the crunchydata postgres operator

I've created a custom image based on timescaledb in which I've installed wal2json, and I'm trying to deploy it on my kubernetes cluster using the crunchydata postgres-operator. I've managed to set everything up except the credentials to access the database.
I'm trying to create the pgo cluster with the following command:
pgo create cluster my-db --ccp-image-prefix="eu.gcr.io/<project-id>" --ccp-image="timescale-custom" -c latest -d <dbname> -u <username> --password="<my_password>"
This command executes successfully, but the database deployment enters a CrashLoopBackOff because of the following error:
Error: Database is uninitialized and superuser password is not specified.
You must specify POSTGRES_PASSWORD to a non-empty value for the
superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
You may also use "POSTGRES_HOST_AUTH_METHOD=trust" to allow all
connections without a password. This is *not* recommended.
See PostgreSQL documentation about "trust":
https://www.postgresql.org/docs/current/auth-trust.html
Seeing this, I tried to set POSTGRES_PASSWORD and POSTGRES_USER in the Dockerfile, but this did not alleviate the problem.
I know that usually these environment variables are set in the k8s deployment.yaml or in docker-compose.yml. But even though the postgres operator has some default credentials, they don't seem to be applied to the container.
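One way to see which credentials the operator actually created is to look at the secrets it generates for the cluster. This is a sketch: the secret name below follows pgo v4 naming conventions and is an assumption, while the pgo namespace comes from the debug output further down:
kubectl -n pgo get secrets | grep my-db
kubectl -n pgo get secret my-db-postgres-secret \
    -o jsonpath='{.data.password}' | base64 -d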
Dockerfile:
FROM postgres:12 AS build
ENV VERSION 1_0
RUN buildDeps="curl build-essential ca-certificates git pkg-config glib2.0 postgresql-server-dev-$PG_MAJOR" \
    && apt-get update \
    && apt-get install -y --no-install-recommends ${buildDeps} \
    && echo "deb http://apt.postgresql.org/pub/repos/apt/ stretch-pgdg main" > /etc/apt/sources.list.d/pgdg.list \
    && curl https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
    && apt-get update \
    && apt-get install -y --no-install-recommends libc++1 postgresql-server-dev-$PG_MAJOR \
    && mkdir -p /tmp/build \
    && curl -o /tmp/build/${VERSION}.tar.gz -SL "https://github.com/eulerto/wal2json/archive/wal2json_${VERSION}.tar.gz" \
    && cd /tmp/build/ \
    && tar -xzf /tmp/build/${VERSION}.tar.gz -C /tmp/build/ \
    && cd /tmp/build/wal2json-wal2json_${VERSION} \
    && make && make install \
    && mkdir /outputs \
    && cp wal2json.so /outputs/ \
    && cd / \
    && rm -rf /tmp/build \
    && apt-get remove -y --purge ${buildDeps} \
    && apt-get autoremove -y --purge \
    && rm -rf /var/lib/apt/lists/

FROM timescale/timescaledb:1.7.4-pg12
COPY --from=build /outputs/wal2json.so /usr/local/lib/postgresql/
RUN echo "host replication all 127.0.0.1/32 trust" >> /var/lib/postgresql/data/pg_hba.conf
ENV POSTGRES_USER=<username> POSTGRES_PASSWORD=<password>
Edit:
After running the same command with the --debug flag, I received this output:
DEBU[0000] debug flag is set to true
DEBU[0000] in initConfig with url=https://127.0.0.1:8443
DEBU[0000] using PGO_NAMESPACE env var pgo
DEBU[0000] GetSessionCredentials called
DEBU[0000] PGOUSER environment variable is being used at /home/mycloud/.pgo/pgo/pgouser
DEBU[0000] pgouser file found at /home/mycloud/.pgo/pgo/pgouser contains admin:examplepassword
DEBU[0000] [admin examplepassword]
DEBU[0000] username=[admin] password=[examplepassword]
DEBU[0000] setting up httpclient with TLS
DEBU[0000] GetTLSTransport called
DEBU[0000] create cluster called
DEBU[0000] IsValidForResourceName: my-db
DEBU[0000] createCluster called...[https://127.0.0.1:8443/clusters]
DEBU[0000] &{200 OK 200 HTTP/1.1 1 1 map[Content-Length:[193] Content-Type:[application/json] Date:[Sat, 09 Jan 2021 19:56:05 GMT] Www-Authenticate:[Basic realm="Restricted"]] 0xc00017c080 193 [] false false map[]

mongodb container start failed with error:IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating

My mongodb docker container fails to start with the error: IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating.
I use the official mongodb Dockerfile to build the container, and docker-compose to start it.
Here is my Dockerfile:
FROM ubuntu:bionic

# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r mongodb && useradd -r -g mongodb mongodb

RUN echo "deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse" > /etc/apt/sources.list \
    && echo "deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse" >> /etc/apt/sources.list \
    && echo "deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse" >> /etc/apt/sources.list \
    && echo "deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse" >> /etc/apt/sources.list

# note: this export only affects this single RUN layer, not the rest of the build
RUN export all_proxy=http://192.168.1.177:1080

RUN set -eux; \
    apt-get update; \
    apt-get install -y --no-install-recommends \
        ca-certificates \
        jq \
        numactl \
    ; \
    if ! command -v ps > /dev/null; then \
        apt-get install -y --no-install-recommends procps; \
    fi; \
    rm -rf /var/lib/apt/lists/*

# grab gosu for easy step-down from root (https://github.com/tianon/gosu/releases)
ENV GOSU_VERSION 1.11
# grab "js-yaml" for parsing mongod's YAML config files (https://github.com/nodeca/js-yaml/releases)
ENV JSYAML_VERSION 3.13.0

RUN mkdir ~/.gnupg && echo "disable-ipv6" >> ~/.gnupg/dirmngr.conf

RUN set -ex; \
    \
    apt-get update; \
    apt-get install -y --no-install-recommends \
        wget \
    ; \
    if ! command -v gpg > /dev/null; then \
        apt-get install -y --no-install-recommends gnupg dirmngr; \
    fi; \
    rm -rf /var/lib/apt/lists/*; \
    \
    dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
    wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
    wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
    export GNUPGHOME="$(mktemp -d)"; \
    gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
    gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
    command -v gpgconf && gpgconf --kill all || :; \
    rm -r "$GNUPGHOME" /usr/local/bin/gosu.asc; \
    chmod +x /usr/local/bin/gosu; \
    gosu --version; \
    gosu nobody true; \
    \
    wget -O /js-yaml.js "https://github.com/nodeca/js-yaml/raw/${JSYAML_VERSION}/dist/js-yaml.js"; \
# TODO some sort of download verification here
    \
    apt-get purge -y --auto-remove wget

RUN mkdir /docker-entrypoint-initdb.d

ENV GPG_KEYS E162F504A20CDF15827F718D4B7C549A058F8B6B
RUN set -ex; \
    export GNUPGHOME="$(mktemp -d)"; \
    for key in $GPG_KEYS; do \
        gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
    done; \
    gpg --batch --export $GPG_KEYS > /etc/apt/trusted.gpg.d/mongodb.gpg; \
    command -v gpgconf && gpgconf --kill all || :; \
    rm -r "$GNUPGHOME"; \
    apt-key list

# Allow build-time overrides (eg. to build image with MongoDB Enterprise version)
# Options for MONGO_PACKAGE: mongodb-org OR mongodb-enterprise
# Options for MONGO_REPO: repo.mongodb.org OR repo.mongodb.com
# Example: docker build --build-arg MONGO_PACKAGE=mongodb-enterprise --build-arg MONGO_REPO=repo.mongodb.com .
ARG MONGO_PACKAGE=mongodb-org-unstable
ARG MONGO_REPO=repo.mongodb.org
ENV MONGO_PACKAGE=${MONGO_PACKAGE} MONGO_REPO=${MONGO_REPO}

ENV MONGO_MAJOR 4.1
ENV MONGO_VERSION 4.1.10
# bashbrew-architectures:amd64 arm64v8 s390x
RUN echo "deb http://$MONGO_REPO/apt/ubuntu bionic/${MONGO_PACKAGE%-unstable}/$MONGO_MAJOR multiverse" | tee "/etc/apt/sources.list.d/${MONGO_PACKAGE%-unstable}.list"

RUN set -x \
    && apt-get update \
    && apt-get install -y \
        ${MONGO_PACKAGE}=$MONGO_VERSION \
        ${MONGO_PACKAGE}-server=$MONGO_VERSION \
        ${MONGO_PACKAGE}-shell=$MONGO_VERSION \
        ${MONGO_PACKAGE}-mongos=$MONGO_VERSION \
        ${MONGO_PACKAGE}-tools=$MONGO_VERSION \
    && rm -rf /var/lib/apt/lists/* \
    && rm -rf /var/lib/mongodb \
    && mv /etc/mongod.conf /etc/mongod.conf.orig

RUN mkdir -p /data/db /data/configdb \
    && chown -R mongodb:mongodb /data/db /data/configdb \
    && chmod g+w -R /data/db \
    && chmod g+w -R /data/configdb
VOLUME /data/db /data/configdb

COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]

EXPOSE 27017
CMD ["mongod"]
and my docker-compose service below:
mongodb:
  build: ./dockerfiles/mongodb
  volumes:
    - ./data/mongodb/db:/data/db
    - ./data/mongodb/configdb:/data/configdb
  ports:
    - 7017:27017
  environment:
    - MONGO_INITDB_ROOT_USERNAME=super
    - MONGO_INITDB_ROOT_PASSWORD=123456
  restart: always
I expected the container to start successfully, but it failed with: IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating.
If I remove the volumes section, it works!
The host is Ubuntu 18.04, and /data is writable!
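One way to test writability from the container's point of view rather than the host's (a sketch; the mongodb service name comes from the compose file, and gosu is installed by the Dockerfile above):
docker-compose run --rm --entrypoint sh mongodb -c \
    'gosu mongodb touch /data/db/.write-test && echo writable || echo not-writable-for-mongodb-user'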
I solved my problem, folks!
Since I use VMware to run an Ubuntu server and shared a folder with Windows 7, the container volumes ended up on that shared folder.
Check the inspect information of the mongodb container:
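For example, the mounts alone can be pulled with the command below (the mongodb service name comes from the compose file):
docker inspect --format '{{json .Mounts}}' $(docker-compose ps -q mongodb)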
"Mounts": [
{
"Type": "bind",
"Source": "/mnt/hgfs/ubuntu/dockers/data/mongodb/configdb",
"Destination": "/data/configdb",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/mnt/hgfs/ubuntu/dockers/data/mongodb/db",
"Destination": "/data/db",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
The Source paths contain the string "/mnt/hgfs/ubuntu", which tells us: it's the shared folder! Moving the bind-mount sources onto the VM's own filesystem, instead of the hgfs share, lets mongod create its lock file.

Jar file is not executed with Dockerfile

I am building a Docker image of my application, and I would like to run the jar file when I run the image. However, I get this error:
Could not find or load main class
The main class is set in the jar's manifest file. If I run the jar from a terminal or a bash script, it works fine, so this error is only observed when running through Docker:
docker run -v my-volume:/workdir container-name
Are some configurations missing in my Dockerfile, or should the jar file be copied/added?
Here is my Dockerfile:
FROM java:8
ENV SCALA_VERSION 2.11.8
ENV SBT_VERSION 1.1.1
ENV SPARK_VERSION 2.2.0
ENV SPARK_DIST spark-$SPARK_VERSION-bin-hadoop2.6
ENV SPARK_ARCH $SPARK_DIST.tgz
WORKDIR /opt

# Install Scala
RUN \
    cd /root && \
    curl -o scala-$SCALA_VERSION.tgz http://downloads.typesafe.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.tgz && \
    tar -xf scala-$SCALA_VERSION.tgz && \
    rm scala-$SCALA_VERSION.tgz && \
    echo >> /root/.bashrc && \
    echo 'export PATH=~/scala-$SCALA_VERSION/bin:$PATH' >> /root/.bashrc

# Install SBT
RUN \
    curl -L -o sbt-$SBT_VERSION.deb https://dl.bintray.com/sbt/debian/sbt-$SBT_VERSION.deb && \
    dpkg -i sbt-$SBT_VERSION.deb && \
    rm sbt-$SBT_VERSION.deb

# Install Spark
RUN \
    cd /opt && \
    curl -o $SPARK_ARCH http://d3kbcqa49mib13.cloudfront.net/$SPARK_ARCH && \
    tar xvfz $SPARK_ARCH && \
    rm $SPARK_ARCH && \
    echo 'export PATH=$SPARK_DIST/bin:$PATH' >> /root/.bashrc

EXPOSE 9851 9852 4040 9092 9200 9300 5601 7474 7687 7473
VOLUME /workdir
CMD java -cp "target/scala-2.11/demo_consumer.jar" consumer.SparkConsumer
I believe this is because the commands executed in the Docker container are not run from the right folder. You could try executing them from the workdir:
docker run -v my-volume:/workdir -w /workdir container_name
If that does not work, you could inspect what's inside the container, either with an ls:
docker run -v my-volume:/workdir -w /workdir container_name bash -c 'ls -lah'
Or by accessing its bash session:
docker run -v my-volume:/workdir -w /workdir container_name bash
P.S.: if bash does not work, try sh.
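Alternatively, a small sketch of the Dockerfile-side fix implied above: set the working directory in the image itself so the relative classpath in CMD resolves against /workdir (the jar path is taken from the original CMD; this assumes the jar lives in the mounted volume):
WORKDIR /workdir
CMD java -cp "target/scala-2.11/demo_consumer.jar" consumer.SparkConsumer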