node-poppler installation of poppler-utils and poppler-data - poppler

https://github.com/Fdawgs/node-poppler
When following the README, one thing I ran into is that poppler-utils and poppler-data didn't get installed in /usr/bin. Based on that README, I expected them to be installed there by default.
After running find inside the container, I found the files in /usr/share/doc, which doesn't seem right based on the README.
How do I ensure poppler-utils and poppler-data get added to /usr/bin as the README expects?
Ultimately this is the code I'm instantiating:
const poppler = new Poppler('/usr/bin');
Dockerfile:
FROM docker.registry.sfg.corp.local/devops/nodejs-build-docker:16.16.60850 as build
ARG NAME
ARG IMAGE
ARG SNYK_TOKEN
ARG SNYK_ORGANIZATION
ARG SNYK_PROJECT
COPY . .
RUN apt-get update
RUN apt-get install poppler-utils -y
RUN apt-get install poppler-data -y
RUN chmod +x install-puppeteer.sh
RUN ./install-puppeteer.sh
RUN ./build.sh -n $NAME -i $IMAGE -t $SNYK_TOKEN -o $SNYK_ORGANIZATION -p $SNYK_PROJECT
FROM docker.registry.sfg.corp.local/node:16-buster
RUN apt-get update
RUN apt-get install poppler-utils -y
RUN apt-get install poppler-data -y
COPY ./install-puppeteer.sh .
RUN chmod +x install-puppeteer.sh
RUN ./install-puppeteer.sh
# add the chrome folder to the PATH
ENV PATH "$PATH:/opt/google/chrome"
COPY --from=build package.json package.json
COPY --from=build src src/
COPY --from=build node_modules node_modules/
COPY --from=build artifacts artifacts/
# Add tini to help prevent zombie chrome processes.
ADD https://github.com/krallin/tini/releases/download/v0.19.0/tini /tini
RUN chmod +x /tini
# Add user so we don't need --no-sandbox.
RUN groupadd -r pptruser && useradd -r -g pptruser -G audio,video pptruser \
&& mkdir -p /home/pptruser/Downloads \
&& mkdir -p /home/pptruser/.cache \
&& mkdir -p /home/pptruser/.cache/puppeteer \
&& chown -R pptruser:pptruser /home/pptruser \
&& chown -R pptruser:pptruser /node_modules
USER pptruser
RUN chmod +x node_modules/riley/bin/riley.sh
ENTRYPOINT ["/tini", "--"]
CMD ["node_modules/riley/bin/riley.sh"]

Related

Deploying Jenkins using skaffold via GitHub Action Runner

I am deploying Jenkins using a GitHub Actions runner with Skaffold.
Skaffold is installed on top of the default GitHub runner image.
The pod is crashing with a CrashLoopBackOff error and keeps restarting, and I am not sure why it is happening.
When I deploy the runner on Google Kubernetes Engine, it fails with the following error:
A runner exists with the same name
√ Successfully replaced the runner
√ Runner connection is good
# Runner settings
√ Settings Saved.
√ Connected to GitHub
Current runner version: '2.294.0'
2022-12-01 06:03:57Z: Listening for Jobs
Runner update in progress, do not shutdown runner.
Downloading 2.299.1 runner
Waiting for current job finish running.
Generate and execute update script.
Runner will exit shortly for update, should be back online within 10 seconds.
Runner update process finished.
Runner listener exit because of updating, re-launch runner in 5 seconds
Restarting runner...
/home/docker/actions-runner/run-helper.sh: line 20: /home/docker/actions-runner/bin/Runner.Listener: No such file or directory
Exiting with unknown error code: 127
Exiting runner...
Following is the Dockerfile used for the runner:
FROM ubuntu:22.04
# Installing Skaffold
RUN apt-get update -y && apt-get upgrade -y sudo
RUN apt-get install -y curl
RUN curl -LO https://storage.googleapis.com/skaffold/releases/v2.0.2/skaffold-linux-amd64 \
&& sudo chmod +x skaffold-linux-amd64 \
&& sudo mv skaffold-linux-amd64 /usr/local/bin/skaffold
# Install base packages and create the docker user
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update -y && apt-get upgrade -y && useradd -m docker
RUN apt-get install -y curl jq build-essential libssl-dev libffi-dev python3 python3-venv python3-dev ca-certificates gnupg2 iputils-ping software-properties-common apt-transport-https lsb-release git zip unzip postgresql-client python3-pip npm
RUN ln -sf /usr/bin/python3 /usr/bin/python
# set the github runner version
ARG RUNNER_VERSION="2.294.0"
# cd into the user directory, download and unzip the github actions runner
RUN cd /home/docker && mkdir actions-runner && cd actions-runner \
&& curl -O -L https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz \
&& tar xzf ./actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz
# install some additional dependencies
RUN chown -R docker ~docker && /home/docker/actions-runner/bin/installdependencies.sh
# Install Docker CE
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" && \
apt-get update && \
apt-get -y install docker-ce
# Downloading gcloud package
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
# Installing the package
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh
# Adding the package path to local
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
#Install Kubectl
RUN curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" \
&& chmod +x ./kubectl \
&& mv ./kubectl /usr/local/bin/kubectl
# copy over the start.sh script
COPY start.sh start.sh
# make the script executable
RUN chmod +x start.sh && mv start.sh /home/docker
# since the config and run script for actions are not allowed to be run by root,
# set the user to "docker" so all subsequent commands are run as the docker user
USER docker
# set the entrypoint to the start.sh script
ENTRYPOINT ["/home/docker/start.sh"] '''
Below is the startup script:
#!/bin/bash
SNAPTIME=`date '+%Y%m%d%H%M%S'`
echo "Started $SNAPTIME"
ORGANIZATION=$ORGANIZATION
ACCESS_TOKEN=`cat /etc/pat/pat`
GH_PROJECT=$GH_PROJECT
RUNNER_NAME="${RUNNER_NAME:-RUN$SNAPTIME}"
RUNNER_LABELS="${RUNNER_LABELS:-simple}"
REG_TOKEN=$(curl -sX POST -H "Authorization: token ${ACCESS_TOKEN}" https://api.github.com/repos/${ORGANIZATION}/$GH_PROJECT/actions/runners/registration-token | jq .token --raw-output)
# gcloud auth activate-service-account --key-file=${GOOGLE_APPLICATION_CREDENTIALS}
cd /home/docker/actions-runner
./config.sh --name $RUNNER_NAME --labels ${RUNNER_LABELS} --url https://github.com/${ORGANIZATION}/${GH_PROJECT} --unattended --replace --token ${REG_TOKEN}
cleanup() {
echo "Removing runner..."
./config.sh remove --unattended --token ${REG_TOKEN}
}
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM
./run.sh & wait $!
The pod restarts whenever load comes into it:
runner-automation-dev-docker-595f48c7dc-k2wbz 1/2 CrashLoopBackOff 7 (67s ago) 18m
I am not sure what exactly is causing this issue.
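One thing I am considering (a sketch only, not verified): the log shows the runner auto-updating itself from 2.294.0 to 2.299.1 before the listener binary goes missing, so pinning RUNNER_VERSION in the Dockerfile to the current release and registering the runner with auto-update disabled might keep the baked-in binary from being replaced at runtime. The --disableupdate flag should be available in recent runner versions:
# in start.sh, replacing the existing config.sh call
./config.sh --name "$RUNNER_NAME" --labels "${RUNNER_LABELS}" \
  --url "https://github.com/${ORGANIZATION}/${GH_PROJECT}" \
  --unattended --replace --disableupdate --token "${REG_TOKEN}"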

ECS container exit code 2

I am creating an ECS cluster with the Docker image library/wordpress:latest, and the desired task reaches the running state. But when I build the image with the Dockerfile below, push it to my Docker Hub repo, and then try to create the cluster using my new image, the containers fail with exit code 2.
Could you please suggest what I am doing wrong here?
#Base image
FROM wordpress:latest
LABEL version="latest" maintainer="xxxxxxx <xxxxxx>"
# Update apt
RUN apt-get update
# Add a user for running applications.
RUN useradd apps
RUN mkdir -p /home/apps && chown apps:apps /home/apps
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
# Install all necessary packages
RUN apt-get -y install build-essential libpoppler-cpp-dev pkg-config x11vnc xvfb fluxbox wget wmctrl gnupg2 unzip zip
# Set the Chrome repo.
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list
# Install Chrome.
RUN apt-get update && apt-get -y install google-chrome-stable
# Install Chrome driver
RUN wget https://chromedriver.storage.googleapis.com/94.0.4606.61/chromedriver_linux64.zip \
&& unzip chromedriver_linux64.zip \
&& mv chromedriver /usr/bin/chromedriver \
&& chown root:root /usr/bin/chromedriver \
&& chmod +x /usr/bin/chromedriver
# create folder to store requirements.txt file
RUN mkdir /home/automation
RUN mkdir /home/automation/FrontEnd
WORKDIR /home/automation
# Copy config and scripts
COPY requirements.txt ./requirements.txt
COPY TestSuites /home/automation/FrontEnd/TestSuites
COPY Resources /home/automation/FrontEnd/Resources
COPY TestRunner.py /home/automation/FrontEnd
COPY TestRail/ /home/automation/TestRail
COPY run-frontend-tests.sh /home/automation/run-tests.sh
COPY FrontEndResultParser.py /home/automation/FrontEnd/FrontEndResultParser.py
# Install python 3.9 and pip3
RUN apt-get -y install python3-dev python3.9 python3-pip
# Install dependencies
RUN pip install "setuptools==58.0.0"
RUN pip install -r requirements.txt
CMD ["sh", "run-tests.sh"]
I am basically just trying to run a script inside the container.
I used a WordPress image and built my own image on top of it, thinking it would keep the container up while my script was executed, but that didn't happen. My ECS cluster didn't have any running task; all I saw in the events was "service stage-fe-auto has started 1 tasks: task e83587e734c94f77", and when I opened the task details it showed Exit Code 2 and working directory /home/app, although the working directory in my Dockerfile is different. Not sure what I did wrong.
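For debugging, I think the same CMD can be reproduced locally before pushing to ECS (a rough sketch; my-frontend-tests is just a placeholder tag):
docker build -t my-frontend-tests .
docker run --rm my-frontend-tests            # runs CMD ["sh", "run-tests.sh"]
echo $?                                      # should reproduce the same exit code ECS reports
docker run --rm -it my-frontend-tests bash   # then check pwd and whether run-tests.sh exists there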

I got "ERROR nothing RPROVIDES" during bitbake

I tried to run
bitbake core-image-minimal
but I got
ERROR: Nothing RPROVIDES 'libcrypto' (but /home/yocto/fsl-4-14-98/sources/poky/meta/recipes-core/images/core-image-minimal.bb RDEPENDS on or otherwise requires it)
In core-image-minimal.bb I have
SUMMARY = "A console-only image that fully supports the target device \ hardware."
IMAGE_FEATURES += "splash"
LICENSE = "MIT"
inherit core-image
I also got a second error
ERROR: Required build target 'core-image-minimal' has no buildable providers.
Missing or unbuildable dependency chain was: ['core-image-minimal', 'libcrypto']
What am I missing? It's my first time using Yocto.
You can find the tutorial I'm using below:
- prepare system for Yocto
$ sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib \
build-essential chrpath socat cpio python python3 python3-pip python3-pexpect \
xz-utils debianutils iputils-ping
$ sudo apt-get install libsdl1.2-dev xterm
$ sudo apt-get install make xsltproc docbook-utils fop dblatex xmlto
$ sudo apt-get install ncurses-dev
- setting up repo
$ mkdir ~/bin
$ curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
$ chmod a+x ~/bin/repo
- Add the following line to the .bashrc file to ensure that the ~/bin folder is in your PATH variable.
export PATH=~/bin:$PATH
- configure git:
$ git config --global user.name "Your Name"
$ git config --global user.email "Your Email"
$ git config --list
NEW YOCTO RELEASE WITH KERNEL 4.14.98 AND OPEN SSL 1.1.1J
----
$ cd /usr/bin
$ sudo add-apt-repository ppa:deadsnakes/ppa
$ sudo apt-get update
$ sudo apt-get install python3.6
$ sudo rm python
$ sudo ln -s python3.6 python
- load iMX recipes
$ cd
### OLD RELEASE ### $ mkdir fsl-release-bsp
### OLD RELEASE ### $ cd fsl-release-bsp
### OLD RELEASE ### $ repo init -u https://source.codeaurora.org/external/imx/fsl-arm-yocto-bsp -b imx-4.1-krogoth
### OLD RELEASE ### $ repo sync
$ mkdir fsl-4-14-98
$ cd fsl-4-14-98
$ repo init -u https://source.codeaurora.org/external/imx/imx-manifest -b imx-linux-sumo -m imx-4.14.98-2.3.3.xml
### IMPORTANT ### $ git config --global url."https://".insteadOf git://
$ repo sync
$ sudo rm /usr/bin/python
$ sudo ln -s /usr/bin/python2 /usr/bin/python
- configure machine
$ DISTRO=fsl-imx-fb MACHINE=imx6solosabresd source fsl-setup-release.sh -b rsr1296
- create shared directory
$ mkdir ~/yocto
$ mkdir ~/yocto/download
$ mkdir ~/yocto/sstate-cache
$ gedit conf/local.conf
and add these lines:
DL_DIR="/home/multi/yocto/download"
SSTATE_DIR="/home/multi/yocto/sstate-cache"
CONNECTIVITY_CHECK_URIS ?= "https://www.google.com"
IMAGE_INSTALL_append = "pcsc-lite openssl-bin libcrypto"
PREFERRED_VERSION_openssl = "1.1.1j"
IMAGE_INSTALL_remove += "packagegroup-fsl-optee-imx"
BAD_RECOMMENDATIONS += "udev-hwdb"
comment out these lines:
#PACKAGECONFIG_append_pn-qemu-native = " sdl"
#PACKAGECONFIG_append_pn-nativesdk-qemu = " sdl"
- update Open SSL to 1.1.1J (http://cgit.openembedded.org/openembedded-core/tree/meta/recipes-connectivity/openssl?h=master)
$ cd ~/fsl-4-14-98/sources/poky/meta/recipes-connectivity
$ mv openssl /home/multi/Documents
$ tar xvzf /home/multi/Documents/openssl_1.1.1j.tar.gz
$ cd -
- compile full system
$ bitbake core-image-minimal
Everything is fine with no errors in the log until bitbake core-image-minimal.
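To narrow down the libcrypto error, I was planning to check which packages the openssl recipe actually produces (a sketch; run from the build directory after sourcing the environment setup script):
# show the PACKAGES variable of the openssl recipe; libcrypto may or may not be in the list
bitbake -e openssl | grep "^PACKAGES="
# after openssl has been built once, the package data can also be queried
oe-pkgdata-util list-pkgs | grep -i -E "crypto|ssl"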

Docker-Compose up Failed Because `Service 'nginx' failed to build`

I'm new to docker, and have been trying to troubleshoot this error for a while. I've read similar posts and nothing seems to work.
Full error:
failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to copy: httpReadSeeker: failed open: could not fetch content descriptor sha256:eff196a3849ad6541fd3afe676113896be214753740e567575bb562986bd2cd4 (application/vnd.docker.distribution.manifest.v1+json) from remote: not found
ERROR: Service 'nginx' failed to build : Build failed
I have three Dockerfiles, one for my react frontend, one for my django backend, and one for nginx.
Frontend dockerfile:
COPY ./react_app/package.json .
RUN apk add --no-cache --virtual .gyp \
python \
make \
g++ \
&& npm install \
&& apk del .gyp
COPY ./react_app .
ARG API_SERVER
ENV REACT_APP_API_SERVER=${API_SERVER}
RUN REACT_APP_API_SERVER=${API_SERVER} \
npm run build
WORKDIR /usr/src/app
RUN npm install -g serve
COPY --from=builder /usr/src/app/build ./build
Backend Dockerfile
###########
# BUILDER #
###########
# pull official base image
FROM python:3.7.9-slim-stretch as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.7.9-slim-stretch
# installing netcat (nc) since we are using that to listen to postgres server in entrypoint.sh
RUN apt-get update && apt-get install -y --no-install-recommends netcat && \
apt-get autoremove -y && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# install dependencies
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
RUN pip install --no-cache /wheels/*
# set work directory
WORKDIR /usr/src/app
# copy entrypoint.sh
COPY ./entrypoint.sh /usr/src/app/entrypoint.sh
# copy our django project
COPY ./django_app .
# run entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
Nginx Dockerfile
FROM nginx:1.19.0-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
WORKDIR /usr/src/app
I don't know where to go from here. I've tried following five or six similar Stack Overflow posts and many more GitHub issues, to no avail. Thanks, please let me know.
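One check that might narrow this down (just a sketch): pull each base image by hand to see whether the registry can still serve its manifest, and try the build without BuildKit to rule out a BuildKit-specific fetch problem:
docker pull nginx:1.19.0-alpine
docker pull python:3.7.9-slim-stretch
# if a pull fails with the same "manifest ... not found" error, that base image is what the build cannot fetch
DOCKER_BUILDKIT=0 docker-compose build nginx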

mongo inside ansible-operator image

I need the mongo shell installed inside the ansible-operator image.
My first attempt was to use this Dockerfile:
FROM mongo:4.2.9
FROM quay.io/operator-framework/ansible-operator:v1.0.0
COPY --from=0 /usr/bin/mongo /usr/bin/mongo
COPY requirements.yml ${HOME}/requirements.yml
RUN ansible-galaxy collection install -r ${HOME}/requirements.yml \
&& chmod -R ug+rwx ${HOME}/.ansible
COPY watches.yaml ${HOME}/watches.yaml
COPY roles/ ${HOME}/roles/
COPY playbooks/ ${HOME}/playbooks/
Unsurprisingly, this didn't work.
"stderr_lines": ["/usr/bin/mongo: /lib64/libcurl.so.4: no version information available (required by /usr/bin/mongo)", "Failed global initialization: InvalidSSLConfiguration Can not set up PEM key file."]
Can anyone help me?
I finally figured it out...
Just add this to your Dockerfile, together with a mongodb-org-4.2.repo file:
USER 0
COPY mongodb-org-4.2.repo /etc/yum.repos.d/mongodb-org-4.2.repo
RUN yum -y update \
&& yum install -y mongodb-org-shell \
&& yum clean all \
&& rm -rf /var/cache/yum
RUN rm /etc/yum.repos.d/mongodb-org-4.2.repo
USER ${USER_UID}
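For completeness, the mongodb-org-4.2.repo file looks roughly like this (a sketch based on the standard MongoDB yum repository layout; the baseurl is an assumption and should match the RHEL/UBI release of the base image):
# create mongodb-org-4.2.repo next to the Dockerfile
cat > mongodb-org-4.2.repo <<'EOF'
[mongodb-org-4.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/8/mongodb-org/4.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.com/static/pgp/server-4.2.asc
EOF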