https://github.com/Fdawgs/node-poppler
One thing I ran into when following the readme is that poppler-utils and poppler-data didn't get installed in /usr/bin; based on that readme, I'm expecting them to be installed there by default.
After running a find inside the container, I found the files in /usr/share/doc. This doesn't seem right based on the readme.
How do I ensure poppler-utils and poppler-data get added to /usr/bin, as the readme expects?
Ultimately this is the code I'm instantiating:
```js
const poppler = new Poppler('/usr/bin');
```
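For context, a minimal sketch of how that instance gets used (hedged: pdfToText is one of node-poppler's documented wrappers; the input path is a placeholder):

```js
const { Poppler } = require("node-poppler");

async function extractText() {
  // /usr/bin is where Debian/Ubuntu's poppler-utils normally places pdftotext et al.
  const poppler = new Poppler("/usr/bin");
  // Placeholder input path; pdfToText shells out to the pdftotext binary.
  return poppler.pdfToText("./input.pdf");
}
```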
Dockerfile:

```dockerfile
FROM docker.registry.sfg.corp.local/devops/nodejs-build-docker:16.16.60850 as build
ARG NAME
ARG IMAGE
ARG SNYK_TOKEN
ARG SNYK_ORGANIZATION
ARG SNYK_PROJECT
COPY . .
RUN apt-get update
RUN apt-get install poppler-utils -y
RUN apt-get install poppler-data -y
RUN chmod +x install-puppeteer.sh
RUN ./install-puppeteer.sh
RUN ./build.sh -n $NAME -i $IMAGE -t $SNYK_TOKEN -o $SNYK_ORGANIZATION -p $SNYK_PROJECT
FROM docker.registry.sfg.corp.local/node:16-buster
RUN apt-get update
RUN apt-get install poppler-utils -y
RUN apt-get install poppler-data -y
COPY ./install-puppeteer.sh .
RUN chmod +x install-puppeteer.sh
RUN ./install-puppeteer.sh
# add the chrome folder to the PATH
ENV PATH "$PATH:/opt/google/chrome"
COPY --from=build package.json package.json
COPY --from=build src src/
COPY --from=build node_modules node_modules/
COPY --from=build artifacts artifacts/
# Add tini to help prevent zombie chrome processes.
ADD https://github.com/krallin/tini/releases/download/v0.19.0/tini /tini
RUN chmod +x /tini
# Add user so we don't need --no-sandbox.
RUN groupadd -r pptruser && useradd -r -g pptruser -G audio,video pptruser \
&& mkdir -p /home/pptruser/Downloads \
&& mkdir -p /home/pptruser/.cache \
&& mkdir -p /home/pptruser/.cache/puppeteer \
&& chown -R pptruser:pptruser /home/pptruser \
&& chown -R pptruser:pptruser /node_modules
USER pptruser
RUN chmod +x node_modules/riley/bin/riley.sh
ENTRYPOINT ["/tini", "--"]
CMD ["node_modules/riley/bin/riley.sh"]
I am deploying Jenkins with a GitHub Actions runner, using Skaffold. Skaffold is installed on top of the default GitHub runner image. The pod keeps restarting due to a CrashLoopBackOff error, and I am not sure why it is happening.
When I deploy the runner on Google Kubernetes Engine, it fails with the following error:
```
A runner exists with the same name
√ Successfully replaced the runner
√ Runner connection is good
# Runner settings
√ Settings Saved.
√ Connected to GitHub
Current runner version: '2.294.0'
2022-12-01 06:03:57Z: Listening for Jobs
Runner update in progress, do not shutdown runner.
Downloading 2.299.1 runner
Waiting for current job finish running.
Generate and execute update script.
Runner will exit shortly for update, should be back online within 10 seconds.
Runner update process finished.
Runner listener exit because of updating, re-launch runner in 5 seconds
Restarting runner...
/home/docker/actions-runner/run-helper.sh: line 20: /home/docker/actions-runner/bin/Runner.Listener: No such file or directory
Exiting with unknown error code: 127
Exiting runner...
```
Here is the Dockerfile used for the runner:

```dockerfile
FROM ubuntu:22.04
# Installing skaffold
RUN apt-get update -y && apt-get upgrade -y && apt-get install -y sudo
RUN apt-get install -y curl
RUN curl -LO https://storage.googleapis.com/skaffold/releases/v2.0.2/skaffold-linux-amd64 \
&& sudo chmod +x skaffold-linux-amd64 \
&& sudo mv skaffold-linux-amd64 /usr/local/bin/skaffold
#install
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update -y && apt-get upgrade -y && useradd -m docker
RUN apt-get install -y curl jq build-essential libssl-dev libffi-dev python3 python3-venv python3-dev ca-certificates gnupg2 iputils-ping software-properties-common apt-transport-https lsb-release git zip unzip postgresql-client python3-pip npm
RUN ln -sf /usr/bin/python3 /usr/bin/python
# set the github runner version
ARG RUNNER_VERSION="2.294.0"
# cd into the user directory, download and unzip the github actions runner
RUN cd /home/docker && mkdir actions-runner && cd actions-runner \
&& curl -O -L https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz \
&& tar xzf ./actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz
# install some additional dependencies
RUN chown -R docker ~docker && /home/docker/actions-runner/bin/installdependencies.sh
#Docker
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" && \
apt-get update && \
apt-get -y install docker-ce
# Downloading gcloud package
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
# Installing the package
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh
# Adding the package path to local
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
#Install Kubectl
RUN curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" \
&& chmod +x ./kubectl \
&& mv ./kubectl /usr/local/bin/kubectl
# copy over the start.sh script
COPY start.sh start.sh
# make the script executable
RUN chmod +x start.sh && mv start.sh /home/docker
# since the config and run script for actions are not allowed to be run by root,
# set the user to "docker" so all subsequent commands are run as the docker user
USER docker
# set the entrypoint to the start.sh script
ENTRYPOINT ["/home/docker/start.sh"]
```
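One detail from the log: the image pins 2.294.0 while the service immediately self-updates to 2.299.1. A hedged tweak (the version is taken from the log above) is to bake the newer release so the runner doesn't try to replace itself right after starting:

```dockerfile
# Hedged tweak: build with the release the runner was updating itself to
# (2.299.1, per the log), so the in-place self-update is avoided at startup.
ARG RUNNER_VERSION="2.299.1"
```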
Below is the startup script:

```bash
#!/bin/bash
SNAPTIME=`date '+%Y%m%d%H%M%S'`
echo "Started $SNAPTIME"
ORGANIZATION=$ORGANIZATION
ACCESS_TOKEN=`cat /etc/pat/pat`
GH_PROJECT=$GH_PROJECT
RUNNER_NAME="${RUNNER_NAME:-RUN$SNAPTIME}"
RUNNER_LABELS="${RUNNER_LABELS:-simple}"
REG_TOKEN=$(curl -sX POST -H "Authorization: token ${ACCESS_TOKEN}" https://api.github.com/repos/${ORGANIZATION}/$GH_PROJECT/actions/runners/registration-token | jq .token --raw-output)
# gcloud auth activate-service-account --key-file=${GOOGLE_APPLICATION_CREDENTIALS}
cd /home/docker/actions-runner
./config.sh --name $RUNNER_NAME --labels ${RUNNER_LABELS} --url https://github.com/${ORGANIZATION}/${GH_PROJECT} --unattended --replace --token ${REG_TOKEN}
cleanup() {
echo "Removing runner..."
./config.sh remove --unattended --token ${REG_TOKEN}
}
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM
./run.sh & wait $!
The pod restarts whenever load comes into it:

```
runner-automation-dev-docker-595f48c7dc-k2wbz   1/2   CrashLoopBackOff   7 (67s ago)   18m
```
I am not sure what exactly is causing this issue.
I am creating an ECS cluster with the Docker image library/wordpress:latest, and the desired task reaches the running state. But when I build an image using the following Dockerfile, push it to my Docker Hub repo, and then try to create the cluster using my new image, the containers fail with exit code 2.
Could you please suggest what I am doing wrong here?
```dockerfile
# Base image
FROM wordpress:latest
LABEL version="latest" maintainer="xxxxxxx <xxxxxx>"
# Update apt
RUN apt-get update
# Add a user for running applications.
RUN useradd apps
RUN mkdir -p /home/apps && chown apps:apps /home/apps
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
# Install all necessary packages
RUN apt-get -y install build-essential libpoppler-cpp-dev pkg-config x11vnc xvfb fluxbox wget wmctrl gnupg2 unzip zip
# Set the Chrome repo.
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list
# Install Chrome.
RUN apt-get update && apt-get -y install google-chrome-stable
# Install Chrome driver
RUN wget https://chromedriver.storage.googleapis.com/94.0.4606.61/chromedriver_linux64.zip \
&& unzip chromedriver_linux64.zip \
&& mv chromedriver /usr/bin/chromedriver \
&& chown root:root /usr/bin/chromedriver \
&& chmod +x /usr/bin/chromedriver
# create folder to store requirements.txt file
RUN mkdir /home/automation
RUN mkdir /home/automation/FrontEnd
WORKDIR /home/automation
# Copy config and scripts
COPY requirements.txt ./requirements.txt
COPY TestSuites /home/automation/FrontEnd/TestSuites
COPY Resources /home/automation/FrontEnd/Resources
COPY TestRunner.py /home/automation/FrontEnd
COPY TestRail/ /home/automation/TestRail
COPY run-frontend-tests.sh /home/automation/run-tests.sh
COPY FrontEndResultParser.py /home/automation/FrontEnd/FrontEndResultParser.py
# Install python 3.9 and pip3
RUN apt-get -y install python3-dev python3.9 python3-pip
# Install dependencies
RUN pip install "setuptools==58.0.0"
RUN pip install -r requirements.txt
CMD ["sh", "run-tests.sh"]
I am basically just trying to run a script in the container.
I used the WordPress image and built my own image from it, thinking the base image would keep the container up while my script executed, but that didn't happen. My ECS cluster didn't have any running task; all I saw in the events was service stage-fe-auto has started 1 tasks: task e83587e734c94f77. When I opened the task details, it showed Exit Code 2 and Working directory /home/app, although my Dockerfile sets a different working directory. Not sure what I did wrong.
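For what it's worth, overriding the base image's default CMD replaces apache2-foreground, so nothing long-running keeps the container alive once the script exits. A hedged sketch that runs the script and then hands control back to the stock command (docker-entrypoint.sh apache2-foreground is wordpress:latest's default):

```dockerfile
# Hedged sketch: run the test script, then exec the base image's stock
# command so a foreground process keeps the container alive.
CMD ["bash", "-c", "sh run-tests.sh; exec docker-entrypoint.sh apache2-foreground"]
```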
I want to develop and deploy IaC with Pulumi .NET / C# from a VS Code .devcontainer. How can I add this capability to the environment?
I included the build container steps from https://github.com/pulumi/pulumi-docker-containers/blob/main/docker/dotnet/Dockerfile into .devcontainer/Dockerfile:
```dockerfile
ARG DOTNET_VARIANT="3.1"
ARG PULUMI_VERSION=latest
ARG INSTALL_NODE="true"
ARG NODE_VERSION="lts/*"
# --------------------------------------------------------------------------------
FROM debian:11-slim AS builder
# ARGs declared before the first FROM must be re-declared inside a stage to be visible
ARG PULUMI_VERSION
RUN apt-get update -y && \
apt-get upgrade -y && \
apt-get install -y \
curl \
build-essential \
git
RUN if [ "$PULUMI_VERSION" = "latest" ]; then \
curl -fsSL https://get.pulumi.com/ | bash; \
else \
curl -fsSL https://get.pulumi.com/ | bash -s -- --version $PULUMI_VERSION ; \
fi
# --------------------------------------------------------------------------------
FROM mcr.microsoft.com/vscode/devcontainers/dotnetcore:0-${DOTNET_VARIANT}
ARG INSTALL_NODE
ARG NODE_VERSION
RUN if [ "${INSTALL_NODE}" = "true" ]; then su vscode -c "umask 0002 && . /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1"; fi
COPY --from=builder /root/.pulumi/bin/pulumi /pulumi/bin/pulumi
COPY --from=builder /root/.pulumi/bin/*-dotnet* /pulumi/bin/
ENV PATH "/pulumi/bin:${PATH}"
```
and I control the process with this .devcontainer/devcontainer.json:

```jsonc
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.203.0/containers/alpine
{
"name": "C# (.NET)",
"build": {
"dockerfile": "Dockerfile",
"args": {
"DOTNET_VARIANT": "3.1",
"PULUMI_VERSION": "latest",
"INSTALL_NODE": "true",
"NODE_VERSION": "lts/*"
}
},
"features": {
"azure-cli": "latest"
},
// Set *default* container specific settings.json values on container create.
"settings": {},
// Add the IDs of extensions you want installed when the container is created.
"extensions": [
"ms-dotnettools.csharp"
],
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Use 'postCreateCommand' to run commands after the container is created.
// "postCreateCommand": "uname -a",
// Replace when using a ptrace-based debugger like C++, Go, and Rust
// "runArgs": [ "--init", "--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined" ],
"runArgs": [
"--init"
],
// Comment out connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
"remoteUser": "vscode"
}
```
Be aware that the rebuild will take a while and that you will probably have to reload the devcontainer once it indicates success in the GitHub Codespaces: Details view.
After that, pulumi login and e.g. pulumi new azure-csharp should work in the container.
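For example, from a terminal in the rebuilt container (both are standard Pulumi CLI commands):

```sh
pulumi login             # authenticate (or `pulumi login --local`)
pulumi new azure-csharp  # scaffold a C# Azure project to verify the toolchain
```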
You can spin up a codespace and configure the devcontainer from within it.
I did it just now:
Access the Command Palette (Shift + Command + P / Ctrl + Shift + P), then start typing "dev container". Select Codespaces: Add Development Container Configuration Files....
Then just follow this guide: https://docs.github.com/en/codespaces/setting-up-your-project-for-codespaces/setting-up-your-project-for-codespaces
I was inspired by the builder container approach for .NET mentioned in this thread, and here is how I added Pulumi to my Go codespace Dockerfile.
Dockerfile:

```dockerfile
ARG GO_VARIANT="1"
ARG PULUMI_VERSION=latest
FROM debian:11-slim AS builder
# re-declare so the top-level ARG is visible inside this stage
ARG PULUMI_VERSION
RUN apt-get update -y && \
apt-get upgrade -y && \
apt-get install -y \
curl \
build-essential \
git
RUN if [ "$PULUMI_VERSION" = "latest" ]; then \
curl -fsSL https://get.pulumi.com/ | bash; \
else \
curl -fsSL https://get.pulumi.com/ | bash -s -- --version $PULUMI_VERSION ; \
fi
FROM mcr.microsoft.com/vscode/devcontainers/go:0-${GO_VARIANT}
# [Choice] Node.js version: none, lts/*, 16, 14, 12, 10
ARG NODE_VERSION="none"
RUN if [ "${NODE_VERSION}" != "none" ]; then su vscode -c "umask 0002 && . /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1"; fi
COPY --from=builder /root/.pulumi/bin/pulumi /pulumi/bin/pulumi
COPY --from=builder /root/.pulumi/bin/*-go* /pulumi/bin/
ENV PATH "/pulumi/bin:${PATH}"
ENV GOOS "linux"
ENV GOARCH "amd64"
```
Snippet from devcontainer.json:

```jsonc
"args": {
... //redacted for clarity
"GO_VARIANT": "1.18",
}
)
I can log in with Azure and Pulumi, and pulumi up is also working.
Full example here: https://github.com/DevOpsJava/solution-using-secret
I fixed an issue with pulumi up, which turned out to be a resource problem regarding memory: switching the codespace from 2 to 4 cores also doubled the memory to 8 GB.
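If you hit the same ceiling, hostRequirements in devcontainer.json (a documented Codespaces setting) can pin the minimum machine size; a sketch:

```jsonc
// Ask Codespaces for at least 4 cores / 8 GB so `pulumi up` isn't starved.
"hostRequirements": {
  "cpus": 4,
  "memory": "8gb"
}
```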
I have an sbt project producing my artifact xyz.
I would like to put it, along with all its dependencies, in a Docker container so it can be used with

```sh
coursier launch --mode offline xyz
```

The important part is that the preparation should make use of the local coursier cache from the host.
I tried:

- executing sbt publishLocal,
- then resolving my artifact's dependencies (coursier resolve xyz),
- then preparing two directories - local & cache - by copying the resolved artifacts into them,
- then copying those directories into the Docker container (as the coursier cache and ivy local, respectively).
This didn't work because coursier doesn't list .pom and .xml files in its output. I tried copying whole directories (abc/1.0.0 instead of abc/1.0.0/some.jar), but AFAIK there is no reliable way to know how many folders up one has to go, because Maven and Ivy have different directory structures.
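For illustration, the wholesale-copy variant I'm considering (hedged sketch; paths are the Linux defaults, and cs-cache / ivy-local are staged into the build context beforehand, e.g. via cp -r ~/.cache/coursier ./cs-cache and cp -r ~/.ivy2/local ./ivy-local):

```dockerfile
# Hedged sketch: copying the staged host caches whole keeps the Maven/Ivy
# layouts, .pom/.xml files, and checksums intact, sidestepping the
# per-file resolution problem above.
COPY cs-cache /root/.cache/coursier
COPY ivy-local /root/.ivy2/local
# Point coursier at the copied cache explicitly.
ENV COURSIER_CACHE=/root/.cache/coursier
```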
While my use case is not quite identical to yours, I figured I'd write up my findings; maybe my solution works for you as well!
Here's my sample Dockerfile; I used this to install scalafmt in an offline-compatible way:

```dockerfile
FROM ubuntu:jammy
RUN : \
&& apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* # */ stackoverflow highlighting bug
ARG CS=v2.1.0-RC4
ARG CS_SHA256=176e92e08ab292531aa0c4993dbc9f2c99dec79578752f3b9285f54f306db572
ARG JDK_SHA256=aef49cc7aa606de2044302e757fa94c8e144818e93487081c4fd319ca858134b
ENV PATH=/opt/coursier/bin:$PATH
RUN : \
&& curl --location --silent --output /tmp/cs.gz "https://github.com/coursier/coursier/releases/download/${CS}/cs-x86_64-pc-linux.gz" \
&& echo "${CS_SHA256} /tmp/cs.gz" | sha256sum --check \
&& curl --location --silent --output /tmp/jdk.tgz "https://download.java.net/openjdk/jdk17/ri/openjdk-17+35_linux-x64_bin.tar.gz" \
&& echo "${JDK_SHA256} /tmp/jdk.tgz" | sha256sum --check \
&& mkdir -p /opt/coursier \
&& tar --strip-components=1 -C /opt/coursier -xf /tmp/jdk.tgz \
&& gunzip /tmp/cs.gz \
&& mv /tmp/cs /opt/coursier/bin \
&& chmod +x /opt/coursier/bin/cs \
&& rm /tmp/jdk.tgz
ENV COURSIER_CACHE=/opt/.cs-cache
RUN : \
&& cs fetch scalafmt:3.6.1 \
&& cs install scalafmt:3.6.1 --dir /opt/wd/bin
The key to offline execution for me was to use cs fetch and to set COURSIER_CACHE.
Here's the offline execution succeeding:

```sh
$ docker run --net=none --rm -ti cs /opt/wd/bin/scalafmt --version
scalafmt 3.6.1
```
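Applying the same recipe to your xyz artifact, a build-time cs fetch xyz should populate COURSIER_CACHE, after which the launch from your question ought to work without network (hedged; the image name follows my example above):

```sh
# --net=none proves no network is needed; cs launch resolves from the baked cache.
docker run --net=none --rm cs cs launch --mode offline xyz
```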