Getting error "Not Found" when the script is trying to find .sh in Dockerfile - docker-compose

Dockerfile - the path is rootDirectory/odm-ondocker/decisionserver/decisionserverconsole.
ARG FROMLIBERTY
FROM maven:3.5.2-jdk-8-alpine AS builder
ARG ODMDOCKERDIR
ENV ODMDOCKERDIR $ODMDOCKERDIR
ENV SCRIPT /script
ENV APPS /config/apps
COPY $ODMDOCKERDIR/welcomepage /welcomepage
COPY $ODMDOCKERDIR/common/script $SCRIPT
COPY $ODMDOCKERDIR/common/drivers /config/resources
COPY $ODMDOCKERDIR/common/features $SCRIPT
COPY $ODMDOCKERDIR/decisionserver/decisionserverconsole/script $SCRIPT
COPY $ODMDOCKERDIR/decisionserver/decisionserverruntime/script $SCRIPT
# Use production liberty if needed
RUN echo $SCRIPT
COPY $ODMDOCKERDIR/resources/* /wlp-embeddable/
RUN chmod a+x $SCRIPT/fixWLPForProduction.sh && sync && $SCRIPT/fixWLPForProduction.sh
# Install missing required packages in the alpine builder image
RUN apk add --no-cache bash perl ca-certificates wget
# Build Welcome page
RUN cd /welcomepage; mvn -B clean install | grep -v 'Download.*' && mkdir -p $APPS
I am getting an error on the line RUN chmod a+x $SCRIPT/fixWLPForProduction.sh && sync && $SCRIPT/fixWLPForProduction.sh, as below:
/bin/sh: /script/fixWLPForProduction.sh: not found
ERROR: Service 'odm-decisionserverconsole' failed to build: The command '/bin/sh -c chmod a+x $SCRIPT/fixWLPForProduction.sh && sync && $SCRIPT/fixWLPForProduction.sh' returned a non-zero code: 127
I am new to Docker, so I cannot figure out why this happens even though the file is present inside the rootDirectory/common/script folder. I figured out that it's trying to find a script folder, which is under /odm-ondocker/common/script. I tried giving the SCRIPT variable the value /common/script, but it still fails with Not Found.

RUN chmod a+x $SCRIPT/fixWLPForProduction.sh && sync && $SCRIPT/fixWLPForProduction.sh
is looking for the script at /script/fixWLPForProduction.sh, as defined by your ENV SCRIPT command. But you said in your comment that the script is at /common/script/fixWLPForProduction.sh. Try using this instead:
ENV SCRIPT /common/script
Or use the absolute path of the script. From there, you can work out how to properly set your ENV variable.
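A quick way to see what actually landed in the image is to list the copy destination right after the COPY steps. A minimal diagnostic sketch (the extra RUN lines are for debugging only, not part of the original build):
# Diagnostic only: show what was actually copied into $SCRIPT
RUN ls -la $SCRIPT
# Diagnostic only: a shebang line ending in \r\n (Windows line endings) also
# makes /bin/sh report "not found"; od -c makes any \r visible
RUN head -1 $SCRIPT/fixWLPForProduction.sh | od -c
If the first command does not list fixWLPForProduction.sh, the COPY source path (resolved relative to the build context set in docker-compose.yml) is wrong rather than the ENV value.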

Related

ECS container exit code 2

I am creating an ECS cluster with the docker image library/wordpress:latest and I get the desired task into a running state. But when I build an image using the following Dockerfile, push it to my Docker Hub repo, and then try to create the cluster using my new image, the containers fail with Exit code 2.
Could you please suggest what I am doing wrong here?
#Base image
FROM wordpress:latest
LABEL version="latest" maintainer="xxxxxxx <xxxxxx>"
# Update apt
RUN apt-get update
# Add a user for running applications.
RUN useradd apps
RUN mkdir -p /home/apps && chown apps:apps /home/apps
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
# Install all necessary packages
RUN apt-get -y install build-essential libpoppler-cpp-dev pkg-config x11vnc xvfb fluxbox wget wmctrl gnupg2 unzip zip
# Set the Chrome repo.
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list
# Install Chrome.
RUN apt-get update && apt-get -y install google-chrome-stable
# Install Chrome driver
RUN wget https://chromedriver.storage.googleapis.com/94.0.4606.61/chromedriver_linux64.zip \
&& unzip chromedriver_linux64.zip \
&& mv chromedriver /usr/bin/chromedriver \
&& chown root:root /usr/bin/chromedriver \
&& chmod +x /usr/bin/chromedriver
# create folder to store requirements.txt file
RUN mkdir /home/automation
RUN mkdir /home/automation/FrontEnd
WORKDIR /home/automation
# Copy config and scripts
COPY requirements.txt ./requirements.txt
COPY TestSuites /home/automation/FrontEnd/TestSuites
COPY Resources /home/automation/FrontEnd/Resources
COPY TestRunner.py /home/automation/FrontEnd
COPY TestRail/ /home/automation/TestRail
COPY run-frontend-tests.sh /home/automation/run-tests.sh
COPY FrontEndResultParser.py /home/automation/FrontEnd/FrontEndResultParser.py
# Install python 3.9 and pip3
RUN apt-get -y install python3-dev python3.9 python3-pip
# Install dependencies
RUN pip install "setuptools==58.0.0"
RUN pip install -r requirements.txt
CMD ["sh", "run-tests.sh"]
I am basically just trying to run a script in the container.
I used a wordpress image and built my own image out of it; I thought it would keep the container up and my script would be executed, but that didn't happen. My ECS cluster didn't have any running task; all I saw in the events was service stage-fe-auto has started 1 tasks: task e83587e734c94f77. When I opened the task details, it had Exit Code 2 and Working directory /home/app, but in my Dockerfile my work directory is different. Not sure what I did wrong.
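One detail worth checking, given the working-directory mismatch noted above: CMD ["sh", "run-tests.sh"] resolves run-tests.sh relative to the container's working directory at runtime, and the task details report /home/app rather than the /home/automation set by WORKDIR. A minimal sketch of a more defensive CMD, assuming the script really lands at /home/automation/run-tests.sh as the COPY step intends:
# Absolute path: the script is found regardless of the runtime working directory
CMD ["sh", "/home/automation/run-tests.sh"]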

Building postgres from source throws "'utils/errcodes.h' file not found" when called from other makefile

I am currently constructing a Makefile, and one of the things it will do is download and build postgres from source. Before starting to write the Makefile, I always used the following set of commands to do this:
curl -LJ https://github.com/postgres/postgres/archive/refs/tags/REL_13_3.zip -o postgres.zip
unzip postgres.zip
rm postgres.zip
cd postgres-REL_13_3
./configure --prefix "`pwd`" --without-readline --without-zlib
make
make install
Executing the above listed commands in the terminal results in the successful installation of postgres. Then I translated these into a makefile which looks as follows:
build:
	curl -LJ https://github.com/postgres/postgres/archive/refs/tags/REL_13_3.zip -o postgres.zip
	unzip postgres.zip
	rm postgres.zip
	cd postgres-REL_13_3 \
	&& ./configure --prefix "`pwd`" --without-readline --without-zlib \
	&& $(MAKE) \
	&& $(MAKE) install
Running this Makefile results in the error:
../../src/include/utils/elog.h:71:10: fatal error: 'utils/errcodes.h' file not found
It seems that something about calling make from another Makefile causes a referencing issue with the files during the build process, but I can't figure out for the life of me what I have to change to fix this.
It appears to be the influence of make's environment variables. I didn't discover the exact mechanism, but unsetting them helps.
build:
	curl -LJ https://github.com/postgres/postgres/archive/refs/tags/REL_13_3.zip -o postgres.zip
	unzip postgres.zip
	rm postgres.zip
	unset MAKELEVEL && unset MAKEFLAGS && unset MFLAGS && cd postgres-REL_13_3 \
	&& ./configure --prefix "`pwd`" --without-readline --without-zlib \
	&& $(MAKE) \
	&& $(MAKE) install
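For background: GNU make exports MAKELEVEL, MAKEFLAGS (and the legacy MFLAGS) into the environment of every command it runs, so postgres's own make starts up believing it is a sub-make of the outer build, which appears to be what derails the header generation here. An equivalent sketch that scrubs the variables only for the sub-build, using env -u from GNU coreutils instead of unset (plain make is called deliberately, since the point is to sever the link to the parent make):
build:
	cd postgres-REL_13_3 \
	&& ./configure --prefix "`pwd`" --without-readline --without-zlib \
	&& env -u MAKELEVEL -u MAKEFLAGS -u MFLAGS make \
	&& env -u MAKELEVEL -u MAKEFLAGS -u MFLAGS make install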

Scaleway scw init Inside Docker Container

I am trying to get the Scaleway CLI installed as part of a Docker image I'm building to run Azure Pipelines.
My Dockerfile looks like this:
FROM ubuntu:18.04
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
curl \
jq \
git \
iputils-ping \
libcurl4 \
libicu60 \
libunwind8 \
netcat \
docker.io \
s3cmd
# Install Scaleway CLI
RUN curl -o /usr/local/bin/scw -L "https://github.com/scaleway/scaleway-cli/releases/download/v2.1.0/scw-2-1-0-linux-x86_64"
RUN chmod +x /usr/local/bin/scw
# Add config for Scaleway CLI
RUN mkdir -p ./config
RUN mkdir -p ./config/scw
COPY ./config/config.yaml $HOME/.config/scw/config.yaml
RUN scw init
# Add private key for SSH connections
COPY ./config/id_rsa $HOME/.ssh/id_rsa
# Config s3cmd
COPY ./config/.s3cfg $HOME/.s3cfg
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh"]
The key section being:
# Install Scaleway CLI
RUN curl -o /usr/local/bin/scw -L "https://github.com/scaleway/scaleway-cli/releases/download/v2.1.0/scw-2-1-0-linux-x86_64"
RUN chmod +x /usr/local/bin/scw
# Add config for Scaleway CLI
RUN mkdir -p ./config
RUN mkdir -p ./config/scw
COPY ./config/config.yaml $HOME/.config/scw/config.yaml
RUN scw init
The config.yaml file referenced above looks like the following (minus the real values of course):
access_key: <key>
secret_key: <secret>
default_organization_id: <orgId>
default_project_id: <projectId>
default_region: nl-ams
default_zone: nl-ams-1
However, when it executes RUN scw init, the output is Invalid email or secret-key: ''
I have tried without running scw init at all, but then calls to scw fail, saying
Access key is required
Details: Access_key can be initialised using the command "scw init".
After initialisation, there are three ways to provide access_key:
with the Scaleway config file, in the access_key key: /root/.config/scw/config.yaml;
with the SCW_ACCESS_KEY environement variable;
Note that the last method has the highest priority.
More info:
https://github.com/scaleway/scaleway-sdk-go/tree/master/scw#scaleway-config
Hint: You can get your credentials here:
https://console.scaleway.com/account/credentials
Which admittedly is one of the better error messages I've seen, but nonetheless has not helped me. I am going to try the Environment Variable approach, which I suspect may do the trick, but I'd still like to know what I'm doing wrong with this config.yaml file.
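For what it's worth, the environment-variable approach would look roughly like this in the Dockerfile. This is a sketch, assuming the SCW_* variable names documented in the scaleway-sdk-go readme linked by the error message, with the secrets passed in as build arguments instead of a baked-in config file:
# Hypothetical usage:
#   docker build --build-arg SCW_ACCESS_KEY=... --build-arg SCW_SECRET_KEY=... .
ARG SCW_ACCESS_KEY
ARG SCW_SECRET_KEY
ENV SCW_ACCESS_KEY=$SCW_ACCESS_KEY \
    SCW_SECRET_KEY=$SCW_SECRET_KEY \
    SCW_DEFAULT_REGION=nl-ams \
    SCW_DEFAULT_ZONE=nl-ams-1
# With these set, scw reads credentials from the environment, so the
# config.yaml COPY and the RUN scw init step should no longer be needed
Bear in mind that ENV values persist in the image layers, so anyone who can pull the image can read the keys.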
Lastly... someone with more rep than me needs to create the tag "scaleway". Hard to reference the actual technology in question when the tag doesn't exist.

VS Code Remote-Containers: cannot create directory ‘/home/appuser’:

I'm trying to use the Remote - Containers extension for Visual Studio Code, but when I "Open Folder in Container", I get this error:
Run: docker exec 0d0c1eac6f38b81566757786f853d6f6a4f3a836c15ca7ed3a3aaf29b9faab14 /bin/sh -c set -o noclobber ; mkdir -p '/home/appuser/.vscode-server/data/Machine' && { > '/home/appuser/.vscode-server/data/Machine/.writeMachineSettingsMarker' ; } 2> /dev/null
mkdir: cannot create directory ‘/home/appuser’: Permission denied
My Dockerfile uses:
FROM python:3.7-slim
...
RUN useradd -ms /bin/bash appuser
USER appuser
I've also tried:
RUN adduser -D appuser
RUN groupadd -g 999 appuser && \
useradd -r -u 999 -g appuser appuser
USER appuser
Both of these work if I build them directly. How do I get this to work?
What works for me is to create a non-root user in my Dockerfile and then configure the VS Code dev container to use that user.
Step 1. Create the non-root user in your Docker image
ARG USER_ID=1000
ARG GROUP_ID=1000
RUN groupadd --system --gid ${GROUP_ID} MY_GROUP && \
useradd --system --uid ${USER_ID} --gid MY_GROUP --home /home/MY_USER --shell /sbin/nologin MY_USER
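The two ARG values default to 1000 but can be matched to your host user at build time, which keeps ownership of bind-mounted files consistent. A usage sketch:
docker build --build-arg USER_ID=$(id -u) --build-arg GROUP_ID=$(id -g) -t my-dev-image .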
Step 2. Configure .devcontainer/devcontainer.json file in the root of your project (should be created when you start remote dev)
"remoteUser": "MY_USER" <-- this is the setting you want to update
If you use docker compose, it's possible to configure VS Code to run the entire container as the non-root user by configuring .devcontainer/docker-compose.yml, but I've been happy with the process described above so I haven't experimented further.
You might get some additional insight by reading through the VS Code docs on this topic.
Go into your WSL2 distribution and check your local (non-root) UID using the command id.
In my case it is uid=1000(ubuntu).
Change your Dockerfile to something like this:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim-buster
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /home/ubuntu
COPY . /home/ubuntu
# Creates a non-root user and gives it ownership of the /home/ubuntu folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN useradd -u 1000 ubuntu && chown -R ubuntu /home/ubuntu
USER ubuntu
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "app.py"]

Install MongoDB on Debian Buster

How can I install the latest MongoDB 3.4 or even 3.6?
They support Ubuntu, but my server is Debian Buster and I am stuck with MongoDB 3.2.
I don't know if this is a good idea yet, but I just installed it by adding the sid repos and installing using the mongodb-server package. For me this installs version 3.4.18.
I created /etc/apt/sources.list.d/sid.list with:
deb http://deb.debian.org/debian/ sid main
deb-src http://deb.debian.org/debian/ sid main
then did
apt update
apt install mongodb-server
and verified that it's working by connecting with mongo.
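A caveat with this approach, sketched below: once the sid entry exists, a routine apt upgrade may start pulling unrelated packages from sid. A common safeguard is to pin sid to a low priority so that only explicitly requested packages come from it (the preferences file name is arbitrary). Create /etc/apt/preferences.d/99-sid with:
Package: *
Pin: release a=sid
Pin-Priority: 100
and then install with an explicit target release:
apt update
apt -t sid install mongodb-server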
I have found a solution based on a build script; the description is found here:
https://github.com/patrikx3/docker-debian-testing-mongodb-stable
The description:
A stable builder script for MongoDB and MongoDB Tools on Debian Stretch / Buster / Bullseye / Testing. What it does, exactly:
It is basically a build of the latest MongoDB for Debian.
The current version is the r4.0.x build (release).
Warning: ./scripts/build-server.sh removes all mongodb* apt packages, and /etc/systemd/system/mongodb-server.service is replaced.
It installs the required apt dependencies, generates the SystemD service, and enables it.
Check that the build works (building is described below). It runs all tests, so if it passes, it really does work. If there is an error, of course, you will not deploy it on your server. So, if building and testing succeed, it puts the binaries in place and you are done.
The server build script, build-server.sh, is as follows:
#!/usr/bin/env bash
# based on https://github.com/mongodb/mongo/wiki/Build-Mongodb-From-Source
# the current directory
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# if an error exit right away, don't continue the build
set -e
# some info
echo
#echo "Works like command, use a tag: sudo ./scripts/build-server.sh r4.2.0"
echo "Works like command, use a tag: sudo ./scripts/build-server.sh r4.0.12"
echo
# check if we are root
if [[ $EUID -ne 0 ]]; then
echo "This script must be ran via root 'sudo' command or using in 'sudo -i'."
exit 1
fi
# require mongo branch
#if [ -z "${1}" ]; then
# echo "First argument must be the MONGODB_BRANCH for example 'v4.1'"
# exit 1
#fi
#MONGODB_BRANCH="${1}"
# require mongo release
#if [ -z "${2}" ]; then
# echo "The second argument must be the MONGODB_RELEASE for example 'r4.1.0'"
# exit 1
#fi
#MONGODB_RELEASE="${2}"
# require mongo release
if [ -z "${1}" ]; then
echo "The first argument must be the MONGODB_RELEASE for example 'r4.0.12'"
exit 1
fi
MONGODB_RELEASE="${1}"
# delete all mongo other programs, we self compile
apt remove --purge mongo*
# the required packages for debian
apt -y install libboost-filesystem-dev libboost-program-options-dev libboost-system-dev libboost-thread-dev build-essential gcc python scons git glibc-source libssl-dev python-pip libffi-dev python-dev libcurl4-openssl-dev #libcurl-dev
pip install -U pip pyyaml typing
# generate build directory variable
BUILD=$DIR/../build
# delete previous build directory
rm -rf $BUILD/mongo
# generate new build directory
mkdir -p $BUILD
# the mongodb.conf and systemd services files in a directory variable
ROOT_FS=$DIR/../artifacts/root-filesystem
# find out how many cores we have and we use that many
if [ -z "$CORES" ]; then
CORES=$(grep -c ^processor /proc/cpuinfo)
fi
echo Using $CORES cores
# go to the build directory
pushd $BUILD
# clone the mongo by branch
#git clone -b ${MONGODB_BRANCH} https://github.com/mongodb/mongo.git
# clone the mongo by branch
git clone https://github.com/mongodb/mongo.git
# the mongo directory is a variables
MONGO=$BUILD/mongo
# go to the mongo directory
pushd $MONGO
# checkout the mongo release
git checkout tags/${MONGODB_RELEASE}
# hack to old version python pip cryptography from 1.7.2 to use the latest
sed -i 's#cryptography == 1.7.2#\#cryptography == 1.7.2#g' buildscripts/requirements.txt
# this is only because 4.0.12 uses 1.7.2 and
# https://github.com/pyca/cryptography/issues/4193#issuecomment-381236459
# support minimum latest (2.2)
pip install cryptography
# install the python requirements
#pip install -r etc/pip/dev-requirements.txt
pip install -r buildscripts/requirements.txt
# somewhere in the build it says if we install this, it is faster to build
pip2 install --user regex
# build everything
scons all --disable-warnings-as-errors -j $CORES --ssl
# install the mongo programs all
scons install --disable-warnings-as-errors -j $CORES --prefix /usr
# create a copy of the old config
#TIMESTAMP=$(($(date +%s%N)/1000000))
#cp /etc/mongodb.conf /etc/mongodb.conf.$TIMESTAMP.save
# copy the mongodb.conf configured and the systemd service file
# dangerous!!! removed
# cp -avr $ROOT_FS/. /
MONGODB_SERVICE=etc/systemd/system/mongodb-server.service
cp $ROOT_FS/$MONGODB_SERVICE /$MONGODB_SERVICE
chown root:root /$MONGODB_SERVICE
chmod o-rwx /$MONGODB_SERVICE
# generate mongodb user and group
useradd mongodb -d /var/lib/mongodb -s /bin/false || true
# create the required mongodb database directory and add safety
mkdir -p /var/lib/mongodb
chmod o-rwx -R /var/lib/mongodb
chown -R mongodb:mongodb /var/lib/mongodb
# create the required mongodb log directory and add safety
mkdir -p /var/log/mongodb
chmod o-rwx -R /var/log/mongodb
chown -R mongodb:mongodb /var/log/mongodb
# create the required run socket directory and add safety
mkdir -p /run/mongodb
chmod o-rwx -R /run/mongodb
chown -R mongodb:mongodb /run/mongodb
# add safety to the mongodb config file
chmod o-rwx /etc/mongodb.conf || true
chown mongodb:mongodb /etc/mongodb.conf || true
# reload systemd services
systemctl daemon-reload
# enable the mongodb-server
systemctl enable mongodb-server
# start the mongodb-server
#service mongodb-server start
# exit of the mongo directory
popd
# exit the build directory
popd
# delete current build directory
rm -rf $BUILD/mongo
The tools build script, build-tools.sh, is as follows:
#!/usr/bin/env bash
# based on https://github.com/mongodb/mongo-tools
# the current directory
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# if an error exit right away, don't continue the build
set -e
# some info
echo
echo "Works like command: sudo ./scripts/build-tools.sh r4.0.12"
echo
# check if we are root
if [[ $EUID -ne 0 ]]; then
echo "This script must be ran via root 'sudo' command or using in 'sudo -i'."
exit 1
fi
# require mongo release
if [ -z "${1}" ]; then
echo "The first argument must be the MONGODB_RELEASE for example 'r4.0.12'"
exit 1
fi
MONGODB_RELEASE="${1}"
## delete all mongo other programs, we self compile
##apt remove --purge mongo*
## the required packages for debian
##apt -y install gcc python scons git glibc-source libssl-dev python-pip
apt -y install golang libpcap-dev
export GOROOT=$(go env GOROOT)
# generate build directory variable
BUILD=$DIR/../build/src/github.com/mongodb/
# delete previous build directory
rm -rf $BUILD/mongo-tools
# generate new build directory
mkdir -p $BUILD
# find out how many cores we have and we use that many
CORES=$(grep -c ^processor /proc/cpuinfo)
# go to the build directory
pushd $BUILD
# clone the mongo by branch
git clone https://github.com/mongodb/mongo-tools
# the mongo directory is a variables
MONGO_TOOLS=$BUILD/mongo-tools
# go to the mongo directory
pushd $MONGO_TOOLS
# checkout the mongo release
git checkout tags/${MONGODB_RELEASE}
bash ./build.sh
chown root:adm -R ./bin
chmod o-rwx -R ./bin
chmod ug+rx ./bin/*
cp -r ./bin/. /usr/bin
# for PROGRAM in bsondump mongodump mongoexport mongofiles mongoimport mongoreplay mongorestore mongostat mongotop
# do
# go build -o bin/${PROGRAM} -tags "ssl sasl" ${PROGRAM}/main/${PROGRAM}.go
# done
# exit of the mongo directory
popd
# exit the build directory
popd
# delete current build directory
rm -rf $BUILD/mongo-tools