Premise · What I want to achieve
I'm getting an error while building a Dockerfile, and I'm stuck on it.
The failing command is cp.
Testing environment
Base container image: centos:7
My laptop is a MacBook Pro (probably not relevant information).
What I did
Here is the Dockerfile:
FROM centos:7 # Official centos image. this is a comment for asking here.
ENV JAVA_HOME=/usr/lib/jvm/java-11-openjdk \
SCALA_HOME=/usr/local/scala \
SCALA_VERSION=scala-2.13.0
WORKDIR /usr/local/lib/
RUN : " *** nginx install ***" \
&& { \
echo '[nginx-stable]'; \
echo 'name=nginx stable repo'; \
echo 'baseurl=http://nginx.org/packages/centos/$releasever/$basearch/'; \
echo 'gpgcheck=1'; \
echo 'enabled=1'; \
echo 'gpgkey=https://nginx.org/keys/nginx_signing.key'; \
echo 'gpgkey=https://nginx.org/keys/nginx_signing.key'; \
} > /etc/yum.repos.d/nginx.repo \
&& yum install -y nginx \
&& yum install -y rsyslog \
&& rsyslogd \
&& cp /usr/lib/systemd/system/nginx.service /etc/systemd/system/ \
&& nginx -version \
&& : " *** JDK install ***" \
&& JAVA_HOME=${JAVA_HOME}/bin \
&& PATH=$PATH:${JAVA_HOME}/bin \
&& java -version \
&& javac -version \
&& : "*** Scala install ***" \
&& wget http://downloads.typesafe.com/scala/2.13.0/scala-2.13.0.tgz \
&& tar zxvf scala-2.13.0.tgz \
&& ln -s ${SCALA_VERSION} scala \
&& mkdir ${SCALA_HOME} \
&& mv ${SCALA_VERSION} SCALA_HOME \
&& SCALA_HOME=${SCALA_HOME}/bin >> /etc/profile.d/scala.sh \
&& PATH=$PATH:${SCALA_HOME}/bin >> /etc/profile.d/scala.sh \
&& source /etc/profile.d/scala.sh \
&& cd \
&& scala -version \
&& : "*** sbt install ***" \
&& curl https://bintray.com/sbt/rpm/rpm | tee /etc/yum.repos.d/bintray-sbt-rpm.repo \
&& yum install -y sbt \
&& sbt -version \
&& ln -sf /usr/share/zoneinfo/Asia/Tokyo /etc/localtime \
&& yum clean all \
&& mkdir -p /usr/share/app
WORKDIR /usr/share/app
EXPOSE 80
Problems occurring · Error messages
The error I get is:
cp: invalid option -- 'e'
Try 'cp --help' for more information.
Hmm... I don't fully understand this error.
I don't have an alias for cp either.
Could you please help me?
If you need more information, feel free to ask.
Regards,
K
I had the same error as you because I wrote the command as below:
sudo find /home/usersdata -type f -user mark -exec cp -p -parents {} /official \;
The parameter should be --parents instead of -parents.
For some reason the error message does not say explicitly what is wrong: GNU cp reads -parents as the bundled short options -p, -a, -r, -e, -n, -t, -s, and 'e' is the first one that is not a valid option, which is why it complains about 'e'.
Hope it helps you.
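For illustration, a minimal sketch of the difference (assuming GNU coreutils cp; the paths below are placeholders, not from the original command):
mkdir -p /tmp/dest
# Single-dash form: cp treats -parents as the bundle -p -a -r -e -n -t -s
# and rejects the first unknown letter:
cp -parents some/dir/file.txt /tmp/dest/    # cp: invalid option -- 'e'
# Double-dash form: --parents recreates some/dir/ under the destination:
cp --parents some/dir/file.txt /tmp/dest/   # creates /tmp/dest/some/dir/file.txt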
Related
I want to write a Dockerfile in which I can specify the versions of the extensions, but I have a problem: when I build this Dockerfile I get an error in the TimescaleDB part.
Error relocating /usr/bin/cmake: _ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEE7reserveEv: symbol not found
Error relocating /usr/bin/cmake: _ZSt28__throw_bad_array_new_lengthv: symbol not found
The command '/bin/sh -c set -ex && apk add --no-cache --virtual .fetch-deps build-base ca-certificates git tar && apk add --no-cache --virtual .build-deps --repository http://dl-cdn.alpinelinux.org/alpine/edge/main --repository http://dl-cdn.alpinelinux.org/alpine/edge/testing make cmake gcc clang clang-dev g++ llvm-dev dpkg dpkg-dev util-linux-dev libc-dev coreutils && apk add --no-cache --virtual .timescaledb-rundeps openssl libstdc++ openssl-dev && wget -O timescaledb.tar.gz "https://github.com/timescale/timescaledb/archive/$TIMESCALEDB_VERSION.tar.gz" && echo "$TIMESCALEDB_SHA256 timescaledb.tar.gz" | sha256sum -c - && mkdir -p /usr/src/timescaledb && tar --extract --file timescaledb.tar.gz --directory /usr/src/timescaledb --strip-components 1 && rm timescaledb.tar.gz && cd /usr/src/timescaledb && ./bootstrap -DPROJECT_INSTALL_METHOD="docker" -DREGRESS_CHECKS=OFF && cd ./build && make install && cd / && rm -rf /usr/src/timescaledb && apk del .fetch-deps .build-deps && sed -r -i "s/[#]\s*(shared_preload_libraries)\s*=\s*'(.)'/\1 = 'timescaledb,\2'/;s/,'/'/" /usr/local/share/postgresql/postgresql.conf.sample' returned a non-zero code: 127
##[error]The command '/bin/sh -c set -ex && apk add --no-cache --virtual .fetch-deps build-base ca-certificates git tar && apk add --no-cache --virtual .build-deps --repository http://dl-cdn.alpinelinux.org/alpine/edge/main --repository http://dl-cdn.alpinelinux.org/alpine/edge/testing make cmake gcc clang clang-dev g++ llvm-dev dpkg dpkg-dev util-linux-dev libc-dev coreutils && apk add --no-cache --virtual .timescaledb-rundeps openssl libstdc++ openssl-dev && wget -O timescaledb.tar.gz "https://github.com/timescale/timescaledb/archive/$TIMESCALEDB_VERSION.tar.gz" && echo "$TIMESCALEDB_SHA256 timescaledb.tar.gz" | sha256sum -c - && mkdir -p /usr/src/timescaledb && tar --extract --file timescaledb.tar.gz --directory /usr/src/timescaledb --strip-components 1 && rm timescaledb.tar.gz && cd /usr/src/timescaledb && ./bootstrap -DPROJECT_INSTALL_METHOD="docker" -DREGRESS_CHECKS=OFF && cd ./build && make install && cd / && rm -rf /usr/src/timescaledb && apk del .fetch-deps .build-deps && sed -r -i "s/[#]\s(shared_preload_libraries)\s*=\s*'(.*)'/\1 = 'timescaledb,\2'/;s/,'/'/" /usr/local/share/postgresql/postgresql.conf.sample' returned a non-zero code: 127
##[error]The process '/usr/bin/docker' failed with exit code 127
Could you please help me? Thanks. Here is my Dockerfile:
# https://github.com/docker-library/postgres/blob/master/13/alpine/Dockerfile
FROM postgres:13.4-alpine
# https://postgis.net/docs/manual-3.1/postgis_installation.html
ENV POSTGIS_VERSION 3.1.4
ENV POSTGIS_SHA256 dfcbad0c6090c80bc59d3ea77d1adc4b3ade533a403761b4af6d9a44be1a6e48
RUN set -ex \
\
&& apk add --no-cache --virtual .fetch-deps ca-certificates openssl tar \
\
&& wget -O postgis.tar.gz "https://github.com/postgis/postgis/archive/$POSTGIS_VERSION.tar.gz" \
&& echo "$POSTGIS_SHA256 *postgis.tar.gz" | sha256sum -c - \
&& mkdir -p /usr/src/postgis \
&& tar \
--extract \
--file postgis.tar.gz \
--directory /usr/src/postgis \
--strip-components 1 \
&& rm postgis.tar.gz \
\
&& apk add --no-cache --virtual .build-deps \
autoconf automake json-c-dev libtool libxml2-dev make perl llvm clang clang-dev \
\
&& apk add --no-cache --virtual .build-deps-edge \
--repository http://dl-cdn.alpinelinux.org/alpine/edge/community \
--repository http://dl-cdn.alpinelinux.org/alpine/edge/main \
g++ gdal-dev geos-dev proj-dev protobuf-c-dev \
&& cd /usr/src/postgis \
&& ./autogen.sh \
&& ./configure \
&& make \
&& make install \
&& apk add --no-cache --virtual .postgis-rundeps \
json-c \
&& apk add --no-cache --virtual .postgis-rundeps-edge \
--repository http://dl-cdn.alpinelinux.org/alpine/edge/community \
--repository http://dl-cdn.alpinelinux.org/alpine/edge/main \
geos gdal proj protobuf-c libstdc++ \
&& cd / \
&& rm -rf /usr/src/postgis \
&& apk del .fetch-deps .build-deps .build-deps-edge
# https://docs.timescale.com/timescaledb/latest/how-to-guides/install-timescaledb/self-hosted/ubuntu/installation-source
ENV TIMESCALEDB_VERSION 2.4.2
ENV TIMESCALEDB_SHA256 b206a376251259eb745eee819e7a853b3fbab9dc8443303714484a0fef89a2a4
RUN set -ex \
&& apk add --no-cache --virtual .fetch-deps \
build-base \
ca-certificates \
git \
tar \
&& apk add --no-cache --virtual .build-deps \
--repository http://dl-cdn.alpinelinux.org/alpine/edge/main \
--repository http://dl-cdn.alpinelinux.org/alpine/edge/testing \
make \
cmake \
gcc \
clang \
clang-dev \
g++ \
llvm-dev \
dpkg \
dpkg-dev \
util-linux-dev \
libc-dev \
coreutils \
&& apk add --no-cache --virtual .timescaledb-rundeps \
openssl \
libstdc++ \
openssl-dev \
\
&& wget -O timescaledb.tar.gz "https://github.com/timescale/timescaledb/archive/$TIMESCALEDB_VERSION.tar.gz" \
&& echo "$TIMESCALEDB_SHA256 *timescaledb.tar.gz" | sha256sum -c - \
&& mkdir -p /usr/src/timescaledb \
&& tar \
--extract \
--file timescaledb.tar.gz \
--directory /usr/src/timescaledb \
--strip-components 1 \
&& rm timescaledb.tar.gz \
\
&& cd /usr/src/timescaledb \
&& ./bootstrap -DPROJECT_INSTALL_METHOD="docker" -DREGRESS_CHECKS=OFF \
&& cd ./build && make install \
\
&& cd / \
&& rm -rf /usr/src/timescaledb \
&& apk del .fetch-deps .build-deps \
&& sed -r -i "s/[#]*\s*(shared_preload_libraries)\s*=\s*'(.*)'/\1 = 'timescaledb,\2'/;s/,'/'/" /usr/local/share/postgresql/postgresql.conf.sample
COPY ./init-postgis.sh /docker-entrypoint-initdb.d/1.postgis.sh
COPY ./init-timescaledb.sh /docker-entrypoint-initdb.d/2.timescaledb.sh
COPY ./init-postgres.sh /docker-entrypoint-initdb.d/3.postgres.sh
I have an sbt project producing my artifact xyz.
I would like to put it, along with all its dependencies, in the Docker container so that it can be run with
coursier launch --mode offline xyz
The important part is that the preparation should make use of the local coursier cache from the host.
I tried
executing sbt publishLocal,
then resolving my artifact's dependencies (coursier resolve xyz),
then preparing two directories - local & cache - by copying the resolved artifacts into them,
then copying those directories into the Docker container (as the coursier cache and the ivy local repository, respectively).
This didn't work because coursier doesn't list .pom and .xml files in its output. I tried copying whole directories (abc/1.0.0 instead of abc/1.0.0/some.jar) but AFAIK there is no reliable way to know how many folders up one has to go because maven and ivy have different dir structures.
While my use case is not quite identical to yours -- I figured I'd write up my findings, and maybe my solution works for you as well!
Here's my sample Dockerfile; I used this to install scalafmt in an offline-compatible way:
FROM ubuntu:jammy
RUN : \
&& apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ARG CS=v2.1.0-RC4
ARG CS_SHA256=176e92e08ab292531aa0c4993dbc9f2c99dec79578752f3b9285f54f306db572
ARG JDK_SHA256=aef49cc7aa606de2044302e757fa94c8e144818e93487081c4fd319ca858134b
ENV PATH=/opt/coursier/bin:$PATH
RUN : \
&& curl --location --silent --output /tmp/cs.gz "https://github.com/coursier/coursier/releases/download/${CS}/cs-x86_64-pc-linux.gz" \
&& echo "${CS_SHA256} /tmp/cs.gz" | sha256sum --check \
&& curl --location --silent --output /tmp/jdk.tgz "https://download.java.net/openjdk/jdk17/ri/openjdk-17+35_linux-x64_bin.tar.gz" \
&& echo "${JDK_SHA256} /tmp/jdk.tgz" | sha256sum --check \
&& mkdir -p /opt/coursier \
&& tar --strip-components=1 -C /opt/coursier -xf /tmp/jdk.tgz \
&& gunzip /tmp/cs.gz \
&& mv /tmp/cs /opt/coursier/bin \
&& chmod +x /opt/coursier/bin/cs \
&& rm /tmp/jdk.tgz
ENV COURSIER_CACHE=/opt/.cs-cache
RUN : \
&& cs fetch scalafmt:3.6.1 \
&& cs install scalafmt:3.6.1 --dir /opt/wd/bin
The key to offline execution for me was to use cs fetch and to set COURSIER_CACHE.
Here's the offline execution succeeding:
$ docker run --net=none --rm -ti cs /opt/wd/bin/scalafmt --version
scalafmt 3.6.1
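Applied to the artifact from the question, a rough sketch along the same lines (the coordinates com.example::xyz:0.1.0 are placeholders, and this warms the cache from the network at image build time rather than reusing the host cache):
ENV COURSIER_CACHE=/opt/.cs-cache
# Fetch the application and its dependencies while the network is available,
# so the populated cache ends up baked into the image.
RUN /opt/coursier/bin/cs fetch com.example::xyz:0.1.0
# At runtime the cache is already populated, so an offline launch can work:
CMD ["/opt/coursier/bin/cs", "launch", "--mode", "offline", "com.example::xyz:0.1.0"]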
MongoDB Docker container fails to start with the error: IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating
I use the official MongoDB Dockerfile to build the container, and docker-compose to start it.
Here is my Dockerfile:
FROM ubuntu:bionic
# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r mongodb && useradd -r -g mongodb mongodb
RUN echo "deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse" > /etc/apt/sources.list \
&& echo "deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse" >> /etc/apt/sources.list \
&& echo "deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse" >> /etc/apt/sources.list \
&& echo "deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse" >> /etc/apt/sources.list
RUN export all_proxy=http:192.168.1.177:1080
RUN set -eux; \
apt-get update; \
apt-get install -y --no-install-recommends \
ca-certificates \
jq \
numactl \
; \
if ! command -v ps > /dev/null; then \
apt-get install -y --no-install-recommends procps; \
fi; \
rm -rf /var/lib/apt/lists/*
# grab gosu for easy step-down from root (https://github.com/tianon/gosu/releases)
ENV GOSU_VERSION 1.11
# grab "js-yaml" for parsing mongod's YAML config files (https://github.com/nodeca/js-yaml/releases)
ENV JSYAML_VERSION 3.13.0
RUN mkdir ~/.gnupg && echo "disable-ipv6" >> ~/.gnupg/dirmngr.conf
RUN set -ex; \
\
apt-get update; \
apt-get install -y --no-install-recommends \
wget \
; \
if ! command -v gpg > /dev/null; then \
apt-get install -y --no-install-recommends gnupg dirmngr; \
fi; \
rm -rf /var/lib/apt/lists/*; \
\
dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
command -v gpgconf && gpgconf --kill all || :; \
rm -r "$GNUPGHOME" /usr/local/bin/gosu.asc; \
chmod +x /usr/local/bin/gosu; \
gosu --version; \
gosu nobody true; \
\
wget -O /js-yaml.js "https://github.com/nodeca/js-yaml/raw/${JSYAML_VERSION}/dist/js-yaml.js"; \
# TODO some sort of download verification here
\
apt-get purge -y --auto-remove wget
RUN mkdir /docker-entrypoint-initdb.d
ENV GPG_KEYS E162F504A20CDF15827F718D4B7C549A058F8B6B
RUN set -ex; \
export GNUPGHOME="$(mktemp -d)"; \
for key in $GPG_KEYS; do \
gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
done; \
gpg --batch --export $GPG_KEYS > /etc/apt/trusted.gpg.d/mongodb.gpg; \
command -v gpgconf && gpgconf --kill all || :; \
rm -r "$GNUPGHOME"; \
apt-key list
# Allow build-time overrides (eg. to build image with MongoDB Enterprise version)
# Options for MONGO_PACKAGE: mongodb-org OR mongodb-enterprise
# Options for MONGO_REPO: repo.mongodb.org OR repo.mongodb.com
# Example: docker build --build-arg MONGO_PACKAGE=mongodb-enterprise --build-arg MONGO_REPO=repo.mongodb.com .
ARG MONGO_PACKAGE=mongodb-org-unstable
ARG MONGO_REPO=repo.mongodb.org
ENV MONGO_PACKAGE=${MONGO_PACKAGE} MONGO_REPO=${MONGO_REPO}
ENV MONGO_MAJOR 4.1
ENV MONGO_VERSION 4.1.10
# bashbrew-architectures:amd64 arm64v8 s390x
RUN echo "deb http://$MONGO_REPO/apt/ubuntu bionic/${MONGO_PACKAGE%-unstable}/$MONGO_MAJOR multiverse" | tee "/etc/apt/sources.list.d/${MONGO_PACKAGE%-unstable}.list"
RUN set -x \
&& apt-get update \
&& apt-get install -y \
${MONGO_PACKAGE}=$MONGO_VERSION \
${MONGO_PACKAGE}-server=$MONGO_VERSION \
${MONGO_PACKAGE}-shell=$MONGO_VERSION \
${MONGO_PACKAGE}-mongos=$MONGO_VERSION \
${MONGO_PACKAGE}-tools=$MONGO_VERSION \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf /var/lib/mongodb \
&& mv /etc/mongod.conf /etc/mongod.conf.orig
RUN mkdir -p /data/db /data/configdb \
&& chown -R mongodb:mongodb /data/db /data/configdb \
&& chmod g+w -R /data/db \
&& chmod g+w -R /data/configdb
VOLUME /data/db /data/configdb
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 27017
CMD ["mongod"]
and docker-compose below:
mongodb:
build: ./dockerfiles/mongodb
volumes:
- ./data/mongodb/db:/data/db
- ./data/mongodb/configdb:/data/configdb
ports:
- 7017:27017
environment:
- MONGO_INITDB_ROOT_USERNAME=super
- MONGO_INITDB_ROOT_PASSWORD=123456
restart: always
I expect the container to start successfully, but I get a failure with: IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db, terminating
If I remove the volumes section, it works!
The host is Ubuntu 18.04, and /data is writable!
I have solved my problem, folks!
I use VMware to run an Ubuntu server with a folder shared from Windows 7, and the container volumes were on that shared folder.
Check the inspect information of the mongodb container:
"Mounts": [
{
"Type": "bind",
"Source": "/mnt/hgfs/ubuntu/dockers/data/mongodb/configdb",
"Destination": "/data/configdb",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/mnt/hgfs/ubuntu/dockers/data/mongodb/db",
"Destination": "/data/db",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
The "Source" containing the string "/mnt/hgfs/ubuntu" tells us: it is a shared folder!
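If you hit the same thing, a minimal sketch of the workaround is to point the bind mounts at a native path inside the VM instead of the hgfs share (the /home/me/... paths below are placeholders, not from the original setup):
mongodb:
  build: ./dockerfiles/mongodb
  volumes:
    # native paths on the Ubuntu VM's own filesystem, not under /mnt/hgfs/
    - /home/me/data/mongodb/db:/data/db
    - /home/me/data/mongodb/configdb:/data/configdb
  ports:
    - 7017:27017
  restart: always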
I have an API Platform project with a PostgreSQL DB, and I can't find how to enable the pdo_pgsql driver with Docker.
Here is my Dockerfile:
FROM php:7.1-apache
# PHP extensions
ENV APCU_VERSION 5.1.7
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
libicu-dev \
zlib1g-dev \
libpq-dev \
libzip-dev \
libpcre3-dev \
ssmtp vim git cron zip \
&& docker-php-ext-install \
pdo \
pdo_pgsql \
zip
# Apache config
RUN a2enmod rewrite
ADD docker/apache/vhost.conf /etc/apache2/sites-available/000-default.conf
# Add the application
ADD . /app
WORKDIR /app
# Install composer
RUN ./docker/composer.sh \
&& mv composer.phar /usr/bin/composer \
&& composer global require "hirak/prestissimo:^0.3"
RUN usermod -u 1000 www-data
#RUN chown -R www-data:www-data /app/var/cache /app/var/logs /app/var/sessions
CMD ["/app/docker/start.sh"]
and here is my docker-compose.yml file:
web:
container_name: web-api-front
build: .
environment:
SYMFONY_ENV: dev
volumes:
- .:/app
ports:
- 8084:80
psql:
container_name: psql-api-front
image: postgres
environment:
POSTGRES_PASSWORD: ''
POSTGRES_USER: dbuser
POSTGRES_DB: dbname
ports:
- "5433:5432"
volumes:
- ./docker/sql:/var/sql
I've tried a lot of websites, but I still can't find a way to enable pgsql.
When I do
var_dump(PDO::getAvailableDrivers());
I only have
array(2) { [0]=> string(6) "sqlite" [1]=> string(5) "mysql" }
Also, when I run
docker-compose up
I have this in my log; I'm not sure what this means:
psql-api-front | LOG: database system was shut down at 2017-08-01 08:18:57 UTC
psql-api-front | LOG: MultiXact member wraparound protections are now enabled
psql-api-front | LOG: database system is ready to accept connections
psql-api-front | LOG: autovacuum launcher started
What am I doing wrong?
That means that PostgreSQL itself is working properly, but for the Apache/PHP container to work you will need to add the pgsql libraries and drivers:
FROM php:7.1-apache
# PHP extensions
ENV APCU_VERSION 5.1.7
RUN buildDeps=" \
libicu-dev \
zlib1g-dev \
libsqlite3-dev \
libpq-dev \
" \
&& apt-get update \
&& apt-get install -y --no-install-recommends \
$buildDeps \
libicu52 \
zlib1g \
sqlite3 \
git \
php5-pgsql \
&& rm -rf /var/lib/apt/lists/* \
&& docker-php-ext-install \
intl \
mbstring \
pdo_mysql \
pdo_pgsql \
pdo \
pgsql \
zip \
pdo_sqlite \
&& apt-get purge -y --auto-remove $buildDeps
RUN pecl install \
apcu-$APCU_VERSION \
xdebug \
&& docker-php-ext-enable xdebug \
&& docker-php-ext-enable --ini-name 05-opcache.ini \
opcache \
&& docker-php-ext-enable --ini-name 20-apcu.ini \
apcu
ARG SYMFONY_ENV=dev
ENV SYMFONY_ENV=dev
RUN if [ "$SYMFONY_ENV" -ne "dev" ]; then \
sed -i '1 a xdebug.remote_enable=1' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && \
sed -i '1 a xdebug.remote_handler=dbgp' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && \
sed -i '1 a xdebug.remote_autostart=0' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && \
sed -i '1 a xdebug.remote_connect_back=1 ' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && \
sed -i '1 a xdebug.remote_port=9001' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && \
sed -i '1 a xdebug.remote_log=/var/log/xdebug_remote.log' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
fi;
# Apache config
RUN a2enmod rewrite
ADD docker/apache/vhost.conf /etc/apache2/sites-available/000-default.conf
# PHP config
ADD docker/php/php.ini /usr/local/etc/php/php.ini
# Add the application
ADD . /app
WORKDIR /app
RUN chmod +x /app/docker/composer.sh
# Install composer
RUN /app/docker/composer.sh \
&& mv composer.phar /usr/bin/composer \
&& composer global require "hirak/prestissimo:^0.3"
ENV PATH="$PATH:$HOME/.composer/vendor/bin"
# to define
ARG INSTALL_DEP=true
RUN if [ -n "$INSTALL_DEP" ]; then \
if [ "$SYMFONY_ENV" -ne "prod" ]; then \
composer install --prefer-dist --no-scripts --no-dev --no-progress --no-suggest --optimize-autoloader --classmap-authoritative && composer run-script continuous-pipe; \
else \
composer install -o --no-interaction --prefer-dist --no-scripts && composer run-script continuous-pipe; \
fi; \
fi;
# Remove cache and logs if some and fixes permissions
RUN rm -rf var/cache/* && rm -rf var/logs/* && rm -rf var/sessions/* && chmod a+r var/ -R
# Apache gets grumpy about PID files pre-existing
RUN rm -f /var/run/apache2/apache2.pid
RUN a2enmod ssl
EXPOSE 443
CMD ["/app/docker/apache/run.sh"]
This should work properly, and you can compare it with your existing configuration.
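To confirm the driver is available after rebuilding, a quick check (the web service name comes from the docker-compose file above):
# Rebuild the image, then ask PHP which PDO drivers it now reports.
docker-compose build web
docker-compose run --rm web php -r 'var_dump(PDO::getAvailableDrivers());'
# The output should now include "pgsql" alongside "sqlite" and "mysql".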
I'm deploying a project with ASP.NET Core, PostgreSQL and Docker on Windows 10 (no PostgreSQL installed locally). I have to run a SQL script to update data before the application launches (to register a singleton dependency injection).
The content of my Dockerfile is as follows:
# TODO use official docker image
FROM microsoft/dotnet:1.1.0-sdk-projectjson
# Install .NET CLI dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
autoconf \
automake \
bzip2 \
file \
g++ \
gcc \
imagemagick \
libbz2-dev \
libc6-dev \
libcurl4-openssl-dev \
libdb-dev \
libevent-dev \
libffi-dev \
libgdbm-dev \
libgeoip-dev \
libglib2.0-dev \
libjpeg-dev \
libkrb5-dev \
liblzma-dev \
libmagickcore-dev \
libmagickwand-dev \
libmysqlclient-dev \
libncurses-dev \
libpng-dev \
libpq-dev \
libreadline-dev \
libsqlite3-dev \
libssl-dev \
libtool \
libwebp-dev \
libxml2-dev \
libxslt-dev \
libyaml-dev \
make \
patch \
xz-utils \
zlib1g-dev \
&& rm -rf /var/lib/apt/lists/*
# Set environment variables
ENV ASPNETCORE_URLS="http://*:5000"
ENV ASPNETCORE_ENVIRONMENT="Development"
# Copy files to app directory
COPY . /app
# Set working directory
WORKDIR /app
# Restore NuGet packages
RUN ["dotnet", "restore"]
# Build app
RUN ["dotnet", "build"]
#dotnet ef migrations add InitialCreate
RUN ["dotnet", "ef", "migrations", "add", "InitialCreate"]
# Open up port
EXPOSE 5000
CMD chmod +x ./docker-start.sh
CMD bash ./docker-start.sh
And here is the content of docker-start.sh:
#!/bin/bash
set -e
# How to apply migrations
dotnet ef database update
# I would like to run the sql file here
psql -h postgres --username postgres -d POSTGRES_USER -a -f /app/static.sql
# Start web app
echo "Starting web app"
dotnet run
How can I do that? Thanks in advance.
I have just found a solution for this. I was missing postgresql-client. We need to install postgresql-client in the Dockerfile, since psql is used to run the SQL script.
So the Dockerfile should be changed to:
# TODO use official docker image
FROM microsoft/dotnet:1.1.0-sdk-projectjson
# Install .NET CLI dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
autoconf \
automake \
bzip2 \
file \
g++ \
gcc \
imagemagick \
libbz2-dev \
libc6-dev \
libcurl4-openssl-dev \
libdb-dev \
libevent-dev \
libffi-dev \
libgdbm-dev \
libgeoip-dev \
libglib2.0-dev \
libjpeg-dev \
libkrb5-dev \
liblzma-dev \
libmagickcore-dev \
libmagickwand-dev \
libmysqlclient-dev \
libncurses-dev \
libpng-dev \
libpq-dev \
libreadline-dev \
libsqlite3-dev \
libssl-dev \
libtool \
libwebp-dev \
libxml2-dev \
libxslt-dev \
libyaml-dev \
make \
patch \
xz-utils \
zlib1g-dev \
postgresql-client \
&& rm -rf /var/lib/apt/lists/*
# Install netcat so that we can ping the database server until it is ready
RUN apt-get update -qq \
&& apt-get install -y netcat \
&& rm -rf /var/lib/apt/lists/*
# Set environment variables
ENV ASPNETCORE_URLS="http://*:5000"
ENV ASPNETCORE_ENVIRONMENT="Development"
ENV DB_HOSTNAME="postgres"
# Copy files to app directory
COPY . /app
# Set working directory
WORKDIR /app
# Restore NuGet packages
RUN ["dotnet", "restore"]
# Build app
RUN ["dotnet", "build"]
#dotnet ef migrations add InitialCreate
RUN ["dotnet", "ef", "migrations", "add", "InitialCreate"]
# Open up port
EXPOSE 5000
CMD chmod +x ./docker-start.sh
CMD bash ./docker-start.sh
Thanks.
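The netcat package added above is presumably there to wait for the database before the migrations run; a minimal sketch of how docker-start.sh could use it (the host name postgres and port 5432 are assumptions taken from the psql command in the question):
#!/bin/bash
set -e
# Block until the postgres container accepts TCP connections.
until nc -z postgres 5432; do
  echo "Waiting for postgres..."
  sleep 1
done
dotnet ef database update
# Run the SQL script, as in the question.
psql -h postgres --username postgres -d POSTGRES_USER -a -f /app/static.sql
echo "Starting web app"
dotnet run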