When I use database_name.my_schema to connect to a remote database, it tells me the database does not exist. If I use only database_name it seems to connect, but when I run
RUN php bin/console doctrine:migrations:migrate --no-interaction
in the Dockerfile, the command fails to connect.
In /config/packages/doctrine.yaml:
doctrine:
    dbal:
        dbname: 'my_db' # or dbname.myschema
        user: 'user'
        password: 'xxxxxxxx'
        host: 'xxxxxxxx'
        port: 1520
        driver: 'pdo_pgsql'
        server_version: '13'
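As an aside on the schema part of the question: in PostgreSQL, dbname must be the database name only; the schema is not part of it. One common approach (a sketch, assuming your schema is named my_schema and your role is named user) is to pin the schema through search_path on the database or on the role, so every connection, including Doctrine's, resolves unqualified table names there:

```sql
-- Set the default schema for every connection to this database:
ALTER DATABASE my_db SET search_path TO my_schema, public;
-- or only for the application role:
ALTER ROLE "user" SET search_path TO my_schema, public;
```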
The Dockerfile
FROM php:8.1-apache AS symfony_php
RUN a2enmod rewrite
RUN apt-get update \
&& apt-get install -y libpq-dev libzip-dev git libxml2-dev wget --no-install-recommends \
&& docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql \
&& docker-php-ext-install pdo pdo_pgsql pgsql zip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN wget https://getcomposer.org/download/2.3.5/composer.phar \
&& mv composer.phar /usr/bin/composer && chmod +x /usr/bin/composer
COPY docker/php/apache.conf /etc/apache2/sites-enabled/000-default.conf
COPY docker/php/php.ini /usr/local/etc/php/conf.d/app.ini
COPY . /var/www
WORKDIR /var/www
RUN composer install
RUN php bin/console doctrine:migrations:migrate --no-interaction
RUN mkdir ./var/cache/pfv && mkdir ./var/log/pfv && chmod -R 777 ./var/cache && chmod -R 777 ./var/log
So, the questions are: what is the correct setup for Postgres with a custom schema, and why does the migration command fail when it apparently connects with just the database name?
EDIT
Error thrown:
[critical] Error thrown while running command "doctrine:migrations:migrate --no-interaction". Message: "An exception occurred in the driver: SQLSTATE[08006] [7] timeout expired"
In ExceptionConverter.php line 81:
An exception occurred in the driver: SQLSTATE[08006] [7] timeout expired
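The timeout is consistent with the RUN step executing at image build time, when the database container (and its network) is not reachable. A common fix is to run migrations from the container's entrypoint instead, retrying until the database accepts connections. A minimal sketch (the retry helper is generic; the php bin/console and apache2-foreground commands shown in the comments are the ones from the Dockerfile above):

```shell
# retry MAX CMD...: run CMD until it succeeds, at most MAX times,
# sleeping between attempts; used to wait for the database to come up.
retry() {
  max=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$max" ]; then
      echo "failed after $max attempts: $*" >&2
      return 1
    fi
    sleep 1   # in a real entrypoint a longer delay (e.g. 3s) is typical
  done
}

# In the actual entrypoint script you would then do:
#   retry 10 php bin/console doctrine:migrations:migrate --no-interaction
#   exec apache2-foreground
```

The Dockerfile would COPY this script in and set it as ENTRYPOINT, removing the RUN migration step entirely.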
Related
I am working on a Docker container which initialises a PostgreSQL database from a SQL dump file. However, while running the init script I am facing the exception below.
Attaching to postgres
postgres | psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
postgres | Is the server running locally and accepting connections on that socket?
postgres exited with code 2
Dockerfile:
FROM postgres:latest
## Some utilities
RUN apt-get update -y && \
apt-get install -y build-essential libfuse-dev libcurl4-openssl-dev libxml2-dev pkg-config libssl-dev mime-support automake libtool wget tar git unzip
RUN apt-get install lsb-release -y && apt-get install zip -y && apt-get install vim -y
## Install S3 Fuse
RUN rm -rf /usr/src/s3fs-fuse
RUN git clone https://github.com/s3fs-fuse/s3fs-fuse/ /usr/src/s3fs-fuse
WORKDIR /usr/src/s3fs-fuse
RUN ./autogen.sh && ./configure && make && make install
## Create folder
RUN mkdir /data
WORKDIR /data
RUN mkdir out
## change workdir to /
WORKDIR /
RUN rm -rf 1-restore.sh
ADD docker-entrypoint-initdb.d/1-restore.sh 1-restore.sh
RUN chmod 777 1-restore.sh
ENTRYPOINT ["/bin/bash", "1-restore.sh" ]
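The ENTRYPOINT override is the likely cause of the socket error: the official postgres image's stock entrypoint is what runs initdb, starts the server, and then executes the scripts in /docker-entrypoint-initdb.d. Replacing it with 1-restore.sh means psql runs while no server is listening on the socket. A sketch of the fix, assuming the script is compatible with the init mechanism (it runs once, when the data directory is first initialised):

```dockerfile
FROM postgres:latest
# ... utility and s3fs build steps as above ...
# Drop the script where the stock entrypoint will pick it up;
# do NOT override ENTRYPOINT or CMD.
COPY docker-entrypoint-initdb.d/1-restore.sh /docker-entrypoint-initdb.d/1-restore.sh
RUN chmod +x /docker-entrypoint-initdb.d/1-restore.sh
```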
Below is my init Shell script file:
#!/bin/bash
set -e
export PGPASSWORD=$POSTGRES_PASSWORD
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" < "$DUMP_FILE"
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
SET client_encoding TO 'UTF8';
\t
\a
\o $OUT_FOLDER/export_json.sql;
SELECT '\o $OUT_FOLDER/' || table_name || '.json;' || E'\n' || 'SELECT row_to_json(r) FROM "' || table_name || '" AS r;'
FROM information_schema.tables
WHERE table_schema = 'public'
ORDER BY table_name;
\i $OUT_FOLDER/export_json.sql;
EOSQL
rm "$OUT_FOLDER/export_json.sql"
echo "TestMessage" >> test.txt
I checked the other SO answers but nothing has worked for me yet.
I have a Python application that interacts with a PostgreSQL database, and I need to run it all in one Docker container.
I get a connection error when running the container:
...
File "/usr/lib/python3.6/asyncio/tasks.py", line 358, in wait_for
return fut.result()
File "/usr/lib/python3.6/asyncio/base_events.py", line 787, in create_connection
', '.join(str(exc) for exc in exceptions)))
OSError: Multiple exceptions: [Errno 111] Connect call failed ('::1', 5432), [Errno 111] Connect call failed ('127.0.0.1', 5432)
My Dockerfile:
FROM postgres:10.0-alpine
RUN apk add --update --no-cache g++ alpine-sdk
RUN apk --no-cache add python3-dev
RUN apk add --no-cache python3 && \
python3 -m ensurepip && \
rm -r /usr/lib/python*/ensurepip && \
pip3 install --upgrade pip setuptools && \
if [ ! -e /usr/bin/pip ]; then ln -s pip3 /usr/bin/pip ; fi && \
if [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python3 /usr/bin/python; fi && \
rm -r /root/.cache
ADD app /app/
RUN chmod -R 777 /app
WORKDIR /app
ADD requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
USER postgres
RUN chmod 0700 /var/lib/postgresql/data &&\
initdb /var/lib/postgresql/data &&\
echo "host all all 0.0.0.0/0 md5" >> /var/lib/postgresql/data/pg_hba.conf &&\
echo "listen_addresses='*'" >> /var/lib/postgresql/data/postgresql.conf &&\
pg_ctl start &&\
psql --command "ALTER USER postgres WITH ENCRYPTED PASSWORD 'postgres';"
EXPOSE 5432
EXPOSE 80
CMD ["python3", "main.py"]
Although this is not recommended, it's doable. The problem is that pg_ctl in a RUN instruction is executed at build time, not when the container runs, so the server is not up when main.py starts. You need to run it with CMD.
You can have a script like:
#!/bin/sh
pg_ctl start
psql --command "ALTER USER postgres WITH ENCRYPTED PASSWORD 'postgres';"
python3 main.py
COPY the script into the image and, at the end of the Dockerfile, CMD ["./script.sh"].
@Anton, it is not recommended to run multiple processes inside a Docker container. Have a look at https://docs.docker.com/config/containers/multi-service_container/, which explains more and demonstrates a way to accomplish this. You are probably aware, but your container will be running the PostgreSQL instance and holding its data, so if you ever re-create the container you will lose any data in it.
I am working on an old project (Laravel 4.2, PHP 5.6, PostgreSQL) and I want to set it up on Docker. Here is my Dockerfile:
FROM php:5-apache
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
RUN apt-get update
RUN apt-get install -y \
git \
nano \
libpng-dev \
libmcrypt-dev \
postgresql-dev \
zlib1g-dev \
zip \
unzip &&\
a2enmod rewrite
RUN docker-php-ext-install pdo &&\
docker-php-ext-install pdo_mysql &&\
docker-php-ext-install pdo_pgsql &&\
docker-php-ext-install zip &&\
docker-php-ext-install gd &&\
docker-php-ext-install pcntl &&\
docker-php-ext-install mcrypt
# COPY php.ini /usr/local/etc/php/php.ini
I get the following error:
E: Unable to locate package postgresql-dev
When I change postgresql-dev to postgresql (and change the image to FROM php:5-apache-jessie, with different combinations of RUN dpkg --configure -a && apt-get -f install && apt-get update && apt-get upgrade -y && apt-get --purge remove postgresql\*), I get errors like:
E: Sub-process /usr/bin/dpkg returned an error code (1)
E: Unable to locate package postgresql*
E: Couldn't find any package by glob 'postgresql*'
E: Couldn't find any package by regex 'postgresql*'
You need to install postgresql-server-dev-X.Y for building a server-side extension or libpq-dev for building a client-side application.
configure: error: Cannot find libpq-fe.h. Please specify correct PostgreSQL installation path
Question: how do I install the PostgreSQL PHP client drivers properly?
In the Dockerfile, change postgresql-dev \ to
libpq-dev \
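Applied to the Dockerfile above, the install step would look like this (a sketch; only the package name changes, since libpq-dev provides the client headers that pdo_pgsql needs):

```dockerfile
RUN apt-get update
RUN apt-get install -y \
    git \
    nano \
    libpng-dev \
    libmcrypt-dev \
    libpq-dev \
    zlib1g-dev \
    zip \
    unzip &&\
    a2enmod rewrite
```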
I have an API Platform project with a PostgreSQL DB and I can't find how to enable the pdo_pgsql driver with Docker.
Here is my Dockerfile:
FROM php:7.1-apache
# PHP extensions
ENV APCU_VERSION 5.1.7
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
libicu-dev \
zlib1g-dev \
libpq-dev \
libzip-dev \
libpcre3-dev \
ssmtp vim git cron zip \
&& docker-php-ext-install \
pdo \
pdo_pgsql \
zip
# Apache config
RUN a2enmod rewrite
ADD docker/apache/vhost.conf /etc/apache2/sites-available/000-default.conf
# Add the application
ADD . /app
WORKDIR /app
# Install composer
RUN ./docker/composer.sh \
&& mv composer.phar /usr/bin/composer \
&& composer global require "hirak/prestissimo:^0.3"
RUN usermod -u 1000 www-data
#RUN chown -R www-data:www-data /app/var/cache /app/var/logs /app/var/sessions
CMD ["/app/docker/start.sh"]
and here is my docker-compose.yml file:
web:
    container_name: web-api-front
    build: .
    environment:
        SYMFONY_ENV: dev
    volumes:
        - .:/app
    ports:
        - 8084:80
psql:
    container_name: psql-api-front
    image: postgres
    environment:
        POSTGRES_PASSWORD: ''
        POSTGRES_USER: dbuser
        POSTGRES_DB: dbname
    ports:
        - "5433:5432"
    volumes:
        - ./docker/sql:/var/sql
I've tried a lot of websites but I still can't find a way to enable pgsql.
When I do
var_dump(PDO::getAvailableDrivers());
I only have
array(2) { [0]=> string(6) "sqlite" [1]=> string(5) "mysql" }
Also, when I run
docker-compose up
I have this in my log; I'm not sure what it means:
psql-api-front | LOG: database system was shut down at 2017-08-01 08:18:57 UTC
psql-api-front | LOG: MultiXact member wraparound protections are now enabled
psql-api-front | LOG: database system is ready to accept connections
psql-api-front | LOG: autovacuum launcher started
What am I doing wrong?
That means PostgreSQL itself is working properly, but for Apache/PHP to connect you will need to add the PostgreSQL libraries and drivers:
FROM php:7.1-apache
# PHP extensions
ENV APCU_VERSION 5.1.7
RUN buildDeps=" \
libicu-dev \
zlib1g-dev \
libsqlite3-dev \
libpq-dev \
" \
&& apt-get update \
&& apt-get install -y --no-install-recommends \
$buildDeps \
libicu52 \
zlib1g \
sqlite3 \
git \
php5-pgsql \
&& rm -rf /var/lib/apt/lists/* \
&& docker-php-ext-install \
intl \
mbstring \
pdo_mysql \
pdo_pgsql \
pdo \
pgsql \
zip \
pdo_sqlite \
&& apt-get purge -y --auto-remove $buildDeps
RUN pecl install \
apcu-$APCU_VERSION \
xdebug \
&& docker-php-ext-enable xdebug \
&& docker-php-ext-enable --ini-name 05-opcache.ini \
opcache \
&& docker-php-ext-enable --ini-name 20-apcu.ini \
apcu
ARG SYMFONY_ENV=dev
ENV SYMFONY_ENV=dev
RUN if [ "$SYMFONY_ENV" != "dev" ]; then \
sed -i '1 a xdebug.remote_enable=1' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && \
sed -i '1 a xdebug.remote_handler=dbgp' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && \
sed -i '1 a xdebug.remote_autostart=0' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && \
sed -i '1 a xdebug.remote_connect_back=1 ' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && \
sed -i '1 a xdebug.remote_port=9001' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && \
sed -i '1 a xdebug.remote_log=/var/log/xdebug_remote.log' /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
fi;
# Apache config
RUN a2enmod rewrite
ADD docker/apache/vhost.conf /etc/apache2/sites-available/000-default.conf
# PHP config
ADD docker/php/php.ini /usr/local/etc/php/php.ini
# Add the application
ADD . /app
WORKDIR /app
RUN chmod +x /app/docker/composer.sh
# Install composer
RUN /app/docker/composer.sh \
&& mv composer.phar /usr/bin/composer \
&& composer global require "hirak/prestissimo:^0.3"
ENV PATH="$PATH:$HOME/.composer/vendor/bin"
# to define
ARG INSTALL_DEP=true
RUN if [ -n "$INSTALL_DEP" ]; then \
if [ "$SYMFONY_ENV" != "prod" ]; then \
composer install --prefer-dist --no-scripts --no-dev --no-progress --no-suggest --optimize-autoloader --classmap-authoritative && composer run-script continuous-pipe; \
else \
composer install -o --no-interaction --prefer-dist --no-scripts && composer run-script continuous-pipe; \
fi; \
fi;
# Remove cache and logs if some and fixes permissions
RUN rm -rf var/cache/* && rm -rf var/logs/* && rm -rf var/sessions/* && chmod a+r var/ -R
# Apache gets grumpy about PID files pre-existing
RUN rm -f /var/run/apache2/apache2.pid
RUN a2enmod ssl
EXPOSE 443
CMD ["/app/docker/apache/run.sh"]
This should be working properly, and you can compare it with your existing configuration.
What I mean is that I want to create a Docker image for PostGIS that is completely usable right after build, so that if the user runs
docker run -e POSTGRES_USER=user somepostgis
the user database is created and the extensions are already installed.
The official postgres image can't be used for that, AFAIK.
Basically I need to write a script and make it the entrypoint. This script should create the database and install the extensions with the postgres server running on a different port, and then restart it on port 5432.
But I don't know sh and Docker well enough to do that. Right now it's saying that there is no pg_ctl command.
If you want to help you can fork
FROM ubuntu:15.04
#ENV RELEASE_NAME lsb_release -sc
#RUN apt-get update && apt-get install wget
#RUN echo "deb http://apt.postgresql.org/pub/repos/apt ${RELEASE_NAME}-pgdg main" >> /etc/apt/sources.list
#RUN cat /etc/apt/sources.list
#RUN wget --quiet -O - http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc | sudo apt-key add -
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
postgresql-9.4-postgis-2.1 \
curl \
&& curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture)" \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture).asc" \
&& gpg --verify /usr/local/bin/gosu.asc \
&& rm /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu \
&& apt-get purge -y --auto-remove curl
RUN mkdir /docker-entrypoint-initdb.d
COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
RUN chmod +x /docker-entrypoint.sh
RUN ls -l /docker-entrypoint.sh
EXPOSE 5432
CMD ["postgres"]
So I'm trying to do something like that, but it doesn't work.
#!/bin/bash
# pg_ctl is not on PATH in the Debian-based image; it lives in the
# version-specific bin directory.
export PATH="$PATH:/usr/lib/postgresql/9.4/bin"
: "${POSTGRES_DB:=$POSTGRES_USER}"
# start the server on a side port while we set things up
gosu postgres pg_ctl start -w -D "${PGDATA}" -o "-p 5433"
gosu postgres createuser -p 5433 -s "${POSTGRES_USER}"
gosu postgres createdb -p 5433 -E UTF8 "${POSTGRES_DB}"
gosu postgres psql -p 5433 -d "${POSTGRES_DB}" -c "create extension if not exists postgis;"
gosu postgres psql -p 5433 -d "${POSTGRES_DB}" -c "create extension if not exists postgis_topology;"
# restart on the default port
gosu postgres pg_ctl -w -D "${PGDATA}" restart -o "-p 5432"