I want to make a Dockerfile that builds a postgres:11 image with the postgresql-hll extension already installed.
I'm not experienced with Docker, so I have no idea how to follow the extension's installation instructions properly.
In order to do this you need to:
Clone the git repository:
git clone https://github.com/citusdata/postgresql-hll.git
Create a file called Dockerfile (at the same level as the postgresql-hll folder created in step 1) with the contents:
# build argument so the same Dockerfile can target different Postgres majors
ARG psversion=11
FROM postgres:$psversion
# copy in the sources cloned at step 1
COPY postgresql-hll /postgresql-hll
# PG_MAJOR is set by the base image; install matching server headers plus a build toolchain
RUN apt-get update -y && apt-get install -y postgresql-server-dev-${PG_MAJOR} make gcc g++
# build and install the extension against the image's pg_config
WORKDIR /postgresql-hll
RUN PG_CONFIG=/usr/bin/pg_config make
RUN PG_CONFIG=/usr/bin/pg_config make install
# preload the library by default and create the extension on first init
RUN echo "shared_preload_libraries = 'hll'" >> /usr/share/postgresql/postgresql.conf.sample
COPY create_extension.sql /docker-entrypoint-initdb.d/
Create a file create_extension.sql at the same level as the Dockerfile, with the contents:
CREATE EXTENSION hll;
Build your image:
# build for POSTGRES 11
docker build -t hll:1.0 --build-arg psversion=11 .
# build for POSTGRES 9.6
docker build -t hll:1.0 --build-arg psversion=9 .
NOTE: The version for POSTGRES 9.6 gives an error when trying to load the library. It is here for completeness and maybe somebody can contribute to fix it.
Run a container based on this image (note: current builds of the official postgres image refuse to start without POSTGRES_PASSWORD or POSTGRES_HOST_AUTH_METHOD, so pass a password):
docker run -d --name hll -e POSTGRES_PASSWORD=mysecret hll:1.0
Open a shell in the newly created container:
docker exec -ti hll bash
Inside the container run:
su postgres
psql
\dx
The output should show the hll extension as installed.
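To sanity-check that the extension actually works, and not just that it is listed, you can exercise the hll type from the same psql session. A minimal sketch (the hll_test table is made up for illustration; the functions come from the postgresql-hll README):
CREATE TABLE hll_test (users hll);
INSERT INTO hll_test SELECT hll_add(hll_empty(), hll_hash_integer(42));
SELECT hll_cardinality(users) FROM hll_test;
The last query should report a cardinality of 1.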
I want to build a PostgreSQL image that only contains some extra .sql files to be executed at startup time.
Dockerfile:
FROM postgres:11.9-alpine
USER postgres
WORKDIR /
COPY ddl/*.sql /docker-entrypoint-initdb.d/
Then I build the image:
docker build -t my-postgres:1.0.0 -f Dockerfile .
And run the container:
docker run -d --name my-database \
-e POSTGRES_PASSWORD=abc123 \
-p 5432:5432 \
my-postgres:1.0.0
The output is the container id:
33ed596792a80fc08f37c7c0ab16f8827191726b8e07d68ce03b2b5736a6fa4e
Checking the running containers returns nothing:
docker container ls
But if I explicitly start it, it works:
docker start my-database
In the original PostgreSQL image, docker run already starts the database. Why doesn't it after I build my own image?
It turned out that one of the copied .sql files was failing to execute and, based on this documentation, any error makes the entrypoint script exit. Fixing the SQL solved the issue and the container started normally with docker run.
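For anyone debugging the same symptom: since the container exits before docker container ls can show it, the quickest way to find the failing script (assuming the container name used above) is to read its logs. The entrypoint prints each file from /docker-entrypoint-initdb.d as it executes, so the broken .sql and its error appear at the end of the output:
docker logs my-database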
I created a docker container from the base image ubuntu:18.04 and installed MongoDB using the shell in this container. Now I commit and export this container and use the resulting image as the base image of a new container, in which I need MongoDB to be running for further processing. I am using a Dockerfile for both containers.
The first Dockerfile is:
#getting base image ubuntu
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y apt-utils wget gnupg gnupg2 curl
ENTRYPOINT ["tail", "-f", "/dev/null"]
CMD ["echo","Create base image"]
After building this Dockerfile, I accessed the shell, installed MongoDB, created a new db and collection, then committed and exported the container to a tar file.
Next, I import this tar file as the image mongoubuntu:1.0.
Now, I create another Dockerfile in which I want to run some commands that need MongoDB to be running.
The second Dockerfile is as follows:
FROM mongoubuntu:1.0
RUN apt-get update && apt-get -y install sudo && apt-get -y install curl
RUN apt-get install -y wget #install wget lib
COPY . /
RUN chmod +x /genesis.sh
RUN /genesis.sh
RUN chmod +x /wallet.sh
RUN /wallet.sh >> /config.sh
ENTRYPOINT ["tail", "-f", "/dev/null"]
CMD ["echo","Install EOS Complete"]
When I build this with docker build -t eossample:1.5 . the build succeeds. But when I open a bash shell in the newly created container, MongoDB is not running. If I manually start MongoDB, it shows running status. Please help. How can I start MongoDB from the Dockerfile?
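For context on why this happens: each RUN executes in a temporary build container, so a mongod started during the build (or running when a container was committed) is not running when a new container starts; the daemon has to be launched by the container's ENTRYPOINT or CMD. A hedged sketch, assuming mongod is on the PATH in mongoubuntu:1.0 and that genesis.sh and wallet.sh can run at container start instead of build time, using a hypothetical start-services.sh:
#!/bin/bash
# start-services.sh: launch MongoDB in the background, then run the setup scripts
mongod --fork --logpath /var/log/mongod.log
/genesis.sh
/wallet.sh >> /config.sh
# keep the container alive, as the original ENTRYPOINT did
tail -f /dev/null
Then in the second Dockerfile, replace the RUN lines for the scripts and the ENTRYPOINT with:
COPY start-services.sh /start-services.sh
RUN chmod +x /start-services.sh
ENTRYPOINT ["/start-services.sh"]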
I am trying to install the pg_cron extension in Postgres in an Alpine Linux Docker container.
When running
CREATE EXTENSION pg_cron;
in psql console I get:
ERROR: could not open extension control file "/usr/local/share/postgresql/extension/pg_cron.control": No such file or directory
The problem is that the actual pg_cron.control is not under /usr/local/share/... but under /usr/share/..
Where in postgresql.conf I can define the path?
Steps taken:
docker run --name postgres-0 -e POSTGRES_PASSWORD=Password1 -p 5432:5432 -d postgres:10-alpine
docker exec -it postgres-0 /bin/bash
apk update
apk add postgresql-pg_cron --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community
cat <<EOT >> /var/lib/postgresql/data/postgresql.conf
shared_preload_libraries='pg_cron'
EOT
pg_ctl reload
PostgreSQL expects to find the extension files in the SHAREDIR/extension/ directory associated with the installation (execute pg_config --sharedir to confirm the value of SHAREDIR for your particular installation).
There is however no facility for specifying an alternative location for extension files; it looks like something is wrong with the packaging.
I'm not familiar with Alpine Linux, but a quick Google search brings up e.g. this issue: Postgres extensions are installed into incorrect path and the suggested solution is to use a bare Alpine Linux image and install PostgreSQL via the apk command, so you might want to try that.
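As a quick way to confirm the mismatch in this particular container (a small sketch reusing the postgres-0 container from the steps above):
docker exec -it postgres-0 pg_config --sharedir
# the official alpine image builds PostgreSQL under /usr/local, so this prints
# /usr/local/share/postgresql, while the apk package drops its files under
# /usr/share/postgresql -- the two paths in the error above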
After some previous problems Dockerising my MySQL Kitura setup here: Docker Build Kitura Sqift Container - Shim.h mysql.h file not found
I am running into a new problem I cannot solve, following the guide from https://www.kitura.io/docs/deploying/docker.html .
After I followed all the steps and applied the earlier MySQL fix, I was able to run the following command:
docker run -p 8080:8080 -it myapp-run
This however leads to the following issue:
error while loading shared libraries: libmysqlclient.so.18: cannot open shared object file: No such file or directory
I assume something again tries to open libmysqlclient from the wrong environment directories?
But how can I fix this issue when building the docker image... is there a way, ideally a smart one?
Thanks a lot again for the help.
I was able to update and enhance my Dockerfile. This now runs smoothly and can also be used for CI and CD tasks.
FROM ibmcom/swift-ubuntu-runtime:latest
##FROM ibmcom/swift-ubuntu-runtime:5.0.1
LABEL maintainer="IBM Swift Engineering at IBM Cloud"
LABEL Description="Template Dockerfile that extends the ibmcom/swift-ubuntu-runtime image."
# We can replace this port with what the user wants
EXPOSE 8080
# Default user if not provided
ARG bx_dev_user=root
ARG bx_dev_userid=1000
# Install system level packages
RUN apt-get update && apt-get dist-upgrade -y
RUN apt-get update && apt-get install -y sudo libmysqlclient-dev
# Add utils files
ADD https://raw.githubusercontent.com/IBM-Swift/swift-ubuntu-docker/master/utils/run-utils.sh /swift-utils/run-utils.sh
ADD https://raw.githubusercontent.com/IBM-Swift/swift-ubuntu-docker/master/utils/common-utils.sh /swift-utils/common-utils.sh
RUN chmod -R 555 /swift-utils
# Create user if not root
RUN if [ $bx_dev_user != "root" ]; then useradd -ms /bin/bash -u $bx_dev_userid $bx_dev_user; fi
# Bundle application source & binaries
COPY ./.build /swift-project/.build
# Command to start Swift application
CMD [ "sh", "-c", "cd /swift-project && .build/release/Beautylivery_Server_New" ]
I'm new to Docker.
I'm trying to run my node app tests in a Docker container.
I want to run the tests with a real postgres db.
I'm creating this container with the following Dockerfile:
# Set image
FROM postgres:alpine
# Install node latest
RUN apk add --update nodejs nodejs-npm
# Set working dir
WORKDIR .
# Copy the current directory contents into the container at .
ADD src src
ADD .env.testing .env
ADD package.json .
ADD package-lock.json .
# Run tests
CMD npm install && npm run coverage
From the image docs, when I run the container with:
$ docker run build-name -d postgres
I see that the container takes time to start the postgresql service.
When I run the container without the "-d postgres" param:
$ docker run build-name
The service does not start and the tests fail due to "could not connect to server".
Questions:
A. How can I run the tests AFTER the postgresql service starts?
B. I saw some examples using docker-compose, but can I do this without Compose?
Thanks
Thanks to @Bogdan I found the complete solution:
Dockerfile should be:
# Set image
FROM postgres:alpine
# Install node latest
RUN apk add --update nodejs nodejs-npm
# Set working dir
WORKDIR .
# Copy the current directory contents into the container at .
ADD src src
ADD .env.testing .env
ADD package.json .
ADD package-lock.json .
# Install
RUN npm install
# Init container
CMD psql -U postgres -c "SELECT 1;" postgres
Build container:
$ docker build -t test .
Run container:
$ docker run --name startedtest -d test -d postgres
Run tests after the container is running:
$ docker exec startedtest some_create_schema_script && npm run coverage
If the goal is just to run the tests in the Postgres container, one solution could be to install NodeJs in your postgres:alpine derived image and run the container normally. Once the database is up, you can run npm using docker exec like this:
docker exec <container_id> npm run coverage
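Because the server needs a few seconds before it accepts connections, one way to avoid the race is to poll with pg_isready, which ships with the Postgres image, and only run the tests once it reports ready. A small sketch reusing the startedtest container from the accepted solution:
until docker exec startedtest pg_isready -U postgres; do sleep 1; done
docker exec startedtest npm run coverage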