I have a problem with adding the pg_cron extension to the postgres:13.4-alpine3.14 Docker image, and I am totally stuck on it :/
What I was trying to do:
I wrote a simple Dockerfile to build an image with the needed extension:
FROM postgres:13.4-alpine3.14
RUN apk add --update pg_cron
I first ran the container with the default config to check whether the extension had been installed properly, and I got:
bash-5.1# apk list | grep cron
postgresql-pg_cron-1.3.1-r0 x86_64 {postgresql-pg_cron} (PostgreSQL) [installed]
so it looks like the extension has been installed properly inside the container. But when I mount the Postgres config that is needed to load the newly installed extension (including shared_preload_libraries = 'pg_cron')
docker run -d --name pg_test -v ${PWD}/postgresql.conf:/etc/postgres.conf -e POSTGRES_PASSWORD=PASSWD local/postgres:13.4-alpine3.14-pgcron postgres -c config_file=/etc/postgres.conf
the container crashes and I see the message below in the logs:
could not access file "pg_cron": no such file or directory
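For context, the mounted postgresql.conf is essentially the stock config plus the pg_cron settings, roughly like this (an abridged sketch; the cron.database_name value is only an example and was not stated above):
# excerpt of the mounted postgresql.conf (abridged)
listen_addresses = '*'
shared_preload_libraries = 'pg_cron'
cron.database_name = 'postgres'    # example value; any existing database works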
Maybe someone has an idea of what I am doing wrong?
Thanks in advance.
Related
I am new to Docker, and I'm confused about creating a Postgres container in Docker.
What is -v /data:/var/lib/postgresql/data in the command below, which creates a container in Docker? Is it for setting the volume? Can I change the path? I cannot find postgresql under /lib, so I cannot find the file path when I want to add the file permission under File Sharing in the Docker settings.
sudo docker run -d --name mybd --network mydb-network -p 5432:5432 -v /data:/var/lib/postgresql/data -e POSTGRES_PASSWORD=mydb -e PGDATA=/var/lib/postgresql/data/pgdata postgres
Running the container gives an error: when I tried to run the container in Docker, it showed this error. Then I went to the Docker settings and wanted to add the /data path to File Sharing; however, I cannot find the /data path.
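For reference, the -v flag takes the form host_path:container_path, so the same command could point at a different host directory, for example ($HOME/pgdata is just a placeholder path, not from the original question):
sudo docker run -d --name mybd --network mydb-network -p 5432:5432 -v $HOME/pgdata:/var/lib/postgresql/data -e POSTGRES_PASSWORD=mydb -e PGDATA=/var/lib/postgresql/data/pgdata postgres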
I need to change the locale of the official Postgres (11.4) image in order to create databases in my language.
https://github.com/docker-library/postgres/blob/87b15b6c65ba985ac958e7b35ba787422113066e/11/Dockerfile
I copied the Dockerfile and docker-entrypoint.sh from the official postgres image (I have not added the customization yet).
aek@ubuntu:~/Desktop/Docker$ ls
docker-entrypoint.sh  Dockerfile
aek@ubuntu:~/Desktop/Docker$ sudo docker build -t postgres_custom .
Step 24/24 : CMD ["postgres"]
---> Running in 8720b67094b1
Removing intermediate container 8720b67094b1
---> eb63a36ee850
Successfully built eb63a36ee850
Successfully tagged postgres_custom:latest
The image builds successfully, but when I try to run it I get the error below:
aek@ubuntu:~/Desktop/Docker$ docker run --name postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres_custom
d75b25367f019e3398f7daff78260e87c02a0c1898658585ec04bbd219bbe3e9
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"docker-entrypoint.sh\": executable file not found in $PATH": unknown.
I can't figure out what is wrong with docker-entrypoint.sh. Can you please help me?
You need to make the entrypoint script executable:
RUN chmod +x /path/to/entrypoint.sh
since you said you copied it without any further changes.
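A minimal sketch of where that line fits in the copied Dockerfile, assuming the script is copied to /usr/local/bin/ as in the official image (running chmod +x docker-entrypoint.sh on the host before building works just as well):
COPY docker-entrypoint.sh /usr/local/bin/
# make sure the script has the executable bit inside the image
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]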
When I executed sudo apt update I got:
Reading package lists... Done
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (20: Not a directory)
Also, I was getting a status error, which I solved using:
sudo cp /var/lib/dpkg/status-old /var/lib/dpkg/status
I tried sudo mkdir /var/lib/apt/lists/partial, as suggested in a few other threads, but got:
mkdir: cannot create directory ‘/var/lib/apt/lists/partial’: Not a directory
I even tried sudo mkdir /var/lib/apt/lists/.
Any other solution?
This answer may not fit the question exactly, but since I landed here, others may too.
If you're using Docker and face the same issue, you can do something like the following in your Dockerfile:
USER root
# RUN commands
USER 1001
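A slightly fuller sketch of the same idea (the base image name and the package are placeholders; this assumes a Debian-based image whose final user is UID 1001):
FROM some-nonroot-image:latest    # placeholder: a Debian-based image that normally runs as UID 1001
USER root
RUN apt-get update && apt-get install -y --no-install-recommends nano && rm -rf /var/lib/apt/lists/*
USER 1001                         # drop back to the unprivileged user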
You can try adding -u 0 to the command:
sudo docker exec -u 0 -it ContainerID /bin/bash
According to the Docker documentation, the -u flag sets the username or UID that the command runs as inside the container, so -u 0 means you run it as root. Use it with caution!
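For example (the container name is a placeholder), you can confirm the resulting shell really is root:
sudo docker exec -u 0 -it my_container /bin/bash
# inside the container shell:
whoami    # prints: root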
The same happened to me. I followed this answer as a guide: The package lists or status file could not be parsed or opened.
I assumed my lists were corrupted. I went to /var/lib/apt/ and saw a file named lists instead of a directory. I deleted it (sudo rm lists) and re-created the path (sudo mkdir -p /var/lib/apt/lists/partial). Double-check that the path gets created.
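Put together, the fix looks roughly like this (assuming /var/lib/apt/lists really is a stray file on your machine):
ls -ld /var/lib/apt/lists                 # confirm it is a file, not a directory
sudo rm /var/lib/apt/lists                # remove the bogus file
sudo mkdir -p /var/lib/apt/lists/partial  # recreate the expected directory tree
ls -ld /var/lib/apt/lists/partial         # double-check the path was created
sudo apt update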
I ran into the same issue while trying to build a new container and experimenting with a Dockerfile for a while.
What finally saved me was simply deleting all the containers I had created during this process, using docker rm.
I had this same issue when trying to install Typora on Ubuntu 20.04.
I was running into the error whenever I ran the command below:
# add Typora's repository
sudo add-apt-repository 'deb https://typora.io/linux ./'
Here's how I solved it:
I disconnected and reconnected my network connection, and when I ran the command again, it worked fine.
I think it was an issue with my network connectivity.
That's all.
I hope this helps.
I had a similar error when using the Bitnami Spark image, and the docker exec command with the -u argument didn't work for me. I found my answer in the image documentation here.
In case you are using a Docker image, it might be that the image is a non-root container image. Read the documentation from the image provider to find out how you can use it as a root container image.
This is how it works: get root access in the Docker container's bash and install your apps.
Get the container ID by name:
sudo docker ps -aqf "name=es01"
Access bash as root:
sudo docker exec -u 0 -it 3d42134dfd59 bash
Example install:
apt-get update
apt-get install nano
You first need super-user privileges: type sudo -i and then enter your password.
I had planned an upgrade of Artifactory from 6.7.5 to 6.8.1. As part of the upgrade I checked JFrog's repo on GitHub, and it looks like they have new recommended Nginx and Postgres versions.
The current docker-compose uses Postgres 9.5 and the new default version is 9.6. Simply pulling down postgres 9.6, however, does not do an in-place upgrade:
FATAL: database files are incompatible with server
DETAIL: The data directory was initialized by PostgreSQL version 9.5, which is not compatible with this version 9.6.11.
The upgrade instructions do not mention anything about how to upgrade the database.
The examples provided on GitHub (https://github.com/jfrog/artifactory-docker-examples) are just examples.
Using them in production could cause issues, and backwards compatibility is not guaranteed.
To get past the PostgreSQL issue when upgrading, I would suggest:
$ docker-compose -f yml-file-name.yml stop
edit yml-file-name.yml and change docker.bintray.io/postgres:9.6.11 to docker.bintray.io/postgres:9.5.2 (see the sketch after these steps)
$ docker-compose -f yml-file-name.yml up -d
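The relevant part of the compose file should end up pointing back at the old image, roughly:
services:
  postgresql:
    image: docker.bintray.io/postgres:9.5.2   # was docker.bintray.io/postgres:9.6.11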
Artifactory should be upgraded after following this; however, it will keep using the previous version of the PostgreSQL DB.
I have been able to upgrade the database using the following approach:
Dump the whole database to an SQL script using the old database image, and store it in a volume for later import:
# Override the PostgreSQL image so the dump is made with the old 9.5 binaries
printf "version: '2.1'\nservices:\n postgresql:\n image: docker.bintray.io/postgres:9.5.2\n" > image_override.yml
started_container=$(docker-compose -f artifactory-pro.yml -f image_override.yml run -d -v sql_dump_volume:/tmp/dump --no-deps postgresql)
# Dump database to a text file in a volume (to make it available for import)
docker exec "${started_container}" bash -c "until pg_isready -q; do sleep 1; done"
docker exec "${started_container}" bash -c "pg_dumpall --clean --if-exists --username=\${POSTGRES_USER} > /tmp/dump/dump.sql"
docker stop "${started_container}"
docker rm --force "${started_container}"
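For reference, the image_override.yml written by the printf above contains just this (shown with standard compose nesting):
version: '2.1'
services:
  postgresql:
    image: docker.bintray.io/postgres:9.5.2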
Back up the old database directory and prepare a new one:
mv -fv /data/postgresql /data/postgresql.old
mkdir -p /data/postgresql
chown --reference=/data/postgresql.old /data/postgresql
chmod --reference=/data/postgresql.old /data/postgresql
Run a new database container, mounting the dump from step 1. The entrypoint processes SQL scripts in /docker-entrypoint-initdb.d on startup when it initializes a new database, provided the container is started with a command beginning with postgres. We don't need to leave the server running afterwards, so I passed --version to make the entrypoint run, import the data, and exit:
docker-compose -f artifactory-pro.yml run --rm --no-deps -e POSTGRES_DB=postgres -e POSTGRES_USER=root -v sql_dump_volume:/docker-entrypoint-initdb.d postgresql postgres --version
After all this was done, I was able to start Artifactory normally with docker-compose -f artifactory-pro.yml up -d, and it started up normally, applying the rest of the schema and file upgrade procedure as usual.
I have also prepared a script that basically does the above steps along with some additional checks and cleanup. Feel free to use it if you find it useful.
I'm on Windows 10 with Docker version 1.9.1, using Docker Toolbox.
I wanted to put up a quick Postgres container, something I've done before with a Dockerfile I had lying around:
FROM postgres
ADD create-db.sql /tmp/
ADD drop_create_table.sql /tmp/
ADD db.sql /tmp/
ADD create-db.sh /docker-entrypoint-initdb.d/
It's pretty simple, and when I run the resulting image it starts fine.
However, at the end it says:
...
server started
ALTER ROLE
/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/create-db.sh: No such file or directory
If I try to do docker run -it <imagename> //bin/bash I can see that the file is indeed there:
root@xxxx:/docker-entrypoint-initdb.d# ls
create-db.sh
but whenever I run the image, it tells me the file is not there.
The container promptly stops when it doesn't find the file, so I can't try to ssh into the running container.
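One way to look around despite the container exiting (a sketch; <imagename> is the placeholder from above) is to bypass the init scripts by overriding the entrypoint and inspecting the file from a throwaway shell:
docker run --rm -it --entrypoint //bin/bash <imagename>
# inside the container:
ls -l /docker-entrypoint-initdb.d/create-db.sh              # check the file exists and its permissions
head -1 /docker-entrypoint-initdb.d/create-db.sh | cat -A   # show non-printing characters such as line endings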