I'm using the postgres:13.1-alpine image for a Docker container. I tried to restore a backup into the volume using the following command:
docker run --rm \
--volume [DOCKER_COMPOSE_PREFIX]_[VOLUME_NAME]:/[TEMPORARY_DIRECTORY_STORING_EXTRACTED_BACKUP] \
--volume $(pwd):/[TEMPORARY_DIRECTORY_TO_STORE_BACKUP_FILE] \
ubuntu \
tar xvf /[TEMPORARY_DIRECTORY_TO_STORE_BACKUP_FILE]/[BACKUP_FILENAME].tar -C /[TEMPORARY_DIRECTORY_STORING_EXTRACTED_BACKUP] --strip 1
Something went wrong and now I can't access the database. I used to access it using the user/role myuser, but that role seems to no longer exist.
What I tried
I can still access the container using docker exec -it postgres sh. But I can't start psql, because none of the root, postgres, or myuser roles exist.
All solutions I have found so far are basically the same: either use the postgres user to create another user, or use the root user to create the role "postgres". These solutions don't work.
Most likely your database is toast. It is hard to see how an innocent backup accident could leave you with no predictable users but an otherwise intact database. But it is at least plausible that you or Docker blew away your database entirely and then created a new one, with a user whose name you wouldn't immediately guess. So find the size of the data directory: is it more consistent with the database you hope to find, or with one newly created from scratch?
I would run strings DATADIR/global/1260 (the file backing the pg_authid catalog) and see if it finds anything recognizable as a user name in there; then you could try logging in as that.
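For example, from inside the container (the data directory path is an assumption based on the image's default):
docker exec -it postgres sh
strings /var/lib/postgresql/data/global/1260
If strings isn't available in the minimal image, you can run it from the host against the volume's data instead.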
Or, you could shut down the database and restart it in single-user mode, /path/to/bin/postgres --single -D /path/to/DATADIR, and take a look at pg_authid to see what is in there.
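A sketch of that, reusing the volume from the question (the trailing postgres is the database name to connect to, which may itself no longer exist):
# start a throwaway container with a shell instead of the server
docker run --rm -it -v [DOCKER_COMPOSE_PREFIX]_[VOLUME_NAME]:/var/lib/postgresql/data postgres:13.1-alpine sh
# inside the container, run single-user mode as the postgres system user
su postgres -c 'postgres --single -D /var/lib/postgresql/data postgres'
backend> SELECT rolname FROM pg_authid;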
Related
I'm trying to create a database and a user with limited privileges. The user should have access only to that database, nothing more.
In a blank slate Postgres 13 deployment using Docker, I connect with the user postgres, a superadmin, and run the following:
CREATE DATABASE db_foo;
CREATE USER usr_bar with NOINHERIT LOGIN password 'pwd1234';
That's just it. Nothing more than that. Then I connect to it with the newly created user, using psql -h <pg_host> -U usr_bar -d <db_name>.
Replacing <pg_host> with either 127.0.0.1 when running psql from the Docker host machine or with the docker container name when running psql from another docker container. Also replacing <db_name> with either postgres or db_foo; they both yield the same odd behavior.
What I expected to happen is that the login above (with the usr_bar user) would fail for any of the databases due to lack of permission. Or at least that I wouldn't be able to make any changes; but I'm able to run, for instance, a CREATE TABLE command, and it works. I would expect the user to have no permissions by default, since no GRANT was performed.
So my question is: why does this newly created user have so many permissions by default? What am I doing wrong? If anyone can also suggest how to solve this, you're welcome; but I'd like to understand the reasoning behind it.
NOTE: For the docker image, I tried with two different ones, but had the same results. They are:
$ docker run --rm -ti -e POSTGRES_PASSWORD=root1337 --network some_net --name some-pg postgres:13
and
$ docker run --rm -ti -e POSTGRESQL_PASSWORD=root1337 --network some_net --name some-pg bitnami/postgresql:13
You are not doing anything wrong. There are two things that conspire to produce the behavior you see:
The default permissions for databases allow CONNECT and TEMP to PUBLIC, i.e., everyone.
That may seem lax, but it is mitigated by the fact that a stock pg_hba.conf does not allow remote connections at all. (The Docker images are more permissive here: their entrypoint appends a pg_hba.conf rule allowing remote password-authenticated connections, which is why you could connect from another container in the first place.)
In a way, the CONNECT permission on databases and the pg_hba.conf rules overlap: they both restrict which users can access which databases. I guess it was decided that being strict in one of them is good enough.
The default permissions on the public schema allow CREATE to PUBLIC.
That is an unsafe default, and the documentation recommends revoking that privilege in your databases.
The reason this has not been changed is backward compatibility, which is highly prized in PostgreSQL.
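If you want to lock usr_bar down, a minimal sketch (reusing the names from the question) would be:
REVOKE CONNECT, TEMP ON DATABASE db_foo FROM PUBLIC;
GRANT CONNECT ON DATABASE db_foo TO usr_bar;
-- run inside each database whose public schema should be protected:
REVOKE CREATE ON SCHEMA public FROM PUBLIC;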
I am using a postgis/postgis Docker image to set up a database server for my application.
The database server must have a tablespace created, then a database.
Then, each time another application starts from another container, it will run a Liquibase script that updates the database schema (creating tables, indexes...) when needed.
On a terminal, to prepare the database container, I'm running these commands:
# Run a naked Postgis container
sudo docker run --name ecoemploi-postgis \
  -e POSTGRES_PASSWORD=postgres \
  -d -v /data/comptes-france:/data/comptes-france postgis/postgis

# Send 'bash level' commands to create the directory for the tablespace
sudo docker exec -it ecoemploi-postgis \
  /bin/sh -c 'mkdir /tablespace && chown postgres:postgres /tablespace'
Then, to complete my step 1, I have to run SQL statements to create the tablespace from a PostGIS point of view, and to create the database with a CREATE DATABASE.
I connect manually, via psql, inside my container:
sudo docker exec -it ecoemploi-postgis /bin/sh -c \
  'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U postgres'
And I run these commands manually:
CREATE TABLESPACE data LOCATION '/tablespace';
CREATE DATABASE comptesfrance TABLESPACE data;
exit
But I would like to have a container created from a single Dockerfile that has done all the needed work. The difficulty is that it has to be done in two parts:
One before the container is started (creating directories, granting them to user:group).
One after it is started for the first time: declaring the tablespace and creating the database. If I understand the base image I took correctly, this should be done after the docker-entrypoint.sh entrypoint has run?
What is the right way to write a Dockerfile that creates a container having done all these steps?
The PostGIS image "is based on the official postgres image", so it should be able to use the /docker-entrypoint-initdb.d mechanism. Any files you put in that directory will be run the first time the database container is started. The postgis Dockerfile already uses this directory to install the PostGIS extensions into the default database.
That means you can put your build-time setup directly into the Dockerfile, and copy the startup-time script into that directory.
FROM postgis/postgis:12-3.0
RUN mkdir /tablespace && chown postgres:postgres /tablespace
COPY createdb.sql /docker-entrypoint-initdb.d/20-createdb.sql
# Use default ENTRYPOINT/CMD from base image
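For this setup, createdb.sql could contain exactly the statements you were running manually:
CREATE TABLESPACE data LOCATION '/tablespace';
CREATE DATABASE comptesfrance TABLESPACE data;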
For the particular setup you describe, this may not be necessary. Each database container runs in an isolated filesystem space and starts with an empty data directory, so there's no specific need to create an alternate data directory; Docker style is to just run multiple database containers if you need isolated storage. Similarly, the base postgres image will create a database for you at first start (named by the POSTGRES_DB environment variable).
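For instance, a minimal sketch that skips the tablespace entirely and lets the image create the database at first start (values reused from your commands):
sudo docker run --name ecoemploi-postgis \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=comptesfrance \
  -d postgis/postgis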
In order to build the container, your Dockerfile must be complete and functional. You can put the queries in a bash script and, on the last line of the Dockerfile, declare an ENTRYPOINT that runs that script.
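A rough sketch of that approach (the file names and contents are my assumptions, not a tested setup):
FROM postgis/postgis
COPY setup.sh /usr/local/bin/setup.sh
RUN chmod +x /usr/local/bin/setup.sh
ENTRYPOINT ["setup.sh"]
Here setup.sh would create and chown /tablespace, then exec the image's original docker-entrypoint.sh postgres so the server still ends up as the container's main process.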
On my MacBook I have PostgreSQL running in a Docker container, and I use a mapped volume to persist the data. This works perfectly locally. However, when I try to do the same on the Ubuntu server, the 'initial' data from the mapped volume is not picked up: Postgres starts up in an 'empty' initial state.
However, when I add a table, and data in that table, in the default postgres database, it IS persistent. So the volume mapping seems to work.
Furthermore, it is interesting to note that I get an error when I try to create a table in a new database. The new database is persistent as well, but the table can't be saved, as an error is thrown:
could not open file "base/16384/2611": No such file or directory
This is expected, as the folder base/16384 doesn't exist.
To me this seems like a user/permissions issue, but I have no clue how to fix it.
I tried running the container as root, which didn't help.
Any suggestions?
I'm starting the container either with docker-compose or from the command line using:
docker run --rm --name pg -e POSTGRES_PASSWORD=[password] -d -p 5432:5432 -v /root/docker/volumes/postgres:/var/lib/postgresql/data postgres -c listen_addresses='*'
Instead of moving the actual data folder around, I used pg_dump and pg_restore within the Docker containers, per a suggestion on the Docker forums. This did the trick.
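Something along these lines, assuming a container named pg and the default postgres database (adjust the names to your setup):
# dump from the old container to a file on the host
docker exec -t pg pg_dump -U postgres -Fc postgres > backup.dump
# restore into the fresh container
docker exec -i pg pg_restore -U postgres -d postgres < backup.dump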
I have the following docker-compose file:
version: '3'
services:
  web:
    build:
      context: ./django_httpd_mod_wsgi
    ports:
      - "8000:80"
  db:
    build:
      context: ./postgresql
    volumes:
      - db-data:/var/lib/postgres/data
volumes:
  db-data:
I am building the postgresql image using Arch Linux:
The following is my postgresql Dockerfile:
FROM archlinux/base
RUN yes | pacman -S postgresql
RUN mkdir /run/postgresql/
RUN chown -R postgres:postgres /run/postgresql/
USER postgres
RUN initdb -D /var/lib/postgres/data
RUN psql -c 'CREATE DATABASE btgapp;'
RUN psql -c "CREATE USER simha WITH PASSWORD 'krishna';"
RUN psql -c 'GRANT ALL PRIVILEGES ON DATABASE btgapp TO simha;'
CMD ["/usr/bin/postgres","-D","/var/lib/postgres/data"]
When I try to do:
docker-compose up
I get the error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/run/postgresql/.s.PGSQL.5432"?
ERROR: Service 'db' failed to build: The command '/bin/sh -c psql -c 'CREATE DATABASE dbname;'' returned a non-zero code: 2
I understand that I have to run psql -c 'CREATE DATABASE dbname;' after starting the PostgreSQL server with /usr/bin/postgres -D /var/lib/postgres/data. But during a Dockerfile build I cannot have the server running and issue commands against it at the same time. So how do I do this?
One option is to start everything from a script, but then it will be difficult to keep postgres running as the container's single foreground process.
Based on the comments, I will try to answer here.
I believe that you should go with the postgres 11-alpine image. And I will try to explain why here.
Official docker images come with a number of benefits that you should always consider before starting your own.
Upgrade path is easy - when a new revision of the application wrapped in the image is released, the official Docker image will in most cases be updated along with it. And usually the changes respect the configuration conventions that the image has established, such as environment variables and startup specifics, so users can simply change the tag in their stacks and upgrade. There may of course be breaking changes - always check this.
Large user base - when images like postgres have been downloaded more than 10 million times (2019), it does not only mean that the image is popular; it inherently works like a guarantee that the image has been tested thoroughly. Any elementary bugs have been weeded out already, and you will have an easy time with the image.
Optimized for size and performance - you can be sure that attention has been paid to a lot of details, minimizing the size of the image and maximizing performance. Many projects publish their applications on a few different Linux distros. Like postgres - they publish Debian- and Alpine-based images. The Alpine image is the smaller one, while the Debian one is slightly larger but gives you access to the vast Debian package repositories if you need extra packages installed.
Easy configuration - maintainers of the official images usually understand the use cases of their user base very well, and they try to make our lives as developers and admins easier (god bless them). Official images usually have some pretty good documentation sitting right on their Docker Hub landing page, or a link to a GitHub repo where the README.md covers common use cases. I find that these instructions are worth a good read from top to bottom.
I understand that you want to keep the image small, but what do you know - the postgres project has your use case covered.
The latest Alpine postgres image, tagged 11-alpine, has a compressed footprint of 28 MB and a decompressed size of 70 MB, while the archlinux/base image that you want to start from has a compressed footprint of 153 MB and a decompressed size of 445 MB. And that's before you introduce postgres itself.
Add to that that the database and user you want created on startup can be handled with environment variables alone for the official postgres image. Like this:
docker run -d --name some-postgres \
-e POSTGRES_PASSWORD=mysecretpassword \
-e POSTGRES_USER=simha \
-e POSTGRES_DB=btgapp \
postgres:11-alpine
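You can then check the result from inside the container:
docker exec -it some-postgres psql -U simha -d btgapp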
If that does not cover the initialization that you need for your database, then you can copy .sql scripts (and .sh scripts) into a special location in the image - and they will be executed on startup. For this you can extend their image like this:
init-user-db.sh
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
CREATE USER simha;
CREATE DATABASE btgapp;
GRANT ALL PRIVILEGES ON DATABASE btgapp TO simha;
EOSQL
And then with a Dockerfile like this:
Dockerfile
FROM postgres:11-alpine
COPY ./init-user-db.sh /docker-entrypoint-initdb.d/init-user-db.sh
(This is taken from the postgres description on docker hub)
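To use it, build and run the extended image, for example (the tag is an assumption):
docker build -t my-postgres .
docker run -d --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword my-postgres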
In closing - I would recommend that you do not prioritize the distro an image is based on over usability and maintainability. Docker enables us to run applications in containers without really caring too much about which distro is inside the container. It's all Linux anyway. At the end of the day, I expect that you want a stable postgres database container, like me. This is what I get with the official postgres image.
I hope I helped you evaluate your options on this.
I get this error while trying to dump a database. I entered:
linuxuser $ sudo su postgres
linuxuser $ [sudo] password for linuxuser:...
$ pg_dump -h localhost mydb >tempfile
sh: cannot create tempfile: Permission denied
What's the problem? I've just installed a fresh PostgreSQL.
Write into a directory where the postgres user has write access, for instance /tmp:
$ pg_dump -h localhost mydb >/tmp/tempfile
In your attempt, the postgres user tries to create a file in a directory belonging to another user.
Backup and restore can be done by any unprivileged user who knows the postgres superuser password, by changing permissions on the working directory:
% mkdir backup
% chmod 0777 backup
% su postgres
[enter password]
$ cd backup
$ pg_dump mydb >tempfile
$ exit
"tempfile" will be owned by postgres and same group as the user
sudo su postgres doesn't change the current directory, so you're still in linuxuser's home directory, where postgres has no permission to write.
Change to a different directory first.
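For example:
sudo su postgres
cd /tmp
pg_dump -h localhost mydb > tempfile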
postgres User
As the other correct answers said, the folder in which you are trying to save the backup does not have permissions assigned to the postgres user (the operating-system user account). The postgres user is the one running the backup utility. This user account was created during the Postgres installation process; you may have used a different name, but the default is postgres.
Folder With Permissions
The solution is to either find or create a folder where the postgres user has read-write permissions.
Mac OS X
In Mac OS X (Mountain Lion), I am able to create such a folder in the Finder.
In the Finder, create a new folder and select it. In this example, I created a folder named postgres_backups.
Choose File > Get Info.
Open the disclosure triangle for the Sharing & Permissions section.
Click the Plus button to add another item to the list of users. A list of users appears in a "sheet" dialog.
Select the postgres user from the list.
In the Privilege column, for the new postgres row, change the popup menu to Read & Write.
Close the Get Info window. Done.
Now you can direct your Postgres backup files to that folder.
By the way, I use the pgAdmin app to do backups and restores. Control+click on the desired database and choose Backups…. The pgAdmin app was probably bundled with your Postgres installation.
The first thing to do: do not switch to the postgres user. Then use this command for the backup:
pg_dump -U username -h localhost dbname > db.sql
(Write to a location your own user can write to; redirecting to /db.sql in the filesystem root would fail with the same permission error.)
I wrestled with this "Permission Denied" issue while trying to back up my PSQL database on an Ubuntu machine for a long time, and tried user access rights, superuser status, folder properties - lots of stuff. Finally, something ridiculously simple worked. In the command:
pg_dump -U username -O dbname > 'filename.sql'
Make sure you have quotes (single or double quotes seem to work) around the filename which follows the greater than (">") sign. The examples above do not have quotes around the filename. Once I tried that, I no longer received Permission Denied errors.