I'm trying to create a database and a user with limited privileges. The user should have access only to that database, nothing more.
In a blank-slate Postgres 13 deployment using Docker, I connect as the postgres superuser and run the following:
CREATE DATABASE db_foo;
CREATE USER usr_bar WITH NOINHERIT LOGIN PASSWORD 'pwd1234';
That's just it, nothing more. Then I connect with the newly created user, using psql -h <pg_host> -U usr_bar -d <db_name>.
I replace <pg_host> with either 127.0.0.1 when running psql from the Docker host machine, or with the Docker container name when running psql from another container. I also replace <db_name> with either postgres or db_foo; both yield the same odd behavior.
What I expected is that logging in as usr_bar would fail for either database due to lack of permissions, or at least that I wouldn't be able to make any changes. Instead, I can run, for instance, a CREATE TABLE command and it works. I would expect the user to have no permissions by default, since no GRANT was performed.
So my question is: why does this newly created user have so many permissions by default? What am I doing wrong? Suggestions on how to fix this are welcome, but above all I'd like to understand the reasoning behind it.
NOTE: I tried two different Docker images and got the same results with both:
$ docker run --rm -ti -e POSTGRES_PASSWORD=root1337 --network some_net --name some-pg postgres:13
and
$ docker run --rm -ti -e POSTGRESQL_PASSWORD=root1337 --network some_net --name some-pg bitnami/postgresql:13
You are not doing anything wrong. There are two things that conspire to produce the behavior you see:
The default permissions for databases allow CONNECT and TEMP to PUBLIC, i.e., everyone.
That may seem lax, but it is mitigated by the fact that the default pg_hba.conf does not allow remote connections at all.
In a way, the CONNECT permission on databases and pg_hba.conf rules overlap: they both restrict access of users to databases. I guess it was decided that being strict in one of them is good enough.
The default permissions on the public schema allow CREATE to PUBLIC.
That is an unsafe default, and the documentation recommends revoking that privilege in your databases.
The reason this has not been changed is backward compatibility, which is highly prized in PostgreSQL.
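Putting the two fixes together, something like the following should tighten things up (a sketch run as a superuser, using the db_foo and usr_bar names from the question):

```sql
-- Remove the default CONNECT and TEMP privileges from PUBLIC
REVOKE ALL ON DATABASE db_foo FROM PUBLIC;

-- While connected to db_foo: remove the default CREATE
-- privilege on the public schema
REVOKE CREATE ON SCHEMA public FROM PUBLIC;

-- Grant back only what usr_bar actually needs
GRANT CONNECT ON DATABASE db_foo TO usr_bar;
GRANT USAGE, CREATE ON SCHEMA public TO usr_bar;
```

Note that the schema-level statements must be run while connected to db_foo itself, since each database has its own public schema.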
Related
I'm currently learning Docker, experimenting with Postgres containers, and asking myself the following questions.
I launch a first Postgres container like this:
docker run -e POSTGRES_PASSWORD=secret -p 5464:5432 -v postgres-data:/var/lib/postgresql/data -d postgres
and then a second container with this command, which therefore uses EXACTLY THE SAME VOLUME:
docker run -p 5465:5432 -v postgres-data:/var/lib/postgresql/data -d postgres
Is that a problem?
And my most essential question is: should I consider that I have two Postgres servers sharing the same configuration files, or that I have two Postgres containers sharing the same Postgres server?
It's not really clear to me. Thanks in advance.
Yes, that's a problem. I think PostgreSQL is clever enough that one of the databases just won't start up. In the worst case, this is a recipe for data corruption. This isn't specific to Docker; just in general, you can't run two databases against the same physical storage.
A typical container-oriented setup is to have two separate databases with two separate volumes, one for each service that requires a database.
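For example (container, volume, and service names here are illustrative, not from the question):

```shell
# One data volume per Postgres instance; never share a data volume
docker run -d --name orders-db -e POSTGRES_PASSWORD=secret \
  -p 5464:5432 -v orders-data:/var/lib/postgresql/data postgres

docker run -d --name billing-db -e POSTGRES_PASSWORD=secret \
  -p 5465:5432 -v billing-data:/var/lib/postgresql/data postgres
```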
I'm using postgres:13.1-alpine image for a docker container. I tried to make a backup of the volume using the following command:
docker run --rm \
--volume [DOCKER_COMPOSE_PREFIX]_[VOLUME_NAME]:/[TEMPORARY_DIRECTORY_STORING_EXTRACTED_BACKUP] \
--volume $(pwd):/[TEMPORARY_DIRECTORY_TO_STORE_BACKUP_FILE] \
ubuntu \
tar xvf /[TEMPORARY_DIRECTORY_TO_STORE_BACKUP_FILE]/[BACKUP_FILENAME].tar -C /[TEMPORARY_DIRECTORY_STORING_EXTRACTED_BACKUP] --strip 1
Something went wrong, and now I can't access the database. I used to access it with the user/role myuser, but that role seems to no longer exist.
What I tried
I can still access the container using docker exec -it postgres sh. But I can't start psql, because none of the roles root, postgres, or myuser exist.
All the solutions I have found so far are basically the same: either use the postgres user to create another user, or use the root user to create the role postgres. These solutions don't work here.
Most likely your database is toast. It is hard to see how an innocent backup accident could leave you with an otherwise intact database but no predictable users. It is at least plausible, though, that you or Docker blew away your database entirely and then created a new one with a user whose name you wouldn't immediately guess. So check the size of the data directory: is it more plausible for the database you hope to find, or for one newly created from scratch?
I would run strings DATADIR/global/1260 and see if it turns up anything recognizable as a user name; then you could try logging in as that.
Or you could shut down the database and restart it in single-user mode (/path/to/bin/postgres --single -D /path/to/DATADIR) and take a look at pg_authid to see what is in there.
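A rough sketch of that second approach (binary and data-directory paths are assumptions; adjust to your installation):

```shell
# Stop the running server first, then start a single-user backend
# against the data directory, connecting to the postgres database
/usr/lib/postgresql/13/bin/postgres --single -D /var/lib/postgresql/data postgres

# At the resulting "backend>" prompt, list the roles the cluster has:
#   SELECT rolname FROM pg_authid;
```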
I have two Postgres databases, and I want to sync data between them.
So far I have these two containers, exactly the same except for different ports and different names:
docker container run --name='p1' -d -p 5435:5432 -v /tmp/dbs/test/:/var/lib/postgresql/data postgres
docker container run --name='p2' -d -p 5436:5432 -v /tmp/dbs/test/:/var/lib/postgresql/data postgres
The problem happens when something changes.
If I change something in p1, like inserting a row, I can't see it in p2.
But if I kill and re-run the containers, I can see the inserted data in both of them.
Why is this happening?
Is there a way to sync data between them?
Running two postmaster processes on the same files is a sure road to data corruption. Don't do that.
You cannot have multi-master replication with standard PostgreSQL, but you can have a read-only standby server.
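A sketch of how such a standby is usually set up with pg_basebackup (the hostname, replication user, and paths are placeholders):

```shell
# On the standby host, with an empty data directory:
# -R writes standby.signal and the primary_conninfo setting for you
pg_basebackup -h primary-host -U replicator \
  -D /var/lib/postgresql/data -R -X stream

# Then start Postgres on that data directory; it comes up read-only
# and continuously streams changes from the primary.
```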
I'm creating a Docker image based on the postgres image and I'm trying to interact with it like this:
FROM postgres:9.6
USER postgres
RUN createuser foo
However, this results in the following error while building:
createuser: could not connect to database postgres: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
How do I properly connect to the PostgreSQL server from within this container?
The postgres server isn't running during the docker build process, so trying to connect to it with a RUN statement in your Dockerfile isn't going to work.
If you want to create users or databases or extensions, etc, you need to do that at runtime. There are a few options available, and which one you choose depends on exactly what you're trying to do.
If you just need to create a user and/or database that differs from the default, you can do that via environment variables as described in the documentation.
To create a user other than postgres:
docker run -e POSTGRES_USER=foo -e POSTGRES_PASSWORD=secret [...] postgres
To create a database other than the default (which will match the name of POSTGRES_USER):
docker run -e POSTGRES_DB=mydbname [...] postgres
If you need to do anything more complicated, take a look at the "How to extend this image" section of the documentation. You can place shell scripts or sql scripts into /docker-entrypoint-initdb.d and they will be executed during container startup. There is an example there that demonstrates how to create an additional database using this mechanism.
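As a sketch, a script like the following (the file name and the foo/foodb names are made up) placed in /docker-entrypoint-initdb.d runs once, when the data directory is first initialized:

```shell
#!/bin/bash
# /docker-entrypoint-initdb.d/init-db.sh
set -e

# Create an extra role and database alongside the defaults
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
    CREATE USER foo;
    CREATE DATABASE foodb OWNER foo;
EOSQL
```

Scripts in that directory are only executed when the data volume is empty; on subsequent starts with an existing data directory they are skipped.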
Is there a way to get console access to Dokku's PostgreSQL plugin? On Heroku I'd do heroku pg:psql. Is this possible in a Dokku environment and if so how?
There is in fact a way to do this directly with the dokku-pg-plugin.
The command postgresql:restore <db> < dump_file.sql connects to the specified database and restores it with the provided dump file. If you simply omit the dump file part (< dump_file.sql), a psql console session opens.
Since postgresql:restore <db> is semantically not the best way to open a console session, I have opened a pull request to add the command postgresql:console <db>.
So, until my PR is merged, the options for opening a psql console for a database are either:
doing it manually with psql -h 172.17.42.1 -p <port> -U root db with the <port> and password taken from the output of dokku postgresql:info <db>,
using the semantically incorrect command dokku postgresql:restore <db>, or
using my forked and patched version of the plugin, which adds the command postgresql:console <db>.
Edit:
The owner of the dokku-pg-plugin has merged my pull request. If you're using this plugin and are looking for a way to access your PostgreSQL console with it, you might want to update it to the latest version. Once you have done that, you can use the command postgresql:console <db> to open a psql session to the specified database.
This worked for me for my Rails app that I'm running on Dokku:
dokku run <app-name> rails db
That brought up the console for the PostgreSQL container I created (via dokku postgresql:create <db>). I couldn't figure out another way to get at the PostgreSQL instance in that container, short of attempting to directly connect to the DB, with the connection info/credentials listed when you do this:
dokku postgresql:info <db>
I haven't tried that, though I suspect it would work.