I have a Redshift database at my company (not in my power to change that) and recently some data just disappeared. I thought about creating some kind of trigger to identify when a delete happens and trace the source, but I learned that Redshift doesn't have triggers. Are there any options for monitoring which user deleted from the database, and when?
Ideally you should have a separate user or role for each process or client connecting to Redshift. Use grants to solve/debug this problem.
Then you can use GRANT to give the DELETE privilege only to specific users or roles.
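A hedged sketch of what that could look like (the role and table names here, etl_user, reporting_user, and public.orders, are placeholders):

    -- Placeholder names: only the load process may delete, the reporting role may not.
    REVOKE DELETE ON public.orders FROM reporting_user;
    GRANT DELETE ON public.orders TO etl_user;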
Also, there is a sql_history table that you can query to see which user has issued a delete query.
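For example, a sketch that uses Redshift's stl_query system table (which keeps the text of recent queries) joined to pg_user; adjust it to whatever query-history view your cluster exposes:

    -- Who ran DELETE statements recently, and when (sketch).
    SELECT q.starttime, u.usename, q.querytxt
    FROM stl_query q
    JOIN pg_user u ON u.usesysid = q.userid
    WHERE lower(q.querytxt) LIKE 'delete%'
    ORDER BY q.starttime DESC;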
I hope this helps.
Related
I have a table that gets regularly and automatically dropped and recreated. I would like users to be able to select from it but not be able to create VIEWs that would prevent this automatic dropping from happening. Is there a way to do so? We're stuck on PostgreSQL 9.6, if that's relevant.
As far as I know, and according to the PostgreSQL 9.6 documentation, the SELECT privilege allows selecting from tables and other objects you have been granted access to, but it does not include creating views. To create a view you need the CREATE privilege on the schema, so my suggestion is to revoke the CREATE privilege from those users and grant them SELECT only.
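A minimal sketch of those statements, assuming the table lives in the public schema and the users connect as a role called app_reader (the schema, table, and role names are placeholders):

    -- Stop the role from creating views (or any objects) in the schema,
    -- but keep read access to the table that gets dropped and recreated.
    REVOKE CREATE ON SCHEMA public FROM app_reader;
    GRANT USAGE ON SCHEMA public TO app_reader;
    GRANT SELECT ON public.the_table TO app_reader;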
My question is: when I create a PostgreSQL user different from the default postgres user, should I also create a new database for that user to connect to?
What's the point of a setup like that?
A few explanations:
Don't use the postgres database for user data. It is intended for administrative purposes, for example as a database to connect to if you want to run CREATE DATABASE.
This has nothing to do with users.
Users are cluster-wide, that is, all databases in a cluster share the same users. Note that that does not imply that every user can connect to each database.
PostgreSQL command line programs have two defaults:
If you don't specify a database user, the default is the database user with the same name as the operating system user.
If you don't specify a database, the default is the database with the same name as the database user.
I assume that it is this last default that inspires your question.
This is just a default value and should not influence your database design.
Create one database for each separate body of data, like all the data that belong to one application.
Create users as your application and data maintenance procedures require. It is a good idea to use different users for different tasks. For example, the user that performs the backup should not be used by your application to connect to the database.
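As a hedged illustration of that separation (all names here, app_db, app_rw, and backup_ro, are placeholders):

    -- One role per task, one database per body of data (placeholder names).
    CREATE ROLE app_rw LOGIN PASSWORD 'change_me';
    CREATE ROLE backup_ro LOGIN PASSWORD 'change_me';

    CREATE DATABASE app_db OWNER app_rw;

    -- Users are cluster-wide, but connection rights are granted per database.
    REVOKE CONNECT ON DATABASE app_db FROM PUBLIC;
    GRANT CONNECT ON DATABASE app_db TO app_rw, backup_ro;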
No. Even if it's a local admin user, so you don't need to go through sudo, you should just add export PGDATABASE=postgres to your .bashrc or .profile. I always create a new superuser with the name of my local user and configure pg_hba.conf to allow local connections if necessary.
I'm going to develop a multi-tenant application, where each tenant lives in its own database or schema (I haven't decided yet).
In this scenario, if I want to use point-in-time recovery (PITR), I also want to have it per tenant. If a tenant has a problem, I want to be able to roll back only that tenant's database or schema and not the whole server.
While I found information on how to do backup/restore in such situations with pg_dump and pg_restore, I haven't found any information about PITR.
Is this even possible? If yes, only per database or even per schema?
I can imagine that Postgres may store the log for the whole server in a single file, which could be the reason why this is not possible. But I may be wrong.
In order to secure our database we create a schema for each new customer. We then create a user for this schema and when a customer logs in via the web we use their user and hence prevent them gaining access to other areas of the database.
Our issue is with connection pooling, as it is a bit inefficient to keep creating/dropping connections for these users. We would like a solution that can work across many hundreds of different database users.
We've looked at PgBouncer, but the issue there is that we have to add an entry to an ini file for each user and restart PgBouncer every time we set up a customer. This is not a great solution.
Is there an alternative solution that works in real time and would mean a customer's connection(s) stay in the pool while they are active?
According to the latest release notes, PgBouncer might actually do this, but I haven't tried it.
Pooling mode can be configured both per-database and per-user.
As for the use case in general: we also had this kind of issue a while ago. We went with connection pooling over a single user/database with multiple schemas. Before running a query we just issued SET search_path TO schemaName. As for logging, we had a compliance mode where we could log activity per customer and save it in the appropriate schema.
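A minimal sketch of that pattern, with tenant_42 and orders as placeholder schema/table names:

    -- Pin the pooled connection to one tenant's schema before running queries.
    SET search_path TO tenant_42;
    SELECT count(*) FROM orders;   -- resolves to tenant_42.orders
    RESET search_path;             -- return the connection to a neutral state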
To connect to the database I use this example, but I can't find any tutorial on how to create a database.
For example:
connect to server
create new database
do something
drop database
close connection
Can anybody show me how to do it?
Thanks!
Follow the manual on how to create a database cluster:
http://www.postgresql.org/docs/9.1/interactive/creating-cluster.html
The database and users are created only once, and you can use the client applications for that; after that you connect to it as many times as needed. Or are you trying to do it automatically as part of a software install package?
Since you are creating a new database and then dropping it, why not use the built-in SQLite database? You can do a completely in-memory database that will be lightning fast (unless you fill up available RAM).
I believe you can create databases by issuing standard SQL commands, just as you can create tables in a database, as long as you are connected as a user (e.g. an admin or similarly privileged user) that has permission to create new databases.
So, all you need is to connect to the DB with the right user and then issue SQL commands with db.SQLExecute, such as "create database newDBname".
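On the SQL side, a minimal sketch (newDBname is a placeholder, and the connected role is assumed to have the CREATEDB privilege; note that CREATE DATABASE and DROP DATABASE cannot run inside a transaction block, and you cannot drop the database you are currently connected to):

    -- Run while connected to an existing database such as postgres.
    CREATE DATABASE newdbname;

    -- ... reconnect to newdbname and do the work ...

    -- Drop it afterwards from a connection to a different database.
    DROP DATABASE newdbname;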