We've got an existing Postgres database which is configured to run with PostgREST. There are already RLS policies at the database level, and we would like to use Superset on top of that. Unfortunately, by default we didn't find a way to pass a variable (in this case the username) to the database via the Superset DB connector, which would then be used for RLS. Is there any possibility to create a custom DB connector that passes the username (the Superset username) to the DB as a user-defined Postgres variable with a given name?
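To make it concrete, this is roughly what we have in mind on the Postgres side (just a sketch; the variable name app.username, the table, and the owner column are placeholders we made up):

-- RLS policy that reads a user-defined variable:
ALTER TABLE my_table ENABLE ROW LEVEL SECURITY;
CREATE POLICY per_user_rows ON my_table
    USING (owner = current_setting('app.username', true));

-- what the connector would have to issue for each session:
SET app.username = 'some_superset_user';
-- or, scoped to one transaction (e.g. when connections are pooled):
SET LOCAL app.username = 'some_superset_user';

So the missing piece is getting Superset to run that SET with its own username.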
I tried to find some help reading this article: https://preset.io/blog/building-database-connector/.
After that, some more questions arose, such as:
how could I access the username of a Superset instance from within the database connector?
would it suffice to set the username once, or do I have to pass it along with each query?
Due to the size and complexity of the application I did not progress further.
Cheers,
We have had our database in MongoDB for a long time, and now we have decided to move to Postgres. Since these two are totally different, we have started with the table design and API migration first. Now it comes to the data part.
In Mongo we have the following schemas, and we want to migrate the same data to Postgres. I have gone through a couple of articles that say you can export data from Mongo as CSV and import it into Postgres using the COPY command or using pgAdmin.
Mongo used UUIDs, which are basically strings, but in Postgres we have id as an integer. We have also used cross-referenced foreign keys in Mongo; how can we migrate those without losing the connections between tables?
Can anyone suggest a good method?
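For example, this is the kind of id mapping I think we would need, based on those articles (a rough sketch; all table and column names are invented):

-- staging tables keep the Mongo UUIDs from the CSV export
CREATE TABLE users_staging (mongo_id text PRIMARY KEY, name text);
CREATE TABLE orders_staging (mongo_id text PRIMARY KEY, user_uuid text, total numeric);
COPY users_staging FROM '/tmp/users.csv' WITH (FORMAT csv, HEADER);
COPY orders_staging FROM '/tmp/orders.csv' WITH (FORMAT csv, HEADER);

-- final tables get new integer ids but remember the old UUID
CREATE TABLE users (id serial PRIMARY KEY, mongo_id text UNIQUE, name text);
CREATE TABLE orders (id serial PRIMARY KEY, user_id int REFERENCES users(id), total numeric);
INSERT INTO users (mongo_id, name) SELECT mongo_id, name FROM users_staging;

-- child rows resolve the old UUID reference to the new integer id
INSERT INTO orders (user_id, total)
SELECT u.id, o.total
FROM orders_staging o
JOIN users u ON u.mongo_id = o.user_uuid;

But I'm not sure this is the right approach.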
I need to log all activity for a specific user on the database. I have set up logging with ALTER ROLE username SET log_statement TO 'all'; and the logging works fine; all queries from the user are logged. The problem is that for this user, queries to Postgres internal schemas (pg_catalog) from clients like psql and pgAdmin are also logged. I have a bunch of lines with SELECT pg_catalog.quote_ident(n.nspname) || '.' || pg_catalog.quote_ident(c.relname).... in the log that are of no use to me. Even worse, these queries span more than one line in the log, so it's not easy to filter them out.
Is it possible to somehow restrict the logging only to one specific database or schema and not to include queries to other schemas like pg_catalog?
I don't know if the standard logging utility in Postgres has that option (my guess is no). But it may be worth a look at the pgAudit external library for Postgres.
The pgAudit module is designed to generate audit logs, but it uses the standard Postgres logging tool. You can tweak several parameters to customize the logs, and it has one specific parameter that I think is perfect for your use case. From the documentation:
pgaudit.log_catalog
Specifies that session logging should be enabled in the case where all
relations in a statement are in pg_catalog. Disabling this setting
will reduce noise in the log from tools like psql and PgAdmin that
query the catalog heavily.
The default is on.
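For example, something along these lines (a sketch, not tested; the role name is a placeholder, and pgAudit has to be installed and preloaded first):

# postgresql.conf
shared_preload_libraries = 'pgaudit'

-- then in the database:
CREATE EXTENSION pgaudit;
ALTER ROLE username SET pgaudit.log = 'all';
ALTER ROLE username SET pgaudit.log_catalog = off;

That should give you the same "log everything for this user" behaviour you had with log_statement, minus the pg_catalog noise.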
I hope it helps!
Change your logging format from text to CSV (log_destination = 'csvlog'); you can then import the data into the database and filter out the queries you are not interested in:
Using CSV-Format Log Output
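For instance (a sketch; the postgres_log table definition with the full column list is in the linked docs, and the file path and user name are made up):

# postgresql.conf
logging_collector = on
log_destination = 'csvlog'

-- import the CSV file into the postgres_log table from the docs, then e.g.:
COPY postgres_log FROM '/var/lib/pgsql/data/log/postgresql.csv' WITH csv;
SELECT log_time, user_name, message
FROM postgres_log
WHERE user_name = 'someuser'
  AND message NOT LIKE '%pg_catalog.%';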
I run a couple of PostgreSQL databases (9.3), one of which does not need archiving and the other of which I'd rather run in WAL archive mode but can get away with not.
I now have a need for a database which is archived.
As far as I can tell, the setting is on a per-instance basis, so I wouldn't be able to choose which databases to archive and which not, which would indicate that I will need to create a new PostgreSQL instance.
Am I missing something?
Also, FWIW, will I be able to create database links between databases on the two instances?
Thanks, --sw
You cannot choose a single database for archiving; either all databases in a PostgreSQL instance are archived, or none. There is no other possibility at the moment.
You can send queries to another PostgreSQL instance via the dblink extension or with the Foreign Data Wrapper API. The FDW API should be preferred, although dblink still has some uses.
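A minimal postgres_fdw sketch (server name, credentials, and the remote table are placeholders; postgres_fdw is available from 9.3):

CREATE EXTENSION postgres_fdw;
CREATE SERVER other_instance
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', port '5433', dbname 'archived_db');
CREATE USER MAPPING FOR CURRENT_USER
    SERVER other_instance
    OPTIONS (user 'remote_user', password 'secret');
CREATE FOREIGN TABLE remote_items (id int, payload text)
    SERVER other_instance
    OPTIONS (schema_name 'public', table_name 'items');
-- the remote table can now be queried like a local one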
How can I obtain the creation date or time of an IBM DB2 database without first connecting to the specified database? Solutions like:
select min(create_time) from syscat.tables
and:
db2 list tables for schema SYSIBM
require me to connect to the database first, like:
db2 connect to dbname user userName using password
Is there another way of doing this through a DB2 command instead, so I wouldn't need to connect to the database?
Can db2look command be used for that?
Edit 01: Background Story
Since more than one person asked why I need to do this and for what reasons, here is the background story.
I have a server with a DB2 DBMS where many people and automated scripts create databases for temporary tasks and tests. They are never meant to keep data for a long time. However, for one reason or another (e.g. a developer not cleaning up after himself, or tests being stopped forcefully before they can do the cleanup), some databases never get dropped and they accumulate until the hard disk eventually fills up. So the idea of the app is to look up the age of each database and drop it if it's older than 6 months (for example).
I moved some tables in my postgresql (8.2) database to a new schema.
At first, my "user" could not see the tables in the new schema, but I used SET search_path to tell it to look in this new schema.
I access these tables with a simple web application that uses hibernate. At first, my web application, which uses the "user" user, could not see the tables either, even after I set the search_path. I eventually set the default-schema in the hibernate config file and it worked, but I understand from what I've read that I should not have to set this property? I have a few JDBC queries in this app that still can't see the tables in the new schema.
I've browsed through the postgresql docs and can't find the cause of my problems. Is there something simple I'm missing?
SET search_path is not persisted. It is only valid for the current session.
You need to use ALTER USER to make that change permanent, but you don't need special privileges to change the user you are logged in with (i.e. "yourself").
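For example (schema, role, and database names are placeholders):

-- persists for every new session of that role:
ALTER ROLE myuser SET search_path = new_schema, public;
-- or for everyone connecting to one database:
ALTER DATABASE mydb SET search_path = new_schema, public;

Note this only affects new sessions; existing connections (e.g. Hibernate's pool) keep their old setting until they reconnect.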