Is there any way to get all database names from a Postgres server using JDBC? I can get the current one, but that's not what I am looking for...
I have a JUnit rule which creates a database for each test and drops it afterwards, but in some special cases, when the JVM dies, the drop never happens. So I'd also like the rule to check the existing databases and clean up those that are no longer used. What I am looking for is something like the \l meta-command (but I can't easily ssh to the machine from unit tests...)
Another solution for me would be some kind of database TTL, like some AMQP queues have, but I suppose that's not in Postgres either...
Thanks
Just run:
select datname
from pg_database
through JDBC. It returns all databases on the server you are connected to.
If you know how to get the information you want through a psql meta-command (e.g. \l), just run psql with the -E switch - all internal SQL queries for the meta-commands are then printed to the console.
\l actually uses a query that is a bit more complicated, but to get only the names, the above is sufficient.
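For the cleanup use case from the question, a variant along these lines (run through a plain JDBC Statement) can narrow the result to candidates for dropping; the test_ prefix is only an assumed naming convention for the per-test databases:

select datname
from pg_database
where not datistemplate        -- leave template0/template1 alone
  and datname like 'test_%'    -- hypothetical prefix the JUnit rule might use

Each returned name can then be dropped with DROP DATABASE, which will refuse to run while the database still has active connections.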
I have a Postgres database defined with the public schema and it is accessed from within a Python application. Before doing a pg_dump, the tables can be accessed without using a qualified table name, for example select * from user works. After doing the pg_dump, select * from user fails with a relation "user" does not exist error, but select * from public.user works. No restore has been performed.
Since this is within an application, I cannot change the access to include the schema. The application uses sqlalchemy and pgbouncer for interacting with the database.
In trying to figure out what's happening, I've discovered that running pg_dump causes the session to change. Before running the command, by querying pg_stat_activity, I can see there are 10 sessions in the pool, one active and nine idle. After running the command, a different session is active and the other nine are idle. Also, the settings in pg_db_role_setting and pg_user look correct for the session that I can see. But, even when those look correct, the query select * from user fails.
Also, just for reference, the code currently does not contain pg_dump and runs fine. As soon as I add the pg_dump, I see the issues mentioned.
Is there anything in pg_dump that could be causing what I'm seeing or is it just a result of going to another session? Since these are some of the first commands being run after running migrations, is there any reason the sessions should have different settings? What else am I missing?
Thanks!
I've tried to find a way to ask the postgres server for the current list of databases (and later list and describe the tables) from a C program using the libpq library. Currently I'm doing a (simplified) popen("psql --command '\\l'") but this is not really how I would like to solve the problem ... Is there any way of asking the postgres server for the information I need via a libpq function?
You can run a metadata query like
SELECT ... FROM pg_database;
The columns you select will depend on the information you need.
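For example, a sketch of a slightly fuller metadata query - the exact column list is up to you:

SELECT datname,
       pg_catalog.pg_get_userbyid(datdba) AS owner,
       pg_catalog.pg_encoding_to_char(encoding) AS encoding
FROM pg_database
WHERE NOT datistemplate
ORDER BY datname;

Sent through PQexec() and read back with PQntuples()/PQgetvalue(), this gives roughly the information psql shows for \l.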
Ok, I screwed up.
I dumped one of my PostgreSQL (9.6.18) staging databases with the following command
pg_dump -U postgres -d <dbname> > db.out
And after doing some testing, I "restored" the data using the following command.
psql -f db.out postgres
Notice the absence of the -d option? Yup. And that last argument was supposed to be the username.
And as the database happened to have the same name as its user, it overwrote the 'default' database (postgres), which had data that other QAs were using.
I cancelled the operation quickly as soon as I realised my mistake, but the damage was still done. Around 1/3 ~ 1/2 of the database is roughly identical to the staging database - at least in terms of the schema.
Is there any way to revert this? I am still looking to see whether any of these guys made another dump, but I don't think there is one from the past two to three months. Seems like I have no choice but to own up and apologise to them in the morning.
Without a recent dump or some sort of PITR replication setup, you can't easily undo this. The only option is to manually go through the log of what was restored and remove/alter it in the postgres database. This will work for the schema; the data is another matter.
FYI, the postgres database should not really be used as a 'working' database. It is there to be a database to connect to for doing other operations, such as CREATE DATABASE, or to bootstrap your way into a cluster. If it had been left empty, the above would not have been a problem: you could have done, from another database, DROP DATABASE postgres; and then CREATE DATABASE postgres.
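In psql that would look roughly like this, connecting to another database first (template1 is just an example), since you cannot drop the database you are currently connected to, and DROP DATABASE also fails while anyone else is still connected to it:

\c template1
DROP DATABASE postgres;
CREATE DATABASE postgres;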
Do you have a capture of the output of the psql -f db.out postgres run?
Since the pg_dump didn't specify --clean or -c, it should not have overwritten anything, just appended. And if your tables have unique or primary keys, most of the data copy operations should have failed with unique key violations and rolled back. Even one overlapping row (per table) would roll back the entire dataset for that table.
Without having the output, it will be hard to figure out what damage has actually been done.
You should also immediately copy the pg_xlog data someplace safe. If it comes down to it, you might be able to use pg_xlogdump to figure out what changes committed and what did not.
I have a relatively large data table (~4m rows) that has been imported to a locally hosted postgresql database. (As it happens it's a ruby on rails app database, but that shouldn't be important for the purposes of the question - unless it helps)
I want to take that table and add it into an identical table in a heroku postgresql database (the table is currently empty).
How would I do that quickly and efficiently?
I found this: Copy a table from one database to another in Postgres
but I'm struggling with the syntax for the heroku end, i.e. how do I connect to both at the same time? Which database am I connecting to originally?
In that answer, you are originally connected to the database "source_db" or "my_db" (depending on which line in the answer you are looking at). Presumably that database is on the instance running locally on port 5432, unless unshown environment variables (or non-default compilation) have changed that. And the destination database is named "target_db", running in the same instance.
The pg_dump and psql are independent commands and each takes all the connection options that they would take if run in isolation. So you would probably want something like:
pg_dump -t table_to_copy source_db | psql target_db -h you.heroku.hostname_or_ip
A problem could arise if both commands prompt for a password; it might make a mess. Which password do you need to enter first? And whichever order you enter them in, will they be read correctly? If both need passwords, it is best to arrange for at least one of them to be supplied by ~/.pgpass.
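Each line of ~/.pgpass has the form hostname:port:database:username:password (an * is allowed as a wildcard in the first four fields), and the file must be readable only by you (chmod 600) or libpq will ignore it. An entry for the Heroku side of the pipe above would use the Heroku hostname, database name and credentials.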
I am a beginner with PostgreSQL.
I want to connect to another database from the query editor of Postgres - like the USE command of MySQL or MS SQL Server.
I found \c databasename by searching the Internet, but it runs only in psql. When I try it from the PostgreSQL query editor I get a syntax error.
I have to change the database by pgscripting. Does anyone know how to do it?
When you get a connection to PostgreSQL it is always to a particular database. To access a different database, you must get a new connection.
Using \c in psql closes the old connection and acquires a new one, using the specified database and/or credentials. You get a whole new back-end process and everything.
You must specify the database to use on connect; if you want to use psql for your script, you can use "\c name_database"
user_name=# CREATE DATABASE testdatabase;
user_name=# \c testdatabase
At this point you might see the following output
You are now connected to database "testdatabase" as user "user_name".
testdatabase=#
Notice how the prompt changes. Cheers, I have just been hunting for this too; there is too little information on PostgreSQL compared to MySQL and the rest, in my view.
In pgAdmin you can also use
SET search_path TO your_db_name;
The basic problem I faced while migrating from MySQL was that I assumed the term database meant the same thing in PostgreSQL, but it does not. So if we switch the 'database' from our application or pgAdmin, the result is not what we would expect.
In my case, we have a separate schema (using PostgreSQL terminology here) for each customer, plus a separate admin schema. So in the application, I have to switch between schemas.
For this, we can use the SET search_path command. This switches the current schema to the specified schema for the current session.
example:
SET search_path = different_schema_name;
This changes the current schema to the specified schema for the session. To change it permanently, we have to make changes in the postgresql.conf file.
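As a sketch of an alternative to editing postgresql.conf, the default search_path can also be stored per database or per role (the names below are placeholders):

ALTER DATABASE your_db_name SET search_path = different_schema_name;
ALTER ROLE your_user SET search_path = different_schema_name;

New sessions for that database or role then start with the given search_path, with no server restart needed.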
Use this command when first connecting with psql:
psql <databaseName> <usernamePostgresql>
Then, inside the session, set the schema:
set search_path = 'schema name here';
While connecting to Postgres, you have to pick a database to connect to. If you have nothing else, you can use 'postgres' as the default.
You can use DBeaver to connect to Postgres; the UI is good.
PgAdmin 4, GUI Tool: Switching between databases
In the PgAdmin Browser on the left-hand side, right-click on the database you want to switch to.
Select the Query Tool from the drop-down menu (or any other option that you need; I will stick with the Query Tool for now).
You will see the Query Tool in the PgAdmin window, and at the top you will see the active database and the role name.
Now you can write queries against the chosen database.
You can open multiple Query Tools for multiple databases, and work with them as you do with your graphical text editor.
In order to be sure that you are querying the proper database, issue the following query:
SELECT session_user, current_database();