I would like to run a small query every time Postgres is restarted. Is this possible?
I have found that it is possible to do that every time psql is launched, using .psqlrc, but that does not address my need.
Thanks.
For those coming here looking for a solution: as of today (2020, PostgreSQL 12) it is not possible to configure Postgres to always run a given script/query on startup.
You can of course use your own launch script, but it seems you cannot prevent other people with the right permissions from restarting the server their own way and bypassing it.
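For illustration only, such a launch script could be a small wrapper like the one below. The service name, database and query are made-up placeholders, not something PostgreSQL provides:

#!/bin/sh
# Start PostgreSQL through the init system, then run a one-off query.
# "postgresql", "mydb" and the INSERT below are placeholders.
systemctl start postgresql || exit 1

# wait until the server accepts connections before firing the query
until pg_isready -q; do
    sleep 1
done

psql -U postgres -d mydb -c "INSERT INTO restart_log (restarted_at) VALUES (now());"

As noted above, this only helps if everyone restarts the server through the script.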
Let me first state that I am not a DBA guy, but I do have a question regarding restoring remote databases using pgAdmin.
I have this pgAdmin tool (v4.27) running in a Docker container, and I use this portal to maintain two separate Postgres databases, both running in Docker containers as well. I installed pgAgent in both database containers and run scheduled daily backups, defined via pgAdmin and stored in the container of each corresponding database. So far so good.
Now I want to restore one of these databases using the latest daily backup file (*.sql), but the Restore dialog of pgAdmin only looks for files stored locally (in the pgAdmin container).
Whatever I tried or searched for on the internet, it seems it is not possible to show a list of remote backup files in pgAdmin or to run a remote SQL file manually. Is this even possible in pgAdmin? Running psql in the Query Editor is not possible (duh ...), and since I cannot reach the remote SQL restore file, I have no clue how to run this code within pgAdmin against the corresponding remote database container.
The only solution I can think of so far is scheduling a restore job with no calendar entry, to be triggered manually when needed, but that is not the prettiest solution.
Am I missing something, did I overlook the right documentation, or have I created a silly, unmaintainable setup?
Thanks in advance for thinking along and kind regards,
Aad Dijksman
You cannot restore a plain format dump (an SQL script) with pgAdmin. You will have to use psql, the command line client.
COPY statements and data are mixed in such a dump, and that would make pgAdmin choke.
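For example, something along these lines from the command line (host, user, database and file name are placeholders):

# restore the plain-format dump with psql; the connection details
# and file name below are placeholders
psql -h db-container-host -U postgres -d mydb -f daily_backup.sql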
The solution by Laurenz Albe points out that it is best to use the command-line psql here, and that would be my first go-to.
However, if for whatever reason you don't have access to the command line and are only able to connect to this database via pgadmin, there is another solution which you can find here:
Export and import table dump (.sql) using pgAdmin
I recommend looking at the solution by Tomas Greif.
There is a Postgres 9 server with a database AAA that is used by my application. I use pgAdmin 4 to manage it manually.
I would like to check what queries are executed by this application on database AAA in real time.
I researched monitoring options in pgAdmin, in vain.
Is it possible to do that using just pgAdmin 4? Or is it necessary to use another tool (if yes, what is the name of this tool)?
When I point pgAdmin4 at a 9.6 server, I see a dashboard by default which shows every session (done by querying pg_stat_activity). You can then drill down in a session to see the query. If a query lasts for less time than the monitoring interval, then you might not see it if the sample is taken at the wrong time.
If that isn't acceptable, then you should probably use a logging solution (like log_statement = 'all') or maybe the pg_stat_statements extension, rather than sample-based monitoring. pg_stat_statements doesn't integrate with the dashboard in pgAdmin4, but you can select from the view in an SQL window just like you can run any other SQL. I don't believe pgAdmin4 offers a built-in way to monitor the database server's log files, the way pgAdmin3 did.
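To give a rough idea (the database name AAA is taken from the question; the column names assume a 9.6 server), sampling the active sessions yourself and setting up pg_stat_statements could look like this:

# sample the queries currently running (roughly what the dashboard does)
psql -d AAA -c "SELECT pid, state, query FROM pg_stat_activity WHERE state <> 'idle';"

# pg_stat_statements needs shared_preload_libraries = 'pg_stat_statements'
# in postgresql.conf plus a server restart; then create the extension and query it
psql -d AAA -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"
psql -d AAA -c "SELECT query, calls, total_time FROM pg_stat_statements ORDER BY total_time DESC LIMIT 10;"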
I am not too familiar with Bucardo and Postgres, so I was hoping to get some feedback / how-to from this question.
We have 3 computers running in parallel in various parts of the building. When one updates something, the others show the data, etc., thanks to Bucardo syncs running between the computers.
However, here is the requirement. At any time one computer can be brought offline and reimaged. When this computer comes back online, the operator should be able to hit replicate and get the data from the master computer.
What is the best way to accomplish this ?
My thought was to run a pg_dump on the master computer and run pg_restore on the reimaged computer.
Or do you think setting Bucardo onetimecopy=2 is the best course of action?
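Roughly, what I had in mind for the dump/restore route is something like this (host names and the database name are just placeholders):

# dump from the master and restore onto the reimaged machine;
# host names and the database name below are placeholders
pg_dump -h master-host -U postgres -Fc -f mydb.dump mydb
pg_restore -h reimaged-host -U postgres -d mydb --clean mydb.dump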
I have a collection with about 3,000,000 entries that I need to reindex. This whole thing began when I tried to add a 2d index. To do this, I created an ssh tunnel, opened the mongo shell and tried to use ensureIndex. I'm in a place with a somewhat unreliable internet connection, and an hour in it ended up breaking the pipe. I then tunneled back in, opened the mongo shell and tried to look at the number of indexes using getIndexes; the new index I created showed up, but I wasn't confident it had finished, so I decided to use reIndex. In retrospect, this was stupid. The pipe broke again. Now when I open the shell and try to issue getIndexes, the shell doesn't respond.
So what should I do? Do I need to repair my database? Can I issue reIndex when I have a more reliable internet connection? Is there a way to issue reIndex without keeping the shell open, but without doing it in the background and having it take eons? (I'll check the mongod shell options to see if I can find anything, then check the node.js mongo api so I can try running something as a service on server)
And also, if I end up running reIndex as a service on the server, is there any way to check if it's working? The most frustrating part of this right now is I have no idea if my database is ok, if reIndex is still running, etc. Any help would be much appreciated. Thanks.
You don't have a problem. Mongo keeps running commands on the server and only stops them if you explicitly kill the operation (db.killOp()).
You do not need to wait for the index operation to finish!
Regarding the connection problems, try using the screen command.
It enables you to create a "persistent" session - persistent not in the sense of disk persistence, but in the sense of surviving connection loss.
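A minimal example of that workflow (the session name is arbitrary):

screen -S reindex      # start a named screen session on the server
mongo                  # inside it, open the shell and run your reIndex

# detach with Ctrl-a d; everything inside the session keeps running
# even if your SSH connection drops. Reattach later with:
screen -r reindex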
I want to make a script that will run postgres in-memory without durability.
I read this page: http://www.postgresql.org/docs/9.1/static/non-durability.html
But I didn't understand how I can set these parameters in a script. Could you please help me?
Thanks for the help!
Most of those parameters, like fsync, can only be set in postgresql.conf. Changes are applied by re-starting PostgreSQL. They apply to the whole database cluster - all the databases in that PostgreSQL install. That's because the databases all share a single postmaster, write-ahead log, and set of shared system tables.
The only parameter listed there that you can set at the SQL level in a script is synchronous_commit. By setting synchronous_commit = 'off' you can say "it's OK to lose this transaction if the database crashes in the next few seconds, just make sure it still applies atomically".
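As a small sketch of what that looks like in practice (the file, table and database names here are made up):

# a sketch; the script, table and database names are made up
cat > bulk_load.sql <<'EOF'
SET synchronous_commit = off;  -- session-level: this data may be lost on a crash
INSERT INTO test_data SELECT generate_series(1, 100000);
EOF
psql -d mydb -f bulk_load.sql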
I wrote more on this topic in a previous answer, Optimise PostgreSQL for fast testing.
If you want to set the other parameters from a script you can do so, but you have to do it by opening and modifying postgresql.conf from the script and then restarting PostgreSQL. Text-processing tools like sed make this kind of job easier.
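A sketch of that approach, assuming a Debian-style layout for 9.1 (the path may differ on your system):

# flip fsync off in postgresql.conf and restart; the file path matches
# Debian's default layout for 9.1 and may differ elsewhere
sudo sed -i "s/^#\?fsync = .*/fsync = off/" /etc/postgresql/9.1/main/postgresql.conf
sudo service postgresql restart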
If you're running a Debian-based Linux distro, you can just do something like:
pg_createcluster -d /dev/shm/mypgcluster 8.4 ramcluster
to create a RAM-based cluster. Note that you'll have to do:
pg_dropcluster 8.4 ramcluster
and recreate it on reboot etc.