Jobs are not shown in pgAdmin 4 - postgresql

I have a problem with pgAdmin 4: jobs are not displayed even though I am a superuser.
Jobs give you the opportunity to link tasks to a PostgreSQL database. Unlike triggers, which run only in response to actions on the database, jobs can run on a schedule, for example at a given point in time.
With the pgAgent addon from Stack Builder this should be possible in pgAdmin.
Does anyone have a solution to my misery?
I tried creating a new superuser, reconnecting to the server, and looking through the properties in pgAdmin.
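One thing worth verifying as a first step (an assumption here, since the thread gives no answer): pgAdmin only shows the pgAgent Jobs node when the pgagent extension exists in the maintenance database you connect through, which is where Stack Builder normally installs it. A quick check from the command line might look like this:

    # Check whether the pgagent extension exists in the maintenance database.
    psql -U postgres -d postgres \
         -c "SELECT extname, extversion FROM pg_extension WHERE extname = 'pgagent';"
    # If no row comes back, creating the extension should make the node appear:
    # psql -U postgres -d postgres -c "CREATE EXTENSION pgagent;"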

Related

Advice for Backup

I have a cron job that runs on a stateless server. In this cron job, I am trying to take a snapshot of my Postgres GCP Cloud SQL db (PRODUCTION_DATABASE), save it to S3, and then upload it to my staging, qa-1, and dev databases. The problem is that one table, call it LARGE_TABLE, needs to be shrunk because its size is growing rapidly, causing problems and exceeding timeouts. Does anyone have any advice on how to get this done?
I tried running cloud_sql_proxy to run pg_dump, but had no luck with that method. Is there a way I can truncate one table and make a backup?
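One possible approach, sketched here under the assumption that LARGE_TABLE's rows can be omitted from the copies entirely: pg_dump's --exclude-table-data flag keeps the table definition but skips its contents, which keeps the dump small. Connection details and the table name below are placeholders.

    # Dump PRODUCTION_DATABASE through cloud_sql_proxy (assumed to be
    # listening on 127.0.0.1:5432), keeping LARGE_TABLE's schema but
    # skipping its rows.
    pg_dump -h 127.0.0.1 -p 5432 -U backup_user \
        --format=custom \
        --exclude-table-data='public.large_table' \
        -f production.dump PRODUCTION_DATABASE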

PGAdmin restore remote database [duplicate]

This question already has answers here:
Export and import table dump (.sql) using pgAdmin
Let me first state that I am not a DBA guy, but I do have a question regarding restoring remote databases using pgAdmin.
I have this pgAdmin tool (v4.27) running in a Docker container, and I use this portal to maintain two separate Postgres databases, both also running in Docker containers. I installed pgAgent in both database containers and run scheduled daily backups, defined via pgAdmin and stored in the container of each corresponding database. So far so good.
Now I want to restore one of these databases from the latest daily backup file (*.sql), but the Restore dialog of pgAdmin only looks for files stored locally (in the pgAdmin container).
Whatever I tried or searched for on the internet, it seems it is not possible to show a list of remote backup files in pgAdmin or to manually run a remote SQL file. Is this even possible in pgAdmin? Running psql in the query editor is not possible (duh ...), and since I cannot reach the remote SQL restore file, I have no clue how to run this code within pgAdmin against the corresponding remote database container.
The only solution I can think of so far is scheduling a restore job that has no calendar and is triggered manually when needed, but that is not the prettiest solution.
Am I missing something, did I overlook the right documentation, or have I created a silly, unmaintainable solution?
Thanks in advance for thinking along and kind regards,
Aad Dijksman
You cannot restore a plain format dump (an SQL script) with pgAdmin. You will have to use psql, the command line client.
COPY statements and data are mixed in such a dump, and that would make pgAdmin choke.
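For reference, a minimal sketch of such a restore, assuming the dump is reachable from wherever psql runs; the host name, database name, and path are placeholders:

    # Stream the plain-format SQL dump into the target database with psql.
    psql -h db-host -p 5432 -U postgres -d mydb -f /backups/daily.sql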
The solution by @Laurenz Albe points out that it is best to use the command-line psql here, and that would be my first go-to.
However, if for whatever reason you don't have access to the command line and can only connect to this database via pgAdmin, there is another solution, which you can find here:
Export and import table dump (.sql) using pgAdmin
I recommend looking at the solution by Tomas Greif.

pgagent - not running jobs - pgpass file is correct - postgresql

I have pgAgent installed on my Debian OS, along with PostgreSQL 9.4.
I have checked the .pgpass file, as this seems to be the most common cause of a job not running. It contains an entry of the form

    hostname:5432:*:postgres:xxxx

for both the local and the remote host. The database I'm trying to set up a job for is on a remote host.
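As a hedged sanity check (user and host names below are placeholders): .pgpass is only honored if it belongs to the OS user that pgAgent runs as and has strict permissions, so it can be worth testing a prompt-free connection as that user:

    # .pgpass must be mode 0600, or libpq silently ignores it.
    sudo -u postgres chmod 0600 ~postgres/.pgpass
    # Test that the pgAgent OS user can reach the remote host without a
    # password prompt (-w fails instead of prompting).
    sudo -u postgres psql -w -h remote-host -p 5432 -U postgres -d postgres -c 'SELECT 1;'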
I made sure it was enabled. It's just a simple INSERT script that should repeat every 5 minutes.
No errors are being triggered that I can find. Any ideas of what would cause the job not to run at all - even when selecting 'run now'?
Check the postgres db, the pgAgent catalog, and pga_jobsteplog.
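A minimal sketch of that check, assuming pgAgent's catalog lives in the postgres maintenance database:

    # Show the most recent job step runs and their status and output.
    # A status of 'f' means the step failed.
    psql -U postgres -d postgres -c "
      SELECT jslstart, jslstatus, jslresult, jsloutput
      FROM pgagent.pga_jobsteplog
      ORDER BY jslstart DESC
      LIMIT 10;"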
IDK about Linux, but I had a similar problem on Windows where the job wouldn't run and didn't raise any error, even after doing RUN NOW. The only clue I could find was that if I clicked on the job and then on Statistics, I could see that it had run a shit ton of times, and every time its status was F.
The reason for this failure is that pgAgent couldn't connect to the main PostgreSQL database.
The pgAgent service wasn't running at all (on Windows you can see this under Services in Task Manager).
Forcing the service to run produced a failure, which can be viewed in the Event Viewer on Windows.
To solve this issue, first try putting the pgpass.txt file in the environment variable (if it is not picked up automatically). If that doesn't work, what I did was uninstall and delete all folders belonging to Postgres, pgAgent, and pgAdmin, clear out all temp files, clear out the registry entries created by Postgres, pgAgent, and pgAdmin, remove them from the environment variables, and then reinstall. After that it worked normally :)

ALTER command takes too much time in AWS RDS

I have one PostgreSQL instance running as an AWS RDS instance.
I logged in as the admin user in the terminal and tried to add a column to a table owned by admin.
The table only has 23 rows, and it's a dev machine, so there are no connections to it apart from mine. Yet the ALTER takes forever to respond, and I do not know what is going on.
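One common cause worth ruling out (a guess, since nothing in the question confirms it): ALTER TABLE needs an ACCESS EXCLUSIVE lock, so a single forgotten open transaction touching the table will make even a 23-row ALTER hang indefinitely. Checking from a second session might look like this; the endpoint and database names are placeholders:

    # List sessions and what they are waiting on; look for an old
    # 'idle in transaction' session holding a lock on the table.
    psql -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -U admin -d devdb -c "
      SELECT pid, state, wait_event_type, wait_event, query_start, query
      FROM pg_stat_activity
      ORDER BY query_start;"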

Postgres: how to start a procedure right after database start?

I have dozens of unlogged tables, and the docs say that an unlogged table is automatically truncated after a crash or unclean shutdown.
Based on that, I need to check some tables after the database starts to see if they are "empty" and do something about it.
So in short, I need to execute a procedure right after the database is started.
What is the best way to do it?
PS: I'm running Postgres 9.1 on an Ubuntu 12.04 server.
There is no such feature available (at the time of writing, the latest version was PostgreSQL 9.2). Your only options are:
Start a script from the PostgreSQL init script that polls the database and, when the DB is ready, locks the tables and populates them;
Modify the startup script to use pg_ctl start -w and invoke your script as soon as pg_ctl returns; this has the same race condition but avoids the need to poll;
Teach your application to run a test whenever it opens a new pooled connection to detect this condition, lock the tables, and populate them; or
Don't use unlogged tables for this task if your application can't cope with them being empty when it opens a new connection.
There's been discussion of connect-time hooks on pgsql-hackers but no viable implementation has been posted and merged.
It's possible you could do something like this with PostgreSQL bgworkers, but it'd be a LOT harder than simply polling the DB from a script.
Postgres now has pg_isready for determining if the database is ready.
https://www.postgresql.org/docs/11/app-pg-isready.html
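A minimal sketch of the polling approach using pg_isready, assuming a local server; the connection details are placeholders, and check_unlogged_tables() is a hypothetical function you would define yourself to repopulate the truncated tables:

    #!/bin/bash
    # Wait until the server accepts connections, then run the check.
    until pg_isready -h localhost -p 5432 -q; do
        sleep 1
    done
    psql -h localhost -p 5432 -U postgres -d mydb \
         -c 'SELECT check_unlogged_tables();'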