I'm using pgcli to execute a transaction on a PostgreSQL database. I've come to the point where I'm almost ready to commit, and I now want to review the queries I've run. Is there a way to do this, either with pgcli or psql? I've looked at the PostgreSQL docs and the pgcli docs but haven't found anything.
I've seen that DBeaver records queries made during a transaction, but I'd like to see if it's possible using command-line tools.
The best solution I have so far is to hit the up arrow and page through recent commands until I see my last BEGIN statement, then page back down until I reach the most recent query.
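Another thing that might work (untested on my side) is psql's \s meta-command, which prints the command-line history and can also write it to a file; the file path below is just an example. I haven't found a pgcli equivalent, though pgcli does keep its own history file on disk.

-- inside psql: print the command-line history
\s
-- or write it out to a file for later review (example path)
\s /tmp/current_session.sql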
Recently I keep getting "org.postgresql.util.PSQLException: FATAL: sorry, too many clients already" in pgAdmin 4 (at the same time my application hangs / becomes unresponsive). After reading around, including another thread here about using pg_stat_activity to troubleshoot this issue, I still couldn't find the exact query that perhaps didn't close the connection properly. Here is the screenshot after I extracted the data from pg_stat_activity. I also combined it with pg_stat_statements but was still unable to find it.
Is there any way I can get the actual query, so that I can find the connection that didn't close properly? Sorry, I'm not a backend dev, just a junior helping to troubleshoot a server issue.
What I've already looked at from other threads:
Surveyed a few PostgreSQL monitoring tools.
CREATE EXTENSION pg_stat_statements;
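For reference, this is roughly the query I used to pull that data from pg_stat_activity (a sketch; the state filter and ordering are just what seemed useful for spotting stale connections):

-- list idle client connections, oldest state change first, with the last statement each one ran
SELECT pid, usename, application_name, client_addr, state, state_change, query
FROM pg_stat_activity
WHERE state IN ('idle', 'idle in transaction')
ORDER BY state_change;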
My Cloud SQL MySQL 5.7.37 highly-available instance is stuck in a "Failover operation in progress. This may take a few minutes. While this operation is running, you may continue to view information about the instance" state. It is a fairly small database; it has been stuck like this for 5 hours, and failover is not available, so no DB queries can be executed, hence our system is currently down.
No commands can be executed on the DB since it is in an updating state; the error log is empty, and the operations log only contains this update and successful backups.
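The same operations can also be listed from the CLI, for anyone who wants the raw status (a sketch; my-instance is a placeholder for the instance name, and OPERATION_ID is the NAME shown in the list output):

# list recent operations on the instance and their status
gcloud sql operations list --instance=my-instance --limit=10
# show the full details of one operation
gcloud sql operations describe OPERATION_ID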
Does anyone have any suggestions? I am not paying for Google Support, so I can't get support directly from them (which I think is terrible, since this is a fully managed service).
Best,
Carl-Fredrik
I've got a database on Google Cloud SQL that is used by our application running on Kubernetes in GKE.
The MySQL instance is running 5.6, and I need to upgrade it to 5.7, so I tried using the new migration jobs.
I've set up the connection profile and all the required permissions for the source DB, then followed the instructions to make a continuous migration.
The job says it's running, migrating the ~450GB database. After about a day, it's still running, the storage used seems to have stopped growing, and the replication delay is at 0. The source database is not currently in use (that's why I'm using it to try this out before doing the same with a more important DB).
According to this, if the dump phase is done, I should be able to promote the instance, but the promote button remains greyed out, and there's no way to check the running state (it only says "running", and I don't see any way to check if it's dumping, on CDC, or anything else).
The documentation seems a bit lacking, and I couldn't find anything by googling around. Has anyone been using this?
In short, my questions are:
Why can't I promote the instance?
And how can I check what phase the migration is in?
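(A sketch of checking the job from the CLI, in case that surfaces more detail; I'm assuming the gcloud database-migration command group here, my-job and us-central1 are placeholders, and the exact commands/flags may differ by gcloud version.)

# show the migration job's state and metadata
gcloud database-migration migration-jobs describe my-job --region=us-central1
# promote from the CLI, once promotion is allowed
gcloud database-migration migration-jobs promote my-job --region=us-central1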
Here's a screencap of my job:
link because SO doesn't let me embed images yet
Thanks.
P.S.: the tag that the documentation says should be used on Stack Overflow is google-cloud-database-migration-service, which is too long and Stack Overflow doesn't allow, so I used google-cloud-sql instead :/
I am seeing an issue like this, but possibly more frustrating. After a week for a 2TB database, storage resets to near-zero and the full dump restarts, without any errors or indication of what happened.
So I have an app taking advantage of Heroku Connect to sync data between platforms.
I need to find a way to detect when an update has been made by Salesforce (or at least, when the sync has been executed). I'm using Sequelize in Node.js, but of course the hooks don't work, since Heroku Connect works directly on the DB and doesn't use the ORM.
So I'm wondering what are my options here.
The solutions that come to my mind (likely there are more):
Check out the Heroku Connect system tables, like _trigger_log. This table gives you an exact log of the actions HC took (updates/inserts/deletes), with information about the record. Yes, you would need to poll it :)
Postgres brings its own queue system with LISTEN and NOTIFY. You can write your own database trigger that reacts to changes in the salesforce tables, and have a listening/worker process on the LISTEN channel in PostgreSQL.
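A minimal sketch of that trigger approach, assuming the usual Heroku Connect salesforce schema and an account table (all names here are illustrative; EXECUTE FUNCTION needs PostgreSQL 11+, older versions use EXECUTE PROCEDURE):

CREATE OR REPLACE FUNCTION notify_sf_change() RETURNS trigger AS $$
BEGIN
  -- publish the Salesforce id of the changed row on the sf_change channel
  PERFORM pg_notify('sf_change', NEW.sfid);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER account_sf_change
AFTER INSERT OR UPDATE ON salesforce.account
FOR EACH ROW EXECUTE FUNCTION notify_sf_change();

The Node worker then opens a dedicated connection, runs LISTEN sf_change; and reacts to each notification (node-postgres exposes these via client.on('notification', ...)), so no ORM hooks are needed.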
I'm using MongoDB as my database, and as a first-time back-end developer the ease with which I can delete an entire database/collection really bothers me.
Simply typing db.collection.remove() removes all records from that collection!
I know that an effective backup strategy should render this a non-issue, but I occasionally do run .remove() on some collections, and I'd hate to type in the wrong collection name by accident and (a) have to go through a backup restore, and (b) lose whatever data I had gathered between the backup and the restore, especially as my app gathers a lot of user data.
Is there any 'safeguard' I can set up my database to use, even if it's just a warning/confirmation that says
"Yo, are you sure you want to remove everything from <collectionname>? Choose: Yes/No"
User roles won't fix your problem. If your account has permissions to delete one user, you could accidentally delete them all. If your account has permissions to update an attribute for one user, you could accidentally update all of your users.
There's a simple fix for this, however.
Step 0: Back up your database. And test your backups regularly. And make sure you get alerted if the backup did not run, or errored. Replica sets are not backups. I know this is obvious, but evidently it's not obvious to everybody.
Step 1: Write a web admin GUI interface for your database. This will only take a day or two -- and it should be simple enough that a secretary or intern could use it without fear for your data. (If you think this will take a long time, find a framework with more bells and whistles. Your admin console doesn't even need to be written in the same language as your app.)
Step 2: Data migrations (maintenance transformations of your database) should always be run from scripts checked into source control and tested on non-prod beforehand. The script could be as simple as mongo -e "foo.update(blah)", but you should run it as a script to avoid cut-n-paste errors. Ideally, you would even have a checklist for all migrations. (Check that you have a recent backup. Check the database log and system load beforehand. Write a before and after query that will tell you if the migration was successful...)
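For example, a sketch of such a checked-in script, with made-up collection and field names, run as mongo mydb add_default_status.js rather than pasted into a shell:

// count the documents the migration should touch, before running it
var before = db.users.count({ status: { $exists: false } });
print("missing status before: " + before);

// set a default status on every matching document
db.users.update(
  { status: { $exists: false } },
  { $set: { status: "active" } },
  { multi: true }
);

// the same query afterwards should come back empty
var after = db.users.count({ status: { $exists: false } });
print("missing status after: " + after);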
Step 3: You now no longer need to use the production Mongo console. So don't. It's a useful tool for development, but that's only needed on local development databases.
The above-mentioned roles might be useful for read-only queries, but you can already do that against a non-master replica set member.
tl;dr: You can go pretty far using cowboy admin techniques, but eventually you're going to figure out that it's better (and not much more work) to automate everything.
There is nothing you can do in the current version to provide this functionality.
In a future version, when user-defined roles are available, you could define a role which allows insert() and update() but not remove() or drop() etc., and therefore make yourself log in as a different, higher-privileged user for destructive work; but that's not available in the current (2.4) version.
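For what it's worth, here's a sketch of what such a role looks like once user-defined roles did land (MongoDB 2.6+); the database, role, and user names are illustrative:

// run against the target database as an administrator
db.createRole({
  role: "writeNoRemove",
  privileges: [{
    resource: { db: "mydb", collection: "" },   // every collection in mydb
    actions: ["find", "insert", "update"]       // deliberately no "remove" or "dropCollection"
  }],
  roles: []
});

// day-to-day account that can insert/update but not remove
db.createUser({
  user: "app_writer",
  pwd: "change-me",
  roles: [{ role: "writeNoRemove", db: "mydb" }]
});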