Stop script in MySQL Workbench after error - mysql-workbench

I just ran a script in MySQL Workbench 6.3 Community.
I've set it to 'Stop Script Execution on Errors'.
There were two queries that gave errors.
The errors say 'Error Code: 1175. You are using safe update mode and you tried to update a table without a WHERE that uses a KEY column. To disable safe mode, toggle the option in Preferences -> SQL Editor and reconnect.'
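For illustration (this is not my actual query; the table is made up), the kind of statement that triggers Error 1175 filters on a non-key column, and SET SQL_SAFE_UPDATES = 0 is the per-session equivalent of the Preferences toggle:
-- 'status' is not a key column, so safe update mode rejects this with Error 1175
UPDATE orders SET status = 'archived' WHERE status = 'open';
-- per-session alternative to the Preferences -> SQL Editor toggle
SET SQL_SAFE_UPDATES = 0;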
Ok, so I missed that. No big deal.
But after that, the script simply continued as if there were no error at all, or as if I never set 'Stop Script Execution on Errors'.
As a consequence I've lost some data. (This was a test database, so no worries there.)
The script just continues after the errors.
Any idea how to make it stop executing the script?

A colleague pointed me to Edit -> Preferences -> SQL Editor -> SQL Execution and the option 'Continue SQL script execution on errors (by default)'.
Unchecking it worked after restarting Workbench.

Related

Start transaction automatically on psql login

I'm wondering if it's possible to have psql start a transaction automatically when I open a psql session on the command line. I know I can start a transaction manually using 'BEGIN;', but I'd like that to happen without me typing 'BEGIN;' myself every time.
Thanks!
I did a Google search but that didn't come up with any good results.
You cannot have psql start a transaction when you log in, but you can have it start a transaction with the first SQL statement you enter. For that, put a .psqlrc file into your home directory and give it the following content:
\set AUTOCOMMIT off
Note that this is a very bad idea (in my personal opinion). You run the risk of inadvertently starting a transaction that holds locks and blocks the progress of autovacuum. I have seen more than one PostgreSQL instance that suffered serious damage because of administrators who disabled autocommit in their interactive clients and kept transactions open. At the very least, add the following to your .psqlrc:
SET idle_in_transaction_session_timeout = '1min';
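To make the effect concrete (made-up table), every statement you then type runs inside an implicitly opened transaction until you end it yourself:
-- with AUTOCOMMIT off, psql issues an implicit BEGIN before the first statement
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- locks are held and nothing is visible to other sessions until you finish explicitly
COMMIT;   -- or ROLLBACK;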

Datagrip: Create postgres index without waiting for execution

Is there a way to submit a command in DataGrip to a database without keeping the connection open / asynchronously? I'm attempting to create indexes concurrently, but I'd also like to be able to close my laptop.
My DataGrip workflow:
Select a column in a database, click 'modify column', and eventually run code such as:
create index concurrently batchdisbursements_updated_index
on de_testing.batchdisbursements (updated);
However, these run as background tasks and are cancelled if I exit DataGrip.
What if you close your laptop without exiting DataGrip? DataGrip is probably actively sending a cancellation message to PostgreSQL when you exit it. If you just close the laptop, I doubt it will do that. In that case, PostgreSQL won't notice the client has gone away until it tries to send a message, at which point the index creation should already be done and committed.
But this is a fragile plan. I would ssh to the server, run screen (or one of the fancier variants), run psql in that, and create the indexes from there.
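A rough sketch of that workflow, with made-up host, session and database names:
ssh db-server
screen -S create-index     # detachable terminal session
psql -d mydb

create index concurrently batchdisbursements_updated_index
    on de_testing.batchdisbursements (updated);

-- detach with Ctrl-a d and close the laptop; reattach later with 'screen -r create-index'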

Getting Warning Message when running SQL script on IBM DB2

I'm using IBM DB2 on Cloud and selected Run SQL to create a script. After creating the script, I get the following warning message and I'm not able to see any results:
Commands Running:
SQL commands are currently running. You can wait or cancel the current commands to run commands again.
You should only get that message when a query is running in the console from the Run SQL page.
If you can't see a query running in the console which you can cancel, then that might indicate an issue with the console (maybe clear your cookies and re-login, or raise a support case if you are on a paid plan).
You should also be able to open a new tab and run a 2nd query.

using executable in Liquibase changesets

I am using the execute command tag in my Liquibase changesets, and this in turn is configured to run the SQL files in Oracle Instant Client SQL*Plus.
When I run a Liquibase update on my changelog XML, everything works fine and the update is successful. I can see the changes to the table as well.
But when I try to make the update fail by introducing a syntax error into the SQL file referenced in the changeset, Liquibase still reports the update as successful. I expected it to throw SQL errors; the same SQL throws a syntax error when run separately in Toad. What should I do to get the error reported?
Datical has created a custom Liquibase change tag that executes SQL using the sqlplus command line client. It was surprisingly more complicated than you might think.
Some of the issues we had to deal with:
We had to ensure that the SQL files always had certain statements in place, and never had certain other statements. This included things like setting the schema, making sure the only spool commands were ones we knew about, making sure the script had an 'EXIT' command, and making sure that a SQL error caused an exit code to be returned (sketched below).
The sqlplus executable does not return an exit code (i.e. a non-zero exit code from the native process) in all cases; instead it will write errors to an error table in the database. The table where sqlplus writes errors is called sperrorlog, and this may be what you will need to look into.
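As a rough sketch (not our exact wrapper), the SQL*Plus directives that cover the exit-code and EXIT requirements look something like this, with the changeset's real statements in the middle:
WHENEVER SQLERROR EXIT SQL.SQLCODE   -- make sqlplus exit non-zero when a SQL error occurs
-- ... the changeset's actual statements go here ...
EXIT                                 -- always terminate the sqlplus session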
I can't really go into all the details, but just know that what you are attempting to do is neither simple nor straightforward.

Program cannot reconnect to Firebird after abnormal termination

What can be done to prevent having to restart the PC after a program (C++Builder) terminates abnormally without closing its Firebird 2 database?
What I am looking for: I would like to be able to just restart the program without any other intervention. (I could have the user call a batch file executing some cleanup or add some lines of code to the program to disconnect everything.)
If your database is Firebird 2.1+, there are monitoring tables that show the active connections, and SYSDBA can manually delete any left-over connections.
If you look in your release notes, the syntax details should be there.
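For reference, in recent Firebird versions this boils down to a plain DELETE against the monitoring tables; something along these lines, run as SYSDBA, disconnects every attachment except your own (check the release notes for the exact version that supports it):
-- deleting a row from MON$ATTACHMENTS terminates that attachment
DELETE FROM MON$ATTACHMENTS
WHERE MON$ATTACHMENT_ID <> CURRENT_CONNECTION;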