How to undo ALTER TABLE using sqlplus (Oracle 10g Express)?

rollback;
doesn't seem to undo alter table changes.
Background:
I'm generating some .sql scripts (based on parsed Hibernate scripts) which are trashing my tables. Importing the full database for testing takes up to 30 minutes (and slows down my machine), and as much as I enjoy taking breaks, I'd prefer to just undo everything with a command such as ROLLBACK and try again.
btw this is Oracle 10g Express Edition Release 10.2.0.1.0
Is this even possible?

With the Express Edition, I'm not sure this is possible. You cannot roll back a DDL operation like ALTER TABLE because DDL is implicitly committed.
Oracle does have the option to create restore points that you can then flash the entire database back to relatively quickly. That will undo the effects of all committed transactions (DML and DDL) between the creation of the restore point and the point where you issue the FLASHBACK command. Here is an example of creating and flashing back to a restore point, and here's another that does the flashback for the entire database. I'm just not sure that this functionality is available in the Express Edition.
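A minimal sketch of the restore point approach (assuming the database runs in ARCHIVELOG mode with a flash recovery area configured and you can connect AS SYSDBA; the restore point name before_import is made up):
CREATE RESTORE POINT before_import GUARANTEE FLASHBACK DATABASE;
-- ... run the generated scripts that trash the tables ...
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT before_import;
ALTER DATABASE OPEN RESETLOGS;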

Oracle performs an implicit commit on any ALTER TABLE statement.
See this post:
Is it possible to roll back CREATE TABLE and ALTER TABLE statements in major SQL databases?
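To illustrate the implicit commit, a small hedged sketch (the table and column names are made up):
INSERT INTO t VALUES (1);
ALTER TABLE t ADD (c NUMBER);  -- implicitly commits the transaction, including the INSERT
ROLLBACK;                      -- no effect: everything is already committed
SELECT * FROM t;               -- the inserted row is still there, and so is the new column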

Related

Inserting Data manually into table in AWS Redshift, sql workbench

I am able to connect to Redshift from SQL Workbench and I am able to create a table, but when I try to insert values into the table it throws the error below.
Since I am using a temp schema and the connection shows the schema as public, is this still an issue even if my insert statement is
Insert into tempschema.temp_staging values
Postgres (and thus Redshift, which is based on an ancient version of Postgres) has a very strict transaction concept: either all statements in a transaction succeed, or none do.
As soon as one statement in your transaction fails, the whole transaction needs to be rolled back.
So all you need to do is to issue a ROLLBACK command and you can continue. There is no need to restart SQL Workbench/J.
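A minimal sketch of the recovery flow (the INSERT here is just a placeholder for whatever statement failed; the values are made up):
BEGIN;
INSERT INTO tempschema.temp_staging VALUES (1);  -- suppose this statement fails
-- further statements in this transaction are now rejected
ROLLBACK;                                        -- clears the aborted transaction
INSERT INTO tempschema.temp_staging VALUES (1);  -- retry after fixing the original error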
If you don't want to do that for every statement that throws an error, just enable autocommit in the connection profile:
7.3.5. Autocommit
This check box enables/disables the "auto commit" property for the connection. If autocommit is enabled, then each SQL statement is automatically committed on the DBMS. If this is disabled, any DML statement (UPDATE, INSERT, DELETE, ...) has to be committed in order to make the change permanent. Some DBMS require a commit for DDL statements (CREATE TABLE, ...) as well. Please refer to the documentation of your DBMS.
Link to manual
I am part of SQL Workbench/J support
It's just a temporarily acquired lock.
Disconnect the workbench from the data source.
Restart the workbench.
Reconnect to your data source.
You'll be able to resume from here.
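If you'd rather release the lock from SQL instead of restarting the client, a hedged alternative sketch (the process id 12345 is made up):
SELECT * FROM stv_locks;             -- see which sessions hold locks on the table
SELECT pg_terminate_backend(12345);  -- terminate the blocking session by its pid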

Slow insert and update commands during mysql to redshift replication

I am trying to build a replication server from MySQL to Redshift; for this, I am parsing the MySQL binlog. For the initial replication I take a dump of the MySQL table, convert it into a CSV file, upload it to S3 and then use the Redshift COPY command. This performs well.
After the initial replication, for the continuous sync, when I read the binlog the inserts and updates have to be run sequentially, and they are very slow.
Is there anything that can be done to increase the performance?
One possible solution I can think of is to wrap the statements in a transaction and send the transaction at once, to avoid multiple network calls. But that would not address the problem that single UPDATE and INSERT statements in Redshift run very slowly; a single UPDATE statement takes 6 seconds. Knowing the limitations of Redshift (it is a columnar database and single-row insertion is slow), what can be done to work around those limitations?
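A hedged sketch of the batching idea (the table name and values are made up); sending the whole block in one round trip amortizes the network overhead, though each statement is still a slow single-row operation on Redshift:
BEGIN;
UPDATE orders SET status = 'shipped' WHERE id = 1001;
INSERT INTO orders VALUES (1002, 'new', '2017-01-01');
DELETE FROM orders WHERE id = 990;
COMMIT;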
Edit 1:
Regarding DMS: I want to use Redshift as a warehousing solution that just replicates our MySQL continuously; I don't want to denormalise the data since I have 170+ tables in MySQL. During ongoing replication, DMS shows many errors multiple times a day and fails completely after a day or two, and it's very hard to decipher the DMS error logs. Also, when I drop and reload tables, it deletes the existing tables on Redshift, creates a new table and then starts inserting data, which causes downtime in my case. What I wanted was to create a new table, then switch the old one with the new one and delete the old table.
Here is what you need to do to get DMS to work
1) create and run a dms task with "migrate and ongoing replication" and "Drop tables on target"
2) this will probably fail, do not worry. "stop" the dms task.
3) on redshift make the following changes to the table (see the sketch after this list):
- change all dates and timestamps to varchar (because the options used by dms for redshift copy cannot cope with '00:00:00 00:00' dates that you get in mysql)
- change all bool to be varchar - due to a bug in dms.
4) on dms - modify the task to "Truncate" in "Target table preparation mode"
5) restart the dms task - full reload
now - the initial copy and ongoing binlog replication should work.
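A hedged sketch of step 3 (the table and column names are made up); Redshift generally does not let you change a column's type in place, so the add/copy/drop/rename sequence is one way to do it:
ALTER TABLE my_table ADD COLUMN created_at_str VARCHAR(32);
UPDATE my_table SET created_at_str = created_at::varchar;  -- optional: the reload in step 5 truncates the table anyway
ALTER TABLE my_table DROP COLUMN created_at;
ALTER TABLE my_table RENAME COLUMN created_at_str TO created_at;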
Make sure you are on the latest replication instance software version.
Make sure you have followed the instructions here exactly:
http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html
If your source is Aurora, also make sure you have set binlog_checksum to "none" (this is poorly documented).

Does postgresql have Database-level triggers?

I want to build a trigger in postgresql that will fire when the server starts.
In Oracle I can use
CREATE TRIGGER iii AFTER STARTUP ON DATABASE
No. Postgres triggers can only fire when a query is run against a table.
As a kludge, you might be able to find a table that's modified in a certain way at database startup (perhaps there's an INSERT into some system table?), but depending on this would be a hack.
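For contrast, a minimal sketch of what Postgres does support, a trigger tied to DML on a table (the some_table and audit_log tables are made up):
CREATE OR REPLACE FUNCTION log_insert() RETURNS trigger AS $$
BEGIN
  INSERT INTO audit_log(table_name, inserted_at) VALUES (TG_TABLE_NAME, now());
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER iii AFTER INSERT ON some_table
  FOR EACH ROW EXECUTE PROCEDURE log_insert();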

Sync transactions between databases in PostgreSQL using dblink

I would like to sync a set of tables between two databases, transparently and without application code changes. My idea is to create triggers on insert, update and delete in the source database tables that replicate the data to the destination database tables seamlessly using dblink.
The problem is that changes in the source tables are always made inside a transaction. The triggers automatically replicate the changes to the destination tables, but if the source transaction is rolled back, the changes to the destination tables are not.
Is there a way to automatically sync transaction begin and commit/rollback between the two databases? A trigger-like behavior would be ideal.
Yes, this has been possible for ages using Slony-I or other trigger-based replication.
Row updates are logged to special tables on the "master" side and replayed asynchronously on the "slave" side.
An external program/daemon is used to synchronise the changes.
See http://www.postgresql.org/docs/current/static/different-replication-solutions.html and http://wiki.postgresql.org/wiki/Replication,_Clustering,_and_Connection_Pooling#Replication for more information.
When 9.3 comes out, check out http://www.postgresql.org/docs/9.3/static/postgres-fdw.html. You can roll back transactions in foreign databases.
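A hedged sketch of the postgres_fdw approach (the server, user mapping, and table names are all made up, and a local orders table with matching columns is assumed); writes through the foreign table join the local transaction, so a ROLLBACK undoes both sides:
CREATE EXTENSION postgres_fdw;
CREATE SERVER dest_srv FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'dest-host', dbname 'dest_db');
CREATE USER MAPPING FOR CURRENT_USER SERVER dest_srv
  OPTIONS (user 'repl_user', password 'secret');
CREATE FOREIGN TABLE dest_orders (id int, status text)
  SERVER dest_srv OPTIONS (table_name 'orders');
BEGIN;
INSERT INTO orders VALUES (1, 'new');       -- local table
INSERT INTO dest_orders VALUES (1, 'new');  -- remote table via the FDW
ROLLBACK;                                   -- rolls back the local and the remote insert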

How to copy everything except data from one database to another?

In T-SQL (Microsoft SQL Server 2008), how can I make a new database that has the same schemas, tables, table columns, indexes, constraints, and foreign keys, but does not contain any data from the original database?
Note: making a full copy and then removing all data is not a solution in my case, since the database is quite big and such a full copy would take too much time.
See here for instructions: How To Script Out The Whole Database In SQL Server 2005 and SQL Server 2008
In SQL Server Management Studio, right-click the database and select "Script database as".
http://msdn.microsoft.com/en-us/library/ms178078.aspx
You can then use the generated script to create an empty database.
Edit: the OP did say 2008.
I use Liquibase for this purpose. Just point Liquibase at a different server and it will use your changelog to bring the second database up to date, schema-wise. It has the added benefit that the changelog file is stored in source control, so I can keep tagged versions of it, allowing me to restore a database to what a specific version of my app expects.