What options do I have for importing a MySQL table using the InnoDB engine, where the table already exists, without losing read access to the existing table? Generally I am referring to situations where ALTER TABLE is really slow, and I instead want to dump the table, modify the dump, then import it.
If I use mysqldump to make the dump, the generated dump drops the table before inserting, so read access is of course lost until the import is complete.
Can I simply change the tablename in the dump, import that, then when that's done, drop the old table and rename the new one? Is there a better way? Thanks in advance.
If your table has the same structure, I don't see any problem. You just need to skip the DROP TABLE statement. Almost all dumpers have an option to not include DROP TABLE/CREATE TABLE in the dumps. Personally I'd recommend Sypex Dumper.
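The rename-and-swap approach from the question could then look roughly like this; the database, table and file names below are placeholders, not anything from the original posts:

    # Dump the table without the DROP TABLE statement, rename it inside the dump,
    # then load it alongside the live table.
    mysqldump --skip-add-drop-table mydb mytable > mytable.sql
    sed -i 's/`mytable`/`mytable_new`/g' mytable.sql   # also edit the structure here if needed
    mysql mydb < mytable.sql

    # Swap the tables atomically, then drop the old copy.
    mysql mydb -e "RENAME TABLE mytable TO mytable_old, mytable_new TO mytable; DROP TABLE mytable_old;"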
If you want to change the table structure without locking the table, I think the best way is to use
pt-online-schema-change - ALTER tables without locking them
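A typical invocation is a single command; the database, table and column names here are placeholders:

    # Rebuilds the table in the background and swaps it in when done.
    pt-online-schema-change \
      --alter "ADD COLUMN new_col INT NOT NULL DEFAULT 0" \
      D=mydb,t=mytable \
      --execute        # run with --dry-run first to preview the work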
Use MySQL Enterprise Backup for this: http://www.mysql.com/products/enterprise/backup.html
I'm using the Oracle foreign data wrapper and would like to have local copies of some of my foreign tables. Is there another option than having materialized views and refreshing them manually?
Not really, unless you want to add functionality in Oracle:
If you add a trigger on the Oracle table that records all data modifications in another table, you could define a foreign table on that table. Then you can regularly run a function in PostgreSQL that takes the changes since you checked last time and applies them to a PostgreSQL table.
If you understand how “materialized view logs” work in Oracle (I don't, and I think the documentation doesn't say), you could define a foreign table on that log and use it like above. That might be cheaper.
Both of these ideas would still require you to regularly run something in PostgreSQL, but it might be cheaper than refreshing materialized views. Perhaps (if you have the money) you could use Oracle Heterogeneous Services to modify a PostgreSQL table whenever something changes in an Oracle table.
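A rough PostgreSQL-side sketch of the first (trigger-based) idea, assuming oracle_fdw is already configured as a server called oracle_srv and that an Oracle trigger fills a change table ORDERS_CHANGES(ID, AMOUNT, CHANGED_AT); every object name here is a hypothetical placeholder:

    -- Foreign table over the Oracle change table filled by the trigger.
    CREATE FOREIGN TABLE orders_changes (
        id         integer,
        amount     numeric,
        changed_at timestamp
    ) SERVER oracle_srv OPTIONS (table 'ORDERS_CHANGES');

    -- Local copy of the data.
    CREATE TABLE orders_local (
        id     integer PRIMARY KEY,
        amount numeric
    );

    -- Run this regularly (e.g. from cron) to apply changes made since "since".
    CREATE FUNCTION apply_orders_changes(since timestamp) RETURNS void
    LANGUAGE sql AS $$
        INSERT INTO orders_local (id, amount)
        SELECT id, amount
        FROM   orders_changes
        WHERE  changed_at > since
        ON CONFLICT (id) DO UPDATE SET amount = EXCLUDED.amount;
    $$;

This only covers inserts and updates; deletes would need their own marker in the change table.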
I would like to import all data from an existing table in one database into a new table in a different database in Postgres; any suggestions would be helpful.
The easiest way would be to pg_dump the table and pg_restore in the target database.
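A minimal sketch of that route; the database and table names are placeholders, and both databases are assumed to be reachable locally:

    pg_dump -Fc -t mytable sourcedb > mytable.dump
    pg_restore -d targetdb mytable.dump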
In case it is not an option, you should definitely take a look a postgres_fdw (Foreign Data Wrapper), which allows you to access data from different databases - even from different machines. It is slightly more complex than the traditional export/import approach, but it creates a direct connection to the foreign table.
Take a look at this example.
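A hedged sketch of that setup; the server, connection and table names are placeholders:

    CREATE EXTENSION postgres_fdw;

    CREATE SERVER src_srv FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'localhost', dbname 'sourcedb');

    CREATE USER MAPPING FOR CURRENT_USER SERVER src_srv
        OPTIONS (user 'app_user', password 'secret');

    -- Expose the source table locally, then copy its rows into a real table.
    IMPORT FOREIGN SCHEMA public LIMIT TO (mytable)
        FROM SERVER src_srv INTO public;

    CREATE TABLE mytable_copy AS SELECT * FROM mytable;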
Scenario: I want to take a backup of one schema's tables into another schema in the same database.
Existing design: drop the indexes on the backup tables, truncate the data, and finally load the data into the backup tables (using INSERT queries).
Requirement: the existing design is taking too long to process. Please suggest whether there is any other way to achieve this.
Thanks.
If you have a non-trivial amount of data, and you're already doing the "obvious" things to make it run fast (direct-path inserts, parallel DML mainly, nologging if you can afford that), there's probably not much more you can do while sticking with SQL.
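For reference, those SQL-side options boil down to something like this; the schema and table names are placeholders:

    ALTER SESSION ENABLE PARALLEL DML;

    -- APPEND requests a direct-path insert; PARALLEL spreads the work.
    INSERT /*+ APPEND PARALLEL(t, 4) */ INTO backup_schema.orders t
    SELECT * FROM source_schema.orders;

    COMMIT;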
One thing you could try though is using data pump schema export/import with the remap_schema option. The process would be:
Export the source schema with expdp (in SCHEMAS mode)
Drop the target schema and recreate it (empty)
Import the dump with impdp and the remap_schema=source:target option (see REMAP_SCHEMA).
You can skip the second step and use table_exists_action=replace during the import instead - might be faster, and certainly is better if the target schema has other objects (see TABLE_EXISTS_ACTION.)
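On the command line, the whole sequence might look roughly like this; the credentials, directory object, dump file and schema names are placeholders:

    expdp system/password schemas=SOURCE_SCHEMA directory=DATA_PUMP_DIR dumpfile=source_schema.dmp

    impdp system/password directory=DATA_PUMP_DIR dumpfile=source_schema.dmp \
          remap_schema=SOURCE_SCHEMA:TARGET_SCHEMA table_exists_action=replace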
If you want to stay "in the database" to automate this, there is an API for data pump: DBMS_DATAPUMP.
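A hedged PL/SQL sketch of driving the same import through that API; the directory object, dump file and schema names are placeholders:

    DECLARE
      h     NUMBER;
      state VARCHAR2(30);
    BEGIN
      h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA');
      DBMS_DATAPUMP.ADD_FILE(h, 'source_schema.dmp', 'DATA_PUMP_DIR');
      DBMS_DATAPUMP.METADATA_REMAP(h, 'REMAP_SCHEMA', 'SOURCE_SCHEMA', 'TARGET_SCHEMA');
      DBMS_DATAPUMP.SET_PARAMETER(h, 'TABLE_EXISTS_ACTION', 'REPLACE');
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);
    END;
    /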
I'm copying several tables (~1.5M records) from one data source to another, but it is taking a long time. I'm looking to speed up my use of DBD::Pg.
I'm currently using pg_getcopydata/pg_putcopydata, but I suppose that the indexes on the destination tables are slowing the process down.
I found that I can get some information on a table's indexes using $dbh->statistics_info, but I'm curious whether anyone has a programmatic way to dynamically drop/recreate indexes based on this information.
The programmatic way, I guess, is to submit the appropriate CREATE INDEX SQL statements via DBI that you would enter into psql.
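One way to drive that programmatically is to read the definitions out of pg_indexes before dropping them; each indexdef is a complete CREATE INDEX statement that can be replayed through DBI after the copy (the schema and table names are placeholders):

    SELECT indexname, indexdef
    FROM   pg_indexes
    WHERE  schemaname = 'public'
      AND  tablename  = 'mytable';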
Sometimes when copying a large table it's better to do it in this order:
create table without indexes
copy data
add indexes
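In SQL terms that order is roughly the following; the table, file and index names are placeholders:

    CREATE TABLE items (id integer, name text);            -- 1. table without indexes
    COPY items FROM '/tmp/items.csv' WITH (FORMAT csv);     -- 2. bulk load
    CREATE INDEX items_name_idx ON items (name);            -- 3. add indexes last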
I've got a Postgres 9.0 database that I frequently take data dumps of.
This database has a lot of indexes, and every time I restore a dump Postgres starts a background task, the vacuum cleaner (is that right?). That task consumes a lot of processing time and memory to recreate the indexes of the restored dump.
My question is:
Is there a way to dump the database data and the indexes of that database?
If there is a way, is it worth the effort (I mean, will dumping the data with the indexes perform better than the vacuum cleaner)?
Oracle has the "data pump" command, a faster way to imp and exp. Does Postgres have something similar?
Thanks in advance,
Andre
If you use pg_dump twice, once with --schema-only and once with --data-only, you can cut the schema-only output into two parts: the first with the bare table definitions and the final part with the constraints and indexes.
Something similar can probably be done with pg_restore.
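A hedged sketch of that split; the database and file names are placeholders, and the split point in schema.sql is found by hand:

    pg_dump --schema-only mydb > schema.sql    # split by hand into tables.sql and indexes.sql
    pg_dump --data-only   mydb > data.sql

    psql -d newdb -f tables.sql     # bare table definitions
    psql -d newdb -f data.sql       # bulk-load the data
    psql -d newdb -f indexes.sql    # constraints and indexes last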
Best Practice is probably to
restore the schema without indexes
and possibly without constraints,
load the data,
then create the constraints,
and create the indexes.
If an index exists, a bulk load will make PostgreSQL write both to the table and to the index, and a bulk load will make your table statistics useless. But if you load the data first and then create the index, the stats are automatically up to date.
We store scripts that create indexes and scripts that create tables in different files under version control. This is why.
In your case, changing autovacuum settings might help you. You might also consider disabling autovacuum for some tables or for all tables, but that might be a little extreme.
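Disabling autovacuum for a single table, if you go that route, is a one-line storage-parameter change; the table name is a placeholder:

    ALTER TABLE big_table SET (autovacuum_enabled = false);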