I'm doing a lot of work with large databases in MongoDB, and sometimes I need to make a copy of a database, for which the copydb command fits perfectly. However, at the end of the process it takes a global lock on the whole mongod instance while it builds indexes.
I know there is an option to build an index in the background (background: true), but I couldn't find a similar option for the copydb command (or copyDatabase).
Is there a way to avoid the global lock and force background index creation for Copy Database?
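For reference, I'm copying with something like this from the mongo shell (the database and host names here are placeholders):

// run on the destination server; copies sourcedb from the remote host
db.copyDatabase("sourcedb", "sourcedb_copy", "source.example.com")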
I have a table whose index size is too big (about 2 GB). When I restore the database to a VM, the index is only 200 MB, so I need to rebuild or recreate the index, and I would like to do this online.
What is the difference between rebuilding (REINDEX) and recreating the index, and which one is better when done online? In particular, which option allows querying the DB during the operation?
The REINDEX command requires an exclusive table lock, which means it will stall any access to the table until the command has completed. If you can afford that kind of maintenance window, it's perfectly fine.
The alternative for online rebuilding is to create a new index using CREATE INDEX CONCURRENTLY and then drop the old one. This takes longer to complete, but it allows access to the table while the index is being rebuilt.
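As a rough sketch (the table, column, and index names are placeholders):

-- build the replacement without blocking writes
CREATE INDEX CONCURRENTLY my_idx_new ON my_table (my_column);
-- swap it in once it is ready
DROP INDEX CONCURRENTLY my_idx;
ALTER INDEX my_idx_new RENAME TO my_idx;

Note that a failed concurrent build can leave an invalid index behind (it shows up as INVALID in psql's \d output), so check for that before dropping the old index.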
Postgres 12 has added a REINDEX INDEX CONCURRENTLY command, which does what you want here. https://paquier.xyz/postgresql-2/postgres-12-reindex-concurrently/ https://www.depesz.com/2019/03/29/waiting-for-postgresql-12-reindex-concurrently/
I'm trying to set up an app that will act as a front end to an externally updated mongo database. The data will be pushed into the database by another process.
So far I have the app connecting to the external Mongo instance and pulling data out with no issues, but it's not reactive (it's not seeing any of the new data going into the Mongo database).
I've done some digging, and so far I can only find that I might need to set up a replica set and use the oplog. Is there a way to do this without going to a replica set (or is that the best way anyway)?
The code so far is really simple: a single collection, a single publication (pulling the last 10 records out of the database), and a single template just displaying that data.
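The publication is roughly this (the collection and field names are placeholders):

// server: publish the 10 most recent records
Meteor.publish("latestRecords", function () {
  return Records.find({}, { sort: { createdAt: -1 }, limit: 10 });
});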
I haven't written any Deps code myself (not sure if that's what I'm missing).
Thanks.
Any reason not to use the oplog? From what I've read, it is the recommended approach even if your DB isn't updated by an external process, and a must if it is.
Nevertheless, even without the oplog your app should see the changes the external process makes to the DB. It will take longer (up to 10 seconds, since Meteor falls back to polling the database), but it should update.
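If you do go the oplog route, the usual setup (assuming the external mongod is already running as a replica set) is to point Meteor at the oplog, which lives in the replica set's local database, via an environment variable. The URLs here are placeholders:

MONGO_URL="mongodb://mongo.example.com:27017/mydb" \
MONGO_OPLOG_URL="mongodb://mongo.example.com:27017/local" \
meteor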
I need to load a large amount of data into a table in a DB2 database. I am using CLI LOAD mode on the table, set from a C program using the SQLSetStmtAttr function. SELECT statements do not work (the table is locked) while it is set.
When the loading of the data completes, I turn load mode off. After that the table becomes accessible, so I can perform a SELECT from the DB2 command-line tools (or Control Center).
But the problem is when my C program crashes or fails before turning load mode off: the table stays locked, and I have to drop it, losing all the previous data.
Is there a way to recover the table?
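For reference, the load-mode toggling in the program looks roughly like this (a simplified sketch; hstmt is an already-allocated CLI statement handle, and error checking is omitted):

#include <sqlcli1.h>

void load_rows(SQLHSTMT hstmt)
{
    /* switch the statement into CLI LOAD mode */
    SQLSetStmtAttr(hstmt, SQL_ATTR_USE_LOAD_API, (SQLPOINTER) SQL_USE_LOAD_INSERT, 0);

    /* ... INSERTs executed here are routed through the LOAD utility ... */

    /* switch back to normal mode; this is the step that never runs if the program crashes */
    SQLSetStmtAttr(hstmt, SQL_ATTR_USE_LOAD_API, (SQLPOINTER) SQL_USE_LOAD_OFF, 0);
}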
DBMS documentation is your friend. You can read the description of SQL0668N (or any other error!) to find out what reason code 3 means, as well as how to fix it.
Basically, when a LOAD operation fails, you need to perform some cleanup on the table: either restart or terminate it. This can be done using the LOAD utility from outside of your program (e.g., LOAD FROM /dev/null OF DEL TERMINATE INTO yourtable NONRECOVERABLE), but you can also do it programmatically.
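For example, from the DB2 command line (the table name is a placeholder), you can check the table's state and then terminate the failed load:

db2 "LOAD QUERY TABLE yourtable"
db2 "LOAD FROM /dev/null OF DEL TERMINATE INTO yourtable NONRECOVERABLE"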
Programmatically, you would typically do this using the db2Load() API, setting the piLongActionString member of the db2LoadStruct parameter you pass to db2Load() to the same RESTART or TERMINATE operation.
It looks like you can set the SQL_ATTR_LOAD_INFO statement attribute to the same db2LoadStruct when using a CLI LOAD, too, but I am not sure whether that would actually work to complete a load restart or terminate.
I have one Mongo instance running on Amazon. There are 5M docs in a single collection, with about 20 docs/sec of new data coming in, and no indexes. My server has only 50 GB of space, of which 22 GB is already used.
Now I need to do some data analysis on that data, but because there is no index, when I execute a query the DB blocks and can't insert data until I restart the server.
And data keeps coming in, so I worry that the space will not be enough.
What I'm thinking of doing is building another server, setting up a new Mongo instance on it, and copying the data over. Then I'd add an index on the new instance and do the analysis there.
What is the best way? Any suggestions?
Probably the best way is to just create the index in the background. This will not block anything, and you can then run the indexed query on your node. Creating an index in the background takes a bit longer, but it does prevent the blocking:
db.collection.ensureIndex( { col: 1 }, { background: true } );
See also: http://docs.mongodb.org/manual/reference/method/db.collection.ensureIndex/
If you really want a secondary to do the analysis on, then you can create a replica set from your existing member. For that you will have to take MongoDB down and restart it with the replSet parameter. After starting it with that parameter, you can add a new replica set member, which will sync the data. The syncing will also impact performance, as lots of data will have to be copied, and the primary will need more disk space because of the oplog that MongoDB uses to sync secondaries.
mongodump and mongorestore can also be an option, but then the data between the two nodes will not stay in sync; you would have to run the dump and restore again each time you want to analyze new data. In that case, a replica set might be better.
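A dump-and-restore run would look something like this (the host and database names are placeholders):

mongodump --host prod.example.com --db mydb --out /backup
mongorestore --host analysis.example.com --db mydb /backup/mydb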
A replica set really wants three members, though, to prevent a split brain in case a node goes down. The third member can be another data node, but in your case you would probably want to set up an arbiter. If you don't need automatic failover (I don't think you do here, as you're just doing analysis), then set up your replica set with two nodes, but make the second (new) one hidden: http://docs.mongodb.org/manual/tutorial/configure-a-hidden-replica-set-member/
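A minimal sketch of that setup from the mongo shell (the host name is a placeholder), assuming both mongod processes were started with --replSet rs0:

// on the existing member, after restarting it with --replSet rs0
rs.initiate()
// add the analysis node as a hidden member that can never become primary
rs.add({ _id: 1, host: "analysis.example.com:27017", priority: 0, hidden: true })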
So, in short, your two options are:
- Set up a replica set from the existing member, then add the index on the secondary and do the analysis there.
- Take a mongodump, restore it to a new server, and do the analysis there.
I'd like to create a read-only snapshot of a database at the end of each day, and keep them around for a couple of months.
I'd then like to be able to run queries against a specific (named) snapshot.
Is this possible to achieve elegantly and with minimal resource usage? The database only changes very slowly, but it holds a few GB of data, so almost all data is common to all snapshots.
The usual way to create a snapshot in PostgreSQL is to use pg_dump/pg_restore.
A much quicker method is to simply use CREATE DATABASE to clone your database:
CREATE DATABASE my_copy_db TEMPLATE my_production_db;
This will be much faster than a dump and restore. The only drawback to this solution is that the source database must not have any open connections while it is being copied.
The copy will not be read-only by default, but you can simply revoke the respective privileges from the users to ensure that, as sketched below.
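A minimal sketch (the database, schema, and role names are placeholders); the ALTER DATABASE line is an alternative way to get a similar effect by making every session in the copy read-only by default:

-- strip write privileges from the application role
REVOKE INSERT, UPDATE, DELETE, TRUNCATE ON ALL TABLES IN SCHEMA public FROM app_user;
-- or: default every transaction in the snapshot database to read-only
ALTER DATABASE my_copy_db SET default_transaction_read_only = on;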