Redis DB export/import [closed]

Does anybody know a good solution for export/import in Redis?
Generally, I need to dump a DB from one server (and possibly edit the dump), then load it onto another (e.g. localhost).
Maybe some scripts?

Redis supports two persistence file formats: RDB and AOF.
RDB is a point-in-time dump, which is what you asked for. You can call SAVE (or BGSAVE, which snapshots in the background) to force an RDB dump. It will be written to the file named by your dbfilename setting, or to dump.rdb in the current working directory if that setting is missing.
More Info:
http://redis.io/topics/persistence

If you want one server to load the content of another, no dump is required. You can use SLAVEOF to sync the data, and once the replica is up to date, run SLAVEOF NO ONE.
More information on replication can be found here: http://redis.io/topics/replication
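As a sketch of both routes (host names, paths, and service names below are placeholders, not from the answer; your dir/dbfilename settings determine where the RDB file actually lives):

```shell
# Route 1: dump and copy.
# Force an RDB snapshot on the source server (BGSAVE snapshots in the background).
redis-cli -h source-host SAVE

# Copy the dump file to the target machine; edit it first if needed.
scp source-host:/var/lib/redis/dump.rdb /var/lib/redis/dump.rdb

# Restart the target server; Redis loads dump.rdb from its working directory on startup.
sudo systemctl restart redis

# Route 2: replication, no dump required.
# Make the target a replica, wait until it is in sync, then detach it.
redis-cli -h target-host SLAVEOF source-host 6379
redis-cli -h target-host SLAVEOF NO ONE
```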

You can try my dump utility, rdd. It extracts data from or inserts data into Redis, and can split, merge, filter, and rename dumps.

Related

What is the best way to put content of uploaded by user CSV file into database [closed]

What is the best way to insert the contents of a CSV file (up to 800 MB) uploaded by a web-application user into a PostgreSQL database?
I see three options:
An INSERT statement per file row
An INSERT statement for multiple rows (batches of e.g. 1000 rows)
Storing a temp file and loading it with the PostgreSQL COPY command (the application and database servers share a directory)
Which way is better? Or is there another way?
Additional details:
I use Java 8 and JSP
Database: PostgreSQL 9.5
To handle multipart data I use Apache Commons FileUpload, and Apache Commons CSV to parse the file
Definitely NOT a single INSERT per row. Relying on the PostgreSQL COPY command should be the fastest option.
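A sketch of the COPY option from the command line (database, table, column, and path names are invented for illustration). The client-side \copy variant streams the file through the connection; plain server-side COPY reads the path directly, which the shared directory mentioned in the question makes possible:

```shell
# Bulk-load the uploaded CSV in one operation instead of row-by-row INSERTs.
# FORMAT csv handles quoting/escaping; HEADER skips the first line.
psql -d appdb -c "\copy uploads(name, amount, created_at) FROM '/shared/upload.csv' WITH (FORMAT csv, HEADER true)"
```

From Java, the equivalent is the PostgreSQL JDBC driver's CopyManager API, which exposes COPY over an existing connection.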

Restore a database changing the schema on DB2 [closed]

I have a database dump where the table schema is db2inst2. I would like to restore this database on a different server, but using db2inst1 as the destination schema.
Is there a way to do it using db2 restore command?
If not, is there a way to change the schema of all tables after the restore?
You can use the ADMIN_COPY_SCHEMA procedure to copy all objects from one schema to another.
Once the copy completes and you have verified everything, you can use ADMIN_DROP_SCHEMA to drop the old schema.
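A sketch of the copy-then-drop sequence from the DB2 command line (database and error-table names are placeholders; check the ADMIN_COPY_SCHEMA documentation for the exact parameters on your DB2 version):

```shell
db2 connect to MYDB

# Copy all objects and data from schema DB2INST2 to DB2INST1;
# errors are recorded in the named error table.
db2 "CALL SYSPROC.ADMIN_COPY_SCHEMA('DB2INST2', 'DB2INST1', 'COPY', NULL, '', '', 'ERRSCHEMA', 'ERRTAB')"

# After verifying the new schema, drop the old one.
db2 "CALL SYSPROC.ADMIN_DROP_SCHEMA('DB2INST2', NULL, 'ERRSCHEMA', 'ERRTAB')"
```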

MeteorJS (MongoDB) with Spark [closed]

I am using the MeteorJS framework for one of my projects.
I have built a basic web app with MeteorJS before, and it works perfectly fine when it is just the client, the server, and MongoDB.
In this project, I want the MongoDB instance (bundled with MeteorJS) to be populated with data from Apache Spark.
Basically, Apache Spark will process some data and inject it into MongoDB.
Is this doable?
Can you point me to the right tutorial for this?
How complex is it to implement?
Thanks in advance for your help.
Yes, this is very possible and quite easy. That said, it won't happen via MeteorJS; it would be part of the Apache Spark job and configured there.
With the MongoDB Spark Connector, taking data from a DataFrame or an RDD and saving it to MongoDB is easy.
First you would configure how and where the data is written:
// Configure where to save the data
val writeConfig = WriteConfig(Map("uri" -> "mongodb://localhost/databaseName.collectionName"))
With RDDs, you should convert the elements into Documents via a map function, e.g. (assuming org.bson.Document is imported):
val documentRDD = rdd.map(data => new Document("value", data)) // wrap each element in a BSON Document
MongoSpark.save(documentRDD, writeConfig)
If you are using DataFrames, it's even easier: just provide the DataFrameWriter and the writeConfig:
MongoSpark.save(dataFrame.write, writeConfig)
There is more information in the documentation, and there are examples in the GitHub repo.

How to create a slave of large mongodb databases [closed]

As we know, MongoDB has a limited oplog.
If I create a new slave, nothing in the database is synced yet, and the database is bigger than any oplog.
So how do I get around this? Does that mean we cannot create a new slave whose data is bigger than the oplog? Does MongoDB have another mechanism for syncing the database besides the oplog?
If so, how exactly is it done?
If your database is of reasonable size and you have a snapshot, you can copy over the data files (the directory specified by the --dbpath flag on startup or in the config file) so the new replica set member comes online more quickly. However, an initial sync may still happen.
Conceptually, the following things happen:
Start up the new member as part of the replica set.
Add it to the replica set configuration (rs.conf()).
The new member syncs from the nearest member (which could be a primary or a secondary), pulling data from it (the initial sync) and marking a point in the oplog for its own reference.
The new secondary then applies the oplog from the timestamp it copied from the other replica set member.
If the sync fails, another initial sync (from the very start) will happen. For really large data sets, the sync can take some time.
In reply to your questions
Does that mean we cannot create a new slave that's bigger than the oplog?
You can create and add a new member whose data is bigger than the oplog.
Does MongoDB have another mechanism for syncing the database besides the oplog?
Yes: the initial sync and the file copy mentioned above.
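The steps above look roughly like this (host names, paths, and the replica set name are placeholders):

```shell
# Optionally seed the new member first: with an existing member stopped,
# copy its --dbpath contents to the new host to shorten the catch-up.

# Start the new member as part of the replica set.
mongod --replSet rs0 --dbpath /data/db --port 27017

# From the current primary, add it to the replica set configuration;
# the initial sync (or oplog catch-up, if seeded) begins automatically.
mongo --host current-primary --eval 'rs.add("new-member-host:27017")'
```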

What is an MDF file? [closed]

Is this like an “embedded” database of sorts? A file containing a built in database?
SQL Server databases use two files: an MDF file, known as the primary database file, which contains the schema and data, and an LDF file, which contains the logs. See Wikipedia. A database may also use secondary database files, which normally use the .ndf extension.
As John S. indicates, these file extensions are purely convention: you can use whatever you want, although I can't think of a good reason to do so.
More information is available on MSDN and in Beginning SQL Server 2005 Administration (Google Books).
Just to make this absolutely clear for all:
An .MDF file is "typically" a SQL Server data file; however, it is important to note that it does NOT have to be.
This is because .MDF is nothing more than a recommended/preferred convention; the extension itself does not dictate the file type.
To illustrate, if someone wanted to create their primary data file with an extension of .gbn, they could do so without issue.
The preferred naming conventions are:
.mdf - primary database data file.
.ndf - other (non-primary) database data files.
.ldf - transaction log file.
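To illustrate that the extensions are only a convention, a sketch (database name and file paths invented) that creates a primary data file with a non-standard extension via sqlcmd:

```shell
# SQL Server accepts any extension in FILENAME; .gbn works just as well as .mdf.
sqlcmd -S localhost -Q "CREATE DATABASE DemoDb
  ON PRIMARY (NAME = DemoDb_data, FILENAME = 'C:\data\DemoDb.gbn')
  LOG ON (NAME = DemoDb_log, FILENAME = 'C:\data\DemoDb_log.ldf')"
```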