Data Migration from Java Hibernate SQL Server to Python Mongo Stack

I have a live website with multiple active users (around 30K), and each of them has their own configuration to render their homepage. The current stack of the portal is Java Spring Hibernate with SQL Server. We have now rewritten the code in a Python MongoDB stack and want to migrate our users to the new system. The issue here is that the old and new code will be deployed on separate machines, and we want to run this migration for a few users as part of beta testing. Once the beta testing is done, we will migrate all the users.
What would be the best approach to achieve this? We are thinking about dumping the data to an intermediate format such as XML/JSON on a remote server and then reading it in the new code.
Please suggest the best way to accomplish this task.

Import CSV, TSV or JSON data into MongoDB.
It will be faster and simpler to dump the data to a format such as JSON, TSV or CSV, copy the file to the new server, and then import the data using mongoimport from the command-line shell.
Example
mongoimport -d databasename -c collectionname < users.json
Refer to the link below for more information on mongoimport if you need to:
http://docs.mongodb.org/manual/reference/mongoimport/
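If you want to script the dump on the old (Java/SQL Server) side for just the beta users, a minimal sketch along these lines could produce the newline-delimited JSON that mongoimport expects by default. The connection string, table, and column names (user_config, user_id, homepage_config) are placeholders for whatever your Hibernate schema actually uses:

import json
import pyodbc

# Connection string and schema below are assumptions; adjust to your environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=oldhost;DATABASE=portal;UID=user;PWD=secret"
)
cursor = conn.cursor()

beta_user_ids = [101, 102, 103]  # the subset of users selected for beta testing
placeholders = ",".join("?" * len(beta_user_ids))
cursor.execute(
    "SELECT user_id, homepage_config FROM user_config WHERE user_id IN (%s)" % placeholders,
    beta_user_ids,
)

columns = [col[0] for col in cursor.description]
with open("users.json", "w") as out:
    for row in cursor.fetchall():
        # one JSON document per line; default=str copes with datetimes/decimals
        out.write(json.dumps(dict(zip(columns, row)), default=str) + "\n")

The resulting users.json can then be copied to the new machine and loaded with the mongoimport command above.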

Related

Best practice for importing bulk data to AWS RDS PostgreSQL database

I have a big AWS RDS database that needs to be updated with data on a periodic basis. The data is in JSON files stored in S3 buckets.
This is my current flow:
Download all the JSON files locally
Run a Ruby script to parse the JSON files and generate a CSV file matching the table in the database
Connect to RDS using psql
Use the \copy command to append the data to the table
I would like to switch this to an automated approach (maybe using an AWS Lambda). What would be the best practices?
Approach 1:
Run a script (Ruby / JS) that parses all folders in the past period (e.g., a week) and, while parsing each file, connects to the RDS DB and executes an INSERT command. I feel this would be a very slow process with constant writes to the database and wouldn't be optimal.
Approach 2:
I already have a Ruby script that parses local files to generate a single CSV. I can modify it to parse the S3 folders directly and create a temporary CSV file in S3. The question is - how do I then use this temporary file to do a bulk import?
Are there any other approaches that I have missed and might be better suited for my requirement?
Thanks.
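For Approach 2, one way to avoid downloading anything locally is to stream the CSV object straight from S3 into PostgreSQL's COPY, which behaves like the manual \copy append. A rough sketch, assuming boto3 and psycopg2; the bucket, key, table, and connection details are placeholders:

import boto3
import psycopg2

# Bucket, key, table, and connection details are assumptions; substitute your own.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-data-bucket", Key="exports/latest.csv")

conn = psycopg2.connect(
    host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",
    dbname="mydb",
    user="loader",
    password="secret",
)
with conn, conn.cursor() as cur:
    # COPY ... FROM STDIN streams the S3 object body directly into the table
    cur.copy_expert(
        "COPY my_table FROM STDIN WITH (FORMAT csv, HEADER true)",
        obj["Body"],
    )

The same script could run inside a scheduled or S3-triggered Lambda. RDS for PostgreSQL also offers the aws_s3 extension for server-side imports, which may suit very large files better.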

How do I dump data from an Oracle database without access to the database's file system

I am trying to dump the schema and data from an existing Oracle DB and import it into another Oracle DB.
I have tried using the "Export Wizard" provided by SQL Developer.
I found answers using Oracle Data Pump; however, I do not have access to the filesystem of the DB server.
I expect to get a file that I can copy and import into another DB.
Without Data Pump, you have to make some concessions.
The biggest concession is that you're going to ask a client application, running somewhere on your network, to deal with a potentially HUGE amount of data/IO.
Within reasonable limits, you can use the Tools > Database Export wizard to build a series of SQL*Plus-style scripts, both DDL (CREATEs) and data (INSERTs).
Once you have those scripts, you can use SQL*Plus, SQLcl, or SQL Developer to run them on your new/target database.
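If the wizard struggles with the volume, the same client-side idea can be scripted; this is not the wizard itself, just an illustration of the concession described above. A minimal sketch using cx_Oracle, where the credentials, DSN, and table name are placeholders, that pulls rows over the network and writes CSV you can load into the target with SQL Developer's import or SQL*Loader:

import csv
import cx_Oracle

# Credentials, DSN, and table name are assumptions for illustration only.
conn = cx_Oracle.connect("scott", "tiger", "sourcehost:1521/ORCLPDB1")
cursor = conn.cursor()
cursor.execute("SELECT * FROM my_table")

with open("my_table.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow([col[0] for col in cursor.description])  # header row
    # fetch in batches so the client never holds the whole table in memory
    while True:
        rows = cursor.fetchmany(10000)
        if not rows:
            break
        writer.writerows(rows)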

Azure Cosmos DB: Clone collection to another database

Currently I am trying to clone a Cosmos DB collection from one database to another database within the same Cosmos DB account. The API of the Cosmos DB account is set to the Mongo API.
I already tried to use Azure Data Factory, but it looks like there is no support for the Mongo API so far.
Does anyone have an idea how to do this with respect to efficiency, automation and performance?
Any ideas are appreciated.
You can use the Data Migration Tool suggested by Microsoft to do this.
There is no way to take a backup and import it into Cosmos DB.
EDIT:
With the new Cosmic Clone tool, you can take a clone/backup including data, stored procedures, triggers, UDFs, etc. Read my blog on the same.
"I already tried to use Azure Data Factory, but it looks like there is no support for the Mongo API so far."
Actually, the Cosmos DB Mongo API and SQL API both belong to the Azure Cosmos DB service, so you can still create a Cosmos DB linked service and dataset in Azure Data Factory for your database.
Then you could create a copy activity to import data from one collection to another collection.
If you want to make it an automated task, I suggest the following two ways to run the copy activity:
1. An Azure Function with a timer trigger.
2. A WebJob that runs in the background of an Azure Web App.
Hope it helps you. If you have any concerns, please feel free to let me know.
I used mongodump and mongorestore to copy my database (with MongoDB version 4.0.9 installed). From the Windows command line I ran the following commands from my MongoDB bin directory (C:\Program Files\MongoDB\Server\4.0\bin in my case).
This will copy all the collections, including indexes, in the DB to the specified /out directory as BSON dump files with accompanying metadata.
mongodump.exe /uri:URI /out:A_DIRECTORY_TO_DUMP_TO
I then ran the following command to take everything in the /out directory and write it to the target DB:
mongorestore.exe /uri:URI /dir:DIRECTORY_TO_RESTORE_FROM
NOTE: Before importing I also had to increase the throughput for the collection, otherwise I ran into rate limiting errors. If you've set throughput at the database level this may need to be changed.
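For smaller collections, another option is a short script against the Mongo API connection strings that reads from the source collection and bulk-inserts into the target. This copies documents only (no indexes or throughput settings), and the connection strings, database, and collection names below are placeholders:

from pymongo import MongoClient

# Connection strings and names are assumptions; use the values from the Azure portal.
source = MongoClient("mongodb://source-account:KEY@source-account.mongo.cosmos.azure.com:10255/?ssl=true")
target = MongoClient("mongodb://target-account:KEY@target-account.mongo.cosmos.azure.com:10255/?ssl=true")

src_coll = source["sourcedb"]["mycollection"]
dst_coll = target["targetdb"]["mycollection"]

batch = []
for doc in src_coll.find():
    batch.append(doc)
    if len(batch) == 500:  # insert in batches to stay under request-rate limits
        dst_coll.insert_many(batch)
        batch = []
if batch:
    dst_coll.insert_many(batch)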

IBM WCS and DB2 : Want to export all catentries data from one DB and import into another DB

Basically I have two environments, production and QA. On the QA DB the data is not the same as it is on production, so my QA team is unable to test properly. I want to import all catentry/product-related data into the QA DB from production. I searched a lot but have not found any solution for this.
Maybe I need to find all product-related tables, export them one by one, and then import them into the dev DB, but I am not sure.
Can anyone please guide me on this? How can I do this activity following best practices?
I am using DB2.
The WebSphere Commerce data model is documented, which will help you identify all related tables. You can then use the DB2 utility db2move to export (and later load) those tables in one shot. For example,
db2move yourdb export -sn yourschema -tn catentry,catentrel,catentdesc,catentattr
Be sure to list all tables you need, separated by commas with no spaces. You can specify patterns to match table names:
db2move yourdb export -sn yourschema -tn "catent*,listprice"
db2move will create a file db2move.lst that lists all extracted tables, so you can load all data with:
db2move yourQAdb load -lo replace
running from the same directory.

How to export meteor mongodb data to CSV or any other file format?

I have an application which collects email, phone number, address, etc. I want to get this data from the Meteor Mongo DB and export it into some kind of file format (maybe CSV).
I tried using mongoexport but even that didn't work:
mongoexport
What's the command to export Meteor database data into a file, or is there any kind of package?
FYI: the application is hosted on an Amazon EC2 instance.
You can still use mongoexport, you just need to point it at the Meteor-internal MongoDB:
mongoexport --port=3001
Just make sure meteor is running when you issue this command.
This is assuming you are running in development (using meteor, not a bundle), and using the default port (3000).
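If you need specific fields (email, phone number, address) in CSV and more control than mongoexport gives, a small pymongo script pointed at the same Meteor-internal instance works too. The collection and field names here are assumptions for illustration:

import csv
from pymongo import MongoClient

# Meteor's development MongoDB listens on the app port + 1 (3001 by default);
# the collection and field names are placeholders for whatever your app stores.
db = MongoClient("mongodb://localhost:3001")["meteor"]

fields = ["email", "phone", "address"]
with open("contacts.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=fields)
    writer.writeheader()
    for doc in db["contacts"].find({}, {f: 1 for f in fields}):
        writer.writerow({f: doc.get(f, "") for f in fields})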