I need to update my MongoDB database from given JSON files each time. Is there any way to import a JSON file into MongoDB with Scala? Or is it possible to execute a raw mongo command like the one below from a Scala environment?
mongoimport --db issue --collection customer --type json --file /home/lastvalue/part-00000.json
In Java we can do it like this, but I need to implement it in Scala. Where do I get the libraries for these classes?
When writing Scala, you can use any Java library, including the Process library that your link refers to.
This will allow you to run the mongoimport command in a process spawned from your Scala code (a sketch follows below). If you're looking for a solution written entirely in Scala, you should use the mongo-scala-driver; its documentation includes a complete example that mimics mongoimport's functionality.
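For example, here is a minimal sketch (database, collection, and file path taken from your command above) that spawns mongoimport with scala.sys.process:

import scala.sys.process._

object RunMongoImport extends App {
  // Build the command as a Seq so each argument is passed safely, without shell quoting
  val cmd = Seq(
    "mongoimport",
    "--db", "issue",
    "--collection", "customer",
    "--type", "json",
    "--file", "/home/lastvalue/part-00000.json"
  )
  // `!` runs the process synchronously and returns its exit code
  val exitCode = cmd.!
  if (exitCode != 0) sys.error(s"mongoimport exited with code $exitCode")
}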
Is there a tool that reads the structure of an existing MongoDB database and generates the appropriate model.py code (using the declarative style if possible) for a FastAPI Python app using Motor?
https://koxudaxi.github.io/datamodel-code-generator/
This code generator creates a pydantic model from a MongoDB JSON file.
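For example, assuming you have exported a sample document from a collection to customer_sample.json (file name hypothetical), the generator can be invoked from the command line:

datamodel-codegen --input customer_sample.json --input-file-type json --output model.py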
I exported the collections from a local database and I want to import them into an EC2 instance.
I did the following:
1) I exported the collections to a folder called data. The files are in this format:
collection_test.bson
collection.metadata.json
2) I transferred the folder to an EC2 instance. The path to access the folder is like this:
/home/ec2-user/data
3) I went into mongo and did "use database_test" and created a collection like this: db.createCollection("data")
Finally, I tried to import the file this way:
mongoimport --db database_test --collection data --file /home/ec2-user/data/data.metadata.json -jsonArray
but I get this error:
2022-02-18T19:29:38.380+0000 E QUERY [js] SyntaxError: missing ; before statement #(shell):1:14
I'd appreciate it if anyone can help me analyze this!
The problem is that you used mongodump, which created the xxx.bson and xxx.metadata.json files, so you need to use mongorestore to read them. Use mongoimport to read files created with mongoexport.
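For example, pointing mongorestore at the directory holding the .bson and .metadata.json files (paths taken from your question) should recreate the collection:

mongorestore --db database_test /home/ec2-user/data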
for a full explanation see https://docs.mongodb.com/database-tools/
In short, mongodump/mongorestore deal with BSON files, while mongoexport/mongoimport work with CSV/TSV/JSON files. One neat thing about these commands is that if you supply an optional -q parameter like {field: x}, only the records that the filter selects will be included in the dump.
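For example, a filtered dump of a single collection might look like this (the field and value are illustrative):

mongodump --db database_test --collection data -q '{"field": "x"}'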
My application is using RethinkDB. Everything is running fine, but a new requirement needs the database migrated to MongoDB.
Is this possible? How do I migrate the tables/collections, data, indexes, etc?
How about blob types, auto-increments, ids?
Thanks!
Is this possible? How do I migrate the tables/collections, data, indexes, etc?
One way to migrate data from RethinkDB to MongoDB is to export data from RethinkDB using the rethinkdb dump command, and then use mongoimport to import it into MongoDB. For example:
rethinkdb dump -e dbname.tableName
This would generate an archive file:
rethinkdb_dump_<datetime>.tar.gz
After uncompressing the archive file, you can then use mongoimport as below:
mongoimport --jsonArray --db dbName --collection tableName ./rethinkdb_dump_<datetime>/dbName/tableName.json
Unfortunately, the index formats of RethinkDB and MongoDB are quite different. The indexes are stored within the same archive file:
./rethinkdb_dump_<datetime>/dbName/tableName.info
You can still write a Python script to read the info file and use the MongoDB Python driver (PyMongo) to create the indexes in MongoDB; see also the create_indexes() method for more information.
One of the reasons for suggesting Python is that RethinkDB also has a Python client driver. So technically, you could skip the export stage and write a script that connects your RethinkDB directly to MongoDB.
I have one live website with multiple active users (around 30K), and each of them has their own configuration to render their homepage. The current stack of the portal is Java Spring Hibernate with SQL Server. We have now rewritten the code on a Python/MongoDB stack and want to migrate our users to the new system. The issue is that the old and new code will be deployed on separate machines, and we want to run this migration for a few users as part of beta testing. Once the beta testing is done, we will migrate all the users.
What would be the best approach to achieve this? We are thinking about dumping the data in an alternative format like XML/JSON on a remote server and then reading it into the new code.
Please suggest the best way to accomplish this task.
mongoimport imports CSV, TSV, or JSON data into MongoDB.
It will be faster and simpler to dump the data to a format like JSON, TXT, or CSV, copy the file to the new server, and then import the data using mongoimport from the command-line shell.
Example
mongoimport -d databasename -c collectionname < users.json
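Note that this form expects one JSON document per line; if users.json contains a single JSON array instead, pass the --jsonArray flag:

mongoimport -d databasename -c collectionname --jsonArray --file users.json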
See the link below for more information on mongoimport:
http://docs.mongodb.org/manual/reference/mongoimport/
Does anyone know how to populate MongoDB with initial data? For example, with a traditional SQL database, you can put all your SQL statements in a text file, then load them with a single SQL command. This is extremely useful for unit testing.
Is it possible to do this with the mongo shell? For example, write down a list of shell statements into a file, and get the mongo shell to read the file and execute the statements.
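You can pass a JavaScript file to the mongo shell, and it will execute the statements in it: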
./mongo server:27017/dbname --quiet my_commands.js
In my_commands.js:
db.users.insert({name:"john", email:"john@doe.com", age:12});
You can use the mongoimport tool that comes with MongoDB to import raw data.
To run scripts from a file, e.g. to recreate indexes, pass the file name(s) as command-line arguments:
mongo file.js "another file.js"
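For instance, such a script could recreate indexes with calls like the following (collection and field names are illustrative):

db.users.createIndex({ email: 1 }, { unique: true });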