Seeding data into a MongoDB database

I'm creating a MERN project and want a file to seed data into my database. With SQL I've done this by creating a seed file with a .db extension, which I would then run as a script in my terminal. I am wondering how this is done for MongoDB and what file extension I should use: is it just a JSON file? I am also wondering what the proper way of doing this is. Looking online I see so many different ways that people do it, so I'm just trying to figure out what the standard is.

Create each collection in a separate JSON or CSV file, and load each one with mongoimport.
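For example, with a users.json file holding an array of documents and a products.csv file with a header row (the database, collection, and file names here are all placeholders), the imports might look like this:

    # Load a JSON array of documents into the "users" collection
    mongoimport --db=myapp --collection=users --file=users.json --jsonArray

    # Load a CSV file whose first line names the fields
    mongoimport --db=myapp --collection=products --file=products.csv --type=csv --headerline

A small shell script with one mongoimport line per collection then plays the same role as the SQL seed script you would run from the terminal.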

Related

Clone/backup of an ArangoDB collection with Python

Is there a Pythonic way to do a quick backup or copy of a collection (document and edge)?
The usual way seems to be using arangodump/arangoimp from the console, and there are also some tutorials on how to do a full backup to a different server in JS.
I'm just trying a simple case: directly copying a collection on the existing server.
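As a hedged sketch of the direct in-server copy being asked about, the python-arango driver can stream documents from one collection into a new one; the host, credentials, and collection names below are placeholders:

    # Sketch only: copy one collection inside the same server with python-arango.
    from arango import ArangoClient

    client = ArangoClient(hosts="http://localhost:8529")        # placeholder host
    db = client.db("mydb", username="root", password="passwd")  # placeholder credentials

    src = db.collection("things")
    if db.has_collection("things_copy"):
        db.delete_collection("things_copy")
    dst = db.create_collection("things_copy")  # pass edge=True to copy an edge collection

    # Stream documents from the source and bulk-insert them in batches.
    batch = []
    for doc in src.all():
        batch.append(doc)
        if len(batch) >= 1000:
            dst.insert_many(batch)
            batch = []
    if batch:
        dst.insert_many(batch)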

How to create a flat file for a MongoDB database, and how to use that flat file in Elasticsearch and Kibana to query the data?

I am learning Elasticsearch and its stack, and I have been assigned a task: first I need to create a flat file of a MongoDB database that I have set up locally, and then, using that flat file, I need to import the data into Kibana and query the database.
Elasticsearch and Kibana are also hosted locally.
I don't know how to create a flat file; I just heard the word and googled what it is. I have learned the Elasticsearch query language, but I don't know how to create an index from a flat file in Kibana or Elasticsearch.
I don't need a full explanation or steps; some references to solutions would be awesome.
Use mongoexport to create either JSON or CSV format files from a MongoDB database.
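As a hedged example (the database, collection, and field names below are placeholders), exporting a collection as JSON or CSV looks like this:

    # Dump a collection as a JSON array
    mongoexport --db=myapp --collection=users --jsonArray --out=users.json

    # Dump selected fields as CSV (mongoexport requires an explicit field list for CSV)
    mongoexport --db=myapp --collection=users --type=csv --fields=name,email --out=users.csv

The resulting file can then be ingested, for example through Kibana's file upload integration or a Logstash pipeline.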

Working with PowerShell and file-based DB operations

I have a scenario with a lot of files listed in a CSV file that I need to do operations on. The script needs to handle being stopped or failing, and then continue where it left off. In a database scenario this would be fairly simple: I would have an "updated" column and set it when the operation for that line completed. I have looked at whether I could somehow update the CSV on the fly, but I don't think that is possible. I could start keeping multiple files, but that's not elegant. Can anyone recommend some kind of simple file-based DB-like framework, where from PowerShell I could create a new database file (maybe JSON), read from it, and update it on the fly?
If your problem is really so complex that you actually need a local database solution, then consider SQLite, which was built for exactly such scenarios.
In your case, since you process the CSV row by row, I assume storing the info for the current row only (line number, status, etc.) will be enough.
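A minimal sketch of that checkpoint pattern, shown here with Python's built-in sqlite3 for brevity (from PowerShell, a community SQLite module exposes the same SQL; the file, table, and column names are placeholders):

    # Resume-safe CSV processing: record each finished row in a SQLite table.
    import csv
    import sqlite3

    def process(row):
        pass  # stand-in for the real per-row operation

    conn = sqlite3.connect("progress.db")
    conn.execute("CREATE TABLE IF NOT EXISTS progress (line INTEGER PRIMARY KEY, status TEXT)")

    with open("files.csv", newline="") as fh:
        for line_no, row in enumerate(csv.reader(fh)):
            done = conn.execute(
                "SELECT 1 FROM progress WHERE line = ? AND status = 'done'", (line_no,)
            ).fetchone()
            if done:
                continue  # this row finished on a previous run
            process(row)
            conn.execute("INSERT OR REPLACE INTO progress VALUES (?, 'done')", (line_no,))
            conn.commit()  # commit per row, so a crash loses at most the current row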

Import data to Cassandra and create the primary key

I've got some CSV data to import into Cassandra. This could work with the COPY command. The problem is that the CSV doesn't provide a unique ID for the data, so I need to create a timeuuid on import.
Is it possible to do this via the COPY command, or do I need to write an external script for the import?
I would write a quick script to do it; the COPY command can really only handle small amounts of data anyway. Try the new Python driver. I find it quick to set up loading scripts with, especially if you need any sort of minor modification of the data before it is loaded.
If you have a really big data set, bulk loading is still the way to go.
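To make that concrete, here is a hedged loading sketch with the DataStax Python driver; the keyspace, table, and CSV layout below are assumptions:

    # Read CSV rows and insert them with a generated timeuuid primary key.
    import csv
    import uuid

    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect("my_keyspace")

    insert = session.prepare("INSERT INTO readings (id, sensor, value) VALUES (?, ?, ?)")

    with open("data.csv", newline="") as fh:
        for sensor, value in csv.reader(fh):
            # uuid1() is time-based, which matches Cassandra's timeuuid type
            session.execute(insert, (uuid.uuid1(), sensor, float(value)))

    cluster.shutdown()

Preparing the statement once keeps the per-row cost down, and the loop is also the natural place to massage each row before it is written.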

Loading a CSV into a Core Data-managed SQLite db

I have a CSV file containing data.
I want to load it into a Core Data-managed SQLite db.
I just ran one of the sample Core Data Xcode apps and noticed it created the db file.
I noticed the table names all started with Z and the primary keys were stored in a separate table, so am I right in presuming that importing the CSV data directly into the db with the sqlite3 command line might mess up the primary keys?
Do I need to write a program that reads the CSV line by line, creates an object for each row, and persists it to the db?
Has anyone got any code for this?
And can I write a desktop client to do this using Core Data? If so, will the db be fine to use in an iPhone Core Data app?
Can I then just include the prefilled db in my project so that it is deployed with the app correctly, or is there something else I should do?
Use NSScanner to read your CSV file into NSManagedObject instances in your Core Data store.
I have some categories on NSString for reading and writing CSV files from/to NSArrays. I'll post them online and edit my answer with a link.
Edit: they're online here: http://github.com/davedelong/CHCSVParser