How to load multiple files into MongoDB - mongodb

How do I store multiple files in MongoDB programmatically with pymongo commands? Does anyone have an idea? Considering that this is a very common task, I haven't been able to find a solution for two days now, despite searching through Stack Overflow and trying my own solutions.
Say you have a folder with many .csv files: how do we import each file sequentially into a different collection in the same database?
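A minimal sketch of one way to do this with pymongo and the standard csv module, assuming a local MongoDB instance, a database called mydb, and a data/ folder of CSV files that each start with a header row (the connection string, database name, and folder are placeholders):

```python
import csv
from pathlib import Path

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")    # assumed local instance
db = client["mydb"]                                   # placeholder database name

for csv_path in sorted(Path("data").glob("*.csv")):   # placeholder folder of .csv files
    collection = db[csv_path.stem]                    # one collection per file, named after it
    with csv_path.open(newline="") as f:
        rows = list(csv.DictReader(f))                # header row becomes the field names
    if rows:
        collection.insert_many(rows)                  # bulk insert the whole file
    print(f"{csv_path.name}: {len(rows)} documents -> collection '{csv_path.stem}'")
```

Note that csv.DictReader leaves every value as a string, so cast numeric or date columns before inserting if you need proper types.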

Related

Reading different CSVs using Spring Batch

Hi, I need help with reading different CSV files using Spring Batch. The files are of different types, and I can't work out how to read them. Can somebody help me with this?
I'm using FlatFileItemReader to read one file, but I need to read multiple files.
For example: I need to read all the files, process them, and insert them into the DB.
Since the files are of different types, I would keep it simple and use a different item reader for each type.
If those files can be processed independently, you can process them concurrently in different steps.

Seeding data into a MongoDB database

I'm creating a MERN project and want a file to seed data into my database. Using SQL, I've done this by creating a seed file with a .db extension, which I would then run as a script in my terminal. I am wondering how this is done for MongoDB and what file extension I should use; is it just a JSON file? I am also wondering what the proper way of doing this is. I've looked online, but I see so many different ways that people do things, so I'm just trying to figure out what the standard is.
Create each collection in a separate JSON or CSV file, and load them with mongoimport.
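As a concrete illustration, here is a minimal seed-script sketch that shells out to mongoimport for every file in a seed/ folder; the folder name, database name, and "one JSON array per collection" layout are assumptions, while the mongoimport flags shown are the standard ones:

```python
import subprocess
from pathlib import Path

DB = "mydb"  # placeholder database name

# Assumes seed/users.json, seed/products.json, ... each holding a JSON array of documents.
for seed_file in sorted(Path("seed").glob("*.json")):
    subprocess.run(
        [
            "mongoimport",
            "--db", DB,
            "--collection", seed_file.stem,  # collection named after the file
            "--file", str(seed_file),
            "--jsonArray",                   # the file is a JSON array, not newline-delimited JSON
            "--drop",                        # drop the collection first so re-seeding is repeatable
        ],
        check=True,
    )
```

In a MERN project the same mongoimport calls could just as easily live in an npm script; the wrapper language is incidental.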

Spring Batch job to read multiple files and write to multiple tables

I need to create a Spring Batch job which takes multiple files and writes to multiple tables. I tried to use MultiResourceItemWriter, but my files are located in different folders and have no common name. I'm looking for examples using ListItemReader and ListItemWriter. Any references are highly appreciated. Thank you.
You can try the example here:
https://bigzidane.wordpress.com/2016/09/12/spring-batch-partitionerreaderprocesorwriterhibernateintellij/
I believe it fits your question. If you have more questions, please let me know.

Exporting a single large CSV from MySQL Workbench to the client machine without viewing it in the GUI?

After going through similar questions on Stack Overflow, I am unable to find a method for exporting a large CSV file from a query made in MySQL Workbench (v5.2).
The query returns about 4 million rows across 8 columns (about 300 MB when exported as a CSV file).
Currently I load all the rows (and have to see them in the GUI) and use the export option, which makes my machine crash most of the time.
My constraints are:
I am not looking for a solution via bash terminal.
I need to export it to the client machine and not the database server.
Is this a drawback of MySQL Workbench?
How do I export all the rows into a single file without having to view them in the GUI?
There is a similar question I found, but the answers don't meet the constraints I have:
" Exporting query results in MySQL Workbench beyond 1000 records "
Thanks.
In order to export to CSV, you first have to load all that data, which is a lot to have in a GUI; many controls are simply not made to carry that much data. So your best bet is to avoid the GUI as much as possible.
One way could be to run your query with the output directed to a text window (see the Query menu). This is not CSV, but it should at least work. You can then try to copy the text into a spreadsheet and convert it to CSV.
If that is too much work, try limiting your rows to ranges, say 1 million each, using the LIMIT clause in your query. Lower the size until you reach one that MySQL Workbench can handle. You will get n CSV files that you have to concatenate later. A small script like the one sketched below or (depending on your OS) a system tool should be able to strip the headers and concatenate the files into one.
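For that last step, a small script along these lines (the part_*.csv pattern and combined.csv output name are placeholders) can drop the repeated headers and stitch the chunks together:

```python
from pathlib import Path

parts = sorted(Path(".").glob("part_*.csv"))   # the chunked exports (placeholder pattern)

with open("combined.csv", "w", newline="") as out:
    for i, part in enumerate(parts):
        with part.open(newline="") as f:
            header = f.readline()              # every chunk starts with the same header line
            if i == 0:
                out.write(header)              # keep the header from the first chunk only
            out.writelines(f)                  # copy the remaining data rows verbatim
```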

How do I import certain columns from a CSV file into MongoDB?

I'm not sure if I should ask this on Server Fault or here, but I'm trying to write a PHP script that loops through a folder and adds CSV files to a MongoDB database.
I'd like to import only certain fields/columns. Is that possible, or do I need to import the whole table/collection and then drop fields? Google doesn't seem to be helping...
You might find this question/answer helpful. The OP there was attempting to do the same thing.
The mongoimport command doesn't give you the option of skipping fields during the import process, so that would require a full import followed by $unset operations on the fields you intend to omit. Ultimately, that would leave your collection fragmented.
You would be better served using fgetcsv or str_getcsv in PHP to parse the file. This would also give you the chance to validate/sanitize the input as necessary. Finally, MongoCollection::batchInsert() would efficiently insert multiple documents into MongoDB at once.
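The answer above is PHP-oriented; purely as an illustration, the same approach sketched in Python with pymongo (parse the CSV yourself, keep only the wanted columns, then batch-insert), where the column names, database/collection names, and uploads/ folder are all placeholders:

```python
import csv
from pathlib import Path

from pymongo import MongoClient

KEEP = {"name", "email", "created_at"}            # placeholder: columns you actually want

client = MongoClient("mongodb://localhost:27017") # assumed local instance
collection = client["mydb"]["contacts"]           # placeholder database/collection

for csv_path in Path("uploads").glob("*.csv"):    # placeholder folder the script loops through
    with csv_path.open(newline="") as f:
        docs = [
            {k: v for k, v in row.items() if k in KEEP}   # drop unwanted columns up front
            for row in csv.DictReader(f)
        ]
    if docs:
        collection.insert_many(docs)              # bulk insert, analogous to batchInsert() in PHP
```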