Importing a large BSON file into MongoDB on a Raspberry Pi - mongodb

I am trying to import a very large BSON file (15 GB) into MongoDB. I am using a Raspberry Pi 3B+ (it has 1 GB of RAM).
The problem I am running into is that when I execute the mongorestore command to import the BSON file, the process is killed.
I have tried adding swap on a USB drive (6 GB added), but the problem persists; the error displayed is related to a lack of RAM.
Executing the command in an Ubuntu virtual machine with 8 GB of RAM shows no problem.
Thanks
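For anyone hitting the same limit: shrinking mongod's WiredTiger cache and keeping mongorestore's parallelism down usually reduces peak memory use on a 1 GB board. A rough sketch, where the paths and the mydb/mycoll names are placeholders:
# start mongod with a small WiredTiger cache (assumes the WiredTiger storage engine; path is a placeholder)
mongod --dbpath /data/db --wiredTigerCacheSizeGB 0.25
# restore the single BSON file into a placeholder database/collection with one insertion worker
mongorestore --db mydb --collection mycoll --numInsertionWorkersPerCollection=1 /path/to/file.bson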

Related

MongoDB runs out of memory when dumping

I have a database which is almost 4 GB in size, running on a 16 GB RAM, 6-core Ubuntu 18.04 machine. I wrote a cron job which backs up the database at a specific time every day, but yesterday it hit an out-of-memory error during the backup process and the MongoDB service was killed. This is my script for dumping: mongodump --db $DB --archive=/home/backup/$BACKUP_NAME --gzip. I am not sure whether 4 GB is big for a db, so what do you suggest?
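If the out-of-memory error comes from the dump itself, dumping collections serially instead of in parallel can lower the peak memory use; mongodump's -j/--numParallelCollections flag controls that. A sketch reusing the variables from the script above:
# dump one collection at a time to keep memory use down
mongodump --db $DB --numParallelCollections=1 --archive=/home/backup/$BACKUP_NAME --gzip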

psql runs out of memory when restoring dump

I have a PostgreSQL text dump file, approximately 4.5 GB in size (uncompressed), that I am trying to restore, but the restore always fails because it runs out of memory.
Interestingly enough, no matter what I try it always fails at the exact same line number of the dump file, which leads me to believe the changes I have attempted have had no effect. (I did look at that line number in the file and it is just another row of data; nothing significant is occurring at that point in the file.)
I am using psql with the -f option, as I read that it can be better than standard input. Both methods fail, however.
I have tried the following:
increase work_mem from 4MB to 128MB
increase shared_buffers from 128MB to 2GB
increase VM memory from 8GB to 16GB
Using both top and pg_top I can see (what I believe shows) that both the OS and the database still have memory available when psql fails. I'm not doubting that something somewhere is running out of memory; I just wish I had a better way of telling what exactly that was.
Other information that may be helpful:
PostgreSQL 10.5
Ubuntu 16.04 LTS running on VMWare Workstation
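For reference, the server-side changes listed above can be applied from the shell like this (a sketch assuming a superuser connection; dump.sql and mydatabase are placeholders, and shared_buffers only takes effect after a restart):
psql -c "ALTER SYSTEM SET work_mem = '128MB';"
psql -c "ALTER SYSTEM SET shared_buffers = '2GB';"
sudo systemctl restart postgresql
# then retry the restore with -f as described above
psql --set ON_ERROR_STOP=on -f dump.sql mydatabase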

Strange Raspberry Pi SD card behaviour

My Raspberry Pi 1 runs Raspbian on a SanDisk Extreme 32GB card.
It works well in the Raspberry Pi: the OS boots with no problem. However, if I shut it down and insert the card into my Ubuntu laptop, I see the error JBD2: Error -5 detected when updating journal superblock for sdb2-8. and I cannot read it.
I first noticed this a year ago. The Raspberry Pi is still working without a problem.
More details would help to identify your problem more accurately.
But it sounds like you may have to run fsck in order to fix filesystem errors.
Here is an example of running fsck if your partition is on sdb2:
sudo fsck /dev/sdb2
It's always recommended to take a backup first.
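A slightly fuller sketch, assuming the card shows up as /dev/sdb on the laptop and the affected partition is sdb2:
# make sure the partition is not mounted before checking it
sudo umount /dev/sdb2
# force a full check and answer yes to repair prompts
sudo fsck -f -y /dev/sdb2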

ogr2ogr slow upload to postgres

I need to upload GeoJSON to a PostgreSQL database using ogr2ogr. When I upload the GeoJSON on the same server that the PostgreSQL database is on, ogr2ogr uploads a file in around 3-4 seconds. When I run ogr2ogr on my Ubuntu 16.04 computer, though, it takes ages, up to an hour or more for the same file. I have an i7-2600K on my desktop while the server is an Amazon EC2 free-trial micro instance, so it can't be processing power. One thing I did notice is that opening the system monitor on my desktop shows a 7 KB/sec upload speed system-wide, so I'm not sure if it's just uploading incredibly slowly from my desktop. I use the same host URL for the ogr2ogr command on both machines, so it's not like the Amazon EC2 instance is saving a DNS lookup. What could be wrong?
I solved it by using --config PG_USE_COPY YES.
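For anyone looking for the full invocation, it looks roughly like this; the connection parameters, input file, and table name are placeholders:
ogr2ogr -f PostgreSQL PG:"host=myhost dbname=mydb user=myuser password=mypass" input.geojson -nln my_table --config PG_USE_COPY YES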

MongoDB 2Gb limit - can't compact database

I have been adding files to GridFS in my 32bit Mongo database. It eventually failed when the size of all Mongo files hit 2Gb. So, I then deleted the files in GridFS. I've tried running the repairDatabase() command, but it fails, saying "mongo requires 64bit for larger datasets". I get the same error trying to run the compact command against GridFS.
So, I've hit the 2Gb limit, but it won't let me compact or repair because it doesn't have space. Talk about Catch22!!
What do I do?
Edit
This is an immediate problem I have - how do I compact the database right now?
I think the only recourse is to upgrade to a 64-bit OS.
I had the same problem with my database and I solved it this way. First I created an Amazon EC2 64-bit instance and moved the database files from the 32-bit instance via a plain copy. Then I made all the needed cleanups in the database on the 64-bit instance and made a dump with mongodump. I moved this dump back to the 32-bit instance and restored the database from it.
If you need to restore the database with the same name that you had before, you can just rename your old db files in dbpath (the files have the database name in their name).
And of course, you should upgrade to 64-bit later. MongoDB support on a 32-bit OS is very limited.
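The dump-and-restore half of that procedure looks roughly like this (hostnames, paths, and the database name are placeholders):
# on the 64-bit instance, after cleaning up the data
mongodump --db mydb --out /backup/dump
# copy /backup/dump back to the 32-bit instance, then restore there
mongorestore --db mydb /backup/dump/mydb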
A shot in the dark here... you could try opening a slave off the master (in 64-bit) and see if you can force a replication over to the slave, essentially backing up your data. I have no idea if this would actually work, as it's pretty clear that 32-bit has a 2 GB limit (all their docs warn about this :( ), but I thought I'd at least post a somewhat potentially creative solution.
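For completeness, the legacy master/slave options this would rely on look like the sketch below; these flags only exist in old MongoDB releases (master/slave replication was removed in modern versions), the paths and port are placeholders, and whether a 64-bit slave can sync from an already-full 32-bit master is exactly the open question here:
# on the existing 32-bit server
mongod --master --dbpath /data/db --port 27017
# on a 64-bit machine, pulling from the master
mongod --slave --source oldhost:27017 --dbpath /data/slave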