Max open files on Ubuntu keeps resetting - mongodb

I am migrating from a file-based cache to MongoDB, and I am having issues with the maximum number of open files. The mongo daemon keeps crashing because too many files are open. I set the max open files limit in /etc/sysctl.conf to 500,000, and I have also set it with ulimit -n 500000 and with unlimited. When I run ulimit -n 500000 it reports that the limit was updated (screenshot of the limit being set to 500,000).
As soon as I close the session, it goes back to the default 1024 (screenshot). Even if I start the mongo daemon from the session that reports the 500,000 limit, it still crashes at the 1,000-file mark.
What should I do to fix this? I am running Ubuntu 16.04 and mongo 3.3.0.

Your new session may not be getting its resource limits from the pam_limits PAM module. Check your /etc/pam.d/su file to see whether pam_limits is enabled there.
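As a rough sketch, assuming a standard Ubuntu layout (the 64000 value and the mongodb user name are placeholders for your own values), you would enable the module and then define the limits in /etc/security/limits.conf:

# /etc/pam.d/su (and/or /etc/pam.d/common-session): load the limits module
session    required   pam_limits.so

# /etc/security/limits.conf: per-user open-file limits applied at login
mongodb    soft    nofile    64000
mongodb    hard    nofile    64000

Log in again (or restart the mongod service) so a fresh session picks the limits up. Also note that if mongod is started by systemd, systemd ignores limits.conf entirely and you would raise LimitNOFILE in the unit file instead.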

Related

Lose data of local MongoDB

My scenario is that I use a local DB and always keep it running. An hour ago I upgraded iTerm2 and all open windows/tabs closed. After restarting iTerm2 I found that mongod was still running, so I killed it with kill $(pidof mongod) and restarted it with mongod --dbpath ./db. However, most of the data I had inserted has disappeared. I checked .zsh_history to confirm that I am using the same db folder. I also checked the location of every mongod.lock on my computer to find other DB folders, and none of them contains my latest data. I looked at mongo.log and the commands I used to query the new data are still there. So, how can I find my lost data? Thanks.
P.S. This situation happened last month when my Mac was forced to shut down because of a low battery. It has happened again, so I am asking to find out why it happens and what I can do about it.

Moving a very large MongoDB database [duplicate]

This question already has answers here:
Too many open files while ensure index mongo
(3 answers)
Closed 5 years ago.
Trying to move a MongoDB database with a little over 100 million documents, from a server in AWS to a server in GCP. mongodump worked, but mongorestore keeps breaking with an error:
error running create command: 24: Too many open files
How can this be done?
I don't want to transfer it by writing a script on the AWS server that fetches each document and pushes it to an API endpoint on the GCP server, because that would take too long.
Edit (adding more details)
Already tried setting ulimit -n to unlimited. Doesn't work as GCP has a hardcoded limit that cannot be modified.
Looks like you are hitting the ulimit for your user. This is likely a function of some or all of the following:
Your user having the default ulimit (probably 256 or 1024 depending on the OS)
The size of your DB; MongoDB's use of memory-mapped files can result in a large number of open files during the restore process
The way in which you are running mongorestore can increase the concurrency, thereby increasing the number of file handles that are open at the same time
You can address the number of open files allowed for your user by invoking ulimit -n <some number> to increase the limit for your current shell. The number you choose cannot exceed the hard limit configured on your host. You can also change the ulimit permanently; more details here. This is the root-cause fix, but it is possible that your ability to change the ulimit is constrained by your cloud provider, so you might want to look at reducing the concurrency of your mongorestore process by tweaking the following settings:
--numParallelCollections int
Default: 4
Number of collections mongorestore should restore in parallel.
--numInsertionWorkersPerCollection int
Default: 1
Specifies the number of insertion workers to run concurrently per collection.
If you have chosen values for these other than 1 then you could reduce the concurrency (and hence the number of concurrently open file handles) by setting them as follows:
--numParallelCollections=1 --numInsertionWorkersPerCollection=1
Naturally, this will increase the run time of the restore process, but it might allow you to sneak under the currently configured ulimit. Just to reiterate, though: the root-cause fix is to increase the ulimit.
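As a minimal sketch of how the two pieces of advice combine (the dump path and the 64000 value are placeholders, and the soft limit can only be raised up to the host's hard limit, which ulimit -Hn will show):

ulimit -n 64000
mongorestore --numParallelCollections=1 --numInsertionWorkersPerCollection=1 /path/to/dump

The first line raises the soft open-files limit for the current shell only; the second runs the restore with the lowest possible concurrency so fewer file handles are open at once.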

Moodle database too large to import in phpMyAdmin

I want to transfer a Moodle website and its database to another host (shared hosting), but my database backup is too large (120 MB), while the maximum file size phpMyAdmin allows for import is 50 MB. Is there any way to import the whole database in one go, or to split it into smaller .sql files?
You can either increase the necessary values in your php.ini file (/etc/php.ini on RedHat):
post_max_size = 250M
upload_max_filesize = 250M
; Optional if the error persists
max_execution_time = 5000
max_input_time = 5000
memory_limit = 1000M
or you can do as the other answers mention and import the file from the shell using
mysql -u user -p database < YourSQL.sql
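One caveat on the php.ini route above: the new values only take effect once the web server (or PHP-FPM) is restarted. On a typical setup that is something along the lines of the following, with the exact service name depending on your distribution and PHP version:

sudo systemctl restart httpd        # RedHat/CentOS with mod_php
sudo systemctl restart apache2      # Debian/Ubuntu
sudo systemctl restart php7.0-fpm   # if PHP runs under FPM

On shared hosting you often cannot edit php.ini at all, in which case the shell import is the more practical route.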
If your hosting account has SSH access, restore the dump on the command line (using the mysql command).
You can log in to the mysql console, select the database, and run the source command:
use database-name
source /file-path/to-sql-file.sql
The source command processes the file in small chunks, so it should not run into packet-size problems.
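For completeness, a minimal end-to-end sketch of that console session (the user name, database name, and dump path are placeholders for your own values):

mysql -u your_user -p
use moodle;
source /home/your_user/moodle_backup.sql;

The first command logs you into the console (you will be prompted for the password); the source line then replays the dump statement by statement on the server side, so the 50 MB upload limit never comes into play.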

Restore HAProxy stats

I use the following command to restart HAProxy when changing the configuration file:
/usr/local/sbin/haproxy -f /etc/haproxy.cfg -p /var/run/haproxy.pid -sf $(</var/run/haproxy.pid)
Sadly, after HAProxy is back up, all stats from the previous run are gone.
Is there a way in HAProxy to restore stats from a previous start?
As of version 1.6, you can dump the server states into a flat file right before performing the reload and let the new process know where the states are stored.
See the example here: seamless_reload
The "show servers state" command preserves server uptime and health status across a reload, but it does not carry session data, bytes in/out, etc. The "show stat" command can dump those stats to a file that you can use to build a report later, although HAProxy has no feature to load that information back in.
Can't be done unfortunately. HAProxy's stats are all in memory, so when restarting (even gracefully with -sf), those stats get lost.
Alternatively, you can export the data to a CSV file before doing the reload/restart:
"http://localhost:8080/haproxy?stats;csv"
or
curl -u <USER>:<MyPASSWORD> "http://localhost:8080/haproxy?stats;csv"
According to the HAProxy 1.5 documentation, you can clear all stats using the unix socket:
clear counters all
Clear all statistics counters in each proxy (frontend & backend) and in each
server. This has the same effect as restarting. This command is restricted
and can only be issued on sockets configured for level "admin".
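For reference, issuing that command against the admin-level socket would look something like this (the socket path is an assumption and must be configured with level admin):

echo "clear counters all" | socat stdio /var/run/haproxy.sock

Note that this goes in the opposite direction of the question: it wipes the counters rather than restoring them, which is only useful if you want a clean baseline after a change.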

How to enable auto clean of pg_xlog

I'm trying to configure a PostgreSQL 9.6 database to limit the size of the pg_xlog folder. I've read a lot of threads about this issue or similar ones but nothing I've tried helped.
I wrote a setup script for my PostgreSQL 9.6 service instance. It executes initdb, registers a Windows service, starts it, creates an empty database, and restores a dump into it. After the script is done, the database structure is fine and the data is there, but the xlog folder already contains 55 files (880 MB).
To reduce the size of the folder, I tried setting wal_keep_segments to 0 or 1, setting max_wal_size to 200MB, reducing checkpoint_timeout, setting archive_mode to off and archive_command to an empty string. I can see the properties have been set correctly when I query pg_settings.
I then forced checkpoints through SQL, vacuumed the database, restarted the Windows service, and tried pg_archivecleanup; nothing really worked. My xlog folder shrank to 50 files (800 MB), nowhere near the 200 MB limit I set in the config.
I have no clue what else to try. If anyone can tell me what I'm doing wrong, I would be very grateful. If more information is required, I'll be glad to provide it.
Many thanks
PostgreSQL won't aggressively remove WAL segments that were already allocated while max_wal_size was at its default value of 1GB.
The reduction happens gradually, whenever a WAL segment is full and needs to be recycled. At that point PostgreSQL decides whether to delete the file (if max_wal_size is exceeded) or rename it to a new WAL segment for future use.
If you don't want to wait that long, you can force a number of WAL switches by calling the pg_switch_xlog() function, which should reduce the number of files in your pg_xlog.
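A hedged sketch of that approach on 9.6 (on PostgreSQL 10 and later these functions were renamed, e.g. pg_switch_xlog() became pg_switch_wal()):

SELECT pg_switch_xlog();   -- close the current WAL segment and start a new one
CHECKPOINT;                -- let the checkpointer recycle or remove segments it no longer needs

You may need to repeat the pair a few times, with a little write activity in between (a switch is skipped if nothing was written since the last one), before the file count in pg_xlog actually drops toward the max_wal_size target.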