PgBouncer ERROR accept() failed: Too many open files - postgresql

I am running a server with 20 CPU cores and 96 GB of RAM. I have configured PostgreSQL and PgBouncer to handle 1000 connections at a time.
However, when the number of connections increases (even though it is still well below the 1000 limit I have set), connections start to fail. I checked the PgBouncer log and noticed the following:
ERROR accept() failed: Too many open files
What limit do I need to increase to solve this issue? I am running Debian 8

Increase the operating system's limit on the maximum number of open files for the user under which PgBouncer is running.

I added the parameters below to the PgBouncer systemd service. After that I ran pgbench again, and the problem was solved.
The right limit depends on your system's current file-descriptor limits. You can check them with these commands:
cat /proc/sys/fs/file-max
ulimit -n
ulimit -Sn
ulimit -Hn
vim /lib/systemd/system/pgbouncer.service
[Service]
LimitNOFILE=64000
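After editing the unit file, systemd has to reload it and PgBouncer has to be restarted before the new limit applies. A minimal sketch of applying and verifying it (assuming the service is named pgbouncer on your system):
systemctl daemon-reload
systemctl restart pgbouncer
# confirm the running process actually got the new limit
cat /proc/$(pidof pgbouncer)/limits | grep 'open files'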

Related

Broken configuration in Mongo on ubuntu - cannot start mongod with correct config

I have managed to break what was a stable instance of Mongo running on an Ubuntu server.
It doesn't seem to start the service using the correct config.
Running mongod gives me the following:
2016-11-01T16:06:27.853+0000 I STORAGE [initandlisten] exception in initAndListen: 29 Data directory /data/db not found., terminating
and therefore, when I try to run the mongo shell I get:
2016-11-01T16:06:48.476+0000 W NETWORK Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
The config file at /etc/mongod.conf states the dbpath as /var/lib/mongod, which already has a bunch of databases in it (which were working!), and the port as 3306. This is clearly not what the error messages above are pointing to.
I've tried running mongod with this config file using --config /etc/mongod.conf but I get this:
2016-11-01T16:09:12.530+0000 F CONTROL Failed global initialization: FileNotOpen Failed to open "/var/log/mongodb/mongod.log"
Any ideas of what steps I can take to restore the original service on the right dbpath and port?
There is an upstart file at /etc/init/mongod.conf but a server reboot hasn't had any impact.
Thanks.
When running mongod with its data files on a volume that is low on disk space (I typically had this issue on my always-crowded development laptop), you may experience the above, slightly misleading symptoms. From memory, the required space in my/our case was around a few gigabytes; 3 typically sufficed.
If this fits, you can either delete files to free space or, since you aren't in a production environment, ignore the check by using the --smallfiles option; see the documentation.
(I just noticed that this issue is about 2 years old ... possibly only relates to the mmapv1 engine, which isn't the default since 3.2. Having written it, I'll post this possible answer anyway, probably won't be of much use though by now :) )
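If you want to rule disk space in or out quickly, a minimal check, assuming the dbpath from /etc/mongod.conf really is /var/lib/mongod and the log directory from the error is /var/log/mongodb:
df -h /var/lib/mongod /var/log/mongodb
# also check the log directory exists and is writable by the mongodb user,
# since the FileNotOpen error above points at the log file rather than the dbpath
ls -ld /var/log/mongodb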

Monit memcached config without pidfile

I have a classic situation: I need to configure monit for memcached on CentOS 7. The problem is that all the configs I can find on Google contain this line:
check process memcached with pidfile /var/run/memcached/memcached.pid
However, there is no memcached.pid file in /var/run and no /var/run/memcached folder. I've checked /usr/lib/systemd/system/memcached.service:
[Service]
Type=simple
EnvironmentFile=-/etc/sysconfig/memcached
ExecStart=/usr/bin/memcached -u $USER -p $PORT -m $CACHESIZE -c $MAXCONN $OPTIONS
So there is no path to a .pid file.
The first question is: can I check memcached without a .pid file?
The second question: could this .pid file be in another location?
In your monit config, replace
check process memcached with pidfile /var/run/memcached/memcached.pid
with
check process memcached with match memcached
My config for memcached:
check process memcached with match memcached
start program = "/usr/bin/systemctl start memcached"
stop program = "/usr/bin/systemctl stop memcached"
if failed host 127.0.0.1 port 11211 protocol MEMCACHE then restart
if cpu > 70% for 2 cycles then alert
if cpu > 98% for 5 cycles then restart
if 2 restarts within 3 cycles then timeout
CentOS 7, monit 5.14
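As for the second question: memcached can also write a .pid file itself if you pass its -P flag. A minimal sketch, assuming the stock unit above reads its OPTIONS from /etc/sysconfig/memcached and that the memcached user can write to the chosen directory:
# in /etc/sysconfig/memcached, add -P to the existing OPTIONS line, e.g.:
OPTIONS="-P /var/run/memcached/memcached.pid"
mkdir -p /var/run/memcached
chown memcached:memcached /var/run/memcached
systemctl restart memcached
Note that /var/run is a tmpfs on CentOS 7, so the directory has to be recreated after a reboot (e.g. via a tmpfiles.d entry). That said, matching on the process name as above is simpler, since it keeps working if the pidfile location ever changes.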

How should I set up a mongodb cluster to handle 20K+ simultaneous connections

My application uses MongoDB as its database. We are expecting 20K+ simultaneous connections to the MongoDB cluster. How should I configure the servers if I want to run MongoDB on 20 servers and shard the cluster 20 ways?
Here is what I've done so far:
On each of my 20 servers I have one mongos (router) running on port 30000, and on 3 of the servers I run mongo config servers on port 20000. Then on each server I run 3 instances of mongod, one of which is a primary. In other words, I have 20 mongos, 3 mongo config servers, and 60 mongod instances (20 primaries and 40 replicas).
Then in my application (which also runs on each server and connects to the mongos on localhost:30000), I set mongoOptions such that connectionsPerHost = 1000.
10-15 minutes after all the services start, some of the servers are no longer SSH-able, although they are still ping-able. I suspect there were too many connections and that this caused the servers to die.
My own analysis is as follows:
1K connections per connection pool means each shard's primary will have 1K * 20 (shards) = 20K simultaneous connections open. A few servers will probably have more than one primary running on them, which will double or triple the number of connections to 60K. Somehow mongod cannot handle this many connections, even though I changed my system settings to allow each process to open far more files.
Here are what 'ulimit -a' shows:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 20
file size (blocks, -f) unlimited
pending signals (-i) 16382
max locked memory (kbytes, -l) 64000000
max memory size (kbytes, -m) unlimited
open files (-n) 320000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
BTW, I didn't specify --maxConns when starting mongod/mongos, and I also didn't change MONGO.POOLSIZE.
A side question: if my reasoning is correct, the total simultaneous-connection requirement will be imposed on each primary, which doesn't seem right to me; it would almost mean a MongoDB cluster is not scalable at all. Someone please tell me I'm wrong?
About your cluster architecture:
Running several instances of mongod on the same server is usually not a good idea; do you have any particular reason to do this? The primary of each shard will put heavy pressure on your server, and replication also adds I/O pressure, so mixing them won't be good for performance. IMO, you should rather have 6 shards (1 primary, 2 secondaries each) and give each instance its own server. (Config and arbiter instances are not very resource-consuming, so it's OK to leave them on the same servers.)
Sometimes the limits don't apply to the process itself. As a test, go onto one of the servers and get the PID of the mongo service you want to check on by doing
ps aux | grep mongo
and then do
cat /proc/{pid}/limits
That will tell you whether the limits have taken effect. If the limit isn't in effect, then you need to specify the limit in the startup file, stop/start the mongo service, and test again.
A sure-fire way to know if this is happening is to tail -f the mongo log on a dying server and watch for those "too many files" messages.
We set our limit to 20000 per server and do the same on all mongod and mongos instances and this seems to work.
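A minimal sketch of making that 20000 limit persistent, assuming the services run as a user named mongod and are started from an init script rather than systemd (both names are assumptions, adjust to your setup):
# /etc/security/limits.conf
mongod soft nofile 20000
mongod hard nofile 20000
# or directly in the init/startup script, just before launching mongod
ulimit -n 20000
After a stop/start of the service, re-check /proc/{pid}/limits to confirm the new values took effect.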
We're running a 4-shard replica set on 4 machines. We have 2 shard primaries on 2 hosts and 2 shard replicas on the other 2 boxes, with arbiters and config servers spread out.
We're getting messages:
./checkMongo.bash: fork: retry: Resource temporarily unavailable
./checkMongo.bash: fork: retry: Resource temporarily unavailable
./checkMongo.bash: fork: retry: Resource temporarily unavailable
Write failed: Broken pipe
Checking ulimit -a:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 773713
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Okay, so we're possibly hitting a process limit because of the fork message. Here's how to check that:
$ ps axo pid,ppid,rss,vsz,nlwp,cmd | egrep mongo
27442 1 36572 59735772 275 /path/mongod --shardsvr --replSet shard-00 --dbpath /path/rs-00-p --port 30000 --logpath /path/rs-00-p.log --fork
27534 1 4100020 59587548 295 /path/mongod --shardsvr --replSet shard-02 --dbpath /path/rs-02-p --port 30200 --logpath /path/rs-02-p.log --fork
27769 1 57948 13242560 401 /path/mongod --configsvr --dbpath /path/configServer_1 --port 35000 --logpath /path/configServer_1.log --fork
So you can see the mongods have 275, 295, and 401 subprocesses/threads each. Though I'm not hitting a limit now, I probably was earlier. So, the solution: change the system's ulimit for the user we're running under from 1024 to 2048 (or even unlimited). You can't change it via
ulimit -u unlimited
unless you sudo first or something; I don't have the privileges to do that.
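For the persistent version of that nproc change (root required; the mongod account name is an assumption, use whichever user runs the services):
# /etc/security/limits.conf
mongod soft nproc 2048
mongod hard nproc 2048
Note that on RHEL-family systems a file under /etc/security/limits.d/ (e.g. 90-nproc.conf) can override this, so the entry may need to go there instead. Restart the services (or log the user back in) and confirm with ulimit -u.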

increase item max size in memcached?

I am using memcached on my CentOS server. My project is large and has objects of more than 1MB that I need to save to memcached. Well, I can't, because the max_item_size is 1MB. Is there any way to change that?
Thank you
You can change the limit quickly by editing the configuration file [/etc/memcached.conf] and adding:
# Increase limit
-I 128M
Or, if you have trouble with the OS config file, run it directly from the command line:
memcached -I 128M
If you are using Memcache >= 1.4.2, this is now configurable. Here is an example of how to set this in your init script for starting Memcache on CentOS: http://www.alphadevx.com/a/387-Changing-the-maximum-item-size-allowed-by-Memcache
You can compile memcached yourself and change the memory allocation setting in the slabs.c file to use POWER_BLOCKs (or you can recompile it to use malloc/free, but that is the greater of the two evils).
http://code.google.com/p/memcached/wiki/FAQ#Why_are_items_limited_to_1_megabyte_in_size?
I would seriously consider what you are caching and whether it can be made more modular; more than 1MB per item in active memory is large.
I spent tons of time figuring this out:
In /etc/sysconfig/memcached, edit the options:
OPTIONS="-l 127.0.0.1 -I 3m"
then run systemctl restart memcached for the change to take effect.
I would recommend the -l 127.0.0.1 option, as it restricts memcached to localhost usage only, and -I 3m increases the item size limit as described above.
On CentOS 7 I had no luck with the paths /etc/memcached.conf and /etc/default/memcached.
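To confirm the new limit is actually active, one quick check (assuming memcached is listening on 127.0.0.1:11211 and nc is available) is to ask it for its runtime settings:
printf 'stats settings\r\nquit\r\n' | nc 127.0.0.1 11211 | grep item_size_max
# with -I 3m this should report 3145728 instead of the default 1048576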

filezilla, error while writing failure

I'm transferring a very large (35GB) file over SFTP with FileZilla.
The transfer is now 59.7% done, but I keep getting this error, and that number hasn't changed for hours.
Error: File transfer failed after transferring 1,048,576 bytes in 10 seconds
Status: Starting upload of C:\Files\static.sql.gz
Status: Retrieving directory listing...
Command: ls
Status: Listing directory /var/www/vhosts/site/httpdocs
Command: reput "C:\Files\static.sql.gz" "static.sql.gz"
Status: reput: restarting at file position 20450758656
Status: local:C:\Files\static.sql.gz => remote:/var/www/vhosts/site/httpdocs/static.sql.gz
Error: error while writing: failure
Why do I keep getting this error?
Credit to cdhowie: The remote volume was out of space.
I encountered the same situation.
Go to your server and run the "df" command to see if there is a problem with disk space.
http://wiki.filezilla-project.org/Network_Configuration#Timeouts_on_large_files
I recently faced this issue, and it turned out to be a disk space issue. I removed some old logs, especially the mysqld.log file, which was several GB in size. It worked after that.
In our case it was because the file exceeded the user's quota. We use Virtualmin and the virtual server had a default quota of just 1GB. Increasing that value in Virtualmin solved the problem.
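If you suspect a quota rather than a full disk, a quick check on the server (assuming user quotas are enabled and the quota tools are installed):
quota -s
# or, as root, for the FTP/SFTP user in question (username is a placeholder)
quota -s -u username
This prints usage and limits per filesystem in human-readable units; a block limit close to the stalled file's size points to a quota problem.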
The FileZilla "error while writing: failure" issue occurs when the server's storage is full. Log in to the Linux server and run the two commands below to find out which files are consuming the most storage under /var/log:
For MB sizes:
sudo du -csh $(sudo find /var/log -type f) | grep M | sort -nr
For GB sizes:
sudo du -csh $(sudo find /var/log -type f) | grep G | sort -nr
This happened to me when I tried to replace a file that was already open or running in the background. Once it was closed, I was able to overwrite the file.
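To check from the server side whether something is still holding the target file open, one option (lsof must be installed; the path below is the one from the question, substitute your own remote file):
lsof /var/www/vhosts/site/httpdocs/static.sql.gz
Any process listed needs to close or release the file before the upload can overwrite it.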