** WARNING: soft rlimits too low. Number of files is 256, should be at least 1000 - mongodb

I'm running Mac OS X El Capitan, and every time I run the mongo shell this warning pops up. I tried:
sudo ulimit -n 1024
ulimit -n 1024
It still doesn't work. Any ideas?

You should close your terminal and try again, or run the commands directly in the shell that runs MongoDB.
Close the running MongoDB instance.
Run the following bash commands:
sudo launchctl limit maxfiles 65536 65536
sudo launchctl limit maxproc 2048 2048
ulimit -n 65536
ulimit -u 2048
Close the terminal or bash session and restart it.
Run ulimit -n in the terminal to verify that the change took effect.
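Note that launchctl limit changes made this way do not survive a reboot. A common approach to persist them (a sketch; the file name and the limit values here are choices, not requirements) is a LaunchDaemon plist at /Library/LaunchDaemons/limit.maxfiles.plist that re-applies the limit at boot:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>limit.maxfiles</string>
    <key>ProgramArguments</key>
    <array>
      <string>launchctl</string>
      <string>limit</string>
      <string>maxfiles</string>
      <string>65536</string>
      <string>65536</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
  </dict>
</plist>
```

After creating the file (owned by root), load it with sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist or reboot.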

Related

loop device setup (losetup, mount, etc) fails in a container immediately after host reboot

I'm trying to populate a disk image in a container environment (podman) on CentOS 8. I had originally run into issues accessing the loop device from the container until finding on SO and other sources that I needed to run podman as root and with the --privileged option.
While this did solve my problem in general, I noticed that after rebooting my host, my first attempt to set up a loop device in the container would fail (failed to set up loop device: No such file or directory), but after exiting and relaunching the container it would succeed (/dev/loop0). If for some reason I needed to set up a second loop device (/dev/loop1) in the container (after having gotten a first one working), it too would fail until I exited and relaunched the container.
Experimenting a bit further, I found I could avoid the errors entirely if, before starting the container, I ran losetup --find --show <file created with dd> enough times to attach the maximum number of loop devices I would need, then detached all of them with losetup -D.
I suspect I'm missing something obvious about what losetup does on the host that it is apparently not able to do entirely within a container, or maybe this is more specifically a CentOS+podman+losetup issue. Any insight as to what is going on, and why I have to pre-attach/detach the loop devices after a reboot to avoid problems inside my container?
Steps to reproduce on a CentOS 8 system (after having attached/detached once following a reboot):
$ dd if=/dev/zero of=file bs=1024k count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.00826706 s, 1.3 GB/s
$ cp file 1.img
$ cp file 2.img
$ cp file 3.img
$ cp file 4.img
$ sudo podman run -it --privileged --rm -v .:/images centos:8 bash
[root@2da5317bde3e /]# cd images
[root@2da5317bde3e images]# ls
1.img 2.img 3.img 4.img file
[root@2da5317bde3e images]# losetup --find --show 1.img
/dev/loop0
[root@2da5317bde3e images]# losetup --find --show 2.img
losetup: 2.img: failed to set up loop device: No such file or directory
[root@2da5317bde3e images]# losetup -D
[root@2da5317bde3e images]# exit
exit
$ sudo podman run -it --privileged --rm -v .:/images centos:8 bash
[root@f9e41a21aea4 /]# cd images
[root@f9e41a21aea4 images]# losetup --find --show 1.img
/dev/loop0
[root@f9e41a21aea4 images]# losetup --find --show 2.img
/dev/loop1
[root@f9e41a21aea4 images]# losetup --find --show 3.img
losetup: 3.img: failed to set up loop device: No such file or directory
[root@f9e41a21aea4 images]# losetup -D
[root@f9e41a21aea4 images]# exit
exit
$ sudo podman run -it --privileged --rm -v .:/images centos:8 bash
[root@c93cb71b838a /]# cd images
[root@c93cb71b838a images]# losetup --find --show 1.img
/dev/loop0
[root@c93cb71b838a images]# losetup --find --show 2.img
/dev/loop1
[root@c93cb71b838a images]# losetup --find --show 3.img
/dev/loop2
[root@c93cb71b838a images]# losetup --find --show 4.img
losetup: 4.img: failed to set up loop device: No such file or directory
I know it's a little old, but I've stumbled across a similar problem and here is what I've discovered:
After my VM boots up it does not have any loop devices configured, and that is normally fine, since mount can create additional devices as needed. But:
Docker appears to put an overlay over /dev, so the container won't see any changes made in /dev after the container was started. Even if mount requested new loop devices and they actually were created on the host, my running container won't see them and will fail to mount because no loop device is available.
Once you restart the container it picks up the new state of /dev, sees the loop devices, and mounts successfully, until it runs out of them and has to request more again.
So what I tried (and it seems to work): I passed /dev to docker as a volume mount, like this:
docker run -v /dev:/dev -it --rm <image> <command>
and it did work. (The same -v /dev:/dev mount should also work with podman run.)
If you still have this setup, I was wondering if you could try it too, to see if it helps.
The only other method I can think of, beyond what you've already found, is to create the /dev/loop devices yourself on boot. Something like this should work:
# Load the loop driver; this may be unnecessary depending on your kernel build, but is harmless.
modprobe loop
# Look up the loop driver's block-device major number in /proc/devices.
major=$(awk '$2 == "loop" {print $1}' /proc/devices)
for index in 0 1 2 3 4 5
do
    mknod /dev/loop$index b "$major" "$index"
done
Put this in /etc/rc.local, your system's equivalent, or otherwise arrange for it to run on boot.

centos cannot coredump with ulimit -c is unlimited

I just installed CentOS 7 on my Mac using Parallels Desktop.
Here is the result of ulimit -c:
[root@centos-linux test1]# ulimit -c
unlimited
Here is the relevant content of /etc/security/limits.conf:
soft core unlimited
But there is no core dump file created.
What else can I do to enable core dumps?
This happens because of a wrong core dump file path.
I thought the core dump file would be created in the current directory or in /tmp, but it's not.
cat /proc/sys/kernel/core_pattern tells you where the core dump files go.
On my system:
[root@centos-linux Linux]# cat /proc/sys/kernel/core_pattern
/mydata/corefile/core-%e-%s-%u-%g-%p-%t
However, there is no /mydata/corefile directory on my system.
So I can either create a new directory /mydata/corefile, or use
sysctl -w kernel.core_pattern=/tmp/core-%e-%s-%u-%g-%p-%t
to get the core dump file in /tmp.
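Note that sysctl -w only changes the running kernel; the setting is lost on reboot. A sketch of making it persistent (the file name and the /tmp pattern here are examples, not requirements):

```
# /etc/sysctl.d/50-coredump.conf
kernel.core_pattern = /tmp/core-%e-%s-%u-%g-%p-%t
```

Then apply it with sysctl --system (or reboot).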

Image dump fails during operation

I use the openocd script below to dump the flash memory of an STM32 microcontroller.
mkdir -p dump
openocd -f board/stm3241g_eval_stlink.cfg \
    -c "init" \
    -c "reset halt" \
    -c "dump_image dump/image.bin 0x08000000 0x100000" \
    -c "shutdown"

FILENAME=dump/image.bin
FILESIZE=$(stat -c%s "$FILENAME")
echo "Size of $FILENAME = $FILESIZE bytes."
The script is supposed to read the whole memory, which is 1 MB in my case, but it succeeds only rarely. Usually it stops reading the memory before the end.
Why can't I obtain 1 MB each time I execute this script? What is causing openocd to stop dumping the rest of the memory?
You can use dfu-util to reflash your STM32 micros.
On Ubuntu/Debian distros you can install dfu-util with apt:
$ sudo apt-get install dfu-util
$ sudo apt-get install fwupd
Boot your board in DFU mode (check datasheet). Once in DFU mode, you should see something similar to this:
$ lsusb | grep DFU
Bus 003 Device 076: ID 0483:df11 STMicroelectronics STM Device in DFU Mode
Once booted in DFU mode, reflash your binary:
$ sudo dfu-util -d 0483:df11 -a 0 -s 0x08000000:leave -D build/$(PROJECT).bin
With the -d option you choose vendorid:productid as listed by lsusb in DFU mode.
With the -a 0 option you select alternate setting 0; check the options available as in the following example:
$ sudo dfu-util -l
Found DFU: [0483:df11] ver=2200, devnum=101, cfg=1, intf=0, alt=1, name="#Option Bytes /0x1FFFF800/01*016 e", serial="FFFFFFFEFFFF"
Found DFU: [0483:df11] ver=2200, devnum=101, cfg=1, intf=0, alt=0, name="#Internal Flash /0x08000000/064*0002Kg", serial="FFFFFFFEFFFF"
As you can see, alt=0 is for internal flash memory.
With the -s option you specify the flash memory address where you save your binary. Check your memory map in datasheet.
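Since the original goal was dumping flash rather than flashing it, it's worth noting that dfu-util can also read memory out with -U/--upload. A hedged sketch (the address:length form of the DfuSe -s option, assuming the same STM32 in DFU mode as in the listing above; the output path is just an example):

```
$ sudo dfu-util -d 0483:df11 -a 0 -s 0x08000000:0x100000 -U dump/image.bin
```

This reads 1 MB starting at 0x08000000 from internal flash into dump/image.bin.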
Hope this helps! :-)

Using Supervisord to manage mongos process

Background
I am trying to automate restarting, in case of a crash or reboot, the mongos process used in a MongoDB sharded setup.
Case 1: using a direct command, as the mongod user
supervisord config
[program:mongos_router]
command=/usr/bin/mongos -f /etc/mongos.conf --pidfilepath=/var/run/mongodb/mongos.pid
user=mongod
autostart=true
autorestart=true
startretries=10
Result
supervisord log
INFO spawned: 'mongos_router' with pid 19535
INFO exited: mongos_router (exit status 0; not expected)
INFO gave up: mongos_router entered FATAL state, too many start retries too quickly
mongodb log
2018-05-01T21:08:23.745+0000 I SHARDING [Balancer] balancer id: ip-address:27017 started
2018-05-01T21:08:23.745+0000 E NETWORK [mongosMain] listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:27017
2018-05-01T21:08:23.745+0000 E NETWORK [mongosMain] addr already in use
2018-05-01T21:08:23.745+0000 I - [mongosMain] Invariant failure inShutdown() src/mongo/db/auth/user_cache_invalidator_job.cpp 114
2018-05-01T21:08:23.745+0000 I - [mongosMain]
***aborting after invariant() failure
2018-05-01T21:08:23.748+0000 F - [mongosMain] Got signal: 6 (Aborted).
The process is seen running, but if it is killed it does not restart automatically.
Case 2: using an init script
Here the slight change in the scenario is that some ulimit commands and the creation of pid files must be done as root, and then the actual process should be started as the mongod user.
mongos script
start()
{
    # Make sure the default pidfile directory exists
    if [ ! -d $PID_PATH ]; then
        install -d -m 0755 -o $MONGO_USER -g $MONGO_GROUP $PID_PATH
    fi

    # Make sure the pidfile does not exist
    if [ -f $PID_FILE ]; then
        echo "Error starting mongos. $PID_FILE exists."
        RETVAL=1
        return
    fi

    ulimit -f unlimited
    ulimit -t unlimited
    ulimit -v unlimited
    ulimit -n 64000
    ulimit -m unlimited
    ulimit -u 64000
    ulimit -l unlimited

    echo -n $"Starting mongos: "
    #daemon --user "$MONGO_USER" --pidfile $PID_FILE $MONGO_BIN $OPTIONS --pidfilepath=$PID_FILE
    #su $MONGO_USER -c "$MONGO_BIN -f $CONFIGFILE --pidfilepath=$PID_FILE >> /home/mav/startup_log"
    su - mongod -c "/usr/bin/mongos -f /etc/mongos.conf --pidfilepath=/var/run/mongodb/mongos.pid"
    RETVAL=$?
    echo -n "Return value : "$RETVAL
    echo
    [ $RETVAL -eq 0 ] && touch $MONGO_LOCK_FILE
}
The commented-out daemon line is from the original script, but daemonizing under supervisord is not logical, so I use the su command to run the process in the foreground(?).
supervisord config
[program:mongos_router_script]
command=/etc/init.d/mongos start
user=root
autostart=true
autorestart=true
startretries=10
Result
supervisord log
INFO spawned: 'mongos_router_script' with pid 20367
INFO exited: mongos_router_script (exit status 1; not expected)
INFO gave up: mongos_router_script entered FATAL state, too many start retries too quickly
mongodb log
Nothing indicating error, normal logs
The process is seen running, but if it is killed it does not restart automatically.
Problem
How do I correctly configure the script / no-script option for running mongos under supervisord?
EDIT 1
Modified command:
sudo su -c "/usr/bin/mongos -f /etc/mongos.conf --pidfilepath=/var/run/mongodb/mongos.pid" -s /bin/bash mongod
This works if run individually on the command line, as well as from the script, but not with supervisord.
EDIT 2
Added the following option to the mongos config file to force it to run in the foreground:
processManagement:
  fork: false  # do not fork; run in the foreground
Now the command line and the script properly run it in the foreground, but supervisord fails to launch it. At the same time, three processes show up when it is run from the command line or the script:
root sudo su -c /usr/bin/mongos -f /etc/mongos.conf --pidfilepath=/var/run/mongodb/mongos.pid -s /bin/bash mongod
root su -c /usr/bin/mongos -f /etc/mongos.conf --pidfilepath=/var/run/mongodb/mongos.pid -s /bin/bash mongod
mongod /usr/bin/mongos -f /etc/mongos.conf --pidfilepath=/var/run/mongodb/mongos.pid
EDIT 3
With the following supervisord config things are working fine, but I would still like to execute the script if possible, in order to set the ulimit values:
[program:mongos_router]
command=/usr/bin/mongos -f /etc/mongos.conf --pidfilepath=/var/run/mongodb/mongos.pid
user=mongod
autostart=true
autorestart=true
startretries=10
numprocs=1
For mongos to run in the foreground, set the following option:
# how the process runs
processManagement:
  fork: false  # do not fork; run in the foreground
With that and the above supervisord.conf setting, mongos will be launched under supervisord's control.
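To also get the ulimit settings from the init script, one option (a sketch, not tested against this exact setup; the wrapper path is hypothetical, the mongos command and limits are taken from the question) is a small wrapper that raises the limits as root and then execs mongos as the mongod user, with supervisord pointed at the wrapper:

```shell
#!/bin/bash
# /usr/local/bin/mongos-wrapper.sh (hypothetical path)
# Raise limits while still root, then hand off to mongos as the mongod user.
ulimit -f unlimited
ulimit -n 64000
ulimit -u 64000
ulimit -l unlimited
exec su -s /bin/bash mongod -c \
  "/usr/bin/mongos -f /etc/mongos.conf --pidfilepath=/var/run/mongodb/mongos.pid"
```

and in supervisord.conf:

```
[program:mongos_router]
command=/usr/local/bin/mongos-wrapper.sh
user=root
autostart=true
autorestart=true
startretries=10
```

The exec is important: it replaces the wrapper shell instead of leaving it as an extra parent. Note that su itself still sits between supervisord and mongos, so signals sent by supervisord reach su rather than mongos directly; that is a known trade-off of this approach.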

How do I get a core dump on OS X Lion?

I am working on a PostgreSQL extension in C that segfaults, so I want to look at the core dump file on my OS X Lion box. However, there are no core files in /cores or anywhere else that I can find. It appears that they are enabled in the system but are limited to a size of 0:
> sysctl kern.coredump
kern.coredump: 1
> ulimit -c
0
I tried setting ulimit -c unlimited in the shell session I'm using to start and stop PostgreSQL, and it seems to stick:
> ulimit -c
unlimited
And yet no matter what I do, no core files. I am starting PostgreSQL with pg_ctl -c, where the -c tells PostgreSQL to generate core dumps. But the system has nothing. How can I get Lion to dump core files?
The /cores/ directory is not necessarily there in Lion, and if it's not there, you won't get cores. You should be able to set the ulimit (as you have), run a program like cat(1), quit with a SIGQUIT (control-backslash), and get a core dump:
lion:~ user$ ulimit -c unlimited
lion:~ user$ cat
^\
^\
Quit: 3 (core dumped)
lion:~ user$ ls -l /cores/
total 716584
-r-------- 1 user user 366891008 Jun 21 23:35 core.1263
lion:~ user$
Technical Note TN2124 (http://developer.apple.com/library/mac/#technotes/tn2124/), as suggested by Yuji in https://stackoverflow.com/a/3783403/225077, is helpful.