How to bypass a 2TB core dump file system limit? - x86-64

I have a huge VM address space in a multi-threaded program. Typically it runs in ~5TB of virtual, and of that only touches up to 16GB of resident set. I map HugePages, and allow them to be dumped from their sections.
When the RedHat 6.3 + Kernel 3.0.29 system forces a crash dump,
most of the resident set gets dumped, but the core file stops at 2TB.
-rw------- 1 root root 2.0T Aug 21 21:15 core-primes-6-1377119514.21645
top reports ~4TB in this instance.
21726 root -91 0 4191g 7.1g 13m R 99.7 5.6 2:39.06 50 primes
gdb reports the core is truncated
BFD: Warning: /export0/crash/core-primes-6-1377119514.21645 is truncated: expected core file size >= 4488958177280, found: 2133738614784.
Is there some magic foo to allow the kernel to dump more than 2TB of this
process? The filesystem is ext3, and has plenty of space.
The VM for the unused memory is never touched.

The underlying ext3 filesystem with a 4KB block size tops out at a 2TB maximum file size. Switching to an XFS filesystem gives a much larger maximum file size, and the core dump completes up to ~16TB, but it takes 30 minutes to write. Switching to ext4, whose native maximum file size is 16TB, the dump completes in under 4 minutes. You just have to remember to update fstab. mkfs.ext4 -LEXPORT0 -Tlargefile /dev/sda10. The largefile usage type hints to mkfs that the volume will hold a small number of large files, so fewer inodes are allocated.
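For the fstab update, an entry along these lines should work (the /export0 mount point is inferred from the core path in the question; adjust the label, device and options to your own setup):
LABEL=EXPORT0   /export0   ext4   defaults   0   2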

ulimit -c is the command to verify and change the allowed maximum size of core files created; it is documented in the bash man page. Use something like
ulimit -c 16000000
As root, you can run
ulimit -c unlimited
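If you want the higher limit to survive new shells and reboots, one common place is /etc/security/limits.conf (a sketch; the wildcard applies the limit to every user, so narrow it if needed):
*    soft    core    unlimited
*    hard    core    unlimited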

Related

How to extend boot disk ubuntu

We created an instance with 10GB of storage and loaded a few apps; now our PostgreSQL server won't start because of low disk space.
We tried to edit the instance by changing the boot disk size from 10 GB to 30 GB.
We added a new 200GB disk.
How can we extend the boot disk sda1 (10GB)?
Off-topic: I think you mistagged your question; it's not related to Postgres itself.
Also, you haven't provided enough info about your environment (what's your filesystem? are you using LVM?).
To solve your problem without manipulating partitions you can move postgres data to your new disk partition and make a symbolic link to it:
mv /path/to/data/ /new/disk/
ln -s /new/disk/data /path/to/data
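A slightly fuller sketch, assuming a stock Ubuntu PostgreSQL layout and that the new disk is already formatted and mounted at /new/disk (the paths are illustrative, not taken from the question):
sudo service postgresql stop                          # stop the server before moving its data
sudo mv /var/lib/postgresql /new/disk/                # move the data directory onto the big disk
sudo ln -s /new/disk/postgresql /var/lib/postgresql   # leave a symlink at the old path
sudo service postgresql start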

aws ec2 - MongoDB ERROR: Insufficient free space for journal files

I've seen this question asked a few times but nothing is working for me at the moment. The error message is:
Tue May 31 16:06:09.566 [initandlisten] ERROR: Insufficient free space for journal files
Tue May 31 16:06:09.566 [initandlisten] Please make at least 3379MB available in data/journal or use --smallfiles
My database is called data, so when I run mongod I need to pass the path where my database sits: mongod --dbpath data/. I've tried appending --smallfiles and end up with this:
Tue May 31 16:07:10.268 [FileAllocator] error: failed to allocate new file: data/local.ns size: 16777216 boost::filesystem::create_directory: No space left on device: "data/_tmp". will try again in 10 seconds
I came across this answer Why getting error mongod dead but subsys locked and Insufficient free space for journal files on Linux? which told me to add smallfiles = true to the mongodb.conf file. I did that, and I'm still getting the issue.
Because I'm not sure how to fix this, I've tried increasing my instance from t2.micro to m4.large, and I'm still getting the error about insufficient space.
Does anyone have any idea what I can do to fix this?
t2 or m4 instances use EBS storage (changing the instance type gets you more CPU/RAM, but not more disk space, since the disk is a separately attached EBS volume).
You can expand your EBS volume to get more disk space
The high-level step-by-step procedure is:
1. Run df -h on your instance. It will display the drive's details and the available space before resizing.
2. Stop your instance and detach the existing volume from it.
3. Create a snapshot of the volume you just detached.
4. Go to the Snapshots area of the console, select your snapshot, then choose Create Volume. In the pop-up window you can adjust the size (make sure to give it enough space).
5. When the volume is ready, attach the new volume to your instance (as the root device) and start the instance.
6. SSH to your instance and rerun df -h to check the new available space on the drive.
7. If df -h does not show the expected space, you need to claim the free space: run sudo resize2fs /dev/xvda??? (replace ??? with the partition number given by df -h).
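The same steps can also be driven from the AWS CLI; a rough sketch, where every ID, the availability zone and the new size are placeholders you must substitute (and you should wait for the snapshot to reach the "completed" state before creating a volume from it):
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 detach-volume --volume-id vol-OLD
aws ec2 create-snapshot --volume-id vol-OLD --description "pre-resize backup"
aws ec2 create-volume --snapshot-id snap-FROM-ABOVE --size 100 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-NEW --instance-id i-0123456789abcdef0 --device /dev/sda1
aws ec2 start-instances --instance-ids i-0123456789abcdef0
Then, on the instance, grow the filesystem with resize2fs as described in the last step above.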

Swap memory of my Solaris server is used more than its threshold

Swap on my Solaris server is used beyond its threshold. Is it possible to free some space by restarting the processes that are using the most swap?
$ swap -s
total: 10820256k bytes allocated + 453808k reserved = 11274064k used, 11911648k available
$ swap -l
swapfile dev swaplo blocks free
/dev/md/dsk/d210 85,210 16 20972720 20971152
You are only using about 800 KB of swap (20972720 - 20971152 = 1568 blocks of 512 bytes) out of roughly 10 GB, i.e. about 0.008%, which is insignificant. There is probably nothing to worry about according to the reported statistics.
You might have a look at /tmp (or other tmpfs-backed directories) to free some virtual memory.
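To check whether tmpfs is what is holding the swap reservation, something like the following works on Solaris (a sketch; point it at whatever tmpfs mounts you actually have):
df -k /tmp                       # how much of the tmpfs mount is in use
du -sk /tmp/* | sort -n | tail   # the largest entries under /tmp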

GCE persistent disk data management

GCE beginner here... Basic question: how can I send data to a persistent disk?
I have attached a persistent disk to an instance and tried sending files through the instance using the copy-file instruction. The disk seems correctly mounted (see below)
$ sudo fdisk -l
Disk /dev/sda: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000935ca
Device Boot Start End Blocks Id System
/dev/sda1 2048 20969472 10483712+ 83 Linux
Disk /dev/sdb: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
I was able to send files to the instance itself - targeting the /tmp directory on the instance.
I haven't succeeded however in sending the files to the persistent disk.
Should I send the data to the instance first, then move the data to the attached drive? Or can that be done directly? Either way some directions would help.
Thanks in advance
You have to format and mount the disk before use:
Formatting disks
Before you can use non-root persistent disks in Compute Engine, you need to format and mount them. Compute Engine provides a tool, safe_format_and_mount, that can be used to assist in this process. The tool can be found at the following location on your virtual machine instance:
/usr/share/google/safe_format_and_mount
The tool performs the following actions:
Format the disk (only if it is unformatted)
Mount the disk
This can be helpful if you need to use a non-root persistent disk from a startup script, because the tool prevents your script from accidentally reformatting your disks and erasing your data.
safe_format_and_mount works much like the standard mount tool:
$ sudo mkdir MOUNT_POINT
$ sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" DISK_LOCATION MOUNT_POINT
Alternatively, you can format and mount disks using standard tools such as mkfs and mount.
Caution: If you are formatting disks from a startup script, you risk data loss if you do not take precautions to prevent reformatting your data on boot. Make sure to back up all important data and set up data recovery systems.
Source:
https://cloud.google.com/compute/docs/disks/persistent-disks
Then you can copy data to the folder you mounted the disk to :)
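If you prefer the standard tools mentioned above, a minimal sketch, assuming the 200GB disk is /dev/sdb as in your fdisk output and that it holds no data yet (mkfs erases it):
sudo mkfs.ext4 -F /dev/sdb      # format the whole disk with ext4
sudo mkdir -p /mnt/pd0          # /mnt/pd0 is an arbitrary mount point
sudo mount /dev/sdb /mnt/pd0
Then copy your files into /mnt/pd0.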

Is there any option to limit mongodb memory usage?

I am using MongoDB v1.8.1. My server has 4GB of memory, but MongoDB is using more than 3GB. Is there a memory limit option in MongoDB?
If you are running MongoDB 3.2 or later version, you can limit the wiredTiger cache as mentioned above.
In /etc/mongod.conf add the wiredTiger part
...
# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
...
This will limit the cache size to 1GB; more info in the docs.
This solved the issue for me, running Ubuntu 16.04 and MongoDB 3.2.
PS: After changing the config, restart the mongo daemon.
$ sudo service mongod restart
# check the status
$ sudo service mongod status
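To confirm the limit took effect after the restart, you can read the configured cache size back from serverStatus (the field name below is as reported by WiredTiger builds):
mongo --eval 'db.serverStatus().wiredTiger.cache["maximum bytes configured"]'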
Starting in 3.2, MongoDB uses the WiredTiger as the default storage engine. Previous versions used the MMAPv1 as the default storage engine.
With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache.
In MongoDB 3.2, the WiredTiger internal cache, by default, will use the larger of either:
60% of RAM minus 1 GB, or
1 GB.
For systems with up to 10 GB of RAM, the new default setting is less than or equal to the 3.0 default setting (For MongoDB 3.0, the WiredTiger internal cache uses either 1 GB or half of the installed physical RAM, whichever is larger).
For systems with more than 10 GB of RAM, the new default setting is greater than the 3.0 setting.
To limit the WiredTiger cache, add the following line to the config file:
wiredTigerCacheSizeGB = 1
This question has been asked a couple of times ...
See this related question/answer (quoted below) ... how to release the caching which is used by Mongodb?
MongoDB will (at least seem to) use up a lot of available memory, but it actually leaves it up to the OS's VMM to tell it to release the memory (see Caching in the MongoDB docs.)
You should be able to release any and all memory by restarting MongoDB.
However, to some extent MongoDB isn't really "using" the memory.
For example from the MongoDB docs Checking Server Memory Usage ...
Depending on the platform you may see the mapped files as memory in the process, but this is not strictly correct. Unix top may show way more memory for mongod than is really appropriate. The Operating System (the virtual memory manager specifically, depending on OS) manages the memory where the "Memory Mapped Files" reside. This number is usually shown in a program like "free -lmt". It is called "cached" memory.
MongoDB uses the LRU (Least Recently Used) cache algorithm to determine which "pages" to release; you will find some more information in these two questions ...
MongoDB limit memory
MongoDB index/RAM relationship
Mongod start with memory limit (You can't.)
You can limit mongod process usage using cgroups on Linux.
Using cgroups, our task can be accomplished in a few easy steps.
Create control group:
cgcreate -g memory:DBLimitedGroup
(make sure the cgroups binaries are installed on your system; consult your favorite Linux distribution's manual for how to do that)
Specify how much memory will be available for this group:
echo 16G > /sys/fs/cgroup/memory/DBLimitedGroup/memory.limit_in_bytes
This command limits memory to 16G (conveniently, this limits memory for both malloc allocations and the OS cache).
Now it is a good idea to drop pages that are already in the cache:
sync; echo 3 > /proc/sys/vm/drop_caches
And finally assign a server to created control group:
cgclassify -g memory:DBLimitedGroup `pidof mongod`
This will assign a running mongod process to a group limited to 16GB of memory.
source: Using Cgroups to Limit MySQL and MongoDB memory usage
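To verify the group is actually applied, the memory controller's files can be read directly (paths assume cgroup v1, matching the steps above):
cat /sys/fs/cgroup/memory/DBLimitedGroup/memory.limit_in_bytes   # the configured limit
cat /sys/fs/cgroup/memory/DBLimitedGroup/memory.usage_in_bytes   # current usage of the group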
I don't think you can configure how much memory MongoDB uses, but that's OK (read below).
To quote from the official source:
Virtual memory size and resident size will appear to be very large for the mongod process. This is benign: virtual memory space will be just larger than the size of the datafiles open and mapped; resident size will vary depending on the amount of memory not used by other processes on the machine.
In other words, Mongo will let other programs use memory if they ask for it.
mongod --wiredTigerCacheSizeGB 2
Adding to the top-voted answer, in case you are on a low-memory machine and want to configure the WiredTiger cache in MBs instead of whole-number GBs, use this:
storage:
  wiredTiger:
    engineConfig:
      configString: cache_size=345M
Source - https://jira.mongodb.org/browse/SERVER-22274
For Windows it seems possible to control the amount of memory MongoDB uses, see this tutorial at Captain Codeman:
Limit MongoDB memory use on Windows without Virtualization
Not really. There are a couple of tricks to limit memory; on Windows, for example, you can use the Windows System Resource Manager (WSRM). But generally Mongo works best on a dedicated server where it is free to use memory without much contention from other systems.
Although the operating system will try to allocate memory to other processes as they need it, in practice this can lead to performance issues if other systems have high memory requirements too.
If you really need to limit memory, and only have a single server, then your best bet is virtualization.
This can be done with cgroups, by combining knowledge from these two articles:
https://www.percona.com/blog/2015/07/01/using-cgroups-to-limit-mysql-and-mongodb-memory-usage/
http://frank2.net/cgroups-ubuntu-14-04/
Here you can find a small shell script that will create config and init files for Ubuntu 14.04:
http://brainsuckerna.blogspot.com.by/2016/05/limiting-mongodb-memory-usage-with.html
Just like that:
sudo bash -c 'curl -o- http://brains.by/misc/mongodb_memory_limit_ubuntu1404.sh | bash'
There is no reason to limit MongoDB cache as by default the mongod process will take 1/2 of the memory on the machine and no more. The default storage engine is WiredTiger. "With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache."
You are probably looking at top and assuming that Mongo is using all the memory on your machine. That is virtual memory. Use free -m:
              total        used        free      shared  buff/cache   available
Mem:           7982        1487        5601           8         893        6204
Swap:             0           0           0
Only when the available metric goes to zero is your computer swapping memory out to disk. In that case your database is too large for your machine. Add another mongodb instance to your cluster.
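To see what mongod itself holds resident, as opposed to what the OS is merely caching, a quick check (assuming a single mongod process on Linux):
ps -o rss,vsz,cmd -p $(pidof mongod)   # resident and virtual sizes in KB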
Use these two commands in the mongo shell to get information about how much virtual and physical memory MongoDB is using:
var mem = db.serverStatus().tcmalloc;
mem.tcmalloc.formattedString
------------------------------------------------
MALLOC: 360509952 ( 343.8 MiB) Bytes in use by application
MALLOC: + 477704192 ( 455.6 MiB) Bytes in page heap freelist
MALLOC: + 33152680 ( 31.6 MiB) Bytes in central cache freelist
MALLOC: + 2684032 ( 2.6 MiB) Bytes in transfer cache freelist
MALLOC: + 3508952 ( 3.3 MiB) Bytes in thread cache freelists
MALLOC: + 6349056 ( 6.1 MiB) Bytes in malloc metadata
MALLOC: ------------
MALLOC: = 883908864 ( 843.0 MiB) Actual memory used (physical + swap)
MALLOC: + 33611776 ( 32.1 MiB) Bytes released to OS (aka unmapped)
MALLOC: ------------
MALLOC: = 917520640 ( 875.0 MiB) Virtual address space used
MALLOC:
MALLOC: 26695 Spans in use
MALLOC: 22 Thread heaps in use
MALLOC: 4096 Tcmalloc page size
One thing you can limit is the amount of memory MongoDB uses while building indexes. This is set using the maxIndexBuildMemoryUsageMegabytes setting. An example of how it's set is below:
mongo --eval "db.adminCommand( { setParameter: 1, maxIndexBuildMemoryUsageMegabytes: 70000 } )"
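If you would rather persist the setting than issue it at runtime, the same parameter can also go in the YAML config file under setParameter (a sketch, reusing the value from the example above):
setParameter:
  maxIndexBuildMemoryUsageMegabytes: 70000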
This worked for me on an AWS instance, to at least clear the cached memory Mongo was using. After this you can see the effect of your settings.
ubuntu@hongse:~$ free -m
             total       used       free     shared    buffers     cached
Mem:          3952       3667        284          0        617        514
-/+ buffers/cache:       2535       1416
Swap:            0          0          0
ubuntu@hongse:~$ sudo su
root@hongse:/home/ubuntu# sync; echo 3 > /proc/sys/vm/drop_caches
root@hongse:/home/ubuntu# free -m
             total       used       free     shared    buffers     cached
Mem:          3952       2269       1682          0          1         42
-/+ buffers/cache:       2225       1726
Swap:            0          0          0
If you're using Docker: reading the Docker image documentation (in the Setting WiredTiger cache size limits section), I found out that the default is to consume all available memory regardless of any memory limits you may have imposed on the container, so you have to limit RAM usage directly in the DB configuration.
Create your mongod.conf file:
# Limits cache storage
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1 # Set the size you want
Now you can assign that config file to the container: docker run --name mongo-container -v /path/to/mongod.conf:/etc/mongo/mongod.conf -d mongo --config /etc/mongo/mongod.conf
Alternatively you could use a docker-compose.yml file:
version: '3'
services:
  mongo:
    image: mongo:4.2
    # Sets the config file
    command: --config /etc/mongo/mongod.conf
    volumes:
      - ./config/mongo/mongod.conf:/etc/mongo/mongod.conf
    # Other settings...