GCE persistent disk data management - google-cloud-console

GCE beginner here... Basic question: how can I send data to a persistent disk?
I have attached a persistent disk to an instance and tried sending files through the instance using the copy-files command. The disk seems to be correctly attached (see below)
$ sudo fdisk -l
Disk /dev/sda: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000935ca
Device Boot Start End Blocks Id System
/dev/sda1 2048 20969472 10483712+ 83 Linux
Disk /dev/sdb: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
I was able to send files to the instance itself, targeting the /tmp directory on the instance.
I haven't succeeded however in sending the files to the persistent disk.
Should I send the data to the instance first, then move the data to the attached drive? Or can that be done directly? Either way some directions would help.
Thanks in advance

You have to format and mount the disk before using it:
Formatting disks
Before you can use non-root persistent disks in Compute Engine, you need to format and mount them. Compute Engine provides a tool, safe_format_and_mount, that can be used to assist in this process. The tool can be found at the following location on your virtual machine instance:
/usr/share/google/safe_format_and_mount
The tool performs the following actions:
Format the disk (only if it is unformatted)
Mount the disk
This can be helpful if you need to use a non-root persistent disk from a startup script, because the tool prevents your script from accidentally reformatting your disks and erasing your data.
safe_format_and_mount works much like the standard mount tool:
$ sudo mkdir MOUNT_POINT
$ sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" DISK_LOCATION MOUNT_POINT
Alternatively, you can format and mount disks using standard tools such as mkfs and mount.
Caution: If you are formatting disks from a startup script, you risk data loss if you do not take precautions to prevent reformatting your data on boot. Make sure to back up all important data and set up data recovery systems.
Source:
https://cloud.google.com/compute/docs/disks/persistent-disks
Then you can copy data to the folder you mounted the disk to :)
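For example, a minimal sketch using the standard tools (the device name /dev/sdb, mount point /mnt/pd0, instance name and zone below are assumptions based on the fdisk output above, so adjust them to your setup):
# on the instance: format the new disk (this erases it), then mount it
$ sudo mkfs.ext4 -F /dev/sdb
$ sudo mkdir -p /mnt/pd0
$ sudo mount /dev/sdb /mnt/pd0
$ sudo chown $USER /mnt/pd0
# from your workstation: copy a file straight to the mounted directory
$ gcloud compute copy-files ./mydata.tar.gz my-instance:/mnt/pd0/ --zone us-central1-a
Either route works: copy directly to the mount point as above, or copy to /tmp first and move the files over afterwards.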

Related

How can I get MongoDB's actual size?

When I run the command below in the mongo shell,
db.stats()
I get the output below, and it seems weird: fsTotalSize is over 80 GB, yet when I check the MongoDB installation folder it only uses around 2 GB of disk space.
From dbStats documentation:
fsTotalSize and fsUsedSize are about the filesystem that the database is stored on. They'd be used to get an idea about how much the database could grow to.
dataSize is the size of the all the documents themselves.
storageSize is the size of the data stored on the filesystem (it can be smaller than the dataSize if using compression).
So the database takes up 6.7MB on the filesystem.
dbStats
dbStats.fsTotalSize - Total size of all disk capacity on the filesystem where MongoDB stores data.
It looks like the 80 GB is the total disk capacity of the filesystem where MongoDB stores data.
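As a side note, db.stats() accepts a scale argument, which makes these numbers easier to compare at a glance (a quick sketch, run against the database in question):
$ mongo --quiet --eval 'db.stats(1024 * 1024)'
# all size fields (dataSize, storageSize, fsUsedSize, fsTotalSize, ...) are then reported in MB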

How to extend the boot disk on Ubuntu

We created an instance with 10 GB of storage and loaded a few apps; now our PostgreSQL server won't start because of low disk space.
We tried to edit the instance by changing the boot disk space from 10 GB to 30 GB.
We added a new 200 GB disk.
How can we extend the boot disk sda1 (10 GB)?
Off-topic: I think you mistagged your question; it's not related to Postgres itself.
Also, you haven't provided enough info about your environment (what's your filesystem? Are you using LVM?)
To solve your problem without manipulating partitions you can move postgres data to your new disk partition and make a symbolic link to it:
mv /path/to/data/ /new/disk/
ln -s /new/disk/data /path/to/data
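A slightly fuller sketch of that workaround, assuming a Debian/Ubuntu-style data directory and that the new disk is mounted at /mnt/disk200 (both paths are assumptions; adjust them to your layout):
$ sudo service postgresql stop
$ sudo mv /var/lib/postgresql /mnt/disk200/postgresql
$ sudo ln -s /mnt/disk200/postgresql /var/lib/postgresql
$ sudo service postgresql start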

Swap space on my Solaris server is used beyond its threshold

The swap space on my Solaris server is used beyond its threshold. Is it possible to free some space by restarting the processes that use the most swap?
$ swap -s
total: 10820256k bytes allocated + 453808k reserved = 11274064k used, 11911648k available
$ swap -l
swapfile dev swaplo blocks free
/dev/md/dsk/d210 85,210 16 20972720 20971152
You are only using about 800 KB of swap out of 10 GB (i.e. 0.008%), which is insignificant. According to the reported statistics, there is probably nothing to worry about.
You might have a look at /tmp (or other tmpfs-backed directories) to free some virtual memory.
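For reference, the 800 KB figure comes from swap -l, where the blocks and free columns are counted in 512-byte blocks, so the actual swap device usage is:
$ expr \( 20972720 - 20971152 \) \* 512
802816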

How to bypass a 2TB core dump file system limit?

I have a huge VM address space in a multi-threaded program. Typically it runs in ~5TB of virtual, and of that only touches up to 16GB of resident set. I map HugePages, and allow them to be dumped from their sections.
When the RedHat 6.3 + Kernel 3.0.29 system forces a crash dump,
most of the resident set gets dumped, but the core file stops at 2TB.
-rw------- 1 root root 2.0T Aug 21 21:15 core-primes-6-1377119514.21645
top reports ~4TB in this instance.
21726 root -91 0 4191g 7.1g 13m R 99.7 5.6 2:39.06 50 primes
gdb reports the core is truncated
BFD: Warning: /export0/crash/core-primes-6-1377119514.21645 is truncated: expected core file size >= 4488958177280, found: 2133738614784.
Is there some magic foo to allow the kernel to dump more than 2TB of this
process? The filesystem is ext3, and has plenty of space.
The VM for the unused memory is never touched.
The underlying ext3 filesystem with a 4 KB block size tops out at a 2 TB maximum file size. Switching to an XFS filesystem gives a much larger maximum file size, and the core dump completes up to ~16 TB; however, it takes 30 minutes to dump. Switching to ext4, the native maximum file size is 16 TB, and the dump completes in under 4 minutes. You just have to remember to update fstab.
mkfs.ext4 -LEXPORT0 -Tlargefile /dev/sda10
The largefile usage type hints to the filesystem that it will hold a few large files rather than many small ones, so fewer inodes are created.
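A sketch of the matching /etc/fstab entry for the new ext4 filesystem (the /export0 mount point is an assumption based on the core file path above):
LABEL=EXPORT0  /export0  ext4  defaults  0  2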
ulimit -c is the command to verify and change the allowed maximum size of core files created; it is documented in the bash man page. Use something like
ulimit -c 16000000
As root, you can run
ulimit -c unlimited
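To make the limit persistent across logins you would typically also add it to /etc/security/limits.conf (a sketch; the wildcard applies it to all users):
*    soft    core    unlimited
*    hard    core    unlimited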

Is there any option to limit mongodb memory usage?

I am using MongoDB v1.8.1. My server has 4 GB of memory, but MongoDB is using more than 3 GB. Is there a memory limitation option in MongoDB?
If you are running MongoDB 3.2 or a later version, you can limit the WiredTiger cache as mentioned above.
In /etc/mongod.conf, add the wiredTiger part:
...
# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
...
This will limit the cache size to 1 GB; more info in the docs.
This solved the issue for me, running Ubuntu 16.04 and MongoDB 3.2.
PS: After changing the config, restart the mongo daemon.
$ sudo service mongod restart
# check the status
$ sudo service mongod status
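To confirm the new limit took effect, you can read the configured cache size back from serverStatus (a quick check; with cacheSizeGB: 1 it should report 1073741824 bytes):
$ mongo --quiet --eval 'db.serverStatus().wiredTiger.cache["maximum bytes configured"]'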
Starting in 3.2, MongoDB uses the WiredTiger as the default storage engine. Previous versions used the MMAPv1 as the default storage engine.
With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache.
In MongoDB 3.2, the WiredTiger internal cache, by default, will use the larger of either:
60% of RAM minus 1 GB, or
1 GB.
For systems with up to 10 GB of RAM, the new default setting is less than or equal to the 3.0 default setting (For MongoDB 3.0, the WiredTiger internal cache uses either 1 GB or half of the installed physical RAM, whichever is larger).
For systems with more than 10 GB of RAM, the new default setting is greater than the 3.0 setting.
To limit the WiredTiger cache, add the following line to the config file:
wiredTigerCacheSizeGB = 1
This question has been asked a couple of times ...
See this related question/answer (quoted below) ... how to release the caching which is used by Mongodb?
MongoDB will (at least seem to) use up a lot of available memory, but it actually leaves it up to the OS's VMM to tell it to release the memory (see Caching in the MongoDB docs).
You should be able to release any and all memory by restarting MongoDB.
However, to some extent MongoDB isn't really "using" the memory.
For example from the MongoDB docs Checking Server Memory Usage ...
Depending on the platform you may see the mapped files as memory in the process, but this is not strictly correct. Unix top may show way more memory for mongod than is really appropriate. The Operating System (the virtual memory manager specifically, depending on OS) manages the memory where the "Memory Mapped Files" reside. This number is usually shown in a program like "free -lmt". It is called "cached" memory.
MongoDB uses the LRU (Least Recently Used) cache algorithm to determine which "pages" to release; you will find some more information in these two questions ...
MongoDB limit memory
MongoDB index/RAM relationship
Mongod start with memory limit (You can't.)
You can limit mongod process usage using cgroups on Linux.
Using cgroups, our task can be accomplished in a few easy steps.
Create control group:
cgcreate -g memory:DBLimitedGroup
(make sure the cgroups binaries are installed on your system; consult your favorite Linux distribution's manual for how to do that)
Specify how much memory will be available for this group:
echo 16G > /sys/fs/cgroup/memory/DBLimitedGroup/memory.limit_in_bytes
This command limits memory to 16 GB (conveniently, this limits memory for both malloc allocations and the OS cache).
Now, it is a good idea to drop pages that are already in the cache:
sync; echo 3 > /proc/sys/vm/drop_caches
And finally, assign the server to the newly created control group:
cgclassify -g memory:DBLimitedGroup `pidof mongod`
This will assign a running mongod process to a group limited by only 16GB memory.
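To double-check that the limit and the assignment took effect, you can read the group's control files directly (paths as created above):
$ cat /sys/fs/cgroup/memory/DBLimitedGroup/memory.limit_in_bytes
$ cat /sys/fs/cgroup/memory/DBLimitedGroup/tasks
The tasks file should list the mongod PID(s).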
source: Using Cgroups to Limit MySQL and MongoDB memory usage
I don't think you can configure how much memory MongoDB uses, but that's OK (read below).
To quote from the official source:
Virtual memory size and resident size will appear to be very large for the mongod process. This is benign: virtual memory space will be just larger than the size of the datafiles open and mapped; resident size will vary depending on the amount of memory not used by other processes on the machine.
In other words, Mongo will let other programs use memory if they ask for it.
mongod --wiredTigerCacheSizeGB 2
Adding to the top-voted answer: in case you are on a low-memory machine and want to configure the WiredTiger cache in MB instead of whole GB, use this:
storage:
  wiredTiger:
    engineConfig:
      configString: cache_size=345M
Source - https://jira.mongodb.org/browse/SERVER-22274
For Windows, it seems possible to control the amount of memory MongoDB uses; see this tutorial at Captain Codeman:
Limit MongoDB memory use on Windows without Virtualization
Not really, there are a couple of tricks to limit memory, like on Windows you can use the Windows System Resource Manager (WSRM), but generally Mongo works best on a dedicated server when it's free to use memory without much contention with other systems.
Although the operating system will try to allocate memory to other processes as they need it, in practice this can lead to performance issues if other systems have high memory requirements too.
If you really need to limit memory, and only have a single server, then your best bet is virtualization.
This can be done with cgroups, by combining knowledge from these two articles:
https://www.percona.com/blog/2015/07/01/using-cgroups-to-limit-mysql-and-mongodb-memory-usage/
http://frank2.net/cgroups-ubuntu-14-04/
Here you can find a small shell script that will create config and init files for Ubuntu 14.04:
http://brainsuckerna.blogspot.com.by/2016/05/limiting-mongodb-memory-usage-with.html
Just like that:
sudo bash -c 'curl -o- http://brains.by/misc/mongodb_memory_limit_ubuntu1404.sh | bash'
There is no reason to limit MongoDB cache as by default the mongod process will take 1/2 of the memory on the machine and no more. The default storage engine is WiredTiger. "With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache."
You are probably looking at top and assuming that Mongo is using all the memory on your machine. That is virtual memory. Use free -m:
              total        used        free      shared  buff/cache   available
Mem:           7982        1487        5601           8         893        6204
Swap:             0           0           0
Only when the available metric goes to zero is your computer swapping memory out to disk. In that case your database is too large for your machine. Add another mongodb instance to your cluster.
Use these two commands in the mongo shell to get information about how much virtual and physical memory MongoDB is using:
var mem = db.serverStatus().tcmalloc;
mem.tcmalloc.formattedString
------------------------------------------------
MALLOC: 360509952 ( 343.8 MiB) Bytes in use by application
MALLOC: + 477704192 ( 455.6 MiB) Bytes in page heap freelist
MALLOC: + 33152680 ( 31.6 MiB) Bytes in central cache freelist
MALLOC: + 2684032 ( 2.6 MiB) Bytes in transfer cache freelist
MALLOC: + 3508952 ( 3.3 MiB) Bytes in thread cache freelists
MALLOC: + 6349056 ( 6.1 MiB) Bytes in malloc metadata
MALLOC: ------------
MALLOC: = 883908864 ( 843.0 MiB) Actual memory used (physical + swap)
MALLOC: + 33611776 ( 32.1 MiB) Bytes released to OS (aka unmapped)
MALLOC: ------------
MALLOC: = 917520640 ( 875.0 MiB) Virtual address space used
MALLOC:
MALLOC: 26695 Spans in use
MALLOC: 22 Thread heaps in use
MALLOC: 4096 Tcmalloc page size
One thing you can limit is the amount of memory MongoDB uses while building indexes. This is set using the maxIndexBuildMemoryUsageMegabytes setting. An example of how it's set is below:
mongo --eval "db.adminCommand( { setParameter: 1, maxIndexBuildMemoryUsageMegabytes: 70000 } )"
This worked for me on an AWS instance, at least to clear the cached memory Mongo was using. After this you can see how your settings have taken effect.
ubuntu@hongse:~$ free -m
             total       used       free     shared    buffers     cached
Mem:          3952       3667        284          0        617        514
-/+ buffers/cache:       2535       1416
Swap:            0          0          0
ubuntu@hongse:~$ sudo su
root@hongse:/home/ubuntu# sync; echo 3 > /proc/sys/vm/drop_caches
root@hongse:/home/ubuntu# free -m
             total       used       free     shared    buffers     cached
Mem:          3952       2269       1682          0          1         42
-/+ buffers/cache:       2225       1726
Swap:            0          0          0
If you're using Docker: reading the Docker image documentation (in the Setting WiredTiger cache size limits section), I found out that the default is to consume all available memory regardless of memory limits you may have imposed on the container, so you have to limit RAM usage directly in the DB configuration.
Create your mongod.conf file:
# Limits cache storage
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1   # Set the size you want
Now you can assign that config file to the container: docker run --name mongo-container -v /path/to/mongod.conf:/etc/mongo/mongod.conf -d mongo --config /etc/mongo/mongod.conf
Alternatively you could use a docker-compose.yml file:
version: '3'
services:
  mongo:
    image: mongo:4.2
    # Sets the config file
    command: --config /etc/mongo/mongod.conf
    volumes:
      - ./config/mongo/mongod.conf:/etc/mongo/mongod.conf
    # Other settings...
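Then bring the service up as usual; mongod inside the container will start with the mounted config and the 1 GB cache limit:
$ docker-compose up -d mongo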