Unable to resize /dev/sda1 of GCP postgres - postgresql

I created a Postgres VM in GCP using these instructions https://joncloudgeek.com/blog/deploy-postgres-container-to-compute-engine/#create-a-compute-instance-running-a-postgres-container with a 10GB disk. Everything worked fine for the last couple of months, but I seem to have run out of space on /dev/sda1. So I increased the disk size to 400GB, but I can't resize /dev/sda1 using the standard command "sudo growpart /dev/sda 1"; I keep getting "command not found".
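Note that growpart is not preinstalled everywhere: on Debian/Ubuntu guests it ships in the cloud-guest-utils package, while Container-Optimized OS (which container-VM guides like the one linked typically use) has no package manager, which would explain the "command not found". A minimal sketch for a Debian-family guest, assuming an ext4 root filesystem:

sudo apt-get install cloud-guest-utils   # provides growpart
sudo growpart /dev/sda 1                 # grow partition 1 to fill the disk
sudo resize2fs /dev/sda1                 # grow the ext4 filesystem to match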

Solution for me:
Create a machine image of the VM.
Spin up a new VM based on the machine image just created.
Delete the old VM.
This created a new Postgres VM with a 400GB disk.
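In gcloud terms, that flow looks roughly like this (a sketch with hypothetical names; postgres-vm, pg-image, and the zone are placeholders for your own values):

gcloud compute machine-images create pg-image --source-instance=postgres-vm --source-instance-zone=us-central1-a
gcloud compute instances create postgres-vm-new --source-machine-image=pg-image --zone=us-central1-a
gcloud compute instances delete postgres-vm --zone=us-central1-a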

Related

How can I change the config file of MongoDB running on ECS

I changed the mongod.conf.orig of the MongoDB instance running on ECS, but when I restart, the changes are gone.
Here are the details:
I have MongoDB running on ECS, and it keeps crashing due to running out of memory.
I have found the reason: I set the ECS memory to 8G, but because MongoDB runs in a container, it detects more memory than that.
When I run db.hostInfo()
I get a memSizeMB higher than 16G.
Consequently, when I run db.serverStatus().wiredTiger.cache
I get a "maximum bytes configured" higher than 8G,
so I need to reduce wiredTigerCacheSizeGB in the config file.
I used the command line copilot svc exec -c /bin/sh -n mongo to connect to it.
Then I found a file named mongod.conf.orig.
I ran apt-get install vim to install vi and edited mongod.conf.orig.
But after I restart the mongo task, all my changes are gone, including the vim I just installed.
Has anyone run into the same problem? Any information will be appreciated.
ECS containers have ephemeral storage. In your case, you could create an EFS volume and mount it in the container, then share the configuration that way.
If you use CloudFormation, look at mount points.
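Alternatively, since the only change needed here is the cache size, mongod accepts it directly on the command line, which sidesteps the ephemeral config file entirely. A sketch (6 is an illustrative value below the 8G task limit; in ECS/Copilot this would go in the container's command override):

mongod --wiredTigerCacheSizeGB 6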

IBM Cloud: How do I see the newly attached block storage in a virtual server?

I created a virtual server but found that the space was too small, so I wanted to add additional disk space to it. After attaching it, I'm not able to see the newly attached disk using df -h in the virtual server. How can I see the newly attached disk?
There are several commands that you can use: fdisk -l and lsblk.
The first disk drive should appear as xvda. Usually, there is a second disk drive xvdb that is used for swap. Your new disk will appear as xvdc. Note that naming is OS-specific.
You will need to partition the new disk, create a file system on it, and then mount the file system on a directory. The exact steps are OS-dependent.
Note: the following steps may vary by OS. The example below is specific to an IBM VSI running Ubuntu 18.04 LTS.
First, confirm the disk is attached to the VSI (virtual server instance) and check the disks on the node with fdisk:
fdisk -l | grep xvdc
Create a directory for the new partition to be mounted on:
mkdir /data1
Make the partition using fdisk:
fdisk /dev/xvdc
The above command will prompt for some input:
n # and then hit Enter until it creates the partition.
Format the new partition with mkfs.ext4 (note it is the partition /dev/xvdc1, not the whole disk /dev/xvdc, that gets the filesystem):
mkfs.ext4 /dev/xvdc1
Mount the partition on the new directory:
mount /dev/xvdc1 /data1
Check the disks and their mount points:
df -h
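The mount above will not survive a reboot. To make it persistent, add an entry to /etc/fstab (a sketch, assuming the ext4 partition created above):

echo '/dev/xvdc1 /data1 ext4 defaults 0 2' >> /etc/fstab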

Kubernetes in vmware vsphere issues

I am following this guide to set up my cluster. It all works fine.
However, when I install fabric8 in this cluster I run out of disk on the minions. The image, kube.vmdk, is only about 6GB. It is /var/lib/docker that fills up. How do I solve this?
In the VMware GUI, the option to resize the disk is greyed out.
Should I attach a second disk to the minions and then mount this disk? Where should I mount it? /var/lib/docker?
I would appreciate any input.
Docker's images are stored in /var/lib/docker (more precisely, in the storage driver's directory, /var/lib/docker/aufs when using the aufs storage driver), so when Kubernetes reports that the disk is filling up, check that directory.
So you can (see the sketch after this list):
Remove all the images in Docker (not strictly necessary; you can instead copy everything to the new directory).
Stop the Docker daemon.
Mount your new disk on /var/lib/docker/aufs or /var/lib/docker.
Start the Docker daemon.
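A sketch of those steps, assuming the second disk shows up as /dev/sdb and ext4 is acceptable (the device name and temporary mount point are placeholders for your own values):

systemctl stop docker
mkfs.ext4 /dev/sdb                        # destroys anything already on /dev/sdb
mkdir -p /mnt/newdocker
mount /dev/sdb /mnt/newdocker
cp -a /var/lib/docker/. /mnt/newdocker/   # optional: keep existing images
umount /mnt/newdocker
mount /dev/sdb /var/lib/docker
systemctl start docker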
If you are not sure which storage driver your Docker is using, type docker info on your node; you will get output containing something like this:
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 139
Dirperm1 Supported: true
It seems that you have run out of disk space. You can remove all the files in /var/lib/docker and mount the second disk there. Finally, restart dockerd.

Limit Disk usage in Docker+MongoDB

I am using the official mongo Docker image to start a MongoDB container on a machine whose boot disk is limited (e.g. 10G). I configured Docker to run with the Google Cloud Logging driver, hoping Google would store all the logs and save my local disk space. However, I notice the disk continues to grow:
$ df -h
/dev/sda1 9.9G 4.5G 4.9G 49%
As I dug deeper, I realized the size of the Docker containers seems to be growing over time.
$ sudo du -sh /var/lib/docker/
3.6G /var/lib/docker/
However, I can't dig further, as somehow I can't access the directories inside it.
If I go inside the container and run du -sh on the root, I don't find any suspicious directories occupying space.
So my problem is: how do I find out where the disk space is being used, and how do I reclaim it?
My Docker startup command (shown without project options):
docker run -d --log-driver=gcplogs mongo mongod
EDIT: I noticed the growth has stopped at 4.5GB (up from ~3GB) for a while, so I suppose it has reached some equilibrium now.
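One thing worth knowing when investigating this: the official mongo image declares VOLUME /data/db, so the database files live under /var/lib/docker/volumes even when no -v flag is given, and they grow with your data. A sketch of commands to break the usage down (docker system df requires Docker 1.13+, which may be newer than the setup in this post):

sudo docker system df                    # totals for images, containers, volumes
sudo docker ps -s                        # per-container writable-layer size
sudo du -sh /var/lib/docker/volumes/*    # anonymous volumes, e.g. mongo's /data/db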

Docker container: MongoDb Insufficient free space for journal files

I am running MongoDB inside a Docker container (Docker 1.10.1, on OS X) and it is giving this error:
MongoDb Insufficient free space for journal files
I am not able to work out whether the issue is on the host, in the container, or in VirtualBox.
However, on my host I have:
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1 465Gi 75Gi 389Gi 17% 19777401 102066309 16% /
And on the docker container:
Filesystem Inodes IUsed IFree IUse% Mounted on
none 1218224 742474 475750 61% /
I have also mounted a volume from the host with:
docker run -it -v /Users/foobar/Projects/compose:/data/db mongoImage:0.1 /bin/bash
Thanks for the comments @andy, the issue did seem to be within the VirtualBox environment.
I was able to resolve the issue by:
backing up all Docker images
cloning the default VirtualBox ISO (as a backup)
deleting the default VirtualBox ISO and all associated files
restarting Docker; a new default VirtualBox ISO was created. This resolved the issue (which I expect to have again at some point).
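For the "insufficient free space for journal files" error itself, an era-appropriate workaround on MMAPv1-based MongoDB versions was to start mongod with --smallfiles, which caps journal file size (a sketch reusing the image and volume from the question; the flag was removed in MongoDB 4.2):

docker run -it -v /Users/foobar/Projects/compose:/data/db mongoImage:0.1 mongod --smallfiles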