Can't get accurate result for gsutil du command - google-cloud-storage

When I run the gsutil du -sh command in Google Cloud Storage to determine the size of a folder in my storage buckets, it always returns 0 B. I know this is incorrect, but I can't understand why it is happening. The command was working fine last week and all of a sudden stopped functioning properly. Has anyone had this issue?
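For reference, this is roughly how I'm invoking it (the bucket and folder names here are just placeholders):
gsutil du -sh gs://my-bucket/my-folder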

Related

FATAL: could not write lock file "postmaster.pid": No space left on device

I'm running a PostgreSQL database in a Docker container. I allocated 300 GB for Docker and everything was going great, with about 90% free space. One day I got this error, PostgreSQL Database directory appears to contain a database; Skipping initialization:: FATAL: could not write lock file "postmaster.pid": No space left on device, and I don't know where it came from or how to actually fix it. As a temporary fix, I just increased the size to 400 GB, which worked, but why? Now I have 95% space left, which is consistent with the previous output of 90% free. I tried to trace the problem using df -h and some other commands to check disk usage and didn't find anything. Has anyone faced something similar?
I tried df -h to check disk usage and allocated more Docker resources.
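These are the kinds of checks that might narrow down where the space is going (a rough sketch; the container name is a placeholder):
# How much of Docker's allocation is used by images, containers, volumes, and build cache
docker system df
# Disk usage as seen from inside the database container
docker exec -it my-postgres df -h
# Size of the Postgres data directory inside the container
docker exec -it my-postgres du -sh /var/lib/postgresql/data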

Spoof free space available in Docker or tricking Postgres and RabbitMQ

I'm using Google Cloud Run to host some solutions. When the containers start, programs can write to disk, and the data persists until the container stops. However, from a system point of view, all partitions of the container always report zero free space. I confirmed this in a few ways:
Running df from start.sh shows zero free space when the container starts
Deleting a large file and then running df from start.sh still shows zero free space
It is possible to write to disk via start.sh, PHP scripts, etc., so the system DOES have space to write to, yet df still reports zero free space
(All of the above apply once the container is deployed to Cloud Run. Running the same container manually via docker from Cloud Shell and executing df reports free space.)
The problem is that certain applications perform disk space checks when they start, and they fail to load in Google Cloud Run. For example, MariaDB uses df in its init script, so commenting out these lines makes it possible to add a static yet functional MariaDB instance to a Cloud Run container.
MariaDB made it easy. Now, I'm trying to do the same thing with PostgreSQL and RabbitMQ, but I'm having trouble figuring out how to override their disk space checks. Here are the two options I am considering:
Keep digging through the source of PostgreSQL and RabbitMQ until I find the disk space check and override it. I don't speak Erlang, so this is a pain, and I would have to do it for every application with this issue
Programs are probably using coreutils to determine disk size. I could edit the source and rebuild it as part of my Dockerfile routine so df always reports free space available (this could have unintended side effects); a rough sketch of this idea is included after the error messages below
Is anyone familiar with the source of Postgres or RabbitMQ, or does anyone have a system-wide solution I could implement that would "spoof" the free space available?
EDIT: Here are the error messages given by RabbitMQ and PostgreSQL
RabbitMQ:
{error,{cannot_log_to_file,"/var/log/rabbitmq/rabbit#localhost.log",{error,einval}}}
Postgres:
Error: /usr/lib/postgresql/10/bin/pg_ctl /usr/lib/postgresql/10/bin/pg_ctl start -D /var/lib/postgresql/10/main -l /var/log/postgresql/postgresql-10-main.log -s -o -c config_file="/etc/postgresql/10/main/postgresql.conf" exited with status 1:
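For reference, here is a rough sketch of the "spoof df" idea from option 2 (untested on Cloud Run; the path and the reported numbers are illustrative, and shadowing df could confuse anything that relies on real output):
# Dockerfile: move the real df aside and install a wrapper that always reports free space
RUN mv /bin/df /bin/df.real && \
    printf '#!/bin/sh\necho "Filesystem     1K-blocks  Used Available Use%% Mounted on"\necho "overlay         10485760     0  10485760   0%% /"\n' > /bin/df && \
    chmod +x /bin/df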

What is the /var/lib/postgresql/10/main/ equivalent in PSQL-12?

I'm working on setting up point-in-time backups for a PSQL server that I have running, and I'm following a tutorial for an earlier version. I'm trying to figure out what the specific directory is for the DB cluster in PSQL-12 so that I can clear out that directory and test what I've set up. In the video, he runs a recursive remove on the PSQL-10 directory /var/lib/postgresql/10/main, and is still able to start the PSQL-10 service again when he's finished the restoration.
When I attempted it, I ran the recursive remove on the directory /var/lib/pgsql/12/data/ because the command SHOW data_directory; told me that is where my server's cluster data is stored. Removing all the data, however, messes up the postgresql-12.service, so I can't start it back up when I've completed the recovery.
This is displayed when I restore the backup and run systemctl start postgresql-12.service:
Process: 26672 ExecStart=/usr/pgsql-12/bin/postmaster -D ${PGDATA} (code=exited, status=1/FAILURE)
Dec 31 11:07:29 localhost.localdomain systemd[1]: Failed to start PostgreSQL 12 data....
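For what it's worth, more detail than the unit status usually shows up in journalctl and in the Postgres server log (the log path below is what a PGDG-style install typically uses, so yours may differ):
journalctl -xeu postgresql-12.service
sudo tail -n 50 /var/lib/pgsql/12/data/log/postgresql-*.log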
I've tried making a backup of the working /data/ directory and doing a diff -qr to see which files differ between the working backup and the point-in-time backup, but copying those files from the working directory to the PIT directory doesn't seem to fix the issue, and I'm still unable to start the postgresql-12.service. It seems, however, that I am able to start the service back up successfully if I just do a mass copy of the working directory to /var/lib/postgresql/10/main.
Can someone please point me in the right direction? I've done plenty of research trying to find the working cluster directory so I can just erase table information and work on a PIT recovery without messing up the core application prereqs (such as the service), but I can't seem to find the information I'm looking for. Any assistance would be greatly appreciated! Additionally, if there's a way to spot this directory more quickly in the future, either by a command or looking at the files within, I would love to know so I can implement this procedure on different PSQL versions. Thank you!
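(For future reference, two quick ways to locate the cluster directory without guessing, assuming a PGDG-style install: the systemd unit records PGDATA, and a running server will report it directly.)
systemctl cat postgresql-12.service | grep PGDATA
sudo -u postgres psql -c "SHOW data_directory;"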

Google Cloud Storage does not let me remove a bucket with huge data

I have around 200 GB of data in a Google Cloud Storage Coldline bucket. When I try to remove it, it keeps preparing forever.
Is there any way to remove the bucket?
Try the gsutil tool if you have been trying with the Console and it did not work. To do so, you can just open Google Cloud Shell (the leftmost button in the top-right corner of the Console) and type a command like:
gsutil -m rm -r gs://[BUCKET_NAME]
It may take a while, but with the -r flag the contents of the bucket are deleted recursively first, and then the bucket itself is removed. The -m flag performs the removals in parallel to speed up the process.

mongorestore for a collection results in "Killed" output and collection isn't fully restored

I run the following command:
root#:/home/deploy# mongorestore --db=dbname --collection=collectionname pathtobackupfolder/collectionname.bson
Here's the output:
2016-07-16T00:08:03.513-0400 checking for collection data in pathtobackupfolder/collectionname.bson
2016-07-16T00:08:03.525-0400 reading metadata file from pathtobackupfolder/collectionname.bson
2016-07-16T00:08:03.526-0400 restoring collectionname from file pathtobackupfolder/collectionname.bson
Killed
What's going on? I can't find anything on Google or Stack Overflow about a mongorestore resulting in "Killed". The backup folder that I'm restoring from is a collection of 12875 documents, yet every time I run the mongorestore, it says "Killed" and restores a different number of documents that is less than the total: 4793, 2000, 4000, etc.
The machine that I'm performing this call on is running "Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-71-generic x86_64)" on Digital Ocean.
Any help is appreciated. Thanks.
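One way to confirm whether the kernel's OOM killer is terminating the process (which would explain the bare "Killed" message) is to check the kernel log right after a failed run:
dmesg | grep -iE 'killed process|out of memory'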
After trying the mongorestore command a fifth and sixth time after posting this question, the output this time was more explicit and indicated it was a memory issue on the Digital Ocean droplet. I followed https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04 and the restore finished completely without errors.
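The essential steps from that tutorial, condensed (the size is illustrative; pick what fits the droplet):
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab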
If you are trying to solve this in Docker, just increase the swap memory in the settings.json file.