S3FS file transfer fails after 500MB - s3fs

I am trying to transfer a 900 MB file into an S3 directory that is mounted on an EC2 instance using S3FS. I installed S3FS with the default configuration, without changing any parameters.
Once the transfer reaches roughly 530 MB it fails, and the temporary file is removed from that folder.
Is there a file size limit for uploads to an s3fs directory, or a parameter that could be blocking this transfer?
Thanks in Advance

Related

DB2 SQL3706N A disk full error was encountered

I have 600+ files to load into a DB2 database, version 10.5.9. Each file is roughly 200 MB, and I have a batch script that loads the files in a loop.
My disk "/mnt/blumeta0/db2/copy" is 16 GB.
If I run the load in NONRECOVERABLE mode it works, but I can't do that in my production database.
I tried running db2 connect refresh and db2 terminate after each file was loaded, but that did not work.
I manually cleaned up the disk /mnt/blumeta0/db2/copy, but the total size of all the files is more than 16 GB, so I got the same error.
I cannot clean the folder from the script, as the cleanup requires a superuser.
db2 "LOAD FROM $i OF DEL INSERT INTO <table_name>"
SQL3706N A disk full error was encountered on "/mnt/blumeta0/db2/copy".
How does the DB2 server clean the copy folder? Is there any alternative I can try?
You indicated that the Load succeeds when using NONRECOVERABLE mode, but otherwise fails with the error "SQL3706N A disk full error was encountered on "/mnt/blumeta0/db2/copy"".
I'm guessing that the Load is being performed using the COPY YES option. Since the Load command that you pasted does not show the COPY YES option, I'm guessing that you have a special configuration setting enabled that forces Load operations to use COPY YES in order to prevent the table from becoming inaccessible in a rollforward recovery event or HADR standby takeover event. The name of this configuration setting (registry variable) is "DB2_LOAD_COPY_NO_OVERRIDE".
When the Load is performed with COPY YES, a copy of the table pages/extents that were generated during the Load operation is written into a copy image file.
I suspect that you have the registry variable "DB2_LOAD_COPY_NO_OVERRIDE=COPY YES /mnt/blumeta0/db2/copy" configured (you can use db2set -all on the database server to display all configured registry variables). If so, the copy image files are being stored in this path, which at 16GB appears to be too small to contain them all.
You can consider changing this path to a location with more disk space; however, the path must remain accessible in the event of a database rollforward recovery or HADR standby takeover, otherwise the table will not be accessible after such an event.
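As a rough sketch, checking and relocating the copy path could look like the commands below. The /db2copy target is only a placeholder, the exact value syntax should be confirmed against what db2set -all reports on your server, and registry variable changes generally require an instance restart to take effect:
db2set -all
db2set DB2_LOAD_COPY_NO_OVERRIDE="COPY YES TO /db2copy"
db2stop
db2start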

Nextcloud : impossible to create or write in the data directory /media/pi/HCLOUD/nextcloudData/

I'm trying to build a Nextcloud server on my Raspberry Pi with an external disk attached. The installation worked, but during setup I want to change the data directory (where all the files will be stored) to my external disk, and the setup fails with: impossible to create or write in the data directory /media/pi/HCLOUD/nextcloudData/
I found the solution!
First, install Nextcloud on the SD card (for example as described here: https://www.marksei.com/how-to-install-nextcloud-13-on-ubuntu/).
If your disk is formatted as NTFS, you really need to reformat it as ext4; that gives Linux the ability to change permissions on the disk.
Then mount it on the folder /var/nextcloud. To move the data directory onto the external HDD, follow this tutorial (step: moving the Nextcloud data folder): https://pimylifeup.com/raspberry-pi-nextcloud-server/
Try running sudo chown pi:www-data /media/pi/HCLOUD/nextcloudData/ and then change the data directory.
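A minimal sketch of the disk preparation, assuming the external disk's partition is /dev/sda1 (a placeholder; reformatting erases everything on it) and using the /var/nextcloud mount point mentioned above:
sudo mkfs.ext4 /dev/sda1                                 # reformat the external disk as ext4
sudo mkdir -p /var/nextcloud
sudo mount /dev/sda1 /var/nextcloud                      # mount it where Nextcloud will keep its data
sudo mkdir -p /var/nextcloud/nextcloudData
sudo chown -R pi:www-data /var/nextcloud/nextcloudData   # give the web server group write access
sudo chmod 0770 /var/nextcloud/nextcloudData             # Nextcloud expects the data directory not to be readable by other users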

How to skip existing files in gsutil rsync

I want to copy files between a directory on my local computer disk and my Google Cloud Storage bucket with the below conditions:
1) Copy all new files and folders.
2) Skip all existing files and folders irrespective of whether they have been modified or not.
I have tried to implement this using the Google ACL policy, but it doesn't seem to be working.
I am using Google Cloud Storage admin service account to copy my files to the bucket.
As @A.Queue commented, the solution to skip existing files would be the use of the gsutil cp command with the -n option. This option means no-clobber, so that all files and directories already present in the Cloud Storage bucket will not be overwritten, and only new files and directories will be added to the bucket.
If you run the following command:
gsutil cp -n -r . gs://[YOUR_BUCKET]
You will copy all files and directories (including the whole directory tree with all files and subdirectories underneath) that are not present in the Cloud Storage bucket, while all of those which are already present will be skipped.
You can find more information related to this command in this link.
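A concrete usage example, run from inside the local directory you want to upload (the local path is a placeholder, and the -m flag simply parallelizes the copy):
cd /path/to/local/dir
gsutil -m cp -n -r . gs://[YOUR_BUCKET]
On the first run everything is uploaded; on later runs only files that do not yet exist in the bucket are copied, and existing objects are left untouched even if the local copies have changed, which matches the "skip irrespective of modification" requirement.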

Executed PHP Script Cannot Access GCS Mounted Drive on GCE

I was able to mount my Google Cloud Storage using the command line below:
gcsfuse -o allow_other -file-mode=660 -dir-mode=770 --uid=<uid> --gid=<gid> testbucket /path/to/domain/folder
The group includes the user apache. Apache is able to write to the mounted drive like so:
sudo -u apache echo 'Some Test Text' > /path/to/domain/folder/hello.txt
hello.txt appears in the bucket as expected. However when I execute the below php script I get an error:
<?php file_put_contents('/path/to/domain/folder/hello.txt', 'Some Test Text');
PHP Error: failed to open stream: Permission denied
echo exec('whoami'); returns apache.
I assumed this is a common use case for mounting with gcsfuse, but I seem to be the only one on the internet with this issue. I do not know whether it's a problem with the way I mounted the bucket or with httpd's security settings.
I came across a similar issue.
Use the flag --implicit-dirs while mounting the Google Storage bucket using gcsfuse. More on this here.
Mounting the bucket as a folder makes the OS treat it like a regular folder that may contain files and directories, but a Google Cloud Storage bucket doesn't have a directory structure. For example, when you create a file named hello.txt in a folder named files inside a Google Storage bucket, you are not actually creating a folder and putting the file in it; the object is created in the bucket with the name files/hello.txt. More on this here and here.
To make the OS treat the GCS bucket as a hierarchical structure, you have to pass the --implicit-dirs flag to gcsfuse.
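Based on the mount command from the question, the remount would look roughly like this (keeping the same placeholders for uid and gid):
fusermount -u /path/to/domain/folder
gcsfuse --implicit-dirs -o allow_other --file-mode=660 --dir-mode=770 --uid=<uid> --gid=<gid> testbucket /path/to/domain/folder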
Note:
I wouldn't recommend using gcsfuse in production systems, as it is beta-quality software.

What are important mongo data files for backup

If I want to back up a database by copying the raw files, which files do I need to copy? Only db-name.ns, db-name.0, db-name.1, ... or the whole folder (local.ns..., journal)? I'm running a replica set. I understand the procedure for locking a hidden secondary node and then copying the files to a new location, but I'm wondering whether I need to copy the whole folder or just some files.
Thx
Simple answer: all of them, as obvious as that might sound. And here is why:
If you don't copy a namespaces file, your database will most likely not work.
When not copying all datafiles, some of your data is missing and your indices will point to void locations. The database in question might work (minus the data stored in the missing data file), but I would not bet on that – and since the data was important enough to create a backup in the first place, you don't want this to happen, do you?
Config, admin and local databases are vitally necessary for their respective features – and since you used the feature, you probably want to use it after a restore, too.
How do I backup all files?
The best solution I have found so far, save for MMS backup, is to create LVM snapshots of the filesystem the MongoDB data resides on. In order for this to work, the journal needs to be included. Usually, you don't need a dedicated backup node for this approach. It is a bit complicated to set up, though.
Preparing LVM backups
Let's assume you have your data in the default data directory /data/db and you have not changed any paths. Then you would mount a logical volume to /data/db and use this to hold the data. Assuming that you don't have anything like this, here is a step by step guide:
Create a logical volume big enough to hold your data. I will call that one /dev/VolGroup/LogVol1 from now on. Make sure that you only use about 80% of the available disk space in the volume group for creating the logical volume.
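For example, assuming the volume group is named VolGroup (an assumption; adjust the percentage, or use an absolute size with -L, to fit your disks):
lvcreate -l 80%VG -n LogVol1 VolGroup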
Create a filesystem on the logical volume. I prefer XFS, so we create an xfs filesystem on /dev/VolGroup/LogVol1:
mkfs.xfs /dev/VolGroup/LogVol1
Mount the newly created filesystem on /mnt
mount /dev/VolGroup/LogVol1 /mnt
Shut down mongod:
killall mongod
(Note that the upstart scripts sometimes have problems shutting down mongod, and this command gracefully stops mongod anyway).
Copy the datafiles from /data/db to /mnt by issuing
cp -a /data/db/* /mnt
Adjust your /etc/fstab so that the logical volume gets mounted on reboot:
# The noatime parameter increases io speed of mongod significantly
/dev/VolGroup/LogVol1 /data/db xfs defaults,noatime 0 1
Unmount the logical volume from its current mountpoint and remount it on the correct one:
cd && umount /mnt/ && mount /data/db
Restart mongod
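For example, on a typical package-managed installation (the exact service name or init system may differ on your machine):
service mongod start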
Creating a backup
Creating a backup now becomes as easy as
Create a snapshot:
lvcreate -l100%FREE -s -n mongo_backup /dev/VolGroup/LogVol1
Mount the snapshot:
mount /dev/VolGroup/mongo_backup /mnt
Copy it somewhere. The reason we need to do this is that the snapshot can only be kept as long as the changes to the data files do not exceed the space in the volume group that you did not allocate during preparation. For example, if you have a 100 GB disk and you allocated 80 GB for /dev/VolGroup/LogVol1, the snapshot size would be 20 GB. As long as the changes on the filesystem since the point you took the snapshot are less than 20 GB, everything runs fine; after that, the filesystem will refuse to take any changes. So you aren't in a hurry, but you should definitely move the data to an offsite location, an FTP server, or whatever you deem appropriate. Note that compressing the datafiles can take quite a long time and you might run out of "change space" before finishing it. Personally, I like to have a slower HDD as a temporary place to store the backup and do all further operations on that HDD. So my copy command looks like
cp -a /mnt/* /home/mongobackup/backups
when the HDD is mounted on /home/mongobackup.
Destroy the snapshot:
umount /mnt && lvremove /dev/VolGroup/mongo_backup
The space allocated for the snapshot is released and the restrictions to the amount of changes to the filesystem are removed.
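Putting the backup steps together, a minimal script might look like the sketch below; the volume group, logical volume, and backup destination are the ones assumed above and should be adjusted to your setup. The nouuid mount option is usually needed for XFS snapshots because the snapshot carries the same filesystem UUID as the still-mounted origin.
#!/bin/bash
# Sketch of the snapshot / copy / remove cycle described above.
set -e

VG=/dev/VolGroup                              # volume group holding the MongoDB logical volume
LV=$VG/LogVol1                                # logical volume mounted on /data/db
SNAP=mongo_backup                             # name of the temporary snapshot
DEST=/home/mongobackup/backups/$(date +%F)    # where the datafiles get copied

# 1. Snapshot the live filesystem (uses the unallocated space in the volume group)
lvcreate -l100%FREE -s -n $SNAP $LV

# 2. Mount the snapshot and copy the datafiles off it
mkdir -p "$DEST"
mount -o nouuid $VG/$SNAP /mnt
cp -a /mnt/* "$DEST"

# 3. Release the snapshot so changes to /data/db are no longer restricted
umount /mnt
lvremove -f $VG/$SNAP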
The whole db data folder, plus wherever you keep your logs and journal.
The best solution to back up data on MongoDB would be to use MongoDB Monitoring Service (MMS). All other solutions, including copying files manually, mongodump, and mongoexport, are way behind MMS.