Executed PHP Script Cannot Access GCS Mounted Drive on GCE - google-cloud-storage

I was able to mount my Google Cloud Storage using the command line below:
gcsfuse -o allow_other -file-mode=660 -dir-mode=770 --uid=<uid> --gid=<gid> testbucket /path/to/domain/folder
The group includes the user apache. Apache is able to write to the mounted drive like so:
sudo -u apache echo 'Some Test Text' > /path/to/domain/folder/hello.txt
hello.txt appears in the bucket as expected. However, when I execute the PHP script below, I get an error:
<?php file_put_contents('/path/to/domain/folder/hello.txt', 'Some Test Text');
PHP Error: failed to open stream: Permission denied
Running echo exec('whoami'); returns apache
I assumed this is a common use case for mounting with gcsfuse, but I seem to be the only one on the internet with this issue. I do not know if it's an issue with the way I mounted it or with the service security of httpd.

I came across a similar issue.
Use the flag --implicit-dirs while mounting the Google Storage bucket using gcsfuse. More on this here.
Mounting the bucket as a folder makes the OS treat it like a regular folder that can contain files and subfolders. But a Google Cloud Storage bucket doesn't have a real directory structure. For example, when you create a file named hello.txt in a folder named files inside a Google Storage bucket, you are not actually creating a folder and putting the file in it; the object is simply created in the bucket with the name files/hello.txt. More on this here and here.
To make the OS treat the GCS bucket like a hierarchical structure, you have to pass the --implicit-dirs flag to gcsfuse.
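Applied to the mount command from the question, that would look roughly like this (a sketch; the <uid>/<gid> placeholders and paths are the same ones used above):
gcsfuse --implicit-dirs -o allow_other -file-mode=660 -dir-mode=770 --uid=<uid> --gid=<gid> testbucket /path/to/domain/folder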
Note:
I wouldn't recommend using gcsfuse in production systems, as it is beta-quality software.

Related

Export firestore data by overwriting existing data gcloud firestore

I am trying to overwrite existing export data in gcloud using:
gcloud firestore export gs://<PROJECT>/dir --collection-ids='tokens'
But I get this error:
(gcloud.firestore.export) INVALID_ARGUMENT: Path already exists: /fcm-test-firebase.appspot.com/dir/dir.overall_export_metadata
Is there any way to either delete the path or export with replace?
You can easily determine the list of available flags for any gcloud command.
Here are variants of the command and you can see that there's no overwrite option:
gcloud firestore export
gcloud alpha firestore export
gcloud beta firestore export
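For example, appending the standard --help flag to any of these prints its full flag list, so you can confirm for yourself that no overwrite option is available:
gcloud firestore export --help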
Because the export goes to a Google Cloud Storage (GCS) bucket, you can simply delete the path before attempting the export.
BE VERY CAREFUL with this command, as it recursively deletes objects:
gsutil rm -r gs://<PROJECT>/dir
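Putting the two steps together, a delete-then-export run would look roughly like this (same paths and collection IDs as above; double-check the path before running the delete):
gsutil rm -r gs://<PROJECT>/dir
gcloud firestore export gs://<PROJECT>/dir --collection-ids='tokens'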
If you would like Google to consider adding an overwrite feature, consider filing a feature request on its public issue tracker.
I suspect that the command doesn't exist for various reasons:
GCS storage is cheap
Having many backup copies is infinitely better than having no backup copies
It's easy to delete copies using gsutil

Nextcloud: impossible to create or write in the data directory /media/pi/HCLOUD/nextcloudData/

I'm trying to build a Nextcloud server on my Raspberry Pi connected to an external disk. The installation worked, but during setup I wanted to change the data directory (where all the files will be stored) to my external disk, and setup said: impossible to create or write in the data directory /media/pi/HCLOUD/nextcloudData/
I found the solution!
First, install Nextcloud on the SD card (for example, as described here: https://www.marksei.com/how-to-install-nextcloud-13-on-ubuntu/).
If your disk is formatted as NTFS, you really need to reformat it as ext4; that gives Linux the ability to change permissions on the disk.
Then mount it at /var/nextcloud. To move the data directory onto the external HDD, follow this tutorial (step: moving the Nextcloud data folder): https://pimylifeup.com/raspberry-pi-nextcloud-server/
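A rough sketch of the format-and-mount step (the device name /dev/sda1 is an assumption; substitute your own disk, and note that mkfs erases everything on it):
sudo mkfs.ext4 /dev/sda1    # WARNING: wipes the disk; /dev/sda1 is an assumed device name
sudo mkdir -p /var/nextcloud
sudo mount /dev/sda1 /var/nextcloud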
Try doing sudo chown pi:www-data /media/pi/HCLOUD/nextcloudData/ and then change the data directory.

Creating symbolic links resulting in 500 error

Currently running a WHM/cPanel server on CentOS. The server seems to be running fine, no issues there. However, I'm using a deployment process that puts files outside of the document root, e.g.
~/deployment
instead of:
~/public_html
Obviously I need to point public_html to this folder so my site will run, so I'm removing public_html and creating a symlink pointing to the new deployment folder. This results in a 500 error.
So looking at the logs I've discovered that it produces the following error:
Directory "/home/xyz/deployment" is writeable by group
Checking the file permissions, it looks as though the symlink is 777, where I need it to be 755 for the server to allow viewing.
Is there a setting in WHM? Is there a setting in CentOS? I have another box running that doesn't have this issue, so I'm assuming this is related to the current setup of this machine.
Any help would be appreciated, thanks.
When you create a hard link to a file or folder, it inherits the access rights and permissions of the original file/folder, while a soft link will show 777 permissions. So I think you can use rsync for both purposes (see the sketch after this list):
1. Have a folder with all the files from the source
2. Have your own permissions on the folder
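As a hedged sketch of that idea, rsync can copy the deployment tree into public_html while forcing permissions the server accepts, instead of symlinking (the paths mirror the question; the 755/644 values are an assumption about what the server expects):
rsync -a --chmod=D755,F644 ~/deployment/ ~/public_html/    # D = directories 755, F = files 644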

How to skip existing files in gsutil rsync

I want to copy files between a directory on my local computer disk and my Google Cloud Storage bucket with the below conditions:
1) Copy all new files and folders.
2) Skip all existing files and folders irrespective of whether they have been modified or not.
I have tried to implement this using the Google ACL policy, but it doesn't seem to be working.
I am using a Google Cloud Storage admin service account to copy my files to the bucket.
As A.Queue commented, the solution for skipping existing files is to use the gsutil cp command with the -n option. This option means no-clobber: files and directories already present in the Cloud Storage bucket will not be overwritten, and only new files and directories will be added to the bucket.
If you run the following command:
gsutil cp -n -r . gs://[YOUR_BUCKET]
you will copy all files and directories (the whole directory tree, with all files and subdirectories underneath) that are not present in the Cloud Storage bucket, while those already present will be skipped.
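For a large local tree, the same command can also be run with gsutil's top-level -m flag to parallelize the transfer; the -n (no-clobber) behaviour is unchanged:
gsutil -m cp -n -r . gs://[YOUR_BUCKET]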
You can find more information related to this command in this link.

PostgreSQL: Error importing csv file from shared network folder

My goal is to import a CSV file into a PostgreSQL database.
My file is located in a network shared folder and I have no option to move it to a local folder.
The file is located at:
"smb://file-srv/doc/myfile.csv"
When I run this PostgreSQL script:
COPY tbl_data
FROM 'smb://file-srv/doc/myfile.csv' DELIMITER ',' CSV;
I get this error:
ERROR: could not open file "smb://file-srv/doc/myfile.csv" for reading: No such file or directory
SQL state: 58P01
I have no problem accessing the file and opening it.
I am using PostgreSQL 9.6 under Ubuntu 16.04.
Please advise how to fix this problem.
Update
When I try to access the file as the postgres user, I get the same error:
postgres@file-srv:~$ cat smb://file-srv/doc/myfile.csv
cat: 'smb://file-srv/doc/myfile.csv': No such file or directory
As I mentioned, when I use the mounted folder I created, I can access the file.
It is about permissions. You have to check read access on the file and folders.
Also, logging in with superuser access may solve your problem.
In short, this is a permissions issue: Your network share is likely locally mounted to your user's UID, while the PostgreSQL server is running as the postgres user.
Second, when you log into your database, there is not an overlap between the database's users and the system's users, even if you have the same username. This means that when you request a file from your network share, the DB user, in this case postgres, does not have the necessary permissions.
To see this, and assuming you have root access on the box in question, you might try to become the postgres user and see that you cannot access the file:
$ sudo su - postgres
$ cat /run/user/.../smb.../yourfile.csv
Permission denied
The fix to your issue will involve, somehow, making the file or share accessible to the postgres user. Copying is certainly the quickest way, but that's off the table. You could mount the share (perhaps as read-only) as the postgres user; you might do this in fstab.
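If you do go the mount route, a sketch might look like the following; the mount point and credentials file are assumptions, and uid=postgres is what makes the files readable by the server process:
sudo mkdir -p /mnt/doc
sudo mount -t cifs //file-srv/doc /mnt/doc -o ro,uid=postgres,credentials=/root/.smbcred    # mount point and credentials file are assumed
The COPY statement could then read the file from '/mnt/doc/myfile.csv'.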
However, unless this is going to be an automated detail that happens regularly, this seems like heroics. Without more information as to why you can't copy locally, I suggest copying the file locally.