Nextcloud: impossible to create or write in the data directory /media/pi/HCLOUD/nextcloudData/ - raspberry-pi

I'm trying to build a Nextcloud server on my Raspberry Pi, connected to an external disk. The installation worked, but during setup I want to change the data directory (where all the files will be stored) to my external disk. The setup fails with: impossible to create or write in the data directory /media/pi/HCLOUD/nextcloudData/

I found the solution!
First, install Nextcloud on the SD card (for example, following https://www.marksei.com/how-to-install-nextcloud-13-on-ubuntu/).
If your disk is formatted as NTFS, you really need to reformat it to ext4 - that gives Linux the ability to change permissions on the disk.
Then mount it on the folder /var/nextcloud. To move the data directory onto the external HDD, follow this tutorial (step: moving the Nextcloud data folder): https://pimylifeup.com/raspberry-pi-nextcloud-server/
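A minimal sketch of the format-and-mount steps, assuming the disk shows up as /dev/sda1 (check with lsblk first; mkfs.ext4 erases everything on the partition):
sudo mkfs.ext4 /dev/sda1
sudo mkdir -p /var/nextcloud
sudo mount /dev/sda1 /var/nextcloud
To have it mounted on every boot, add a matching entry to /etc/fstab, for example:
/dev/sda1  /var/nextcloud  ext4  defaults  0  2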

Try running sudo chown pi:www-data /media/pi/HCLOUD/nextcloudData/ and then change the data directory.
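If that alone isn't enough, a slightly fuller variant (assuming the web server runs as www-data, the Raspbian/Debian default) is to hand the whole tree to the web server user:
sudo chown -R www-data:www-data /media/pi/HCLOUD/nextcloudData/
sudo chmod 0770 /media/pi/HCLOUD/nextcloudData/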

Related

Storage Manager in pgAdmin

I am trying to back up one of my databases with the PostgreSQL pgAdmin tool. I used this tutorial:
backup database with pgAdmin
After finishing that, I want to get the backup file. The tutorial says we can use the Storage Manager to download the backup file to the client machine. Following this link, I then wanted to access the Storage Manager. It says "You can access Storage Manager from the Tools Menu", but on my system there is no option with that name:
What is the problem, and how can I obtain the backup file?
If you are not running pgAdmin4 in server mode, then there is no storage manager. The storage manager is only relevant when the computer from which you run the pgAdmin4 GUI is different from the computer where the pgAdmin4 app-server is running.
When you took the backup, you told it where to save the file, although not in a very user-friendly way. It asks for a filename, and there are three dots you can click to browse for a directory to put the file in. But if you don't avail yourself of the three dots, you don't know where it is going to put the file; it just uses an apparently OS-dependent default and doesn't tell you what it is. I usually find it in my "Documents" folder. (Well, I usually don't use pgAdmin4 in the first place, as it makes everything harder than just using the command line, but when I do use it...)
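For reference, a command-line equivalent with pg_dump (part of the standard PostgreSQL client tools; host, user, database name, and output path below are placeholders):
pg_dump -h localhost -U postgres -F c -f /path/to/mydb.backup mydb
With -F c (custom format), the resulting file can later be restored with pg_restore.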

How to change MariaDB's data directory?

I use CentOS 7 as my system.
I tried to change the data directory on MariaDB 10.1.43.
I followed the guides on the internet, and they all say to change datadir=/var/lib/mysql/ in my.cnf,
but the problem is that there is no my.cnf file on my computer,
only a my.cnf.d folder with a server.cnf file in it.
I put datadir=/newpath/ in this server.cnf,
but it didn't work; the datadir that MariaDB shows is still /var/lib/mysql/.
What should I do now? How can I find this my.cnf file?
I realize this is an old question, but I wanted to add an answer that worked for me on a legacy machine running MariaDB 10.1.43 and CentOS 6.
Within the /etc/my.cnf file, add this under [client-server] so it looks like this:
[client-server]
port=3306
socket=/home/mysql/mysql.sock
Then, in the server.cnf file within the /etc/my.cnf.d folder, add this under [mysqld] so it looks like this:
[mysqld]
datadir=/home/mysql
socket=/home/mysql/mysql.sock
I moved the data to the /home directory, which is a newly mounted volume with additional space for this machine.
The next part of my answer is out of scope for this question. But the instructions here worked like a charm for moving your MySQL/MariaDB data directory. Semi-pro tip: Be sure to follow the RedHat/CentOS step to add the security context.
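A minimal sketch of the move plus the security-context step on the CentOS 7 setup from the question, assuming SELinux is enforcing and the /home/mysql path used above (semanage comes from the policycoreutils-python package; verify the details against the guide you follow):
sudo systemctl stop mariadb
sudo rsync -av /var/lib/mysql/ /home/mysql/
sudo chown -R mysql:mysql /home/mysql
sudo semanage fcontext -a -t mysqld_db_t "/home/mysql(/.*)?"
sudo restorecon -Rv /home/mysql
sudo systemctl start mariadb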

Samba Share Not Writable (Linux)

I am currently running a fresh install of CentOS 7 (64-bit). This machine isn't used for anything except storage via Samba. However, for some strange reason, I can't seem to get the share to be writable from Windows. With the drive mapped, I can read the file lists and browse (even access files), but I cannot write any new files.
The steps that I took were to install Samba via yum. I added a system user, bdawson, and then added that same user as a Samba user. I then logged in as that user and made a directory called Storage (the path being /home/bdawson/Storage).
I then edited my Samba config and added the following:
[Storage]
valid users = bdawson,@bdawson
path = /home/bdawson/Storage
write list = bdawson,@bdawson
/home/bdawson was chown -R'd to be owned by bdawson:bdawson. File permissions are set to 0755 for both /home/bdawson and /home/bdawson/Storage.
At this point, I am not sure what I'm doing wrong that is preventing me from being able to write. This same configuration worked just fine on a different machine, so I'm at a complete loss. (Side note: Samba logs aren't showing any issues and watching the Samba connections via Webmin does show that I am connecting and reading from the share, but attempts to write to it fail saying I need permission.)
After a lot of digging, I discovered this was due to a missing SELinux label. This was not an issue with my Ubuntu share, since Ubuntu doesn't use SELinux.
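For anyone hitting the same thing, a minimal sketch of applying the label on CentOS 7, assuming the share path from the question (samba_share_t is the stock SELinux type for Samba-exported directories):
sudo semanage fcontext -a -t samba_share_t "/home/bdawson/Storage(/.*)?"
sudo restorecon -Rv /home/bdawson/Storage
As a quick, non-persistent test you can also run chcon -t samba_share_t -R /home/bdawson/Storage and check whether writes start working.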

Executed PHP Script Cannot Access GCS Mounted Drive on GCE

I was able to mount my Google Cloud Storage using the command line below:
gcsfuse -o allow_other -file-mode=660 -dir-mode=770 --uid=<uid> --gid=<gid> testbucket /path/to/domain/folder
The group includes the user apache. Apache is able to write to the mounted drive like so:
sudo -u apache echo 'Some Test Text' > /path/to/domain/folder/hello.txt
hello.txt appears in the bucket as expected. However, when I execute the PHP script below, I get an error:
<?php file_put_contents('/path/to/domain/folder/hello.txt', 'Some Test Text');
PHP Error: failed to open stream: Permission denied
echo exec('whoami'); returns apache.
I assumed this is a common use case for mounting with gcsfuse or something similar, but I seem to be the only one on the internet with this issue. I do not know if it's an issue with the way I mounted it or with the security of the httpd service.
I came across a similar issue.
Use the flag --implicit-dirs while mounting the Google Storage bucket using gcsfuse. More on this here.
Mounting the bucket as a folder makes the OS treat it like a regular folder that may contain files and folders. But a Google Cloud Storage bucket doesn't have a directory structure. For example, when you create a file named hello.txt in a folder named files inside a Google Storage bucket, you are not actually creating a folder and putting the file in it. The object is created in the bucket with the name files/hello.txt. More on this here and here.
To make the OS treat the GCS bucket as a hierarchical structure, you have to pass the --implicit-dirs flag to gcsfuse.
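For example, the mount command from the question with the flag added (uid/gid, bucket name, and path are the same placeholders as above):
gcsfuse --implicit-dirs -o allow_other -file-mode=660 -dir-mode=770 --uid=<uid> --gid=<gid> testbucket /path/to/domain/folder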
Note:
I wouldn't recommend using gcsfuse in production systems, as it is beta-quality software.

How to migrate/shift/copy/move data in Neo4j

Does anyone know how to migrate data from one instance of Neo4j to another? To be more precise, I want to know how to move the data from a Neo4j instance on my local machine to another instance on a remote machine.
I'm working on my Windows machine with Eclipse and embedded Neo4j. I need to transfer this data to a remote Neo4j instance on a CentOS machine. Please help me with this.
Not sure how to do it for an embedded Neo4j db.
But for a standalone server, and provided you have something like the command-line tool PuTTY on your Windows machine, this should work. Instead of $NEO4J_HOME you can also use the normal path without the env variable.
$NEO4J_HOME/bin/neo4j stop
cd $NEO4J_HOME/data
tar -cvf graph.db.tar graph.db
gzip graph.db.tar
scp -i ~/some_path/key_for_remote_server.pem ./graph.db.tar.gz username@your_remote_domain.com:~/
ssh -i ~/some_path/key_for_remote_server.pem username@your_remote_domain.com
On your remote server (at least this works for Ubuntu):
Maybe you need to use "sudo" (prefix the commands with sudo).
mv ./graph.db.tar.gz /some_path/
cd /some_path/
gunzip graph.db.tar.gz
tar -xvf graph.db.tar
$NEO4J_HOME/bin/neo4j start
$NEO4J_HOME/bin/neo4j status
You can migrate the data using the APOC export procedures. Run the query below in cypher-shell on the instance from which the data needs to be exported:
CALL apoc.export.cypher.all('myfilename.cypher');
This will write a file of Cypher statements to the import folder.
Go to the database instance where the data needs to be imported and copy the file into its import folder. Run the command below using cypher-shell:
CALL apoc.cypher.runFile("myfilename.cypher", {}) YIELD row, result;
For more advanced options follow the below links:
https://neo4j.com/docs/labs/apoc/current/export/cypher/
http://neo4j-contrib.github.io/neo4j-apoc-procedures/3.4/cypher-execution/run-cypher-scripts/
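Note that, depending on the APOC version, exporting to and importing from files usually has to be enabled explicitly in the configuration before these procedures will touch the filesystem; the exact file (neo4j.conf or apoc.conf) and setting names vary by version, so treat this as a sketch:
apoc.export.file.enabled=true
apoc.import.file.enabled=true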
I found out the following workaround for copying the data from a server in the cluster to all others, after using the neo4j-import tool:
Stop all nodes.
On the new node/server, where you need your data to be copied, you have to create the database folder for that graph (in my case loadTest):
/neo4j-enterprise-3.1.0/data/databases/loadTest.db
Then, from the source node/server that is holding the data, copy the neostore.id file to the destination server's db folder (loadTest.db from the previous step); see the scp example after these steps.
Start all nodes. In the background neo4j will copy data from other cluster servers to the new node.
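For example, from the source server, something like this (hostname and the enterprise install path are placeholders matching the folder layout above):
scp $NEO4J_HOME/data/databases/loadTest.db/neostore.id user@new-node:/neo4j-enterprise-3.1.0/data/databases/loadTest.db/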
For embedded mode, you just need to locate the Neo4j database folder, then zip it and send it to the remote system.
In your code, where you create the GraphDatabaseService, you would have given the target location.
If it is a relative path, the database is probably inside your project folder.
Now, to run the db instance in the browser, you will need to use the Neo4j community server and point it to the folder containing the index folder. So if your neo4j-db is located at $project/tmp/neo4j-db, then you give the file path up to this folder (the index folder will be inside it).
Edit
The folder that contains the schema and index folders needs to be zipped. You can upload and unzip the folder at a certain location on your standalone server using PuTTY. Then just change org.neo4j.server.database.location in the conf/neo4j-server.properties file.
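Assuming a Neo4j 2.x community server (the versions that use neo4j-server.properties), the relevant line would look something like this, with the value being wherever you unzipped the database folder:
org.neo4j.server.database.location=/path/to/unzipped/neo4j-db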