Azure data factory - SftpPermissionDeniedException - azure-data-factory

Using a Copy Data activity I want to upload files to an SFTP service, but the activity fails with an SftpPermissionDeniedException.
I can upload files to the target folder with a simple Linux sftp client using the same user, and I am also able to create files and folders within the target folder (but not in its parent folder, which is the root folder).
The "Upload with temp file" option is set to false.
Any idea?

To confirm which user your build runs as, you can run the whoami command as part of your build process.
Solution:
Store things inside a folder that the user running the build has permission to write to, or
change the ownership of the directory with the chown command before trying to write to it.
Refer to https://support.circleci.com/hc/en-us/articles/360003649774-Permission-Denied-When-Creating-Directory-or-Writing-a-File
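A minimal sketch of both checks, assuming a Linux build environment and a placeholder target path:
# print the user the build step runs as
whoami
# option 1: write somewhere that user already owns, e.g. the home or working directory
mkdir -p ~/build-output
# option 2: take ownership of the target directory first (/var/my-output is a placeholder)
sudo chown -R "$(whoami)" /var/my-output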

Related

Creating symbolic links resulting in 500 error

I'm currently running a WHM/cPanel server on CentOS. The server seems to be running fine, no issues there. However, I'm using a deployment process to put files outside of the document root, e.g.
~/deployment
instead of:
~/public_html
Obviously I need to point public_html to this folder so my site will run. So I'm removing public_html and creating a symlink pointing to the new deployment folder. This results in a 500 error.
Looking at the logs, I've discovered that it produces the following error:
Directory "/home/xyz/deployment" is writeable by group
Checking the file permissions, it looks as though the symlink is 777 where I need it to be 755 for the server to allow viewing.
Is there a setting in WHM? Is there a setting in CentOS? I have another box that doesn't have this issue, so I'm assuming this is related to the current setup of this machine.
Any help would be appreciated, thanks.
When you create a hard link to a file or folder, the link inherits the access rights and permissions of the original file/folder, while a soft link will show 777 permissions. So I think you can use rsync options for both purposes (see the sketch after this list):
1- have a folder with all the files from the source
2- have your own permissions on that folder
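A minimal sketch, assuming the paths from the question and that the server wants 755 on directories and 644 on files:
# sync the deployment into the web root while forcing the desired permissions
# (--chmod rewrites the modes rsync applies: D = directories, F = files)
rsync -a --chmod=D755,F644 ~/deployment/ ~/public_html/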

How to skip existing files in gsutil rsync

I want to copy files between a directory on my local computer disk and my Google Cloud Storage bucket with the below conditions:
1) Copy all new files and folders.
2) Skip all existing files and folders irrespective of whether they have been modified or not.
I have tried to implement this using the Google ACL policy, but it doesn't seem to be working.
I am using Google Cloud Storage admin service account to copy my files to the bucket.
As A.Queue commented, the solution for skipping existing files is to use the gsutil cp command with the -n option. This option means no-clobber: all files and directories already present in the Cloud Storage bucket will not be overwritten, and only new files and directories will be added to the bucket.
If you run the following command:
gsutil cp -n -r . gs://[YOUR_BUCKET]
This will copy the whole directory tree (all files and subdirectories underneath) that is not yet present in the Cloud Storage bucket, while everything already present will be skipped.
You can find more information related to this command in this link.
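As a usage example (bucket and folder names are placeholders), re-running the same command later uploads only objects that are not yet in the bucket:
# -n = no-clobber (skip objects that already exist), -r = recurse into subdirectories
gsutil cp -n -r ./reports gs://my-example-bucket/reports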

Moving postgres data folder on Ubuntu

I have a web application querying a PostgreSQL database (successfully), and I'm looking to move the data folder from /var/lib/postgres/9.3/main to a customisable location.
Right now I'm prevented from even copying the folder due to permission errors, but I can't assign myself the permissions because that breaks the postgres server.
(I broke the server by running sudo chown <username> -R /var/lib/postgres/9.3/main - which worked as a command but stopped the postgres server from working)
I would simply create a new folder and change the configured location to point there, but I'd lose the current instance of my database if I did that.
How can I move the current folder to a new location, so that I can point to it in the .conf file? I need to explicitly move the folder, I can't create a new DB.
You can just copy or move the directory, including all subdirectories and files:
cp -rp or mv should be enough for this.
Postgres must not be running while you are messing with the files.
The base of the data directory (PGDATA) must be owned by postgres and have file mode 0700. (If not, Postgres will refuse to start.)
The rest of the files must at least be readable/writable by postgres.
The new location must also be made known to the startup process (in /etc/init.d/) and possibly in the postgresql.conf file within the data directory (e.g. for the log file location).
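A minimal sketch of the move, assuming an Ubuntu layout, the 9.3 path from the question, and /data/postgresql as a placeholder target:
# stop the server before touching the files
sudo service postgresql stop
# copy (or mv) the data directory, preserving ownership and permissions
sudo mkdir -p /data/postgresql/9.3
sudo cp -rp /var/lib/postgres/9.3/main /data/postgresql/9.3/
# the base of the new data directory must be owned by postgres with mode 0700
sudo chown -R postgres:postgres /data/postgresql/9.3/main
sudo chmod 0700 /data/postgresql/9.3/main
# point data_directory in postgresql.conf (and the init script, if it hard-codes the path) at the new location, then restart
sudo service postgresql start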

Command line for Dropbox connection in Duplicati backup

I'm trying to back up folders from a local drive to Dropbox using the Duplicati command line in Command Prompt. (The backup should be incremental.)
C:\Users\Desktop\Office_Works\Duplicati\Duplicati 1.3.4\Duplicati>Duplicati.CommandLine.exe backup a https://www.dropbox.com/
Enter passphrase: **
Confirm passphrase: **
Unable to find backend for: https://www.dropbox.com/
"a" is my folder in local drive. Now I want to know how to make a connection with Dropbox using command lines. Is any particular way to connect Dropbox using duplicati commands?
There is no direct support for DropBox in Duplicati.
Others have reported using a local folder under the DropBox folder as a destination, such that the DropBox client synchronizes the folder for you.
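A sketch of that approach, following the backup syntax from the question (the file:// destination path inside the Dropbox folder is a placeholder):
Duplicati.CommandLine.exe backup a "file://C:\Users\<you>\Dropbox\DuplicatiBackups"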
There's a project that can upload files to Dropbox from the command prompt. It's very lightweight, and both installation and usage couldn't be easier!
PneumaticTube
The first time you use it, it will open your browser to ask for permission to access your Dropbox account, but from then on it's smooth sailing.

How to migrate/shift/copy/move data in Neo4j

Does anyone know how to migrate data from one instance of Neo4j to another? To be more precise, I want to know how to move the data from a Neo4j instance on my local machine to another instance on a remote machine.
I'm working on my Windows machine with Eclipse and embedded Neo4j. I need to transfer this data to a remote Neo4j instance on a CentOS machine. Please help me with this.
Not sure how to do it for an embedded Neo4j db.
But for standalone, and in case you have something like the command line tool PuTTY on your Windows machine, this should work. Instead of $NEO4J_HOME you can also use the normal path without the env variable.
$NEO4J_HOME/bin/neo4j stop
cd $NEO4J_HOME/data
tar -cvf graph.db.tar graph.db
gzip graph.db.tar
scp -i ~/some_path/key_for_remote_server.pem ./graph.db.tar.gz username@your_remote_domain.com:~/
ssh -i ~/some_path/key_for_remote_server.pem username@your_remote_domain.com
On your remote server (at least this works for Ubuntu):
Maybe you need to use sudo (prefix the commands with sudo).
mv ./graph.db.tar.gz /some_path/
cd /some_path/
gunzip graph.db.tar.gz
tar -xvf graph.db.tar
$NEO4J_HOME/bin/neo4j start
$NEO4J_HOME/bin/neo4j status
You can migrate the data using the APOC procedures. Run the query below in the Cypher shell on the instance from which the data needs to be exported:
CALL apoc.export.cypher.all('myfilename.cypher');
This will write a file with Cypher queries to the import folder.
Go to the database instance where the data needs to be imported and copy the file into its import folder. Run the command below using the Cypher shell:
CALL apoc.cypher.runFile("myfilename.cypher", {}) YIELD row, result;
For more advanced options, follow the links below:
https://neo4j.com/docs/labs/apoc/current/export/cypher/
http://neo4j-contrib.github.io/neo4j-apoc-procedures/3.4/cypher-execution/run-cypher-scripts/
I found the following workaround for copying the data from one server in the cluster to all the others, after using the neo4j-import tool:
Stop all nodes.
On the new node/server, where you need your data to be copied, you have to create the database folder for that graph (in my case loadTest):
/neo4j-enterprise-3.1.0/data/databases/loadTest.db
Then, from the source node/server that holds the data, copy the neostore.id file to the destination server's db folder (loadTest.db from the previous step).
Start all nodes. In the background, Neo4j will copy the data from the other cluster servers to the new node.
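A sketch of those steps as commands, assuming the 3.1.0 paths from the answer and placeholder hostnames:
# on each node: stop neo4j
$NEO4J_HOME/bin/neo4j stop
# on the new server: create the empty database folder
mkdir -p /neo4j-enterprise-3.1.0/data/databases/loadTest.db
# from the source server: copy only the neostore.id file into that folder
scp /neo4j-enterprise-3.1.0/data/databases/loadTest.db/neostore.id user@new-server:/neo4j-enterprise-3.1.0/data/databases/loadTest.db/
# start all nodes again; the cluster copies the rest of the data in the background
$NEO4J_HOME/bin/neo4j start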
For embedded mode, you would just need to locate the graph's neo4j-db folder, then zip it and send it to the remote system.
In your code, where you created the GraphDatabaseService, you would have given the target location.
Check whether it is a relative path; if so, the database might be in your project folder.
Now, to run the db instance in the browser, you will need to use the Neo4j Community server and point it to the folder containing the index folder. So if your neo4j-db is located at $project/tmp/neo4j-db, then you give the file path up to this folder (the index folder will be inside it).
Edit
The folder that will contain the schema and index folders needs to be zipped. You can upload and unzip the folder at a certain location on your standalone server using PuTTY. Then just change org.neo4j.server.database.location in the conf/neo4j-server.properties file, as sketched below.
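A minimal sketch of that embedded-mode move, assuming the $project/tmp/neo4j-db path from the answer and placeholder remote host and target paths:
# zip the folder that contains the schema and index folders
zip -r neo4j-db.zip $project/tmp/neo4j-db
# copy it to the standalone server and unpack it there
scp neo4j-db.zip user@remote-host:~/
ssh user@remote-host 'unzip ~/neo4j-db.zip -d ~/neo4j-data/'
# then set org.neo4j.server.database.location in conf/neo4j-server.properties to the unpacked graph folder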