Does anyone know how to migrate data from one instance of Neo4j to another? To be more precise, I want to move the data from a Neo4j instance on my local machine to another instance on a remote machine.
I'm working on my Windows machine with Eclipse and embedded Neo4j. I need to transfer this data to a remote Neo4j instance on a CentOS machine. Please help me with this.
Not sure how to do it for an embedded Neo4j db.
But for a standalone server, and assuming you have an SSH/SCP client such as PuTTY on your Windows machine, the following should work. Instead of $NEO4J_HOME you can also use the plain path without the environment variable.
$NEO4J_HOME/bin/neo4j stop
cd $NEO4J_HOME/data
tar -cvf graph.db.tar graph.db
gzip graph.db.tar
scp -i ~/some_path/key_for_remote_server.pem ./graph.db.tar.gz username@your_remote_domain.com:~/
ssh -i ~/some_path/key_for_remote_server.pem username@your_remote_domain.com
On your remote server (at least this works on Ubuntu):
You may need to prefix the commands with sudo.
mv ./graph.db.tar.gz /some_path/
cd /some_path/
gunzip graph.db.tar.gz
tar -xvf graph.db.tar
$NEO4J_HOME/bin/neo4j start
$NEO4J_HOME/bin/neo4j status
You can migrate the data using the APOC procedures. Run the query below in cypher-shell on the instance from which the data needs to be exported:
CALL apoc.export.cypher.all('myfilename.cypher');
This writes a file with Cypher statements into the import folder.
Go to the database instance where the data needs to be imported and copy the file into its import folder. Then run the below command using cypher-shell:
CALL apoc.cypher.runFile("myfilename.cypher", {}) YIELD row, result;
For more advanced options follow the below links:
https://neo4j.com/docs/labs/apoc/current/export/cypher/
http://neo4j-contrib.github.io/neo4j-apoc-procedures/3.4/cypher-execution/run-cypher-scripts/
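If you prefer to script both steps from the operating system shell, a rough sketch is below (neo4j/<password> are placeholders, and depending on your APOC version you may also need apoc.export.file.enabled=true and apoc.import.file.enabled=true in the Neo4j config):
# on the source instance (the file ends up in its import folder)
echo "CALL apoc.export.cypher.all('myfilename.cypher');" | cypher-shell -u neo4j -p <password>
# copy myfilename.cypher into the import folder of the target instance, then:
echo "CALL apoc.cypher.runFile('myfilename.cypher', {});" | cypher-shell -u neo4j -p <password>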
I found the following workaround for copying the data from one server in the cluster to all the others, after using the neo4j-import tool:
Stop all nodes.
On the new node/server, where you need your data to be copied, you have to create the database folder for that graph (in my case loadTest):
/neo4j-enterprise-3.1.0/data/databases/loadTest.db
Then, from the source node/server that holds the data, copy the neostore.id file into the destination server's db folder (loadTest.db from the previous step).
Start all nodes. In the background, Neo4j will copy the data from the other cluster servers to the new node.
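As a concrete sketch of those steps (the hostname user@new-node and the install path are just examples matching the loadTest layout above):
# on the new node, with neo4j stopped, create the empty db folder
mkdir -p /neo4j-enterprise-3.1.0/data/databases/loadTest.db
# from the source node, copy only neostore.id across
scp /neo4j-enterprise-3.1.0/data/databases/loadTest.db/neostore.id user@new-node:/neo4j-enterprise-3.1.0/data/databases/loadTest.db/
# then start all nodes; the cluster copies the rest of the store in the background
/neo4j-enterprise-3.1.0/bin/neo4j start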
For embedded mode, you just need to locate the Neo4j database folder, then zip it and send it to the remote system.
In your code, wherever you created the GraphDatabaseService, you passed the target location of that folder.
If it is a relative path, the database is probably inside your project folder.
Now, to browse the data, you will need to use the Neo4j community server and point it at the folder containing the index folder. So if your neo4j-db is located at $project/tmp/neo4j-db, then you give the path up to this folder (the index folder will be inside it).
Edit
The folder that contains the schema and index folders needs to be zipped. You can upload and unzip that folder at a suitable location on your standalone server using PuTTY. Then just change org.neo4j.server.database.location in the conf/neo4j-server.properties file.
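As a rough sketch of that transfer (assuming the store lives at $project/tmp/neo4j-db, and reusing the placeholder username@your_remote_domain.com and /some_path/ from above):
cd $project/tmp
zip -r neo4j-db.zip neo4j-db
scp neo4j-db.zip username@your_remote_domain.com:~/
ssh username@your_remote_domain.com
sudo unzip ~/neo4j-db.zip -d /some_path/
# then set org.neo4j.server.database.location=/some_path/neo4j-db in conf/neo4j-server.properties and restart the server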
I'm trying to import a .dmp file using the Data Pump Import tool in oracle sql developer.
I'm connected to an oracle database running in a container on my local machine.
When I get to the step where I specify where the dump file is to import, where should I place the .dmp file?
DATA_PUMP_DIR is a default Oracle directory object. It isn't part of SQL Developer; the import tool is really just giving you a GUI equivalent of running impdp from the command line.
You can find the operating system location that Oracle directory object points to by querying the data dictionary:
select directory_path from all_directories where directory_name = 'DATA_PUMP_DIR';
The path that query returns is on the database server (in your case that'll be inside your container too), and your dump file needs to go there.
You might want to create additional directory objects pointing to other locations, and grant suitable privileges to users to be able to access them; but they all need to be on the DB server and read/writable by the Oracle process owner on that server.
(They could be remote filesystems mounted on the server, they don't necessarily have to be local storage, but that's another issue and more operating-system specific. Again, in your case, you might be able to share a folder on your local machine with the container, if you don't want to copy the file into the container.)
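For example, creating an additional directory object and granting access to it could look roughly like this, run as a privileged user on the database server (the directory name, path and grantee here are placeholders):
sqlplus / as sysdba <<'EOF'
CREATE DIRECTORY my_dpump_dir AS '/u01/app/oracle/dpump';
GRANT READ, WRITE ON DIRECTORY my_dpump_dir TO your_user;
EOF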
I've been given a project to extract data from a PostgreSQL database. I've no previous experience with PostgreSQL but the project I have is to bug fix existing code, so all the logic to connect to the engine and get data is already in place.
The problem I have is that the database has been given to me in the form of the folders and files straight from the source HDD, not a backup (which isn't going to happen, so "get the customer to give you a backup instead" isn't an option here).
The folders also contained the actual PostgreSQL binaries, so I looked at the version (9.4.14), downloaded the nearest one (9.4.18) from the PostgreSQL site and installed it. Now all I have to do is somehow get it to look at my given data files.
I tried the obvious approach of copying the contents of the data folder into the installed data folder, but after that the PostgreSQL service won't start.
I did find an option in the conf file:
#data_directory = 'ConfigDir'
I changed this to:
data_directory = 'C:\customer\data'
But again the service won't start after this.
The data directory used by the service is defined through the service command line, which overrides any value defined in postgresql.conf.
You need to re-create the service in order to change the data directory, e.g.:
Remove the service:
pg_ctl unregister -N postgresql-9.1
postgresql-9.1 is the "real" name of the service, not the "Display Name". You can see that in the properties of the service inside the "services" app.
Then re-create the service with the correct data directory:
pg_ctl register -N postgresql-9.1 -D c:\customer\data
Another way of "debugging" startup errors on Windows is to start Postgres from the command line (not through the service), because some errors during startup are not written to the Postgres logfile but are displayed on the command line. You can do that with e.g.:
pg_ctl start -D c:\customer\data
If the bin directory is not in your PATH you need to specify the full path to it on the command line, e.g.: c:\Postgres9.1\bin\pg_ctl
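Putting the last two points together, the full-path invocation would look something like this (assuming the binaries really are under c:\Postgres9.1 as in the example above):
c:\Postgres9.1\bin\pg_ctl start -D c:\customer\data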
I have a web application querying a Postgresql database (successfully) and I'm looking to move the data folder from location /var/lib/postgres/9.3/main to a customisable location.
Right now I'm prevented from even copying the folder due to permission errors, but I can't assign myself the permissions because that breaks the postgres server.
(I broke the server by running sudo chown <username> -R /var/lib/postgres/9.3/main - which worked as a command but stopped the postgres server from working)
I would simply create a new folder and change the location to point there, but I would lose the current instance of my database if I did that.
How can I move the current folder to a new location, so that I can point to it in the .conf file? I need to explicitly move the folder, I can't create a new DB.
You can just copy or move the directory, including all subdirectories and files.
cp -rp or mv should be enough for this.
Postgres must not be running while you are messing with the files.
The base of the data directory (PGDATA) must be owned by postgres and have file mode 0700 (otherwise Postgres will refuse to start).
[The rest of the files must at least be readable/writable by postgres.]
The new location must also be known to the startup process (in /etc/init.d/ and possibly in the postgresql.conf file within the data directory, for the log file location).
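For example, on a Debian/Ubuntu-style setup the whole move could look roughly like this (/new/location is a placeholder, and the service name and init mechanism may differ on your system):
sudo service postgresql stop
sudo mv /var/lib/postgres/9.3/main /new/location/main
sudo chown -R postgres:postgres /new/location/main
sudo chmod 700 /new/location/main
# update data_directory in postgresql.conf (and/or the init script) to /new/location/main, then:
sudo service postgresql start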
I have implemented a JavaScript script for my Mongo database. This script is called getMetrics.js and I am able to execute it by running mongo getMetrics.js from my computer.
Now I want to automatically execute that script one time per day. To do so, I have created a Heroku app and I added to it the scheduler add-on (https://devcenter.heroku.com/articles/scheduler).
My main problem is that in order to run, my task will execute the command "mongo getMetrics.js", and it will fail because I don't have the mongo command installed in my Heroku app.
How can I run this script from Heroku?
Thanks a lot for your help.
I did the following in a similar case:
Download MongoDB for Linux: https://www.mongodb.com/download-center#community
The bin folder contains the mongo binary.
Make this binary available in your Heroku instance (e.g. if your Heroku app is configured with your git repo, then check in this binary alongside your script).
[Make sure the folder where you keep this binary is on the path; a safe path is inside /bin.]
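The scheduler job then just needs to call the checked-in binary instead of a system-wide mongo, for example like this (the path and the connection string are placeholders; credentials are best kept in a Heroku config var such as MONGODB_URI):
./bin/mongo "$MONGODB_URI" getMetrics.js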
I created a MongoDB dump of a database on my local machine. It was put in the dump folder as "myProject". Within myProject I had two files, one ending with .bson and the other with metadata.json.
I moved the myProject folder to my remote server.
I used mongorestore on the myProject folder.
I have installed MongoDB on my remote server.
I got the error /dbpath (/data/db/) does not exist and I fixed it using the first solution given here.
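Roughly, the commands I ran looked like this (the paths and hostname are from memory and may not be exact):
scp -r ./dump/myProject username@remote-server:~/dump/myProject
mongorestore --db myProject ~/dump/myProject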
Now how will MongoDB on the remote server know the location of my restored dump?
Should I move the .json and .bson files to a new folder named the same as the database ("myProject", placed inside /data/db/)?
If I need to move the restored dump, where should I move it?
Mine is a Spring web app and I am running it in a Tomcat server. So far, on my local machine, the Spring bean had the host value "localhost". Should it be changed to my remote server's IP address when I run the application on the remote server, or is it fine if the value stays "localhost"?