I have a web application querying a PostgreSQL database (successfully), and I'm looking to move the data folder from /var/lib/postgres/9.3/main to a customisable location.
Right now I'm prevented from even copying the folder due to permission errors, but I can't assign myself the permissions because that breaks the Postgres server.
(I broke the server by running sudo chown <username> -R /var/lib/postgres/9.3/main; the command itself succeeded, but the Postgres server stopped working afterwards.)
I would simply create a new folder and point the configuration at it, but then I would lose the current contents of my database.
How can I move the current folder to a new location, so that I can point to it in the .conf file? I need to explicitly move the folder; I can't create a new DB.
You can just copy or move the directory, including all subdirectories and files.
cp -rp or mv should be enough for this.
Postgres must not be running while you are messing with the files.
The base of the data directory (PG_DATA) must be owned by postgres and have file mode 0700; otherwise Postgres will refuse to start.
[the rest of the files must at least be readable/writable by postgres]
The new location must also be known to the startup process (in /etc/init.d/) and possibly in the postgresql.conf file within the data directory (e.g. for the log file location).
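A minimal sketch of the whole procedure, assuming a Debian/Ubuntu-style service and an example target of /data/postgres/9.3/main (both are assumptions; adjust for your system):

# stop the server before touching the files
sudo service postgresql stop
# copy the cluster, preserving permissions and ownership
sudo cp -rp /var/lib/postgres/9.3/main /data/postgres/9.3/main
# make sure ownership and mode are what Postgres expects
sudo chown -R postgres:postgres /data/postgres/9.3/main
sudo chmod 0700 /data/postgres/9.3/main
# point data_directory (in postgresql.conf or the init script) at the new path, then:
sudo service postgresql start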
Context
I have a pretty standard PostgreSQL setup, with one main server and one hot standby.
I am testing a procedure to promote the standby server when the main server stops, and then turn the previous main server into a new standby.
When I call pg_rewind, I add the -c option to instruct pg_rewind to look for possibly archived WAL files.
My issue
The archive_command/restore_command of my Postgres configuration call scripts relative to the data directory:
archive_command='./archive_wal.sh "%p"'
restore_command='./restore_wal.sh "%f" "%p"'
Unfortunately, when running pg_rewind -c, it tries to find these commands relative to my cwd, not relative to the --target-pgdata=data directory.
Question
Is there a way to have pg_rewind -c use the data directory as the base directory for the restore command?
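One workaround worth trying (my assumption, not something the docs guarantee): since the relative paths in restore_command are resolved against the current working directory of pg_rewind, run it from inside the target data directory so that ./restore_wal.sh resolves correctly. The connection string below is a placeholder:

# run pg_rewind with the target data directory as the cwd
cd data
pg_rewind -c --target-pgdata=. --source-server='host=main-server port=5432 user=postgres'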
I am looking to move the location of a pgsql 13 database from its default to another disk.
I initially followed this guide (link).
But that is for v9.5, not 13. My challenge is that the location of the database, found by running the command below, is also where the configuration files are stored.
SHOW data_directory;
data_directory
------------------------
/var/lib/pgsql/13/data
(1 row)
SHOW config_file;
config_file
----------------------------------------
/var/lib/pgsql/13/data/postgresql.conf
(1 row)
With version 9.5 the configuration files were in a separate area, so at this point I got stuck with the guide.
It seems that if I want to move the database location, I have to move all the configuration files as well.
I have tried moving the entire data folder to the new location and restarting Postgres, but no luck.
Any help would be appreciated.
Assuming your configuration files are located under $PG_DATA, where they belong:
Shut down the (old) database
Copy the data directory to the new location (use cp -rp, or rsync -acv, or tar, or cpio, ...). Make sure that file attributes and ownership are preserved by the copy. The pgdata directory should be mode == 0700, and owner.group == postgres.postgres.
[optionally] rename the old data directory
[optionally] you may want to edit the configuration files at the new location
edit the startup file (/etc/init.d/postgresql) and make sure $PG_DATA points to the new location. [note: this is for Ubuntu; other distributions may use a different startup mechanism]
Start the new database and check that it runs (ps auxw | grep postgres) and that you can connect (psql -U postgres postgres).
[optionally] remove the directory tree at the old location.
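Since /var/lib/pgsql/13/data suggests the PGDG packages on RHEL/CentOS, the startup-file step there is a systemd override rather than /etc/init.d. A sketch, assuming the service is named postgresql-13 and the new location is /data/pgsql/13/data (both assumptions):

# create a systemd override pointing PGDATA at the new location
sudo systemctl edit postgresql-13
# in the editor that opens, add:
#   [Service]
#   Environment=PGDATA=/data/pgsql/13/data
sudo systemctl daemon-reload
sudo systemctl start postgresql-13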
On a production server (Windows Server 2012 R2), we ran out of space on the main HDD where a PostgreSQL 9.3 database was stored, so I had to move the data directory to another drive. I followed all the steps to do it: stop the server, move the data directory, change the folder permissions, and change the -D start parameter.
I could start the server, but it only shows the default postgres db and user (I checked in pgAdmin and psql). All the files are there, and if I try to recreate the same user I even get a 'user already exists' error. I also confirmed that the server started with the new directory (SHOW data_directory;).
Then I moved all the files back to the original drive, and I have the same problem.
I also checked the logs, but it shows nothing relevant to the problem.
I've been given a project to extract data from a PostgreSQL database. I have no previous experience with PostgreSQL, but the project is to bug-fix existing code, so all the logic to connect to the engine and get data is already in place.
The problem I have is that the database has been given to me in the form of the folders and files straight from the source HDD, not a backup (which isn't going to happen, so "get the customer to give you a backup instead" isn't an option here).
The folders also contained the actual PostgreSQL binaries, so I looked at the version (9.4.14), downloaded the nearest one (9.4.18) from the PostgreSQL site, and installed it. Now all I have to do is somehow get it to look at the data files I was given.
I tried the obvious approach of copying the contents of the data folder into the installed data folder, but after that the PostgreSQL service won't start.
I did find an option in the conf file:
#data_directory = 'ConfigDir'
I changed this to:
data_directory = 'C:\customer\data'
But again the service won't start after this.
The data directory used by the service is defined through the service command line, which overrides any setting in postgresql.conf.
You need to re-create the service in order to change the data directory, e.g.:
Remove the service:
pg_ctl unregister -N postgresql-9.1
postgresql-9.1 is the "real" name of the service, not the "Display Name". You can see that in the properties of the service inside the "services" app.
Then re-create the service with the correct data directory:
pg_ctl register -N postgresql-9.1 -D c:\customer\data
Another way of "debugging" startup errors in Windows, is to start Postgres from the command line (not through the service) because some errors during startup are not logged in the Postgres logfile but they are displayed on the command line. You can do that with e.g.:
pg_ctl start -D c:\customer\data`
If the bin directory is not in your PATH you need to specify the full path to it on the command line, e.g.: c:\Postgres9.1\bin\pg_ctl
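For example (the installation path below is an assumption; use wherever you installed 9.4):

"C:\Program Files\PostgreSQL\9.4\bin\pg_ctl.exe" start -D c:\customer\data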
Does anyone know how to migrate data from one instance of Neo4j to another? To be more precise, I want to know how to move the data from a Neo4j instance on my local machine to another one on a remote machine.
I'm working on my Windows machine with Eclipse and embedded Neo4j. I need to transfer this data to a remote Neo4j instance on a CentOS machine. Please help me with this.
Not sure how to do it for an embedded Neo4j DB.
But for a standalone server, and in case you have something like the command-line tool PuTTY on your Windows machine, this should work. Instead of $NEO4J_HOME you can also use the normal path without the env variable.
$NEO4J_HOME/bin/neo4j stop
cd $NEO4J_HOME/data
tar -cvf graph.db.tar graph.db
gzip graph.db.tar
scp -i ~/some_path/key_for_remote_server.pem ./graph.db.tar.gz username@your_remote_domain.com:~/
ssh -i ~/some_path/key_for_remote_server.pem username@your_remote_domain.com
On your remote server (at least this works for ubuntu):
You may need to prefix the commands with sudo.
mv ./graph.db.tar.gz /some_path/
cd /some_path/
gunzip graph.db.tar.gz
tar -xvf graph.db.tar
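If /some_path/ is not already Neo4j's data directory, the unpacked graph.db still has to be moved there before starting the server (assuming the default database location):

mv ./graph.db $NEO4J_HOME/data/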
$NEO4J_HOME/bin/neo4j start
$NEO4J_HOME/bin/neo4j status
You can migrate the data by using the APOC procedures, running the query below in cypher-shell on the instance the data needs to be exported from:
CALL apoc.export.cypher.all('myfilename.cypher');
This writes a file of Cypher statements into the import folder.
Go to the database instance where the data needs to be imported and copy the file into its import folder. Then run the command below using cypher-shell:
CALL apoc.cypher.runFile("myfilename.cypher", {}) YIELD row, result;
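Note that, depending on the APOC version, export to and import from files may need to be enabled first (check the docs linked below for your version):

# in neo4j.conf (or apoc.conf for newer APOC releases)
apoc.export.file.enabled=true
apoc.import.file.enabled=true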
For more advanced options, follow the links below:
https://neo4j.com/docs/labs/apoc/current/export/cypher/
http://neo4j-contrib.github.io/neo4j-apoc-procedures/3.4/cypher-execution/run-cypher-scripts/
I found the following workaround for copying the data from one server in the cluster to all the others, after using the neo4j-import tool:
Stop all nodes.
On the new node/server, where you need your data to be copied, you have to create the database folder for that graph (in my case loadTest):
/neo4j-enterprise-3.1.0/data/databases/loadTest.db
Then, from the source node/server that is holding the data, copy the neostore.id file to the destination server's db folder (loadTest.db from the previous step); see the sketch after this list.
Start all nodes. In the background neo4j will copy data from other cluster servers to the new node.
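For the copy step, something like this (the host name is a placeholder; the install path matches the one above):

scp /neo4j-enterprise-3.1.0/data/databases/loadTest.db/neostore.id user@new-node:/neo4j-enterprise-3.1.0/data/databases/loadTest.db/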
For embedded mode, you just need to locate the Neo4j database folder, then zip it and send it to the remote system.
In the code where you created the GraphDatabaseService, you will have given the target location.
Check it: if it's a relative path, the database is probably inside your project folder.
Now, to run the DB instance in the browser, you will need the Neo4j community server, pointed at the folder containing the index folder. So if your neo4j-db is located at $project/tmp/neo4j-db, you give the file path up to this folder (the index folder will be inside it).
Edit
The folder that will contain the schema and index folders needs to be zipped. You can upload and unzip the folder at a certain location on your standalone server using PuTTY. Then just change org.neo4j.server.database.location in the conf/neo4j-server.properties file.
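For example, assuming the unzipped database ended up at /home/user/neo4j-db (a placeholder path):

# in conf/neo4j-server.properties
org.neo4j.server.database.location=/home/user/neo4j-db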