LiteDB.Shell cannot open my database file - litedb

I have a database file called db.MyDb. I was able to open it before and insert some data. The file now holds important data, but a file was uploaded into the database by mistake and I need to remove it; the trouble is that I can no longer open the database file! @mbdavid

Your file db.MyDb is in the v1.0.4 format. If you don't want to upgrade to v2, use the v1 database DLL (including the v1 Shell tool).
If you want to upgrade your file to v2, run (using the v2 shell):
LiteDB.Shell.exe db.MyDb --upgrade db_v2.MyDB

Related

PostgreSQL data directory configuration

I am using PostgreSQL on CentOS, and I changed the data directory so that PostgreSQL stores its data on a different disk:
nano /usr/lib/systemd/system/postgresql.service
#Environment=PGDATA=/var/lib/pgsql/data
Environment=PGDATA=/data/pgsql/data
However, after installing a package update, the contents of the configuration file were changed back to the default settings.
Do I need to check the configuration file every time I install a package update? Or is there a way to preserve the config file?
There are two ways to deal with this:
The old way:
You create a file /etc/systemd/system/postgresql.service that contains
.include /usr/lib/systemd/system/postgresql.service
[Service]
Environment=PGDATA=/data/pgsql/data
The new way:
You create a directory /etc/systemd/system/postgresql.service.d that contains a file named (for example) pgdata.conf with the contents
[Service]
Environment=PGDATA=/data/pgsql/data
Then notify systemd with
systemctl daemon-reload
This configuration change will override the corresponding value from /usr/lib/systemd/system/postgresql.service, so the change will survive an upgrade.
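To verify that the override took effect, you can use standard systemd commands (shown here as a quick sanity check):
systemctl daemon-reload
systemctl cat postgresql.service
systemctl show postgresql.service --property=Environment
systemctl cat prints the unit file plus any drop-ins, and the last command should report Environment=PGDATA=/data/pgsql/data.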

Unknown property errors trying to do a with Flyway migration with per script config files

My company is evaluating Flyway for database releases. We have an AWS PostgreSQL version 11.2 database and I have installed Flyway Community Edition version 6.1.2.
I have successfully baselined the database and run several basic DDL scripts using Flyway migrate. However, I am now testing a more complicated scenario in which I need to run multiple scripts as one migration, where each script has to connect as a different PostgreSQL user. I have tried to do this by setting up two SQL files, each with its own config file, as described here: https://flywaydb.org/documentation/scriptconfigfiles
Every time I run the migrate command I get a property error: "ERROR: Unknown script configuration property: flyway.user" or "ERROR: Unknown script configuration property: user", etc, etc.
For debugging purposes I removed one SQL/config combo, so that I now have only one file of each. The files are named V2020.1.14.08.41.00__role_test1.sql and V2020.1.14.08.41.00__role_test1.sql.conf. I did confirm that any changes to that config file are being picked up by the migrate command. My config file contains the following properties (values changed for security reasons):
flyway.url=jdbc:postgresql://...
flyway.user=user1
flyway.password=password
flyway.schemas=test
I have also tried removing the flyway prefix:
url=jdbc:postgresql://...
user=user1
password=password
schemas=test
And removing the url parameter (both flyway.url and url) so the migration reads that value from the default flyway.conf file. Example:
user=user1
password=password
schemas=test
I get the errors every time. Anyone have any ideas? All help is greatly appreciated.
There is a typo in your code:
flyeay.user=user1
It should be:
flyway.user=user1
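As a side note, if the per-script config files keep misbehaving, the same properties can also be supplied per run through the CLI's -configFiles option; the file names below are hypothetical:
flyway -configFiles=flyway.conf,user1.conf migrate
That lets you keep one config file per database user and pick the right one for each migration run.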

"mount" a PostgreSQL database from files not Backup

I've been given a project to extract data from a PostgreSQL database. I've no previous experience with PostgreSQL but the project I have is to bug fix existing code, so all the logic to connect to the engine and get data is already in place.
The problem I have is that the database has been given to me in the form of the folders and files straight from the source HDD, not a backup ("get the customer to give you a backup instead" isn't an option here).
The folders also contained the actual PostgreSQL binaries, so I looked at the version (9.4.14), downloaded the nearest one (9.4.18) from the PostgreSQL site, and installed it. Now all I have to do is somehow get it to look at the data files I was given.
I tried the obvious approach of copying the contents of the given data folder into the installed data folder, but after that the PostgreSQL service won't start.
I did find an option in the conf file:
#data_directory = 'ConfigDir'
I changed this to:
data_directory = 'C:\customer\data'
But again the service won't start after this.
The data directory used by the service is defined on the service command line, which overrides any setting in postgresql.conf.
You need to re-create the service in order to change the data directory, e.g.:
Remove the service:
pg_ctl unregister -N postgresql-9.1
postgresql-9.1 is the "real" name of the service, not the "Display Name". You can see that in the properties of the service inside the "services" app.
Then re-create the service with the correct data directory:
pg_ctl register -D c:\customer\data -N postgresql-9.1
Another way of "debugging" startup errors on Windows is to start Postgres from the command line (not through the service), because some errors during startup are not logged to the Postgres logfile but are displayed on the command line. You can do that with e.g.:
pg_ctl start -D c:\customer\data
If the bin directory is not in your PATH you need to specify the full path to it on the command line, e.g.: c:\Postgres9.1\bin\pg_ctl
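If the service still won't start, it is also worth checking that the copied data directory matches the installed binaries: the PG_VERSION file inside the data directory records the major version that created it. Assuming the path from the question:
type c:\customer\data\PG_VERSION
This should print 9.4 for your cluster; anything else means you need binaries of that major version instead.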

How to migrate/shift/copy/move data in Neo4j

Does anyone know how to migrate data from one instance of Neo4j to another? To be more precise, I want to know how to move the data from a Neo4j instance on my local machine to one on a remote machine.
I'm working on my Windows machine with Eclipse and embedded Neo4j. I need to transfer this data to a remote Neo4j instance on a CentOS machine. Please help me with this.
Not sure how to do it for an embedded Neo4j DB.
But for a standalone server, and in case you have something like the command-line tool PuTTY on your Windows machine, this should work. Instead of $NEO4J_HOME you can also use the normal path without the env variable.
$NEO4J_HOME/bin/neo4j stop
cd $NEO4J_HOME/data
tar -cvf graph.db.tar graph.db
gzip graph.db.tar
scp -i ~/some_path/key_for_remote_server.pem ./graph.db.tar.gz username@your_remote_domain.com:~/
ssh -i ~/some_path/key_for_remote_server.pem username@your_remote_domain.com
On your remote server (at least this works for ubuntu):
Maybe you need to use "sudo" (prefix the commands with sudo).
mv ./graph.db.tar.gz /some_path/
cd /some_path/
gunzip graph.db.tar.gz
tar -xvf graph.db.tar
$NEO4J_HOME/bin/neo4j start
$NEO4J_HOME/bin/neo4j status
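As an aside, the separate tar and gzip steps (and gunzip/tar on the remote side) can each be combined into a single command:
tar -czvf graph.db.tar.gz graph.db
tar -xzvf graph.db.tar.gz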
You can migrate the data using an APOC procedure: run the query below in the cypher shell on the instance from which the data needs to be exported:
CALL apoc.export.cypher.all('myfilename.cypher');
This will write a file of Cypher queries to the import folder.
Go to the database instance where the data needs to be imported and copy the file into its import folder. Then run the command below using the cypher shell:
CALL apoc.cypher.runFile("myfilename.cypher",{}) YIELD row, result;
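Note that, depending on your Neo4j/APOC version, exporting to files and running script files both have to be enabled first in the APOC configuration (apoc.conf, or neo4j.conf on older releases):
apoc.export.file.enabled=true
apoc.import.file.enabled=true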
For more advanced options follow the below links:
https://neo4j.com/docs/labs/apoc/current/export/cypher/
http://neo4j-contrib.github.io/neo4j-apoc-procedures/3.4/cypher-execution/run-cypher-scripts/
I found out the following workaround for copying the data from a server in the cluster to all others, after using the neo4j-import tool:
Stop all nodes.
On the new node/server, where you need your data to be copied, you have to create the database folder for that graph (in my case loadTest):
/neo4j-enterprise-3.1.0/data/databases/loadTest.db
Then, from the source node/server that is holding the data, copy the neostore.id file to the destination server's db folder (loadTest.db from the previous step).
Start all nodes. In the background neo4j will copy data from other cluster servers to the new node.
For embedded mode, you would just need to locate the graph's neo4j-db folder, then zip it and send it to the remote system.
In your code, where you created the GraphDatabaseService, you would have given the target location.
Check it: if it's a relative path, the database might be in your project folder.
Now, to run the DB instance in the browser, you will need to use the Neo4j community server and point it to the folder containing the index folder. So if your neo4j-db is located at $project/tmp/neo4j-db, then you give the file path up to this folder (the index folder will be inside it).
Edit
The folder that will contain the schema and index folders needs to be zipped. You can upload and unzip the folder at a certain location on your standalone server using PuTTY. Then just change org.neo4j.server.database.location in the conf/neo4j-server.properties file.
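A minimal sketch of that transfer, assuming the embedded DB lives at tmp/neo4j-db inside your project and that you use PuTTY's pscp from the Windows side (key file, host, and paths are placeholders):
zip -r neo4j-db.zip tmp/neo4j-db
pscp -i key_for_remote_server.ppk neo4j-db.zip username@your_remote_domain.com:~/
After unzipping on the server, point org.neo4j.server.database.location at the extracted folder as described above.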

How do you copy data from file to table in SQL? [duplicate]

This question already has answers here:
Postgres ERROR: could not open file for reading: Permission denied
(17 answers)
Closed 9 years ago.
How do you copy data from a file to a table in SQL? I'm using pgAdmin3 on a Macbook.
The table name is tutor, and the name of the file is tutor.rtf.
I use the following query:
COPY tutor
FROM '/Users/.../tutor.rtf'
WITH DELIMITER ',';
but got the error "permission denied".
The file is not locked, so how do you solve this problem? Or is there any quicker way to copy data from a file to a table, other than INSERT INTO ... VALUES ()?
COPY opens the file using the PostgreSQL server backend, so it requires that the OS user the PostgreSQL server runs as have read permission (for COPY FROM) on the file in question. It also requires the same SQL-level access rights to the table as INSERT, but I suspect it's file permissions that are getting you here.
Most likely the postgres or postgres_ (depending on how you installed PostgreSQL) user doesn't have read access to /Users/somepath/tutor.rtf or some parent directory of that file.
The easiest solution is to use psql's \copy command, which reads the file using the client permissions, rather than those of the server, and uses a path relative to the client's current working directory. This command is not available in PgAdmin-III.
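For example, from a psql session (this assumes tutor.rtf actually contains plain comma-separated text; real RTF markup will not load cleanly):
\copy tutor FROM '/Users/.../tutor.rtf' WITH (DELIMITER ',')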
Newer PgAdmin-III versions have the Import command in the table context menu. See importing tables from file in the PgAdmin-III docs. This does the equivalent of psql's \copy command, reading the file with the access rights of the PgAdmin-III application.
Alternatively, you can use the server-side COPY command by making sure every directory from /Users down to the file has world-execute rights, meaning users can traverse into it, cd into it, etc., but can't list its contents without r rights too. Then either set the file's group to postgres and make sure it has group read rights, or make it world-readable.
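A sketch of that permissions fix on macOS, with /Users/yourname standing in for the real parent directories (adjust to your actual path):
chmod o+x /Users /Users/yourname
chmod o+r /Users/yourname/tutor.rtf
The first command grants traverse rights on each parent directory; the second makes the file itself world-readable.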