The Keycloak documentation says:
bin/kc.[sh|bat] start --import-realm
When you set the --import-realm
option, the server is going to try to import any realm configuration
file from the data/import directory. Each file in this directory
should contain a single realm configuration. Only regular files using
the .json extension are read from this directory, sub-directories are
ignored.
Where should I put my .json file? They say data/import, but where exactly?
PS: I'm not running Keycloak in a Docker container.
In the root of the Keycloak installation, create the folder data/import and add your realm JSON file(s) into it.
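For example (a minimal sketch; the realm file name and its location are placeholders):

# from the root of the Keycloak distribution
mkdir -p data/import
cp /path/to/my-realm.json data/import/
bin/kc.sh start --import-realm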
I have a .sql file (a database dump) which I am trying to import using phpMyAdmin, and I keep getting a timeout error.
The file is 46.6 MB (zipped).
Please note I am not on XAMPP; I am using a GoDaddy phpMyAdmin installation to manage the database.
What I've tried:
Re-downloaded the file as a zip and tried importing it. Still failed.
In the phpMyAdmin import screen there is the option "Allow the interruption of an import in case the script detects it is close to the PHP timeout limit. (This might be a good way to import large files, however it can break transactions.)". I tried importing the db with it unselected and again with it selected; both failed. Which should it be?
What else can I do?
Nothing worked for me, except SSH.
What you need:
The username and password for the database you are importing into
Your cPanel username and password, plus the server's IP address (for PuTTY)
I had to upload the .sql file to a folder under public_html.
Download PuTTY.
In PuTTY you need the IP address of the hosting server as well as your cPanel username and password (so have those handy).
Once in, you have to enter your cPanel password.
Use the "cd" (change directory) command to change to the directory where you placed your .sql file.
Once there, use the following command:
mysql -p -u user_name database_name < file.sql
(Note: replace 'user_name', 'database_name', and 'file.sql' with the actual names.)
You will be prompted for your database password, and then your database will be imported.
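For reference, the whole session condenses to something like this (the IP address, usernames, and file names are placeholders):

# connect to the server over SSH (PuTTY on Windows, or any ssh client)
ssh cpanel_user@203.0.113.10
# change to the folder where the .sql file was uploaded
cd ~/public_html/db-import
# import; you will be prompted for the database user's password
mysql -p -u user_name database_name < file.sql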
Useful link: https://www.siteground.co.uk/kb/exportimport-mysql-database-via-ssh/
You can try unzipping the file locally and importing the uncompressed .sql file; the overhead of uncompressing the file in memory could be the problem for phpMyAdmin. Generally, though, what Shadow said is correct and you should use some other means for import (like the command-line client). You could also use the phpMyAdmin UploadDir feature to put the file in a special folder that phpMyAdmin can directly access on the server. This can help with a lot of the resource limits the webserver imposes.
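A sketch of the UploadDir approach (the folder name is an example; the directive is phpMyAdmin's $cfg['UploadDir'] setting):

# create the upload folder next to the phpMyAdmin installation
mkdir -p /path/to/phpmyadmin/upload
# point phpMyAdmin at it by adding this line to config.inc.php:
#   $cfg['UploadDir'] = 'upload';
# copy the dump into it; it then appears in the Import tab's server-side file list
cp file.sql /path/to/phpmyadmin/upload/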
Currently running a WHM/cPanel server on CentOS. The server seems to be running fine, no issues there. However, I'm using a deployment process that puts files outside of the document root, e.g.
~/deployment
instead of:
~/public_html
Obviously I need to point public_html to this folder so my site will run. So I'm removing public_html and creating a symlink pointing to the new deployment folder. This results in a 500 error.
So looking at the logs I've discovered that it produces the following error:
Directory "/home/xyz/deployment" is writeable by group
Checking the file permissions, it looks as though the symlink is 777 where I need it to be 755 for the server to allow viewing.
Is there a setting in WHM? Is there a setting in CentOS? I have another box running that doesn't have this issue, so I'm assuming this is related to the current setup of this machine.
Any help would be appreciated, thanks.
When you create a hard link to a file or folder, it inherits the permissions of the original file/folder; a soft link, on the other hand, always shows 777. So I think you can use rsync instead of a symlink, which covers both purposes (see the sketch after this list):
1. keep a folder with all the files from the source
2. apply your own permissions to that folder
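A minimal sketch (the paths are from the question; the permission flags are an assumption about what the server expects):

# mirror the deployment folder into public_html, forcing 755 on dirs and 644 on files
rsync -a --delete --chmod=D755,F644 ~/deployment/ ~/public_html/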
I'd like to run an OwnCloud instance with an alternative configuration directory. The client version is 1.6.2. I use the following command:
owncloud --confdir /path/to/owncloud.cfg
The client always tries to take the default directory to get the configuration, and if this is empty it asks me for the server address, user name, password and so on, and creates a new owncloud.cfg again.
I found some descriptions on the internet where this seems to work, but not on my machine. Does somebody have an idea?
Does anyone know how to migrate data from one instance of Neo4j to another? To be more precise, I want to know how to move the data from a Neo4j instance on my local machine to another one on a remote machine.
I'm working on my Windows machine with Eclipse and embedded Neo4j. I need to transfer this data to a remote Neo4j instance on a CentOS machine. Please help me with this.
Not sure how to do it for an embedded Neo4j DB.
But for a standalone server, and in case you have something like the command-line tool PuTTY on your Windows machine, this should work. Instead of $NEO4J_HOME you can also use the normal path without the env variable.
$NEO4J_HOME/bin/neo4j stop
cd $NEO4J_HOME/data
tar -cvf graph.db.tar graph.db
gzip graph.db.tar
scp -i ~/some_path/key_for_remote_server.pem ./graph.db.tar.gz username@your_remote_domain.com:~/
ssh -i ~/some_path/key_for_remote_server.pem username@your_remote_domain.com
On your remote server (at least this works for Ubuntu):
Maybe you need to use "sudo" (prefix the commands with sudo).
mv ./graph.db.tar.gz /some_path/
cd /some_path/
gunzip graph.db.tar.gz
tar -xvf graph.db.tar
(If /some_path/ is not $NEO4J_HOME/data, move the extracted graph.db folder there before starting Neo4j.)
$NEO4J_HOME/bin/neo4j start
$NEO4J_HOME/bin/neo4j status
You can migrate the data using the APOC procedures. Run the query below in cypher-shell on the instance from which the data needs to be exported:
CALL apoc.export.cypher.all('myfilename.cypher');
This writes a file of Cypher statements into the import folder.
Go to the database instance where the data needs to be imported and copy the file into its import folder. Then run the command below using cypher-shell:
CALL apoc.cypher.runFile("myfilename.cypher",{}) YIELD row, result;
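Alternatively, if you export with the cypher-shell format, you can replay the file from the OS shell instead of using APOC on the import side (a sketch; the user and password are placeholders):

# on the exporting instance, in cypher-shell:
#   CALL apoc.export.cypher.all('myfilename.cypher', {format:'cypher-shell'});
# then, on the importing server:
cat import/myfilename.cypher | bin/cypher-shell -u neo4j -p secret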
For more advanced options follow the below links:
https://neo4j.com/docs/labs/apoc/current/export/cypher/
http://neo4j-contrib.github.io/neo4j-apoc-procedures/3.4/cypher-execution/run-cypher-scripts/
I found the following workaround for copying the data from one server in the cluster to all the others, after using the neo4j-import tool:
Stop all nodes.
On the new node/server, where you need your data to be copied, you have to create the database folder for that graph (in my case loadTest):
/neo4j-enterprise-3.1.0/data/databases/loadTest.db
Then, from the source node/server that is holding the data, copy the neostore.id file into the destination server's db folder (the loadTest.db folder from the previous step); see the sketch after these steps.
Start all nodes. In the background neo4j will copy data from other cluster servers to the new node.
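A sketch of steps 2 and 3 from the shell (the hostname is a placeholder; the paths are the ones from the answer):

# on the new node: create the empty database folder
mkdir -p /neo4j-enterprise-3.1.0/data/databases/loadTest.db
# on the source node: copy neostore.id across
scp /neo4j-enterprise-3.1.0/data/databases/loadTest.db/neostore.id user@new-node:/neo4j-enterprise-3.1.0/data/databases/loadTest.db/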
For embedded mode, you would just need to locate the Neo4j DB folder, then zip it and send it to the remote system.
In your code, where you create the GraphDatabaseService, you will have given the target location.
If it's a relative path, the database might be in your project folder.
Now, to run the DB instance in the browser, you will need to use the Neo4j community server and point it to the folder containing the index folder. So if your neo4j-db is located at $project/tmp/neo4j-db, then you give the file path up to this folder (the index folder will be inside it).
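For the "zip and send" part, something like this works (a sketch; tmp/neo4j-db is the example path from above, and the remote host is a placeholder):

# from the project root: pack the embedded DB folder and copy it over
tar -czf neo4j-db.tar.gz -C tmp neo4j-db
scp neo4j-db.tar.gz user@remote-host:~/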
Edit
The folder that contains the schema and index folders needs to be zipped. You can upload the zip and extract it at a chosen location on your standalone server using PuTTY. Then just change org.neo4j.server.database.location in the conf/neo4j-server.properties file.
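For reference, the property looks like this (the path is an example; this applies to the older 2.x-style community server that uses neo4j-server.properties):

# conf/neo4j-server.properties
org.neo4j.server.database.location=data/neo4j-db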
I have a Tomcat 6 server running on a CentOS 6 machine, and so far so good.
In one of my webapps I need to use a context param to access an external folder located on the filesystem. I configured my server.xml like this (relevant portion of the <Host> tag only):
<Context path="/userimages" docBase="/home/someuser/faces/32x32" debug="0" reloadable="true" crossContext="true"/>
When I start the server I get this error:
java.lang.IllegalArgumentException: Document base /home/someuser/faces/32x32 does not exist or is not a readable directory
I read something about folder permissions, so I set both the "32x32" and "webapps" folders to 777, but it's still not working... any idea how to fix this?
P.S. On Windows it works perfectly.
My suggestion is to put your data into /usr/share/tomcat6/conf/context.xml, which is a symlink to /etc/tomcat6/context.xml on CentOS 6. At least tomcat6 does read the contents of that file when it restarts, and I had some luck getting resource data loaded from there. It would seem that this file is new in tomcat6.
I used strace to check which files it was visiting and it does run stat() on the various files like /var/lib/tomcat6/webapps/*/META-INF/context.xml but nowhere does it actually open() those files, so I'm pretty sure it does not read the contents. Maybe some bug? Maybe imaginary future feature?
I managed to get Plandora (it uses a context to supply MySQL database connection details) running on CentOS 6 with these packages (from yum):
apache-tomcat-apis-0.1-1.el6.noarch
java-1.6.0-openjdk-1.6.0.0-1.61.1.11.11.el6_4.i686
mysql-connector-java-5.1.17-6.el6.noarch
tomcat6-6.0.24-52.el6_4.noarch
tomcat6-servlet-2.5-api-6.0.24-52.el6_4.noarch
tomcat6-el-2.1-api-6.0.24-52.el6_4.noarch
tomcat6-admin-webapps-6.0.24-52.el6_4.noarch
tomcat6-jsp-2.1-api-6.0.24-52.el6_4.noarch
tomcat6-lib-6.0.24-52.el6_4.noarch
tomcat6-webapps-6.0.24-52.el6_4.noarch
Just in case anyone else is trying to get Plandora to work on CentOS 6, you also need to make sure you symlink:
ln -s /usr/share/java/mysql-connector-java.jar /usr/share/tomcat6/lib/