Could not create client id - DB failure - jboss

I was trying to start the ATG publishing servers on JBoss. From my JBoss bin directory, I used the following command: .\run.bat -c ATGPublishing -b 0.0.0.0. I got the following error. I am using Oracle 11g. Any idea why this error is coming up?
2013-07-18 10:22:43,025 ERROR [nucleusNamespace.atg.dynamo.messaging.SqlJmsProvider] (/atg/dynamo/service/Scheduler-reusablejobhandler-PATCHBAY-RESTART) could not create client ID for client name Admin-VAIO:8850 : DB failure or maybe the client name already exists in the DB

It seems a lot of people are confused about how to install DynamoDB Local, so here is the solution:
1) Download and install Java; by default it goes into C:\Program Files.
2) Download DynamoDB Local and put it in the directory where you want to work.
3) Unzip it.
4) Note where Java was installed, i.e. a path like C:\Program Files (x86)\Java\jre7\bin.
5) Open cmd (command prompt) and set the Java path, e.g. set path="C:\Program Files (x86)\Java\jre7\bin".
6) Go to the directory where you put DynamoDB Local, e.g. C:\developer\node_modules\npm\dbdynamoDb, and run:
java -Djava.library.path=. -jar DynamoDBLocal.jar
If that gives an error, try:
java -jar DynamoDBLocal.jar
If it still errors, specify a port:
java -jar DynamoDBLocal.jar -port 8050
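To check that DynamoDB Local is actually up, you can list tables against the local endpoint. This is just a sanity check, assuming the AWS CLI is installed; port 8050 matches the example above, and the credentials can be dummy values:
aws dynamodb list-tables --endpoint-url http://localhost:8050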

Related

Error While Running Kafka Server on Windows

While running Kafka on Windows with:
C:\Program Files\kafka_2.12-2.1.0>.\bin\windows\kafka-server-start.bat .\config\server.properties
I get this error:
The system cannot find the path specified.
The syntax of the command is incorrect.
Error: Could not find or load main class Files\kafka_2.12-2.1.0.logs
You cannot have spaces in the file path, e.g. Program Files.
There's no specific reason Kafka needs to be in your Program Files folder. You could move it to C:\kafka, for example. I've been able to run it on Windows 10 (out of my users folder), so it does work.
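For example, a minimal sketch of relocating the install from a cmd prompt (the source path is the one from the question; C:\kafka is just an example target):
move "C:\Program Files\kafka_2.12-2.1.0" C:\kafka
cd C:\kafka
.\bin\windows\kafka-server-start.bat .\config\server.properties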

"mount" a PostgreSQL database from files not Backup

I've been given a project to extract data from a PostgreSQL database. I have no previous experience with PostgreSQL, but the project is to bug-fix existing code, so all the logic to connect to the engine and get data is already in place.
The problem is that the database has been given to me in the form of the folders and files straight from the source HDD, not a backup (which isn't going to happen, so "get the customer to give you a backup instead" isn't an option here).
The folders also contained the actual PostgreSQL binaries, so I looked at the version (9.4.14), downloaded the nearest (9.4.18) from the PostgreSQL site, and installed it. Now all I have to do is somehow get it to look at my given data files.
I tried the obvious: copying the contents of the data folder into the installed data folder, but after that the PostgreSQL service won't start.
I did find a option in the conf file:
#data_directory = 'ConfigDir'
I changed this to:
data_directory = 'C:\customer\data'
But again the service won't start after this.
The data directory used by the service is defined through the service command line, which overrides any setting in postgresql.conf.
You need to re-create the service in order to change the data directory, e.g.:
Remove the service:
pg_ctl unregister -N postgresql-9.1
postgresql-9.1 is the "real" name of the service, not the "Display Name". You can see that in the properties of the service inside the "services" app.
Then re-create the service with the correct data directory:
pg_ctl register -N postgresql-9.1 -D c:\customer\data
Another way of "debugging" startup errors on Windows is to start Postgres from the command line (not through the service), because some errors during startup are not logged in the Postgres logfile but are displayed on the command line. You can do that with e.g.:
pg_ctl start -D c:\customer\data
If the bin directory is not in your PATH you need to specify the full path to it on the command line, e.g.: c:\Postgres9.1\bin\pg_ctl
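Putting it together, the whole sequence might look like this, using the example service name, install path, and data directory from above (run from an elevated command prompt):
c:\Postgres9.1\bin\pg_ctl unregister -N postgresql-9.1
c:\Postgres9.1\bin\pg_ctl register -N postgresql-9.1 -D c:\customer\data
net start postgresql-9.1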

breakpoints in eclipse using postgresql

I am using Helios Eclipse for debugging my code in PostgreSQL.
My aim is to learn how PostgreSQL uses join algorithms during a join query, so I started to debug nodenestloop.c, which is in the Executor folder.
I set breakpoints in that file, but whenever I try to debug it, control goes to main.c and never comes back. How do I constrain control to that particular file (nodenestloop.c)?
Below are the fields I set in the Debug Configurations of Helios Eclipse:
C/C++ Application - src/backend/postgres and
project - pgsql
I followed the steps given in the following link for running the program.
https://wiki.postgresql.org/wiki/Working_with_Eclipse#
I even unchecked the field "Stop on startup at: main", but when I do that, the Step Into and Step Over buttons are not activated, and the following problem pops up:
Could not save master table to file '/home/ravi/workspace/.metadata/.plugins/org.eclipse.core.resources/.safetable/org.eclipse.core.resources'.
/home/ravi/workspace/.metadata/.plugins/org.eclipse.core.resources/.safetable/org.eclipse.core.resources (Permission denied)
So I started Eclipse using sudo, but this time the following error appeared in the Eclipse console:
"root" execution of the PostgreSQL server is not permitted.
The server must be started under an unprivileged user ID to prevent
possible system security compromise. See the documentation for
more information on how to properly start the server.
Could anyone help me with this?
Thank you.
Problem 1: User ID mismatch
Reading between the lines, it sounds like you're trying to debug a PostgreSQL instance that's running as the postgres user, or at any rate as a different user ID than your own. Hence your attempt to use sudo.
That's painful, especially when using an IDE like Eclipse. With plain gdb you can just run gdb under the desired uid, e.g. sudo -u postgres gdb -p 12345 to attach to pid 12345 running as user postgres. This will not work with Eclipse. In fact, running it with sudo has probably left your workspace with some messed-up file permissions; run:
sudo chown -R ravi /home/ravi/workspace/
to fix file ownership.
If you want to debug processes under other user IDs with Eclipse, you'll need to figure out how to make Eclipse run gdb with sudo. Do not just run all of Eclipse with sudo.
Problem 2: Trying to run PostgreSQL under the control of Eclipse
This:
"root" execution of the PostgreSQL server is not permitted. The server must be started under an unprivileged user ID to prevent possible system security compromise. See the documentation for more information on how to properly start the server.
suggests that you're also attempting to let Eclipse start postgres directly. That's very useful if you're trying to debug the postmaster, but since you're talking about the query planner it's clear you want to debug a particular backend. Launching the postmaster under Eclipse is useless for that, you'll be attached to the wrong process.
I think you probably need to read the documentation on PostgreSQL's internals:
Tour of PostgreSQL Internals
PostgreSQL internals through pictures
Documentation chapter - internals
Doing it right
Here's what you need to do - a rough outline, since I've only used Eclipse for Java development and do my C development with vim and gdb (a condensed shell sketch follows this list):
Compile a debug build of PostgreSQL (compiled with ./configure --enable-debug and preferably also CFLAGS="-ggdb -Og -fno-omit-frame-pointer"). Specify a --prefix within your homedir, like --prefix=$HOME/postgres-debug
Put your debug build's bin directory first on your PATH, e.g. export PATH=$HOME/postgres-debug/bin:$PATH
Create a new PostgreSQL instance from your debug build with initdb -U postgres -D $HOME/postgres-debug-data
Start the new instance with PGPORT=5599 pg_ctl -D $HOME/postgres-debug-data -l $HOME/postgres-debug-data.log -w start
Connect with PGPORT=5599 psql postgres
Do whatever setup you need to do
Get the backend process ID with SELECT pg_backend_pid() in a psql session. Leave that session open; it's the one you'll be debugging.
Attach Eclipse's debugger to that process ID, using the Eclipse project that contains the PostgreSQL extension source code you're debugging. Make sure Eclipse is configured so it can find the PostgreSQL source code you compiled with too (no idea how to do that, see the manual).
Set any desired breakpoints and resume execution
In the psql session, do whatever you need to do to make your extension run and hit the breakpoint
When execution pauses at the breakpoint in Eclipse, debug as desired.
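For reference, here is the outline above condensed into a shell sketch; the port, paths, and flags are the examples already used, not the only valid choices:
# build and install a debug PostgreSQL into your home directory
./configure --enable-debug --prefix=$HOME/postgres-debug CFLAGS="-ggdb -Og -fno-omit-frame-pointer"
make && make install
export PATH=$HOME/postgres-debug/bin:$PATH
# create and start a throwaway instance on a non-default port
initdb -U postgres -D $HOME/postgres-debug-data
PGPORT=5599 pg_ctl -D $HOME/postgres-debug-data -l $HOME/postgres-debug-data.log -w start
# open the session you will attach to, and note its backend pid
PGPORT=5599 psql -U postgres postgres
# in psql: SELECT pg_backend_pid();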
Basic misunderstandings?
Also, in case you're really confused about how all this works: PostgreSQL is a client/server application. If you are attempting to debug a client program that uses libpq or odbc, and expecting a breakpoint to trigger in some PostgreSQL backend extension code, that is not going to happen. The client application communicates with PostgreSQL over a TCP/IP socket. It's a separate program. gdb cannot set breakpoints in the PostgreSQL server when it's connected to the client, because they are separate programs. If you want to debug the server, you have to attach gdb to the server. PostgreSQL uses one process per connection, so you have to attach gdb to the correct server process. Which is why I said to use SELECT pg_backend_pid() above, and attach to the process ID.
See the internals documentation linked above, and:
PostgreSQL site - coding
PostgreSQL wiki - developer resources
Developer FAQ
Attaching gdb to a backend on linux/bsd/unix
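For instance, attaching gdb from a terminal (instead of Eclipse) looks roughly like this; the pid is whatever SELECT pg_backend_pid() reported, and ExecNestLoop is the executor function defined in nodeNestloop.c that the question asks about:
sudo -u postgres gdb -p 12345
# then, at the (gdb) prompt:
# (gdb) break ExecNestLoop
# (gdb) continue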
I also faced a similar issue and resolved it after some struggle.
I had misunderstood the following point under "Debugging with child processes" in the wiki (https://wiki.postgresql.org/wiki/Working_with_Eclipse):
5. "Start postmaster & one instance of a postgresql client (for creating one new postgres)"
The above step should be performed from a terminal, by starting the postgres server and one client there.
Once this is done, the debugger in Eclipse needs to be started via "C/C++ Attach to Application".
Hope this helps.

How to migrate/shift/copy/move data in Neo4j

Does anyone know how to migrate data from one instance of Neo4j to another? To be more precise, I want to know how to move the data from a Neo4j instance on my local machine to one on a remote machine. Does anyone have any idea about it?
I'm working on my Windows machine with Eclipse and embedded Neo4j. I need to transfer this data to a remote Neo4j instance on a CentOS machine. Please help me with this.
Not sure how to do it for an embedded Neo4j DB.
But for standalone, and provided you have something like the command-line tool PuTTY on your Windows machine, this should work. Instead of $NEO4J_HOME you can also use the normal path without the env variable:
$NEO4J_HOME/bin/neo4j stop
cd $NEO4J_HOME/data
tar -cvf graph.db.tar graph.db
gzip graph.db.tar
scp -i ~/some_path/key_for_remote_server.pem ./graph.db.tar.gz username@your_remote_domain.com:~/
ssh -i ~/some_path/key_for_remote_server.pem username@your_remote_domain.com
On your remote server (at least this works for ubuntu):
Maybe you need to use "sudo" (prefix the commands with sudo).
mv ./graph.db.tar.gz /some_path/
cd /some_path/
gunzip graph.db.tar.gz
tar -xvf graph.db.tar
$NEO4J_HOME/bin/neo4j start
$NEO4J_HOME/bin/neo4j status
You can migrate the data by using the APOC procedures. Run the query below in cypher-shell on the instance from which the data needs to be exported:
CALL apoc.export.cypher.all('myfilename.cypher');
This writes a file of Cypher queries into the import folder.
Go to the database instance where the data needs to be imported and copy the file into its import folder. Then run the command below using cypher-shell:
CALL apoc.cypher.runFile("myfilename.cypher", {}) YIELD row, result;
For more advanced options, follow the links below:
https://neo4j.com/docs/labs/apoc/current/export/cypher/
http://neo4j-contrib.github.io/neo4j-apoc-procedures/3.4/cypher-execution/run-cypher-scripts/
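As a rough sketch, the same export/import flow can be driven from the OS shell with cypher-shell; the hostnames and credentials here are placeholders, and both instances need apoc.export.file.enabled=true / apoc.import.file.enabled=true in their config:
# on the source instance
cypher-shell -u neo4j -p secret "CALL apoc.export.cypher.all('myfilename.cypher', {});"
# copy the generated file into the target instance's import folder
scp import/myfilename.cypher user@remote-host:/var/lib/neo4j/import/
# on the target instance
cypher-shell -a bolt://remote-host:7687 -u neo4j -p secret "CALL apoc.cypher.runFile('myfilename.cypher', {}) YIELD row, result RETURN count(*);"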
I found the following workaround for copying the data from one server in a cluster to all the others, after using the neo4j-import tool:
Stop all nodes.
On the new node/server, where you need your data to be copied, create the database folder for that graph (in my case loadTest):
/neo4j-enterprise-3.1.0/data/databases/loadTest.db
Then, from the source node/server that is holding the data, copy the neostore.id file into the destination server's db folder (loadTest.db from the previous step).
Start all nodes. In the background, Neo4j will copy the data from the other cluster servers to the new node.
For embedded mode, you just need to locate the graph's neo4j-db folder, then zip it and send it to the remote system.
In your code, wherever you created the GraphDatabaseService, you would have given the target location.
Check whether it is a relative path; if so, the database is probably in your project folder.
Now, to run the DB instance in the browser, you will need to use the Neo4j community server and point it at the folder containing the index folder. So if your neo4j-db is located at $project/tmp/neo4j-db, give the file path up to this folder (the index folder will be inside it).
Edit
The folder that will contain the schema and index folders needs to be zipped. You can upload and unzip the folder at a certain location on your standalone server using PuTTY. Then just change org.neo4j.server.database.location in the conf/neo4j-server.properties file.
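A rough sketch of those steps, with placeholder paths and hostname (PuTTY's pscp stands in for scp on the Windows side):
# on the dev machine: zip the embedded DB folder (it contains the schema and index folders)
zip -r neo4j-db.zip tmp/neo4j-db
pscp neo4j-db.zip user@remote-host:/opt/neo4j/
# on the server: unpack and point the community server at it
ssh user@remote-host "cd /opt/neo4j && unzip neo4j-db.zip"
# then in conf/neo4j-server.properties:
# org.neo4j.server.database.location=/opt/neo4j/tmp/neo4j-db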

tomcat context resource not working

I have a Tomcat 6 server running on a CentOS 6 machine, and so far so good.
In one of my webapps I need to use a context param to access an external folder located in the filesystem. I configured my server.xml like this (relevant portion of the <Host> tag only):
<Context path="/userimages" docBase="/home/someuser/faces/32x32" debug="0" reloadable="true" crossContext="true"/>
When I start the server I get this error:
java.lang.IllegalArgumentException: Document base /home/someuser/faces/32x32 does not exist or is not a readable directory
I read something about folder permissions, so I set both the "32x32" and "webapps" folders to 777, but it's still not working... any idea how to fix this?
P.S. On Windows it works perfectly.
My suggestion is to put your data into /usr/share/tomcat6/conf/context.xml, which is a symlink to /etc/tomcat6/context.xml on CentOS 6. At least Tomcat 6 does read the contents of that file when it restarts, and I had some luck getting resource data loaded from there. It would seem that this file is new in Tomcat 6.
I used strace to check which files it was visiting, and it does run stat() on the various files like /var/lib/tomcat6/webapps/*/META-INF/context.xml, but nowhere does it actually open() those files, so I'm pretty sure it does not read the contents. Maybe a bug? Maybe an imaginary future feature?
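For anyone who wants to repeat that check, the strace invocation can look roughly like this (the pgrep expression and grep filter are examples, not the exact commands from my session):
# watch which context.xml files the Tomcat JVM touches
sudo strace -f -e trace=open,stat -p "$(pgrep -f Bootstrap)" 2>&1 | grep context.xml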
I managed to get Plandora (uses context to supply MySQL database connection details) running on CentOS 6 with these packages (from yum):
apache-tomcat-apis-0.1-1.el6.noarch
java-1.6.0-openjdk-1.6.0.0-1.61.1.11.11.el6_4.i686
mysql-connector-java-5.1.17-6.el6.noarch
tomcat6-6.0.24-52.el6_4.noarch
tomcat6-servlet-2.5-api-6.0.24-52.el6_4.noarch
tomcat6-el-2.1-api-6.0.24-52.el6_4.noarch
tomcat6-admin-webapps-6.0.24-52.el6_4.noarch
tomcat6-jsp-2.1-api-6.0.24-52.el6_4.noarch
tomcat6-lib-6.0.24-52.el6_4.noarch
tomcat6-webapps-6.0.24-52.el6_4.noarch
Just in case anyone else is trying to get Plandora to work on CentOS 6, you also need to make sure you symlink:
ln -s /usr/share/java/mysql-connector-java.jar /usr/share/tomcat6/lib/