Titan server start fails

I see the following message when starting titan-server:
Caused by: InvalidRequestException(why:Keyspace names must be case-insensitively unique ("titan" conflicts with "titan"))
at org.apache.cassandra.thrift.Cassandra$system_add_keyspace_result.read(Cassandra.java:33158)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_system_add_keyspace(Cassandra.java:1408)
at org.apache.cassandra.thrift.Cassandra$Client.system_add_keyspace(Cassandra.java:1395)
at com.netflix.astyanax.thrift.ThriftClusterImpl$9.internalExecute(ThriftClusterImpl.java:250)
at com.netflix.astyanax.thrift.ThriftClusterImpl$9.internalExecute(ThriftClusterImpl.java:247)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:60)
... 26 more
Here is how I start it:
~/titan-server-0.4.0$ bin/titan.sh -c cassandra-es start
What am I missing? Thanks for any help.
Of course, after I run
titan.sh -c cassandra-es clean
it starts up just fine. Does that mean there was something wrong with my data?

You can directly start Titan with:
bin/titan.sh start
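If you want to look at (or remove) the conflicting keyspace yourself instead of running clean, you can inspect the bundled Cassandra with cqlsh. This is only a sketch, assuming a cqlsh client is available and the embedded Cassandra is running on localhost with its default ports:
cqlsh localhost
DESCRIBE KEYSPACES;   -- the conflicting "titan" keyspace should be listed here
DROP KEYSPACE titan;  -- drops the old graph data; the clean target has a similar effect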

Related

Mappers fail for pig to insert data into MongoDB

I am trying to import a file from HDFS to MongoDB using MongoInsertStorage with Pig. The files are large, around 5 GB. The script runs fine when I run it in local mode with
pig -x local example.pig
However, if I run it in mapreduce mode, most of the mappers fail with the following error:
Error: com.mongodb.ConnectionString.getReadConcern()Lcom/mongodb/ReadConcern;
Container killed by the ApplicationMaster.
Container killed on request.
Exit code is 143 Container exited with a non-zero exit code 143
Can someone help me solve this issue? I also increased the memory allocated to the YARN containers, but that hasn't helped.
Some mappers are also timing out after 300 seconds.
The Pig script is as follows:
REGISTER mongo-java-driver-3.2.2.jar;
REGISTER mongo-hadoop-core-1.4.0.jar;
REGISTER mongo-hadoop-pig-1.4.0.jar;
REGISTER mongodb-driver-3.2.2.jar;
DEFINE MongoInsertStorage com.mongodb.hadoop.pig.MongoInsertStorage();
SET mapreduce.reduce.speculative true;
BIG_DATA = LOAD 'hdfs://example.com:8020/user/someuser/sample.csv' USING PigStorage(',') AS (a:chararray, b:chararray, c:chararray);
STORE BIG_DATA INTO 'mongodb://insert.some.ip.here:27017/test.samplecollection' USING MongoInsertStorage('', '');
I found a solution.
For the error:
Error: com.mongodb.ConnectionString.getReadConcern()Lcom/mongodb/ReadConcern;
Container killed by the ApplicationMaster.
Container killed on request.
Exit code is 143 Container exited with a non-zero exit code 143
I changed the JAR versions: mongo-hadoop-core and mongo-hadoop-pig from 1.4.0 to 2.0.2, and the MongoDB Java driver from 3.2.2 to 3.4.2. This eliminated the ReadConcern error on the mappers!
For the timeout, I added this after registering the jars:
SET mapreduce.task.timeout 1800000
I had been using SET mapred.task.timeout, which didn't work.
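Putting the fix together, the registration block at the top of the script would then look something like this (the exact jar file names are assumptions based on the versions above):
REGISTER mongo-java-driver-3.4.2.jar;
REGISTER mongo-hadoop-core-2.0.2.jar;
REGISTER mongo-hadoop-pig-2.0.2.jar;
DEFINE MongoInsertStorage com.mongodb.hadoop.pig.MongoInsertStorage();
SET mapreduce.task.timeout 1800000;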
Hope this helps anyone who has a similar issue!

Hawq init failed -- "postgres" is needed by initdb

After building incubator-hawq on CentOS 7.1, I tried to init it, but the error below occurs:
20160516:18:10:43:002036 hawqinit.sh:host-172-16-0-105:hawqadmin-[INFO]:-Loading hawq_toolkit...
ALTER ROLE
20160516:18:10:44:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-20160516:18:10:43:002036 hawqinit.sh:host-172-16-0-105:hawqadmin-[INFO]:-Loading hawq_toolkit...
20160516:18:10:44:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-Master init successfully
20160516:18:10:44:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-Init segments in list: ['hawq-master']
20160516:18:10:44:001766 hawq_init:host-172-16-0-105:hawqadmin-[DEBUG]:-Start to init segment on node 'hawq-master'
20160516:18:10:44:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-Total segment number is: 1
fgets failure: Success
The program "postgres" is needed by initdb but was either not found in the same directory as "/usr/hawq/bin/initdb" or failed unexpectedly.
Check your installation; "postgres -V" may have more information.
20160516:18:10:45:002318 hawqinit.sh:host-172-16-0-105:hawqadmin-[ERROR]:-Postgres initdb failed
20160516:18:10:45:002318 hawqinit.sh:host-172-16-0-105:hawqadmin-[ERROR]:-Segment init failed on host-172-16-0-105
20160516:18:10:45:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-20160516:18:10:45:002318 hawqinit.sh:host-172-16-0-105:hawqadmin-[ERROR]:-Postgres initdb failed
20160516:18:10:45:002318 hawqinit.sh:host-172-16-0-105:hawqadmin-[ERROR]:-Segment init failed on host-172-16-0-105
20160516:18:10:45:001766 hawq_init:host-172-16-0-105:hawqadmin-[ERROR]:-HAWQ init failed on hawq-master
20160516:18:10:46:001766 hawq_init:host-172-16-0-105:hawqadmin-[INFO]:-0 of 1 segments init successfully
20160516:18:10:46:001766 hawq_init:host-172-16-0-105:hawqadmin-[ERROR]:-Segments init failed, exit
When I run the command, the output shows:
[hawqadmin@host-172-16-0-105 hawqAdminLogs]$ postgres -V
postgres (HAWQ) 8.2.15
Any advice? Thanks!
If "postgres -V" works, that means the postgres binary is good.
Before you do "hawq init cluster", please make sure:
1) $GPHOME in greenplum_path.sh is correctly set to the directory of hawq binary, i.e, /usr/hawq in your case
2) source $GPHOME/greenplum_path.sh
3) check if initdb and postgres binary is in $GPHOME/bin
From the error you pasted above, 2 possible causes:
(1) The binary postgres called is not /usr/hawq/bin/postgres, You can use which postgres to check the path.
(2) The dependent lib for postgres may be wrong. You can use ldd for linux or otool for mac to print all dependent lib paths, and check them.
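For example, a few quick checks on the shell (the /usr/hawq prefix is taken from the error above; adjust it if your install lives elsewhere):
source /usr/hawq/greenplum_path.sh
which postgres                                    # should print /usr/hawq/bin/postgres
ls /usr/hawq/bin/initdb /usr/hawq/bin/postgres    # both binaries should exist side by side
ldd /usr/hawq/bin/postgres                        # every dependent library should resolve, none "not found"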
Moreover, if any error occurs when initializing HAWQ, please check the logs in ~/hawqAdminLogs/; you may find the specific error message there.
Hope this helps you find the root cause.
Recently I faced the same error while initializing a cluster.
postgres -V showed the correct version, which postgres showed /usr/local/hawq/bin/postgres, and the path was already set, yet I still faced the above error.
It was finally resolved by setting LD_LIBRARY_PATH to /usr/local/hawq/lib/ and sourcing it via the .bashrc file.
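A minimal sketch of that fix, assuming the /usr/local/hawq prefix from this answer:
# add this line to ~/.bashrc, then reload the shell environment
export LD_LIBRARY_PATH=/usr/local/hawq/lib/:$LD_LIBRARY_PATH
source ~/.bashrc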
Looks like you might have installed the hawq binaries in a different directory. Please check the following (a quick check is sketched below):
1. Make sure you have all the right PATH entries set.
2. Check that the hawq initdb binary is in the /usr/hawq/bin/ directory.
3. Make sure you have successfully compiled hawq and installed it.
4. Check that postgres is in the same directory as initdb.
5. If there is more than one postgres on your machine, make sure the path of the postgres that sits next to initdb is in your PATH.
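For example (the /usr/hawq prefix is assumed from the question):
type -a postgres                   # lists every postgres on the PATH; the one next to initdb should come first
export PATH=/usr/hawq/bin:$PATH    # if it does not, put that directory at the front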

cannot attach to service manager-error

I am new to Firebird and I would like to trace my Firebird database activities, hence I am trying to use the Audit/Trace Services.
My Firebird database is on the server 10.7.105.8.
I am running this command in cmd:
C:\Program Files\Firebird\Firebird_2_5\bin>fbtracemgr -se 10.7.105.8:3050:service_mgr -user SYSDBA -password masterkey -start -name "User Trace 1" -config "fbtrace.conf" > C:\Users\Babak\Desktop\trace.out
but I get this error:
Can not attach to service manager
Service 3050 : Service_mgr is not defined
What should I do to solve this problem?
Thank you so much.
EDIT
Thank you for your hints. I think my trace process works fine, but I can't find the information I need in my trace.out file.
When I start the trace, the command prompt shows the session starting; if at this point I take a look at trace.out, I can only see this:
Trace Session ID 3 Started
I run some SELECT queries in Firebird and then finish the trace with Ctrl+C; the only things I can see in trace.out are something like this:
Trace session ID 3 started
2015-07-08 10:49:59.868874 ***** loading fbclient.dll proc=4116 64Bit DLL Preload
2015-07-08 10:49:59.869066 GetDllDirectoryA=""
2015-07-08 10:49:59.869075 GetModuleFileNameA="C:\Program Files\Firebird\Firebird_2_5\bin\fbclient.dll"
2015-07-08 10:49:59.869086 Log-Level is set to 0
2015-07-08 10:49:59.869096 fbclient.dll loaded by: C:\Program Files\Firebird\Firebird_2_5\bin\fbtracemgr.exe
2015-07-08 10:49:59.869113 ***** dimensio integration successfully fbclient.dll
2015-07-08 10:58:10.091330 ***** cleanup unload fbclientorg.dll proc=4116
and no more information about the queries I have run.
Could you please tell me what I have done wrong, or what else I should do?
As Mark says, check the file "fbtrace.conf". This is a text file and you will see something like this:
# default database section
#
<database>
# Do we trace database events or not
enabled false
# Operations log file name. For use by system audit trace only
#log_filename
....
....
# Put transaction start/end records
log_transactions false          <-- TO TEST, SET THIS TO TRUE
# Put sql statement prepare records
log_statement_prepare false     <-- TO TEST, SET THIS TO TRUE
Set to true whatever you need to trace, save the file, and check the result.
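For example, to capture the SELECT statements from the session above, a minimal change might look like this (option names as they appear in the stock Firebird 2.5 fbtrace.conf; double-check them against your copy):
<database>
enabled true
# write a record when a statement finishes, including its SQL text
log_statement_finish true
</database>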
Firebird connection strings are of the format:
host/port:database
Where /port is optional and defaults to 3050, and database is either the alias or path of a database, or the name of a service. Replace :3050 with /3050 (or leave it off entirely).
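Applied to the command in the question, that would look something like this (everything else left as it was):
fbtracemgr -se 10.7.105.8/3050:service_mgr -user SYSDBA -password masterkey -start -name "User Trace 1" -config "fbtrace.conf" > C:\Users\Babak\Desktop\trace.out
or simply 10.7.105.8:service_mgr, since 3050 is the default port.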
The following worked for me:
Open the Start menu.
Search for "Services" and open it.
Search for Firebird Guardian in the services list.
Start Firebird Guardian if it is stopped, or restart it if it is running.
Now try to connect to your server. It will work.

Error CREATEing SolrCore ... Specified config does not exist in Zookeeper:default

I used the following command:
./solr -e cloud -z localhost:2181 -noprompt
The final message is the following:
{
"responseHeader":{
"status":0,
"QTime":1616},
"failure":{
"":"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'gettingstarted_shard2_replica1': Unable to create core [gettingstarted_shard2_replica1] Caused by: Specified config does not exist in ZooKeeper:default",
"":"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'gettingstarted_shard1_replica1': Unable to create core [gettingstarted_shard1_replica1] Caused by: Specified config does not exist in ZooKeeper:default",
"":"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'gettingstarted_shard2_replica2': Unable to create core [gettingstarted_shard2_replica2] Caused by: Specified config does not exist in ZooKeeper:default",
"":"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'gettingstarted_shard1_replica2': Unable to create core [gettingstarted_shard1_replica2] Caused by: Specified config does not exist in ZooKeeper:default"}}
I confirmed that ZooKeeper is running:
[zk: localhost:2181(CONNECTED) 0]
I have looked almost everywhere and cannot get past this. Can anyone help?
I recently downloaded and installed Solr 4.10.3 and was going through the official Quick Start using the command:
bin/solr start -e cloud -noprompt
and encountered the same-looking exceptions as you:
"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'gettingstarted_shard2_replica2': Unable to create core [gettingstarted_shard2_replica2] Caused by: Specified config does not exist in ZooKeeper:default"
Looking further up in the bash output, I also saw the error line:
bin/solr: line 1085: jar: command not found
This was the reason for the exceptions: the "jar" command was not on the current PATH. After putting the "jar" command on my PATH, these exceptions no longer show up. It might be the same reason why you are getting the exceptions.
I am on a Fedora machine and used the following guide to set up jar, java, javac, etc. via the alternatives command (but I think you could just add the java/bin directory to your current PATH to solve the issue):
https://ask.fedoraproject.org/en/question/59412/cannot-find-oracle-jdk-on-fedora-21/
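If you only want to add the JDK's bin directory to the PATH, a minimal sketch (the JDK location below is an assumption; point it at your own install):
export PATH=$PATH:/usr/java/latest/bin   # assumed JDK location; adjust to your install
which jar                                # should now resolve to the JDK's jar tool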

Issues with running two instances of searchd

I have just updated our Sphinx server from 1.10-beta to 2.0.6-release, and now I have run into some issues with searchd. Previously we were able to run two instances of searchd next to each other by specifying two different config files, i.e.:
searchd --config /etc/sphinx/sphinx.conf
searchd --config /etc/sphinx/sphinx.staging.conf
sphinx.conf listens to 9306:mysql41, and 9312, while sphinx.staging.conf listens to 9307:mysql41 and 9313.
After we updated to 2.0.6, however, a second instance is never started. Or rather, the output makes it seem like it starts, and a pid file is created, etc., but for some reason only the first searchd instance keeps running, and the second seems to shut down right away. So while trying to run searchd --config /etc/sphinx/sphinx.conf twice (if that was the first one started) complains that the pid file is in use, trying to run searchd --config /etc/sphinx/sphinx.staging.conf (if that is the second started instance) "starts" the daemon again and again, only no new process is created.
Note that if I switch the order of these commands when first starting the processes, then sphinx.conf is the instance that is not really started.
I have checked, and rechecked, that these ports are only used by searchd.
Does anyone have any idea what I can do or try next? I installed it from source on Ubuntu 10.04 LTS with:
./configure --prefix /etc/sphinx --with-mysql --enable-id64 --with-libstemmer
make -j4 install
Note to self: Check the logs!
RT indices use binary logs to enable crash recovery. Since my old config files did not specify where these should be stored, both instances of searchd tried to write to the same binary logs. The instance started last was of course not permitted to manipulate these files, and thus exited with a fatal error:
[Fri Nov 2 17:13:32.262 2012] [ 5346] FATAL: failed to lock
'/etc/sphinx/var/data/binlog.lock': 11 'Resource temporarily unavailable'
[Fri Nov 2 17:13:32.264 2012] [ 5345] Child process 5346 has been finished,
exit code 1. Watchdog finishes also. Good bye!
The solution was simple: make sure to specify a binlog_path inside the searchd configuration section of each configuration file, pointing at a different writable directory for each instance:
searchd
{
[...]
binlog_path = /path/to/writable/directory
[...]
}
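For example, the two searchd sections might end up looking like this (the listen lines use the ports from the question; the binlog directories are placeholders, any writable, instance-specific directory will do):
# /etc/sphinx/sphinx.conf
searchd
{
listen = 9306:mysql41
listen = 9312
binlog_path = /etc/sphinx/var/data/production
}
# /etc/sphinx/sphinx.staging.conf
searchd
{
listen = 9307:mysql41
listen = 9313
binlog_path = /etc/sphinx/var/data/staging
}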