I have a Kylin cluster running in Kubernetes, and Superset running in Kubernetes as well.
Kylin is already configured with a built cube, "kylin_sales_cube".
Superset is already configured with the Kylin driver, and the connection is established.
While trying to create a dataset from a Kylin table, I get the following error message:
    An error occurred while creating datasets: Dataset could not be created.
On the other hand, I am able to run queries against the same table; but without a dataset, I cannot use charts.
Any ideas?
It seems to be a missing method implementation in kylinpy (or somewhere else), but until someone fixes it upstream, I suggest that anyone who hits this problem implement the has_table method in the kylinpy plugin. You will find it in kylinpy/sqla_dialect.py; change the method's return statement to the following line:
    return table_name in self.get_table_names(connection, schema)
And everything will be back to normal.
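For reference, here is a minimal sketch of the patched method. The signature follows SQLAlchemy's standard dialect interface; everything around the return statement is an assumption rather than verbatim kylinpy source.

    # Sketch of the patched method in kylinpy/sqla_dialect.py (assumed
    # context: it lives on the Kylin SQLAlchemy dialect class).
    def has_table(self, connection, table_name, schema=None):
        # Report the table as present only if Kylin actually lists it;
        # Superset runs this check before it will create a dataset.
        return table_name in self.get_table_names(connection, schema)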
I have three Docker containers running. The first runs a Python script that writes data from a sensor to the InfluxDB emon_data bucket in a second container. This works perfectly, and I can run queries and create dashboards within InfluxDB. The third container runs Grafana. The data source setting in Grafana that establishes the connection to InfluxDB seems to be correct, as it confirms having a connection to the data source (see picture).
However, when I go to set up a dashboard in Grafana, it keeps throwing an error stating that the database cannot be found (see picture).
I have tried to find information on this error, but I am not finding much, and what I do find seems to apply to much older versions of InfluxDB and Grafana. Any suggestions or pointers on how to resolve this would be much appreciated.
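With InfluxDB 2.x, a "database not found" error can appear even when the data source test passes, for example if the bucket name, organization, or token scope configured in Grafana is off. One way to rule that out is to check whether the token Grafana uses can see the bucket at all; below is a minimal sketch using the influxdb-client Python package, where the URL, org, and token are placeholders for the real values:

    from influxdb_client import InfluxDBClient

    # Placeholders: use the same URL, org, and token that Grafana's
    # data source settings use.
    client = InfluxDBClient(url="http://influxdb:8086",
                            token="MY_TOKEN", org="my-org")

    # List the buckets visible to this token; emon_data should be
    # among them if the token and org are right.
    buckets = client.buckets_api().find_buckets().buckets
    print([b.name for b in buckets])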
I have a compute image running a MySQL database, with confirmed access to the outside world.
When I use the curl script to create an 'interface' to this database, no errors are returned. However, in the console GUI there is a warning triangle, and no read replica can be created.
Are there any known issues with this functionality, and is there any way I could get more logging out of the response?
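As for getting more logging out of the response: one option is to replay the same call with wire-level logging switched on and inspect the raw response. Below is a rough Python sketch of that idea; the endpoint and payload are placeholders standing in for whatever the curl script actually sends:

    import http.client
    import logging

    import requests

    # Wire-level logging: prints full request/response headers and
    # status, which is where warning details often hide.
    http.client.HTTPConnection.debuglevel = 1
    logging.basicConfig(level=logging.DEBUG)

    # Placeholder endpoint and payload: copy these from the curl script.
    resp = requests.post(
        "https://example.invalid/v1/interfaces",
        json={"instance": "my-mysql-instance"},
        timeout=30,
    )
    print(resp.status_code, resp.reason)
    print(resp.text)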
I am trying to configure a 5-node Cassandra cluster to run Spark/Shark so I can test out some Hive queries.
I have installed Spark, Scala, and Shark, and configured them according to the AMPLab guide Running Shark on a Cluster: https://github.com/amplab/shark/wiki/Running-Shark-on-a-Cluster.
I am able to get into the Shark CLI, but when I try to create an EXTERNAL TABLE out of one of my Cassandra ColumnFamily tables, I keep getting this error:
    Failed with exception org.apache.hadoop.hive.ql.metadata.HiveException:
    Error in loading storage handler.org.apache.hadoop.hive.cassandra.CassandraStorageHandler
    FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
I have configured HIVE_HOME, HADOOP_HOME, and SCALA_HOME. Perhaps I'm pointing HIVE_HOME and HADOOP_HOME at the wrong paths? HADOOP_HOME is set to my Cassandra Hadoop folder (/etc/dse/cassandra), HIVE_HOME is set to the unpacked AMPLab download's Hadoop1/hive, and I have also set HIVE_CONF_DIR to my Cassandra Hive path (/etc/dse/hive).
Am I missing any steps, or have I configured these locations incorrectly? Any ideas, please? Any help will be very much appreciated. Thanks.
Yes, I have got it working.
Try https://github.com/2013Commons/hive-cassandra,
which works with Cassandra 2.0.4, Hive 0.11, and Hadoop 2.0.
I am using Cassandra 1.1.2, the latest version, and I am unable to create or recreate secondary indexes. The operation either times out or simply does not create the index, returning an unknown error message.
By the way, I have tried with both Cassandra Cluster Admin and the Cassandra CLI.
The Cassandra CLI spits back "unreachable nodes", whereas Cassandra Cluster Admin shows a blank error message.
After looking through some websites, the Cassandra FAQ, and a forum, I discovered that there was a schema mismatch, and I followed the instructions to rectify the schema.
It was successful, but I do not know how the schema got mismatched all by itself.
Here is the link in case anyone is facing the same issue: http://wiki.apache.org/cassandra/FAQ#schema_disagreement
I am trying to use the oplog monitoring class in Casbah:
https://github.com/mongodb/casbah/blob/master/casbah-core/src/main/scala/util/OpLog.scala
What I want to do is monitor the oplog entries of the production MongoDB at production.someserver.com, fetch those entries, and send them to the storage DB at test.someotherserver.com, replicating all the data from the production server to the test server. I cannot use replica sets to do this, as I cannot redeploy right now, so I am trying to build a Scala app to do it. Casbah, the official Scala driver for MongoDB, has the above-mentioned class, which I am trying to instantiate using:
    val mongoColl = MongoConnection()("test")("test_data")
    val oLog = new MongoOpLog(mongoColl)
But I'm not even able to instantiate it; I get an error that MongoOpLog is not found, even though I've imported the necessary package. And even if I could instantiate it, I have no clue how to do what I want to do. Can anyone please point me in the right direction on how to achieve this? I am pretty new to Scala, so a detailed explanation, or a link containing one, would be helpful.
You need to have replication enabled on the server for the oplog to be created: either as a member of a replica set, or in master mode for master/slave replication.
Otherwise, MongoDB does not waste CPU cycles and disk space maintaining an oplog. Please see the documentation on replication for more info: http://www.mongodb.org/display/DOCS/Replication
Incidentally, you should really never run any database on a single server in production.
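Once replication is enabled on the source, the tail-and-replay loop the question describes looks roughly like the following. This is a sketch in Python with pymongo rather than Scala with Casbah, purely to illustrate the idea; the hostnames come from the question, the oplog is assumed to be non-empty, and handling only inserts is a deliberate simplification:

    import time

    import pymongo
    from pymongo import CursorType

    # Hostnames from the question; adjust ports/credentials as needed.
    src = pymongo.MongoClient("mongodb://production.someserver.com:27017")
    dst = pymongo.MongoClient("mongodb://test.someotherserver.com:27017")

    # The oplog only exists when the source runs with replication enabled:
    # "oplog.rs" for a replica-set member, "oplog.$main" for master/slave.
    oplog = src.local["oplog.rs"]

    # Start tailing after the newest existing entry (assumes a non-empty oplog).
    last = oplog.find_one(sort=[("$natural", pymongo.DESCENDING)])
    ts = last["ts"]

    while True:
        cursor = oplog.find({"ts": {"$gt": ts}},
                            cursor_type=CursorType.TAILABLE_AWAIT)
        for entry in cursor:
            ts = entry["ts"]
            if entry["op"] == "i":  # insert: replay it on the test server
                db_name, coll_name = entry["ns"].split(".", 1)
                dst[db_name][coll_name].insert_one(entry["o"])
            # "u" (update) and "d" (delete) would need similar handling.
        time.sleep(1)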