Cassandra 1.1.2 failure to create secondary index - nosql

I am using Cassandra 1.1.2, the latest release, and I am unable to create or recreate secondary indexes. The operation either times out or simply fails to create the index, returning an unknown error message.
By the way, I have tried both Cassandra Cluster Admin and the Cassandra CLI.
The Cassandra CLI spits back "unreachable nodes", whereas Cassandra Cluster Admin shows a blank error message.

After digging through a website, the Cassandra FAQ, and a forum, I discovered that there was a schema disagreement between the nodes, and I followed the instructions to rectify the schema.
That was successful, but I do not know how the schema got mismatched all by itself.
Here is the link in case anyone is facing the same issue: http://wiki.apache.org/cassandra/FAQ#schema_disagreement
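For anyone who wants the gist without clicking through, here is roughly the procedure from that FAQ entry as I followed it (a sketch only; the data path assumes the default Cassandra layout, so double-check the page for the variant matching your version):
# 1. In cassandra-cli, see which nodes disagree on the schema version:
describe cluster;
# 2. On each node stuck on the minority version: stop Cassandra, then delete
#    the schema and migration sstables from the system keyspace:
rm /var/lib/cassandra/data/system/Schema*
rm /var/lib/cassandra/data/system/Migrations*
# 3. Restart the node; it will fetch the current schema from the live nodes.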

Related

An error occurred while creating datasets: Dataset could not be created

I have a Kylin cluster running in Kubernetes, and Superset running in Kubernetes as well.
Kylin is already configured with a built cube, "kylin_sales_cube".
Superset is already configured with the Kylin driver, and the connection is established.
While trying to create a dataset from a Kylin table I get the following error message:
An error occurred while creating datasets: Dataset could not be created.
On the other hand, I am able to run a query on the same table, but without a dataset I cannot use charts.
Any ideas?
It seems to be a missing method implementation in kylinpy (or somewhere else), but until someone fixes it upstream, I suggest that everyone who has this problem implement the has_table method in the kylinpy plugin. You will find it in kylinpy/sqla_dialect.py.
Change its return statement to the following line:
return table_name in self.get_table_names(connection, schema)
And everything will be back to normal.
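For context, here is what the patched method would look like in full (a sketch only; the signature follows standard SQLAlchemy dialect conventions rather than being copied from kylinpy, and only the return line is the actual fix):
# kylinpy/sqla_dialect.py (sketch)
def has_table(self, connection, table_name, schema=None):
    # Report a table as present only if Kylin actually lists it for the
    # project, so Superset's dataset-creation check can pass.
    return table_name in self.get_table_names(connection, schema)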

Why does my Tarantool Cartridge cluster sometimes retrieve data from the router instance?

I wonder why my Tarantool Cartridge cluster is not working as it should.
I have a Cartridge cluster running on Kubernetes; the Cartridge image was generated with the Cartridge CLI (cartridge pack), and no changes were made to the generated files.
The Kubernetes cluster is deployed via Helm with the following values:
https://gist.github.com/AlexanderBich/eebcf67786c36580b99373508f734f10
Issue:
When I make requests from a pure PHP Tarantool client, for example a SELECT SQL request, it sometimes retrieves the data from the storage instances, but sometimes it unexpectedly responds with the data from the router instance instead.
The same goes for INSERT: after I created the same schema in both the storage and the router instances and made 4 requests, I ended up with 2 rows in storage and 2 in the router.
That's weird; from reading the documentation I'm sure it's not the intended behaviour. I'm struggling to find the source of this behaviour and hope for your help.
SQL in Tarantool doesn't work in cluster mode, e.g. with tarantool-cartridge.
P.S. That was the response to my question from the Tarantool community in the Tarantool Telegram chat.
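In other words (expanding on that response), each instance executes SQL against its own local spaces, so whichever instance your client happens to hit answers from its own data. Until cluster-wide SQL exists, route operations through a vshard-aware API on the router, such as the crud module. A hedged sketch with the Python connector (space name, host, and credentials are made up, and it assumes the crud role is enabled on the cluster):
import tarantool

# Talk to the router only; crud computes the bucket id and forwards each
# operation to the storage replica set that owns it.
conn = tarantool.Connection('router-host', 3301, user='admin', password='secret')
# None in the bucket_id field lets crud compute it from the sharding key
conn.call('crud.insert', 'customers', [1, None, 'Alice'])
rows = conn.call('crud.select', 'customers', [['==', 'id', 1]])
print(rows)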

AWS DMS "Load complete, replication ongoing" not working MongoDB to DocDB

I am trying to make a PoC for MongoDB to DocDB migration with DMS.
I've set up a MongoDB instance with some dummy data and an empty DocDB. Source and Target endpoints are also set in DMS and both of them are connecting successfully to my databases.
When I create a migration task in DMS everything seems to be working fine. All existing data is successfully replicated from the MongoDB instance to DocDB and the migration task state is at "Load complete, replication ongoing".
At this point I tried creating new entries in existing collections, as well as creating new empty collections, in MongoDB, but nothing happens in DocDB. If I understand correctly, the replication should be near real time and anything I create should be replicated right away?
Also, there is no indication of errors or warnings whatsoever... I don't suppose it's a connectivity issue to the databases, since the initial data is replicated fine.
Also, the users I am using for the migration have admin privileges in both databases.
Does anyone have any suggestions?
@PPetkov - could you check the following?
1. Check whether the right privileges were assigned to the user in the MongoDB endpoint, according to https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html.
2. Check whether a replica set was configured appropriately in the MongoDB instance so that changes can be captured; see the sketch after this list.
3. Once done, try to search for "]E:" or "]W:" in the CloudWatch logs to understand if there are any noticeable failures/warnings.
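For point 2, a quick way to verify that the source actually has an oplog for DMS to tail (a minimal pymongo sketch; host and credentials are placeholders):
from pymongo import MongoClient

# Change data capture reads local.oplog.rs, which only exists on a replica set.
client = MongoClient('mongodb://dms_user:secret@source-mongo:27017/admin')
status = client.admin.command('replSetGetStatus')  # raises OperationFailure on a standalone
print(status['set'], status['myState'])            # e.g. 'rs0', 1 (PRIMARY)
# The DMS user also needs read access to the oplog itself:
print(client.local['oplog.rs'].estimated_document_count())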

What's the status of the External Master Replication beta in Google Cloud SQL?

I have a Compute Engine instance running a MySQL database, with confirmed access to the outside world.
When using the curl script to create an 'interface' to this database, no errors are returned. However, in the console GUI there is a warning triangle, and no read replica can be created.
Are there any known issues with this functionality, and/or is there any way I could get more logging out of the response?
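One way to pull more detail than the console's warning triangle might be the SQL Admin API's operations list (a sketch only; project and instance names are placeholders, and it assumes Application Default Credentials plus the google-api-python-client package; whether the beta external-master setup surfaces its errors here is an assumption):
from googleapiclient import discovery

# Each instance operation is recorded with a status and, on failure, an
# error object that the console does not always surface.
service = discovery.build('sqladmin', 'v1beta4')
ops = service.operations().list(project='my-project', instance='my-instance').execute()
for op in ops.get('items', []):
    print(op['operationType'], op['status'], op.get('error'))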

Using the oplog monitoring class in Casbah

I am trying to use the oplog monitoring class in Casbah:
https://github.com/mongodb/casbah/blob/master/casbah-core/src/main/scala/util/OpLog.scala
What I want to do is monitor the oplog entries of a production MongoDB at
production.someserver.com
and get the entries and send them to the storage DB at
test.someotherserver.com
and replicate all the data from the production server to the test server. I cannot use replica sets to do this, as I cannot redeploy right now. I am trying to build a Scala app to do this. Casbah, the official Scala driver for MongoDB, has the above-mentioned class, which I am trying to instantiate using:
val mongoColl = MongoConnection() ("test") ("test_data")
val oLog = new MongoOpLog(mongoColl)
But I am not even able to instantiate it; I get an error that MongoOpLog is not found, even though I have imported the necessary package. And even if I get past this, I have no clue how to do what I want to do. Can anyone please point me in the right direction on how to achieve this? I am pretty new to Scala, so a detailed explanation or a link to one would be helpful.
You need to have replication enabled on the server for the oplog to be created: either as a member of a replica set, or in master mode for master/slave replication.
Otherwise, MongoDB does not waste CPU cycles and disk space maintaining an oplog. Please see the documentation on Replication for more info - http://www.mongodb.org/display/DOCS/Replication
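For example, a lone mongod can be given an oplog by initiating it as a one-member replica set (a hedged pymongo sketch, assuming the server was restarted with --replSet rs0; on MongoDB of this vintage, starting it with --master is the master/slave equivalent):
from pymongo import MongoClient

# Connect straight to the node and initiate a one-member replica set;
# once initiated, the server starts maintaining local.oplog.rs.
client = MongoClient('production.someserver.com', 27017, directConnection=True)
client.admin.command('replSetInitiate', {
    '_id': 'rs0',
    'members': [{'_id': 0, 'host': 'production.someserver.com:27017'}],
})
(As for the instantiation error itself: check that the import actually matches where the class lives, e.g. com.mongodb.casbah.util.MongoOpLog, and note that the constructor expects a connection rather than a collection.)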
You should really never be running any database with a single server in production, incidentally.