Does Cloudant on Bluemix have a performance issue? - ibm-cloud

I have a mobile app that uses the Cloudant CDTReplicator to sync the local datastore with the remote DB. The app was fast at first, but sync then started taking more than 10 seconds even though the remote DB holds just 180 ordinary documents, each with a few string fields, and the delta between local and remote is minimal.
After I deleted all 180 documents, sync sped up greatly, so it appears to be related to the number of documents.
Can you please let me know whether there is a performance issue with Cloudant replication, or whether this is a problem in my code? The code does little more than start the replicator.
Thank you,
Jen

According to the status page https://developer.ibm.com/bluemix/support/#status, Cloudant is reported as having some issues in US-South. The reported problem, though, is a generic "service broker timeout" message.
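One way to check whether the slowness is on the service side rather than in the mobile library is to trigger a one-shot replication through Cloudant's HTTP `_replicate` endpoint (inherited from CouchDB) and time it. A minimal stdlib-only Python sketch; the account and database names are placeholders:

```python
import json
import urllib.request

def make_replicate_request(account, source_db, target_db):
    # Build a one-shot replication request for Cloudant's /_replicate
    # endpoint. "account", "source_db" and "target_db" are placeholders.
    url = f"https://{account}.cloudant.com/_replicate"
    body = json.dumps({
        "source": f"https://{account}.cloudant.com/{source_db}",
        "target": f"https://{account}.cloudant.com/{target_db}",
        "create_target": True,
    }).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})

req = make_replicate_request("myaccount", "remote_db", "local_copy")
# urllib.request.urlopen(req)  # uncomment (with credentials) to run it
```

If a server-side replication of the same 180 documents is fast, the delay is more likely in the mobile replicator or the network path than in the Cloudant service itself.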

Related

Restore Deleted database from MongoDB

Someone got into my mongo database and deleted my database, creating another called "Warning" with one document saying: "Your Database is downloaded and backed up on our secured servers. To recover your lost data: Send 0.6 BTC to our BitCoin ".
Is there any way to recover the deleted database?
I think this was a logical delete, so I hope some log exists from which the database can be recovered.
Sadly, you have been the victim of the increasingly popular MongoDb ransomware attacks:
https://www.bleepingcomputer.com/news/security/massive-wave-of-mongodb-ransom-attacks-makes-26-000-new-victims/
https://www.zdnet.com/article/mongodb-ransacking-starts-again-hackers-ransom-26000-unsecured-instances/
https://www.trustwave.com/Resources/SpiderLabs-Blog/Protecting-Yourself-from-MongoDB-Ransomware/
Without a backup, you won't be able to get your data back.
Don't send 0.6 BTC to any address; it is incredibly unlikely that the attacker actually took a backup.
Andreas Nilsson, MongoDB's director of product security, wrote a blog post on preventing such attacks.

Why is my mongodb collection deleted automatically?

I have a MongoDB client on three EC2 instances and have created a replica set. Previously, a disk-space constraint stopped my mongod process and halted the application, and a couple of days ago some of my collections went missing from the database, so I enabled logging on my database to catch anything like that happening again. In a fresh incident this morning I was unable to log in to my system, and that's when I found out the whole database was empty. I checked other SO questions like this one, which suggest a TTL index may be the cause, but I have not set one up at all.
Now, how do I debug this situation and do a proper root cause analysis? I can't find anything in my debug logs either; the collections just vanished. How do I set up a proper logging mechanism, and how do I ensure my collections are never deleted again?
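On the TTL point: a TTL index appears in a collection's index listing as an entry carrying an `expireAfterSeconds` field, so it is easy to rule out. A small stdlib-only Python sketch over sample index metadata (the index names below are made up); against a live server the same filter works on the output of `getIndexes()` or PyMongo's `index_information()`:

```python
# Sample index metadata in the shape MongoDB's getIndexes() returns;
# the collection and index names here are made up for illustration.
indexes = [
    {"name": "_id_", "key": {"_id": 1}},
    {"name": "createdAt_1", "key": {"createdAt": 1}, "expireAfterSeconds": 3600},
]

# A TTL index is simply an index with an expireAfterSeconds field.
ttl_indexes = [ix["name"] for ix in indexes if "expireAfterSeconds" in ix]
print(ttl_indexes)
```

An empty result means no TTL index exists and documents cannot be expiring that way, which points back at external access as the cause.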
Today I got a mail from Amazon saying that I was probably running an unsecured version of MongoDB, which may have caused this issue. So whoever is facing this issue, please go through the Security Checklist provided by MongoDB. Some points in there are absolutely necessary:
1. Enable Access Control and Enforce Authentication
2. Encrypt Communication
3. Limit Network Exposure
These three are the core; depending on how many people access your database, you can also Configure Role-Based Access Control.
These are all things I have done. Before this incident I had not taken security that seriously, but after being hit by it I made sure I had all the necessary precautions in place.
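As a sketch, the first three checklist items map to a short mongod configuration fragment like the one below. The file path is a placeholder, and the `net.tls` block applies to MongoDB 4.2+ (older releases use `net.ssl` with equivalent options):

```yaml
# mongod.conf — illustrative fragment; adapt to your deployment
security:
  authorization: enabled          # 1. enforce authentication
net:
  bindIp: 127.0.0.1               # 3. limit network exposure
  tls:                            # 2. encrypt communication (4.2+)
    mode: requireTLS
    certificateKeyFile: /etc/ssl/mongodb.pem
```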
Hope this helps someone.

Issue in Cloudant to Dash DB sync on bluemix: Documents lost in transit

A few documents have simply vanished during sync from Cloudant to dashDB. On the Cloudant side, the count of messages in the main and overflow tables differs from the actual count in the dashDB tables. Any clue how to find those documents?
The best way to recover the lost documents is to use the RESCAN function, which re-loads all documents currently in Cloudant into dashDB. Just beware that it drops the existing tables and records before performing the reload.
If you want the Cloudant support team to investigate the specific cause of this loss, please open a ticket with support@cloudant.com and make sure to mention your account details!

Severe delays in cloud SQL responses

Over the past 4-5 hours, tens of simple read queries have taken 40-70 seconds to return results from the Cloud SQL DB; they usually take around 50 ms. Is there some ongoing issue? I can provide DB IDs and specific times if needed.
Thanks.
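When reporting delays like this, client-side timings are useful evidence. A stdlib-only sketch of the measurement, using an in-memory SQLite database as a stand-in since Cloud SQL needs a live connection; with a real instance you would open the connection with your MySQL or Postgres driver instead:

```python
import sqlite3
import time

# In-memory SQLite as a stand-in; substitute your Cloud SQL connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")

start = time.perf_counter()
conn.execute("SELECT * FROM t WHERE id = 1").fetchall()
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"query took {elapsed_ms:.2f} ms")
```

Logging these timings alongside DB IDs gives support the "specific times" the question offers to provide.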
Between 11:00 PST and 11:30 PST there was an issue that interrupted many Cloud SQL instances. The problem should now be resolved.
We apologize for the inconvenience and thank you for your patience and continued support. Please rest assured that system reliability is a top priority for the Google Cloud Platform, and we are making continuous improvements to make our systems better.
To be kept informed of other Google Cloud SQL issues and launches, please join google-cloud-sql-announce@googlegroups.com
https://groups.google.com/forum/#!forum/google-cloud-sql-announce
(from Joe Faith in another thread)

MongoDB or Memcached on a busy site?

My system has a huge number of connections and reads and writes data constantly. I installed Memcached (a single instance), but it failed under the large number of connections, and I tried MongoDB (also a single instance), but it failed too.
My data is not large (around 10,000 records). Right now I'm using MySQL, but it's very slow.
I'm thinking about Memcached replication or a MongoDB master/slave setup. Which one should I choose?
Thank you very much!
(sorry for my bad English)
MySQL is not likely to be what is causing your site to run slowly, especially with the amount of data you mention. I would stick with MySQL and try to find what is actually causing the slowness.
Try running EXPLAIN on your MySQL queries to see what is going wrong.
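For illustration, Python's stdlib SQLite driver shows the same idea with `EXPLAIN QUERY PLAN` (MySQL's `EXPLAIN` output looks different but answers the same question: scan or index?). The table and index names below are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")

# Without an index on customer_id, the planner falls back to a full scan.
before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchone()[3]

# After adding an index, the planner switches to an index search.
conn.execute("CREATE INDEX idx_customer ON orders (customer_id)")
after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchone()[3]

print(before)  # a SCAN over the whole table
print(after)   # a SEARCH using idx_customer
```

A query that shows a full scan over a hot table in a busy site is a far likelier culprit than MySQL itself.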