Folks,
We are evaluating distributed caching solutions for our application. We started by looking at memcached, then expanded to look at Couchbase. One of our key requirements is the ability to back up the (in-memory) cache reliably to an RDBMS and to restore from it in case of node/cluster failure.
Our preferred option would be to have a configuration switch in Couchbase that would cause it to back up new entries to the RDBMS.
What we would like to avoid is writing application code that sends cache entries/refreshes explicitly to RDBMS.
Can anyone tell me if Couchbase (cluster) can be configured to do so?
Thanks.
-Raj
Couchbase cannot be configured to write through to an RDBMS for backup. What you should take a look at is the Couchbase bucket, not the memcached bucket. The Couchbase bucket uses the memcached layer as a cache and provides replication and persistence out of the box. With this setup you do not need a separate RDBMS, because Couchbase will take care of all of the persistence for you, and it will replicate your data so that if you have server failures you can just fail over the failed nodes and promote replica nodes to active ones. Take a look at this page http://www.couchbase.com/couchbase-server/features and if you have any other architecture questions, I would recommend posting them on the Couchbase forums http://www.couchbase.com/forums where some of the developers can give you more in-depth answers.
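If you do go with a Couchbase bucket, the application code stays an ordinary key-value client; persistence and replication happen in the cluster. A rough illustration (this assumes the Couchbase Python SDK 2.x with a placeholder bucket and key; SDK 3.x uses a Cluster/Collection API instead):

```python
# Rough illustration only: Couchbase Python SDK 2.x style; bucket name and
# document key are placeholders. SDK 3.x uses a Cluster/Collection API instead.
from couchbase.bucket import Bucket

bucket = Bucket("couchbase://localhost/default")

# The write lands in the managed cache and is persisted to disk and
# replicated by the cluster, so no separate RDBMS write-through is needed.
bucket.upsert("user:raj", {"name": "Raj", "tier": "eval"})

print(bucket.get("user:raj").value)
```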
We certainly want to use separate databases, since the front-end team finds MongoDB Atlas robust to work with and the AWS cloud architects find DynamoDB easy to work with.
Our architecture:
Web application uses MongoDB to insert, update and retrieve data.
The MongoDB is synced in real-time with DynamoDB.
Background AWS services use DynamoDB for inserting, updating and retrieving data.
The changes in either DynamoDB or MongoDB are replicated to each other.
Tried so far:
We currently have a sync in place using DynamoDB Streams and a MongoDB Atlas trigger to listen for changes on each database and forward them to the other. We use Lambdas for this (roughly along the lines of the sketch after this list), but our replication logic is not robust yet.
AWS Database Migration Service with ongoing replication has been suggested, but we haven't been able to get it to work for our use case. Perhaps this is one option.
3rd party services like: https://www.cdata.com/sync/
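For the DynamoDB-to-MongoDB direction, the Lambda mentioned above can stay fairly small. A hedged sketch (the `pk` attribute, the `appdb.items` collection and the `MONGODB_URI` environment variable are placeholders, not our actual setup):

```python
# Hedged sketch of a DynamoDB Streams -> MongoDB Atlas Lambda.
# "pk", "appdb", "items" and MONGODB_URI are hypothetical names.
import os

from boto3.dynamodb.types import TypeDeserializer
from pymongo import MongoClient

deserializer = TypeDeserializer()
mongo = MongoClient(os.environ["MONGODB_URI"])  # Atlas connection string
collection = mongo["appdb"]["items"]


def handler(event, context):
    for record in event["Records"]:
        name = record["eventName"]
        if name in ("INSERT", "MODIFY"):
            # Convert the DynamoDB attribute-value format into plain Python types
            image = record["dynamodb"]["NewImage"]
            doc = {k: deserializer.deserialize(v) for k, v in image.items()}
            doc["_id"] = doc.pop("pk")
            # Upsert by key so retries/replays of the stream stay idempotent
            collection.replace_one({"_id": doc["_id"]}, doc, upsert=True)
        elif name == "REMOVE":
            key = deserializer.deserialize(record["dynamodb"]["Keys"]["pk"])
            collection.delete_one({"_id": key})
```

The Atlas-trigger-to-DynamoDB function needs the mirror-image logic, plus some way to mark or skip writes that originated from the sync itself so the two functions don't replay each other's changes forever.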
Ideal Fit
Ideally we would like an AWS-based solution, or failing that a reliable 3rd-party service.
Greatly appreciate any resources or thoughts on this! :)
We would like to use OrientDB Graph in an Azure environment. Does anybody have experience using it? We would also like to know whether OrientDB's own high availability is required on the Azure cloud. Azure already offers high availability for Azure Storage, Azure Drive and SQL, and I understand that they have replication and load balancing built in.
This is super important because we prefer not to get into the business of replication and infrastructure management.
Thanks
You can spin up 2 or more machines, install OrientDB on them, and then configure them together as a distributed cluster. However, I haven't been able to find any way that is simpler or easier to do. I am interested in this topic too.
Azure does have features such as geo-replication, which protects your data against a major data-center incident but doesn't provide any performance benefit and will not make it highly available.
Although the servers are pretty reliable, Microsoft will occasionally reboot them for updates, so to protect against downtime you can use affinity groups so that, of your 2 or more servers, one will always be online. This does, however, need to be used in conjunction with database replication and ideally load balancing.
It's also worth noting that OrientDB recommends clusters have an odd number of servers as this can prevent conflicts when synchronising data after a communication issue between the servers.
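Once the nodes are configured as a distributed cluster, clients just point at any one of them. A small illustrative sketch using the pyorient driver (the host name, database and credentials are placeholders):

```python
# Illustrative only: pyorient client against one node of the cluster;
# host name, database name and credentials are placeholders.
import pyorient

client = pyorient.OrientDB("orientdb-node1.cloudapp.net", 2424)
client.connect("root", "root_password")

# Open the graph database that the distributed nodes keep in sync
client.db_open("mygraph", "admin", "admin")

# Simple graph query; OrientDB propagates writes to the other nodes
for record in client.query("SELECT FROM V LIMIT 5"):
    print(record)
```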
I am using it on Amazon, and I had to create a Java project to monitor HTTP requests, inserts and queries. The queries are very fast, but inserting data takes longer.
I recommend this kind of graph database model to decrease query time. Also, if you have empty fields, OrientDB handles them very well compared to other databases.
If you need help with the Java project, you can reply to this post and I'll help you.
I hope it helps. Good luck.
I was looking for a way to share objects across a several-node cluster, and after a bit of research I thought it would be best to use Redis pub/sub. Then I saw that Redis doesn't support clustering yet, which means that a system based on Redis would have a single point of failure. Since high availability is a key requirement for me, this solution is not applicable.
At the moment, I am looking into 2 other solutions for this issue:
Memcached
Couchbase
I have 2 questions:
On top of which solution would it be more efficient to simulate pub/sub?
Which is better when keeping clusters in mind?
I was hoping that someone out there has faced similar issues and can share their experience.
I think it's a bad idea to use memcached or Couchbase for pub/sub. Neither provides built-in pub/sub functionality, and implementing pub/sub on the application side can generate a lot of ops/sec against the memcached/Couchbase server, so as a result you'll get slow performance.
Couchbase persists data to disk, so for temporary storage it's better to use memcached. It will be faster and will not load your disk.
If you can avoid that "pub/sub" and use memcached/Couchbase just as a simple HA shared key-value store, do it. It will be much better than pub/sub.
When you install Couchbase Server it provides 2 types of buckets: couchbase (with disk persistence, the ability to create views, etc.) and memcached (in-memory key-value storage only). Both types of buckets behave the same way in clusters. Couchbase also supports the memcached API calls, so you don't need to change code to test both variants.
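To illustrate the plain key-value approach, here is a small sketch using the pymemcache library with made-up keys; from the client's point of view, a Couchbase memcached-type bucket listening on the memcached port works the same way:

```python
# Sketch of plain shared key-value usage (pymemcache; keys and values are made up).
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

# Store shared state with a TTL instead of emulating pub/sub on top of the cache
cache.set("session:42", b"serialized-user-state", expire=300)

print(cache.get("session:42"))
```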
I've tried to use a memcached provider for socket.io "pub/sub" sharing, but as I mentioned before, it's ugly. In my case there were a few Node.js servers with socket.io, so instead of sharing I implemented something like "p2p messaging" between the servers on top of sockets.
UPD: If you have such a big amount of data, it may be better not to have one shared storage but to use something like sharding with "predictable" data placement.
We are developing an application that will run on many physical servers. We want to use NoSQL for logging and tracing, since this data does not require a structured schema.
We don't want to have Centralized logging.
Can we install a NoSQL database (any one) on each server and store the logging/tracing details there? Will the NoSQL database impact the actual processes running on the server? Is it a good idea to do it?
Problem1: Data Collection
Many people are using NoSQL solutions for storing application logs. The first challenge you may have is how to collect a huge amount of data from various data sources reliably and with ease of management. One concern with not having a log collection layer is lock contention on the database caused by high write throughput.
So basically, having a log collection layer is recommended. There are some open-source log collector implementations such as syslog, Fluentd, Scribe, and Flume :)
Problem2: Storage & Processing
The next big problem is how to store and process the data. The backend infrastructure requires a lot of changes as the data volume increases. At first you can use MongoDB to store all of your data, but at some point you may end up using Apache Hadoop to build a massively scalable architecture.
Here's an example architecture of having Fluentd for log collection, and MongoDB for log storage and processing.
Here are some links on putting Apache logs into Amazon S3, MongoDB, or Hadoop HDFS via Fluentd.
Store Apache Logs into Amazon S3
Store Apache Logs into MongoDB
Fluentd + HDFS: Instant Big Data Collection
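On the application side of that architecture, emitting events to a local Fluentd agent is only a few lines. A sketch using the fluent-logger Python package; the tag and field names are just examples:

```python
# Sketch: send structured events to a local Fluentd agent, which then routes
# them to MongoDB / S3 / HDFS. Tag and field names are examples only.
from fluent import event, sender

sender.setup("myapp", host="localhost", port=24224)

# Arrives in Fluentd tagged "myapp.access" as a structured record
event.Event("access", {
    "method": "GET",
    "path": "/index.html",
    "status": 200,
})
```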
Disclaimer: I'm a committer of Fluentd project.
This is definitely a good idea; doing this with NoSQL rather than SQL makes sense.
In logging and tracing the volume of data is high, and the rate at which data is retrieved is also high.
For logging and tracing you need complex reports for analysis, so NoSQL is better for you.
NoSQL also supports distributed environments, so you can create infrastructure in different geographic locations.
It's obvious that I'm not an expert on Cassandra. So the question may sound silly.
Given an existing SQL-based project, is there any benefit (or is it even possible) to use a NoSQL database (e.g. Cassandra) as an additional layer between the business logic and the SQL database to speed up our queries or inserts?
It's a relatively new technology and I'm trying to find its place.
Cassandra will work fine, but if you don't mind having to rebuild your data, memcached will be faster.
But if you want a persistent cache, Cassandra is probably your best option -- reddit started by using Cassandra like this and is working on moving more functionality to it.
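If you do try Cassandra as a persistent cache, the write-through pattern is small enough to sketch. This uses the DataStax cassandra-driver; the keyspace, table and TTL are assumptions for illustration, not anything reddit actually used:

```python
# Sketch only: cassandra-driver; keyspace, table and TTL are assumptions.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS app_cache
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS app_cache.kv (
        cache_key   text PRIMARY KEY,
        cache_value text
    )
""")

# Write-through style: the entry survives restarts, and the TTL keeps it bounded
session.execute(
    "INSERT INTO app_cache.kv (cache_key, cache_value) VALUES (%s, %s) USING TTL 3600",
    ("user:42", "serialized-profile"),
)

row = session.execute(
    "SELECT cache_value FROM app_cache.kv WHERE cache_key = %s", ("user:42",)
).one()
print(row.cache_value if row else None)
```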
I would go with Windows Server AppFabric aka Velocity distributed cache with SQL Server, assuming you are on the .NET platform.
Scott Hanselman has a bunch of posts on AppFabric.