I am new to Apache Ignite. Here's what I am curious about:
Can I set up an Ignite cluster as a frontend proxy that distributes requests to MySQL instances based on some data column such as tenantID, where each MySQL instance holds the data for a single tenant?
Just to make it clear: it is pretty much a proxy in front of multiple database instances that share the same table, so I could keep each tenant's data in an isolated database.
Two approaches are possible:
You could implement a custom cache store [1] that uses the right connection depending on the record's tenant attribute (see the sketch after the references).
You could use tenantId as the affinity key so that records with the same tenantId map to the same partitions. A custom affinity function [2] additionally allows you to map partitions to the corresponding nodes, marked with some attribute [3].
On each node the cache store configuration [4] could then use a data source pointing to a concrete MySQL server, chosen by node attribute.
[1] https://apacheignite.readme.io/docs/3rd-party-store
[2] https://apacheignite.readme.io/docs/affinity-collocation#affinity-function
[3] https://apacheignite.readme.io/docs/cluster#cluster-node-attributes
[4] https://apacheignite.readme.io/docs/3rd-party-store#cachejdbcpojostore
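A minimal sketch of approach [1]: the store below assumes hypothetical key and value classes (TenantKey, CustomerRow), an illustrative customer table, and a tenantId-to-DataSource map injected from the node's Spring context; the actual routing logic depends entirely on your schema.

```java
import java.io.Serializable;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Map;
import javax.cache.Cache;
import javax.cache.integration.CacheLoaderException;
import javax.cache.integration.CacheWriterException;
import javax.sql.DataSource;
import org.apache.ignite.cache.store.CacheStoreAdapter;

/** Illustrative key: the tenant id plus the record id within that tenant. */
class TenantKey implements Serializable {
    final String tenantId;
    final long id;
    TenantKey(String tenantId, long id) { this.tenantId = tenantId; this.id = id; }
}

/** Illustrative value stored in the cache and in the tenant's MySQL table. */
class CustomerRow implements Serializable {
    final long id;
    final String tenantId;
    final String name;
    CustomerRow(long id, String tenantId, String name) {
        this.id = id; this.tenantId = tenantId; this.name = name;
    }
}

/** Cache store that routes every read/write to the MySQL instance owning the record's tenant. */
public class TenantRoutingCacheStore extends CacheStoreAdapter<TenantKey, CustomerRow> {
    /** tenantId -> DataSource of that tenant's MySQL instance (injected, e.g. from Spring). */
    private final Map<String, DataSource> dataSourceByTenant;

    public TenantRoutingCacheStore(Map<String, DataSource> dataSourceByTenant) {
        this.dataSourceByTenant = dataSourceByTenant;
    }

    private Connection connectionFor(String tenantId) throws Exception {
        DataSource ds = dataSourceByTenant.get(tenantId);
        if (ds == null)
            throw new IllegalStateException("No MySQL instance configured for tenant " + tenantId);
        return ds.getConnection();
    }

    @Override public CustomerRow load(TenantKey key) throws CacheLoaderException {
        try (Connection conn = connectionFor(key.tenantId);
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT id, tenant_id, name FROM customer WHERE id = ?")) {
            ps.setLong(1, key.id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next()
                        ? new CustomerRow(rs.getLong("id"), rs.getString("tenant_id"), rs.getString("name"))
                        : null;
            }
        }
        catch (Exception e) {
            throw new CacheLoaderException(e);
        }
    }

    @Override public void write(Cache.Entry<? extends TenantKey, ? extends CustomerRow> entry)
            throws CacheWriterException {
        CustomerRow row = entry.getValue();
        try (Connection conn = connectionFor(entry.getKey().tenantId);
             PreparedStatement ps = conn.prepareStatement(
                     "REPLACE INTO customer (id, tenant_id, name) VALUES (?, ?, ?)")) {
            ps.setLong(1, row.id);
            ps.setString(2, row.tenantId);
            ps.setString(3, row.name);
            ps.executeUpdate();
        }
        catch (Exception e) {
            throw new CacheWriterException(e);
        }
    }

    @Override public void delete(Object key) throws CacheWriterException {
        TenantKey k = (TenantKey) key;
        try (Connection conn = connectionFor(k.tenantId);
             PreparedStatement ps = conn.prepareStatement("DELETE FROM customer WHERE id = ?")) {
            ps.setLong(1, k.id);
            ps.executeUpdate();
        }
        catch (Exception e) {
            throw new CacheWriterException(e);
        }
    }
}
```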
You can also just use the default cache store implementation and simply pass a different data source to each node; the data source is typically taken from the node's local Spring container.
In this case every node will only talk to its local MySQL, with the consequence that every MySQL instance only holds the data of that one node, and you can then configure the distribution via Ignite's own facilities.
I have not tried this, but it sounds workable.
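A rough sketch of what that per-node configuration could look like in Java: the bean name "tenantDataSource" and the cache name are assumptions, and each node's local Spring context would define that bean against its own MySQL server.

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

/**
 * Per-node configuration sketch: every node references a Spring bean named
 * "tenantDataSource", and each node's local Spring context points that bean
 * at its own MySQL instance.
 */
public class PerNodeStoreConfig {
    public static void main(String[] args) {
        CacheJdbcPojoStoreFactory<Object, Object> storeFactory = new CacheJdbcPojoStoreFactory<>();
        // Resolved from the node-local Spring container, so each node gets its own MySQL.
        storeFactory.setDataSourceBean("tenantDataSource");
        // Field/type mappings (storeFactory.setTypes(...)) and the SQL dialect would be set here too.

        CacheConfiguration<Object, Object> cacheCfg = new CacheConfiguration<>("customers");
        cacheCfg.setCacheStoreFactory(storeFactory);
        cacheCfg.setReadThrough(true);
        cacheCfg.setWriteThrough(true);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCacheConfiguration(cacheCfg);

        Ignition.start(cfg);
    }
}
```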
Related
We have a microservice that acts as a cache service, and we decided to run only two instances of it. The microservice receives data through a Kafka topic and stores it in an in-memory cache. The challenge is keeping the data in sync between the two instances. We decided to use a different consumer group for each instance so that both instances receive the same data and stay in sync. Since both instances run the same codebase, how can each one subscribe to a different consumer group during startup? For example, if instance #1 subscribes to consumergrp1, instance #2 should be able to subscribe to consumergrp2. Please suggest how to achieve this.
You cannot keep in-memory data in sync across multiple instances when the data keeps arriving from a streaming system. If the data is loaded only once in a pod's lifetime, you can achieve it: for example, while the service is starting up, fetch the data from the source and keep it in memory, so both pods end up with the same data.
Otherwise you need a distributed cache such as Redis or Couchbase. That is the cleaner and neater approach here.
You haven't specified any details about the way you use Kafka (language, third-party libraries), etc. So, speaking in general, you can:
Specify a random (or partially random) consumer group id. It won't be as "clean" as "consumergrp1" and "consumergrp2", but it's a string after all, so you can generate it randomly. A variant of this idea is to include an identification of the process in the group name; for example, if the microservice instances are supposed to run on different machines, you could include the machine name as part of the consumer group name (see the sketch after this list).
More complicated, but still possible: if you have some shared storage, you could use it for synchronization and store a monotonically increasing counter of the "current consumer group to create"; once the value is read, it is incremented. The implementation details of course depend on the shared storage you actually use (a database, something like Redis, whatever).
So there are many possible solutions. Whichever one you take, do not rely on the fact that you have exactly two instances of the service; you may well reconsider that in the future.
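If you happen to use the plain Java client, the first idea could look roughly like the sketch below; Spring Kafka and other clients expose the same group.id setting, and the broker address, topic name, and group prefix here are assumptions.

```java
import java.net.InetAddress;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

/**
 * Derive the consumer group id from the host (or pod) name, so every instance
 * lands in its own group and therefore receives every record.
 */
public class PerInstanceGroupConsumer {
    public static void main(String[] args) throws Exception {
        // On Kubernetes the HOSTNAME env var is the pod name; fall back to the machine name.
        String instanceId = System.getenv().getOrDefault(
                "HOSTNAME", InetAddress.getLocalHost().getHostName());

        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "cache-service-" + instanceId);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("cache-updates"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    // Update the local in-memory cache from the record here.
                    System.out.printf("instance %s got %s=%s%n", instanceId, record.key(), record.value());
                }
            }
        }
    }
}
```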
I am re-designing a dotnet backend api using the CQRS approach. This question is about how to handle the Query side in the context of a Kubernetes deployment.
I am thinking of using MongoDB as the query database. The app is a dotnet webapi app. So what would be the best approach:
Run the dotnet app and MongoDB together in one pod (MongoDB as a sidecar container), and scale as needed.
Containerize MongoDB in its own pod and deploy one MongoDB pod per region, then have the dotnet containers use the MongoDB pod within their own region. Scale MongoDB by region, and the dotnet pods as needed within and between regions.
Some other approach I haven't thought of.
I would start with the simplest approach, which is to place the write and read side together, because they belong to the same bounded context.
Then, if it is needed in the future, I would consider adding more read sides or scaling out to other regions.
To get started I would also consider putting the read side inside the same VM as the write side, just to keep it simple; getting it all up and working in production is always a big task with a lot of pitfalls.
I would consider using a Kafka-like system to transport the data to the read sides. With queues, adding a new read-side instance or rebuilding an existing one later can be troublesome, because the sender needs to know which read sides exist. With a Kafka style of integration, each read side can consume the events at its own pace, you can more easily add read sides later on, and the sender does not need to be aware of the receivers.
Kafka lets you decouple the producers of data from its consumers: a set of producers appends events to the Kafka log, and one or more consumers process that log of events, each at its own pace.
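For example, the write side might append events to a topic roughly like the sketch below (shown with the plain Java producer for illustration; the .NET Kafka clients work the same way, and the topic name and class are assumptions).

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

/** Write side: append each domain event to the log; read sides consume it at their own pace. */
public class OrderEventPublisher {
    private final KafkaProducer<String, String> producer;

    public OrderEventPublisher(String bootstrapServers) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        this.producer = new KafkaProducer<>(props);
    }

    /** Keying by aggregate id keeps all events of one aggregate ordered on one partition. */
    public void publish(String aggregateId, String eventJson) {
        producer.send(new ProducerRecord<>("order-events", aggregateId, eventJson));
    }

    public void close() {
        producer.close();
    }
}
```

Each read side then subscribes with its own consumer group, so rebuilding a read model is just a matter of replaying the topic from the beginning.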
It has been almost 2 years since I posted this question. Now with 20-20 hindsight I thought I would post my solution. I ended up simply provisioning an Azure Cosmos Db in the region where my cluster lives, and hitting the Cosmos Db for all my query-side requirements.
(My cluster already lives in the Azure Cloud)
I maintain one Postgres database in my original cluster for my write-side requirements, and my app scales nicely in the cluster.
I have not yet needed to deploy clusters to new regions. When that happens, I will provision a replica of the Cosmos DB in the additional region or regions, but still keep just one Postgres database for the write side; I am not going to bother maintaining and syncing replicas of the Postgres database.
Additional insight #1. By provisioning the Cosmos DB separately from my cluster (but in the same region), I take the load off my cluster nodes. In effect, the Cosmos DB has its own dedicated compute resources, backups, etc.
Additional insight #2. It is obvious now, but wasn't back then, that tightly coupling a document DB (such as MongoDB) to a particular pod is a bonkers bad idea. Imagine horizontally scaling your app: with each new instance of your app you would instantiate a new document DB. You would quickly bloat up your nodes and crash your cluster. One read-side document DB per cluster is an efficient and easy way to roll.
Additional insight #3. The read side of any CQRS system can get a nice jolt of adrenaline with the help of an in-memory cache like Redis. You can first check whether the data is in the cache before you hit the document DB. I use this approach for data such as a checkout cart, where I leave the data in the cache for 24 hours and then let it expire. You could conceivably use Redis for all your read-side requirements, but memory would quickly become bloated. So the idea is to deploy an in-memory cache on your cluster, only one instance of it, and have all your apps hit it for low latency and high availability, but do not use the cache as a replacement for the document DB. A rough cache-aside sketch follows.
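The sketch below is in Java with the Jedis client purely for illustration (my app is .NET); CartStore stands in for whatever document-db access layer you have, and the key prefix and 24-hour TTL are just the values described above.

```java
import redis.clients.jedis.Jedis;

/** Stand-in for the document db access layer; illustrative only. */
interface CartStore {
    String loadCartJson(String cartId);
}

/** Cache-aside read: try Redis first, fall back to the document db, cache the result for 24h. */
public class CartReadCache {
    private static final int TTL_SECONDS = 24 * 60 * 60;

    private final Jedis redis;
    private final CartStore documentDb;

    public CartReadCache(Jedis redis, CartStore documentDb) {
        this.redis = redis;
        this.documentDb = documentDb;
    }

    public String getCartJson(String cartId) {
        String key = "cart:" + cartId;

        String cached = redis.get(key);
        if (cached != null)
            return cached; // cache hit: no round trip to the document db

        String fromDb = documentDb.loadCartJson(cartId); // cache miss: hit the document db
        if (fromDb != null)
            redis.setex(key, TTL_SECONDS, fromDb); // expire after 24 hours, then rebuild on demand
        return fromDb;
    }
}
```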
I am currently researching different databases to use for my next project, and I want to use a decentralized database. Apache Cassandra, for example, claims to be decentralized, while MongoDB says it uses replication. From what I can see, as far as these databases are concerned, replication and decentralization are basically the same thing. Is that correct, or is there some difference or feature between decentralization and replication that I'm missing?
Short answer: no, replication and decentralization are two different things. As a simple example, say you have three instances (i1, i2 and i3) that replicate the same data, and a client that fetches data only from i1. If i1 goes down, you still have the data replicated to i2 and i3 as a backup, but since i1 is down the client has no way of getting it. This is an example of a centralized database with a single point of failure.
A centralized database has a central location that the majority of requests go through. In MongoDB's case, for example, that can be instances that route queries to the instances able to handle them.
A decentralized database is obviously the opposite. In Cassandra, any node in the cluster can handle any request. That node is called the coordinator for the request; it reads/writes data from/to the nodes responsible for that data before returning a result to the client.
Decentralization means that there should be no single point of failure in your application architecture. Such systems provide a deployment scheme in which no leader (or master) is elected during the service life-cycle; they often deliver services in a peer-to-peer fashion.
Replication simply means that your data is copied over to another server instance to ensure redundancy and failure tolerance. Client requests can still be served from the copies, but your system should ensure some level of consistency when making them.
Cassandra serves requests in a peer-to-peer fashion, meaning that clients can send requests to any node participating in the cluster. It also provides replication and tunable consistency.
MongoDB offers a master/slave deployment, so it is not considered decentralized. You can deploy multiple masters to ensure that requests can still be served if a master node goes down. It also provides replication out of the box.
Links
Cassandra's tunable consistency
MongoDB's master-slave configuration
Introduction to Cassandra's architecture
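To make the Cassandra side concrete, here is a small illustration with the DataStax Java driver (3.x syntax; host names, keyspace, and table are placeholders): any of the listed nodes can coordinate a request, and the consistency level is tuned per statement.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

/** Peer-to-peer access with a per-statement (tunable) consistency level. */
public class TunableConsistencyExample {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoints("cassandra-1", "cassandra-2", "cassandra-3") // any node may coordinate
                .build();
             Session session = cluster.connect("shop")) {

            // QUORUM: a majority of the replicas for this partition must answer the read.
            SimpleStatement stmt = new SimpleStatement("SELECT * FROM orders WHERE id = 42");
            stmt.setConsistencyLevel(ConsistencyLevel.QUORUM);

            for (Row row : session.execute(stmt)) {
                System.out.println(row);
            }
        }
    }
}
```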
I have a production sharded cluster of PostgreSQL machines where sharding is handled at the application layer. (Created records are assigned a system generated unique identifier - not a UUID - which includes a 0-255 value indicating the shard # that record lives on.) This cluster is replicated in RDS so large read queries can be executed against it.
I'm trying to figure out the best option for accessing this data within Spark.
I was thinking of creating a small dataset (a text file) that contains only the shard names, i.e. integration-shard-0, integration-shard-1, etc. Then I'd partition this dataset across the Spark cluster so that ideally each worker has only a single shard name (though I'd have to handle cases where a worker has more than one). Then, when creating a JdbcRDD, I'd actually create 1..n such RDDs, one for each shard name residing on that worker, and merge the resulting RDDs together, roughly like the sketch below.
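Here is roughly what I mean, sketched against the DataFrame JDBC reader just to illustrate the per-shard-then-merge idea; the shard host names, table, and credentials are made up.

```java
import java.util.stream.IntStream;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

/** Build one JDBC-backed Dataset per shard and merge them into a single Dataset. */
public class ShardedRead {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("sharded-read").getOrCreate();

        Dataset<Row> all = IntStream.range(0, 256)
                .mapToObj(shard -> spark.read()
                        .format("jdbc")
                        .option("url", "jdbc:postgresql://integration-shard-" + shard + ":5432/app")
                        .option("dbtable", "events")
                        .option("user", "reader")
                        .option("password", "secret")
                        .load())
                .reduce(Dataset::union) // merge the per-shard Datasets into one logical dataset
                .orElseThrow(() -> new IllegalStateException("no shards configured"));

        System.out.println(all.count());
    }
}
```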
This seems like it would work but before I go down this path I wanted to see how other people have solved similar problems.
(I also have a separate Cassandra cluster available as a second datacenter for analytic processing, which I will be accessing with Spark.)
I ended up writing my own ShardedJdbcRDD, a preliminary version of which can be found at the following gist:
https://gist.github.com/cfeduke/3bca88ed793ddf20ea6d
At the time I wrote it, this version didn't support use from Java, only Scala (I may update it). It also doesn't have the same sub-partitioning scheme that JdbcRDD has, for which I will eventually create an overloaded constructor. Basically, ShardedJdbcRDD will query your RDBMS shards across the cluster; if you have at least as many Spark slaves as shards, each slave will get one shard for its partition.
A future overloaded constructor will support the same range queries that JdbcRDD has, so if there are more Spark slaves in the cluster than shards, the data can be broken up into smaller sets through range queries.
I am using the phpcassa library to get and set data in Cassandra, which I have installed on two servers. I make the connection with my seed node using CassandraConn::add_node('..*.**', 9160), so an insertion automatically gets replicated to the other node in the cluster. But if my seed node dies (if I shut down its Cassandra process), then my insertions stop working and I am unable to get data from the other node either. So am I doing the right thing? Because this way there is no point in having a cluster; ideally, if one node dies, the other node should still respond to me. Any help will be appreciated.
Connect with RRDNS instead of a single host. http://en.wikipedia.org/wiki/Round-robin_DNS
(You can also use a load balancer but that is usually overkill here.)
Most Cassandra clients will let you directly specify multiple server addresses, and will try them in turn if one fails.
I haven't used phpcassa (only pycassa) but the API docs at http://thobbs.github.com/phpcassa/api/index.html seem to suggest that you can specify multiple servers.
Round-robin is another alternative as per the previous answer.