In a microservice architecture, we usually have two ways for two microservices to communicate. Let’s say service A needs to get information from service B. The first option is a remote call, usually synchronous over HTTPS, where service A queries an API hosted by service B.
The second option is adopting an event-driven architecture, where the state of service B is published and consumed by service A asynchronously. Using this model, service A updates its own database with the information from service B’s events, and all queries are made locally against that database. This approach has the advantage of better decoupling of microservices, from development through to operations. But it comes with some disadvantages related to data replication.
The first one is high disk-space consumption, since the same data can reside in the databases of every microservice that needs it. But the second one is worse, in my opinion: data can become stale if service B can’t process its subscription as fast as needed, and data may not be available to service A at the moment it’s created in service B, given the eventual consistency of the model.
Let’s say we’re using Kafka as an event hub, and its topics are configured with 7 days of data retention. Service A is kept in sync as service B publishes its state. After two weeks, a new service C is deployed and its database needs to be enriched with all the information that service B holds. We can only get partial information from the Kafka topics, since the oldest events are gone. My question is: what patterns can we use to achieve this database enrichment (besides asking service B to republish all its current state to the event hub)?
There are two options:
You can enable log compaction in Kafka for an individual topic. That keeps the most recent value for a given key, discarding older updates. This saves space and also retains more history than the normal time-based mode for a given retention period (see the first sketch below).
Assuming you take a backup of service B’s database on a daily basis: when the new service C is introduced, first create C’s initial state from the latest backup of B, then replay the Kafka topic events from the offset that corresponds to the moment the backup was taken (see the second sketch below).
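For the first option, compaction is a per-topic setting. A minimal sketch using Kafka's Java AdminClient; the topic name, partition count and broker address are illustrative assumptions:

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.config.TopicConfig;

    public class CompactedTopicSetup {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (Admin admin = Admin.create(props)) {
                // cleanup.policy=compact keeps the latest record per key instead
                // of deleting records purely by age.
                NewTopic topic = new NewTopic("service-b-state", 3, (short) 1)
                        .configs(Map.of(TopicConfig.CLEANUP_POLICY_CONFIG,
                                        TopicConfig.CLEANUP_POLICY_COMPACT));
                admin.createTopics(List.of(topic)).all().get();
            }
        }
    }

For the second option, the replay could look like the sketch below; the topic name, partition and the offset recorded at backup time are assumptions:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ReplayAfterBackup {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            long offsetAtBackup = 42_000L; // recorded when the backup of B was taken

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                TopicPartition partition = new TopicPartition("service-b-state", 0);
                consumer.assign(List.of(partition));
                // Start reading right after the last event covered by the backup;
                // repeat per partition in a real setup.
                consumer.seek(partition, offsetAtBackup + 1);
                // poll loop applying events to service C's database elided
            }
        }
    }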
Your concern is valid, but at the same time the microservices approach is give and take. You get loose coupling at the cost of an individual database for each service. There is no single right answer in microservices architecture; it really depends on what you are trying to achieve.
According to the CAP theorem you have to compromise between consistency and availability, and in most cases we go with eventual consistency. If your service A is not consistent with B, it eventually will be; that's the trade-off made in favour of availability.
Another thing regarding microservices is that you keep only a reference to data from another service, and maybe a very limited amount of the actual data, but definitely not much. And even that only if replicating the data makes your service independent and autonomous; if you can't achieve either of these even after replicating the data, then there is no point.

E.g. your shipping service will have the complete history of an order's transitions, but your booking service only holds the latest order status (e.g. in transit, on board, etc.). The user goes to booking and you show the current status of the order. If the user clicks for details, you fetch the full order transition history from the shipping microservice. Now, if at some point your shipping service goes down and the user comes to check the status, you at least have the current order status, even though you can't show the details, because the status is replicated in the booking service.
Regarding new services joining the system at a later stage, event sourcing is the pattern you use for this kind of scenario. It's a complex pattern, but it will bring your newly added services to the state you want them in. You basically save all your events in an event store and replay them to attain the current state of the system, pre-populating service C's database from those events.
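As a minimal illustration of the replay idea (not tied to any particular event store; all types and names here are hypothetical), rebuilding a service's state is essentially a fold over the stored events:

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ServiceCBootstrap {
        sealed interface Event permits CustomerCreated, CustomerAddressChanged {}
        record CustomerCreated(String id, String name, String address) implements Event {}
        record CustomerAddressChanged(String id, String newAddress) implements Event {}
        record Customer(String id, String name, String address) {}

        interface EventStore { List<Event> readAll(); }

        // Fold every stored event over an empty state to rebuild the view.
        static Map<String, Customer> replay(EventStore store) {
            Map<String, Customer> state = new ConcurrentHashMap<>();
            for (Event event : store.readAll()) {
                switch (event) {
                    case CustomerCreated e ->
                            state.put(e.id(), new Customer(e.id(), e.name(), e.address()));
                    case CustomerAddressChanged e ->
                            state.computeIfPresent(e.id(), (id, c) ->
                                    new Customer(c.id(), c.name(), e.newAddress()));
                }
            }
            return state;
        }
    }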
I have the following use case:
Assume you have two microservices, AccountManagement and ActivityReporting, that process event U.
When a user registers, an event U containing the user information will be published to a broker for the two microservices to process.
The AccountManagement and ActivityReporting microservices are each replicated across two instances for performance and scalability reasons.
Each microservice instance has a consumer listening on the broker topic. A topic is used so that both AccountManagement and ActivityReporting can process U concurrently.
However, I want only one instance of AccountManagement to process event U, and one instance of ActivityReporting to process event U.
Please share your experience implementing a consume-once-per-application-group broker system, as this would effectively solve the problem.
If all your consumer listeners, even those from different instances, share the same group.id property, then only one of them will receive any given message. You need to set this property when you initialise the consumer. So in your case you need one group.id for AccountManagement and another for ActivityReporting.
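For example, a minimal consumer sketch for one of the services (the topic name and broker address are illustrative); every AccountManagement instance is configured the same way, so the group shares the partitions and each event U is processed by exactly one member:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class AccountManagementConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // All AccountManagement instances share this group.id, so each
            // event U is delivered to exactly one instance of the group.
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "account-management");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("user-events"));
                // poll loop elided
            }
        }
    }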
I would recommend Cadence Workflow, which is a much more powerful solution for microservice orchestration.
It offers a lot of advantages over using queues for your use case.
Built-in exponential retries with unlimited expiration interval
Failure handling. For example, it allows executing a task that notifies another service if both updates couldn't succeed within a configured interval.
Support for long-running heartbeating operations
Ability to implement complex task dependencies, for example chaining of calls or compensation logic in case of unrecoverable failures (SAGA)
Complete visibility into the current state of the update. For example, when using queues all you know is whether there are messages in a queue, and you need an additional DB to track overall progress. With Cadence every event is recorded.
Ability to cancel an update in flight.
See the presentation that goes over the Cadence programming model.
Event-carried state transfer removes the need to make remote calls to query information from other services.
Let's assume a practical case:
We have a customer service that publishes CustomerCreated/CustomerUpdated events to a customer Kafka topic.
A shipping service listens to an order topic.
When an OrderCreated event is read by the shipping service, it needs access to the customer's address. Instead of making a REST call to the customer service, the shipping service already has the user information available locally; it is kept in a KTable/GlobalKTable with persistent storage.
My questions are about how we should implement this: we want this system to be resilient and scalable so there will be more than one instance of the customer and shipping services, meaning there will also be more than one partition for the customer and order topics.
We could end up in a scenario like this: an OrderCreated(orderId=1, userId=7, ...) event is read by a shipping service instance, but if it uses a KTable to keep and access local user information, userId=7 may not be there, because the partition that holds that userId could have been assigned to the other shipping service instance.
Offhand, this problem could be solved using a GlobalKTable, so that all shipping service instances have access to the whole range of customers.
Is this (GlobalKTable) the recommended approach to implement that pattern?
Is it a problem to replicate the whole customer dataset in every shipping service instance when the number of customers is very large?
Can this/should this case be implemented using KTable in some way?
You can solve this problem with either a GlobalKTable (GKTable) or a KTable. The former is replicated, so the whole table is available on every node (and uses more storage). The latter is partitioned, so the data is spread across the nodes. This has the side effect that, as you say, the partition that handles the userId may not also handle the corresponding customer. You solve this by repartitioning one of the streams so that they are co-partitioned.
So in your example you need to enrich Order events with Customer information in the Shipping Service. You can either:
a) Use a GlobalKTable of Customer information and join to that on each node
b) Use a KTable of Customer information and perform the same operation, but before doing the enrichment you must rekey using the selectKey() operator to ensure the data is co-partitioned (i.e. the same keys will be on the same node). You also need the same number of partitions in the Customer and Orders topics. A sketch of option (b) is shown below.
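Here is that sketch with Kafka Streams; the topic names and value types are illustrative, and Serde configuration is omitted:

    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;

    public class ShippingTopology {
        // Illustrative value types; a real service would configure matching
        // (e.g. Avro or JSON) Serdes, omitted here.
        record Customer(String customerId, String address) {}
        record Order(String orderId, String customerId) {}
        record EnrichedOrder(Order order, String shippingAddress) {}

        public static void build(StreamsBuilder builder) {
            KTable<String, Customer> customers = builder.table("customers");
            KStream<String, Order> orders = builder.stream("orders");

            orders
                // Orders arrive keyed by orderId; rekey by customerId so they
                // are co-partitioned with the customers table before the join.
                .selectKey((orderId, order) -> order.customerId())
                .join(customers, (order, customer) ->
                        new EnrichedOrder(order, customer.address()))
                .to("orders-enriched");
        }
    }

Option (a) would instead use builder.globalTable("customers") and the KStream join overload that takes a GlobalKTable plus a key-mapping function, with no repartitioning step.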
The Inventory Service Example in the Confluent Microservices Examples does something similar. It rekeys the stream of orders so they are partitioned by productId, then joins to a KTable of Inventory (also keyed by productId).
Regarding your individual questions:
Is GlobalKTable the recommended approach to implement that pattern?
Both work. The GKTable has a longer worst-case reload time if your service loses its storage for whatever reason. The KTable has slightly greater latency, as the data has to be repartitioned, which means writing it out to Kafka and reading it back again.
Is it a problem to replicate the whole customer dataset in every shipping service instance when the number of customers is very large?
The main difference is the aforementioned worst-case reload time. Technically, GKTables and KTables also have slightly different semantics (GKTables load fully on startup, KTables load incrementally based on event time), but that's not strictly relevant to this problem.
Can this/should this case be implemented using KTable in some way?
See above.
See also: Microservice Examples, Quick start, Blog Post.
We have a microservices architecture, with Kafka used as the communication mechanism between the services. Some of the services have their own databases. Say the user makes a call to Service A, which should result in a record (or set of records) being created in that service’s database. Additionally, this event should be reported to other services, as an item on a Kafka topic. What is the best way of ensuring that the database record(s) are only written if the Kafka topic is successfully updated (essentially creating a distributed transaction around the database update and the Kafka update)?
We are thinking of using spring-kafka (in a Spring Boot WebFlux service), and I can see that it has a KafkaTransactionManager, but from what I understand this is more about Kafka transactions themselves (ensuring consistency across the Kafka producers and consumers), rather than synchronising transactions across two systems (see here: “Kafka doesn't support XA and you have to deal with the possibility that the DB tx might commit while the Kafka tx rolls back.”). Additionally, I think this class relies on Spring’s transaction framework which, at least as far as I currently understand, is thread-bound, and won’t work if using a reactive approach (e.g. WebFlux) where different parts of an operation may execute on different threads. (We are using reactive-pg-client, so are manually handling transactions, rather than using Spring’s framework.)
Some options I can think of:
Don’t write the data to the database: only write it to Kafka. Then use a consumer (in Service A) to update the database. This might not be the most efficient approach, and has the problem that the service the user called cannot immediately see the database changes it should have just created.
Don’t write directly to Kafka: write to the database only, and use something like Debezium to report the change to Kafka. The problem here is that the changes are based on individual database records, whereas the business-significant event to store in Kafka might involve a combination of data from multiple tables.
Write to the database first (if that fails, do nothing and just throw the exception). Then, when writing to Kafka, assume that the write might fail. Use the built-in auto-retry functionality to get it to keep trying for a while. If that eventually completely fails, try to write to a dead letter queue and create some sort of manual mechanism for admins to sort it out. And if writing to the DLQ fails (i.e. Kafka is completely down), just log it some other way (e.g. to the database), and again create some sort of manual mechanism for admins to sort it out.
Anyone got any thoughts or advice on the above, or able to correct any mistakes in my assumptions above?
Thanks in advance!
I'd suggest using a slightly altered variant of approach 2.
Write to your database only, but in addition to the actual table writes, also write "events" into a special table within that same database; these event records contain the aggregations you need. In the simplest case, you'd just insert another entity, e.g. mapped via JPA, which contains a JSON property with the aggregate payload. Of course this could be automated by some kind of transaction listener / framework component.
Then use Debezium to capture the changes from just that table and stream them into Kafka. That way you get both: eventually consistent state in Kafka (the events in Kafka may trail behind, or you might see a few events a second time after a restart, but eventually they'll reflect the database state) without the need for distributed transactions, plus the business-level event semantics you're after.
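A sketch of such an event/outbox entity with JPA; the table and column names are illustrative:

    import java.time.Instant;
    import java.util.UUID;
    import jakarta.persistence.Entity;
    import jakarta.persistence.Id;
    import jakarta.persistence.Table;

    // Persisted in the same transaction as the actual business entities;
    // Debezium then streams rows of this table into Kafka.
    @Entity
    @Table(name = "outbox")
    public class OutboxEvent {
        @Id
        private UUID id = UUID.randomUUID();

        private String aggregateType;  // e.g. "Order" -- used to route to a topic
        private String aggregateId;    // e.g. the order id -- used as the Kafka key
        private String type;           // e.g. "OrderCreated"
        private Instant timestamp = Instant.now();

        // The aggregated payload as JSON, exactly as it should appear in Kafka.
        private String payload;

        protected OutboxEvent() {} // for JPA

        public OutboxEvent(String aggregateType, String aggregateId, String type, String payload) {
            this.aggregateType = aggregateType;
            this.aggregateId = aggregateId;
            this.type = type;
            this.payload = payload;
        }
    }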
(Disclaimer: I'm the lead of Debezium; funnily enough I'm just in the process of writing a blog post discussing this approach in more detail)
Here are the posts:
https://debezium.io/blog/2018/09/20/materializing-aggregate-views-with-hibernate-and-debezium/
https://debezium.io/blog/2019/02/19/reliable-microservices-data-exchange-with-the-outbox-pattern/
First of all, I have to say that I'm neither a Kafka nor a Spring expert, but I think this is more of a conceptual challenge when writing to independent resources, and the solution should be adaptable to your technology stack. Furthermore, this solution tries to solve the problem without an external component like Debezium, because in my opinion each additional component brings challenges in testing, maintaining and running an application, which is often underestimated when choosing such an option. Also, not every database can be used as a Debezium source.
To make sure we're talking about the same goals, let's clarify the situation with a simplified airline example where customers can buy tickets. After a successful order, the customer receives a message (mail, push notification, ...) sent by an external messaging system (the system we have to talk to).
In a traditional JMS world, with an XA transaction between our database (where we store orders) and the JMS provider, it would look like the following: the client submits the order to our app, where we start a transaction. The app stores the order in its database. Then the message is sent to JMS, and we commit the transaction. Both operations participate in the transaction even though they're talking to their own resources. As the XA transaction guarantees ACID, we're fine.
Now let's bring Kafka (or any other resource that cannot participate in an XA transaction) into the game. As there is no coordinator syncing both transactions any more, the main idea of what follows is to split processing into two parts with a persistent state.
When you store the order in your database, you can also store the message (with aggregated data) that you want to send to Kafka afterwards in the same database (e.g. as JSON in a CLOB column). Same resource, so ACID is guaranteed; everything is fine so far. Now you need a mechanism that polls this "KafkaTasks" table for new tasks to send to the Kafka topic (e.g. with a timer service; in Spring the @Scheduled annotation can be used). After the message has been successfully sent to Kafka, you delete the task entry. This ensures the message to Kafka is only sent when the order has also been successfully stored in the application database. Have we achieved the same guarantees as with an XA transaction? Unfortunately not, as there is still the chance that writing to Kafka succeeds but deleting the task fails. In that case the retry mechanism (you need one, as mentioned in your question) would reprocess the task and send the message twice. If your business case is happy with this at-least-once guarantee, you're done here, with an imho semi-complex solution that could easily be implemented as framework functionality so that not everyone has to bother with the details.
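A sketch of that polling mechanism in Spring (assuming scheduling is enabled via @EnableScheduling); KafkaTask and KafkaTaskRepository are illustrative stand-ins for access to the "KafkaTasks" table:

    import java.util.List;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    @Component
    public class KafkaTaskPoller {

        // Illustrative task record and repository over the "KafkaTasks" table.
        public record KafkaTask(long id, String topic, String key, String payload) {}

        public interface KafkaTaskRepository {
            List<KafkaTask> findPending();
            void delete(KafkaTask task);
        }

        private final KafkaTaskRepository repository;
        private final KafkaTemplate<String, String> kafkaTemplate;

        public KafkaTaskPoller(KafkaTaskRepository repository,
                               KafkaTemplate<String, String> kafkaTemplate) {
            this.repository = repository;
            this.kafkaTemplate = kafkaTemplate;
        }

        @Scheduled(fixedDelay = 1000)
        public void publishPendingTasks() {
            for (KafkaTask task : repository.findPending()) {
                // Block until the broker acknowledges the send (spring-kafka 3.x
                // returns a CompletableFuture), then delete the task. If the
                // delete fails after a successful send, the task is re-sent on
                // the next run: at-least-once, as described above.
                kafkaTemplate.send(task.topic(), task.key(), task.payload()).join();
                repository.delete(task);
            }
        }
    }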
If you need “exactly-once”, then you cannot store your state in the application database (here, “deletion of a task” is the state); instead you must store it in Kafka (assuming you have ACID guarantees between two Kafka topics, i.e. Kafka transactions). An example: let’s say you have 100 tasks in the table (IDs 1 to 100) and the task job processes the first 10. You write your Kafka messages to their topic, plus another message with the ID 10 to “your” topic, all in the same Kafka transaction. In the next cycle you consume your topic (the value is 10) and take that value to fetch the next 10 tasks (and delete the already-processed ones).
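A sketch of that cycle's publishing side using Kafka transactions; the topic names and payloads are illustrative:

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    // The business messages and the progress marker ("last processed task id")
    // are written in one Kafka transaction, so they commit or abort together.
    public class ExactlyOnceTaskPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "task-publisher-1");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.initTransactions();
                // The next batch of tasks read from the table (IDs 1..10 in the example).
                List<String> payloads = List.of("task-1", "task-2");
                producer.beginTransaction();
                try {
                    for (String payload : payloads) {
                        producer.send(new ProducerRecord<>("business-topic", payload));
                    }
                    // Progress marker: "tasks up to ID 10 have been published".
                    producer.send(new ProducerRecord<>("task-progress", "10"));
                    producer.commitTransaction();
                } catch (Exception e) {
                    producer.abortTransaction(); // fatal errors would require closing the producer
                }
            }
        }
    }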
If there are easier (in-application) solutions with the same guarantees, I'm looking forward to hearing from you!
Sorry for the long answer but I hope it helps.
The approaches described above are good ways to tackle the problem and are well-defined patterns. You can explore them via the links below.
Pattern: Transactional outbox
Publish an event or message as part of a database transaction by saving it in an OUTBOX in the database.
http://microservices.io/patterns/data/transactional-outbox.html
Pattern: Polling publisher
Publish messages by polling the outbox in the database.
http://microservices.io/patterns/data/polling-publisher.html
Pattern: Transaction log tailing
Publish changes made to the database by tailing the transaction log.
http://microservices.io/patterns/data/transaction-log-tailing.html
Debezium is a valid answer, but (as I've experienced) it can carry the extra overhead of running an extra pod and making sure that pod doesn't fall over. This could just be me griping about a few back-to-back instances where pods OOM-errored and didn't come back up, networking rule rollouts dropped some messages, or WAL access to an AWS Aurora DB started behaving oddly... It seems that everything that could have gone wrong did. I'm not saying Debezium is bad; it's fantastically stable. But for devs, running it often becomes a networking skill rather than a coding skill.
As a KISS solution, using normal coding techniques that will work 99.99% of the time (and inform you of the 0.01%), you could do:
    Start transaction
    Sync save to DB
      -> if it fails, bail out.
    Async send message to Kafka.
    Block until the topic reports that it has received the message.
      -> if it times out or fails, abort the transaction.
      -> if it succeeds, commit the transaction.
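In Spring terms this could look roughly like the following sketch; Order and OrderRepository are illustrative:

    import java.util.concurrent.TimeUnit;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class OrderService {

        // Illustrative domain types.
        public record Order(String id, String payloadJson) {}
        public interface OrderRepository { void save(Order order); }

        private final OrderRepository repository;
        private final KafkaTemplate<String, String> kafkaTemplate;

        public OrderService(OrderRepository repository, KafkaTemplate<String, String> kafkaTemplate) {
            this.repository = repository;
            this.kafkaTemplate = kafkaTemplate;
        }

        @Transactional // any exception below rolls the DB write back
        public void createOrder(Order order) throws Exception {
            repository.save(order); // synchronous save; throws on failure -> bail out

            // Asynchronous send, but block until the broker acknowledges or we
            // time out; a timeout or failure aborts the surrounding transaction.
            kafkaTemplate.send("orders", order.id(), order.payloadJson())
                         .get(5, TimeUnit.SECONDS);
        }
        // Caveat: if the commit itself fails after the Kafka ack, the message
        // is already out -- that's part of the 0.01% mentioned above.
    }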
I'd suggest using a new approach: the 2-phase message. With this approach much less code is needed, and you don't need Debezium any more.
https://betterprogramming.pub/an-alternative-to-outbox-pattern-7564562843ae
For this new approach, what you need to do is:
When writing to your database, write an event record to an auxiliary table.
Submit a 2-phase message to DTM.
Write a service that queries whether an event is saved in the auxiliary table.
With the help of the DTM SDK, you can accomplish the above 3 steps with 8 lines in Go, much less code than other solutions:
    // Register a 2-phase message with DTM: the TransIn call is only delivered
    // if the local transaction below commits.
    msg := dtmcli.NewMsg(DtmServer, gid).
    	Add(busi.Busi+"/TransIn", &TransReq{Amount: 30})
    // Execute the local transaction and submit the message atomically.
    err := msg.DoAndSubmitDB(busi.Busi+"/QueryPrepared", db, func(tx *sql.Tx) error {
    	return AdjustBalance(tx, busi.TransOutUID, -req.Amount)
    })

    // Back-check endpoint DTM calls to find out whether the local transaction
    // committed (i.e. whether the event record exists in the auxiliary table).
    app.GET(BusiAPI+"/QueryPrepared", dtmutil.WrapHandler2(func(c *gin.Context) interface{} {
    	return MustBarrierFromGin(c).QueryPrepared(db)
    }))
Each of your original options has its disadvantage:
The user cannot immediately see the database changes it has just created.
Debezium captures the database log, which may be much larger than the events you want. Deployment and maintenance of Debezium is also not an easy job.
"Built-in auto-retry functionality" is not cheap; it may require a lot of code or maintenance effort.
I understand locking is scoped per transaction for IReliableQueue in Service Fabric. I have a requirement where, once the data is read from the ReliableQueue within a transaction, I need to pass the data back to my client and preserve the lock on that data for a certain duration; if processing fails in the client, the data should be written back to the queue (preferably at the head, so that it is picked up first in the next iteration).
Service Fabric doesn't support this. I recommend you look into using an external queuing mechanism for this. For example, Azure Service Bus Queues provides the functionality you describe.
You can use this package to receive SB messages within your services.
preserve the lock on that data for a certain duration
We have done this once or twice in other contexts, with success, using modifiable lists and a document field LockedUntillUtc (initialized to a minimum value or null), or using a separate reliable collection of locked keys (sorted on LockedUntillUtc?). Whichever best suits your needs.
If you can't trust your clients to adhere to such a lock-request and write/unlock-request contract, consider an ETag pattern: a token only returned on a successful lock request...
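To illustrate the lease-plus-token idea outside Service Fabric (an in-memory Java sketch; all names are hypothetical), where the returned token plays the ETag role:

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Map;
    import java.util.Optional;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    public class LeaseStore<T> {

        private record Entry<T>(T value, Instant lockedUntilUtc, UUID token) {}

        private final Map<String, Entry<T>> items = new ConcurrentHashMap<>();

        public synchronized void put(String key, T value) {
            items.put(key, new Entry<>(value, Instant.MIN, new UUID(0, 0)));
        }

        /** Lease an item for the given duration; empty if absent or still locked. */
        public synchronized Optional<UUID> tryLease(String key, Duration lease) {
            Entry<T> entry = items.get(key);
            if (entry == null || entry.lockedUntilUtc().isAfter(Instant.now())) {
                return Optional.empty();
            }
            UUID token = UUID.randomUUID(); // plays the ETag role
            items.put(key, new Entry<>(entry.value(), Instant.now().plus(lease), token));
            return Optional.of(token);
        }

        /** Unlock only if the caller still holds the current token. */
        public synchronized boolean unlock(String key, UUID token) {
            Entry<T> entry = items.get(key);
            if (entry == null || !token.equals(entry.token())) {
                return false; // the lease expired and may have been re-granted
            }
            items.put(key, new Entry<>(entry.value(), Instant.MIN, new UUID(0, 0)));
            return true;
        }
    }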
I have a service A and a service B.
Service A is a REST API that stores some relevant information, which service B needs, in a database.
Service B handles a lot of traffic and is constantly consuming messages from a Kafka topic. Each message needs some information from service A, but this information rarely changes: at most once a day.
So, in order to avoid hitting the REST API constantly for information that rarely changes, I'm going to implement a cache. (Not using a cache would also mean querying the DB all the time.) Service B will hit the cache first, and only if it doesn't have the required data will it hit A.
Here comes the question.
If service A updates its information, I need the cache to be updated right away.
What is the best way of doing this?
1) I can implement something in the REST API to let B know that it needs to update its cache, but in terms of separation of concerns and encapsulation, isn't it bad that A knows B keeps an internal cache? (I think it is.)
2) I can implement polling in B (making B check whether the info changed every X amount of time), or have the cache entries expire after X time. But this way I risk not getting the updated information right away.
3) Maybe a cache in A for this information? At least I avoid querying the DB, though not hitting the API. :/
Is there a better way of handling this?
Thanks!
This is a question of consistency guarantees and it is a core issue in distributed systems.
Your scenario contains three services: A, B and the database.
If B must never ever under any circumstances use stale data, then you have two options:
All reads hit the database (no caching at A or B). Built-in mechanisms at the database, such as its internal cache, disk cache and RAID mirroring, might relieve some of the disk I/O bottleneck.
Cache the data at A (or B) and enforce strong consistency between the cache and the database, which means that every write would be done inside a distributed transaction between the database and the cache (or by using some other consensus protocol which provides strong consistency guarantees)
The first option requires no effort and works fine for a certain workload, but becomes a serious bottleneck if the data ingress at B requires more throughput than the database can sustain.
The second option is quite complex to implement, would slow down data changes, complicate the system and hurt its overall availability: if A goes down, then data cannot be changed at the database; if A goes down amidst a transaction, the data won't be available for reading from the database (!)
The good news is that most systems don't need such strong consistency guarantees, and they're OK with using stale data occasionally, under specific circumstances.
If this is the case for your system, then there are several ways of invalidating the cache. Personally I'd go with Jose Martinez's suggestion to use a message queueing system combined with the publish/subscribe pattern: service A would publish a "data changed" message to the pub/sub system (the message would include information on exactly which data item changed), and service B would subscribe and process "data changed" messages, invalidating its cache entries as they arrive.
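A sketch of the subscriber side in service B, assuming the invalidation messages arrive on a Kafka topic keyed by the changed item (topic and names are illustrative; any pub/sub-capable broker works the same way):

    import java.time.Duration;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class CacheInvalidator {

        private final Map<String, String> cache = new ConcurrentHashMap<>();

        public void run() {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Unique group.id per instance: every instance of B must see every
            // invalidation (contrast with work-sharing consumer groups).
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "service-b-cache-" + UUID.randomUUID());
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("data-changed"));
                while (true) {
                    for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                        // Evict the stale entry; the next read falls through to A.
                        cache.remove(record.key());
                    }
                }
            }
        }
    }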
Additional points:
Caching inside B might seem like it can provide strong consistency at first, but the truth is you might need to scale B, so you'll have multiple instances of B, each with its own cache that needs to be invalidated and synchronized.
You could use a whole other service for holding the cached data (Redis, Memcached, etc.), which would allow you to split the responsibilities over the cached data (A could update it and B could read from it directly), but it won't change the essence of the consistency dilemma.
Adding a third bullet point to #CapnSchwenk's answer...
Have A submit all changes to a message queue, like RabbitMQ. The message queue can handle persistence (in case B is down) and provides a publish/subscribe implementation. The queue message can also contain the new data, so that B doesn't have to query A for it.
Based on this statement: "If service A updates its information, I need the cache to be updated right away", your two choices in my experience would be some form of distributed cache:
Have service A provide a listener mechanism that service B can subscribe to, so B can invalidate its own internal cache when data changes;
Implement a distributed caching layer such as Ehcache or Memcached that both services A and B are aware of; when service A updates a value, it writes the new value into the cache and all subscribers are automatically updated
Hope that helps!