I'm using a PESSIMISTIC_WRITE lock on my repository method, so it locks my object until the end of the transaction. However, I've run into a problem. Within one endpoint (controller -> service) I start a transaction, then I need to update my object and send a message to Kafka, and after that, within the same method, I need to update my object again and send another Kafka message. Because it's all one transaction, the changes only live locally in the persistence context; but what I need is: save to the database, then send to Kafka, then change my object again, save it to the database, and send another Kafka message. I can't use REQUIRES_NEW to create a new transaction in any way, because my object is already locked. So how can I deal with this?
This lock is used in many parts of my project to guard against parallel transactions.
You should create a new service that orchestrates the flow. That way you will be able to obtain the same pessimistic lock again in the second operation.
@Service
class OrchestratorService {

    ...

    void executeFlow() {
        someService.executeFirstOperationAndSendKafkaEvent();
        someService.executeSecondOperationAndSendKafkaEvent();
    }
}
@Service
class SomeService {

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    void executeFirstOperationAndSendKafkaEvent() {
        // any lock obtained inside this method is released once the method (and its transaction) finishes
        ...
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    void executeSecondOperationAndSendKafkaEvent() {
        // any lock obtained inside this method is released once the method (and its transaction) finishes
        ...
    }
}
There is one more important aspect worth mentioning: sending the Kafka event is not transactional. @Transactional only guarantees that changes made to the datasource (in this case the DB) are transactional. Hence the following scenarios are possible:
if the event is sent inside the transaction scope, the transaction can be rolled back after the Kafka event has already been sent successfully
if the event is sent after the transaction commit, sending may fail even though the transaction committed successfully
Because of this, it's good to split the process into a few phases (a sketch follows the list):
apply the business changes in the DB and, in the same transaction, store a flag in the DB saying that a Kafka event should be sent but hasn't been yet,
outside the TX scope, send the event to Kafka,
in a new TX, update the flag to record that the event has been sent, or schedule a retry if there was an error during sending.
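A minimal sketch of that three-phase flow in Spring, for illustration only; the entity, repository and method names (Order, OrderRepository, findByIdForUpdate, setKafkaEventPending) are hypothetical placeholders, not taken from the original code:

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
class OutboxStyleFlow {

    private final OrderRepository repository;           // hypothetical JPA repository
    private final KafkaTemplate<String, String> kafka;  // plain Spring Kafka producer

    OutboxStyleFlow(OrderRepository repository, KafkaTemplate<String, String> kafka) {
        this.repository = repository;
        this.kafka = kafka;
    }

    // Phase 1: business change plus "event pending" flag, committed atomically.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void applyChangesAndMarkPending(long orderId) {
        Order order = repository.findByIdForUpdate(orderId); // PESSIMISTIC_WRITE held only inside this TX
        order.applyBusinessChange();
        order.setKafkaEventPending(true);
        repository.save(order);
    } // lock released here, when this transaction commits

    // Phase 2: outside any DB transaction, send the event.
    public void sendEvent(long orderId) {
        kafka.send("orders", String.valueOf(orderId));
    }

    // Phase 3: new transaction, clear the flag (or schedule a retry if sending failed).
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void markEventSent(long orderId) {
        Order order = repository.findByIdForUpdate(orderId);
        order.setKafkaEventPending(false);
        repository.save(order);
    }
}

Each phase runs (or deliberately does not run) in its own transaction, so the pessimistic lock is only held for the duration of phases 1 and 3.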
I am using KafkaProducer from the kafka-clients 1.0.0 library, and according to the documentation the method Future<RecordMetadata> send(ProducerRecord<K, V> record) should return immediately, but in practice it does not always do so. send() calls another method, doSend (see the snippet below), in the same class, and inside that method it waits for the metadata of the topic, which I suppose is necessary since it relates to partitions and so on.
/**
* Implementation of asynchronously send a record to a topic.
*/
private Future<RecordMetadata> doSend(ProducerRecord<K, V> record, Callback callback) {
TopicPartition tp = null;
try {
// first make sure the metadata for the topic is available
ClusterAndWaitTime clusterAndWaitTime = waitOnMetadata(record.topic(), record.partition(), maxBlockTimeMs);
long remainingWaitMs = Math.max(0, maxBlockTimeMs - clusterAndWaitTime.waitedOnMetadataMs);
Cluster cluster = clusterAndWaitTime.cluster;
Are there any other options to make it fully asynchronous? The reason I want this is that if some of the servers in bootstrap.servers are not responding, send() will block for up to max.block.ms, but I don't want it to wait at all; I just want it to return.
The documentation where I saw that it is supposed to return immediately:
KafkaProducer java doc
The send is asynchronous and this method will return immediately once
the record has been stored in the buffer of records waiting to be
sent. This allows sending many records in parallel without blocking to
wait for the response after each one.
Your analysis is correct - Kafka has a (sometimes) blocking "non-blocking" API.
This has been brought up before - https://cwiki.apache.org/confluence/display/KAFKA/KIP-286%3A+producer.send%28%29+should+not+block+on+metadata+update - but never prioritized.
It's as asynchronous as it can be. Kafka maintains a cache of metadata that gets updated occasionally to keep it current, and in your scenario you only wait if that cache is stale or not initialized. Once the cache is initialized there's no wait.
If your code has a single upcoming send() that must be executed as quickly as possible, you might try making a preparatory partitionsFor() call on the producer to force an update of the cache if needed.
Aside from that, there will always be the potential, occasional wait for the metadata cache to be refreshed.
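For illustration, a rough sketch of that warm-up idea with the plain kafka-clients producer; the broker address, topic name and max.block.ms value are placeholders:

import java.util.Properties;
import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class MetadataWarmup {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("max.block.ms", "1000"); // cap how long send()/partitionsFor() may block on metadata

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // Warm the metadata cache up front; this call may block for up to max.block.ms,
        // but once it succeeds, later send() calls for this topic should not block on
        // metadata as long as the cache stays fresh.
        producer.partitionsFor("my-topic");

        // On the hot path, send() now returns quickly with a Future.
        Future<RecordMetadata> future =
                producer.send(new ProducerRecord<>("my-topic", "key", "value"));

        producer.close();
    }
}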
I have a stateless EJB which inserts data into a database, sends a response immediately, and in the last step calls an asynchronous EJB. The asynchronous EJB can run for a long time (I mean 5-10 minutes, which is longer than the JPA transaction timeout). The asynchronous EJB needs to read (and work on) the same record tree (read only) as the one persisted by the stateless EJB.
It seems that the asynchronous bean tries to read the record tree before it has been committed/inserted (JPA) by the stateless EJB, so the record tree is not visible to the async bean.
Stateless EJB:
@Stateless
public class ReceiverBean {

    public void receiverOfIncomingRequest(Data data) {
        long id = persistRequest(data);
        sendResponseToJmsBasedOnIncomingData(data);
        processorAsyncBean.calculate(id);
    }
}
Asynchronous EJB:
@Stateless
public class ProcessorAsyncBean {

    @Asynchronous
    public void calculate(long id) {
        Data data = dao.getById(id); // <- data is ALWAYS null here!
        // the following method is going to send
        // data to an external system via the internet (TCP/IP)
        Result result = doSomethingForLongWithData(data);
        updateData(id, result);
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void updateData(long id, Result result) {
        dao.update(id, result);
    }
}
Maybe I could use a JMS queue to send a signal with the ID to the processor bean instead of calling the async EJB (and have a message-driven bean read the data from the database), but I want to avoid that if possible.
Another solution could be to pass the whole record tree as a detached JPA object to the processor async EJB instead of reading the data back from the database.
Can I make async EJB work well in this structure somehow?
-- UPDATE --
I was thinking about using WebLogic JMS. There is another issue here. Under heavy load, when there are 100 000 or more entries in the queue (which will be normal) and there is no internet connection, all of the data in the queue will fail. If an exception (or any error) occurs while sending the data over the internet (in the doSomethingForLongWithData method), the message is rolled back to the original queue according to WebLogic's redelivery-limit and redelivery settings. This rollback behaviour can generate 100 000 or more redelivery threads on the managed server, and that many background processes can kill, or at least slow down, the server.
I could use IBM MQ as well, because we have an MQ infrastructure. MQ does not have this kind of effect on the WebLogic server, but MQ has no redelivery-limit or delay function, so in case of an error (rollback) the message reappears on the queue immediately, without delay, and I have effectively built a busy retry loop. Thread.sleep() in the catch block is not a solution in an EE application, I guess...
It seems that the asynchronous bean tries to read the record tree before it has been committed/inserted (JPA) by the stateless EJB, so the record tree is not visible to the async bean.
This is expected behavior: you are starting the asynchronous EJB from an EJB that has its own transaction context, and the asynchronous EJB never uses the caller's transaction context (see EJB spec 4.5.3).
As long as you are not using the transaction isolation level "read uncommitted" in your persistence layer, you won't see the caller's not-yet-committed data.
You must also think about the case when the async job doesn't commit (e.g. application server shutdown or abnormal termination). Are the subsequent calculation and update critical? Is the asynchronous process recoverable if it is not executed successfully, or not even called?
You could use bean-managed transactions and commit before calling the asynchronous EJB. Or you could delegate the data update to another EJB with a new transaction context, so it is committed before the asynchronous EJB is called (see the sketch below). This is usually OK for uncritical work where an occasional missing or failed execution is acceptable.
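A rough sketch of the "delegate the persist to a new transaction context" option, assuming container-managed transactions; PersistBean and the placeholder persist call are made up for illustration:

import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class PersistBean {

    // Runs and commits in its own transaction, so the data is already
    // in the database when this method returns to the caller.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public long persistRequest(Data data) {
        // return dao.persist(data); // placeholder for the real persist call
        return 0L;
    }
}

// modified ReceiverBean
@Stateless
public class ReceiverBean {

    @EJB
    private PersistBean persistBean;

    @EJB
    private ProcessorAsyncBean processorAsyncBean;

    public void receiverOfIncomingRequest(Data data) {
        long id = persistBean.persistRequest(data); // committed at this point
        sendResponseToJmsBasedOnIncomingData(data);
        processorAsyncBean.calculate(id);           // can now read the committed record tree
    }

    private void sendResponseToJmsBasedOnIncomingData(Data data) { /* ... */ }
}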
Using persistent and transactional JMS messages together with a dead letter queue has the advantage of reliable processing of your calculation and update, even if the application server is stopped or started in between or there are temporary errors during processing.
You just need to call the async method after the one carrying the transaction markup, i.e. once that transaction has been committed.
For example, the caller of the receiverOfIncomingRequest() method could add a
processorAsyncBean.calculate(id);
call next to it.
UPDATE: extended example
CallerMDB
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public void onMessage(Message message) {
    long id = receiverBean.receiverOfIncomingRequest(data);
    processorAsyncBean.calculate(id);
}
ReceiverBean
@TransactionAttribute(TransactionAttributeType.REQUIRED)
public long receiverOfIncomingRequest(Data data) {
    long id = persistRequest(data);
    sendResponseToJmsBasedOnIncomingData(data);
    return id;
}
All the samples usually demonstrate some sort of change to reliable collections with CommitAsync(), or a rollback in case of a failure. My code uses TryRemoveAsync(), so failure is not a concern (it will be retried later).
Is there a significant downside to invoking tx.CommitAsync() when no changes to reliable collections were performed?
Whenever you open a transaction and execute commands against a collection, these commands acquire locks in the TStore (collection) and are recorded in the transaction's temporary dictionary (change tracking) as well as in the transaction logs; the replicator then forwards these changes to the replicas.
Once you execute tx.CommitAsync(), the temporary records are saved to disk, the transaction is registered in the logs and then replicated to the secondary replicas, which also commit and save it to disk, and then the locks are released.
If the collection is not modified, the transaction won't have anything to save or replicate and will just be closed.
If you don't call tx.CommitAsync() after the operation, the transaction is aborted, any pending operations (if any) are discarded, and the abort is written to the logs to notify the other replicas.
In both cases, commit and abort, logs are generated (and replicated). The one detail I am not sure about is whether these logs are also generated when no changes are in place; I assume they are. Regarding performance: the act of reading or attempting to change a collection acquires locks that need to be released with a commit or an abort, and I think that is the biggest impact on your code, because those locks prevent other threads from modifying the collection until you complete the transaction. Given that, I wouldn't be too worried about committing an empty transaction.
// Create a new Transaction object for this partition
using (ITransaction tx = base.StateManager.CreateTransaction()) {
//modify the collection
await m_dic.AddAsync(tx, key, value, cancellationToken);
// CommitAsync sends Commit record to log & secondary replicas
// After quorum responds, all locks released
await tx.CommitAsync();
} // If CommitAsync not called, this line will Dispose the transaction and discard the changes
You can find most of these details in this documentation.
If you really want to go deep into the implementation details to answer this question, I suggest you dig into the source code for the replicator here.
I am trying to implement an event-driven architecture to handle distributed transactions. Each service has its own database and uses Kafka to send messages to inform the other microservices about its operations.
An example:
Order Service ------> | Kafka | ------> Payment Service
      |                                       |
Orders MariaDB DB                    Payment MariaDB DB
The Order service receives an order request. It has to store the new order in its DB and publish a message so that the Payment service realizes it has to charge for the item:
private OrderBusiness orderBusiness;

@PostMapping
public Order createOrder(@RequestBody Order order) {
    logger.debug("createOrder()");
    // a.- Save the order in the DB
    orderBusiness.createOrder(order);
    // b.- Publish in the topic so that the Payment service charges for the item.
    try {
        orderSource.output().send(MessageBuilder.withPayload(order).build());
    } catch (Exception e) {
        logger.error("{}", e);
    }
    return order;
}
These are my doubts:
Steps a.- (save in Order DB) and b.- (publish the message) should be performed in a transaction, atomically. How can I achieve that?
This is related to the previous one: I send the message with orderSource.output().send(MessageBuilder.withPayload(order).build()). This operation is asynchronous and ALWAYS returns true, no matter whether the Kafka broker is down. How can I know that the message has reached the Kafka broker?
Steps a.- (save in Order DB) and b.- (publish the message) should be
performed in a transaction, atomically. How can I achieve that?
Kafka currently does not support transactions (and thus also no rollback or commit), which you would need to synchronize something like this. So in short: you can't do what you want to do. This will change in the near-ish future, when KIP-98 is merged, but that might take some time yet. Also, even with transactions in Kafka, an atomic transaction across two systems is a very hard thing to do; everything that follows can only be improved upon by transactional support in Kafka, and it will still not entirely solve your issue. For that you would need to look into implementing some form of two-phase commit across your systems.
You can get somewhat close by configuring producer properties, but in the end you will have to choose between at least once and at most once for one of your systems (MariaDB or Kafka).
Let's start with what you can do in Kafka to ensure delivery of a message, and further down we'll dive into your options for the overall process flow and what the consequences are.
Guaranteed delivery
You can configure how many brokers have to confirm receipt of your messages before the request is returned to you, with the parameter acks: by setting this to all you tell the broker to wait until all replicas have acknowledged your message before returning an answer to you. This is still no 100% guarantee that your message will not be lost, since it may only have been written to the page cache, and there are theoretical scenarios with a broker failing before it is persisted to disk where the message might still be lost. But this is as good a guarantee as you are going to get.
You can further reduce the risk of data loss by lowering the interval at which brokers force an fsync to disk (flush.messages and/or flush.ms), but please be aware that these values can bring heavy performance penalties with them.
In addition to these settings you will need to wait for your Kafka producer to return the response for your request to you and check whether an exception occurred. This sort of ties into the second part of your question, so I will go into that further down.
If the response is clean, you can be as sure as possible that your data got to Kafka and start worrying about MariaDB.
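For reference, a minimal producer configuration along those lines; the broker list and retry count are placeholders, not taken from the question:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class DurableProducerFactory {
    public static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder brokers
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");   // wait for all in-sync replicas to acknowledge the write
        props.put("retries", "3");  // retry transient send failures before giving up
        return new KafkaProducer<>(props);
    }
}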
Everything we have covered so far only addresses how to ensure that Kafka got your messages, but you also need to write data into MariaDB, and this can fail as well, which would make it necessary to recall a message you potentially already sent to Kafka - and this you can't do.
So basically you need to choose one system in which you are better able to deal with duplicates/missing values (depending on whether or not you resend partial failures) and that will influence the order you do things in.
Option 1
In this option you initialize a transaction in MariaDB, then send the message to Kafka, wait for a response and if the send was successful you commit the transaction in MariaDB. Should sending to Kafka fail, you can rollback your transaction in MariaDB and everything is dandy.
If, however, sending to Kafka is successful and your commit to MariaDB fails for some reason, then there is no way of getting the message back from Kafka. So you will either be missing a message in MariaDB, or have a duplicate message in Kafka if you resend everything later on.
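A rough sketch of Option 1, assuming plain JDBC against MariaDB and an already configured Kafka producer; the connection URL, table and topic names are illustrative only:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class Option1Flow {

    public void createOrder(KafkaProducer<String, String> producer, String orderJson) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:mariadb://localhost/orders", "user", "pass")) {
            conn.setAutoCommit(false); // open the MariaDB transaction
            try (PreparedStatement ps = conn.prepareStatement("INSERT INTO orders(payload) VALUES (?)")) {
                ps.setString(1, orderJson);
                ps.executeUpdate();

                // Block for the Kafka result; get() throws if the send failed.
                producer.send(new ProducerRecord<>("orders", orderJson)).get();

                conn.commit();   // Kafka accepted the message, so commit MariaDB
            } catch (Exception e) {
                conn.rollback(); // the insert or the send failed, undo the DB change
                throw e;
            }
        }
    }
}

Note that this still has the failure mode described above: if the MariaDB commit itself fails after the Kafka send has succeeded, the message cannot be taken back.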
Option 2
This is pretty much just the other way around, but you are probably better able to delete a message that was written in MariaDB, depending on your data model.
Of course you can mitigate both approaches by keeping track of failed sends and retrying just these later on, but all of that is more of a bandaid on the bigger issue.
Personally I'd go with approach 1, since the chance of a commit failing should be somewhat smaller than that of the send itself, and implement some sort of dupe check on the consuming side of Kafka.
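To illustrate the dupe check, a minimal consumer-side sketch with a recent kafka-clients version, assuming each message key carries a unique order id; topic, group and broker names are placeholders, and in a real service the set of processed ids would live in the service's database rather than in memory:

import java.time.Duration;
import java.util.Collections;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class DedupingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "payment-service");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Set<String> processedOrderIds = new HashSet<>(); // in-memory only for this sketch

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    if (!processedOrderIds.add(record.key())) {
                        continue; // already handled this order id, skip the duplicate
                    }
                    // ... charge for the order ...
                }
            }
        }
    }
}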
This is related to the previous one: I send the message with:
orderSource.output().send(MessageBuilder.withPayload(order).build());
This operation is asynchronous and ALWAYS returns true, no matter if
the Kafka broker is down. How can I know that the message has reached
the Kafka broker?
Now first off, I'll admit I am unfamiliar with Spring, so this may not be of use to you, but the following code snippet illustrates one way of checking produce responses for exceptions.
By calling flush you block until all sends have finished (and either failed or succeeded) and then check the results.
Producer<String, String> producer = new KafkaProducer<>(myConfig);
final ArrayList<Exception> exceptionList = new ArrayList<>();

for (MessageType message : messages) {
    producer.send(new ProducerRecord<String, String>("myTopic", message.getKey(), message.getValue()), new Callback() {
        @Override
        public void onCompletion(RecordMetadata metadata, Exception exception) {
            if (exception != null) {
                exceptionList.add(exception);
            }
        }
    });
}

producer.flush();

if (!exceptionList.isEmpty()) {
    // do stuff
}
I think the proper way of implementing event sourcing is to have Kafka filled directly with events pushed by a plugin that reads the RDBMS binlog, e.g. using Confluent Bottled Water (https://www.confluent.io/blog/bottled-water-real-time-integration-of-postgresql-and-kafka/) or the more active Debezium (http://debezium.io/). The consuming microservices can then listen to those events, consume them and act on their respective databases, which end up eventually consistent with the RDBMS database.
Have a look at my full answer for a guideline:
https://stackoverflow.com/a/43607887/986160
I am trying to implement a method in Scala that performs a couple of database updates using Slick (in the same DB transaction) and then sends several Akka messages. Both sending the messages and the DB updates should be atomic. In the JEE world this happens pretty much transparently, with JMS and the DB (JPA, for instance) participating in the same transaction coordinated by JTA. How do I achieve this with Akka and Slick? Examples would be very beneficial.
To continue the discussion from the comments, here is how I see the solution to your problem:
Start with a main actor which performs the DB interaction. E.g. on a Start message it updates the database using Slick, saves the Connection in an instance variable of the actor, and sends messages to its child actors. Those actors have to send a confirmation back to the main actor, for example a ConfirmTransaction message. In reaction to that message you perform a commit on the previously saved Connection and close it (or release it to the pool). The main actor also has to supervise the child actors: if a child actor fails after the message was sent (or a timeout occurs), you have to roll back the transaction via the saved Connection (a sketch follows).
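Purely as an illustration of that flow, here is a rough sketch using Akka's classic Java API (recent Akka versions) and plain JDBC in place of Slick; the actor names, messages, timeout and connection URL are all made up:

import java.sql.Connection;
import java.sql.DriverManager;
import java.time.Duration;
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.actor.ReceiveTimeout;

public class MainActor extends AbstractActor {

    public static final class Start {}
    public static final class ConfirmTransaction {}

    private Connection connection; // kept open until the child confirms

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(Start.class, msg -> {
                connection = DriverManager.getConnection("jdbc:mariadb://localhost/db"); // placeholder URL
                connection.setAutoCommit(false);
                // ... perform the DB updates on this connection ...
                ActorRef worker = getContext().actorOf(Props.create(WorkerActor.class));
                worker.tell("doWork", getSelf());
                // roll back if no confirmation arrives in time
                getContext().setReceiveTimeout(Duration.ofSeconds(30));
            })
            .match(ConfirmTransaction.class, msg -> {
                connection.commit();   // child succeeded, make the updates permanent
                connection.close();
            })
            .match(ReceiveTimeout.class, msg -> {
                connection.rollback(); // child failed or timed out, undo the updates
                connection.close();
            })
            .build();
    }
}

class WorkerActor extends AbstractActor {
    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .matchEquals("doWork", msg -> {
                // ... do the side work / send further messages ...
                getSender().tell(new MainActor.ConfirmTransaction(), getSelf());
            })
            .build();
    }
}

A fuller version would also cancel the receive timeout once the confirmation arrives and handle failures of the JDBC calls themselves.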