interrupted lifeline in enterprise architect - enterprise-architect

I am making a sequence diagram in Enterprise Architect. The situation I want to show is a process C that calls a number of parallel processes (D and D2) asynchronously. Each called process indicates by (async) callback when it is done. The process that sent out the signals waits until all processes have sent their callbacks, and only then does it reply to its client B.
I want to show that B is blocked until C replies. That works, but when I add A, the client of B, to the picture, the lifeline for A shows an interruption.
In the diagram below, the calls from A to B and from B to C are configured as synchronous calls and the reply arrows are set as 'is return'. The calls from C to D and D2 and those from D and D2 to C are configured as asynchronous calls.
Is it possible to show the lifeline for A as uninterrupted?
If so, how?
I am mostly puzzled by why it shows B's lifeline as uninterrupted, but not the one for A.

I'll state it for v13.5. Other versions might behave differently.
Once you are at this point, right-click the left side of the middle message and tick Activation Down. That will yield:

How can an event-sourced entity subscribe to state changes in another entity?

I have an event-sourced entity (C) that needs to change its state in response to state changes in another entity of a different type (P). The logic governing whether the state of C should actually change is quite complex, and the data needed to compute that lives in C; moreover, many instances of C should listen to one instance of P, and the set of instances grows over time, so I'd rather have them pull from a stream, knowing the ID of P, than have P keep track of the IDs of all the Cs and push to them.
I am thinking of doing something such as:
1. Tag a projection of P's events.
2. Have a Subscribe(P.id) command that gets sent to C.
3. If C is not already subscribing to a P (it can only subscribe to one, and it shouldn't change), fire an event Subscribed(P.id).
4. In response to the event, use Akka Persistence Query to materialize the stream of events tagged in step 1, map them to commands, and run asynchronously with a sink that sends them to my ES entity reference.
Having a stream run in the event handler seems a bit like an anti-pattern. I am wondering if there's a better or more supported way to do this without the upstream having to know about the downstream. I decided against Akka pub-sub because it does at-most-once delivery, and I'd like to avoid using Kafka if possible.
You definitely don't want to run the stream in the event handler: the event handler should never perform side effects.
Assuming that you would like a C to get events from times when that C was not running (including before that C had ever run), this suggests that a stream should be run for each C. Since the subscription will be to one particular P, I'd seriously consider not tagging, but instead using the eventsByPersistenceId stream to get all the events of a P and ignore the ones that aren't of interest. In the stream, you translate those to commands in C's API, include the offset in P's event stream with each command, and send it to C (for at-least-once delivery, a mapAsync with an ask is useful). C persists an event recording that it processed the offset, which makes the command idempotent: C can simply acknowledge any command whose offset is less than or equal to the high-water offset in its state.
This stream gets kicked off by the command handler after successfully persisting a Subscribed(P.id) event (in this case starting from offset 0), and again after the persistent actor is rehydrated if its state shows it's subscribed (in this case starting from one plus the high-water offset).
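The offset-based idempotency described above can be sketched in plain Java. This is a model of the bookkeeping only, not the Akka Persistence API; the class and method names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Models subscriber C: commands carry the offset in P's event stream,
// and C acknowledges duplicates without re-applying them.
class SubscriberC {
    private long highWaterOffset = -1;           // highest offset already processed
    private final List<String> applied = new ArrayList<>();

    /** Returns an acknowledgement whether the command was applied or was a duplicate. */
    boolean handle(long offset, String payload) {
        if (offset <= highWaterOffset) {
            return true;                         // redelivery: ack, but don't re-apply
        }
        applied.add(payload);                    // "persist" the effect of the command
        highWaterOffset = offset;                // record the offset as processed
        return true;
    }

    long highWaterOffset() { return highWaterOffset; }
    List<String> applied() { return applied; }
}
```

Because acknowledging a duplicate is cheap and safe, the upstream side can retry freely, which is what makes at-least-once delivery workable here.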
The rationale for not using tagging here arises from an assumption that the number of events C isn't interested in is smaller than the number of events with the tag from Ps that C isn't subscribed to (note that for most persistence plugins, the more tags there are, the more overhead there is: a tag used by only one particular instance of an entity is often not a good idea). If the tag in question is rarely seen, this assumption might not hold, and eventsByTag with filtering by ID could be useful.
This does of course have the downside of running a discrete stream for every C. Depending on how many Cs are subscribed to a given P, the overhead may be substantial, and the streams for subscribers that are already caught up will be especially wasteful. In that scenario, responsibility for delivering commands to the subscribed Cs of a given P can be moved to an actor: the only real change is that where C would run the stream itself, it instead confirms that it is subscribed by asking the actor that feeds events from P. Because this approach is a marked step up in complexity (especially around managing when Cs join and drop out of the shared "caught-up" stream), I'd tend to recommend starting with the stream-per-C approach and moving to the shared stream later; the transition isn't difficult, since from C's perspective there's no real difference between adapted commands coming from a stream it started and from a stream run by some other actor. It's also worth noting that there can be multiple shared streams: in fact, I'd tend to make shared streams per-ActorSystem (e.g. a "node singleton" per P of interest) so as not to involve remoting.

How to do a Kafka Streams Left Join that returns LHS messages that have no corresponding RHS after a fixed period?

I'm new to Kafka Streams. I've just put together a left join between stream A and stream B. In my setup, for every A there is a B, which arrives a few millis after A, but in real life there may be missing Bs, or Bs that arrive late (after, say, 250ms). I want to be able to find these missing and late Bs.
I thought it would be easy - just do a left join between A and B, specify the window, and job done.
But I found to my surprise that I get 2 rows in the left join stream output.
Thinking about it, this makes sense - when A arrives, there is no B and a join row that looks like A-[null] is generated. A few milliseconds later, B arrives, and then A-B is generated.
What I want is to find those A messages that do not have a corresponding B after, say, 100ms: B could be late or might never arrive, but it did not arrive within 100ms of A.
Is there a standard pattern / idiomatic way to do this? I am thinking at the moment that maybe I would have to have a consumer that receives the A and then fires a message after a set time (although I'm not exactly sure how that would be done without some clunky synchronous code) and then I would have to join between that (call it Ax) and B.
This is probably quite a common requirement, but it doesn't seem as easy as I first thought... any thoughts/pointers/tips would be much appreciated. Thanks.
OK, I have something that seems to work. All I need to do is, after the left join (which of course has a window), do a .groupByKey().count(); after that I can, e.g., send records with a count < 2 to one stream ("missing") using filter() and branch() (I think, although I haven't done it yet), and the others to a "good" stream for analysis, calculation of metrics, etc.
I tried using .windowedBy(TimeWindows.of(ofMillis(250)).grace(ofMillis(10)))
and .suppress(Suppressed.untilWindowCloses(unbounded())) but got nowhere with it, so it's just as well that a groupBy with a count is all that's needed, by the looks of things.
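The counting idea above can be modeled in plain Java (this is not the Kafka Streams API, just the logic): a windowed left join emits A-[null] when A arrives and A-B if B turns up inside the window, so a key with only one output row in the window had no matching B in time.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

class MissingBDetector {
    // joinOutput: key -> list of join rows seen in one window ("A-null", "A-B", ...)
    static Set<String> missingKeys(Map<String, List<String>> joinOutput) {
        Set<String> missing = new TreeSet<>();
        for (Map.Entry<String, List<String>> e : joinOutput.entrySet()) {
            if (e.getValue().size() < 2) {   // only the A-[null] row: B never arrived in time
                missing.add(e.getKey());
            }
        }
        return missing;
    }
}
```

In the real topology, the groupByKey().count() plays the role of the size check here, and the window boundary decides when the count becomes final.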

Implementing a pub-sub pattern with Axon

We have a multi-step process we'd like to implement using a pub-sub pattern, and we're considering Axon for a big part of the solution.
Simply put, the goal is to generate risk scores for insurance companies. These steps would apply generally to a pub-sub application:
A client begins the process by putting a StartRiskScore message on a bus, specifying the customer ID. The client subscribes to RiskScorePart3 messages for the customer ID.
Actor A, who subscribes to StartRiskScore messages, receives the message, generates part 1 of the risk score, and puts it on the bus as a RiskScorePart1 message, including the customer ID.
Actor B, who subscribes to RiskScorePart1 messages, receives the message, generates part 2 of the risk score, and puts it on the bus as a RiskScorePart2 message, including the customer ID.
Actor C, who subscribes to RiskScorePart2 messages, receives the message, generates part 3 of the risk score, and puts it on the bus as a RiskScorePart3 message, including the customer ID.
The original client, who already subscribed to RiskScorePart3 messages for the customer ID, receives the message and the process is complete.
I considered the following Axon implementation:
A. Make an aggregate called RiskScore
B. StartRiskScore becomes a command associated with the RiskScore aggregate.
C. The command handler for StartRiskScore becomes Actor A. It processes some data and puts a RiskScorePart1 event on the bus.
Now, here's the part I'm concerned about...
D. I'd create a RiskScorePart1 event handler in a separate PubSub object, which would do nothing but put a CreateRiskScorePart2 command on the command bus using the data from the event.
E. In the RiskScore aggregate, a command handler for CreateRiskScorePart2 (Actor B) would do some processing, then put a RiskScorePart2 event on the bus.
F. Similar to step D, a PubSub event handler for RiskScorePart2 would put a CreateRiskScorePart3 command on the command bus.
G. Similar to step E, a RiskScore aggregate command handler for CreateRiskScorePart3 (Actor C) would do some processing, then put a RiskScorePart3 event on the bus.
H. In the aggregate and the RiskScoreProjection query module, a RiskScorePart3 event handler would update the aggregate and projection, respectively.
I. The client is updated by a subscribed query to the projection.
I understand that replay occurs when a service is restarted. That's bad for old events, because I don't want to re-fire commands from the PubSub handlers, but good for new events that occurred while the PubSub service was down.
EDIT #1:
I've considered using an Axon saga, which would be great. However, the same questions still exist even if PubSub is a saga:
How to ensure PubSub event handlers process each event exactly once, even after a restart?
Is there a different approach I should be taking to implement a pub-sub pattern in Axon?
Thanks for your help!
I think I can give some guidance in this area.
In your update you've pointed out that you're envisioning the use of a Saga to perform this setup.
However, I'd like to point out that a Saga is meant to 'orchestrate a complex business transaction between Bounded Contexts/Aggregates'. The scenario you're describing is not a transaction between other contexts and/or aggregates; it's all contained in a single aggregate root, the RiskScore.
I'd thus suggest against using a Saga for this situation, as the tool (read: Saga) is relatively heavyweight for what you're describing.
Secondly, from the steps you describe (A through I), it looks as if the components in steps D and F are there purely to react to an event with a command; taking that assumption, they perform no business functionality.
Taking my initial point of a transaction contained in a single aggregate root, plus the fact that no business functionality occurs on dispatching the command back into the aggregate, why not contain the entirety of the operation within the RiskScore aggregate?
You can very easily handle the events an aggregate publishes with an @EventSourcingHandler and, in that method, apply another event. Or, if you would like to be 'pure' about segregating state updates from applying events, you could just apply the further events for the separate risk-score steps thereafter.
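The chain-of-events idea can be sketched in plain Java. In real Axon the apply(...) calls would go through AggregateLifecycle.apply and the handlers would carry @CommandHandler / @EventSourcingHandler annotations; here the chain is modeled directly so the flow is visible, and the event names follow the question.

```java
import java.util.ArrayList;
import java.util.List;

// One aggregate owns the whole multi-step operation: each applied event's
// handler applies the next step's event, so no external PubSub component
// has to turn events back into commands.
class RiskScoreAggregate {
    final List<String> publishedEvents = new ArrayList<>();

    void handleStartRiskScore(String customerId) {
        apply("RiskScorePart1:" + customerId);                // Actor A's work
    }

    private void apply(String event) {
        publishedEvents.add(event);
        // "event-sourcing handler": the state update triggers the next step
        if (event.startsWith("RiskScorePart1:")) {
            apply("RiskScorePart2:" + event.split(":")[1]);   // Actor B's work
        } else if (event.startsWith("RiskScorePart2:")) {
            apply("RiskScorePart3:" + event.split(":")[1]);   // Actor C's work
        }
    }
}
```

The client then only needs its subscription query on the projection of RiskScorePart3, as in step I.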
Anyhow, I don't see why you would need to hold tightly to the pub-sub pattern. I'd take the solution that resolves the business needs as well as possible. That might be an existing pattern, but it could just as well be any other approach you can think of.
That's my two cents on the situation; I hope it helps!

Infinity confirmation loop

I came across an interesting theoretical problem:
Let's assume we have Program A and Program B connected via some IPC mechanism like a TCP socket or a named pipe. Program A sends some data to Program B, and depending on the success of the delivery, both A and B do some operations. However, B should do its operation only if it is sure that A has received the delivery confirmation. So we come up with 3 connections:
A -> B [data transfer]
B -> A [delivery confirmation]
A -> B [confirmation of getting delivery confirmation]
It may look weird, but the goal is to perform no operation on either A or B until both sides know that the data has been transferred.
And here is the problem: the second connection confirms the success of the first, and the third confirms the second, but there is no guarantee that connections 2 and 3 won't fail, in which case we fall into an infinite loop of confirmations. Is there some CS theory that solves this problem?
If I read your question right, the problem is called the Two Generals' Problem. The gist of the issue is that the last entity to send either a message or an acknowledgement knows nothing about the status of what it just sent, and so on.
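That regress can be made concrete with a tiny sketch: however many acknowledgement rounds the two parties add, the sender of the final message is always the one left uncertain, so no finite chain of confirmations makes both sides certain.

```java
// Messages alternate A->B, B->A, A->B, ... (A sends the data first).
// The sender of the last message never receives an acknowledgement for it,
// so adding one more round only moves the uncertainty to the other side.
class TwoGenerals {
    /** Which party is uncertain after a chain of n messages? */
    static String uncertainParty(int rounds) {
        return (rounds % 2 == 1) ? "A" : "B";   // odd: A sent last; even: B sent last
    }
}
```

This is exactly why the pattern of connections 1-3 in the question cannot terminate with mutual certainty: connection 3 just shifts the open question from A back to B.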

How to synchronize tasks in different dispatch queues?

I'm new to queues and I'm having some trouble setting up the following scheme.
I have three tasks that need doing.
Task A: Can only run on the main queue, can run asynchronously with task B, cannot run asynchronously with task C. Runs a lot but runs fairly quickly.
Task B: Can run on any queue, can run asynchronously with task A, cannot run asynchronously with task C. Runs rarely, but takes a long time to run. Needs Task C to run afterwards, but once again task C cannot run asynchronously with task A.
Task C: Can run on any queue. Cannot run asynchronously with either task A or task B. Runs rarely and runs quickly.
Right now I have it like this:
Task A is submitted to the main queue by a Serial Queue X (a task is submitted to Serial Queue X to submit task A to the main queue).
Task B is submitted to Serial Queue X.
Task C is submitted to the main queue by Serial Queue X, just like task A.
The problem here is that task C sometimes runs at the same time as task B. The main queue sometimes runs task C at the same time that the serial queue runs task B.
So, how can I ensure that task B and task C never run at the same time while still allowing A and B to run at the same time and preventing A and C from running at the same time? Further, is there any easy way to make sure they run the same number of times? (alternating back and forth)
You know, I think I had this problem on my GRE, only A, B, and C were Bob, Larry, and Sue and they all worked at the same office.
I believe that this can be solved with a combination of a serial queue and a dispatch semaphore. If you set up a single-wide serial dispatch queue and submit tasks B and C to that, you'll guarantee that they won't run at the same time. You can then use a dispatch semaphore with a count set to 1 that is shared between tasks A and C to guarantee that only one of them will run at a time. I describe how such a semaphore works in my answer here. You might need to alter that code to use DISPATCH_TIME_FOREVER so that task A is held up before submission rather than just tossed aside if C is running (likewise for submission of C).
This way, A and B will be running on different queues (the main queue and your serial queue) so they can execute in parallel, but B and C cannot run at the same time due to their shared queue, nor can A and C because of the semaphore.
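The same constraints can be modeled in plain Java, with GCD's main and serial queues stood in for by single-thread executors and the dispatch semaphore by java.util.concurrent.Semaphore with one permit (the class and method names here are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;

class TaskScheduler {
    final ExecutorService mainQueue   = Executors.newSingleThreadExecutor(); // runs A
    final ExecutorService serialQueue = Executors.newSingleThreadExecutor(); // runs B and C
    final Semaphore aOrC = new Semaphore(1);    // A and C exclude each other

    Future<?> submitA(Runnable a) {
        return mainQueue.submit(() -> runGuarded(a));
    }
    Future<?> submitB(Runnable b) {
        return serialQueue.submit(b);           // the serial queue alone keeps B and C apart
    }
    Future<?> submitC(Runnable c) {
        return serialQueue.submit(() -> runGuarded(c));
    }
    private void runGuarded(Runnable r) {
        aOrC.acquireUninterruptibly();          // DISPATCH_TIME_FOREVER-style wait
        try { r.run(); } finally { aOrC.release(); }
    }
}
```

A and B can overlap because they sit on different queues and B never touches the semaphore; B and C are serialized by the shared queue; A and C are serialized by the semaphore.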
As far as load balancing on A and C goes (what I assume you want to balance), that's probably going to be fairly application-specific, and might require some experimentation on your part to see how to interleave actions properly without wasting cycles. I'd also make sure that you really need them to alternate evenly, or if you can get by with one running slightly more than another.
Did you check out NSOperation to synchronize your operations? You can handle dependencies there.
There's a much simpler way, of course, assuming that C must always follow A and B: have A and B schedule C as a completion callback for their own operations (and have C check that it's not already running, in case A and B both ask for it simultaneously). The completion-callback pattern (described in the dispatch_async man page) is very powerful and a great way of serializing async operations that are nonetheless coupled.
Where the problem is A, B, C, D, and E, where A-D can run async and E must always run at the end, dispatch groups are a better solution, since you can set E to run as the completion callback for the entire group and then simply put A-D in that group.
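The group-then-completion idea maps cleanly onto plain Java, with a CountDownLatch standing in for dispatch_group_notify (a sketch, not the GCD API):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A-D run concurrently; E runs exactly once, only after every member of the
// group has finished, like a completion callback on the whole group.
class GroupThenCompletion {
    static void runGroupThen(int groupSize, Runnable each, Runnable completion)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(groupSize);
        CountDownLatch group = new CountDownLatch(groupSize);
        for (int i = 0; i < groupSize; i++) {
            pool.submit(() -> { try { each.run(); } finally { group.countDown(); } });
        }
        group.await();       // blocks until all group members have counted down
        completion.run();    // the "E" step
        pool.shutdown();
    }
}
```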