Is it possible to apply the Dolev-Strong protocol in the partially synchronous model? - distributed-computing

I know that the Dolev-Strong protocol is designed for the synchronous model, while PBFT is used in the partially synchronous model. My question: is it possible to use the Dolev-Strong protocol in the partially synchronous model? The Dolev-Strong protocol uses signature chains to identify a node as Byzantine. In the synchronous model, there exists some known finite time bound Δ. In the partially synchronous model, a finite time bound Δ exists, but it is not known a priori (e.g., the Internet).

Related

What are the differences between the Command Dispatcher and Mediator Design Pattern?

Recently I've been introduced to the Command Dispatcher Pattern, which can help decouple commands from command handlers in our project, which is based on the Domain-Driven Design approach and the CQRS pattern.
However, I'm confusing it with the Mediator design pattern.
Robert Harvey has already answered a question about the Command Dispatcher pattern as follows:
A Command Dispatcher is an object that links the Action-Request with
the appropriate Action-Handler. Its purpose is to decouple the
command operation from the sending and receiving objects so that
neither has knowledge of the other.
According to Wikipedia, the mediator pattern is described as:
With the mediator pattern, communication between objects is
encapsulated within a mediator object. Objects no longer communicate
directly with each other, but instead communicate through the
mediator. This reduces the dependencies between communicating objects,
thereby reducing coupling.
So, as I understand it, both of them separate the command from the commander, which allows us to decouple it from the caller.
I've seen some projects on GitHub that use the Command Dispatcher Pattern to invoke the desired handler for the requested command, while others use the mediator pattern to dispatch the messages (e.g., in most .NET projects, the MediatR library is used for this).
However, I'd like to know the differences between the two patterns and the benefits of using one over the other in a project based on the DDD approach and the CQRS pattern.
The Command Dispatcher and Mediator patterns (as well as the Event Aggregator Pattern) share similarities in that they introduce a mediating component to decouple direct communication between components. While each can be used to achieve use cases for which the other pattern was conceived, they are each concrete patterns which differ in their original targeted problems as well as the level to which they are suited for each need.
The Command Dispatcher Pattern is primarily a convention-over-configuration approach typically used to facilitate UI-layer calls into an Application Layer using discrete types to handle commands and queries, as opposed to a more traditional Application Service design. When the queries and commands that might typically live in a coarse-grained service (e.g. OrderService) are represented as discrete components (e.g. CreateOrderCommand, GetOrderQuery, etc.), this can result in quite a bit of noise in UI-level components such as ASP.NET MVC Controllers, where a constructor might otherwise need to inject a series of discrete interfaces, only one of which would typically be needed for each user request (e.g. Controller Action). Introducing a dispatcher greatly reduces the amount of code needed in such components, since the only dependency that needs to be injected is the dispatcher. While not necessarily a primary motivation of the pattern, it also introduces the ability to uniformly apply other patterns such as Pipes and Filters, and provides a seam where command handler implementations can be determined at run time. The MediatR library is actually an implementation of this pattern.
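To make the "dumb function router" idea concrete, here is a minimal, hypothetical command-dispatcher sketch in Scala (class and method names are illustrative, not MediatR's actual API): the caller depends only on the dispatcher, which routes each command to its single registered handler.

```scala
trait Command
case class CreateOrderCommand(orderId: String) extends Command

// One handler type per command type, resolved by the dispatcher at run time.
trait Handler[C <: Command] {
  def handle(command: C): String
}

class Dispatcher {
  private var handlers = Map.empty[Class[_], Handler[_]]

  def register[C <: Command](clazz: Class[C], handler: Handler[C]): Unit =
    handlers += (clazz -> handler)

  // Look up the handler by the command's runtime class and invoke it.
  def dispatch[C <: Command](command: C): String =
    handlers(command.getClass).asInstanceOf[Handler[C]].handle(command)
}

object DispatcherDemo {
  def main(args: Array[String]): Unit = {
    val dispatcher = new Dispatcher
    dispatcher.register(classOf[CreateOrderCommand], new Handler[CreateOrderCommand] {
      def handle(c: CreateOrderCommand): String = s"created order ${c.orderId}"
    })
    println(dispatcher.dispatch(CreateOrderCommand("42")))
  }
}
```

Note that the dispatcher knows nothing about what any handler does; it only matches a command type to a handler, which is exactly why a single injected dispatcher can replace many injected service interfaces.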
The Mediator Pattern concerns the creation of mediating components which encapsulate domain-specific orchestration logic that would otherwise require coupling between components. That is to say, the mediating component in this case isn't just a dumb dispatcher ("Hey, anybody know how to handle an XYZRequest?"), but is purpose-built to follow a specific set of operations that need to occur when a given operation happens, potentially across multiple components. The example given in the GoF Design Patterns book is a UI component with many interconnected elements such that changes to one need to effect changes to a number of other components and vice-versa (e.g. typing into a text field causes changes to a drop-down and multiple check boxes and radio buttons, while selecting entries within the dropdown effect changes to what's in the text field, check boxes, and radio buttons, etc.). With the provided solution, a mediating component contains logic to know exactly which components need to get updated, and how, when each of the other components change.
So, the Mediator Pattern would be used when you need a component custom-built to facilitate how a number of other components interact, where normal coupling would otherwise negatively affect maintainability, whereas the Command Dispatcher Pattern is simply used as a dumb function router to decouple the caller from the called function.
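The contrast with the GoF UI example can be sketched as follows (a hypothetical, stripped-down illustration, not code from any real library): the mediator is purpose-built and encodes domain-specific rules about how the widgets affect one another, rather than generically routing messages.

```scala
class CheckBox {
  var checked: Boolean = false
}

// The widget knows only the mediator, not the other widgets.
class TextField(mediator: FormMediator) {
  var text: String = ""
  def typeText(s: String): Unit = { text = s; mediator.textChanged(this) }
}

// Purpose-built orchestration: the mediator knows exactly which components
// must be updated, and how, when another component changes.
class FormMediator {
  val checkBox  = new CheckBox
  val textField = new TextField(this)

  def textChanged(field: TextField): Unit =
    checkBox.checked = field.text.nonEmpty // rule: non-empty text checks the box
}
```

Unlike the dispatcher, this component cannot be reused for unrelated commands: its value is precisely the hard-coded, domain-specific interaction logic.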
The Mediator pattern is lower-level and more generic in its pure concept. It does not define the kind of communication or the kind of message you use. With a Command Dispatcher you are at an upper layer (contextually and conceptually) in which the kind of communication and message is already defined.
You should be able to implement a Command Dispatcher pattern with a Mediator pattern (and therefore with MediatR) as the foundation.

Should my API be a Class, Struct, or Protocol?

It is an abstract API upon which more domain specific APIs are based for querying URLs. Should the abstract version (which contains the networking functions, and the data structure) be written as a Class, Struct, or Protocol?
Given your requirements, it should be either a class, or a combination of a class and a protocol.
You cannot use a protocol by itself, because it is incapable of holding data.
Structs are a poor fit for anything abstract, because Swift structs are best suited for small types that have value semantics.
One approach that is good for data hiding is to expose a protocol, along with a method to obtain an instance of that protocol, but make the class implementing the protocol private to your implementation. This way the users have to program to the interface, because they have no access to the class itself.
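The same data-hiding idea can be sketched in Scala (the language used elsewhere in this thread), where a trait plays the role of the Swift protocol; all names here are illustrative. Callers see only the trait and the factory method, never the concrete class.

```scala
trait UrlQueryApi {
  def fetch(url: String): String
}

object UrlQueryApi {
  // The implementing class is private to the companion, so users cannot
  // depend on it directly; they must program to the UrlQueryApi interface.
  private class DefaultApi extends UrlQueryApi {
    def fetch(url: String): String = s"response from $url" // stub, no real networking
  }

  // Factory method: the only way to obtain an instance.
  def apply(): UrlQueryApi = new DefaultApi
}
```

In Swift the shape is analogous: a public protocol, a fileprivate conforming class, and a public factory function returning the protocol type.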

Scala Actors - any suggestions when converting OOP based approach?

I'm learning Scala and its actor-based approach to handling concurrency (via the Akka library). I have some questions from trying to convert typical OOP (think Java-style OOP) scenarios into actor-based ones.
Let's consider the overused e-commerce example: a Webstore where Customers make Orders that contain Items. Simulated in OOP style, you end up with appropriately named domain model classes that interact by calling methods on each other.
If we want to simulate concurrency e.g. many customers buying items at once we throw in some sort of threading (e.g. via ExecutorService). Basically each Customer then implements Runnable interface and its run() method calls e.g. shop.buy(this, item, amount). Since we want to avoid data corruption caused by many threads possibly modifying shared data at once, we have to use synchronization. So the most typical thing to do is synchronize the shop.buy() method.
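The synchronized OOP baseline described above might look like this minimal sketch (class and method names are illustrative): many customer threads call shop.buy concurrently, and synchronization guards the shared stock map.

```scala
import java.util.concurrent.{Callable, Executors}

class Shop {
  private val stock = scala.collection.mutable.Map("book" -> 10)

  // Synchronizing buy() serializes access to the shared mutable stock.
  def buy(item: String, amount: Int): Boolean = this.synchronized {
    val available = stock.getOrElse(item, 0)
    if (available >= amount) { stock(item) = available - amount; true }
    else false
  }
}

object ShopDemo {
  def main(args: Array[String]): Unit = {
    val shop = new Shop
    val pool = Executors.newFixedThreadPool(4)
    // 12 concurrent customers try to buy 1 book each; only 10 can succeed.
    val results = (1 to 12).map { _ =>
      pool.submit(new Callable[Boolean] { def call(): Boolean = shop.buy("book", 1) })
    }
    println(results.count(_.get())) // number of successful purchases
    pool.shutdown()
  }
}
```

Without the synchronized block, two threads could both read the same stock level and oversell the item; this is exactly the data corruption the question mentions.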
Now let's move on to the actor-based approach. What I understand is that the Shop and each Customer now become actors who, instead of calling the buy() method on the shop directly, send messages to the shop. But here come the difficulties.
Should all the other domain models (Order, Item) become actors too, and should all the communication between the domain models be message driven? In other words, is it OK to leave some OOP-style interaction between domain models via method invocation? For example, in the OOP-based approach an Order would typically have a reference to a List, which you could populate when a user buys something by calling add(item) in the buy() method. Do these (and similar) interactions have to be remodeled with messaging to make the most of the actor-based approach? Put differently, when do we communicate with the internal state of an actor directly, and when do we extract internal state into another actor?
In the OOP-based solution you pass instances of classes to methods. I read in the documentation that in the Actor model one is supposed to pass immutable messages. So if I understand correctly, instead of messaging the objects themselves, you message only the data that makes it possible to identify which entities have to be processed, e.g. their IDs and the type of action you want to perform.
Answering your questions:
2) Your domain model (including shops, orders, buyers, sellers, items) should be described with immutable case classes. Actors should exchange (immutable) commands, which may use these classes, like AddItem(count: Int, i: Item): the AddItem case class represents a command and encapsulates the business entity called Item.
1) Your protocol, e.g. the interaction between shops, orders, sellers, buyers, etc., should be encapsulated inside an actor (one actor class per protocol, one instance per state). Simply put, an actor should manage any (mutable) state that changes between requests, like the current basket/order. For instance, you may have an actor for every basket, which will contain information about the chosen items and receive commands like AddItem, RemoveItem, and ExecuteOrder. So you don't need an actor for every business entity; you need an actor for every business process.
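The command/state split described above can be sketched as follows, with Akka elided for brevity (the receive method here is a plain-Scala stand-in for an Akka receive block, and the names are illustrative): immutable case classes form the protocol, and one "actor" per basket owns the mutable state, processing one command at a time.

```scala
case class Item(name: String, price: BigDecimal)

// The protocol: immutable commands exchanged between actors.
sealed trait BasketCommand
case class AddItem(count: Int, i: Item) extends BasketCommand
case class RemoveItem(i: Item) extends BasketCommand
case object ExecuteOrder extends BasketCommand

// One instance per basket: the business process, not a business entity.
class BasketActor {
  private var items = Map.empty[Item, Int] // mutable state, private to the actor

  // Stand-in for an Akka receive block: one message processed at a time.
  def receive(cmd: BasketCommand): Option[Map[Item, Int]] = cmd match {
    case AddItem(n, i) => items = items.updated(i, items.getOrElse(i, 0) + n); None
    case RemoveItem(i) => items -= i; None
    case ExecuteOrder  => val order = items; items = Map.empty; Some(order)
  }
}
```

Order and Item stay plain case classes inside messages; only the basket process, whose state changes between requests, becomes an actor.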
In addition, there are some best practices, as well as recommendations about managing concurrency with routers.
P.S. The nearest Java EE-based approach is EJB, with its entities (as case classes) and message-driven beans (as actors).

Simulink model interface to external C++ application

I have a fairly complex Simulink model with multiple referenced models that I am trying to interface with external C++ code. I am using MATLAB R2014a and generating code using Simulink Coder. Below are two important requirements I would like to satisfy while doing this:
1. Have the ability to create multiple instances of the model in my external C++ code.
2. Have the ability to overwrite data that is input to the model and have it reflected to the external application.
Given these requirements, at the top level, what is the best method to interface with external IO: data stores or IO ports?
From my understanding, by using data stores (or Simulink.Signal objects) and specifying the appropriate storage class, I may be able to satisfy requirement 2 above, but in doing so I would have to specify signal names, and this would not allow me to satisfy requirement 1.
If I use IO port blocks at the top level, I may be able to satisfy requirement 1 but not 2.
Update: constraint 2 has been removed due to a design change, so the IO port approach works for me.

Scala actors with concurrent access to a shared cache of objects, scala.concurrent.Lock, react vs receive

I'm writing a piece of software in which various actors concurrently create portions of the same graph.
Nodes of the graph are modeled through a class hierarchy; each concrete class of the hierarchy has a companion object.
abstract class Node

class Node1(k1: Node, k2: Node) extends Node
object Node1 {
  def apply(k1: Node, k2: Node) = ...
}

class Node2(k1: Node, k2: Node) extends Node
object Node2 {
  def apply(k1: Node, k2: Node) = ...
}

...
So far so good.
We perform structural hashing on the nodes at creation time.
That is, each companion object has a hash table which stores node instances keyed by their constructor arguments; this is used to detect that an instance of a given node class with the same subnodes already exists, and to return that one instead of creating a new instance. This avoids memory blowup and gives a node equality test that takes constant time (reference comparison instead of graph comparison). Access to this map is protected using a scala.concurrent.Lock.
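For reference, the structural hashing (hash-consing) scheme described above can be sketched with a ConcurrentHashMap instead of a scala.concurrent.Lock; this is a suggested alternative, not the asker's code. computeIfAbsent makes the check-then-create step atomic regardless of how callers are scheduled onto JVM threads.

```scala
import java.util.concurrent.ConcurrentHashMap

abstract class Node
object Leaf extends Node // a trivial base case for building graphs

final class Node1(val k1: Node, val k2: Node) extends Node
object Node1 {
  // Interning cache: keyed by constructor arguments, as in the question.
  private val cache = new ConcurrentHashMap[(Node, Node), Node1]()

  // Atomically returns the existing instance or creates exactly one new one.
  def apply(k1: Node, k2: Node): Node1 =
    cache.computeIfAbsent((k1, k2), (_: (Node, Node)) => new Node1(k1, k2))
}
```

Because the atomicity lives in the map itself rather than in a lock tied to thread scheduling, the interning guarantee holds whether actors run on dedicated threads or share them.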
Yet the problem is that the Lock operates at the JVM-thread level, and depending on how the actors are coded, they can either be allocated to their own JVM threads or be interleaved with several other actors on the same JVM thread, in which case the structural hashing ceases to work (i.e., several structurally identical nodes can be created, only one of them will be stored in the cache, and the structural equality test will cease to work).
First, I know that this structural hashing architecture goes against the actors' share-nothing philosophy, but we really need this hashing to work for performance reasons (constant-time equality brings us an order-of-magnitude improvement). Is there a way to implement mutual exclusion on shared resources with actors that would work at the actor level rather than the JVM-thread level?
I thought of encapsulating the node companions in an actor to completely serialize access to the factory, but this would imply a complete rewrite of all the existing code. Any other ideas?
Thanks,
If you have shared mutable state, have a single actor mutate that state. You can have other actors read it, but only one actor should do the writes.
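The single-writer idea can be sketched without Akka as follows (a minimal illustration with hypothetical names): all mutations go through one queue consumed by a single thread, so writes are serialized no matter how many callers there are, while reads see a consistent immutable snapshot.

```scala
import java.util.concurrent.LinkedBlockingQueue

class SingleWriter {
  // Immutable snapshot; @volatile so readers always see the latest published map.
  @volatile private var state = Map.empty[String, Int]

  // The "mailbox": updates are functions from old state to new state.
  private val inbox = new LinkedBlockingQueue[Map[String, Int] => Map[String, Int]]()

  // The single writer: applies one update at a time, like an actor's receive loop.
  private val writer = new Thread(() => {
    while (true) state = inbox.take()(state)
  })
  writer.setDaemon(true)
  writer.start()

  def write(update: Map[String, Int] => Map[String, Int]): Unit = inbox.put(update)
  def read(key: String): Option[Int] = state.get(key)
}
```

An Akka actor gives you the same serialization for free (its mailbox is the queue, its receive loop the writer thread), which is why the answer suggests funneling all writes through one actor.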