Polymorphism in SysML activities

I am currently trying to design an executable activity in which a logical component sends a request to several other logical components to start initialization. The usual way in an activity diagram is to create a swimlane for each block. But since the number of LCs is high in this case, the diagram would become extremely big, and modifying it later would be a hurdle.
In programming, the solution to this problem is polymorphism: the objects are cast to the parent class and pushed into a vector, and then, within a loop, the abstract function of the parent class is called.
Is there a similar solution to this problem in SysML?
I tried to assign one swimlane to all the LCs which receive the request, but it seems each swimlane can only be assigned to a single block.

Yes, there is a similar solution. Actually, it is even easier. Just let the ActivityPartition represent a property with a multiplicity greater than 1. Any InvocationAction in this partition will then mean a concurrent invocation on all instances held in this property.
The UML specification says about this:
section 15.6.3.1 Activity Partitions
If the Property holds more than one value, the invocations are treated as if they were made concurrently on each value and the invocation does not complete until all the concurrent instances of it complete.
Typical InvocationActions are CallActions and SendSignalActions. So, you could call the initialize() operation in a partition representing the property myLogicalComponents:LC[1..*].
If you have not already defined such a property, you can derive it by making it a derived union and then defining all existing properties holding components to be initialized as subsets of it:
/myLogicalComponents:LC[1..*]{union}
LC1:LC {subsets myLogicalComponents}
LC2:LC {subsets myLogicalComponents}
Of course, this assumes that all your logical components specialize LC, so that they all inherit and redefine the initialize() operation.
In SysML you also have AllocateActivityPartitions. No clear semantics is described for it, so I think you are better served with the UML ActivityPartitions, which are also included in SysML.

GKEntity/Component update cycle best practices

The subject
My question is about the division of the update cycle when combining Apple's frameworks in order to respect the typical patterns and good practices on the subject, since most of the documentation and example code has not been adapted to Swift yet (or at least I can't find it anywhere).
There are so many ways to manage the update cycles in GameplayKit that I'm not too sure what is a good way of combining everything.
The elements
First and foremost: every class in the Entity/Component architecture (GKComponent and GKEntity subclasses) has an update() method that you can override to perform per-frame updates. That has to be driven from the update cycle of the current GKScene/SKScene.
Then you have the GKComponentSystem, which you can use to fire the update() methods of every component of a given type that has been added to it. I see the point; it's very handy.
But I also want to use the state machine system, and it has an update cycle of its own... Combining all that got me confused.
My situation
I have a subclass of GKEntity with an instance of a GKStateMachine created on initialization. The state machine has a few states (for the moment: 'Spawn', 'Normal', 'Stunned' and 'Death').
Right now, I'm creating a big "cookie cutter" with my GKEntity subclasses and create all the components they are going to use during initialization. But it's becoming very impractical. For instance, I have a MovementComponent, which is a subclass of GKAgent2D. I created a singleton that manages entity creation, so after the instance is created, it loops through all the entity's components and adds them to the related GKComponentSystems. The singleton has an update() method of its own that passes the call on to the GKComponentSystems. Some of the components I use do not need per-frame updates, so no GKComponentSystem has been created for them and I update them manually as required.
If I come back to my entity: since I create everything at once and use GKComponentSystems to update the components, my component's update method is loaded with guard and if-let statements, because I need to access the entity's state machine, check whether it's in a state where the entity can move (the Normal state), and either do its thing or bail out of the function. This is not efficient in my view: the movement component doesn't need to get updated while the entity is spawning, stunned, or dying.
On top of that, it makes my use of GKStateMachine completely overkill, since my states' update methods are empty: the components get updated by the GKComponentSystem anyway.
My ideas
Option 1: drop the GKComponentSystems completely and simply loop through all my entities (maybe sorting them into different collections at some point if need be) and call their update() methods. Dispatch the updates to the state machine, which in turn will update the components involved in that state.
Option 2: keep the GKComponentSystems and use the state machine to juggle the components, for example by adding and removing the MovementComponent from the component systems when entering and exiting the Normal state.
Option 1 is straightforward but could cause problems in the long run when my structure gets more complex, because some components might need to be updated before others. Having each entity update its own components would scatter the update process.
Option 2 can get confusing too in a way, but my biggest concern is about the creation/removal of the components. Do I only take them out of the GKComponentSystems or do I take them out of the entity completely? What is the most efficient way of doing it?
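For concreteness, option 2 might look roughly like the sketch below (a hypothetical illustration; MovementComponent, NormalState and the system wiring are placeholder names, not code from the question):

import GameplayKit

// Placeholder component; in the question it is a GKAgent2D subclass.
class MovementComponent: GKAgent2D {}

// The Normal state registers the movement component with its system on
// entry and removes it again on exit, so it is only updated while Normal.
class NormalState: GKState {
    unowned let entity: GKEntity
    unowned let movementSystem: GKComponentSystem<MovementComponent>

    init(entity: GKEntity, movementSystem: GKComponentSystem<MovementComponent>) {
        self.entity = entity
        self.movementSystem = movementSystem
        super.init()
    }

    override func didEnter(from previousState: GKState?) {
        // Start per-frame updates for movement only while in Normal.
        movementSystem.addComponent(foundIn: entity)
    }

    override func willExit(to nextState: GKState) {
        if let movement = entity.component(ofType: MovementComponent.self) {
            movementSystem.removeComponent(movement)
        }
    }
}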
The actual question
Which one of my options would be the best? Is there any better way of doing it?
If you are using GKComponentSystem to perform updates, then I would go with just that. I think a component should update only once per frame.
It's not clear from your question what you need to do with StateMachines, but there is no reason to not have them involved either directly in your GKEntity or contained within a GKComponent like the DemoBots IntelligenceComponent.
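As a rough sketch of that idea (the class names below loosely imitate DemoBots' IntelligenceComponent but are otherwise made up), the state machine can live inside a component that a GKComponentSystem drives once per frame:

import GameplayKit

// Placeholder states standing in for Spawn/Normal/Stunned/Death.
class NormalState: GKState {}
class StunnedState: GKState {}

// A component that owns the entity's state machine, in the spirit of
// DemoBots' IntelligenceComponent.
class IntelligenceComponent: GKComponent {
    var stateMachine: GKStateMachine?

    override func update(deltaTime seconds: TimeInterval) {
        // The state machine is advanced exactly once per frame, driven by
        // the component system below.
        stateMachine?.update(deltaTime: seconds)
    }
}

// Somewhere in the scene or entity manager:
let intelligenceSystem = GKComponentSystem(componentClass: IntelligenceComponent.self)
// intelligenceSystem.addComponent(foundIn: someEntity)
// In the scene's update(_:), call intelligenceSystem.update(deltaTime: dt)
// and let the current state decide what the entity is allowed to do.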

Drools 6 Fusion Notification

We are working on a very complex solution using Drools 6 (Fusion) and I would like your opinion about the best way to read the objects created as correlation results over time.
My first basic approach was to read the working memory at regular intervals, looking for new objects and reporting them to an external service (REST).
AgendaEventListener does not seem to be the "best" approach because I don't care about most of the objects being inserted into working memory, so maybe the best approach would be to inject a particular "object" into some sort of service inside the DRL. Is this a good approach?
You have quite a lot of options. In decreasing order of my preference:
1. AgendaEventListener is probably the solution requiring the smallest amount of LOC. It might be useful for other tasks as well; all you have on the negative side is one additional method call and a class test per inserted fact. Peanuts.
2. You can wrap the insert macro in a DRL function and collect inserted facts of class X in a global List. The problem you have here is that you'll have to pass the KieContext as a second parameter to the function call.
3. If the creation of a class X object is inevitably linked with its insertion into WM, you could add the registration of new objects to a static List inside class X, done in a factory method (or the constructor).
4. I'm putting your "basic approach" last because it requires many more cycles than the listener (#1) and tons of overhead for maintaining the set of X objects that have already been pushed to REST.

When is my struct too large?

We're encouraged to use struct over class in Swift.
This is because
The compiler can do a lot of optimizations
Instances are created on the stack which is a lot more performant than malloc/free calls
The downside of struct variables is that they are copied each time they are assigned or passed to/returned from a function. Obviously, this can become a bottleneck too.
E.g. imagine a 4x4 matrix. Its 16 Float values would have to be copied on every assign/return, which is 512 bits (or 1,024 bits if the elements were Doubles).
One way you can avoid this is by using inout when passing variables to functions, which is basically Swift's way of creating a pointer. But then we're also discouraged from using inout.
So to my question:
How should I handle large, immutable data structures in Swift?
Do I have to worry creating a large struct with many members?
If yes, when am I crossing the line?
The accepted answer does not entirely answer your question: Swift always copies structs. The trick that Array/Dictionary/String/etc. use is that they are just wrappers around classes (which contain the actual stored properties). That way sizeof(Array) is just the size of the pointer to that class (MemoryLayout<Array<String>>.stride == MemoryLayout<UnsafeRawPointer>.stride).
If you have a really big struct, you might want to consider wrapping its stored properties in a class for efficient passing around as arguments, and checking isUniquelyReferenced before mutating to give COW semantics.
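A minimal sketch of that copy-on-write idea, assuming Swift 3's isKnownUniquelyReferenced(_:) (MatrixStorage and Matrix4x4 are illustrative names):

// Reference-type storage holding the actual values.
final class MatrixStorage {
    var values: [Float]
    init(values: [Float]) { self.values = values }
}

// The struct itself is just a pointer-sized handle, so assigning or
// returning a Matrix4x4 copies one reference, not 16 Floats.
struct Matrix4x4 {
    private var storage = MatrixStorage(values: Array(repeating: 0, count: 16))

    subscript(index: Int) -> Float {
        get { return storage.values[index] }
        set {
            // Copy the buffer only if it is shared with another value.
            if !isKnownUniquelyReferenced(&storage) {
                storage = MatrixStorage(values: storage.values)
            }
            storage.values[index] = newValue
        }
    }
}

With this layout, copies of Matrix4x4 share the same storage until one of them is mutated, which is the behaviour the standard library's collections give you.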
Structs have other efficiency benefits: they don't need reference-counting and can be decomposed by the optimiser.
In Swift, values keep a unique copy of their data. There are several advantages to using value types, like ensuring that values have independent state. When we copy values (the effect of assignment, initialization, and argument passing) the program will create a new copy of the value. For some large values these copies could be time consuming and hurt the performance of the program.
https://github.com/apple/swift/blob/master/docs/OptimizationTips.rst#the-cost-of-large-swift-values
Also the section on container types:
Keep in mind that there is a trade-off between using large value types and using reference types. In certain cases, the overhead of copying and moving around large value types will outweigh the cost of removing the bridging and retain/release overhead.
From the very bottom of this page from the Swift Reference:
NOTE
The description above refers to the “copying” of strings, arrays, and dictionaries. The behavior you see in your code will always be as if a copy took place. However, Swift only performs an actual copy behind the scenes when it is absolutely necessary to do so. Swift manages all value copying to ensure optimal performance, and you should not avoid assignment to try to preempt this optimization.
I hope this answers your question. Also, if you want to be sure that an array doesn't get copied, you can always declare the parameter as inout and pass it with &array into the function.
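For illustration, a small sketch of such an inout parameter (the scale function is a made-up example):

// Scales every element in place; the caller's array is updated directly
// rather than a modified copy being returned.
func scale(_ values: inout [Float], by factor: Float) {
    for i in values.indices {
        values[i] *= factor
    }
}

var samples: [Float] = [1, 2, 3]
scale(&samples, by: 2)   // samples is now [2.0, 4.0, 6.0]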
Also classes add a lot of overhead and should only be used if you really must have a reference to the same object.
Examples for structs:
Timezone
Latitude/Longitude
Size/Weight
Examples for classes:
Person
A View

How to extend the Java API to be able to introduce new annotations [closed]

Can you explain to me how I can extend or change the Java API to use two new annotations, #any and #option, to allow multiplicities in Java?
The main idea behind multiplicities is the following:
Multiplicities help to solve many maintenance problems when we change a to-many relationship into a to-one relationship or vice versa.
I would like to use the above annotations for fields, method parameters and return values.
For example:
class MyClass {
    String #any name; // instead of using List<String>, I would like to use #any

    public String setFullname(String #option name) { // #option is another multiplicity
        ...
    }
}
To allow this definition, I have to change the Java API and extend it with these two annotations, but I don't know how to do this.
Can you tell me how to change the API, and which steps I must follow to achieve my requirements?
Please look at this paper to understand the issue.
As explained in that paper, building to-many relationships without multiplicities (i.e., with collections) causes a number of problems:
"It makes maintenance tedious and error-prone."
<< If the maintenance of a program requires a change of a relationship from to-one to to-many (or vice versa), nearly every occurrence of the variable representing this relationship in the program needs to be touched. In a language with static type checking, these occurrences will be identified by the compiler (as type errors) after the declaration of the field has been changed so that, at least, no use is forgotten; in a language without it, the change is extremely error-prone. >>
"It changes conditions of subtyping"
<< If S is a subtype of T, an expression of type S can be assigned to a variable (representing a to-one relationship) of type T. When the relationship is upgraded to to-many and the types of the expression and variable are changed to Collection<S> and Collection<T> to reflect this, the assignment is no longer well-typed [18]. To fix this, the use of a (former to-one and now to-many) variable must be restricted to retrieving elements of its collection, which may require substantial further code changes. We consider this dependency of subtyping on multiplicity to be awkward. >>
"It changes call semantics"
Yet another manifestation of the discontinuity is that when a variable holding a related object is used as an actual parameter of a method call with call-by-value semantics, the method cannot change the value of the variable (i.e., to which object the variable points), and thus cannot change which object the variable’s owner is related to. By contrast, when the variable holds a collection of related objects, passing this variable by value to a method allows the method to add to and remove from the collection, and thus to change which objects the variable’s owner is related to, effectively giving the call by-reference semantics. We consider this dependency of semantics on multiplicity to be awkward.
Surely, there is an easy fix to all problems stemming from the noted discontinuity: implement to-one relationships using containers also. For instance, the Option class in Scala has two subclasses, Some and None, where Some wraps an object of type E in an object of type Option, which can be substituted by None, so that the object and no object have a uniform access protocol (namely that of Option). By making Option implement the protocol of Collection, the above noted discontinuity will mostly disappear. However, doing so generalizes the problems of collections that stem from putting the content over the container. Specifically:
"Related objects have to be unwrapped before they can be used".
Using containers for keeping related objects, the operations executable on a variable representing the relationship are the operations of the container and not of the related objects. For instance, if cookies have the operation beNibbled(), the same operation can typically not be expected from a collection of cookies (since Collection is a general purpose class).
"It subjects substitutability to the rules of subtyping of generics". While the difference in subtyping between to-one and to-many variables (item 2 above) has been
removed, the wrong version has survived: now, a to-one relationship with target type T, implemented as a field having type Option, cannot relate to an object of T’s subtype S (using Option, unless restrictions regarding replacement of the object are accepted).
"It introduces an aliasing problem".
While aliasing is a general problem of object-oriented programming (see, e.g., [11, 19]), the use of container objects to implement relationships introduces the special problem of aliasing the container: when two objects share the same container, the relationship of one object cannot evolve differently from that of the other. This may however not model the domain correctly, and can lead to subtle programming errors.
"Expressive poverty".
More generally, given only collections it is not possible to express the difference between “object a has a collection, which contains objects b1 through bn” and “object a has objects b1 through bn”. While one might maintain that the former is merely the object-oriented way of representing the latter, and that the used collection is merely an implementation object, it could be the case that the collection is actually a domain object (as which it could even have aliases; cf. above). In object-oriented modelling, by contrast, collections serving as implementation classes are abstracted from by specifying multiplicities larger than 1 (possibly complemented by constraints on the type of the collection, i.e., whether it is ordered, has duplicates, etc.). A collection class in a domain model is therefore always a domain class.
The following figure highlights these problems using a sample program from the internet service provider domain.
http://infochrist.net/coumcoum/Multiplicities.png
Initially, a customer can have a single email account which, according to the price plan selected, is either a POP3 or an IMAP account. Accounts are created by a factory (static method Account.make, Figure 1 left, beginning in line 4) and, for reasons of symmetry, are also deleted by a static method (Account.delete; line 19); due to Java’s lack of support for calling by reference (or out parameters), however, delete does not work as expected. Therefore, resetting of the field account to null has been replicated in the method Customer.transferAccount (line 40).
When the program is upgraded to support multiple accounts per customer, the first change is to alter the type of account to List (Figure 1 right, line 30). As suggested by Problem 1 above, this entails a number of changes. In class Customer it requires the introduction of an iteration over all accounts (line 35), and the replacement of the receiver of the method set, account, with the iteration variable a (Problem 4). In class Account, make must be changed to return a list of accounts (line 4), and the construction of accounts (lines 7 and 12) must be replaced by the construction of lists that contain a single account of the appropriate type. Admittedly, making the Account factory return a list seems awkward; yet, as we will see, it only demonstrates Problem 7.
Also, it brings about a change in the conditions of subtyping (Problem 2): for make to be well-typed (which it is not in Figure 1, right), its return type would either have to be changed to List (requiring a corresponding change of the type of Customer.account, thus limiting account’s use to read access; Problem 5), or the created lists would need to be changed to element type Account.
The parameter type of Account.delete needs to be changed to List also; replacing the assignment of null with clearing the list (line 20) to better reflect the absence of an account (cf. the above discussions on the different meanings of null) makes delete work as intended, which may however change the semantics of a program actually calling delete (Problem 3). An analogous change from assigning null to calling clear() in class Account, line 40, introduces a logical error, since the transferred account is accidentally cleared as well (Problem 6).
The solution is to use multiplicities, as follows (look at the comment below for the image):
The question is now, how can I implement multiplicities in Java?
You are confused about what API means. To implement this idea, you would need to edit the source code of the Java compiler, and what you would end up with would no longer be Java; it would be a forked version of Java, which you would have to call something else.
I do not think this idea has much merit, to be honest.
It's unclear why you think this would solve your problem, and using a non-standard JDK will -- in fact -- give you an even greater maintenance burden. For example, when there are new versions of the JDK, you will need to apply your updates to the new version as well when you upgrade. Not to mention the fact that new employees you hire will not be familiar with your language that deviates from Java.
Java does allow one to define custom annotations:
http://docs.oracle.com/javase/1.5.0/docs/guide/language/annotations.html
... and one can use reflection or annotation processors to do cool things with them. However, annotations cannot be used to so drastically change the program semantics (like magically making a String mean a List of strings, instead) without forking your own version of the JDK, which is a bad idea.

Specification: Use cases for CRUD

I am writing a Product requirements specification. In this document I must describe the ways that the user can interact with the system in a very high level. Several of these operations are "Create-Read-Update-Delete" on some objects.
The question is, when writing use cases for these operations, what is the right way to do so? Can I write only one Use Case called "Manage Object x" and then have these operations as included Use Cases? Or do I have to create one use case per operation, per object? The problem I see with the last approach is that I would be writing quite a few pages that I feel do not really contribute to the understanding of the problem.
What is the best practice?
The original concept for use cases was that they -- like actors, class definitions, and frankly everything else -- enjoy inheritance, as well as <<uses>> and <<extends>> relationships.
A Use Case superclass ("CRUD") makes sense. A lot of use cases are trivial extensions to "CRUD" with an entity type plugged into the use case.
A few use cases will be interesting extensions to "CRUD" with variant processing scenarios for -- maybe -- a fancy search as part of Retrieve, or a multi-step process for Create or Update, or a complex confirmation for Delete.
Feel free to use inheritance to simplify and normalize your use cases. If you use a UML tool, you'll notice that Use Cases have an "inheritance" arrow available to them.
The answer really depends on how complex the interactions are and how many variations are possible from object to object. There are two real reasons why I suggest that you develop specific use cases for each CRUD operation:
(a) If you really are only doing a high-level summary of the interaction then the overhead is very small
(b) I've found it useful to specify a set of generic Use Cases for modifying 'Resources' and then extending / overriding particular steps for particular objects. Obviously the common behaviour is captured in the generic 'Resource' use cases.
As your understanding of the domain develops (i.e. as business users dump more requirements on you), you are more likely to add to the CRUD rather than remove it.
It makes sense to distinguish between workflow cases and resource/object lifecycles.
They interact but they are not the same; it makes sense to specify them both.
Use case scenarios or more extended workflow specifications typically describe how a case may proceed through the system's workflow. This will typically include interaction with various different resources. These interactions can often be characterized as C,R,U or D.
Resource lifecycles provide the process model of what may happen to a particular (type of) resource (object). They are often trivial "flower" models that say: any of C,R,U,D may happen to this resource in any order, so they are not very interesting by themselves.
The link between the two is that steps from the workflow and from the lifecycles coincide.
I feel representation - as long as it makes sense and is readable - does not matter. Conforming to the UML spec in all details is especially irrelevant.
What does matter is that your spec clearly states the operations and operation types the implementation requires.
C: What forms of insert operation exist? Can you insert rows that are not fully populated? Can you insert rows without an ID? Can you retrieve the last inserted ID? Can you cancel an insert selectively? What happens on duplicate keys or constraint failures? Is there a REPLACE INTO equivalent?
R: By what fields can you select? Can you do arbitrary grouping and ordering? Can you create aggregate fields and aliases? How can you retrieve embedded (has-many etc.) data? How do you specify depth of recursion and limits?
U, D: see R + C