I have pools and lanes containing activities in a BPMN 2.0 Business Process Diagram. I want to show the lanes (or pools) together with their activities in the Relationship Matrix.
I choose lanes (pools) as Source and activities as Target (or vice versa) in the Relationship Matrix, but no relationships are shown.
Which Link Type should I select in the Relationship Matrix? How can I resolve this? How should I relate activities to lanes so that the relationships appear in the Relationship Matrix?
You can't do it this way. The Relationship Matrix is based on connectors established between elements, whereas containment of activities in lanes or pools is a structural relation. That structure is already visible in the Project Browser, although there you can't play with that relation wizardry (which in the case of pools does not make sense anyway).
I'm working on a family practitioner model. My goal is to model the population of a city and all of the family practitioners in that city, taking into account the distribution of treatment times, the arrival schedule of patients, etc. I've started with a basic model with two practitioners and a small population.
Basic Model
Now, if I model all 10k practitioners by duplicating the same blocks, it's going to be practically undoable. The other solution is to add more units to the ResourcePool, but that does not model the real situation, because every family practitioner has his own queue; increasing the ResourcePool is like modelling n family practitioners in the same clinic with one shared queue of patients. Is there any other solution? (It's my first time using AnyLogic, so I'm basically a newbie.)
thanks
You need to change your model architecture fundamentally. Practitioners become a custom agent type: you create a population of them and put the flowchart inside that agent type. Then you can have 10k of them, each with its own flowchart.
I strongly recommend you do all the tutorials in the help first to get a better understanding of the capabilities of AnyLogic. Having some blocks on Main is really just 0.1% of what the tool is capable of :)
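To make the idea concrete, here is a plain-Python sketch of the architecture (this is not AnyLogic code; in AnyLogic you would create a Practitioner agent type with a flowchart inside it and a population of 10,000 on Main; all names and numbers below are my own illustration values):

```python
# Conceptual sketch in plain Python (NOT AnyLogic code): each practitioner is an
# agent with its own queue, and we create a population of them. Treatment time,
# arrival rate, and horizon are made-up illustration values.
import heapq
import random

class Practitioner:
    def __init__(self, pid, mean_treatment_time=15.0):
        self.pid = pid
        self.queue = []                 # this practitioner's own waiting line
        self.busy_until = 0.0
        self.mean_treatment_time = mean_treatment_time

population = [Practitioner(pid) for pid in range(10_000)]   # 10k agents

def simulate(population, horizon=8 * 60, mean_interarrival=20.0):
    """Tiny event loop: patients arrive at *their* practitioner and are
    treated one at a time by that practitioner only."""
    events = []                          # (time, kind, practitioner id)
    for p in population:
        heapq.heappush(events, (random.expovariate(1 / mean_interarrival), "arrival", p.pid))
    served = 0
    while events:
        t, kind, pid = heapq.heappop(events)
        if t > horizon:
            break
        p = population[pid]
        if kind == "arrival":
            p.queue.append(t)
            heapq.heappush(events, (t + random.expovariate(1 / mean_interarrival), "arrival", pid))
        # start the next treatment if this practitioner is free and has a patient waiting
        if p.queue and p.busy_until <= t:
            p.queue.pop(0)
            p.busy_until = t + random.expovariate(1 / p.mean_treatment_time)
            heapq.heappush(events, (p.busy_until, "done", pid))
            served += 1
    return served

print(simulate(population))
```

The point is only that each practitioner carries its own queue and state; in AnyLogic, putting the flowchart inside the Practitioner agent type gives you exactly that, without duplicating any blocks on Main.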
So, for the first time, I'm going to do a project that involves maps with layers on top of them, where the layers contain many points and many polygons.
I have the tendency to create separate tables for points and polygons and then create many-to-many relationships between them and the layers table. If I do that I end up with 5 tables: points, polygons, layers, layers_points and layers_polygons.
However, I see PostGIS also offers types called MULTIPOINT and MULTIPOLYGON. If I use those types, then I could put it all in the layers table. I guess that would make queries faster, because I need fewer joins. However, I'm not sure whether I might regret it later, if it means that working with the individual points and polygons becomes impossible. I'm not even sure yet whether it will be necessary to perform calculations on the individual points and polygons, but it would be nice to know whether that's possible in both approaches.
So basically I'm asking: what are the pros and cons of these different approaches?
In general, you would consider using multipolygons to represent entities that have disjoint surfaces (for example, the geometry of Alaska) or other topologies that you can't represent as polygons. The key here is that a multipolygon should express a single entity.
What you wouldn't do is group unrelated polygons into a multipolygon, because you won't be able to perform queries at a child polygon level, unless you extract the rings into another geometry. If the polygons are unrelated, chances are you will need to query them individually. Even if they share a layer, you can manage that relation with business logic without merging them as they aren't representing the same entity.
Keep in mind that geometry tools on the frontend won't necessarily treat multipolygons as a valid geometry or as a multi object. Point-in-polygon algorithms, which look like your use case, won't necessarily work when checking whether a point is contained in a multipolygon.
Tools like Wicket.js (which transforms from/to WKT/GeoJSON/native objects) don't support multipolygons. The Google Maps API v3 doesn't support multipolygons except in the data layer (but you can't operate on the data layer as you would on a polygon feature). Turf.js has operations that will run on a FeatureCollection containing several polygons, yet not over a multipolygon.
Without knowing your exact use case, that's the best I can tell you. TL;DR: keep your polygons as they are.
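For what it's worth, here is a rough sketch of the two layouts in PostGIS terms (all table and column names, plus the connection string, are invented for illustration). The last query shows that with the multi-geometry layout you can still reach child polygons, but only by unnesting with ST_Dump in every query that needs them:

```python
# Rough sketch of the two layouts (table/column names are assumptions).
# Requires a PostgreSQL database with the PostGIS extension installed.
import psycopg2

# Option A: individual polygons, linked to layers via a join table.
DDL_SEPARATE = """
CREATE TABLE layers   (id serial PRIMARY KEY, name text);
CREATE TABLE polygons (id serial PRIMARY KEY, geom geometry(Polygon, 4326));
CREATE TABLE layers_polygons (
    layer_id   int REFERENCES layers(id),
    polygon_id int REFERENCES polygons(id),
    PRIMARY KEY (layer_id, polygon_id)
);
"""

# Option B: one multi-geometry per layer row.
DDL_MULTI = """
CREATE TABLE layers (
    id       serial PRIMARY KEY,
    name     text,
    polygons geometry(MultiPolygon, 4326),
    points   geometry(MultiPoint, 4326)
);
"""

# With option B, child polygons are only reachable by unnesting on the fly
# with ST_Dump, which you pay for in every query that needs them.
CHILDREN_CONTAINING_POINT = """
SELECT l.id, d.path[1] AS child_index, d.geom AS child_polygon
FROM   layers l, LATERAL ST_Dump(l.polygons) AS d
WHERE  ST_Contains(d.geom, ST_SetSRID(ST_MakePoint(%s, %s), 4326));
"""

conn = psycopg2.connect("dbname=gis")          # assumed connection string
with conn, conn.cursor() as cur:
    cur.execute(DDL_MULTI)                     # or DDL_SEPARATE
    cur.execute(CHILDREN_CONTAINING_POINT, (-149.9, 61.2))
    print(cur.fetchall())
```

With the separate-tables layout, the same question is a join through layers_polygons plus ST_Contains on polygons.geom, and a plain GiST index on polygons.geom serves it directly.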
Just curious: is there some reason why one cannot do all necessary normalizations
in a single step? Isn't normalization ultimately the redrawing of the functional dependency (FD) graph? We start out with an FD diagram/graph and we want to end up with a graph (vertices are attributes; there is an edge between attributes a and b if b is functionally dependent on a) representing a relation in (Edit) BCNF?
EDIT: What I mean is: we start with an FD graph, i.e., a graph pairing attributes a and b iff b is functionally dependent on a, i.e., we join a and b with an edge iff b = f(a).
From this graph we want to obtain a graph (FD)_2 with certain traits that are equivalent to having been fully normalized, i.e., (FD)_2 is in 5NF or 6NF, using the graph-theoretical relation between a graph and a given normal form. If so, we are basically mapping one graph to another graph. Can we use this approach of drawing (FD)_2 directly, as a function of FD, to skip the normalization steps?
Yes: Normalization can be characterized by rearranging (hyper)graphs. It does not have to be done by moving through normal forms in some order. (It's just a common misconception that it is.)
The normal forms on the continuum from 1NF to 6NF are those dealing with problematic FDs (functional dependencies) and JDs (join dependencies). They can be ordered so that if a relation value or variable satisfies a form, then it satisfies the forms before it, but not necessarily those after it. Currently: 1NF, 2NF, 3NF, EKNF, BCNF, 4NF, ETNF, RFNF, SKNF, 5NF aka PJ/NF, Overstrong PJ/NF, 6NF. This ordering has nothing to do per se with decomposing to relation values or variables that are in higher normal forms. It is not necessary to decompose through a sequence of forms.
The normal forms are just different conditions that have been found to have helpful properties. Moreover, the known normal forms are just those that have been discovered so far; there may well be other helpful properties yet to be distinguished. We don't pass through them in sequence when we normalize. ETNF only dates from 2012!
As to your graph characterization:
An FD has a set of attributes as determinant, which determines another set. But since the one set determines the other if and only if it determines each of the sets that contain exactly one member of the other, informally but unambiguously we also talk about a set of attributes determining an attribute. An FD {...} -> a holds iff a = f(...). (There can be zero or more determinant attributes.) BCNF is the highest normal form re problematic FDs, but there are higher normal forms re problematic JDs. A JD with given components holds in a relation iff the relation is always their join, i.e. its meaning/predicate can be expressed as the AND of the components' meanings/predicates. So an FD {...} -> a holds iff a JD holds corresponding to a meaning/predicate with the conjunct a = f(...). An MVD (multi-valued dependency) corresponds to a certain binary JD. 5NF means that every JD that holds is "implied by the keys" (a technical term).
There are algorithms that, starting with FDs, decompose directly to 2NF, directly to 3NF, and directly to BCNF (with various other properties, like preservation of FDs). See the Alice book. One can decompose to 6NF simply by decomposing until there are no nontrivial JDs, without regard to FDs.
(See C. J. Date's Database Design and Relational Theory: Normal Forms and All That Jazz.)
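To give a feel for such a direct algorithm, here is a minimal Python sketch of the textbook BCNF decomposition (attribute and FD names are my own; for brevity it checks only the FDs you list, not all FDs they imply, so it is a simplification rather than a complete implementation):

```python
# Minimal sketch of the textbook "decompose directly to BCNF" algorithm.
# Caveat: it only checks the explicitly listed FDs, not all implied FDs.

def closure(attrs, fds):
    """Closure of a set of attributes under FDs given as (lhs, rhs) pairs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def bcnf_decompose(schema, fds):
    """Split `schema` on a BCNF-violating FD X -> Y into X+ and (schema - X+) ∪ X,
    then recurse; return the list of resulting sub-schemas."""
    schema = frozenset(schema)
    for lhs, rhs in fds:
        if not (lhs <= schema and rhs <= schema):
            continue
        # X -> Y violates BCNF if it is nontrivial and X is not a superkey of `schema`
        if not rhs <= lhs and not schema <= closure(lhs, fds):
            x_plus = frozenset(closure(lhs, fds)) & schema
            r1 = x_plus                       # X together with everything it determines
            r2 = (schema - x_plus) | lhs      # the rest, keeping X as the join attributes
            return bcnf_decompose(r1, fds) + bcnf_decompose(r2, fds)
    return [schema]

# Example: R(A, B, C) with A -> B and B -> C; B -> C violates BCNF.
fds = [(frozenset("A"), frozenset("B")), (frozenset("B"), frozenset("C"))]
print(bcnf_decompose("ABC", fds))   # two schemas: {B, C} and {A, B}
```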
So I've finished my dimensional modeling. It resulted in 2 business processes: one simple, with only one fact table and a few dimensions, and the other a bit more complex, with 2 fact tables (related in a similar way to Invoice and InvoiceRecord) and a lot more dimensions.
My question now is how to start building the OLAP cube(s): one for each business process? Or one for each business process and for each fact table?
You need to consider all the fact tables and dimension tables when creating a common star schema. You should consider creating a single cube unless the fact and dimension pairs are not interrelated at all. It all depends on your design.
Here's a theoretical question. Let's assume that I have implemented two types of collaborative filtering: user-based CF and item-based CF (in the form of Slope One).
I have a nice data set for these algorithms to run on. But then I want to do two things:
I'd like to add a new rating to the data set.
I'd like to edit an existing rating.
How should my algorithms handle these changes (without doing a lot of unnecessary work)? Can anyone help me with that?
For both cases, the strategy is very similar:
user-based CF:
update all similarities for the affected user (that is, one row and one column in the similarity matrix)
if your neighbors are precomputed, compute the neighbors for the affected user (for a complete update, you may have to recompute all neighbors, but I would stick with the approximate solution)
Slope-One:
update the frequency (only in the 'add' case) and the diff matrix entries for the affected item (again, one row and one column)
Remark: If your 'similarity' is asymmetric, you need to update one row and one column. If it is symmetric, updating one row automatically results in updating the corresponding column.
For Slope-One, the matrices are symmetric (frequency) and skew-symmetric (diffs), so you also only need to update one row or column, and you get the other one for free (if your matrix storage works like this).
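To illustrate the Slope-One part concretely, here is a minimal Python sketch of the incremental update for both cases, a new rating and an edited rating (this is my own toy structure, not MyMediaLite code; only the entries pairing the affected item with the user's other rated items are touched):

```python
# Minimal sketch of incremental Slope One updates (names and structures are
# illustrative assumptions).
from collections import defaultdict

# freq[i][j] = number of users who rated both items i and j
# diff[i][j] = sum of (rating_i - rating_j) over those users
freq = defaultdict(lambda: defaultdict(int))
diff = defaultdict(lambda: defaultdict(float))
user_ratings = defaultdict(dict)   # user -> {item: rating}

def add_rating(user, item, rating):
    """Case 1: a brand-new rating. Touch only one 'row and column':
    the entries pairing `item` with the user's other rated items."""
    for other, r_other in user_ratings[user].items():
        freq[item][other] += 1
        freq[other][item] += 1
        diff[item][other] += rating - r_other      # skew-symmetric counterpart below
        diff[other][item] += r_other - rating
    user_ratings[user][item] = rating

def update_rating(user, item, new_rating):
    """Case 2: an edited rating. Frequencies stay the same; only the diffs
    involving `item` for this user's co-rated items change."""
    delta = new_rating - user_ratings[user][item]
    for other in user_ratings[user]:
        if other == item:
            continue
        diff[item][other] += delta
        diff[other][item] -= delta
    user_ratings[user][item] = new_rating

def predict(user, item):
    """Standard weighted Slope One prediction from the current matrices."""
    num = den = 0.0
    for other, r_other in user_ratings[user].items():
        if other == item or freq[item][other] == 0:
            continue
        n = freq[item][other]
        num += (diff[item][other] / n + r_other) * n
        den += n
    return num / den if den else None

add_rating("alice", "book", 4.0)
add_rating("alice", "film", 3.0)
add_rating("bob", "book", 5.0)
add_rating("bob", "film", 2.0)
update_rating("alice", "film", 5.0)
print(predict("bob", "film"))   # -> 4.0 with the toy data above
```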
If you want to see an example of how this could be implemented, have a look at MyMediaLite (disclaimer: I am the main author): https://github.com/zenogantner/MyMediaLite/blob/master/src/MyMediaLite/RatingPrediction/ItemKNN.cs
The interesting code is in the method RetrainItem(), which is called from AddRatings() and UpdateRatings().
The general concept here is called online algorithms.
Instead of retraining the whole predictor, it can be updated "online" (while remaining useable) with the new data only.
If you google for "online slope one predictor" you should be able to find some relevant approaches from the literature.