In BPMN, how to pack existing lanes into a new pool without messing up the whole diagram?

I've created a BPMN diagram with Enterprise Architect 14. It contains two separate lanes. Now I've noticed that I have to pack them into a pool to model collaboration with another one.
I added a new pool "Cloud" and want to put both lanes into it. Unfortunately, drag and drop makes a terrible mess. How can I wrap my lanes in a pool without having to reformat the whole diagram?

You just have to make the Cloud pool large enough to hold the lanes. I just tried that with no issue.


MongoDB Atlas Projects/Clusters

I have recently finished a course for the merng tech stack and now I would like to take what I have learnt and create a project of my own. During the setup for MongoDB Atlas (Free tier) on my new project, I thought of these questions:
Should I start a new project on my MongoDB Atlas account and create a new cluster in that project? Or just create a new cluster in the previous project? (I would assume I should start a new project as the new one has no relevance to the previous one)
Why would/can you have more than one cluster for one project?
I'm still fairly new to this tech stack and would like some clarity on these questions, so I apologise in advance if these come across as stupid. Thanks.
If you're on the free tier, it's somewhat irrelevant, as you can only have a single free cluster per project.
As to why you might want more than one cluster for a single project: that's mostly relevant for bigger and more complex projects, and I'd expect a personal project to fit within a single cluster. Where I work, we mostly use clusters to separate domains between teams; it's also one of the easiest permission restrictions to set up. If you really get down to it, multiple clusters are a means of organizing a project, and you may want different configurations between your clusters. For example, since backups are very costly, you might enable frequent backups only on the clusters that actually need them.
Update
You might also want to explore sharding to remain on a single cluster, but that is a costly and complex solution compared to maintaining multiple clusters, since finding a shard key that distributes the load evenly is not a trivial task. We've also moved away from separating clusters by domain; we now separate databases by domain, and the databases are then distributed across clusters to balance the load.
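The databases-by-domain layout described above can be sketched as a small routing table. This is a minimal sketch; the cluster hosts, domain names, and URI options below are made-up placeholders for illustration, not real Atlas endpoints:

```python
# Sketch of the "databases by domain, distributed across clusters" idea.
# Cluster hosts and domain names are placeholders, not real deployments.

DOMAIN_TO_CLUSTER = {
    # domain: (cluster host, database name)
    "billing":   ("cluster-a.example.mongodb.net", "billing"),
    "catalog":   ("cluster-a.example.mongodb.net", "catalog"),
    "analytics": ("cluster-b.example.mongodb.net", "analytics"),  # heavy load, own cluster
}

def connection_uri(domain: str, user: str, password: str) -> str:
    """Build an Atlas-style connection string for a domain's database."""
    host, db = DOMAIN_TO_CLUSTER[domain]
    return f"mongodb+srv://{user}:{password}@{host}/{db}?retryWrites=true&w=majority"
```

Keeping the mapping in one place means application code asks for a domain rather than hardcoding which cluster a database currently lives on, so databases can later be rebalanced across clusters by editing one table.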

How can I compactly store a shared configuration with Kubernetes Kustomize?

First, I'm not sure this question is specific enough for Stack Overflow. Happy to remove or revise if someone has any suggestions.
We use Kubernetes to orchestrate our server side code, and have recently begun using Kustomize to modularize the code.
Most of our backend services fit nicely into that data model. For our main transactional system we have a base configuration that we overlay with tweaks for our development, staging, and different production flavors. This works really well and has helped us clean things up a ton.
We also use TensorFlow Serving to deploy machine learning models, each of which is trained and at this point deployed for each of our many clients. The only way that these configurations differ is in the name and metadata annotations (e.g., we might have one called classifier-acme and another one called classifier-bigcorp), and the bundle of weights that are pulled from our blob storage (e.g., one would pull from storage://models/acme/classifier and another would pull from storage://models/bigcorp/classifier). We also assign different namespaces to segregate between development, production, etc.
From what I understand of the Kustomize system, we would need a different base and set of overlays for every one of our customers if we wanted to encode the entire state of our current cluster in Kustomize files. This seems like a huge number of directories, as we have many customers. If we have 100 customers and five different deployment environments, that's 500 directories with a kustomization.yaml file.
Is there a tool or technique to encode this repetition with Kustomize? Or is there another tool that would help us generate Kubernetes configurations in a more systematic and compact way?
You can have more complex overlay structures than just a straight matrix approach. For example, for one app you could have apps/foo-base, and then apps/foo-dev and apps/foo-prod which both have ../foo-base in their bases, and those in turn are pulled in by overlays/us-prod, overlays/eu-prod, and so on.
But if every combo of customer and environment really does need its own setting then you might indeed end up with a lot of overlays.
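If every customer/environment combination really is identical except for a name and a namespace, one workaround is to generate the overlay directories from a template rather than maintaining them by hand. Here is a minimal sketch; the template contents, directory layout, and the use of nameSuffix are illustrative assumptions, not taken from the asker's actual configs:

```python
# Generate one kustomization overlay per (customer, environment) pair,
# instead of hand-maintaining hundreds of directories.
from pathlib import Path

# Hypothetical template; a real one would carry whatever varies per customer,
# e.g. the blob-storage path for the model weights.
KUSTOMIZATION_TEMPLATE = """\
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: {env}
nameSuffix: -{customer}
resources:
  - ../../base
"""

def render_overlay(customer: str, env: str) -> str:
    """Fill the template for one customer/environment pair."""
    return KUSTOMIZATION_TEMPLATE.format(customer=customer, env=env)

def write_overlays(root: Path, customers: list[str], envs: list[str]) -> int:
    """Write overlays/<customer>/<env>/kustomization.yaml for every pair."""
    count = 0
    for customer in customers:
        for env in envs:
            d = root / customer / env
            d.mkdir(parents=True, exist_ok=True)
            (d / "kustomization.yaml").write_text(render_overlay(customer, env))
            count += 1
    return count
```

Running a generator like this in CI keeps the checked-in surface area down to one template plus two lists, instead of 500 hand-edited directories, while still producing plain Kustomize input that `kustomize build` can consume.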

Best practices for a microstrategy workflow

We are a team of 5 people working with MicroStrategy. We share every role, but we have no workflow.
Everybody may build or change attributes and change the schema. This often leads to broken reports. Furthermore, there is no "good" documentation. We tried to establish documentation with SharePoint, but there we also had no workflow.
Originally, we had an old project where, for every report, all the attributes were constructed from scratch, so we did not reuse any existing schema objects.
Hence, we started a new project. We realized that, due to lack of understanding and lack of workflow, we make and made a lot of mistakes. We feel that we are slowly understanding things better (e.g., parent-child relationships), but the workflow is still horrible.
We have a development project and a live project, but with the way we are working now, we have a lot of problems. In particular, the missing version control system is killing us. We perform changes and forget what we did. Hence, we have to restore from backups, destroying useful work done on a given day.
So what are best practices to:
* deploy new attributes, facts and reports
* ensure that old reports work after constructing new attributes and facts
* improve documentation
* handle attributes defined on fact tables and parent-child relationships
Any help is appreciated
MicroStrategy development in a team environment, deploying from development to live, can be very challenging. As you rightly point out, the lack of version control, and unknown interdependencies between objects can cause untold problems. There's no one right answer to this question, but I would suggest the following:
Use all the tools provided by MicroStrategy. When you're deploying from one project to another, don't just drag and drop in Object Manager, create a package. When you deploy that package, make sure you choose to create an undo package, so you can rollback changes if you encounter any problems.
Try to catch these problems in advance: running Integrity Manager before and after a deployment, even if it's just to generate SQL for the reports, will point out if you've broken anything. Along the same lines:
Create a third environment/project. Call this test/release control, whatever you prefer. Here you can test packages created in Object Manager, to ensure that they have the desired effect, and don't break anything. In effect, this is a dry run for your deployment to live. This environment should be regularly refreshed from live (via project duplication), to make sure it doesn't get in an unexpected state (as the result of a broken Object Manager package import for example).
Over and above that, I can only offer organisational advice. It's not uncommon for one person to take responsibility for schema objects (i.e. facts, attributes, transformations) so that developers don't undo each other's changes. If you have a large project, these objects could be split into functional areas, and individuals assigned.
Documentation is always tricky, but I like to put as much as possible into the object descriptions. This has the advantage of being visible in the Web interface (via tooltips) and of being included in the automated project documentation, should you choose to generate that. There is also the change log functionality for each object, but in my experience developers soon stop filling those logs in, as saving happens too frequently. Still, if you can get people to populate them, you'd have a head start on understanding the changes in your project.
To summarise:
Use Object Manager packages to deploy changes
Test changes with Integrity Manager, to catch any issues as early as possible
Use a release control project/environment, so you're not catching issues in your production environment
Assign responsibility for schema objects to a specific person or persons where possible.

BPMN - reusable process over several pools

I am using Enterprise Architect, and it seems like what I want to model with BPMN 2.0 is forbidden, but I just don't get it; maybe someone can help.
According to the BPMN spec, an activity cannot be used in several pools, as it is always bound to one pool.
BUT activities can be marked as "call activities", which actually can have their own pools and be reused, right? Meaning if I have a sub-process marked as a call activity, using its own pool, shouldn't I be able to use this one in different pools as well?
To clarify what I need to model: In a warehouse, I have several processes, all with different pools. I need to use pools and not lanes, as they can only communicate via messages, which would not be allowed in one pool (right?).
Now there is one process, which all other processes can result in, the general "error handling".
But no matter what I try, I cannot use this activity more than once; EA keeps crashing (version 10) or telling me I can only use sequence flows within one pool (version 11).
Can anyone help me understand which part of BPMN I did not get correctly here?
Thanks in advance
I cannot answer why Enterprise Architect is crashing / not supporting your modelling approach, but I can assure you that referencing a global task or another process via call activities from different pools is valid BPMN 2.0.
The specification (p 183 ff/213 ff in PDF doc: Call Activities) does not mention restrictions regarding the pools from which global tasks can be referenced (it wouldn't make sense to put such a restriction on referencing something "global", either) and other modeling tools seem to support your approach as well. I just tested the case with Signavio and it works fine, the syntax checker is not throwing any error.
Another approach to your case might be referencing another process via Link Intermediate Events (p 183/213 in the PDF doc). However, I don't know if this is possible in Enterprise Architect, but it might be worth a try.

How to move to a new version control system

My employer has tasked me with becoming our new version control admin. We are currently using two different version control systems for two different code bases. The code/functionality in the two code bases overlap in some areas. We will be moving both code bases to a new version control system.
I am soliciting ideas on how to do this. I suppose we could add the two code bases to the new version control as siblings in the new depot's hierarchy and gradually remove redundancy by gradually promoting to a third sibling in the hierarchy, ultimately working out of the third sibling exclusively. However, this is just a 30,000 ft view of the problem, not a solution. Any ideas, gotchas, procedures to avoid catastrophe?
Thanks
Git can be set up in such a way that svn, git, and cvs clients can all connect. This way you can move over to a central Git repo, but people who are still used to svn can continue to use it.
It sounds like, in your specific situation with two code bases you want to combine, you should make three repositories and start to combine the first two into the third one.
My advice is to experiment with a few "test" migrations. See how it goes and adjust your scripts as necessary.
Then once you're set, you can execute it for real and you're done. Archive your old repos too.
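The test migrations mentioned above are easiest to adjust and re-run if they are scripted. A minimal sketch, assuming one of the old systems is Subversion and the target is Git; the repository URL, target directory, and authors file are placeholders:

```python
# Plan and run a repeatable SVN -> Git trial migration.
# The URL and file names used in any real run are up to you.
import subprocess

def migration_commands(svn_url: str, target_dir: str, authors_file: str) -> list[list[str]]:
    """Return the commands for one trial migration run."""
    return [
        # --stdlayout assumes the conventional trunk/branches/tags structure
        ["git", "svn", "clone", "--stdlayout",
         "--authors-file", authors_file, svn_url, target_dir],
        # sanity-check the imported history before going further
        ["git", "-C", target_dir, "log", "--oneline", "-5"],
    ]

def run_migration(svn_url: str, target_dir: str, authors_file: str) -> None:
    """Execute the planned commands, stopping on the first failure."""
    for cmd in migration_commands(svn_url, target_dir, authors_file):
        subprocess.run(cmd, check=True)
```

Separating the command plan from its execution lets you review or log exactly what a run will do, and tweak flags between trial runs without touching the driver code.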
Another place you might find inspiration is OpenOffice.org. They are in the middle of going from SVN to Mercurial. They probably have published information on their migration work.
Issues to consider:
How much history will you migrate?
How long will you need to continue using the old systems for patch-work, etc?
How long will you need to keep the old systems around to access historical information?
Does the new target VCS provide an automatic or quasi-automatic migration method from either of the two old VCSes?
How will you reconcile branching systems in the two old VCS with the model used in the new VCS?
Will tagging work properly?
Can tags be transferred (which will not matter if you are not importing much history)?
What access controls are applied to the old VCS that must be reproduced in the new?
What access controls are to be applied to the new VCS?
This is at least a starting point - I've no doubt forgotten many important topics.