Best practices for a MicroStrategy workflow

We are a team of 5 people working with MicroStrategy. We share every role, but we have no workflow.
Everybody may build or change attributes and change the schema. This often leads to reports breaking. Furthermore, there is no "good" documentation. We tried to establish documentation in SharePoint, but there we also had no workflow.
Originally, we had an old project in which all the attributes were constructed from scratch for every report, so we did not reuse any existing schema objects.
Hence, we started a new project. We realized that, due to a lack of understanding and a lack of workflow, we made (and still make) a lot of mistakes. We feel that we are slowly understanding things better (e.g. parent-child relationships), but the workflow is still horrible.
We have a development project and a live project, but with the way we are working now, we have a lot of problems. In particular, the missing version control system is killing us. We perform changes and forget what we did. Hence, we have to restore backups, destroying useful work done on that day.
So what are best practices to:
* deploy new attributes, facts and reports
* ensure that old reports work after constructing new attributes and facts
* improve documentation
* handle attributes defined on fact tables and parent-child relationships
Any help is appreciated

MicroStrategy development in a team environment, deploying from development to live, can be very challenging. As you rightly point out, the lack of version control, and unknown interdependencies between objects can cause untold problems. There's no one right answer to this question, but I would suggest the following:
Use all the tools provided by MicroStrategy. When you're deploying from one project to another, don't just drag and drop in Object Manager; create a package. When you deploy that package, make sure you choose to create an undo package, so you can roll back changes if you encounter any problems.
On that note, try to catch these problems in advance. Running Integrity Manager before and after a deployment, even if it's just to generate SQL for the reports, will point out if you've broken anything. Which brings me to the next point:
Create a third environment/project. Call it test, release control, or whatever you prefer. Here you can test packages created in Object Manager, to ensure that they have the desired effect and don't break anything. In effect, this is a dry run for your deployment to live. This environment should be regularly refreshed from live (via project duplication), to make sure it doesn't get into an unexpected state (as the result of a broken Object Manager package import, for example).
Over and above that, I can only offer organisational advice. It's not uncommon for one person to take responsibility for schema objects (i.e. facts, attributes, transformations) so that developers don't undo each other's changes. If you have a large project, these objects could be split into functional areas, and individuals assigned.
Documentation is always tricky, but I like to put as much as possible into the object descriptions. This has the advantage of being visible in the Web interface (via tooltips), and of being included in the automated project documentation, should you choose to generate that. There is obviously the change log functionality for each object, but in my experience those logs soon stop being filled in by developers, because saving happens too frequently. Still, if you can get people to populate them, you'll have a head start on understanding the changes in your project.
To summarise:
Use Object Manager packages to deploy changes
Test changes with Integrity Manager, to catch any issues as early as possible
Use a release control project/environment, so you're not catching issues in your production environment
Assign responsibility for schema objects to a specific person or persons where possible.


Rule development and deployment management with Drools Guvnor

Introduction
Drools Guvnor has its own versioning system, which in production use allows the users of an application to modify the rules and decision tables in order to adapt to changes in their business. Yet the same assets continue to live in the development version control system, where new features of the app are developed.
This post is looking for insight/ideas/experience on rule development and deployment when working with Drools rules and Guvnor.
Below are some key concepts I've been puzzling about.
Deployment to Guvnor
First of all, what is the best way to deploy the drl files and decision tables to the production environment? Simply put them in a zip package and then unzip it into the WebDAV folder? From what I have seen navigating around Drools, I haven't found a way to import more than one file at a time. The fact model can be added as a jar archive, though. Guvnor seems to have a REST API of some sort, but using that would require custom deployment scripts.
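For reference, the kind of custom script I have in mind for the WebDAV option would be something like the Java sketch below; the Guvnor WebDAV path, package name and credentials here are only assumptions on my part and would have to be adapted to the actual installation:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Base64;

    // Minimal sketch: push a single .drl file into Guvnor over WebDAV.
    // The URL layout (webdav/packages/<package>/assets/<file>) and the
    // credentials are assumptions -- adjust to match your Guvnor instance.
    public class GuvnorWebDavUpload {
        public static void main(String[] args) throws Exception {
            Path drl = Paths.get("rules/discounts.drl");   // hypothetical local rule file
            String target = "http://guvnor-host:8080/drools-guvnor"
                    + "/org.drools.guvnor.Guvnor/webdav/packages/demo/assets/discounts.drl";

            HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
            conn.setRequestMethod("PUT");                  // a WebDAV upload is a plain HTTP PUT
            conn.setDoOutput(true);
            String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes("UTF-8"));
            conn.setRequestProperty("Authorization", "Basic " + auth);

            try (OutputStream out = conn.getOutputStream()) {
                Files.copy(drl, out);                      // stream the rule file as the request body
            }
            System.out.println("Guvnor responded: " + conn.getResponseCode());
        }
    }

Looping over a directory of drl files and decision tables with such a script would at least remove the one-file-at-a-time limitation of the import screen.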
Change management
Secondly, once the application is in production, the users will likely want to change the values in decision tables, for example to set higher discount percentages for premium clients. This is all fine and dandy, until the time comes to start development of version 2.0 of the app.
What we have at this point is:
drl files and decision tables in version controlling system
drl files and decision tables in production environment with user modifications, versioned by Guvnor
Now we are at the point of getting the rules and decision tables back from Guvnor. Again, is the WebDAV folder the best way to do this, or what other options are there?
Merge tools today can even handle Excel file diffs, but this sounds like merge hell to me on large-scale projects.
Keeping the fact model backwards compatible
Yet another topic is fact model integrity. For the assumed version 2.0, developers always want to refactor and turn the whole fact model upside down. Still, it must remain backwards compatible with the previous versions, as there may be user-modified rules that depend on it. Any tips on this? Just keep the fact model simple and clean? Plan ahead / suggest what the users might want to change?
Summary
I'm certain I'm not the first, and surely not the last, to consider options for deployment and change management with Drools and Guvnor. So, what I'd like to hear is comments, discussion, tips, etc. on some of the best practices (and also the worst, in order to avoid them) for handling these situations.
Thanks.
The best way to do things depends very much on your specific application and the environment you work in. However the following are pointers from my own experience. Note that I'll add just a few points for now. I'll probably come back to this answer when things come to me.
After the initial go-live, keep releases incremental and small
For your first release you have the opportunity to try things out. Take advantage of this opportunity, and do as much refactoring as possible, because...
Your application has gone live and your business users are maintaining rules in decision tables. You have made great gains in what folks in the industry like to call "business agility". Unfortunately, this tends to be at the expense of application development agility. All of your guided editor rules and decision table rules are tied to your fact model, so any changes to existing properties of that fact model will break your decision tables. Unlike in most IDEs these days, you can't just right-click on a fact's getX() method, rename it, and expect all code which relies on that property to be updated.
Decision tables and guided rules are hard to refactor. If a fact has been renamed, then in many (all?) versions of Guvnor, that rule/table will no longer open. You need to get at the underlying XML file via WebDAV and do some text searching and replacing. That may be very difficult, considering that to do this, you need to download the file from production, make your changes, and test them in a test environment. When you're happy with your changes you need to push them back up to the 'production' Guvnor. Unfortunately, while you were doing that, the users have updated a number of decision tables and you need to either start again, or re-apply the past couple of days' changes. In an ideal world, your business users will agree to make no rule changes for a period of time. But the only way to make that feasible is to keep the changes small, so that you can make them in a couple of hours or a day, depending on how generous they feel.
To mitigate this:
Keep facts used within Guvnor separate from your application domain classes (see the sketch after this list). Your application developers can then refactor the internal application model to their hearts' content, but such changes will not affect the business model. Maintain mappings between the two and ensure there are tests covering those mappings.
Avoid changes such as renaming facts or their properties. Make sure that the facts you create and their properties have names which suit the domain, and agree these with the business. On the other hand, adding a new property is relatively painless. It is well worth prompting the users for a view of their future plans.
Keep facts as simple as possible. Don't go more complex than name-value pairs unless you really need to. For one thing, your business users will find it much easier to maintain rules. Your priority with anything managed within Guvnor should always be about making it easy for business users to maintain.
Keep external dependencies out of your facts. A developer may think it's a good idea to annotate a fact as a JPA @Entity, for easy persistence. Unfortunately, that adds dependencies which need to be added to Guvnor's classpath, requiring a restart.
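To illustrate the first point, here is a rough sketch of what I mean by keeping the Guvnor-facing facts separate from the internal model. All class and property names are made up for the example, and in real code each class would live in its own file:

    // Internal application model -- developers can refactor this freely.
    class CustomerEntity {
        private final String customerName;
        private final int loyaltyYears;

        CustomerEntity(String customerName, int loyaltyYears) {
            this.customerName = customerName;
            this.loyaltyYears = loyaltyYears;
        }
        String getCustomerName() { return customerName; }
        int getLoyaltyYears() { return loyaltyYears; }
    }

    // Guvnor-facing fact -- flat, simply named, agreed with the business.
    // This is the only class the rules and decision tables ever see.
    class Customer {
        private String name;
        private boolean premium;

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public boolean isPremium() { return premium; }
        public void setPremium(boolean premium) { this.premium = premium; }
    }

    // The mapping lives in application code and is covered by unit tests,
    // so internal refactoring never silently breaks the rule base.
    class CustomerMapper {
        Customer toFact(CustomerEntity entity) {
            Customer fact = new Customer();
            fact.setName(entity.getCustomerName());
            fact.setPremium(entity.getLoyaltyYears() >= 5); // example definition of "premium"
            return fact;
        }
    }

If the developers later rename or restructure CustomerEntity, only the mapper (and its tests) change; the Customer fact, and therefore every decision table built on it, stays stable.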
Tips & tricks
My personal technique for making cross-environment changes is to connect Eclipse to two Guvnor WebDAV directories, and check out the rules into local directories, where each local directory maps to an environment. I then use the Eclipse diff tooling.
When building a Guvnor-managed knowledge base, I create a separate Maven project containing only the facts, with no dependencies on anything else. It makes it a lot easier to keep them clean this way. Also, when I really do need to add a dependency (e.g. I use Joda-Time where possible), the build can have a step to generate a shaded JAR containing all the dependencies. That way you only ever deploy one JAR to Guvnor, which is guaranteed to contain the correct versions of your dependencies.
I'm sure there will be more that I think of. I'll try to remember to come back to this...

Web Application deployment and database/runtime data management

I have decided to finally nail down my team's deployment processes, soup-to-nuts. The last remaining pain point for us is managing database and runtime data migration/management. Here are two examples, though many exist:
If releasing a new "Upload" feature, automatically create the upload directory and configure permissions. In later releases, verify existence/permissions - forever, automatically.
If a value in the database (let's say an Account Status of "Signup") is no longer valid, automatically migrate data in database to proper values, given some set of business rules.
I am interested in implementing a framework that allows developers to manage and deploy these changes with the same ease that we manage and deploy our code.
So the first question is: 1. What tools/frameworks are out there that provide this capability?
In general, this seems to be an issue in any given language and platform. In my specific case, I am deploying a .NET MVC2 application which uses Fluent NHibernate for database abstraction. I already have in my deployment process a tool which triggers NHibernate's SchemaUpdate - which is awesome.
What I have built up to address this issue in my own way is a tool that scans target assemblies for classes which inherit from a certain abstract class (Deployment). That abstract class exposes hooks which you can override to implement your own arbitrary deployment code - in the context of your application's codebase. The Deployment class also provides a versioning mechanism, and the tool manages the current "deployment version" of a given running app. Then, a custom NAnt task glues this together with the NAnt deployment script, triggering the hooks at the appropriate times.
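To give an idea of the shape of this (in illustrative Java rather than our actual .NET code, with all names made up):

    // Each deployment step is a class the scanner discovers; the runner knows the
    // app's current "deployment version" and only executes steps that are newer.
    abstract class Deployment {
        abstract int version();                  // ordinal of this deployment step
        void beforeSchemaUpdate() {}             // hooks a concrete step may override
        void afterSchemaUpdate() {}
    }

    class CreateUploadDirectory extends Deployment {
        @Override int version() { return 12; }
        @Override void afterSchemaUpdate() {
            new java.io.File("/var/app/uploads").mkdirs();   // e.g. the "Upload" feature's directory
        }
    }

    class DeploymentRunner {
        void run(java.util.List<Deployment> steps, int currentVersion) {
            steps.sort(java.util.Comparator.comparingInt(Deployment::version));
            for (Deployment step : steps) {
                if (step.version() <= currentVersion) continue;  // already applied in an earlier release
                step.beforeSchemaUpdate();
                // ... the schema update itself runs between the hooks ...
                step.afterSchemaUpdate();
                // persist step.version() as the app's new current version here
            }
        }
    }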
This seems to work well, and does meet my goals - but here's my beef, which leads to my second question: 2. Surely what I just wrote has to already exist. If so, can you point me to it? and 3. Has anyone started down this path and gained insight into problems with this approach?
Lastly, if something like this exists, but not on the .NET platform, please still let me know - as I would be more interested in porting a known solution than starting from zero on my own solution.
Thanks everyone, I really appreciate your feedback!
For each major release, have a script that creates the environment with the exact requirements you need.
For minor releases, have a script that is split into the various releases and incrementally alters the environment (a rough sketch follows the list below). There are some big benefits to this:
You can look at the changes to the environment over time by reading the script and matching it with release notes and change logs.
You can create a brand new environment by running the latest major and then latest minor scripts.
You can create a brand new environment of a previous version (perhaps for testing purposes) by specifying it to stop at a certain minor release.
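As a sketch of how the major/minor script layout can be driven - the file layout, naming convention and version handling here are assumptions for illustration only:

    import java.io.File;
    import java.util.Arrays;

    // Apply the latest major script, then the incremental minor scripts,
    // optionally stopping at a chosen minor release (benefit 3 above).
    public class EnvironmentBuilder {
        public static void main(String[] args) {
            int stopAtMinor = Integer.parseInt(args[0]);   // e.g. 3 to rebuild the environment as of release 2.3

            runScript(new File("scripts/major-2.0-create-environment.sql"));

            File[] minors = new File("scripts/minors").listFiles();   // e.g. 2.1.sql, 2.2.sql, 2.3.sql ...
            if (minors == null) return;                                // nothing to apply
            Arrays.sort(minors);                                       // single-digit minors sort in release order
            for (File minor : minors) {
                if (minorNumber(minor.getName()) > stopAtMinor) break; // stop at the requested minor release
                runScript(minor);                                      // each script alters the environment incrementally
            }
        }

        private static int minorNumber(String fileName) {
            return Integer.parseInt(fileName.split("\\.")[1]);         // "2.3.sql" -> 3 (naming convention assumed)
        }

        private static void runScript(File script) {
            System.out.println("applying " + script);                  // hand off to your SQL/shell runner here
        }
    }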

SDLC: Managing changes in a 'Closed System' (M1 - ERP)

I am working with a client who has an ERP system in place, called M1, that they are looking to make custom changes to.
I have spent a little bit of time investigating the ERP system in terms of making customizations. Here is a list of what I have found with regards to custom changes:
Custom changes cannot be exported/imported. There is an option in the M1 Design Studio, however it always appears to be disabled... I tried everything, and I couldn't find any mention of it in the help documentation.
You can export a customizations change log (CSV, XML, Excel, HTML) that provides type, name, location and description. In essence, it is a read-only document that provides a list of changes you made. You cannot modify the contents of this log.
Custom form changes go into effect for all data sources (Test, Stage, LIVE) as soon as they are made. In other words, there does not appear to be any ability to limit the scope of a form change.
Custom field changes must be made in each data source (Test, Stage, LIVE). What's odd here is that if you add a field in Test, adjust a grid to display it, and subsequently switch to LIVE, it detects that the field doesn't exist and reverts the grid changes.
I'm unable to find documentation indicating that this application supports version control.
sigh
....
So...
How do I manage changes from an SDLC: ALM methodology and tools standpoint?
I could start by bringing in a change request system to manage pending and completed customizations. But then what? How should changes be managed and released? Put backups of the application under source control and deploy when needed?
There might not be a good answer to this question since I'm unable to take advantage of version control and create a separation of environments, but I figured I'd ask in case anybody has had similar experience or worked with M1.
I take it from the lack of answers in two months that your question is unanswerable as asked. SDLC is something you could write a textbook on, or read a textbook on, and still not know enough about your environment - other than that "SDLC" was probably a bullet point in the hiring qualifications at your shop.
I have no experience with M1, but I am assuming that you're going to have to ask your peers at work for their ideas, because it sounds like you're asking a vertically closed (your shop, your tools, your practices) question that has no exact technical answer.
As for best practices: I suggest you investigate best practices outside your M1 ERP silo and apply them where they make sense to you.
The company I work for also uses M1 ERP. We have similar issues regarding version control of the customisations. From what I can tell, all customisations are stored in the M1DD database. You could back up a copy of this database before any major development work as a basic revision control system.
I am familiar with the issue of all changes becoming immediately active in all datasets. This is particularly annoying when you are making changes to a commonly used module, as you don't know how live data will be affected during the development process. One technique I have found useful is to surround untested code with an if statement so it is only executed when I am logged in:
If App.UserID = "MYUSERNAME" Then
'new code here
End If
I would be interested in hearing how you solved this problem.

Speccing out new features

I am curious as to how other development teams spec out new features. The team I have just moved up to lead has no real specification process. I have just implemented a proper development process with CI, auto deployment and logging all bugs using Trac and I am now moving on to deal with changes.
I have a list of about 20 changes to our product to have done over the next 2 months. Normally I would just spec out each change going into detail of what should be done but I am curious as to how other teams handle this. Any suggestions?
I think we had a successful approach in my last job as we delivered the project on time and with only a couple of issues found in production. However, there were only 3 people working on the product, so I'm not entirely sure how it would scale to larger teams.
We wrote specs upfront for the whole product but without going into too much detail and with an emphasis on the UI. This was a means for us to get a feel for what had to be done and for the scope of the project.
When we started implementing things, we had to work everything out in a lot more detail (and inevitably had to do some things differently from the spec). To that end, we got together and worked out the best approach to implementing each feature (sometimes with prototypes). We didn't update the original spec but we did make notes after the meetings as it's very easy to forget the details afterwards.
So in summary, my approach is to treat specs as an exploratory tool and to work out finer details during implementation. Depending on the project, it may also be a good idea to keep the original spec up to date as the application evolves (which we didn't need to do this time).
Good question, but it can be subjective. I guess it depends on the strategy of the product (whether it is to be deployed to multiple clients in the same way, or to a single client as a bespoke project), the impact and dependencies these changes have on the system and on each other, and the priority with which these changes need to be made.
I would look at the priorities and the dependencies; that will naturally start grouping things.

What should I propose for a reusable code library organization?

My organization has begun slowly repurposing itself from a product-oriented business model to a more contract-oriented business model over the last year or two. During the past year, I was shifted into the new contracting business to help put out fires and fill orders. While the year as a whole was profitable (and therefore, by at least one measure, successful), we had a couple of projects that really dinged our numbers for the year back around June.
I was talking with my manager before the Christmas holiday, and he mentioned that, while he doesn't like the term "post-mortem" (I have no idea what's wrong with the term, any business folks or managers out there know?), he did want to hold a meeting sometime mid-January where the entire contract group would review the year and try to figure out what went right, what went wrong, and what initiatives we can perform to try to improve profitability.
For various reasons (I'll go into more detail if it's requested), I believe that one thing our team, and indeed the organization as a whole, would benefit from is some form of organized code-sharing. The same things get done again and again by different people and they end up getting done (and broken) in different ways. I'd like to at least establish a repository where people can grab code that performs a certain task and include (or, realistically, copy/paste) that code in their own projects.
What should I propose as a workable common source repository for a team of at least 10-12 full-time devs, plus anywhere from 5-50 (very) part time developers who are temporarily loaned to the contract group for specialized work?
The answer required some cultural information for any chance at a reasonable answer, so I'll provide it here, along with some of my thoughts on the topic:
Developers will not be forced to use this repository. The barrier to entry must be as low as possible to encourage participation, or it will be ignored. Sadly, this means that anything which requires an additional software client to be installed and run will likely fail. ClickOnce deployment's about as close as we can get, and that's awfully iffy.
We are a risk-averse, Microsoft shop. I may be able to sell open-source solutions, but they'll be looked upon with suspicion. All devs have VSS; the corporate director has declared that VSTS is not viable going forward. If it isn't too difficult a setup and the license is liberal, I could still try to ninja a VSTS server into the lab.
Some of my fellow devs care about writing quality, reliable software, some don't. I'd like to protect any shared code written by those who care from those who don't. Common configuration management practices (like checking out code while it's being worked on) are completely ignored by at least a fifth of my colleagues on the contract team.
We're better at writing processes than following them. I will pretty much have to have some form of written process to be able to sell this to my manager. I believe it will have to be lightweight, flexible, and enforced by the tools to be remotely relevant because my manager is the only person who will ever read it.
Don't assume best practices. I would very much like to include things like mandatory code reviews to enforce use of static analysis tools (FxCop, StyleCop) on common code. This raises the bar, however, because no such practices are currently performed in a consistent manner.
I will be happy to provide any additional requested information. :)
EDIT: (Responding to questions)
Perhaps contracting isn't the correct term. We absolutely own our own code assets. A significant part of the business model on paper (though not, yet, in practice) is that we own the code/projects we write and we can re-sell them to other customers. Our projects typically take the form of adding some special functionality to one of the company's many existing software products.
From the sounds of it you have an opportunity during the "post-mortem" to present some solutions. I would create a presentation outlining your ideas and present them at this meeting. Before that, I would recommend that you set up some solutions and demonstrate them during your presentation. Some things to do -
Evangelize component-based programming (a good read is Programming .NET Components by Juval Löwy). Advocate the DRY (Don't Repeat Yourself) principle of coding.
Set up a central common location in your repository for all your re-usable code libraries. This should hold the reference implementation of your re-usable code library.
Make it easy for people to use your code libraries by providing project templates for common scenarios with the code libraries already baked in. This way your colleagues will have a consistent template to work from. You can leverage the VS.NET project template capabilities to do this - check out the following links: VSX Project System (VS.NET 2008), Code Project article on creating Project Templates
Use a build automation tool like MSBuild (which is bundled in VS2005 and up) to copy over just the components needed for a particular project. Make this part of your build setup in the IDE (VS.NET 2005 and up have nifty ways to set up pre-compile and post-compile tasks using MSBuild)
I know there is resistance to open source solutions, but I would still recommend setting up and using a continuous integration system like CruiseControl.NET, so that you can leverage it to compile and test your projects on a regular basis from a central repository where the re-usable code library is maintained. This way any changes to the code library can be quickly checked to make sure they do not break anything. It also helps bring out version issues with the various projects.
If you can set this up on a machine and show it during your post-mortem as part of the steps that can be taken to improve, you should get better buy-in, since you are showing something already working that can be scaled up easily.
Hope this helps and best of luck with your evangelism :-)
I recently came across a set of frameworks called the Chuck Norris Frameworks - they are available on NuGet at http://nuget.org/packages/chucknorris . You should definitely check them out, as they have some nice templates for your ASP.NET projects. Also definitely check out NuGet itself.
Organize by topic; require unit tests (feature-level) for check-in/acceptance into the library; add a wiki to explain the what/why and to aid searching.
One question: You say this is a consulting group. What code assets do you have? I would think most of your teams' coding efforts would be owned by your clients as part of your work-for-hire contract. If you are going to do this you need to make absolutely certain that your contracts grant you rights to your employees' work.
Maven has solved code reuse in the Java community - you should go check it out.
I have a .NET developer who has devised something similar for our internal use with .NET assemblies. Because there's no comparable .NET Internet community, this tool will just access an internal repository in our corporate network; otherwise it will work much the way Maven does.
Maven could actually be used to manage .NET assemblies directly (we use it with our Flex .swf and .swc code modules); it's just that .NET folks would have to get over using a Java tool and would probably have to write a Maven plugin to drive MSBuild.
First of all, for code organization, check out the Microsoft Framework Design Guidelines at http://msdn.microsoft.com/en-us/library/ms229042.aspx, and then create a central location in source control for the new framework that you're going to create. Set up some default namespaces and assemblies for cleaner separation, and make sure everyone gets a daily build.
Just an additional point, since we have "shared code" in my shop as well.
We found out this is very much a packaging issue:
Whatever code you are producing or tool you are using, what you should have is a common build tool able to package your sources into a "delivery component", containing everything needed to actually execute the code, but also the documentation (compressed) and the source (compressed).
The main benefit of having such a "delivery package unit" is to have as few files to deploy as possible, in order to ease the download of those units.
The build process can very well be managed by Maven or any other (ant/nant) tool you want.
When an audit team wants to examine all our projects, we just deploy to their workstation the same packages we deploy on a production machine, except that they will uncompress the source files and do their work.
Since our source files also include whatever files are needed to compile them (for instance, Eclipse project files), they can even re-compile those projects in their own development environment.
That way:
Developers will not be forced to use this repository. The barrier to entry must be as low as possible to encourage participation, or it will be ignored: it is just a script to execute to get the "delivery module" with everything they need in it (a Maven repository can be used for that too)
We are a risk-averse, Microsoft shop: you can use any repository you want
Some of my fellow devs care about writing quality, reliable software, some don't: this has nothing to do with the quality of the code written in these packaged modules
We're better at writing processes than following them: the only process involved in this is the packaging process, and it can be fairly automated
Don't assume best practices: you are not forced to apply any kind of static code analysis before packaging executable and source files.