Why are Managed and Unmanaged solution files so different in MS CRM?

I created a solution in MS Dynamics CRM, then exported it as both a managed and an unmanaged solution. I decompressed both and ran a diff tool on the customizations.xml files. There are so many differences between them that it's hard to say exactly what was changed and why.
Are these changes crucial?
Can I create a managed solution just by changing the value in the <Managed> tag from 0 to 1? Would that be safe?

If you are shipping your solution to a customer, packaging a proper managed solution (with a proper publisher and version) is a very critical part.
If you go deeper into customization, you'll understand that changing the managed tag alone doesn't really make a solution managed in the proper sense.
By updating the Managed tag you can install/uninstall the solution in CRM, but the real differences appear once you use managed solutions properly: controlling at the field level which fields are and aren't customizable, so that other solutions imported afterwards don't break your customizations.

There is extremely poor documentation surrounding the specifics of the differences between managed and unmanaged solutions. In my experience, a managed solution requires a lot more metadata. For example, if an entity is managed, the solution has to include metadata saying whether you can add additional fields to the entity, update the form, and so on.
As for whether these changes are crucial: one would assume so, since they define all of the metadata required to describe a managed solution.
Is changing the tag from 0 to 1 possible? Yes. Is it supported, and will it work? No. There are lots of undocumented differences in the XML between a managed and an unmanaged solution. Just changing the Managed tag will, in the best case, bomb on import, and at worst corrupt your CRM solution environment.
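For context, the flag the question refers to lives in the solution manifest. A minimal sketch of the relevant fragment, with element names as they appear in a typical exported solution.xml and everything else omitted:

    <ImportExportXml>
      <SolutionManifest>
        <UniqueName>MySolution</UniqueName>
        <!-- 0 = unmanaged, 1 = managed; flipping this alone is unsupported -->
        <Managed>0</Managed>
      </SolutionManifest>
    </ImportExportXml>

The answers above boil down to this: that flag is only one of many differences, and a genuine managed export also carries managed-property metadata that hand-editing this value does not produce.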

Best practices for a MicroStrategy workflow

We are a team of 5 people working with MicroStrategy. We share every role, but we have no workflow.
Everybody may build or change attributes and change the schema. This often leads to broken reports. Furthermore, there is no good documentation. We tried to establish documentation in SharePoint, but there we also had no workflow.
Originally, we had an old project where for every report all the attributes were constructed from scratch, so we did not reuse any existing schema objects.
Hence, we started a new project. We realized that, due to a lack of understanding and a lack of workflow, we made (and still make) a lot of mistakes. We slowly feel that we understand things better (e.g. parent-child relationships), but the workflow is still horrible.
We have a development project and a live project, but with the way we are working now, we have a lot of problems. In particular, the missing version control system is killing us. We perform changes and forget what we did. Hence, we have to fall back on backups, destroying useful work from a given day.
So what are best practices to:
* deploy new attributes, facts and reports
* ensure that old reports work after constructing new attributes and facts
* improve documentation
* handle attributes defined on fact tables and parent-child relationships
Any help is appreciated
MicroStrategy development in a team environment, deploying from development to live, can be very challenging. As you rightly point out, the lack of version control, and unknown interdependencies between objects can cause untold problems. There's no one right answer to this question, but I would suggest the following:
Use all the tools provided by MicroStrategy. When you're deploying from one project to another, don't just drag and drop in Object Manager, create a package. When you deploy that package, make sure you choose to create an undo package, so you can rollback changes if you encounter any problems.
Try to catch these problems in advance: running Integrity Manager before and after a deployment, even if it's just to generate SQL for the reports, will point out if you've broken anything. On that note:
Create a third environment/project. Call this test/release control, whatever you prefer. Here you can test packages created in Object Manager, to ensure that they have the desired effect, and don't break anything. In effect, this is a dry run for your deployment to live. This environment should be regularly refreshed from live (via project duplication), to make sure it doesn't get in an unexpected state (as the result of a broken Object Manager package import for example).
Over and above that, I can only offer organisational advice. It's not uncommon for one person to take responsibility for schema objects (i.e. facts, attributes, transformations) so that developers don't undo each other's changes. If you have a large project, these objects could be split into functional areas, and individuals assigned.
Documentation is always tricky, but I like to put as much as possible into the object descriptions. This has the advantage of being visible in the Web interface (via tooltips) and included in the automated project documentation, should you choose to generate that. There is also the change-log functionality for each object, but in my experience those logs soon go unfilled by developers, as saving happens too frequently. Still, if you can get people to populate them, you'll have a head start on understanding the changes in your project.
To summarise:
* Use Object Manager packages to deploy changes
* Test changes with Integrity Manager, to catch any issues as early as possible
* Use a release control project/environment, so you're not catching issues in your production environment
* Assign responsibility for schema objects to a specific person or persons where possible.

Sails.js REST server based on jsonapi.org specification

I need to develop a REST server that strictly follows the jsonapi.org specification, and I'm not sure whether a complete solution for this already exists, or even whether it's easy to develop such a thing.
I've found sails-hook-jsonapi, but it looks like it has been unmaintained for some time.
I'm new to Sails and not aware of all its features, so I would appreciate any help; I may have missed something obvious.
I have needed this too. There isn't anything that fully works with Sails yet. sails-hook-jsonapi does not work correctly. I forked that code and am maintaining my own version of it, but there are still significant attribute-serialization issues with multiple records. However, it does work at a basic level. I am also working on a new project, sails-generate-jsonapi-blueprints, but it is not nearly ready yet.
Sails is great but can be a royal PIA. The guys maintaining Sails have had many requests for jsonapi.org support, but I do not believe that will happen anytime in the near future. If you REALLY must have the jsonapi.org format, I would suggest Loopback or some other framework that already supports it out of the box.
Actually, I take part of that back. sails-hook-jsonapi is working. I made a little change in the fork I maintain: https://github.com/NikkiDreams/sails-hook-jsonapi. I believe Ian is maintaining the original project too: https://github.com/IanVS/sails-hook-jsonapi
The catch with the hook is that it hijacks every single request sent to responses/ok.js. If you need something like an authorizer that does not need JSON API output, create a variant of ok.js that simply does a res.json(data) without the jsonapi-serializer being called when serializing the response, as in the sketch below.
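A minimal sketch of such a variant, using Sails' custom-responses convention (a file dropped into api/responses/ becomes callable as res.<name>()); the name okPlain is hypothetical:

    // api/responses/okPlain.js - hypothetical variant of ok.js.
    // Sends plain JSON without invoking jsonapi-serializer.
    module.exports = function okPlain(data) {
      // Sails binds the current request/response to `this` in custom responses.
      var res = this.res;
      res.status(200);
      return res.json(data);
    };

A controller action that shouldn't emit JSON API output can then return res.okPlain(result), while everything else keeps flowing through the hook's ok.js.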
sails-hook-jsonapi will serialize most of your data to your needs, but it still has a few limitations. Depending on the complexity of your queries, these may not be an issue.
TODOs:
* Included request parameter handling (400 response if present)
* Links
  * Top-level "self" links
  * Top-level "related" links
  * Resource-level "self" links
  * Related resource relationship links
  * Metadata links
* Pagination
* Formatting
  * Non-dasherized attributes
* Sparse fieldsets
Long story short: there is no way to do this out of the box with only a little time investment, at least for now.
But sails-hook-jsonapi looks like a good starting point, and the repository seems to be active now.
I built my project prototype on the loopback.io framework, because I was in a hurry and LoopBack had better JSON API support.

Epicor Newbie looking for direction

I am an Epicor and Crystal Reports newbie. I started working with these programs a month ago, when I was hired. I am still trying to figure out how you know whether you should be customizing a BAQ, a dashboard, etc., and where/when to make a new BOM report and such. If anyone out there has some tips, I would greatly appreciate it. I feel slightly intimidated by the program, but I am also determined to learn my way through it.
Thanks!
Toohey! Welcome to the world of Epicor!
Although I'm sure in the past couple of months you have learned the ropes, here are some extra tips to keep you moving forward:
That is not part of the system functionality
In order to keep costs under control, err on the side of not making system customizations to meet all user requests. You will quickly see that adding a quick field as a customization to a form isn't just the 5-minute change it seems like. You will soon be creating several custom reports and dashboards to report off of this field, and in many situations the cost of the change soon outweighs the benefit. As you become more familiar with this, try to balance ROI against the high cost of Epicor system customizations. It is best to lead with "that is not part of the system functionality", and when they push the issue, treat even small changes as controlled projects.
BAQ and Report Changes
Inevitably, you will need to customize the system's BAQs and Reports to meet your business needs because the standard system isn't designed exactly for your business.
Epicor has standard BAQs that start with 'z' and many standard reports. You should avoid editing the stock BAQs and reports, because they will be overwritten with each patch of Epicor. Instead, copy the standard distribution BAQs and rename the copies using your company initials as a prefix. Similarly, create a custom reports folder separate from (or within) the standard reports folder, where you place all of your modified reports. You can then link the menu to the BAQ Report or Report Data Definition, and point the report style at the location of your new custom report on the server.
Customizations
Maintenance of customizations has a high long-term cost if you do not have in-house developers. A critical piece of advice here is to make sure all of the code, be it C# or VB, is thoroughly commented. Even if you're generating code with a wizard, do yourself a favor and put a standard header into the script of every customization (see the sketch below) that records the date the customization was created, when it was modified, and everything that was changed, especially if the change was a property change or a field addition that does not clearly appear in the script. Customizations have been known to fail for unexplained reasons, or to produce bad script that is not editable through the standard Epicor interface, and there may come a time when you have to rebuild a customization from scratch using only this change log and what you can clearly see in the form.
You should save your customizations with an obvious standard naming convention (something like ORDER_ENTRY_CSR_YYMMDD), and make sure you update all menus to reflect the newest customization for the purpose you're using it. We also export our customizations for archival, just in case something should happen. Note that if you do not increment the customization name on a change and then update the menu items, users will still use locally cached versions of the page until they clear their client cache, so I always recommend incrementing. Finally, do yourself a favor and export every custom exportable object in Epicor to either a source control system or a file repository, so that after you deploy a faulty customization, rolling back to the previous version is quick and painless.
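As a sketch of what such a header might look like (the layout, field names, and values here are only a suggestion; in VB use ' comments instead of //):

    // ============================================================
    // Customization : ORDER_ENTRY_CSR_150715
    // Created       : 2015-06-01 by JDOE
    // Last modified : 2015-07-15 by JDOE
    // Changes       : Added ShipVia combo to the header panel.
    //                 Set OrderHed.PONum read-only (property
    //                 change, not visible in this script).
    // ============================================================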
BPM Directives
As you're probably aware by now, BPM directives are powerful tools which can be used to update tables and prevent users from making terrible business decisions. A note on these is similar to customizations - comment comment comment!
Consultant Use
If you are using external consultants to create BPMs or Customizations, mandate distribution of commented source code that can be understood internally by one of your team members.
I hope this helps!
Source: 4 yrs experience as an Epicor ERP programmer
I would like to add that you should develop any customization, BPM, or BAQ/dashboard in the test system, because any error in a solution can stop users from performing their jobs. Also, you can use a powerful tool called tracing options that helps you recognize where to place BPM directives. Furthermore, there is a huge Epicor forum where you can post questions, and a community of consultants, developers, and users will answer them and advise you about Epicor best practices; it is completely free. You need to register on it; this is the link: www.e10help.com.

SDLC: Managing changes in a 'Closed System' (M1 - ERP)

I am working with a client who has an ERP system in place, called M1, that they are looking to make custom changes to.
I have spent a little bit of time investigating the ERP system in terms of making customizations. Here is a list of what I have found with regards to custom changes:
Custom changes cannot be exported/imported. There is an option for this in the M1 Design Studio; however, it always appears to be disabled... I tried everything, and I couldn't find any mention of it in the help documentation.
You can export a customizations change log (CSV, XML, Excel, HTML) that provides type, name, location and description. In essence, it is a read-only document that provides a list of changes you made. You cannot modify the contents of this log.
Custom form changes go into effect for all data sources (Test, Stage, LIVE). In other words, there does not appear to be any way to limit the scope of a form change.
Custom field changes must be made in each data source (Test, Stage, LIVE). What's odd here is that if you add a field in Test, adjust a grid to display it, and subsequently switch to LIVE, it detects that the field doesn't exist and undoes the grid changes.
I'm unable to find documentation indicating that this application supports version control.
sigh
....
So...
How do I manage changes from an SDLC: ALM methodology and tools standpoint?
I could start by bringing in a change request system to manage pending and completed customizations. But then what? How should changes be managed and released? Put backups of the application under source control and deploy them when needed?
There might not be a good answer to this question since I'm unable to take advantage of version control and create a separation of environments, but I figured I'd ask in case anybody has had similar experience or worked with M1.
I take it from the lack of answers in two months that your question is unanswerable. SDLC is something you could write a textbook on, or read a textbook on, and still not know enough about your environment; about all we can say is that "SDLC" was probably a bullet point in the hiring qualifications at your shop.
I have no experience with M1, but I am assuming that you're going to have to ask your peers at work for their ideas, because it sounds like you're asking a vertically closed (your shop, your tools, your practices) question that has no exact technical answer.
As for best practices: I suggest you investigate best practices outside your M1 ERP silo and apply them where they make sense to you.
The company I work for also uses M1 ERP. We have similar issues regarding version control of the customisations. From what I can tell, all customisations are stored in the M1DD database. You could back up a copy of this database before any major development work as a basic revision control system.
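If M1DD is a SQL Server database (as it typically is in an M1 install), a basic pre-development snapshot could be as simple as the following; the file path and backup name here are assumptions for illustration:

    -- Hypothetical pre-development snapshot of the M1 customization database.
    BACKUP DATABASE M1DD
        TO DISK = N'D:\Backups\M1DD_pre_dev.bak'
        WITH INIT, NAME = N'M1DD pre-development snapshot';

Restoring that backup is then your rollback path if a customization goes wrong.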
I am familiar with the issue of all changes becoming immediately active in all datasets. This is particularly annoying when you are making changes to a commonly used module, as you don't know how live data will be affected during the development process. One technique I have found useful is to surround untested code with an if statement so that it is only executed when I am logged in:
If App.UserID = "MYUSERNAME" Then
    ' New, untested code goes here; it only runs for this login.
End If
I would be interested in hearing how you solved this problem.

Speccing out new features

I am curious as to how other development teams spec out new features. The team I have just moved up to lead has no real specification process. I have just implemented a proper development process with CI, automated deployment, and bug tracking in Trac, and I am now moving on to dealing with changes.
I have a list of about 20 changes to our product to be done over the next 2 months. Normally I would just spec out each change in detail, but I am curious how other teams handle this. Any suggestions?
I think we had a successful approach in my last job as we delivered the project on time and with only a couple of issues found in production. However, there were only 3 people working on the product, so I'm not entirely sure how it would scale to larger teams.
We wrote specs upfront for the whole product but without going into too much detail and with an emphasis on the UI. This was a means for us to get a feel for what had to be done and for the scope of the project.
When we started implementing things, we had to work everything out in a lot more detail (and inevitably had to do some things differently from the spec). To that end, we got together and worked out the best approach to implementing each feature (sometimes with prototypes). We didn't update the original spec but we did make notes after the meetings as it's very easy to forget the details afterwards.
So in summary, my approach is to treat specs as an exploratory tool and to work out finer details during implementation. Depending on the project, it may also be a good idea to keep the original spec up to date as the application evolves (which we didn't need to do this time).
Good question, but it can be subjective. I guess it depends on the strategy of the product (whether it's to be deployed to multiple clients in the same way, or to a single client as a bespoke project), the impact and dependencies these changes have on the system and on each other, and the priority with which these changes need to be made.
I would look at the priority and the dependencies; that will naturally start grouping things.