Epicor Newbie looking for direction - crystal-reports

I am an Epicor and Crystal Reports newbie. I started working with these programs a month ago, when I was hired. I am still trying to figure out how to tell whether I should be customizing a BAQ, a dashboard, etc., and where and when to make a new BOM report and such. If anyone out there has some tips, I would greatly appreciate it. I feel slightly intimidated by the program but am also determined to learn my way through it.
Thanks!

Toohey! Welcome to the world of Epicor!
Although I'm sure in the past couple of months you have learned the ropes, here are some extra tips to keep you moving forward:
That is not part of the system functionality
In order to keep costs under control, err on the side of not making system customizations to meet every user request. You will quickly see that adding a quick field to a form isn't the five-minute change it seems: you will soon be creating several custom reports and dashboards to report on that field, and the cost of the change often outweighs the benefit. As you become more familiar with this, try to balance ROI against the high cost of Epicor system customizations. It is best to lead with "that is not part of the system functionality", and when users push the issue, treat even small changes as controlled projects.
BAQ and Report Changes
Inevitably, you will need to customize the system's BAQs and Reports to meet your business needs because the standard system isn't designed exactly for your business.
Epicor ships standard BAQs that start with 'z', along with many stock reports. Avoid editing the stock BAQs and reports, because they will be overwritten with each Epicor patch. Instead, copy the standard distribution BAQs and rename the copies using your company initials as a prefix. Similarly, create a custom reports folder separate from (or within) the standard reports folder where you place all of your modified reports. You can then link the menu to the BAQ Report or Report Data Definition, and link the report style to the location of your new custom report on the server.
Customizations
Maintenance of customizations has a high long-term cost if you do not have in-house developers. A critical piece of advice here is to make sure all of the code, be it in C# or VB, is thoroughly commented. Even if you're generating code with a wizard, do yourself a favor and put a standard header into the script of every customization that records the date the customization was created, when it was modified, and everything that was changed (especially if the change was a property change or a field addition that does not clearly appear in the script). Customizations have been known to fail for unexplained reasons, or to produce bad script that is not editable through the standard Epicor interface, and there may come a time when you have to rebuild the customization from scratch using only this change log and what you can clearly see in the form.

Save your customizations with an obvious standard naming convention (something like ORDER_ENTRY_CSR_YYMMDD), and make sure you update all menus to point at the newest customization for each purpose. We also export our customizations for archival, just in case something should happen. Note that if you do not increment the customization name on a change and then update the menu items, users will keep getting locally cached versions of the page until they clear their client cache, so I always recommend incrementing. Finally, export every custom exportable object in Epicor to either a source control system or a file repository, so that when you deploy a faulty customization, rolling back to the previous version is quick and painless.
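For illustration, here is the sort of header I mean (C# comment syntax; the customization name, dates, and initials are all invented):

// =========================================================
// Customization: ORDER_ENTRY_CSR_150310   (example name)
// Form:          Sales Order Entry
// Created:       2015-01-12  by JD
// ---------------------------------------------------------
// 2015-03-10  JD  Added PO-number field to the header panel.
//                 Also set txtShipDate.ReadOnly = true - a
//                 property change that does not appear in
//                 the generated script, hence this note.
// 2015-01-12  JD  Initial customization.
// =========================================================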
BPM Directives
As you're probably aware by now, BPM directives are powerful tools that can be used to update tables and prevent users from making terrible business decisions. The note here is similar to customizations: comment, comment, comment!
Consultant Use
If you are using external consultants to create BPMs or Customizations, mandate distribution of commented source code that can be understood internally by one of your team members.
I hope this helps!
Source: 4 yrs experience as an Epicor ERP programmer

I would like to add that you should develop any customization, BPM, or BAQ/dashboard in the test system, because an error in a solution can stop users from performing their jobs. Also, you can use a powerful tool called tracing options that helps you recognize where to place BPM directives. Furthermore, there is a huge Epicor forum where you can post questions, and a community of consultants, developers, and users will answer them and advise you about best Epicor practices, completely free. You need to register on it; this is the link: www.e10help.com.


How to implement continuous migration for large website?

I am working on a website of 3,000+ pages that is updated on a daily basis. It's already built on an open source CMS. However, we cannot simply continue to apply hot fixes on a regular basis. We need to replace the entire system, and I anticipate needing to do so again every 1-2 years. We don't have the staff to work on a replacement system while the current one is being maintained, as it results in duplicate effort. We also cannot have a "code freeze" while we work on the new site.
So, this amounts to changing the tire while driving. Or fixing the wings while flying. Or all sorts of analogies.
This brings me to a concept called "continuous migration." I read this article here: https://www.acquia.com/blog/dont-wait-migrate-drupal-continuous-migration
The writer's suggestion is to use a CDN like Fastly. The idea is that a CDN allows you to switch between a legacy system and a new system on a URL basis. In theory, this sounds like an approach that would work. The article claims that you can do this with Varnish, but that Fastly makes the job easier. I don't work much with Varnish, so I can't really verify the claims.
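If I understand the concept, the routing rule at the edge amounts to something like this sketch (C# for illustration only; a real setup would express this in VCL or Fastly configuration, and the path prefixes and host names here are invented):

using System;
using System.Linq;

class EdgeRouter
{
    // Sections already migrated to the new CMS (invented for the example).
    static readonly string[] MigratedPrefixes = { "/news", "/about", "/products" };

    static string BackendFor(string path) =>
        MigratedPrefixes.Any(p => path.StartsWith(p, StringComparison.OrdinalIgnoreCase))
            ? "https://new-cms.internal"     // migrated: serve from the new system
            : "https://legacy-cms.internal"; // everything else stays on the legacy CMS

    static void Main()
    {
        Console.WriteLine(BackendFor("/news/2024/site-launch")); // new system
        Console.WriteLine(BackendFor("/archive/old-page"));      // legacy system
    }
}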
I also don't know if this is a good idea or whether there are better alternatives. I looked at Fastly's pricing scheme, and I simply cannot translate it into a specific price point; these cloud-service pricing plans are cryptic to me. I also don't know what kind of bandwidth the website uses, since another agency manages the website's servers.
Can someone help me understand whether using a hosted CDN would be better than using something like Varnish? Are there free or cheaper solutions? Can someone tell me what this amounts to, approximately, on a monthly or annual basis? Are there other, better ways to roll out a new website on a phased basis for a large site?
Thanks!
I don't think I have the exact answers to your question, but maybe my answer helps a little bit.
I don't think the CDN itself gives you the advantage; the advantage comes from having more than one system.
Changes to the code
In professional environments I'm used to having three different CMS installations. The first is the development system, usually on my PC. That system is used to develop extensions, fix bugs, and so on, supported by unit tests. The code is committed to a revision control system (like SVN, CVS, or Git). A continuous integration system checks the commits to the RCS. When a feature is implemented (or some bugs are fixed), a named tag is created. This tagged version is then installed on a test system where developers, customers, and users can test the implementation. After a successful test, exactly this tagged version is installed on the production system.
At first sight this looks time consuming. But it isn't, because most of the steps can be automated. And the biggest advantage is that the customer can test the change on a test system, making it very unlikely that an error occurs only on your production system. (A precondition is that your systems are built on a similar/equal environment.)
Changes to the content
If your code changes the way your content is processed, it is an advantage when your CMS has strong workflow support. Then you can easily add a step to your workflow which decides whether the content is old and has to be migrated for the current document. This way you get a continuous migration of the content.
HTH
Varnish is a cache rather than a CDN. It intercepts page requests and delivers a cached version if one exists.
A CDN will serve up contents (images, JS, other resources etc) from an off-server location, typically in the cloud.
Cloud-based solution pricing is often very cryptic because it's quite complicated technology.
I would be careful with continuous migration. I've done both methods in the past (continuous and full migrations) and I have to say, continuous is a pain. It means double the admin time for everything, and assumes your requirements are the same at all points in time.
Unfortunately, I would say you're better off with a proper rebuild on a 1-2 year basis than a continuous migration, but obviously you know your situation best.
I would suggest you maybe also consider a hybrid approach? Build yourself an export tool to keep all of your content in a transferable state like CSV/XML/JSON, so you can just import it into a new system when ready. This means you can incorporate new build requests when you need them in a new system (what's the point of a new system if it does exactly the same as the old one?) and you get to keep all your content. Plus you don't need to build and maintain two CMSes all the time.
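As a rough illustration, the exporter can be very simple; the Page shape and field names below are invented, so map them to however your CMS actually stores content:

using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

// Hypothetical content record - adapt the fields to your CMS schema.
record Page(string Path, string Title, string Body, DateTime Updated);

class ContentExporter
{
    static void Main()
    {
        // In practice this list would be read from the CMS database.
        var pages = new List<Page>
        {
            new("/about", "About us", "<p>Example body</p>", DateTime.UtcNow)
        };

        // JSON keeps content in a transferable, CMS-neutral state for later import.
        var json = JsonSerializer.Serialize(pages, new JsonSerializerOptions { WriteIndented = true });
        File.WriteAllText("content-export.json", json);
    }
}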

PLC Version Control

I need to come up with a CM process for PLC code.
Currently, the system is developed using RSLogix 5000. The build product is a monolithic file that can be loaded onto a PLC for execution and edited directly in the development environment. With multiple developers, this has become a problem: they're stepping on each other's changes.
As an analogy, it's as if, when doing Java development, the only way to edit and save the source were to load a *.jar file into your IDE, make the change, and then save it back to the jar file. This is less than ideal.
How can I coordinate changes between multiple developers working with PLC's?
If we are talking about one big binary file, then a VCS (centralized or decentralized) is not the best tool for the job.
An external repository (a shared disk, for instance) where a batch job copies and labels the current PLC state is better.
See "Tracking Software History"
To avert discontinuities in the historical record of revisions, old versions of programs must be stored.
“We take it a step further, though. Using our MDT AutoSave, we actually go out and interrogate the equipment. Overnight or at whatever frequency is specified, the software reads the programs in the PLCs and then compares that information to the last known program. The version-control software will copy the new program and store it and [then] compare it to the last one.
Launching version control is fairly simple. Required is software installation and then hardware configuration. “You would need a server and a couple of weeks of engineering and you’re good to go,” Perysyn says. However, his company uses a “shrink-wrap approach” that involves installing the software and then customization by users filling in the blanks.
That being said, when you have multiple changes from multiple developers, you need an integration environment where a first delivery can be done and validated, before pushing it to the actual server.
See also this post.
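To make the copy-and-label batch concrete, here is a minimal C# sketch; the share paths are invented, and it only archives when the exported program file actually changed (detected by hash), which also covers the overnight-compare idea quoted above:

using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;

class PlcArchiver
{
    static void Main()
    {
        // Invented paths: the latest upload from the PLC and the archive folder.
        const string current = @"\\plc-share\line1\program.acd";
        const string archive = @"\\plc-share\line1\archive";
        Directory.CreateDirectory(archive);

        // Skip the copy when nothing changed since the newest archived snapshot.
        string currentHash = Hash(current);
        string last = Directory.GetFiles(archive).OrderByDescending(f => f).FirstOrDefault();
        if (last != null && Hash(last) == currentHash)
            return;

        // Label the copy with a sortable timestamp, e.g. program_20150310-0300.acd.
        string stamp = DateTime.Now.ToString("yyyyMMdd-HHmm");
        File.Copy(current, Path.Combine(archive, $"program_{stamp}.acd"));
    }

    static string Hash(string path)
    {
        using var sha = SHA256.Create();
        using var stream = File.OpenRead(path);
        return Convert.ToHexString(sha.ComputeHash(stream));
    }
}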
I use Unity Pro, so this may not apply for other brands.
Unity can export an "archive" file which is XML which describes the PLC program and IO setup in its entirety. After commissioning changes, I create an export and check it in to my local Git repo. This gets me an annotated history of changes, but no visual comparison. I can always use UnityDiff for comparison.
Check out http://www.mdtsoft.com/ also
You need specialized versioning system for PLCs like VersionDog.
From the manufacturer:
"Special support with Smart Compares for SIMATIC S5, SIMATIC S7,
SIMATIC PCS 7, WinCC, WinCC flexible, InTouch, CoDeSys, TwinCAT,
Phoenix PC WORX, RSLogix, Schneider Modsoft, Schneider Concept,
Schneider Unity, SINUMERIK 840D, Bosch IndraWorks and more. Also robot
programs from ABB and Kuka and office related data formats like
Microsoft Word, Microsoft Excel and Adobe PDF are perfectly supported
by versiondog.
Update: Here is a screenshot showing a ladder version compare. I guess that's what most PLC folks are interested in. We also use it to schedule an e-mail report if the PLC's offline and online application versions do not match, as an alarm that something has been changed in the PLC but not put into the version control server.
About RSLogix5000 specifically, I have seen developers use an emulated PLC and make their changes online. The final product once developed is then put together with all the comments (as they are not contained in the PLC) and then commissioned. There are issues with changes that cannot be done online, such as AOIs. There are tools in place to stop two people editing the same logic online at once and to take ownership of sections. Backups can be done in the form of uploads, but there isn't any way to track changes.
It is a messy problem, and messier still when you are maintaining a system, because you want an .ACD that you can go online with; unless you are somehow doing a diff with the RSLogix compare tool, you just see unreadable machine code like "+|Éû³´¬ÙÆW×晵‚>Ù,".
The most common revision control I have seen (sadly) is just saving the latest file, then taking a copy and adding the current date to the file name, as the recommended control.com post described.
RSLogix5000 has always prohibited multiple users from opening and editing the same .ACD simultaneously. However, if multiple users have identical .ACD files, open them, and all connect to the same target controller, they can each edit on the controller simultaneously, as long as they are working in different routines. Each other's edits appear automatically if they look at another programmer's routine.
Note that working online like this is usually done with the PLC running, sometimes even with the target system (some kind of machine) operating. This kind of arrangement is used to complete work faster, or in some cases because the system is huge. No one develops like this; it is really a debug tool and impractical for significant changes.
If one programmer finishes, and another is not done, the unfinished work of the other will be saved to the first programmer's .ACD when they save. Whoever saves last will have everyone's work.
Like others have mentioned in this thread, using file date is fairly reasonable. Some companies use a version control variable that is usually displayed on a connected HMI. Other companies use a separate document that documents who and what changes. Sometimes version notes are placed in a lengthy rung comment in the main routine.
My company uses a separate change log, and dated archive copies are maintained. Multiple programmers are only used in the most extreme cases. Someone is always designated to maintain the offline file integrity, usually the person who will be working the longest, or the project manager.
It is important to note that rung comments are not carried from one user to another before RSLogix5000 v21 because previous versions didn't store comments on the controller.
All this said, you might be trying to manage offline development. I haven't seen any sophisticated methods for this. Usually programmers write the needed routines separately, and a project manager will assemble them into a single project. The cleanest approach I've seen is where a project manager will create an architecture with global functionality, and assign routine work to others, giving them a copy of the .ACD to work with. They return the .ACD with changes, and the project manager copies and pastes their routines into the "master" project.
This is a very good question and it really depends on what you want it to do.
If you are only using Rockwell equipment, it might be helpful to look at their solution; I think it's called FactoryTalk AssetCentre.
Currently I am looking into using Bazaar from Canonical.
One thing that VonC pointed out is that a piece of software that can interrogate the PLC is a definite plus; not a must in my opinion, but it sure as hell helps.
Am I reading your question correctly that you have multiple developers working on the same PLC code at the same time? It's a scary thought, but I know it sometimes needs to happen. Siemens PLCs are a bit easier to program with multiple developers, but I would assign one person to consolidate and test all the changes before committing to the PLC. Any VCS will let you create branches for every developer, but how you get them to consolidate their changes is the million dollar question.
Bart.
A simple thing to do would be to run a text diff on the .l5k files, so you can easily see whether a developer has been messing with a part of the file that is outside their scope.
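A naive version of that diff is only a few lines of C#; it just reports lines present in one export but not the other, which is usually enough to spot out-of-scope edits:

using System;
using System.IO;
using System.Linq;

class L5kDiff
{
    static void Main(string[] args)
    {
        // Usage: L5kDiff before.l5k after.l5k
        var before = File.ReadAllLines(args[0]);
        var after = File.ReadAllLines(args[1]);

        // Set difference loses ordering and duplicate rungs,
        // but it flags which tags/rungs were touched.
        foreach (var line in before.Except(after))
            Console.WriteLine("- " + line.Trim());
        foreach (var line in after.Except(before))
            Console.WriteLine("+ " + line.Trim());
    }
}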
I saw this question just now from a link at Stack Exchange: Are There Realistic/Useful Solutions for Source Control for Ladder Logic Programs. Rather than have a link-only answer, I'll dupe my answer here:
There is actually a canned solution - from GE-IP of all places. Check out Proficy Change Management. This product does version control from a PLC control systems point of view, rather than a pure version control of files point of view - it works as a layer sitting on top of a VCS (the scary part is that originally this VCS was Visual SourceSafe) and handles rights management, reporting and checkout/checkin.
While the product is from GE-IP, it is designed to support a variety of PLC and HMI systems out of the box.
Full disclosure: I used to work for a company selling and installing PCM (but that was 7 years ago). So if you ask me what it was like back then, I'm likely to tell you where it all went wrong!
In my company we just started a trial with Copia.io
Check it out. Our first tests look very promising!
It brings branching, merging, ladder diff, etc. for multiple PLC platforms (Rockwell, Siemens, CoDeSys).
PS: I work for a company that builds machines; we were looking for versiondog-like solutions with a bit more power in collaboration and diffing capabilities. I have used tools like Mercurial, Git, and Tortoise in past companies (though not for PLC).

SDLC: Managing changes in a 'Closed System' (M1 - ERP)

I am working with a client who has an ERP system in place, called M1, that they are looking to make custom changes to.
I have spent a little bit of time investigating the ERP system in terms of making customizations. Here is a list of what I have found with regards to custom changes:
Custom changes cannot be exported/imported. There are options for this in the M1 Design Studio; however, they always appear to be disabled. I tried everything and couldn't find any mention of them in the help documentation.
You can export a customizations change log (CSV, XML, Excel, HTML) that provides type, name, location and description. In essence, it is a read-only document that provides a list of changes you made. You cannot modify the contents of this log.
Custom form changes go into effect for all data sources (Test, Stage, LIVE). In other words, there does not appear to be any way to limit the scope of a form change.
Custom field changes must be made in each data source (Test, Stage, LIVE). What's odd here is that if you add a field in Test, adjust a grid to display it, and subsequently switch to LIVE, it detects that the field doesn't exist and negates the grid changes.
I'm unable to find documentation indicating that this application supports version control.
sigh
....
So...
How do I manage changes from an SDLC: ALM methodology and tools standpoint?
I could start by bringing in a change request system to manage pending and completed customizations. But then what? How should changes be managed and released? Put backups of the application under source control and deploy when needed?
There might not be a good answer to this question since I'm unable to take advantage of version control and create a separation of environments, but I figured I'd ask in case anybody has had similar experience or worked with M1.
I take it from the lack of answers in two months that your question is unanswerable. SDLC is something you could write a textbook on, or read a textbook on, and still not know enough about your environment; about all we can tell is that "SDLC" was probably a bullet point on the hiring qualifications at your shop.
I have no experience with M1, but I am assuming that you're going to have to ask your peers at work for their ideas, because it sounds like you're asking a vertically closed (your shop, your tools, your practices) question that has no exact technical answer.
As for best practices, I suggest you investigate best practices outside your M1 ERP silo and apply them as they make sense to you.
The company I work for also uses the M1 ERP. We have similar issues regarding version control of the customisations. From what I can tell, all customisations are stored in the M1DD database. You could back up a copy of this database before any major development work as a basic revision control system.
I am familiar with the issue of all changes becoming immediately active in all datasets. This is particularly annoying when you are making changes to a commonly used module, as you don't know how live data will be affected during the development process. One technique I have found useful is to surround untested code with an if statement so it is only executed when I am logged in:
If App.UserID = "MYUSERNAME" Then
    'new code here - runs only for the named developer,
    'so live users are unaffected until it is tested
End If
I would be interested in hearing how you solved this problem.

Will major config changes discourage users from deploying code?

I'm beginning development on a solution that will plug into an existing application. It will be made available for public use.
I have the option of using a newer technology that promotes better architecture, flexibility, speed, etc., or sticking with the tried-and-tested technology the application already uses.
The downside of going with the newer technology is that a major change to an essential config file needs to be made to support it. If the change goes wrong the app would be out of service. Uninstall is also an issue as future custom code by other developers may require the newer tech and there's no way this can be determined.
How important is this issue in considering an approach?
Will significant config changes put users off deploying code, or cause problems for them later?
Edit:
Intentionally not going into specifics about the technologies here, to avoid the question being derailed.
An installer/uninstaller can be provided, but there is some complexity involved that may cause it to foul up on edge cases, resulting in a dead app. (A backup of the original config would be a way to mitigate that.) Also see the issue above where I essentially can't provide an uninstall.
Yes, in my experience, any large amount of work will make users think twice about deploying or upgrading.
It's your standard cost/benefit analysis done by businesses with just about every decision. Will the expected benefits more than outweigh the potential costs?
When we release updates to our software, there's almost always a major component that's there just to assist the users to migrate.
An example (modified enough to protect the guilty): we have a product which generates reports on system performance and other things. But the reports aren't that pretty and the software for viewing them is tied to a specific platform.
We've leveraged BIRT to give us intranet-based reporting that looks much nicer and only needs the client to have a web browser (not some fat client).
Very few customers made the switch until we provided a toolset that would take their standard reports and turn them into BIRT reports. Once we supplied that, customers started taking it seriously - the benefit hadn't changed, but the cost had gone right down.
You've given us no detail, so we can't answer with any specificity. But if your question is, will a significant portion of your potential user base be deterred from using your product if they have to do significant setup work, then the answer is yes. I've seen this time and time again, with my own products and those that I've installed myself, even when the only config change is an uninstall and reinstall. People don't like to do work.
You may want to devote more effort than you've considered so far to making the upgrade painless. Even if you're upgrading someone else's framework, you may find the effort worthwhile and reflected in an increased number of installs.
I have noticed that "power users" - developers, sysadmins, etc. - are willing to put up with more setup work.
I'm not sure what you mean by "major config change", but if you're talking about settings / configuration files, then I've been doing something like this:
An application always contains a default configuration which is useful for most users, and which can't be replaced. Instead, users can override one or more of the default settings in their own, separate configuration file. When a new (major) version is released, most users don't need to reconfigure anything: their own custom configurations are still taken from their own configuration file, and possibly required new parameters are taken from the new release's default settings.
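A minimal sketch of that layering (the file names and the key=value format are invented; real applications often keep this in XML or app.config):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

class LayeredConfig
{
    // Parse simple "key=value" lines; returns empty settings if the file is absent.
    static Dictionary<string, string> Load(string path) =>
        !File.Exists(path)
            ? new Dictionary<string, string>()
            : File.ReadAllLines(path)
                  .Where(line => line.Contains('='))
                  .Select(line => line.Split('=', 2))
                  .ToDictionary(parts => parts[0].Trim(), parts => parts[1].Trim());

    static void Main()
    {
        var settings = Load("default.settings");    // shipped with the app, never edited
        foreach (var pair in Load("user.settings")) // user file holds overrides only
            settings[pair.Key] = pair.Value;

        // A new release updates default.settings; user overrides survive untouched.
        Console.WriteLine(settings.TryGetValue("Theme", out var theme) ? theme : "(unset)");
    }
}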
It's obvious that most users don't want to waste their time adjusting settings that were already right, and quite rightfully so.

What should I propose for a reusable code library organization?

My organization has begun slowly repurposing itself from a product-oriented business model to a more contract-oriented one over the last year or two. During the past year, I was shifted into the new contracting business to help put out fires and fill orders. While the year as a whole was profitable (and therefore, by at least one measure, successful), we had a couple of projects that really dinged our numbers for the year back around June.
I was talking with my manager before the Christmas holiday, and he mentioned that, while he doesn't like the term "post-mortem" (I have no idea what's wrong with the term; any business folks or managers out there know?), he did want to hold a meeting sometime mid-January where the entire contract group would review the year and try to figure out what went right, what went wrong, and what initiatives we can undertake to try to improve profitability.
For various reasons (I'll go into more detail if it's requested), I believe that one thing our team, and indeed the organization as a whole, would benefit from is some form of organized code-sharing. The same things get done again and again by different people and they end up getting done (and broken) in different ways. I'd like to at least establish a repository where people can grab code that performs a certain task and include (or, realistically, copy/paste) that code in their own projects.
What should I propose as a workable common source repository for a team of at least 10-12 full-time devs, plus anywhere from 5-50 (very) part time developers who are temporarily loaned to the contract group for specialized work?
Any reasonable answer requires some cultural context, so I'll provide it here, along with some of my thoughts on the topic:
Developers will not be forced to use this repository. The barrier to entry must be as low as possible to encourage participation, or it will be ignored. Sadly, this means that anything which requires an additional software client to be installed and run will likely fail. ClickOnce deployment is about as close as we can get, and that's awfully iffy.
We are a risk-averse, Microsoft shop. I may be able to sell open-source solutions, but they'll be looked upon with suspicion. All devs have VSS, and the corporate director has declared that VSTS is not viable going forward. If it isn't too difficult a setup and the license is liberal, I could still try to ninja a VSTS server into the lab.
Some of my fellow devs care about writing quality, reliable software, some don't. I'd like to protect any shared code written by those who care from those who don't. Common configuration management practices (like checking out code while it's being worked on) are completely ignored by at least a fifth of my colleagues on the contract team.
We're better at writing processes than following them. I will pretty much have to have some form of written process to be able to sell this to my manager. I believe it will have to be lightweight, flexible, and enforced by the tools to be remotely relevant because my manager is the only person who will ever read it.
Don't assume best practices. I would very much like to include things like mandatory code reviews to enforce use of static analysis tools (FxCop, StyleCop) on common code. This raises the bar, however, because no such practices are currently performed in a consistent manner.
I will be happy to provide any additional requested information. :)
EDIT (responding to questions):
Perhaps contracting isn't the correct term. We absolutely own our own code assets. A significant part of the business model on paper (though not, yet, in practice) is that we own the code/projects we write and we can re-sell them to other customers. Our projects typically take the form of adding some special functionality to one of the company's many existing software products.
From the sounds of it, you have an opportunity during the "post-mortem" to present some solutions. I would create a presentation outlining your ideas and present them at this meeting. Before that, I would recommend that you set up some of the solutions and demonstrate them during your presentation. Some things to do:
Evangelize component-based programming (a good read is Programming .NET Components by Juval Lowy). Advocate the DRY (Don't Repeat Yourself) principle of coding.
Set up a central common location in your repository for all your reusable code libraries. This should hold the reference implementation of your reusable code library.
Make it easy for people to use your code libraries by providing project templates for common scenarios with the code libraries already baked in. This way your colleagues will have a consistent template to work from. You can leverage the VS.NET project template capabilities to do this - check out the following links: VSX Project System (VS.NET 2008), and the Code Project article on creating project templates.
Use a build automation tool like MSBuild (which is bundled in VS2005 and up) to copy over just the components needed for a particular project. Make this part of your build setup in the IDE (VS.NET 2005 and up have nifty ways to set up pre-compile and post-compile tasks using MSBuild).
I know there is resistance to open source solutions, but I would still recommend setting up and using a continuous integration system like CruiseControl.NET, so that you can compile and test your projects on a regular basis from the central repository where the reusable code library is maintained. This way any change to the code library can be quickly checked to make sure it does not break anything. It also helps surface version issues across the various projects.
If you can set this up on a machine and show it during your post-mortem as part of the steps that can be taken to improve, you should get better buy-in, since you are showing something already working that can be scaled up easily.
Hope this helps and best of luck with your evangelism :-)
I recently came across a set of frameworks called the Chuck Norris Frameworks; they are available on NuGet at http://nuget.org/packages/chucknorris . You should definitely check them out, as they have some nice templates for your ASP.NET projects. Also definitely check out NuGet itself.
Organize by topic, and require unit tests (feature-level) for check-in/acceptance into the library; add a wiki to explain what/why and to support searching.
One question: you say this is a consulting group. What code assets do you actually have? I would think most of your team's coding efforts would be owned by your clients as part of your work-for-hire contracts. If you are going to do this, you need to make absolutely certain that your contracts grant you rights to your employees' work.
Maven has solved code reuse in the Java community - you should go check it out.
I have a .NET developer who has devised something similar for our internal use with .NET assemblies. Because there's no comparable .NET Internet community, this tool just accesses an internal repository on our corporate network, but otherwise works much the way Maven does.
Maven could really be used to manage .NET assemblies directly (we use it with our Flex .swf and .swc code modules); it's just that .NET folks would have to get over using a Java tool, and would probably have to write a Maven plugin to drive MSBuild.
First of all, for code organization check out the Microsoft Framework Design Guidelines at http://msdn.microsoft.com/en-us/library/ms229042.aspx and then create a central location in source control for the new framework you're going to create. Set up default namespaces and assemblies for cleaner separation, and make sure everyone gets a daily build.
Just an additional point, since we have "shared code" in my shop as well.
We found out this is very much a packaging issue:
Whatever code you are producing or tool you are using, what you should have is a common build tool able to package your sources into a "delivery component": everything needed to actually execute the code, but also the documentation (compressed) and the source (compressed).
The main benefit of having such a "delivery package unit" is to have as few files to deploy as possible, in order to ease the distribution of those units.
The build process can very well be managed by Maven or any other tool (Ant/NAnt) you want.
When an audit team wants to examine all our projects, we just deploy on their workstation the same packages we deploy on a production machine, except that they will uncompress the source files and do their work there.
Since our source files also include whatever files are needed to compile them (for instance, Eclipse project files), they can even recompile those projects in their own development environment.
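As a sketch of the packaging step (the folder layout is invented; this uses the standard System.IO.Compression API):

using System.IO;
using System.IO.Compression;

class Packager
{
    static void Main()
    {
        const string staging = @"build\staging";
        Directory.CreateDirectory(staging);

        // Compress the documentation and the sources into the staging area...
        ZipFile.CreateFromDirectory(@"build\docs", Path.Combine(staging, "docs.zip"));
        ZipFile.CreateFromDirectory(@"src", Path.Combine(staging, "source.zip"));

        // ...copy the executables next to them...
        foreach (var file in Directory.GetFiles(@"build\bin"))
            File.Copy(file, Path.Combine(staging, Path.GetFileName(file)), true);

        // ...and produce the single "delivery component" to deploy.
        ZipFile.CreateFromDirectory(staging, @"build\MyComponent-1.0.zip");
    }
}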
That way:
Developers will not be forced to use this repository; the barrier to entry must be as low as possible to encourage participation, or it will be ignored: it is just a script to execute to get the "delivery module" with everything in it they need (a Maven repository can be used for that too).
We are a risk-averse, Microsoft shop: you can use any repository you want
Some of my fellow devs care about writing quality, reliable software, some don't: this has nothing to do with the quality of the code written inside these packaged modules.
We're better at writing processes than following them: the only process involved in this is the packaging process, and it can be fairly automated
Don't assume best practices: you are not forced to apply any kind of static code analysis before packaging executable and source files.