I am working with a client who has an ERP system in place, called M1, that they are looking to make custom changes to.
I have spent a little bit of time investigating the ERP system in terms of making customizations. Here is a list of what I have found with regards to custom changes:
Custom changes cannot be exported/imported. There is an option for this in the M1 Design Studio; however, it always appears to be disabled. I tried everything and couldn't find any mention of it in the help documentation.
You can export a customizations change log (CSV, XML, Excel, HTML) that provides type, name, location and description. In essence, it is a read-only document that provides a list of changes you made. You cannot modify the contents of this log.
Custom form changes go into effect for all data sources (Test, Stage, LIVE) at once. In other words, there does not appear to be any way to limit the scope of a form change.
Custom field changes must be made in each data source (Test, Stage, LIVE) separately. What's odd here is that if you add a field in Test, adjust a grid to display it, and subsequently switch to LIVE, it detects that the field doesn't exist and discards the grid changes.
I'm unable to find documentation indicating that this application supports version control.
sigh
....
So...
How do I manage changes from an SDLC: ALM methodology and tools standpoint?
I could start by bringing in a change request system to manage pending and completed customizations. But then what? How should changes be managed and released? Put backups of the application under source control and deploy when needed?
There might not be a good answer to this question since I'm unable to take advantage of version control and create a separation of environments, but I figured I'd ask in case anybody has had similar experience or worked with M1.
I take it from the lack of answers in two months that your question is unanswerable. SDLC is a subject you could write a textbook on, or read a textbook on, and still not know enough about your environment; about all anyone can guess is that "SDLC" was probably a bullet point on the hiring qualifications at your shop.
I have no experience with M1, but I am assuming that you're going to have to ask your peers at work for their ideas, because it sounds like you're asking a vertically closed (your shop, your tools, your practices) question that has no exact technical answer.
As for best practices: I suggest you investigate best practices outside your M1 ERP silo and apply them as makes sense to you.
The company I work for also uses M1 ERP. We have similar issues regarding version control of the customisations. From what I can tell, all customisations are stored in the M1DD database. You could back up a copy of this database before any major development work, as a basic revision control system.
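For what it's worth, here is a minimal sketch of that idea as a script you could run before each round of development. It assumes M1DD lives on SQL Server and that sqlcmd is on the PATH; the server name and backup folder are made up, so adjust to taste:

    import datetime
    import subprocess

    SERVER = "ERPSQL01"              # hypothetical SQL Server instance hosting M1DD
    BACKUP_DIR = r"D:\Backups\M1DD"  # hypothetical backup folder

    # Timestamped file name, so each run keeps a separate, restorable snapshot.
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    backup_file = "%s\\M1DD_%s.bak" % (BACKUP_DIR, stamp)

    # BACKUP DATABASE is standard T-SQL; WITH INIT overwrites the target file.
    sql = "BACKUP DATABASE [M1DD] TO DISK = N'%s' WITH INIT" % backup_file
    subprocess.run(["sqlcmd", "-S", SERVER, "-Q", sql], check=True)
    print("M1DD backed up to", backup_file)

The pile of .bak files is crude compared to real version control, but at least it gives you labelled points you can diff or roll back to.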
I am familiar with the issue of all changes becoming immediately active in all datasets. This is particularly annoying when you are making changes to a commonly used module, as you don't know how live data will be affected during the development process. One technique I have found useful is to surround untested code with an if statement so that it is only executed when I am logged in:
If App.UserID = "MYUSERNAME" Then
    'new code here
End If
I would be interested in hearing how you solved this problem.
I'm tasked with helping to set up the process templates and check-in policies for my company's TFS 2008 installation.
Aside from three check-in policies (a check-in action must have comments against it, a code file must be peer-reviewed, there must be a work item associated with a check-in), I have been asked to consider and implement any others.
What are some of the most important or useful policies to enforce for version control?
The fewer the better.
Usually in an organization you want to ease the friction of check-in, to encourage developers to make frequent, small, discrete check-ins rather than batching up a load of stuff at once. Then again, you want to ensure that you have a working codebase for everyone who needs it, and that you are capturing the data you need to improve your software delivery process.
Personally, I think a policy to enforce changeset comments and a work item association policy are fine, as they capture metadata that is very easy to record at the time but hard to recover afterwards. They also encourage developers to get into the habit of having a work item to track every piece of work, even experimental development or spikes.
The peer review process might be better performed using branching or another mechanism rather than forcing a peer review on every check-in; however, that depends on your process. Remember as well that you can have mandatory check-in notes in TFS to capture metadata such as the code reviewer. A check-in note is slightly different from a check-in policy, and the two are often confused.
If you want to read more discussion about check-in policies, take a look at a blog post I wrote on the balancing act a while ago. To hear some more discussion of check-in policies, I recently recorded a podcast with a fellow Team System MVP talking about their use of TFS, which might be interesting (Radio TFS, Using TFS with Ed Blankenship). Finally, we also did a Radio TFS episode all about check-in policies in 2008 that might be of interest.
Don't break the build! Of course, finding an automated way to check for that and reject the check-in is the challenge.
Some rules that we follow in our company:
Commit all changes related to the same task at once (that will help with reviewing the changes and with future rollbacks or merges if needed).
Template-based comments (e.g. prefix all comments with a code that represents what was done: + for additions, - for removals, * for updates, ! for important modifications, etc.); see the example after this list.
Obviously, always check in code that compiles, and check only finished work into the main line.
Check in unfinished work to branches daily.
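To illustrate the comment template in the second rule, a day's worth of hypothetical check-in comments might read:

    + T-1042: add export button to invoice grid
    * T-1038: update tax rounding in order totals
    ! T-1050: change invoice numbering scheme (breaks old report queries)
    - T-1027: remove legacy print preview dialog

The prefix makes the history scannable at a glance, and the task number ties each changeset back to the tracker.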
The ones we use where I work on TFS are:
Code Analysis
This ensures that all the code was compiled on the dev's machine before it was checked in.
Work Item Association
If you've made a change, there should have been an assigned task!
Last Build Successful
Using the TFS Build Server to check that the current code in source control compiles on an independent machine.
Check In Comments (part of the TFS Powertools - http://msdn.microsoft.com/en-us/teamsystem/bb980963.aspx)
It's good to be able to see a summary of the check-in without having to go to the work item(s).
Try to keep the number of developers working on the same branch small. That way the branch stays stable with respect to compilation, the unit tests, and regressions. It's a nightmare if a developer checks in something which compiles but breaks a key area of the application (such as login).
If you really have to have more than 10 developers checking code into the same branch, we've found an email policy useful, where the developer checking in warns everyone that they're checking in, so that no one attempts to update their copy of the branch in the middle of a check-in. Sometimes we've had to do the converse, setting aside a time in the day when check-ins are prohibited, so that updates are safe.
Frankly, the fewer the policies, the better. The more policies you have, the greater the incentive NOT to use version control. What happens then is:
Code is developed in parallel on uncontrolled source control systems, and only the final revision goes into the official one.
People delay committing as much as possible, decreasing visibility of what they are doing to other developers.
People will actually avoid committing something if they can get away with it, and some will find a way to get away with it.
In fact, I think your three check-in policies are already too much. For instance:
Having code peer-reviewed before check-in makes it much more difficult to store work in progress there. Instead, if the source control system allows it (and many do), track whether a revision has been peer reviewed or not. With some systems you can create a life cycle for a revision; with others you might create branches; with still others you might use tags.
Having a work item associated with every check-in makes it impossible for developers to do exploratory programming, or to take the initiative on possible improvements. It stifles the developers. Instead, make sure that any revision going into integration tests or user acceptance tests, not to mention production itself, is associated with a work item.
This might sound anti-Enterprise, but it's just what we have learned in a few decades of software development. Most enterprise organizations haven't been clued in to this yet, but eventually they will be. So, you might go the very opposite way, but don't say no one ever told you.
I recommend the Agile Manifesto and, particularly, Lean Software Development for general principles.
Or, taking Stack Overflow design philosophy into account, make the system reward the behavior you want.
Is there any form of version control for Linden Scripting Language?
I can't see it being worth putting all the effort into programming something in Second Life if, when a database goes down over there, I lose all of my hard work.
Unfortunately, there is no source control in-world. I would agree with giggy. I am currently moving my projects over to a Subversion (SVN) system to get them under control. I really should have done this a while ago.
There are many free & paid SVN services available on the net.
Just two free examples:
http://www.sourceforge.net
http://code.google.com
You also have the option to set one up locally so you have more control over it.
Do a search on here for 'subversion' or 'svn' to learn more about how to set one up.
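If you do set one up locally, a minimal session might look like this (paths and messages are hypothetical; the commands are standard Subversion):

    svnadmin create /path/to/repo                 # create an empty local repository
    svn import myscripts file:///path/to/repo/trunk -m "Initial import of LSL scripts"
    svn checkout file:///path/to/repo/trunk work  # working copy to edit in
    svn commit work -m "tweak rotation timer"     # record each change as you go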
[edit 5/18/09]
You added in a comment you want to backup entire objects. There are various programs to do that. One I came across in a quick Google search was: Second Inventory
I cannot recommend this or any other program as I have not used them. But that should give you a start.
[/edit]
-cb
You can use the Meerkat viewer to back up complete objects, or use some of the test programs of libopenmetaverse to back up in a text environment. I think you can back up scripts from the inventory with them.
Jon Brouchoud, an architect working in SL, developed an in-world collaborative versioning system called Wikitree. It's a visual SVN without the delta-differencing that occurs in typical source code control systems. He announced that it was being open sourced at http://archvirtual.com/2009/10/28/wiki-tree-goes-open-source/#.VQRqDeEyhzM
Check out the video in the blog post to see how it's used.
Can you save it to a file? If so then you can use just about anything, SVN, Git, VSS...
There is no good source control in-game. I keep meticulous version information in the names of my scripts, and I have a pile of old versions of things in folders.
I keep my source out of game for the most part and use SVN. LSLEditor is a decent app for working with the scripts, and if you create a solution with objects, it can emulate a lot of the in-game environment (giving objects, reading notecards, etc.).
I personally keep any code snippets that I feel are worth keeping around on github.com (http://github.com/cylence/slscripts).
Git is a very good source code manager for LSL since its comparisons work line by line, unlike other SCMs such as Subversion or CVS. The reason this is so crucial is that most Second Life scripts live in ONE FILE (since they can't call each other... grrr), so having the comparison done at the file level is not nearly as effective. Comparing line by line is perfect for LSL. With that said, GitHub also (like SourceForge and Google Code) allows you to make your code publicly viewable (if you so choose) and available for download in a compressed file for easier distribution.
Late reply, I know, but some things have changed in Second Life, and some things, well, have not. Since the Third Party Viewer policy still keeps a hard wall up against saving and loading objects between viewer and system, I was thinking about another possibility that has so far been completely overlooked: bots!
Scripted agents, AKA bots, have all the usual avatar actions available to them. Although I have never seen one used as an object repository, there is no reason you couldn't create one. Logged in as a separate account, the agent can be wherever you want it, automatically or on command; it can then collect any or all objects you are working on, at set intervals or on command, and anything it has collected can be given to you or your collaborators.
I won't say it's easy to script an agent, and I couldn't even speak to writing an extension to a scripted agent myself, but if you don't want to start from scratch there is an extensive open source framework to build on: Corrade. Other bot services don't seem to list 'object repository' among their abilities either, but any that support CasperVend must already provide the ability to receive items on request.
Of course the lo-fi route, just regularly taking a copy and sending the objects to a backup avatar, may still be a simple backup solution for a single user. It does necessitate logging in as the other account, either in parallel or once every 20 or so items, to be sure the items are being received and not capped by the server. This process cannot rename the items or sort them automatically the way a bot could. Identically named items are listed in inventory with the most recent at the top, but this becomes a mess when working with multiples of various items.
Finally, there is a Coalesce feature for managing several items as one in inventory. This is currently not supported for sending or receiving objects, but in the absence of a bot it can make it easier to keep track of projects you don't wish to actually link as one item. (Caveat: don't rezz 'no-copy' coalesced items near 'no-build' land parcels; any that cannot be rezzed are completely lost.)
At my new company, the CMS is ClearCase. I've worked with Perforce before and it had a nice built-in notification mechanism for the team to keep up-to-date with files that changed in the project. I'm trying to have something equivalent in ClearCase. I would like to know if someone have achieved this before.
Basically, there are three requirements:
Have a way to subscribe to a project. One receives notifications only for the projects one has subscribed to.
When someone delivers an activity, all the subscribers of the impacted project receive an email notification about that activity.
The email contains the list of the files affected by this activity. Each modified file has a link that performs a diff showing what this activity changed in that file.
So, is anyone aware of a module/extension or any other existing way to put that in place, or do I have to do all this manually with triggers and Perl scripts?
Thanks,
Martin
We wanted the same thing here, so we are using a trigger called ucm_complete_delivery.pl that can be found on CM Crossroads.
You need to apply this trigger to your PVOB (as it's a UCM trigger).
Once you have applied it, you need to define the following Custom Attributes on your UCM component(s):
auto_baseline_email user_1#mydomain.com,user_2#mydomain.com,etc...
It's a bit painful, as the mailing list has to be maintained by hand (or you need to use a group mail address), but it's better than nothing. :)
Cheers,
Thomas
I am not sure if that already exists; I am sure it is not provided natively with the UCM product.
Maybe a more specialized forum like CM Crossroads has more information, but you have already put a question there ;)
Anyhow, the simplest way to implement such a notification would be to have a process that follows the new baselines made on a stream.
Since each baseline is composed of activities, it would be simple to list those.
Since each baseline is easily compared with its previous baseline, it would be simple to list the file versions and build the appropriate diff.
As for the users following a project, I would suggest as a "subscription mechanism" the list of views of one of the streams of a project: any user having a view on (one of the streams of) that project is potentially interested.
The general implementation principle would be through post-operation triggers, as described in the "Ten best triggers" article. AFAIK, almost all ClearCase operations can have triggers (written in Perl, IIRC).
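To make that concrete, here is a rough sketch of what a trigger body along those lines could look like. It is Python rather than the usual Perl, the SMTP host and recipient list are placeholders, the environment variables are the ones ClearCase is supposed to set for UCM post-operation triggers, and the parsing is simplified (it will mishandle paths containing spaces):

    import os
    import smtplib
    import subprocess
    from email.mime.text import MIMEText

    SMTP_HOST = "smtp.example.com"        # placeholder
    SUBSCRIBERS = ["user_1@example.com"]  # would really come from an attribute or a file

    def cleartool(*args):
        """Run a cleartool command and return its stdout as text."""
        return subprocess.run(["cleartool"] + list(args),
                              capture_output=True, text=True, check=True).stdout

    # ClearCase sets these in the environment of UCM post-operation triggers.
    activity = os.environ.get("CLEARCASE_ACTIVITY", "")
    stream = os.environ.get("CLEARCASE_STREAM", "")

    # List the versions in the delivered activity's change set, then strip
    # the "@@/branch/N" suffixes to get the affected files.
    versions = cleartool("lsactivity", "-fmt", "%[versions]p", activity)
    files = sorted(set(v.split("@@")[0] for v in versions.split()))

    body = "Activity %s was delivered to %s.\n\nChanged files:\n%s" % (
        activity, stream, "\n".join("  " + f for f in files))

    msg = MIMEText(body)
    msg["Subject"] = "[ClearCase] deliver on %s" % stream
    msg["From"] = "clearcase@example.com"
    msg["To"] = ", ".join(SUBSCRIBERS)
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

The per-file diff links would take more work (cleartool diffbl between the new baseline and its predecessor, plus whatever web front end you have), but the skeleton is the same.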
You need to add an email trigger to the deliver operation. A long, long time ago I saw a simple example. But you have to take care of maintaining the subscription list and emailing the appropriate people yourself.
Everywhere I look online, it seems that everyone has agreed that using an exclusive-locking workflow in source control is a Bad Thing. All the new revision control systems I see appear to be built for edit-and-merge workflows, and many don't even support exclusive locks at all.
However, everyone I work with is of the opinion that exclusive locks are a "must have" for any source control system, and working any other way would be a nightmare. Even recent hires from other companies seem to come in with this idea.
My question isn't who is right. I'm pretty sure what answer I'd get to that. My question is, is there really still any debate over this matter? Is there an honest "pro locking" camp out there that makes a serious case? Is any work being done to advance the art of revision control based around the locking model? Or are locking fans flogging a dead horse?
EDIT:
The answers so far have done a good job of explaining why exclusive locks are a good feature to use occasionally. However, I'm talking about promoting a workflow where exclusive locks are used for everything.
If you're used to exclusive locking, then it's hard to embrace the edit-merge workflow.
Exclusive locking has its benefits, especially for binary files (images, videos, ...), which can't be merged automatically, or sometimes at all.
But: the need for exclusive locking always indicates another problem: a lack of good communication between the people working on the project. Exclusive locking provides a poor substitute: it tells users that someone else is already working on that particular file, something they should know without using a version control system.
Since there are better ways to help with communication among team members, most (all?) version control systems no longer implement exclusive locking, or implement only a reduced version (i.e., locks that are not enforced).
It's not the job of a version control system to help with the communication.
I like having the option to exclusive-lock some file[s].
Having an exclusive lock is necessary, e.g. for binary files.
It's also semi-necessary for some machine-generated non-binary files (e.g. Visual Studio project files, which don't merge at all well if there are ever two parallel changes to be merged).
If you believe that merges are hard (and while we've come a long way, they can be in some circumstances), and you don't have programmers frequently wanting to edit the same file, exclusive locking isn't necessarily that bad.
I wouldn't use them on an open source project, obviously, but in the corporate world where the rules are stricter and you can walk over to a guy and say "can I break your lock?", it gives visibility into what people are working on and avoids conflicts so they don't have to be resolved later.
If two people really need to work on a file at the same time, often you can branch that file, and so long as the tool makes it clear that that branch needs to be merged back in, you can do that and resolve any conflicts then.
That said, I don't think I want to have to work in an exclusive locking world again.
Exclusive locking is the best you can do in a worst-case scenario, so its presence always tells me that there are bigger problems.
One of those bigger problems is bad organization of code. On one of my consulting gigs for a major telecom, eight out of thirty team members were constantly working on the same source file (a VB.NET "god" form). We would wait for someone else to finish their work and release the exclusive lock (VSS), and then the next person in the pecking order would immediately lock the file to apply their changes. This took forever, because each person had to reintegrate all their work into the new code that they could see only at that point. Since I was the new guy, I was at the bottom of the pecking order and I was NEVER allowed to check in my code changes. I eventually went to the project manager/director and suggested that I be tasked with another part of the application's functionality. This project eventually self-destructed, but most of us left once we saw that was inevitable. Note that the use of VSS integration was a crucial part of this failure, too, since it forces early acquisition of that precious file lock.
So, a well-organized project should almost never result in two people working on the same part of the same source file at the same time. Therefore, no need for exclusive locking.
Another of those bigger problems is putting binary files into source control. Source control tools are not designed to handle binaries, and that is a good thing. Binaries require different treatment, and source control tools cannot support that special treatment. Binaries must be managed as a whole, not as parts (lines). Binaries tend to be far more stable/unchanging. Binaries tend to need explicit versioning different from source control versioning, often with multiple versions available simultaneously. Binaries are often generated from source, so only the source needs to be controlled (and the generation scripts). See Maven repositories for a binary-friendly storage design (but please don't use Maven itself, use Apache Ivy).
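To sketch what that different treatment looks like in practice: with an Ivy repository, a build declares the binary it needs by explicit version and fetches it at build time, instead of checking the artifact itself into source control (the organisation and module names here are invented):

    <ivy-module version="2.0">
        <info organisation="com.example" module="myapp"/>
        <dependencies>
            <!-- Pin the prebuilt binary by explicit version; the artifact
                 lives in the repository, not in source control. -->
            <dependency org="com.example" name="rendering-lib" rev="1.4.2"/>
        </dependencies>
    </ivy-module>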
The arguments for locking are really bad. They basically say: our tools are so bad they cannot merge, so we lock.
In my experience, it's often necessary for two people to work on the same file at the same time. (There are, presumably, shops where this doesn't happen, but I have fairly varied experience to draw on.)
In these cases, it is necessary for both people to get copies of the original, do their work, and merge their changes. In non-locking VCSs, this is done automatically and accurately for the most part.
With a locking VCS, what happens is that one person gets the lock. If that person finishes first, great; if not, the other person has to wait before being able to introduce changes. Sometimes it is necessary to make a quick bug fix while somebody is in the middle of a long change process, for example. Once the second person gets the lock, they need to merge the changes, typically manually, and check in the new version. On several occasions, I've seen the first person's changes dropped entirely, because the second person didn't bother with the manual merge. Frequently this merge is done hastily, either out of distaste for the job or under simple time pressure.
Therefore, if two people need to work on the same file at the same time, a non-locking VCS is almost always better.
If this doesn't come up, and two people never need to work on the same file at the same time, it doesn't matter which you use.
On an open-source project like a game, it makes sense to keep images under revision control, and those are nice to be able to lock (Subversion supports this). For source files, it's better to get into the edit-merge workflow. It's not hard, and in my experience it increases productivity.
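For reference, the Subversion feature mentioned above is the svn:needs-lock property combined with svn lock. A hypothetical session (file names invented):

    svn propset svn:needs-lock '*' textures/logo.png  # file becomes read-only until locked
    svn commit -m "require a lock before editing" textures/logo.png
    svn lock -m "editing the logo" textures/logo.png  # take the exclusive lock
    # ... edit the image ...
    svn commit -m "new logo" textures/logo.png        # committing releases the lock by default

This gives you an edit-merge workflow for code while still preventing two people from silently clobbering each other's binary edits.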
Here's my $0.02.
Locking is an old school of thought for textual code. Once programmers have used merging a couple of times, they learn to trust it and usually come to like the power of it.
Valid cases for locks still exist:
Graphics alterations: 99% of the time you cannot merge two people's work on the same graphic.
Binary updates.
Sometimes code can be complex or simple enough to justify only one person working on it at a time. In this case it's a project management choice to use the feature.
Interesting question. To me, the issue is not so much whether to lock, but how long to lock. In this shop, I'm a minority of one, because I like to merge. I like to know what other people have done to the code. So what I do is:
Always work in a local copy of the source tree.
Run Windiff often against the "official" code and, if necessary, merge changes down to my local copy. For merging, I use an old Emacs (Epsilon) and have the compare-buffers command bound to a hot key. Another key says "make the rest of this line like the one in the other file", because many changes are small.
When I'm ready to check in changes, Windiff tells me which files I need to lock, check in, and unlock. So I keep them locked for as short a time as possible, like minutes.
So when Fearless Leader says "Have you checked in your code?" the answer is "I don't have any checked out."
But as I said, I'm a minority of one.
In my opinion, the main reason people use exclusive locking is its simplicity and for them, lack of risk.
If I have exclusive access to a file, I won't have to try to understand someone else's changes to the same file. I don't have to risk making my changes and then having to merge them with someone else's when I check in.
If I have an exclusive lock on the files I'm changing, then I know that when I check in, I will be able to check in a coherent changeset; it is simpler for me to do this.
The other aspect of merging (especially automatic merging) is the potential for regression problems. Without good automated tests, every time you do an automatic merge you may introduce problems. At least if you have an exclusive lock on something, you ensure that someone is looking at the code before it's checked in. For some, this reduces risk.
What exclusive locking takes away is the potential parallelism of changes. You can't have two people working on a file.
The open source model (lots of people around the world collaborating on different stuff) has promoted the view that locking is bad, but it really does work for some teams. It does avoid real problems. I'm not saying that these problems can't be overcome, but doing so requires a change in behaviour; if you want to move to a non-locking model, you have to persuade people to change to a way of working which can seem harder to them, and which can actually (in their view) increase the risk of regressions.
Personally, I prefer not to use locks, but I can see why some people are reluctant to give them up.
A bit late to this discussion, but for anybody who has read and digested Steve Maguire's Writing Solid Code: a central point is to have your tools detect and identify as many problems as possible.
A compiler doesn't get tired after 12 or more straight hours and forget to issue a syntax error. But it is quite easy to forget the manual step of initiating a communication, and to pay for it later.
If you need to version-control a binary file, then you need some form of locking to prevent, or at least warn of, an accidental overwrite. It must be a fundamental feature of any VCS, even a distributed one. Using such a feature in a DVCS may require the creation of a central repository, but is that so evil? If you use a DVCS in any sort of corporate environment, you will have a central repo anyway, to ensure business continuity.
Addressing your edit comments:
Even RCS and SCCS (the grandfather VCS for most of what runs on Unix/Linux these days) permit concurrent editing access to files, and I'm not referring to separate branches. With SCCS, you could do 'get -k SCCS/s.filename.c' and obtain an editable copy of the file -- and you could use an option ('-p' IIRC) to get it to standard output. You could have other people doing the same. Then, when it came time to check-in, you'd have to ensure that you started with the correct version or do a merge to deal with changes since your starting version was collected, etc. And none of this checking was automated, and conflicts were not handled or marked automatically, and so on. I didn't claim it was easy; just that it could be done. (Under this scheme, the locks would only be held for a short time, while a checkin/merge was in progress. You do have locks still - SCCS requires them, and RCS can be compiled with strict locking required - but only for short-ish durations. But it is hard work - no-one did it because it is such hard work.)
Modern VCS handle most of the issues automatically, or almost automatically. That is their great strength compared to the ancestral systems. And because merging is easy and almost automatic, it allows different styles of development.
I still like locking. The main work system I use is (IBM Rational) Atria ClearCase; for my own development, I use RCS (having given up SCCS around Y2K). Both use locking. But ClearCase has good tools for doing merging, and we do a fair amount of that, not least because there are at least 4 codelines active on the product I work on (4 main versions, that is). And bug fixes in one version quite often apply to the other versions, almost verbatim.
So, locking-only VCSs typically do not have good enough merge facilities to encourage the concurrent editing of files. More modern VCSs have better merging (and branching) facilities, and therefore do not have as strong a need for locking beyond the shortest term (enough to make the operations on the file - or on the files, in the more advanced systems - atomic).