I have Sparx EA set up with version control. Prior to checking in updates to my model I did a Get All Latest, but doing so has completely mangled one of my diagrams. This is particularly odd as:
The packages that contained the diagram and all the components in the diagram were checked out by me, so there is no way these should ever have been updated by the Get All Latest.
In any case I'm the only one who has ever worked on this model.
What appears to have happened is this: I had refactored a large diagram into three separate ones, by cutting and pasting some components from the original into new diagrams, leaving the original with a subset of its components. The Get All Latest seems to have re-inserted all the original components and connections back into the original diagram, and even relocated some of the components already there, leaving an utter mess.
Is there any way to undo the actions of what I can only assume is a pretty serious bug?
Many thanks in advance.
No, you cannot undo that unless you have a database backup. If you have not committed your work before, it's your own fault.
FWIW: Get All Latest is meant to be used with individually distributed EAP files (or an RDBMS connecting remotely to a central repository). It will erase all local work and replace it with what has been stored in the VC system.
Edit: Although the help states
The Get All Latest command does not update any package that is checked-out (to anybody) in the currently loaded project; otherwise, any changes not yet committed to version control would be discarded
this may affect current diagramming work. The simple reason is that a diagram's content can well rely on other packages being altered during a Get All Latest. It also depends on whether one has saved diagram changes, which are otherwise held in memory and can cause trouble.
Anyhow, there is definitely no undo for Get All Latest. Go into a dark chamber, cry out loud, and start over. Just revert to an older commit first.
Related
We are using ClearCase with a single Dev stream for our team, without 'locking' (unreserved checkouts).
ClearCase client version: 7.1.1
ClearCase server version: 7.0.1.2
This means that two or more people can make edits to the same file at once, without having to wait for the file to be checked in.
Edit: We have performed the same test without using the "Graphic merge"; done that way, it worked as expected! Maybe this can shed some light on past ClearCase defects or workarounds.
We have seen a few cases of weird behaviour, and after experimenting a bit today we found that the following scenario takes place:
File.txt is checked out by 2 team members.
Each member makes a change in the file (in different regions of the file).
First developer checks in the code to ClearCase, no problems here.
Second developer checks in, gets a merge popup notification.
When selecting "graphic merge", ClearCase in this case informs that all merges were done automatically and no additional input is needed from the developer.
Looking a little further, the first check-in's changes were removed (deleted), keeping only the later check-in's changes.
Why is this happening? This has caused our team to lose code on several occasions already. Are we doing something unsafe/wrong?
Edit: Illustrating the problem with images of the issue:
The file Manager.cs is at version 27.
Two developers are checking it out.
One made a change, checked in.
The other checks in, gets the merge notification.
This is what I see in the graphical merge:
Note that on the left is version 27, in the middle version 28 (the latest checked-in version), and on the right is the result, which is dropping version 28's code change!
Why is this happening automatically??
Note: if you are using ClearCase without 'locking', that means you are doing unreserved checkouts (and not reserved checkouts).
If you select "graphic merge", you should see a Windows helping you to reconcile the merge, even if there is no conflict.
Such a merge should not delete any previous checkins: it might cancel the previous modifications, only if all the new changes are selected, but if you have the graphical merge window open, you can control how the merge is applied.
For your past problematic merges, you can easily re-apply, from the version tree, the merge from dev1's previous version to the LATEST version, in order to restore those cancelled changes.
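For instance, a rough command-line sketch of that repair (the element name and version number come from your screenshots and are only illustrative; use the version shown in your own version tree, and your platform's path separators):

    cleartool checkout -nc Manager.cs
    cleartool merge -to Manager.cs -version /main/28    # bring back the changes dropped by the bad merge
    cleartool checkin -c "Re-apply changes lost in previous merge" Manager.cs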
Since my initial answer four days ago, two new pieces of information have come to light:
ClearCase client version: 7.1.1 ClearCase server version: 7.0.1.2.
It is never good to have a client with a version more recent than the server.
We have performed the same test, without using the "Graphic merge". This option worked as expected!
That would be consistent with some discrepancies already seen between the GUI for merging and the pure command line (as in this other scenario).
When the GUI fails, always try to fall back on the pure CLI (Command Line Interface).
I have a project under version control, but the project has some images, videos and zip files that change every so often. I don't want to store these files under version control because they take up a lot of space and make updates and commits very slow.
What's a good way of dealing with this issue while still committing non-source files that have changed? Is there a better way?
I'm currently using subversion, if there is another version control client that is better for dealing with this issue, please recommend it!
I have lots of non-source files in SVN, and the only time it slows down the commit is when I change them. I don't see how this is an issue if they're only changing "every so often". Also, the size really shouldn't be a concern. If your repository is on a server and you're worried about how much space it's taking up, you need to upgrade. Hard drives are cheap. Buy them.
Some people feel strongly that non-source files don't belong in source control, I say an entire project should be stored in source control. That way if my development system goes down I can switch to another and after a couple minutes of downloading the project I'm back to coding.
You added in a comment:
The problem I have with this is that I don't care about the version history of those zip/video files, as long as they're the newest ones, there is no problem.
That means you have a linear workflow of development, only working on the LATEST of one main branch.
You do not seem to deal with the phase "after release", where you have to:
maintain what runs in production
develop small evolutions...
...while doing massive refactoring to experiment with some big evolutions
In those three cases, the question of exactly which images, videos and zip files you "have to use" or "were using at the time" might become important.
Anyhow, if you feel SVN does not handle them appropriately, I would still recommend having some way to remember that, for SVN revisions xxx to yyy, you were using version 'z' of your set of binaries.
For that, you could set up an external repository manager like Maven. See the question "Is it acceptable/good to store binaries in SVN?" (my answer in that question is near the top of the page, but I link directly to Evan's answer, as he mentions Maven).
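A lighter-weight alternative (just a sketch; the manifest file name and bundle label below are made up) is to keep a tiny manifest under SVN that records which binary bundle the current source expects, so the mapping from source revision to binary version is never lost:

    echo "media-bundle=v3" > BINARIES.txt     # record which bundle of images/videos/zips this source needs
    svn add BINARIES.txt
    svn commit -m "Source now expects binary bundle v3"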
While I find it a huge pain to deal with non-text files in version control, especially ones that change a lot, I've accepted the practice of "if it is needed for the build/installer, it should go in version control". This of course is not a hard rule. I don't keep 3rd-party libraries under version control (though I know people who do).
I came to this opinion after setting up a Continuous Integration server at my shop. Having everything that the build needs and that may change makes it much easier. As mentioned above, I don't keep libs under version control, but that is because we rarely upgrade or add new libraries. If this is not the case for your shop, then you may consider doing so. Also, if your images/videos/zips change more than once a year, then I'd recommend keeping them under version control.
I'm learning Mercurial as my solo SCM software. With other management software, you can put change comments into the file header through tags. With hg you comment the changeset, and that doesn't get into the source. I'm more used to centralized control like VSS.
Why should I put the file history into the header of the source file? Should I let mercurial manage the history with my changeset comments?
Let the source control system handle it.
If you put change details in the header it will soon become unwieldy and overwhelm the actual code.
Additionally if the scm has the concept of changelists (where many files are grouped into a single change) then you'll be able to write the comment so that it applies to the whole change and not just the edits in the one file (if that makes sense), giving you a clearer picture of why the edit was required.
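A quick Mercurial illustration (the commit message is invented): one message describes the whole changeset, and the verbose log shows which files each changeset touched:

    hg status                                              # see every file touched by this change
    hg commit -m "Rename CustomerId across the data layer" # one comment covers all modified files
    hg log -v -l 1                                         # -v lists the files included in the changeset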
Yes; let the source control system handle your changeset comments. The rationale for this is that it makes considerably more sense when you're viewing the change log later, trying to work out what's going on between two versions of a file - the source control system can present the change comment to try and enlighten the situation.
There's no reason to manually maintain a file history when SCM software is much better suited to solve this problem. All too often I see partially-completed file histories in the source, which actually hurts, because people incorrectly assume it is accurate.
The difference is not whether it's a centralized or distributed VCS, it's more about what's being changed.
When I moved to .Net, the number of files updated for any individual change seemed to skyrocket. If I had to log the change in each file, I'd never get any real work done. By commenting on the set of changes, it doesn't matter how many files I had to update.
If I ever need to identify everything that went into a particular change, I can diff between the two versions of the project.
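With Mercurial, for example (the revision numbers are placeholders):

    hg status --rev 41 --rev 42    # list every file that changed between the two revisions
    hg diff --rev 41 --rev 42      # show the actual edits across the whole project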
The biggest difference (and advantage) I saw when switching away from SourceSafe was the switch from file based to project based commits. As soon as I got used to that, I stopped adding change-log type comments to all of my files.
(As a side effect, I've found that my process description comments have gotten better)
I'm not a big proponent of littering the code with change comments. In the event they are needed they can be looked up in the SCM (at least for the SCM variants I have used). If you do want them in the file, consider putting them at the end instead of the beginning. That way you won't have to scroll down past the (uninteresting, to me at least) comments before you get to the actual code.
Another vote for letting the SCM system handle the checkin comments, but I do have one thing to add.
Some systems allow you to use RCS tags in your source code, where the SCM automatically inserts the change history directly into the source file being committed. Sounds like a nice balance, because the history is in the SCM system and is also put into the source code itself.
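For example, with CVS/RCS-style keyword expansion, a header containing the $Log$ keyword accumulates check-in comments automatically, so the top of a file ends up looking roughly like this (the file name, dates and messages are invented):

    /*
     * $Log: parser.c,v $
     * Revision 1.3  2009/02/10 12:01:33  alice
     * Handle empty input without crashing.
     *
     * Revision 1.2  2009/02/03 09:14:02  bob
     * Initial parser skeleton.
     */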
The problem is that this process changes the source file. I think that's a bad idea, because the file on disk doesn't take its final form until after your comment is inserted at check-in. If you were a good engineer, you built and tested your changes before the commit. If your source changes after the commit, then you've essentially got a build that could be broken - but most engineers won't build again after a commit - why should they?
"But it's just a comment," you say! True, but I did have a case where code in my source file happened, strangely enough, to look like an RCS header tag, and that section of the code got replaced on check-in, thereby munging my code. Easy enough to fix, but bad that the build was broken for 20+ users.
It's much easier to forget to maintain history in the source; since one should always (IMO) comment commits to the source control system, that problem disappears. Also, if you change lots of files before a commit, updating the history in every file is annoying work. This is really one of the points of having an SCM.
I have experience with this. I've had the file history in the comments, and it was awful. Nothing but garbage; sometimes you would have to scroll down almost a thousand lines of change history before you finally got to what you wanted. Not to mention, you're slowing down other aspects of your build process by adding more kilobytes to your source code tree.
I've never really worked with a lot of people where we had to check out code and have repositories of old code, etc. I'm not sure I even know what these terms mean. If I want to start a new project that involves more people than just me, tracks all the code changes, and supports "check out" (again, I don't know what that means), how do I get started? Is that what SVN is for? Something else? Do I download a program that keeps track of the code?
What do I do?
It will all be in house. No Internet for storing code.
I don't even know if what I am asking for is called source control. I see things about checking out, SVN, source control, and so on. I don't know if it is all talking about the same thing or not. I was hoping to use something open source.
So, a long time ago, in the bad old days of yore, source control used a library metaphor. If you wanted to edit a file, the only way to avoid conflicts was to make sure that you were the ONLY one editing the file. What you'd do is ask the source control system to "check out" that file, indicating that you were editing it and nobody else was allowed to edit it until you made your changes and the file was "checked in". If you needed to make a change to a checked-out file, you had to go find that freakin' developer who'd had everythingImportant.conf checked out since last Tuesday... freakin' Bill...
Anyway, source control doesn't really work like that anymore, but the language stuck with us. Nowadays, "checking out" code means downloading a copy of the code from the code repository. The files will appear in a local directory, allowing you to use them, compile the code, and even make changes to the source that you could perhaps upload back to the repository later, should you need to. Even better, with just a single command, you can get all the changes that have been made by other developers since the last time you downloaded the code. Good stuff.
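With Subversion, for example, the basic cycle looks roughly like this (the repository URL is a placeholder):

    svn checkout http://server/svn/myproject/trunk myproject   # "check out" a working copy
    cd myproject
    # ... edit files, build, test ...
    svn update                                  # pull in changes other people have committed
    svn commit -m "Describe what you changed"   # publish your own changes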
There are several major source control systems, of which SVN (also called Subversion) is one (CVS, Git, HG, Perforce, ClearCase, etc. are others). I recommend starting with SVN, Git, or HG, since they're all free and all have excellent documentation.
You might want to start using source control even if you're the only developer. There's nothing worse than realizing that last night the thousand lines of code you deleted as useless were actually critically important and are now lost forever. Source control allows you to zoom forwards and backwards in the history of your files, letting you easily recover stuff that you should not have removed, and giving you a lot more confidence about deleting useless stuff. Plus, fiddling around with it on your own is good practice.
Being comfortable with source/revision control software is a critical job skill of any serious software engineer. Mastering it will effectively level you up as a professional developer. Coming onto a project and finding that the team keeps all their source in a folder somewhere is an awful experience. Good luck! You're already on the right path just by being interested!
Check out Eric Sink's excellent series of articles:
Source Control HOWTO
I recommend Git and Subversion (SVN) both as free, open-source version control systems that work very well. Git has some nice features given that it can be easier to work decentralized.
Checkout means retrieving a file from a source control system. A source control system is a database (some, like CVS, use just specially marked up text files, but a file system is also a database) that holds all versions of your code (that are checked in after you make modifications).
Microsoft Visual SourceSafe uses a very proprietary database which is prone to corruption if it is not regularly maintained, and it uses reserved checkouts exclusively. Don't use it, for all those reasons.
The difference between a reserved checkout and an unreserved checkout is that in an unreserved checkout, two people can be modifying the same file at once. The first one to check in gets in with no problem, and the second one has to update their code to the latest version and merge the changes into theirs (which usually happens automatically, but if the same area of the file was changed, then there is a conflict, which has to be resolved before the file can be checked in).
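Subversion works this way by default; the second developer's sequence looks roughly like this (the commit messages are invented):

    svn commit -m "Change region B"    # rejected: the working copy is out of date
    svn update                         # merges the first developer's change in automatically,
                                       # or flags a conflict if the same lines were changed
    svn commit -m "Change region B"    # now succeeds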
For some arguments for unreserved checkouts, see here.
Following this, you will be looking at a build process that independently checks out the code and builds the source code, so that everyone's changes are built and distributed together.
Are you creating the project that requires source control? If so, choose a source control system that meets your needs, and read the documentation for how to get it set up. If you are simply using a previously set up source control system for an existing project, ask a coworker who has been using it, or ask the person who set up the source control system.
For choosing a source control system that meets your needs, most source control systems have extensive descriptions of their features online, many provide evaluation or even completely free products, and there are many many many anecdotal descriptions of what working with each individual source control system is like, which can help.
Just don't use Microsoft Visual SourceSafe if you value your sanity and your code.
I've tried using source control for a couple of projects but still don't really understand it. For these projects, we've used TortoiseSVN and have only had one line of revisions. (No trunk, branches, or any of that.) If there is a recommended way to set up source control systems, what is it? What are the reasons and benefits for setting it up that way? What are the underlying differences between the workings of a centralized and a distributed source control system?
Think of source control as a giant "Undo" button for your source code. Every time you check in, you're adding a point to which you can roll back. Even if you don't use branching/merging, this feature alone can be very valuable.
Additionally, by having one 'authoritative' copy of the source, it becomes much easier to back up.
Centralized vs. distributed... the difference is really that in a distributed system, there isn't necessarily one 'authoritative' copy of the source, although in practice people usually still have a master tree.
The big advantage to distributed source control is two-fold:
When you use distributed source control, you have the whole source tree on your local machine. You can commit, create branches, and work pretty much as though you were all alone, and then when you're ready to push up your changes, you can promote them from your machine to the master copy. If you're working "offline" a lot, this can be a huge benefit.
You don't have to ask anybody's permission to become a distributor of the source. If person A is running the project, but persons B and C want to make changes and share those changes with each other, it becomes much easier with distributed source control.
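As a rough Mercurial sketch of that offline workflow (the URL is a placeholder):

    hg clone http://server/hg/project      # the full history comes down to your machine
    cd project
    hg commit -m "Work done on the train"  # commits go to your local clone, no network needed
    hg push                                # later, publish your changesets to the master repository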
I recommend checking out the following from Eric Sink:
http://www.ericsink.com/scm/source_control.html
Having some sort of revision control system in place is probably the most important tool a programmer has for reviewing code changes and understanding who did what to whom. Even for single person projects, it is invaluable to be able to diff current code against previous known working version to understand what might have gone wrong due to a change.
Here are two articles that are very helpful for understanding the basics. Beyond being informative, Sink's company sells a great source control product called Vault that is free for single users (I am not affiliated in any way with that company).
http://www.ericsink.com/scm/source_control.html
http://betterexplained.com/articles/a-visual-guide-to-version-control/
Vault info is at www.sourcegear.com.
Even if you don't branch, you may find it useful to use tags to mark releases.
Imagine that you rolled out a new version of your software yesterday and have started making major changes for the next version. A user calls you to report a serious bug in yesterday's release. You can't just fix it and copy over the changes from your development trunk, because the changes you've just made have left the whole thing unstable.
If you had tagged the release, you could check out a working copy of it and use it to fix the bug.
Then, you might choose to create a branch at the tag and check the bug fix into it. That way, you can fix more bugs on that release while you continue to upgrade the trunk. You can also merge those fixes into the trunk so that they'll be present in the next release.
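In Subversion, for example, tags and branches are both cheap copies (the paths below are placeholders):

    svn copy http://server/svn/proj/trunk http://server/svn/proj/tags/1.0 -m "Tag the 1.0 release"
    svn copy http://server/svn/proj/tags/1.0 http://server/svn/proj/branches/1.0-fixes -m "Maintenance branch for 1.0"
    svn checkout http://server/svn/proj/branches/1.0-fixes    # working copy in which to fix the bug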
The common standard for setting up Subversion is to have three folders under the root of your repository: trunk, branches and tags. The trunk folder holds your current "main" line of development. For many shops and situations, this is all they ever use... just a single working repository of code.
The tags folder takes it one step further and allows you to "checkpoint" your code at certain points in time. For example, when you release a new build or sometimes even when you simply make a new build, you "tag" a copy into this folder. This just allows you to know exactly what your code looked like at that point in time.
The branches folder holds different kinds of branches that you might need in special situations. Sometimes a branch is a place to work on an experimental feature or features that might take a long time to stabilize (so you don't want to introduce them into your main line just yet). Other times, a branch might represent the "production" copy of your code, which can be edited and deployed independently from your main line of code, which contains changes intended for a future release.
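Creating that conventional layout in an empty repository is a single command (the URL is a placeholder):

    svn mkdir -m "Create standard trunk/branches/tags layout" http://server/svn/proj/trunk http://server/svn/proj/branches http://server/svn/proj/tags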
Anyway, this is just one aspect of how to set up your system, but I think giving some thought to this structure is important.