Sourcetree - How to take an old commit and force it to the top?

Doing a prototype with people who barely know the technicalities of Sourcetree (myself included).
I pushed a huge update at the commit marked "A", then gave it a follow-up update at "B". At some point after this, someone nuked the entire thing, I think with a faulty merge.
How do I take the files from that commit and force them over the top of the current state? (My update was huge; the commits that followed it are worth stomping over if necessary.) I don't understand all these functions like Rebase and Patches.
[screenshot of the Sourcetree commit history]
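For reference, a minimal command-line sketch of one blunt way to do this, assuming the repository behind Sourcetree is Git and writing A for the hash of the good commit (both assumptions):

git checkout A -- .   # overwrite every tracked file with its state at commit A
# note: files added after A are not removed by this and may need git rm
git commit -m "Restore project state from commit A"
git push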

Related

Switching role between fork and parent

I've assumed maintainership of a given project on GitHub. I've done so by cloning the repository of the original authors and pushing my own changes and developments. This was done in agreement with the original authors, so they do not expect to work on this in the future. Nevertheless, my repository is marked as a fork of theirs, which makes it appear less official. Is there some way to denote a given repository as official? To swap the relation between my repository and that of the original developers?
I guess I could delete my repository, then ask the original devs to transfer theirs to mine, then let the original devs fork from that, then push my own changes from my local repo. But somehow this feels wrong. It would rely on my local copy, and migration of e.g. the pages branch might cause extra trouble. I hope there is a cleaner solution.
There doesn't seem to be a clean way to do this.
It seems your best option is to ask GitHub support to convert your repository to "normal mode" as opposed to "forked from" mode.
Another solution is to delete and recreate the repository. However, this can be dangerous, as the wiki and issues data will also be deleted in this process.
If you have further questions about this then let me know in comments and I can amend my answer.

(Mercurial/tortoisehg) Lost commits - how to troubleshoot

A colleague of mine was working on a VS project this morning, which he claims was in a folder cloned from our central repository. He built a release version of the application, which is still there with the correct date/time stamp on it.
He then said he committed several times during this time, but at some point he tried pulling down changes from the central repository and merging them in with his changes. He can't really remember what steps he took, but the end result is that all the changes he made to the source were lost. Now I'm trying to help him see if we can recover any of those changes.
He told me that after he committed his changes (several commits), he went into the TortoiseHg Workbench, pulled, and saw that there were many changes in the central repository. He decided to either "Merge with local" or "Update" to the tip he had pulled down - he can't remember which. I showed him the two dialogs and asked him whether he discarded changes, shelved, or anything else. He couldn't really remember, but he did recall having to back out because TortoiseHg didn't like what he was doing. Eventually, he did seem to be able to update to the repository tip.
I'm thinking that unless he physically deleted the folder in which he was working and then got the latest, the commits he claims to have made would have been recorded somewhere, right? What are common mistakes people make when pulling from the remote repo? Are there any log files or history I can check to salvage the work, or at least tell where he was working on this stuff?
Any hints to troubleshoot this would be greatly appreciated.
First of all, back up the tree somewhere, including .hg and everything under it. Then...
hg heads
...to see if he's just lost track of his commits. If it's there, it's just a case of updating back to it and doing a proper merge.
hg log -u 'His Username'
...to see if the commits are anywhere in the repo. If they are there, you can then work forward from them.
hg shelve --list
...to see if he's managed to shelve things somewhere
Take a look in .hg/strip-backup to see if he's managed to strip his changes somehow. Anything else destructive should have left backups in .hg too.
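If a backup bundle is sitting in that directory, it can be pulled straight back into the repository; a minimal sketch, with a hypothetical bundle filename:

hg unbundle .hg/strip-backup/1a2b3c4d-backup.hg

...then run hg heads again to see whether the missing changesets have reappeared.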
That should be a good start. If none of those give any clues then others may be able to suggest some things.
I don't know about TortoiseHg, but the first thing I would usually do is investigate directly from the hg command line.

Starting to version an already medium size project

I am about to start participating in the development of a medium-sized project (~50k lines) that was until now written by a single person and not under version control; as a result, the folders are cluttered with different versions of the same file (named file1, file2, file3, etc.).
I proposed to start using a VCS for it (a priori Mercurial, which is the only one I've ever used, for my personal projects, but I'm open to suggestions), so I'm taking any good ideas as to how to "start" the repository. E.g., should I make an initial commit with all the existing files, and immediately make a new commit with the unused files removed? Or something else?
(constructive remarks on mercurial vs bazaar vs git vs whatever are also welcome.)
Thanks for your tips.
E.g., should I make an initial commit with all the existing files, and immediately make a new commit with the unused files removed?
If the size of the repository is not a concern, then yes, that is a good starting point. Otherwise you can just commit what's actually used, and go from there.
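A minimal sketch of that starting point in Mercurial (the file names echo the question and are hypothetical):

hg init
hg addremove
hg commit -m "Initial import of the project as-is"
hg remove file1 file2
hg commit -m "Remove superseded copies"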
As for which system, all DVCSes stick to the same core principles. Which one you pick is entirely subjective — the only way to truly know which one you like is to try each one.
I would say use whatever you are most comfortable with and that meets your needs. As far as where to start, I personally would seed the repo with the current source as-is; that way you can verify that everything builds and runs as expected. You can make this initial seed a branch, so you can always go back to your starting point before refactoring.
My approach to this was:
create a Mercurial repository in the existing project folder ("existing")
commit all project files to "existing"
create an empty repository in a different location ("new")
As files are tested and QA'd (this was necessary because there was so much dross in "existing"), pull them from "existing" into "new".
Once files had been pulled into "new", delete the corresponding files from "existing". If access is needed to these files while the migration is under way, push them back from "new" to "existing".
This gave me the advantage of putting everything under some sort of control for recovery purposes, plus control over introducing the project to the DVCS. Eventually the "existing" folder became completely tested and approved for the project moving forward. At that point "existing" could be deleted or turned into a working folder, and "new" became the actual project folder.
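A rough sketch of that setup, using the two folder names above:

cd existing
hg init
hg addremove
hg commit -m "Snapshot of everything, warts and all"
cd ..
hg init new
# as each file passes QA: copy it into "new" and commit it there,
# then hg remove it from "existing" and commit that too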
I think Mercurial is a good choice. Lightweight, fast, very simple to use and well-integrated with Windows (if that's the platform you're dealing with).
I would probably get rid of all the clutter before the first commit. Delete everything you don't care about, run all the necessary tests and only then do the commit.
Yes, I'm dead set against the 0-day cluttering of repos.
Granted, a 50K SLOC project isn't very big, but if you commit files you already know you won't need, they will make your repo slightly bigger.
Also, remember to check that the tree doesn't contain large binary files. If it does, get rid of them if at all possible.
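A quick way to spot them, assuming a Unix-like shell and an arbitrary 5 MB threshold:

find . -path ./.hg -prune -o -type f -size +5M -print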

Keeping experimental history out of shared repository in Mercurial

I'm fairly new to Mercurial, but one of the advantages I see in using it is that while writing a feature you can be freer to experiment, check in changes, share them, etc., while still maintaining a "clean" repo for the finished feature.
The issue is one of history. If I tried 6 different ways to get something to work, now I'm stuck with all of the history for all my mistakes. What I'd like to do is go through and clean up my changes and "collapse" them into one changeset that can be pushed into a shared repository. This is complicated by the fact that I might pull in new changesets from the shared repository, and have those changesets intermingled with my own.
The best way I know of to do that is to use hg export to create a patch of my changes since cloning, clone a fresh repository, and apply the patch to the fresh repository.
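A sketch of that dance, using a single cumulative hg diff in place of per-changeset hg export (the revision number is hypothetical and stands for the last changeset shared with the team):

# in the messy clone, capture everything done since the shared base
hg diff -r 100 > feature.patch
# in a fresh clone of the shared repository
hg import --no-commit feature.patch
hg commit -m "Feature X, collapsed into a single changeset"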
Those steps seem a little cumbersome and easy to mess up, particularly if this methodology is rolled out to the whole dev team, some of whom are a little resistant to change (don't get me started). TortoiseHg makes the process slightly better, since you can highlight the changesets you want included in an export.
My question is this: Am I making this more complex than it needs to be? Is there a better workflow I can use to ease my troubles? Is it too much to expect a clean history where entire (small-ish) features are included in one changeset?
Or maybe my whole question could be summed up this way:
Is there an equivalent for this in mercurial? Collapsing a git repository's history
Although I think you should reconsider your use of branches in Mercurial (as per my comment on your post), using named branches doesn't really help with your concern about maintaining useless or unnecessary history - it just organizes it a bit.
I would recommend a combination of these tools:
Mercurial Queues (mq)
histedit (not distributed with Hg)
the mq changeset strip feature
to rework a messy history before pushing to a blessed or master repo. The easiest thing would be to use strip to permanently remove any changeset with no children. Once you've done that, you can use mq or histedit to combine, relocate, or modify existing commits. Histedit will even let you rewrite the comment associated with a changeset.
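For example, with the relevant extensions enabled and hypothetical revision numbers:

hg strip 42      # permanently remove a childless, dead-end changeset
hg histedit 40   # interactively fold, reorder, or reword everything since revision 40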
Some pitfalls:
In your opening paragraph you mention sharing changesets during feature development. Please understand that once you've shared a changeset, it's not a good idea to modify it using mq, histedit, or strip. Using these extensions can change the revision hash, which will make the changesets look new to everyone else.
Also, I agree with Paul Nathan's comment that mq (and histedit) are power features and can easily destroy a history. It's a good idea to make a safety clone before using these extensions.
Named branches are the simplest solution. Each experimental approach gets its own branch. This retains the history of the experiments.
The next solution is to have a fresh clone for each experiment. The working one gets pushed back to the main repo.
The next solution - and probably what you are really looking for - is the mq extension, which can "squash" a series of patches into a single commit. I consider mq to be "advanced" and "subject to accidentally shooting yourself in the foot". I also don't care to squash my commits - I like having my version history present for reference.
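For completeness, a sketch of the mq squash with hypothetical revision numbers (qimport names each patch after its revision):

hg qimport -r 40:tip       # turn the experimental changesets into mq patches
hg qpop -a                 # unapply them all
hg qpush                   # re-apply just the first one
hg qfold 41.diff 42.diff   # fold the rest into it
hg qfinish -a              # turn the squashed patch back into a regular changeset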

Small Shop, Why DVCS?

We have a small programming shop of at most 5 people working on a single project. I fully grok why DVCS is better for open source projects and for large companies, but what advantages does it have for smaller companies, other than "you can work on the airplane", which would require extra sysadmin work to make sure that the repositories on our dev boxes are properly backed up every night?
We also have several non-technical people (artists, translators) who can (sort of) deal with SVN. In people's experience, how much training is required to get them to move to a DVCS?
I'm going to speak from my experience, which is primarily with SVN and Hg, often working with designers and programmers who are not comfortable with version control.
My big beef with SVN and the other CVCSs I've used is that they block you from committing, not just when the network is down but also in case of a conflict (or worse, when someone locks a file so no one else can change it!). You could of course commit to a branch, but between the network bandwidth required to switch branches and the pain involved in merging, you still have a problem.
Of course, SVN blocks you from committing conflicted files so that you don't accidentally overwrite someone else's work; SVN requires you to at least acknowledge that you know one version or the other (or a custom combination of the two) is right. Mercurial, however, has a better solution (two, actually):
1. You can always commit to the local repository now and merge later. (All DVCS's have this feature.)
2. Even if you pull or push conflicting changes, instead of being blocked from committing, you have multiple heads via anonymous branches. (Sorry I can't really explain this in detail here, but you can google it.)
So your workflow goes from:
1. Get the latest, make changes, test them.
2. Get the latest, resolve conflicts, test the result.
3. Commit.
and becomes:
1. Get the latest, make changes, test them.
2. Commit (so you have a place to fall back to).
3. Get the latest, resolve conflicts, test the result.
4. Commit and push.
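In Mercurial terms, that second workflow looks roughly like this:

hg commit -m "My changes, tested locally"   # checkpoint before touching upstream
hg pull                                     # fetch the latest; this may create a second head
hg merge                                    # resolve conflicts against your checkpoint
hg commit -m "Merge upstream changes"
hg push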
That extra commit means you're doing less work per commit, so you have more checkpoints to fall back on. And there are other ways you can make more commits without getting in others' way.
SVN is just slow enough to break my concentration and tempt me to go to Facebook; Mercurial is fast, and Git is faster. The speed issue becomes very important when reviewing a log or changes to a working copy. With TortoiseHg, I can click through a list of files and instantly see the changes to each one; it takes a couple of seconds per file with TortoiseSVN + WinMerge (not sure how much of this is due to the DVCS). The more I use these tools, the more I feel that a VCS needs to be fast, just like a text editor or a mouse cursor: fast enough that it shouldn't need the network at all.
Subjectively, I find TortoiseHg to be a heck of a lot easier to use than TortoiseSVN (or the other Tortoises I've used). TortoiseHg is multi-platform, too. :)
One more thing: as I understand it, an SVN working copy is defined recursively: each folder is a working copy. This allows you to do some fancy-pants stuff (e.g. having a working copy that contains folders from disparate locations in the repository). I don't know if Hg has a similar feature, but in my experience SVN's implementation of this causes nothing but problems where I work, especially for those not quite comfy with SVN. When they copy and paste a WC folder on their machine via the OS shell instead of via svn copy, it goofs up their WC. I've goofed up my own WC this way as well. It's less of a problem with Hg: you normally work with an entire repo at once, whether you clone, update, or commit.
SVN's merging has improved a lot since its release, but it still lacks file rename tracking, which often results in tree conflicts. The article "Renaming is the killer app of distributed version control" tackles this issue and adds some interesting links in its comment section.
A DVCS lets you push to a central repository at the cost of one additional command compared to Subversion. Occasional users should be able to adapt to this minor change in workflow, while power users gain the freedom of local commits and branches without cluttering the central repository.
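For an occasional user, the day-to-day difference is roughly this:

svn commit -m "Fix the widget"   # Subversion: one step, straight to the server
hg commit -m "Fix the widget"    # Mercurial: record locally first...
hg push                          # ...then publish when ready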
Concerning tooling, which might be of importance for user acceptance, Mercurial is on par with Subversion.