I have a branch in Perforce where I changed the directory structure of the project using the Rename/Move command.
While merging back to the mainline, something went wrong that caused Perforce to treat the new structure as a set of whole-new directories.
As a result, the history of the files in the new directory structure is totally unrelated to the history of the same files before the restructuring.
Is there any way to recover from this situation, or to ask Perforce to join the old history to the new history?
Something went wrong that caused Perforce to treat the new structure as a set of whole-new directories.
Usually when this happens, it means someone didn't use the "rename/move" command and renamed by some other method instead (i.e. they did something that adds the new directory as a new set of files, independent of the originals, rather than an atomic rename of an existing set of files). It's impossible for me to say how to "recover" without seeing what the history of the files looks like now, so that I can reverse-engineer what the "something went wrong" was.
I'd recommend either posting on the Perforce forums or contacting Perforce technical support, so that somebody with expertise can wheedle the necessary data out of you and propose a solution. I can intuit that this will require an amount of back and forth that Stack Overflow frowns on: "what were the branches you were merging from and to?", "okay, now run THIS command to see the history of that branch and send me the output", "okay, which of these five merge operations I can see in the history is the one you're talking about?"
From another answer:
So, for a file a/b/c, you can view its full history by using the -i option where appropriate. For example, p4 filelog -li a/b/c.
This is not necessary if files are renamed via the "move/rename" command, so if you need to use "filelog -i" to see file history, the files were definitely renamed by some other method. (The "p4 move" command was added in 2009, so long-time Perforce users will sometimes use other workflows.)
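For instance, to check how the new files relate (or don't) to the originals, something like the following should show whatever lineage Perforce recorded (the depot path here is hypothetical):
p4 filelog -li //depot/main/newdir/foo.cpp
p4 integrated //depot/main/newdir/foo.cpp
The first command follows history across renames and integrations; the second lists any integration records for the file. If both show the file starting from scratch at the bad merge, the link to the old history was never recorded.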
Related
In the absence of development branches (or sometimes in the overabundance thereof), it is common to have at least half a dozen of other people's shelves unshelved in a workspace, as well as several of one's own: both local changes (divided by directory, of course) and shelves from other workspaces.
(As an aside, yes, this is almost as bad as pre-version control email/tarball/fileshare-based nonsense, and no, there's nothing I can do about it.)
As a result of this situation, it is often necessary to identify which of myriad changelists refers to a given file.
Is there a way of retrieving this information, using the p4 command line tool, without recourse to elaborate shell scripting or wrapper programmes?
p4 opened FILE
Note that this tells you which changelist you have the file open in -- it doesn't necessarily correspond to which changelist you may have unshelved from originally. If this is important, make sure that when you unshelve it's into distinct changelists (rather than opening everything in the same changelist) so you can keep track of what came from where.
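For example (the changelist and shelf numbers here are made up), unshelving into a fresh changelist keeps the provenance visible:
p4 change
p4 unshelve -s 12345 -c 67890
p4 opened //depot/project/foo.c
p4 change creates the new changelist (say it came back as 67890), p4 unshelve -s 12345 -c 67890 opens the shelved files in it, and p4 opened then reports foo.c as open in changelist 67890, which you know corresponds to shelf 12345.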
Is there an established method to tell the SCM (Mercurial, in my case) that files matching the pattern foobaz_1_2_3.csv should all be considered versions of foobaz.csv?
In my application I rely on data tables from an external source that puts the version number in the filename. The importance of tracking changes across their versions was made painfully sharp recently when I spent days troubleshooting a bug on my side of the fence, only to discover it was because they changed some data content and notification of said change did not reach me.
If the filename were constant, Hg would have informed me immediately of the internal change, and I could have responded appropriately in an hour or two, with very little stress. I could just adopt the habit of renaming foobaz_2_3_4 to foobaz myself before checking in, or running diff old new; one or both of those is likely what I'll do from now on.
The whole experience has me wondering, though, whether there might be other methods I've not thought of that don't mess with the external file. (For example, what if I have a downstream user who doesn't use SCM and relies on the filename+version number, which I've thrown away?)
If you get data in a file whose name changes every time (and whose contents may also change), you can:
Store the data file under version control (Mercurial is OK)
Replace the old file with the new one every time
Run hg addremove -s nnn (see hg help addremove): it will detect the probable rename and include the new file in the history of the old one
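A minimal sketch of that workflow, assuming the vendor has just shipped foobaz_2_3_5.csv to replace foobaz_2_3_4.csv:
rm foobaz_2_3_4.csv
cp ~/downloads/foobaz_2_3_5.csv .
hg addremove -s 90
hg commit -m "vendor data update 2.3.5"
Here -s 90 tells Mercurial to treat a removed/added pair that is at least 90% similar as a rename, so hg log --follow on the new file walks back through the old one's history.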
My original problem is that I have a directory where I write various scripts. Each of them is independent of others, and usually one-file-long. I want to have some versioning applied to them, but I have the following problems/requirements:
I don't want to have to store each small script in a separate directory!
I don't want to store them all in one repository OTOH, as they are completely unrelated, and:
some of them may later grow to more files (and then they will need a separate dir),
I sometimes want to copy one of them to a different machine (and I want to clone the whole repo).
I want to benefit from (distributed) version control mechanisms -- at least:
"infinite" number of revisions,
ability to clone repositories on different computers,
ability to do "atomic" multi-file commits.
Is it possible?
I'd prefer to do it in some mainstream distributed VCS (a solution using Mercurial would be preferable, but I'm not fixed).
EDIT: the solution has to be free (at least "as in beer") and cross-platform (at least Win32 & Linux).
Related, but didn't help:
"two-git-repositories-in-one-directory" -- didn't find it helpful: the accepted answer looks like point 2. (above) to me; the current "community voted" answer sounds like 1.
"Version control of single files using Subversion" -- also too much of 2. or 1.
These requirements seem pretty "special" to me, so here is a solution on par with them ^^
You may use two completely different VCSs in the same directory. Even two "instances" of SVN might work: SVN stores its metadata in a directory called .svn and has (for historical reasons regarding ASP.NET) the option to use _svn instead. The directory listing would look like this:
.svn // metadata for rep1
_svn // metadata for rep2
script1 // in rep1
script2 // in rep2
...
Of course, you will need to hide or ignore the foreign scripts or folders from each VCS...
Added:
This only accounts for two scripts in one folder and needs one additional VCS per script beyond that. So if you do consider this route and need more repositories, rename each metadata directory and use a script to rename it back before updating:
MOVE .svn-script1 .svn
svn update
MOVE .svn .svn-script1
Why don't you simply create a separate branch (in the git sense) for each (group of) script(s)?
You can develop them individually as you please. Switching to a branch will show you only the scripts from that branch. It's sort of like directories, but managed by the version control system. If you later want to pluck a branch out into another repository, you can do that, and if you want to combine two scripts into a single project, you can do that as well. Copying them to a different machine might be a problem, but you can clone just the branch you're interested in, and that should work for you.
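A rough sketch of that setup in git, using orphan branches so each script's history stays unrelated (the script names and paths are made up):
git init scripts && cd scripts
cp ~/bin/backup.sh .
git add backup.sh && git commit -m "backup script"
git checkout --orphan cleanup-script
git rm -rf .
cp ~/bin/cleanup.sh .
git add cleanup.sh && git commit -m "cleanup script"
On another machine, git clone -b cleanup-script --single-branch <url> then pulls down just that one script's history.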
Another proposition for my own consideration is the "Using Convert to Decompose Your Repository" article on hgtip.com. It fails as a "standalone" solution, but could be helpful as an addition to the "mv .hgN .hg / MOVE .svn-script1 .svn" idea.
You can create multiple hidden repository directories and symlink .hg to whichever one you want to be active. So if you have two repositories, create directories for them:
.hg_production
.hg_staging
Then to activate either of them just do:
ln -sf .hg_production .hg
You could easily create a bash command to do this. So instead you could write something like activate-repo production, which would run ln -sf .hg_production .hg.
Note: Mac doesn't seem to support ln -sf so instead you'll need to do:
rm .hg; ln -s .hg_production .hg
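A minimal version of that helper, as a shell function (the name activate-repo and the .hg_* layout are just the conventions from above):
activate-repo() {
    rm -f .hg
    ln -s ".hg_$1" .hg
}
Removing the old symlink first sidesteps the ln -sf portability issue, so the same function works on both Linux and Mac.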
I can only think of these two lightweight versioning systems:
1) Using Dropbox with the Pack-Rat upgrade, to keep a full history of versions for each file automatically backed up and with the possibility to be shared with multiple Dropbox users: https://www.dropbox.com/help/113
If you have multiple machines managed by the same user (you), the syncing would be automatic. Also, if the machines are on the same LAN, Dropbox is smart enough to sync the files over the local network, so big files shouldn't be a worry.
2) Using a 'Versions' aware text editor for Mac OS X Lion. I'd expect TextMate, Coda and other popular Mac code editors to be updated to support this feature when Lion is released.
How about a compromise between 1 and 2? Instead of a folder+repo for each script, can you bundle them into loosely related groups, such as "database", "backup", etc., and then make one folder+repo for each group? Then if you clone a repo on another machine, you're only pulling down a smaller number of unrelated files. (Is the bandwidth/drive space really a concern?) To me, this sounds WAAAY simpler than all of the other suggestions so far.
(Technically this approach meets your requirements because (1) each script isn't in its own directory, (2) not all scripts are in the same repository, and (3) you can easily do this with any popular DVCS. :D)
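A minimal sketch of that layout, assuming Mercurial and hypothetical group names:
mkdir -p scripts/database scripts/backup
hg init scripts/database
hg init scripts/backup
mv db_dump.sh db_restore.sh scripts/database/
Each group is then a normal repo that you can hg clone to another machine on its own.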
UPDATE (2016): Apparently, a guy named Cosmin Apreutesei created a tool named multigit, which seems to implement what I wished for in this question! If you ever read it, thanks a lot Cosmin! I've started using your tool this year and find it awesome.
I'm starting to think of some kind of overlay over Mercurial/git/... which would keep a couple of "disabled" repository meta-directories, let's say:
.hg1/
.hg2/
.hg3/
etc., and then on hg commit FILENAME it would find the particular .hgN that is linked to FILENAME, and would then temporarily:
mv .hgN .hg
hg commit FILENAME
mv .hg .hgN
The main disadvantage is that it would require me to spend some time writing the tool. Or does anybody know of a ready-made one like this? If you do, please post it as a full-featured answer (not a comment); I'm more than willing to accept it.
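In the meantime, here is a minimal sketch of such a wrapper, assuming you maintain the FILENAME-to-.hgN mapping yourself and pass the number by hand (everything here is hypothetical):
#!/bin/sh
# usage: hgwrap N COMMAND [ARGS...], e.g. hgwrap 2 commit script2.sh
n="$1"; shift
mv ".hg$n" .hg
hg "$@"
status=$?
mv .hg ".hg$n"
exit $status
It is fragile, though: if hg is killed mid-command, the metadata directory is left renamed as .hg.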
I am doing a refactoring of my C++ project containing many source files.
The current refactoring step includes joining two files (say, x.cpp and y.cpp) into a bigger one (say, xy.cpp) with some code being thrown out, and some more code added to it.
I would like to tell my version control system (Perforce, in my case) that the resulting file is based on two previous files, so that in the future, when I look at the revision history of xy.cpp, I also see all the changes ever made to x.cpp and y.cpp.
Perforce supports renaming files, so if y.cpp didn't exist I would know exactly what to do. Perforce also supports merging, so if I had two different versions of xy.cpp it could create one version from them. From this, I figure that joining two different files should be possible (though I'm not sure about it); however, I searched through some documentation on Perforce and other source control systems and didn't find anything useful.
Is what I am trying to do possible at all?
Does it have a conventional name (searching the documentation for "merging" or "joining" was unsuccessful)?
You could try integrating with baseless merges (-i on the command line). If I understand the documentation correctly (and I've never used it myself), this will force the integration of two files. You would then need to resolve the integration however you choose, resulting in something close to the file you are envisioning.
After doing this, I assume the Perforce history would show the integration from the unrelated file in its integration history, allowing you to track back to that file when desired.
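A rough sketch of that approach on the command line (the depot paths are hypothetical, and the exact behavior of baseless integration varies by server version):
p4 integrate -i //depot/proj/y.cpp //depot/proj/xy.cpp
p4 resolve
p4 submit -d "record y.cpp lineage in xy.cpp"
After submitting, p4 filelog -i xy.cpp should show an integration record pointing back at y.cpp.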
I don't think it can be done in a classic VCS.
Those versioning systems come in two flavors (slide 50+ of Getting git by Scott Chacon):
delta-based history: you take one file and record its deltas. In this case, the unit being the file, you cannot associate its history with another file.
DAG-based history: you take one content and record its patches. In this case, the file itself can vary (it can be renamed/moved at will), and it can be the result of two other contents (so it is close to what you want)... but still within the history of one file (the contents coming from different branches of its DAG).
The easy part would be this:
p4 edit x.cpp y.cpp
p4 move x.cpp xy.cpp
p4 move y.cpp xy.cpp
Then the tricky part becomes resolving the move of y.cpp and doing your refactoring. But this will tell Perforce that the files are combined.
Is there any way to have the same file be part of multiple changelists in Perforce? By that I mean that, of the set of changed lines in the file, one subset would belong to one changelist, while another subset would belong to a second changelist.
Bonus question: if Perforce does not support this, then which source control systems, if any, do?
To answer the bonus question: Git allows staging changes per hunk, or even per line, so it effectively supports per-line changelists.
For a comparison between the two view this question: GIT vs. Perforce- Two VCS will enter... one will leave.
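For example, with git add -p you can split one file's edits across two commits:
git add -p foo.c
git commit -m "first subset of changes"
git add foo.c
git commit -m "remaining changes"
git add -p offers each hunk interactively, and its e (edit) option lets you split right down to individual lines.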
The same copy of the file? No, unfortunately this isn't possible.
Another way to do this without branching is to create additional workspaces (clients). Unless you really know what you're doing, be sure to set a different root directory in each of your workspaces. To save time (and disk), don't bother syncing the whole depot in the new workspace.
Sometimes, I'll have two copies of a depot (using two workspaces); one which contains work-in-progress and one which I keep unmodified. If I need to make a quickie change on a file that's heavily modified in my WIP workspace, I can use the 'virgin' workspace to make the change and submit it.
If you are using Perforce server 2009.2 or later, there is a workaround: you can shelve a particular file, and the diff is stored on the server. After shelving, you may want to revert the file to its original version and then work on it in another changelist.
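A sketch of that workflow, assuming a second pending changelist 67890 already created with p4 change (all numbers here are made up):
p4 shelve -c 12345 foo.c
p4 revert foo.c
p4 edit -c 67890 foo.c
Later, p4 unshelve -s 12345 brings the first set of changes back when you're ready for them.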
I know this is not the way you wanted it, but it is quite easy to create another workspace/client and then sync the code. The latter exercise becomes more tedious when you have volumes of code going into another application.
For more info read:
http://blog.perforce.com/blog/?p=1872
http://www.perforce.com/perforce/doc.current/manuals/cmdref/shelve.html
You could make a copy of the file with all of the changes, revert, edit the file again and copy one set of changes into it, submit, then edit, copy the next set of changes in, submit, and so on...
Bonus answer: I found this feature in Rational Team Concert (http://www-03.ibm.com/software/products/en/rtc/). You can have the same file in many changesets. If you want to add File1 to Changeset1 and Changeset2, you must complete Changeset1 first. This allows you to add File1 to Changeset2, but then a dependency between the changesets is created, so you cannot deliver Changeset2 without delivering Changeset1 too. Moreover, you cannot make changes to a completed changeset.