Perforce verify checksum before flush? - command-line

The Perforce manual describes p4 flush as a dangerous operation, because it does not actually transfer files. I was wondering whether it could be made less dangerous by first verifying the checksums of the files and only performing the flush if the checksums match.
My friend and I have identical files in our workspaces, but the files are large and uploading them to the server takes a long time, so once the upload is finished I want to make sure the files are still identical.
We could compute SHA-1 hashes of our workspaces and compare them manually. (We are working on an Unreal Engine project with lots of binary files, and some files might have changed by now.)
The project also contains some ignored files, which we would have to exclude from the checksum.
Does Perforce perform this verification by itself, or is there a command (or script) for this?

The p4 diff -se command will tell you whether your unopened workspace files match the corresponding depot revisions.
For your particular use case, I'd recommend skipping that step; if you follow p4 flush with p4 clean, it will force a re-sync of everything that doesn't match the depot (but only those files).
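For illustration, that sequence might look like the following (the depot path is a placeholder for your own project):
p4 flush //depot/project/...
p4 clean //depot/project/...
p4 clean compares the workspace against the have list and re-syncs only the files that differ; you can add -n first to preview what it would touch.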

Related

Deleting files and folders from a Helix server while keeping them on the PC

I am new to Perforce. I submitted my previous project to it as asked and added the .p4ignore later, but now I don't want the files covered by the .p4ignore, like the .vscode folder, on the server. However, when I try to mark them for delete, they are also deleted from my machine. How can I remove them from the server but keep them on the local machines?
You probably want to use the p4 obliterate command; this is used to permanently remove files from the server (including all their history), which will leave your local files in an untracked state. Note that this requires admin level permission since file history is normally considered immutable.
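For example (depot path hypothetical) -- note that obliterate only previews by default and needs -y to actually delete anything:
p4 obliterate -y //depot/project/.vscode/...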
If you can't get an admin to help with this, you can use the p4 delete -k command to open the files for delete while keeping the local files. This is a little tricky because it still results in a deleted revision, and if you're not careful you might end up getting surprised at some point by having a sync operation delete your local files (e.g. a force sync may delete your local files to force them into agreement with the head depot revision even though they aren't on the client have list).
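A sketch of that approach, with a hypothetical depot path:
p4 delete -k //depot/project/.vscode/...
p4 submit -d "Remove ignored editor files from the depot"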
To avoid that potential problem, after you delete the files, exclude them from your client view. That will not only prevent them from being added (similar to .p4ignore) but will also firmly exclude them from any operation that touches client files, including sync. (I usually recommend using the client view to exclude files in the first place instead of p4ignore -- it has the advantage of being tracked on the server, and it also prevents you from syncing down "ignored" files submitted by other workspaces whose settings don't match yours.)
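In the client spec (edited via p4 client), an exclusion line looks something like this (both paths hypothetical):
-//depot/project/.vscode/... //my-workspace/project/.vscode/...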
tl;dr: use obliterate for mistakenly added files if you can, otherwise use a combination of delete -k and client view tuning to make sure the depot and client files are hidden from each other.

Perforce Commandline Get Contents of Deleted File

I've done this several times in git, but not sure how to do it in p4 commandline. Google is not helping me - or maybe I'm not searching correctly.
I have a file that was deleted: /path/to/file/index.html. Now I need to get the contents of that file as it was before being deleted. I do not want to bring it back to life; I just need the contents.
The changelist for the delete is 125325.
What would be the easiest way to do this?
To sync it to your workspace (this is roughly similar to the git checkout approach that you're probably familiar with):
p4 sync /path/to/file/index.html@125324
If you just want to see the content (e.g. dump it to stdout), you can use p4 print; if you use the depot path of the file rather than a local path, p4 print doesn't even require that the file be mapped to your workspace:
p4 print /path/to/file/index.html@125324
Note that the revision specifier I'm using (@125324) is the changelist just before the one that deleted the file; the @ syntax takes a changelist, label, or date, while # takes a revision number. You can also use the prior revision number, an earlier rev/changelist, a particular date, etc. See p4 help revisions for all the ways you can reference older versions of files.
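For example (revision number and date purely illustrative):
p4 print /path/to/file/index.html#3
p4 print /path/to/file/index.html@2015/06/01
The first prints a specific revision number; the second prints the revision as of a date.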

SVN - lock local changes for check in

I am working with the Eclipse Subversion plugin and made some local changes to crucial files which I don't want to check in. I'm looking for a way to "lock" these local files (I know the term "lock" means something else in svn...) and disable checking them in, so that I won't commit them accidentally.
Maybe you can just ignore them:
Team --> add to svn:ignore
If necessary you can do this via the svn command line, which lets you add a pattern to svn:ignore:
svn pe svn:ignore .
Then you can fill in something like:
*.d
Note that svn:ignore is set per directory (hence the "." in the svn command).
The answer largely depends on whether the file is already tracked (versioned) by Subversion or not.
Not Versioned
Setting up an ignore via one of the several methods we have of ignoring files will do what you want. If you're using 1.8, we also have svn:global-ignores, which supports inheritance (so if you want to, say, ignore all files with the .o extension, you could just set svn:global-ignores with *.o as a pattern).
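For example, set on the working copy root (the pattern is illustrative):
svn propset svn:global-ignores "*.o" .
Since it is a versioned, inheritable property, once committed it applies to every directory below the one it is set on.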
Versioned
If the file is already in the repository and is versioned then ignore won't help you since versioned files are not ignored by any configuration you do. One alternative, as mentioned in the answer to this question "How do I avoid checking in local changes to the SVN repository?", is to use a changelist and add the file to the changelist.
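A sketch of the changelist approach (file and changelist names hypothetical):
svn changelist do-not-commit conf/database.properties
svn commit --changelist my-feature -m "Commit only my feature work"
svn status will then group the file separately, which makes an accidental commit easier to spot; note that a bare svn commit of the whole tree still includes it, so the discipline is to commit by changelist or by explicit paths.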
A better option might be to restructure your setup to not require making local changes to versioned files. A common pattern you will see is configuration files where the committed file is a template, developers then copy the template into another name that is used and customize it.
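A minimal sketch of that pattern (file names hypothetical): keep config.properties.template under version control and ignore the live copy that developers customize:
cp config.properties.template config.properties
svn propset svn:ignore config.properties .
(propset replaces any existing svn:ignore value on the directory, so use propedit if one is already set.)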

How to force a directory to stay in exact sync with subversion server

I have a directory structure containing a bunch of config files for an application. The structure is maintained in Subversion, and a few systems have that directory structure checked out. Developers make changes to the structure in the repository, and a script on the servers just runs an "svn update" periodically.
However, sometimes we have people who will inadvertently remove a .svn directory under one of the directories, or stick a file in that doesn't belong. I do what I can to cut off the hands of the procedural unfaithful, but I'd still prefer for my update script to be able to gracefully (well, automatically) handle these changes.
So, what I need is a way to delete files which are not in subversion, and a way to go ahead and stomp on a local directory which is in the way of something in the repository. So, warnings like
Fetching external item into '/path/to/a/dir'
svn: warning: '/path/to/a/dir' is not a working copy
and
Fetching external item into '/path/to/another/dir'
svn: warning: Failed to add directory '/path/to/another/dir': an unversioned directory of the same name already exists
should be automatically resolved.
I'm concerned that I'll have to either parse the svn status output in a script, or use the svn C API and write my own "cleanup" program to make this work (and yes, it has to work this way; rsync / tar+scp, and whatever else aren't options for a variety of reasons). But if anyone has a solution (or partial solution) which takes care of the issue, I'd appreciate hearing about it. :)
How about
rm -rf $project
svn checkout svn+ssh://server/usr/local/svn/repos/$project
I wrote a Perl script to first run svn cleanup to handle any locks, and then parse the --xml output of svn status, removing anything with a bad status (except for externals, which are a little more complicated).
Then I found this:
http://svn.apache.org/repos/asf/subversion/trunk/contrib/client-side/svn-clean
Even though this doesn't do everything I want, I'll probably discard the bulk of my code and just enhance this a little. My XML parsing is not as pretty as it could be, and I'm sure this is somewhat faster than launching a system command (which matters on a very large repository and a command which is run every five minutes).
I ultimately found that script in the answer to this question - Automatically remove Subversion unversioned files - hidden among all the suggestions to use Tortoise SVN.
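For reference, a rough shell sketch of the status-parsing idea (far less robust than the --xml approach: the 8-column offset and newline-delimited paths can break on unusual file names, and GNU xargs is assumed):
svn cleanup .
svn status --no-ignore | grep '^[?I]' | sed 's/^.\{8\}//' | xargs -r -d '\n' rm -rf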

Inefficient handling of file renames in Mercurial

When I rename a file using Mercurial, and then commit without any changes, why does it still send the full file to the repository? (I can tell because the subsequent push to the remote repository shows how much data is being transferred). Isn't it obvious to it that it simply needs a rename?
I'm using the latest version of TortoiseHG under Windows, and the file in question is a 20MB text file.
This is a known deficiency in the storage format used by Mercurial. You can search for "lightweight copies" for the full story, but briefly, the problem is that a new revlog is created for the new file name when you rename. The new revlog starts with a compressed snapshot of the full file — this is normally not a big problem, but it's still bigger than a zero-sized delta.
There's little you can do about it now unless you want to patch your Mercurial and run experimental code. The good news is that you just have to wait: the patches that we've been working on will be able to convert your existing repository into a more space efficient one automatically. This will happen when you hg clone over the network or if you use hg clone --pull locally.
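For example (directory names hypothetical), a pull-based local clone re-encodes the store instead of hardlinking the existing revlogs:
hg clone --pull big-project big-project-repacked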