How do I lock a Perforce label from the command line? - command-line

I recently imported a VSS repository into Perforce. This included hundreds of labels, which the developer who was using VSS (and is now using Perforce) relies upon. I accidentally deleted them and had to do the import again. To prevent such accidental deletion in the future, I want to lock all the labels, but doing it through P4V would take forever. I would like to write a script to do it for me.
I can get all the labels into a text file with the p4 labels command, and with some text editor macro processing I could build up a script to lock them all. I just need to know the command(s) to do this.

This can be done by automating the process of editing the label spec. The process is as follows:
Send the label spec to standard output with the -o switch.
Pipe that output to a utility that can rewrite it, setting the label's "Options" field to "locked". The Unix utility sed gets the job done. (I'm on Windows, so I used one of the available Windows ports of sed.)
Pipe this updated spec back into the label command with the -i switch.
Put it all together and you get a command that looks like this.
p4 label -o <label name> | sed 's/^Options:.*/Options: locked/' | p4 label -i
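Since the goal is to lock every label, the command above can be wrapped in a loop over the output of p4 labels. This is only a sketch: it assumes p4 is on your PATH, you are already logged in, and no label names contain spaces (p4 labels prints lines of the form `Label NAME date 'description'`, so the name is the second field).

```shell
# Lock every label on the server. Assumes "p4 labels" output lines look like:
#   Label NAME 2011/01/01 'description'
p4 labels | awk '{print $2}' | while read -r name; do
    # Dump the spec, flip Options to locked, and write it back:
    p4 label -o "$name" | sed 's/^Options:.*/Options: locked/' | p4 label -i
done
```

Run it once against a test label first to confirm the spec round-trips cleanly on your server version.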

The relevant Perforce documentation is the p4 label page of the Command Reference.
To dump a label spec to standard output:
p4 label -o labelname
To read a label spec from standard input:
p4 label -i
In between, you'll need to process the text to set "Options: locked". One way is to redirect standard output to a text file (p4 label -o labelname > labelspec.txt), process the file in your chosen manner, and then feed it back in on standard input (p4 label -i < labelspec.txt).

Related

How to read a file without checking out in perforce

I'm writing a syntax check tool to parse several files on different branches.
Is there a way for me to read the contents without checking out the file?
The tool is written in Perl.
`p4 print //depot/path/to/file`;
(Usual requirements for running a p4 command apply -- make sure the p4 executable is in your PATH, make sure you're authenticated with p4 login, make sure you're connecting to the right server, etc.)
See p4 help print for more info on the print command -- you might find the -q and/or -o flags helpful depending on what exactly you need to do with the output.

Perforce: Prevent keywords from being expanded when syncing files out of the depot?

I have a situation where I'd like to diff two branches in Perforce. Normally I'd use diff2 to do a server-side diff but in this case the files on the branches are so large that the diff2 call ends up filling up /tmp on my server trying to diff them and the diff fails.
I can't bring down my server to rectify this, so I'm looking at checking out the content to disk and using diff on the command line to inspect and compare it.
The trouble is: most of the files have RCS keywords in them that are being expanded.
I know I can remove keyword expansion from a file by opening the files for edit and removing the -k attribute in the process, but that seems a bit brute force. I was hoping I could just tell the p4 sync command not to expand the keywords on checkout, but I can't seem to find a way to do this. Is it possible?
As a possible alternative solution, does anyone know if you can tell p4 diff2 which directory to use for temporary space when you call it? If I could tell it to use abundant NAS space instead of /tmp on the Perforce server I might be able to make it work.
I'm using 2010.x version of Perforce if that changes the answer in any way.
There's no way I know of to disable keyword expansion on sync. Here's what I would try:
1) Create a branch spec between the two sets of files
2) Run "p4 files //path/to/files/... | cut -d '#' -f 1 > tmp"
The path above should be the right-hand side of the branch spec you created
3) p4 -x tmp diff2 -b <branchname>
This tells p4 to iterate over the lines of text in 'tmp' and treat each one as arguments to the command. I think /tmp on your server will get cleared in between files this way, preventing it from filling up.
I unfortunately don't have files large enough to test that it works, so this is entirely theoretical.
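Steps 2 and 3 together might look like the following sketch; the branch spec name "mybranch" and the depot path //depot/r2/... are hypothetical stand-ins for whatever you created in step 1.

```shell
# Step 2: build a file list. "p4 files" appends "#rev" to each depot path,
# so strip everything from the '#' onward:
p4 files //depot/r2/... | cut -d '#' -f 1 > tmp
# Step 3: run diff2 once per file rather than once for the whole tree,
# so the server's temp space is released between files:
p4 -x tmp diff2 -b mybranch
```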
To change the temp directory that p4d uses, just set TEMP or TMP to a different path and restart p4d. If you're on Windows, make sure to call 'p4 set -S perforce TMP=' to set the variable for the Perforce service; without the -S perforce you'll just set it for the current user.

When using stcmd co with the -vl flag, the file is checked out even on the second run of the command

Actually I have 2 different questions regarding the stcmd co with -vl flag:
1) When using stcmd co without the -vl flag, the file is checked out only the first time; if I run it again the file is skipped. BUT when I add the -vl flag to the stcmd co command, the file is checked out on each and every run. How can I avoid this? (I tried -f NCO, but then when using a different label the file was not checked out either.)
2) I had 1 file with 2 revisions. After I checked out the file by the label of the first revision using stcmd co, I ran stcmd co without any label specified in order to get the latest version, and got a message that the file is modified and therefore hasn't been checked out. Since I want to get only the changed files, I want to avoid the -force option. Is there any other way to force the file to be checked out?
Thanks
Three things need to be changed:
Check out by config label, not by view label: use -cfgl LABELNAME instead of -vl LABELNAME;
this will correctly identify the status of your local files in comparison to the given label.
Use a filter to check out only files that need to be checked out:
-filter MGIOU
(this means: All files, except those that are 'Current')
Do use force (the -o flag) to make sure the filter works as intended.
To sum it up, the command should look like this:
stcmd co -p "user:pwd@host:port/MyProject/MyView/" ... -o -filter MGIOU -cfgl "MY_LABEL" ...

CVS command to get brief history of repository

I am using the following command to get a brief history of the CVS repository.
cvs -d :pserver:User:Password@Repo rlog -N -d "StartDate < EndDate" Module
This works just fine except for one small problem: it lists all tags created on each file in that repository. I want the tag info, but only for the tags that were created in the specified date range. How do I change this command to do that?
I don't see a way to do that natively with the rlog command. Faced with this problem, I would write a Perl script to parse the output of the command, correlate the tags to the date range that I want and print them.
Another solution would be to parse the ,v files directly, but I haven't found any robust libraries for doing that. I prefer Perl for that type of task, and the parsing modules don't seem to be very high quality.
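For what it's worth, here is a rough sketch of that parse-and-correlate approach, in awk rather than Perl. It assumes the typical rlog output layout (a tab-indented block after "symbolic names:" mapping each tag to a revision number, followed by revision/date: lines per revision), and you would have to drop the -N flag so the tags are actually listed.

```shell
# Save as tagfilter.awk, then run something like:
#   cvs -d :pserver:User:Password@Repo rlog Module | \
#       awk -v start=2020/01/01 -v end=2020/12/31 -f tagfilter.awk
cat > tagfilter.awk <<'EOF'
/^symbolic names:/ { intags = 1; next }
intags && /^\t/    {                        # "\tTAG: 1.3" lines
    split($0, a, ": "); name = a[1]; sub(/^\t/, "", name)
    tags[a[2]] = tags[a[2]] " " name; next
}
intags             { intags = 0 }           # first non-tab line ends the block
/^revision /       { rev = $2 }
/^date: /          {                        # "date: YYYY/MM/DD HH:MM:SS; ..."
    d = substr($2, 1, 10)
    if (d >= start && d <= end && (rev in tags))
        printf "%s:%s\n", rev, tags[rev]
}
EOF
```

The date comparison works because YYYY/MM/DD strings sort lexicographically; older CVS servers that print dates as YYYY-MM-DD would need the separators normalized first.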

Unable to use SED to edit files fast

The file is initially
$ cat so/app.yaml
application: SO
...
I run the following command. I get an empty file.
$ sed s/SO/so/ so/app.yaml > so/app.yaml
$ cat so/app.yaml
$
How can I use sed to edit the file without ending up with an empty file?
$ sed -i -e's/SO/so/' so/app.yaml
The -i means in-place.
The > redirection opens (and truncates) the output file when the redirections are set up, i.e. before the command executes. Thus, the input file is truncated before sed ever reads it. This is a problem with all shell redirection, not just with sed.
Sheldon Young's answer shows how to use in-place editing.
You are using the wrong tool for the job. sed is a stream editor (that's why it's called sed), so it's for in-flight editing of streams in a pipe. ed OTOH is a file editor, which can do everything sed can do, except it works on files instead of streams. (Actually, it's the other way round: ed is the original utility and sed is a clone that avoids having to create temporary files for streams.)
ed works very much like sed (because sed is just a clone), but with one important difference: you can move around in files, whereas you can't move around in streams. So, commands in ed take an address parameter that tells ed where in the file to apply the command. In your case, you want to apply the command everywhere in the file, so the address parameter is just ",": "a,b" means "from line a to line b", the default for a is 1 (beginning-of-file) and the default for b is $ (end-of-file), so leaving them both out means "from beginning-of-file to end-of-file". Then comes the s (for substitute) and the rest looks much like sed.
So, your sed command s/SO/so/ turns into the ed command ,s/SO/so/.
And, again because ed is a file editor, and more precisely, an interactive file editor, we also need to write (w) the file and quit (q) the editor.
This is how it looks in its entirety:
ed -- so/app.yaml <<-HERE
,s/SO/so/
w
q
HERE
What happens in your case, is that executing a pipeline is a two-stage process: first construct the pipeline, then run it. > means "open the file, truncate it, and connect it to filedescriptor 1 (stdout)". Only then is the pipe actually run, i.e. sed is executed, but at this time, the file has already been truncated.
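You can watch this two-stage behavior with a scratch file (nothing here touches a real app.yaml):

```shell
# Recreate the file from the question in the current directory:
printf 'application: SO\n' > demo.yaml
# The > truncates demo.yaml before sed starts, so sed reads an empty file:
sed 's/SO/so/' demo.yaml > demo.yaml
wc -c < demo.yaml    # the file ends up empty (0 bytes)
```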
Some versions of sed also have a -i parameter for in-place editing of files, which makes sed behave a little more like ed. But relying on it is not advisable: first of all, it doesn't support all the features of ed, and more importantly, it is a non-standardized extension whose syntax differs between GNU sed and BSD sed and which is missing entirely from many other systems' sed. It's been a while since I used a non-GNU system, but the last time I did, neither Solaris nor OpenBSD nor HP-UX nor IBM AIX sed supported the -i parameter.
I believe that redirecting output into the same file you are editing is causing your problem.
You need to redirect standard output to a temporary file, and when sed is done, overwrite the original file with the temporary one.
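A minimal sketch of that temporary-file approach, using a scratch copy of the file from the question:

```shell
# Recreate the file from the question in the current directory:
mkdir -p so
printf 'application: SO\n' > so/app.yaml
# Write to a temp file first; replace the original only if sed succeeds:
sed 's/SO/so/' so/app.yaml > so/app.yaml.tmp && mv so/app.yaml.tmp so/app.yaml
cat so/app.yaml    # application: so
```

The && ensures the original file is left untouched if sed fails, which is exactly the safety that direct redirection lacks.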