When using stcmd co with the -vl flag, the file is checked out even the second time the command is executed

Actually I have two different questions regarding stcmd co with the -vl flag:
1) When using stcmd co without the -vl flag, the file is checked out only the first time; if I run the command again the file is skipped. BUT when I add the -vl flag to the stcmd co command, the file is checked out on each and every run. How can I avoid this? (I tried to run -f NCO, but then when using a different label the file was not checked out either.)
2) I had a file with two revisions. After I checked out the file by the label of the first revision using stcmd co, I ran stcmd co without any label specified in order to get the latest version, and got a message that the file is modified and therefore was not checked out. Since I want to get only the changed files, I want to avoid the -force option. Is there any other way to force the file to be checked out?
Thanks

Three things need to be changed:
1) Check out by configuration label, not by view label: use -cfgl LABELNAME instead of -vl LABELNAME.
This will correctly identify the status of your local files relative to the given label.
2) Use a filter to check out only files that need to be checked out:
-filter MGIOU
(this means: all files except those whose status is 'Current').
3) Do use force (the -o flag) to make sure the filter works as intended.
To sum it up, the command should look like this:
stcmd co -p "user:pwd#host:port/MyProject/MyView/" ... -o -filter MGIOU -cfgl "MY_LABEL" ...

Related

Can we wget from a file list and rename the destination files?

I have this wget command:
sudo wget --user-agent='some-agent' --referer=http://some-referrer.html -N -r -nH --cut-dirs=x --timeout=xxx --directory-prefix=/directory/for/downloaded/files -i list-of-files-to-download.txt
-N will check whether there is actually a newer file to download.
-r will turn recursive retrieving on.
-nH will disable the generation of host-prefixed directories.
--cut-dirs=X will avoid the generation of the host's subdirectories.
--timeout=xxx will, well, timeout :)
--directory-prefix will store files in the desired directory.
This works nicely, no problem.
Now, to the issue:
Let's say my list-of-files-to-download.txt has these kinds of URLs:
http://website/directory1/picture-same-name.jpg
http://website/directory2/picture-same-name.jpg
http://website/directory3/picture-same-name.jpg
etc...
You can see the problem: on the second download, wget will see that we already have a picture-same-name.jpg, so it won't download the second or any of the following ones with the same name. I cannot mirror the directory structure because I need all the downloaded files to be in the same directory. I can't use the -O option because it clashes with -N, and I need that. I've tried to use -nd, but it doesn't seem to work for me.
So, ideally, I need to be able to:
a.- wget from a list of URLs the way I do now, keeping my parameters.
b.- get all files in the same directory and be able to rename each file.
Does anybody have any solution to this?
Thanks in advance.
I would suggest two approaches.
Use the "-nc" or "--no-clobber" option. From the man page:
-nc
--no-clobber
If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including -nc. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved.
When running Wget without -N, -nc, -r, or -p, downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named file.1. If that file is downloaded yet again, the third copy will be named file.2, and so on. (This is also the behavior with -nd, even if -r or -p are in effect.) When -nc is specified, this behavior is suppressed, and Wget will refuse to download newer copies of file. Therefore, "no-clobber" is actually a misnomer in this mode---it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented.
When running Wget with -r or -p, but without -N, -nd, or -nc, re-downloading a file will result in the new copy simply overwriting the old. Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.
When running Wget with -N, with or without -r or -p, the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file. -nc may not be specified at the same time as -N.
A combination with -O/--output-document is only accepted if the given output file does not exist.
Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been retrieved from the Web.
As you can see from this man page entry, the behavior might be unpredictable/unexpected. You will need to see if it works for you.
Another approach would be to use a bash script. I am most comfortable using bash on *nix, so forgive the platform dependency. However, the logic is sound, and with a bit of modification you can get it to work on other platforms/shells as well.
Sample bash script (reading the list line by line is safer than iterating over `cat` output, which breaks on spaces):
while IFS= read -r url; do
    wget <all your flags except the -i flag> "$url" -O /path/to/custom/directory/filename
done < list-of-files-to-download.txt
You can modify the script to download each file to a temporary file, parse $url to get the filename from the URL, check whether the file already exists on disk, and then decide whether to rename the temp file to the name that you want.
This offers much more control over your downloads.
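As a concrete sketch of that idea (the destination directory and the naming scheme are assumptions, not part of the original question), one way to get collision-free names is to fold the URL's directory path into the filename. Note that -O cannot be combined with -N, so timestamp checking would have to be handled separately:

```shell
#!/bin/sh
# Derive a unique local name from each URL by folding the path into the
# filename: directory1/picture.jpg -> directory1_picture.jpg
url_to_name() {
    # strip the scheme and host, then replace remaining slashes
    echo "$1" | sed -e 's|^[a-z]*://[^/]*/||' -e 's|/|_|g'
}

# Stand-in list; in real use this would be list-of-files-to-download.txt
printf '%s\n' \
    'http://website/directory1/picture-same-name.jpg' \
    'http://website/directory2/picture-same-name.jpg' > /tmp/demo-list.txt

while IFS= read -r url; do
    name=$(url_to_name "$url")
    echo "$url -> $name"
    # Real download (network required); note -O conflicts with -N:
    # wget --user-agent='some-agent' --timeout=30 -O "/downloads/$name" "$url"
done < /tmp/demo-list.txt
```

Because the directory name is preserved in the filename, all three picture-same-name.jpg files land in one directory without overwriting each other.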

How to check if a file is already marked for add?

I'd like to check if a file is already locally marked for add, without adding the said file. Is this check possible with a Perforce command?
If you use the -n option, it will preview the add operation, telling you which files would be added without actually adding them.
p4 add -n testfile
I personally use the two commands below:
p4 diff -sa    (lists opened files that differ from the depot revision)
p4 diff -se    (lists unopened files that differ from the depot revision)
You can check if a file is marked for add by running the fstat command. If the command output contains the string "action add", that means the file is already marked for add.
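In a script, that fstat check might look like the following; the fstat output here is simulated for illustration, since the real command requires a connected client workspace:

```shell
#!/bin/sh
# Simulated "p4 fstat" output; a file open for add includes "... action add".
# In real use: fstat_output=$(p4 fstat testfile 2>/dev/null)
fstat_output='... depotFile //depot/testfile
... clientFile /ws/testfile
... action add
... change default'

if printf '%s\n' "$fstat_output" | grep -q '^\.\.\. action add$'; then
    echo "marked for add"
else
    echo "not marked for add"
fi
```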

Perforce p4 sync command fails for a folder which has a space in its name

There is a folder with the name Test Logs. As can be seen, there is a space between Test and Logs.
When I try to get it locally using the sync command in a Perl script, it fails.
The script has the code:
system("p4 sync -f //depot/Test Logs/OnTargetLogs/...");
I get the following error:
> //depot/Test - no such file(s).
> Logs/OnTargetLogs/... - no such file(s).
Quote the argument maybe?
system("p4 sync -f \"//depot/Test Logs/OnTargetLogs/...\"");
– Sobrique
What you said worked. I also found another way of doing this:
my @a1 = ("p4", "sync", "-f", "//depot/Test Logs/OnTargetLogs/...");
system(@a1);
– Vishal Khemani

Perforce: Prevent keywords from being expanded when syncing files out of the depot?

I have a situation where I'd like to diff two branches in Perforce. Normally I'd use diff2 to do a server-side diff but in this case the files on the branches are so large that the diff2 call ends up filling up /tmp on my server trying to diff them and the diff fails.
I can't bring down my server to rectify this, so I'm looking at checking out the content to disk and using diff on the command line to inspect and compare it.
The trouble is: most of the files have RCS keywords in them that are being expanded.
I know I can remove keyword expansion from a file by opening the files for edit and removing the -k attribute in the process, but that seems a bit brute force. I was hoping I could just tell the p4 sync command not to expand the keywords on checkout. I can't seem to find a way to do this? Is it possible?
As a possible alternative solution, does anyone know if you can tell p4 diff2 which directory to use for temporary space when you call it? If I could tell it to use abundant NAS space instead of /tmp on the Perforce server I might be able to make it work.
I'm using 2010.x version of Perforce if that changes the answer in any way.
There's no way I know of to disable keyword expansion on sync. Here's what I would try:
1) Create a branch spec between the two sets of files
2) Run "p4 files //path/to/files/... | cut -d '#' -f 1 > tmp"
The path above should be the right-hand side of the branch spec you created.
3) p4 -x tmp diff2 -b
This tells p4 to iterate over the lines of text in 'tmp' and treat them as arguments to the command. I think /tmp on your server will get cleared in between each file this way, preventing it from filling up.
I unfortunately don't have files large enough to test that it works, so this is entirely theoretical.
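The file-list step can be illustrated without a server: p4 files prints lines like "//depot/file.c#7 - edit change 100 (text)", and cut strips everything from the '#' on, leaving bare depot paths. The output and branch spec name below are placeholders, and step 3 needs a live server, so it is only shown as a comment:

```shell
#!/bin/sh
# Simulated "p4 files" output (real use: p4_files_output=$(p4 files //path/to/files/...))
p4_files_output='//depot/big/file1.bin#12 - edit change 100 (binary)
//depot/big/file2.bin#3 - add change 101 (binary)'

# Strip the revision suffix to get bare depot paths, one per line:
printf '%s\n' "$p4_files_output" | cut -d '#' -f 1 > /tmp/filelist
cat /tmp/filelist

# Step 3, against a live server (branch spec name is a placeholder):
# p4 -x /tmp/filelist diff2 -b my-branch-spec
```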
To change the temp directory that p4d uses, just set TEMP or TMP to a different path and restart p4d. If you're on Windows, make sure to call 'p4 set -S perforce TMP=' to set the variable for the Perforce service; without '-S perforce' you'll just set it for the current user.

How do I lock a Perforce label from the command line?

I recently imported a VSS repository into Perforce. This included hundreds of labels, which the developer that was using VSS (now using Perforce) relies upon. I accidentally deleted them and had to do the import again. To prevent such accidental deletion in the future, I want to lock all the labels, but doing it through P4V would take forever. I would like to write a script to do it for me.
I can get all the labels into a text file with the p4 labels command, and with some text editor macro processing I could build up a script to lock them all. I just need to know the command(s) to do this.
This can be done by automating the process of editing the label spec. The process is as follows:
Send the label spec to standard output with the -o switch.
Pipe that output to a utility that can manipulate it and set the label's "Options" to "locked". In this case, the Unix utility sed gets the job done. (I'm on Windows, so I used this port. Others can be found in this answer.)
Pipe this updated spec back into the label command with the -i switch.
Put it all together and you get a command that looks like this.
p4 label -o <label name> | sed 's/^Options:.*/Options: locked/' | p4 label -i
The relevant Perforce doc is here.
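To lock every label in one pass, the same pipeline can be wrapped in a loop over p4 labels output (untested against a live server; the awk field assumes the usual "Label NAME date 'description'" output format). The sed substitution itself can be checked on a sample spec without a server:

```shell
#!/bin/sh
# The sed step, demonstrated on a sample label spec (no server needed):
spec='Label:  MYLABEL
Owner:  dev
Options:        unlocked noautoreload
Description:    imported from VSS'

printf '%s\n' "$spec" | sed 's/^Options:.*/Options: locked/'

# Against a live server, the loop over all labels might look like this:
# p4 labels | awk '{print $2}' | while IFS= read -r label; do
#     p4 label -o "$label" | sed 's/^Options:.*/Options: locked/' | p4 label -i
# done
```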
To dump a label spec to standard output:
p4 label -o labelname
To read a label spec from standard input:
p4 label -i
In between, you'll need to process the text to set 'Options: locked', probably by redirecting standard output to a text file (p4 label -o labelname > labelspec.txt), processing the file in your chosen manner, and then reading it back in from standard input (p4 label -i < labelspec.txt).