Use wildcards with perforce move - version-control

I would like to move files filtered using wildcards into subfolders; however, p4 move does not accept my usage of the wildcard.
Given this structure:
filea_mk1.txt
fileb_mk2.txt
mk1/
mk2/
To move all files, something like p4 move ./... ./mk1/... works; however, when I replace the source argument with wildcards, I get:
p4 edit filea_mk1.txt
p4 move *_mk1*.* ./mk1/...
Usage: move [-c changelist#] [ -f ] [ -k ] [-t type] from to
Missing/wrong number of arguments.
I have thought about using p4 fstat, as that does accept the wildcards, and could then pass the filenames to xargs.
p4 fstat *_mk1*.*
However, I cannot get the -A options right to show only the client names.
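For the record, the pipeline I was aiming for was roughly this, with -T clientFile being my best guess at limiting the output (untested):
p4 fstat -T clientFile "*_mk1*.*" | sed -n 's/^\.\.\. clientFile //p' |
while read f; do
    # open each matching file and move it into mk1/
    p4 edit "$f" && p4 move "$f" "mk1/$(basename "$f")"
done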
TL;DR
Is there a way to move *_mk1*.* into the mk1 folder and *_mk2*.* into the mk2 folder, using Perforce commands?

Instead of this:
p4 move *_mk1*.* ./mk1/...
Do this:
p4 move "*_mk1*.*" "mk1/*_mk1*.*"
Note the double quotes to keep the shell from expanding the asterisks.
Alternatively, this simpler form will probably work fine unless the paths are trickier than your example makes them appear:
p4 move ..._mk1... mk1/..._mk1...

I believe that what's happening here is that your operating-system shell is expanding the asterisk wildcards in your command, so the actual command that the Perforce server sees is:
p4 move filea_mk1.txt fileb_mk1.txt ./mk1/...
That command has three file-spec arguments rather than the expected two, hence the Usage: message you receive.
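A quick way to see this for yourself is to put echo in front of the command; the shell prints what p4 would actually have received, for example (assuming two files match the pattern):
$ echo p4 move *_mk1*.* ./mk1/...
p4 move filea_mk1.txt fileb_mk1.txt ./mk1/...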
By using operating-system filenames (*_mk1*.* and ./mk1/...), you are providing the file-spec arguments in what Perforce calls "local" syntax.
But this causes your operating-system shell to think it should expand the wildcards, when what you want is for the Perforce server to expand them.
You can try different quoting strategies to defeat that local wildcard expansion, but this is a situation where you can benefit from using one of the other forms of file-spec syntax: "client" syntax or "depot" syntax.
For example, suppose that your client root is actually located in a section of the depot that begins with the path //depot/projects/project1/main/.
Then, you could specify your command as:
p4 move //depot/projects/project1/main/*_mk1*.* //depot/projects/project1/main/mk1/*_mk1*.*
In this case, the file-spec arguments contain no syntax that your operating-system shell will expand, so the shell passes them unaltered and the Perforce server performs the wildcard expansion instead.
Note that I also adjusted your wildcards slightly. The Perforce move command, like the integrate command and several other commands, is typically happiest when the wildcards on the left side match up with the wildcards on the right side, so that when it constructs the new destination file names it can substitute the wildcarded elements one-for-one from the source file names.
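Putting your TL;DR together, then, with quotes so the shell leaves the wildcards to the server (run from the folder in your example; files must be open before they can be moved):
p4 edit "*_mk1*.*" "*_mk2*.*"
p4 move "*_mk1*.*" "mk1/*_mk1*.*"
p4 move "*_mk2*.*" "mk2/*_mk2*.*"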

Related

Include only specific file patterns in meld comparison

I want to compare directories with Meld, but only specific file patterns.
E.g., only *.c;*.cc;*.icc;*.h files.
Meld can use File Filters, but I could only use them to exclude specific file patterns, which does not seem useful here.
Can an inclusion filter be applied?
I tried the idea of "double negation": excluding "everything but *.c;*.cc;*.icc;*.h", which in effect would include only those patterns.
I tried File Filters that worked well for listing "everything but ..." using ls -d -- <my_filter> at the command line. I assume this is a necessary (but not sufficient) condition for any filter to work in Meld.
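For concreteness, the shell-level test that did work looks like this (extglob enabled, as noted below, run in the directory being compared):
$ ls -d -- !(*.c|*.cc|*.icc|*.h)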
None of these worked as Meld File Filters:
!(*.c|*.h|*.cc)
!(*#(.c|.h|.cc))
!(*.#(c|h|cc))
Note: I do not mean to use any type of Text Filter, since I do not care at this point about the contents of files, but only about the file names.
Note: I am using bash, and
$ shopt extglob
extglob on

Beyond Compare as merge tool on P4Eclipse plugin

I'm trying to configure the P4Eclipse plugin (2014.1.965331) to use Beyond Compare as the external merge tool.
I have configured Bcomp.exe as the Perforce Merge tool in Preferences -> Team -> Perforce -> External Tools. Now, when a resolve is requested, Beyond Compare opens, but without the content of the files.
I know there is a list of arguments that needs to be passed (in P4V it is passed in the argument line as %1 %2 %b %r), as documented here: Using Beyond Compare with Version Control Systems
But I've had no luck with the arguments; it seems the trick is to create a .bat file that calls Bcomp.exe with the additional arguments and to set the external merge tool to run that .bat file.
Is there any way to configure it to work with Beyond Compare? (For now, only a 2-way merge is required.)
The list of arguments is fixed in the P4Eclipse code.
You're right, you're going to have to write a .bat/.cmd to adjust the parameter list.
The P4Eclipse code is in our Workshop.
The class that runs the command:
https://swarm.workshop.perforce.com/projects/perforce-software-p4eclipse/files/2014-1/src/3.7/plugins/com.perforce.team.ui/src/com/perforce/team/ui/p4merge/MergeRunner.java
Note method getBuilder() that makes the argument list. The constructor too. That's what we've got for documentation right now.
What it passes to the constructor depends on what you're doing - like merge vs diff.
For example, see the "new MergeRunner(...)" in
https://swarm.workshop.perforce.com/projects/perforce-software-p4eclipse/files/2014-1/src/3.7/plugins/com.perforce.team.ui/src/com/perforce/team/ui/p4merge/P4MergeResolveAction.java
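As a starting point, a wrapper along these lines might do it; the argument order below is only a guess, so check getBuilder() and reorder %1..%4 to match what P4Eclipse actually passes (the BComp.exe path is also just an example):
@echo off
rem Guess: P4Eclipse passes %1=base %2=theirs %3=yours %4=result
rem Beyond Compare expects: left right center(base) output
"C:\Program Files\Beyond Compare 4\BComp.exe" %2 %3 %1 %4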

vifm search files in subfolders

How can I search for files just like with the / command, but recursively scanning subfolders?
Or maybe there are other approaches to get a list of files that match some pattern in the current folder, including all subfolders.
:find command
There is the :fin[d] command for that. Internally it invokes the find utility (configurable via the 'findprg' option), so you can do everything find is capable of. That said, in most cases the simple form of the command suffices:
:find *.sh
Note that by default the argument is treated as a file name pattern (find's -name option), which is different from the regular expressions accepted by /. For searching via regexp, use:
:find -regex '.*_.*'
If you want to scan only specific subfolders, select them before running the command and the search will be limited to those directories.
The :find command brings up a menu with the search results. If you want to process the results like regular files (e.g. delete, copy, move), hit b to change the list representation.
Alternative that uses /
Alternatively, you can populate the current view with a list of files in all subdirectories using a command like the following (see %u):
:!find%u
and then use /, although this might be less efficient.

Using diff3 where filenames contain a dash (-)

I'm trying to use diff3 in this way
diff3 options... mine older yours
My problem is that I probably can't use it, since all three of my files contain a dash in their names.
The manual mentions:
At most one of these three file names may be `-', which tells diff3 to read the standard input for that file.
so I probably have to rename the files before running diff3.
If you know of a better solution or a workaround, please let me know. Thank you!
At most one of these three file names may be `-', which tells diff3 to read the standard input for that file.
It does not state that your filenames must not contain dash symbols. It simply says that, if you want, you can put - in place of one of the names, in which case standard input will be read for that file.
So you can have as many dashes in your filenames as you like, and diff3 should work just fine.
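For example, this should work as-is (made-up file names):
diff3 my-version.txt older-version.txt your-version.txt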
However, on Windows, wrapping filenames in double quotes to escape space characters does not work, and I failed to find a suitable workaround. You can, however, automate the process of renaming the files (if the files are relatively small, this is not even too inefficient):
@echo off
copy %1 tempfile_1.txt
copy %2 tempfile_2.txt
copy %3 tempfile_3.txt
"C:\Program Files (x86)\KDiff3\bin\diff3.exe" -E tempfile_1.txt tempfile_2.txt tempfile_3.txt
del tempfile_1.txt tempfile_2.txt tempfile_3.txt
Put this in a file like diff3.cmd, then run diff3.cmd "first file.txt" "second file.txt" "third file.txt".
P.S. Moving the files would be more efficient (if they are on the same disk volume as the script, which they are not in your case); you could even move them back to where they were initially, but for some time they would not be present in their original folder.

SAS - Reading multiple compressed data files

I hope you are all well.
So my question is about the procedure to open multiple raw data files that are compressed.
My files' names are ordered, so I have, for example: o_equities_20080528.tas.zip, o_equities_20080529.tas.zip, o_equities_20080530.tas.zip, ...
Thank you all in advance.
How much work this will be depends on whether:
You have enough space to extract all the files simultaneously into one folder
You need to be able to keep track of which file each record has come from (i.e. you can't tell just from looking at a particular record).
If you have enough space to extract everything and you don't need to track which records came from which file, then the simplest option is to use a wildcard infile statement, allowing you to import the records from all of your files in one data step:
infile "c:\yourdir\o_equities_*.tas" <other infile options as per individual files>;
This syntax works regardless of OS - it's a SAS feature, not shell expansion.
If you have enough space to extract everything in advance but you need to keep track of which records came from each file, then please refer to this page for an example of how to do this using the filevar option on the infile statement:
http://www.ats.ucla.edu/stat/sas/faq/multi_file_read.htm
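In case that page is unavailable: the gist of the filevar approach is a data step that reads a list of file paths and re-points its input at each one in turn. A minimal sketch, where the list file, lengths and fields are all hypothetical:
data combined;
  length fname source $ 200;
  /* hypothetical list with one extracted-file path per line */
  infile "c:\yourdir\filelist.txt" truncover;
  input fname $200.;
  source = fname; /* remember the origin of each record */
  /* dummy is a placeholder fileref; filevar supplies the real path */
  infile dummy filevar=fname end=done truncover;
  do until(done);
    input; /* fields as per individual files */
    output;
  end;
run;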
If you don't have enough space to extract everything in advance, but you have access to 7-zip or another archive utility, and you don't need to keep track of which records came from each file, you can use a pipe filename and extract to standard output. If you're on a Linux platform then this is very simple, as you can take advantage of shell expansion:
filename cmd pipe "nice -n 19 gunzip -c /yourdir/o_equities_*.tas.zip";
infile cmd <other infile options as per individual files>;
On Windows it's the same sort of idea, but as you can't use shell expansion, you have to construct a separate filename for each zip file, or use some of 7-zip's more arcane command-line options, e.g.:
filename cmd pipe "7z.exe e -an -ai!C:\yourdir\o_equities_*.tas.zip -so -y";
This will extract all files from all of the matching archives to standard output. You can narrow this down further via the 7-zip command if necessary. You will have multiple header lines mixed in with the data - you can use findstr to filter these out in the pipe before SAS sees them, or you can just choose to tolerate the odd error message here and there.
Here, the -an tells 7-zip not to read the zip file name from the command line, and the -ai tells it to expand the wildcard.
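For instance, if every header line began with a known literal, say DATE (hypothetical here), findstr could drop those lines before SAS reads the stream:
filename cmd pipe "7z.exe e -an -ai!C:\yourdir\o_equities_*.tas.zip -so -y | findstr /v /b DATE";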
If you need to keep track of what came from where and you can't extract everything at once, your best bet (as far as I know) is to write a macro to process one file at a time, using the above techniques and add this information while you're importing each dataset.
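A skeleton of that approach might look like the following; the macro name, dataset names and the 7-zip command are illustrative, and the input statement still needs the fields for your files:
%macro read_day(dt); /* dt is a date key like 20080528 */
  filename cmd pipe "7z.exe e -an -ai!C:\yourdir\o_equities_&dt..tas.zip -so -y";
  data work.d&dt;
    length source $ 8;
    infile cmd; /* other infile options as per individual files */
    input; /* fields as per individual files */
    source = "&dt"; /* track which file each record came from */
  run;
%mend read_day;
%read_day(20080528)
%read_day(20080529)
%read_day(20080530)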