How to Move/Copy Multiple Folders to Other Multiple Folders - solaris

I want to move multiple folders into other folders.
I have folders named by date, e.g. 20141101, and month folders such as Oct-2014.
Now I want to move the folders like this:
mv 201401* Jan-2014
mv 201402* Feb-2014
I have these folders for a whole year, so I want to move them in one command rather than running 12 commands.
Regards

A simple sequence of commands like this can be done on one line:
mv 201401* Jan-2014; mv 201402* Feb-2014; ....
Alternatively, you can write a script to perform this action, but I think the single line shown above will do the trick.
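If you do script it, a loop over the months saves the typing. A minimal sketch, assuming a POSIX-style shell such as ksh and that the Jan-2014 through Dec-2014 target folders already exist:
i=1
for mon in Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec; do
    mm=`printf '%02d' $i`       # zero-padded month: 01, 02, ... 12
    mv 2014${mm}* "${mon}-2014"
    i=`expr $i + 1`
done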

Related

Merging multiple PDF files into one per file name using PDFtk Pro

I have a situation where I need to merge files, again by file name. Right now, I have files in one folder like this:
A1.pdf,
A2.pdf,
B1.pdf,
C1.pdf,
C2.pdf,
C3.pdf.
The goal is to merge the files by file name, so that I end up with A.pdf, B.pdf, and C.pdf. I have tried different things in the batch file, but none have worked so far. Can you please help?
The real file names look like this:
115_11W0755_70258130_841618403_01.PDF
115_12W0332_70258122_202990692_01.PDF
115_12W0332_70258122_202990692_02.PDF
115_12W0332_70258122_202990692_03.PDF
115_14W0491_70258174_562605608_01.PDF
115_14W0491_70258174_562605608_02.PDF
115_14W0776_70258143_680477806_01.PDF
115_16W0061_70258083_942231888_01.PDF
115_16W0065_70258176_202990692_01.PDF
115_16W0065_70258176_202990692_02.PDF
The 3rd part (e.g. 70258083) is the element that is unique per batch. In other words, I want to merge the files per this element. From the file names listed above, there would be 6 merged PDF files.
I am using the batch script below to merge two files into one. I don't know how to tweak it to merge more than two files, or to leave a single file alone.
Please help.
setlocal enabledelayedexpansion
rem Strip the trailing _NN sequence number to get each file's group key,
rem then run pdftk once per group to merge every file sharing that key.
for %%# in (115_*.pdf) do (
    set n=%%~n#
    set n=!n:~0,-3!
    if not "!n!"=="!prev!" (
        pdftk !n!_*.pdf cat output C:\IN\_fileNames\Merge\Files\!n!.pdf
        set prev=!n!
    )
)

Extracting Multiple 7z Files Overwrites the Same Folder

I'm currently working on a project where I take multiple 7z files and extract the contents of each into a folder named the same way as the 7z file itself. I apologize if something like this has been answered already; I spent time trying to investigate this issue, but I can't seem to find anyone else who has had a similar one.
For example:
a1.7z -> <targetpath>/a1/<contents within a1.7z>
The following PowerShell line:
Get-ChildItem a1.7z | % {& "C:\Program Files\7-Zip\7z.exe" "x" $_.FullName "-o<targetpath>\a1" -y -r}
works like a dream, but only for one 7z file. However, whenever I extract a second 7z file, it won't create a new folder; it keeps adding to the first folder that was created, and a second folder is never made. When I manually highlight all of the 7z files I want to extract, right-click, and select "Extract to "*\"", it does what I would like it to do, but I can't figure out how to script this action. I should also mention that some of the 7z files, when extracted, can contain subfolders of the same name. I'm not sure if this is throwing off the recursion, but I'm assuming it might be.
Any help or advice on this topic would be greatly appreciated!
If you get all the .7z files as FileInfo objects (using Get-ChildItem), you can use Mathias's comment as one way to do this with the pipeline, but I recommend putting it inside a loop and looking for a better way to choose the folder names, e.g. "NofFolder_$_.BaseName", just in case more than one folder ends up with the same name.
It really depends on the format you want.
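A minimal sketch of that loop, assuming the default 7-Zip install path; $target is a hypothetical output root:
$target = 'C:\extracted'    # hypothetical output root
Get-ChildItem *.7z | ForEach-Object {
    # One output folder per archive, named after the archive itself.
    & "C:\Program Files\7-Zip\7z.exe" x $_.FullName "-o$target\$($_.BaseName)" -y
}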

How to compare with Robocopy and save files in different directory

I would like to compare two folders and save the differences between them in a third directory, keeping the original file tree.
Can I do this with Robocopy?
Thank you
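As far as I know, Robocopy only compares a source against a destination, so writing the differences to a third tree takes a script around it. A minimal PowerShell sketch with hypothetical paths, copying files that are new or changed in C:\new relative to C:\old into C:\diff while keeping the tree:
$a = 'C:\old'; $b = 'C:\new'; $out = 'C:\diff'    # hypothetical paths
Get-ChildItem $b -Recurse -File | ForEach-Object {
    $rel  = $_.FullName.Substring($b.Length)
    $twin = Join-Path $a $rel
    # Copy when the file is missing from $a or its content differs.
    if (-not (Test-Path $twin) -or
        (Get-FileHash $_.FullName).Hash -ne (Get-FileHash $twin).Hash) {
        $dest = Join-Path $out $rel
        New-Item -ItemType Directory -Path (Split-Path $dest) -Force | Out-Null
        Copy-Item $_.FullName $dest
    }
}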

What's the best way to perform a differential between a list of directories?

I am interested in comparing a previous list of directories with the current list, and setting up a script to do so, maybe in Perl or as a shell script.
Should I use something like diff? Programmatically, what would be an ideal way to do this? For example, say I write the diff to an output file: if there is no diff, exit; if there are results, I want to see them.
For example, say I have the following directories today:
/foo/bar/staging/abc
/foo/bar/staging/def
/foo/bar/staging/a1b2c3
The next day might look like this, where a directory has been added or renamed:
/foo/bar/staging/abc
/foo/bar/staging/def
/foo/bar/staging/ghi
/foo/bar/staging/a1b2c4
There might be better ways, but the way I typically do something like this is to run a find command in each directory root and pipe the output to separate files. You can then diff the files using the diff tool of your choice. If you want to filter out certain directories or files, you can add some grep or grep -v commands to the pipeline, or experiment with options on the find command.
The other main option is to find a diff tool that offers directory/folder comparisons. Most of the good ones support this, but I like the command-line method because you get more control over what you're diffing.
cd /my/directory/one
find . -print | sort > /temp/one.txt
cd /my/directory/two
find . -print | sort > /temp/two.txt
diff /temp/one.txt /temp/two.txt
Also check the inotifywait command; it allows you to monitor files in real time.
You might also consider the find command using the -newer switch.
The usage is:
find . -newer timefile.txt -print
The -newer switch makes find return a list of files that were created or updated after the specified file's modification time. In the example above, any file created or updated after timefile.txt would be returned. You'd have to create the timefile.txt file, most likely once per day. Some versions of find have variations of -newer that compare against other timestamps for a file (last modified, last accessed, last created, etc.).
This technique would not report a file that was deleted, however. A daily diff of the file listings could report that.
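A minimal sketch of the daily snapshot-and-diff approach, using hypothetical paths (DIR is the tree to watch, SNAP the stored listing):
#!/bin/sh
DIR=/foo/bar/staging             # hypothetical tree to watch
SNAP=/tmp/staging-snapshot.txt   # hypothetical snapshot file
find "$DIR" -print | sort > "$SNAP.new"
# diff prints added (>) and removed (<) entries; it is silent if nothing changed.
[ -f "$SNAP" ] && diff "$SNAP" "$SNAP.new"
mv "$SNAP.new" "$SNAP"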

A command line or a script that can display each version of each element archived in a VOB since the beginning?

For example,
I am in a vob called: avob
I add a folder to source control:
avob/afolder
I add two files, afile1 and afile2, to source control:
avob/afolder/afile1
avob/afolder/afile2
I uncatalog the file afile2.
I add afile3 to source control:
avob/afolder/afile3
I would like a command line or a script that can display each version of each element archived since the beginning, in this case:
avob#version1
avob#version2
avob#version2/afile1#version1
avob#version2/afile2#version1
avob#version3
avob#version3/afile1#version1
avob#version4
avob#version4/afile1#version1
avob#version4/afile3#version1
The only command which comes close to what you are looking for would be
ct lshist vob:\yourVob
which would list all the events for all versions of all files (add to source control, rmname, merges, ...). But that would involve a lot of parsing of a huge log file if your vob has a few years of history.
A script could scan the entire VOB for all its elements/history and generate a report.
If you wanted to script it, you'd start by collecting version information on the root dir of the vob:
cleartool lsvtree -all /vobs/myvob/
It would list all the versions of the root dir of the VOB. You could then lsvtree every single version of that dir, recursing into the directory versions and keeping track of every element they catalog, until you have all the elements and versions recorded. It could use a lot of memory.
It will take a long time, as Von points out.
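A rough sketch of that scan, assuming a Unix view into a hypothetical /vobs/myvob; instead of recursing through directory versions by hand, it leans on cleartool find to enumerate the elements and then lists every version of each one (exact output format varies by ClearCase version):
#!/bin/sh
VOB=/vobs/myvob    # hypothetical vob path; run inside a view
# Enumerate every element in the vob, then print its full version tree.
cleartool find "$VOB" -all -print |
while read -r elem; do
    cleartool lsvtree -all -short "$elem"
done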