I have a program that outputs timesheets as separate .xps files into a folder. I am looking for a way to use the command line to print all of these files at once. Since there could be hundreds of these files, it is also important that they be printed as one print job so that other documents aren't printed in the middle of them.
I have been searching on and off for about two months for a way to do this. So far I have come up with nothing. I would appreciate any advice on how to do this.
Thanks.
I ended up converting my files to PDFs. Then I downloaded pdftk, a command-line tool that can combine multiple PDF files into one. Now I can just run pdftk and then print the combined file. It's unfortunate that there isn't a better way to handle this in Windows, but at least it works.
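For anyone trying the same thing, the two steps look roughly like this. This is only a sketch: the folder, the Reader path, and the /t print switch are assumptions on my part, and any PDF viewer with a command-line print option would do.

rem Combine every timesheet PDF in the folder into one file (pdftk expands the wildcard itself)
pdftk C:\timesheets\*.pdf cat output C:\timesheets\combined.pdf

rem Send the combined file to a named printer as a single job via Acrobat Reader's /t switch
"C:\Program Files (x86)\Adobe\Reader 11.0\Reader\AcroRd32.exe" /t "C:\timesheets\combined.pdf" "My Printer"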
I've checked "Similar questions" and searched quite a bit, but I can't seem to find a way to combine the snippets I've already figured out; it would be awesome if someone could help.
I'm using pdftk, optionally run through PowerShell.
I have two .pdf files (e.g. A = 1000 pages, B = 5000 pages) which I need to combine in a specific way to generate a new .pdf file. In detail, I need pages 1-3, 4-6, [...] of file A merged with pages 1-4, 5-8, [...] of file B, with a blank page between 1-3 and 4-6.
So far I've figured out how to burst the files, add a blank page, and combine them into a new .pdf file. Yet I'm only able to do that for one of the needed documents at a time (a new file with 8 pages):
pdftk fileC.pdf fileD.pdf cat output fileE.pdf
pdftk A=fileE.pdf B=blankpage.pdf cat A1-1 B1-1 A2-4 output conclusion.pdf
Now I'm wondering: is there a way to output the complete file with a single command? Otherwise I'd have to repeat this for every merged chunk of the two long files.
Thanks in advance!
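One way to get the complete file from a single pdftk invocation is to build the whole cat specification in PowerShell first and splat it into one call. This is only a rough sketch: the chunk pattern (3 pages of A, one blank page, 4 pages of B, matching the 8-page example), the page counts, and the file names A.pdf / B.pdf / blank.pdf are all assumptions taken from the question, and leftover pages at the end are ignored.

$pagesA = 1000
$pagesB = 5000
$spec = @()
$a = 1; $b = 1
while (($a + 2) -le $pagesA -and ($b + 3) -le $pagesB) {
    $spec += "A$a-$($a + 2)"   # next three pages of A
    $spec += 'C1'              # the single blank page
    $spec += "B$b-$($b + 3)"   # next four pages of B
    $a += 3
    $b += 4
}
# One pdftk call with the full specification, i.e. cat A1-3 C1 B1-4 A4-6 C1 B5-8 ...
pdftk A=A.pdf B=B.pdf C=blank.pdf cat @spec output combined.pdf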
I have dir1 and dir2, which have subfolders and files of the same names. Both folders have roughly 1800 items, and I need to compare them to find which files are different. I need to be able to report the names of any files that are either in one and not the other, or in both but different.
I have used tools such as WinMerge, which can spot the differences in under a minute. However, I am trying to automate this process, so being able to do it in PowerShell or as a batch command would be ideal.
From a PowerShell standpoint, my searches have suggested pulling the hash of each file and comparing them, which works, but takes forever due to the size of the directories.
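For reference, this is roughly what I've been doing, except with a size comparison first so that only same-sized files get hashed (dir1/dir2 as above; the output format is just for illustration):

$base1 = (Resolve-Path 'dir1').Path
$base2 = (Resolve-Path 'dir2').Path

# Index dir1 by path relative to its root
$left = @{}
Get-ChildItem $base1 -Recurse -File | ForEach-Object {
    $left[$_.FullName.Substring($base1.Length)] = $_
}

# Walk dir2, reporting files that are missing from dir1 or different
Get-ChildItem $base2 -Recurse -File | ForEach-Object {
    $rel = $_.FullName.Substring($base2.Length)
    if (-not $left.ContainsKey($rel)) {
        "only in dir2: $rel"
    }
    else {
        $twin = $left[$rel]
        if ($twin.Length -ne $_.Length) {
            "different: $rel"    # sizes differ, no hashing needed
        }
        elseif ((Get-FileHash $twin.FullName).Hash -ne (Get-FileHash $_.FullName).Hash) {
            "different: $rel"    # same size, different contents
        }
        $left.Remove($rel)
    }
}

# Anything still in the index exists only in dir1
$left.Keys | ForEach-Object { "only in dir1: $_" }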
If anyone could steer me in the right direction or suggest how I should approach this, it would be much appreciated.
WinMerge has a CLI which should give you exactly what you need.
I hope you are all well.
So my question is about the procedure for opening multiple compressed raw data files in SAS.
My file names are ordered, so I have, for example: o_equities_20080528.tas.zip, o_equities_20080529.tas.zip, o_equities_20080530.tas.zip, ...
Thank you all in advance.
How much work this will be depends on whether:
You have enough space to extract all the files simultaneously into one folder
You need to be able to keep track of which file each record has come from (i.e. you can't tell just from looking at a particular record).
If you have enough space to extract everything and you don't need to track which records came from which file, then the simplest option is to use a wildcard infile statement, allowing you to import the records from all of your files in one data step:
infile "c:\yourdir\o_equities_*.tas" <other infile options as per individual files>;
This syntax works regardless of OS - it's a SAS feature, not shell expansion.
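For example, a complete data step along these lines might look like the following; the delimiter, firstobs and input layout are placeholders, so substitute whatever matches your individual files:

data all_equities;
    /* one pass over every extracted .tas file */
    infile "c:\yourdir\o_equities_*.tas" dlm=',' dsd truncover firstobs=2;
    input symbol :$8. price volume;
run;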
If you have enough space to extract everything in advance but you need to keep track of which records came from each file, then please refer to this page for an example of how to do this using the filevar option on the infile statement:
http://www.ats.ucla.edu/stat/sas/faq/multi_file_read.htm
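As a rough sketch of the filevar approach described there (the date range and the input statement are placeholders):

data all_equities;
    length source $200;
    do d = '28may2008'd to '30may2008'd;
        /* source keeps the originating file name on every record */
        source = cats("c:\yourdir\o_equities_", put(d, yymmddn8.), ".tas");
        infile dummy filevar=source end=done truncover;
        do while (not done);
            input line $char200.;   /* replace with the real input layout */
            output;
        end;
    end;
    drop d;
run;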
If you don't have enough space to extract everything in advance, but you have access to 7-zip or another archive utility, and you don't need to keep track of which records came from each file, you can use a pipe filename and extract to standard output. If you're on a Linux platform then this is very simple, as you can take advantage of shell expansion:
filename cmd pipe "nice -n 19 gunzip -c /yourdir/o_equities_*.tas.zip";
infile cmd <other infile options as per individual files>;
On Windows it's the same sort of idea, but as you can't use shell expansion, you have to construct a separate filename statement for each zip file, or use some of 7-Zip's more arcane command-line options, e.g.:
filename cmd pipe "7z.exe e -an -ai!C:\yourdir\o_equities_*.tas.zip -so -y";
This will extract all files from all of the matching archives to standard output. You can narrow this down further via the 7-zip command if necessary. You will have multiple header lines mixed in with the data - you can use findstr to filter these out in the pipe before SAS sees them, or you can just choose to tolerate the odd error message here and there.
Here, the -an tells 7-zip not to read the zip file name from the command line, and the -ai tells it to expand the wildcard.
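For example, if every header line begins with a known literal (DATE here is purely a placeholder), findstr's /v /b /c:"..." switches will drop lines starting with that literal before SAS sees them:

filename cmd pipe '7z.exe e -an -ai!C:\yourdir\o_equities_*.tas.zip -so -y | findstr /v /b /c:"DATE"';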
If you need to keep track of what came from where and you can't extract everything at once, your best bet (as far as I know) is to write a macro that processes one file at a time using the above techniques, adding the source information as you import each dataset.
I have several files in a GridFS document store, and what I'd like to do is pipe this data into a zip file via stdin in Node.js, so that I end up with a zip file containing all of these files.
Now my question is: how can I give the files a valid filename inside the zip file? I think I need to emulate/fake a file header containing the filename?
Any help is appreciated!
Thanks
I had problems writing zip files with Node.js not long ago, and I ended up doing something similar to what is described in Zip archives in node.js.
I can't help you directly with your problem, but at least I hope I can point out some things:
Don't try to use node-archive. Even though the description says it allows you to create zip files, the moment I read the source code (since documentation is nonexistent) I realized that's just not true. It only exposes methods for reading.
Using zip by spawning a process, as recommended in the linked post, seems to be the best way. Something that would work is copying the files to a local folder with whatever names you desire, calling the zip command, and then deleting the files afterwards.
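To illustrate that workaround, here is a minimal sketch. It assumes you already have each GridFS file as a readable stream and that the zip command is available on the PATH; the function name and file layout are made up for the example.

var fs = require('fs');
var os = require('os');
var path = require('path');
var execFile = require('child_process').execFile;

// files: [{ name: 'report.txt', stream: <Readable from GridFS> }, ...]
function zipStreams(files, zipPath, callback) {
  var tmp = fs.mkdtempSync(path.join(os.tmpdir(), 'zip-'));
  var pending = files.length;

  files.forEach(function (file) {
    // the temp copy's name is what ends up inside the zip
    var out = fs.createWriteStream(path.join(tmp, file.name));
    file.stream.pipe(out).on('finish', function () {
      if (--pending > 0) return;
      // -j junks the temp folder path, so only the bare file names are stored
      var args = ['-j', zipPath].concat(files.map(function (f) {
        return path.join(tmp, f.name);
      }));
      execFile('zip', args, function (err) {
        // delete the temporary copies whether or not zip succeeded
        files.forEach(function (f) { fs.unlinkSync(path.join(tmp, f.name)); });
        fs.rmdirSync(tmp);
        callback(err);
      });
    });
  });
}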
The other option, which seems OK, is to use zipper (https://github.com/rubenv/zipper, though it's easier to just install it from npm). The reason I'm reluctant to use it is that there's not much flexibility: it seems to have been written in a day, and it hasn't been modified since the first commit, so I'm not sure it will receive maintenance (sure, you could just fork it...).
I swear, the day I have an entire free weekend with no work, I will write a freaking module that does this as completely as possible. It's silly that there isn't one, and it shouldn't be that much of a struggle. End of rant.
Edit:
Not sure if it was there before, but I've since been using the node-compress module (along with gzippo). It works fine.
Does anyone know of a free Perl program (command line preferred), module, or any other way to search and replace text in a PDF file without using it like an editor?
Basically, I want to write a program (preferably in Perl) to automate replacing certain words (e.g. our old address) in a few hundred PDF files. I could use any program that supports command-line arguments. I know there are many modules on CPAN that manipulate or create PDFs, but none that I've seen offers any sort of simple search and replace.
Thanks in advance for any and all advice!!!
Take a look at CAM::PDF, more specifically the changeString method.
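A minimal sketch of that approach over a folder of PDFs might look like this; the address strings are placeholders, and keep in mind that changeString can only find text stored as one contiguous literal string in the content stream:

#!/usr/bin/perl
use strict;
use warnings;
use CAM::PDF;

# Replace the old address in every PDF in the current directory, in place
for my $file (glob '*.pdf') {
    my $pdf = CAM::PDF->new($file) or die "$CAM::PDF::errstr\n";
    $pdf->changeString('123 Old Street', '456 New Street');
    $pdf->cleanoutput($file);
}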
How did you generate those PDFs in the first place? Doing the search-and-replace in the original sources and regenerating the PDFs seems more viable. Editing PDFs directly can be very difficult, and I'm not aware of any free tools that can do it easily.