This question is based on the answer.
I run this in my home directory:
find -- ./ Desktop
I understand the command as:
find without parameters
run in the current directory, which is home (= /Users/masi/)
finding the folder named Desktop in the current directory
How do you read the command?
The answer to your question in the title is
$ find . -type f
Now, keep in mind that
$ find -- ./ Desktop
will return the files in Desktop twice.
In your example, "--" tells find to stop looking for further options; everything after it is treated as a path, and find reports anything matching those paths. Since "./" means "the current directory", it matches everything under the current directory, and the extra Desktop argument causes that directory, as well as anything inside it, to be reported twice.
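As a minimal sketch of that duplication, assume the home directory contains only a hypothetical file Desktop/notes.txt:
$ find -- ./ Desktop
./
./Desktop
./Desktop/notes.txt
Desktop
Desktop/notes.txt
Everything under Desktop shows up once via the ./ traversal and once more via the explicit Desktop path.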
You probably want something like:
find ./Desktop -type f
which will find anything inside the ./Desktop directory that is a regular file (not directories, symbolic links, etc.).
I know that manpages can be quite technical sometimes, but "man find" will give you a wealth of other options that might help, as well as a few examples that may help with common problems.
I think what you want is:
find ./ -name Desktop
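Under the same hypothetical layout as above, this prints only the directory itself rather than its contents:
$ find ./ -name Desktop
./Desktop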
Well, you can pass multiple directories to search to find:
$ find --help
Usage: find [-H] [-L] [-P] [-Olevel] [-D help|tree|search|stat|rates|opt|exec] [path...] [expression]
[...]
Note the "[path...]" indicating you can specify multiple paths.
So your example will find all files and directories under ./ (current dir) and under Desktop.
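For instance, a hedged sketch of searching two separate trees in one invocation (the Documents and Desktop directories are assumed to exist):
$ find Documents Desktop -type f -name '*.txt'
Each path is traversed in turn, and the expression applies to both.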
I asked a similar question previously, but need help to understand the Perl commands that achieve the renaming process. I have many files in a folder with format '{galaxyID}-psf-calexp-pdr2_wide-HSC-I-{#}-{#}-{#}.fits'. Here are some examples:
7-psf-calexp-pdr2_wide-HSC-I-9608-7,2-205.41092-0.41487.fits
50-psf-calexp-pdr2_wide-HSC-I-9332-6,8-156.64674--0.03277.fits
124-psf-calexp-pdr2_wide-HSC-I-9323-4,3-143.73514--0.84442.fits
I want to rename all .fits files in the directory to match the following format:
7-HSC-I-psf.fits
50-HSC-I-psf.fits
124-HSC-I-psf.fits
namely, I want to remove "psf-calexp-pdr2_wide" and all of the numbers after "HSC-I", and add "-psf" to the end of each filename after HSC-I. I have tried the following command:
rename -n -e 's/-/-\d+-calexp-/-\d+pdr2_wide; /-/-//' *.fits
which gave me the error message: Argument list too long. You can probably tell I don't understand the Perl syntax. Thanks in advance!
First of all, Argument list too long doesn't come from perl; it comes from the shell because you have so many files that *.fits expanded to something too long.
To fix this, use
# Non-GNU (the -exec ... {} + form is POSIX)
find . -maxdepth 1 -name '*.fits' -exec rename ... {} +
# GNU
find . -maxdepth 1 -name '*.fits' -print0 | xargs -0 rename ...
But your Perl code is also incorrect. All you need is
s/^(\d+).*/$1-HSC-I-psf.fits/
which can also be written as
s/^\d+\K.*/-HSC-I-psf.fits/
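Putting the two together, a hedged sketch (assuming the Perl-based rename from the File::Rename distribution; find emits names with a leading ./, which the pattern allows for, and -n makes it a dry run):
find . -maxdepth 1 -name '*.fits' -exec rename -n 's{^(?:\./)?(\d+).*}{$1-HSC-I-psf.fits}' {} +
Drop -n once the printed renames look right.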
I have files with non-ASCII names on a Linux server that I need to zip. Unfortunately, when the names are UTF-8 encoded, they always come out corrupted when the archive is extracted on Windows.
Is there a way to zip files with names encoded in a specific charset different from the local system charset? Or is there a tool that can extract UTF-8-named files with correct names on Windows?
(If the solution is a script, PHP or Python are preferred.)
Use 7-Zip.
Compress your files on Linux and decompress them on Windows, both with 7-Zip.
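For example, a minimal sketch using the p7zip command-line tool on Linux (the archive name and file list are placeholders):
7z a archive.7z myfiles/
The 7z container stores filenames in UTF-16, so they survive the round trip to Windows.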
For Python 2 (with Russian filenames), use cp866:
import zipfile
from uuid import uuid4

with zipfile.ZipFile(file_handle, mode='w') as zip_file:
    for file_ in self._files.all():
        path = file_.file.path
        filename = u'Название файла.txt'  # "File name.txt"
        try:
            # encode the archive member name in cp866 so that the
            # standard Windows extractor displays Cyrillic correctly
            filename = filename.encode('cp866')
        except UnicodeEncodeError:
            # fall back to a random name, keeping the extension
            ext = str(path.split('.')[-1])
            filename = '%s.%s' % (uuid4().hex, ext)
        zip_file.write(path, filename)
For Python 3:
from io import BytesIO
import zipfile

file_handle = BytesIO()
with zipfile.ZipFile(file_handle, mode='w') as zip_file:
    for file_obj in files:
        # Python 3's zipfile marks non-ASCII names as UTF-8,
        # so modern extractors restore them correctly
        zip_file.write(filename=file_obj.full_path, arcname=file_obj.file_name)
So I was just archiving an assignment for email submission, and was asked by the instructor to do so using the tar command and create a .tgz file, which I did with the following command line script:
tar -cvf filename.tgz {main.cpp other filenames here}
No problems on the archive or anything, but when I went to email the file, Gmail prevented me, saying that my file contained an executable (I'm assuming main.cpp?) and that this was not allowed for security reasons.
So, I ran the same script, but this time created a .tar file instead, like so:
tar -cvf filename.tar {main.cpp filenames here}
Again, it archives just fine, but now Gmail is fine with me emailing the archive. So what is the difference? I've only really used tar for this purpose, so I'm not really familiar with what the different extensions are used for. Obviously, I've figured out a way to get the functionality I need, but like all tinkerers, I'm curious.
What say you?
Absolutely no difference. A filename is just a filename. Usually, when you use the tgz form, it's to indicate that you've gzipped the tar file (either as a second step or using the z flag):
tar zcvf filename.tgz {filenames}
or
tar cvf filename {filenames}
gzip -S .tgz filename
.tar, on the other hand, normally means "this is an uncompressed tar file":
tar cvf filename.tar {filenames}
Most modern tar implementations also support the j flag to use bzip2 compression, so you might also use:
tar jcvf filename.tar.bz2 {filenames}
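Since the extension carries no meaning by itself, a quick way to check what you actually produced is the file utility (exact wording varies by version):
$ file filename.tar
filename.tar: POSIX tar archive
$ file filename.tgz
filename.tgz: gzip compressed data
Note that your original tar -cvf filename.tgz command produced an uncompressed tar archive despite the .tgz name, since no z flag was given.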
I want to download a big file from a normal HTTP link to an FTP server (under Ubuntu) without storing the file locally (as my local storage is too small).
Do you have any ideas how to do this with wget or a small Perl script? (I don't have sudo rights on the local machine.)
Here's my take, combining wget and Net::FTP on the command line.
wget -O - http://website.com/hugefile.zip | perl -MNet::FTP -e 'my $ftp = Net::FTP->new("ftp.example.com"); $ftp->login("user", "pass"); $ftp->put(\*STDIN, "hugefile.zip");'
Of course, you can put it in a file (ftpupload.pl) as well and run it.
#!/usr/bin/perl
use strict;
use warnings;
use Net::FTP;
my $ftp = Net::FTP->new("ftp.example.com")   # connect to FTP server
    or die "Cannot connect: $@";
$ftp->login("user", "pass")                  # login with your credentials
    or die "Cannot login: ", $ftp->message;
# Because of the pipe we get the file content on STDIN
# Net::FTP's put is able to handle a pipe as well as a filehandle
$ftp->put(\*STDIN, "hugefile.zip");
Run it like this:
wget -O - http://website.com/hugefile.zip | perl ftpupload.pl
There's - of course - a CPAN module which makes life easy for FTP:
http://search.cpan.org/search?mode=module&query=Net%3A%3AFTP
And WWW::Mechanize looks up files, follows links, etc.
With these modules I think you can solve your problem.
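If curl is available, an alternative (not what the answers above use, just another option) is to stream the download straight to the FTP server; -T - tells curl to upload from stdin:
wget -O - http://website.com/hugefile.zip | curl -T - ftp://user:pass@ftp.example.com/hugefile.zip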
You can try wput. It is not a very well-known tool, but I think it can do the job.
Use wget's output-document option
wget -O /dev/null http://foo.com/file.uuu
From wget's manual page:
"-O file
--output-document=file
The documents will not be written to the appropriate files, but all will be
concatenated together and written to file. If - is used as file, documents will be
printed to standard output, disabling link conversion. (Use ./- to print to a file
literally named -.)
Use of -O is not intended to mean simply "use the name file instead of the
one in the URL;" rather, it is analogous to shell redirection: wget -O file http://foo is
intended to work like wget -O - http://foo > file; file will be truncated immediately,
and all downloaded content will be written there."
However, I can't see what the purpose of that would be here.
I am using Gnuwin32 binaries on a Windows environment.
When I want to find files of a certain type, let's say PDF, I usually run:
find . -iname '*.pdf' -print
This works perfectly on any UNIX system. Under Windows, having replaced single quotes with double quotes, I run:
find.exe . -iname "*.pdf" -print
but it only works when there is no PDF file in the current directory; otherwise the * gets expanded.
Worse: when there is exactly one PDF file in the current directory, it will expand, there will be no syntax error and you will get wrong results.
I have tried escaping the * with a caret, a backslash, a star itself, and putting it inside double quotes: nothing works for me.
Real example:
Okay, here are all my files:
C:\tmp>find . -type f
./a/1.pdf
./a/2.pdf
./a/aa/1.pdf
./b/1.pdf
./b/bb/1.pdf
./b/bb/2.pdf
Good behaviour, the wildcard was not expanded:
C:\tmp>find . -iname "*.pdf"
./a/1.pdf
./a/2.pdf
./a/aa/1.pdf
./b/1.pdf
./b/bb/1.pdf
./b/bb/2.pdf
C:\tmp>cd a
Caution, inconsistent behaviour, wildcard was expanded:
C:\tmp\a>find . -iname "*.pdf"
find: paths must precede expression
Usage: find [-H] [-L] [-P] [path...] [expression]
C:\tmp\a>cd ..\b
Caution, inconsistent behaviour, wildcard was expanded:
C:\tmp\b>find . -iname "*.pdf"
./1.pdf
./bb/1.pdf
Thank you
I have found myself the solution to my problem.
Gnuwin32's find.exe does not work on recent Windows versions (Vista, Seven) because it expands wildcards against the contents of the current directory.
Similarly, an old version of find.exe from UnxUtils suffered the same bug.
The latest find.exe from UnxUtils is working.
One workaround is to add a wildcard/expansion that the Windows shell does not expand, but GNU find does:
find.exe . -name *[.:]pdf -print
The Windows shell[*] does not interpret/expand square brackets. In addition, a colon is not a valid character in Windows filenames, so this pattern cannot match any Windows filename, and the Windows shell will always pass the pattern through to find.exe.
find.exe will then find any files ending in .pdf or :pdf, but since no file can have a name ending in :pdf under Windows, it will only find files ending in .pdf.
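For illustration, assuming the same directory layout as in the question's example, the pattern passes through unexpanded and all six files are found:
C:\tmp>find.exe . -name *[.:]pdf -print
./a/1.pdf
./a/2.pdf
./a/aa/1.pdf
./b/1.pdf
./b/bb/1.pdf
./b/bb/2.pdf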
[*] It's actually the C runtime that does (or does not) perform these wildcard expansions. I don't understand the Win32 C runtime well enough to refine the distinction, so for the purpose of this workaround I'm just saying 'shell'.
I ran into this problem this afternoon.
Benoit's UnxUtils works.
I also found that MinGW's find.exe works; it is under my
"MinGW\msys\1.0\bin"
directory, and it behaves consistently with the manual.
With gnuwin32 and UnxUtils, find.exe . -name GameCli* works, but
find.exe . -name 'GameCli*' doesn't work.
MinGW's find.exe . -name 'GameCli*' works.
I haven't found anything better than just avoiding wildcard characters:
find.exe . -iregex ".+\.pdf" -print
@OP, I get consistent behaviour:
C:\test\temp>find . -iname "*.txt"
./1.txt
./2.txt
C:\test\temp>cd a
C:\test\temp\a>find . -iname "*.txt"
C:\test\temp\a>cd ..\b
C:\test\temp\b>find . -iname "*.txt"
C:\test\temp\b>find --version
GNU find version 4.2.20
Features enabled: CACHE_IDS D_TYPE
You may want to try to use findutils instead of UnxUtils.