I would like to delete multiple files that contain a substring, say for example all the files containing the substring my. Assume my directory contains four files: photo.jpg, myPhoto.jpg, beachMyPhoto.jpg, and anyPhoto.jpg. Since the search term is my, the files I want to delete are myPhoto.jpg and beachMyPhoto.jpg (the match should be case insensitive).
My proposed solution (which I know how to implement) is to use the NSFileManager class, call contentsOfDirectoryAtPath:error: to read the entire directory contents, and then loop over the results looking for a hit. If a hit is found, I delete that file.
What I don't like about this solution is that it isn't very efficient, especially if the directory contains a large number of files and only a few of them match. Is there a more efficient way to do this?
If you don't want a big array loaded into memory, you can try -[NSFileManager enumeratorAtURL:includingPropertiesForKeys:options:errorHandler:]. Since you only want the immediate contents of the directory, you would invoke -[NSDirectoryEnumerator skipDescendants] for each directory that it returns.
If your concern is having to iterate over all of the items in the directory and test each one against your match pattern, well, that's unavoidable. Any technique you might hope to use has to somehow iterate over all of the items in the directory and test for a match. The only question is whether that iteration is exposed to you or not. In Cocoa, it is. If you want an alternative where it isn't, you could drop down to the glob() function.
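A minimal sketch of the enumerator approach (the directory path is an assumption; error handling omitted for brevity):

NSFileManager *fm = [NSFileManager defaultManager];
NSURL *dirURL = [NSURL fileURLWithPath:@"/path/to/photos"]; // assumed path
NSDirectoryEnumerator *en = [fm enumeratorAtURL:dirURL
                     includingPropertiesForKeys:@[NSURLIsDirectoryKey]
                                        options:0
                                   errorHandler:nil];
for (NSURL *item in en) {
    NSNumber *isDir = nil;
    [item getResourceValue:&isDir forKey:NSURLIsDirectoryKey error:NULL];
    if ([isDir boolValue]) {
        [en skipDescendants]; // don't recurse; we only want the immediate contents
        continue;
    }
    NSString *name = [item lastPathComponent];
    // case-insensitive substring match on "my"
    if ([name rangeOfString:@"my" options:NSCaseInsensitiveSearch].location != NSNotFound) {
        [fm removeItemAtURL:item error:NULL]; // matched: delete the file
    }
}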
Related: this is similar to PowerShell Flatten Directory Structure, but different.
I have a bunch of folders from ripping albums of the form artist/title/<disc number (an int)>. This is good if there's more than one disc, but if not, you have artist/title/1/, where it'd make more sense to move the contents of 1 up and delete the now-empty 1 directory.
This would only be for the case where 1 is the only sub-directory, and note that there may be multiple titles under each artist.
Any suggestions on doing this in PowerShell, or another way that doesn't involve going to Python?
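A possible approach, sketched in PowerShell (this assumes the artist\title\1 layout described above and that it is run from the collection root):

Get-ChildItem -Directory | ForEach-Object {                        # artist level
    Get-ChildItem -Path $_.FullName -Directory | ForEach-Object {  # title level
        $subdirs = @(Get-ChildItem -Path $_.FullName -Directory)
        if ($subdirs.Count -eq 1 -and $subdirs[0].Name -eq '1') {
            # single-disc album: move the contents of 1 up, then remove the empty folder
            Move-Item -Path (Join-Path $subdirs[0].FullName '*') -Destination $_.FullName
            Remove-Item -Path $subdirs[0].FullName
        }
    }
}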
I'm trying to copy several files with changing filenames. It seems very easy, but I can't work out how to do it without actually listing out the filenames in their entirety. The first few letters of each filename correspond to the subject name, and I'm looping through the subjects one by one. In each folder there are two files, one named something like subj1_load1_vs_load2.img and the other subj1_load1_vs_load2.hdr. I want both of them copied. Below is what I have:
subj = {'subj1','subj2','subj3','subj4','subj5'};
for i = 1:length(subj)
    source = fullfile(filedir, subj{i}, sprintf('^%s_.*\.*', subj{i})); % this doesn't seem to work
    destination = fullfile(destdir, subj{i});
    copyfile(source, destination);
end
I've also tried:
source=dir([filedir subj{i} strcat(subj{i},'*')]);
This appears to needlessly complicate things, since I would then need to deal with the .name field. But perhaps I just don't know how to use it well.
Anyway, the problem is with source, as I'm trying to find the files I want to copy.
I'd appreciate any suggestions.
Below is Daniel's answer (which solved the issue for me):
source=fullfile(filedir,subj{i},strcat(subj{i},'*'))
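The fix works because copyfile accepts wildcard patterns like subj1* rather than regular expressions. For completeness, a sketch of the loop with that pattern substituted in (filedir and destdir as defined above):

subj = {'subj1','subj2','subj3','subj4','subj5'};
for i = 1:length(subj)
    source = fullfile(filedir, subj{i}, strcat(subj{i}, '*')); % e.g. .../subj1/subj1*
    destination = fullfile(destdir, subj{i});
    copyfile(source, destination); % picks up both the .img and the .hdr file
end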
I use this code to see the existing files in a specific folder on an FTP server:
ftp_client = ftp('IP','Username','Password');
aa = dir(ftp_client,'First_folder/Second_folder');
I can see the file names one at a time like this:
aa(1,1).name
aa(2,1).name
aa(3,1).name
How can I get all the file names in this specific folder as a cell array? Is there a command for that?
How can I count the number of files in this folder?
How can I count the number of files in this folder that have a specific format?
Thanks.
A simple way is to collect the values into a cell array by using curly brackets: filenames = {aa.name};
The simplest method would be length(aa) or length(filenames).
A couple of ways: you can either refine your dir call, for example aa = dir(ftp_client,'First_folder/Second_folder/*.jpeg'), or apply your own filter (regexp is one option) to the filenames to return the indices of the files you want.
As a general aside, if you're using this program on different operating systems, I would recommend using fullfile (or filesep at the very least) to build your full path names, to ensure the right separator is used. Though I didn't do it in my example above...
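For example, a sketch of the regexp-filter option (the .jpeg extension is only an assumption):

aa = dir(ftp_client, 'First_folder/Second_folder');
filenames = {aa.name};                                        % all names as a cell array
isJpeg = ~cellfun('isempty', regexpi(filenames, '\.jpe?g$')); % logical index of matches
jpegNames = filenames(isJpeg);
numel(jpegNames)                                              % count of matching files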
I have a small Perl script that I use to search archives for members matching a name. I'd like to enhance this so that if it finds any members in the archive that are themselves archives (zip, jar, etc.), it will recursively scan those, looking for the original desired pattern.
I've looked through the Archive::Zip documentation, and I thought I saw how to do this. I noticed the fh() and readFromFileHandle() methods. However, in my testing, it appears that calling fh() on an archive member returns the file handle of the containing archive, not the member itself. Perhaps I'm doing it wrong, but I would appreciate an example of how to do this.
You can't read the contents of any sort of archive member (whether it is text, picture, or another archive) without extracting it from the archive file.
Once you have identified a member that you want to view, you must call extractMember (or, more likely, extractMemberWithoutPaths if the file is to be temporary) to extract it to a disk file. Then you can create a new Archive::Zip object and read the new file while keeping the old one open.
You will presumably want to unlink the extracted file once you have catalogued its contents.
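A sketch of that approach (the nested-archive pattern and temporary paths are assumptions, not part of the question):

use Archive::Zip qw(:ERROR_CODES);
use File::Temp qw(tempdir);

my $zip = Archive::Zip->new;
$zip->read('outer.zip') == AZ_OK or die 'cannot read outer.zip';

my $dir = tempdir(CLEANUP => 1);
for my $member ($zip->membersMatching('\.(zip|jar)$')) {
    my $tmp = "$dir/inner.zip";
    # extract the nested archive to a temporary disk file...
    $zip->extractMemberWithoutPaths($member, $tmp) == AZ_OK or die 'extract failed';
    # ...then open that file as its own Archive::Zip object
    my $inner = Archive::Zip->new;
    $inner->read($tmp) == AZ_OK or die 'cannot read extracted member';
    # scan $inner->memberNames for the desired pattern here, recursing as needed
    unlink $tmp;
}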
Edit
I hadn't come across the Archive::Zip::MemberRead module before. It appears you were on the right track with readFromFileHandle. I would guess that it should work like this, but it would be awkward for me to test it at present.
use Archive::Zip qw(:ERROR_CODES);
use Archive::Zip::MemberRead;

my $zip = Archive::Zip->new;
$zip->read('myfile.zip') == AZ_OK or die 'cannot read myfile.zip';
# present the nested-archive member as a file handle...
my $zipfh = Archive::Zip::MemberRead->new($zip, 'archive/path/to/member.zip');
# ...and read the inner archive from it without extracting to disk
my $newzip = Archive::Zip->new;
$newzip->readFromFileHandle($zipfh);
Whenever I list the contents of a directory with a function like readdir, the returned file names also include "." and "..". I have the suspicion that these are just normal links in the file system and therefore indistinguishable from actual files, but I always have to filter them out because they are not actual objects in the directory I am listing. Is there a good reason for functions like readdir to include them? Do some operating systems or file systems contain more or different virtual file names? Is there a better way to filter them out other than by doing string comparison with "." and ".."?
Update: thank you all for answering. I suppose I always thought that things like ./ and ../ were mere conventions that could be handled by searching and replacing. I find it a bit surprising, though probably more efficient and transparent, to have them be part of the file system itself.
One question remains, though: since . and .. are arbitrary names for these links, are there file systems that use different ones?
. and .. are actually hard links in filesystems. They are needed so that you can specify relative paths based on some reference path (consider "../sibling/file.txt"). Since these hard links actually exist in the filesystem, it makes sense for readdir to tell you about them. (Actually, the term hard link just means a name that is indistinguishable from the directory it refers to: both point to the same inode in the filesystem.)
The best way is to just strcmp and ignore them if you don't want to list them.
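For instance, a minimal C sketch of that filtering (the directory path is assumed):

#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    DIR *dir = opendir("/some/dir"); /* assumed path */
    if (dir == NULL) { perror("opendir"); return 1; }

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        /* skip the self and parent links */
        if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0)
            continue;
        puts(entry->d_name);
    }
    closedir(dir);
    return 0;
}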
Originally they were hard links, and the number of special cases in the filesystem code for . and .. was minimal. That's not true for all modern filesystems, however.
But the conventions have been established, so even filesystems where these two directory entries don't actually exist still report their existence through APIs like readdir. Changing this now would break a lot of code.
"I have the suspicion that these are just normal links in the file system and therefore indistinguishable from actual files"
They are. While you may perceive the file system as a hierarchy of "folders" "containing" folders, it is actually a doubly linked tree [1], with directories being nodes and files being leaves. So . and .. are needed links for accessing the leaves of the current node and for traversing the tree, and they are the same thing as all the other links.
When you call readdir, you get all the places you can go to directly from the current node. If you do not want to list places that you perceive as "up", you have to sort them out yourself. You could write a little function for that, perhaps called readdir_down (a sketch follows below). I do not know in which order readdir lists the directories, but perhaps you can just throw away the first two entries.
[1] This is a first approximation; "hard links" are also possible, which make the tree actually a net.
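A sketch of such a readdir_down helper in C (the name comes from the suggestion above; it is not a standard function):

#include <dirent.h>
#include <string.h>

/* Like readdir, but never returns the "." and ".." entries. */
struct dirent *readdir_down(DIR *dir) {
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (strcmp(entry->d_name, ".") != 0 && strcmp(entry->d_name, "..") != 0)
            return entry;
    }
    return NULL; /* end of directory */
}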
One reason is that without them there is no way to get to the parent directory. Or get a handle to the current directory.
Without them, we cannot do such things as:
./run_this
Indeed, we couldn't add '.' to the $PATH, meaning we couldn't ever execute files that weren't already in the path.
These are normal directories; they are "hard links" to the current directory and to the directory above. They are present in all directories (even at the root level, where .. is exactly the same as .).
When using ls, you can filter out . and .. with ls -A (note the capital -A).
When applying a command to all dot-files, but not . or .., I often use .??*, which matches only dot-files with names of three characters or more.
touch .??*
Note this pattern also excludes any other file that begins with a dot and is only two characters long (e.g. .x), but such files are uncommon.
When using programmatic file-listers like readdir() I do have to exclude . and .. manually. Since these two entries are supposed to come first in the list returned by readdir(), you can do this:
my @files = readdir(DIR);
for (1..2) { shift @files; } # get rid of . and ..
# go on with your business
They are reported because they are stored in the directory listing. That's the way unices have always worked.
Because on Unix-like operating systems, the directory-listing commands include those, and you use them to move up and down in the filesystem hierarchy.
Something like grep { not /^\.{1,2}\z/ } readdir HANDLE should work for you.
There is no good reason a directory scan should return these filenames.