find-name-dired followed by deletion of root owned files - emacs

I'm using find-name-dired to find a bunch of files (all with a .orig extension).
I would then like to mark all the files in the resulting *Find* buffer for deletion and delete them.
Unfortunately they are root-owned, so the deletion fails due to lack of permissions.
Is there some workaround here, with TRAMP or something like that?

You can presumably mark the files, then use ! sudo rm, as sketched below.
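For example, in the *Find* buffer the sequence might look like this (a sketch; it assumes sudo can authenticate without prompting, since dired shell commands have no terminal attached for a password prompt):

% m \.orig$ RET     mark every file matching the regexp (dired-mark-files-regexp)
! sudo rm RET       run "sudo rm" on the marked files (dired-do-shell-command)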

You can do this using sudo through TRAMP. When find-name-dired prompts for the directory name, modify it to put /sudo:: at the start, e.g. change /foo/bar into /sudo::/foo/bar. (Take care with relative paths and ~ paths.) It will prompt for your sudo password, and then you should be able to delete files as usual.
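Concretely, the interaction might look like this, where /home/user/project stands in for your own directory:

M-x find-name-dired RET
Run find in directory: /sudo::/home/user/project RET
Find filename wildcard: *.orig RET

Then mark the files for deletion and delete them in the *Find* buffer as usual.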

Related

Using "rm" to remove files remotely from another directory?

I'm unable to use the rm command to remove files from another directory without first changing into it. I'm a beginner, so I apologise for my inability to elaborate properly.
Here's what I'm trying to do:
I'm trying to delete all .srt files from a subdirectory. It works when I cd into the specific directory, like so:
Command 1:
cd /users/jakubdonovan/library/cloudstorage/iCloud\ drive/the-modern-python3-bootcamp/target_folder
Command 2:
rm *.srt
However, let's say I want to quickly delete a specific file type from a folder without first using the "cd" command, like so:
rm *.srt /users/jakubdonovan/library/cloudstorage/iCloud\ drive/the-modern-python3-bootcamp/target_folder
It returns with "No matches for wildcard '*.srt'. See help expand."
This is strange, because I can use touch, cp, and all the other commands on other directories without a problem.
Is there a way to make the command "rm *.filetype" remove all the files with that specific filetype from a folder and all its subfolders in one swoop?
If you would like to rm files in a subdirectory, you just have to specify that subdirectory in the command:
rm /path/to/folder/*.filetype
or, if you know that the folder is inside your current directory, you can try:
rm ./folder/*.filetype
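Note that the globs above only match files directly inside the named folder. For the "folder and all its subfolders in one swoop" part of the question, the usual tool is find; the -delete flag is supported by GNU and BSD find, and you can drop it on a first run to preview what would be removed:

find /path/to/folder -type f -name '*.srt' -delete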

Can we wget with file list and renaming destination files?

I have this wget command:
sudo wget --user-agent='some-agent' --referer=http://some-referrer.html -N -r -nH --cut-dirs=x --timeout=xxx --directory-prefix=/directory/for/downloaded/files -i list-of-files-to-download.txt
-N will check if there is actually a newer file to download.
-r will turn the recursive retrieving on.
-nH will disable the generation of host-prefixed directories.
--cut-dirs=X will avoid the generation of the host's subdirectories.
--timeout=xxx will, well, time out :)
--directory-prefix will store files in the desired directory.
This works nicely, no problem.
Now, to the issue:
Let's say my list-of-files-to-download.txt has these kinds of files:
http://website/directory1/picture-same-name.jpg
http://website/directory2/picture-same-name.jpg
http://website/directory3/picture-same-name.jpg
etc...
You can see the problem: on the second download, wget will see we already have a picture-same-name.jpg, so it won't download the second or any of the following ones with the same name. I cannot mirror the directory structure because I need all the downloaded files to be in the same directory. I can't use the -O option because it clashes with -N, and I need that. I've tried -nd, but it doesn't seem to work for me.
So, ideally, I need to be able to:
a.- wget from a list of URLs the way I do now, keeping my parameters.
b.- get all files in the same directory and be able to rename each file.
Does anybody have any solution to this?
Thanks in advance.
I would suggest two approaches.
Use the -nc or --no-clobber option. From the man page:
-nc
--no-clobber
If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including -nc. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved.
When running Wget without -N, -nc, -r, or -p, downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named file.1. If that file is downloaded yet again, the third copy will be named file.2, and so on. (This is also the behavior with -nd, even if -r or -p are in effect.) When -nc is specified, this behavior is suppressed, and Wget will refuse to download newer copies of file. Therefore, "no-clobber" is actually a misnomer in this mode---it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented.
When running Wget with -r or -p, but without -N, -nd, or -nc, re-downloading a file will result in the new copy simply overwriting the old. Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.
When running Wget with -N, with or without -r or -p, the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file. -nc may not be specified at the same time as -N.
A combination with -O/--output-document is only accepted if the given output file does not exist.
Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been retrieved from the Web.
As you can see from this man page entry, the behavior might be unpredictable/unexpected. You will need to see if it works for you.
Another approach would be to use a bash script. I am most comfortable using bash on *nix, so forgive the platform dependency. However the logic is sound, and with a bit of modification you can get it to work on other platforms/shells as well.
Sample pseudocode bash script:
while read -r i; do
    wget <all your flags except the -i flag> "$i" -O /path/to/custom/directory/filename
done < list-of-files-to-download.txt
You can modify the script to download each file to a temporary file, parse $i to get the filename from the URL, check whether the file already exists on disk, and then decide whether to rename the temp file to the name that you want; a sketch of that idea is given below.
This offers much more control over your downloads.
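Here is a minimal, hedged sketch of that approach. It derives a unique local name from each URL's path, so the three picture-same-name.jpg files no longer collide. The destination directory and the renaming rule are assumptions to adapt, the flags are abridged from the question's, and note that downloading with -O this way gives up -N timestamping:

#!/usr/bin/env bash
dest=/directory/for/downloaded/files    # assumption: your target directory
while read -r url; do
    # Turn http://website/directory1/picture-same-name.jpg into
    # directory1-picture-same-name.jpg (strip scheme and host, join path with '-')
    name=$(printf '%s\n' "$url" | sed -e 's|^[a-zA-Z]*://[^/]*/||' -e 's|/|-|g')
    tmp=$(mktemp)
    if wget --user-agent='some-agent' --referer=http://some-referrer.html \
            --timeout=120 -O "$tmp" "$url"; then
        mv "$tmp" "$dest/$name"
    else
        rm -f "$tmp"    # clean up after a failed download
    fi
done < list-of-files-to-download.txt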

How to make a file executable using Makefile

I want to copy a particular file using a Makefile and then make this file executable. How can this be done?
The file I want to copy is a .pl file.
For copying I am using the general cp -rp command. This works successfully. But now I want to make this file executable from the Makefile.
It's bad practice to use cp and chmod for this; use the install command instead.
all:
	install -m 0777 hello ../hello
You can use the -m option with install to set the permission mode, and note that install can set not only the permissions but also the owner and group of the file (via its -o and -g options). Also note that recipe lines in a Makefile must be indented with a tab.
You can still use chmod to the same effect, but that would be the poorer practice:
all:
	cp hello ../hello
	chmod +x ../hello
Update: install vs cp
cp simply copies files with their current permissions; install not only copies, but can also change permissions/ownership via its flags (which is what your requirement was).
One significant difference is that cp truncates the destination file and starts copying data from the source into the destination file. install, on the other hand, removes the destination file first.
This is significant because if the destination file is already in use, bad things could happen to whoever is using that file when you cp a new file on top of it, e.g. overwriting an executable that is running might fail. Truncating a data file that an existing process is busy reading/writing could cause pretty weird behavior. If you just remove the destination file first, as install does, things continue much as normal: the removed file isn't actually removed until all processes close that file. [source]
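If you want to see this difference for yourself, comparing inode numbers makes it visible (a hedged sketch; the actual inode numbers will differ on your system):

$ echo old > dst
$ ls -i dst             # note the inode number
$ echo new > src
$ cp src dst
$ ls -i dst             # same inode: cp truncated and rewrote dst in place
$ install src dst
$ ls -i dst             # new inode: install removed dst and created a fresh file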
For more details, check these:
install vs. cp; and mmap
How is install -c different from cp

how to use backup files to create regular files in emacs

I am trying to create a file named caseexp.sml. Emacs created a backup file of this file when I was working on it at some earlier point, and now when I try to open it as caseexp.sml, Emacs opens a #caseexp.sml# file, and every time I try to save it using C-x C-w, Emacs saves it as another backup file with another tilde added to its name. Several attempts later, I have only managed to save it as #caseexp.sml#~~~.
How can I avoid creating these "tilde" backup files and save my file simply as caseexp.sml?
There are a few unexpected behaviors here, so I can't be sure this is what's going on, but usually files with hashes are left around because Emacs crashed (or was killed) while you had unsaved changes. In that case Emacs should prompt you to run M-x recover-this-file to restore the changes from the unsaved-changes file (the name with the hashes) into the actual file, so it's not clear what happened here. Try fixing this from the command line instead.
You probably want to cp all the files to another location first, in order to have a backup (I'm assuming a Unix-like OS):
$ cp *caseexp* /tmp
Then delete the extra files while preserving the one with the most recent changes:
$ cp <most recent file with latest changes> caseexp.sml
$ rm \#caseexp*
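As for avoiding the tilde files in the first place: those are ordinary Emacs backup files. One option, rather than disabling them outright, is to collect them in a single directory; backup-directory-alist is a standard variable, and the path below is just an example:

(setq backup-directory-alist '(("." . "~/.emacs.d/backups")))

Or, to turn backups off entirely:

(setq make-backup-files nil)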

How to create temporary files `.#filename` in `/tmp`, not working directory

When files are being modified in Emacs, a temporary file is created in the working directory that looks like this: .#filename. The file is deleted when the buffer is saved.
I found a few of these types of temporary files in my Git remote repositories, and I thought it might be better to nip this in the bud at the source instead of configuring Git to ignore them for every project.
How can we configure Emacs to create those files in the /tmp directory instead of the working directory?
The file at issue is called a lock file. Beginning with Emacs version 24.3, it can be controlled with the following setting:
(setq create-lockfiles nil)
https://stackoverflow.com/a/12974060/2112489
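If you would rather relocate the lock files than disable them, as the question asks, note that this requires Emacs 28.1 or newer (an assumption about your version), which added lock-file-name-transforms; it uses the same pattern/replacement format as auto-save-file-name-transforms:

;; Requires Emacs 28.1+: put all lock files in /tmp, keyed by base file name
(setq lock-file-name-transforms
      '(("\\`/.*/\\([^/]+\\)\\'" "/tmp/\\1" t)))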
Strictly speaking, .#filename is a lock file (see above), while Emacs's auto-save files are named #filename#. For auto-save files, the variable auto-save-file-name-transforms controls what modifications are made to the buffer's file name to generate the auto-save file name. The default in files.el puts the auto-save files for matching (remote, TRAMP-style) file names under /tmp. Its default value is:
(("\\`/[^/]*:\\([^/]*/\\)*\\([^/]*\\)\\'" "/tmp/\\2" t))
That /tmp comes from the variable temporary-file-directory. Check that its value points to /tmp; then the value constructed for auto-save-file-name-transforms (and hence for the auto-save file name) will be correct.
As a more general solution, you could also make a global exclude file, which applies to all repositories locally. By default, this will be in $XDG_CONFIG_HOME/git/ignore (usually ~/.config/git/ignore). The path can be overridden using the core.excludesFile option. See the gitignore manpage for more details.
$ mkdir -p ~/.config/git
$ echo '.#*' >> ~/.config/git/ignore
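You can verify that the pattern is being picked up with git check-ignore (the source path shown in the output depends on your configuration and Git version):

$ git check-ignore -v '.#test.txt'
/home/user/.config/git/ignore:1:.#*	.#test.txt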