Can't Delete File On Terminal - emacs

Please open File 1 to see the problem. If I type it here, some important characters that I believe are causing the drama won't show up in the post. Thank you!
I basically created a file that I can't get rid of.
Screenshot of what I have done

Space, <, and > are all treated specially by your shell.
Because of that, you'll need to quote the name. Try this:
rm 'index.html <RET>'
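Alternatively, assuming the file really is named index.html <RET> (with the literal angle brackets), you can escape each special character with a backslash instead of quoting the whole name:
rm index.html\ \<RET\>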

In cases where you can't figure out how to quote/escape the filename appropriately, a convenient approach is:
rm -i index.html*
You will be prompted (because of the -i option) before deleting each file that matches the specified glob pattern, one at a time. Simply answer y to the ones you want to delete.
Quoting the arguments correctly is safest (it avoids any possibility of accidentally deleting something you didn't mean to delete), so I recommend always doing so when you can; but if you've managed to generate a really garbled filename (Unix places very few constraints upon filenames) then this method can be very useful.
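For example, a session might look like this (the exact prompt wording varies between rm implementations):
$ rm -i index.html*
rm: remove regular file 'index.html <RET>'? y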

Related

How to find files by words inside parentheses in filenames?

I have a bunch of files with names like Matroid (bifrons's conflicted copy 2019-11-19).scala due to synchronization problems. I want to find these files to remove them or manually fix the problems (merge the two versions of the file). I tried the command below:
$ find . -iname '*conflicted*'
But it returns nothing! Nada!
I am guessing this is because the word conflicted occurs inside parentheses, but this is just a conjecture. Anyway, why does my command not find the files? How can I find them?
Thanks.
Actually the problem had nothing to do with parentheses at all. It was a (silly) mistake of mine. The directory containing the files with the *conflicted* names was actually a symbolic link, and find does not follow links by default. After adding the -L option it worked perfectly.
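For reference, the working command is simply the original one with -L added, which makes find follow symbolic links:
$ find -L . -iname '*conflicted*'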

How to rename partly the downloaded file using wget?

I'd like to download many files (about 10000) from an FTP server. The file names are too long; I'd like to save them with only the date in the name. For example, I'd prefer ABCDE201604120000-abcde.nc to be saved as 20160412.nc.
Is it possible?
I am not sure whether wget provides similar functionality; with curl, however, one can take advantage of the relatively rich syntax it provides for specifying the URL of interest. For example:
curl \
"https://ftp5.gwdg.de/pub/misc/openstreetmap/SOTMEU2014/[53-54].{mp3,mp4}" \
-o "file_#1.#2"
will download files 53.mp3, 53.mp4, 54.mp3, 54.mp4. The output file is specified as file_#1.#2 - here, #1 is replaced by curl with the value of the sequence [53-54] corresponding to the file being downloaded. Similarly, #2 is replaced with either mp3 or mp4. Thus, e.g., 53.mp3 will be saved as file_53.mp3.
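Applied to the file names from the question (the server path here is hypothetical, and the dates are assumed to form a contiguous range), this might look like:
curl "ftp://ftp.example.com/data/ABCDE201604[01-30]0000-abcde.nc" -o "201604#1.nc"
which would save the files as 20160401.nc, 20160402.nc, and so on.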
ewcz's answer works fine if you can enumerate the file names as shown in the post. However, if the filenames are difficult to enumerate, for example, because the integers are sparsely populated, this solution would result in a lot of 404 Not Found requests.
If this is the case, then it is probably better to download all the files recursively, as you have shown, and rename them afterwards. If the file names follow a fixed pattern, you can select the substring from the original name and use it as the new name. In the given example, the new name is the 8-character substring starting at (zero-based) position 5 of the original name. The following bash command renames all *.nc files in the current directory.
for f in *.nc; do mv "$f" "${f:5:8}.nc" ; done
If the file names do not follow a fixed pattern and might vary in length, you can use more complex pattern substitution using sed; see this SO post for an example.
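For instance, a sed-based variant of the loop above (assuming the date is the first run of 8 digits in each name) might look like:
for f in *.nc; do mv "$f" "$(printf '%s' "$f" | sed -E 's/^[^0-9]*([0-9]{8}).*/\1.nc/')"; done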

Get SVN keyword ($Id$) of a file directly from a Perl script

Are there any Perl methods or modules to get the SVN keyword of a given file without calling the system SVN installation?
Thanks!
If you do not involve SVN, then the file is just a text file. Whether or not it is version controlled is not a property of the file but is stored in the hidden .svn folder on the same level. That means, if you want to analyze the file without involving SVN, you have to treat it as a plain text file. However, expanded SVN keywords do follow a certain syntax (or SVN itself would be unable to find them after they have been expanded once).
In your case, any file that had the keyword substituted most likely has a line like
$Id: ActualIdHere $
and if the keyword has not yet been substituted, it will just be $Id$. And Perl is very well suited to extracting data from a known syntax, especially given that you should know which characters can make up an Id (hint: avoid $ if at all possible).
To answer your specific question: a regex like /^\$Id\: (.*) \$$/ should find the Id you are looking for and store it in $1. That is of course assuming there is nothing else on that line. But only you can know how and where you have what sort of keywords, so it is up to you to craft the regex, or to provide more detailed information if you get stuck.
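A minimal sketch along these lines, scanning a file for an expanded keyword (the file name here is hypothetical):
#!/usr/bin/perl
use strict;
use warnings;

my $file = 'Module.pm';    # hypothetical: the file you want to inspect
open my $fh, '<', $file or die "Cannot open $file: $!";
while (my $line = <$fh>) {
    # Match an expanded keyword such as: $Id: ActualIdHere $
    if ($line =~ /\$Id: (.*?) \$/) {
        print "Found Id: $1\n";
        last;
    }
}
close $fh;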
You may try something like this to get the SVN Id into a Perl variable:
# Capture the text between '$Id: ' and the trailing ' $' into $SVN_ID
my ($SVN_ID) = ('$Id: ActualIdHere $' =~ /^\$Id\: (.*) \$$/);
print $SVN_ID;
But you also have to enable keyword substitution for the file in SVN; see the svn documentation.
Note: the inserted Id is the value from the time the file itself was last changed; other files may carry more recent Ids.
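For reference, keyword expansion is enabled per file by setting the svn:keywords property (the file name here is a placeholder):
svn propset svn:keywords "Id" yourfile.pl
The keyword is then expanded on the next commit of that file.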

zip recursively each file in a dir, where the name of the file has spaces in it

I am quite stuck; I need to compress the content of a folder, where I have multiple files (extension .dat). I went for shell scripting.
So far I told myself that it's not that hard: I just need to read the content of the dir recursively, get the name of each file, and zip it using the file's own name.
This is what I wrote:
for i in *.dat; do zip $i".zip" $i; done
Now when I try it I get weird behavior: each file has a name like "12/23/2012 data102 test1.dat", and when I run this command, zip, instead of recognizing the whole file name, sees each part of the string as a separate entity, causing the whole operation to fail.
I told myself that I was doing something wrong and that the i variable was wrong, so I replaced the zip command with echo (to see what the value of the i variable was), and the $i output is the full name of the file, not part of it.
I am totally clueless at this point about what is going on: when the variable i is read by zip, it reads each piece of the string instead of the whole thing, while if I use echo to see the content of the variable, I get the correct output.
Do I have to pass the filename to zip in a different way? Since it is the content of a variable passed as a parameter, I assumed it wouldn't matter whether the string contains spaces, and I can't find the answer in the man page (if there is one in there).
Does anyone know why I get this behavior and how to fix it? Thanks!
You need to quote anything with spaces in it.
zip "$i.zip" "$i"
Generally speaking, any variable interpolation should have double quotes unless you specifically require the shell to split it into multiple tokens. The internal field separator $IFS defaults to space, tab, and newline, but you can change it to make the shell do word splitting on arbitrary separators. See any decent beginners' shell tutorial for a detailed account of the shell's quoting mechanisms.
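Applied to the loop from the question, that gives:
for i in *.dat; do zip "$i.zip" "$i"; done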

How can I force emacs (or any editor) to read a file as if it is in ASCII format?

I could not find this answer in the man or info pages, nor with a search here or on Google. I have a file which is, in essence, a text file, but it somehow got screwed up upon saving. (I think there are a few strange bytes at the front of the file accidentally.)
I am able to open the file, and it makes sense, using head or cat, but not using any sort of editor.
In the end, all I wish to do is open the file in emacs, delete the "messy" characters, and save it once cleaned up. The file, however, is huge, so I need something powerful like emacs to be able to open it.
Otherwise, I suppose I can try to create a script to read this in line by line, forcing the script to read it in text format, then write it. But I wanted something quick, since I won't be doing this over & over.
Thanks!
Mike
perl -i.bk -pe 's/[^[:ascii:]]//g;' file
Found this perl one liner here: http://www.perlmonks.org/?node_id=619792
Try M-x find-file-literally in Emacs.
You could edit the file using hexl-mode, which lets you edit the file in hexadecimal. That would let you see precisely what those offending characters are, and remove them.
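To try it, open the file and then run M-x hexl-mode, or visit the file directly with M-x hexl-find-file.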
It sounds like you either got a different line ending in the file (e.g. carriage returns on a *nix system) or it got saved in an unexpected encoding.
You could use strings to grab the printable characters in the file. You might have to play with the --encoding option, though; I have only ever used it to grab ASCII strings from executable files.
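For example, something along these lines (GNU strings; the file names are placeholders) keeps only the printable runs:
strings --encoding=s messy.txt > cleaned.txt
Note that strings only prints runs of at least four printable characters by default (tunable with -n), so very short lines may be lost.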