When I use sed -i it creates some temporary files
When I use the above command to replace a string,
it creates temporary files like sed6Y5vk6 with the same size as the original file.
How can we avoid this?
Same bug over here: the backup files are not deleted.
I'm back on sed 4.1.5, which works as expected, for the time being.
I need help renaming multiple files in a directory based on a delimiter.
Sample:
From
R01234-XYZ-TRAIL.PDF
TO
R01234-TRAIL.PDF
and
From
XYZ-C12345-TRAIL.PDF
TO
C12345-TRAIL.PDF
Is it possible to delete based on the - delimiter?
I am not specifically removing XYZ, but rather removing anything before the first - and the middle occurrence between two -s. XYZ is just a representation of the characters in that field.
Thanks!
I tried sed, ls, and mv; I also tried rename, but it doesn't seem to work for me.
This might work for you:
rename -n 's/XYZ-//' file
This removes XYZ- from the file name.
If this meets your requirements, remove the -n option for the renaming to take place.
In retrospect, perhaps:
rename -n 's/([A-Z][0-9]{5}-).*-/$1/;s/^.*-([A-Z][0-9]{5}-)/$1/' file
With sed:
sed -E 's/^([A-Z][0-9]{5}-).*-(.*)|^.*([A-Z][0-9]{5}-.*)/mv & \1\2\3/' file
Check the results and then:
sed -E 's/^([A-Z][0-9]{5}-).*-(.*)|^.*([A-Z][0-9]{5}-.*)/mv & \1\2\3/' file | sh
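With the two sample names from the question, the generated commands should come out roughly as:
mv R01234-XYZ-TRAIL.PDF R01234-TRAIL.PDF
mv XYZ-C12345-TRAIL.PDF C12345-TRAIL.PDF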
I am trying to create a file named caseexp.sml. Emacs created a backup file of this file when I was working on it at some earlier point, and now when I try to open it as caseexp.sml, Emacs opens a #caseexp.sml# file, and every time I try to save it using C-x C-w, Emacs saves it as another backup file with another tilde added to its name. Several attempts later, I have only managed to save it as #caseexp.sml#~~~.
How can I avoid creating these "tilde" backup files and save my file simply as caseexp.sml?
There are a few unexpected behaviors here, so I can't be sure this is what's going on, but usually, if files with hashes are left around, it's because Emacs crashed while you had unsaved changes. However, Emacs should normally prompt you to run M-x recover-this-file to restore changes from the unsaved-changes file (the filename with the hashes) to the actual file, so it's not clear what's going on there. Try fixing this from the command line.
You probably want to cp all the files to another location first, in order to have a backup (I'm assuming a Unix-like OS):
$ cp *caseexp* /tmp
Then delete the extra files while preserving the one with the most recent changes:
$ cp <most recent file with latest changes> caseexp.sml
$ rm \#caseexp*
I'm trying to use the sed command in a shell script where I want to remove lines that read STARTremoveThisComment and lines that read removeThisCommentEND.
I'm able to do it when I copy it to a new file using
sed 's/STARTremoveThisComment//' > test
But how do I do this by using the same file as input and output?
sed -i (or the long form, --in-place) automates the process you would otherwise do by hand with less capable implementations: writing the output to a temporary file and then renaming that file back over the original.
The -i is for in-place editing, and you can also provide a backup suffix for keeping a copy of the original:
sed -i.bak 's/STARTremoveThisComment//' fileToChange
sed --in-place=.bak 's/STARTremoveThisComment//' fileToChange
Both of those will keep the original file in fileToChange.bak.
Keep in mind that in-place editing may not be available in all sed implementations, but it is in GNU sed, which should be available on all variants of Linux, as per your tags.
If you're using a more primitive implementation, you can use something like:
cp oldfile oldfile.bak && sed 'whatever' oldfile >newfile && mv newfile oldfile
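For instance, with the substitution from the question (the file names here are just placeholders):
cp data.txt data.txt.bak && sed 's/STARTremoveThisComment//' data.txt > data.txt.new && mv data.txt.new data.txt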
You can use the -i flag for in-place editing and -e for specifying a normal script expression:
sed -i -e 's/pattern_to_search/text_to_replace/' file.txt
To delete lines that match a certain pattern, you can use a simpler syntax. Notice the d command:
sed -i '/pattern_to_search/d' file.txt
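For the two markers from the question, both deletions can be combined into a single in-place call (file.txt standing in for the real file):
sed -i -e '/STARTremoveThisComment/d' -e '/removeThisCommentEND/d' file.txt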
You really should not use sed for that. This question seems to come up ridiculously often, and it seems very strange that it does since the general solution is so trivial. It seems bizarre that people want to know how to do it in sed, and in python, and in ruby, etc. If you want to have a filter operate on an input and overwrite it, use the following simple script:
#!/bin/sh -e
# first argument: the file to edit in place; remaining arguments: the filter command
in=${1?No input file specified}
# move the original aside as a hidden backup, then rebuild it from the filter's output
mv "$in" "${bak=.$in.bak}"
shift
"$@" < "$bak" > "$in"
Put that in your path in an executable file named inline, and then the problem is solved in general. For example:
inline input-file sed -e s/foo/bar/g
Now, if you want to add logic to keep multiple backups, or if you have some options to change the backup naming scheme, or whatever, you fix it in one place. What's the command line option to get 1-up counters on the backup file when processing a file in-place with perl? What about with ruby? Is the option different for gnu-sed? How does awk handle it? The whole friggin' point of unix is that tools do one thing only. Handling logic for backup files is a second thing, and needs to be factored out. If you are implementing a tool, do not add logic to create backup files. Tell your users to use a 2nd tool for that. Integration is bad. Modularity is good. That is the unix way.
Notice that this script has several problems. The permissions/mode of the input file may be changed, for example. I'm sure there are innumerable other issues. However, by putting the backup logic in a wrapper script, you localize all of these issues and don't have to worry that sed overwrites the files and changes mode, while python keeps the file in place and does not change the inode (I made up those two cases, the point being that not all tools will use the same logic, while the wrapper script will.)
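If the mode change matters to you, one possible mitigation (GNU chmod only, and just a sketch) is to copy the original file's mode back onto the rebuilt file by appending a final line to the script:
chmod --reference="$bak" "$in"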
As far as I know, it is not possible to use the same file for input and output. One workaround is to save the output to another file and then rename it over the original input file name:
sed -e 's/try/this/g' input.file > output.file && mv output.file input.file
I suggest using sponge
sponge reads standard input and writes it out to the specified file.
Unlike a shell redirect, sponge soaks up all its input before writing
the output file. This allows constructing pipelines that read from and
write to the same file.
cat test | sed 's/STARTremoveThisComment//' | sponge test
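sponge is part of the moreutils package. Assuming the same test file as above, the pipeline can also be written without the leading cat:
sed 's/STARTremoveThisComment//' test | sponge test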
I am looking for a safe and reliable way to overwrite ONE line in a text file. I don't care if it's using sed, grep, perl whatever. It just needs to be portable and reliable. Specifically what I am trying to do is replace the value of a variable I have saved in a text file at runtime. Let's say I have a file named variables.txt which contains a line that reads userName=stephen. Let's say my program wants to change the userName to frank. Here's what I've come up with using sed:
sed -i '' 's/userName.*/userName=frank/' variables.txt
The concern I have with this is that I've read that on some versions of sed using the '-i' switch without specifying a backup file could cause the command to fail or risk possible file corruption. Not an option. What do you guys think?
Edit
For those asking where I read about command failure and file corruption: the manpage for my version of sed recommends against providing an empty value for the -i switch; also take a look at the comments on this page:
It seems that some versions of sed require the argument after -i and others do not. With GNU sed version 4.1.x, it seems that the -i does not require an argument and specifying an empty argument after it actually fails.
It sounds like the unanimous recommendation is to provide a backup file and then delete it after the command completes. However I'm still concerned about this solution since my version of sed doesn't even support the --version switch. My primary concern here is that the solution is both reliable and portable.
t=`tempfile`
sed -e 's/userName.*/userName=frank/' variables.txt > "$t"
cp "$t" variables.txt
rm "$t"
You can also use mv instead of cp, but that won't preserve the file's permissions.
If tempfile is not available, use some other method (e.g. $$.bak) to create the temporary filename.
As long as you don't run on Windows, sed -i is as safe as anything. Even if the machine crashes mid-process, variables.txt will either have the old content or the new content; it should never be missing or corrupt.
The concern I have with this is that I've read that on some versions of sed using the '-i' switch without specifying a backup file could cause the command to fail or risk possible file corruption.
Where have you read it?
It is not strictly true: this is not a usual problem with sed, but any program (including sed) can fail and, in very unlikely situations, corrupt data. So you should not be especially afraid of data corruption with sed.
Anyway, why not use -i with an extension (such as -i.bak) for a bit more safety? You can always erase the backup file with rm afterwards...
You could copy variables.txt to variables.txt.bak before running the command if you wanted to keep a backup. Or go the other way around:
sed 's/userName.*/userName=frank/' variables.txt > variables.txt.fixed
if [ $? -eq 0 ]
then
cp variables.txt.fixed variables.txt
fi
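A variant of the same idea that avoids the fixed temporary name, assuming mktemp is available (using cp rather than mv keeps the permissions and inode of variables.txt):
tmp=$(mktemp) &&
sed 's/userName.*/userName=frank/' variables.txt > "$tmp" &&
cp "$tmp" variables.txt &&
rm -f "$tmp"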
I'm trying to run the following command on Windows Server 2003, but sed creates a pile of files in the current directory that I can't delete from the command line.
for /R %f in (*.*) do "C:\Program Files\gnuwin32\bin\sed.exe" -i "s/bad/good/g" "%f"
Does anyone have any suggestions? Mysteriously enough, I'm able to delete the files using Windows Explorer.
As requested, here are some example filenames:
sed0E3WZJ
sed5miXwt
sed6fzFKh
And, more troubleshooting info...
It occurs from both the command prompt & batch files
If I just need to run sed on a single directory, then I use sed "s/bad/good/g" *.* and everything is OK. Alas, I also need it to tackle all the subdirectories.
I only have Sed installed.
Sed is creating the files
I have replicated your setup and I have the following observations.
I don't think there is a problem in the loop. Running the simple command "C:\Program Files\gnuwin32\bin\sed.exe" -i "s/bad/good/g" directly on a single file creates the same set of temporary files.
The files are indeed created by sed. sed creates these temporary files when the in-place (-i) option is turned on. In the normal course, sed actually deletes the files (that is what happens in Cygwin) with a call to the 'unlink' library function. In the case of gnuwin32, it looks like the 'unlink' call fails; I have not been able to figure out why. I took a guess that maybe the unlink call depends on the gnuwin32 'coreutils' library and tried downloading and installing coreutils, but no dice.
If you remove the 'read-only' restriction on the parent folder before executing the sed command, you can delete the temporary files from the Windows command prompt. That should give you some temporary respite.
I think we now have enough information to raise a bug report. If you agree, I think it may be a good idea to bring it to the notice of the good folks responsible for gnuwin32 and ask them for help.
Meanwhile, the following version cleans up its temporary file:
https://github.com/mbuilov/sed-windows
As this is a known bug in sed with the -i option, you can run attrib -R <filename> to remove the read-only attribute from the file after sed completes.
Alternatively, do not use the -i option: redirect the output to a new file, then delete the input and rename the output over it.
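A rough sketch of that approach for a single file at the command prompt (input.txt is just a placeholder name; move /Y does the delete-and-rename in one step):
"C:\Program Files\gnuwin32\bin\sed.exe" "s/bad/good/g" "input.txt" > "input.txt.new"
move /Y "input.txt.new" "input.txt"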
Cygwin sometimes hoses the ACLs on files; you'll probably have to use cacls or chmod to fix them up before you can delete the file.
Here is where a bit of troubleshooting comes into play.
Does this happen when you run that command from the command line as well as from a batch file?
If you run sed on an individual file on the command line, does it create these files for every file, or just certain files/filetypes?
Does it only happen for that replacement, or for all replacements in general, or always when you run sed.exe on a file?
Is it only sed creating these files, or all GnuWin32 exes (e.g. awk, cat, etc.)?
Does the same thing happen with sed.exe from a fresh install of GnuWin32?
What error message do you get when you try to delete the files?
Can you delete the files from Explorer while the command prompt is still open?
What if you close the command prompt and reopen it, then try to delete the files?
You can just run sed without the for loop:
c:\test> sed -i.bak "s/bad/good/g" file*.*
This is a stab in the dark, but it wouldn't surprise me in the least if the gnuwin32 implementation of sed is duff (i.e., faulty in some way). Can you try to replicate the problem using the AT&T U/Win POSIX support for windows? It is easy to install and includes the Korn shell, sed, and find, so you can use find instead of the FOR /R. (I'm wondering if part of the problem is that the MS FOR and gnuwin32 sed don't play nicely together.)
I realize this is an old thread, but it's still an issue. My fix is to add
"DEL sed*" to the end of the batch file, after the sed command. Quick and dirty.
I am using this command to clean up the temporary files created by gnuwin32's sed:
FOR /f "tokens=*" %%a in ('dir /b ^| findstr /i "^sed[0-9a-zA-Z][0-9a-zA-Z][0-9a-zA-Z][0-9a-zA-Z][0-9a-zA-Z][0-9a-zA-Z]$"') DO del %%a
I know this is old, but I just want to share what caused this for me.
It was in fact the temporary file for a file that was already open in another program (for example a gvim .swp file) that kept sed's temporary file from being removed completely.
sed was trying to read and write to the opened file the user was currently viewing, had trouble doing so, and that is what left the temporary file behind.