sed saving 60k+ backup files

I have a lot of sed commands to run and a lot of files to parse, so they generate a lot of backup files (60k+). I want sed to edit the files in place without making a backup, or to save the backup to a single file that is overwritten each time a new command executes. Adding anything after -i just generates backup files with that extension instead.
Deleting all the *. files afterwards (sed makes the backups without an extension) can take a really long time.
Thank you

Try the following:
sed -i '' ....

Will
sed -i /dev/null ...
work for you?
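For the record, the -i syntax differs between sed implementations, which is a common source of stray backup files. A quick sketch, assuming GNU sed on Linux and BSD sed on macOS:
# GNU sed: -i alone edits in place and makes no backup
sed -i 's/foo/bar/' file.txt
# BSD/macOS sed: -i takes a mandatory suffix argument; an empty one means no backup
sed -i '' 's/foo/bar/' file.txt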

Related

Bash script to prepend to all files with a specific extension in a directory

How do I prepend a special character to the front of every line in all .txt files in my directory? I'm new to writing bash scripts and am having trouble with this. I only know how to use grep, but that's only for searching for keywords.
For now, I have this,
sed -i 's/^/#/' Machine1.txt
However, this only works for that specific .txt file. I want to do this for all files with a .txt extension in my directory; there are files with other extensions like .tar, .rpm, and .sh that I want to ignore. Thank you!
Just give a wildcard filename argument.
sed -i 's/^/#/' *.txt
You can use a for loop. For example:
cd your_folder
for f in *.txt; do
    sed -i 's/^/#/' "$f"
done
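If the .txt files also live in subdirectories, a find-based variant covers those too (a sketch, assuming GNU sed):
find . -type f -name '*.txt' -exec sed -i 's/^/#/' {} +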

Combine sed truncate x lines into a find command

We have a large log file in the same location on multiple servers, and I want to create a cron job to truncate the file to its last 100k lines.
The following command works:
sed -i 1,$(($(wc -l < /root/server123.example.com.log) -100000))d /root/server123.example.com.log
But the hostname on each server is different (server1, server2, server3, etc.), and I'd like a single command I can paste into each cron file. During my testing I wasn't sure how to achieve a wildcard in the above command.
I think the best way might be to combine it with a find command, but I'm not sure how to do that:
find /root/server*.example.com.log -type f -exec sed <NOT SURE..> \;
Any help would be appreciated.
During my testing I wasn't sure how to achieve a wildcard in the above command.
If there is just one matching log file on each server, you can simply insert the wildcard (the glob must expand to exactly one file for this to work):
sed -i 1,$(($(wc -l < /root/server*.example.com.log) -100000))d /root/server*.example.com.log
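To actually combine it with find as asked, one option is to let find collect the matching files and have a small shell snippet do the per-file arithmetic. A sketch, assuming GNU sed; the guard skips files that are already at or under 100k lines (which would otherwise produce a negative sed address):
find /root -maxdepth 1 -type f -name 'server*.example.com.log' -exec sh -c '
    for f; do
        lines=$(wc -l < "$f")
        # only trim files that actually exceed 100k lines
        if [ "$lines" -gt 100000 ]; then
            sed -i "1,$((lines - 100000))d" "$f"
        fi
    done
' sh {} +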

sed stripping hex from start of file including pattern

I've been at this most of this afternoon hacking with sed and it's a bit of a minefield.
I have a file of hex of the form:
485454502F312E31203230300D0A0D0AFFD8FFE000104A46494600
I'm pattern matching on 0D0A0D0A and have managed to delete the contents from the start of the file to there. The problem is that it leaves the 0D0A0D0A, so I have to do a second pass to pick that up.
Is there a way, in one command, to delete up to and including the pattern you match, and save the result back into the same file?
Thanks in advance.
ID
This should work:
sed -e 's/.*0D0A0D0A//' file.txt
You need to provide a better description of your problem.
Based on what you wrote, you can use the -i switch (edit files in place) of sed to save the changed file:
sed -i.bak 's/^.*0D0A0D0A//' file
PS: POSIX sed and some older versions of sed don't have the -i switch. If that's the case, use it like this:
sed 's/^.*0D0A0D0A//' file > _temp && mv _temp file
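A quick way to verify the behavior on a throwaway file (a minimal demonstration, assuming GNU sed; sample.hex is just a hypothetical name):
printf '485454502F312E31203230300D0A0D0AFFD8FFE000104A46494600\n' > sample.hex
sed -i 's/^.*0D0A0D0A//' sample.hex
cat sample.hex    # prints FFD8FFE000104A46494600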

sed text replace

How can I replace text with other text using GNU sed? I was hacked and am just trying to see if I can remove some of the code that was placed into my php files. The text is of the
eval(base64_decode('blah'));
variety. All of them are identical; I would just like to find and replace all of them in all files. I have tried some commands, but they either needlessly alter and damage text in the files or simply fail to run at all.
sed -i 's/text/other text/g' filename
(sed -i "s/eval(base64_decode('blah'))/huh/g" filename in your case).
find . -name \*.php -exec sed -i "s/text/other/g" {} \;
You may want to do a dry run and leave off the -i and just direct it to a file as a test first.
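One caveat: real base64 payloads usually contain / characters, which collide with the s/// delimiter. Using another delimiter sidesteps that; a sketch, assuming GNU sed, with 'blah' standing in for the actual payload:
find . -name '*.php' -exec sed -i "s#eval(base64_decode('blah'));##g" {} \;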
On macOS, -i usually doesn't work as-is: BSD sed requires a backup suffix argument, so use sed -i '' ... for in-place editing without a backup.

Why do I have to specify the -i switch with a backup extension when using ActivePerl?

I cannot get in-place editing Perl one-liners running under ActivePerl to work unless I specify them with a backup extension:
C:\> perl -i -ape "splice(@F, 2, 0, q(inserted text)); $_ = qq(@F\n);" file1.txt
Can't do inplace edit without backup.
The same command with -i.bak or -i.orig works a treat but creates an unwanted backup file in the process.
Is there a way around this?
This is a Windows/MS-DOS limitation. According to perldiag:
You're on a system such as MS-DOS that gets confused if you try reading from a deleted (but still opened) file. You have to say -i.bak, or some such.
Perl's -i implementation causes it to delete file1.txt while keeping an open handle to it, then re-create the file with the same name. This allows you to 'read' file1.txt even though it has been deleted and is being re-created. Unfortunately, Windows/MS-DOS does not allow you to delete a file that has an open handle attached to it, so this mechanism does not work.
Your best shot is to use -i.bak and then delete the backup file. This at least gives you some protection - for example, you could opt not to delete the backup if perl exits with a non-zero exit code. Something like:
perl -i.bak -ape "splice...." file1.txt && del file1.bak
A sample where both the recursive modification and the backup deletion are done by find; this works in e.g. MinGW Git Bash on Windows.
$ find . -name "*.xml" -print0 | xargs -0 perl -p -i.bak -e 's#\s*<property name="blah" value="false" />\s*##g'
$ find . -name "*.bak" -print0 | xargs -0 rm
Null-terminated filenames are passed between find and xargs to handle spaces in paths. The unusual s# delimiter avoids having to escape the slashes in the XML search term. This assumes you didn't have any .bak files lying around to begin with.