Trying to do this sort of thing in perl:
sed '1 a<!-- $Header: $\n Purpose: system generated file -->' -i test.xml
Add the header block and purpose at line #2 in the file, for XML, shell scripts, etc...
Don't want to do this either:
`sed '1 a<!-- \$Header: \$\n Purpose: system generated file -->' -i test.xml`
But realize it's an option if absolutely necessary.
If you only pass one file, you can use the following:
perl -i -pe'
$_ .= "<!-- \$Header: \$\n Purpose: system generated file -->\n" if $. == 1;
' test.xml
If you might pass multiple files, you'll need to add a line so that $. is reset at the end of each file.
perl -i -pe'
$_ .= "<!-- \$Header: \$\n Purpose: system generated file -->\n" if $. == 1;
close(ARGV) if eof;
' test*.xml
(Note: eof() means something different than just eof. How awful is that!)
I added line breaks for readability. The commands will work as is, but you can remove the line breaks if you so desire.
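For what it's worth, a tiny illustration of the eof vs. eof() difference mentioned above (a.txt and b.txt are placeholder file names, not from the original answer):
perl -ne '
    print "end of $ARGV\n" if eof;        # true at the end of each file
    print "end of all input\n" if eof();  # true only at the end of the last file
' a.txt b.txt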
Try it this way:
perl -ple '++$i == 2 and $_ = "changed" # change $_ as you want' in.txt > out.txt
I have a file that contains:
$conf['minified_version'] = 100;
I want to increment that 100 with sed, so I have this:
sed -r 's/(.*minified_version.*)([0-9]+)(.*)/echo "\1$((\2+1))\3"/ge'
The problem is that this strips the $conf from the original, along with any indentation spacing. What I have been able to figure out is that it's because it's trying to run:
echo " $conf['minified_version'] = $((100+1));"
so of course it's trying to replace the $conf with a variable which has no value.
Here is an awk version:
$ awk '/minified_version/{$3+=1} 1' file
$conf['minified_version'] = 101
This looks for lines that contain minified_version. Any time such a line is found, the third field, $3, is incremented by 1.
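Since the surrounding discussion is about doing sed-style edits with Perl, here is a rough Perl one-liner equivalent (a sketch: it assumes the version is the first number on the line, and foo.conf is a placeholder file name):
perl -i -pe 's/(\d+)/$1 + 1/e if /minified_version/' foo.conf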
My suggested approach to this would be to have a file on-disk that contained nothing but the minified_version number. Then, incrementing that number would be as simple as:
minified_version=$(< minified_version)
printf '%s\n' "$(( minified_version + 1 ))" >minified_version
...and you could just put a sigil in your source file where that needs to be replaced. Let's say you have a file named foo.conf.in that contains:
$conf['minified_version'] = #MINIFIED_VERSION#
...then you could simply run, in your build process:
sed -e "s/#MINIFIED_VERSION#/$(<minified_version)/g" <foo.conf.in >foo.conf
This has the advantage that you never have code changing foo.conf.in, so you don't need to worry about bugs overwriting the file's contents. It also means that if you're checking your files into source control, so long as you only check in foo.conf.in and not foo.conf you avoid potential merge conflicts due to context near the version number changing.
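Putting the two pieces together, a hypothetical build step might look like this (file names exactly as in the snippets above; a sketch, not part of the original answer):
#!/usr/bin/env bash
# Bump the stored version number, then regenerate foo.conf from the template.
minified_version=$(< minified_version)
printf '%s\n' "$(( minified_version + 1 ))" > minified_version
sed -e "s/#MINIFIED_VERSION#/$(< minified_version)/g" < foo.conf.in > foo.conf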
Now, if you did want to do the native operation in-place, here's a somewhat overdesigned approach written in pure native bash (reading from infile and writing to outfile; just rename outfile back over infile when successful to make this an in-place replacement):
target='$conf['"'"'minified_version'"'"'] = '
suffix=';'
while IFS= read -r line; do
    if [[ $line = "$target"* ]]; then
        value=${line##*=}
        value=${value%$suffix}
        new_value=$(( value + 1 ))
        printf '%s\n' "${target}${new_value}${suffix}"
    else
        printf '%s\n' "$line"
    fi
done <infile >outfile
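As noted in the lead-in, once the loop has written outfile successfully, renaming it over the original completes the in-place edit:
mv outfile infile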
I have this:
perl -pi -e 'print "code I want to insert\n" if $. == 2' *.php
which puts the line code I want to insert on the second line of the file, which is what I need done to every single PHP file
If I run it in a directory with both PHP files and non-PHP files it does the right thing, but only to one PHP file. I thought *.php would apply it to all PHP files, but it doesn't do it.
How can I write it so it will modify every PHP file in a directory? Bonus if there is an easy way to do this recursively through all directories. I don't mind running the Perl script for each directory as there aren't that many, but don't want to hand edit every single file.
The problem is that the file handle ARGV that Perl uses to read the files passed on the command line is never explicitly closed, so the line number $. just keeps incrementing after the end of the first file and never goes back to one.
Fix this by closing ARGV when it has reached end of file. Perl will reopen it to read the next file in the list, and so reset $.
perl -i -pe 'print "code I want to insert\n" if $. == 2; close ARGV if eof' *.php
If you can use sed, this should work:
sed -si '2i\CODE YOU WANT TO INSERT' *.php
To do it recursively, you might try:
find -name '*.php' -execdir sed -si '2i\CODE YOU WANT TO INSERT' '{}' +
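If you prefer to stay with Perl for the recursive case, the Perl one-liner above can be combined with find in the same way (a sketch under the same assumptions):
find . -name '*.php' -exec perl -i -pe 'print "code I want to insert\n" if $. == 2; close ARGV if eof' '{}' +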
Using File::Find.
Note, I've included 3 sanity checks to verify that things are actually being processed the way that you want.
Initially the script will just print out the found files until you comment out the bare return.
Then the script will save backups unless you uncomment the unlink statement.
Finally, the script will only process a single file until you comment out the exit statement.
These three checks are just so you can verify that everything is working as you desire before editing a whole directory tree.
use strict;
use warnings;
use File::Find;

my $to_insert = "code I want to insert\n";

find(sub {
    return unless -f && /\.php$/;
    print "Edit $File::Find::name\n";
    return; # Comment out once satisfied with found files

    local $^I = '.bak';
    local @ARGV = $_;
    while (<>) {
        print $to_insert if $. == 2 && $_ ne $to_insert;
        print;
    }
    # unlink "$_$^I"; # Uncomment to delete backups once certain that first file is processed correctly.
    exit; # Comment out once certain that first file is processed correctly
}, '.');
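A hypothetical invocation: save the script above as, say, insert_line.pl and run it from the top of the directory tree you want to process (it starts its search at '.'):
perl insert_line.pl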
I found a command line using Perl that inserts headers into my files without going through the tedious process of inserting them one by one. Can someone walk me through the Perl aspect of this command line? I'm new to this and can't seem to find the right explanations for what I wrote.
cat header.txt | perl -0 -i -pe 'BEGIN{$h = <STDIN>}; print $h' 1*
-e
rather than provide a script in a xxxx.pl file, provide it on the command line
-p
makes it iterate over filename arguments somewhat like sed but also prints the contents of $_ at the end of the script.
the two above are combined in -pe
-i
indicate you want to edit the file in place and write the output to the same file. In practice, Perl renames the input file and reads from this renamed version while writing to a new file with the original name
-0
redefines the end of record character (\n by default) so that you can read the entire input file as a single line
1*
is the command line argument to your script, so I guess you are modifying any file with a name that starts with 1 (you could have used *.c, or whatever depending on the type of files you are trying to modify)
print $h
prints the variable $h that is the "main" of your script. if it was initialized with the content of the header file (the intent of this one-liner) then it will print the header file
BEGIN{ some code here }
this is stuff you execute before the script starts. this is where I'm stumped. this doesn't seem like valid perl code
so basically:
this will supposedly slurp the entire header file (because of -0) in the BEGIN block and store it in the variable $h
iterate over all the files specified by the wildcards at the end of the command line
for each file: print the header (print $h) then print the file itself (because of -pe)
so it's equivalent to spelling the script out:
$h = <STDIN>; # get the content of the entire header file (this is the BEGIN block, run first)
while (<>) { # loop implied by -pe, iterates over all the 1* files
    # the main contents of the "-e" script are inserted below as part of executing -pe
    print $h;  # print the header we saved
    print $_;  # implied by -pe, and since we are using -0, this prints the entire content in one shot
    # end of the "-e" script. Again, it was a single print $h statement; the second print is implied by -pe
}
It's a bit hard to explain, take a look at the perlrun documentation for details (run man perlrun).
This is not a 100% complete explanation because I don't think the BEGIN block is right. I tried it on my Ubuntu machine and it complained about its syntax too.
Here's something similar, with an explanation. The program in the question doesn't run on my Mac.
I needed to add the #nullable disable directive to the top of all my C# files as part of migrating to nullable reference types.
perl -w -i -p -0777 -e 's/^/#nullable disable\n\n/' $(find . -iname '*.cs')
-w enable warnings
-i edit files in place
-p read each file block by block, printing each block after applying a perl expression. the default block size is one line
-0777 changes the default block size to the entire file
-e the perl expression to execute
The final argument uses shell command substitution to create a list of files. It passes that list of file paths to the perl command. The find command searches for files that end in .cs.
The Perl program is a single substitution command. It matches the very beginning of the block and replaces it (prepends to it, really) with "#nullable disable" and a couple of newlines.
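A variant that avoids the command substitution (and therefore copes with paths containing spaces), sketched with the same Perl expression:
find . -iname '*.cs' -exec perl -w -i -p -0777 -e 's/^/#nullable disable\n\n/' '{}' +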
How do I insert lines of text into a file after a particular line in Unix?
Background: The file is an autogenerated text file, but I manually have to edit it every time it is regenerated to add in 4 additional lines after a particular line. I can guarantee that this line will always be in the file, but I cannot guarantee exactly what line it will be on, so I want the additional lines to be added based on the position of this line rather than at a fixed row number. I want to automate this process as it is part of my build process.
I'm using Mac OS X so I can make use of Unix command-line tools, but I'm not very familiar with such tools and cannot work out how to do this.
EDIT
Thanks for the solutions, although I haven't managed to get them working yet.
I tried the sed solution:
sed -i '/<string>1.0</string>/ a <key>CFBundleHelpBookFolder</key>\
<string>SongKongHelp</string>\
<key>CFBundleHelpBookName</key>\
<string>com.jthink.songkong.Help</string>
' /Applications/SongKong.app/Contents/Info.plist
but I get the error:
sed: 1: "/Applications/SongKong. ...": invalid command code S
and I tried the bash solution
#!/bin/bash
while read line; do
echo "$line"
if [[ "$line" = "<string>1.0</string>"]]; then
cat mergefile.txt # or echo or printf your extra lines
fi
done < /Applications/SongKong.app/Contents/Info.plist
but got the error:
./buildosx4.sh: line 5: syntax error in conditional expression: unexpected token `;'
./buildosx4.sh: line 5: syntax error near `;'
./buildosx4.sh: line 5: ` if [[ "$line" = "<string>1.0</string>"]]; then'
EDIT 2
Now working, I was missing a space:
#!/bin/bash
while read line; do
    echo "$line"
    if [[ "$line" = "<string>1.0</string>" ]]; then
        cat mergefile.txt # or echo or printf your extra lines
    fi
done < /Applications/SongKong.app/Contents/Info.plist
Assuming the marker line contains fnord and nothing else:
awk '1;/^fnord$/{print "foo"; print "bar";
print "baz"; print "quux"}' input >output
Another way to look at this is that you want to merge two files at some point in one of the files. If your extra four lines were in a separate file, you could make a more generic tool like this:
#!/usr/bin/awk -f
BEGIN {
    SEARCH=ARGV[1]; # Get the search string from the command line
    delete ARGV[1]; # Delete the argument, so subsequent arguments are files
}
# Collect the contents of the first file into a variable
NR==FNR {
    ins=ins $0 "\n";
    next;
}
1 # print every line in the second (or rather the non-first) file
# Once we're in the second file, if we see the match, print stuff...
$0==SEARCH {
    printf("%s", ins); # Using printf instead of print to avoid the extra newline
    next;
}
I've spelled this out for ease of documentation; you could obviously shorten it to something that looked more like triplee's answer. You'd invoke this like:
$ scriptname "Text to match" mergefile.txt origfile.txt > outputfile.txt
Done this way, you'd have a tool that could be used to achieve this kind of merge on different files and with different text.
Alternately, you could of course do this in pure bash.
#!/bin/bash
while read line; do
    echo "$line"
    if [[ "$line" = "matchtext" ]]; then
        cat mergefile.txt # or echo or printf your extra lines
    fi
done < origfile.txt
The problem can be solved efficiently for any filesize by this algorithm:
Read each line from the original file and print it to a tempfile.
If the last line was the marker line, print your insertion lines to the tempfile
Print the remaining lines
Rename the tempfile to the original filename.
As a Perl script:
#!perl
use strict; use warnings;
$^I = ".bak"; # create a backup file
while (<>) {
    print;
    last if /regex to determine if this is the line/;
}
print <<'END';
Yourstuff
END
print while <>; # print remaining lines
# renaming automatically done.
Testfile:
foo
bar
baz
qux
quux
Regex is /^ba/.
Usage: $ perl this_script.pl source-file
The testfile after processing:
foo
bar
Yourstuff
baz
qux
quux
Use the sed 'a' command with a regex for the line you need to match:
sed -i '/the target line looks like this/ a this is line 1\
this is line 2\
this is line 3\
this is line 4
' FILENAME
I have a file that has some entries like
--ERROR--- Failed to execute the command with employee Name="shayam" Age="34"
--Successfully executed the command with employee Name="ram" Age="55"
--ERROR--- Failed to execute the command with employee Name="sam" Age="23"
--ERROR--- Failed to execute the command with employee Name="yam" Age="3"
I have to extract only the Name and Age of those for whom the command execution failed.
In this case I need to extract shayam 34, sam 23, yam 3. I need to do this in Perl. Thanks a lot.
perl -p -e 's/../../g' file
Or to replace in place:
perl -pi -e 's/../../g' file
As a one-liner:
perl -lne '/^--ERROR---.*Name="(.*?)" Age="(.*?)"/ && print "$1 $2"' file
Your title doesn't make it clear. Anyway...
while (<>) {
    next if !/^--ERROR/;
    /Name="([^"]+)"\s+Age="([^"]+)"/;
    print $1, " ", $2, "\n";
}
can do it reading from stdin; of course, you can change the reading loop to anything else, and the print to something that populates a hash or whatever, according to your needs.
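A sketch of the hash-populating variation mentioned above (the hash name %age_by_name is illustrative, not from the original answer):
my %age_by_name;
while (<>) {
    next unless /^--ERROR/;
    $age_by_name{$1} = $2 if /Name="([^"]+)"\s+Age="([^"]+)"/;
}
print "$_ $age_by_name{$_}\n" for sort keys %age_by_name;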
As a one liner, try:
perl -ne 'print "$1 $2\n" if /^--ERROR/ && /Name="(.*?)"\s+Age="(.*?)"/;'
This is a lot like using sed, but with Perl syntax.
The immediate question of "how do I use perl like sed?" is best answered with s2p, the sed to perl converter. Given the command line, "sed $script", simply invoke "s2p $script" to generate a (typically unreadable) perl script that emulates sed for the given set of commands.
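For example (a sketch following the invocation described above; foo2bar.pl and input.txt are placeholder names, and s2p must be installed, e.g. from older Perl distributions or CPAN's App::s2p):
s2p 's/foo/bar/g' > foo2bar.pl
perl foo2bar.pl input.txt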
Refer to the comments:
my @a = <>;             # Read the entire file into an array
chomp @a;               # Remove the trailing newlines
@a = grep {/ERROR/} @a; # Remove lines that do not contain ERROR
# Map with a sed-like regexp to keep only the names and ages:
@a = map {s/^.*Name=\"([a-z]+)\" Age=\"([0-9]+)\".*$/$1 $2/; $_} @a;
print join " ", @a;     # Print the array content
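Hypothetical usage, assuming the snippet above is saved as extract.pl (it reads from STDIN or from file names given on the command line):
perl extract.pl logfile.txt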