Unable to replace a line using sed

I have a file which has the below lines-
[arakoon_scaler_thresholds]
checker_loop_time = 3600
I want to replace the above 2 lines with the below line -
[arakoon_scaler_thresholds]
checker_loop_time = 60
I am using the command below, but the changes are not taking place.
sed -i "s/arakoon_scaler_thresholds\nchecker_loop_time = 3600/arakoon_scaler_thresholds\nchecker_loop_time = 60/g" file.ini
Note: there are multiple parameters named checker_loop_time; I want to change only the one under the [arakoon_scaler_thresholds] section.

The substitution fails because sed reads input one line at a time, so a pattern containing \n can never match. Try this instead, and once the output looks okay, add the -i option for in-place editing:
$ cat ip.txt
checker_loop_time = 5463
[arakoon_scaler_thresholds]
checker_loop_time = 3600
checker_loop_time = 766
$ sed -E '/arakoon_scaler_thresholds/{N;s/(checker_loop_time\s*=\s*)3600/\160/}' ip.txt
checker_loop_time = 5463
[arakoon_scaler_thresholds]
checker_loop_time = 60
checker_loop_time = 766
This searches for arakoon_scaler_thresholds, appends the next line to the pattern space with N, and then performs the required search and replace.
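Putting it together, here is a minimal end-to-end sketch (the file name and sample values are just for illustration; -i without a backup suffix and \s assume GNU sed):

```shell
# Build a sample file with several checker_loop_time entries.
printf '%s\n' \
  'checker_loop_time = 5463' \
  '[arakoon_scaler_thresholds]' \
  'checker_loop_time = 3600' \
  'checker_loop_time = 766' > ip.txt

# Same command as above, now editing the file in place with -i.
sed -i -E '/arakoon_scaler_thresholds/{N;s/(checker_loop_time\s*=\s*)3600/\160/}' ip.txt

cat ip.txt
```

Only the checker_loop_time directly under the [arakoon_scaler_thresholds] section is changed; the other entries stay untouched.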
You can also use awk:
$ awk 'p ~ /arakoon_scaler_thresholds/{sub("checker_loop_time = 3600","checker_loop_time = 60")} {p=$0}1' ip.txt
checker_loop_time = 5463
[arakoon_scaler_thresholds]
checker_loop_time = 60
checker_loop_time = 766
where the previous line is saved in the variable p and checked against the section header.

Related

sed read entire lines from one file and replace lines in another file

I have two files: file1, file with content as follows:
file1:
f1 line1
f1 line2
f1 line3
file2:
f2 line1
f2 line2
f2 line3
f2 line4
I wonder how I can use sed to read lines 1 to 3 from file1 and use these lines to replace lines 2 to 3 in file2.
The output should be like this:
f2 line1
f1 line1
f1 line2
f1 line3
f2 line4
Can anyone help? Thanks.
If you have GNU sed, which supports s///e, you can use the e flag to call head -n3 file1.
I think (without having tested yet):
sed '2d;3s/.*/head -n3 file1/e' file2
I'll go verify...
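To check the idea, here is a self-contained run with the sample files from the question (s///e is a GNU sed extension):

```shell
# Recreate the two sample files.
printf 'f1 line1\nf1 line2\nf1 line3\n' > file1
printf 'f2 line1\nf2 line2\nf2 line3\nf2 line4\n' > file2

# Delete line 2, then replace line 3 with the output of `head -n3 file1`.
sed '2d;3s/.*/head -n3 file1/e' file2
```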
sed is the best tool for doing s/old/new on individual strings. That's not what you're trying to do, so you shouldn't be considering sed for it. Using any awk in any shell on every UNIX box:
$ cat tst.awk
NR==FNR {
    if ( FNR <= num ) {
        new = new $0 ORS
    }
    next
}
(beg <= FNR) && (FNR <= end) {
    if ( !done++ ) {
        printf "%s", new
    }
    next
}
{ print }
$ awk -v num=3 -v beg=2 -v end=3 -f tst.awk file1 file2
f2 line1
f1 line1
f1 line2
f1 line3
f2 line4
To read a different number of lines from file1, just change the value of num; to use the replacement text for a different range of lines in file2, just change the values of beg and end. For example, suppose you wanted to use 10 lines of data from a pipe (e.g. seq 15 |) instead of file1, and to replace lines 3 through 17 of a file2 like yours but with 20 lines instead of 4. You'd leave the awk script as-is and just tweak how you call it:
$ seq 15 | awk -v num=10 -v beg=3 -v end=17 -f tst.awk - file2
f2 line1
f2 line2
1
2
3
4
5
6
7
8
9
10
f2 line18
f2 line19
f2 line20
Try doing the same with sed for such a minor change in requirements, and note how you'd have to change the script itself, and that the result wouldn't be portable to other sed versions.
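For completeness, the first run above can be reproduced from scratch like this (same script, sample files created inline):

```shell
# Sample inputs from the question.
printf 'f1 line1\nf1 line2\nf1 line3\n' > file1
printf 'f2 line1\nf2 line2\nf2 line3\nf2 line4\n' > file2

cat > tst.awk <<'EOF'
NR==FNR {
    if ( FNR <= num ) {
        new = new $0 ORS
    }
    next
}
(beg <= FNR) && (FNR <= end) {
    if ( !done++ ) {
        printf "%s", new
    }
    next
}
{ print }
EOF

# Use the first 3 lines of file1 to replace lines 2-3 of file2.
awk -v num=3 -v beg=2 -v end=3 -f tst.awk file1 file2
```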

Delete \n characters from line range in text file

Let's say we have a text file with 1000 lines.
How can we delete the newline characters from lines 20 to 500 (replacing them with a space, for example)?
My try:
sed '20,500p; N; s/\n/ /;' #better not to say anything
All other lines (1-19 && 501-1000) should be preserved as-is.
As I'm familiar with sed, awk or perl solutions are welcomed, but please give an explanation with them as I'm a perl and awk newbie.
You could use something like this (my example is on a slightly smaller scale :-)
$ cat file
1
2
3
4
5
6
7
8
9
10
$ awk '{printf "%s%s", $0, (2<=NR&&NR<=5?FS:RS)}' file
1
2 3 4 5 6
7
8
9
10
The second %s in the printf format specifier is replaced by either the Field Separator (a space by default) or the Record Separator (a newline) depending on whether the Record Number is within the range.
Alternatively:
$ awk '{ORS=(2<=NR&&NR<=5?FS:RS)}1' file
1
2 3 4 5 6
7
8
9
10
Change the Output Record Separator depending on the line number and print every line.
You can pass variables for the start and end if you want, using awk -v start=2 -v end=5 '...'
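For example, the second variant with the range passed in from the command line might look like this (the variable names start/end are my choice):

```shell
# Join lines 2 through 5 onto the following line, as in the output above.
seq 10 | awk -v start=2 -v end=5 '{ORS = (start <= NR && NR <= end ? FS : RS)}1'
```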
This might work for you (GNU sed):
sed -r '20,500{N;s/^(.*)(\n)/\2\1 /;D}' file
or perhaps more readably:
sed ':a;20,500{N;s/\n/ /;ta}' file
Using a perl one-liner to strip the newline:
perl -i -pe 'chomp if 20..500' file
Or to replace it with a space:
perl -i -pe 's/\R/ / if 20..500' file
Explanation:
Switches:
-i: Edit <> files in place (makes backup if extension supplied)
-p: Creates a while(<>){...; print} loop for each “line” in your input file.
-e: Tells perl to execute the code on command line.
Code:
chomp: removes the trailing newline
20 .. 500: the range (flip-flop) operator .., true between line numbers 20 and 500
Here's a perl version:
my $min = 5; my $max = 10;
while (<DATA>) {
    if ($. > $min && $. < $max) {
        chomp;
        $_ .= " ";
    }
    print;
}
__DATA__
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
Output:
1
2
3
4
5
6 7 8 9 10
11
12
13
14
15
It reads in DATA (which you can set to a filehandle or whatever your application requires) and checks the line number, $.. While the line number is strictly between $min and $max, the line ending is chomped off and a space is appended to the line; otherwise, the line is printed as-is.

Splitting file based on variable

I have a file with several lines of the following:
DELIMITER ;
I want to create a separate file for each of these sections.
The man page of the split command does not seem to offer such an option.
The split command only splits a file into blocks of equal size (except possibly the last one).
However, awk is perfect for your type of problem. Here's a solution example.
Sample input
1
2
3
DELIMITER ;
4
5
6
7
DELIMITER ;
8
9
10
11
awk script split.awk
#!/usr/bin/awk -f
BEGIN {
    n = 1;
    outfile = n;
}
{
    # FILENAME is undefined inside the BEGIN block
    if (outfile == n) {
        outfile = FILENAME n;
    }
    if ($0 ~ /DELIMITER ;/) {
        n++;
        outfile = FILENAME n;
    } else {
        print $0 >> outfile;
    }
}
As pointed out by glenn jackman, the code also can be written as:
#!/usr/bin/awk -f
BEGIN {
    n = 1;
}
$0 ~ /DELIMITER ;/ {
    n++;
    next;
}
{
    print $0 >> FILENAME n;
}
The command-line form awk -v x="DELIMITER ;" -v n=1 '$0 ~ x {n++; next} {print > FILENAME n}' is more convenient if you don't use the script often; otherwise you can save it in a file as shown above.
Test run
$ ls input*
input
$ chmod +x split.awk
$ ./split.awk input
$ ls input*
input input1 input2 input3
$ cat input1
1
2
3
$ cat input2
4
5
6
7
$ cat input3
8
9
10
11
The script is just a starting point. You probably have to adapt it to your personal needs and environment.
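As an aside, if you have GNU coreutils, csplit can do much the same job without a script; a sketch (--suppress-matched, -z, and the '{*}' repeat count are GNU extensions):

```shell
printf '1\n2\n3\nDELIMITER ;\n4\n5\nDELIMITER ;\n6\n' > input

# Split before each matching line, drop the matched line itself (--suppress-matched),
# skip empty pieces (-z), and repeat for every match ('{*}').
csplit -z --suppress-matched input '/DELIMITER ;/' '{*}'
```

The pieces land in xx00, xx01, xx02, without the DELIMITER lines.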

Concatenate Lines in Bash

Most command-line programs just operate on one line at a time.
Can I use a common command-line utility (echo, sed, awk, etc) to concatenate every set of two lines, or would I need to write a script/program from scratch to do this?
$ cat myFile
line 1
line 2
line 3
line 4
$ cat myFile | __somecommand__
line 1line 2
line 3line 4
sed 'N;s/\n/ /;'
Grab the next line with N, then substitute the embedded newline character with a space.
seq 1 6 | sed 'N;s/\n/ /;'
1 2
3 4
5 6
$ awk 'ORS=(NR%2)?" ":"\n"' file
line 1 line 2
line 3 line 4
$ paste -d ' ' - - < file
line 1 line 2
line 3 line 4
(paste joins lines with a tab by default; -d ' ' makes the separator a space)
Not a particular command, but this snippet of shell should do the trick:
cat myFile | while read -r line; do echo -n "$line"; [ "${i}" ] && echo && i= || i=1; done
You can also use Perl as:
$ perl -pe 'chomp;$i++;unless($i%2){$_.="\n"};' < file
line 1line 2
line 3line 4
Here's a shell script version that doesn't need to toggle a flag:
while read -r line1; do read -r line2; echo "$line1$line2"; done < inputfile

SAG challenge (sed, awk, grep): multi-pattern file filtering

So, my dear SOers, let me get straight to the point:
specification: filter a text file using pairs of patterns.
Example: if we have a file:
line 1 blabla
line 2 more blabla
line 3 **PAT1a** blabla
line 4 blabla
line 5 **PAT1b** blabla
line 6 blabla
line 7 **PAT2a** blabla
line 8 blabla
line 9 **PAT2b** blabla
line 10 **PAT3a** blabla
line 11 blabla
line 12 **PAT3b** blabla
more and more blabla
should give:
line 3 **PAT1a** blabla
line 4 blabla
line 5 **PAT1b** blabla
line 7 **PAT2a** blabla
line 8 blabla
line 9 **PAT2b** blabla
line 10 **PAT3a** blabla
line 11 blabla
line 12 **PAT3b** blabla
I know how to filter just one part of it using sed:
sed -n -e '/PAT1a/,/PAT1b/{p}'
But how do I filter all the snippets? Do I need to write the pattern pairs in a configuration file, read a pair from it, apply the sed command above, move on to the next pair, and so on?
Note: Suppose PAT1, PAT2 and PAT3, etc share no common prefix(like 'PAT' in this case)
One more thing: how do I make a newline in quoted text in this post without leaving a whole blank line?
I assumed the pattern pairs are given as a separate file. Then, when they appear in order in the input, you could use this awk script:
awk 'NR == FNR { a[++n] = $1; b[n] = $2; next }
     !s && i < n && $0 ~ a[i+1] { s = 1 }
     s
     s && $0 ~ b[i+1] { s = 0; i++ }' patterns.txt input.txt
(The i < n guard stops a[i+1] from becoming an empty, match-everything pattern once the last pair has been consumed.)
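Here's a self-contained run of the in-order version, with a hypothetical patterns.txt holding one start/end pair per line. I've added an explicit pair count n and an i < n guard (my addition), since without it a[i+1] becomes the empty pattern after the last pair and would match every remaining line:

```shell
cat > patterns.txt <<'EOF'
PAT1a PAT1b
PAT2a PAT2b
PAT3a PAT3b
EOF

cat > input.txt <<'EOF'
line 1 blabla
line 3 PAT1a blabla
line 4 blabla
line 5 PAT1b blabla
line 6 blabla
line 7 PAT2a blabla
line 9 PAT2b blabla
line 10 PAT3a blabla
line 12 PAT3b blabla
more and more blabla
EOF

# Lines from each start pattern through its end pattern are printed;
# everything else, including the trailing blabla, is dropped.
awk 'NR == FNR { a[++n] = $1; b[n] = $2; next }
     !s && i < n && $0 ~ a[i+1] { s = 1 }
     s
     s && $0 ~ b[i+1] { s = 0; i++ }' patterns.txt input.txt
```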
And a more complicated version when the patterns can appear out of order:
awk 'NR == FNR { a[++n] = $1; b[n] = $2; next }
{ for (i = 1; !s && i <= n; i++) if ($0 ~ a[i]) s = i; }
s
s && $0 ~ b[s] { s = 0 }' patterns.txt input.txt
Awk.
$ awk '/[0-9]a/{o=$0;getline;$0=o"\n"$0;print;next}/[0-9]b/' file
line 3 PAT1a blabla
line 4 blabla
line 5 PAT1b blabla
line 7 PAT2a blabla
line 8 blabla
line 9 PAT2b blabla
line 10 PAT3a blabla
line 11 blabla
line 12 PAT3b blabla
Note: since you said the patterns share no common prefix, I match on the number followed by [ab] instead.
Use the b command to skip all lines between the patterns and the d command to delete all other lines:
sed -e '/PAT1a/,/PAT1b/b' -e '/PAT2a/,/PAT2b/b' -e '/PAT3a/,/PAT3b/b' -e d
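And if the pairs live in a configuration file, as the question suggests, the sed expressions can be assembled in a loop; a sketch with file names of my choosing, assuming the patterns contain no characters special to sed or to the / delimiter:

```shell
cat > pairs.txt <<'EOF'
PAT1a PAT1b
PAT2a PAT2b
EOF

cat > input.txt <<'EOF'
line 1 blabla
line 3 PAT1a blabla
line 4 blabla
line 5 PAT1b blabla
line 6 blabla
line 7 PAT2a blabla
line 9 PAT2b blabla
more blabla
EOF

# Build one -e '/start/,/end/b' expression per pair, then delete everything else.
set --
while read -r start end; do
    set -- "$@" -e "/$start/,/$end/b"
done < pairs.txt

sed "$@" -e d input.txt
```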