I've been struggling with this for more than an hour now and I'm not sure what's wrong. Using Perl, I'm trying to use sed to do an in-place replacement of a string in /etc/nginx/nginx.conf, using the sed command below:
my $replacement_string = getstringforreplace();
my $command = qq ( sudo sed -i "s~default_type application/octet-stream;~default_type application/octet-stream;$replacement_string~" /etc/nginx/nginx.conf );
system ( $command );
die ( $command ); # Using this for debugging purposes.
I'm really trying to place the $replacement_string after matching that 'default_type' line in nginx.conf, but I'm not sure what to use besides sed.
I've (1) changed the delimiters to avoid any issues with the forward slashes, (2) double quoted the replacement (I'm really not sure why, I was using single quotes before), and (3) removed a newline character I had right before the $replacement_string, among other things.
I went ahead and put the die ( $command ); in there as noted in this answer, but I'm not seeing what's wrong. This is what that returns -- which is pretty much what I want:
sudo sed -i "s~default_type application/octet-stream;~default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /usr/share/nginx/html;
    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;
    location / {
    }
    error_page 404 /404.html;
    location = /40x.html {
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
~" /etc/nginx/nginx.conf
The $replacement_string is returned by the call to the subroutine getstringforreplace() below:
sub getstringforreplace
{
    my $message = qq (
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /usr/share/nginx/html;
    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;
    location / {
    }
    error_page 404 /404.html;
    location = /40x.html {
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
);
    return $message;
}
Any guidance would be really appreciated, as I'm not sure how to get rid of this unterminated `s' command error. I'm thinking now it has to do with the qq() in the subroutine I'm calling.
sed doesn't like newlines in the replacement literal.
$ sed 's~a~b~' /dev/null
$ sed 's~a~b
~' /dev/null
sed: -e expression #1, char 5: unterminated `s' command
It does accept \n, so you could replace the newlines with \n. Of course, you could simply do the work in Perl (see the sketch after this list). This will help you address a number of other issues:
Shell command injection bug.
Lack of escaping \ in the replacement literal.
Lack of error detection and handling.
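For example, here's a minimal sketch of doing the edit entirely in Perl, assuming the getstringforreplace() subroutine from the question and enough privileges to write /etc/nginx/nginx.conf (the original used sudo):

use strict;
use warnings;

my $file = '/etc/nginx/nginx.conf';
my $replacement_string = getstringforreplace();

# Slurp the whole file into one string.
open my $in, '<', $file or die "Cannot read $file: $!";
my $content = do { local $/; <$in> };
close $in;

# Append the new text right after the matching line; braces as
# delimiters avoid having to escape the / in the pattern.
$content =~ s{default_type application/octet-stream;}
             {default_type application/octet-stream;$replacement_string}
    or die "Pattern not found in $file";

# Write the modified content back.
open my $out, '>', $file or die "Cannot write $file: $!";
print $out $content;
close $out or die "Cannot close $file: $!";

No shell is involved, so there's nothing to inject, nothing to escape for sed, and every step reports its own failure.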
Thanks to @Beta's comments above I was able to obtain the result I wanted. It involved:
changing the subroutine getstringforreplace() to print the contents of qq() to a temporary file (a sketch of this follows below),
using sed's r command to do an in-place edit that reads the contents of that file into /etc/nginx/nginx.conf after the matching line,
and then removing the temporary file.
...which is below:
getstringforreplace(); # Prints $replacement_string to temp.txt.
my $command = qq ( sudo sed -i -e '/octet-stream;/r temp.txt' /etc/nginx/nginx.conf );
system ( $command );
system ( 'sudo rm temp.txt' );
Ideally, I would have liked to not have to print to a file, etc., but for the moment this yields the desired result.
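For reference, a sketch of what the modified subroutine might look like, assuming it is meant to write its qq() text to temp.txt as described (error handling added):

sub getstringforreplace
{
    my $message = qq (
...same nginx snippet as above...
);
    open my $fh, '>', 'temp.txt' or die "Cannot write temp.txt: $!";
    print $fh $message;
    close $fh or die "Cannot close temp.txt: $!";
    return $message;
}

You could also delete the file afterwards with Perl's unlink('temp.txt') instead of shelling out to sudo rm, permissions permitting.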
Related
Need to change one line in nginx.conf
I need to change client_max_body_size 1m to client_max_body_size 10m. I used this command:
sed -i "s/^client_max_body_size 1m;$/client_max_body_size 10m;/g" /etc/nginx/nginx.conf
sed: 1: "nginx.conf": extra characters at the end of n command
I got this message and I don't know what I did wrong.
You can use
sed -i '' 's/^client_max_body_size 1m;$/client_max_body_size 10m;/g' /etc/nginx/nginx.conf
The '' after the -i option in FreeBSD (and macOS) sed enables in-place editing: BSD sed requires an explicit backup-suffix argument after -i, and the empty string means no backup file.
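If you need one command that behaves the same under both GNU sed and BSD/macOS sed, a Perl one-liner sidesteps the -i incompatibility entirely (same substitution as above):

perl -pi -e 's/^client_max_body_size 1m;$/client_max_body_size 10m;/' /etc/nginx/nginx.conf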
I am not familiar with Perl. I am reading an installation guide at the moment, and the following Linux command has come up:
perl -p -i -e "s/enforcing/disabled/" /etc/selinux/config
Now, I am trying to understand this. Here is my understanding so far:
-e simply allows for executing whatever follows
-p puts the commands that follow -e in a loop. Now this is strange to me: the command seems to be saying "write s/enforcing/disabled/ into /etc/selinux/config". Then again, where is the "write" command? And what is -i (in-place) good for?
-p changes
s/enforcing/disabled/
to something equivalent to
while (<>) {
    s/enforcing/disabled/;
    print;
}
which is short for
while (defined( $_ = <ARGV> )) {
    $_ =~ s/enforcing/disabled/;
    print($_);
}
What this does:
It reads a line from ARGV into $_. ARGV is a special file handle that reads from each of the files specified as arguments (or from STDIN if no files are provided).
If EOF has been reached, the loop and therefore the program exits.
It replaces the first occurrence of enforcing on that line with disabled.
It prints out the modified line to the default output handle. Because of -i, this is a handle to a new file with the same name as the one from which the program is currently reading.*
Repeat.
For example,
$ cat a
foo
bar enforcing the law
baz
enforcing enforcing
$ perl -pe's/enforcing/disabled/' -i a
$ cat a
foo
bar disabled the law
baz
disabled enforcing
* — In old versions of Perl, the old file has already been deleted at this point, but it's still accessible as long as there's an open file handle to it. In very new versions of Perl, this writes to a temporary file that will later overwrite the file from which the program is reading.
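As an aside, if you want to keep the original file around, you can give -i a backup suffix; Perl then saves the unmodified input under that name before rewriting it:

perl -p -i.bak -e 's/enforcing/disabled/' /etc/selinux/config
# the original is preserved as /etc/selinux/config.bak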
To find out exactly what Perl is going to do, you can use the O module
perl -MO=Deparse -p -i -e "s/enforcing/disabled/" file
outputs
BEGIN { $^I = ""; }
LINE: while (defined($_ = readline ARGV)) {
    s/enforcing/disabled/;
}
continue {
    die "-p destination: $!\n" unless print $_;
}
-e syntax OK
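And to preview the effect of the one-liner without touching the file, just drop -i; the rewritten lines go to stdout instead of back into the file:

perl -pe 's/enforcing/disabled/' /etc/selinux/config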
I have a directory with lots of .wav files. I send the data of each file through a curl command and store the results in different files.
This is not the complete script, just the relevant part:
@command = '--request POST --data-binary "@Raajpal_long.wav"
"https://xxxx.xx.xxxx:xxxx/SpeakerId
action=search&confidence_threshold=0.0&number_of_matches=20&format=8K_PCM16"';
$stdout = system("curl @command");
When I run the Perl script, it prints the output to the command-line window:
{"status": 0, "processing_time": 96.0, "enrollment_audio_time": 131.10000610351562, "matches": [{"speaker": "sw", "identification_score": 252.54136657714844}]}
I want to store this output in a file.
I used:
open (FILE, ">1.txt") or die "Unable to open 1.txt";
$stdout = system("curl @command");
print FILE $stdout;
It saves only zero (0).
Can any one tell me how to solve this ?
You're already shelling out to curl to make the request; it would be cleaner to just use curl's -o/--output option to write to a file instead of stdout.
-o, --output <file>
Write output to <file> instead of stdout. If you are using {} or [] to
fetch multiple documents, you can use '#' followed by a number in the
<file> specifier. That variable will be replaced with the current
string for the URL being fetched. Like in:
curl http://{one,two}.example.com -o "file_#1.txt"
or use several variables like:
curl http://{site,host}.host[1-5].com -o "#1_#2"
You may use this option as many times as the number of URLs you have.
For example, if you specify two URLs on the same command line, you can
use it like this:
curl -o aa example.com -o bb example.net
and the order of the -o options and the URLs doesn't matter, just that
the first -o is for the first URL and so on, so the above command line
can also be written as
curl example.com example.net -o aa -o bb
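Applied to the question, a possible sketch using the list form of system, which bypasses the shell entirely so none of the quoting matters (the URL below is the question's redacted one, with the query string joined to the path by ? as an assumption):

# List form of system: arguments are passed to curl directly,
# with no shell in between, so no quoting problems.
my $status = system('curl',
    '--request', 'POST',
    '--data-binary', '@Raajpal_long.wav',
    '-o', '1.txt',    # write the response body to 1.txt
    'https://xxxx.xx.xxxx:xxxx/SpeakerId?action=search&confidence_threshold=0.0&number_of_matches=20&format=8K_PCM16',
);
die "Could not run curl: $!" if $status == -1;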
You can't use system to capture output; you can use backticks (``) in place of system.
Something like:
my @command = '--request POST --data-binary "@Raajpal_long.wav"
"https://services.govivace.com:49162/SpeakerId
action=search&confidence_threshold=0.0&number_of_matches=20&format=8K_PCM16"';
my $result = `curl @command`;
if ( $? == -1 ) {
    print "\nCurl command failed: $!\n";
} elsif ( $? == 0 ) {
    print "$result\n";
} else {
    print "curl exited with status ", $? >> 8, "\n";
}
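Then, to store the captured output in a file as originally asked, write $result out with a normal open (a sketch with a lexical filehandle and error checks):

open my $fh, '>', '1.txt' or die "Unable to open 1.txt: $!";
print $fh $result;
close $fh or die "Unable to close 1.txt: $!";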
I have a list of URIs in uri.txt:
category1/image1.jpeg
category1/image32.jpeg
category2/image1.jpeg
and so on, and I need to download them from the domain example.com with wget, additionally changing each filename (the final name at save time) to categoryX-imageY.jpeg.
I understand that I should read uri.txt line by line, add "http://example.com/" in front of each line, and change "/" to "-" in each line.
What I have now:
Reading from uri.txt [work]
Adding domain name in front of each URI [work]
Change filename to save [fail]
I'm trying to do this with:
wget 'http://www.example.com/{}' -O '`sed "s/\//-/" {}`' < uri.txt
but wget fails (depending on which type of quotation mark I use: ` or ') with:
wget: option requires an argument -- 'O'
or
sed `s/\//-/` category1/image1.jpeg: No such file or directory
sed `s/\//-/` category1/image32.jpeg: No such file or directory
Could you tell, what I'm doing wrong?
Here is how I would do that:
while read -r LINE ; do
    wget "http://example.com/$LINE" -O "$(echo "$LINE" | sed 's=/=-=')"
done < uri.txt
In other words, read uri.txt line by line (the text being placed in the $LINE bash variable), then perform the wget, saving under the modified name (I use a different sed delimiter to avoid escaping the / and to make it more readable).
When I want to construct a list of args to be executed, I like to use xargs:
cat uri.txt | sed "s#\(.*\)/\(.*\)#http://example.com/\1/\2 -O \1-\2#" | xargs -L 1 wget
I want to redirect this awk output to a file handle, but with no luck.
Code:
open INPUT,"awk -F: '{print $1}'/etc/passwd| xargs -n 1 passwd -s | grep user";
while (my $input=<INPUT>)
{
...rest of the code
}
Error:
Use of uninitialized value in concatenation (.) or string at ./test line 12.
readline() on closed filehandle INPUT at ./test line 13.
The error message shown is not directly related to the question in the subject.
In order to open a pipe and retrieve the result in Perl you have to add "|" at the very end of the open call.
The error message comes from the fact that Perl interprets the $1 you use in that double-quoted string. However, your intention was to pass that verbatim to awk. Therefore you have to escape the $ on the Perl side with \$.
There's a space missing in front of the /etc/passwd argument.
Summary: this should work better:
open INPUT,"awk -F: '{print \$1}' /etc/passwd| xargs -n 1 passwd -s | grep user|";
However, you should also check for errors etc.
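For instance, a sketch of the same pipeline with the three-argument open, a lexical filehandle, and error checking; using a single-quoted q{} string also avoids the \$1 escaping issue altogether:

open my $input_fh, '-|',
    q{awk -F: '{print $1}' /etc/passwd | xargs -n 1 passwd -s | grep user}
    or die "Cannot start pipeline: $!";
while (my $input = <$input_fh>) {
    # ...rest of the code
}
close $input_fh or warn "Pipeline exited with status $?";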
It looks like the $1 in the string you've passed is making Perl look for a variable $1 which you've not defined. Try escaping the $ in the string by putting a \ in front of it.
Because the resulting string doesn't form a valid command, the open fails, which then produces your second error.