Change a variable's value in a shell script with sed; syntax error - sed

sed -i 's|from_infura_hex=?|from_infura_hex=$(curl -s -X POST --connect-timeout 5 -H "Content-Type: application/json" --data \'{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}\' https://ropsten.infura.io/X/X | jq .result | xargs)|' /home/ec2-user/LastBlockNode.sh
I tried to execute this command but I always get this error:
-bash: syntax error near unexpected token `)'
The purpose of this command is to modify the value from_infura_hex=? in the script LastBlockNode.sh, replacing it with the output of the curl command.
Can anyone help with this sed command?

If you choose the pipe character | as the delimiter for the s command,
that character must not appear unescaped in the pattern or the replacement. Since you are also using | as a pipeline inside the replacement, it is better to pick another character such as #.
You cannot nest single quotes inside a single-quoted string, even if you escape them with a backslash.
In order to use a command substitution within the replacement,
you need to close the single quotes around it, as in sed -i 's/pattern/'"$(command)"'/', not
sed -i 's/pattern/$(command)/' (inside single quotes, $(...) is never expanded).
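As a generic illustration of that quoting pattern (a made-up sketch, not your actual command; today and greeting.txt are placeholder names only):
# Close the single quotes, splice in a double-quoted command substitution, then reopen them.
sed -i 's/today=?/today='"$(date +%Y-%m-%d)"'/' greeting.txt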
Then would you please try something like:
sed -i 's#from_infura_hex=?#from_infura_hex='"$(curl -s -X POST --connect-timeout 5 -H "Content-Type: application/json" --data "{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}" https://ropsten.infura.io/X/X | jq .result | xargs)"'#' /home/ec2-user/LastBlockNode.sh
But it will be safer and more readable to split the command into
multiple lines:
replacement="$(curl -s -X POST --connect-timeout 5 -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' https://ropsten.infura.io/X/X | jq .result | xargs)"
sed -i 's#from_infura_hex=?#from_infura_hex='"$replacement"'#' /home/ec2-user/LastBlockNode.sh
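One extra caveat (my own note, untested): if the substituted text could ever contain the delimiter #, an ampersand &, or a backslash, sed would misinterpret it, so it is safer to escape those characters first. A hex block number from jq should be harmless, but a defensive sketch might look like:
# Escape sed-replacement metacharacters before splicing the value in (illustrative only).
safe=$(printf '%s' "$replacement" | sed 's/[&\\#]/\\&/g')
sed -i 's#from_infura_hex=?#from_infura_hex='"$safe"'#' /home/ec2-user/LastBlockNode.sh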
Please note I have not tested the commands above with the actual data.
If either of them still does not work, please let me know with the error message.

Related

sed adding text from matching pattern and add if pattern doesn't exist

I'm hoping there's someone who could help with this. I'm fairly certain this question has been asked before; however, I'm having some difficulty understanding some of the answers.
I have the following text in file.txt
value1=192.168.1.2
value2=10.1.1.15
I'd like to replace those IP addresses and add value3=10.224.100.5 if value3 doesn't exist, using sed.
What I have so far, or at least tried:
sed \
-e '/^#\?\(\s*value1\s*=\s*\).*/{s//\newvalue/;:a;n;ba;q}' \
-e '$avalue1=newvalue' \
-e '/^#\?\(\s*value2\s*=\s*\).*/{s//\newvalue/;:a;n;ba;q}' \
-e '$avalue2=newvalue' \
-e '/^#\?\(\s*value3\s*=\s*\).*/{s//\newvalue/;:a;n;ba;q}' \
-e '$avalue3=newvalue' file.txt
This works fine if value(1,2,3) doesn't exist; however, if value1 exists in file.txt, it stops after value1.
I'm assuming it's because of the ;q.
Any advice please? I'm really having a hard time getting this.
One way with awk:
awk -v v1="new1" -v v2="new2" \
'BEGIN{FS=OFS="=";addV3=1}
$1=="value1"{$2=v1}
$1=="value2"{$2=v2}
$1=="value3"{addV3=0}7;
END{if(addV3)print "value3=newV3"}' file
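A short note on the bare 7 in that program (my addition, not part of the original answer): any non-zero pattern is true in awk, so 7 with no action simply prints every (possibly modified) line; 1 is the more conventional spelling. An equivalent untested one-liner:
awk -v v1="new1" -v v2="new2" 'BEGIN{FS=OFS="="} $1=="value1"{$2=v1} $1=="value2"{$2=v2} $1=="value3"{seen=1} 1; END{if(!seen)print "value3=newV3"}' file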
test with your example:
kent$ cat f
value1=192.168.1.2
value2=10.1.1.15
kent$ awk -v v1="new1" -v v2="new2" 'BEGIN{FS=OFS="=";addV3=1}$1=="value1"{$2=v1}$1=="value2"{$2=v2}$1=="value3"{addV3=0}7;END{if(addV3)print "value3=newV3"}' f
value1=new1
value2=new2
value3=newV3
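Since the question asked specifically for sed, here is a possible two-step alternative (an untested sketch; new1 and new2 are placeholders): let sed do the in-place replacements, then append value3 only if grep cannot find it.
sed -i -e 's/^value1=.*/value1=new1/' -e 's/^value2=.*/value2=new2/' file.txt
grep -q '^value3=' file.txt || echo 'value3=10.224.100.5' >> file.txt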

sed: -e expression #1, char 62: unknown command: `\'

I am trying to add
-
paths:
- /var/log/consumer.log
document_type: consumer
input_type: log
after prospectors: in my file. I am using command:
sed -i '/prospectors:/a\ \ \ \ \-
\ \ \ \ \ \ paths:\
\ \ \ \ \ \ \- \/var\/log\/consumer.log
\ \ \ \ \ \ document_type: consumer
\ \ \ \ \ \ input_type: log' new.txt
But the above command gives following error:
sed: -e expression #1, char 62: unknown command: `\'
How can I achieve the desired result?
In classic (POSIX) sed, each line of data appended needs to be on its own line after the a command, and all lines except the last need a backslash at the end to indicate that the data continues. GNU sed allows some information on the same line as the a command, but otherwise follows the rules.
There's an additional wrinkle: sed removes leading blanks from the data. To get the leading blanks, you can use backslash-blank at the start.
Hence, you can end up with:
sed -i '/prospectors:/a \
\ -\
\ paths:\
\ - /var/log/consumer.log\
\ document_type: consumer\
\ input_type: log' new.txt
The leading blanks are ignored; the backslash is deleted; the following blanks are copied to the output. Thus given an input containing just a line containing prospectors:, the output is:
prospectors:
-
paths:
- /var/log/consumer.log
document_type: consumer
input_type: log
Obviously, you can adjust the spacing to suit yourself.
I note that BSD sed requires a suffix after the -i option; it can be -i '' to get an 'empty string' suffix. To be portable between GNU and BSD sed, use -i.bak (no space; GNU sed doesn't like the space; BSD sed accepts the attached suffix, but you can't attach an empty suffix). And the -i option is not mandated by POSIX, so it isn't available on all Unix-like systems. If you're only using GNU sed, you don't have to worry about this trivia.
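For example, a portable pattern (my own sketch, not from the answer above) is to always write a backup suffix and delete the backup afterwards:
# Works with both GNU and BSD sed: create file.bak, then remove it if the edit succeeded.
sed -i.bak 's/old/new/' file && rm file.bak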

How to see the searches found using curl or wget?

I need to see the searches found using curl or wget, when it finds results with a '301' status code.
This is my variable using curl.
website=$(curl -s --head -w %{http_code} https://launchpad.net/~[a-z]/+archive/pipelight -o /dev/null | sed 's#404##g')
echo $website
301
The above works, but it only shows that a site exists (a '301' status code), not which URL it is.
I want:
echo $website
https://launchpad.net/~mqchael/+archive/pipelight
You can add the "effective URL" to your output. Change %{http_code} to "%{http_code} %{url_effective} ".
From there, it's just a matter of fussing with the regular expression. Change the sed string to 's#404 [^ ]* ##g'. That will eliminate not just the 404s, but will also eat the URL that follows each one.
So:
curl -s --head -w "%{http_code} %{url_effective} " https://launchpad.net/~[a-z]/+archive/pipelight -o /dev/null | sed 's#404 [^ ]* ##g'
will give you:
301 https://launchpad.net/~j/+archive/pipelight
You may want to replace the HTTP codes with new-lines, after that.
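One way to do that (an untested sketch built on the command above) is to put the newline directly into the -w format and then drop the 404 lines:
curl -s --head -w "%{http_code} %{url_effective}\n" -o /dev/null https://launchpad.net/~[a-z]/+archive/pipelight | sed '/^404 /d'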

Why after delete some lines by sed, Postfix can't write maillog [closed]

I want to use a cron job that will clean and sort the maillog once every three days.
My job looks like:
/bin/sed -i /status=/!d /var/log/maillog |
(/bin/grep "status=bounced" /var/log/maillog | /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+#[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" | /bin/sort -u >> /root/unsent.log) |
(/bin/grep "status=deferred" /var/log/maillog | /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+#[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" | /bin/sort -u >> /root/deferred.log) |
(/bin/grep "status=sent" /var/log/maillog | /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+#[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" | /bin/sort -u >> /root/sent.log) |
/bin/sed -i "/status=/d" /var/log/maillog
The job works fine and does 3 steps:
Delete from maillog all lines that don't contain "status="
Sort sent, bounced, deferred records into different logs.
Delete from maillog all lines that contain "status="
After this job my maillog is fully cleaned and sorted into 3 logs.
But Postfix no longer writes new records to maillog.
If I delete the sed command, Postfix writes new records fine.
Why does the sed command block maillog after the cron job executes?
sed -i will unlink the file it modifies, so syslog/postfix will continue writing to the old, now-unlinked file rather than the new one.
From http://en.wikipedia.org/wiki/Sed:
Note: "sed -i" overwrites the original file with a new one, breaking any links the original may have had
It is more common to process log files after rotating them out of place with a tool like logrotate or savelog, so that syslog can continue writing uninterrupted.
If you must edit /var/log/maillog in place, you can add a line to the end of your cron job to reload syslog when you are done. Note that you can lose log lines written to the file while your script is running if you do this. The command will depend on what distribution / operating system you are running. On ubuntu, which uses rsyslog, it would be reload rsyslog >/dev/null 2>&1.
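For example, on a systemd-based system running rsyslog, the last line of the cron job might look like this (an assumption on my part; adjust to whatever init system and syslog daemon you actually run):
# Ask rsyslog to reopen its log files after the in-place edits (systemd/rsyslog assumed).
systemctl kill -s HUP rsyslog.service >/dev/null 2>&1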
I've reformatted your original code to highlight the pipe-lines you added
/bin/sed -i /status=/!d /var/log/maillog \
| (/bin/grep "status=bounced" /var/log/maillog \
| /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" \
| /bin/sort -u >> /root/unsent.log\
) \
| (/bin/grep "status=deferred" /var/log/maillog \
| /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" \
| /bin/sort -u >> /root/deferred.log\
) \
| (/bin/grep "status=sent" /var/log/maillog \
| /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" \
| /bin/sort -u >> /root/sent.log \
) \
| /bin/sed -i "/status=/d" /var/log/maillog
As @alberge noted, you could very likely lose log messages with all of this sed -i processing on the same file.
I propose a different approach:
I would move the maillog to a dated filename (the assumption here is that Postfix will create a new file with the standard name that it 'likes' to use, /var/log/maillog).
Then your real goal seems to be to extract various categories of messages to separately named files, i.e. unsent.log, deferred.log, sent.log AND then you're discarding any lines that don't contain the string status= (although you do that first).
Here's my alternate (please read the whole message, don't copy/paste/execute right away!).
logDate=$(/bin/date +%Y%m%d.%H%M%S)
/bin/mv /var/log/maillog /var/log/maillog.${logDate}
/bin/grep "status=bounced" /var/log/maillog.${logDate} \
| /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" \
| /bin/sort -u \
>> /root/unsent.log.${logDate}
/bin/grep "status=deferred" /var/log/maillog.${logDate} \
| /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" \
| /bin/sort -u \
>> /root/deferred.log.${logDate}
/bin/grep "status=sent" \
| /bin/grep -E -o --color "\b[a-zA-Z0-9.-]+@[a-zA-Z0-9.-]+\.[a-zA-Z0-9.-]+\b" \
| /bin/sort -u \
>> /root/sent.log.${logDate}
To test that this code is working, replace the 2nd line ( /bin/mv .... ) with
/bin/cp /var/log/maillog /var/log/maillog.${logDate}
Copy/paste that into a terminal window, confirm that the /var/log/maillog.${logDate} was copied correctly, then copy/paste each section, 1 at a time and check that the expected output is created in each of the /root logfiles.
(If you get error messages for any of these blocks, make sure there are NO space/tab chars after the last '\' char on each of the continued lines. OR you can fold each of those 3 pipelines back into one line, removing the '\' chars as you go.)
(Note that to create each of the /root logfiles, I don't use any connecting sections via pipes surrounded by sub-processes. But, in other situations, I do use this sort of technique for advanced problems, so don't throw the technique away, just use it when it is really required ;-)!
After you confirm that all of this is working as you need, extend the script to do a final cleanup:
/bin/rm /var/log/maillog.${logDate}
I've added ${logDate} to each of your output files, but since you're using sort -u >>, you may want to remove that 'extension' from your sub-logfile names (unsent.log, deferred.log, sent.log) and just let those files grow naturally. In either case, you'll have to come back at some point and determine how far back you want to keep this data, and develop a plan and method for how you'll clean up these logfiles when they're no longer useful. I think someone mentioned the logrotate package. You might want to look into that as your long-term solution.
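If you do go the logrotate route, a minimal config sketch might look like the following (illustrative only: the schedule is arbitrary, and the postrotate command assumes a syslog daemon that reopens its files on HUP and writes /var/run/syslogd.pid; check your system):
# /etc/logrotate.d/maillog (sketch)
/var/log/maillog {
    rotate 12
    weekly
    compress
    missingok
    notifempty
    postrotate
        /bin/kill -HUP "$(cat /var/run/syslogd.pid 2>/dev/null)" 2>/dev/null || true
    endscript
}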
This solution avoids a lot of extra processes being created, and it (mostly) eliminates the possibility of lost log records. I think you might lose all or part of a record if Postfix is writing to the logfile in the same split-second as you are moving the file, but your solution would have similar problems AND more opportunities for that to happen.
If I have misunderstood the intention of your design, using the nested ( .... ) | ( .... ) sub-processes, sorry! Consider updating your post to include why you are using that technique.
I hope this helps.

parsing a curl in the command line

I have this:
curl -H \"api_key:{key}\" http://api.wordnik.com/api/word.xml/dog/definitions
How do I parse this (on the command line) to extract whatever's in between <text> and </text> in this page?
$ curl ....... | awk -vRS="</text>" '/<text>/{ gsub(/.*<text>/,""); print "->"$0}'
$ curl ....... | awk 'BEGIN{RS="</text>"}/<text>/{ gsub(/.*<text>/,""); print "->"$0}'
Note: use GNU awk (gawk).
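To illustrate with made-up XML (not actual Wordnik output), the extraction works like this:
$ echo '<r><text>a domesticated carnivorous mammal</text><text>to follow closely</text></r>' | awk -v RS="</text>" '/<text>/{ gsub(/.*<text>/,""); print "->"$0}'
->a domesticated carnivorous mammal
->to follow closely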