I created a text file in Ubuntu called test.txt with some text and no newline at the end.
When I use online SHA-512 hash generators, I get different results than from sha512sum. Why is that?
# echo "8====D" > test.txt
# sha512sum test.txt
549f38836f34b6fe2ca8661f5bd91dfcbcb2e675c338e7eb50390f8ebb509f28fb6df9ebcb0493cfa661b042180a9b351f6c06dbd628300e47cbdf4d13e6d9b2  test.txt
http://passwordsgenerator.net/sha512-hash-generator/ and http://hash.online-convert.com/sha512-generator show the hash as: ba54cdfcc32c0789acd1ee74ccd7cf2e5140f58b3d6864620c24793a93f01253d040bb3264a17629f1f0448eb22f600c6c1e5274162db97b913bde30ff16c6eb
echo implicitly appends a "\n" newline character. If you suppress it with the -n flag, the output is the same as that from the online tools mentioned:
$ echo -n "8====D" | sha512sum
ba54cdfcc32c0789acd1ee74ccd7cf2e5140f58b3d6864620c24793a93f01253d040bb3264a17629f1f0448eb22f600c6c1e5274162db97b913bde30ff16c6eb
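You can confirm the extra byte like this (a quick sketch; any byte-count or hex-dump tool such as od -c would do):
$ echo "8====D" | wc -c
7
$ echo -n "8====D" | wc -c
6
The seventh byte is the trailing newline, which the online generators never see.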
I need to create a line in a Makefile that will extract the version from a string and work cross-platform, ideally without dependencies.
This is what I had
echo "golangci-lint has version 1.42.0 built..." | grep -oP '\d+\.\d+\.\d'
result: 1.42.0
But it doesn't work on macOS.
I tried to do it with sed like this, but it doesn't work either:
echo "golangci-lint has version 1.42.0 built ..." | sed -n 's/.*\(\d+\.\d+\.\d\).*/\1/p'
grep -ow '[0-9][0-9.]\+[0-9]'
That uses only a basic regular expression, and options that BSD grep and GNU grep share.
You can use
echo "golangci-lint has version 1.42.0 built ..." | sed -En 's/.*([0-9]+\.[0-9]+\.[0-9]+).*/\1/p'
Details:
-E - enables the POSIX ERE syntax
-n - the default line output is suppressed
.*([0-9]+\.[0-9]+\.[0-9]+).* - any text, then Group 1 capturing one or more digits, ., one or more digits, ., one or more digits and the rest of the line
\1 - the replacement is just Group 1 value
p - only the substitution result is printed.
With your shown samples, you could try the following awk program, which prints only the matched version value from the whole line.
echo "golangci-lint has version 1.42.0 built ..." |
awk '
{
  match($0,/[0-9]+\.[0-9]+\.[0-9]+/)
  print substr($0,RSTART,RLENGTH)
}
'
Explanation: the shell's echo command prints the line and sends its output as standard input to the awk program, which uses the match function to match the regex against the line. If there is a match, the matched value is printed.
Explanation of regex:
[0-9]+\.[0-9]+\.[0-9]+: matches one or more digits, followed by a dot, followed by one or more digits, followed by another dot, followed by one or more digits.
-P is an experimental feature of GNU grep that is not available in macOS's BSD grep. However, the default grep on macOS can handle this easily with the -E switch, but you have to use [0-9] or [[:digit:]] in place of \d in your search pattern:
s="golangci-lint has version 1.42.0 built..."
grep -Eo '([0-9]+\.)+[0-9]+' <<< "$s"
# or else
grep -Eo '([[:digit:]]+\.)+[[:digit:]]+' <<< "$s"
1.42.0
As a side note, I have GNU grep installed on my Mac via the Homebrew package.
Suggesting the following:
echo "golangci-lint has version 1.42.0 built..." | grep -o '[0-9\.]\{4,\}'
Explanation
[0-9\.] --- match a single digit or dot (.)
\{4,\} --- the preceding character class, repeated 4 or more times.
This awk is 100% POSIX:
awk 'match($0, /[0-9][0-9.]+[0-9]/) {print substr($0, RSTART, RLENGTH)}'
It will always print the first match and only (up to) one match per line. There can be zero or more dots in the number, but leading/trailing dots won't get printed.
grep -o is quite portable, but not every platform supported by Go has it, e.g. IBM AIX. Also note that if a line has multiple matches, grep -o prints each match on a new line.
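Coming back to the original Makefile requirement, a minimal sketch (assuming a make that provides the $(shell ...) function, such as GNU Make, and that golangci-lint --version prints a line like the sample; the variable name is hypothetical, and every $ in the awk program must be doubled so make passes it through to the shell):
# Hypothetical variable name; uses the 100% POSIX awk program from above.
GOLANGCI_LINT_VERSION := $(shell golangci-lint --version | awk 'match($$0, /[0-9][0-9.]+[0-9]/) {print substr($$0, RSTART, RLENGTH)}')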
I have tried to scan through other posts on Stack Overflow for this, but couldn't get my code to work, hence I am posting a new question.
Below is the content of file temp.
<?xml version="1.0" encoding="UTF-8"?>
<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"><env:Body><dp:response xmlns:dp="http://www.datapower.com/schemas/management"><dp:timestamp>2015-01-
22T13:38:04Z</dp:timestamp><dp:file name="temporary://test.txt">XJzLXJlc3VsdHMtYWN0aW9uX18i</dp:file><dp:file name="temporary://test1.txt">lc3VsdHMtYWN0aW9uX18i</dp:file></dp:response></env:Body></env:Envelope>
This file contains the base64-encoded contents of two files named test.txt and test1.txt. I want to extract the base64-encoded content of each file into separate files test.txt and test1.txt respectively.
To achieve this, I have to remove the XML tags around the base64 contents. I am trying the commands below to achieve this. However, it is not working as expected.
sed -n '/test.txt"\>/,/\<\/dp:file\>/p' temp | perl -p -e 's#<dp:file name="temporary://test.txt">##g'|perl -p -e 's#</dp:file>##g' > test.txt
sed -n '/test1.txt"\>/,/\<\/dp:file\>/p' temp | perl -p -e 's#<dp:file name="temporary://test1.txt">##g'|perl -p -e 's#</dp:file></dp:response></env:Body></env:Envelope>##g' > test1.txt
The command below:
sed -n '/test.txt"\>/,/\<\/dp:file\>/p' temp | perl -p -e 's#<dp:file name="temporary://test.txt">##g'|perl -p -e 's#</dp:file>##g'
produces output:
XJzLXJlc3VsdHMtYWN0aW9uX18i
<dp:file name="temporary://test1.txt">lc3VsdHMtYWN0aW9uX18i</dp:response> </env:Body></env:Envelope>
However, in the output I am expecting only the first line, XJzLXJlc3VsdHMtYWN0aW9uX18i. Where am I making a mistake?
When I run the command below, I get the expected output:
sed -n '/test1.txt"\>/,/\<\/dp:file\>/p' temp | perl -p -e 's#<dp:file name="temporary://test1.txt">##g'|perl -p -e 's#</dp:file></dp:response></env:Body></env:Envelope>##g'
It produces the string below:
lc3VsdHMtYWN0aW9uX18i
I can then easily route this to test1.txt file.
UPDATE
I have edited the question to update the source file content. The source file doesn't contain any newline characters; the current solution will not work in that case, as I have tried it and failed. wc -l temp must output 1.
OS: Solaris 10
Shell: bash
sed -n 's_<dp:file name="\([^"]*\)">\([^<]*\).*_\1 -> \2_p' temp
I added \1 -> to show the link from file name to content; for the content only, just remove that part.
This is the POSIX version, so on GNU sed use --posix.
It assumes that the base64-encoded content is on the same line as the surrounding tags (if it is spread over several lines, this needs some modification).
Thanks to JID for the full explanation below.
How it works
sed -n
The -n means no printing, so unless sed is explicitly told to print, there will be no output.
's_
This starts a substitution, using _ to separate the regex from the replacement.
<dp:file name=
Regular text
"\([^"]*\)"
The escaped brackets form a capture group; they must be escaped unless the -r option is used (-r is not available in POSIX sed). Everything inside the brackets is captured. [^"]* means 0 or more occurrences of any character that is not a quote, so this just captures anything between the two quotes.
>\([^<]*\)<
Again a capture group, this time capturing everything between the > and <
.*
Everything else on the line
_\1 -> \2
This is the replacement, so replace everything in the regex before with the first capture group then a -> and then the second capture group.
_p
Print the line if a substitution was made
Resources
http://unixhelp.ed.ac.uk/CGI/man-cgi?sed
http://www.grymoire.com/Unix/Sed.html
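As a follow-up, to actually write each content block out to the file named in the tag, the same kind of extraction can be piped into a read loop (a sketch, assuming each dp:file element is on its own line; the sed expression is a hypothetical variant of the one above that strips the temporary:// prefix):
sed -n 's_.*<dp:file name="temporary://\([^"]*\)">\([^<]*\)</dp:file>.*_\1 \2_p' temp |
while read -r name content; do
  # name is e.g. test.txt, content is its base64 payload
  printf '%s\n' "$content" > "$name"
done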
/usr/xpg4/bin/sed works well here.
/usr/bin/sed does not work as expected if the file contains just one line.
The command below works for a file containing only a single line.
/usr/xpg4/bin/sed -n 's_<env:Envelope\(.*\)<dp:file name="temporary://BackUpDir/backupmanifest.xml">\([^>]*\)</dp:file>\(.*\)_\2_p' securebackup.xml 2>/dev/null
Without 2>/dev/null this sed command outputs the warning sed: Missing newline at end of file.
This is for the following reason:
Solaris' default sed ignores the last line so as not to break existing scripts, because a line was required to be terminated by a newline in the original Unix implementation.
GNU sed has more relaxed behavior, and the POSIX implementation accepts such a file but outputs a warning.
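If only /usr/bin/sed is available, one possible workaround (a sketch, assuming the single line stays within /usr/bin/sed's line-length limit) is to append a terminating newline on the way in, so the last line is no longer ignored:
{ cat securebackup.xml; echo; } | /usr/bin/sed -n 's_<env:Envelope\(.*\)<dp:file name="temporary://BackUpDir/backupmanifest.xml">\([^>]*\)</dp:file>\(.*\)_\2_p'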
I am porting a sh script that was apparently written using GNU implementation of sed to BSD implementation of sed. The exact line in the script with the original comment are:
# escape dot in file extension to grep it
ext="$(echo $ext | sed 's/\./\\./' -)"
I am able to reproduce the results with the following (obviously I am not exhausting all possible values for ext):
ext=.h; ext="$(echo $ext | sed 's/\./\\./' -)"; echo [$ext]
Using GNU's implementation of sed the following is returned:
[\.h]
Using BSD's implementation of sed the following is returned:
sed: -: No such file or directory
[]
Executing ext=.h; ext="$(echo $ext | sed 's/\./\\./')"; echo [$ext] returns [\.h] for both implementation of sed.
I have looked at both GNU and BSD sed's man pages and have not found anything about the trailing "-". Googling for sed with a "-" is not very fruitful either.
Is the "-" a typo?
Is the "-" needed for some an unexpected value of $ext?
Is the issue not with sed, but rather with sh?
Can someone direct me to what I should be looking at, or even better, explain what the purpose of the "-" is?
On my system, that syntax isn't documented in the man page, but it is in the 'info' page:
sed OPTIONS... [SCRIPT] [INPUTFILE...]
If you do not specify INPUTFILE, or if INPUTFILE is '-', 'sed' filters the contents of the standard input.
Given that particular usage, I think you could leave off the '-' and it should still work.
You got your specific question answered BUT your script is all wrong. Take a look at this:
# escape dot in file extension to grep it
ext="$(echo $ext | sed 's/\./\\./')"
The main problems with that are:
You're not quoting your variable ($ext), so it will go through filename expansion, and if it contains spaces it will be passed to echo as multiple arguments instead of one. Do this instead:
ext="$(echo "$ext" | sed 's/\./\\./')"
You're using an external command (sed) and a pipe to do something the shell can do trivially itself. Do this instead:
ext="${ext/./\.}"
Worst of all: you're escaping the RE metacharacter (.) in your variable so you can pass it to grep to do an RE search on it as if it were a string. That doesn't make any sense, and it becomes intractable in the general case where your variable could contain any combination of RE metacharacters. Just do a string search instead of an RE search and you don't need to escape anything: skip both of the substitution commands above and then use one of these instead of grep "$ext" file:
grep -F "$ext" file
fgrep "$ext" file
awk -v ext="$ext" 'index($0,ext)' file
I have a CSV. I want to edit the 35th field of the CSV and write the change back to the 35th field. This is what I am doing on bash:
awk -F "," '{print $35}' test.csv | sed -i 's/^0/+91/g'
So I am pulling the 35th entry using awk and then replacing the "0" at the starting position of the string with "+91". This works perfectly and I get the desired output on the console.
Now I want this new entry to be written back to the file. I am thinking of sed's in-place replacement feature, but this feature needs an input file. In the above command, I cannot provide an input file because my primary command is awk and sed is taking its input from awk.
Thanks.
You should choose one of the two tools. As for sed, it can be done as follows:
sed -ri 's/^(([^,]*,){34})0([^,]*)/\1+91\3/' test.csv
Not sure about awk, but @shellter's comment might help with that.
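For completeness, a possible awk counterpart (a sketch, assuming plain comma-separated fields with no quoted commas); since awk has no in-place option, it writes to a temporary file and renames it:
# sub() fixes the leading 0 of field 35; the trailing 1 prints every (possibly rebuilt) line
awk 'BEGIN { FS = OFS = "," } { sub(/^0/, "+91", $35) } 1' test.csv > test.csv.tmp &&
mv test.csv.tmp test.csv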
The in-place feature of sed is misnamed, as it does not edit the file in place. Instead, it creates a new file with the same name. e.g.:
$ echo foo > foo
$ ln -f foo bar
$ ls -i foo bar # These are the same file
797325 bar 797325 foo
$ echo new-text > foo # Changes bar
$ cat bar
new-text
$ printf '/new/s//newer\nw\nq\n' | ed foo # Edit foo "in-place"; changes bar
9
newer-text
11
$ cat bar
newer-text
$ ls -i foo bar # Still the same file
797325 bar 797325 foo
$ sed -i s/new/newer/ foo # Does not edit in-place; creates a new file
$ ls -i foo bar
797325 bar 792722 foo
Since sed is not actually editing the file in place, but writing a new file and then renaming it to the old file, you might as well do the same.
awk ... test.csv | sed ... > test.csv.1 && mv test.csv.1 test.csv
There is the misperception that using sed -i somehow avoids the creation of the temporary file. It does not. It just hides the fact from you. Sometimes abstraction is a good thing, but other times it is unnecessary obfuscation. In the case of sed -i, it is the latter. The shell is really good at file manipulation. Use it as intended. If you do need to edit a file in place, don't use the streaming version of ed; just use ed.
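For instance, here is a sketch of a genuine in-place edit with ed, applying the 35th-field substitution to this question's CSV (assuming a POSIX ed; -s suppresses the byte counts):
# g// selects the matching lines, s// reuses the same regex, then w writes and q quits
printf '%s\n' 'g/^\(\([^,]*,\)\{34\}\)0/s//\1+91/' w q | ed -s test.csv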
So, it turned out there are numerous ways to do it. I got it working with sed as below:
sed -i 's/0\([0-9]\{10\}\)/\+91\1/g' test.csv
But this is a little tricky, as it will edit any entry that matches the criteria. However, in my case it works fine.
A similar implementation of the above logic in Perl:
perl -p -i -e 's/\b0(\d{10})\b/\+91$1/g;' test.csv
Again, same caveat as mentioned above.
A more precise way of doing it, as shown by Lev Levitsky, is the following, because it operates specifically on the 35th field:
sed -ri 's/^(([^,]*,){34})0([^,]*)/\1+91\3/g' test.csv
For more complex situations, I will have to consider using one of the CSV modules for Perl.
Thanks everyone for your time and input. I surely know more about sed/awk after reading your replies.
This might work for you:
sed -i 's/[^,]*/+91/35' test.csv
EDIT:
To replace the leading zero in the 35th field:
sed 'h;s/[^,]*/\n&/35;/\n0/!{x;b};s//+91/' test.csv
or more simply:
sed 's/^\(\([^,]*,\)\{34\}\)0/\1+91/' test.csv
If you have moreutils installed, you can simply use the sponge tool:
awk -F "," '{print $35}' test.csv | sed -i 's/^0/+91/g' | sponge test.csv
sponge soaks up the input, closes the input pipe (stdin) and, only then, opens and writes to the test.csv file.
As of 2015, moreutils is available in package repositories of several major Linux distributions, such as Arch Linux, Debian and Ubuntu.
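To rewrite the whole file rather than only its 35th column, the same idea can be combined with the field-anchored sed expression shown earlier (a sketch); sponge buffers all of sed's output before truncating test.csv, so the file is never read and written at the same time.
sed 's/^\(\([^,]*,\)\{34\}\)0/\1+91/' test.csv | sponge test.csv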
Another perl solution to edit the 35th field in-place:
perl -i -F, -lane '$F[34] =~ s/^0/+91/; print join ",",@F' test.csv
These command-line options are used:
-i edit the file in-place
-n loop around every line of the input file
-l removes newlines before processing, and adds them back in afterwards
-a autosplit mode – split input lines into the @F array. Defaults to splitting on whitespace.
-e execute the perl code
-F autosplit modifier, in this case splits on ,
@F is the array of words in each line, indexed starting with 0
$F[34] is the 35th element of the array
s/^0/+91/ does the substitution
I have a file that contains this kind of paths:
C:\bad\foo.c
C:\good\foo.c
C:\good\bar\foo.c
C:\good\bar\[variable subdir count]\foo.c
And I would like to get the following file:
C:\bad\foo.c
C:/good/foo.c
C:/good/bar/foo.c
C:/good/bar/[variable subdir count]/foo.c
Note that the non matching path should not be modified.
I know how to do this with sed for a fixed number of subdirs, but a variable number is giving me trouble. Actually, I would have to use many s/x/y/ expressions (as many as the max depth... not very elegant).
Maybe with awk, but this kind of magic is beyond my skills.
FYI, I need this trick to correct some gcov binary files on a cygwin platform.
I am dealing with binary files; therefore, I might have the following kind of data:
bindata\bindata%bindataC:\good\foo.c
which should be translated as:
bindata\bindata%bindataC:/good/foo.c
The first \ must not be translated, despite being on the same line.
However, I have just checked my .gcno files while editing this text and it looks like all the paths are flanked with zeros, so most of the answers below should fit.
sed -e '/^C:\\good/ s/\\/\//g' input_file.txt
I would recommend you look into the cygpath utility, which converts path names from one format to another. For instance on my machine:
$ cygpath `pwd`
/home/jericson
$ cygpath -w `pwd`
D:\root\home\jericson
$ cygpath -m `pwd`
D:/root/home/jericson
Here's a Perl implementation of what you asked for:
$ echo 'C:\bad\foo.c
C:\good\foo.c
C:\good\bar\foo.c
C:\good\bar\[variable subdir count]\foo.c' | perl -pe 's|\\|/|g if /good/'
C:\bad\foo.c
C:/good/foo.c
C:/good/bar/foo.c
C:/good/bar/[variable subdir count]/foo.c
It works directly with the string, so it will work anywhere. You could combine it with cygpath, but it only works on machines that have that path:
perl -pe '$_ = `cygpath -m $_` if /good/'
(Since I don't have C:\good on my machine, I get output like C:goodfoo.c. If you use a real path on your machine, it ought to work correctly.)
You want to substitute '/' for all '\' but only on the lines that match the good directory path. Both sed and awk will let you do this by having a LHS (matching) expression that only picks the lines with the right path.
A trivial sed script to do this would look like:
/[Cc]:\\good/ s/\\/\//g
For a file:
c:\bad\foo
c:\bad\foo\bar
c:\good\foo
c:\good\foo\bar
You will get the output below:
c:\bad\foo
c:\bad\foo\bar
c:/good/foo
c:/good/foo/bar
Here's how I would do it in awk:
# fixpaths.awk
/C:\\good/ {
  gsub(/\\/,"/",$1);
  print $1 >> "outfile";
}
Then run it using the command:
awk -f fixpaths.awk paths.txt; mv outfile paths.txt
Or with some help from good ol' Bash:
#!/bin/bash
cat file | while read LINE
do
if <bad_condition>   # placeholder: e.g. the line is not a C:\good path
then
    echo "$LINE" >> newfile
else
    echo "$LINE" | sed -e 's/\\/\//g' >> newfile
fi
done
try this
sed -re '/\\good\\/ s/\\/\//g' temp.txt
or this
awk -F"\\" '{if($2=="good"){OFS="\/"; $1=$1;} print $0}' temp.txt