New to sed and could use some help.
I would like to turn "a/b/c a/b/c" into "a/b/c a-b-c", where a/b/c is any path.
Thanks.
Give this a try:
sed 'h; s/ .*//; x; s/.* //; s:/:-:g; x; G; s/\n/ /'
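With the sample input from the question:
$ echo "a/b/c a/b/c" | sed 'h; s/ .*//; x; s/.* //; s:/:-:g; x; G; s/\n/ /'
a/b/c a-b-c
It saves the line in the hold space, isolates each field in turn, converts the slashes in the second one, and then glues the two halves back together.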
Since you want whitespace as the delimiter, I'd just use Perl:
perl -ane '$F[1] =~ s/\//-/g; print "@F\n"'
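For example:
$ echo "a/b/c a/b/c" | perl -ane '$F[1] =~ s/\//-/g; print "@F\n"'
a/b/c a-b-c
The -a switch autosplits each line into @F on whitespace, so $F[1] is the second field.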
You can use awk:
$ echo "a/b/c a/b/c" | awk '{gsub("/","-",$NF)}1'
a/b/c a-b-c
This might work:
echo "a/b/c a/b/c" | sed ':a;s|\(.* [^/]*\)/|\1-|;ta'
a/b/c a-b-c
Or this:
echo "a/b/c a/b/c" | sed 's/.* //;h;y/\//-/;x;G;y/\n/ /'
a/b/c a-b-c
How can I do these in sed?
#input               #output
file.txt             "nothing"
dir1/                ../
dir1/file.txt        ../
dir1/dir2/           ../../
dir1/dir2/file.txt   ../../
Let's say the #input is placed in $var1:
sed "do something" <<< $var1
echo $var1
You can try this with GNU sed:
sed "s#dir[0-9]\+/*#\.\./#g; s#file\.txt##g"
Is your test case for dir1/dir2 (without a trailing slash) correct? How would sed know whether dir2 is a file or a directory? Otherwise you could use:
echo "dir1/dir2/file.txt" | sed 's#[^/]*/#../#g' | sed 's#[^/]*$##'
I have an example cut down from a log file:
112 172.172.172.1#50912 (ssl.bing.com):
I would like to somehow remove the # and the numbers after it, as well as the (): around the URL. I would like this result:
112 172.172.172.1 ssl.bing.com
Here is the sed one-liner I have been working on:
cat newdns.log | sed -e 's/.*query: //' | cut -f 1 -d' ' | sort | uniq -c | sort -k2 > old.log
Thanks
Using sed, you could say:
sed 's/#[0-9]*//;s/(\(.*\)):$/\1/' filename
or, in a single substitution:
sed 's/#[0-9]* *(\(.*\)):$/ \1/' filename
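With the sample line:
$ echo '112 172.172.172.1#50912 (ssl.bing.com):' | sed 's/#[0-9]* *(\(.*\)):$/ \1/'
112 172.172.172.1 ssl.bing.com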
Another sed:
sed -r 's/#[^ ]+|[():]//g'
$ echo '112 172.172.172.1#50912 (ssl.bing.com):' | sed -r 's/#[^ ]+|[():]//g'
112 172.172.172.1 ssl.bing.com
I am trying to execute a shell file, in which there is a line:
sed -ne ':1;/PinnInstitutionPath/{n;p;b1}' Institution | sed -e s/\ //g | sed -e s/\=//g | sed -e s/\;//g | sed -e s/\"//g | sed -e s/\Name//g
An error message appears: "Label too long: :1;/PinnInstitutionPath/{n;p;b1}"
I am a noob at Linux, so can anyone help me solve this problem? Thank you!
Try changing
sed -ne ':1;/PinnInstitutionPath/{n;p;b1}'
to
sed -ne ':1' -e '/PinnInstitutionPath/{n;p;b1}'
Also, you don't need to call sed so many times:
sed -ne ':1' -e '/PinnInstitutionPath/{n;s/[ =;"]//g;s/Name//g;p;b1}'
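For illustration, with a made-up two-line Institution file (the real layout isn't shown in the question):
$ cat Institution
PinnInstitutionPath = "/usr/local/pinnacle";
Name = "General Hospital";
$ sed -ne ':1' -e '/PinnInstitutionPath/{n;p;b1}' Institution | sed 's/[ =;"]//g; s/Name//g'
GeneralHospital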
Concerning 'sed: Label too long' in Solaris (SunOS): you will need to split your command across several lines if you use labels. In your case:
sed -ne ':1
/PinnInstitutionPath/{
n
p
b 1
}' Institution | sed -e 's/ //g' -e 's/=//g' -e 's/;//g' -e 's/"//g' -e 's/Name//g'
I have the following line in a Perl script:
my $temp = `sed 's/ /\n/g' /sys/bus/w1/devices/w1_bus_master1/10-000802415bef/w1_slave | grep t= | sed 's/t=//'`;
Which throws up the error:
"sed: -e expression #1, char 2: unterminated `s' command"
If I run a shell script as below it works fine:
temp1=`sed 's/ /\n/g' /sys/bus/w1/devices/w1_bus_master1/10-000802415bef/w1_slave | grep t= | sed 's/t=//'`
echo $temp1
Anyone got any ideas?
Perl interprets your \n as a literal newline character. Your command line will therefore look something like this from sed's perspective:
sed s/ /
/g ...
which sed doesn't like. The shell does not interpret it that way.
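You can see the difference directly, independent of sed: inside backticks (as inside double quotes) \n is a newline, while \\n stays as the two characters \ and n:
$ perl -e 'print "s/ /\n/g", "\n"'
s/ /
/g
$ perl -e 'print "s/ /\\n/g", "\n"'
s/ /\n/g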
The proper solution is not to use sed/grep in such a situation at all. Perl is, after all, very, very good at handling text. For example (untested):
use File::Slurp;
my $content = read_file("/sys/bus...");  # path truncated as in the question
$content =~ s/ /\n/g;                    # what the sed 's/ /\n/g' did
my @lines = split m/\n/, $content;
@lines = map { s/t=//; $_ } grep { m/t=/ } @lines;
Alternatively, escape the \n once, e.g. sed 's/ /\\n/g' ....
You need to escape the \n in your first regular expression. The backtick operator in Perl interpolates like a double-quoted string, so it turns \n into an actual newline instead of the two characters \ and n.
|
V
my $temp = `sed 's/ /\\n/g' /sys/bus/ # ...
I have a list:
asd@domain.com
fff@domain.com
yyy@domain.com
ttt@test.com
rrr@test.com
fff@test.com
yyy@my.com
yyy@my.com
How can I do the following: if the whole list contains three or more emails with the same domain, remove all of them except the first one.
Output:
asd@domain.com
ttt@test.com
yyy@my.com
yyy@my.com
#!/usr/bin/env perl
use strict; use warnings;
use Email::Address;

my %data;
while (my $line = <DATA>) {
    # group the addresses by domain
    my ($addr) = Email::Address->parse($line =~ /^(\S+)/);
    push @{ $data{ $addr->host } }, $addr->original;
}

for my $addrs (values %data) {
    if (@$addrs > 2) {
        # three or more addresses share this domain: keep only the first
        print "$addrs->[0]\n";
    }
    else {
        print "$_\n" for @$addrs;
    }
}

__DATA__
asd@domain.com
fff@domain.com
yyy@domain.com
ttt@test.com
rrr@test.com
fff@test.com
yyy@my.com
yyy@my.com
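If you save that as, say, dedupe.pl (a name chosen here for illustration) and run it, it prints the requested four lines. One caveat: values %data iterates in Perl's internal hash order, so the domain groups may come out in a different order on each run:
$ perl dedupe.pl
asd@domain.com
ttt@test.com
yyy@my.com
yyy@my.com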
sed 's/@/@\t/g' test.txt | uniq -f 1 | sed 's/@\t/@/g'
The first sed splits each email into two fields (name + domain) with a tab character, so that uniq can skip the first field when removing the duplicate domains, and the last sed removes the tab again.
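The intermediate form that uniq sees looks like this (assuming the list above is in test.txt; the gap after the @ is the inserted tab):
$ sed 's/@/@\t/g' test.txt | head -3
asd@	domain.com
fff@	domain.com
yyy@	domain.com
Note that uniq only collapses adjacent lines, so this relies on the list already being grouped by domain.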
I am puzzled why your example output contains yyy@my.com twice but assume it is a mistake.
As long as there are no issues with trailing space characters or more complex forms of email addresses, you can do this simply in Perl with
perl -aF@ -ne 'print unless $seen{$F[1]}++' myfile
output
asd@domain.com
ttt@test.com
yyy@my.com
This might work for you:
sed ':a;$!N;s/^\([^@]*@\([^\n]*\)\)\n.*\2/\1/;ta;P;D' file
asd@domain.com
ttt@test.com
yyy@my.com
If you don't mind the order, just use sort:
sort -t '@' -u -k 2,2 your_file
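On the sample list this prints one address per domain, ordered by domain; when several addresses share a domain, which one sort keeps is not guaranteed to be the first from the file:
$ sort -t '@' -u -k 2,2 your_file
asd@domain.com
yyy@my.com
ttt@test.com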
If you do mind the order, do
gawk '{print NR "@" $0}' your_file | sort -t '@' -u -k 3,3 | sort -t '@' -k 1,1n | cut -d '@' -f 2-
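This is the classic decorate/sort/undecorate trick: the gawk stage prefixes each line with its line number (so the domain becomes field 3 of the @-separated line), the first sort dedupes on the domain, the second sort restores the original order, and cut strips the prefix again:
$ gawk '{print NR "@" $0}' your_file | head -3
1@asd@domain.com
2@fff@domain.com
3@yyy@domain.com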