Error when executing a sed command with Tcl - sed

The file that I use contains lines like the following:
first0.1.second1.1.third1
first0.1.second2.2.third1
first0.n.second2.n.third1
I want to replace ".n." with ".{j}." for n in [1-100].
So the desired lines are like the following:
first0.{j}.second1.{j}.third1
first0.{j}.second2.{j}.third1
I use the following command under Tcl:
exec sed -i 's/\.[1-9]\./\.{j}\./g' file
but I got
invalid command name "1-9"
How can I do this substitution?

Brace your expressions...
Single quotes are not a quoting mechanism in Tcl, so brace your sed expression instead:
exec sed -i {s/\.[1-9]\./\.{j}\./g} file
Reference: Frequently Made Mistakes in Tcl

If you want to do that in plain Tcl:
set filename "file"
set fh [open $filename r]
set data [read -nonewline $fh]
close $fh
set fh [open $filename w]
puts $fh [regsub -all {\.\d+\.} $data {.{j}.}]
close $fh
exec cat $filename
first0.{j}.second1.{j}.third1
first0.{j}.second2.{j}.third1

Related

How to remove carriage return in Perl properly?

I have code that looks like this:
sub ConvertDosToUnix {
    my $file = shift;
    open my $dosFile, '>', $file or die "\nERROR: Cannot open $file";
    select $dosFile;
    print("Converting Dos To Unix");
    `perl -p -i -e "s/\r//g" $dosFile`;
    close $dosFile;
}
Also, the perl command works when I use it outside the subroutine or in the main function. But when I created a separate subroutine for converting DOS to Unix, I got an error that says:
sh: -c: line 0: syntax error near unexpected token `('
//g" GLOB(0x148b990)' -p -i -e "s/
which I don't understand.
I also tried dos2unix, but for some reason it doesn't remove all the carriage returns the way the perl command does.
Honestly, you seem a little confused.
The code you have inside backticks is a command that is run by the shell. It needs to be passed a filename. You have your filename in the variable $file, but you pass it the variable $dosFile which contains a file handle (which stringifies to "GLOB(0x148b990)" - hence your error message).
So all your work opening the file is wasted. Really, all you wanted was:
`perl -p -i -e "s/\r//g" $file`
But your system almost certainly has dos2unix installed.
`dos2unix $file`
I should also point out that using backticks is only necessary if you want to capture the output from the command. If, as in this case, you don't need the output, then you should use system() instead.
system('dos2unix', $file);
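If you'd rather not shell out at all, here is a minimal sketch of the subroutine doing the conversion entirely in Perl (the sub name and error-message style come from the question; everything else is illustrative):
sub ConvertDosToUnix {
    my $file = shift;
    print "Converting Dos To Unix\n";

    # Slurp the file, strip carriage returns, write it back.
    open my $in, '<', $file or die "\nERROR: Cannot open $file: $!";
    my $content = do { local $/; <$in> };
    close $in;

    $content =~ s/\r//g;

    open my $out, '>', $file or die "\nERROR: Cannot write $file: $!";
    print {$out} $content;
    close $out;
}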

Use sed to read file into middle of line

File A contains this text (assume that "alpha" and "bravo" are arbitrarily long chunks on a single line):
alpha {FOO} bravo
File B contains an arbitrary amount of text, including all sorts of wacky characters.
I want to replace the string "{FOO}" in file A with the contents of file B. Using sed's 'r' command as follows doesn't work because it inserts the content of file B after that line:
cat A | sed -e "/{FOO}/r B"
Is there any way, using sed, to end up with a file that consists of:
alpha [the contents of B] bravo
? If it would be easier to do this with, say, perl, that's fine too. But I know even less about perl than I do about sed. ;)
Short perl solution:
FOO="$( cat replacement.txt )" perl -pe's/\{FOO\}/$ENV{FOO}/g'
This will work with any character except NUL (0x00). If you have to deal with binary files, you can use:
perl -pe'
    BEGIN {
        open(my $fh, "<", shift(@ARGV)) or die $!;
        local $/;
        $FOO = <$fh>;
    }
    s/\{FOO\}/$FOO/g
' replacement.txt
Usage:
perl -i~ -pe'...' file # In-place edit with backup
perl -i -pe'...' file # In-place edit without backup
perl -pe'...' file.in >file.out # Read from named file(s)
perl -pe'...' <file.in >file.out # Read from STDIN
If this is Bash on Linux, this seems to work:
sed -i "s/{FOO}/$(cat B.txt)/g" A.txt
This will directly edit the file A.txt - they don't have to be .txt files, I just added those in to make it more obvious.
As @ikegami points out, this will have problems if there are any / in the file; also, any \ will probably be ignored.
So in an attempt to solve that, you should be able to use:
sed -i "s/\//%2F/g" B.txt
sed -i "s/{FOO}/$(cat B.txt)/g" A.txt
sed -i "s/%2F/\//g" B.txt
You don't have to use %2F specifically, though; any placeholder that doesn't otherwise occur in the files will do.
#!/usr/bin/perl
use strict;
use warnings;

open my $fh_a, '<', $ARGV[0] or die "Failed to open $ARGV[0] for reading";
open my $fh_b, '<', $ARGV[1] or die "Failed to open $ARGV[1] for reading";

my $a;
my $b;
{
    local $/;
    $a = <$fh_a>;
    $b = <$fh_b>;
}
close $fh_a;
close $fh_b;

$a =~ s/{FOO}/$b/;
print $a;
As long as your files both fit in memory twice, this should be fine. The local $/; puts the I/O system into 'slurp' mode, reading the whole file in a single operation.
Usage:
perl replace_foo.pl fileA fileB

Executing grep via Perl

I am new to Perl. I am trying to execute a grep command with Perl.
I have to read input from a file, and based on that input, the grep has to be executed.
My code is as follows:
#!/usr/bin/perl
use warnings;
use strict;

# Reading the input file line by line
open FILE, "input.txt" or die $!;
my $lineno = 1;
while (<FILE>) {
    print " $_";
    # This is what is expected:
    # our $result = `grep -r Unable Satheesh > out.txt`;
    our $result = `grep -r $_ Satheesh > out.txt`;
    print $result;
}
print "************************************************************\n";
But if I run the script, it looks like an infinite loop: the script keeps waiting, and nothing is printed to the out.txt file.
The reason it's hanging is that you forgot to call chomp after reading from FILE. There's a newline at the end of $_, so it's executing two shell commands:
grep -r $_
Satheesh > out.txt
Since there's no filename argument to grep, it's reading from standard input, i.e. the terminal. If you type Ctrl-d when it hangs, you'll then get an error message telling you that there's no Satheesh command.
Also, since you're redirecting the output of grep to out.txt, nothing gets put in $result. If you want to capture the output in a variable and also put it into the file, you can use the tee command.
Here's the fix:
while (<FILE>) {
    print " $_";
    chomp;
    # This is what is expected:
    # our $result = `grep -r Unable Satheesh > out.txt`;
    our $result = `grep -r $_ Satheesh | tee out.txt`;
    print $result;
}
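As an aside, if you want to avoid the shell entirely, here is a rough sketch of the same loop using File::Find instead of backticks (input.txt, out.txt and the Satheesh directory are taken from the question; note this matches each pattern as a literal substring, whereas grep treats it as a regex):
use strict;
use warnings;
use File::Find;

open my $patterns, '<', 'input.txt' or die $!;
open my $out, '>', 'out.txt' or die $!;

while (my $pattern = <$patterns>) {
    chomp $pattern;
    # Walk the Satheesh directory and scan every regular file.
    find(sub {
        return unless -f $_;
        open my $fh, '<', $_ or return;
        while (my $line = <$fh>) {
            # Report the full path and the matching line, like grep -r does.
            print {$out} "$File::Find::name: $line" if index($line, $pattern) >= 0;
        }
        close $fh;
    }, 'Satheesh');
}
close $out;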

Dynamic Perl find and replace using grep inside backticks

I am trying to do a dynamic search and replace with Perl on the command line with part of the replacement text being the output of a grep command within backticks. Is this possible to do on the command line, or will I need to write a script to do this?
Here is the command that I thought would do the trick. I thought that Perl would treat the backticks as a command substitution, but instead it just treats the backticks and the content within them as a string:
perl -p -i -e 's/example.xml/http:\/\/exampleURL.net\/`grep -ril "example_needle" *`\/example\/path/g' `grep -ril "example_needle" *`
UPDATE:
Thanks for the helpful answers. Yes, there was a typo in my original one-liner: the target file of grep is supposed to be *.
I wrote a small script based on Schewrn's example, but am having confusing results. Here is the script I wrote:
#!/usr/bin/env perl -p -i

my $URL_First = "http://examplesite.net/some/path/";
my $URL_Last  = "/example/example.xml";

my @files = `grep -ril $URL_Last .`;
chomp @files;

foreach my $val (@files) {
    @dir_names = split('/', $val);
    if ($dir_names[1] ne $0) {
        my $url = $URL_First . $dir_names[1] . $URL_Last;
        open INPUT, "+<$val" or die $!;
        seek INPUT, 0, 0;
        while (<INPUT>) {
            $_ =~ s{\Q$URL_Last}{$url}g;
            print INPUT $_;
        }
        close INPUT;
    }
}
Basically what I am trying to do is:
Find files that contain $URL_Last.
Replace $URL_Last with $URL_First plus the name of the directory that the matched file is in, plus $URL_Last.
Write the above change to the input file without modifying anything else in the input file.
After running my script, it completely garbled the HTML code in the input file and it cut off the first few characters of each line in the file. This is strange, because I know for sure that $URL_Last only occurs once in each file, so it should only be matched once and replaced once. Is this being caused by a misuse of the seek function?
You should use another delimiter for s/// so that you don't need to escape slashes in the URL:
perl -p -i -e '
s#example.xml#http://exampleURL.net/`grep -ril "example_needle"`/example/path#g'
`grep -ril "example_needle" *`
Your grep command inside the regex will not be executed, as it is just a string, and backticks are not meta characters. Text inside a substitution will act as though it was inside a double quoted string. You'd need the /e flag to execute the shell command:
perl -p -i -e '
s#example.xml#
qq(http://exampleURL.net/) . `grep -ril "example_needle"` . qq(/example/path)
#ge'
`grep -ril "example_needle" *`
However, what exactly are you expecting that grep command to do? It lacks a target file. -l will print file names for matching files, and grep without a target file will use stdin, which I suspect will not work.
If it is a typo, and you meant to use the same grep as for your argument list, why not use @ARGV?
perl -p -i -e '
s#example.xml#http://exampleURL.net/@ARGV/example/path#g'
`grep -ril "example_needle" *`
This may or may not do what you expect, depending on whether you expect to have newlines in the string. I am not sure that argument list will be considered a list or a string.
It seems like what you're trying to do is...
Find a file in a tree which contains a given string.
Use that file to build a URL.
Replace something in a string with that URL.
You have three parts, and you could jam them together into one regex, but it's much easier to do it in three steps. You won't hate yourself in a week when you need to add to it.
The first step is to get the filenames.
# grep -r needs a directory to search, even if it's just the current one
my @files = `grep -ril $search .`;
# strip the newlines off the filenames
chomp @files;
Then you need to decide what to do if you get more than one file from grep. I'll leave that choice up to you, I'm just going to take the first one.
my $file = $files[0];
Then build the URL. Easy enough...
# Put it in a variable so it can be configured
my $Site_URL = "http://www.example.com/";
my $url = $Site_URL . $file;
To do anything more complicated, you'd use URI.
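As a hedged aside, with the URI module that might look something like the following; new_abs resolves a relative path against a base URL, and the file path here is just an illustrative value:
use URI;

my $Site_URL = "http://www.example.com/";
my $file     = "some_path/index.html";    # e.g. a path found by grep -l

# Resolve the relative file path against the base URL.
my $url = URI->new_abs($file, $Site_URL);
print "$url\n";    # prints http://www.example.com/some_path/index.html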
Now the search and replace is trivial.
# The \Q means meta-characters like . are ignored. Better than
# remembering to escape them all.
$whatever =~ s{\Qexample.xml}{$url}g;
You want to edit files using -p and -i. Fortunately we can emulate that functionality.
#!/usr/bin/env perl
use strict;
use warnings;  # never do without these

my $Site_URL   = "http://www.example.com/";
my $Search     = "example-search";
my $To_Replace = "example.xml";

# Set $^I to edit files in place. With no argument, just show the output.
# script.pl .bak  # saves a backup with a ".bak" extension
$^I = shift;

my @files = `grep -ril $Search .`;
chomp @files;

my $file = $files[0];
my $url  = $Site_URL . $file;

@ARGV = ($files[0]);  # set the file up for editing
while (<>) {
    s{\Q$To_Replace}{$url}g;
    print;  # -p would normally do this for us
}
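Run it as, say, perl script.pl .bak to edit the found file in place with a .bak backup; with no argument, $^I stays undef and the rewritten lines simply go to standard output.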
Everyone's answers were very helpful in writing a script that wound up working for me. I actually found a bash script solution yesterday, but wanted to post a Perl answer in case anyone else finds this question through Google.
The script that @TLP posted at http://codepad.org/BFpIwVtz is an alternative way of doing this.
Here is what I ended up writing:
#!/usr/bin/perl
use Tie::File;

my $URL_First = 'http://example.com/foo/bar/';
my $Search    = 'path/example.xml';
my $URL_Last  = '/path/example.xml';

# This grep returns a list of files containing "path/example.xml"
my @files = `grep -ril $Search .`;
chomp @files;

foreach my $File_To_Edit (@files) {
    # The output of $File_To_Edit looks like this: "./some_path/index.html"
    # I only need the "some_path" part, so I split the path and use $output[1] ("some_path")
    @output = split('/', $File_To_Edit);

    # "some_path" is the parent directory of "index.html", so I'll call this "$Parent_Dir"
    my $Parent_Dir = $output[1];

    # Make sure that we don't edit this script itself by checking that $Parent_Dir doesn't equal our script's file name.
    if ($Parent_Dir ne $0) {
        # The $File_To_Edit is "./some_path/index.html"
        tie @lines, 'Tie::File', $File_To_Edit or die "Can't read file: $!\n";
        foreach (@lines) {
            # Finally, replace "path/example.xml" with "http://example.com/foo/bar/some_path/path/example.xml" in the $File_To_Edit
            s{$Search}{$URL_First$Parent_Dir$URL_Last}g;
        }
        untie @lines;
    }
}

How can I grep for a value from a shell variable?

I've been trying to grep an exact shell 'variable' using word boundaries,
grep "\<$variable\>" file.txt
but haven't managed to get it to work; I've tried everything else I could think of without success.
Actually I'm invoking grep from a Perl script:
$attrval=`/usr/bin/grep "\<$_[0]\>" $upgradetmpdir/fullConfiguration.txt`
$_[0] matches text that is present in $upgradetmpdir/fullConfiguration.txt.
But $attrval is empty after the operation.
@OP, you should do that 'grepping' in Perl. Don't call system commands unless there is no choice.
$mysearch="pattern";
while (<>){
chomp;
#s = split /\s+/;
foreach my $line (#s){
if ($line eq $mysearch){
print "found: $line\n";
}
}
}
I'm not seeing the problem here:
file.txt:
hello
hi
anotherline
Now,
mala#human ~ $ export GREPVAR="hi"
mala#human ~ $ echo $GREPVAR
hi
mala#human ~ $ grep "\<$GREPVAR\>" file.txt
hi
What exactly isn't working for you?
Not every grep supports the ex(1) / vi(1) word boundary syntax.
I think I would just do:
grep -w "$variable" ...
Using single quotes works for me in tcsh:
grep '<$variable>' file.txt
I am assuming your input file contains the literal string: <$variable>
If variable=foo are you trying to grep for "foo"? If so, it works for me. If you're trying to grep for the variable named "$variable", then change the quotes to single quotes.
On a recent Linux it works as expected. You could also try egrep instead.
Say you have
$ cat file.txt
This line has $variable
DO NOT PRINT ME! $variableNope
$variable also
Then with the following program
#! /usr/bin/perl -l
use warnings;
use strict;
system("grep", "-P", '\$variable\b', "file.txt") == 0
or warn "$0: grep exited " . ($? >> 8);
you'd get output of
This line has $variable
$variable also
It uses the -P switch to GNU grep that matches Perl regular expressions. The feature is still experimental, so proceed with care.
Also note the use of system LIST that bypasses shell quoting, allowing the program to specify arguments with Perl's quoting rules rather than the shell's.
You could use the -w (or --word-regexp) switch, as in
system("grep", "-w", '\$variable', "file.txt") == 0
or warn "$0: grep exited " . ($? >> 8);
to get the same result.
Using single quotes won't work; you should use double quotes.
For example:
This won't work:
--------------
for i in 1
do
    grep '$i' file
done

This will work:
--------------
for i in 1
do
    grep "$i" file
done